Building The First Ternary Microprocessor

Your computer uses ones and zeros to represent data. There’s no fundamental reason the basic unit of information in a computer has to be a one or a zero, though. It’s a historical choice that stuck because of convention, like driving on one side of the road or having right-hand threads on bolts and screws. In fact, computers built on a different number system can be more efficient: base 3, or ternary, computing packs more information into each digit and can actually make the design of the computer easier.

At the 2016 Hackaday Superconference, Jessie Tank gave a talk on what she’s been working on for the past few years: a ternary computer, built with ones, zeros, and negative ones. This balanced ternary system is “perhaps the prettiest number system of all,” writes Donald Knuth, and now it has made it into silicon as a real microprocessor.

After sixty or seventy years of computing with only ones and zeros, why would anyone want to move from bits to trits? Radix economy – the cost of expressing a number in a particular base, measured as the number of digits required times the number of states each digit can take – plays a big part. By that measure, the most efficient number system isn’t binary or ternary – it’s base e, or about 2.718. Barring the invention of an irrational number of transistors, base three is the most efficient way to store numbers in memory.
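
To make the radix economy argument concrete, here’s a minimal sketch of the usual textbook cost model – nothing from Jessie’s design, and the range N and the function name are just for illustration. It scores a base by the number of digits needed times the number of states each digit has to distinguish:

```python
# Radix economy: model the "hardware cost" of representing numbers up to N in
# base b as (states per digit) x (digits needed), which grows like b * log_b(N).
# The continuous cost b / ln(b) is minimized at b = e ~ 2.718; among integer
# bases, 3 narrowly beats 2, and 4 exactly ties 2.
import math

N = 10**6  # an arbitrary range of numbers to represent

def cost(base, n=N):
    return base * math.log(n) / math.log(base)  # base * log_base(n)

for base in (2, math.e, 3, 4, 10):
    print(f"base {base:6.3f}: cost ~ {cost(base):5.1f}")
```

For numbers up to a million this works out to roughly 39.9 for base 2, 37.6 for base e, and 37.7 for base 3 – the gap the article is pointing at.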

Given that ternary computing is so efficient, why hasn’t it ever been done before? Well, it has. The Setun was a ternary computer built at Moscow State University in the late 1950s. Like Jessie’s computer, it used balanced ternary, though it was built with magnetic cores and diodes rather than transistors. Until now, the Setun was the most modern ternary computer ever to make it into production.

For the last few years Jessie has been working on a ternary computer built from integrated circuits, making it much smaller than its Soviet ancestor. Basically, the design for this ternary logic relies on split rails – a negative voltage, a positive voltage, and ground. The logic is still just NANDs and NORs (ternary logic does provide more than two universal logic gates, but using the others is just needlessly complicated), and ternary muxes, adders, and XORs are built much like their binary counterparts.
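
The talk stays at block-diagram level, but one common convention for balanced ternary logic treats AND/OR as min/max and inversion as negation. The sketch below models split-rail trits that way; it’s only an illustration of the idea, not the actual gate set inside Jessie’s chip, and the function names are made up for the example:

```python
# Balanced ternary gates modeled in software. Trits are -1, 0, +1 - think of
# them as -V, GND and +V on the split rails. This uses the common min/max
# convention and is illustrative only.

TRITS = (-1, 0, +1)

def t_not(a):      return -a           # swap +V and -V; GND stays GND
def t_and(a, b):   return min(a, b)
def t_or(a, b):    return max(a, b)
def t_nand(a, b):  return -min(a, b)   # the "NANDs" mentioned above
def t_nor(a, b):   return -max(a, b)   # ...and the "NORs"

# Print the 3x3 truth table for the ternary NAND
print("NAND  b=-1  b=0  b=+1")
for a in TRITS:
    row = "  ".join(f"{t_nand(a, b):+d}" for b in TRITS)
    print(f"a={a:+d}   {row}")
```

With this convention a NAND fed two +1s returns -1 and fed two -1s returns +1, exactly analogous to its binary cousin.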

This isn’t the first time we’ve heard about Jessie’s ternary computer. It was an entry for the Hackaday Prize two years ago, and she made it down to our 10th-anniversary conference to speak on this weird computer architecture. Over the last two years, Jessie found a team and funding to turn these sketches on engineering notebook paper into circuits on real silicon. Turning this chip into a real computer – think something along the lines of a microcomputer trainer from the 1970s – will really only require a few switches, LEDs, and a nice enclosure.

Where will the ternary computer go in the future? According to Jessie, the Internet of Things. This elicited a few groans in the audience during her talk, but it does make sense: that’s a growing market where efficiency matters, and we’re more than happy to see something questioning the foundations of computer architecture make it to market.

72 thoughts on “Building The First Ternary Microprocessor”

  1. Computers use 0 and 1 because of how diodes and transistors work. Before unsubscribing / unfollowing Hackaday, please specify whether “There’s no real reason for the basic unit of information in a computer to be only a one or zero, though.” is a quote and not an author statement.

    1. The interesting point of using a ternary system in the SETUN was to take advantage of the three states allowed by a memory core: magnetized clockwise, magnetized counter-clockwise, and not magnetized. In my opinion, at a purely electrical level the binary form is the most efficient.

      1. We’re throwing away 1/3 additional er… state width?

        To put it another way: given a collection of standard ICs and a ternary controller comparable to an ARM part, how much additional circuit board space is required, beyond the negative power rail, compared to an equivalent binary board?

        Eight states would require only 2 pins on a ternary part while it takes 3 on a binary one. Seems to me it would save on board space, allowing us to create much better toys to play with.

        I don’t think binary systems should go away, but I don’t think they should be the only player.

      2. Yes, it is possible for a magnetic core to have many states, and it might seem obvious to use CW, CCW, and “none”, but in practice it’s very hard to get to “none”. Due to the hysteresis of the materials used for magnetic core memory, it would rather flip from one direction to the other than just find a nice comfortable in-between state. It’s like trying to make a toggle switch stop in the half-way position – you can do it, but not easily.

  2. One of the most intractable truths in many domains is that good enough is the enemy of better, usually expressed as the cost/benefit ratio. While there are some features of ternary computing that are advantageous, the real question (as always) is whether it is worth it.

    1. For many general-purpose applications, generally not. But as is typical, there are always niche cases where certain things don’t weigh as heavily. Most consumer products are a far cry from being “perfect,” if there even is such a thing, but this comes into play far more often than you would think – even for such mundane things as roofing materials: pay a “little” now for a cheap roof, or pay a lot now for a better roof and have it hold up for several decades or more. Or consider extremely specialized and niche mechanical devices, used when the need for them is there and cost is less of a concern: surgery, space flight, F1 racing, etc.

  3. “There’s no real reason for the basic unit of information in a computer to be only a one or zero, though.”

    What!? There are very good, very real reasons why binary is used. Saturating transistors can be used, which reduces power. Having only two voltages to work with reduces power rails. And having only one threshold to worry about makes the noise margins better for a given voltage swing, and allows for simpler and faster circuits. Those advantages outweigh the benefits of a higher radix encoding.

    Now, if what Jessie is doing is using pairs of binary circuits to encode three states (akin to how some hardware uses base-16 encoding of floating point numbers), I’m unconvinced that it’s more efficient than encoding four states with two binary circuits.

    1. Don’t forget that someone would need at least two transistors per trit-bit (tit?), maybe a way to stop accidental follow-through (both on at the same time), and then the address bus decode would need a high-tit decode and a low-tit decode, presuming a zero tit means unused (off).
      So how would you determine whether the tit is high, low or extra low (damn, that’s low)? Via voltage references and/or opto-iso feedback?

      Binary can be just two transistors and a current limit resistor (oldskool discrete transistor logic)

      (tits weren’t meant as a pun :D)

      1. Didn’t you or the guy above read the article here? She uses +V, GND, and -V for the three states, so the device needs a positive and negative rail. Nothing’s stored as binary, otherwise it’d be a binary computer with a load of pointless intermediate convertors and unconvertors.

        1. Storing two sets of binary is still storing two sets of binary. All you’ve done is hide that implementation detail from the user. But we’re not the users; we’re here to discuss the actual implementation detail. Moreover, this is exactly why there is a very good reason we use binary – it’s the natural way our current electrical components work. For a microprocessor to be ternary you’d need fundamental components that offer that property natively, not through the various workarounds discussed in this video to “simulate” it. It turns out that’s not exactly a trivial task. So really, we can say they’ve developed a binary electrical circuit that simulates a ternary circuit.

        2. Quote:
          “She uses +V, GND, and -V”

          That was the basis of my thinking:

          +V is high (or very high)
          GND is unused state (or low/high relative to what you want)
          -V is low (or very low if you want)

          now, least significant trit first, with place values 1, 3, 9:
          0 0 0 = 0
          + 0 0 = 1
          – 0 0 = 2
          0 + 0 = 3
          + + 0 = 4
          – + 0 = 5
          and so on. (A decoding sketch of this table appears just after this comment.)

          My question is how you would use such a thing as an address line – how does the address decoder translate the + and – voltages without lots of transistors?

          Also in theory, yes it is efficient…. Only if you look at the table above.
          Now that we are in the habit of Jordan Maxwell-ifying facts here:

          One could look at the 2.718 value and say that the 2.X means two states are efficient and the 0.718 means it is about 72% efficient, with the remainder being the glue logic to make it work. However, that is using the same logic as a guy (Jordan Maxwell) who takes an ancient name, Krishna, and claims it came from Christ (Jesus Christ), which would only have worked if English were older than, say, Latin, Arabic or Sanskrit – that, among his other new-agey, hipster, agenda-pushing blag of a technique.

          Next someone here will say “ternary computers…. because aliens did it”

          Bludi hipsta logik!!!!!
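
For anyone who wants to poke at the table in the comment above, here’s a quick decoding sketch (a plain ‘-’ stands in for the – trit, and the function name is made up for the example). It reads trit strings least-significant-trit first and shows both the commenter’s counting, where – is treated as 2, and the balanced reading, where – is -1:

```python
# Decode strings of trits written least-significant-trit first, as in the
# table above. Two readings: the commenter's unbalanced mapping (+ = 1, - = 2)
# and the balanced one (+ = +1, - = -1) that the chip itself uses.

UNBALANCED = {"0": 0, "+": 1, "-": 2}
BALANCED   = {"0": 0, "+": 1, "-": -1}

def decode(word, mapping):
    return sum(mapping[t] * 3**i for i, t in enumerate(word))

for word in ("000", "+00", "-00", "0+0", "++0", "-+0"):
    print(f"{word}: unbalanced {decode(word, UNBALANCED):2d}, "
          f"balanced {decode(word, BALANCED):2d}")
```

The unbalanced column reproduces the 0 through 5 in the table; the balanced column shows how the same patterns read on a machine where – really means minus one.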

  4. I can count to 512 on just two hands…. And…. probably get arrested for public disorder sticking two middle fingers up in public when counting in base two on my hands.

    The problem will be having to learn a whole new numbering system. Binary seems easy to count in.

    Ternary? Well, I can’t be bothered to traipse the internet to learn how to count past 2 in base 3 when, as far as I can see, I’ll never have a use for it.

    1. [quote]I can count to 512 on just two hands…. And…. probably get arrested for public disorder sticking two middle fingers up in public when counting in base two on my hands.[/quote]
      This one is very good! It could be a handy excuse in front of a judge: “I wasn’t insulting him, I was counting in binary!” :-) Not sure it would be accepted, though…

    1. You… You realize I linked to Setun in the article, right? It’s almost as if you’re only reading the headline, and complaining the headline is inaccurate.

      NOW HIRING: BETTER TROLLS. Please send your application in the form of a brick thrown through my front window.

  5. The best reason to use something other than binary is to reduce the number of traces needed to convey instructions and memory addresses. Say, for example, that you had a new kind of diode that would not just give high or low from a voltage but could distinguish between many, many different voltage levels. That means you could theoretically create a computer where instructions and address locations are transmitted over a single channel in a single clock cycle.

    But as someone above mentioned, you get voltage drift, and resistance can lower the signal voltage, etc., so it would be really hard to control – but it could be really, really small since you would need fewer channels.

    We are doing something similar with fiber lines. Usually it’s just red light on or off, but there is some experimentation using RGB and breaking it apart on the other end. It would essentially be the same thing.

    Another parallel is radar: detectors used to just use a single-point bounce, measuring distance twice to determine speed. Now they send a Hamming pattern, and when they receive it they can actually get much more precision by looking at the pattern in the returned signal.

    Anyway, I was asking about this in my undergrad and got in trouble for “distracting the class” … very annoyed. :)

    1. “Say, for example, that you had a new kind of diode that would not just give high or low from a voltage but could distinguish between many, many different voltage levels”

      Like a triode?

    2. Some signaling protocols use more than two voltage levels – your ground return quality matters more and your signal eyes are smaller, but it’s a valid strategy for trading off clock speed, number of signals and bit rate.

    1. There are 10 types of people: those who understand binary, those who don’t, and Nikolay Brusentsov.

      It’s funny that this applies in any base (10 in base X is equal to X).
      So aside from its first use (with base 2), when the joke worked because people were used to reading 10 only as ten in base ten and it was something new, I don’t find it funny anymore.

      1. Except for better-educated IT and math folks, most people don’t even know about other bases. Even if their school system does teach them about it, converting between two or more systems is somewhat hard for them. So in my book the joke about 10 kinds of people is always funny.

  6. Just wanted to point out that we already use “more than binary” in lots of practical applications right now. Just not in processors.

    Some modern DRAM cells can represent as many as 6 different states. That’s six states, not six bits. The DRAM controller handles the translation from binary to whatever n-ary the RAM uses. Some of you may have such memory in your computer right now.

    There are non-binary Flash memories. If you have an SSD in your computer it is likely that you have a multi-level NAND Flash drive.

    IBM had mainframe tape drives that used 10 bits per symbol – back in 1983.

    Telecommunications technology has been doing this for just as long, if not longer.

    The US Army had a radio system that used a modulation method that had more than two states per symbol in the 1940’s.

    Using multi-level digital modulation over radio was already old-school a couple of decades ago.

    56K modems? 3 bits per symbol.

    802.11ac wifi? 512 bits per symbol.

    “But those are ANALOG!” you say. Okay, maybe you can argue that a digital modulation method over radio or old phone lines isn’t “fully” digital. So how about Fiber Channel? 10 bits per symbol.

    Long story short: Many of our digital systems have already gone beyond binary. It’s mostly just our microprocessors that haven’t.
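
A quick back-of-the-envelope to go with the comment above: the information carried per symbol is log2 of the number of distinguishable states. The state counts below are generic examples, not a fact-check of the per-technology figures in the comment:

```python
# Bits per symbol = log2(number of distinguishable states).
import math

examples = [("2 levels (binary)", 2), ("3 levels (ternary)", 3),
            ("8 levels", 8), ("16-QAM", 16), ("256-QAM", 256)]

for name, states in examples:
    print(f"{name:>18}: {math.log2(states):4.2f} bits per symbol")
```

A trit carries about 1.58 bits, which is another way of stating the radix economy argument from the article.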

  7. I was disappointed to see nothing but block diagrams, especially since those looked just like the diagrams for binary logic and arithmetic. Show us some transistors implementing a gate!

    And how about flip-flops? Flip-flap-flops, I mean? So much of computing (aside from high-density storage and communications protocols) takes advantage of how easy it is to make two-state circuits. So show us how easy it is to make three-state circuits!

  8. See also Thomas Fowler’s 1840 ternary calculating machine – he needed to calculate parish contributions to poor relief by reducing pounds, shillings and pence to farthings. There’s a demo of a reconstruction online.

  9. Balanced ternary is the most efficient computing base if circuit complexity scales as radix × length. But it may not. If we look at the number of possible gates, for example:
                 Binary    Ternary
    1 input      4         27
    2 input      16        19,683
    3 input      256       7,625,597,484,987
    Now, many of these gates are not useful; the only useful 1-input binary gate is an inverter, since the other three (always 1, always 0, and same-as-input) are not really gates. But if the complexity of the circuit needed to implement these gates relates to the number of possible gates, then binary wins out over ternary.
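
That count is easy to reproduce: with radix r and n inputs there are r^n input combinations, each of which can map to any of the r output values, so there are r^(r^n) possible gates. A minimal sketch of just that arithmetic:

```python
# Number of distinct n-input, single-output gates in radix r: each of the
# r**n input combinations can map to any of r outputs, giving r ** (r ** n).

def gate_count(radix, inputs):
    return radix ** (radix ** inputs)

for n in (1, 2, 3):
    print(f"{n}-input gates: binary {gate_count(2, n):>4,}   "
          f"ternary {gate_count(3, n):>18,}")
```

It reproduces the figures in the table above: 4 vs 27, 16 vs 19,683, and 256 vs about 7.6 × 10^12.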
