A one-bit processor

Put on that abstract thinking cap, get out the pen and paper, and spend some time figuring out how this one-bit processor works. [Strawdog] came up with the concept one day during his commute to work (don’t worry, he takes the train… much safer than [Dave Jones’] frightening drive-time podcasts). He sketched it out on paper to make sure he knew where he was going with the project, then collaborated with Legion Labs to implement it in Processing as an easier way to visualize its functionality. Since it’s one-bit, there’s only room for one instruction: a copy-then-branch-if. It copies the current bit to one address, and if that bit was one, it branches to a second address.

Going a bit fast for you? We think the description is fairly good, but if you can’t quite put it together from the article’s description, you may want to build this 2-bit paper processor and learn how it works first. It should teach you the basic concepts you need to understand the 1-bit version. As you can see in the image above, there’s also a single-step feature in the Processing example that lets you analyze the effects of each instruction during program execution.
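The copy/branch behavior described above can be sketched in a few lines of Python. The memory layout (a list of [b, X, Y] rows, where b is the data bit and X and Y are the two addresses) and the function name are my own choices for illustration, not part of the original design:

```python
# A minimal sketch of the one-bit processor: each memory word holds a data
# bit b plus two addresses X and Y.  Every cycle, the bit at PC is copied
# to address X; if that bit was 1, execution branches to Y, otherwise the
# PC simply advances.

def run(mem, steps=32):
    """mem: list of [b, X, Y] rows; returns the b column after `steps` cycles."""
    pc = 0
    for _ in range(steps):
        b, x, y = mem[pc]
        mem[x][0] = b              # copy the current bit to address X
        if b:                      # bit was 1: branch to Y...
            pc = y
        else:                      # ...bit was 0: fall through
            pc = (pc + 1) % len(mem)
    return [row[0] for row in mem]

# Tiny demo: word 0 sets the bit at address 2, then loops back to itself.
mem = [[1, 2, 0],
       [0, 1, 0],
       [0, 0, 0]]
print(run(mem, steps=4))   # [1, 0, 1]
```

Because the data bits live in the same words as the program, any non-trivial computation ends up self-modifying, which is part of the puzzle's charm.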

30 thoughts on “A one-bit processor”

  1. Since it’s one-bit there’s only room for one instruction

    Er, in my world, bits can take two states (‘0’ and ‘1’) so that’s two instructions.

      1. @Stripe so, how is it “1-bit”? Sounds like the classic 0-bit processor to me.

        This is not something new or something this guy just figured out recently. I remember reading about this years ago. Did he rediscover it on his own? Or did he read about this on someone’s blog and build an emulator? I’m not sure what the accomplishment is here.

    1. I agree, this is an 11-bit Single Instruction computer.

      The question is: could you make a Turing-complete system that used only one bit in its instruction?

      1. The single-instruction computer technically uses ZERO bits, and IS Turing complete. Just click the link in the comment you replied to.

      2. Excuse me, that should be “technically uses ZERO bits per instruction” (since there is only one instruction, information theory dictates that you don’t need to track which one you expect the computer to execute).

      3. The computer in the wiki has more than 1 bit per instruction. Just because there are no opcode bits does not make it a 0-bit computer.

        What I want to know (and have been trying to work out) is whether you can create a Turing-complete computer where each instruction is one bit wide, including any operands.

      4. Yes, sure you could: just imagine there is a serial-to-parallel converter in front of the fetch stage of this processor. Or, if you don’t accept that, it would be logically equivalent to use extra registers where you set one bit every clock cycle (that uses implicit operands which you don’t need to store in the instruction) and, once that is done, set a flag to perform the actual operation using those registers as operands.

  2. Whoa – I appreciate the attention but I didn’t do this all myself. LegionLabs deserves the credit (or perhaps blame) for thinking it up. I don’t think on my ride to work; it’s a good day if I’m even awake. My participation was more on the simulator-writing end.

    Please – credit where it’s due!
    –strawdog

  3. While designing it, I wasn’t sure whether it was truly one bit either. I suppose it depends on what part of the processor you consider 1 bit. While trying to reduce the ALU for another design, I first reduced that part to 1 bit, then started eliminating transistors from the ALU design (until it was just a wire, actually). That’s where the name came from, and it sort of stuck.

    While there is only one instruction, NOP can be emulated by adding a line of code that copies a 0 to itself. This is akin to clearing a register actually, but if you don’t use the register it may as well be NOP.
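The NOP trick described here — copying a 0 onto a bit that is already 0 — can be shown concretely. This is a hedged sketch assuming the (b, X, Y) word layout implied by the article’s copy/branch description; the function name is my own:

```python
# NOP emulation on the one-bit machine: a word whose data bit is 0 copies
# that 0 to address X (clearing a bit, harmless if it is already clear)
# and then falls through, since only a 1 bit causes a branch.

def step(mem, pc):
    """One cycle: mem is a list of (b, X, Y) tuples; returns the next PC."""
    b, x, y = mem[pc]
    mem[x] = (b, *mem[x][1:])        # copy the bit to address X
    return y if b else pc + 1        # branch only when the bit was 1

mem = [(0, 0, 0),   # "NOP": copies 0 onto its own already-zero bit
       (1, 0, 0)]
pc = step(mem, 0)
print(pc, mem[0][0])   # 1 0 -- state unchanged, PC just advanced
```

If the target bit is actually in use as a register, the same word doubles as a clear instruction, which is the point LegionLabs makes above.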

    Anyway, I think it is a fun toy and run it in my head or on some graph paper sometimes. I hope you have fun with it too!

    1. I can’t decide whether it qualifies as a 1-bit processor myself.

      I typically expect a 1-bit processor to have two possible instructions (0/1) and a single-bit operand, but I’ve been experimenting with a 4-bit processor that has 5-bit instructions, so who’s to say.

      I’ll have to think on this one for a bit (oops).

      Cool project all the same.

    2. Anyway, I think it is a fun toy and run it in my head or on some graph paper sometimes. I hope you have fun with it too!

      Indeed, I think it’s a great way to introduce someone to efficient processor design and understanding how an instruction can be decoded. Unfortunately I can’t see myself playing with this as it’s too close to work (in a professional capacity I work on processor design).

      Your point about it being 1-bit depending upon what part of the processor you look at is a good example of the difference between the instruction set and the micro-ops (the internal instructions of the processor). While the processor could be considered 0-bit (it’s the same instruction executed each cycle), internally it sounds like you’ve decoded it into a 1-bit micro-op.

      I think one could also argue this is anything but a 1-instruction CPU and you “self-modify” the code, because each instruction includes a 1-bit opcode and two 5-bit operands. Another way to look at the instructions is that you have a 1-bit ISA:
      0 = clear bit at #A
      1 = set bit at #A, branch to #B
      where #A and #B are supplied literals but, as they’re literals, they could be argued to be part of the ISA (if I were to draw a classic fetch-decode-execute diagram, it would show that the entire 11 bits are retrieved in the ‘fetch’ stage)
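The 11-bit word described here (one data bit plus two 5-bit literals #A and #B) is easy to pack and unpack. The exact field order below is my own assumption — the thread only fixes the total width at 11 bits:

```python
# Sketch of an 11-bit instruction word: [ b | #A (5 bits) | #B (5 bits) ].
# The bit is the "opcode", #A is the copy target, #B the branch target.

def encode(b, a, br):
    assert b in (0, 1) and 0 <= a < 32 and 0 <= br < 32
    return (b << 10) | (a << 5) | br

def decode(word):
    return (word >> 10) & 1, (word >> 5) & 0x1F, word & 0x1F

word = encode(1, 3, 17)
print(f"{word:011b}")   # 10001110001 -- the full 11-bit fetch
print(decode(word))     # (1, 3, 17)
```

Viewed this way, the ‘fetch’ stage pulls all 11 bits at once, which is exactly Sheldon’s point about the literals being part of the ISA.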

      1. @Sheldon: Thanks for drawing out the explanation and looking at it from various angles… Particularly regarding the micro-ops point of view. I quite enjoy looking at simple processor designs like this and thinking about how they work, probably because it doesn’t resemble what I do at work.

  4. If I understand this right, the columns are b, X, Y
    and the instruction is:
    and the instruction is:

    b[X] = b[PC]; if (b[PC]) PC=Y[PC];

    I have a craving to implement this in hardware, but
    a) I can’t think of anything even slightly useful to apply it to
    b) I don’t have any static 1-bit RAM anymore…

    1. Yep, that’s the essence of it – the instruction is “b” in what you have written.

      Useful? I’d never get any projects done if they had to be useful! I think you should just do it.

    2. Hey, I know it’s been a while, but I moved to Vietnam and I have some time over New Year to implement this. I’ll take the lazy route and use a few lines of ASM on an ATmega to implement the opcode, and memory-map some I/O with the MCU ports. I have some IS61C64AL 8-bit CMOS static RAM, and some MAX232 level converters. I figure I can load software to the SRAM via RS232, then flip a switch to put it into ‘CPU Mode’. Since it has fully static operation, I’m not sure what to use as a clock. I don’t have an Arduino, my Raspberry Pi is occupied, and my quantum TRNG (for some Bell’s inequality + halting problem) is on the other end of the planet. That rules out most of the ‘trolling’ options!

      I guess I could use an actual wallclock as the clock, or a 555/hex inverter with a knob to change the speed dynamically. Obviously the speed knob would go up to 11.

      If I have time I’ll write a small compiler of some sort.

  5. Hey, I really have to point out for the record that this is a joint project between Sean (aka LegionLabs) and me. It’s not all my work and I never meant to take credit for it all. Credit where it’s due – Sean thought it up and worked out the theory; I mainly worked on the simulator and assembly. And I made the blog post, which is I guess where the confusion started. In any case, glad to see some people enjoying it.

  6. Motorola had a “one-bit CPU” out about thirty years ago now. It had a 1-bit ALU (A & B in = X out), but it actually had 4+1 instruction lines, giving 16 instructions (and their inverses).

    {@rue_mohr – use 8 bit static RAM and ignore the other seven bits?}

  7. It’s amazing how many people find the “bitness” of a processor hard to understand. The “bitness” or “width” of the processor is the number of bits it can naturally and generally process at a time. This device operates on a single bit at a time, so it’s a one-bit computer. It doesn’t matter how many instructions it has, or how many addresses are encoded in it – that’s the instruction set width.

    A Turing machine is a one-bit computer with multiple instructions and an infinite instruction width (since there is no limit to the number of states) – it is still 1-bit.

    An x86 processor in a typical PC is 32-bit or 64-bit, since that is the maximum width for general purpose data. It doesn’t matter that it can also work with 128 or 256-bit data, nor that some instructions are 8-bit long and others can be huge.

    1. @David: I think in this case the “bitness” is convoluted because, yes, it has a 1-bit ALU, but it has three datapaths: one to handle the instruction and the others to handle the addresses held in the operands.

      The crux is not that it has a 5-bit wide address bus, it’s that it acts on data representing each address 5-bits at a time.

      It’s more like a 1-instruction, 1-bit cpu with a 5-bit co-processor (for lack of a better term).

      Now I want to build one : D

  8. I actually built a demo system for the MC14500
    (based on Motorola’s documentation) as my undergrad project (back in 1986–87, more than 30 years ago).

    The program is easy to write (only 16 instructions), but it takes a lot of effort to key it into the memory (hex keyboard).

    Fun to play around with, but I don’t think they were very successful commercially. One data bit does not carry enough information.

    TM

    1. I might have to disagree! If you have an arbitrarily fast 1-bit processor, you can emulate any number of bits as output at some speed using memory-mapped I/O.

      Seems silly? Consider that when messing around with new CPU technologies, a 1-bit processor might be suddenly useful. It also might not, I suppose!

      Or, consider putting a whole bunch of these in parallel. Define how they are linked together and mapped to memory in software, like logic cells in an FPGA. Probably performance would be unimpressive, but it’s a really interesting thing to think about as an academic exercise.
