Forget Digital Computing, You Need An Analog Computer

The analog computer of decades gone by is something many of us younger engineers never got the chance to experience first-hand. It’s pretty much a case of reading about them on these fine pages, or perhaps looking at a piece of one behind glass in one of the more interesting museums out there. But now there is another option: The Analog Thing (THAT). Developed by Berlin-based analog computer-on-chip specialist Anabrid, THAT is an open source analog computer you can build yourself (eventually) or buy from them fully assembled. At least, that’s their plan.

From the 1970s onwards, digital computers became powerful enough to replace analog computers in pretty much every area, and with the increased accuracy this brought, the old analog beasts became obsolete overnight. Now, there seems to be a move to shift back a little, with hybridized analog-digital approaches looking good for some applications, especially where precision is not paramount. After all, that pile of fatty grey matter between your ears is essentially a big analog computer, and that’s pretty good at problem solving.

Looking over the project Wiki, there are a few application examples and some explanatory notes. Schematics are shown, albeit only as images for now. We can’t find the PCB files either, but the assembly instructions show many bodge wires, so we guess they’re re-spinning the PCB to apply fixes before releasing the files properly. This is clearly a work in progress, and as they say on the main site, their focus is on chips for hybrid analog-digital computing, with an emphasis on energy-efficient approximate methods. With that in mind, we can forgive that the community-focused learning tools are still being worked on. All that said, this is still a very interesting project, and definitely a Christmas present this scribe would be more than happy to unwrap.

We’ve covered many aspects of analog computing lately, like this story of restoration (and ultimately demise), a maritime astrolabe found in the Arabian Sea, and even an FPGA simulating an analog computing architecture.

The video shows a simulation of a Lorenz attractor running on THAT, slowed down 100× (by changing integrator settings) so that the output is slow enough to be traced by a mechanical X-Y plotter.
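For the curious, the three coupled Lorenz equations that THAT’s integrators, summers, and multipliers get patched together to solve can be stepped digitally in a few lines. This is only a sketch for comparison; the classic parameter values and the crude Euler stepping below are our own choices, not anything from the THAT documentation.

```python
# Digital sketch of the Lorenz system shown running on THAT in the video.
# Parameters (sigma, rho, beta) are the classic textbook values; the
# Euler time step is an illustrative choice, not from the project docs.
def lorenz_trajectory(steps=10000, dt=0.001,
                      sigma=10.0, rho=28.0, beta=8.0 / 3.0,
                      x=1.0, y=1.0, z=1.0):
    points = []
    for _ in range(steps):
        dx = sigma * (y - x)      # each right-hand side maps to summers
        dy = x * (rho - z) - y    # and multipliers on an analog machine
        dz = x * y - beta * z
        x += dx * dt              # ...feeding three integrators
        y += dy * dt
        z += dz * dt
        points.append((x, y, z))
    return points

pts = lorenz_trajectory()
# On an analog machine all three integrators run continuously and in
# parallel; here we have to step them one tick at a time.
```

The chaotic butterfly lives in a bounded region of state space, which is why a slow mechanical plotter can trace it indefinitely once the integrators are scaled down.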


62 thoughts on “Forget Digital Computing, You Need An Analog Computer”

    1. I don’t understand. Analog and digital are two fundamentally different ways of representing values in a calculating device, where analog uses continuous variables and digital uses discrete numeric values. “Mechanical” is just one way of implementing these devices, with other possibilities being electromechanical, electronic, pneumatic, and hydraulic. There are mechanical analog and digital computers, and there are electronic analog and digital computers. So how can “analog” be an alternative to “mechanical”?

      1. Digital uses discrete voltage values to represent zero and one. Math involves manipulating and comparing multiple “bits”.
        Analog uses a variable voltage, which is (in theory) infinitely variable. Basic op-amps let you sum or subtract.
        You can also make differentiation and/or integration circuits, effectively allowing you to tackle differential calculus.

        “Mechanical” or moving-parts computers can be digital (a Babbage-style adding machine) OR analog (a slide rule). You can also have electronic analog / digital computers using a mechanical device (printer or plotter) to display their results.

        Nobody is calling “analog” an alternative to “mechanical”; this post is showing “analog” as an alternative to “digital”.
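        To make the integrator point above concrete, here’s a quick digital imitation of a patched analog loop (a sketch with made-up constants, not any real hardware): an integrator whose output is fed back, inverted, into its own input solves dx/dt = -x, and stepping that loop numerically lands near the familiar exponential decay.

```python
import math

# Digital imitation of a patched analog integrator: the output is fed
# back, inverted, into the input, which solves dx/dt = -k*x.
# k, x0, dt and t_end are illustrative values, not from any machine.
def integrator_loop(k=1.0, x0=1.0, dt=1e-4, t_end=1.0):
    x = x0
    for _ in range(int(t_end / dt)):
        x += (-k * x) * dt    # the integrator accumulates its input
    return x

x_final = integrator_loop()
error = abs(x_final - math.exp(-1.0))   # analytic answer is exp(-1)
```

        On a real analog machine the op-amp does this continuously, with no clock and no stepping error; the digital loop only approximates it.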

        1. “Nobody is calling “analog” an alternative to “mechanical”; this post is showing “analog” as an alternative to “digital”.”

          No? What about:
          “Analog computers are for those who are afraid of building mechanical computers! ;)”, which was the statement I was responding to.

          Now, maybe you aren’t that familiar with the English language, but when someone says A is for people who don’t like B, the implication is that B is not a subset of A, since without that it wouldn’t make sense. As this statement didn’t. Or am I missing something?

          1. Nice. You know, I used to run into bottlenecks with digital computing when I was doing instrument prototyping in professional astronomy. Even with fast CPUs running LabView, doing real-time corrections for optics on a digital computer is kind of nightmarish (as in maxing out at 10 corrections/sec without spending tons of money). Comparing that to an analogue computer I made out of ~$200 in op-amps to mag-lev a ball-bearing correcting at 20kHz really puts things in perspective. Late.

      1. Really? When were DC power supplies and meters invented? After all, two voltages added and the result displayed on a meter is an electrical analogue computer, isn’t it? I would have thought that would have been possible decades before an analogue mechanical machine was made. Mind you, I’m just thinking of the difference engine; I suppose if you include the Antikythera mechanism, mechanical analogue computers are millennia old.
        So, upon reflection, I am wrong. Ignore everything I have said.

  1. What was old is made new…again. In the 80s and 90s, there was a lot of work done in analog approaches to neural nets and machine vision until affordable computers became available to simulate them.

  2. It’s not that digital computers caught up, but that digital became cheap.

    Analog couldn’t do everything, but there were things they were great at, and for those, they were cheaper and smaller than a digital computer until the days of microcomputers.

    And there’s nothing like a digital meter to accurately read the results of an analog computer.

      1. They actually exist: field-programmable analog arrays, or FPAAs. Google “anadigm” to see boards. Most depend on switched-capacitor technology and bandwidths are limited to tens of MHz, but the flexibility they offer is priceless. Paul Hasler (well, now Jennifer Hasler) from Georgia Tech also has a lot of papers on FPAA design.

      2. There are also hybrids. I used to have an Ensoniq SQ-80 synthesizer, which was made to be programmed like an analog synth, but was mostly 12-bit digital. In its day, it wasn’t quite practical to implement multiple simultaneous digital filters using contemporary microprocessors, so everything up to the filters was done digitally, but then routed to eight (I think) separate analog filter chips. This gave it the capability of being 8-note polyphonic while otherwise acting very much like an analog synth.

        1. Kawai did that too. I’ve seen the schematic, which is nigh-identical to the Ensoniq stuff.
          I wondered what it would be like to route the 8 channels to separate speaker channels.

          1. The Ensoniq EPS (and, presumably, EPS 16+) has a DB-9 connector that provides 8 “solo” outputs that could each be assigned to an instrument channel. While each of these could be connected to different speaker channels, my dad connected them to inputs on a 16-track tape recorder in a studio that the local PBS radio station used to rent time in.

            These outputs were in addition to the stereo outputs, within which the instrument channels could be positioned.

  3. Nice project! I’d like to see the patch panel and cables colour-coded in logical regions like the EAI units; the all-white panel and use of monochrome patch cables could be made more interesting. Apart from that, great effort, and a lot cheaper than finding an old TR-20.

  4. I suggest that there’s really no such thing as an analog computer. A computer is something that can perform a sequence of calculations, where that sequence can change depending on the results of previous calculations. This is a rather rough definition of Turing-completeness. But in practice, digital computers use calculating circuits that are reconfigured on-the-fly, so that the same adder is used for all calculations requiring addition, the same multiplier is used for all calculations requiring multiplication, and so on. This is what allows digital computers to be substituted for each other – there is no calculation that can be done digitally that requires specialized circuits; any Turing-complete computer can do any sequence of operations that any other Turing-complete computer can do. But in every analog “computer” I’ve seen, there are as many circuit modules as there are data paths required to complete a sequence of calculations. Analog computers don’t in general use the same circuit modules for all calculations needing that particular function, so they are limited to the number of function modules they physically have. THIS is what made analog “computers” obsolete, not their speed or their relative imprecision.

    I could of course be wrong. Is there a Turing-complete analog computer out there, one that can store a number of analog values in memory and run them through a shared set of functional modules? Because lacking an example of this, all I have ever seen were analog calculators.

    On Anabrid’s website, they mention one of the benefits of analog computers being that they are inherently immune to malware. Which is laughable. It’s like saying a lawnmower won’t try to sell your information, or a piece of wallpaper won’t round off your bank balance improperly. I would counter that if you made an analog computer anywhere near as connected as the lowest-end smartphone, it would be quite capable of being infected by malware.

    1. A “computer” computes some results from some inputs. Richard Feynman used human computers where each human was responsible for one step, like multiplying by a constant. Each human had the title of Computer in the same way someone else had the title of Mechanic. Analog computers in a Norden bombsight did ballistics- and drag-based targeting. Mechanical analog computers in US WWII submarines, torpedoes, and naval guns solved for the future relative position of a target. Your definition from discrete computing theory is too narrow and specific.

    2. Let’s see if typical forum IMG, /IMG tags work here:


        1. “Wait, HTML tags?”


          Just click on the link in my second attempt/post to see the image. The page is from Principles of Analog Computation (1959) by G. W. Smith & R. C. Wood, and it supports BrightBlueJim’s contention that in the strictest sense there isn’t any such thing as an analog “computer”.

          The major advantages of analog “computers” back in the day had nothing to do with virus immunity (duh!), but everything to do with speed and parallelism.

        2. Well, the link works fine, and the document does agree with my point (that analog computers aren’t really computers). And yes, I understand what is meant by “analog computer”, but my point is that we don’t consider a collection of digital modules to be a computer unless it is Turing-complete, i.e., it can perform all of the functions of Turing’s proposed machine, with the exception that the machine in question does not have to have access to infinite memory. So why don’t we make this distinction in analog circuitry?

          In theory, true analog computers should be possible. The essential difference between analog and digital is the use of a continuous value in the former and discrete steps in the latter; using switched-capacitor circuits, an actual analog computer could be made that DOES route signals through different modules, such that the computation is limited not by the number of calculating modules (adders, multipliers, integrators, etc.), but by the amount of (analog) memory provided. I’m not sure this results in something that would have any distinct advantage over a digital computer, being limited to the dynamic range defined by the noise level and clipping level of the circuitry.

          Maybe this is what FPAAs are about – I haven’t looked into them recently, and don’t remember what sort of specs they had.
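          For what it’s worth, the routing idea can be caricatured in a few lines: one shared adder and one shared multiplier, values parked in a sample-and-hold “memory”, and a schedule deciding what gets switched where on each step. This is purely a toy model of the concept, with invented names, and no claim that any real FPAA works this way.

```python
# Toy model of time-multiplexing shared functional modules over values
# held in sample-and-hold memory: the number of operations is limited
# by memory and schedule length, not by the number of physical modules.
def run_schedule(memory, schedule):
    """schedule: (op, dst, src_a, src_b) steps, each routed through the
    single shared module for that operation, one per switching cycle."""
    for op, dst, a, b in schedule:
        if op == "add":
            memory[dst] = memory[a] + memory[b]   # the one shared adder
        elif op == "mul":
            memory[dst] = memory[a] * memory[b]   # the one shared multiplier
    return memory

# Compute (2 + 3) * 4 by reusing the two modules in turn.
mem = run_schedule({"a": 2.0, "b": 3.0, "c": 4.0, "t": 0.0, "out": 0.0},
                   [("add", "t", "a", "b"), ("mul", "out", "t", "c")])
```

          The point of the caricature is that the schedule, not the module count, bounds the computation, which is exactly the property that makes digital machines general-purpose.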

          In my own experience, analog circuitry has a big disadvantage that just can’t be overcome: the inherent non-repeatability of any calculation. You run the same calculation a number of times, and you never get exactly the same result.

          1. I think you slightly miss the magic of analogue computing as I understand it with “You run the same calculation a number of times, and you never get exactly the same result.” It’s not about being ‘precise’ to heaps of decimal places and the same every time, but about getting to a good answer basically instantaneously. It really doesn’t matter if it’s not exactly repeatable, as long as it’s usefully repeatable. (Remember that just because digital systems are now damn quick, it still takes them many cycles to do the basic arithmetic, and longer the more accuracy you demand: an operation on a longer string takes longer. Feed the start values into an electronic analogue machine, though, and you have the result, with fair accuracy, probably faster than a single digital cycle, and if it’s reacting to changing input data the output reliably follows along rapidly.)

            Also, I’m sure I’ve seen a Turing-complete analogue computer; I think it was pneumatic, but I have no idea how it described itself or where to try and find it again. I think it was documented in German (which I don’t really speak, but like most European languages I know enough of to just about make sense of it in writing); it definitely wasn’t English…

    3. The human brain (and the brain of quite a few intelligent animals) is distinctly analog, but can do computation (as opposed to mere calculation).

      My wife might like to argue that mine is programmable, too, although I might be “Feyn”ing that….

    4. Your problem is that you’re defining a computer in a way that any implementation must be digital, which cannot be defeated and therefore isn’t a reasonable argument.

      A feedback system can perform a sequence of actions which can change depending on the results of the previous actions, whether that system is dealing with discrete or continuous values. For example, a variable-gain amplifier that feeds back on itself through a delay line, but with taps along the delay line which can change position; the measurement of the signal at those taps can change the gain of the amplifier and/or move the taps around. The function and outcome of the machine is “programmed” by the signal you put into it, much like what happens when you point a video camera at a television screen and complex fractal patterns appear.

      1. Not sure who you’re talking to, but if it’s me, then you didn’t read my comments. As I said, an analog system CAN be a computer, just by having analog memory and the means of routing signals from one module to another on the fly, like digital computers do. I didn’t say there CAN’T be an analog computer, just that all I’ve ever seen called “analog computers” do not meet the minimum requirements for a computer of any sort. They are equivalent to plugboard-programmed tabulating machines from the turn of the 20th century.

        It does no good to compare analog computers with digital computers, if the word ‘computer’ has a different meaning in each case.

        1. I think you have a point, even though I can’t reinforce it. Certainly a lot of “analog computers” are embedded, two variables giving a result. I know analog computers existed as desktop machines, but I know little about what they were used for.

          Your point seems worthy of consideration. Maybe I’ll dig out that book about analog computers.

    5. I would concur on analog computers not really being computers. They are physical modelers based on the profound scientific fact that certain ideal circuits are exact *analogs* of basic functions in arithmetic and calculus. The error systematics are another profound window on physics (e.g., noise and environmental phenomena). In electronic analogs we take the output amplitude to represent the answer. To make a classical digital computer, we take electronic analogs with well-enough-behaved errors to maintain amplitude ranges as place-holders in a finite representation (usually binary). The advantage is obviously arithmetical fidelity (countable file-transfers/transmission distances/archiving). The disadvantage is also arithmetical fidelity (lack of “friction of the distance” — to borrow a phrase from a friend — for individuals and society). Ever notice that resistors are indispensable in circuits? Somehow the first proof function in crypto (a digital attempt to solve a societal problem) is proof of work (which somehow ends in energy dissipation or resistance).
      The main generator of cyber-vulnerabilities is bi-directional protocols (see Information Diodes). Luckily, analog modelers do not have sufficient precision to make them attractive for use with bi-directional protocols which can be a strength. As you mention below, analog modelers could possibly be made Turing-complete, however, I think analog’s strength lies in physical educational insight (students need to see this to understand how digital computers are made and understand physics in general), speed in certain applications, and increased entropy for mitigating problems that numerical precision enables. BTW, numerical precision is somewhat of an illusory goal in physics or even computational math (being a trade-off in practical application):

      1. Yes, yes, and yes. Especially as a way of visualizing how systems behave, i.e., physical educational insight. I have found SPICE to be quite helpful in doing analog circuit design, because it can simulate aspects of a circuit that would be tedious to the point of uselessness to understand through analysis alone. And SPICE is an example of something I alluded to in one of my comments here: use of digital computers to emulate analog models of systems, to get most of the benefits of analog modeling without most of the disadvantages.

        Of course, that’s a little weird – using a digital system to emulate an electronic analog system to model… an electronic analog system. But the same applies to other digital simulators that model analog systems that aren’t necessarily electronic in nature.

        It’s an interesting take you have on proof-of-work. I hadn’t thought of it in terms of DEPENDING on the inefficiencies of computing machines. But I suppose that even if we had superconducting computers that required no power to operate, the proof of work would just shift to the computing resources used, i.e., the cost of the hardware to run the work problems.

  5. I used to work in an avionics backshop, which was chock full of older tech. In fact, I considered it the art of working on uber-expensive obsolete electronics. Lots of stuff was basically analog designs from the 1960s. For example, the Cabin Pressure Computer for a 737 was exactly the kind of circuit described: a whole bunch of op-amps in a maze of feedback loops. The Computer History Museum in Mountain View has some cool analog and hybrid computers you can see, although non-working.

  6. I’m 74 years old, and I remember ordering an analog computer kit from Edmund Scientific back in the sixties. I don’t remember what it cost, but it consisted of a piece of particle board with three round holes and a square hole, and three potentiometers (variable resistors, or volume controls) that could be attached through the holes in the board, plus a second piece of particle board with an array of holes in it. There was a little plastic bag with a slide switch, a few resistors, and a choke (coil) and capacitor to make an oscillator to send a signal to a speaker that was also included. I believe that there was also a little grain-of-wheat light that would glow more brightly as the oscillator got louder. There were also a bunch of spring clips that mounted in the holes in the second board, which was a breadboard, and an instruction sheet showing how to build the oscillator on the breadboard and attach it to the potentiometers, speaker, and light, along with the battery holder for two D cells.

    Took me about an hour to put it together and choose one of the three cardboard sheets that fit over the front board. I don’t remember what the other two were, but one of them was for multiplication: the first and second holes were the numbers 1-10 in a wide semicircle corresponding to the range of the potentiometer, and the third (right) hole had the numbers 1-100 represented by a semicircular line with graduated tick marks and a number above every fifth tick. You set the first two pointers (on the plastic knobs provided with the potentiometers) to the numbers you wanted to multiply and turned the third pointer until the sound was loudest and the light was brightest. Your answer was whatever the third pointer was pointing to.

    1. I realize that you are joking, but this is just not possible. Blockchains (and anything else depending on cryptography) don’t work unless your arithmetic is absolutely repeatable, which analog circuits are not, due to real-world limitations in DC offset and gain control, and also due to noise. In the video above showing a Lorenz attractor being calculated, which is a chaotic function, I would bet that you can’t get it to draw the same curve twice.

  7. I worked at Honeywell, where we used an analog computer to model HVAC control along with seasonal weather data. Later we switched to numerical simulation in Fortran on Apollo workstations.

    Here is a link to an early report:

  8. Wait, brains are analog? But a neuron is either firing or not, right? How can a brain be analog?

    Also, analog computers were less accurate? But they would be accurate to whatever degree the measurement tool can measure a voltage, right? I would have thought that without forcing values into discrete steps one could get more accuracy.

    1. Brains are arguably neither strictly digital nor strictly analog. Recent studies have shown this to be true even in the signaling between neurons:

      I’d also argue that, since the point at which a neuron “decides” to fire is regulated by various neurotransmitters which can vary in availability, it’s not exactly digital – less like a transistor, and more like a wonky collection of electrochemical op-amps hooked up to a digital output.

  9. So, are analog computers going to be the new vacuum tube, or audio recordings on vinyl? Having grown up in the bad old days where all we had were vinyl records and to a certain extent, vacuum-tube electronics, this sort of thing amuses me. But I don’t want to change the subject, so I’ll leave vacuum tubes and vinyl be.

    When I was in the U.S. Air Force, we had analog computers all over the place. My specialty was aircraft surveillance radar, and one big part of that was height-finding systems. These relied on analog electronics to display radar echoes, and then to provide cursors on a CRT screen to measure the altitude of any aircraft seen. To do this, they had to take inputs from the radar’s antenna, to get its elevation angle, from the system’s timing generator to get distance and to adjust for earth curvature, and from manual inputs indicating atmospheric conditions that affected the index of refraction of the local air.

    These constituted analog computers, with all the benefits and drawbacks of analog computers. What I’m not seeing, either in the documentation for THE ANALOG THING, or in this article and its comments, are the drawbacks.

    The main one of these is drift. Analog circuits drift with temperature. They just do. In the height finding radar systems I worked with/on, we could measure 0 to 100,000 ft of altitude, and with these analog computing displays, we were only expected to maintain an accuracy of +/- 1500 ft. That’s 1.5%. But here’s the shocker: even to meet that, we had to do a DAILY calibration of the antenna and the display scopes (which housed the computers), which took at least half an hour. Every day.

    Now, I realize these systems were built a long time ago, and analog ICs have improved the performance of analog integrators, multipliers, and op-amps in general, but even today, it gets expensive to get more than five digits of precision or accuracy better than around 0.1%.

    I submit that an app could be written for Android or iOS that would out-perform THE ANALOG THING. Heck, a microcontroller with some analog multiplexers and an alphanumeric LCD module could do the same job, and NOT have any analog drift, precision, or accuracy issues.

    Let’s get real.

  10. Submicroscopic mechanical computers may end up exceeding the capability of electronic digital computers before the next century, possibly proving Babbage to have been right all along. Future computers will be steampunk.
