When your only tool is a hammer, everything starts to look like a nail. That’s an old saying and perhaps somewhat obvious, but our tools do color our solutions, sometimes in very subtle ways. For example, using a computer causes our solutions to take a certain shape, especially where numbers are concerned. A digital computer deals with numbers as integers, and anything that isn’t an integer is really just a representation with finite precision. Sure, an IEEE floating point number has a wide range, but there’s still a discrete step between any value and the next nearest one that you can’t reduce. Even if you treat numbers as arbitrary text strings or fractions, the digital nature of computers will color your solution. But there are other ways to do computing, and they affect your outcome differently. That’s why [Bill Schweber’s] analog computation series caught our eye.
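That irreducible step is easy to see for yourself. As a minimal sketch (Python 3.9+ standard library only), `math.nextafter` reports the very next representable double, and the gap at 1.0 is exactly machine epsilon:

```python
import math

# The next IEEE 754 double above 1.0 is not arbitrarily close:
step = math.nextafter(1.0, 2.0) - 1.0

# For double precision, the gap at 1.0 is exactly 2^-52 (machine epsilon).
assert step == 2 ** -52
```

No matter how you scale or reformat the number, some such gap always remains; only its size changes.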
One great example of analog versus digital methods is reading an arbitrary analog quantity, say a voltage, a temperature, or a shaft position. In the digital domain, there’s some converter that has a certain number of bits. You can get that number of bits to something ridiculous, of course, but it isn’t easy. The fewer bits, the less precisely you can represent the real-world quantity.
For example, you could consider a single comparator to be a one-bit analog to digital converter, but all it can tell you is whether the signal is above or below a certain value. A two-bit converter would let you break a 0-3V signal into 1V steps. But a cheap and simple potentiometer can divide a 0-3V signal into a virtually infinite number of smaller voltages. Sure, there’s some physical limit to the pot, and we suppose at some level many physical values are quantized by the underlying physics, but those quanta are infinitesimal compared to a dozen or so bits of a converter. On top of that, sampled signals are measured at discrete points in time, which introduces its own effects, such as aliasing.
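To make the bit-count comparison concrete, here’s a minimal sketch of an ideal converter that snaps a voltage to the nearest representable level (the function name and step convention are our own; the 1V steps above correspond to `bits=2`):

```python
def quantize(voltage, bits=2, vref=3.0):
    """Snap a voltage to the nearest level of an ideal n-bit converter."""
    steps = 2 ** bits - 1                 # e.g. two bits -> three 1 V steps
    code = round(voltage / vref * steps)  # the integer the converter reports
    return code * vref / steps            # the voltage that code represents

# Two bits can only answer 0, 1, 2, or 3 volts:
v2 = quantize(1.4, bits=2)
# Twelve bits gets much closer to the real value:
v12 = quantize(1.4, bits=12)
```

A pot, by contrast, has no `bits` parameter at all: its output is continuous down to whatever physics allows.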
Continue reading “Continuous Computing The Analog Way”
We were always taught that the fundamental passive components were resistors, capacitors, and inductors. But in 1971, [Leon Chua] introduced the idea of a memristor — a sort of resistor with memory. HP created one in 2008 and since then we haven’t really had the burning need to use one. In a recent Nature article, [Mohammed Zidan] and others discuss a 32 by 32 memristor array on a chip they call a memory processing unit. This analog computer on a chip is useful for certain kinds of operations that CPUs are historically not efficient at, including solving differential equations. Other applications include matrix operations used in things like machine learning and weather prediction. The paper is behind a paywall, although the usual places to find scholarly papers will probably have it soon.
There are several key ideas for using these analog elements for high-precision computing. First, the array is set up in a passive crossbar arrangement. In addition, the memristors are quantized so that different resistance values represent different numbers. For example, a memristor element that could have 16 different resistance values would allow it to operate as a base-16 digit.
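Why the crossbar matters: with inputs applied as row voltages and stored values programmed as conductances, every column current is already a dot product by Ohm’s law, so the array computes a matrix-vector multiply in one analog step. A toy model with made-up conductance values (not the paper’s actual circuit) shows the arithmetic being exploited:

```python
def crossbar_multiply(voltages, conductances):
    """Column currents of a crossbar: I_j = sum_i V_i * G[i][j] (Ohm's law)."""
    cols = len(conductances[0])
    return [sum(v * row[j] for v, row in zip(voltages, conductances))
            for j in range(cols)]

G = [[1.0, 0.5],        # each element is one memristor's conductance,
     [2.0, 0.25]]       # programmed to a quantized level
V = [0.5, 1.0]          # voltages applied to the rows (the input vector)
I = crossbar_multiply(V, G)   # the matrix-vector product, read out as current
```

A digital machine needs one multiply-accumulate per element; the crossbar does the whole product simultaneously, which is exactly what matrix-heavy jobs like machine learning want.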
Continue reading “Memristors On A Chip Solve Partial Differential Equations”
I’ll be brutally honest. When I set out to write this post, I was going to talk about IBM’s Q Experience — the website where you can run real code on some older IBM quantum computing hardware. I am going to get to that — I promise — but that’s going to have to wait for another time. It turns out that quantum computing is mindbending and — to make matters worse — there are a lot of oversimplifications floating around that make it even harder to understand than it ought to be. Because the IBM system matches up with real hardware, it has a lot more limitations than a simulator — think of programming a microcontroller with no debugging support versus using a software emulator. You can zoom into any level of detail with the emulator, but with the bare micro you can toggle a line, use a scope, and hope things don’t go too far wrong.
So before we get to the real quantum hardware, I am going to show you a simulator written by [Craig Gidney]. He wrote it and promptly got a job with Google, who took over the project. Sort of. Even if you don’t like working in a browser, [Craig’s] simulator is easy enough, you don’t need an account, and a bookmark will save your work.
It isn’t the only available simulator, but as [Craig] immodestly (but correctly) points out, his simulator is much better than IBM’s. Starting with the simulator avoids tripping on the hardware limitations. For example, IBM’s devices are not fully connected, like a CPU where only some registers can get to other registers. In addition, real devices have to deal with noise and quantum states that don’t last very long. If your algorithm is too slow, the quantum state will collapse and invalidate your results. These aren’t issues on a simulator. You can find a list of other simulators, but I’m focusing on Quirk.
What Quantum Computing Is
As I mentioned, there is a lot of misinformation about quantum computing (QC) floating around. I think part of it revolves around the word computing. If you are old enough to remember analog computers, QC is much more like that. You build “circuits” to create results. There’s also a lot of difficult math — mostly linear algebra — that I’m going to try to avoid as much as possible. Still, if you can dig into the math, it is worth your time to do so. But just like you can design a resonant circuit without solving differential equations about inductors, I think you can do QC without some of the bigger math by just using the results. We’ll see how well that holds up in practice.
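To see why linear algebra keeps coming up, here is a minimal state-vector sketch: a qubit is just a two-element vector of amplitudes, and a gate is a matrix that multiplies it. This toy uses real amplitudes only (general gates need complex numbers) and plain Python lists — it’s an illustration of the math, not how Quirk is implemented:

```python
import math

# A qubit is a 2-vector of amplitudes; a gate is a 2x2 matrix.
def apply(gate, state):
    return [gate[0][0] * state[0] + gate[0][1] * state[1],
            gate[1][0] * state[0] + gate[1][1] * state[1]]

s = 1 / math.sqrt(2)
H = [[s,  s],
     [s, -s]]                      # the Hadamard gate

zero = [1.0, 0.0]                  # the |0> state
plus = apply(H, zero)              # equal superposition of |0> and |1>
probs = [a * a for a in plus]      # Born rule: each outcome has probability 1/2
back = apply(H, plus)              # H is its own inverse: we recover |0>
```

That’s the whole game, scaled up: n qubits give a vector of 2ⁿ amplitudes, which is why simulators run out of steam and real hardware gets interesting.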
Continue reading “Quantum Weirdness in Your Browser”
I recently had the chance to visit Belgrade and take part in the Hackaday | Belgrade conference. Whenever I travel, I like to make some extra field trips to explore the area. This Serbian trip included a tour of electronics manufacturing, some excellent museums, and a startup that is weaving FPGAs into servers and PCIe cards.
Continue reading “Belgrade Experience: MikroElektronika, Museums, and FPGA Computing”
There is an argument to be made that whichever hue of political buffoons ends up in Number 10 Downing Street, the White House, the Élysée Palace, or wherever the President, Prime Minister or despot lives in your country, eventually they will send the economy down the drain.
Fortunately, there is a machine for that. MONIAC is an analogue computer with water as its medium, designed to simulate a national economy for students. Invented in 1949 by the New Zealand economist [William Phillips], it is a large wooden board with a series of tanks interconnected by pipes and valves. Different sections of the economy are represented by the water tanks, and the pipes and valves model the flow of money between them. Spending is downhill gravitational water flow, while taxation is represented by a pump which returns money to the treasury at the top. It was designed to model the British economy of the late 1940s, as [Phillips] was a student at the London School of Economics when he created it. Using the machine allowed students and economists for the first time to simulate the effects of real economic decisions in government, in real time.
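The tank-and-valve idea translates directly into a stock-flow simulation. Here’s a toy two-tank sketch in the MONIAC spirit — the rates and starting levels are invented for illustration and have nothing to do with Phillips’s actual calibration:

```python
# Two "tanks" of money: each step, a fraction of each tank flows onward,
# like water through a valve opened to a fixed setting.
def simulate(steps, tax_rate=0.2, spend_rate=0.9):
    treasury, households = 100.0, 100.0
    for _ in range(steps):
        spending = spend_rate * treasury   # downhill flow out of the treasury
        taxes = tax_rate * households      # the pump returning money uphill
        treasury += taxes - spending
        households += spending - taxes
    return treasury, households

treasury, households = simulate(50)
```

Run it and the levels settle to an equilibrium set by the two valve positions — change `tax_rate` and watch the steady state shift, which is exactly the lesson the water version taught.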
So if you have a MONIAC, you can learn all about spectacularly mismanaging the economy, and then in a real sense flush the economy down the drain afterwards. The video below shows Cambridge University’s restored MONIAC in operation, and should explain the device’s workings in detail.
Continue reading “Retrotechtacular: MONIAC”
It’s funny how creation and understanding interact. Sometimes the urge to create something comes from a new-found deep understanding of a concept, and sometimes the act of creation leads to that understanding. And sometimes creation and understanding are linked together in such a way as to lead in an entirely new direction, which is the story behind this plywood recreation of the Michelson Fourier analysis machine.
For those not familiar with this piece of computing history, it’s worth watching the videos in our article covering [Bill “The Engineer Guy” Hammack]’s discussion of this amazing early 20th-century analog computer. Those videos were shown to [nopvelthuizen] in a math class he took at the outset of degree work in physics education. The beauty of the sinusoids being created by the cam-operated rocker arms and summed to display the output waveforms captured his imagination and led to an eight-channel copy of the 20-channel original.
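What the rocker arms are doing is Fourier synthesis: each channel contributes one sinusoid, and the machine mechanically adds them up. A minimal numeric sketch of the same summing (our own code, standing in for the levers and springs) — eight channels approximating a square wave:

```python
import math

def synthesize(amplitudes, t):
    """Sum of sinusoids: channel n contributes amplitudes[n-1] * sin(n*t)."""
    return sum(a * math.sin(n * t)
               for n, a in enumerate(amplitudes, start=1))

# Eight channels of a square wave's Fourier series (odd harmonics only):
amps = [4 / (math.pi * n) if n % 2 else 0.0 for n in range(1, 9)]
y = synthesize(amps, math.pi / 2)   # sample the synthesized wave at its peak
```

With only eight channels the square wave comes out wobbly, just as it does on the plywood machine — more channels mean sharper corners, which is why the Michelson original had twenty.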
Working with plywood and a CNC router, [nopvelthuizen]’s creation is faithful to the original if a bit limited by the smaller number of sinusoids that can be summed. A laser cutter or 3D printer would have allowed for a longer gear train, but we think the replica is great the way it is. What’s more, the real winners are [nopvelthuizen]’s eventual physics students, who will probably look with some awe at their teacher’s skills and enthusiasm.
Continue reading “Fourier Machine Mimics Michelson Original in Plywood”
John Napier was a Scottish physicist, mathematician, and astronomer who usually gets the credit for inventing logarithms. But his contributions to simplifying mathematics and building shorthand solutions didn’t end there. In the course of performing the many calculations he needed to practice these subjects in the 1500s, Napier invented a kind of computing mechanism for multiplication. It’s a physical manifestation of an old system known as lattice multiplication or gelosia.
Lattice multiplication makes use of the multiplication table in order to multiply huge numbers together quickly and easily. It is thought to have originated in India and moved west into Europe. When the lattice method reached Italy, the Italians named it gelosia after the trellised window covering it resembled, which was commonly used to keep prying eyes away from one’s possessions and wife.
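The lattice method itself is easy to render in code: fill each cell of the grid with a one-digit product, then sum along the diagonals, carrying as you go. A minimal Python sketch of the algorithm (our own implementation, not a model of Napier’s rods):

```python
def lattice_multiply(a, b):
    """Multiply two numbers the gelosia way: one single-digit product per
    cell, then sum the diagonals with carries."""
    xs = [int(d) for d in str(a)]
    ys = [int(d) for d in str(b)]
    # diagonals[k] collects cell digits whose place value is 10^k
    diagonals = [0] * (len(xs) + len(ys))
    for i, x in enumerate(reversed(xs)):
        for j, y in enumerate(reversed(ys)):
            p = x * y
            diagonals[i + j] += p % 10       # units digit of the cell
            diagonals[i + j + 1] += p // 10  # tens digit, one diagonal up
    # resolve carries along the diagonals, least significant first
    total, place, carry = 0, 1, 0
    for d in diagonals:
        d += carry
        total += (d % 10) * place
        carry = d // 10
        place *= 10
    return total
```

The appeal in Napier’s day is visible in the inner loop: every cell needs only the one-digit multiplication table, and all the bookkeeping is just addition.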
Continue reading “Bone Up on Your Multiplication Skills”