A while back, [Chris Lu] was studying how analog circuits, specifically op-amps, can be used to perform mathematical operations, and wondered if they could be persuaded to solve differential equations, such as the wave equation. After sitting on the idea for a few years, it was time to make it a reality, and the result is an entry into the Op-Amp Challenge.
Unlike many similar interactive LED matrix displays that are digital in nature (because it’s a lot easier), this design is pure analog, using many, many op-amps. A custom PCB houses a 4×4 array of compute units, each with a blue and white LED indicating the sign and magnitude of the local signal.
The local input signal is provided by an IR photodiode, AC-coupled so that it responds only to changes; to keep things simple, every other circuit shares a sensor. Each circuit is connected to its immediate neighbors, both on the PCB and, via board-to-board connectors, off it. This simple scheme makes the design easy to scale up in the future if desired.
[Chris] does a great job of breaking down the math involved, which makes this project a neat illustration of how op-amp circuits can solve complex mathematical problems in an easy-to-understand way. Even more op-amps are pressed into service to generate the split-rail voltage reference and to amplify the weak photodiode signals, but the computation circuit is the star of the show.
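The circuitry is pure analog, but the math it implements, a wave equation discretized onto the grid with each cell coupled to its neighbors, is easy to sketch numerically. Here's a minimal Python simulation of that same neighbor-coupling relationship; the coupling constant and the wraparound boundary are our illustrative choices, not values taken from [Chris]'s board:

```python
import numpy as np

def step_wave(u, u_prev, c2=0.1):
    """Advance a discretized 2D wave equation one time step.

    Each cell's acceleration is driven by the sum of its four
    neighbors minus four times its own value (the discrete
    Laplacian), the same neighbor-coupling relationship each
    analog compute unit implements.
    """
    lap = (np.roll(u, 1, 0) + np.roll(u, -1, 0)
           + np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4 * u)
    # Leapfrog update: u_next = 2u - u_prev + c^2 * laplacian
    return 2 * u - u_prev + c2 * lap

# A 4x4 grid; np.roll wraps the edges around, whereas the real
# board routes its edge cells to board-to-board connectors.
u_prev = np.zeros((4, 4))
u = np.zeros((4, 4))
u[1, 1] = 1.0   # poke one cell, as a hand over its IR sensor would
for _ in range(10):
    u, u_prev = step_wave(u, u_prev), u
```

Watching `u` evolve shows the disturbance rippling outward across the grid, which is what the blue and white LEDs visualize as sign and magnitude.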
We like analog computing a fair bit around these parts. Here’s a little something we were previously drooling over.
Continue reading “Op-Amp Challenge: Interactive Analog LED Wave Array”
Today, most of what we think of as a computer uses digital technology. But that wasn’t always the case. From slide rules to mechanical fire solution computers to electronic analog computers, there have been plenty of computers that don’t work on 1s and 0s, but on analog quantities such as angle or voltage. [Ken Shirriff] is working to restore an analog computer from around 1969 provided by [CuriousMarc]. He’ll probably write a few posts, but this month’s one focuses on the op-amps.
For an electronic analog computer, the op-amp was the main processing element. Feed multiple voltages into a summing junction and you get addition; set the gain with a resistor ratio and you get multiplication by a constant. Add a capacitor in the feedback path, and you can do integration. But there’s a problem.
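Those relationships are easy to sanity-check numerically. Here's a quick sketch of the ideal textbook formulas for the inverting summer and integrator; the component values are our own round numbers, not anything specific to the machine [Ken] is restoring:

```python
import numpy as np

def summing_amp(v_inputs, r_inputs, r_feedback):
    """Ideal inverting summer: Vout = -Rf * sum(Vi / Ri).

    Choosing each input resistor scales (multiplies) that input.
    """
    return -r_feedback * sum(v / r for v, r in zip(v_inputs, r_inputs))

def integrator(v_in, dt, r, c):
    """Ideal op-amp integrator: Vout(t) = -(1/RC) * integral of Vin dt."""
    return -np.cumsum(v_in) * dt / (r * c)

# Adding 1 V and 2 V with all resistors equal gives -(1 + 2) = -3 V
v_sum = summing_amp([1.0, 2.0], [10e3, 10e3], 10e3)

# Integrating a constant 1 V for 1 s with RC = 1 s ramps toward -1 V
ramp = integrator(np.ones(100), 0.01, 1.0, 1.0)
```

Note the sign flips: both circuits invert, a side effect of using the op-amp's inverting input as the summing junction.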
Continue reading “Secrets From A 1969 Analog Computer”
If you have had trouble with ordinary calculus, you may not be pleased to hear about “photonic calculus” — a recent idea from [Nader Engheta] of the University of Pennsylvania. The idea is that materials with certain properties could manipulate an electromagnetic wave in a way to solve a specific mathematical equation. [Engheta] proposed this idea back in 2014 and recently announced that he and his team have a demonstration device that proves the concept. The analog computer is about twice the size of an airplane’s tray table and made of CNC-shaped polystyrene. It solves Fredholm integral equations of the second kind.
The analog computer takes microwaves as its input, with the polystyrene acting as a dielectric full of air holes; the team likens its structure to Swiss cheese. The shapes are generated through an inverse design process that works backward from known solutions to the equations, which means a particular set of shapes solves one specific equation. That equation could, for example, model the sound volume in a concert hall: encode certain parameters in the input wave, and the output specifies the volume at different locations. A change to the actual equation, however, would require a new set of plastic pieces.
The computation is very fast. Using microwaves, the answer comes out in a few hundred nanoseconds — a speed a conventional computer could not readily match. The team hopes to scale the system to use light which will speed the computation into the picosecond range. Creating a new optical analog computer could be similar to how we burn a CD or DVD today.
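The device solves these equations optically, but the same class of equation yields to a conventional numerical treatment, which makes a nice point of comparison. Here's a sketch using the standard Nyström (quadrature) method; the kernel and right-hand side are a textbook example we picked because it has a known closed-form answer, not the concert-hall model:

```python
import numpy as np

def solve_fredholm2(kernel, g, lam, a, b, n=200):
    """Solve f(x) = g(x) + lam * integral_a^b K(x, t) f(t) dt
    via the Nystrom method: replace the integral with trapezoid
    quadrature and solve the resulting n-by-n linear system."""
    x = np.linspace(a, b, n)
    w = np.full(n, (b - a) / (n - 1))   # trapezoid weights
    w[0] = w[-1] = w[0] / 2
    K = kernel(x[:, None], x[None, :])  # K[i, j] = K(x_i, t_j)
    A = np.eye(n) - lam * K * w         # (I - lam*K*W) f = g
    return x, np.linalg.solve(A, g(x))

# Textbook check: K(x, t) = x*t, g(x) = x, lam = 1 on [0, 1]
# has the closed-form solution f(x) = 1.5 * x.
x, f = solve_fredholm2(lambda x, t: x * t, lambda x: x, 1.0, 0.0, 1.0)
```

Solving the dense linear system is what takes a conventional computer far longer than the few hundred nanoseconds the microwave version needs.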
Analog computers predate digital ones by a lot. We really want to build one like [Bill Schweber’s]. Then again, we wouldn’t mind finding a Donner 3500 at a hamfest, either.
When your only tool is a hammer, everything starts to look like a nail. That’s an old saying, and perhaps somewhat obvious, but our tools do color our solutions, sometimes in very subtle ways. For example, using a computer causes our solutions to take a certain shape, especially where numbers are concerned. A digital computer deals with numbers as integers, and anything that isn’t an integer is really some limited-precision representation. Sure, an IEEE floating-point number has a wide range, but there’s still a discrete step between one value and the next nearest that you can’t reduce. Even if you treat numbers as arbitrary text strings or fractions, the digital nature of computers will color your solution. But there are other ways to do computing, and they affect your outcome differently. That’s why [Bill Schweber’s] analog computation series caught our eye.
One great example of analog versus digital methods is reading an arbitrary analog quantity, say a voltage, a temperature, or a shaft position. In the digital domain, there’s some converter with a certain number of bits. You can push that number of bits to something ridiculous, of course, but it isn’t easy. The fewer bits, the less faithfully you can capture the real-world quantity.
For example, you could consider a single comparator to be a one-bit analog-to-digital converter, but all it can tell you is whether the signal is above or below a certain value. A two-bit converter would let you break a 0-3 V signal into 1 V steps. But a cheap and simple potentiometer can divide that same 0-3 V signal into a virtually infinite number of smaller voltages. Sure, there’s some physical limit to the pot, and we suppose at some level many physical quantities are quantized by physics itself, but those steps are infinitesimal compared to the dozen or so bits of a converter. On top of that, sampled signals are measured only at discrete points in time, which imposes its own constraints and leads to effects like aliasing.
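To put some numbers on that, here's a quick model of an ideal ADC quantizing a 0-3 V ramp at different bit depths; it's our own illustration (with steps counted as intervals rather than levels), not something from [Bill]'s series:

```python
import numpy as np

def quantize(signal, bits, v_min=0.0, v_max=3.0):
    """Model an ideal ADC: snap each sample to one of 2**bits levels,
    reconstructing each code at the midpoint of its step."""
    levels = 2 ** bits
    step = (v_max - v_min) / levels
    codes = np.clip(((signal - v_min) / step).astype(int), 0, levels - 1)
    return v_min + (codes + 0.5) * step

# Sweep a 0-3 V ramp through converters of various widths
v = np.linspace(0.0, 3.0, 1000, endpoint=False)
for bits in (1, 2, 12):
    worst = np.abs(quantize(v, bits) - v).max()
    print(f"{bits:2d} bits: worst-case error {worst * 1000:.1f} mV")
```

The one-bit case is the comparator from above; every extra bit halves the worst-case error, but the potentiometer's effective step size is still far smaller than even the 12-bit converter's.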
Continue reading “Continuous Computing The Analog Way”