50-Year-Old Program Gets Speed Boost

At first glance, getting a computer program to run faster than the first electronic computers might seem trivial. After all, most of us carry enormously powerful processors in our pockets every day as if that’s normal. But [Mark] isn’t trying to beat computers like the ENIAC with a mobile ARM processor or other modern device. He’s programming with the Intel 4040, the successor to Intel’s original microprocessor, the 4004, and beating the ENIAC is still a little more complicated than you might think, even with a processor from 1974.

For this project, the goal was to best the 70-hour time set by ENIAC for computing the first 2035 digits of pi. There are a number of algorithms for performing this calculation, but a 4-bit processor with an extremely limited memory of only 1280 bytes rules most of them out, especially with the self-imposed time limit. The limited instruction set of these early processors is a potential bottleneck as well. Given these constraints, [Mark] decided to use [Fabrice Bellard]’s algorithm. He goes into great detail about the mathematics behind this method before coding it in JavaScript, and generating assembly language from a working JavaScript implementation turned out to be fairly straightforward.
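
For a taste of the math involved, here is [Bellard]’s well-known 1997 BBP-type series for pi, sketched in a few lines of C++. This is only an illustration of how fast the series converges when you have floating point to lean on; it may not be the exact variant [Mark] squeezed into 1280 bytes of 4-bit code.

```cpp
#include <cmath>
#include <cstdio>

// Bellard's 1997 BBP-type series for pi, shown purely to illustrate how
// quickly it converges. The exact variant [Mark] implemented on the 4040
// may differ.
double bellard_pi(int terms) {
    double sum = 0.0;
    for (int n = 0; n < terms; ++n) {
        double inner = -32.0 / (4 * n + 1) - 1.0 / (4 * n + 3)
                       + 256.0 / (10 * n + 1) - 64.0 / (10 * n + 3)
                       - 4.0 / (10 * n + 5) - 4.0 / (10 * n + 7)
                       + 1.0 / (10 * n + 9);
        // Alternating sign plus a 2^(-10n) factor: every term adds roughly
        // three more correct decimal digits.
        sum += ((n % 2) ? -1.0 : 1.0) * std::ldexp(inner, -10 * n);
    }
    return sum / 64.0;
}

int main() {
    std::printf("%.15f\n", bellard_pi(6));  // already at the limit of double precision
}
```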

[Mark] is also doing a lot of work on the 4040 to get this program running, including upgrades to the 40xx tool stack, the compiler and linker, and an emulator he’s using to test his program before sending it to physical hardware. The project is remarkably well-documented, including all of the optimizations needed to get these antique processors running fast enough to beat the ENIAC. We won’t spoil the results for you, but as a hint to how it worked out, he started this project on the 4040 because his original attempt using a 4004 wasn’t quite fast enough.

Guitar Distortion With Diodes In Code, Not Hardware

Guitarists will do just about anything to get just the right sound out of their setup, including purposely introducing all manner of distortion into the signal. It seems counter-intuitive, but it works, at least when it’s done right. But what exactly is going on with the signal? And is there a way to simulate it? Of course there is, and all it takes is a little math and some Arduino code.

Now, there are a lot of different techniques for modifying the signal from an electric guitar, but perhaps the simplest is the humble diode clipping circuit. It just uses an op-amp with antiparallel diodes either in series in the feedback loop or shunting the output to ground. The diodes clip the tops and bottoms off of the sine waves, turning them into something closer to a square wave, adding those extra harmonics that really fatten the sound. It’s a simple hack that’s easy to implement in hardware, enough so that distortion pedals galore are commercially available.

In the video below, [Sebastian] explains that this distortion is also pretty easy to reproduce algorithmically. He breaks down the math behind it, which is actually pretty approachable — a step function with a linear part, a quadratic section, and a hard-clipping section. He also derives a second, natural-exponent step function from the Shockley diode equation that is less computationally demanding. To implement these models, [Sebastian] chose an Arduino GIGA R1 WiFi, using an ADC to digitize the guitar signal and devoting a DAC to each of the two algorithms. Each distortion effect has its own charms; we prefer the harsher step function over the exponential algorithm, but different strokes.
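
If you want to play along without breadboarding anything, both curves are easy to sketch in plain C++. The breakpoints and scaling below are common textbook choices for this family of clippers rather than [Sebastian]’s exact constants, but the shapes are the same: linear, then a quadratic knee, then hard limiting, plus an exponential curve inspired by the Shockley equation.

```cpp
#include <cmath>
#include <cstdio>

// Piecewise clipper: linear near zero, a quadratic "knee", then a hard limit.
float clipPiecewise(float x) {
    float s = (x < 0.0f) ? -1.0f : 1.0f;
    float a = std::fabs(x);
    if (a < 1.0f / 3.0f) return 2.0f * x;      // linear region
    if (a < 2.0f / 3.0f) {                     // quadratic knee
        float t = 2.0f - 3.0f * a;
        return s * (3.0f - t * t) / 3.0f;
    }
    return s;                                  // hard clip
}

// Exponential soft clipper in the spirit of the Shockley diode equation's
// exponential I-V characteristic.
float clipExponential(float x) {
    float s = (x < 0.0f) ? -1.0f : 1.0f;
    return s * (1.0f - std::exp(-std::fabs(x)));
}

int main() {
    for (float x = -1.0f; x <= 1.001f; x += 0.25f)
        std::printf("x=% .2f  piecewise=% .3f  exponential=% .3f\n",
                    x, clipPiecewise(x), clipExponential(x));
}
```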

Kudos to [Sebastian] for this easy-to-understand treatment of what could otherwise be a difficult subject to digest. We didn’t really expect that a guitar distortion pedal would lead down the rabbit hole to diode theory and digital signal processing, but we’re glad it did.

Continue reading “Guitar Distortion With Diodes In Code, Not Hardware”

Formation Flying Does More Than Look Good

Seeing airplanes fly in formation is an exciting experience at something like an air show, where demonstrations of a pilot’s skill and aircraft technology are on full display. But there are other reasons for aircraft to fly in formation as well. [Peter] has been exploring the idea that formation flight can also improve efficiency, and has been looking specifically at things like formation flight of UAVs or drones with this flight planning algorithm.

Aircraft flying in formation create vortices around their wing tips, which cause induced drag. However, a trailing aircraft positioned in the upwash of those vortices will experience less drag and more efficient flight. This is the same reason birds instinctively fly in formation. By planning paths for drones that leave from different locations, meet up at some point to fly a shared leg in a more efficient formation, and then split up close to their destinations, a significant amount of energy can potentially be saved; a quick back-of-the-envelope sketch of the trade-off follows below.

Continue reading “Formation Flying Does More Than Look Good”
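
Here is that back-of-the-envelope sketch in C++: two drones either fly direct, or detour to a rendezvous point, share a leg in formation, and split again. The positions and the 15% drag saving for the trailing drone are made-up numbers for illustration, not [Peter]’s model, but they show how a detour can still come out ahead when the shared leg is long enough.

```cpp
#include <cmath>
#include <cstdio>

struct Point { double x, y; };

double dist(Point a, Point b) { return std::hypot(a.x - b.x, a.y - b.y); }

int main() {
    // Assumed start/goal positions (km) and an assumed 15% energy saving for
    // the trailing drone while it flies in the leader's upwash.
    Point startA{0, 0},  goalA{100, 0};
    Point startB{0, 10}, goalB{100, 10};
    Point meet{10, 5},   split{90, 5};
    const double formationSaving = 0.15;

    // Baseline: both drones fly straight to their own destinations.
    double solo = dist(startA, goalA) + dist(startB, goalB);

    // Alternative: converge, fly the shared leg as leader + follower, diverge.
    double sharedLeg = dist(meet, split);
    double formation = dist(startA, meet) + dist(startB, meet)
                     + sharedLeg * (1.0 + (1.0 - formationSaving))
                     + dist(split, goalA) + dist(split, goalB);

    // Treating energy as proportional to distance flown (a big simplification),
    // the formation plan still comes out ahead despite the detour.
    std::printf("solo: %.1f  formation: %.1f\n", solo, formation);
}
```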

Martian Wheel Control Algorithms Gain Traction

Imagine the scene: You’re puttering along in your vehicle when, at least an hour from the nearest help, one of your tires starts losing air. Not to worry! You’ve got a spare tire along with the tools and know-how to change it. And if that fails, you can call roadside assistance. But what if your car isn’t a car, has metal wheels for which no spares are available, and the nearest help is 200 million miles away? You just might be a Jet Propulsion Laboratory engineer on the Curiosity Mars rover mission, who in 2017 was charged with creating a new driving algorithm designed to extend the life of the wheels.

High-performance rock crawler, courtesy of Spidertrax.com, license: CC BY 3.0

You could say that the Curiosity Mars rover is the ultimate off-road vehicle, and as such it has to deal with conditions that are in some ways not that different from some locations here on Earth. Earth-bound rock crawlers use long-travel suspensions, specialized drivetrains, and locking differentials to keep the tires on the ground and prevent a loss of traction.

On Mars, sand and rocks dominate the landscape, and a rover must navigate around the worst of it. It’s inevitable that, just like a terrestrial off-roader, the Mars rovers will spin a tire now and then when a wheel loses traction. The Mars rovers also have specialized drivetrains and long-travel suspensions. They don’t employ differentials, though, so how are they to prevent a loss of traction and the damaging wheel spin that ensues? This is where the aforementioned traction control algorithm comes in.

By controlling the rotation of the wheels with less traction, the rover can keep them contributing to the motion of the vehicle while avoiding rock rash. Be sure to check out the excellent article at JPL’s website for a full explanation of their methodology and the added benefits of being able to upload new traction control algorithms from 200 million miles away! No doubt the Perseverance Mars rover has also benefited from this research.
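
The core idea is simple enough to sketch in a few lines of C++: if a wheel’s rim speed runs well ahead of the rover’s actual ground speed, that wheel is slipping, so its commanded rate gets scaled back until it rolls instead of grinding. The wheel radius, slip threshold, and proportional correction below are our own illustrative assumptions, not JPL’s algorithm.

```cpp
#include <cstdio>

const double WHEEL_RADIUS_M = 0.25;   // assumed wheel radius
const double SLIP_THRESHOLD = 0.20;   // tolerate 20% slip before intervening

// wheelOmega: measured wheel angular rate [rad/s]
// groundSpeed: rover ground-speed estimate, e.g. from visual odometry [m/s]
// Returns a scale factor (0..1) to apply to that wheel's commanded rate.
double tractionScale(double wheelOmega, double groundSpeed) {
    double rimSpeed = wheelOmega * WHEEL_RADIUS_M;
    if (rimSpeed <= 0.0) return 1.0;
    double slip = (rimSpeed - groundSpeed) / rimSpeed;
    if (slip <= SLIP_THRESHOLD) return 1.0;    // rolling normally, leave it alone
    return groundSpeed / rimSpeed;             // slow the wheel to match the ground
}

int main() {
    // One wheel rolling normally, one spinning against a slick rock face.
    std::printf("good wheel scale:     %.2f\n", tractionScale(0.40, 0.095));
    std::printf("slipping wheel scale: %.2f\n", tractionScale(0.80, 0.095));
}
```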

But why should NASA get to have all the fun? You can join them by 3D printing your own Mars rover and maybe adding some Power Wheels-derived traction control. What fun!

Weather Station Predicts Air Quality

Measuring air quality at any particular location isn’t too complicated. A sensor or two and a small microcontroller are generally all that’s needed. Predicting the upcoming air quality is a little more complicated, though, since so many factors determine how safe it will be to breathe the air outside. Luckily, we don’t need to know all of these factors and their complex interactions in order to predict air quality. We can train a computer to do that for us, as [kutluhan_aktar] demonstrates with a machine learning-capable air quality meter.

The build is based around an Arduino Nano 33 BLE connected to a small weather station outside. It specifically monitors ozone concentration as a benchmark for overall air quality, but also uses an anemometer and a BMP180 precision pressure and temperature sensor to assist in training the algorithm. The weather data is sent over Bluetooth to a Raspberry Pi running TensorFlow. Once the neural network was trained, the model was sent back to the Arduino, which is now capable of using it to make much more accurate predictions of future air quality.
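
Once the model makes the round trip back to the microcontroller, inference boils down to a handful of multiply-accumulates. Below is a hypothetical, stripped-down C++ sketch of that step; the layer sizes, class labels, and zeroed weights are placeholders, not the actual TensorFlow model [kutluhan_aktar] trained on the Raspberry Pi.

```cpp
#include <cstdio>

const int N_IN = 4;      // ozone, wind speed, pressure, temperature (normalized)
const int N_HIDDEN = 8;
const int N_OUT = 3;     // e.g. good / moderate / unhealthy

// In practice these weights would be exported from the trained model;
// they are left zeroed here.
float W1[N_HIDDEN * N_IN], b1[N_HIDDEN];
float W2[N_OUT * N_HIDDEN], b2[N_OUT];

// One dense layer; ReLU is applied only on the hidden layer.
void dense(const float* in, float* out, int nIn, int nOut,
           const float* w, const float* b, bool relu) {
    for (int o = 0; o < nOut; o++) {
        float acc = b[o];
        for (int i = 0; i < nIn; i++) acc += w[o * nIn + i] * in[i];
        out[o] = (relu && acc < 0.0f) ? 0.0f : acc;
    }
}

// Returns the index of the highest-scoring air-quality class.
int predict(const float input[N_IN]) {
    float hidden[N_HIDDEN], output[N_OUT];
    dense(input, hidden, N_IN, N_HIDDEN, W1, b1, true);
    dense(hidden, output, N_HIDDEN, N_OUT, W2, b2, false);
    int best = 0;
    for (int o = 1; o < N_OUT; o++)
        if (output[o] > output[best]) best = o;
    return best;
}

int main() {
    float reading[N_IN] = {0.3f, 0.1f, 0.55f, 0.6f};  // one normalized sample
    std::printf("predicted class: %d\n", predict(reading));
}
```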

The build goes into quite a bit of detail on setting up the models, training them, and then using them on the Arduino. It’s an impressive build capped off with a fun 3D-printed case that resembles an old windmill. Using machine learning to help predict the weather is starting to become more commonplace as well, as we have seen before with this weather station that can predict rainfall intensity.

Quick And Simple Morse Decoder

[Rostislav Persion] wrote a simple Morse Code decoder to run on his Arduino and display the text on an LCD shield. This is probably the simplest decoder possible, and thus its logic is pretty straightforward to follow. Simplicity comes at a price — changing the speed requires changing constants in the code. We would like to see this hooked up to a proper Morse code key, and see how fast [Rostislav] could drive it before it conks out.

In an earlier era of Morse code decoders, one tough part was dealing with the idiosyncrasies of each sender. Every operator’s style, or “fist”, has subtle variations in the timings of the dots, dashes, and the pauses between these elements, the letters, and the words. In fact, trained operators can recognize each other because of this, much like we can often recognize who is speaking on the phone just by hearing their voice. The other difficulty these decoders faced was detecting the signal in low signal-to-noise ratio environments — pulling the signal out of the noise.

A Morse decoder built today is more likely to be used to decode machine-generated signals, for example, debugging information or telemetry. This would more than likely be sent at fixed, known speeds over directly connected links with very high S/N ratios (a wire, perhaps). In these situations, a simple decoder like [Rostislav]’s is completely sufficient.
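
For that fixed-speed, clean-signal case, the whole decoder really does fit on a page. Below is a minimal Arduino-style sketch in the spirit of [Rostislav]’s approach, not his actual code; the pin assignment and hard-coded dot length are our own assumptions, and changing the speed means editing a constant, just as described above.

```cpp
#include <Arduino.h>

// Classic fixed-timing approach: every mark and space is classified against a
// hard-coded dot length, so changing speed means editing DOT_MS.
const int KEY_PIN = 2;             // assumed input pin, HIGH while keyed
const unsigned long DOT_MS = 100;  // 100 ms dot is roughly 12 WPM

const char* CODE[26] = {".-","-...","-.-.","-..",".","..-.","--.","....","..",
                        ".---","-.-",".-..","--","-.","---",".--.","--.-",".-.",
                        "...","-","..-","...-",".--","-..-","-.--","--.."};

char lookup(const String& pattern) {
  for (int i = 0; i < 26; i++)
    if (pattern == CODE[i]) return 'A' + i;
  return '?';
}

String symbol;

void setup() {
  pinMode(KEY_PIN, INPUT);
  Serial.begin(115200);
}

void loop() {
  if (digitalRead(KEY_PIN) == HIGH) {
    // Time the mark: under two dot-lengths is a dot, otherwise a dash.
    unsigned long start = millis();
    while (digitalRead(KEY_PIN) == HIGH) {}
    unsigned long mark = millis() - start;
    symbol += (mark < 2 * DOT_MS) ? '.' : '-';

    // Time the following space to find letter (~3 dots) and word (~7 dots) gaps.
    start = millis();
    while (digitalRead(KEY_PIN) == LOW && millis() - start < 7 * DOT_MS) {}
    unsigned long gap = millis() - start;
    if (gap >= 2 * DOT_MS) {
      Serial.print(lookup(symbol));
      symbol = "";
      if (gap >= 5 * DOT_MS) Serial.print(' ');
    }
  }
}
```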

We wrote about a couple of Morse code algorithms back in 2014, the MorseDetector and the Magic Morse algorithm. While Morse code operators usually rank their skills by speed — the faster the better — this Morse code project for very low power transmitters turns that notion on its head by using speeds more suitably measured in minutes per word (77 MPW for that project). Have you used Morse code in any of your projects before? Let us know in the comments below.

Quantum Inspired Algorithm Going Back To The Source

Recently, [Jabrils] set out to accomplish a difficult task: porting a quantum-inspired algorithm to run on a (simulated) quantum computer. Algorithms are often inspired by all sorts of natural phenomena. For example, one solution to the traveling salesman problem models ants and their pheromone trails. Another famous example is neural nets, which are inspired by the neurons in your brain. However, attempting to run a machine learning algorithm on your actual neurons, even with the assistance of pen and paper, would be a nearly impossible exercise.

The quantum-inspired algorithm in question is known as wavefunction collapse. In a nutshell, you have a cube of voxels, a graph of nodes, or simply a grid of tiles, as well as a list of detailed rules to determine the state of a node or tile. At the start of the algorithm, each node or tile is in a state of superposition, meaning it could still be in any of its possible states. Consulting the list of rules, the algorithm then begins to collapse the states. Unlike on a quantum computer, superposition is not an intrinsic part of a classical computer, so this solving must be done iteratively. To reduce possible conflicts and contradictions later down the line, the nodes with the least entropy (the smallest number of possible states) are solved first: a random state is assigned, and the consequences propagate through the system. This process continues until the wavefunction ultimately collapses to a stable state or a contradiction is reached.

What’s interesting is that the ruleset doesn’t need to be hand-coded; it can be inferred from an example. A classic use case of this algorithm is 2D pixel-art level design. By providing a small sample level, the algorithm churns away and produces similar but wholly unique output. This makes it easy to generate thousands of unique and beautiful levels from a single source image, but it comes at a price: even a small level can take hours to fully collapse. In theory, a quantum computer should be able to do this much faster, since, after all, it was the inspiration for the algorithm in the first place.
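
To make that concrete, here is a deliberately tiny one-dimensional C++ toy: the adjacency rules are learned from a short sample string, and output is generated by repeatedly collapsing the lowest-entropy cell and propagating constraints, just as described above. It is an illustration only, not [Jabrils]’s implementation; a real 2D version propagates constraints in every direction and backtracks or restarts when it hits a contradiction.

```cpp
#include <algorithm>
#include <cstdlib>
#include <ctime>
#include <iostream>
#include <iterator>
#include <map>
#include <set>
#include <string>
#include <vector>

int main() {
    std::srand(static_cast<unsigned>(std::time(nullptr)));

    std::string sample = "ABABCABAB";          // the example the rules are learned from
    std::set<char> tiles(sample.begin(), sample.end());
    std::map<char, std::set<char>> canFollow;  // learned adjacency rules
    for (size_t i = 0; i + 1 < sample.size(); ++i)
        canFollow[sample[i]].insert(sample[i + 1]);

    const int N = 20;
    std::vector<std::set<char>> cells(N, tiles);  // every cell starts in "superposition"

    for (int step = 0; step < N; ++step) {
        // Pick the uncollapsed cell with the fewest remaining possibilities.
        int best = -1;
        for (int i = 0; i < N; ++i)
            if (cells[i].size() > 1 && (best < 0 || cells[i].size() < cells[best].size()))
                best = i;
        if (best < 0) break;                      // everything has collapsed

        // Collapse it to one randomly chosen state...
        std::vector<char> options(cells[best].begin(), cells[best].end());
        cells[best] = { options[std::rand() % options.size()] };

        // ...and push the constraint rightward (a real version propagates in
        // every direction and backtracks or restarts on contradiction).
        for (int i = best; i + 1 < N; ++i) {
            std::set<char> allowed;
            for (char t : cells[i])
                for (char nxt : canFollow[t]) allowed.insert(nxt);
            std::set<char> pruned;
            std::set_intersection(cells[i + 1].begin(), cells[i + 1].end(),
                                  allowed.begin(), allowed.end(),
                                  std::inserter(pruned, pruned.begin()));
            if (pruned.empty() || pruned == cells[i + 1]) break;
            cells[i + 1] = pruned;
        }
    }

    for (const auto& c : cells) std::cout << (c.size() == 1 ? *c.begin() : '?');
    std::cout << "\n";
}
```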

[Jabrils] spent weeks trying to get things running but ultimately didn’t succeed. However, his efforts give us a peek into the world of quantum computing and this amazing algorithm. We look forward to hearing more about this project from [Jabrils], who is continuing to work on it in his spare time. Maybe give it a shot by learning the basics of quantum computing yourself.

Continue reading “Quantum Inspired Algorithm Going Back To The Source”