The Quadratic Equation Solution A Few Thousand Years In The Making

Everyone learns (and some readers may even still remember) the quadratic formula. It’s a pillar of algebra and lets you solve equations like Ax²+Bx+C=0. But just because you’ve used it doesn’t mean you know how to come up with the formula itself. It’s a bear to derive, so the vast majority of us simply memorize it. A Carnegie Mellon mathematician named Po-Shen Loh didn’t expect to find a new way to derive the solution when he was reviewing math materials for middle school use to make them easier to understand. After all, people have been solving that equation for about 4,000 years. But that’s exactly what he did.

Before we look at the new solution, let’s talk about why you’d want to solve quadratic equations. They are used in many contexts. In ancient times you might use one to work out how much extra crop to grow to cover your tax payments without eating into the crop you needed to subsist. In physics, they describe motion. There’s seemingly no end to the number of things you can model with a quadratic equation.

The Babylonians, in particular, would solve simultaneous equations to find the roots of a quadratic. The Egyptians, Greeks, Indians, and Chinese used graphical methods to solve the equations. The entire history is a bit much to get into here, but it’s still a great read. For this article, let’s dig into how the new derivation was discovered.
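As a quick taste of the idea (our own condensed paraphrase, not Loh’s full write-up): divide through by A so the equation is monic, then use the fact that the two roots must average to -B/2.

```latex
% For the monic case x^2 + Bx + C = 0, the roots r and s satisfy r + s = -B and rs = C,
% so write them symmetrically about their average -B/2:
r = -\frac{B}{2} + u, \qquad s = -\frac{B}{2} - u
\quad\Longrightarrow\quad
rs = \frac{B^{2}}{4} - u^{2} = C
\quad\Longrightarrow\quad
x = -\frac{B}{2} \pm \sqrt{\frac{B^{2}}{4} - C}.
```

No guessing at factorizations and no rote completing-the-square: the radical drops out of a single substitution.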

Continue reading “The Quadratic Equation Solution A Few Thousand Years In The Making”

How Random Is Random?

Many languages feature a random number generator library for help with tasks like rolling a die or flipping a coin. Why, you may ask, is this necessary when humans are perfectly capable of randomly coming up with values?

[ex-punctis] was curious about the same quandary and decided to code up an experiment to test the true randomness of humans. A script tries to guess the user’s next input out of two choices, keeping a tally in the JavaScript backend of the past five choices. If the script guesses correctly, it takes $1 from the user. Otherwise, the user earns $1.05.

Running the script against 200 pseudo-random inputs, 100,000 times over, produced an approximately normal distribution of correct guesses (µ = 50%, σ = 3.5%). Against human players, the script guessed the input correctly more than 57% of the time, beyond the µ + 2σ mark of that pseudo-random baseline. The result? Humans aren’t so good at being random after all.

It’s almost intuitive why this happens. Finger presses tend to repeat certain patterns. The script keeps a database of all possible combinations of five presses, with a counter for each combination. Every time a key is pressed, the record of the latest five presses is updated and the counter increases for whichever combination those five presses fall under. Based on this data, the script can make a prediction about the user’s next press, along the lines of the sketch below.
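Here is a rough Python version of that kind of predictor. It is our own illustration rather than [ex-punctis]’s actual JavaScript, and the 'L'/'R' labels and dollar stakes are just stand-ins following the description above:

```python
from collections import defaultdict
import random

history_counts = defaultdict(lambda: {"L": 0, "R": 0})  # five-press pattern -> counts of what followed
last_five = []      # sliding window of the user's most recent presses
winnings = 0.0      # the machine's running total

def predict(pattern):
    """Guess the press most often seen after this pattern, or flip a coin if there is no data yet."""
    counts = history_counts[pattern]
    if counts["L"] == counts["R"]:
        return random.choice("LR")
    return "L" if counts["L"] > counts["R"] else "R"

def play(press):
    """Feed in one user press ('L' or 'R') and return the machine's guess for it."""
    global winnings, last_five
    pattern = "".join(last_five)
    guess = predict(pattern) if len(last_five) == 5 else random.choice("LR")
    winnings += 1.00 if guess == press else -1.05   # stakes as described above
    if len(last_five) == 5:
        history_counts[pattern][press] += 1         # remember what actually followed this pattern
    last_five = (last_five + [press])[-5:]
    return guess
```

A player who truly pressed 50/50 with no pattern would leave the counters balanced and the machine stuck at chance, which is exactly why a hit rate north of 57% is telling.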

In a follow-up statistical analysis, [ex-punctis] notes that the script’s accuracy tended to increase with more key presses, with the exception of runs of 1000+ presses. Those outliers were thought to be users feeding the page a pseudo-random number generator rather than pressing keys by hand to achieve such high levels of engagement.

Some additional tests were done to see whether holding shorter or longer sequences in memory would give more accurate predictions. While shorter sequences should work in theory, the risk of players keeping a tally of their own recent presses meant the longer sequences were better at cutting through that bias.

There’s a lot of literature on behavioral models and framing effects for similar games if you’re interested in implementing your own experiments and tricking your friends into giving you some cash.

Sensor Filters For Coders

Anybody interested in building their own robot, sending spacecraft to the moon, or launching intercontinental ballistic missiles should have at least some basic filter options in their toolkit, otherwise the robot will likely wobble about erratically and the missile will miss its target.

What is a filter anyway? In practical terms, a filter should smooth out erratic sensor data with as little time lag, or ‘error lag’, as possible. In the case of the missile, it could travel nice and smoothly through the air but still miss its target because the positional data was processed ‘too late’. The simplest filter, which many of us will have already used, is to pause our code, take about 10 quick readings from our sensor, and then calculate the mean by summing them and dividing by 10. Incredibly simple and effective as long as our machine or process is not time sensitive: perfect for a weather station temperature sensor, although wind direction is slightly more complicated. A wind vane is actually an example of a good sensor giving ‘noisy’ readings: not that the sensor itself is noisy, but the wind is inherently gusty and constantly changing direction.
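In Python, that simplest-possible filter is only a few lines. The read_sensor() call below is a made-up stand-in for whatever your actual sensor driver provides:

```python
import random
import time

def read_sensor():
    """Hypothetical noisy temperature sensor; swap in your real reading code here."""
    return 20.0 + random.gauss(0, 0.5)

def averaged_reading(samples=10, delay=0.01):
    """Pause, take a quick burst of readings, and return their mean."""
    total = 0.0
    for _ in range(samples):
        total += read_sensor()
        time.sleep(delay)            # short gap between readings
    return total / samples

print(f"filtered temperature: {averaged_reading():.2f} °C")
```

The blocking sleep is exactly the weakness mentioned above: fine for a weather station, hopeless for anything that has to react in real time. And for wind direction you would want to average unit vectors rather than raw angles, since readings of 359° and 1° should not average out to 180°.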

It’s a really good idea to try and model our data on some kind of computer running software that will print out graphs – I chose the Raspberry Pi and installed Jupyter Notebook running Python 3.

The photo on the left shows my test rig. There’s a PT100 probe with its MAX31865 break-out board, a Dallas DS18B20, and a DHT22. The shield on the Pi is a GPS shield which is not currently used. If you don’t want the hassle of setting up these probes, there’s a Jupyter Notebook file that can also use the internal temperature sensor in the Raspberry Pi. It’s incredibly quick and easy to get up and running.

It’s quite interesting to see the performance of the different sensors, but I quickly ended up completely mangling the data from the DS18B20 by artificially adding randomly generated noise and some very nasty data spikes to really punish the filters as much as possible. Getting the temperature data to change rapidly was achieved by putting a small piece of frozen Bockwurst on top of the DS18B20 and then removing it again.
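If you don’t have a frozen sausage handy, you can mangle a clean trace in software instead. A quick sketch of the idea (the noise level, spike sizes, and temperature profile here are arbitrary, not the values used in the project):

```python
import numpy as np

rng = np.random.default_rng(42)

# A clean, made-up temperature trace: steady, a fast drop (the Bockwurst moment), steady again.
clean = np.concatenate([np.full(100, 22.0), np.linspace(22.0, 5.0, 50), np.full(100, 5.0)])

# Add random noise to every sample, then a handful of very nasty spikes.
noisy = clean + rng.normal(0.0, 0.3, clean.size)
spike_idx = rng.choice(clean.size, size=5, replace=False)
noisy[spike_idx] += rng.choice([-10.0, 10.0], size=5)
```

Feed noisy into each candidate filter and plot it against clean, and you can see at a glance how much lag and overshoot each one trades for its smoothing.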

Continue reading “Sensor Filters For Coders”

Fourier Explained: [3Blue1Brown] Style!

If you ask most people to explain the Fourier series they will tell you how you can decompose any particular wave into a sum of sine waves. We’ve used that explanation before ourselves, and it is not incorrect. In fact, it is how Fourier first worked out his famous series. However, it is only part of the story and master video maker [3Blue1Brown] explains the story in his usual entertaining and informative way. You can see the video below.

Paradoxically, [3Blue1Brown] asserts that it is easier to understand the series by thinking of functions with complex number outputs producing rotating vectors in a two-dimensional space. If you watch the video, you’ll see it is an easier way to work it out and it also lets you draw very cool pictures.
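To make the rotating-vector picture concrete, here is a small numerical sketch (our own toy example, not code from the video): treat a 2-D curve as a complex-valued function of time, estimate its Fourier coefficients, and rebuild it as a sum of vectors spinning at integer frequencies.

```python
import numpy as np

N = 1000
t = np.linspace(0.0, 1.0, N, endpoint=False)

# An arbitrary closed curve in the plane, written as a complex-valued function of time.
f = np.cos(2 * np.pi * t) + 0.3j * np.sin(6 * np.pi * t)

def coefficient(n):
    """c_n = integral of f(t) * exp(-2*pi*i*n*t) dt, approximated as a Riemann sum."""
    return np.mean(f * np.exp(-2j * np.pi * n * t))

# Rebuild the curve from a handful of rotating vectors, one per integer frequency.
harmonics = range(-10, 11)
approx = sum(coefficient(n) * np.exp(2j * np.pi * n * t) for n in harmonics)

print("max reconstruction error:", np.max(np.abs(f - approx)))
```

Each term c_n * exp(2πint) is a vector of fixed length rotating n times per cycle; chain enough of them tip to tail and they trace out the original drawing, which is where the very cool pictures come from.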

Continue reading “Fourier Explained: [3Blue1Brown] Style!”

Wolfram Engine Now Free… Sort Of

You’ve probably used Wolfram Alpha and maybe even used the company’s desktop software for high-powered math such as Mathematica. One of the interesting things about all of Wolfram’s mathematics software is that it shares a common core engine — the Wolfram Engine. As of this month, the company is allowing free use of the engine in software projects. The catch? It is only for preproduction use. If you are going into production you need a license, although a free open source project can apply for a free license. Naturally, Wolfram gets to decide what is production, although the actual license is pretty clear that non-commercial projects for personal use and approved open source projects can continue to use the free license. In addition, work you do for a school or large company may already be covered by a site license.

Given how comprehensive the engine is, this is reasonably generous. The engine even has access to the Wolfram Knowledgebase (with a free Basic subscription). If you don’t want to be connected, though, you don’t have to be. You just won’t be able to get live data. If you want to play with the engine, you can use the Wolfram Cloud Sandbox in which you can try some samples.

Continue reading “Wolfram Engine Now Free… Sort Of”

The Kalman Filter Exposed

If we are hiring someone such as a carpenter or an auto mechanic, we always look for two things: what kind of tools they have and what they do when things go wrong. For many types of embedded systems, one important tool that serious developers use is the Kalman filter. It is also something you use when things go “wrong.” [Carcano] recently posted a tutorial on Kalman filter equations that tries to demystify the topic. His example — a case of things going wrong — is when you have a robot that knows how far it is supposed to have moved and also has GPS coordinates for its position. Since the two probably don’t agree, you can consider that a problem with the system.

The obvious answer is to average the two positions. That’s fine if the error is small. But a Kalman filter is much more robust in more situations. [Carcano] does a good job of taking you through the math, but we will warn you it is plenty of math. If you don’t know what a Gaussian distribution is or the word covariance makes you think of sailboats, you are going to have to do some reading to get through the post.
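To give a flavor of what the math boils down to, here is a one-dimensional toy version (our own sketch, not [Carcano]’s code, and every variance and reading below is invented): the filter predicts where the robot should be from its motion command, then pulls that prediction toward the GPS reading by an amount set by how much each side is trusted.

```python
def kalman_step(x_est, p_est, u, q, z, r):
    """One predict/update cycle for a 1-D position.

    x_est, p_est : previous position estimate and its variance
    u, q         : commanded movement and process-noise variance
    z, r         : GPS measurement and measurement-noise variance
    """
    # Predict: apply the motion command; uncertainty grows.
    x_pred = x_est + u
    p_pred = p_est + q

    # Update: blend prediction and measurement via the Kalman gain.
    k = p_pred / (p_pred + r)          # gain -> 1 when the GPS is trusted, -> 0 when it is not
    x_new = x_pred + k * (z - x_pred)
    p_new = (1.0 - k) * p_pred
    return x_new, p_new

# Example: a robot told to move 1.0 m each step while its GPS readings disagree slightly.
x, p = 0.0, 1.0
for z in [1.2, 1.9, 3.1, 4.05]:
    x, p = kalman_step(x, p, u=1.0, q=0.05, z=z, r=0.5)
    print(f"estimate: {x:.2f} m (variance {p:.3f})")
```

Plain averaging is the special case where the gain is pinned at 0.5; the Kalman gain instead adapts every step as the uncertainties change, which is what makes the filter hold up in so many more situations.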

Continue reading “The Kalman Filter Exposed”

Understanding Math Rather Than Merely Learning It

There’s a line from the original Star Trek where Khan says, “Improve a mechanical device and you may double productivity, but improve man and you gain a thousandfold.” Joan Horvath and Rich Cameron have the same idea about improving education, particularly autodidacticism or self-learning. They share what they’ve learned about acquiring an intuitive understanding of difficult math at the Hackaday Superconference and you can watch the newly published video below.

The start of this was the pair’s collaboration on a book about 3D printing science projects. Joan has a traditional education from MIT and Rich is a self-taught guy. This gave them a unique perspective from both sides of the street. They started looking at calculus — a subject that scares a lot of people but is really integral (no pun intended) to a lot of serious science and engineering.

You probably know that Newton and Leibniz hit upon the fundamentals of calculus at about the same time. The original papers, however, were decidedly different. Newton’s approach was more physical and less mathematical, while Leibniz used formal logic and algebra. Although both share credit, the Leibniz notation won out and is what we use today.

Continue reading “Understanding Math Rather Than Merely Learning It”