Even with ten fingers at our disposal, math can be hard. Microprocessors, with the silicon equivalent of just two fingers, can have an even harder time with calculations, often taking multiple machine cycles to figure out something as simple as pi. And so 40 years ago, Intel decided to give its fledgling microprocessors a break by introducing the 8087 floating-point coprocessor.
If you’ve ever wondered what was going on inside the 8087, wonder no more. [Ken Shirriff] has decapped an 8087 to reveal its inner structure, which turns out to be closely related to its function. After a quick tour of the general layout of the die, including locating the microcode engine and ROM, and a quick review of the NMOS architecture of the four-decade-old technology, [Ken] dug into the meat of the coprocessor and the reason it could speed up certain floating-point calculations by up to 100-fold. A generous portion of the complex die is devoted to a ROM that does nothing but store constants needed for its calculation algorithms. By carefully examining the pattern of NMOS transistors in the ROM area and making some educated guesses, he was able to see the binary representation of constants such as pi and the square root of two. There’s also an extensive series of arctangent and log2 constants, used for the CORDIC algorithm, which reduces otherwise complex transcendental calculations to a few quick and easy bitwise shifts and adds.
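We couldn’t resist sketching the idea in a few lines of Python. This isn’t [Ken]’s code or the 8087’s microcode, just a minimal CORDIC demo (rotation mode) in which a table of arctangent constants plays the role of the constant ROM, and each iteration needs only a shift, an add, and a table lookup:

```python
# Minimal CORDIC sketch (rotation mode). ATAN_TABLE stands in for the
# arctangent constants [Ken] found hard-coded in the 8087's ROM.
import math

N = 32
ATAN_TABLE = [math.atan(2.0 ** -i) for i in range(N)]  # the "ROM" constants

K = 1.0  # cumulative CORDIC gain, pre-computed so no multiply is needed later
for i in range(N):
    K /= math.sqrt(1.0 + 2.0 ** (-2 * i))

def cordic_sin_cos(angle):
    """Return (sin, cos) of angle in radians (|angle| <= pi/2)."""
    x, y, z = K, 0.0, angle
    for i in range(N):
        d = 1.0 if z >= 0 else -1.0  # rotate toward zero residual angle
        # each multiply by 2**-i is just a right shift in fixed-point hardware
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * ATAN_TABLE[i]       # table lookup, no multiplication
    return y, x

print(cordic_sin_cos(math.pi / 6))  # ~ (0.5, 0.8660)
```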
[Ken] has popped the hood on a lot of chips before, finding butterflies in an op-amp and reverse-engineering a Sinclair scientific calculator. But there’s something about seeing constants hard-coded in silicon that really fascinates us.
The logo for the United States Postal Service is a mean-looking eagle. But a true fluid dynamics geek might look at it and realize that the eagle is moving so fast it’s causing a shock wave. But just how fast is it moving? [Andrew Higgins] asked and answered this question, posting his analysis of the logo’s supersonic travel. He claims it’s Mach 4.9, but how do we know? Science!
It turns out that if something is going fast enough, you can tell just how fast from a simple picture! We’ve all seen pictures of jets breaking the sound barrier, and images like those tell us about the jet’s speed.
How does it work?
Think about it like this: sound moves at roughly 330 m/s at sea level on Earth. An object moving through air creates disturbances that propagate outward as sound waves. If the object is moving faster than sound, it outruns those waves, and they pile up downstream, behind the moving object. How far behind they fall depends on the object’s speed.
This creates a line of these interactions known as a “Mach line.” Find the angle between the Mach line and the direction of travel and you have the “Mach angle” (denoted by α or µ).
There is a simple formula for determining the speed of an object using the Mach angle, the speed of sound (a), and an object’s velocity (v):
sin(µ) = a / v. The ratio of v to a is known as the Mach number (M). If an object is going exactly the speed of sound, it’s going Mach 1 (because v = a). Since the Mach number is v / a, we can rewrite the formula as sin(µ) = 1 / M, or M = 1 / sin(µ). Plug in [Andrew]’s measured Mach angle of ~11.7° from the image at the top of the article and you get M = 1 / sin(11.7°) ≈ 4.93.
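If you want to check the arithmetic yourself, the whole calculation fits in a few lines of Python:

```python
# Back out the Mach number from the Mach angle measured off the logo.
import math

mu = math.radians(11.7)  # [Andrew]'s measured Mach angle
M = 1.0 / math.sin(mu)   # sin(mu) = a/v, so M = v/a = 1/sin(mu)
print(f"Mach {M:.2f}")   # Mach 4.93
```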
The real question is, did the USPS choose Mach 4.93 as a hint at some secret government postal project? Or was it simply a 1993 logo designer’s attempt to “capture the ethos of a modern era which continues today”?
You may remember that I collect slide rules. If you don’t, it probably doesn’t surprise you. I have a large number of what I think of as normal slide rules. I also have the less common circular and cylindrical slide rules. But I recently picked up a real oddity that I had to share: the Smarty Cat. It isn’t exactly a slide rule, but it sort of is if you stretch the definition a bit.
Real Slide Rules
A regular slide rule takes advantage of the fact that you can multiply and divide by adding logarithms. Imagine having two rulers marked in inches or centimeters — it doesn’t matter (see the adjoining image). Suppose you want to add 5 and 3. You count off 5 marks on one ruler and line it up with the zero mark on the other ruler. Now you count off 3 marks on the second ruler, and that position on the first ruler will indicate the result. Here it lines up with the 8 mark, which is, of course, the correct answer.
That’s a simple addition. But if you can convert your numbers into logarithms, add the logarithms, and then convert the result back into a regular number, you can multiply.
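In code, the whole trick fits in a couple of lines; a slide rule simply does the base-10 version of this mechanically:

```python
# Multiplication by adding logarithms, the principle behind every slide rule.
import math

def slide_rule_multiply(x, y):
    """Multiply two positive numbers by adding their base-10 logarithms."""
    return 10 ** (math.log10(x) + math.log10(y))

print(slide_rule_multiply(5, 3))  # 15.000000000000002 -- close enough for a slide rule
```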
Continue reading “Hands-On: Smarty Cat Is Junior’s First Slide Rule”
We’ve often noted that had ancient man known binary, we could all count to 1023 on our fingers. We thought about that while watching [Numberphile’s] latest video about “Russian” multiplication (see below). Apparently, the method dates back quite a way and is sometimes known as Ethiopian or peasant multiplication. Even the ancient Egyptians did a form of it.
If you’ve ever written long multiplication code for a microcontroller, you can probably tell how this works. Each halving of the number amounts to a right shift. Each doubling is a left shift. Throwing out the rows with even numbers means you only keep the values where the least-significant bit is one. Booth’s algorithm is more efficient, but the “Russian” method is simple to do on paper.
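Here it is spelled out as shifts and adds in a short Python sketch of our own:

```python
def peasant_multiply(a, b):
    """'Russian' peasant multiplication: halve a, double b, sum the odd rows."""
    total = 0
    while a:
        if a & 1:    # least-significant bit is one: keep this row
            total += b
        a >>= 1      # halving is a right shift
        b <<= 1      # doubling is a left shift
    return total

print(peasant_multiply(13, 11))  # 143
```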
Continue reading “Hacking Multiplication: Binary Multiply On Paper”
You can do a lot of electronics without ever touching a tensor, but there are some situations in which tensors are absolutely essential. The problem is that most math texts give you a very dry description that is difficult to internalize. That’s where [The Science Asylum] comes in. Their recent video (see below) starts with the dry definition and then shows you what it means and why.
According to the video, the textbook definition is:
A rank-n tensor in m dimensions is a mathematical object that has n indices and mⁿ components and obeys certain transformational rules.
That sounds a lot like an array, but we are not sure what “certain transformational rules” really means to anyone.
Wikipedia does a little better:
[A]n algebraic object that describes a linear mapping from one set of algebraic objects to another.
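Neither definition is terribly visceral, so here’s a small NumPy sketch of our own (not from the video) showing those transformational rules in action. A rank-2 tensor in three dimensions has 3² = 9 components, and under a rotation R its components change as T′ = R T Rᵀ while coordinate-independent quantities like the trace stay put:

```python
# Rotating the components of a rank-2 tensor: the numbers change, the
# underlying object (and its invariants, like the trace) does not.
import numpy as np

theta = np.radians(30)
R = np.array([[np.cos(theta), -np.sin(theta), 0],
              [np.sin(theta),  np.cos(theta), 0],
              [0,              0,             1]])  # rotation about the z axis

T = np.diag([1.0, 2.0, 3.0])  # a rank-2 tensor, e.g. a simple stress tensor
T_prime = R @ T @ R.T         # the transformational rule for rank 2

print(np.trace(T), np.trace(T_prime))  # both ~6.0: an invariant of the tensor
```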
These constructs are key to anything electromagnetic (including antennas) and show up a lot in stress calculations and quantum mechanics. Even Einstein’s theory of relativity uses tensors.
Continue reading “Tensors Explained”
Everyone learns (and some readers may even remember) the quadratic formula. It’s a pillar of algebra and allows you to solve equations like Ax² + Bx + C = 0. But just because you’ve used it doesn’t mean you know how to come up with the formula itself. It’s a bear to derive, so the vast majority of us simply memorize the formula. A Carnegie Mellon mathematician named Po-Shen Loh didn’t expect to find a new way to derive the solution when he was reviewing math materials for middle school use to make them easier to understand. After all, people have been solving that equation for about 4,000 years. But that’s exactly what he did.
Before we look at the new solution, let’s talk about why you want to solve quadratic equations. They are used in many contexts. In ancient times, you might use them to determine how much more crop to grow to cover tax payments without eating into the crop you needed to subsist. In physics, they describe motion. There’s seemingly no end to the things a quadratic equation can model.
Babylonians, in particular, would solve simultaneous equations to find the roots of a quadratic. The Egyptians, Greeks, Indians, and Chinese used graphical methods to solve the equations. The entire history is a bit much to get into, but it’s still a great read. For this article, let’s dig into how the new derivation was discovered.
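While the full derivation is on the other side of the link, the core observation [Loh] has described publicly is easy to sketch: for a monic quadratic x² + Bx + C = 0, the two roots average to -B/2, so you can write them as -B/2 ± u and solve for u from the product of the roots. A quick Python sketch of that idea:

```python
# The sum-and-product observation behind the new derivation, as we
# understand it: the roots are -B/2 +/- u, and their product must equal C.
import cmath

def quadratic_roots(B, C):
    """Roots of x^2 + Bx + C = 0 (complex roots handled too)."""
    u = cmath.sqrt(B * B / 4 - C)  # (B/2)^2 - u^2 = C  =>  u^2 = B^2/4 - C
    return -B / 2 + u, -B / 2 - u

print(quadratic_roots(-3, 2))  # ((2+0j), (1+0j)): the roots of x^2 - 3x + 2
```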
Continue reading “The Quadratic Equation Solution A Few Thousand Years In The Making”
Many languages feature a random number generator library for help with tasks like rolling a die or flipping a coin. Why, you may ask, is this necessary when humans are perfectly capable of randomly coming up with values?
The data gathered from running the script against 200 pseudo-random inputs 100,000 times showed an approximately normal distribution of correct guesses (µ = 50%, σ = 3.5%). Against human players, the script guessed correctly more than 57% of the time, beyond the µ + 2σ mark of that random baseline. The result? Humans aren’t so good at being random after all.
It’s almost intuitive why this happens: finger presses tend to repeat certain patterns. The script keeps a database of all possible combinations of five presses, with a counter for each combination. Every time a key is pressed, the record of the latest five presses is updated and the counter for whichever combination those presses fall under is incremented. Based on this data, the script makes a prediction about the user’s next press.
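A minimal sketch of that idea (our own structure and names, not [ex-punctis]’s actual script) might look like this:

```python
# Guess the next press from a tally of what followed each 5-press history.
import random
from collections import defaultdict

counts = defaultdict(lambda: [0, 0])  # 5-press history -> [count of 0s, count of 1s]
history = []                          # the latest presses, at most five

def predict():
    """Guess the user's next press (0 or 1) from past behavior."""
    if len(history) < 5:
        return random.choice([0, 1])  # not enough data yet
    zeros, ones = counts[tuple(history)]
    return 0 if zeros >= ones else 1

def observe(press):
    """Record what the user actually pressed, then slide the window."""
    if len(history) == 5:
        counts[tuple(history)][press] += 1
        history.pop(0)
    history.append(press)
```

Each round, the script calls predict() before the player presses a key, then observe() afterward; it scores a win whenever the two match.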
In a follow-up statistical analysis, [ex-punctis] notes that the accuracy of the script tended to increase with more key presses, the exception being runs of 1000+ presses. Those runs were thought to come from a pseudo-random number generator rather than a human achieving such high levels of engagement with the script.
Some additional tests were done to see whether holding shorter or longer sequences in memory would yield more accurate predictions. While shorter sequences should work in theory, players can deliberately counter a short pattern by keeping a tally of their own presses, so the longer sequences were more likely to reduce that bias.
There’s a lot of literature on behavioral models and framing effects for similar games if you’re interested in implementing your own experiments and tricking your friends into giving you some cash.