Inventing The Microprocessor: The Intel 4004

We recently looked at the origins of the integrated circuit (IC) and the calculator, which was the IC’s first killer app. In a surprise twist, the calculator also played a big part in the invention of the next world-changing marvel: the microprocessor.

There is some dispute as to which company invented the microprocessor, and we’ll talk about that further down. But who invented the first commercially available microprocessor? That honor goes to Intel for the 4004.

Path To The 4004

Busicom calculator motherboard based on 4004 (center) and the calculator (right)

We pick up the tale with Robert Noyce, who had co-invented the IC while at Fairchild Semiconductor. In July 1968 he left Fairchild to co-found Intel for the purpose of manufacturing semiconductor memory chips.

While Intel was still a new startup living off its initial $3 million in financing, and before it had a semiconductor memory product to sell, it did what many startups do to survive: it took on custom work. In April 1969, the Japanese company Busicom hired Intel to do LSI (Large-Scale Integration) work for a family of calculators.

Busicom’s design, consisting of twelve interlinked chips, was a complicated one. For example, it included shift-register memory, a serial type of memory that complicates the control logic, and it performed its arithmetic in Binary Coded Decimal (BCD). Marcian Edward Hoff Jr., known as “Ted” and head of Intel’s Application Research Department, felt that the design was even more complicated than a general-purpose computer like the PDP-8, which had a fairly simple architecture. He doubted Intel could meet the cost targets, so Noyce gave Hoff the go-ahead to look for ways to simplify it.
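
BCD is quick to picture with a toy example (ours, nothing to do with the 4004’s actual circuitry): each decimal digit gets its own four-bit nibble, which is convenient for a calculator that has to display decimal digits, but it means the adders need decimal-correction logic instead of working in plain binary.

    # Each decimal digit becomes its own 4-bit nibble in BCD.
    def to_bcd(n: int) -> str:
        return " ".join(format(int(d), "04b") for d in str(n))

    print(to_bcd(1971))        # 0001 1001 0111 0001  (one nibble per digit)
    print(format(1971, "b"))   # 11110110011          (plain binary, for contrast)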

Continue reading “Inventing The Microprocessor: The Intel 4004”

Twitter RNG Is Powered By Memes

Twitter is kind of a crazy place. World leaders doing verbal battle, hashtags that rise and fall along with the social climate, and a never-ending barrage of cat pictures all make for a tumultuous stream of consciousness that runs 24/7. What exactly we’re supposed to do with this information is still up for debate, as Twitter has yet to turn it into a profitable service after over a decade of operation. Still, it’s a grand experiment that offers a rare glimpse into the human hive-mind for anyone brave enough to dive in.

One such explorer is a security researcher who goes by the handle [x0rz]. He’s recently unveiled an experimental new piece of software that grabs Tweets and uses them as “noise” to mix into the Linux urandom entropy pool. The end result is a relatively unpredictable and difficult-to-influence source of random data. While he cautions that his software is merely a proof of concept and not meant for high-security applications, it’s certainly an interesting approach to introducing humanity-derived chaos into the normally orderly world of your computer’s operating system.
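
The mixing half of the trick is the easy part on Linux: anything written to /dev/urandom gets stirred into the kernel’s pool without being credited as entropy, so even a lousy source can’t make things worse. Below is a minimal sketch of that idea in Python (ours, not [x0rz]’s actual code); actually fetching the Tweets is left out, since the real tool talks to Twitter’s sample API with proper credentials.

    # Hash a batch of Tweet text and stir the digest into the kernel pool.
    # Writing to /dev/urandom mixes the bytes in without crediting entropy.
    import hashlib

    def mix_tweets_into_pool(tweets):
        """tweets: any iterable of Tweet text, however you fetched it."""
        digest = hashlib.sha512("".join(tweets).encode("utf-8")).digest()
        with open("/dev/urandom", "wb") as pool:
            pool.write(digest)

    # Any stand-in source works for testing the plumbing.
    mix_tweets_into_pool(["hello from the firehose", "🐱" * 42])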

Noise sampling before and after being merged with urandom

This hack is made possible by the fact that Twitter offers a “sample” function in their API, which effectively throws a randomized collection of Tweets at anyone who requests it. There are some caveats here, such as the fact that if multiple clients request a sample at the same time they will all receive the same Tweets. It’s also worth mentioning that some characters are unusually likely to make an appearance due to the nature of Twitter (emoticons, octothorps, etc.), but generally speaking it’s not a terrible way to get some chaotic data on demand.

[x0rz] found this data on its own to be a good but not great source of entropy. After pulling a 500 KB sample, he found it had an entropy of 6.5519 bits per byte (a perfectly random source would measure 8). While the Tweets weren’t great on their own, combining the data with the kernel’s entropy pool at /dev/urandom produced something that looked a lot less predictable.
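
That bits-per-byte figure is just the Shannon entropy of the byte histogram, the same number a tool like ent reports. If you want to check a capture of your own, a few lines of Python will do; the filename below is a stand-in for whatever sample you pulled.

    import math
    from collections import Counter

    def bits_per_byte(data: bytes) -> float:
        # Shannon entropy of the byte frequency distribution: 8.0 means every
        # byte value is equally likely, 0.0 means the data never changes.
        counts = Counter(data)
        total = len(data)
        return -sum((n / total) * math.log2(n / total) for n in counts.values())

    with open("tweet_sample.bin", "rb") as f:  # hypothetical 500 KB capture
        print(f"{bits_per_byte(f.read()):.4f} bits per byte")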

The greatest weakness of using Twitter as a source of entropy is, of course, the nature of Twitter itself. A sufficiently popular hashtag on the rise might be just enough to sink your entropy. It’s even possible (though admittedly unlikely) that enough Twitter spam bots could ruin the sample. But if you’re at the point where you think hinging your entropy pool on a digital fire hose of memes and cat pictures is sufficient, you’re probably not securing any national secrets anyway.

(Editor’s note: Because of the way the Linux entropy pool mixes sources together, additional sources can only help, assuming they can’t see the current state of your entropy pool, which Twitter cats most certainly can’t. See the article below. Also, this is hilarious.)

We’ve covered some fantastic examples of true random number generators here at Hackaday, and if you’re looking for a good primer for the Kingdom of the Chaotic, check out the piece by our own [Elliot Williams].

Raspberry Pi Offers Soulless Work Oversight

If you’re like us, you spend more time than you care to admit staring at a computer screen. Whether it’s trying to find the right words for a blog post or troubleshooting some code, the end result is the same: an otherwise normally functioning human being is reduced to a slack-jawed zombie. Wouldn’t it be nice to be able to quantify just how much of your life is being wasted basking in the flickering glow of your monitor? Surely that wouldn’t be a crushingly depressing piece of information to have at the end of the week.

With the magic of modern technology, you need wonder no longer. Prolific hacker [dekuNukem] has created the aptly named “facepunch”, which allows you to “punch in” with nothing more than your face. Just sit down in front of your Raspberry Pi’s camera, and the numbers start ticking away. It’s like the little clock in the front of a taxi, except at the end you don’t have to pay anyone; you just have to come to terms with what your life has become. So that’s cool.

It doesn’t take much hardware to play along at home. All you need is a Raspberry Pi and the official camera accessory, though for the full effect you should add one of the displays supported by the Luma.OLED driver so you can see the minutes and hours ticking away in real time.

To get the facial recognition going, all you need to do is take a well-lit picture of your face and save it as a 400×400 JPEG. The Python 3 script takes care of the rest: checking the camera every few seconds to see if your beautiful mug is in the frame, and incrementing the counters accordingly.
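
If you’d rather roll your own than read the source, the whole loop fits in a dozen lines. Here’s a rough sketch of the same approach (not the actual facepunch code), assuming the popular face_recognition library and the picamera module for grabbing frames.

    import time
    import face_recognition
    from picamera import PiCamera

    # Your well-lit 400x400 reference photo; the filename is just an example.
    reference = face_recognition.load_image_file("my_face.jpg")
    reference_encoding = face_recognition.face_encodings(reference)[0]

    camera = PiCamera(resolution=(640, 480))
    CHECK_INTERVAL = 5          # seconds between checks
    seconds_at_screen = 0

    while True:
        camera.capture("/tmp/frame.jpg")
        frame = face_recognition.load_image_file("/tmp/frame.jpg")
        faces = face_recognition.face_encodings(frame)
        # If any face in the frame matches the reference, count this interval.
        if any(face_recognition.compare_faces([reference_encoding], f)[0] for f in faces):
            seconds_at_screen += CHECK_INTERVAL
            print(f"{seconds_at_screen // 60} minutes at the desk so far")
        time.sleep(CHECK_INTERVAL)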

Even if you’re not in the market for an Orwellian electronic supervisor, this project is a great example to get you started in the world of facial recognition. With a little luck, you’ll be weaponizing it in no time.

Spectre And Meltdown: How Cache Works

The year so far has been filled with news of Spectre and Meltdown. These exploits take advantage of features like speculative execution, using memory access timing as a side channel. What they have in common is that they both rely on the cache that all modern processors use to access memory faster. We’ve all heard of cache, but what exactly is it, and how does it allow our computers to run faster?

In the simplest terms, cache is fast memory. Computers have two storage systems: primary storage (RAM) and secondary storage (hard disk, SSD). From the processor’s point of view, loading data or instructions from RAM is slow: the CPU has to wait and do nothing for 100 cycles or more while the data is loaded. Loading from disk is even slower, wasting millions of cycles. Cache is a small amount of very fast memory used to hold commonly accessed data and instructions. This means the processor only has to wait for the cache to be loaded once; after that, the data is accessible with no waiting.

A common (though aging) analogy for cache uses books to represent data: If you needed a specific book to look up an important piece of information, you would first check the books on your desk (cache memory). If your book isn’t there, you’d then go to the books on your shelves (RAM). If that search turned up empty, you’d head over to the local library (Hard Drive) and check out the book. Once back home, you would keep the book on your desk for quick reference — not immediately return it to the library shelves. This is how cache reading works.
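
You can see the desk-versus-library difference from ordinary userspace code. The sketch below (ours, purely for illustration) sums the same array twice: row by row, which marches through memory sequentially and reuses every cache line it pulls in, and column by column, which jumps 32 KB between consecutive elements and wastes most of each line it fetches. On typical hardware the second pass is measurably slower, often several times so.

    import time
    import numpy as np

    N = 4096
    a = np.random.rand(N, N)   # C-order: each row is contiguous in memory

    def timed(label, work):
        start = time.perf_counter()
        work()
        print(f"{label}: {time.perf_counter() - start:.3f} s")

    # Row-by-row: one 64-byte cache line fill serves eight consecutive doubles.
    timed("row-major   ", lambda: sum(a[i, :].sum() for i in range(N)))

    # Column-by-column: consecutive accesses land 32 KB apart, so most of
    # every cache line that gets pulled in goes unused.
    timed("column-major", lambda: sum(a[:, j].sum() for j in range(N)))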

Continue reading “Spectre And Meltdown: How Cache Works”

Inexpensive Display Jumps To Life

If you’ve ever been to a local fair or amusement park, chances are you’ve seen an illusion known as Pepper’s Ghost. To perform the illusion, essentially all that’s needed is a thin sheet of plastic or one-way mirror and a light source. Get it right, and you’ll have apparitions popping up in all kinds of interesting places. With just the right software, though, one of those places could be in your own 3D display.

Using just a tablet and a sheet of plastic rolled into a cone, a three-person team was able to create a 3D display based on the Pepper’s Ghost illusion. The team’s custom software alters an image so that, when it reflects off the plastic cone, it appears as a 3D rendering of the original picture. The rendering is perspective-correct and offers a novel way to interact with a 3D model without needing expensive equipment or special glasses.

If you do have some fancy equipment sitting around, like a computer monitor and some plexiglass, similar 3D displays have been built using the same effect. The team that developed this one hasn’t opened up its code yet, but has promised to release it soon so that others can build their own displays.

Thanks to [bmsleight] for the tip!

Printed PC Speakers Are Way Cooler Than Yours

On the off chance you’re reading these words on an actual desktop computer (rather than a phone, tablet, smart mirror, game console…), stop and look at the speakers you have on either side of your monitor. Are you back now? OK, now look at the PC speakers and amplifier [Kris Slyka] recently built and realize you’ve been bested. Don’t feel bad, she’s got us beat as well.

The speaker and amplifier enclosures were painstakingly printed and assembled over the course of three months, and each piece was designed to be small enough to fit onto the roughly 4 in x 4 in bed of her Printrbot Play. While the limited print volume made the design considerably trickier, it did force [Kris] to adopt a modular design approach, which arguably made assembly (and potential future repairs or improvements) easier.

The amplifier is made up of rectangular “cells” that are connected to each other via 3 mm threaded rods. For now the amplifier only has 4 cells, but this could easily be expanded in the future without having to design and print a whole new case. Internally the amplifier uses two TDA8932 digital amplifier modules and some VU meters scored off of eBay.

Each speaker enclosure is made up of 10 individual printed parts that are then glued and screwed together to make the final shape, which [Kris] mentions was inspired by an audio installation at the Los Angeles County Museum of Art. They house 4″ Visaton FR 10 HM drivers, and are stuffed with insulation.

It’s a bit difficult to nail down the style that [Kris] has gone for here. You see the chunky controls and analog VU meters and want to call it retro, but it’s also a brass cog and sprocket away from being Steampunk. On the other hand, the shape of the speakers combined with the bamboo-filled PLA used to print them almost gives it an organic look: as if there’s a tree somewhere that grows these things. That’s actually a kind of terrifying thought, but you get the idea.

If your computer speakers were assembled by mere mortals, never fear. We’ve covered a number of interesting hacks and mods for more run-of-the-mill desktop audio setups which should hold you over until it’s time to harvest the speaker trees.

[via /r/3Dprinting]

Intel Rolls Out 49 Qubits

With a backdrop of security and stock trading news swirling, Intel’s [Brian Krzanich] opened the 2018 Consumer Electronics Show with a keynote looking ahead to future innovations. One of the bombshells: Tangle Lake, Intel’s 49-qubit superconducting quantum test chip. You can catch all of [Krzanich’s] keynote in replay, and there is a press release covering the details.

This puts Intel on the playing field with IBM, which claims a 50-qubit device, and Google, which planned to complete a 49-qubit device of its own. Intel’s previous test chip handled only 17 qubits. The term qubit refers to “quantum bits”, and the number of qubits is significant because experts think that at around 49 or 50 qubits, quantum computers become impractical to simulate with conventional computers, at least until someone comes up with better algorithms. Keep in mind that, in theory, a quantum computer with 49 qubits can represent about 563 trillion (2^49) states at one time. To put that in some apples-and-oranges perspective, your brain has fewer than 100 billion neurons.
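
The “impractical to simulate” threshold is mostly a storage problem, and some quick arithmetic (ours, for illustration) shows why: a brute-force simulation has to keep one complex amplitude per basis state, and at 49 qubits that is already petabytes of memory before you do any math on it.

    # Why ~50 qubits is painful to simulate directly: a full state vector
    # holds one complex amplitude per basis state.
    qubits = 49
    states = 2 ** qubits        # 562,949,953,421,312 basis states
    bytes_needed = states * 16  # 16 bytes per double-precision complex amplitude
    print(f"{states:,} states")
    print(f"~{bytes_needed / 2**50:.0f} PiB just to hold the state vector")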

Of course, the number of qubits isn’t the entire story. Error rates can make a larger number of qubits perform like fewer. Quantum computing is more statistical than conventional programming, so it is hard to draw parallels.

We’ve covered what quantum computing might mean for the future. If you want to experiment on a quantum computer yourself, IBM will let you play on a simulator and on real hardware. If nothing else, you might find the beginner’s guide informative.

Image credit: [Walden Kirsch]/Intel Corporation