Retrotechtacular: Better Living Through Nuclear Chemistry

The late 1950s were such an optimistic time in America. World War II had been over for a little more than a decade, the economy boomed thanks to pent-up demand after years of privation, and everyone was having babies — so many babies. The sky was the limit, especially with new technologies that promised a future filled with miracles, including abundant nuclear power that would be “too cheap to meter.”

It didn’t quite turn out that way, of course, but the whole “Atoms for Peace” thing did provide the foundation for a lot of innovations that we still benefit from to this day. This 1958 film on “The Armour Research Reactor” details the construction and operation of the world’s first privately owned research reactor. Built at the Illinois Institute of Technology by Atomics International, the reactor was a 50,000-watt aqueous-homogeneous design using a solution of uranyl sulfate in distilled water as its fuel. The core is tiny, about a foot in diameter, and assembled by hand right in front of the camera. The stainless steel sphere is filled with 90 feet (27 meters) of stainless tubing to circulate cooling water through the core. Machined graphite reflector blocks surround the core and its fuel overflow tank (!) before the reactor is installed in “biological shielding” made from super-dense iron ore concrete with walls 5 feet (1.5 m) thick — just a few of the many advanced safety precautions taken “to ensure completely safe operation in densely populated areas.”

While the reactor design is interesting enough, the control panels and instrumentation are what really caught our eye. The Fallout vibe is strong, including the fact that the controls are all right in the room with the reactor. This allows technicians equipped with their Cutie Pie meters to insert samples into irradiation tubes, some of which penetrate directly into the heart of the core, where neutron flux is highest. Experiments included the creation of radioactive organic compounds for polymer research, radiation hardening of those new-fangled transistors, and manufacturing radionuclides for the diagnosis and treatment of diseases.

This mid-century technological gem might look a little sketchy to modern eyes, but the Armour Research Reactor had a long career. It operated until 1967 and was decommissioned in 1972, and similar reactors were installed in universities and private facilities all over the world. Most of them are gone now, though, with only five aqueous-homogeneous reactors still operating today.

Continue reading “Retrotechtacular: Better Living Through Nuclear Chemistry”

Fictional Computers: EMERAC Was The Chatbot Of 1957

Movies mirror the time they were made. [ErnieTech] asserts that we can see what people thought about computers back in 1957 by watching the classic Spencer Tracy/Katharine Hepburn movie “Desk Set.” What’s more, he thinks this might be the first movie appearance of a human-like computer. On a side note, in the UK this movie was known as “The Other Woman.”

The story follows an MIT computer expert who is computerizing a broadcasting company and who, of course, finds romance and, at least towards the end, comedy.

Continue reading “Fictional Computers: EMERAC Was The Chatbot Of 1957”

Writing An OLED Display Driver In MicroZig

Although most people would use C, C++ or MicroPython for programming microcontrollers, there are a few more obscure options out there as well, with MicroZig being one of them. Recently [Andrew Conlin] wrote about using MicroZig with the Raspberry Pi RP2040 MCU, showing the process of writing and running an SSD1306 OLED display driver. MicroZig has since shipped a built-in driver, but the blog post still gives a good impression of what developing with MicroZig is like.

Zig is a programming language that seeks to improve on C, adding memory safety and safe pointers (via optional types) while keeping as much as possible of what makes C so useful for low-level development. The MicroZig project customizes Zig for use in embedded projects, targeting platforms including the Raspberry Pi MCUs and STM32. During [Andrew]’s usage of MicroZig it was less the language or supplied tooling that tripped him up, and more just the convoluted initialization of the SSD1306 controller, which is probably a good sign. The resulting project code can be found on his GitHub page.
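For a sense of what that initialization involves, here is a minimal sketch in MicroPython rather than Zig, just to keep it self-contained. The command bytes follow the SSD1306 datasheet; the I2C address, bus number, and pin assignments are assumptions for a Pico-style board, not values taken from [Andrew]’s driver:

```python
# Hypothetical SSD1306 bring-up over I2C in MicroPython; register values
# come from the SSD1306 datasheet, pins/address are board assumptions.
from machine import I2C, Pin

I2C_ADDR = 0x3C  # most SSD1306 modules; some are strapped to 0x3D
i2c = I2C(0, scl=Pin(1), sda=Pin(0), freq=400_000)

def cmd(c):
    # control byte 0x00 means "command" in the SSD1306 I2C protocol
    i2c.writeto(I2C_ADDR, bytes([0x00, c]))

# The "convoluted initialization" in question: all of this is needed
# before the panel will display anything at all.
for c in (0xAE,        # display off while configuring
          0xD5, 0x80,  # clock divide ratio / oscillator frequency
          0xA8, 0x3F,  # multiplex ratio: 64 rows
          0xD3, 0x00,  # no vertical display offset
          0x40,        # display start line 0
          0x8D, 0x14,  # enable the internal charge pump
          0x20, 0x00,  # horizontal addressing mode
          0xA1, 0xC8,  # flip column/row scan so (0,0) is top-left
          0xDA, 0x12,  # COM pin wiring for 128x64 panels
          0x81, 0xCF,  # contrast
          0xD9, 0xF1,  # pre-charge period
          0xDB, 0x40,  # VCOMH deselect level
          0xA4, 0xA6,  # show display RAM contents, non-inverted
          0xAF):       # display on
    cmd(c)
```

Marching through a sequence much like this, via MicroZig’s register abstractions instead, is the part that gave [Andrew] the most trouble.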

Expensive Camera, Cheap 3D-Printed Lens

If you’re a photography enthusiast, you probably own quite a few cameras, but the chances are your “good” one will have interchangeable lenses. Once you’ve exhausted the possibilities of the kit lens, you can try different focal lengths and effects, but you’ll soon find out that good glass isn’t cheap. Can you solve this problem by making your own lenses? [Billt] has done just that.

Given some CAD skills, it’s possible to replicate the mount of an existing lens, but he takes a shortcut by using a readily available camera cap project. Two lenses are detailed in the video; the first uses a plastic lens from a disposable camera, while the second takes one from a Holga toy camera. The disposable-camera lens is inserted mid-print, giving the colour aberrations and soft focus you’d expect, while the Holga lens is mounted on a slide for focusing. There may be some room for improvement there, but the result is a pair of fun lenses to experiment with, for not much outlay. Given the number of broken older cameras out there, it should be relatively easy for anyone who wants to try this to have a go.

The video is below the break, but while you’re on this path, take a look at a previous project using disposable camera lenses. Or, consider printing an entire camera.

Continue reading “Expensive Camera, Cheap 3D-Printed Lens”

Transceiver Reveals Unusual Components

[MSylvain59] likes to tear down old surplus, and in the video below, he takes apart a German transceiver known as a U-600M. From the outside, it looks like an unremarkable gray box; since it is designed to work with a remote unit, there’s very little on the outside other than connectors. Inside, though, there’s plenty to see and even a few surprises.

Inside is a neatly built RF circuit with clearly shielded compartments. In addition to a configurable power supply, the radio has modules that adapt it to different frequencies. One of the odder components is a large metal cylinder marked MF450-1900, which appears to be a mechanical filter. There are also a number of unusual parts, like dogbone capacitors and tons of trimmer capacitors.

The plug-in modules are especially dense and intriguing; several of the boards differ noticeably from one another. It is an interesting design from a time predating broadband digital synthesis techniques.

While this transceiver is stuffed with parts, it probably performs quite well. However, transceivers can be simple, and even more so if you throw in an SDR chip.

Continue reading “Transceiver Reveals Unusual Components”

Physical Computing Used To Be A Thing

In the early 2000s, the idea that you could write programs on microcontrollers that did things in the physical world, like run motors or light up LEDs, was kind of new. At the time, most people thought of coding as stuff that stayed on the screen, or in cyberspace. This idea of writing code for physical gadgets was uncommon enough that it had a buzzword of its own: “physical computing”.

You never hear much about “physical computing” these days, but that’s not because the concept went away. Rather, it’s probably because it’s almost become the norm. I realized this as Tom Nardi and I were talking on the podcast about a number of apparently different trends that all point in the same direction.

We started off talking about the early days of the Arduino revolution. Sure, folks had been building hobby projects around microcontrollers long before the Arduino, but the combination of a standardized board, a wide-ranging software library, and abundant examples to learn from brought embedded programming to a much wider audience. In particular, it brought this to an audience of beginners who were not only blinking an LED for the first time, but maybe even taking their first steps into coding. For many, the Arduino hello world was their coding hello world as well. These folks are “physical computing” natives.

Now, it’s to the point that when Arya goes to visit FOSDEM, an open-source software convention, there is hardware everywhere. Why? Because many successful software projects support open hardware, and many others run on it. People port their favorite programming languages to microcontroller platforms, and as these platforms become more powerful, the lines between the “big” computers and the “micro” ones start to blur.

And I think this is awesome. For one, it’s somehow more rewarding, when you’re just starting to learn to code, to see the letters you type cause something in the physical world to happen, even if it’s just blinking an LED. At the same time, everything has a microcontroller in it these days, and hacking on these devices is also another flavor of physical computing – there’s code in everything that you might think of as hardware. And with open licenses, everything being under version control, and more openness in open hardware than we’ve ever seen before, the open-source hardware world reflects the open-source software ethos.

Are we getting past the point where the hardware / software distinction is even worth making? And was “physical computing” just the buzzword for the final stages of blurring out those lines?

The Pentium Processor’s Innovative (and Complicated) Method Of Multiplying By Three, Fast

[Ken Shirriff] has been sharing a really low-level look at Intel’s Pentium (1993) processor. The Pentium’s architecture was highly innovative in many ways, and one of [Ken]’s most recent discoveries is that it contains a complex circuit of around 9,000 transistors whose sole purpose is to multiply by three. Why does such an apparently simple operation require such a complex circuit? And why this particular operation, and not something else?

Let’s back up a little to put this all into context. One of the feathers in the Pentium’s cap was its Floating Point Unit (FPU), which was capable of much faster floating point operations than any of its predecessors. [Ken] dove into reverse-engineering the FPU earlier this year, and a close-up look at the Pentium’s silicon die shows that the FPU occupies a significant chunk of it. Nearly half of the FPU is dedicated to performing multiplications, and a comparatively small but quite significant section of that is specifically for multiplying a number by three. [Ken] calls it the x3 circuit.

The “x3 circuit”, a nontrivial portion of the Pentium processor, is dedicated to multiplying a number by exactly three and contains more transistors than an entire Z80 microprocessor.

Why does the multiplier section of the FPU in the Pentium processor have such specialized (and complex) functionality for such an apparently simple operation? It comes down to how the Pentium multiplies numbers.

Multiplying two 64-bit numbers is done in base-8 (octal), which ultimately requires fewer operations than doing so in base-2 (binary). Instead of handling each bit separately (as in binary multiplication), three bits of the multiplier get handled at a time, requiring fewer shifts and additions overall. But the downside is that multiplying by three must be handled as a special case.

[Ken] gives an excellent explanation of exactly how all that works (which is also an explanation of the radix-8 Booth’s algorithm), but it boils down to this: there are numerous shortcuts for multiplying numbers (multiplying by two is the same as shifting left by 1 bit, for example), but multiplying by three is the only case that doesn’t have a tidy shortcut. In addition, because the result of multiplying by three is involved in numerous other shortcuts (x5 is really x8 minus x3, for example), it must also be done very quickly to avoid dragging down those other operations. Straightforward binary multiplication is too slow, hence the dedicated attention.
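To make that concrete, here is a rough Python model of radix-8 Booth recoding, a sketch of the algorithm [Ken] describes rather than Intel’s actual circuitry. Notice that the only multiple the recoded digits call for that isn’t a pure shift is x3:

```python
import random

def booth_radix8_digits(multiplier, bits):
    """Recode an unsigned multiplier into signed digits in -4..4.

    Each digit is built from three new bits plus one overlap bit
    from the previous group: d = -4*b2 + 2*b1 + b0 + prev.
    """
    digits = []
    prev = 0  # the implicit bit to the right of bit 0
    for i in range(0, bits, 3):
        b0 = (multiplier >> i) & 1
        b1 = (multiplier >> (i + 1)) & 1
        b2 = (multiplier >> (i + 2)) & 1
        digits.append(-4 * b2 + 2 * b1 + b0 + prev)
        prev = b2
    return digits

def booth_radix8_multiply(a, b, bits=64):
    # The one "hard" multiple needs a real addition; in the Pentium
    # this is the dedicated x3 circuit's job.
    x3 = a + (a << 1)
    # Every other multiple a digit can call for is just a shift.
    multiples = {0: 0, 1: a, 2: a << 1, 3: x3, 4: a << 2}
    product = 0
    for i, d in enumerate(booth_radix8_digits(b, bits + 3)):
        term = multiples[abs(d)]
        product += (term if d >= 0 else -term) << (3 * i)
    return product

# Self-check against Python's built-in big-integer multiply
for _ in range(1000):
    a, b = random.getrandbits(64), random.getrandbits(64)
    assert booth_radix8_multiply(a, b) == a * b
```

A 64-bit multiply then takes about 22 digit groups instead of 64 single-bit steps, which is exactly the trade described above: far fewer additions, in exchange for having x3 ready ahead of time.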

[Ken] goes into considerable detail on how exactly this is done, with carry-lookahead logic as a key element in saving time. He also points out that this specific piece of functionality used more transistors than an entire Z80 microprocessor. And if that is not a wild enough idea for you, then how about the fact that the Z80 has a new OS available?
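For flavor, here is a toy Python model of the general carry-lookahead idea, computing x3 as x + 2x with the carries resolved by parallel prefix logic instead of a ripple chain. It illustrates the technique in general, not the Pentium’s actual gate-level design:

```python
def cla_add(a, b, width=70):
    """Add two integers using generate/propagate carry lookahead.

    Carries are resolved in log2(width) prefix steps instead of
    rippling bit by bit, which is where the speed comes from.
    """
    mask = (1 << width) - 1
    g = a & b             # positions that generate a carry (1+1)
    p = a ^ b             # positions that propagate an incoming carry
    carries, prop = g, p
    step = 1
    while step < width:   # parallel-prefix merge of (generate, propagate)
        carries |= prop & (carries << step)
        prop &= prop << step
        step <<= 1
    return (p ^ (carries << 1)) & mask  # sum = propagate XOR carry-in

def times3(x):
    # x*3 = (x << 1) + x; resolving this addition's carry chain
    # quickly is precisely what the lookahead logic is for.
    return cla_add(x << 1, x)

assert all(times3(n) == 3 * n for n in range(100000))
```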