Vintage Computer Festival East Was A Retro Madhouse

The Vintage Computer Festival East took place last weekend at the InfoAge Science and History Museum in New Jersey, and by any metric you care to use, it was a phenomenal success. Everyone you spoke with, from the exhibitors and attendees to the veteran volunteers who put this incredible show together, said the same thing: they’d never seen a turnout like this before.

Of course, such success is not without cost. The exhibit rooms were so packed that moving through them was a challenge, the line to get food or browse the consignment area occasionally stretched outside the building, and at one point the event’s electronic payment system buckled under the pressure.

Some things are worth the wait.

Yet even the folks who waited the better part of an hour to rummage through boxes of dusty treasures, only to find themselves standing with armfuls of heavy gear they couldn’t pay for until the technical issues were resolved, couldn’t really complain. I should know; I was one of them. It would be like going to a concert and getting upset that the music was too loud — the event was advertised as a festival, and that’s exactly what it was.

No matter where you went, you’d find throngs of excited people eager to chat about the golden age of computing. So even if you were stuck in a long line, or had to step outside the exhibit area for some fresh air, you were always in excellent company. Seeing such a large and diverse crowd come out for what’s ultimately a niche event was exceptionally gratifying. At the end of the day, if the price we have to pay for this kind of community response is a few long lines and tight squeezes, it’s well worth it.

Each time I cover an event like this for Hackaday, I do so with the caveat that there’s really no substitute for being there in person. No matter how many articles you read and YouTube recaps you watch, you’ll never be able to see all the things you would have had you been able to walk the show floor yourself. It’s a bit like exploring the Moon or Mars: remotely controlled robots are capable of capturing terabytes of data and beaming it back to Earth, but even so, there’s the potential to learn so much more by putting boots on the ground.

The same is true of VCF East 2023 — what I bring you here is just the tip of the iceberg in terms of what was on display at this year’s event. On the other hand, you have the advantage of being able to peruse these images without having to stand in line. Is it worth the trade? Only you can be the judge of that. But for my money, I’ll gladly get back in line when VCF East 2024 rolls around.

Continue reading “Vintage Computer Festival East Was A Retro Madhouse”

PUF Away For Hardware Fingerprinting

Despite rigorous process controls in factories, anyone who has worked on hardware can tell you that parts may look identical but are not the same. Everything from silicon defects to microscopic variations in materials can cause profoundly head-scratching effects. Perhaps one particular unit heats up faster or locks up when executing a specific sequence of instructions, and we throw our hands up, saying it’s just a fact of life. But what if instead of rejecting differences that fall outside a narrow range, we could exploit those tiny differences?

This is where physically unclonable functions (PUFs) come in. A PUF is a bit of hardware that returns a value given an input, but each piece of hardware produces different results despite sharing the same design. This often relies on imperfections in the silicon microstructure. Even if you physically decap the device and inspect it, it would be incredibly difficult to reproduce the same imperfections exactly. PUFs should be like the ideal version of a fingerprint: unique and unforgeable.

Because they depend on manufacturing artifacts, there is a certain unpredictability, and deciding just what features to look at is crucial. The PUF needs to be deterministic and produce the same value for a given input. This means that temperature, age, power supply fluctuations, and radiation all introduce variation that the design needs to be hardened against. Several techniques, such as voting, error correction, or fuzzy extraction, are used, but each comes with trade-offs in power and space requirements. Many of the fluctuations, such as aging and temperature, are linear or well understood and can easily be compensated for.
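
To make the voting idea concrete, here is a minimal sketch of a toy “weak” PUF in Python. It isn’t any real vendor’s scheme, just an illustration with made-up parameters: each simulated device gets fixed per-bit biases standing in for its silicon quirks, every read is corrupted by noise, and majority voting over repeated reads recovers a stable key.

import random

NUM_BITS = 64    # size of the toy fingerprint
NUM_READS = 15   # odd number of reads, so majority voting can't tie
NOISE = 0.10     # chance that any single read of a bit flips (made-up figure)

def make_device(seed):
    # Fixed per-bit biases stand in for each chip's silicon quirks.
    rng = random.Random(seed)
    return [rng.random() for _ in range(NUM_BITS)]

def read_raw(device):
    # One noisy read: each bit leans toward its bias, with occasional flips.
    return [(1 if bias > 0.5 else 0) ^ (random.random() < NOISE) for bias in device]

def stable_key(device):
    # Majority-vote across several reads to suppress read-to-read noise.
    totals = [0] * NUM_BITS
    for _ in range(NUM_READS):
        for i, bit in enumerate(read_raw(device)):
            totals[i] += bit
    return [1 if t > NUM_READS // 2 else 0 for t in totals]

dev_a, dev_b = make_device(1), make_device(2)
print(stable_key(dev_a) == stable_key(dev_a))  # same device: same key (with very high probability)
print(stable_key(dev_a) == stable_key(dev_b))  # "identical" parts, different fingerprints

A real design would feed the voted bits through a fuzzy extractor or error-correcting code before using them as a key, rather than trusting the raw vote.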

Broadly speaking, there are two types of PUFs: weak and strong. Weak PUFs offer only a few responses and are focused on key generation. The key is then fed into more traditional cryptography, which means it needs to produce exactly the same output every time. Strong PUFs have an exponentially large space of challenge-response pairs and are used for authentication. While strong PUFs still use some error correction, a device might be queried fifty times and need to pass at least 95% of the queries to be considered authenticated, allowing for some error.

Continue reading “PUF Away For Hardware Fingerprinting”
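
For the authentication side, here is an equally hedged sketch of the flow described above. The query count and 95% threshold are just the example figures from the text, and the enrolled challenge-response table and device callbacks are hypothetical.

import random

QUERIES = 50
PASS_RATIO = 0.95  # the example threshold from the text

def authenticate(enrolled, respond):
    # `enrolled` maps challenge -> response recorded at enrollment;
    # `respond(challenge)` queries the (possibly noisy) device in the field.
    challenges = random.sample(list(enrolled), QUERIES)
    hits = sum(respond(c) == enrolled[c] for c in challenges)
    return hits >= QUERIES * PASS_RATIO

enrolled = {c: random.getrandbits(8) for c in range(1000)}
genuine = lambda c: enrolled[c] if random.random() < 0.99 else enrolled[c] ^ 1
clone = lambda c: random.getrandbits(8)  # an attacker guessing responses
print(authenticate(enrolled, genuine))  # almost always True
print(authenticate(enrolled, clone))    # essentially never True

A real verifier would typically also retire each challenge after use, since the scheme’s strength rests on an attacker never being able to enumerate the whole challenge space.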

Retro Gadgets: The 1983 Pocket Oscilloscope

In the 1980s, an oscilloscope was typically a bulky affair with a large CRT and a heavy power supply. So it probably grabbed a lot of attention in 1983 when Calvert Instruments Incorporated ran an ad in magazines like Radio Electronics. The ad touted a 5 MHz scope that was pocket-sized and weighed 4 ounces. The ad proudly proclaimed: “CRT oscilloscopes just became obsolete!”

Indeed they would, but if you are wondering who Calvert Instruments was, so are we. We have never heard of them before or since, and we don’t know for certain if any of these devices were ever actually produced. What did it use instead of a CRT? The CI Model 210 Pocket-O-Scope was not only solid state but used an LED screen 1.5 inches square. That’s small, but it packed in 210 LEDs for “high resolution.” We assume that was also the genesis of the model number. Judging from the product picture, there were 14 LEDs in the X direction and 15 in the Y direction. High resolution, for sure!

There were some early LCD scopes (like the Iskrascope and one from Scopex) around the same time, but it would be the 1990s before LCD oscilloscopes became common, and even longer before CRTs were totally squeezed out.

Continue reading “Retro Gadgets: The 1983 Pocket Oscilloscope”

Tinkercad Gets A Move On

Going to the movies is an experience. But how popular do you think they’d be if you went in, bought your popcorn, picked your seat, and the curtain rose on a large still photograph? Probably not a great business model. If a picture is worth 1,000 words, then a video is worth at least a million, and that’s why we thought it was awesome that Tinkercad now has a physics simulator built right in.

Look for this icon on the top right toolbar.

It all starts with your 3D model or models, of course. Then there’s an apple icon. (Like Newton, not like Steve Jobs.) Once you click it, you are in simulation mode. You can select objects and make them fixed or movable. You can change the material of each part, too, which varies its friction, density, and mass. There is a play button at the bottom. Press it, and you’ll see what happens. You can also share your simulation, and you have the option of exporting an MP4 video like the ones below.
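
Under the hood, a physics simulation boils down to repeatedly nudging every movable body according to the forces on it. The Python snippet below is emphatically not Tinkercad’s engine, just a minimal toy illustrating what the play button kicks off: fixed bodies stay put, movable ones get mass from an assumed density, and each step integrates gravity with a crude floor collision.

from dataclasses import dataclass

GRAVITY = -9.81  # m/s^2
DT = 0.01        # time step, seconds

@dataclass
class Body:
    z: float                 # height of the body's underside, metres
    velocity: float = 0.0
    fixed: bool = False      # fixed bodies ignore forces entirely
    density: float = 1000.0  # kg/m^3, stands in for the material choice
    volume: float = 0.001    # m^3

    @property
    def mass(self):
        # Only matters once collisions and friction are modelled; kept here to
        # mirror the way the material setting changes density and mass.
        return self.density * self.volume

def step(bodies):
    # Advance every movable body one time step, with a floor at z = 0.
    for b in bodies:
        if b.fixed:
            continue
        b.velocity += GRAVITY * DT
        b.z += b.velocity * DT
        if b.z < 0:          # crude collision: just stop at the floor
            b.z, b.velocity = 0.0, 0.0

bowl = Body(z=0.0, fixed=True)
ball = Body(z=0.5, density=7800.0)  # a steel-ish ball half a metre up
for _ in range(200):                # two simulated seconds
    step([bowl, ball])
print(ball.z)  # 0.0 once it has landed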

We, of course, couldn’t resist. We started with a half-sphere and made it larger. We also rotated it so the flat side was up. We then made a copy that would become the inside of our bowl. Using the ruler tool, we shaved about 2 mm off the length and width (X and Y) of the inner sphere. We also moved it 2 mm up without changing the size.

Using the alignment tools, we then centered the inner piece on the X and Y axes, changed it to a hole, and grouped the objects to form a simple bowl shape. Then we moved the workplane to a random spot on the inner surface of our bowl and dropped in a sphere. Nothing complicated.

Continue reading “Tinkercad Gets A Move On”

Tech In Plain Sight: Field Guide To Power Plugs

It is the bane of worldwide travel: there isn’t just one way to get AC power from the wall. The exact connector — and what you can expect when you plug in — differs from country to country. Even if you stay home, you must account for this if your designs go places and expect to plug into the wall. If you’ve ever looked at a universal adapter, it is full of prongs and pins like a metallic porcupine. Where do all those pins go?

Of course, there are some easy ways to sidestep the whole issue if you don’t need AC power. Much low-power gear now just provides a USB or barrel connector. Then you can use an area-appropriate adapter or charger to power your device. Batteries work, too. But if you need to plug in, you will run into other kinds of plugs.

Switching power supplies have helped. In the old days, many things expected either 125 V or 250 V and didn’t work with the other. Switching power supplies often allow a wide input range or have a switch to select one range or the other. These two voltages will cover almost any situation. If you have something that must have one voltage or the other, you’ll need a transformer — also called a converter — to step the voltage up or down. But most often, these days, you just need an adapter. There are slight variations. For example, some countries supply 100 V or 110 V, but that usually doesn’t make much difference. You also need to know whether your equipment cares if the AC is 50 Hz or 60 Hz.
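
That boils down to a simple rule of thumb, sketched below in Python. The function and its example figures are ours, not from any standard: if the destination’s nominal mains voltage falls inside your gear’s rated input range, a plug adapter is enough; otherwise you need a converter, and a 50/60 Hz mismatch gets its own check.

def travel_check(device_min_v, device_max_v, device_hz, mains_v, mains_hz):
    # device_min_v/device_max_v: the rated input range printed on the supply,
    # e.g. 100-240 V for most modern switching supplies.
    # device_hz: the set of line frequencies the gear accepts, e.g. {50, 60}.
    if not (device_min_v <= mains_v <= device_max_v):
        return "needs a step-up/step-down converter (transformer)"
    if mains_hz not in device_hz:
        return "voltage is fine, but check the 50/60 Hz rating"
    return "a simple plug adapter is enough"

# A typical 100-240 V laptop brick taken from the US (120 V, 60 Hz) to the UK (230 V, 50 Hz):
print(travel_check(100, 240, {50, 60}, 230, 50))  # adapter is enough
# A 120 V-only appliance plugged into the same socket:
print(travel_check(110, 125, {60}, 230, 50))      # needs a converter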

Most of the power sockets you’ll find around the world will fall into one of several categories. The categories range from A to N. Even among these, however, there are variations.

Type A

For example, the common type A plug and socket are what Americans call “two prong.” If you live in the US, you’ve probably noticed that the plug is polarized. That is, one pin is slightly wider than the other so the plug can only go in one way. The wide pin is connected to the circuit neutral. The maximum load for this connector is 15 A. It is difficult to find type A sockets anymore, other than on cheap extension cords or things like lamps that pass their electrical connections through to a second socket. Type B is far more common, and a type A plug will fit in a type B socket.

Continue reading “Tech In Plain Sight: Field Guide To Power Plugs”

Signed Distance Functions: Modeling In Math

What if instead of defining a mesh as a series of vertices and edges in a 3D space, you could describe it as a single function? The easiest function would return the signed distance to the closest point on the object’s surface (negative meaning the point is inside the object). That’s precisely what a signed distance function (SDF) is. A signed distance field (also SDF) is just a voxel grid where the SDF is sampled at each point on the grid. First, we’ll discuss SDFs in 2D and then jump to 3D.

SDFs in 2D

A signed distance function in 2D is more straightforward to reason about, so we’ll cover it first. Additionally, it is helpful for font rendering in specific scenarios. [Vassilis] of [Render Diagrams] has a beautiful demo on two-dimensional SDFs that covers the basics. The naive technique for rendering is to create a grid and calculate the distance at each point in the grid. If the distance is greater than the size of the grid cell, the pixel is not colored in. Negative values mean the pixel is colored in, as the center of the pixel is inside the shape. By increasing the resolution of the grid, you can get better approximations of the actual shape of the SDF.

So, why use this over a more traditional vector approach? The advantage is that the shape is represented by a single formula calculated at many points. Most modern computers are extraordinarily good at calculating the same thing thousands of times with slightly different parameters, often using the GPU. GLyphy is an SDF-based text renderer that uses OpenGL ES2 shaders, as discussed at Linux conf in 2014. FreeType even merged an SDF renderer written by [Anuj Verma] back in 2020.

Continue reading “Signed Distance Functions: Modeling In Math”
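
As a quick illustration of the naive grid-sampling approach described above, here is a minimal Python sketch: a signed distance function for a circle, sampled at the center of each cell of a coarse grid, with the cell filled in whenever the distance comes back negative.

import math

def circle_sdf(x, y, cx=0.0, cy=0.0, r=8.0):
    # Signed distance to a circle: negative inside, zero on the edge, positive outside.
    return math.hypot(x - cx, y - cy) - r

WIDTH, HEIGHT = 40, 20
for row in range(HEIGHT):
    y = row - HEIGHT / 2 + 0.5
    line = ""
    for col in range(WIDTH):
        x = (col - WIDTH / 2 + 0.5) / 2  # halve x so the characters look roughly square
        line += "#" if circle_sdf(x, y) < 0 else "."
    print(line)

Swapping in a different formula (a box, or a blend of two shapes) changes the picture without touching the sampling loop, which is exactly the appeal of the representation.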

The UK’s ST40 Spherical Tokamak Achieves Crucial Plasma Temperatures

As the race towards the first commercially viable nuclear fusion reactor heats up, the UK-based Tokamak Energy has published a paper on its recent achievements with its ST40 spherical tokamak. Most notable is the achievement of plasma temperatures of over 100 million Kelvin, which would put this fusion reactor firmly within the range for deuterium-tritium fusion at a rate that would lend credence to Tokamak Energy’s projection of building its first commercial fusion plants in the 2030s.

The ST40 is intended to provide the necessary data to construct the ST80-HTS by 2026, which itself would be a testing ground for the first commercial reactor, called the ST-E1, which would be rated at 200 MWe. Although this may seem ambitious, Tokamak Energy didn’t come out of nowhere: it is a spin-off of the Culham Centre for Fusion Energy (CCFE), the UK’s national laboratory for fusion research, which was founded in 1965 and has for decades been involved in spherical tokamak research projects like MAST and MAST-Upgrade, with STEP as its own design for a commercial fusion reactor.

The advantage offered by spherical tokamaks compared to regular tokamaks is that they favor a very compact construction style that puts the magnets very close to the plasma, making them more efficient at confining the plasma, with less power required to keep it stable. Although this makes superconducting electromagnets unnecessary, it comes at the cost of significantly higher wear and tear on those magnets. The upshot is that this type of tokamak can be much cheaper than alternative reactor types, even if it doesn’t scale as well.

Whether or not Tokamak Energy will be the first to achieve commercial nuclear fusion remains to be seen. So far Commonwealth Fusion’s SPARC and a whole host of Western and Asian fusion projects are vying for that gold medal.