Clock Runs Computer In Slow-Motion

At the heart of all computers is a clock, a dedicated timepiece ensuring that all of the parts of the computer stay synchronized and can work together to execute the instructions that the computer receives. Clocks in most modern off-the-shelf computers and smartphones tick billions of times per second, and even clocks that tick at a human-dizzying million times per second have been around since at least the 1970s. But there’s no reason a computer can’t run at a much slower speed, as [Greg] demonstrates in this video where he slows a 6502 processor down to a single clock cycle per second.

To reduce the clock speed from the megahertz range down to a single hertz, or one clock cycle per second, [Greg] uses the pendulum from an actual clock. He attaches a small magnet to the bottom of the pendulum, which a sensor picks up each time it swings past. Feeding that pulse through a monostable to condition it yields a clock signal usable by one of his 6502-based computers, and at this extremely slow rate it’s possible to watch a lot of the computer’s inner workings a step at a time. In fact, running this slowly let [Greg] spot some inefficiencies in the program he was running and optimize the computer’s operation.
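For a sense of what that pulse-shaping stage does, here is a minimal software stand-in rather than [Greg]’s actual circuit (he uses a hardware monostable): a hypothetical Hall-effect sensor input is debounced and stretched into a fixed-width clock pulse. Pin numbers and timings are assumptions.

```cpp
// Hypothetical software equivalent of the monostable pulse conditioner:
// a Hall sensor pulses once per pendulum swing, and we turn that into a
// single clean, fixed-width clock pulse. All pins and timings are assumed.
const int PIN_SENSOR = 2;               // Hall module output (active low, assumed)
const int PIN_CLK    = 3;               // goes to the CPU's clock input
const unsigned long PULSE_US   = 500;   // width of the generated clock pulse
const unsigned long LOCKOUT_MS = 300;   // ignore sensor chatter within one swing

unsigned long lastTrigger = 0;

void setup() {
  pinMode(PIN_SENSOR, INPUT_PULLUP);
  pinMode(PIN_CLK, OUTPUT);
}

void loop() {
  if (digitalRead(PIN_SENSOR) == LOW && millis() - lastTrigger > LOCKOUT_MS) {
    lastTrigger = millis();
    digitalWrite(PIN_CLK, HIGH);        // one clock pulse per pendulum swing
    delayMicroseconds(PULSE_US);
    digitalWrite(PIN_CLK, LOW);
  }
}
```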

It helps if your processor is static, of course. Older CPUs with dynamic storage for registers and some with limited-range PLLs would not work with this technique. The 8080A, for example, required a clock of at least 500 kHz.

Not only can this computer use a pendulum clock as the basis for its internal clock, but [Greg] also rigged up a mechanism to use a heartbeat. Getting in a little bit of exercise to increase his heart rate first will noticeably increase the computer’s speed. And, if you’re looking to get a deeper glimpse into the inner workings of a computer, we’d recommend looking at one which forgoes transistors in favor of relays.



Three Pitfalls In I2C Everyone Wishes Weren’t There

The best part of I2C is that it’s a bus available just about anywhere, covering a vast ecosystem of devices that offer it as a hardware-defined interface, while being uncomplicated enough that it can also be implemented purely in software on plain GPIO pins. Despite this popularity, I2C is one of those famously informal standards with a couple of popular implementations, leaving many of the details, such as exact timing and bus capacitance, to the poor sod doing the product development. Thus it is that we end up with articles such as a recent one on the tongue-twisting [pair of pared pears] blog, covering issues found while implementing an I2C slave.

As with any shared bus, multi-master or not, figuring out when the bus is clear is a fun topic, yet one which can cause endless headaches. One issue here comes from a feature that the SMBus version of I2C calls quick read/write, which allows a small amount of data to be transferred very rapidly. Depending on the data returned by the slave, however, it may appear to the master that nothing is happening, since SDA is held low by the slave until the stop condition, essentially locking the bus.
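For reference, the harmless half of this feature, the quick write, is essentially what every I2C bus scanner does: send nothing but the address byte and check for an ACK. Below is a minimal Arduino-style sketch of that idea; the quick read, with the R/W bit set to read, is the variant that can leave SDA pinned low as described above.

```cpp
#include <Wire.h>

// A quick-write-style probe: only the address byte goes out, and the ACK
// tells us whether a device is present. This is the classic bus-scanner trick.
bool quickWriteProbe(uint8_t address) {
  Wire.beginTransmission(address);       // start + address with R/W = write
  return Wire.endTransmission() == 0;    // 0 means the address was ACKed
}

void setup() {
  Serial.begin(115200);
  Wire.begin();
  for (uint8_t addr = 0x08; addr < 0x78; addr++) {   // skip reserved addresses
    if (quickWriteProbe(addr)) {
      Serial.print("Device ACKed at 0x");
      Serial.println(addr, HEX);
    }
  }
}

void loop() {}
```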

I2C hold times example.

Things get even more exciting with what logic analyzers love to dramatically flag as a ‘spurious start/stop condition’. Since a start or stop is nothing more than SDA changing while SCL is high, a data hold time that is too short can cause other devices on the bus to misinterpret an ordinary data transition as one. Here SMBus specifies a 300 ns hold time where I2C allows 0 seconds, and the suggested workaround is to not register a start/stop condition until 300 ns have passed. Essentially, implementing a hold time would seem to be the way forward until evidence to the contrary appears.
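As an illustration of where that hold time fits, here is a rough bit-banged sketch of the start and stop conditions with an explicit delay inserted. It assumes external pull-up resistors, open-drain style pin handling, and made-up pin numbers, and is not taken from the blog post itself.

```cpp
// Bit-banged start/stop conditions with a deliberate hold time.
// Assumes external pull-ups; lines are either driven low or released.
const int PIN_SDA = 4;
const int PIN_SCL = 5;

void releaseLine(int pin) { pinMode(pin, INPUT); }                         // pull-up takes it high
void pullLow(int pin)     { pinMode(pin, OUTPUT); digitalWrite(pin, LOW); }

void i2cStart() {
  releaseLine(PIN_SDA);
  releaseLine(PIN_SCL);
  delayMicroseconds(5);        // let both lines settle high
  pullLow(PIN_SDA);            // SDA falls while SCL is high: start condition
  delayMicroseconds(1);        // hold time, comfortably above the 300 ns figure
  pullLow(PIN_SCL);            // only now is SCL taken low
}

void i2cStop() {
  pullLow(PIN_SDA);
  releaseLine(PIN_SCL);
  delayMicroseconds(1);        // setup time before SDA is released
  releaseLine(PIN_SDA);        // SDA rises while SCL is high: stop condition
}

void setup() { i2cStart(); i2cStop(); }  // demonstration only
void loop() {}
```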

The third pitfall pertains to the higher-speed modes of I2C, including Fast-Mode (FM) and Fast-Mode Plus (FM+), where backward compatibility ranges from spotty to absent. Although FM+ (introduced by NXP in 2007) is supposed to be backward compatible with the slower speeds, in practice the timing requirements of FM+ and FM differ too much to compensate for, at least in the current versions of the standards. But one of the joys of I2C is that there’s always another set of revisions to look forward to.
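On the master side, at least, selecting between these modes is usually just a bus clock setting; whether everything else on the bus actually tolerates it is the part the standards leave you to discover. A minimal Arduino-style example:

```cpp
#include <Wire.h>

void setup() {
  Wire.begin();
  Wire.setClock(100000);     // Standard-Mode, 100 kHz: the safe default
  // Wire.setClock(400000);  // Fast-Mode, 400 kHz
  // Wire.setClock(1000000); // Fast-Mode Plus, 1 MHz, if master and slaves allow it
}

void loop() {}
```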

Presence Sensor Locks Computer When You Step Away

Having a computer that locks its screen after a few minutes of inactivity is always a good idea from a security standpoint, especially in offices where there is a lot of foot traffic. Even the five- or ten-minute activity timers that are set on most workstations aren’t really perfect solutions. While ideally in these situations we’d all be locking our screens manually when we get up, that doesn’t always happen. The only way to guarantee that this problem is solved is to use something like this automatic workstation locker.

The project is based around the LD2410 presence sensor, a small 24 GHz radar module with onboard signal processing that simplifies the detection of objects and motion. [Enzo] paired one of these modules with a Seeed Studio XIAO nRF52840 development board to listen to the radar module and send the screen lock keyboard shortcut to the computer when it detects that the user has walked away from the machine. The only thing [Enzo] still wants to add is a blinking LED to warn the user that the device is about to time out, so it doesn’t accidentally lock the machine when it isn’t needed.
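The general idea can be sketched with the stock Arduino Keyboard library on a board with native USB; this assumes the LD2410’s digital presence output is wired to a GPIO and uses the Windows lock shortcut, so it’s an illustration of the approach rather than [Enzo]’s actual firmware.

```cpp
#include <Keyboard.h>

// Illustration only: lock the PC when the presence output (assumed wired to
// PIN_PRESENCE, high = someone is there) has been low for a while.
const int PIN_PRESENCE = 2;                    // assumed wiring
const unsigned long ABSENCE_TIMEOUT_MS = 10000;

unsigned long lastSeen = 0;
bool locked = false;

void setup() {
  pinMode(PIN_PRESENCE, INPUT);
  Keyboard.begin();
}

void loop() {
  if (digitalRead(PIN_PRESENCE) == HIGH) {
    lastSeen = millis();
    locked = false;                            // user is back; arm the lock again
  } else if (!locked && millis() - lastSeen > ABSENCE_TIMEOUT_MS) {
    Keyboard.press(KEY_LEFT_GUI);              // Win+L locks a Windows machine
    Keyboard.press('l');
    Keyboard.releaseAll();
    locked = true;
  }
}
```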

One part of this build that’s glossed over a bit is that plenty of microcontroller platforms can send keystrokes to a computer even though they aren’t actual USB keyboards. Even the Arduino Uno can do this, so by now the capability is fairly platform-agnostic. Still, you can use it to your advantage if you have the opposite problem from [Enzo] and need your computer to stay logged in no matter what.

Much Better VGA From An ESP32

The ESP32 series from Espressif has been a successful line of products, offering a powerful microcontroller with on-chip wireless networking. There’s a snag, though, in their practice of calling all of them ESP32s despite wildly varying specifications and even different processor cores, such that it’s easy to lose track of exactly what the chip in front of you can do. [Bitluni] was faced with updating his VGA library to include a newer variant, and was pleasantly surprised to find that it includes a far more capable display peripheral which enables significantly higher resolutions than previously.

The part in question is the ESP32-S3, a version of the chip with the dual Xtensa cores we’re familiar with from earlier versions, plus the interesting addition of an LCD controller. His previous VGA effort on the ESP32 used the I2S peripheral and sacrificed some of the available bits to create sync pulses, while this version is not only faster but also includes dedicated sync hardware. He can now manage up to 16-bit colour at resolutions as high as 1024×768, as can be seen in the video below the break, though this feat requires a slightly out-of-spec frame rate that only works on some screens. It’s by no means perfect, because the peripheral is intended for LCDs rather than VGA, but it’s pushing microcontroller VGA to new heights and we look forward to any other uses people will put it to.
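To see why the frame rate ends up slightly out of spec, it helps to run the standard VESA numbers for 1024×768 at 60 Hz: if the LCD peripheral can’t generate exactly the required pixel clock, the refresh rate drifts with it. A quick back-of-the-envelope calculation (the 64 MHz figure is just an assumed example of a nearby achievable clock):

```cpp
#include <cstdio>
#include <cstdint>

int main() {
  // VESA 1024x768@60: total line and frame sizes include porches and sync.
  const uint32_t hTotal = 1344;   // 1024 active pixels plus blanking
  const uint32_t vTotal = 806;    // 768 active lines plus blanking
  const double   targetRefresh = 60.0;

  const double requiredPixelClock = hTotal * vTotal * targetRefresh;  // ~65 MHz
  printf("Required pixel clock: %.2f MHz\n", requiredPixelClock / 1e6);

  // If the peripheral can only hit a nearby clock, the refresh rate shifts:
  const double actualPixelClock = 64.0e6;                             // assumed example
  printf("Refresh at 64 MHz: %.2f Hz\n", actualPixelClock / (hTotal * vTotal));
  return 0;
}
```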

We covered the original Bitluni ESP32 VGA library when it first appeared.


An ESP In Your Mini TV

When miniature LCD TVs arrived on the market they were an object of desire, far from the reach of tech-obsessed youngsters. Now in the age of smartphones they’re a historical curiosity, but with the onward march of technology you can have one for not a lot. [Taylor Galbraith] shows us how, with an ESP32 and an LCD we rather like because of its CRT-like rounded corners.

What he’s created is essentially a small media player, but perhaps what makes it of further interest is its migration from a mess of wires on a breadboard to a rather nice PCB. He’s not released the board files at the time of writing, but since the software can all be found in the GitHub repository linked above, we live in hope. On it are not only the ESP and the screen, but also a battery management board, an audio amplifier, and a small speaker. For now it’s a bare board, but we hope he’ll complete it with a neatly designed case for either a pocket player or a retro-styled mini TV. Until then you can see his progress in the videos below the break.

If you’re after more ESP32 media player inspiration, this isn’t the first retro-themed media player we’ve brought you.


How Hardware Testing Got Plugged Into A Continuous Integration Framework

The concept of Continuous Integration (CI) is a powerful tool in software development, and it’s not every day we get a look at how someone integrated automated hardware testing into their system. [Michael Orenstein] brought to our attention the Hardware CI Arena, a framework for doing exactly that across a variety of host OSes and microcontroller architectures.

The Hardware CI Arena allows testing software across a variety of hardware boards such as Arduino, RP2040, ESP32, and more.

Here’s the reason it exists: while in theory every OS and piece of hardware implements things like USB communications and device discovery in the same way, in practice that is not always the case. For individual projects, the edge cases (or even occasional bugs) are not much of a problem. But when one is developing a software product that aims to work seamlessly across different hardware options, such things get in the way. To provide a reliable experience, one must find and address edge cases.

The Hardware CI Arena (GitHub repository) was created to allow automated testing to be done across a variety of common OS and hardware configurations. It does this by allowing software-controlled interactions with a bank of actual, physical hardware options. It’s purpose-built for a specific need, but the level of detail and frank discussion of the issues involved is an interesting look at what it took to get this kind of thing up and running.
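For a flavour of what an automated hardware check inside a CI job can look like, here’s a generic sketch, not code from the Hardware CI Arena repository: a minimal POSIX serial smoke test that pings a board and fails the job with a non-zero exit code if no reply arrives. The device path, baud rate, and “PING” protocol are all assumptions for illustration.

```cpp
// Generic example of a CI-friendly hardware smoke test: open a serial port,
// send a ping, and expect any reply within a timeout. Exit code 0 = pass.
#include <cstdio>
#include <cstring>
#include <fcntl.h>
#include <termios.h>
#include <unistd.h>

int main() {
  const char *port = "/dev/ttyACM0";          // assumed device path
  int fd = open(port, O_RDWR | O_NOCTTY);
  if (fd < 0) { perror("open"); return 1; }

  termios tty{};
  if (tcgetattr(fd, &tty) != 0) { perror("tcgetattr"); return 1; }
  cfmakeraw(&tty);
  cfsetispeed(&tty, B115200);
  cfsetospeed(&tty, B115200);
  tty.c_cc[VMIN]  = 0;                        // non-blocking read...
  tty.c_cc[VTIME] = 20;                       // ...with a 2 second timeout
  if (tcsetattr(fd, TCSANOW, &tty) != 0) { perror("tcsetattr"); return 1; }

  const char ping[] = "PING\n";
  write(fd, ping, strlen(ping));

  char buf[64];
  ssize_t n = read(fd, buf, sizeof(buf) - 1);
  close(fd);

  if (n > 0) { printf("Board replied, test passed\n"); return 0; }
  fprintf(stderr, "No reply from %s, test failed\n", port);
  return 1;
}
```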

The value of automatic hardware testing with custom rigs is familiar ground to anyone who develops hardware, but tying that idea into a testing and CI framework for a software product expands the idea in a useful way. When it comes to identifying problems, earlier is always better.

Dual Channel POV Display Also Has Nixie Tubes

What’s a tachyscope? According to [Daniel Ross], it is an animated display from an alternate timeline circa 1880. The real ones, of course, didn’t have LEDs and microcontrollers. The control unit looks like an old-timey radio, complete with Nixie tubes. The spinning part has blue and white LEDs, each accepting data from one of two serial ports. You can select to see data from one port, the other, or both. You can see the amazing contraption in the video below.

The LEDs are surface mounted and placed inside a glass test tube. Each display has its own processor. The project appears to have a PCB, but it is just a piece of fiberglass with a color print on top and holes drilled with a rotary tool; the board has no actual conductors, and everything is point-to-point wiring. The base of the unit is old cookware. The slip ring is pretty interesting, too: it uses an old video tape head, cut-up D-cell batteries, and contacts from a relay.

You might remember [Daniel] from his steampunk Victorian computer project, including a punk teletype and a magic eye tube. If you want some theory on these kinds of displays, we can help. If you just want a simple display, it doesn’t have to cost much.
