A Real-World Experience In PCB Dye-Sub Printing

We all love PCB artwork, but those who create it work with a limited color palette. If it’s not some combination of board, plating, solder mask, and silk screen, then it can’t easily be rendered on a conventional PCB. That’s not the end of the story though, because it’s technically possible to print onto a PCB in any color you like. Is it difficult? Read [Spencer]’s account of creating a rainbow Pride version of his RC2014 modular retrocomputer.

Dye-sublimation printing uses an ink that vaporizes at atmospheric pressure without passing through a liquid phase: the solid ink is heated, and the vapor condenses back to a solid on the surface to be printed. Commercial dye-sub printers are expensive, but there’s a cheaper route in the form of an Epson printer that can be converted. The converted printer prints onto a transfer paper, from which the ink is transferred to the PCB in a T-shirt printing press.

[Spencer] took the advice to have his boards made with an all-white silkscreen, and has come up with a good process for creating the colored boards. There is still an issue with discoloration from extra heat during soldering, so the kit’s instructions advise taking extra care. It remains a fascinating look at the process, and raises the important point that it’s now within the reach of a hackerspace.

Regular readers will know we’ve long held an interest in the manufacture of artistic PCBs.

C++17’s Useful Features For Embedded Systems

Although the world of embedded software development languages seems to span everything from ASM and C89 all the way to MicroPython, there is a lot to be said for a happy medium between ease of development and features that make the software more robust without adding overhead or bloat to the final firmware image.

This is where C++ has many objective advantages over even C99, and as [Çağlayan Dökme] argues in a recent blog post, C++17 adds many developer creature comforts to C++98 and the more recent C++11 and C++14 standards.

First stepping back a generation (technically two, with C++20 also being a thing already), the addition of binary literals (e.g. 0b1010'1100) in C++14 is addressed, along with the expanded use of constexpr, the latter foreshadowing C++17’s increased focus on compile-time optimizations. A new C++17 attribute that is part of this is [[nodiscard]], which, when added before the return type of a function or method, requires the return value to be used in some manner, much like with functions in Ada (contrasted with procedures).
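To make that concrete, here’s a minimal sketch of both features together (the register mask and write_register() are hypothetical, invented purely for illustration):

```cpp
#include <cstdint>

// Hypothetical status-register mask: C++14 binary literals plus digit
// separators make bit patterns far easier to read than hex or decimal.
constexpr std::uint8_t kStatusMask = 0b1010'1100;

enum class ErrorCode { Ok, Fail };

// [[nodiscard]] (C++17): the compiler warns if a caller silently drops
// the returned error code, much like an Ada function versus a procedure.
[[nodiscard]] ErrorCode write_register(std::uint8_t value) {
    return value != 0 ? ErrorCode::Ok : ErrorCode::Fail;
}

int main() {
    write_register(kStatusMask);  // warning: [[nodiscard]] value ignored
    if (write_register(0b0000'0001) != ErrorCode::Ok) {
        // handle the error here
    }
}
```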

As [Çağlayan] notes, the biggest strength of compile-time checks is that they can save a lot of deploy-test-fix round-trips, with the total number of issues caught after deployment that could have been caught during compilation ideally being zero. Here C++17 streamlines the static_assert() mechanism and adds if constexpr to instantiate code depending on compile-time conditions. Beyond compile-time checks there are a few other niceties, such as C++17 guaranteeing copy elision (return value optimization) when an object is returned directly, which is a welcome feature in hard real-time environments.
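A quick sketch of what that buys you (the types and functions here are our own, for illustration only):

```cpp
#include <cstddef>
#include <cstdint>
#include <cstring>

// C++17 drops the mandatory message argument from static_assert().
static_assert(sizeof(std::uint32_t) == 4);

// if constexpr discards the untaken branch at compile time, so each
// instantiation carries only the code path it actually needs.
template <typename T>
std::uint32_t to_word(const T& value) {
    if constexpr (sizeof(T) <= sizeof(std::uint32_t)) {
        std::uint32_t out = 0;
        std::memcpy(&out, &value, sizeof(T));  // small types: direct copy
        return out;
    } else {
        const auto* p = reinterpret_cast<const unsigned char*>(&value);
        std::uint32_t sum = 0;
        for (std::size_t i = 0; i < sizeof(T); ++i) sum += p[i];  // fold
        return sum;
    }
}

struct Telemetry { std::uint8_t flags; std::uint32_t samples[16]; };

// Guaranteed copy elision (C++17): this Telemetry is constructed
// directly in the caller's storage, with no copy or move, which keeps
// worst-case timing predictable in hard real-time code.
Telemetry make_telemetry() { return Telemetry{}; }
```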

With even today’s MCUs having enough grunt to run multi-threaded applications, and firmware potentially compiled from a many-thousand-LoC codebase, picking a programming language that assists the developer with such an arduous task is very important. Ada remains the primary choice for high-reliability embedded platforms, but C++ along with C enjoys the most widespread (free) compiler support. Even if C++ isn’t supported on every single MCU out there (mostly 8051-based and PIC MCUs missing out), whenever it is an option it’s a pretty solid choice, especially with knowledge of these new language features.

The Most Ornate Birdbath You’ve Ever Seen

When one thinks of art, a birdbath may not be the first thing that comes to mind. However, there is no denying that La Fontaine aux Oiseaux (The Bird Fountain) is a true work of art. This automaton, created by [François Junod] in collaboration with 20 different workshops and craftsmen, represents thousands of hours of work and boasts a complex beauty that is both visible and hidden.

Commissioned by the Van Cleef & Arpels jewelry company, this purely mechanical display piece features a pair of jewel-encrusted birds that perform a little routine around the edge of the bath every hour. All the birds’ appendages move while bird song is added with the help of a whistle and bellows. The “water” is also mechanized, with a series of metal plates moving together to create ripple effects, while a water lily opens and closes and a dragonfly flutters above the surface.

The overall effect of this ridiculously over-the-top mechanical art piece is absolutely mesmerizing. Even if the bejeweled exterior isn’t quite your style, you can still appreciate its intricate workings thanks to the video after the break, which gives us a peek at the development.

We’ve featured some of [François]’ other work before, which is equally impressive and displays the mechanics in all their glory. If you want to try your hand at making automatons, 3D printing is the perfect way to get started.


An Almost Invisible Desktop

When you’re putting together a computer workstation, what would you say is the cleanest setup? Wireless mouse and keyboard? Super-discreet cable management? How about no visible keeb, no visible mouse, and no obvious display?

That’s what [Basically Homeless] was going for. Utilizing a Flexispot E7 electronically raisable standing desk, an ASUS laptop, and some other off-the-shelf parts, this project is taking the idea of decluttering to the extreme, with no visible peripherals and no visible wires.

There was clearly a lot of learning and much painful experimentation involved, and he kind of glossed over how the keyboard was embedded in the desk surface. A thin layer of resin was formed in-plane with the desk surface, the keyboard was mounted just below it, and lots of careful fettling of the openings meant the keys could still be depressed. Since they don’t stand proud of the surface, the keys are practically invisible once painted. After all, you need that tactile feedback, and a projection keeb just isn’t right.

ChatGPT-inspired machine learning mouse emulator

Moving on, never mind an ultralight gaming mouse, how about a zero-gram mouse? Well, this is a bit of a cheat, as he mounted a depth-sensing camera inside a light fitting above the desk, and built a ChatGPT-designed machine-learning model to act as a hand-tracking HID device. Nice idea, but we don’t see the code.
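Since the code isn’t public, we can only guess at its shape; here’s a purely speculative sketch of the final step such a model would need, mapping a normalized hand position (wherever the tracker reports it) onto the relative deltas a HID mouse report carries:

```cpp
#include <algorithm>
#include <cstdint>

// Purely speculative sketch -- the project's code isn't published. It
// maps a hand position from some tracker, normalized to [0,1] across
// the desk, onto the relative deltas a HID mouse report carries.
struct MouseReport { std::int8_t dx, dy; };

struct HandToMouse {
    double last_x = 0.5, last_y = 0.5;
    double gain = 1200.0;  // counts per full desk width (tuning value)

    MouseReport update(double hand_x, double hand_y) {
        const double dx = (hand_x - last_x) * gain;
        const double dy = (hand_y - last_y) * gain;
        last_x = hand_x;
        last_y = hand_y;
        const auto clamp8 = [](double v) {
            return static_cast<std::int8_t>(std::clamp(v, -127.0, 127.0));
        };
        return {clamp8(dx), clamp8(dy)};  // one report per camera frame
    }
};
```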

The laptop chassis had its display removed and was embedded into the bottom of the desk, along with the supporting power supplies, a couple of fans, and a projector. To create a ‘floating’ display, a piece of transparent plastic was treated to a coating of Lux labs “ClearBright” transparent display film, which allows the image from the projector to be scattered and observed with sufficient clarity to be usable as a PC display. We have to admit, it looks a bit gimmicky, but playing Minecraft on this setup looks a whole lotta fun.

Many of the floating displays we’ve covered tend to be for clocks (after all, timepieces are important), like this sweet HUD hack.


The Glitch That Brought Down Japan’s Lunar Lander

When a computer crashes, it usually doesn’t leave debris. But when a computer happens to be descending towards the lunar surface and glitches out, that’s a very different story. Turns out that’s what happened on April 26th, as the Japanese Hakuto-R lunar lander made its mark on the Moon…by crashing into it. [Scott Manley] dove in to try and understand the software bug that caused an otherwise flawless mission to go splat.

The lander began the descent sequence as expected at 100 km above the surface. However, as it descended, the altitude sensor reported an altitude much lower than the true one: the craft believed it was at zero altitude while still some 5 km above the surface. Confused by the fact that it hadn’t yet detected physical contact with the surface, it continued to slowly descend until it ran out of fuel and plunged to the ground.

Ultimately it all came down to sensor fusion. The lander merges several noisy sensors, such as accelerometers, gyroscopes, and radar, into one cohesive source of truth. The craft passed over a particularly large cliff that caused the radar altimeter reading to suddenly jump up by 3 km. Like good filtering software should, the craft reasoned that the sensor must be producing spurious data and filtered it out. From then on it was estimating its altitude purely from its acceleration. As anyone who has tried to track an object through space using gyros and accelerometers alone can attest, errors accumulate, and suddenly you’re not where you think you are.
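To illustrate the failure mode (and only that; this is a toy, not ispace’s flight software), consider an estimator that gates out radar readings that disagree too strongly with its own prediction:

```cpp
#include <cmath>
#include <cstdio>

// Toy altitude estimator illustrating the failure mode above. This is
// NOT the lander's flight software, just a sketch of how gating out
// "implausible" radar readings leaves only integrated acceleration,
// whose error grows without bound.
struct AltitudeEstimator {
    double altitude_m;    // current best estimate
    double velocity_mps;  // vertical velocity estimate (negative = down)
    double gate_m;        // reject radar readings further off than this

    // Dead reckoning from the IMU: any accelerometer bias gets
    // integrated twice, so altitude error grows with time squared.
    void predict(double accel_mps2, double dt_s) {
        velocity_mps += accel_mps2 * dt_s;
        altitude_m += velocity_mps * dt_s;
    }

    // Innovation gating: a radar reading that disagrees too strongly
    // with the prediction is treated as spurious and discarded. Once
    // the craft crossed the cliff rim, every honest reading failed
    // this test, and the IMU-only estimate quietly drifted away.
    void correct(double radar_alt_m) {
        if (std::fabs(radar_alt_m - altitude_m) < gate_m) {
            altitude_m = 0.8 * altitude_m + 0.2 * radar_alt_m;  // blend
        }
    }
};

int main() {
    AltitudeEstimator est{5000.0, -30.0, 1000.0};
    est.predict(-0.05, 1.0);  // a tiny unmodelled bias each second...
    est.correct(8000.0);      // ...and the post-cliff radar is rejected
    std::printf("estimate: %.1f m\n", est.altitude_m);
}
```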

We know what you’re thinking: surely they would have run landing simulations to catch errors like these? Ironically, they did; it’s just that after the simulations were run, the landing site for Hakuto-R was changed. Unfortunately, nobody thought to re-run the simulations, and now the Moon has a new lawn ornament.

We’ve previously written about why lunar landings are so hard. While knowing what led to the crash will hopefully prevent a similar fate for future missions, the reality is that remotely landing a robot on a dusty world without the help of GPS is fiendishly difficult and likely will be for some time.


Bike Rides Played Back Via Aircraft Altitude Indicator

Any good bike ride should have a big climb to push your fitness, and a nice descent for the joy of careening down at high speed. [Glen Akins] has been recording his altitude during his mountain biking expeditions, and has now built a way to play them back on an aircraft altitude indicator.

A Python script is used to parse a recorded GPX file, which stores the position and elevation data captured from a GPS device during [Glen]’s rides. The elevation data is then output to a Raspberry Pi Pico, which drives a set of three Microchip MCP4802 DACs and three TI OPA584 op-amps to create the necessary 400 Hz AC waveforms for the aircraft altitude indicator. One DAC and op-amp pair generates the 400 Hz AC that simply powers the device, while the other two generate the synchro signals that actually drive the dial. The maths involved is worth checking out, particularly if you’re into old-school instrumentation from the 20th century.
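For the curious, the underlying synchro relationship is pleasingly simple: each stator voltage is the 400 Hz reference carrier, amplitude-modulated by the sine of the shaft angle, with the legs spaced 120° apart. The sketch below is our own illustration of that math (not [Glen]’s firmware), computing the instantaneous voltages a DAC update loop would need:

```cpp
#include <cmath>

// Our own illustration of the synchro relationship, not [Glen]'s
// firmware: a synchro encodes a shaft angle as three 400 Hz stator
// voltages whose amplitudes are sines of the angle, 120 degrees apart.
constexpr double kPi = 3.14159265358979323846;
constexpr double kCarrierHz = 400.0;  // aircraft instrument excitation

struct StatorVoltages { double s1, s2, s3; };

StatorVoltages synchro_sample(double shaft_angle_rad, double t_s) {
    // The same carrier that powers the instrument also modulates every
    // stator leg, keeping all three phase-locked to the reference.
    const double carrier = std::sin(2.0 * kPi * kCarrierHz * t_s);
    return {
        std::sin(shaft_angle_rad) * carrier,
        std::sin(shaft_angle_rad + 2.0 * kPi / 3.0) * carrier,
        std::sin(shaft_angle_rad + 4.0 * kPi / 3.0) * carrier,
    };
}
```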

We’ve seen similar tinkering efforts from [Glen] before, too.


The Apple Silicon That Never Was

Over Apple’s decades-long history, they have been quick to adopt new processor technology when they see an opportunity. Their switch from PowerPC to Intel in the early 2000s made Apple machines more accessible to a wider PC world already accustomed to x86 processors, and a decade earlier they had moved from Motorola 68000 processors to take advantage of the scalability, performance-per-watt, and raw performance of the PowerPC platform. They’ve recently made the switch to their own in-house silicon but, as reported by [The Chip Letter], this wasn’t the first time they attempted to design their own chips from the ground up rather than using chips from other companies like Motorola or Intel.

In the mid-1980s, Apple was already looking to move away from the Motorola 68000 for performance reasons, and part of the reason the switch took so long is that in the intervening years they launched Project Aquarius to attempt to design their own silicon. As the article linked above explains, they needed a large amount of computing power to get this done and purchased a Cray X-MP/48 supercomputer to help, as well as assigning a large number of engineers and designers to see the project through to the finish. A critical error was made, though, when they decided to build their design around a stack architecture rather than RISC. They later switched to a RISC design, but the project still struggled to ever get a prototype working. In the end the entire project was scrapped and the company moved on to PowerPC, but not without a tremendous loss of time and money.

Interestingly enough, another team was designing its own architecture at about the same time and ended up creating what would eventually become the modern-day ARM architecture, which Apple was involved with and currently licenses to build its M1 and M2 chips as well as its mobile processors. It was only by accident that Apple didn’t settle on a RISC design in time for its personal computers. The computing world might look a lot different today if Apple hadn’t languished in the early ’00s as the ultimate result of their failure to develop a competitive system in the mid ’80s. Apple’s distance from PowerPC now doesn’t mean that architecture has been completely abandoned, though.

Thanks to [Stephen] for the tip!