Blender And OpenEMS Team Up To Make Stunning Simulations

There’s tons of theory out there to explain the behavior of electronic circuits and electromagnetic waves. When it comes to visualization though, most of us have had to make do with our lecturer’s very finest blackboard scribbles, or some diagrams in a textbook. [Sam A] has been working on some glorious animated simulations, however, which show us various phenomena in a far more intuitive way.

The animations were created in Blender, the popular 3D animation software. As for the underlying simulation going on behind the scenes, this was created using the openEMS platform. [Sam] has used openEMS to run electromagnetic simulations of simple circuits designed in KiCad. From there, it was a matter of finding a way to export the simulation results in a way that could be imported into Blender. This was achieved with the ParaView software acting as a conduit, paired with a custom Python script.
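The glue step is mostly data wrangling: ParaView can export each simulation timestep as a table of sample points, which a script then reshapes into something Blender can animate. As a rough illustration of the idea (the CSV column names and file layout here are hypothetical, not [Sam]'s actual pipeline), a per-frame loader might look like:

```python
import csv

def load_field_frame(path):
    """Read one exported timestep: rows of x, y, z and E-field magnitude."""
    points, mags = [], []
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            points.append((float(row["x"]), float(row["y"]), float(row["z"])))
            mags.append(float(row["E_mag"]))
    return points, mags

def normalize(mags):
    """Scale field magnitudes to 0..1, ready to drive vertex colors
    or material emission strength inside Blender."""
    lo, hi = min(mags), max(mags)
    span = (hi - lo) or 1.0  # avoid division by zero on a flat field
    return [(m - lo) / span for m in mags]
```

Inside Blender, the normalized values could then be keyed to a color ramp, one keyframe per exported timestep.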

The result is that [Sam] can produce visually pleasing electromagnetic simulations that are easy to understand. One needn’t imagine an RF signal’s behaviour in a theoretical coax cable with no termination, when one can simply see what happens in [Sam]’s animation.

Simulation is a powerful tool which is often key to engineering workflows, as we’ve seen before.

Continue reading “Blender And OpenEMS Team Up To Make Stunning Simulations”

Modern Dance Or Full-Body Keyboard? Why Not Both!

If you felt in your heart that Hackaday was a place that would forever be free from projects that require extensive choreography to pull off, we’re sorry to disappoint you. Because you’re going to need a level of coordination and gross motor skills that most of us probably lack if you’re going to type with this full-body, semaphore-powered keyboard.

This is another one of [Fletcher Heisler]’s alternative inputs projects, in the vein of his face-operated coding keyboard. The idea there was to be able to code with facial gestures while cradling a sleeping baby; this project is quite a bit more expressive. Pretty much all you need to know about the technical side of the project can be gleaned from the brilliant “Hello world!” segment at the start of the video below. [Fletcher] uses OpenCV and MediaPipe’s Pose library for pose estimation to decode the classic flag semaphore alphabet, which encodes characters in the angle of the signaler’s extended arms relative to their body. To extend the character set, [Fletcher] added a squat gesture for numbers, and a shift function controlled by opening and closing the hands. The jazz-hands thing is just a bonus.
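The decoding step boils down to two angle measurements: each arm's shoulder-to-wrist vector is quantized into one of eight 45° sectors, and the sector pair indexes into the semaphore chart. Here's a minimal sketch of that idea in plain Python, with only a handful of letters filled in and a simplified angle convention; the real chart covers the full alphabet, and its mirroring depends on whether you view the signaler from the front or the back:

```python
import math

def sector(dx, dy):
    """Quantize an arm vector (shoulder -> wrist, in image coordinates
    where y grows downward) into one of eight 45-degree sectors.
    Sector 0 is straight down, counting toward the +x side."""
    angle = math.degrees(math.atan2(dx, dy)) % 360
    return round(angle / 45) % 8

# Toy lookup table: (left sector, right sector) -> letter.
# Illustrative subset only; not the complete flag-semaphore chart.
SEMAPHORE = {
    (0, 1): "A",  # one arm 45 degrees out and low, the other down
    (0, 2): "B",  # one arm straight out horizontally
    (0, 3): "C",  # one arm 45 degrees up
    (0, 4): "D",  # one arm straight up
}

def decode(left_vec, right_vec):
    """Map a pair of arm vectors to a letter, or None for unmapped poses."""
    key = (sector(*left_vec), sector(*right_vec))
    return SEMAPHORE.get(key)
```

With MediaPipe's Pose landmarks, the vectors would come from the shoulder and wrist keypoints per frame, with a little debouncing so a letter only registers once the pose holds steady.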

Honestly, the hack here is mostly a brain hack — learning a complex series of gestures and stringing them together fluidly isn’t easy. [Fletcher] used a few earworms to help him master the character set and tune his code; the inevitable Rickroll was quite artistic, and watching him nail the [Johnny Cash] song was strangely satisfying. We also thoroughly enjoyed the group number at the end. Ooga chaka FTW.

Continue reading “Modern Dance Or Full-Body Keyboard? Why Not Both!”

Hackaday Does Berlin

If you’re wondering why there was no newsletter last weekend, it was because we had our hands full with Hackaday Berlin. But boy, was it worth it! Besides being the launch party for the tenth annual Hackaday Prize, it was the first Hackaday gathering in Europe for four years, and it was awesome to see a bunch of familiar faces and meet many more new ones.

In a world that’s so interconnected, you might think that social media can take care of it all for you. And to some extent that’s true! I lost count of the number of times I heard “I follow you on Twitter/Mastodon” over the course of the event!

But then there were tons of other meetings. People who are all interested in building and designing analog synthesizers, even some who live in the same urban megalopolis, meeting each other and talking about modules and designs. People who love flip dots. On-the-spot collaborations between people writing video drivers and people making huge LED walls. And somehow there’s still room for this to happen, even though the algorithms should have probably hooked these folks up by now.

From the perspective of hosting the conference, I get the most satisfaction from seeing these chance meetings and the general atmosphere of people learning not only new things, but new people. This cross-fertilization of friendships and project collaborations is what keeps our community vital, and especially coming out of the Pandemic Years, it’s absolutely necessary. I came away with a long list of new plans, and I’m sure everyone else did too. And for some reason, social media just isn’t a substitute. Take that, TwitFace!

Photo of the Echo Dot PCB, highlighting the capacitor that needs to be shorted out for the exploit to work

Squeezing Secrets Out Of An Amazon Echo Dot

As we have seen time and time again, not every device stores our sensitive data in a respectful manner. Some of them send our personal data out to third parties, even! Today’s case is not a mythical one, however — it’s a jellybean Amazon Echo Dot, and [Daniel B] shows how to make it spill your WiFi secrets with a bit of a hardware nudge.

There have been exploits for Amazon devices with the same CPU, so to save time, [Daniel] started by porting an old Amazon Fire exploit to the Echo Dot. This exploit requires tactically applying a piece of tin foil to a capacitor on the flash chip power rail, and it forces the Echo to surrender the contents of its entire filesystem, ripe for analysis. Immediately, [Daniel] found out that the Echo keeps your WiFi passwords in plain text, as well as API keys to some of the Amazon-tied services.

Found an old Echo Dot at a garage sale or on eBay? There might just be a WiFi password and a few API keys ripe for the taking, and who knows what other kinds of data it might hold. From Amazon service authentication keys to voice recognition models and maybe even voice recordings, it sounds like getting an Echo to spill your secrets isn’t all that hard.

We’ve seen an Echo hijacked into an always-on microphone before, also through physical access in the same vein, so perhaps we all should take care to keep our Echoes in a secure spot. Luckily, adding a hardware mute switch to Amazon’s popular surveillance device isn’t all that hard. Though that won’t keep your burned-out smart bulbs from leaking your WiFi credentials.

Apple Never Gave Them USB. Now, They’re Getting It For Themselves

These days we use USB as a default for everything from low-speed serial ports to high-capacity storage, and the ubiquitous connector has evolved into a truly multi-purpose interface. It’s difficult to believe, then, that the first Apple Mac to be designed with a USB interface was shipped without it; but that’s the case with 1997’s grey Power Mac G3.

On the personality board are all the footprints for a single USB 1.1 port, but USB-hungry Apple fanboys had to wait for the translucent iMac and later G3 before they had a machine with the parts fitted. [Croissantking] is righting that particular wrong, by piecing together the missing Apple circuit using parts from contemporary cards for PCs. Over a long forum thread there are a few teething problems, but it certainly seems as though grey G3 owners will soon be able to have reliable USB upgrades.

If omitting USB from a 1997 Mac seems unexpected, it’s as well to remember how slow the first USB versions were. At the time SCSI was king in the high-speed peripheral world, and USB seemed more appropriate as a replacement for Apple Desktop Bus and the serial port. Even when they embraced USB they were reluctant to follow the standards of the PC world, as we remember finding out when for curiosity’s sake we tried swapping the mice and keyboards between an iMac and a Windows PC. We have USB’s success to thank for releasing Mac users from a world of hugely overpriced proprietary peripherals.

If you fancy hacking a ’90s PowerMac, make sure you get one that works.

Thanks [Doug] for the tip.

Working With Old High-Voltage EPROMs Is Fussy

EPROMs, those UV-erasable memory chips of the 80s and 90s, once played a crucial role in countless electronic devices. They’ve become relics of a bygone era, but for enthusiasts of vintage electronics, the allure of these light-sensitive devices remains strong. Today, we’re diving into [Kevin Osborn]’s nostalgic journey as he uncovers the secrets of old EPROMs loaded with Atari 7800 code.

[Kevin] used to work at General Computer Company, which produced the Atari 7800 and several games for the system. Thus, he had a handful of old carts and development EPROMs sitting up in his attic along with an old console. Recently, he decided to try and uncover what was on the EPROMs and began an investigation. They wouldn’t run in his Atari, and he quickly realized why: the EPROMs weren’t cryptographically signed, so the system wouldn’t load them. Continue reading “Working With Old High-Voltage EPROMs Is Fussy”

Creating A 3D Visualization Of Freely Moving Organisms Using Camera Array And Software Algorithm

Observing a colony, swarm or similar grouping of creatures like ants or zebrafish over longer periods of time can be tricky. Simply recording their behavior with a camera misses a lot of information about the position of their body parts, while taking precise measurements using a laser-based system or LiDAR suffers from a reduction in parameters such as the resolution or the update speed. The ideal monitoring system would be able to record at high data rates and resolutions, while presenting the recorded data in all three dimensions. This is where the work by Kevin C. Zhou and colleagues seeks to tick all the boxes, with a recent paper (preprint, open access) in Nature Photonics describing their 3D-RAPID system.

This system features a 9×6 camera grid, making for a total of 54 cameras which image the underlying surface. With 66% overlap between adjacent cameras along the horizontal dimension, there is enough duplicate data between the image streams, which the processing step then uses to extract and reconstruct the 3D features, helped along by a pixel pitch of between 9.6 and 38.4 µm. The software is made available via the authors’ GitHub.

Three configurations for the imaging are possible, ranging from no downsampling (1x) for 13,000×11,250 resolution at 15 FPS, to 2x downsampling (6,500×5,625@60FPS) and finally 4x (3,250×2,810@230FPS). Depending on whether the goal is to image finer features or rapid movement, this gives a range of options before the data is fed into the computational 3D reconstruction and stitching algorithm. This uses the overlap between the distinct frames to reconstruct the 3D image, which in this paper is used together with a convolutional neural network (CNN) to automatically determine for example how often the zebrafish are near the surface, as well as the behavior of fruit flies and harvester ants.
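One way to see what the three modes have in common is to multiply pixels per frame by frame rate. A quick back-of-the-envelope in Python, using the figures above, shows that every mode pushes roughly the same pixel throughput (about 2 Gpix/s) into the reconstruction pipeline; the downsampling simply trades where those pixels go, spatial detail versus temporal detail:

```python
# (downsample factor, width, height, frames per second), as reported above
MODES = [
    (1, 13_000, 11_250, 15),
    (2, 6_500, 5_625, 60),
    (4, 3_250, 2_810, 230),
]

def throughput_gpix(w, h, fps):
    """Gigapixels per second the stitching/reconstruction step must ingest."""
    return w * h * fps / 1e9

for d, w, h, fps in MODES:
    print(f"{d}x: {w}x{h} @ {fps} fps -> {throughput_gpix(w, h, fps):.2f} Gpix/s")
```

All three land in the 2.1–2.2 Gpix/s range, which suggests the sensor readout bandwidth, rather than the optics, is the fixed budget being divided up.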

As noted in an interview with the authors, possible applications could be found in developmental biology as well as pharmaceutics.