
Hackaday Links: October 25, 2020

Siglent has been making pretty big inroads into the mid-range test equipment market, with the manufacturer’s instruments popping up on benches all over the place. Saulius Lukse, of Kurokesu fame, found himself in possession of a Siglent SPD3303X programmable power supply, which looks like a really nice unit, at least from the hardware side. The software it came with didn’t exactly light his fire, though, so Saulius came up with a Python library to control the power supply. The library lets him control pretty much every aspect of the power supply over its Ethernet port. There are still a few functions that don’t quite work, and he’s only tested it with his specific power supply so far, but chances are pretty good that there’s at least some crossover in the command sets for other Siglent instruments. We’re keen to see others pick this up and run with it.
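For a sense of what driving the supply over Ethernet looks like, here’s a minimal sketch of the same idea using raw SCPI over a socket. The port number and command strings are assumptions based on typical SCPI instruments (check them against the SPD3303X programming guide), and the IP address is a placeholder.

```python
# Minimal sketch of talking to a Siglent supply over Ethernet. Assumptions:
# the instrument speaks SCPI on raw-socket port 5025, and the command names
# below match the model's programming guide -- verify both before use.
import socket

def scpi(host, command, port=5025, timeout=2.0):
    """Send one SCPI command; return the reply for queries ending in '?'."""
    with socket.create_connection((host, port), timeout=timeout) as sock:
        sock.sendall((command + "\n").encode("ascii"))
        if command.strip().endswith("?"):
            return sock.recv(4096).decode("ascii").strip()
    return None

print(scpi("192.168.1.50", "*IDN?"))   # identify the instrument (placeholder IP)
scpi("192.168.1.50", "CH1:VOLT 3.3")   # set channel 1 to 3.3 V (assumed syntax)
scpi("192.168.1.50", "OUTP CH1,ON")    # enable the output (assumed syntax)
```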

From the “everyone needs a hobby” department, we found this ultra-detailed miniature of an IBM 1401 mainframe system to be completely enthralling. We may have written this up at an earlier point in its development, but it now appears that the model maker, 6502b, is done with the whole set, so it bears another look. The level of detail is eye-popping — the smallest features of every piece of equipment, from the operator’s console to the line printer, are reproduced. Even the three-ring binders with system documentation are there. And don’t get us started about those tape drives, or the wee chair in period-correct Harvest Gold.

Speaking of diversions, have you ever wondered how many people are in space right now? Or how many humans have had the privilege to hitch a ride upstairs? There’s a database for that: the Astronauts Database over on Supercluster. It lists pretty much everything — human and non-human — that has been intentionally launched into space, starting with Yuri Gagarin in 1961 and running up to the newest member of the club, Sergey Kud-Sverchkov, who took off for the ISS just last week from his hometown of Baikonur. Everyone and everything is there, including “some tardigrades” that crashed into the Moon. They even included this guy, which makes us wonder why they didn’t include the infamous manhole cover.

And finally, for the machinists out there, if you’ve ever wondered what chatter looks like, wonder no more. Breaking Taps has done an interesting slow-motion analysis of endmill chatter, and the results are a bit unexpected. The footage is really cool — watching the four-flute endmill peel mild steel off and fling the tiny curlicues aside is very satisfying. The value of the high-speed shots is evident when he induces chatter; the spindle, workpiece, vise, and just about everything starts oscillating, resulting in a poor-quality cut and eventually, when pushed beyond its limits, the dramatic end of the endmill’s life. Interesting stuff — reminds us a bit of Ben Krasnow’s up close and personal look at chip formation in his electron microscope.

LED Art Reveals Itself In Very Slow Motion

Every bit of film or video you’ve ever seen is a mind trick, an optical illusion of continuous movement based on flashing 24 to 30 slightly different images into your eyes every second. The wetware between your ears can’t deal with all that information individually, so it convinces itself that you’re seeing smooth motion.

But what if you slow down time: dial things back to one frame every 100 seconds, or every 1,000? That’s the idea behind this slow-motion LED art display called, appropriately enough, “Continuum.” It’s the work of [Louis Beaudoin] and it was inspired by the original very-slow-motion movie player and the recent update we featured. But while those players featured e-paper displays for photorealistic images, “Continuum” takes a lower-resolution approach. The display is made up of nine HUB75 32×32 RGB LED panels, each with a 5-mm pitch. The resulting 96×96-pixel display fits nicely within an Ikea RIBBA picture frame.

The display is driven by a Teensy 4 and [Louis]’ custom-designed SmartLED Shield that plugs directly into the HUB75s. The rear of the frame is rimmed with APA102 LED strips for an Ambilight-style effect, and the front of the display has a frosted acrylic diffuser. It’s configured to show animated GIFs at anything from their original framerate down to 1,000 seconds per frame, the latter resulting in an image that looks static unless you revisit it sometime later. [Louis] takes full advantage of the Teensy’s processing power to smoothly transition between each pair of frames, and the whole effect is quite wonderful. The video below captures it as best it can, but we imagine this is something best seen in person.
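Conceptually, the smooth transition is just a linear crossfade between consecutive frames. Here’s a minimal desktop sketch of that idea in Python with Pillow and NumPy; it illustrates the technique, and isn’t [Louis]’ Teensy code, which performs the same blend on the microcontroller.

```python
# Sketch of the crossfade idea: blend frame A into frame B over many small
# steps, so even a 1,000-second frame interval changes imperceptibly.
# Illustrative only -- the real project does this on a Teensy 4.
import numpy as np
from PIL import Image

def crossfade(frame_a, frame_b, steps):
    """Yield `steps` intermediate images between two same-sized RGB frames."""
    a = np.asarray(frame_a, dtype=np.float32)
    b = np.asarray(frame_b, dtype=np.float32)
    for i in range(1, steps + 1):
        t = i / (steps + 1)            # blend weight, 0..1
        mix = (1.0 - t) * a + t * b
        yield Image.fromarray(mix.astype(np.uint8))

# Example: 96x96 frames, one blended step per second for 1,000 seconds.
a = Image.new("RGB", (96, 96), "red")
b = Image.new("RGB", (96, 96), "blue")
for img in crossfade(a, b, steps=1000):
    pass  # push each blended image to the LED panel here
```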


E-Paper Display Shows Movies Very, Very Slowly

How much would you enjoy a movie that took months to finish? We suppose it would very much depend on the film; the current batch of films from the Star Wars franchise are quite long enough as they are, thanks very much. But a film like Casablanca or 2001: A Space Odyssey might be a very different experience when played on this ultra-slow-motion e-paper movie player.

The idea of displaying a single frame of a movie up for hours rather than milliseconds has captivated [Tom Whitwell] since he saw [Bryan Boyer]’s take on the concept. The hardware [Tom] used is similar: a Raspberry Pi, an SD card hat with a 64 GB card for the movies, and a Waveshare e-paper display, all of which fits nicely in an IKEA picture frame.

[Tom]’s software is a bit different, though; a Python program uses FFmpeg to fetch and dither frames from a movie at a configurable rate, allowing the viewing experience to be customized a little more than the original. Showing one frame every two minutes and then skipping ahead four frames, it has taken him more than two months to watch Psycho. He reports that the shower scene was over in a day and a half — almost as much time as it took to film — while the scene showing [Marion Crane] driving through the desert took weeks to finish. We always wondered why [Hitch] spent so much time on that scene.
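As a rough sketch of that pipeline, grabbing a single frame with FFmpeg and letting Pillow’s 1-bit conversion handle the dithering looks something like this. The file names, panel resolution, and frame math are placeholders, and handing the result to the e-paper driver will depend on your particular panel.

```python
# Sketch of the extract-and-dither step: pull one frame from the movie with
# ffmpeg, then let Pillow's 1-bit conversion apply Floyd-Steinberg dithering.
# File names and the 800x480 panel resolution are placeholders.
import subprocess
from PIL import Image

FPS = 24  # source frame rate

def grab_frame(movie, frame_number, out_png="frame.png"):
    seconds = frame_number / FPS
    subprocess.run([
        "ffmpeg", "-y", "-ss", str(seconds), "-i", movie,
        "-frames:v", "1", out_png,
    ], check=True)
    img = Image.open(out_png).resize((800, 480))
    return img.convert("1")  # mode "1" dithers with Floyd-Steinberg by default

frame = grab_frame("movie.mp4", 12345)
frame.save("dithered.png")   # hand this off to the e-paper driver
```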

With the proper films loaded, we can see this being an interesting way to really study the structure and flow of a good film. It’s also a good way to cut your teeth on e-paper displays, which we’ve seen pop up in everything from weather stations to Linux terminals.

660 FPS Raspberry Pi Video Captures The Moment In Extreme Slo-Mo

Filming in slow motion has long since become a standard feature on the higher end of the smartphone spectrum, and can turn the most trivial physical activity into a majestic action shot to share on social media. It also unveils some little wonders of nature that are otherwise hidden to our eyes: the formation of a lightning flash during a thunderstorm, a hummingbird flapping its wings, or an avocado reaching that perfect moment of ripeness. Altogether, it’s a fun way of recording videos, and as [Robert Elder] shows, something you can do with a few dollars’ worth of Raspberry Pi equipment at a whopping rate of 660 FPS, if you can live with some limitations.

Taking the classic 24 FPS as a reference, this will turn a one-second video into a nearly half-minute-long slo-mo-fest. To achieve such a frame rate in the first place, [Robert] uses [Hermann-SW]’s modified version of raspiraw to get raw image data straight from the camera sensor to the Pi’s memory, leaving all the heavy lifting of processing it into an actual video for after all the frames are retrieved. RAM size is of course one limiting factor for recording length, but memory bandwidth is the bigger problem, restricting the resolution to 640×64 pixels on the cheaper $6 camera model he uses. Yes, sixty-four pixels of height — but hey, look at that super wide-screen aspect ratio!
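A quick back-of-the-envelope calculation shows why RAM is the ceiling. Assuming roughly 10 bits per pixel of raw Bayer data (the exact packing varies) and a fixed budget of free memory, the maximum burst length falls out directly:

```python
# Back-of-the-envelope: how long a 640x64 burst at 660 FPS fits in RAM.
# The 10-bit raw figure and the free-memory budget are assumptions; actual
# raspiraw packing and overhead will differ.
WIDTH, HEIGHT, FPS = 640, 64, 660
BITS_PER_PIXEL = 10                     # raw Bayer data, assumed packing
FREE_RAM_BYTES = 256 * 1024 * 1024      # say 256 MB is left over for frames

frame_bytes = WIDTH * HEIGHT * BITS_PER_PIXEL // 8   # 51,200 bytes per frame
max_frames = FREE_RAM_BYTES // frame_bytes           # about 5,200 frames
print(f"{max_frames} frames = {max_frames / FPS:.1f} s of real time,")
print(f"or {max_frames / 24 / 60:.1f} min when played back at 24 FPS")
```

Roughly eight seconds of real time, which stretches to a few minutes of footage at playback speed.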

While you won’t get the highest quality out of this, it’s still an exciting and inexpensive way to play around with slow motion. You can always step up your game though, and have a look at this DIY high-speed camera instead. And well, here’s one mounted on a lawnmower blade destroying anything but a printer.


Visual Magnetic Fields

If you need help visualizing magnetic fields, these slow-motion video captures should educate or captivate you. Flux lines are difficult to describe in words because magnet shape and strength both play a part; a person used to a bar magnet might find it hard to picture what is happening around a conical one. Before you watch the video below the break, be advised: if you are repelled by the sight of magnetite sand clogging a bare magnet and then flying onto the floor, this is your only warning.

The technique and equipment are simple and shown in the video. A layer of black sand is spread on a piece of taut plastic to make something like a dirty trampoline, and a neodymium magnet is dropped in the middle. The bouncing action launches the sand and magnet simultaneously, so both hang in the air and the particles can move with little more than air resistance acting on them.
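If you’d rather preview flux lines without sand all over the floor, a few lines of NumPy and Matplotlib will plot the field of an ideal two-dimensional dipole. It’s a sketch for intuition only; real magnets of varying shape (the conical one included) deviate from the ideal, which is exactly what the sand trick reveals.

```python
# A quick way to see flux lines without the sand: plot the field of an ideal
# 2D dipole with Matplotlib's streamplot. Real magnets (conical included)
# deviate from this ideal, which is what the sand trick makes visible.
import numpy as np
import matplotlib.pyplot as plt

y, x = np.mgrid[-2:2:200j, -2:2:200j]
r2 = x**2 + y**2 + 1e-9                 # avoid dividing by zero at the origin
# Ideal dipole with moment along +y: B ~ 3*r*(m.r)/r^5 - m/r^3
bx = 3 * x * y / r2**2.5
by = (3 * y**2 - r2) / r2**2.5

plt.streamplot(x, y, bx, by, density=1.4, color="k", linewidth=0.7)
plt.gca().set_aspect("equal")
plt.title("Flux lines of an ideal dipole")
plt.show()
```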

These videos were all taken with a single camera and a single magnet. Multiple cameras would yield 3D visuals, and the intertwining fields of multiple magnets can be beautiful. Perhaps a helix of spherical magnets? What do you have lying around the house? In a twist, we can use magnets to simulate gas atoms and trick them into performing unusual feats.


The Very Slow Movie Player Does It With E-Ink

Most displays are looking to play things faster. We’ve got movies at 60 frames per second, and gaming displays that run at 144 fps. But what about moving in the other direction? [Bryan Boyer] wanted to try this out, so he built the VSMP, or Very Slow Movie Player. It’s a neat device that plays back a movie at about 24 fph (frames per hour) on an e-ink display to demonstrate something that [Bryan] calls Slow Seeing, which, he says “helps you see yourself against the smear of time.” A traditional epic-length movie will now run you more than 8,000 hours of viewing.

Artistic considerations aside, it’s an interesting device from a technical point of view. [Bryan] built it from a 7.4-inch e-ink display from Pervasive Displays. The controller is connected to a Raspberry Pi Zero, which is running a Python script to convert a frame of the movie file into a dithered file, then send it to the display. Because the Pi Zero isn’t a very fast computer, this takes some time, and thus the slow speed of the VSMP. Originally, [Bryan] had set it up to run as fast as the system could manage, which was about 25 seconds per frame, or about 2 frames per minute. He decided to slow it down a bit further to the more attractive multiple of 24 frames per hour to contrast with the 24 frames per second of the original movie. He did this by using a CRON job that kicks off the conversion script once every 2.5 minutes and increments the frame counter. All of this is topped off with a nice 3D-printed case that has a lovely interference pattern to make a rather neat and intriguing project.
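The per-invocation logic is simple enough to sketch: each scheduled run reads a persisted frame counter, shows that frame, and increments the counter. Something like the following captures the shape of it; the paths and display handoff are placeholders, not [Bryan]’s actual code.

```python
# Sketch of one scheduled step: read a persisted frame counter, show that
# frame, bump the counter. Paths and the display handoff are placeholders.
from pathlib import Path

COUNTER = Path("/home/pi/vsmp/frame.txt")

def next_frame_number():
    COUNTER.parent.mkdir(parents=True, exist_ok=True)
    n = int(COUNTER.read_text()) if COUNTER.exists() else 0
    COUNTER.write_text(str(n + 1))
    return n

frame = next_frame_number()
# ...extract frame `frame` from the movie, dither it, push it to the panel...
print(f"Displayed frame {frame}")
```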

Perhaps the best part of this is seeing a time-lapse of the VSMP — life moves quickly around it while 2001: A Space Odyssey plays at normal speed.


Nvidia Transforms Standard Video Into Slow Motion Using AI

Nvidia is back at it again with another awesome demo of applied machine learning: artificially transforming standard video into slow motion – they’re so good at showing off what AI can do that anyone would think they were trying to sell hardware for it.

Though most modern phones and cameras have an option to record in slow motion, it often comes at the expense of resolution, and always at the expense of storage space. For really high frame rates you’ll need a specialist camera, and you often don’t know that you should be filming in slow motion until after an event has occurred. Wouldn’t it be nice if we could just convert standard video to slow motion after it was recorded?

That’s just what Nvidia has done, all nicely documented in a paper. At its heart, the algorithm must take two frames and artificially create one or more frames in between. This is not a hand-crafted algorithm that interpolates frames; this is a fully fledged deep-learning system. The Convolutional Neural Network (CNN) was trained on over a thousand videos – roughly 300k individual frames.

Since none of the parameters of the CNN are time-dependent, it’s possible to generate as many intermediate frames as required, something which sets this solution apart from previous approaches. In some of the shots in their demo video, 30fps video is converted to 240fps; this requires the creation of 7 additional frames for every pair of consecutive frames.
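To appreciate why time-independence matters, consider the naive baseline such a system improves upon: plain linear blending at arbitrary timestamps. The sketch below generates the seven intermediate frames for a 30 to 240 fps conversion; it produces ghosting rather than real motion, which is exactly the failure mode a learned, flow-based model addresses.

```python
# Naive baseline for 30 -> 240 FPS: linearly blend each consecutive pair of
# frames at t = 1/8 .. 7/8. This ghosts moving objects instead of actually
# moving them -- the shortfall a learned, flow-based CNN addresses.
import numpy as np

def interpolate_pair(frame0, frame1, factor=8):
    """Return the (factor - 1) blended frames between two uint8 frames."""
    f0 = frame0.astype(np.float32)
    f1 = frame1.astype(np.float32)
    frames = []
    for i in range(1, factor):
        t = i / factor  # nothing ties t to fixed steps; any t in (0, 1) works
        frames.append(((1 - t) * f0 + t * f1).astype(np.uint8))
    return frames

f0 = np.zeros((480, 640, 3), dtype=np.uint8)       # dummy black frame
f1 = np.full((480, 640, 3), 255, dtype=np.uint8)   # dummy white frame
mids = interpolate_pair(f0, f1)                    # seven in-between frames
print(len(mids), mids[0].shape)
```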

The video after the break is seriously impressive, though if you look carefully you can see the odd imperfection, like the hockey player’s skate or dancer’s arm. Deep learning is as much an art as a science, and if you understood all of the research paper then you’re doing pretty darn well. For the rest of us, get up to speed by wrapping your head around neural networks, and trying out the simplest Tensorflow example.
