The Bad Apple!! video, with its silhouette animation style, has long been a staple graphics demo for low-end hardware, a more stylish alternative to the question “Will it run DOOM?”. It’s normally rendered onto a screen by a small microcomputer or similar, but as [Ian Ward] demonstrates in an unusual project, it’s possible to display the video without any processor being involved at all. Instead he’s used a clever arrangement in which a 32K byte EPROM drives an HD44780-compatible parallel alphanumeric LCD display.
While 32K bytes would have seemed enormous back in the days of 8-bit computing, it’s still something of a struggle to express the required graphics characters, even when driving only a small section of an alphanumeric LCD. That feat is achieved with the help of a second EPROM, which carries a look-up table.
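The write-up doesn’t spell out exactly how the look-up table is laid out, but the general idea of quantising frames down to a tiny set of character-cell patterns is easy to illustrate. Here’s a hypothetical Python sketch, not [Ian Ward]’s actual design, that maps each 5×8-pixel tile of a frame to the closest of a few block patterns and emits the kind of byte table one might burn into a ROM:

```python
# Hypothetical sketch (not the project's actual method): pre-computing a ROM
# table that maps 5x8-pixel tiles of a Bad Apple frame to the closest entry
# in a small set of block-graphic patterns an HD44780 could display.
from PIL import Image

# A handful of candidate cell patterns, each a tuple of eight 5-bit rows.
# Character codes and patterns here are chosen purely for illustration.
PATTERNS = {
    0x20: (0x00,) * 8,                  # space: blank cell
    0xFF: (0x1F,) * 8,                  # solid block on many HD44780 ROMs
    0x01: (0x1F,) * 4 + (0x00,) * 4,    # custom CGRAM char: top half lit
    0x02: (0x00,) * 4 + (0x1F,) * 4,    # custom CGRAM char: bottom half lit
}

def tile_rows(img, cx, cy):
    """Return one 5x8 tile of a 1-bit frame as eight 5-bit row values."""
    rows = []
    for y in range(8):
        bits = 0
        for x in range(5):
            if img.getpixel((cx * 5 + x, cy * 8 + y)):
                bits |= 1 << (4 - x)
        rows.append(bits)
    return rows

def closest_char(rows):
    """Pick the character code whose pattern differs in the fewest pixels."""
    def diff(a, b):
        return sum(bin(x ^ y).count("1") for x, y in zip(a, b))
    return min(PATTERNS, key=lambda code: diff(PATTERNS[code], rows))

frame = Image.open("frame0001.png").convert("1")   # hypothetical frame dump
table = bytes(closest_char(tile_rows(frame, cx, cy))
              for cy in range(frame.height // 8)
              for cx in range(frame.width // 5))
# 'table' is the sort of data that could end up in an EPROM for playback.
```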
It’s fair to say that the result, which can be seen in the video below the break, isn’t the most accomplished rendition of Bad Apple!! that we’ve seen, but given the rudimentary hardware it’s playing on, we think that shouldn’t matter. Why didn’t we think of doing this in 1988!
An interesting research project out of MIT shows that it’s possible to embed machine-readable labels into 3D printed objects using nothing more than an FDM printer and filament that is transparent to IR. The method is being called InfraredTags; by embedding something like a QR code or ArUco markers into an object’s structure, that label can be detected by a camera and interactive possibilities open up.
One simple proof of concept is a wireless router with its SSID embedded into the side of the device, and the password embedded into a different code on the bottom to ensure that physical access is required to obtain the password. Mundane objects can have metadata embedded into them, or provide markers for augmented reality functionality, like tracking the object in 3D.
How are the codes actually embedded? The process is straightforward with the right tools. The team used a specialty filament from vendor 3dk.berlin that looks nearly opaque in the visible spectrum, but transmits roughly 45% in IR. The machine-readable label gets embedded within the walls of a printed object either by using a combination of IR PLA and air gaps to represent the geometry of the code, or by making a multi-material print using IR PLA and regular (non-IR-transmitting) PLA. Both provide enough contrast for an IR-sensitive camera to detect the label, although the multi-material version works a little better overall. Sadly, the average mobile phone camera by itself isn’t sufficiently IR-sensitive to passively read these embedded tags, so the researchers used readily available cameras without IR-blocking filters, such as the Raspberry Pi NoIR.
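As a rough idea of what the reading side might look like, here’s a hedged sketch of detecting an ArUco marker in frames from an IR-sensitive camera like the Pi NoIR, using OpenCV’s aruco module. It assumes the opencv-contrib-python package, and the exact aruco API names vary a little between OpenCV versions:

```python
# Sketch: spotting an ArUco marker in frames from an IR-sensitive camera such
# as a Raspberry Pi NoIR. Assumes opencv-contrib-python (OpenCV >= 4.7 API).
import cv2

cap = cv2.VideoCapture(0)   # NoIR camera exposed as a normal video device
dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
detector = cv2.aruco.ArucoDetector(dictionary, cv2.aruco.DetectorParameters())

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Boosting contrast helps, since the IR PLA only transmits ~45% of the light.
    gray = cv2.equalizeHist(gray)
    corners, ids, _rejected = detector.detectMarkers(gray)
    if ids is not None:
        cv2.aruco.drawDetectedMarkers(frame, corners, ids)
    cv2.imshow("InfraredTags-style detection", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```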
The PDF has deeper details of the implementation for those of you who want to know more, and you can see a demonstration of a few different applications in the video, embedded below. Determining the provenance of 3D printed objects is a topic of some debate in the industry, and it’s not hard to see how technology like this could be used to covertly identify objects without compromising their appearance.
Over on Retro Recipe’s YouTube channel, [Perifractic] has been busy restoring an old promotional video of how Commodore computers were made back in 1984 (video below the break). He cleaned up the old VHS-quality version that’s been around for years, translated the German to English, and trimmed some bits here and there. The result is a fascinating look into the MOS factory, Commodore’s German factory, and a few other facilities around the globe. The film shows the chip design engineers in action, wafer manufacturing, chip dicing, and some serious micro-probing of bare die. We also see PCB production, final assembly, and the test and burn-in of Commodore PETs and C64s in Germany.
Check out the video description, where [Perifractic] goes over the processes he used to clean up video and audio using machine learning. If restoration interests you, check out the piece we wrote about these techniques to restore old photographs last year. Are there any similar factory tour films, restored or not, lurking around the web? Let us know in the comments below.
[Caleb] shares a problem with most dog owners. Dogs leave their… byproducts… all over your yard. Some people pick it up right away and some just leave it. But what if your dog has the run of the yard? How do you know where these piles are hiding? A security camera and AI image detection is the answer, but probably not in the way that you think.
You might think, as we did, that you could train the system to recognize the, um, piles. But instead, [Caleb] elected to have the AI do animal pose estimation to detect the dog’s posture while producing the target. This is probably easier than recognizing a nondescript pile, and it also means it doesn’t matter if the pile is, say, covered with snow.
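We don’t know exactly how [Caleb]’s pipeline classifies the posture, but the downstream logic could be as simple as a geometric heuristic on the keypoints. Here’s a hypothetical sketch; the keypoint names and thresholds are ours, not his:

```python
# Hypothetical downstream logic, not [Caleb]'s actual code: given 2-D keypoints
# from whatever animal pose-estimation model is in use, flag frames where the
# dog's posture looks like the characteristic squat.
from dataclasses import dataclass

@dataclass
class Keypoints:
    # (x, y) image coordinates for a few assumed keypoint names
    nose: tuple
    hip: tuple
    rear_paw: tuple

def looks_like_squat(kp: Keypoints, ratio_threshold: float = 0.35) -> bool:
    """Heuristic: in a squat the hips drop toward the rear paws while the nose
    stays comparatively high, so the hip-to-paw vertical gap shrinks relative
    to the nose-to-paw gap. The threshold is a made-up starting point."""
    hip_to_paw = abs(kp.rear_paw[1] - kp.hip[1])
    nose_to_paw = abs(kp.rear_paw[1] - kp.nose[1]) or 1
    return hip_to_paw / nose_to_paw < ratio_threshold

def confirm_event(frames, min_consecutive=30):
    """Only log a detection once the posture persists for a run of frames,
    to filter out sitting or sniffing."""
    streak = 0
    for kp in frames:
        streak = streak + 1 if looks_like_squat(kp) else 0
        if streak >= min_consecutive:
            return True
    return False
```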
We have to say, [Murtaza]’s example game in his latest video isn’t very exciting. However, the OpenCV technique he uses to track a hand and determine its distance from a single camera is pretty interesting. The demo shows a button at a random position on the screen, and you have to use your hand to press it; the button then moves so you can try again. The hand measurement seems accurate to within a few centimeters, which is good enough for many applications.
The Python code is actually quite straightforward. Essentially, the software tracks your hand and, by estimating its relative size, determines how far away it is. Of course, your hand might also rotate, and [Murtaza] works through all the cases step-by-step. If we wanted to know a distance, we’d probably turn to ultrasonics or a time-of-flight sensor. The problem is, those sensors can’t tell your hand from anything else that happens to be in front of them. The use of a single camera to track and locate is pretty impressive.
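As a rough sketch of the idea, the same thing can be done with MediaPipe Hands directly (the video uses the cvzone wrapper around it): measure the pixel distance between two knuckle landmarks and map it to centimeters through a calibration curve. The calibration pairs below are placeholders, not values from the video:

```python
# Sketch of single-camera hand distance estimation. The calibration data is
# placeholder; you would measure your own (pixel width, distance) pairs.
import cv2
import numpy as np
import mediapipe as mp

# (pixel width between index and pinky knuckles, measured distance in cm)
calibration = [(300, 20), (200, 35), (130, 55), (90, 80)]   # placeholder data
px, cm = zip(*calibration)
coeff = np.polyfit(px, cm, 2)            # quadratic fit of width -> distance

hands = mp.solutions.hands.Hands(max_num_hands=1)
cap = cv2.VideoCapture(0)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    h, w = frame.shape[:2]
    result = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if result.multi_hand_landmarks:
        lm = result.multi_hand_landmarks[0].landmark
        # Landmarks 5 and 17 are the index and pinky knuckles; the distance
        # between them is a reasonable proxy for apparent hand size.
        dx = (lm[5].x - lm[17].x) * w
        dy = (lm[5].y - lm[17].y) * h
        width_px = (dx * dx + dy * dy) ** 0.5
        distance_cm = np.polyval(coeff, width_px)
        cv2.putText(frame, f"{distance_cm:.0f} cm", (30, 60),
                    cv2.FONT_HERSHEY_SIMPLEX, 1.5, (0, 255, 0), 3)
    cv2.imshow("hand distance", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```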
If you haven’t used OpenCV before, the channel has a lot of tutorials and they are all worth watching. Computer vision is a great technique and can replace a lot of things in some applications. GPS, for example. Or, try this creepier tracking application next Halloween.
It’s dangerous for a hardware hacker to go into a second-hand store. I was looking for a bed frame for my new apartment, but of course I spent an age browsing all the other rubbish treasures on offer. I have a rough rule of thumb: unless it’s under a tenner and fits in one hand, it has to be exceptional for me to buy it, so I passed up on a nice Grundig reel-to-reel from the 1960s and instead came away with a folding Palm Pilot keyboard and a Fuji 8mm home movie camera after I’d arranged delivery for the bed. On those two I’d spent little more than a fiver, so I’m good. The keyboard is a serial device that’s a project for a rainy day, but the camera is something else. I’ve been keeping an eye out for one to use for a Raspberry Pi camera conversion, and this one seemed ideal. But once I examined it more closely, I was drawn into an unexpected train of research that shed some light on what must have been real objects of desire for my parents’ generation.
A Thrift Store Find Opens A Whole New Field
The Fuji P300 from 1972 is typical among consumer movie cameras of the day. It takes the form of a film magazine with a zoom lens assembly on its front, a reflex viewfinder on its side, and a handle with a shutter trigger button on it protruding vertically below the magazine and also housing the batteries.
Surprisingly, it still has the mercury cell that would have powered its light meter; disposing of that correctly will be a minor annoyance. Sometimes these devices had clockwork motors, but this one has an electric motor. It also has a light sensor coupled to some kind of electromechanical aperture. It would have been an expensive camera when it was new, probably as much of a purchase then as an SLR or a decent mirrorless camera is here in 2021.
The surprise came when I opened it up, for it looked like no other 8mm camera I had seen. I’m familiar with the two reels of a Standard 8 or the boxy cassette of Super 8, but this one used something different. That film magazine is made to fit a compact twin-reel cartridge whose film runs through a metal film gate. This is a Single 8 camera, Fuji’s entry in the all-in-one 8 mm film market, and a format I never knew existed. To explain my unexpected discovery it was necessary to delve into the world of home movie formats in the decade before videotape arrived and drove them out.
Modern oscilloscopes are often loaded with features, but every now and then you run into a feature that seems easy to implement yet isn’t available. [kgsws] wanted to use his Rigol DS1074 to show live measurements in his YouTube videos, but found out that this scope doesn’t support video output. Not to be deterred, [kgsws] decided to add this feature himself. In the video embedded below, he describes in detail the process of adding a USB Video Class (UVC) interface to his oscilloscope.
The basic idea was to find the signals going into the scope’s display and read them out using a Cypress EZ-USB board, a development board that can be used to design USB devices and which supports UVC mode. However, with no documentation of the Rigol’s internal circuitry, [kgsws] had to probe the display connector to find out which pin carried which signal. And since he had no scope available other than this Rigol, he hooked up the various bits of the disassembled instrument so that it could (awkwardly) probe its own internal signals.
After mapping out its own display signals, it was time to hook them up to the EZ-USB board. [kgsws] achieved this by soldering about two dozen tiny wires to SMD pads on the motherboard. The EZ-USB board itself was placed in the back of the scope’s case, but had to be stripped of unneeded components in order to save space and power. A very clever trick was the addition of a reed switch, which allowed [kgsws] to set the EZ-USB board to programming mode without having to open the scope’s case, by simply holding a magnet near the switch.
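The nice part of exposing the scope as a standard UVC device is that the host side needs nothing special; any webcam-capture tool will see it. For instance, a minimal OpenCV snippet along these lines (the device index is a guess) could grab a still of the scope screen:

```python
# Minimal host-side example: once the EZ-USB board enumerates as a standard
# UVC webcam, nothing scope-specific is needed to capture its output.
import cv2

cap = cv2.VideoCapture(1)        # the scope's UVC interface; index may differ
if not cap.isOpened():
    raise RuntimeError("UVC device not found")

ok, frame = cap.read()
if ok:
    cv2.imwrite("scope_screen.png", frame)   # save one still of the display
cap.release()
```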