With the Raspberry Pi and a digital modulator, he’s got the only house on the block that’s wired to show The Simpsons all day. He has absolutely no control over which episode plays next, he can’t pause it, and it’s presented in standard definition: a nightmare for anyone who grew up in the Netflix era, but a familiar viewing experience for the rest of us.
The key to this project is the Channel Plus Model 3025 modulator. It takes the feed from the antenna and mixes in two composite video sources on user-defined channels. All [probnot] had to do was find a channel that wouldn’t interfere with any of the over-the-air stations. The modulator has been spliced into the house’s coax wiring, so any TV connected to the wall can get in on the action. There’s no special setup required: when he wants to watch The Simpsons he just tunes the nearest TV to the appropriate channel.
Providing the video for the modulator is a Raspberry Pi, specifically, the original model that featured composite video output. While the first generation Pi is a bit long in the tooth these days, playing standard definition video is certainly within its capabilities. With a USB flash drive filled with a few hundred episodes and a bit of scripting it’s able to deliver a never-ending stream direct from Springfield. There’s still that second channel available on the modulator as well, which we’re thinking could be perfect for Seinfeld or maybe The X-Files.
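We don’t know exactly what [probnot]’s script looks like, but the basic loop is simple enough to sketch. Here’s a minimal version, assuming the episodes live on a USB drive mounted at /media/usb/simpsons and that omxplayer is handling playback to the Pi’s composite jack (both are our assumptions, not details from the build):

```python
#!/usr/bin/env python3
# Minimal never-ending-channel sketch: shuffle the episode list and
# play each file in turn, forever. Paths and player are assumptions.
import random
import subprocess
from pathlib import Path

EPISODE_DIR = Path("/media/usb/simpsons")  # assumed mount point

while True:
    episodes = sorted(EPISODE_DIR.glob("*.mp4"))  # assumed format
    random.shuffle(episodes)  # no control over which episode is next!
    for episode in episodes:
        # omxplayer sends video straight out the Pi's composite output
        subprocess.run(["omxplayer", "-o", "local", str(episode)])
```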
Stereoscopic vision works by having the brain fuse together what both eyes see, and this process is called binocular fusion. The small differences between what each eye sees mostly convey a sense of depth to us, but DiCE uses some of the quirks of binocular fusion to trick the brain into perceiving enhanced contrast in the visuals. This perceived higher contrast in turn leads to a stronger sense of depth and overall image quality.
To pull off this trick, DiCE displays a different contrast level to each eye in a way designed to encourage the brain to fuse them together in a positive way. In short, using a separate and different dynamic contrast range for each eye yields an overall greater perceived contrast range in the fused image. That’s simple in theory, but in practice there were a number of problems to solve. Chief among them was the fact that if the difference between what each eye sees is too great, the result is discomfort due to binocular rivalry. The hard scientific work behind DiCE came from experimentally determining sweet spots and pre-computing filters, independent of viewer and content, so that they could be applied in real time for a consistent result.
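The actual DiCE filters came out of those perceptual experiments, but the underlying idea is easy to illustrate: give each eye a complementary tone curve, keeping the difference small enough to avoid rivalry. A toy sketch in Python, where the gamma curves and the strength value are our own stand-ins rather than DiCE’s filters:

```python
# Illustrative dichoptic contrast sketch (not DiCE's actual filters):
# each eye gets a different tone curve, so the fused percept spans a
# wider contrast range than either image alone.
import numpy as np

def dichoptic_pair(img, strength=0.3):
    """img: float array in [0, 1]. Returns (left, right) eye views."""
    # One eye gets a brighter rendering (gamma < 1 lifts the shadows)
    left = img ** (1.0 - strength)
    # ...the other a darker one (gamma > 1 deepens the shadows)
    right = img ** (1.0 + strength)
    # Keeping 'strength' modest matters: too large a difference
    # between the eyes triggers binocular rivalry and discomfort.
    return left, right

left, right = dichoptic_pair(np.random.rand(480, 640))
```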
Projects like this are reminders that we experience the world only through the filter of our senses, and that our perception of reality has quirks which can be demonstrated by “sensory fusion” edge cases like the Thermal Grill Illusion, which we saw used as the basis for a replica of the Pain Box from Dune.
With the benefit of decades of advances in miniaturization, looking back at the devices of yore can be entertaining. Take camcorders; did we really walk around with these massive devices resting on our shoulders just to record the family trip to Disney World? We did, but even if those days are long gone, the hardware remains for the picking in closets and at thrift stores.
Those camcorders can be turned into cool things such as this CRT-based virtual reality headset. [Andy West] removed the viewfinders from a pair of defunct Panasonic camcorders from slightly after the “Reggievision” era, leaving their housings and optics as intact as possible. He reverse-engineered the connections and hooked up the composite video inputs to HDMI-to-composite converters, which connect to the dual HDMI ports on a Raspberry Pi 4. An LSM303DLHC accelerometer provides head tracking, and everything is mounted to a bodged headset designed to use a phone for VR. The final build is surprisingly neat for the number of thick cables and large components used, and it bears a passing resemblance to one of those targeting helmets attack helicopter pilots use.
The software is an amalgam of whatever works – Three.js for browser-based 3D animation, some off-the-shelf drivers for the accelerometers, and Python and shell scripts to glue it all together. The video below shows the build and a demo; we don’t get the benefit of seeing what [Andy] is seeing in glorious monochrome SD, but he seems suitably impressed. As are we.
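For the curious, pulling head-tracking data from that accelerometer is straightforward. Here’s a sketch of reading the LSM303DLHC over I2C on the Pi using the smbus2 package; [Andy] used off-the-shelf drivers, so treat the setup below as a generic illustration rather than his code:

```python
# Sketch: raw accelerometer reads from an LSM303DLHC on the Pi's I2C
# bus 1. Register values are from the LSM303DLHC datasheet.
import struct
import time
from smbus2 import SMBus

ACCEL_ADDR = 0x19    # LSM303DLHC accelerometer I2C address
CTRL_REG1_A = 0x20
OUT_X_L_A = 0x28

with SMBus(1) as bus:
    # 0x57 = 100 Hz data rate, normal mode, X/Y/Z axes enabled
    bus.write_byte_data(ACCEL_ADDR, CTRL_REG1_A, 0x57)
    while True:
        # Set the register address MSB for multi-byte auto-increment
        raw = bus.read_i2c_block_data(ACCEL_ADDR, OUT_X_L_A | 0x80, 6)
        x, y, z = struct.unpack("<hhh", bytes(raw))
        # Samples are 12-bit left-justified, ~1 mg/LSB at +/-2 g
        print(x >> 4, y >> 4, z >> 4)
        time.sleep(0.05)
```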
We’ve seen an uptick in projects using CRT viewfinders lately, including this tiny vector display. Time to scour those thrift stores before all the old camcorders are snapped up.
We generally cast a skeptical eye at projects that claim some kind of superlative. If you go on about the “World’s Smallest” widget, the chances are pretty good that someone will point to a yet smaller version of the same thing. But in the case of what’s touted as “The world’s smallest vector monitor”, we’re willing to take that chance.
If you’ve seen any of [Arcade Jason]’s projects before, you’ll no doubt have noticed his abiding affection for vector displays. We’re OK with that; after all, many of the best machines from the Golden Age of arcade games such as Asteroids and Tempest were based on vector graphics. None so small as the current work, though, based as it is on the CRT from an old camcorder’s viewfinder. The tube appears to be about 3/4″ (19 mm) in diameter, and while it still had some of its original circuitry, the deflection coils had to be removed. In their place, [Jason] used a ferrite toroid with two windings, one for vertical and one for horizontal. Those were driven directly by a two-channel push-pull audio amplifier to make patterns on the screen. Skip to 15:30 in the video below to see the display playing [Jerobeam Fenderson]’s “Oscilloscope Music”.
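That’s the same principle that makes oscilloscope music work: any stereo audio signal can drive an X/Y display, with one channel handling horizontal deflection and the other vertical. A quick sketch using numpy and the sounddevice package to generate a Lissajous figure (the frequencies here are arbitrary, not values from [Jason]’s build):

```python
# Generate one second of a stereo Lissajous signal and loop it.
# Left channel = horizontal deflection, right channel = vertical.
import numpy as np
import sounddevice as sd

RATE = 48000
t = np.linspace(0, 1, RATE, endpoint=False)
x = np.sin(2 * np.pi * 200 * t)        # horizontal deflection
y = np.sin(2 * np.pi * 300 * t + 0.5)  # vertical deflection
signal = np.column_stack((x, y)).astype(np.float32)

# Play ten seconds' worth; this feeds the amp driving the coils
sd.play(np.tile(signal, (10, 1)), RATE, blocking=True)
```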
[Ben Cox] found some interesting USB devices on eBay. The Epiphan VGA2USB LR accepts VGA video on one end and presents it as a USB webcam-like video signal on the other. Never have to haul a VGA monitor out again? Sounds good to us! The devices are old and abandoned hardware, but they do claim Linux support, so one BUY button mash later and [Ben] was waiting patiently for them in the mail.
But when they did arrive, the devices didn’t enumerate as a USB UVC video device as expected. The vendor has a custom driver, support for which ended in Linux 4.9 — meaning none of [Ben]’s machines would run it. By now [Ben] was curious about how all this worked and began digging, aiming to create a userspace driver for the device. He was successful, and with his usual detail [Ben] explains not only the process he followed to troubleshoot the problem but also how these devices (and his driver) work. Skip to the end of the project page for the summary, but the whole thing is worth a read.
The resulting driver is not optimized, but will do about 7 fps. [Ben] even rigged up a small web server inside the driver to present a simple interface for the video in a pinch. It can even record its output to a video file, which is awfully handy. The code is available on his GitHub repository, so give it a look and maybe head to eBay for a bit of bargain-hunting of your own.
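If you want to poke at an orphaned USB device yourself, Python’s pyusb makes the skeleton of a userspace driver quite short. The sketch below is generic: the vendor/product IDs and endpoint are placeholders, and the Epiphan’s actual framing protocol (the part [Ben] had to reverse-engineer) is not shown:

```python
# Generic userspace-driver skeleton with pyusb. The IDs and endpoint
# below are hypothetical placeholders -- check `lsusb` for real values.
import usb.core

VID, PID = 0x5555, 0x1234   # hypothetical vendor/product IDs
EP_IN = 0x81                # hypothetical bulk-in endpoint

dev = usb.core.find(idVendor=VID, idProduct=PID)
if dev is None:
    raise SystemExit("device not found")

# Detach any kernel driver so we can claim the interface ourselves
if dev.is_kernel_driver_active(0):
    dev.detach_kernel_driver(0)
dev.set_configuration()

# Pull raw data from the bulk endpoint; a real driver would parse
# the device's framing, reassemble frames, and present them somewhere
# useful -- as [Ben] did with his built-in web interface.
while True:
    chunk = dev.read(EP_IN, 64 * 1024, timeout=1000)
    print(f"read {len(chunk)} bytes")
```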
Today it’s almost always cheaper to buy an imported 3D printer kit than it is to source your own parts and build one yourself. But that doesn’t stop people from doing it anyway. Whether they’re looking for something a bit more solid, or just want to do things their own way, there are still valid reasons to design and build your own machine. Luckily for us in the audience, [Rob Mech] decided to document the build of his custom “LayerFused C201” printer on his YouTube channel.
If you’ve ever dreamed of taking the plunge and building a 3D printer exactly the way you want, but were never able to manage the time, this seven-video series might be the next best thing. Each video takes you through a different step of the construction, from building the frame out of aluminum extrusion all the way to wiring up the endstop switches and the 32-bit SKR v1.3 controller. There’s even a video that introduces the viewer to the concept of a “Frankenstein” printer that uses cobbled together parts just long enough to produce its own final components.
We’ve seen 3D image projection tried in a variety of different ways, but this is a new one to us. This volumetric display by Interact Lab of the University of Sussex creates a 3D image by projecting light onto a tiny foam ball, which zips around in the air fast enough to create a persistence of vision effect. (Video, embedded below.) How is this achieved? With a large array of ultrasonic transducers, performing what researchers call ‘acoustic trapping’.
This is the same principle behind acoustic levitation devices which demonstrate how lightweight objects (like tiny polystyrene foam balls) can be made to defy gravity. But this 3D display is capable of not only moving the object in 3D space, but doing so at a high enough speed and with enough control to produce a persistence of vision effect. The abstract for their (as yet unreleased) paper claims the trapped ball can be moved at speeds of up to several meters per second.
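The core of steering a phased array like this is simple geometry: offset each transducer’s phase by its travel time to the target point, so all the waves arrive in phase and the pressure peaks there. Actual traps layer a phase signature on top of that (a “twin trap”, for instance), but here’s a sketch of just the focusing step, assuming a hypothetical 16×16 array of 40 kHz transducers:

```python
# Phased-array focusing sketch: compute per-transducer phases so the
# emitted waves arrive in phase at a chosen focal point. Array size,
# pitch, and frequency are assumptions, not the researchers' values.
import numpy as np

F = 40e3            # transducer frequency (Hz)
C = 343.0           # speed of sound in air (m/s)
K = 2 * np.pi * F / C

# 16x16 grid of transducers on a 10 mm pitch in the z = 0 plane
xs = (np.arange(16) - 7.5) * 0.010
tx, ty = np.meshgrid(xs, xs)
positions = np.column_stack((tx.ravel(), ty.ravel(), np.zeros(256)))

def focus_phases(target):
    """Per-transducer phase (radians) focusing sound at 'target'."""
    dists = np.linalg.norm(positions - np.asarray(target), axis=1)
    return (-K * dists) % (2 * np.pi)

# Focus 10 cm above the center of the array
phases = focus_phases([0.0, 0.0, 0.10])
```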
It has a few other tricks up its sleeve, too. The array is capable of simultaneously creating sounds as well as providing a limited form of tactile feedback by letting a user touch areas of high and low air pressure created by the transducers. These areas can’t be the same ones being occupied by the speeding ball, of course, but it’s a neat trick. Check out the video below for a demonstration.