No-Battery HD Video Streaming Does It with Backscatter

What if Google Glass didn’t have a battery? That’s not as far-fetched as it sounds. This battery-free HD video streaming camera could be built into a pair of eyeglass frames, sending its feed to a nearby phone or other receiver with no bulky batteries or external power source. Researchers at the University of Washington are using backscatter to pull this off.

The problem is that a camera which streams HD video wirelessly to a receiver consumes over 1 watt, largely because of its digital processor and transmitter. The researchers’ solution is to move that processing hardware into the receiving unit and send the analog pixels from the camera sensor directly to backscatter hardware. Backscatter involves reflecting received radio waves back to where they came from; by modulating the video signal onto those reflected waves, they eliminated the need for the power-hungry transmitter. The full details are in their paper (PDF), but here are the highlights.

Battery-free camera design approach

On the camera side, the pixel voltages (CAM Out) form an analog signal that is fed into a comparator along with a triangular waveform. Wherever the triangle wave’s voltage is lower than the pixel voltage, the comparator outputs a 0; otherwise it outputs a 1. In this way, each pixel voltage is converted to a pulse width. The triangular waveform’s minimum and maximum voltages are chosen to cover the full possible range of the camera’s output.
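
As a rough software model of that comparator stage (with placeholder voltages, not values from the paper), the conversion amounts to slicing each pixel voltage against one period of a triangle wave:

```python
import numpy as np

def pixel_to_pwm(pixel_voltage, v_min=0.2, v_max=1.0, samples=100):
    """Model the comparator: one period of a triangle wave spanning the
    camera's full output range is compared against a held pixel voltage,
    so the time the output stays high encodes brightness as a pulse width."""
    ramp = np.linspace(v_min, v_max, samples // 2)
    triangle = np.concatenate([ramp, ramp[::-1]])        # one triangle period
    # Per the article: 0 while the triangle is below the pixel voltage, else 1
    return (triangle >= pixel_voltage).astype(int)

# A brighter pixel (higher voltage) produces a narrower high pulse
for v in (0.3, 0.6, 0.9):
    print(f"pixel at {v:.1f} V -> duty cycle {pixel_to_pwm(v).mean():.2f}")
```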

The sub-carrier modulation with the XOR gate in the diagram addresses the problem of self-interference: the strong carrier from the transmitter sits at the same frequency as the weak backscattered signal and swamps it at the receiver. The PWM output is therefore XORed with a sub-carrier, shifting it to a different frequency so the receiver can filter out the interference. The XOR gate is actually part of an FPGA, which also inserts frame and line synchronization patterns.
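
A toy version of that mixing step (made-up rates, just to illustrate the idea) is nothing more than an XOR against a faster square wave:

```python
import numpy as np

def subcarrier_mix(pwm_bits, chips_per_bit=8):
    """Toy model of the XOR sub-carrier stage: each PWM sample is XORed
    with a square-wave sub-carrier, moving the backscattered energy away
    from the carrier frequency so self-interference can be filtered out."""
    pwm = np.repeat(np.asarray(pwm_bits), chips_per_bit)
    subcarrier = np.arange(len(pwm)) % 2          # fast 0/1 square wave
    return np.bitwise_xor(pwm, subcarrier)

print(subcarrier_mix([0, 0, 1, 1, 1, 0]))
```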

They tested two different implementations of this circuit design: a 112 × 112 grayscale version running at up to 13 frames per second (fps), and an HD version. Unfortunately, no HD camera on the market gives access to its raw analog pixel outputs, so they took HD video from a laptop over USB, ran it through a DAC, and fed that into their PWM converter. The USB link limited the frame rate to 10 fps.

The result is that video streaming at 720p and 10 fps uses as little as 250 μW and can be backscattered up to sixteen feet. They also simulated an ASIC, which achieved 720p and 1080p at 60 fps using 321 μW and 806 μW respectively. See the video below for an animated explanation and a demonstration. The resulting video is quite impressive for something running on harvested power alone.

If the University of Washington seems familiar in the context of backscatter, that’s because we’ve previously covered their (almost) battery-free cell phone. They’re not the only ones experimenting with the technique, though: here’s where backscatter is being used for a soil network. All of this involves power harvesting, and now’s a great time to start brushing up on these concepts and building your own prototypes, since the Hackaday Prize includes a Power Harvesting Challenge this year.

Continue reading “No-Battery HD Video Streaming Does It with Backscatter”

IKEA Lamp with Raspberry Pi as the Smartest Bulb in the House

We love to hack IKEA products, marvel at Raspberry Pi creations, and bask in the glow of video projection. [Nord Projects] combined these favorite things of ours into Lantern, a name as minimalist as the IKEA lamp it uses. But the result is nearly magic.

The key component in this build is a compact laser-illuminated video projector whose image is always in focus. Lantern’s primary user interface is moving the lamp around to switch between different channels of information projected on different surfaces. It would be a hassle if the user had to refocus after every move, but the focus-free laser projector eliminates that friction.

Lantern’s software detects a user physically changing the lamp’s orientation via an accelerometer. Certain channels project an information overlay on top of a real-world object. Rather than expecting its human user to perform precise alignment, Lantern uses feedback from a Raspberry Pi camera to position the overlay.
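
As a minimal sketch of that orientation-sensing idea (the thresholds and channel names here are invented; the real logic lives in [Nord Projects]’ repository), classifying the gravity vector from the accelerometer is enough to pick a channel:

```python
def pick_channel(accel_g):
    """Pick a projection channel from an accelerometer's gravity vector (in g).
    Thresholds and channel names are hypothetical, purely for illustration."""
    x, y, z = accel_g
    if z > 0.8:          # lamp head pointing down at the table
        return "table-projection"
    if z < -0.8:         # pointing up at the ceiling
        return "ceiling-clock"
    return "wall-info"   # roughly horizontal, projecting onto a wall

print(pick_channel((0.05, 0.10, 0.98)))   # -> table-projection
```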

Speaking of software, Lantern as presented by [Nord Projects] is a showcase project under Google’s Android Things umbrella, which we’ve mentioned before. But there is nothing tying the hardware directly to Google. Since the project is open source, with information on Hackster.io and GitHub, the choice is yours: build one with Google’s stack as they did, write your own software to tie into a different infrastructure (MQTT?), or make a standalone unit with no connectivity at all.
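
For the MQTT route, a minimal sketch of pushing a channel-change event into your own broker (assuming the paho-mqtt 1.x client API and a hypothetical broker address and topic) could be as short as:

```python
import paho.mqtt.client as mqtt

# Hypothetical broker address and topic, just to show the idea
client = mqtt.Client()
client.connect("broker.local", 1883)
client.publish("home/lantern/channel", "calendar")
client.disconnect()
```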

Continue reading “IKEA Lamp with Raspberry Pi as the Smartest Bulb in the House”

Real-Time Polarimetric Imager from 1980s Tech

It’s easy to dismiss decades-old electronics as effectively e-waste. With the rapid advancements and plummeting prices of modern technology, most old hardware is little more than a historical curiosity at this point. For example, why would anyone purchase something as esoteric as 1980s-era video production equipment in 2018? A cheap burner phone could take better images, and if you’re looking to get video into your projects you’d be better off with a webcam or a Raspberry Pi camera module.

But occasionally the old ways of doing things offer possibilities that modern methods don’t. This fascinating white paper from [David Prutchi] describes in intricate detail how a 1982 JVC KY-1900 professional video camera purchased for $50 on eBay was turned into a polarimetric imager. The end result isn’t perfect, but considering such a device would normally carry a ~$20,000 price tag, it’s good enough that anyone looking to explore the concept of polarized video should probably get ready to open eBay in a new tab.

Many readers are likely unfamiliar with the polarimetric imager; it’s not exactly the kind of thing they carry at Best Buy. Put simply, it’s a device that allows the user to visualize the polarization of light in a given scene. [David] is interested in the technology because, among other things, it can be used to detect man-made materials against a natural backdrop, offering a potential method for detecting mines and other hidden explosives. He presented a fascinating talk on the subject at the 2015 Hackaday SuperConference, and DOLpi, his attempt at building a low-cost polarimetric imager with the Raspberry Pi, earned him a fifth-place win in that year’s Hackaday Prize.

While he got good results with his Raspberry Pi solution, it took several seconds to generate a single frame. To be practical, it needed to be much faster. [David] found his solution in an unlikely place: the design of 1980s portable video cameras. These cameras used a dichroic beamsplitter to separate incoming light into red, blue, and green images, each of which was fed to a dedicated sensor by way of mirrors. After replacing the beamsplitter assembly with a new 3D-printed version that integrates polarization filters, each sensor now receives an image polarized at 0, 45, or 90 degrees.

With the modification complete, the camera now generates real-time video that shows the angle of polarization as false color. [David] notes that the color reproduction and resolution are quite poor due to the nature of 30+ year old video technology, but that overall it’s a fair trade-off for running at 30 frames per second.
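
If you want to experiment with the same idea in software, the per-pixel math behind an angle-of-polarization false-color image is just the linear Stokes parameters. Here’s a minimal sketch using the standard textbook formulas (this is not [David]’s code, and real sensors would need gain matching and image registration first):

```python
import numpy as np

def polarization_false_color(i0, i45, i90):
    """Combine 0/45/90-degree intensity images into angle and degree of
    linear polarization, then map the angle onto a 0..1 hue for false color."""
    i0, i45, i90 = (np.asarray(a, dtype=float) for a in (i0, i45, i90))
    s0 = i0 + i90                        # total intensity
    s1 = i0 - i90                        # horizontal vs. vertical
    s2 = 2.0 * i45 - i0 - i90            # +45 vs. -45 degrees
    aolp = 0.5 * np.arctan2(s2, s1)      # angle of linear polarization (rad)
    dolp = np.sqrt(s1**2 + s2**2) / np.maximum(s0, 1e-6)   # 0 = unpolarized
    hue = (aolp + np.pi / 2) / np.pi     # -90..+90 degrees mapped to 0..1
    return hue, dolp, s0

# Tiny example: one pixel fully polarized at 0 degrees, one unpolarized
hue, dolp, _ = polarization_false_color([1.0, 0.5], [0.5, 0.5], [0.0, 0.5])
print(hue, dolp)   # dolp -> [1.0, 0.0]
```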

In another recent project, [David] found a way to hack optics onto a consumer-level thermal imaging camera. It’s becoming abundantly clear that he’s not a big fan of leaving hardware in an unmodified state.

Building A Lightweight Softbox For Better Photography

If you want to take good photographs, you need good light. Luckily for us, you can get reels and reels of LEDs from China for pennies, power supplies are ubiquitous, and anyone can solder up a few LED strips. The missing piece of the puzzle is a good enclosure for all these LEDs, and a light diffuser.

[Eric Strebel] recently needed a softbox for some product shots, and came up with this very cheap, very good lighting solution. It’s made from aluminum so it should handle the rigors of photography, and it’s absolutely loaded with LEDs to get all that light on the subject.

The metal enclosure for this softbox is constructed from sheet aluminum of about 22 gauge, folded on a brake press. This is just about the simplest project you can make with a brake and a sheet of metal, with the tabs of the enclosure held together with epoxy. The box mounts via magnets superglued to the back, which attach it to a track lighting fixture. The 5000 K LED strips are held onto the box with 3M Super 77 spray adhesive, and with that the only thing left to do is wire up all the LED strips in series.
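
If you’re sizing your own version, a quick back-of-the-envelope power budget is worth a minute; the strip figures below are typical catalog numbers, not [Eric]’s actual parts:

```python
# Rough power budget for an LED-strip softbox (illustrative values only)
strip_length_m = 5.0       # total length of LED strip inside the box
watts_per_meter = 14.4     # a common rating for dense 5050-style strips
supply_voltage = 12.0      # typical strip supply voltage

total_watts = strip_length_m * watts_per_meter
supply_amps = total_watts / supply_voltage
print(f"{total_watts:.0f} W total -> budget a supply of at least {supply_amps:.1f} A")
```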

But without some sort of diffuser, this is really only a metal box with some LEDs thrown into the mix. To get an even cast of light on his subject, [Eric] is using drawing vellum attached to the metal frame with white glue. The results are fairly striking, and this is an exceptionally light and sturdy softbox for photography.

Continue reading “Building A Lightweight Softbox For Better Photography”

DIY Planetarium Built From PVC Pipes and Cardboard

When you think about DIY projects, you probably don’t consider building your own planetarium. Why would you? Building the thing is surely outside the capabilities of the individual, and even if you could figure it out, the materials would be far too expensive. There’s a limit to DIY projects, and obviously building a planetarium is on the wrong side of the line. Right?

Well, apparently not. [Gabby LeBeau] has documented the planetarium she built as her senior project, and if you’ll forgive the pun, it’s absolutely out of this world. Using readily available parts and the help of family and friends, she built a fully functional planetarium big enough to seat the Physics Department. No word on what grade she got, but it’s a safe bet she screwed the curve up for the rest of the class.

After two months of research and a couple of smaller proof-of-concept builds, she found a business that graciously allowed her to construct the full-scale planetarium in their warehouse. The frame is made of PVC pipes held together with zip ties. The big advantage of using PVC pipes (beyond being cheap and easy to work with) is that they automatically find a hemispherical shape when bent, saving the time and trouble it would take to create the shape with more rigid building materials.

Once the PVC frame was up, white cardboard panels were cut to shape and attached to the inside. The panels were lined up as closely as possible, but gaps were covered with white tape so the fit didn’t need to be perfect. When the dome was finished, it was lifted and placed on metal trusses to get some room underneath, and finally covered with a black tarp and stage curtain to block out all light.

Of course, she didn’t go through all this trouble just to stick some glow-in-the-dark stars on the inside of the thing. The image from a standard projector is directed at a flat mirror and then bounced off a convex mirror onto the dome. Driving the projector is a laptop running Stellarium. While there were some imperfections she couldn’t polish or clean off of the mirrors, the end result was still very impressive.

Unfortunately, you can’t really do a planetarium justice with a camera, so we aren’t able to see what the final image looked like. But judging by the slack-jawed faces of those who are pictured inside of it, we’re going to go out on a limb and say it was awesome.

We might suggest trying to quiet down the projector or adding some lasers to the mix, but overall this is a truly exceptional project, and we’re jealous of everyone who got to experience it first hand.

One Man’s Quest for a Desktop Spherical Display

[Nirav Patel] is a man on a mission. Since 2011 he has been obsessed with owning a spherical display, the kind of thing you see in museums and science centers, but on a desktop scale. Unfortunately for him, there hasn’t been much commercial interest in this sort of thing as of yet. Up to this point, he’s been forced to hack up his own versions of his dream display.

That is until he heard about the Gakken Worldeye from Japan. This device promised to be exactly what he’s been looking for all these years, and he quickly snapped up two of them: one to use, and one to tear apart. We like this guy’s style. But as is often the case with cheap overseas imports, the device didn’t quite live up to his expectations. Undaunted by the out of the box performance of the Worldeye, [Nirav] has started documenting his attempts to improve on the product.

These displays work by projecting an image onto the inside of a frosted glass or plastic sphere, and [Nirav] notes that the projection sphere on the Worldeye is actually pretty decent. The problem is the electronics, namely the anemic VGA-resolution projector whose output is further cropped down to a 480-pixel circle by the optics. Combined with the low-quality downsampling that squashes the HDMI input, the final image on the Worldeye is underwhelming, to say the least.

[Nirav] decided to rip the original projector out of the Worldeye and replace it with a Sony MP-CL1, capable of a much more respectable 1280×720. He came up with a 3D-printed bracket to hold the MP-CL1 in place, and has put the files up on Thingiverse for anyone who might want to play along at home. The results are better, but unfortunately still not great. [Nirav] thinks the sphere is physically too small to support the higher resolution of the MP-CL1, plus the optics aren’t exactly of the highest quality to begin with. But he’s just glad he didn’t have to build this one from scratch.
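
For a rough sense of why even the projector upgrade only helps so much, it’s worth counting how many pixels actually land on the sphere. The sketch below assumes the optics crop each projector to a circle as tall as its frame (the 480-pixel figure comes from the article; the 720-pixel one is our assumption):

```python
import math

def pixels_in_circle(diameter):
    """Approximate pixel count inside a circular crop of the given diameter."""
    return math.pi * (diameter / 2) ** 2

stock = pixels_in_circle(480)     # original VGA projector, per the article
upgraded = pixels_in_circle(720)  # Sony MP-CL1, assuming a 720-pixel-tall circle
print(f"stock: ~{stock/1e3:.0f}k px, upgraded: ~{upgraded/1e3:.0f}k px "
      f"({upgraded/stock:.2f}x)")
```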

Going back to our first coverage of his DIY spherical display in 2012, we have to say his earliest attempts are still very impressive. It looks like this is a case of the commercial market struggling to keep up with the work of independent hackers.

FPGA Makes ASCII Video

Human beings like pictures, which is probably why there’s the old adage “A picture’s worth a thousand words.” We take computer graphics output for granted now, but even in the earliest days of Teletypes and line printers, there was artwork made from characters, ranging from Snoopy to Spock. [Wenting Z] continues the tradition with an FPGA design that converts VGA video to ASCII art and outputs it via DVI.

The design targets a Xilinx Virtex device and uses only about 500 LUTs (look-up tables), which is not much at all. You can see a video of the device in action below, including an overlay of the source video.
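
The heart of the trick is mapping the brightness of each character-sized block of incoming video to a glyph of roughly matching density. The FPGA does that per cell in hardware; here’s a minimal software sketch of the same mapping (a dark-background character ramp, nothing taken from [Wenting Z]’s design):

```python
import numpy as np

RAMP = " .:-=+*#%@"   # characters ordered from least to most "ink"

def ascii_art(gray, cols=60):
    """Downsample a grayscale image (0..255) to a character grid and map
    each cell's brightness to a character; brighter pixels get denser glyphs."""
    gray = np.asarray(gray, dtype=float)
    rows = max(1, int(gray.shape[0] * cols / gray.shape[1] * 0.5))  # chars are tall
    ys = np.linspace(0, gray.shape[0] - 1, rows).astype(int)
    xs = np.linspace(0, gray.shape[1] - 1, cols).astype(int)
    cells = gray[np.ix_(ys, xs)]                  # nearest-neighbour downsample
    idx = (cells / 255 * (len(RAMP) - 1)).astype(int)
    return "\n".join("".join(RAMP[i] for i in row) for row in idx)

# Synthetic horizontal gradient, just to show the mapping
print(ascii_art(np.tile(np.linspace(0, 255, 160), (40, 1))))
```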

We tend to think of art like this as a computer phenomenon, but [Flora Stacey] created a butterfly on a typewriter in 1898, and ham radio operators were making art with paper tape through the last half of the twentieth century. Even earlier, in 1865, Alice in Wonderland had a passage typeset to suggest a mouse’s tail. Perhaps the pinnacle is the famous ASCII version of Star Wars.

This is decidedly less mechanical than some of the other ASCII art projects we’ve seen. If you have a taste for more text art, have a look at some other examples, including a very old advertisement that uses character art.

Continue reading “FPGA Makes ASCII Video”