A Retro Camcorder Upgraded As A Raspberry Pi HQ Camera

In 2020, when we carry an all-purpose computer and data terminal able to store our every thought and deed on a global computer network, it’s easy to forget that once upon a time we were excited by simpler things. Take the camcorder, for example: back in the 1990s, the idea of a complete video recording solution that captured moving images on tape cartridges and fit in the palm of your hand was a very big deal indeed, and camcorders, as we called them in those innocent times, were a prized possession. Now they’re a $0.50 find at Goodwill, which is how [Dustin] picked up the RCA camcorder he’s converting into something altogether more modern. He’s gutted it and upgraded it, removing the analogue innards and retaining only the case and lens assembly to put around a Raspberry Pi and its associated HQ camera module.

Opening the camcorder up reveals a ton of miniaturised analogue circuitry, but once the original assemblies are removed it’s relatively straightforward to put the Pi camera on the rear of the lens unit. There’s plenty of space for the Pi in the box, and he’s putting a touchscreen on the outside.
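For reference, getting a first image out of the relocated sensor takes only a few lines of Python with the standard picamera library. This is a minimal sketch rather than [Dustin]’s own code; the 4056×3040 figure is the HQ module’s native resolution, while the delay and filename are arbitrary:

```python
# Minimal still capture from the Raspberry Pi HQ camera module using the
# standard picamera library. Not from the project itself; the resolution
# is the HQ sensor's native 4056x3040, everything else is illustrative.
from time import sleep

import picamera

with picamera.PiCamera() as camera:
    camera.resolution = (4056, 3040)
    camera.start_preview()   # handy once a viewfinder screen is wired in
    sleep(2)                 # let the sensor settle its auto-exposure
    camera.capture('frame.jpg')
    camera.stop_preview()
```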

Sadly the camcorder’s original tiny CRT is no longer working, else that would have been the ultimate retro viewfinder. Still, we hope to see some tinkering on that part of the project, since those little CRTs make for delightful hacks. The project is very much a work in progress, but it should serve as a reminder that these once-ubiquitous devices are now in the realm of the throwaway.

This isn’t the first such conversion we’ve seen with a Raspberry Pi; the original camera module is a handy fit for an 8mm movie camera.

Killing Mosquitoes With Cardi B

Keeping a bird bath or a pond in your yard is a great way to add ambiance and style, but both of these things can be a haven for mosquitoes. Popular methods of getting rid of them often involve harsh pesticides, but [Shane] has brought us a more environmentally-friendly way of taking care of these disease-carrying insects by looping a Cardi B playlist underwater, killing the mosquito larvae.

While the playlist does include some other favorites such as “Baby Shark”, and the build would probably work with any song (or audio of sufficient volume), it’s still pretty interesting. It’s based on a 555 timer circuit which originally powered an ultrasonic sound gun, repurposed for this build. The ultrasonic modules were replaced with piezo modules, which were waterproofed with silicone. The sound produced vibrates at a frequency which resonates with the mosquito larvae and is fatal to them. [Shane] put the build into a small boat which can be floated in any pond, bird bath, horse trough, or water feature.
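For the curious, the relevant math for a 555 in astable mode is textbook stuff: the output frequency is 1.44 / ((R1 + 2R2) × C). Here’s that calculation as a quick Python sketch, with component values that are illustrative guesses of ours rather than anything from [Shane]’s build:

```python
# 555 astable frequency, f = 1.44 / ((R1 + 2*R2) * C).
# Component values below are illustrative guesses, not from the build.
R1 = 10_000   # ohms
R2 = 47_000   # ohms
C = 10e-9     # farads

f = 1.44 / ((R1 + 2 * R2) * C)
duty = (R1 + R2) / (R1 + 2 * R2)   # fraction of each cycle spent high

print(f"Output: {f:.0f} Hz at {duty:.0%} duty cycle")   # ~1385 Hz, ~55%
```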

The major caveat to this build is that it may be damaging to other beneficial animals such as fish or frogs, so he suggests limiting its use to uninhabited stagnant water. Either way, though, it’s a pretty unique way of taking care of a mosquito problem, not unlike another build which takes care of these insects in water in a slightly different way.


Sufficiently Advanced Technology And Justice

Imagine that you’re serving on a jury, and you’re given an image taken from a surveillance camera. It looks pretty much like the suspect, but the image has been “enhanced” by an AI from the original. Do you convict? How does this weigh out on the scales of reasonable doubt? Should you demand to see the original?

AI-enhanced, upscaled, or otherwise modified images are tremendously realistic. But what they’re showing you isn’t reality. When we wrote about this last week, [Denis Shiryaev], one of the authors of one of the methods we highlighted, weighed in via the comments to point out that these modifications aren’t “restorations” of the original. They might add incredibly fine detail, for instance, but they don’t recreate or restore reality. The neural net creates its own reality, out of the millions and millions of faces that it’s learned.

And for the purposes of identification, that’s exactly the problem: the facial features of millions of other people have been used to increase the resolution. Can you identify the person in the pixelized image? Can you identify that same person in the resulting up-sampling? If the question put before the jury was “is the defendant a former president of the USA?” you’d answer the question differently depending on which image you were presented. And you’d have a misleading level of confidence in your ability to judge the AI-retouched photo. Clearly, informed skepticism on the part of the jury is required.

Unfortunately, we’ve all seen countless examples of “zoom, enhance” in movies and TV shows being successfully used to nab the perps and nail their convictions. We haven’t seen nearly as much detailed analysis of how adversarial neural networks create faces out of a scant handful of pixels. This, combined with the almost magical resolution of the end product, would certainly sway a jury of normal folks. On the other hand, the popularity of intentionally misleading “deep fakes” might help educate the public to the dangers of believing what they see when AI is involved.

This is just one example, but keeping the public interested in and educated on the deep workings and limitations of the technology that’s running our world is more important than ever before. Some of the material is truly hard, though. How do we separate the science from the magic?

ARM And X86 Team Up In No Compromise Cyberdeck

Over the last couple of years the cyberdeck community has absolutely exploded. Among those who design and build these truly personal computers there are no hard rules, save perhaps making sure the final result looks as unconventional as possible. But one thing that’s remained fairly consistent is the fact that these machines are almost exclusively powered by the Raspberry Pi. Unfortunately, that means they often leave something to be desired in terms of raw performance.

But [MSG] had a different idea. His cyberdeck still has the customary Raspberry Pi inside, but it also has an i7 Intel NUC that can be fired up at the touch of a button. He says it’s the best of both worlds: an energy-efficient ARM Linux platform for mobile experimentation, and a powerful x86 Windows box for playing games and working from home. It’s the hacker equivalent of business in the front, party in the back.

With a KVM connected to the custom Planck 40% mechanical keyboard and seven inch LCD, [MSG] can switch between both systems on the fly. Assuming he’s got the juice, anyway: while the Raspberry Pi 4 and LCD are able to run on a pair of 18650 batteries, the cyberdeck needs to be plugged in if he wants to use the power-hungry NUC. If he ditched the Pi he could potentially load up the case with enough batteries to get the Intel box spun up, but that would be getting a little too close to a conventional laptop.

The whole plurality theme doesn’t stop at the computing devices, either. In addition to the primary LCD, there’s also a 2.13 inch e-paper display and a retro-style LED matrix courtesy of a Pimoroni Micro Dot pHAT. With a little Python magic behind the scenes, [MSG] is able to display things like the system temperature, time, and battery percentage even when the LCD is powered down.
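Pimoroni publishes a Python library for the Micro Dot pHAT, so a status ticker like that is only a handful of lines. Here’s a minimal sketch in that spirit, not [MSG]’s actual script; the sysfs path is the standard Raspberry Pi CPU temperature node, and the five-second cadence is an assumption:

```python
# Minimal status ticker for the Pimoroni Micro Dot pHAT, alternating CPU
# temperature and the time. A sketch in the spirit of the build, not its code.
import time

from microdotphat import clear, show, write_string


def cpu_temp_c():
    # Standard Raspberry Pi sysfs node, reported in millidegrees Celsius.
    with open('/sys/class/thermal/thermal_zone0/temp') as f:
        return int(f.read()) / 1000.0


while True:
    clear()
    write_string(f'{cpu_temp_c():.0f}C', kerning=False)
    show()
    time.sleep(5)
    clear()
    write_string(time.strftime('%H:%M'), kerning=False)
    show()
    time.sleep(5)
```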

In a post on the aptly-named Cyberdeck Cafe, [MSG] talks about how seeing the VirtuScope built by [bootdsc] inspired him to start working towards his own personal deck, and where he hopes to take the idea from here. The unique USB expansion bay behind the screen holds particular promise, and it sounds like a few add-on modules are already in the works. But of course, it wouldn’t be a true cyberdeck if it wasn’t constantly being improved and redesigned. Come to think of it, that makes at least two rules to live by in this community.

Fans Add Reality To Virtual Driving

A few decades ago you might have been satisfied with a crude wireframe flight simulator or driving a race car with the WASD keys. Today, gamers expect more realism, and [600,000 milliliters] is no different. At first, he upgraded his race car driving chair and put on VR goggles. But watching the world whiz by in VR, you still can’t feel the wind on your face. Armed with a 3D printer, some software, and some repurposed PC fans, he can now feel the real wind in virtual reality. You can see the build in the video below.

The electronics are relatively straightforward, and there is already software available. The key, though, is the giant 3D-printed ducts that direct the airflow. These are big prints, probably too big for some printers, but printers are getting bigger every day. The fan parts are from Thingiverse, but the enclosures are custom and you can download them from the blog post.
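The usual pattern for builds like this is a host-side script that reads the simulator’s telemetry and streams a fan duty cycle to a microcontroller over serial. Purely as a hypothetical sketch of that host side (the port name, baud rate, one-byte protocol, and get_speed_kmh() telemetry hook are all placeholders, not details from this build):

```python
# Hypothetical host-side sketch: map in-game speed to a fan duty cycle
# and stream it over serial to whatever drives the fans. The port, baud
# rate, protocol, and telemetry hook are all placeholders.
import time

import serial  # pyserial


def get_speed_kmh():
    # Placeholder for a real telemetry read (shared memory, UDP, etc.).
    return 120.0


TOP_SPEED = 300.0  # km/h that maps to full fan power

with serial.Serial('/dev/ttyUSB0', 115200, timeout=1) as port:
    while True:
        duty = min(get_speed_kmh() / TOP_SPEED, 1.0)
        port.write(bytes([int(duty * 255)]))  # one duty byte per update
        time.sleep(0.05)                      # roughly 20 updates a second
```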


Sierpinski PCB Christmas Tree

It’s holiday time again! And that means it’s time to break out the soldering iron and the RGB LEDs! If you’re going to make a custom PCB to put those LEDs on, you’ll notice that you get a few copies of your PCB in your order, so you might as well design it such that you can combine them all into a single Sierpinski Christmas Tree, just like [Landon Carter] did.

Each PCB “tree” has three connections which can be used as either inputs or outputs by soldering one of two bridge connections on the PCB. Power and signal go up and down through the tree rather than across, so there is one connection at the top of the tree and two at the bottom. This way, each tree in the triangle can easily be connected to the next, and each triangle can easily be connected to another. Each individual tree has three WS2812b-mini addressable RGB LEDs, and the whole assembly is controlled by an external Arduino.
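Since each board carries three LEDs and every generation of the triangle triples the board count, the length of the resulting daisy chain grows as 3 × 3^n. A quick Python illustration of that scaling (the depths shown are just examples):

```python
# Sierpinski tree sizing: each PCB carries 3 LEDs and every generation
# triples the board count, so depth n means one chain of 3 * 3**n LEDs.
for depth in range(4):
    boards = 3 ** depth
    leds = 3 * boards
    print(f'depth {depth}: {boards:2d} boards, {leds:2d} LEDs in the chain')
```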

The first order of 10 PCBs came in, which makes a 9-member tree – next up is a 27-member tree. After that, you’re going to need some pretty high vaulted ceilings in order to put these on the wall. On the upside, though, once the holidays are over, everything can be easily disconnected and packed away with the rest of the decorations. If you, too, are interested in RGB LED decorations, there are a few on the site for your perusal.

The Protein Folding Break-Through

Researchers at DeepMind have proudly announced a major breakthrough in predicting static folded protein structures with a new program known as AlphaFold 2. Protein folding has been an ongoing problem for researchers since 1972, when Christian Anfinsen speculated in his Nobel Prize acceptance speech that the three-dimensional structure of a given protein should be algorithmically determinable from the one-dimensional DNA sequence that describes it. When you hear protein, you might think of muscles and whey powder, but the proteins mentioned here are chains of amino acids that fold into complex shapes. Cells use these proteins for almost everything. Many of the enzymes, antibodies, and hormones inside your body are folded proteins. We’ve discussed why protein folding is important, as well as covered recent advancements in cryo-electron microscopy used to experimentally determine the structure of folded proteins.

The shape of proteins largely controls their function, and if we can predict their shape then we get much closer to predicting how they interact. AlphaFold 2 only predicts the static state; given the sheer number of interactions that can change a protein, dynamic protein structures are still out of reach. Even so, DeepMind’s technical achievement cannot be overstated: for a typical protein, there are an estimated 10^300 different configurations.
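That figure is easiest to appreciate as a back-of-the-envelope count, with illustrative numbers of our own rather than DeepMind’s: if a chain has roughly $N = 300$ residues and each residue can adopt around $k = 10$ distinct conformations, the number of possible configurations is

$$k^{N} = 10^{300}.$$

Checking those at even a billion configurations per second would take unimaginably longer than the age of the universe, which is why brute force was never an option.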

Out of the 180 million protein sequences in the protein database, only 170,000 have had their structures identified. Technologies like the cryo-electron microscope make the process of mapping their structure easier, but it is still complex and tedious to go from sequence to structure. AlphaFold 2 and other folding algorithms are tested against this 170,000-member corpus to determine their accuracy. The previous highest-scoring algorithm, in 2016, had a median global distance test (GDT) score of 40 (on a 0–100 scale, with 100 being best) in the most difficult category (free modeling). In 2018, AlphaFold made waves by pushing that up to the high 50s. AlphaFold 2 brings that GDT up to 87.
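The GDT score itself is simple to compute once a predicted structure has been superimposed on the experimental one: average, over distance cutoffs of 1, 2, 4, and 8 ångströms, the fraction of residues whose alpha-carbons land within each cutoff. A compact sketch, assuming the two coordinate sets are already optimally aligned:

```python
# GDT_TS sketch: given (n_residues, 3) arrays of alpha-carbon coordinates
# for a prediction and the experimental structure, already superimposed,
# average the in-cutoff fractions at 1, 2, 4, and 8 angstroms. Range 0-100.
import numpy as np


def gdt_ts(predicted, experimental):
    dist = np.linalg.norm(predicted - experimental, axis=1)
    return 100 * np.mean([(dist <= cutoff).mean() for cutoff in (1, 2, 4, 8)])
```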

At this point in time, it is hard to determine what sort of effects this will have on the drug industry, healthcare, and society in general. Research has always been done to create the protein, identify what it does, then figure out its structure. AlphaFold 2 represents an avenue towards doing that whole process completely backward. Whether the next goal is to map all the proteins encoded in the human genome or find new, more effective drug treatments, we’re quite excited to see what becomes of this landmark breakthrough.
