Bug Eliminator Zaps With A Laser

Mosquitoes tend to be seen as an almost universal negative, at least in the lives of humans. While they serve as a food source for plenty of other animals and may even pollinate some plants, they also carry diseases like malaria and Zika, not to mention the itchy bites. Various mosquito deterrents have been invented over the years to solve some of these problems, but one of the more interesting ones is this project by [Ildaron] which attempts to build a mosquito-tracking laser.

The device uses a neural network to identify mosquitoes flying nearby. Once a mosquito is detected, a laser is aimed at it and activated in order to “thermally neutralize” the pest. The control system, along with the neural network inference, runs on a Raspberry Pi paired with a Jetson Nano, which gives the build plenty of computing power. The only major downside with this specific project is that the high-powered laser can be harmful to humans as well.
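To get a feel for how the pieces fit together, here is a minimal sketch of what such a detect-and-aim loop could look like in Python, assuming an OpenCV camera feed, a detector running on the Jetson, and a pan/tilt servo mount plus laser driver hung off the Pi’s GPIO. The pin numbers and the `detect_mosquito` stub are placeholders for illustration and are not taken from [Ildaron]’s repository.

```python
# A minimal sketch of a detect-and-aim loop -- not [Ildaron]'s actual code.
import cv2
from gpiozero import AngularServo, LED

pan = AngularServo(17, min_angle=-45, max_angle=45)   # placeholder GPIO pins
tilt = AngularServo(18, min_angle=-45, max_angle=45)
laser = LED(27)   # stand-in for the laser driver; a real build needs interlocks

def detect_mosquito(frame):
    """Placeholder for the neural network detector running on the Jetson.

    Returns the target's (x, y) pixel coordinates, or None if nothing is found.
    """
    return None

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    target = detect_mosquito(frame)
    if target is None:
        laser.off()
        continue
    x, y = target
    h, w = frame.shape[:2]
    # Crude proportional aiming: map the pixel offset from center to servo angles.
    pan.angle = (x / w - 0.5) * 90
    tilt.angle = (y / h - 0.5) * 90
    laser.on()
```

Anything resembling a real laser-safety scheme is, of course, missing from this ten-thousand-foot view.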

Ideally, a market for devices like these would bring the price down, perhaps even through the use of something like an ASIC specifically developed for these mosquito-targeting machines. In the meantime, [Ildaron] has made this project available for replication on his GitHub page. We have also seen similar builds before which are effective against non-flying insects, so it seems like only a matter of time before there is more widespread adoption — either that or Judgement day!

Continue reading “Bug Eliminator Zaps With A Laser”

Need A Snack From Across Town? Send Spot!

[Dave Niewinski] clearly knows a thing or two about robots, judging from his YouTube channel. Usually the projects involve robot arms mounted on some sort of wheeled platform, but this time it’s the turn of some pretty famous yellow robot legs, in the shape of Spot from Boston Dynamics. The premise is simple — tell the robot what snacks you want, entirely by voice command, and off he goes to fetch them. But we’re not talking about navigating to the fridge in the same room. We’re talking about trotting out the front door, down the street, and crossing roads to visit a favorite restaurant. Spot will order the snacks and bring them back, fully autonomously.

Spot’s depth cameras provide localized navigation and object avoidance information
Local AI vision system handles avoiding those pesky moving objects

There are multiple things going on here, all of which are pretty big computational tasks. Firstly, there is no cloud-based voice control, a la Google Assistant or Alexa. The robot works on the premise of full autonomy, which means no internet connectivity for any aspect. All voice recognition, voice-to-text, and speech synthesis are performed locally using the NVIDIA Riva GPU-based AI speech SDK, running on the NVIDIA Jetson AGX Orin carried on Spot’s back. A front-facing webcam supplies the audio feed for this. The voice recognition application listens for the wake phrase, then turns the snack order into text for later replay when it gets to the destination.

Navigation is taken care of with a MicroStrain RTK GNSS module, which has all the needed robustness, such as dual antennas and inertial fallback for those regions with a spotty signal. Navigation is no use out in the real world on its own, though, which is where Spot’s depth cameras come in. These enable local obstacle avoidance, as per the usual Spot behavior we’ve all seen before. But what about crossing the road without getting tens of thousands of dollars of someone else’s hardware crushed by a passing truck? Spot’s onboard streaming cameras are fed into NVIDIA’s DashCamNet AI model, which enables real-time recognition of moving obstacles such as cars, humans, and anything else that might be wandering around and get in the way. The rough shape of the whole pipeline is sketched below.

All in all, a cool project showing the future potential of AI in robotics for important tasks, like fetching me a beer when I most need it, even if it comes from the local corner shop.
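Stripped of all the hard parts, that pipeline might look something like the Python below. Every function here is a placeholder standing in for a real subsystem (Riva ASR/TTS, the GNSS navigation stack, the depth cameras and DashCamNet-style detector); none of it is [Dave Niewinski]’s actual code, and the wake phrase and waypoint list are invented for illustration.

```python
# High-level sketch of the snack run -- every function is a placeholder for a
# real subsystem, and none of this is the actual Riva or Spot SDK code.
import time

WAKE_PHRASE = "hey spot"        # assumed wake phrase, purely for illustration
ROUTE = ["corner", "crossing", "restaurant_door"]   # stand-ins for GNSS waypoints

def listen():          return ""      # local ASR (NVIDIA Riva) -> transcript
def speak(text):       pass           # local TTS, replayed at the counter
def goto(waypoint):    pass           # RTK GNSS + inertial navigation step
def path_is_clear():   return True    # depth cameras + moving-object detector

def snack_run():
    # 1. Wait for the wake phrase, then capture the next utterance as the order.
    while WAKE_PHRASE not in listen().lower():
        pass
    order = listen()

    # 2. Walk the waypoints, pausing whenever something moving is in the way.
    for waypoint in ROUTE:
        while not path_is_clear():
            time.sleep(0.5)
        goto(waypoint)

    # 3. Replay the order out loud, then retrace the route home with the goods.
    speak(f"Hello! Could I please get {order}?")
    for waypoint in reversed(ROUTE):
        while not path_is_clear():
            time.sleep(0.5)
        goto(waypoint)
```

The interesting bits are, of course, hiding inside those four stubs; the point is simply that the whole loop runs locally, with nothing phoning home.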

We love robots around here. Robots can mow your lawn, navigate inside your house with a little help from invisible QR Codes, even help out with growing your food. The robot-assisted future, long promised, may now be looking more like the present.

Continue reading “Need A Snack From Across Town? Send Spot!”

Researchers Build Neural Networks With Actual Neurons

Neural networks have become a hot topic over the last decade, put to work on jobs from recognizing image content to generating text and even playing video games. However, these artificial neural networks are essentially just piles of maths inside a computer, and while they are capable of great things, the technology hasn’t yet shown the capability to produce genuine intelligence.

Cortical Labs, based down in Melbourne, Australia, has a different approach. Rather than rely solely on silicon, their work involves growing real biological neurons on electrode arrays, allowing them to be interfaced with digital systems. Their latest work has shown promise that these real biological neural networks can be made to learn, according to a pre-print paper that is yet to go through peer review.
Continue reading “Researchers Build Neural Networks With Actual Neurons”

OpenGL Machine Learning Runs On Low-End Hardware

If you’ve looked into GPU-accelerated machine learning projects, you’re certainly familiar with NVIDIA’s CUDA architecture. It also follows that you’ve checked the prices online, and know how expensive it can be to get a high-performance video card that supports this particular brand of parallel programming.

But what if you could run machine learning tasks on a GPU using nothing more exotic than OpenGL? That’s what [lnstadrum] has been working on for some time now, as it would allow devices as meager as the original Raspberry Pi Zero to run tasks like image classification far faster than they could using their CPU alone. The trick is to break down your computational task into something that can be performed using OpenGL shaders, which are generally meant to push video game graphics.

An example of X2’s neural net upscaling.

[lnstadrum] explains that OpenGL releases from the last decade or so actually include so-called compute shaders specifically for running arbitrary code. But unfortunately that’s not an option on boards like the Pi Zero, which only meets the OpenGL for Embedded Systems (GLES) 2.0 standard from 2007.

Constructing the neural net in such a way that it would be compatible with these more constrained platforms was much more difficult, but the end result has far more interesting applications to show for it. During tests, both the Raspberry Pi Zero and several older Android smartphones were able to run a pre-trained image classification model at a respectable rate.
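To give a flavor of what “compute on a 2007-era graphics pipeline” means in practice: without compute shaders, each layer has to be phrased as drawing into a texture, with a plain fragment shader doing the arithmetic for one output pixel at a time. The GLSL below is an illustrative GLES 2.0-style 3x3 convolution, held in a Python string as you might feed it to a GL binding; the uniform names and packing scheme are made up for the example and are not Beatmup’s actual shaders.

```python
# A GLES 2.0-flavored fragment shader for a 3x3 convolution, held as a Python
# string. Uniform names and the texture layout are illustrative, not Beatmup's.
CONV3X3_FRAGMENT_SHADER = """
precision mediump float;
uniform sampler2D u_input;      // input feature map packed into an RGBA texture
uniform vec2 u_texel;           // 1.0 / texture size
uniform float u_kernel[9];      // convolution weights, row-major
varying vec2 v_uv;

void main() {
    vec4 acc = vec4(0.0);
    for (int dy = -1; dy <= 1; dy++) {
        for (int dx = -1; dx <= 1; dx++) {
            vec2 offset = vec2(float(dx), float(dy)) * u_texel;
            acc += texture2D(u_input, v_uv + offset) * u_kernel[(dy + 1) * 3 + (dx + 1)];
        }
    }
    gl_FragColor = acc;   // one output pixel = one small slice of the layer's math
}
"""
```

Packing four channels into a texture’s RGBA components is what makes this approach even remotely efficient, which is presumably where much of the real cleverness in the framework lives.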

This isn’t just some thought experiment: [lnstadrum] has released an image processing framework called Beatmup built on these concepts that you can play around with right now. The C++ library has Java and Python bindings and, according to the documentation, should run on pretty much anything. Included in the framework is a simple tool called X2 which can perform AI image upscaling on everything from your laptop’s integrated video card to the Raspberry Pi, making it a great way to check out this fascinating application of machine learning.

Truth be told, we’re a bit behind the ball on this one, as Beatmup made its first public release back in April of this year. It might have flown under the radar until now, but we think there’s a lot of potential for this project, and hope to see more of it once word gets out about the impressive results it can wring out of even the lowliest hardware.

[Thanks to Ishan for the tip.]

Thought Control Via Handwriting

Computers haven’t done much for the quality of our already poor handwriting. However, a man paralyzed by an accident can now feed input into a computer by simply thinking about handwriting, thanks to work by Stanford University researchers. Compared to more cumbersome systems based on eye motion or breath, the handwriting technique enables entry at up to 90 characters a minute.

Currently, the feat requires a lab’s worth of equipment, but it could be made practical for everyday use with some additional work and — hopefully — less invasive sensors. In particular, the setup uses two microelectrode arrays implanted in the precentral gyrus region of the brain. When the subject thinks about writing, recognizable patterns appear in the collected data. The rest is just math and classification using a neural network.
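If you just want a feel for the classification end of the problem, a toy version looks something like the snippet below: binned firing-rate features in, character labels out. The array shapes and the plain scikit-learn classifier are stand-ins; the actual work decodes continuous neural activity with a recurrent network, which is a much harder problem than this makes it look.

```python
# Toy sketch of the classification step: binned firing rates in, characters out.
# Shapes and the classifier are illustrative; the real decoder is a recurrent
# network running on continuous data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Stand-in data: 500 attempted characters, 192 electrodes x 10 time bins each.
X = rng.normal(size=(500, 192 * 10))
y = rng.integers(0, 31, size=500)      # placeholder character classes

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"held-out accuracy: {clf.score(X_test, y_test):.2f}")   # ~chance on random data
```

Swap the random arrays for the real recordings linked above and you have a starting point for poking at the data yourself.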

If you want to try your hand at processing this kind of data and don’t have a set of electrodes to implant, you can download nearly eleven hours of data already recorded. The code is out there, too. What we’d really like to see is some easier way to grab the data to start with. That could be a real game-changer.

More traditional input methods using your mouth have been around for a long time. We’ve also looked at work that involves moving your head.

Mind-Controlled Flamethrower

Mind control might seem like something out of a sci-fi show, but like the tablet computer, universal translator, or virtual reality device, it is actually a technology that has made it into the real world. While these devices often require advanced and expensive equipment to interpret brain waves properly, with the right machine learning system it’s possible to do things like this mind-controlled flamethrower on a much smaller budget. (Video, embedded below.)

[Nathaniel F] was already experimenting with brain-computer interfaces and machine learning, and wanted to see if he could build something practical combining the two technologies. Instead of turning to a medical-grade EEG machine to read brain patterns, he picked up a much less expensive Mindflex and paired it with a machine learning system running TensorFlow to make up for some of its shortcomings. The processing is done by a Raspberry Pi 4, which sends commands to an Arduino to fire the flamethrower when it detects the proper thought patterns. Don’t forget the flamethrower part of this build either: it was designed and built entirely by [Nathaniel F] as well, using gas and an arc lighter.
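The glue between those parts might look roughly like the following, assuming the Mindflex’s NeuroSky board has been tapped for a serial stream and the flamethrower’s Arduino fires on a single-byte command. The port names, baud rates, threshold, and the `fire_classifier.h5` model are all invented for the example; this is not [Nathaniel F]’s code.

```python
# Hypothetical glue between a hacked Mindflex and the flamethrower's Arduino.
# Ports, baud rates, the model file, and the threshold are all placeholders.
import numpy as np
import serial
import tensorflow as tf

eeg = serial.Serial("/dev/ttyUSB0", 57600)       # tapped Mindflex serial output
arduino = serial.Serial("/dev/ttyACM0", 9600)    # fires on a single-byte command
model = tf.keras.models.load_model("fire_classifier.h5")   # assumed pre-trained

WINDOW = 10                                      # samples per decision
buffer = []

def read_sample():
    """Placeholder parser -- the real headset speaks NeuroSky's binary protocol."""
    line = eeg.readline().decode(errors="ignore").strip()
    try:
        return [float(v) for v in line.split(",")]
    except ValueError:
        return []

while True:
    sample = read_sample()
    if not sample:
        continue
    buffer.append(sample)
    if len(buffer) < WINDOW:
        continue
    features = np.array(buffer[-WINDOW:], dtype=np.float32)[np.newaxis, ...]
    p_fire = float(model.predict(features, verbose=0)[0, 0])
    if p_fire > 0.9:                             # stay conservative with fire
        arduino.write(b"F")                      # Arduino side opens the valve
    buffer.pop(0)
```

In the real build, the hard part was collecting enough labelled training data to make a model like that worth trusting anywhere near an open flame.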

The build took many hours of training to gather enough data for the neural network, and it works as the proof of concept he was hoping for, though [Nathaniel F] notes that it could be improved by replacing the outdated Mindflex with a better EEG. For now though, we appreciate seeing sci-fi in the real world in projects like this, or in other mind-controlled projects like this one which converts a prosthetic arm into a mind-controlled music synthesizer.

Continue reading “Mind-Controlled Flamethrower”

Imaging The Past With Time-Travel Rephotography

Have you ever noticed that people in old photographs look a bit weird? Deep wrinkles, sunken cheeks, and exaggerated blemishes are commonplace in photos taken up to the early 20th century. Surely not everybody looked like this, right? Maybe it was an odd makeup trend — was it just a fashionable look back then?

Not quite — it turns out that the culprit here is the film itself. The earliest glass-plate emulsions used in photography were only sensitive to the highest-frequency light, that which fell in the blue to ultraviolet range. Perhaps unsurprisingly, when combined with the fact that humans have red blood, this posed a real problem. While some of the historical figures we see in old photos may have benefited from an improved skincare regimen, the primary source of their haunting visage was that the photographic techniques available at the time were simply incapable of capturing skin properly. This led to the sharp creases and dark lips we’re so used to seeing.

Of course, primitive film isn’t the only thing separating antique photos from the 42 megapixel behemoths that your camera can take nowadays. Film processing steps had the potential to introduce dust and other blemishes to the image, and over time the prints can fade and age in a variety of ways that depend upon the chemicals they were processed in. When rolled together, all of these factors make it difficult to paint an accurate portrait of some of history’s famous faces. Before you start to worry that you’ll never know just what Abraham Lincoln looked like, you might consider taking a stab at Time-Travel Rephotography.

Amazingly, Time-Travel Rephotography is a technique that actually lives up to how cool its name is. It uses a neural network (specifically, the StyleGAN2 framework) to take an old photo and project it into the space of high-res modern photos the network was trained on. This allows it to perform colorization, skin correction, upscaling, and various noise reduction and filtering operations in a single step which outputs remarkable results. Make sure you check out the project’s website to see some of the outputs at full-resolution.
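At the core of that projection step is an optimization loop: start with a latent code, render it through the generator, and nudge it until the render (after simulating the old camera’s limited, blue-heavy response) matches the antique input. The PyTorch-flavored sketch below shows that loop with the generator, perceptual loss, and degradation model as placeholders; it is not the project’s actual code.

```python
# Sketch of latent-space projection, with the generator, perceptual loss, and
# camera-degradation model as placeholders. Not the project's actual code.
import torch

def degrade(image):
    """Stand-in for the simulated antique-camera response (crude: one channel)."""
    return image.mean(dim=1, keepdim=True)

def project(old_photo, generator, perceptual_loss, steps=500, lr=0.05):
    """Optimize a StyleGAN2-style latent code until the render matches the input."""
    w = torch.zeros(1, 18, 512, requires_grad=True)   # W+ latent for a 1024-px model
    optimizer = torch.optim.Adam([w], lr=lr)

    for _ in range(steps):
        generated = generator(w)                      # candidate modern portrait
        # Compare in "degraded" space, so the clean render is free to contain
        # color and skin detail the blue-sensitive plate could never record.
        loss = perceptual_loss(degrade(generated), degrade(old_photo))
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

    return generator(w).detach()                      # the restored portrait
```

Everything interesting lives in those placeholders; the point is just that colorization, upscaling, and blemish removal all fall out of one optimization over the generator’s latent space rather than a pile of hand-tuned filters.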

We’ve seen AI upscaling before, but this project takes it to the next level by completely restoring antique photographs. We’re left wondering what techniques will be available 100 years from now to restore JPEGs stored way back in 2021, bringing them up to “modern” viewing standards.

Thanks to [Gus] for the tip!

Continue reading “Imaging The Past With Time-Travel Rephotography”