But Can Your AI Recognize Slugs?

The common garden slug is a mystery. Observing these creatures as they slowly emerge from their slimy lairs each evening, it’s hard to imagine how much damage they can do. With paradoxical speed, they can mow down row after row of tender seedlings, leaving nothing but misery in their mucusy wake.

To combat this slug menace, [Tegwyn☠Twmffat] (the [☠] is silent) is developing this AI-powered slug-busting system. The squeamish, or those challenged by the ethics of slug eradication, can relax: no slugs have been harmed yet. So far [Tegwyn] has concentrated on detecting slugs, a decidedly non-trivial problem since few existing AI models are trained to recognize them.

To date, [Tegwyn] has acquired 5,712 images of slugs in their natural environment – no mean feat, as they only come out at night, they blend into their background, and their slimy surfaces make for challenging reflections. The video below shows moderate success of the trained model using a static image of a slug; it also gives a glimpse of the hardware used, which includes an Nvidia Jetson TX2. [Tegwyn] plans to capture even more images to refine the model and boost it from the current 50 to 60% confidence level to something that will allow for the remediation phase of the project, which apparently involves lasers. He’s willing to entertain other methods of disposal, though; perhaps a salt-shooting turret gun?
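The gatekeeping step is simple enough to sketch. Here’s a minimal, hypothetical Python example of filtering detections by confidence before anything gets handed off to the remediation stage; the box format, scores, and cutoff are illustrative assumptions, not [Tegwyn]’s actual code:

```python
# Hypothetical post-processing for a trained slug detector -- not the project's code.
import numpy as np

CONFIDENCE_CUTOFF = 0.60  # roughly where the current model tops out

def confident_slugs(boxes, scores, cutoff=CONFIDENCE_CUTOFF):
    """Keep only detections confident enough to hand off to the laser stage."""
    keep = scores >= cutoff
    return boxes[keep], scores[keep]

# Example detector output for one frame: boxes as (x0, y0, x1, y1) plus scores.
boxes = np.array([[10, 40, 60, 90], [100, 20, 140, 55]])
scores = np.array([0.72, 0.55])

hits, conf = confident_slugs(boxes, scores)
print(f"{len(hits)} slug(s) at or above {CONFIDENCE_CUTOFF:.0%} confidence")
```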

This isn’t the first garden-tending project [Tegwyn] has tackled. You may recall The Weedinator, his 2018 Hackaday Prize entry. This slug buster is one of his entries for the 2019 Hackaday Prize, which was just announced. We’re looking forward to seeing the onslaught of cool new projects everyone will be coming up with.


The Tiniest Computer Vision Platform Just Got Better

The future, if you believe the ad copy, is a world filled with cameras backed by intelligence, neural nets, and computer vision. Despite the hype, this may actually turn out to be true: drones are getting intelligent cameras, self-driving cars are loaded with them, and in any event it makes a great toy.

That’s what makes this Kickstarter so exciting. It’s a camera module, yes, but there are also some smarts behind it. The OpenMV is a MicroPython-powered machine vision camera that gives your project the power of computer vision without the need to haul a laptop or GPU along for the ride.

The OpenMV actually got its start as a Hackaday Prize entry focused on one simple idea. There are cheap camera modules everywhere, so why not attach a processor to that camera that allows for on-board image processing? The first version of the OpenMV could do face detection at 25 fps and color detection at more than 30 fps, and it became the basis for hundreds of different robots loaded up with computer vision.

This crowdfunding campaign is financing the latest version of the OpenMV camera, and there are a lot of changes. The camera module is now removable, meaning the OpenMV now supports global shutter and thermal vision in addition to the usual color/rolling shutter sensor. Since this camera has a faster microcontroller, this latest version can support multi-blob color tracking at 80 fps. With the addition of a FLIR Lepton sensor, this camera does thermal sensing, and thanks to a new library, the OpenMV also does number detection with the help of neural networks.
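To give a feel for the programming model, here’s roughly what single-color blob tracking looks like in OpenMV’s MicroPython API. This is a generic sketch patterned on the standard OpenMV examples, and the LAB color threshold is a placeholder you’d tune for your own target:

```python
import sensor
import time

sensor.reset()
sensor.set_pixformat(sensor.RGB565)  # color imaging
sensor.set_framesize(sensor.QVGA)    # 320x240
sensor.skip_frames(time=2000)        # let the sensor settle

# Placeholder LAB threshold (L min/max, A min/max, B min/max) -- tune for your target.
RED_THRESHOLD = (30, 100, 15, 127, 15, 127)

clock = time.clock()
while True:
    clock.tick()
    img = sensor.snapshot()
    for blob in img.find_blobs([RED_THRESHOLD], pixels_threshold=200,
                               area_threshold=200, merge=True):
        img.draw_rectangle(blob.rect())       # box the blob
        img.draw_cross(blob.cx(), blob.cy())  # mark its centroid
    print(clock.fps())
```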

We’ve seen a lot of builds using the OpenMV camera, and it’s getting to the point where you can’t compete in an autonomous car race without this hardware. This new version has all the bells and whistles, making it one of the best ways we’ve seen to add computer vision to any hardware project.

Turn Yourself Into A Cyborg With Neural Nets

If smartwatches and tiny Bluetooth earbuds are any indication, the future is with wearable electronics. This brings up a problem: developing wearable electronics isn’t as simple as building a device that’s meant to sit on a shelf. Wearable electronics move and stretch, and the people wearing them jump, kick, punch, and sweat. If you’re prototyping wearable electronics, it might be a good idea to build a Smart Internet of Things Wearable development board. That’s exactly what [Dave] did for his Hackaday Prize entry, and it’s really, really fantastic.

[Dave]’s BodiHub is an outgrowth of his entry into last year’s Hackaday Prize. While the project might not look like much, that’s kind of the point: [Dave]’s previous projects involved shrinking thousands of dollars’ worth of equipment down to a tiny board that can read muscle signals. This project takes that idea a bit further by creating a board that’s wearable, supports battery charging, and makes prototyping wearable electronics easy.

You might be asking what you can do with a board like this. For that, [Dave] suggests a few projects, like boxing gloves that talk to each other or tell you how much force you’re punching with. Alternatively, you could read body movements and synchronize an LED light show to a dance performance. It can go further than that, though: [Dave] built a mesh network logistics tracking system that uses an augmented reality interface. It was demoed at TechCrunch Disrupt NY, and the audience was wowed. You can check out the video of that demo here.

Neural Networks Using Doom Level Creator Like It’s 1993

Readers of a certain vintage will remember the glee of building their own levels for DOOM. There was something magical about carefully crafting a level and then dialing up friends for a deathmatch session on the new map. Now computer scientists are getting in on that fun in a new way. Researchers from Politecnico di Milano are using artificial intelligence to create new levels for the classic DOOM shooter (PDF whitepaper).

While procedural level generation has been around for decades, recent advances in using machine learning to generate game content (usually levels) are different because they don’t rely on a human-defined algorithm. Instead, they generate new content using existing, human-designed levels as a model. In effect, they learn from what great game designers have already done and apply those lessons to new level generation. The screenshot shown above is an example of an AI-generated level, and the gameplay can be seen in the video below.

The idea of an AI generating levels is simple in concept but difficult in execution. The researchers used Generative Adversarial Networks (GANs) to analyze existing DOOM maps and then generate new maps similar to the originals. A GAN pits two neural networks against each other: a generator that produces candidate data and a discriminator that learns to tell generated data from the real training data, each improving by trying to outdo the other. The researchers considered two types of GANs for level generation: one that used only the appearance of the training maps, and another that used both the appearance and metrics such as the number of rooms, perimeter length, and so on. If you’d like a better understanding of GANs, [Steven Dufresne] covered them in his guide to the evolving world of neural networks.
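To make the adversarial setup concrete, here’s a minimal, hypothetical Keras sketch of the unconditioned variant, treating each map as a 64×64 grayscale image. It illustrates the technique only; the researchers’ actual architecture (and the metric-conditioned version) is described in their paper:

```python
# A toy GAN over rasterized DOOM maps -- an illustration, not the paper's network.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

LATENT_DIM = 100
MAP_SHAPE = (64, 64, 1)  # assumption: maps rasterized to 64x64 grayscale

def build_generator():
    """Upsample a random latent vector into a map-shaped image."""
    return keras.Sequential([
        keras.Input(shape=(LATENT_DIM,)),
        layers.Dense(8 * 8 * 128),
        layers.Reshape((8, 8, 128)),
        layers.Conv2DTranspose(64, 4, strides=2, padding="same", activation="relu"),
        layers.Conv2DTranspose(32, 4, strides=2, padding="same", activation="relu"),
        layers.Conv2DTranspose(1, 4, strides=2, padding="same", activation="sigmoid"),
    ])

def build_discriminator():
    """Score an image as real (1) or generated (0)."""
    return keras.Sequential([
        keras.Input(shape=MAP_SHAPE),
        layers.Conv2D(32, 4, strides=2, padding="same"),
        layers.LeakyReLU(0.2),
        layers.Conv2D(64, 4, strides=2, padding="same"),
        layers.LeakyReLU(0.2),
        layers.Flatten(),
        layers.Dense(1, activation="sigmoid"),
    ])

disc = build_discriminator()
disc.compile(optimizer="adam", loss="binary_crossentropy")

gen = build_generator()
disc.trainable = False  # freeze the discriminator inside the combined model
gan = keras.Sequential([gen, disc])
gan.compile(optimizer="adam", loss="binary_crossentropy")

def train_step(real_maps):
    n = len(real_maps)
    noise = np.random.normal(size=(n, LATENT_DIM))
    fake_maps = gen.predict(noise, verbose=0)
    # The discriminator learns to separate real maps from generated ones...
    disc.train_on_batch(np.concatenate([real_maps, fake_maps]),
                        np.concatenate([np.ones((n, 1)), np.zeros((n, 1))]))
    # ...while the generator learns to fool the frozen discriminator.
    gan.train_on_batch(noise, np.ones((n, 1)))
```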

Both networks produced good levels, but the one that included the extra metrics produced higher-quality ones. Still, while the AI-generated levels appeared similar at a high level to human-generated ones, many of the little details that humans tend to include were missing, partly due to a lack of good metrics for describing levels and AI-generated data.

Example DOOM maps generated by AI. Each row is one map, and each image is one aspect of the map (floor, height, things, and walls, from left to right)

We can only guess that these researchers’ next step is to use similar techniques to create an entire game (levels, characters, and music) via AI. After all, how hard can it be? Joking aside, we would love to see you take this concept and run with it. We’re dying to play through some gnarly AI-generated levels whipped up by Hackaday readers!


Neural Network Zaps You To Take Better Photographs

It’s ridiculously easy to take a bad photograph. Your brain is a far better Photoshop than Photoshop, and the amount of editing it does on the scenes your eyes capture often results in marked and disappointing differences between what you saw and what you shot.

Taking your brain out of the photography loop is the goal of [Peter Buczkowski]’s “prosthetic photographer.” The idea is to use a neural network to constantly analyze a scene until maximal aesthetic value is achieved, at which point the user unconsciously takes the photograph.

But the human-computer interface is the interesting bit — the device uses a transcutaneous electrical nerve stimulator (TENS) wired to electrodes in the handgrip to involuntarily contract the user’s finger muscles and squeeze the trigger. (Editor’s Note: This project is about as sci-fi as it gets — the computer brain is pulling the strings of the meat puppet. Whoah.)

Meanwhile, back in reality, it’s not too strange a project. A Raspberry Pi watches the scene through a Pi Cam and uses a TensorFlow neural net trained against a set of high-quality photos to determine when to trip the shutter. The video below shows it in action, and [Peter]’s blog has some of the photos taken with it.
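As a rough sketch of how such a pipeline hangs together, here’s a hypothetical Python version: grab frames, score them with a trained aesthetic model, and pulse a GPIO pin driving the TENS unit when the score clears a threshold. The model file, pin number, input size, and cutoff are all assumptions, not details from [Peter]’s build:

```python
# Hypothetical prosthetic-photographer loop -- an illustration, not [Peter]'s code.
import time

import cv2
import numpy as np
import RPi.GPIO as GPIO
import tensorflow as tf

TENS_PIN = 18    # assumption: TENS trigger wired to GPIO 18
THRESHOLD = 0.8  # hypothetical "good photo" score cutoff

GPIO.setmode(GPIO.BCM)
GPIO.setup(TENS_PIN, GPIO.OUT)

model = tf.keras.models.load_model("aesthetic_model.h5")  # placeholder model file
cam = cv2.VideoCapture(0)

try:
    while True:
        ok, frame = cam.read()
        if not ok:
            continue
        # Resize and normalize to whatever the net was trained on (assumed 224x224 RGB).
        x = cv2.resize(frame, (224, 224))[:, :, ::-1] / 255.0
        score = float(model.predict(x[np.newaxis], verbose=0)[0][0])
        if score > THRESHOLD:
            GPIO.output(TENS_PIN, GPIO.HIGH)  # zap: the finger squeezes the shutter
            time.sleep(0.2)
            GPIO.output(TENS_PIN, GPIO.LOW)
            time.sleep(2.0)                   # settle before hunting the next shot
finally:
    cam.release()
    GPIO.cleanup()
```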

We’re not sure this is exactly the next “must-have” camera accessory, and it probably won’t help with snapshots and selfies, but it’s an interesting take on the human-device interface. And if you’re thinking about the possibilities of a neural net inside your camera prompting you when to take a picture, you might want to check out our primer on TensorFlow to get started.


Listen To The Netherworld With Artificial Intelligence

It’s that time of year again, and with Halloween arguably being the hacker’s perfect holiday, we’re starting to see an uptick in projects with a spooky theme. Most involve making otherwise tame Halloween decorations scarily awesome, but this one is different: it uses artificial intelligence to search for ghosts.

It seems like [Matt Reed]’s “DeepWhisper” project is meant as light-hearted fun for the spooky season, but there may be a touch of seriousness to his efforts to listen in on ghostly conversations. The principle behind this is electronic voice phenomena (EVP), whereby the metabolically and/or dimensionally challenged are purported to influence electronic systems, resulting in heavily processed audio clips that seem to contain a whispered endearment from the departed or a threat from a malevolent spirit. DeepWhisper takes this a step further by using a Raspberry Pi to feed audio into the Google Cloud Speech API for analysis. If anything is whispered in one of the 110 or so languages Google knows, it’ll get displayed on a screen. [Matt] plans to set DeepWhisper up in the aptly named Butchertown section of Nashville and live-stream the results next week.
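The plumbing for that kind of analysis is straightforward. Here’s a minimal, hypothetical sketch using the current google-cloud-speech Python client (the library has changed since this project was built, and the filename and audio format here are placeholders):

```python
# Hypothetical EVP pipeline: send a captured clip to Google Cloud Speech.
# Assumes the modern google-cloud-speech client and application-default credentials.
import io

from google.cloud import speech

client = speech.SpeechClient()

with io.open("night_audio.wav", "rb") as f:  # placeholder: one recorded clip
    audio = speech.RecognitionAudio(content=f.read())

config = speech.RecognitionConfig(
    encoding=speech.RecognitionConfig.AudioEncoding.LINEAR16,
    sample_rate_hertz=16000,
    language_code="en-US",  # one of the ~110 languages the API supports
)

response = client.recognize(config=config, audio=audio)
for result in response.results:
    best = result.alternatives[0]
    print(f"ghost? {best.transcript!r} (confidence {best.confidence:.2f})")
```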

It’ll be interesting to see what Google’s neural network makes of the random noise it will probably only ever hear. And [Matt] is planning on releasing his code for all to see, so there may be some valuable cloud techniques to learn from DeepWhisper. But in the unlikely event that he does discover ghosts, it’s nice to know he has the tools and the talent to bust ’em.


Neural Nets In The Browser: Why Not?

We keep seeing more and more TensorFlow neural network projects. We also keep seeing more and more things running in the browser. You don’t have to be Mr. Spock to see this one coming. TensorFire runs neural networks in the browser and claims that WebGL allows it to run as quickly as it would on the user’s desktop computer. The main page is a demo that stylizes images, but if you want more detail you’ll probably want to visit the project page instead. You might also enjoy the video from one of the creators, [Kevin Kwok], below.

TensorFire has two parts: a low-level language for writing massively parallel WebGL shaders that operate on 4D tensors, and a high-level library for importing models from Keras or TensorFlow. The authors claim it will work on any GPU and, in some cases, will actually be faster than running native TensorFlow.
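The browser side of TensorFire is JavaScript, but the models it imports start life in Python. As a minimal, hypothetical sketch of that half of the pipeline, here’s a tiny Keras model saved to an HDF5 file of the sort a converter could ingest; the architecture and filename are illustrative only, not TensorFire’s actual conversion workflow:

```python
# Build and save a toy Keras model -- a stand-in for whatever you'd hand to a
# browser-side importer, not TensorFire's own tooling.
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    keras.Input(shape=(64, 64, 3)),
    layers.Conv2D(8, 3, padding="same", activation="relu"),
    layers.Conv2D(3, 3, padding="same", activation="sigmoid"),
])
model.compile(optimizer="adam", loss="mse")

model.save("style_net.h5")  # placeholder filename for the exported model
```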
