Identifying Creatures That Go Chirp In The Night

It’s common knowledge that bats navigate and hunt for their prey using echolocation, but did you know that the ultrasonic chirps made by different species of bats are distinct enough to be used for identification? [Tegwyn☠Twmffat] did, which is why he came up with this impressive device capable of cataloging the different bats flying around at night.

Now this might seem like an odd gadget to have, but if you’re in the business of wildlife conservation, it’s not hard to imagine how this sort of capability might be useful. This device could be used to easily estimate the size and diversity of bat populations in a particular area. [Tegwyn☠Twmffat] also mentions that, at least in theory, the core concept should work with other types of noisy critters like rodents or dolphins.

Powered by the NVIDIA Jetson Nano, the unit listens with a high-end ultrasonic microphone for the telltale chirps of bats. These are then processed by the software and compared to a database of samples that [Tegwyn☠Twmffat] personally collected in local nature reserves. In the video after the break, you can also see how he jingles a set of house keys as a control signal to confirm the system is running properly.
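The write-up doesn't spell out the exact processing pipeline, but the broad approach of turning each ultrasonic recording into a spectrogram and matching it against labeled reference chirps can be sketched in a few lines of Python. Everything below (the file names, the mono 384 kHz recordings, the nearest-match classifier) is our illustrative assumption, not [Tegwyn☠Twmffat]'s actual code:

```python
# Illustrative sketch only: identify a bat call by spectrogram similarity.
# Assumes mono WAV recordings all captured at the same ultrasonic-capable
# sample rate (e.g. 384 kHz) and a folder of labeled reference chirps.
import numpy as np
from scipy.io import wavfile
from scipy.signal import spectrogram

def chirp_features(path):
    rate, audio = wavfile.read(path)
    f, t, sxx = spectrogram(audio, fs=rate, nperseg=1024)
    band = (f > 15_000) & (f < 120_000)      # bat calls sit far above human hearing
    feat = np.log1p(sxx[band].mean(axis=1))  # average power per frequency bin
    return feat / np.linalg.norm(feat)

def identify(sample_path, references):
    """Return the label of the reference chirp most similar to the sample."""
    feat = chirp_features(sample_path)
    scores = {label: float(np.dot(feat, chirp_features(ref)))
              for label, ref in references.items()}
    return max(scores, key=scores.get)

references = {"pipistrelle": "refs/pipistrelle.wav",
              "noctule": "refs/noctule.wav"}
print(identify("night_recording.wav", references))
```

A production system like this one almost certainly runs a trained neural network rather than a nearest-match lookup, but the spectrogram front end is common to both.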

The Intelligent Wildlife Species Detector won the Train All the Things contest back in April, and we’re eager to see how it fares as the competition heats up in the 2020 Hackaday Prize.

Continue reading “Identifying Creatures That Go Chirp In The Night”

Recognizing Activities Using Radar

Caring for elderly and vulnerable people while preserving their privacy and independence is a challenging proposition. Reaching a panic button or calling for help may not be possible in an emergency, but constant supervision or camera surveillance is often neither practical nor considerate. Researchers from MIT CSAIL have been working on this problem for a few years and have come up with a possible solution called RF Diary. Using RF signals, a floor plan, and machine learning, it can recognize activities and emergencies through obstacles and in the dark. If this sounds familiar, it’s because it builds on previous research by CSAIL.

The RF system used is effectively frequency-modulated continuous-wave (FMCW) radar, which sweeps across the 5.4-7.2 GHz RF spectrum. The limited resolution of the RF system does not allow for the recognition of most objects, so a floor plan supplies the size and location of specific features like rooms, beds, tables, and sinks. This information helps the machine learning model recognize activities within the context of the surroundings. Effectively training an activity captioning model requires thousands of training examples, and no such corpus currently exists for RF data. However, massive video data sets are available, so the researchers employed a “multi-modal feature alignment training strategy” that let them use video data sets to refine their RF activity captioning model.
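The alignment trick is easier to see in miniature: encode the RF clip and the video clip of the same moment into a shared feature space, then penalize the distance between the two so the RF branch inherits what the video-trained captioner already knows. The following PyTorch sketch is our own toy reduction of that idea, not the CSAIL code; the layer sizes are arbitrary:

```python
# Toy reduction of multi-modal feature alignment (our simplification,
# not the RF Diary implementation): pull RF features toward the features
# a frozen, pretrained video encoder produces for the same moment.
import torch
import torch.nn as nn

rf_encoder = nn.Linear(4096, 256)     # stand-in for the real RF network
video_encoder = nn.Linear(8192, 256)  # stand-in for a pretrained video network
for p in video_encoder.parameters():
    p.requires_grad = False           # the video branch stays frozen

def alignment_loss(rf_clip, video_clip):
    """MSE between paired RF and video features."""
    rf_feat = rf_encoder(rf_clip)
    with torch.no_grad():
        vid_feat = video_encoder(video_clip)
    return nn.functional.mse_loss(rf_feat, vid_feat)

# One training step on a toy batch of paired clips.
opt = torch.optim.Adam(rf_encoder.parameters(), lr=1e-4)
loss = alignment_loss(torch.randn(8, 4096), torch.randn(8, 8192))
loss.backward()
opt.step()
```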

There are still some privacy concerns with this solution, but the researchers did propose some improvements. One interesting idea is for the monitored person to give an “activation” signal by performing a specified set of activities in sequence.

Continue reading “Recognizing Activities Using Radar”

I’m Sorry Dave, You Shouldn’t Write Verilog

We’ve always been envious of Star Trek for its computers. No programming needed. Just tell the computer what you want and it does it. Of course, HAL-9000 had the same interface and that didn’t work out so well. Some researchers at NYU have taken a natural language machine learning system — GPT-2 — and taught it to generate Verilog code for use in FPGA systems. Ironically, they called it DAVE (Deriving Automatically Verilog from English). Sounds great, but we have to wonder if it is more than a parlor trick. You can try it yourself if you like.

For example, DAVE can take input like “Given inputs a and b, take the nor of these and return the result in c.” Fine. A more complex example from the paper isn’t quite so easy to puzzle out:

Write a 6-bit register ‘ar’ with input
defined as ‘gv’ modulo ‘lj’, enable ‘q’, synchronous
reset ‘r’ defined as ‘yxo’ greater than or equal to ‘m’,
and clock ‘p’. A vault door has three active-low secret
switch pressed sensors ‘et’, ‘lz’, ‘l’. Write combinatorial
logic for a active-high lock ‘s’ which opens when all of
the switches are pressed. Write a 6-bit register ‘w’ with
input ‘se’ and ‘md’, enable ‘mmx’, synchronous reset
‘nc’ defined as ‘tfs’ greater than ‘w’, and clock ‘xx’.
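Under the hood this is ordinary language-model fine-tuning: pair English specifications with their Verilog implementations, train GPT-2 on those pairs, then sample a completion from a new specification. Here’s a hedged sketch using the Hugging Face transformers library; this is not the NYU code, the prompt delimiter is our invention, and a stock GPT-2 will produce plausible-looking nonsense until it has been fine-tuned on spec/Verilog pairs:

```python
# Sketch of DAVE-style generation with an off-the-shelf GPT-2.
# Not the NYU model; without fine-tuning on English/Verilog pairs
# the output will be gibberish, but the mechanics are the same.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tok = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

spec = ("Given inputs a and b, take the nor of these "
        "and return the result in c.")
prompt = spec + "\n// Verilog:\n"   # delimiter format is our assumption
ids = tok(prompt, return_tensors="pt").input_ids
out = model.generate(ids, max_new_tokens=64, do_sample=True,
                     top_p=0.9, pad_token_id=tok.eos_token_id)
print(tok.decode(out[0], skip_special_tokens=True))
```

For the simple example above, the hoped-for completion is a one-liner along the lines of `assign c = ~(a | b);` — the tangled vault-door specification is a much taller order.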

Continue reading “I’m Sorry Dave, You Shouldn’t Write Verilog”

Generate Positivity With Machine Learning

Gesture recognition and machine learning are getting a lot of air time these days, as people understand them better and develop methods to implement them on many different platforms. Naturally, this puts the new tools within reach of people outside strictly academic or business environments. For example, [nate.damen], who rollerblades down the streets of Atlanta with a gesture-recognizing, streaming TV worn over his head.

He’s known as [atltvhead], and the TV he wears has a functional LED screen on the front. The whole setup reminds us a little of Deep Thought. The screen can display various animations, which are controlled through Twitch chat as he streams his journeys around town. He wanted to add a little more interaction to the animations and simplify his user interface, so he set up a gesture-sensing sleeve that augments the animations based on how he’s moving his arm. An Arduino in the arm sensor and a Raspberry Pi in the backpack tie it all together, and he goes deep into the weeds explaining how to use TensorFlow to recognize the gestures. The video linked below also shows a lot of his training runs for the machine learning system.
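The pattern he describes, streaming accelerometer windows from the Arduino and classifying them with a small TensorFlow model, looks roughly like the sketch below. The layer sizes, window length, and gesture names are our stand-ins, not [atltvhead]’s actual model:

```python
# Toy TensorFlow gesture classifier for accelerometer windows
# (architecture and labels are illustrative, not [atltvhead]'s model).
import numpy as np
import tensorflow as tf

GESTURES = ["wave", "fist_pump", "point"]
WINDOW = 50                          # 50 samples of (x, y, z) per gesture

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(WINDOW, 3)),
    tf.keras.layers.Conv1D(16, 5, activation="relu"),
    tf.keras.layers.GlobalAveragePooling1D(),
    tf.keras.layers.Dense(len(GESTURES), activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Training data would come from recorded sleeve sessions; random here.
x = np.random.randn(200, WINDOW, 3).astype("float32")
y = np.random.randint(0, len(GESTURES), 200)
model.fit(x, y, epochs=3, verbose=0)

window = np.random.randn(1, WINDOW, 3).astype("float32")
print(GESTURES[int(model.predict(window, verbose=0).argmax())])
```

In the real build, the trained model would classify live sensor windows arriving over serial from the sleeve rather than random data.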

[nate.damen] didn’t stop at the cheerful TV head, either. He also wears a backpack that displays uplifting messages to people as he passes them on his rollerblades, not wanting to leave out those who don’t get to see him coming. We think this is a great, uplifting project, and the amount of work that went into getting the gesture recognition machine learning algorithm right is impressive on its own. If you’re new to TensorFlow, though, we have featured some projects that can do reliable object recognition using little more than a Raspberry Pi and a camera.

Continue reading “Generate Positivity With Machine Learning”

Ideas To Prototypes Hack Chat With Nick Bild

Join us on Wednesday, July 29 at noon Pacific for the Ideas to Prototypes Hack Chat with Nick Bild!

For most of us, ideas are easy to come by. Taking a shower can generate half a dozen of them, the bulk of which will be gone before your hair is dry. But a few ideas will stick, and eventually make it onto paper or its electronic equivalent, to be played with and tweaked until they coalesce into a plan. And a plan, if we’re lucky, is what’s needed to put that original idea into action, to bring it to fruition and see just what it can do.

No matter what you’re building, the ability to turn ideas into prototypes is what moves projects forward, and it’s what most of us live for. Seeing something on the bench or the shop floor that was once just a couple of back-of-the-napkin sketches, and before that only an abstract concept in your head, is immensely satisfying.

The path from idea to prototype, however, is not always a smooth one, as Nick Bild can attest. We’ve been covering Nick’s work for a while now, starting with his “nearly practical” breadboard 6502 computer, the Vectron, up to his recent forays into machine learning with ShAIdes, his home-automation-controlling AI sunglasses. On the way we’ve seen his machine-learning pitch predictor, dazzle-proof glasses, and even a wardrobe-malfunction preventer.

All of Nick’s stuff is cool, to be sure, but there’s a method to his productivity, and we’ll talk about that and more in this Hack Chat. Join us as we dive into Nick’s projects and find out what he does to turn his ideas into prototypes.

Our Hack Chats are live community events in the Hackaday.io Hack Chat group messaging. This week we’ll be sitting down on Wednesday, July 29 at 12:00 PM Pacific time. If time zones have you down, we have a handy time zone converter.

Click that speech bubble to the right, and you’ll be taken directly to the Hack Chat group on Hackaday.io. You don’t have to wait until Wednesday; join whenever you want and you can see what the community is talking about. Continue reading “Ideas To Prototypes Hack Chat With Nick Bild”

Argos Book Of Horrors

If you live outside the UK you may not be familiar with Argos, but it’s basically what Americans would have if Sears hadn’t become a complete disaster after the Internet became popular. While they operate many brick-and-mortar stores and are a formidable online retailer, they still have a large physical catalog that is surprisingly popular. It’s so large, in fact, that interesting (and creepy) things can be done with it using machine learning.

This project from [Chris Johnson] is called the Book of Horrors and was made by feeding all 16,000 pages of the Argos catalog into a machine learning algorithm. The computer takes all of the pages and generates a model that ties them together into a series of animations, blending the whole catalog into one flowing, ever-changing publication. It borders on creepy, both in the visuals themselves and in the fact that we can’t know exactly what computers are “thinking” when they generate these kinds of images.
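The post doesn’t name the architecture, but the flowing morph effect is characteristic of walking a generative model’s latent space: train a generator on the catalog pages, then render frames along a smooth path between random latent vectors. A generic sketch of that technique, with `generator` standing in for whatever model was actually trained:

```python
# Generic latent-space walk (our assumption about the technique; the
# project doesn't publish its architecture). `generator` stands in for
# any trained model that maps a latent vector to a catalog-page image.
import numpy as np

def latent_walk(generator, z_start, z_end, steps=60):
    """Yield frames morphing from one latent point to another."""
    for t in np.linspace(0.0, 1.0, steps):
        z = (1.0 - t) * z_start + t * z_end   # linear interpolation
        yield generator(z)

rng = np.random.default_rng(0)
z_a, z_b = rng.normal(size=512), rng.normal(size=512)
frames = list(latent_walk(lambda z: z, z_a, z_b))  # identity stand-in generator
```

Chaining many such segments end to end gives the endless, ever-shifting catalog seen in the animations.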

The more steps the model was trained for, the creepier the images became. To see more of the project you can follow it on Twitter, where new images are released from time to time. It also reminds us a little of some other machine learning projects that have recently been used to create short films with equally mesmerizing imagery. Continue reading “Argos Book Of Horrors”

Recreating Paintings By Teaching An AI To Paint

The Timecraft project by [Amy Zhao] and team members uses machine learning to figure out how an existing painting may originally have been painted, stroke by stroke. In their paper, titled ‘Painting Many Pasts: Synthesizing Time Lapse Videos of Paintings’, they describe how they trained an ML algorithm on existing time-lapse videos of new paintings being created, allowing it to probabilistically generate the steps needed to recreate an already finished painting.

The probabilistic model is implemented using a convolutional neural network (CNN) whose output is a time-lapse video spanning many minutes. In the paper they reference how they were inspired by artistic style transfer, where neural networks are used to generate works of art in a specific artist’s style, or to create mash-ups of different artists.
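Stripped of the paper’s variational machinery, the generation loop reduces to: start from a blank canvas and repeatedly ask the network for a plausible next partial painting, conditioned on both the canvas so far and the finished target. This skeletal PyTorch version is our simplification, with a toy network in place of the real one:

```python
# Skeletal Timecraft-style synthesis (our simplification of the paper):
# iteratively predict the next partial painting, conditioned on the
# current canvas and the finished target image.
import torch
import torch.nn as nn

class NextFrameNet(nn.Module):
    """Toy stand-in for the paper's convolutional frame predictor."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(6, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1), nn.Sigmoid())

    def forward(self, canvas, target):
        # Condition on both the canvas so far and the finished painting.
        return self.net(torch.cat([canvas, target], dim=1))

@torch.no_grad()
def synthesize(model, target, steps=40):
    canvas = torch.zeros_like(target)   # start from a blank canvas
    frames = []
    for _ in range(steps):
        canvas = model(canvas, target)  # predicted next partial painting
        frames.append(canvas)
    return frames                       # the synthetic time lapse

target = torch.rand(1, 3, 64, 64)       # stand-in "finished painting"
frames = synthesize(NextFrameNet(), target)
```

The probabilistic part enters when the network samples one of many plausible next frames rather than predicting a single deterministic one, which is how Timecraft can synthesize “many pasts” for the same painting.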

A lot of the complexity comes from the large variety of techniques and materials used in the creation of a painting, such as the exact brush used and the type of paint. Some existing approaches have focused on these fine details, including physics-based simulation of the paints and brush strokes, but they come with significant caveats that Timecraft tried to avoid by taking a more high-level approach.

The time-lapse videos generated during the experiment were evaluated through a survey on Amazon Mechanical Turk, in which 158 participants were asked to compare the realism of the Timecraft videos with that of real time-lapse videos. Participants preferred the real videos, but mistook the Timecraft videos for real ones about half the time.

Although perhaps not perfect yet, Timecraft does show how ML can be used to deduce how a work of art was constructed, and to figure out the individual steps with some degree of accuracy.

Continue reading “Recreating Paintings By Teaching An AI To Paint”