Finding Pre-Trained AI In A Modelzoo Using Python

Training a machine learning model is not a task for mere mortals; it takes a lot of time and computing power. Fortunately, there are pre-trained models out there that one can use, and [Max Bridgland] decided it would be a good idea to write a Python module to find and view such models from the command line.

For the uninitiated, Modelzoo is a place where you can find open source deep learning code and pre-trained models. [Max] taps into the (undocumented) API and allows a user to find and view models directly. When you run the utility, it goes online and retrieves the categories, then the details of the available models. From there, the user can select a model and the application simply opens the corresponding GitHub repository. It sounds simple, but it has a lot of value: the code is designed to be extensible, so users working on such projects could automate the downloading part as well.
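Since the Modelzoo API is undocumented, the details of [Max]'s module aren't spelled out here, but a minimal sketch of what such a command-line browser could look like is shown below. The endpoint URLs and JSON field names are assumptions for illustration only, not the real interface.

```python
import webbrowser

import requests

# NOTE: modelzoo.co's API is undocumented; these endpoints and JSON field
# names are assumptions made for illustration, not the real interface.
API_BASE = "https://modelzoo.co/api"  # hypothetical base URL


def list_categories():
    """Fetch the list of model categories (hypothetical endpoint)."""
    resp = requests.get(f"{API_BASE}/categories", timeout=10)
    resp.raise_for_status()
    return resp.json()


def list_models(category):
    """Fetch the models in one category (hypothetical endpoint)."""
    resp = requests.get(f"{API_BASE}/models", params={"category": category}, timeout=10)
    resp.raise_for_status()
    return resp.json()


def main():
    categories = list_categories()
    for i, cat in enumerate(categories):
        print(f"[{i}] {cat['name']}")
    cat = categories[int(input("Category number: "))]

    models = list_models(cat["name"])
    for i, model in enumerate(models):
        print(f"[{i}] {model['name']} - {model['description']}")
    model = models[int(input("Model number: "))]

    # Hand off to the browser, just like the utility described above.
    webbrowser.open(model["github_url"])


if __name__ == "__main__":
    main()
```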

We have seen projects with machine learning used to detect humans, and with AI trending, community tools such as this one help beginners get started even faster.

Tracking Ants And Zapping Them With Lasers

Thanks to the wonders of neural networks and machine learning algorithms, it’s now possible to do things that were once thought to be inordinately difficult to achieve with computers. It’s a combination of the right techniques and piles of computing power that makes such feats doable, and [Robert Bond’s] ant-zapping project is a great example.

The project is based around an NVIDIA Jetson TK1, a system that brings the processing power of a modern GPU to an embedded platform. It’s fitted with a USB camera that scans its field of view for ants. Once an ant is detected, thanks to a little OpenCV magic, the coordinates of the insect are passed to the laser system. Twin stepper motors spin mirrors that direct the light from a 5 mW red laser, which is shone on the target. If you’re thinking of working on something like this, we highly recommend using galvos to direct the laser.
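The exact detection code isn't reproduced here, but the general flow – spot a small moving blob with OpenCV, take its centroid, and convert the pixel coordinates into mirror positions – might look roughly like this sketch, where the blob-size thresholds and the pixel-to-step conversion factors are made-up numbers and the real project may detect ants differently.

```python
import cv2

cap = cv2.VideoCapture(0)                        # USB camera watching the floor
backsub = cv2.createBackgroundSubtractorMOG2()   # highlights anything that moves

# Made-up calibration: how many stepper steps per pixel of image offset.
STEPS_PER_PIXEL_X = 0.2
STEPS_PER_PIXEL_Y = 0.2

while True:
    ok, frame = cap.read()
    if not ok:
        break

    mask = backsub.apply(frame)
    mask = cv2.medianBlur(mask, 5)               # knock out single-pixel noise
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

    for c in contours:
        area = cv2.contourArea(c)
        if 5 < area < 200:                       # ant-sized blobs only (tuning guess)
            x, y, w, h = cv2.boundingRect(c)
            cx, cy = x + w // 2, y + h // 2      # centroid in pixel coordinates

            # Convert the pixel position to mirror stepper positions and
            # hand it to the laser steering code (not shown here).
            step_x = int(cx * STEPS_PER_PIXEL_X)
            step_y = int(cy * STEPS_PER_PIXEL_Y)
            print(f"ant at ({cx}, {cy}) -> steppers ({step_x}, {step_y})")

    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
```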

Such a system could readily vaporize ants if fitted with a more powerful laser, but [Robert] decided to avoid this for safety reasons. Plus, the smell wouldn’t be great, and nobody wants charred insect residue all over the kitchen floor anyway. We’ve seen AIs do similar work, too – like detecting naughty cats for security reasons.

BeagleBone Deep Learning Video Demo

BeagleBoard often gets eclipsed by the Raspberry Pi. Where the Pi focuses on ease of use, the BeagleBone generally has more power for hardcore applications. With machine learning all the rage now, BeagleBoard has released the BeagleBone AI, a board with features aimed specifically at machine learning. A recent video (see below) shows a demo of TIDL (the Texas Instruments Deep Learning library) in action, including streaming video to a browser and using predefined learning models to identify things picked up by a web camera.
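We can't show the TIDL calls themselves here, but the overall flow of the demo – grab webcam frames, push them through a pre-trained classification model, and report what it sees – can be approximated with a generic stand-in using OpenCV's DNN module; the model and label files below are placeholders, not part of the TIDL demo.

```python
import cv2
import numpy as np

# Generic stand-in for the demo's flow: webcam frames go through a pre-trained
# classifier and the top label is reported. This uses OpenCV's DNN module
# rather than TI's TIDL library; the file names are placeholders.
net = cv2.dnn.readNetFromONNX("classifier.onnx")      # placeholder model file
labels = open("labels.txt").read().splitlines()       # placeholder label list

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break

    # Resize and scale the frame into the network's expected input tensor.
    blob = cv2.dnn.blobFromImage(frame, scalefactor=1 / 255.0, size=(224, 224))
    net.setInput(blob)
    scores = net.forward().flatten()

    best = int(np.argmax(scores))
    print(labels[best], float(scores[best]))

    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
```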

The CPU onboard is the TI Sitara AM5729. That’s a dual-core Arm Cortex-A15 running at 1.5 GHz. There are also two C66x floating-point DSPs and two dual-core Arm Cortex-M4 coprocessors. Still need more? You get four embedded vision engines, two dual-core real-time units, a 2D graphics accelerator, a 3D graphics accelerator, and a subsystem for video encoding/decoding and cryptography.

Arduino, Accelerometer, And TensorFlow Make You A Real-World Street Fighter

A question: if you’re controlling the classic video game Street Fighter with gestures, aren’t you just, you know, street fighting?

That’s a question [Charlie Gerard] is going to have to tackle should her AI gesture-recognition controller experiments take off. [Charlie] put together the game controller to learn more about the dark arts of machine learning in a fun and engaging way.

The controller consists of a battery-powered Arduino MKR1000 with WiFi and an MPU6050 accelerometer. Held in the hand, the controller streams accelerometer data to an external PC, capturing the characteristics of the motion. [Charlie] trained three different moves – a punch, an uppercut, and the dreaded Hadouken – and captured hundreds of examples of each. The raw data was massaged, converted to Tensors, and used to train a model for the three moves. Initial tests seem to work well. [Charlie] also made an online version that captures motion from your smartphone. The demo is explained in the video below; sadly, we couldn’t get more than three Hadoukens in before crashing it.
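To give a feel for the training half of the pipeline, here's a rough Python/Keras sketch of classifying fixed-length accelerometer windows into the three moves. The window length, network shape, and stand-in data are assumptions; [Charlie]'s actual model and preprocessing may differ.

```python
import numpy as np
import tensorflow as tf

# Illustrative sketch only: window length, network shape, and the way the
# recordings are stored here are all assumptions, not [Charlie]'s code.
WINDOW = 50          # accelerometer samples per recorded gesture
CLASSES = ["punch", "uppercut", "hadouken"]

# Suppose each training example is a (WINDOW, 3) array of x/y/z readings,
# loaded from the captured recordings (loading code omitted; random stand-ins here).
x_train = np.random.rand(300, WINDOW, 3).astype("float32")
y_train = np.random.randint(0, len(CLASSES), size=300)

model = tf.keras.Sequential([
    tf.keras.Input(shape=(WINDOW, 3)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(len(CLASSES), activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=20, validation_split=0.2)

# At run time, a freshly captured window of accelerometer data is classified
# the same way, and the winning class triggers the in-game move.
new_window = np.random.rand(1, WINDOW, 3).astype("float32")
print(CLASSES[int(np.argmax(model.predict(new_window)))])
```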

With most machine learning projects seemingly concentrating on telling cats from dogs, this is a refreshing change. We’re seeing lots of offbeat machine learning projects these days, from cryptocurrency wallet attacks to a semi-creepy workout-monitoring gym camera.

Name Stone Helps You Greet Coworkers

When starting a new job, learning your coworkers’ names can be a daunting task. Getting this right is key to forming strong professional relationships. [Ahad] noted that [Marcos] was struggling with this, so he built the Name Stone to help.

The Name Stone consists of some powerful hardware, wrapped up in a 3D printed case reminiscent of the Eye of Agamotto from Doctor Strange. Inside, there’s a Jetson Nano – an excellent platform for any project built around machine learning tasks. This is combined with a microphone and camera to collect data from the environment.

[Ahad] then went about training neural networks to handle basic identification tasks. Video was taken of the coworkers, and the frames were used to train a convolutional neural network in PyTorch. Similarly, a series of audio clips was used to train a second network to identify individuals by the sound of their voice, using MFCC features. Upon activating the stone, the device captures an image or a short sound clip and processes the data to identify the coworker and remind [Marcos] of their name.
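As a rough illustration of the face-recognition half, a small PyTorch training loop over frames sorted into one folder per coworker might look like the following. The directory layout, image size, and network architecture are assumptions rather than the actual Name Stone code.

```python
import torch
import torch.nn as nn
from torchvision import datasets, transforms

# Assumed layout: frames/alice/*.jpg, frames/bob/*.jpg, ... one folder per coworker.
transform = transforms.Compose([
    transforms.Resize((64, 64)),
    transforms.ToTensor(),
])
dataset = datasets.ImageFolder("frames", transform=transform)
loader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)

# A deliberately tiny CNN: two conv/pool stages, then a linear classifier
# with one output per coworker.
model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 16 * 16, len(dataset.classes)),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(10):
    for images, labels in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimizer.step()
    print(f"epoch {epoch}: loss {loss.item():.3f}")

# At inference time, a single captured frame goes through the same transform
# and the arg-max over the outputs names the coworker.
```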

It’s a project that could be quite useful, given to new employees to help them transition into the new workplace. Of course, pervasive facial recognition technology does have some drawbacks. Video after the break.

GymCam Knows Exactly What You’ve Been Doing In The Gym

Getting exact statistics on one’s physical activities at the gym is not an easy feat. While most people these days are familiar with, or even regularly use, one of those motion-based trackers on their wrist, there’s a big question as to their accuracy. After all, everything is inferred from the motion of a single wrist, which as we know leads to amusing results in the tracker app when one waves or claps one’s hands, and it cannot track leg exercises at the gym at all.

To get around the issue of limited sensor data, researchers at Carnegie Mellon University (Pittsburgh, USA) developed a system based around a camera and machine vision algorithms. While other camera-based solutions suffer from occlusion as they try to track individual people as accurately as possible, this new system doesn’t try to track people’s joints at all; it merely looks for repetitive motion at specific exercise machines in the scene.

The basic concept is that repetitive motion usually indicates exercise, and that no two people at the same type of machine will ever be fully in sync with their motions, so a mere handful of pixels suffices to track the motion of a single person at that machine. This also sidesteps many privacy issues, as the resolution doesn’t have to be high enough to see faces or track joints with any degree of accuracy.
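As a toy illustration of the idea, consider watching a single pixel's brightness over time and checking for a strong periodic component. The frequency band and thresholds below are made-up numbers, and the real system aggregates many such traces rather than relying on one pixel.

```python
import numpy as np

FPS = 10  # assumed frame rate of the (low-resolution) video

def count_repetitions(trace, fps=FPS):
    """Estimate repetition count from one pixel's brightness-over-time trace."""
    trace = np.asarray(trace, dtype=float)
    trace -= trace.mean()                       # remove the static background level

    spectrum = np.abs(np.fft.rfft(trace))
    freqs = np.fft.rfftfreq(len(trace), d=1.0 / fps)

    # Ignore very slow drift; exercise reps sit roughly in the 0.2-2 Hz band.
    band = (freqs > 0.2) & (freqs < 2.0)
    if not band.any() or spectrum[band].max() < 5 * spectrum.mean():
        return 0                                # no convincing periodicity: not exercise

    dominant = freqs[band][np.argmax(spectrum[band])]
    duration = len(trace) / fps
    return int(round(dominant * duration))      # reps = frequency x duration

# Example: a pixel flickering at 1 Hz for 30 seconds looks like 30 reps.
t = np.arange(0, 30, 1.0 / FPS)
print(count_repetitions(np.sin(2 * np.pi * 1.0 * t)))
```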

In experiments at the university’s gym, the researchers evaluated their system over 5 days and 42 hours of video. Detecting exercise activity in the scene was 99.6% accurate, disambiguating between simultaneous activities was 84.6% accurate, and recognizing the type of exercise was 93.6% accurate. Repetition counts for specific exercises came in within 1.7 counts of the true number.

Maybe an extended version of this could be a flying drone that captures one’s outdoor activities, finally giving that 100% accurate exercise account while jogging?

Thanks to [Qes] for sending this one in!

AI Makes Hyperbolic Brain Hats A Reality

It isn’t often that the world of Hackaday intersects with the world of crafting, which is perhaps a shame because many of the skills and techniques of the two have significant overlap. Crochet for instance has rarely featured here, but that is about to change with [Janelle Shane]’s HAT3000 neural network trained to produce crochet hat patterns.

Taking the GPT-2 neural network trained on Internet text and further training it with a stack of crochet hat patterns, she was able to generate AI-designed hats, which her friends on the Ravelry yarn forum then set about crocheting into real hats. It’s a follow-up to a previous knitting-based project, and instead of producing the hats you might expect, it goes into flights of fancy. Some are visibly hat-like, while others turn into avant-garde creations that defy any attempt to match them to real heads. A whole genre of hyperbolic progressions of crochet rows produces hats with organic folds that begin to resemble brains, and these tax both the stamina of the person doing the crochet and their supply of yarn.
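For anyone wanting to try something similar, fine-tuning GPT-2 on a pile of plain-text patterns can be done in a few lines with the gpt-2-simple package. Whether that's the exact tooling [Janelle] used isn't stated, and the dataset file and step count below are placeholders.

```python
import gpt_2_simple as gpt2

# Fine-tune the small public GPT-2 model on a plain-text file of patterns.
# The file name and step count are placeholders, not from the original project.
gpt2.download_gpt2(model_name="124M")

sess = gpt2.start_tf_sess()
gpt2.finetune(sess,
              dataset="crochet_hat_patterns.txt",  # the stack of hat patterns
              model_name="124M",
              steps=1000)

# After training, the model happily rambles out new "patterns".
gpt2.generate(sess, prefix="Hat:", length=200)
```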

Perhaps most amusingly, the neural network retains the ability to produce text, but when it does so it now inevitably steers the subject back to crochet hats. A Harry Potter sentence spawns a passage of something she aptly describes as “terrible crochet-themed erotica”, and such is the influence of the crochet patterns that this purple prose can even include enough crochet instructions to make it crochetable. It would be fascinating to see whether a similar model trained with G-code from Thingiverse would produce printable designs; what would an AI make of Benchy, for example?

We’ve been entertained by [Janelle]’s AI work before, both naming tomato varieties and creating pie recipes.

Thanks [Laura] for the tip.