AI Recognizes And Locks Out Murder Cats

Anyone with a cat knows that the little purring ball of fluff in your lap is one tiny step away from turning into a bloodthirsty serial killer. Give kitty half a chance and something small and defenseless is going to meet a slow, painful end. And your little killer is as likely as not to show off its handiwork by bringing home its victim – “Look what I did for you, human! Are you not proud?”

As useful as a murder-cat can be, dragging the bodies home for you to deal with can be – inconvenient. To thwart his adorable serial killer, [Metric], Amazon engineer [Ben Hamm] turned to an AI system that locks his prey-laden cat out of the house. [Metric] comes and goes as he pleases through a cat flap which, thanks to a solenoid and an Arduino, is now lockable. The decision to lock out [Metric] is made by an Amazon AWS DeepLens AI camera watching the approach to the cat flap. [Ben] trained three models: one to determine whether [Metric] is in the scene, one to determine whether he's coming or going, and one to see if he's alone or accompanied by a lifeless friend. In that last case he's locked out for 15 minutes and an automatic donation is made to the Audubon Society – that bit is pure genius. The video below is a brief but hilarious summary of the project, delivered to a Seattle audience that seems quite amused by the whole thing.
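For the curious, the decision logic boils down to chaining those three classifiers. Here's a minimal Python sketch of how such a loop might look; the model calls, the Flap class, and the donation hook are all hypothetical stand-ins, not [Ben]'s actual code:

```python
import time

LOCKOUT_SECONDS = 15 * 60  # the 15-minute lockout described in the talk

class Flap:
    """Stand-in for the solenoid-driven cat flap (hypothetical API)."""
    def lock(self):
        print("flap locked")
    def unlock(self):
        print("flap unlocked")

def donate_to_audubon():
    # Placeholder: the real project triggers a donation per prey event
    print("donation made")

def handle_frame(frame, flap, cat_model, direction_model, prey_model):
    """One pass of the decision logic; the three model calls are assumptions."""
    if not cat_model(frame):                      # is [Metric] in the scene?
        return
    if direction_model(frame) != "approaching":   # coming or going?
        return
    if prey_model(frame):                         # alone, or with a lifeless friend?
        flap.lock()
        donate_to_audubon()
        time.sleep(LOCKOUT_SECONDS)
        flap.unlock()
```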

So your cat isn’t quite the murder fiend that [Metric] is? An RFID-based cat door might suit your needs better.


Blisteringly Fast Machine Learning On An Arduino Uno

Even though machine learning, AKA 'deep learning' or 'artificial intelligence', has been around for several decades now, it's only recently that computing power has become fast enough to do anything really useful with it.

However, to fully understand how a neural network (NN) works, [Dimitris Tassopoulos] has stripped the concept down to pretty much the simplest example possible – a 3-input, 1-output network – and run inference on a number of MCUs, including the humble Arduino Uno. Miraculously, the Uno turned in a prediction time of just 114.4 μsec!
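At this scale, "running a neural network" boils down to a single dot product and an activation function. Something like this NumPy sketch (the weights, bias, and activation here are made up for illustration, not [Dimitris]'s trained values) is all the Uno has to evaluate, which explains the microsecond-class timing:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Illustrative weights and bias; the real values come out of training
w = np.array([0.5, -1.2, 0.8])
b = 0.1

def predict(x):
    """Forward pass of a 3-input, 1-output network: one dot product + activation."""
    return sigmoid(np.dot(w, x) + b)

print(predict(np.array([1.0, 0.0, 1.0])))
```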

Whilst we did not test the code on an MCU ourselves, we just happened to have Jupyter Notebook installed, so we ran the same code on a Raspberry Pi directly from [Dimitris]'s Bitbucket repo.

He explains on the project pages that, now that the hype around AI has died down a bit, it's the right time for engineers to get into the nitty-gritty of the theory and start using some of the 'tools', such as Keras, which have now matured into something genuinely useful.
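For reference, defining and training that sort of single-neuron network in Keras takes only a few lines. The toy data and hyperparameters below are placeholders rather than the project's own, but the shape of the workflow is the same: train on a PC, then export the weights for inference on the MCU.

```python
import numpy as np
from tensorflow import keras

# Toy training data: 3 inputs, 1 binary output (illustrative only)
X = np.random.rand(100, 3)
y = (X.sum(axis=1) > 1.5).astype(float)

model = keras.Sequential([
    keras.Input(shape=(3,)),
    keras.layers.Dense(1, activation="sigmoid"),  # one neuron, 3 weights + bias
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=50, verbose=0)

# These trained weights would then be baked into the Uno's C inference code
print(model.get_weights())
```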

In part 2 of the project, we get to see the guts of a more complicated NN with 3 inputs, a 32-node hidden layer, and 1 output, which runs on an Uno at a much more leisurely 5600 μsec per prediction.
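The slowdown makes sense if you count the multiplies: the single neuron needs 3, while the hidden layer plus output neuron needs 32 × 3 + 32 = 128, roughly in line with the ~50-fold jump in prediction time. A NumPy sketch of the forward pass (the weights here are random stand-ins for trained values) shows where the work goes:

```python
import numpy as np

rng = np.random.default_rng(0)
# Random stand-ins; in the real project these come from Keras training
W1, b1 = rng.standard_normal((32, 3)), rng.standard_normal(32)  # hidden layer
W2, b2 = rng.standard_normal((1, 32)), rng.standard_normal(1)   # output layer

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(x):
    h = sigmoid(W1 @ x + b1)      # 32 neurons: 96 multiply-accumulates
    return sigmoid(W2 @ h + b2)   # output neuron: 32 more

print(predict(np.array([1.0, 0.0, 1.0])))
```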

This exploration of ML in the embedded world is NOT the 'high-level' research stuff that tends to be inaccessible and hard to understand. We have covered Machine Learning On Tiny Platforms Like Raspberry Pi And Arduino before, but not with such an easy and thoroughly practical example.

AI And Art Appreciation

In 2019, using AI to evaluate artwork is finally more productive than foolish. We all hope that someday soon our Roomba will judge our living habits and give unsolicited advice on how we could spruce things up with a few pictures and some natural light. There are already plenty of deep-learning models dedicated to photo recognition, and a team in Croatia is adapting them for use on fine art. It makes sense that everything is geared toward cameras, since most of us have a vast photographic portfolio, but fine art takes longer to render. Even so, the collection on Wikiart.org is vast and already a hotbed for computer classification work, so they set to work there.

As they modify existing convolutional neural networks, they check themselves by comparing results with human ratings to keep what works and discard what flops. Fortunately, fine art has a lot of existing studies and commentary, whereas the majority of photographs in the public domain have nothing more than a file name and maybe some EXIF data. The difference here is that photograph-parsing AI can say, "That is a STOP sign," while the fine art AI can say, "That is a memorable painting of a sign."
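We haven't seen the team's code, but the usual way to retarget a photo-trained CNN at fine art is transfer learning: freeze a pre-trained backbone and bolt on a new head that predicts a rating instead of an object class. Here's a hedged Keras sketch of that idea, not necessarily the Croatian team's exact architecture:

```python
from tensorflow import keras

# Start from a CNN pre-trained on photographs (ImageNet) and retarget it
base = keras.applications.ResNet50(weights="imagenet", include_top=False,
                                   pooling="avg", input_shape=(224, 224, 3))
base.trainable = False  # keep the photo-trained features, train only the head

# New head regresses a human-comparable rating instead of naming objects
rating = keras.layers.Dense(1, activation="linear")(base.output)
model = keras.Model(base.input, rating)
model.compile(optimizer="adam", loss="mse")
# model.fit(wikiart_images, human_ratings, ...)  # hypothetical WikiArt dataset
```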

AI-Enabled Teletype Live Streams Nearly Coherent Conversations

If you’ve got a working Model 33 Teletype, every project starts to look like an excuse to use it. While the hammering, whirring symphony of a teleprinter going full tilt brings to mind a simpler time of room-sized computers and 300 baud connections, it turns out that a Teletype makes a decent AI conversationalist, within the limits of AI, of course.

The Teletype machine that [Hugh Pyle] used for this interesting project, a Model 33 ASR with a paper tape reader, is a nostalgia piece that figures prominently in many of his projects. As such, [Hugh] has access to tons of Teletype documentation, so when OpenAI released their GPT-2 text-generation language model, he decided to use the docs as a training set for the model and then use the Teletype to print out the text it generated. Initial results were about as weird as you'd expect from something trained on technical docs from the 1960s. The next step was obvious: make a chat-bot out of it and stream the results live. The Teletype can be seen clattering away in the recorded stream below, using the chat history as a prompt for generating text responses, sometimes coherent, sometimes disturbing, and sometimes just plain weird.
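[Hugh]'s exact pipeline isn't spelled out here, but the prompt-and-generate loop at the heart of such a chat-bot is straightforward. A sketch using the Hugging Face transformers library (a stand-in; a 2019 project would more likely have used OpenAI's own GPT-2 release, and [Hugh] fine-tuned on Teletype docs first) might look like this:

```python
from transformers import GPT2LMHeadModel, GPT2Tokenizer

# Stock GPT-2 weights here; fine-tuning on Teletype docs comes before this step
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

def reply(chat_history: str) -> str:
    """Use the running chat history as the prompt, sample a continuation."""
    ids = tokenizer.encode(chat_history, return_tensors="pt")
    out = model.generate(ids, max_length=ids.shape[1] + 60,
                         do_sample=True, top_k=40,
                         pad_token_id=tokenizer.eos_token_id)
    # Decode only the newly generated tokens, not the prompt
    return tokenizer.decode(out[0][ids.shape[1]:], skip_special_tokens=True)

# The returned string would then be hammered out on the Model 33, one
# character at a time, at a stately 110 baud.
print(reply("Human: How do I load the paper tape?\nBot:"))
```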

Alas, the chat-bot and stream are only active a couple of times a week, so you’ll have to wait a bit to try it out. But it looks like a fun project, and we appreciate the mash-up of retro tech and AI. We’ve seen teleprinters revived for modern use before, both for texting and Tweeting, but this one almost has a mind of its own.


AI At The Edge Hack Chat

Join us Wednesday at noon Pacific time for the AI at the Edge Hack Chat with John Welsh from NVIDIA!

Machine learning was once the business of big iron like IBM’s Watson or the nearly limitless computing power of the cloud. But the power in AI is moving away from data centers to the edge, where IoT devices are doing things once unheard of. Embedded systems capable of running modern AI workloads are now cheap enough for almost any hacker to afford, opening the door to applications and capabilities that were once only science fiction dreams.

John Welsh is a Developer Technology Engineer with NVIDIA, a leading company in the Edge computing space. He'll be dropping by the Hack Chat to discuss NVIDIA's Edge offerings, like the Jetson Nano we recently reviewed. Join us as we discuss NVIDIA's complete Jetson embedded AI product lineup, getting started with Edge AI, and where Edge AI is headed.


Our Hack Chats are live community events in the Hackaday.io Hack Chat group messaging. This week we’ll be sitting down on Wednesday, May 1 at noon Pacific time. If time zones have got you down, we have a handy time zone converter.

Click that speech bubble to the right, and you’ll be taken directly to the Hack Chat group on Hackaday.io. You don’t have to wait until Wednesday; join whenever you want and you can see what the community is talking about.

But Can Your AI Recognize Slugs?

The common garden slug is a mystery. Observing these creatures as they slowly emerge from their slimy lairs each evening, it’s hard to imagine how much damage they can do. With paradoxical speed, they can mow down row after row of tender seedlings, leaving nothing but misery in their mucusy wake.

To combat this slug menace, [Tegwyn☠Twmffat] (the [☠] is silent) is developing this AI-powered slug-busting system. The squeamish, or those challenged by the ethics of slug eradication, can relax: no slugs have been harmed yet. So far, [Tegwyn] has concentrated on detecting slugs, a decidedly non-trivial problem since few existing AI models are trained on slugs.

So far, [Tegwyn] has acquired 5,712 images of slugs in their natural environment – no mean feat, as they only come out at night, they blend into their background, and their slimy surfaces make for challenging reflections. The video below shows the trained model having moderate success with a static image of a slug; it also gives a glimpse of the hardware used, which includes an NVIDIA Jetson TX2. [Tegwyn] plans to capture even more images to refine the model and boost it from the current 50 to 60% confidence level to something that will allow for the remediation phase of the project, which apparently involves lasers. He's willing to entertain other methods of disposal, though; perhaps a salt-shooting turret gun?
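Whatever the disposal method ends up being, that confidence figure is what gates it. A toy sketch of how the thresholding might work (the detection tuples and threshold are illustrative, not from [Tegwyn]'s code): set it too low and you lase the lettuce, too high and the slugs dine in peace.

```python
CONFIDENCE_THRESHOLD = 0.6  # illustrative; tighten as the model improves

def slugs_to_target(detections):
    """detections: list of (label, confidence, bounding_box) tuples,
    as a typical object detector on the Jetson might emit."""
    return [box for label, conf, box in detections
            if label == "slug" and conf >= CONFIDENCE_THRESHOLD]

# Example: only the second detection clears the bar for the (future) laser
dets = [("slug", 0.52, (10, 20, 40, 60)), ("slug", 0.71, (100, 80, 130, 110))]
print(slugs_to_target(dets))
```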

This isn’t the first garden-tending project [Tegwyn] has tackled. You may recall The Weedinator, his 2018 Hackaday Prize entry. This slug buster is one of his entries for the 2019 Hackaday Prize, which was just announced. We’re looking forward to seeing the onslaught of cool new projects everyone will be coming up with.
