Wrap Your Mind Around Neural Networks

Artificial Intelligence is playing an ever-increasing role in the lives of civilized nations, though most citizens probably don’t realize it. It’s now commonplace to speak with a computer when calling a business. Facebook is becoming scary accurate at recognizing faces in uploaded photos. Physical interaction with smartphones is becoming a thing of the past… with Apple’s Siri and Google Speech, it’s slowly but surely becoming easier to simply talk to your phone and tell it what to do than to type or touch an icon. Try this if you haven’t before — if you have an Android phone, say “OK Google”, followed by “Lumos”. It’s magic!

Advertisements for products we’re interested in pop up on our social media accounts as if something is reading our minds. Truth is, something is reading our minds… though it’s hard to pin down exactly what that something is. An advertisement might pop up for something that we want, even though we never realized we wanted it until we saw it. This is not coincidental, but stems from an AI algorithm.

At the heart of many of these AI applications lies a process known as Deep Learning. There has been a lot of talk about Deep Learning lately, not only here on Hackaday, but all over the interwebs. And like most things related to AI, it can be a bit complicated and difficult to understand without a strong background in computer science.

If you’re familiar with my quantum theory articles, you’ll know that I like to take complicated subjects, strip away the complication as best I can, and explain them in a way that anyone can understand. It is the goal of this article to apply a similar approach to this idea of Deep Learning. If neural networks make you cross-eyed and machine learning gives you nightmares, read on. You’ll see that “Deep Learning” sounds like a daunting subject, but is really just a $20 term used to describe something whose underpinnings are relatively simple.

Continue reading “Wrap Your Mind Around Neural Networks”

Neural Networks Walk Better Than Humans for Game Animation

Modern-day video games have come a long way from Mario the plumber hopping across the screen. The incredibly intricate environments of today’s games are part of the lure for new gamers, and the experience is brought to life by the characters interacting with the scene. However, the illusion of the virtual world is disrupted by the unnatural movements of the figures performing actions such as turning around suddenly or climbing a hill.

To remedy the abrupt movements, [Daniel Holden et al.] recently published a paper (PDF) and a video showing a method to greatly improve the real-time character control mechanism. The proposed system uses a neural network that has been trained on a large data set of walking, jumping, and other sequences over various terrains. The key is breaking down bipedal movement and its cyclic behaviour into a series of sub-steps or phases, each of which translates to a natural posture for the character while moving. The system precomputes the next phases offline to conserve computational resources at runtime. Then, taking into account the user’s control input, the character’s previous pose (including joint positions), and the terrain geometry, the next frame of the animation is computed. The computation is done by a regression network that calculates the future positions of the joints, and a blending function is used for Motion Matching as described in a presentation (PDF) and video by [Simon Clavet].
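To make that concrete, here’s a minimal sketch of what one step of such a phase-based controller could look like. This is our illustration of the concept, not the paper’s code; regress stands in for the trained network, and a “pose” is reduced to a plain vector of joint positions.

```python
# A toy sketch of one frame of phase-based character control.
import numpy as np

def blend(a, b, t):
    """Linear blend between two poses; production code would slerp joint rotations."""
    return (1.0 - t) * a + t * b

def next_frame(phase, pose, control, terrain, regress):
    """Advance the animation by one frame.

    regress is a stand-in for the trained regression network: it maps
    (inputs, phase) to a predicted pose and a phase increment.
    """
    x = np.concatenate([pose, control, terrain])      # network input
    predicted_pose, phase_delta = regress(x, phase)   # future joint positions
    new_pose = blend(pose, predicted_pose, 0.5)       # smooth toward the prediction
    new_phase = (phase + phase_delta) % (2 * np.pi)   # the gait cycle wraps around
    return new_phase, new_pose
```

Continue reading “Neural Networks Walk Better Than Humans for Game Animation”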

Google AIY: Artificial Intelligence Yourself

When Amazon released the API to their voice service Alexa, they basically forced any serious players in this domain to bring their offerings out into the hacker/maker market as well. Now Google and Raspberry Pi have come together to bring us ‘Artificial Intelligence Yourself’ or AIY.

A free hardware kit made by Google was distributed with Issue 57 of The MagPi, a magazine targeted at makers and hobbyists; you can see the kit in the video after the break. The kit contains a Raspberry Pi Voice HAT, a microphone board, a speaker, and a number of small bits to mount everything on a Raspberry Pi 3. Putting it all together and following the instructions on the official site gets you a Google Voice Interaction Kit with a bunch of IOs just screaming to be put to good use.

The source code for the Python app can be downloaded from GitHub and consists of a loop that awaits a trigger. This trigger can be the press of a button or a clap near the microphones. When a trigger is detected, the recorder function takes over, sending the audio stream to the Google Cloud. Speech-to-Text conversion happens there, and the result is returned via a Text-to-Speech engine that helps the system talk back. The repository suggests that the official Voice Kit SD image (893 MB download) is based on Raspbian, so don’t go reflashing a memory card right away; you should be able to add this to an existing install.
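In outline, that loop is simple. Here’s a hedged sketch of its structure; none of the names below come from the actual AIY source, they just mirror the flow described above.

```python
# A simplified sketch of the voice loop: wait, record, recognize, respond.
def handle_command(text):
    # Placeholder: map the recognized text to an action,
    # e.g. toggling one of those IOs.
    return "You said: " + text

def main_loop(wait_for_trigger, recorder, cloud_stt, tts):
    while True:
        wait_for_trigger()                  # button press or clap near the mic
        audio = recorder.record()           # capture the spoken request
        text = cloud_stt.recognize(audio)   # Speech-to-Text happens in the cloud
        tts.say(handle_command(text))       # Text-to-Speech talks back
```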

Continue reading “Google AIY: Artificial Intelligence Yourself”

Hackaday Prize Entry: Detecting Adulterated Food Using AI

Adulterated food is food that has had a substance added to it to save on manufacturing costs. That substance can have a negative effect, reduce the food’s potency, or have no effect at all. In many cases the practice is illegal. It’s also a widespread problem, one which [G. Vignesh] has decided to take on as his entry for the 2017 Hackaday Prize, an AI-Based Adulteration Detector.

On his hackaday.io Project Details page he outlines some existing methods for testing food, some of which you can do at home: adulterated sugar may have chalk added to it, so put it in water; the sugar will dissolve while the chalk will not. His approach is instead to take high-definition photos of the food and, on a Raspberry Pi, apply filters to them to reveal various properties such as density, size, color, texture, and so on. He also mentions doing image analysis using a deep learning neural network. This project touches us all, and we’ll be watching it with interest.
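As a rough illustration of the kind of filtering he describes (our guess at plausible features, not the project’s actual pipeline), a few lines of OpenCV on the Raspberry Pi can already pull simple color and texture statistics out of a photo:

```python
# Extract simple color and texture features from a photo of a food sample.
# These particular features are illustrative assumptions, not the
# project's published code.
import cv2
import numpy as np

def food_features(path):
    img = cv2.imread(path)
    hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    return {
        "mean_hue": float(np.mean(hsv[:, :, 0])),                 # overall color
        "saturation_spread": float(np.std(hsv[:, :, 1])),         # color uniformity
        "texture": float(cv2.Laplacian(gray, cv2.CV_64F).var()),  # edge roughness
    }
```

Feature vectors like these could then be handed to a classifier, or the raw images fed straight to the deep learning network he mentions.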

If all this talk of adulterated food makes you nervous about your food supply, then consider growing your own, hacker style. One such project we’ve seen here on Hackaday is Farmbot, an open-source CNC farming robot. Another is MIT’s OpenAg Food Computer, a robotically controlled and monitored growing chamber.

Chess AI, Old School

People have been interested in chess-playing computers since before there were any chess-playing computers. In a 1950 paper, [Claude Shannon] defined two major chess-playing strategies, and apparently practical chess programs still use the techniques he outlined. If you’ve ever wondered how to make a computer play chess, [FreeCodeCamp] has an interesting post that walks you through building a chess engine step by step.

The code is in JavaScript, but the approach struck us as old school. It is interesting to watch the code evolve from random moves, to a slightly smarter strategy, to deeper searching. Because it is in JavaScript, you can follow along in your browser and find out when the program gets smart enough to beat you. The final version is even on GitHub.
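The article builds its engine in JavaScript; as a taste of the core idea, here is Shannon’s minimax search sketched in Python instead, using the python-chess library (pip install chess) for move generation. The naive material evaluation is the usual starting point before refinements like positional scoring and alpha-beta pruning.

```python
# Minimax over a chess game tree with a naive material evaluation.
import chess

PIECE_VALUES = {chess.PAWN: 1, chess.KNIGHT: 3, chess.BISHOP: 3,
                chess.ROOK: 5, chess.QUEEN: 9, chess.KING: 0}

def evaluate(board):
    """Material count: positive scores favor White."""
    score = 0
    for piece in board.piece_map().values():
        value = PIECE_VALUES[piece.piece_type]
        score += value if piece.color == chess.WHITE else -value
    return score

def minimax(board, depth, maximizing):
    """Search depth plies ahead, assuming both sides play their best."""
    if depth == 0 or board.is_game_over():
        return evaluate(board)
    best = float("-inf") if maximizing else float("inf")
    for move in board.legal_moves:
        board.push(move)
        score = minimax(board, depth - 1, not maximizing)
        board.pop()
        best = max(best, score) if maximizing else min(best, score)
    return best
```

Picking the legal move with the best minimax score is exactly the jump from “random moves” to “slightly smarter strategy”, and increasing the depth is the “deeper searching” step.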

Continue reading “Chess AI, Old School”

Paramotoring for the Paranoid: Google’s AI and Relationship Mining

My son approached me the other day with his best 17-year-old sales pitch: “Dad, I need a bucket of cash!” Given that I was elbow-deep in suds, doing the dishes he had neglected the night before, I mentioned that it was a singularly bad time for him to ask for anything.

Never one to be dissuaded, he plunged ahead with the reason for the funding request. He had stumbled upon a series of YouTube videos about paramotoring, and it was love at first sight for him. He waxed eloquent about how cool it would be to strap a big fan to his back and soar with the birds on a nylon parasail wing. It was actually a pretty good pitch, complete with an exposition on the father-son bonding opportunities paramotoring presented. He kind of reminded me of the twelve-year-old version of myself trying to convince my dad to spend $600 on something called a “TRS-80”, without which I’d surely perish.

Needless to say, the $2500 he needed for the opportunity to break his neck was not forthcoming. But what happened the next day kind of blew my mind. As I was reviewing my YouTube feed, there among the [Abom79] and [AvE] videos I normally find in my “Recommended” queue was a video about – paramotoring. Now how did that get there?

Continue reading “Paramotoring for the Paranoid: Google’s AI and Relationship Mining”

The AI is Always Watching

My phone can now understand me but it’s still an idiot when it comes to understanding what I want. We have both the hardware capacity and the software capacity to solve this right now. What we lack is the social capacity.

We are currently in a dumb state of personal automation. I have Google Now enabled on my phone. Every single month, Google Now reminds me of bills coming due that I have already paid. It doesn’t see me pay them; it just sees the email I received and the due date. A creature of habit, I pay my bills on the last day of the month even though that may be weeks early. This is the easiest thing in the world for a computer to learn, but it’s an open-loop system, so no learning can happen.

Earlier this month, [Cameron Coward] wrote an outstanding pair of articles on AI research that helped shed some light on this problem. The correct term for this level of personal automation is “weak AI”. What I want is Artificial General Intelligence (AGI) on a personal level. But that’s not going to happen, and I am the problem. Here’s why.

Continue reading “The AI is Always Watching”