Grand Theft Auto V Used To Teach Self-Driving AI

For all the complexity involved in driving, it becomes second nature to respond to pedestrians, environmental conditions, even the basic rules of the road. When it comes to AI, teaching machine learning algorithms how to drive in a virtual world makes sense when the real one is packed full of squishy humans and other potential catastrophes. So, why not use the wildly successful virtual world of Grand Theft Auto V to teach machine learning programs to operate a vehicle?

The hard problem with any machine learning approach is gathering a large enough training set. The idea is this: the virtual world supplies labeled data to these programs far more efficiently than the time-consuming task of annotating objects in real-world images. In addition to scaling up the amount of data, researchers can manipulate weather, traffic, pedestrians, and more to create complex conditions with which to train an AI.

It’s pretty easy to teach the “rules of the road” — we do it with 16-year-olds all the time. But those earliest drivers have already spent a lifetime observing the real world and watching their parents drive. The virtual world inside GTA V is fantastically realistic. Humans are great pattern recognizers, and fickle gamers would cry foul at anything that doesn’t mirror real life. What we’re left with is a near-perfect source of test cases for machine learning to be applied to the hard part of self-driving: understanding the vastly variable world every vehicle encounters.

A team of researchers from Intel Labs and Darmstadt University in Germany created a program that automatically indexes the virtual world (as seen above), creating useful data for a machine learning program to consume. This isn’t a complete substitute for real-world experience, mind you, but the freedom to make a few mistakes before putting an AI behind the wheel of a vehicle has the potential to speed up the development of autonomous vehicles. Read the paper the team published: Playing for Data: Ground Truth from Video Games.
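To make “useful data” a bit more concrete, here is a minimal Python sketch of what consuming that kind of ground truth could look like: rendered frames paired with per-pixel class labels, ready to feed a segmentation model. The directory layout, file names, and class IDs are assumptions for illustration, not the team’s actual data format.

# Minimal sketch: iterate over (frame, per-pixel label) pairs exported from a
# game engine, the kind of ground truth the "Playing for Data" approach yields.
# Directory layout and class IDs below are assumed for illustration only.
from pathlib import Path

import numpy as np
from PIL import Image

FRAME_DIR = Path("gta_frames")   # rendered RGB frames, e.g. 00001.png
LABEL_DIR = Path("gta_labels")   # per-pixel class ID maps with matching names
CLASSES = {0: "void", 1: "road", 2: "vehicle", 3: "pedestrian"}  # assumed IDs

def labeled_frames():
    """Yield (image, label map) pairs suitable for training a segmentation model."""
    for frame_path in sorted(FRAME_DIR.glob("*.png")):
        label_path = LABEL_DIR / frame_path.name
        image = np.asarray(Image.open(frame_path).convert("RGB"))
        labels = np.asarray(Image.open(label_path))  # one class ID per pixel
        yield image, labels

if __name__ == "__main__":
    for image, labels in labeled_frames():
        present = {CLASSES.get(c, f"class {c}") for c in np.unique(labels)}
        print(image.shape, "contains:", ", ".join(sorted(present)))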

Continue reading “Grand Theft Auto V Used To Teach Self-Driving AI”

Kids! Don’t Try This at Home! Robot Destroys Mankind

From the Forbin Project to HAL 9000 to War Games, movies are replete with smart computers that decide to put humans in their place. If you study literature, you’ll find that science fiction isn’t usually about the future; it is about the present disguised as the future, and smart computers usually represent something like robots taking your job or nuclear weapons destroying your town.

Lately, I’ve been seeing something disturbing, though. [Elon Musk], [Bill Gates], [Steve Wozniak], and [Stephen Hawking] have all gone on record warning us that artificial intelligence is dangerous. I’ll grant you, all of those people must be smarter than I am. I’ll even stipulate that my knowledge of AI techniques is a little behind the times. But, what? Unless I’ve been asleep at the keyboard for too long, we are nowhere near having the kind of AI that any reasonable person would worry about being actually dangerous in the ways they are imagining.

Smart Guys Posturing

Keep in mind, I’m interpreting their comments as saying (essentially): “Soon machines will think and then they will out-think us and be impossible to control.” It is easy to imagine something like a complex AI making a bad decision while driving a car or an airplane, sure. But the computer that parallel parks your car isn’t going to suddenly take over your neighborhood and put brain implants in your dogs and cats. Anyone who thinks that is simply not thinking about how these things work. The current state of computer programming makes that as likely as saying, “Perhaps my car will start flying and we can go to Paris.” Ain’t happening.

Continue reading “Kids! Don’t Try This at Home! Robot Destroys Mankind”

RetroFab: Machine Designed Control of All the Things

On the Starship Enterprise, an engineer can simply tell the computer what he’d like it to do, and it will do the design work. Moments later, the replicator pops out the needed part (we assume to atomic precision). The work [Raf Ramakers] is doing seems like the Model T Ford of that technology. Funded by Autodesk, and done as part of his work as a PhD researcher in Human-Computer Interaction at Hasselt University, it is the way of the future.

The technology is really cool. Let’s say we wanted to control a toaster from our phone. The first step is to take a 3D scan of the object. After that, the user tells the computer which areas of the toaster are inputs and what kind of input they are. The user does this by painting a color onto the corresponding area of the rendering; we think this technique is intuitive and has lots of applications.

The computer then looks in its library of pre-engineered modules for ones that fit the application. It automagically generates a casing for the modules and fits it to the scanned surface of the toaster. It is then up to the user to follow the generated assembly instructions.
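To give a feel for that module-lookup step, here is a rough Python sketch: painted regions tagged with an input type get matched against a small library of pre-engineered modules. The input types, module names, and matching rule are invented for illustration and are not RetroFab’s actual library or solver.

# Illustrative sketch of the module-matching step: each painted region carries
# an input type, and we pick a compatible actuator module for it.
# Module names and fit rules here are made up; this is not RetroFab's library.
from dataclasses import dataclass

@dataclass
class PaintedRegion:
    name: str        # e.g. "browning knob"
    input_type: str  # e.g. "rotary", "lever", "pushbutton"

MODULE_LIBRARY = {
    "rotary": "servo_with_coupler",
    "lever": "linear_actuator",
    "pushbutton": "solenoid_pusher",
}

def match_modules(regions):
    """Map each painted region to a compatible module, or fail loudly."""
    plan = {}
    for region in regions:
        module = MODULE_LIBRARY.get(region.input_type)
        if module is None:
            raise ValueError(f"no module fits input type '{region.input_type}'")
        plan[region.name] = module
    return plan

toaster_inputs = [PaintedRegion("browning knob", "rotary"),
                  PaintedRegion("carriage lever", "lever")]
print(match_modules(toaster_inputs))
# {'browning knob': 'servo_with_coupler', 'carriage lever': 'linear_actuator'}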

Once the case and modules are installed, the work is done! The toaster can now be controlled from an app. It’s as easy as that. It’s this kind of tool that will really bring technologies like 3D printing to mass use. It’s one thing to have a machine that can produce most geometries for practically no cost; it’s another thing to have the skills to generate those geometries. Video of it in action after the break. Continue reading “RetroFab: Machine Designed Control of All the Things”

Impressive StarCraft 2 AI More Fair to Fleshy Opponents

There was a discussion in the comments when the AlphaGo results were released. Some commenters were postulating that AI researchers are discounting more fluid games such as the RTS StarCraft.

The comments then devolved into a discussion of what would make the AI fair to consider against a human player. Many times, AIs in RTS games win because they have direct access to the variables in the game. Rather than physically looking at the small area of the screen where a unit is located and then moving their eye to take in strategic information like exact location, health, unit level, and so on, the AI just knows instantly that it’s at 120x, 2000y, 76%, lvl5, etc. The AI also has no click lag because it gets direct access to the game’s API; it simply changes a unit’s variables and action queue directly.

So we were interested to see [Matt]’s StarCraft AI that requires the computer to actually look at the game board and click. [Matt]’s AI doesn’t see using OpenCV, which in its own way would force the computer to look in a way that’s unnatural to it. Instead, he wrote some code to intercept the behind-the-scenes calls to the DirectX library.

The computer is then able to make determinations about what it is looking at using the texture information and other data sent to the library. Unlike AIs that get a direct look at the variables, it then has to translate this and keep its own mental picture of the map and the situation. If a building is destroyed, for example, it has to go over and look at that part of the map, test what it’s seeing against a control, and then remove the building from its list.
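As a rough sketch of that bookkeeping (and not [Matt]’s actual code), the “mental picture” could be as simple as a dictionary of known buildings that only gets pruned once the bot has actually looked at the right spot and failed to confirm what it expected. The detection check here is a placeholder standing in for his DirectX-level recognition.

# Illustrative sketch of an AI maintaining its own "mental map" of buildings,
# updated only from what it has actually observed on screen. Not [Matt]'s code;
# confirm_building() is a stand-in for his DirectX-based detection.
known_buildings = {
    "barracks_1": (120, 2000),
    "supply_depot_3": (450, 1800),
}

def confirm_building(position, screen_observation):
    """Placeholder check: does the current observation still show a building here?"""
    return position in screen_observation

def update_mental_map(screen_observation, visible_area):
    """Drop buildings we expected to see in the visible area but can no longer confirm."""
    for name, pos in list(known_buildings.items()):
        if pos in visible_area and not confirm_building(pos, screen_observation):
            print(f"{name} at {pos} is gone, removing it from the mental map")
            del known_buildings[name]

# Example: the bot scrolls to a base and only the supply depot is still standing.
update_mental_map(screen_observation={(450, 1800)},
                  visible_area={(120, 2000), (450, 1800)})
print(known_buildings)  # {'supply_depot_3': (450, 1800)}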

The AI’s one big advantage is its robot fingers. Even though this AI has to click on the interface, it doesn’t do it with a weak, articulated fleshy nub like the rest of us. This allows the AI to get crazy Actions Per Minute (APM) in the range of 500 to 2000.

The AI has only been tested against StarCraft’s built-in cheater bots. So far it can win most games against the hard-level bots. If you want to see a video of what the AI is looking at, check after the break.

Continue reading “Impressive StarCraft 2 AI More Fair to Fleshy Opponents”

Swarm of Robot Boats Coming To An Ocean Near You Soon

Planning a hostile takeover of your local swimming pool? This might help: [Dr Anders Lyhne Christensen] sent us a note about his work at the BioMachines Lab of the Institute of Telecommunications in Portugal. They have been building a swarm of robot boats to experiment with autonomous swarms, with some excellent results.

In an autonomous swarm, each robot makes its own decisions and talks to its neighbors, and the combined actions of the individuals produce the overall behavior of the swarm, like ants in a nest. They’ve created swarms that can autonomously navigate, patrol an area, or monitor the temperature across an area and return to base to report the results. In an excellent video, [Anders] outlines how they used computational evolution to create these behaviors, randomly mutating a neural net to find the best approach, which is then sent to the real boats.
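The core loop of that kind of computational evolution is simple enough to sketch: mutate the weights of a small neural network, score each variant, and keep the best. The network size, mutation rate, and toy fitness function below are placeholders, not the BioMachines Lab’s actual setup, where each candidate controller would be scored in a swarm simulation.

# Toy sketch of evolving a neural-net controller by random mutation and
# selection. The fitness function is a placeholder; in the real project each
# candidate controller would be scored by running it in a boat-swarm simulation.
import numpy as np

rng = np.random.default_rng(0)
N_SENSORS, N_MOTORS = 8, 2                 # assumed controller dimensions
POPULATION, GENERATIONS, SIGMA = 20, 50, 0.1

def fitness(weights):
    """Placeholder score: reward weights close to an arbitrary target pattern."""
    target = np.linspace(-1.0, 1.0, weights.size).reshape(weights.shape)
    return -np.sum((weights - target) ** 2)

best = rng.normal(size=(N_MOTORS, N_SENSORS))
for generation in range(GENERATIONS):
    # Mutate the current champion to form a new population, and keep the champion too.
    population = [best + rng.normal(scale=SIGMA, size=best.shape)
                  for _ in range(POPULATION)]
    population.append(best)
    best = max(population, key=fitness)

print("best fitness after evolution:", round(fitness(best), 3))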

Perhaps coolest of all: the whole project is open source, with the brains of each boat running on a Raspberry Pi, and a CNC-milled foam hull with 3D-printed component mounts. Each boat costs about 300 Euro (about $340), but you could reduce the cost a bit by salvaging components, or once the less-expensive Pi Zero becomes obtainable. This project will no doubt be useful for many an evil genius who is sick of being splashed by the toughs at the local pool: a swarm of killer robots surrounding them would be an excellent way to keep them at bay.

Continue reading “Swarm of Robot Boats Coming To An Ocean Near You Soon”

Marvin Minsky, AI Pioneer, Dies at 88

Marvin Minsky, one of the early pioneers of neural networks, died on Sunday at the age of 88.

The obituary in the Washington Post paints a fantastic picture of his life. Minsky was friends with Richard Feynman, Isaac Asimov, Arthur C. Clarke, and Stanley Kubrick. He studied under Claude Shannon, worked with Alan Turing, had frequent conversations with John von Neumann, and had lunch with Albert Einstein.

Image: “Single layer ann” by Mcstrother (a diagram of a single-layer artificial neural network).

Minsky’s big ideas were really big. He built one of the first artificial neural networks, but was aiming higher — toward machines that could actually think rather than simply classify data. This was one of the driving forces behind his book Perceptrons, which showed some of the limitations of the type of neural networks (single-layer, feedforward) being used at the time. He wanted something more.
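The classic illustration of the limitation Perceptrons pointed out is XOR: a single-layer network can only draw a straight line through its input space, so it learns AND easily but can never separate XOR. Here is a quick Python sketch of the perceptron learning rule hitting exactly that wall.

# A single-layer perceptron learns linearly separable functions like AND,
# but no choice of weights lets it learn XOR: the limitation highlighted
# in Minsky and Papert's "Perceptrons".
import numpy as np

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
AND = np.array([0, 0, 0, 1])
XOR = np.array([0, 1, 1, 0])

def train_perceptron(targets, epochs=100, lr=0.1):
    """Classic perceptron learning rule with a hard threshold activation."""
    w, b = np.zeros(2), 0.0
    for _ in range(epochs):
        for x, t in zip(X, targets):
            y = int(w @ x + b > 0)     # threshold unit
            w += lr * (t - y) * x      # weight update
            b += lr * (t - y)          # bias update
    return (X @ w + b > 0).astype(int)

print("AND learned:", np.array_equal(train_perceptron(AND), AND))  # True
print("XOR learned:", np.array_equal(train_perceptron(XOR), XOR))  # False, and always will be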

Minsky’s book The Society of Mind is interesting because it reframes the problem of human thought from being a single top-down process to being a collaboration between many different brain regions, the nervous system, and indeed the body as a whole. This “connectionist” theme would become influential both in cognitive science and in robotics.

In short, Minsky was convinced that complex problems often had necessarily complex solutions. In research projects, he was in it for the long term, and encouraged a bottom-up design procedure where many smaller elements combine into a complicated whole. “The secret of what something means lies in how it connects to other things we know. That’s why it’s almost always wrong to seek the “real meaning” of anything. A thing with just one meaning has scarcely any meaning at all.”

Minsky was a very deep thinker, but he kept grounded by also being a playful inventor. Minsky is credited with inventing the “ultimate machine”, which would pop up in modern geek culture and be shared numerous times on Hackaday as the “most useless machine”. He inspired Claude Shannon to build one. Arthur C. Clarke said, “There is something unspeakably sinister about a machine that does nothing — absolutely nothing — except switch itself off.”

He also co-designed the Triadex Muse, an early synthesizer, sequencer, and “automatic composer” that creates fairly complex and original patterns from minimal input. It’s an obvious offshoot of his explorations in artificial intelligence, and it’s on our bucket list of must-play-with electronic instruments.

Minsky’s web site at MIT has a number of his essays, and the full text of “The Society of Mind”, all available for your reading pleasure. It’s worth a bit of your time, not just in memoriam of a great thinker and a wacky inventor, but also because we bet you’ll see the world a little bit differently afterwards. That’s a legacy that lasts.

A Short History of AI, and Why It’s Heading in the Wrong Direction

Sir Winston Churchill often spoke of World War 2 as the “Wizard War”. Both the Allies and Axis powers were in a race to gain the electronic advantage over each other on the battlefield. Many technologies were born during this time – one of them being the ability to decipher coded messages. The devices that were able to achieve this feat were the precursors to the modern computer. In 1946, the US Military developed the ENIAC, or Electronic Numerical Integrator And Computer. Using over 17,000 vacuum tubes, the ENIAC was a few orders of magnitude faster than all previous electro-mechanical computers. The part that excited many scientists, however, was that it was programmable. It was the notion of a programmable computer that would give rise to the idea of artificial intelligence (AI).

As time marched forward, computers became smaller and faster. The invention of the transistor gave rise to the microprocessor, which accelerated the development of computer programming. AI began to pick up steam, and pundits began to make grand claims about how computer intelligence would soon surpass our own. Programs like ELIZA and Blocks World fascinated the public and certainly gave the impression that when computers became faster, as they surely would in the future, they would be able to think like humans do.
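For a sense of why ELIZA fascinated people without possessing any real understanding, the whole trick fits in a few lines of pattern matching and substitution. The rules below are a tiny illustrative subset written in Python, not Weizenbaum’s original script.

# A tiny ELIZA-style responder: keyword patterns and canned reflections, with
# no model of what the words mean. These rules are a small illustrative subset
# in the spirit of the original, not Weizenbaum's actual script.
import re

RULES = [
    (r"i need (.*)", "Why do you need {0}?"),
    (r"i am (.*)", "How long have you been {0}?"),
    (r"my (.*)", "Tell me more about your {0}."),
    (r"(.*)\?", "What do you think?"),
]

def respond(sentence):
    text = sentence.lower().strip().rstrip(".")
    for pattern, template in RULES:
        match = re.fullmatch(pattern, text)
        if match:
            return template.format(*match.groups())
    return "Please, go on."

print(respond("I am worried about my toaster"))
# -> How long have you been worried about my toaster?
print(respond("My perceptron never converges."))
# -> Tell me more about your perceptron never converges.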

But it soon became clear that this would not be the case. While these and many other AI programs were good at what they did, neither they nor their algorithms were adaptable. They were ‘smart’ at their particular task, and could even be considered intelligent judging from their behavior, but they had no understanding of the task, and they didn’t hold a candle to the intellectual capabilities of even a typical lab rat, let alone a human.

Continue reading “A Short History of AI, and Why It’s Heading in the Wrong Direction”