Kids! Don’t Try This at Home! Robot Destroys Mankind

From the Forbin Project to HAL 9000 to War Games, movies are replete with smart computers that decide to put humans in their place. If you study literature, you’ll find that science fiction usually isn’t about the future; it’s about the present disguised as the future, and smart computers usually represent something like robots taking your job, or nuclear weapons destroying your town.

Lately, I’ve been seeing something disturbing, though. [Elon Musk], [Bill Gates], [Steve Wozniak], and [Stephen Hawking] have all gone on record warning us that artificial intelligence is dangerous. I’ll grant you, all of those people must be smarter than I am. I’ll even stipulate that my knowledge of AI techniques is a little behind the times. But, what? Unless I’ve been asleep at the keyboard for too long, we are nowhere near having the kind of AI that any reasonable person would worry about being actually dangerous in the ways they are imagining.

Smart Guys Posturing

Keep in mind, I’m interpreting their comments as saying (essentially): “Soon machines will think and then they will out-think us and be impossible to control.” It is easy to imagine something like a complex AI making a bad decision while driving a car or an airplane, sure. But the computer that parallel parks your car isn’t going to suddenly take over your neighborhood and put brain implants in your dogs and cats. Anyone who thinks that is simply not thinking about how these things work. The current state of computer programming makes that as likely as saying, “Perhaps my car will start flying and we can go to Paris.” Ain’t happening.

RetroFab: Machine Designed Control of All the Things

On the Starship Enterprise, an engineer can simply tell the computer what he’d like it to do, and it will do the design work. Moments later, the replicator pops out the needed part (we assume to atomic precision). The work [Raf Ramakers] is doing seems like the Model T Ford of that technology. Funded by Autodesk, and part of his work as a PhD researcher in Human-Computer Interaction at Hasselt University, it is the way of the future.

The technology is really cool. Let’s say we wanted to control a toaster from our phone. The first step is to take a 3D scan of the object. After that, the user tells the computer which areas of the toaster are inputs and what kind of input they are. The user does this by painting a color on the corresponding area of the rendering. We think this technique is intuitive and has lots of applications.

The computer then looks in its library of pre-engineered modules for ones that fit the application. It automagically generates a casing for the modules and fits it to the scanned surface of the toaster. It is then up to the user to follow the generated assembly instructions.
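To make the idea concrete, here is a rough sketch of what that module-matching step might look like: painted regions carry an input type and a rough footprint, and the library is searched for a module of the matching type that fits. The data structures, module names, and matching rule below are our own assumptions for illustration, not RetroFab’s actual code.

```python
# Hypothetical sketch of the module-matching step: the painted regions, module
# names, and fitting rule are assumptions, not RetroFab's actual code.
from dataclasses import dataclass

@dataclass
class PaintedRegion:
    name: str          # e.g. "toaster-lever", painted by the user on the scan
    input_type: str    # e.g. "toggle", "momentary", "rotary"
    width_mm: float    # rough footprint available on the scanned surface
    height_mm: float

@dataclass
class Module:
    part: str
    input_type: str
    width_mm: float
    height_mm: float

LIBRARY = [
    Module("micro-servo", "toggle", 23.0, 12.0),
    Module("push-solenoid", "momentary", 30.0, 15.0),
    Module("stepper-knob", "rotary", 42.0, 42.0),
]

def pick_module(region):
    """Return the first library module of the right type that fits the region."""
    for module in LIBRARY:
        if (module.input_type == region.input_type
                and module.width_mm <= region.width_mm
                and module.height_mm <= region.height_mm):
            return module
    return None

lever = PaintedRegion("toaster-lever", "toggle", 40.0, 20.0)
print(pick_module(lever))   # -> the micro-servo module, in this toy library
```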

Once the case and modules are installed, the work is done! The toaster can now be controlled from an app. It’s as easy as that. It’s this kind of technology that will really bring technologies like 3D printing to mass use. It’s one thing to have a machine that can produce most geometries for practically no cost. It’s another thing to have the skills to generate those geometries. Video of it in action after the break.

Impressive StarCraft 2 AI More Fair to Fleshy Opponents

There was a discussion in the comments when the AlphaGo results were released. Some commenters postulated that AI researchers are discounting more fluid games such as the RTS StarCraft.

The comments then devolved into a discussion of what would make an AI fair to pit against a human player. Many times, AI in RTS games wins because it has direct access to the variables in the game. Rather than physically looking at the small area of the screen where a unit is located and then moving their eyes to take in strategic information like exact location, health, and unit level, the AI just knows, instantly, that the unit is at 120x, 2000y, 76%, lvl5, and so on. The AI also has no click lag, since it gets direct access to the game’s API; it simply changes the variables and action queue of a unit directly.

So we were interested to see [Matt]’s StarCraft AI, which requires the computer to actually look at the game board and click. [Matt]’s AI doesn’t see using OpenCV, which in its own way would force the computer to look in a way that’s unnatural to it. Instead, he wrote some code to intercept the behind-the-scenes calls to the DirectX library.

The computer is then able to make determinations about what it is looking at using the texture information and other data sent to the library. Unlike AIs that get a direct look at the variables, it has to translate all of this and keep its own mental picture of the map and the situation. If a building is destroyed, for example, it has to go over and look at that part of the map, test what it’s seeing against a control, and then remove the building from its list.
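Here is a minimal sketch of that “own mental picture” idea: the bot holds a list of buildings it believes exist and periodically checks each one against what is actually being drawn at that spot, dropping anything that no longer matches. The fake frame below stands in for the intercepted DirectX texture data; none of this is [Matt]’s actual code.

```python
# Minimal sketch of keeping a belief map and reconciling it with observations.
believed_buildings = {
    (120, 2000): "barracks",
    (300, 1800): "supply_depot",
}

def rendered_texture_at(x, y):
    """Stand-in for 'what did the game actually draw at (x, y)?'"""
    fake_frame = {(300, 1800): "supply_depot"}   # the barracks has been destroyed
    return fake_frame.get((x, y), "rubble")

def refresh_building_list():
    """Re-check each believed building against what is actually on screen."""
    for (x, y), building in list(believed_buildings.items()):
        if rendered_texture_at(x, y) != building:
            del believed_buildings[(x, y)]   # belief no longer matches observation

refresh_building_list()
print(believed_buildings)   # {(300, 1800): 'supply_depot'}
```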

The AI’s one big advantage is its robot fingers. Even though this AI has to click on the interface, it doesn’t do it with a weak, articulated fleshy nub like the rest of us. This lets the AI reach crazy Actions Per Minute (APM) figures in the range of 500 to 2000.

The AI has only been tested against StarCraft’s built-in cheating bots. So far it can win most games against the hard-level bots. If you want to see a video of what the AI is looking at, check after the break.

Swarm of Robot Boats Coming To An Ocean Near You Soon

Planning a hostile takeover of your local swimming pool? This might help: [Dr Anders Lyhne Christensen] sent us a note about his work at the BioMachines Lab of the Institute of Telecommunications in Portugal. They have been building a swarm of robot boats to experiment with autonomous swarms, with some excellent results.

In an autonomous swarm, each robot makes its own decisions and talks to its neighbors, and the combined behavior of the swarm produces an overall behavior, like ants in a nest. They’ve created swarms that can autonomously navigate, patrol or monitor the temperature of an area, and return to base to report the results. In an excellent video, [Anders] outlines how they used computational evolution to create these behaviors, randomly mutating a neural net to find the best approach, which is then sent to the real boats.
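The evolutionary loop itself is simple enough to sketch: keep a set of neural-net weights, mutate them, and keep the mutant only if it scores better on the task. The toy controller, fitness function, and mutation settings below are our own assumptions for illustration, not the BioMachines Lab’s actual setup.

```python
# Toy (1+1) evolution of a tiny neural controller: mutate, keep if better.
import math
import random

random.seed(1)

N_IN, N_OUT = 3, 2           # e.g. bearing/distance sensors in, motor speeds out

def random_weights():
    return [random.uniform(-1, 1) for _ in range(N_IN * N_OUT)]

def controller(weights, inputs):
    """Single-layer network: two motor outputs from three sensor inputs."""
    outs = []
    for o in range(N_OUT):
        s = sum(weights[o * N_IN + i] * inputs[i] for i in range(N_IN))
        outs.append(math.tanh(s))
    return outs

def fitness(weights):
    """Toy task: start away from the origin and try to end up near it."""
    x, y, heading = 5.0, 5.0, 0.0
    for _ in range(50):
        dist = math.hypot(x, y)
        bearing = math.atan2(-y, -x) - heading
        left, right = controller(weights, [math.cos(bearing), math.sin(bearing), dist / 10])
        heading += (right - left) * 0.2
        speed = (left + right) / 2
        x += speed * math.cos(heading)
        y += speed * math.sin(heading)
    return -math.hypot(x, y)          # closer to the origin is better

best = random_weights()
for generation in range(200):
    mutant = [w + random.gauss(0, 0.2) for w in best]
    if fitness(mutant) > fitness(best):
        best = mutant                 # keep the improvement, discard the rest

print(round(-fitness(best), 2), "units from the target after evolution")
```

In practice the evaluation step is the expensive part, which is why evolving behaviors in simulation and only then sending the winning controller to the real boats makes sense.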

Perhaps coolest of all: the whole project is open source, with the brains of each boat running on a Raspberry Pi, and a CNC-milled foam hull with 3D-printed component mounts. Each boat costs about 300 Euro (about $340), but you could reduce the cost a bit by salvaging components, or by waiting until the less-expensive Pi Zero becomes obtainable. This project will no doubt be useful for many an evil genius who is sick of being splashed by the toughs at the local pool: a swarm of killer robots surrounding them would be an excellent way to keep them at bay.

Marvin Minsky, AI Pioneer, Dies at 88

Marvin Minsky, one of the early pioneers of neural networks, died on Sunday at the age of 88.

The obituary in the Washington Post paints a fantastic picture of his life. Minsky was friends with Richard Feynman, Isaac Asimov, Arthur C. Clarke, and Stanley Kubrick. He studied under Claude Shannon, worked with Alan Turing, had frequent conversations with John Von Neumann, and had lunch with Albert Einstein.

Image: “Single layer ANN” by Mcstrother

Minsky’s big ideas were really big. He built one of the first artificial neural networks, but he was aiming higher: toward machines that could actually think rather than simply classify data. That ambition was one of the driving forces behind his book Perceptrons, which showed some of the limitations of the type of neural network (single-layer, feedforward) being used at the time. He wanted something more.
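The canonical example of that limitation is XOR: a single-layer perceptron can learn a linearly separable function like AND, but no setting of its weights can get XOR right. A quick sketch (pure Python, no libraries) makes the point:

```python
# Single-layer perceptron trained with the classic perceptron learning rule.
def train_perceptron(samples, epochs=100, lr=0.1):
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

def accuracy(w, b, samples):
    hits = sum(
        (1 if w[0] * x1 + w[1] * x2 + b > 0 else 0) == target
        for (x1, x2), target in samples
    )
    return hits / len(samples)

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

for name, data in [("AND", AND), ("XOR", XOR)]:
    w, b = train_perceptron(data)
    print(name, accuracy(w, b, data))   # AND reaches 1.0; XOR never gets all four right
```

Adding a hidden layer fixes XOR, but the book’s deeper point was that classification alone is a long way from thinking.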

Minsky’s book The Society of Mind is interesting because it reframes the problem of human thought from being a single top-down process to being a collaboration between many different brain regions, the nervous system, and indeed the body as a whole. This “connectionist” theme would become influential both in cognitive science and in robotics.

In short, Minsky was convinced that complex problems often had necessarily complex solutions. In research projects, he was in it for the long term, and he encouraged a bottom-up design procedure in which many smaller elements combine into a complicated whole. “The secret of what something means lies in how it connects to other things we know. That’s why it’s almost always wrong to seek the ‘real meaning’ of anything. A thing with just one meaning has scarcely any meaning at all.”

Minsky was a very deep thinker, but he kept grounded by also being a playful inventor. He is credited with inventing the “ultimate machine”, which would pop up in modern geek culture and be shared numerous times on Hackaday as the “most useless machine”. He inspired Claude Shannon to build one. Arthur C. Clarke said, “There is something unspeakably sinister about a machine that does nothing — absolutely nothing — except switch itself off.”

He also co-designed the Triadex Muse, an early synthesizer, sequencer, and “automatic composer” that creates fairly complex and original patterns with minimal input. It’s an obvious offshoot of his explorations in artificial intelligence, and it’s on our bucket list of must-play-with electronic instruments.
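As a very loose illustration of the general idea, a Muse-style “automatic composer” can be thought of as a counter and a shift register running continuously, with a handful of user-selected taps into their bits picking each note. The sketch below follows that principle only; it is not the Triadex Muse’s actual logic, and the register sizes and tap choices are made up.

```python
# Loose, hypothetical sketch of a Muse-like pattern generator: a counter and a
# feedback shift register tick along, and selected bits address a note table.
SCALE = ["C", "D", "E", "F", "G", "A", "B", "C'"]   # one octave, 3 bits -> 8 notes

def muse_like_sequence(taps=(0, 2, 5), steps=16):
    counter = 0
    shift_reg = 0b1010011                 # 7-bit register with an arbitrary seed
    notes = []
    for _ in range(steps):
        counter = (counter + 1) & 0xF     # 4-bit counter
        feedback = ((shift_reg >> 6) ^ counter) & 1
        shift_reg = ((shift_reg << 1) | feedback) & 0x7F
        bits = list(f"{counter:04b}{shift_reg:07b}")   # the pool of tap points
        index = int("".join(bits[t] for t in taps), 2)
        notes.append(SCALE[index])
    return notes

print(muse_like_sequence())               # different taps give different melodies
```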

Minsky’s web site at MIT has a number of his essays, and the full text of “The Society of Mind”, all available for your reading pleasure. It’s worth a bit of your time, not just in memoriam of a great thinker and a wacky inventor, but also because we bet you’ll see the world a little bit differently afterwards. That’s a legacy that lasts.

A Short History of AI, and Why It’s Heading in the Wrong Direction

Sir Winston Churchill often spoke of World War 2 as the “Wizard War”. Both the Allies and the Axis powers were in a race to gain the electronic advantage over each other on the battlefield. Many technologies were born during this time – one of them being the ability to decipher coded messages. The devices that were able to achieve this feat were the precursors to the modern computer. In 1946, the US military unveiled the ENIAC, or Electronic Numerical Integrator And Computer. Using over 17,000 vacuum tubes, the ENIAC was a few orders of magnitude faster than all previous electro-mechanical computers. The part that excited many scientists, however, was that it was programmable. It was the notion of a programmable computer that would give rise to the idea of artificial intelligence (AI).

As time marched forward, computers became smaller and faster. The invention of the transistor gave rise to the microprocessor, which accelerated the development of computer programming. AI began to pick up steam, and pundits began to make grand claims about how computer intelligence would soon surpass our own. Programs like ELIZA and Blocks World fascinated the public and certainly gave the impression that when computers became faster, as they surely would, they would be able to think like humans do.

But it soon became clear that this would not be the case. While these and many other AI programs were good at what they did, neither they nor their algorithms were adaptable. They were ‘smart’ at their particular task, and could even be considered intelligent judging from their behavior, but they had no understanding of the task, and they didn’t hold a candle to the intellectual capabilities of even a typical lab rat, let alone a human.
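ELIZA is a good example of why: it worked by matching input text against a list of patterns and echoing pieces of it back, with no model of what any of it meant. A toy sketch in that spirit (the rules below are invented for illustration, not Weizenbaum’s original script) shows how quickly the illusion breaks:

```python
# Tiny ELIZA-flavored responder: a few regex rules, no understanding at all.
import re

RULES = [
    (r"i feel (.*)", "Why do you feel {0}?"),
    (r"i am (.*)",   "How long have you been {0}?"),
    (r".*mother.*",  "Tell me more about your family."),
]

def respond(text):
    text = text.lower().strip(".!?")
    for pattern, template in RULES:
        match = re.fullmatch(pattern, text)
        if match:
            return template.format(*match.groups())
    return "Please go on."          # the giveaway: anything unexpected gets this

print(respond("I feel ignored by my robot"))      # Why do you feel ignored by my robot?
print(respond("Explain quantum chromodynamics"))  # Please go on.
```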

The Machine that Japed: Microsoft’s Humor-Emulating AI

Ten years ago, highbrow culture magazine The New Yorker started a contest. Each week, a cartoon with no caption is published in the back of the magazine. Readers are encouraged to submit an apt and hilarious caption that captures the magazine’s infamous wit. Editors select the top three entries to vie for reader votes and the prestige of having captioned a New Yorker cartoon.

The magazine receives about 5,000 submissions each week, which are scrutinized by cartoon editor [Bob Mankoff] and a parade of assistants that burn out after a year or two. But soon, [Mankoff]’s assistants may have their own assistant thanks to Microsoft researcher [Dafna Shahaf].

[Dafna Shahaf] heard [Mankoff] give a speech about the New Yorker cartoon archive a year or so ago, and it got her thinking about what such a vast collection could mean for artificial intelligence. The intricate nuances of humor and wordplay have long presented a special challenge to AI researchers. [Shahaf] wondered: could computers begin to learn what makes a caption funny, given a big enough canon?

[Shahaf] threw ninety years’ worth of wry, one-panel humor at the system. Given this knowledge base, she trained it to choose funny captions for cartoons based on the jokes of similar cartoons. But in order to help [Mankoff] and his assistants choose among the entries, the AI must also be able to rank the comedic value of jokes. And since computer vision software is made to decipher photos and not drawings, [Shahaf] and her team faced another task: assigning keywords to each cartoon. The team described each one in terms of its contextual anchors and, subsequently, its situational anomalies. For example, in the image above, the context keywords could be car dealership, car, customer, and salesman. Anomalies might include claws, fangs, and zoomorphic automobile.
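One back-of-the-envelope way to picture that ranking step: score a candidate caption by how much it resembles the captions of archived cartoons that share keywords with the new one. The word-overlap measure and the two-entry “archive” below are our own simplifications for illustration, not the actual model:

```python
# Simplified sketch: rank captions by word overlap with captions of
# keyword-similar cartoons. The archive and captions here are made up.
ARCHIVE = [
    ({"car dealership", "salesman", "fangs"}, "It practically drives itself... to feed."),
    ({"office", "dog", "desk"}, "I never said it was a paperless office."),
]

def words(text):
    return set(text.lower().replace(".", "").replace(",", "").split())

def overlap(a, b):
    return len(a & b) / len(a | b) if a | b else 0.0

def score_caption(cartoon_keywords, caption):
    """Weight each archived caption's word overlap by how similar its cartoon is."""
    total = 0.0
    for archived_keywords, archived_caption in ARCHIVE:
        cartoon_sim = overlap(cartoon_keywords, archived_keywords)
        total += cartoon_sim * overlap(words(caption), words(archived_caption))
    return total

keywords = {"car dealership", "car", "salesman", "fangs", "claws"}
candidates = ["It only feeds on premium.", "My dog ate my homework."]
print(sorted(candidates, key=lambda c: score_caption(keywords, c), reverse=True))
```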

The result is about the best that could be hoped for, if one is being realistic. All of the cartoon editors’ chosen winners showed up in the AI’s top 55.8% of entries, which means the AI could ultimately help [Mankoff and Co.] weed out just under half of the entries, the truly bad ones, without discarding a winner. While [Mankoff] sees the study’s results as a positive thing, he’ll continue to hire assistants for the foreseeable future.

Humor-enabled AI may still be in its infancy, but the implications of the advancement are already great. To give personal assistants like Siri and Cortana a funny bone is to make them that much more human. But is that necessarily a good thing?

[via /.]