Death Of The Turing Test In An Age Of Successful AIs

IBM has come up with an automatic debating system called Project Debater that researches a topic, presents an argument, listens to a human rebuttal and formulates its own rebuttal. But does it pass the Turing test? Or does the Turing test matter anymore?

The Turing test was introduced in 1950, a year often cited as year one for AI research. It asks, “Can machines think?” Today we’re more interested in machines that can intelligently make restaurant recommendations, drive our car along the tedious highway to and from work, or identify the surprising-looking flower we just stumbled upon. These all fit the definition of AI as a machine that can perform a task normally requiring human intelligence. Though as you’ll see below, Turing’s test wasn’t for intelligence, or even for thinking, but rather for determining a test subject’s sex.

Continue reading “Death Of The Turing Test In An Age Of Successful AIs”

I’m Sorry, Alexander, I’m Afraid I Can’t Do That

Getting people to space is extremely difficult, and while getting robots to space is still pretty challenging, it’s much easier. For that reason, robots and probes have been helping us explore the solar system for decades. Now, though, a robot assistant is on board the ISS to work with the astronauts, and rather than something impersonal like a robot arm, this one has a face, can navigate throughout the ship, and can respond to voice inputs.

The robot is known as CIMON, the Crew Interactive Mobile Companion. Built by Airbus, this interactive helper will fly with German astronaut Alexander Gerst to test the concept of robotic helpers like it. It can move freely about the cabin and learn about the space it is in without being specifically programmed for it. It handles voice input much like a smartphone does, but requests are still processed on Earth by the IBM Watson AI. This means it’s not exactly untethered, and future implementations of this technology might need to be more self-contained for missions beyond low Earth orbit.
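CIMON’s internals aren’t public, but the round trip described here, capture audio locally and ship it to a cloud service for understanding, is easy to picture. Here is a minimal sketch in Python, with a placeholder endpoint and API key standing in for whatever Airbus and IBM actually use:

```python
# A minimal sketch of the "tethered" voice pipeline described above:
# audio is captured locally, but the understanding happens in the cloud.
# The endpoint URL, key, and response shape below are placeholders in a
# Watson Speech to Text style, not CIMON's actual interface.
import requests

STT_URL = "https://example.com/speech-to-text/v1/recognize"  # hypothetical
API_KEY = "your-api-key-here"

def transcribe(wav_path: str) -> str:
    """Send a WAV recording to a cloud speech-to-text service."""
    with open(wav_path, "rb") as f:
        resp = requests.post(
            STT_URL,
            headers={"Content-Type": "audio/wav"},
            auth=("apikey", API_KEY),
            data=f,
        )
    resp.raise_for_status()
    # Assume a Watson-style JSON reply: a list of transcript alternatives.
    results = resp.json().get("results", [])
    return results[0]["alternatives"][0]["transcript"] if results else ""

if __name__ == "__main__":
    print(transcribe("astronaut_request.wav"))
```

The key point is the latency and dependency this design implies: every request crosses the ground link, which is why a deep-space version would need to do its thinking on board.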

While the designers have heeded the warnings of 2001 and not given it complete control of the space station, they also learned that an interactive robot should be helpful without being as off-putting as a single, creepy red eye. This robot can display an interactive face on its screen, and use the same screen to show schematics, procedure steps, or anything else the astronauts need. If creepy design is more your style, though, you can still have HAL watching you in your house.

Thanks to [Marian] for the tip!

Continue reading “I’m Sorry, Alexander, I’m Afraid I Can’t Do That”

Make A Natural Language Phone Bot Like Google’s Duplex AI

After seeing how Google’s Duplex AI was able to book a table at a restaurant by fooling a human maître d’ into thinking it was human, I wondered if it might be possible for us mere hackers to pull off the same feat. What could you or I do without Google’s legions of ace AI programmers and racks of neural network training hardware? Let’s look at the ways we can make a natural language bot of our own. As you’ll see, it’s entirely doable.
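To get a feel for what is involved, here is a toy sketch of the conversational core: match the caller’s transcribed words against canned intents and pick a reply. Everything in it, the patterns and the responses alike, is illustrative rather than anything from Duplex; a real bot would bolt speech-to-text onto the front and text-to-speech onto the back.

```python
# A toy intent matcher: the conversational core of a phone bot, minus
# speech-to-text and text-to-speech. Patterns and replies are invented
# for a restaurant-booking scenario.
import re

INTENTS = [
    (re.compile(r"\b(book|reserve|table)\b", re.I),
     "I'd like a table for two at 7 pm, please."),
    (re.compile(r"\b(how many|party|people)\b", re.I),
     "Two people."),
    (re.compile(r"\b(name)\b", re.I),
     "The reservation is under Smith."),
]

def reply(utterance: str) -> str:
    """Return the canned response for the first matching intent."""
    for pattern, response in INTENTS:
        if pattern.search(utterance):
            return response
    return "Sorry, could you repeat that?"

if __name__ == "__main__":
    print(reply("Do you want to book a table?"))   # books the table
    print(reply("For how many people?"))           # answers the follow-up
```

Rule-based matching like this falls over quickly on real speech, which is exactly the gap the article goes on to explore with proper natural language tooling.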

Continue reading “Make A Natural Language Phone Bot Like Google’s Duplex AI”

Less Dear Heating For The Deer

Keeping animals from tropical regions of the world in a cold climate is an expensive business; they need a warm environment in their pens and sleeping areas. Marwell Zoo was spending a small fortune keeping its herd of nyalas (an antelope native to Southern Africa, not, as the title suggests, a deer) warm with electric heating, so they went looking for a technology that could cut their costs by heating only while an animal was in its pen.

One might expect a passive IR sensor to solve the problem, but a sleeping nyala soon fades into the thermal background for these devices, and as a result the heaters would not run long enough to keep the animals warm. The solution came from an unlikely source: a coffee-queue monitoring project at the IBM Watson headquarters in Munich, which used an array of infrared sensors to track changing heat patterns and thus gauge the likelihood of a lengthy wait for a beverage.

In the zoo application, arrays of thermal sensors hooked up to ESP8266 boards report back to a Raspberry Pi, which aggregates the readings and sends them to the IBM Watson cloud, where they are analyzed by a neural net. Watson then decides whether or not a nyala is in the field of view, and the animal is toasted accordingly.
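The write-up doesn’t spell out the plumbing between the boards and the Pi, but a plausible sketch of the aggregation step might look like this, assuming the ESP8266s publish 8×8 thermal frames (AMG8833-style) over MQTT, with a crude local threshold standing in for the Watson neural net:

```python
# A rough sketch of the Raspberry Pi's aggregation step. Topic names,
# payload format, and the detection logic are all assumptions; in the
# real system the aggregated frames go to a neural net in the IBM
# Watson cloud rather than being classified locally.
import json
import paho.mqtt.client as mqtt

frames = {}  # latest thermal frame from each sensor board

def on_message(client, userdata, msg):
    board_id = msg.topic.split("/")[-1]
    frames[board_id] = json.loads(msg.payload)  # 64 temperatures, deg C

    # Crude stand-in for the cloud model: a warm blob well above the
    # pen's background temperature suggests an animal is present.
    if any(max(f) - min(f) > 4.0 for f in frames.values()):
        print("Possible nyala detected -- heater on")

client = mqtt.Client()
client.on_message = on_message
client.connect("localhost", 1883)
client.subscribe("zoo/pen1/thermal/#")
client.loop_forever()
```

A simple threshold like this would suffer exactly the sleeping-nyala problem described above, which is why the project hands the decision to a trained network instead.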

In its use of Watson, this project has some similarities with a Hackaday Prize entry on automated wildlife recognition.

Nyala image: Charlesjsharp [CC BY-SA 4.0].