Death Of The Turing Test In An Age Of Successful AIs

IBM has come up with an automatic debating system called Project Debater that researches a topic, presents an argument, listens to a human rebuttal and formulates its own rebuttal. But does it pass the Turing test? Or does the Turing test matter anymore?

The Turing test was first introduced in 1950, a year often cited as year one for AI research. It asks, “Can machines think?” Today we’re more interested in machines that can intelligently make restaurant recommendations, drive our car along the tedious highway to and from work, or identify the surprising-looking flower we just stumbled upon. These all fit the definition of AI as a machine that can perform a task normally requiring the intelligence of a human. Though as you’ll see below, Turing’s test wasn’t really for intelligence, or even for thinking, but rather for determining a test subject’s sex.

Continue reading “Death Of The Turing Test In An Age Of Successful AIs”

An AI-Free Way To Catch Wildlife On Camera

Judging by the over-representation of the term “AI” in our news feeds these days, we’re clearly in the exponential phase of the artificial intelligence hype cycle, and very nearly at the dreaded “Peak of Inflated Expectations.” It seems like there’s nothing that AI can’t do, and nowhere that its principles can’t be applied to virtuous — and profitable — effect.

We don’t deny that AI has massive potential, but we strongly suspect that there will soon come a day when eyes will roll and stomachs will turn at yet another AI application that could have been addressed with something simpler. An example of the simpler approach can be seen in this non-AI wildlife photo trap, cobbled together by [Sebastian] to capture pictures of some camera-shy squirrels. Rather than train an AI with gigabytes of squirrel images, he instead relies on his old Sony Alpha camera, which has built-in WiFi. A Python script connects to the camera, which is trained on a feeder box and set to a very narrow depth of field. That leaves a good percentage of the scene out of focus until a squirrel or other animal comes along looking for treats. The script detects the increased area of the scene that is now in focus using a Laplacian operator in OpenCV, and triggers the camera shutter. [Sebastian] ended up with some wonderful shots of the shy squirrels using this scheme; the video below describes the setup in more detail.
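For the curious, the core of the trick fits in a few lines. Here’s a minimal sketch of the focus-trigger idea, not [Sebastian]’s actual script: the frame-fetching and shutter functions are hypothetical stand-ins for Sony’s remote API, and the threshold would need tuning for a real lens and scene.

```python
import cv2  # pip install opencv-python


def sharpness(frame):
    """Variance of the Laplacian: rises as more of the frame comes into focus."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    return cv2.Laplacian(gray, cv2.CV_64F).var()


def watch(fetch_frame, fire_shutter, baseline, margin=50.0):
    """Fire the shutter whenever sharpness jumps well above the empty-scene
    baseline. fetch_frame and fire_shutter wrap whatever camera API is in
    play, so here they are hypothetical callables passed in by the caller."""
    while True:
        frame = fetch_frame()
        if sharpness(frame) > baseline + margin:
            fire_shutter()
```

Establishing the empty-scene baseline is the key design choice: measure sharpness over a few frames with no visitors, and anything significantly sharper than that is probably a squirrel.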

It’s not the first time we’ve seen the Laplacian used to gauge image sharpness, of course, but we really like the approach [Sebastian] took here for its simplicity. The squirrels are cute too.

Continue reading “An AI-Free Way To Catch Wildlife On Camera”

An ESP Will Read Your Meter For You

As home automation starts to live up to its glossy sci-fi promise, there remains a deficiency when it comes to interfacing between the newer computerised components and legacy items from a previous age. A frequent example that appears in projects on Hackaday is the reading of utility meters, and in that arena [jomjol] has a very neat solution involving an ESP32 camera module and a software neural network to identify meter readings directly.

The ESP and camera sit at the top of a 3D-printed housing that fits over the meter. The clever trick is in the software: each photo’s orientation is determined, and not only are the digits read by OCR, but readings are also derived from the small dials and other indicators on the meter face. It’s a very well-thought-out system, with a web-based configuration tool that allows full customisation of the readable zones and how they should be treated.
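Reading a pointer-style dial comes down to simple geometry once the needle’s angle has been found in the image. Here’s a rough sketch of that conversion, and of stitching digits and dials into one reading; the function names and sweep assumptions are ours for illustration, with no claim to match [jomjol]’s implementation.

```python
def dial_value(needle_angle_deg, zero_angle_deg=0.0, full_scale=10):
    """Convert a dial needle's angle into a reading. Assumes the needle
    sweeps a full 360 degrees clockwise from zero_angle_deg, and that the
    dial counts up to full_scale (10 for the usual 0-9 sub-dials)."""
    sweep = (needle_angle_deg - zero_angle_deg) % 360.0
    return full_scale * sweep / 360.0


def combine(digits, dials):
    """Join OCR'd digits (most significant first) with dial readings as
    decimal places, so digits [1, 2, 3] and dials [4.2, 5.7] give 123.45."""
    whole = int("".join(str(d) for d in digits))
    frac = sum(int(v) * 10 ** -(i + 1) for i, v in enumerate(dials))
    return whole + frac
```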

This project makes full use of the ESP32’s capabilities, and the attention to detail that has gone into making it usable is particularly impressive. It certainly raises the bar against previous OCR meter reading projects.

[Thanks for the tip Sascha]

AI Learns To Drive Trackmania

Machine learning has long been a topic of interest for humanity, but only in recent years have we had broad access to enough computing power for the average person to dive in. [Yosh] recently decided to put an AI to work learning how to race in Trackmania.

After early experiments with supervised learning, [Yosh] decided to implement a genetic algorithm to produce an AI to drive in the game. The AI takes distances from the track walls as inputs, and produces steering and accelerator values as outputs. Starting with 100 AIs in generation 1, [Yosh] iterated by breeding from the AIs that covered the longest distance in 13 seconds. Once the AIs started to get the hang of the first few corners, he changed the training to instead prioritize the lowest time taken to traverse each of the checkpoints along the track.
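In rough Python, one generation of that loop might look something like this; the single-layer controller, survivor count, and mutation rate are our assumptions for illustration, not [Yosh]’s actual parameters.

```python
import random

import numpy as np

POP_SIZE = 100      # 100 AIs in generation 1, as described above
N_SURVIVORS = 10    # assumed number of parents kept each round
MUTATION_STD = 0.1  # assumed mutation strength


def make_genome(n_inputs=8, n_outputs=2):
    """Random weights for a one-layer controller: wall distances in,
    steering and accelerator out."""
    return np.random.randn(n_outputs, n_inputs)


def act(genome, wall_distances):
    """Map sensor readings to [steering, accelerator], squashed to [-1, 1]."""
    return np.tanh(genome @ np.asarray(wall_distances))


def next_generation(population, fitness):
    """Keep the fittest drivers and refill the field with mutated copies."""
    ranked = sorted(population, key=fitness, reverse=True)
    survivors = ranked[:N_SURVIVORS]
    children = [
        random.choice(survivors)
        + MUTATION_STD * np.random.randn(*survivors[0].shape)
        for _ in range(POP_SIZE - N_SURVIVORS)
    ]
    return survivors + children
```

The fitness callable is where the two training phases differ: at first it scores the distance covered in 13 seconds, and later the time taken to reach each checkpoint.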

The AI improved over time, and over 100 generations, got down to a 23.48s time on the test track, versus 19.63s for [Trabadia], a talented human. We’d love to see how much better the AI could do with more training. [Yosh] is trying more experiments, like providing extra feedback in the AI fitness function to keep it from hitting the walls. It’s not the first time we’ve seen a genetic algorithm used to train a racing AI, either. Video after the break.

Continue reading “AI Learns To Drive Trackmania”

Baby Yoda Becomes Personable Robot

Baby Yoda has been a hit character in Disney’s The Mandalorian, but as far as we know, does not actually exist in real life. Instead, [Manuel Ahumada] set about building a robotic replica, complete with artificial intelligence. (Video, embedded below.)

The first step was to build a basic robotic simulacrum of Baby Yoda, which [Manuel] achieved by outfitting a toy with servos, motors, and a Raspberry Pi. With everything hooked up, Baby Yoda was able to move his head and arms, and scoot around on wheels, all under the control of a Bluetooth gamepad. With that sorted, [Manuel] added brains in the form of a smartphone running Intel’s OpenBot machine learning platform. This allows Baby Yoda to track and follow people it sees on its smartphone camera, and potentially even navigate real-world spaces with future upgrades.
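The manual-control layer is the sort of thing a short Python script can handle. Here’s a sketch of the gamepad-to-motors idea using the evdev and gpiozero libraries; the device path, GPIO pins, and axis scaling are stand-ins rather than [Manuel]’s actual wiring.

```python
from evdev import InputDevice, ecodes     # pip install evdev
from gpiozero import AngularServo, Motor  # pip install gpiozero

pad = InputDevice("/dev/input/event0")    # Bluetooth gamepad; path varies
drive = Motor(forward=17, backward=18)    # wheel motor driver pins (assumed)
head = AngularServo(22, min_angle=-45, max_angle=45)  # head servo (assumed)

for event in pad.read_loop():             # blocks, yielding input events
    if event.type != ecodes.EV_ABS:
        continue
    if event.code == ecodes.ABS_Y:        # left stick Y: drive forward/back
        # assumes an 8-bit axis centred at 128; scale and clamp to [-1, 1]
        speed = max(-1.0, min(1.0, (128 - event.value) / 128))
        if speed >= 0:
            drive.forward(speed)
        else:
            drive.backward(-speed)
    elif event.code == ecodes.ABS_RX:     # right stick X: turn the head
        head.angle = (event.value - 128) / 128 * 45
```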

It’s a fun build, and we’d love to see the bot let loose at a convention to explore and make friends. We’ve covered OpenBot before, and look forward to seeing it used in more builds. Video after the break.

Continue reading “Baby Yoda Becomes Personable Robot”

AI On The Highway

A couple of announcements caught our attention last week regarding AI-controlled cars. South Korea’s Kakao Mobility and local startup Autonomous A2G launched a limited self-driving taxi service in Sejong City this month, made possible by enabling legislation passed in May. For now, the service is restricted to government employees, and the AI driver is backed up by an engineer who is there to monitor the systems and take over in an emergency. The companies plan to expand the fleet and service areas this year, though no details have been given.

Another announcement comes from the Ministry of Land, Infrastructure and Transport about the ongoing success of the semi-autonomous truck platooning program. This is a collaboration between the Korean Expressway Corporation, Kookmin University in Seoul, and Hyundai Motors. Previously restricted to a designated test road called the Yeoju Smart Highway, the program is now being tested on public roads at speeds up to 70 kph. This year the program will expand to platoons of 4 trucks running at 90 kph. We’ve always thought that the long-haul trucking and freight industries would be early adopters of AI technologies, and ones to which AI could offer significant benefits.

Continue reading “AI On The Highway”


Hackaday Links: December 6, 2020

By now you’ve no doubt heard of the sudden but not unexpected demise of the iconic Arecibo radio telescope in Puerto Rico. We have been covering the agonizing end of Arecibo from almost the moment the first cable broke in August, through a eulogy, to its final catastrophic collapse this week. That last article contained amazing video of the final collapse, including up-close and personal drone shots of the cable breaking. For a more in-depth analysis of the collapse, it’s hard to beat Scott Manley’s frame-by-frame analysis, which really goes into detail about what happened. Seeing the paint spalling off the cables as they stretch and distort under loads far greater than they were designed for is both terrifying and fascinating.

Exciting news from Australia as the sample return capsule from JAXA’s Hayabusa2 asteroid explorer returned safely to Earth Saturday. We covered Hayabusa2 in our roundup of extraterrestrial excavations a while back, describing how it used both a tantalum bullet and a shaped-charge penetrator to blast regolith from the surface of asteroid 162173 Ryugu. Samples of the debris were hoovered up and hermetically sealed for the long ride back to Earth, which culminated in the fiery re-entry and safe landing in the midst of the Australian outback. Planetary scientists are no doubt eager to get a look inside the capsule and analyze the precious milligrams of space dust. In the meantime, Hayabusa2, with 66 kilograms of propellant remaining, is off on an extended mission to visit more asteroids for the next eleven years or so.

The 2020 Remoticon has been wrapped up for most of a month now, but one thing we noticed was how much everyone seemed to like the Friday evening Bring-a-Hack event that was hosted on Remo. To kind of keep that meetup momentum going and to help everyone slide into the holiday season with a little more cheer, we’re putting together a “Holiday with Hackaday & Tindie” meetup on Tuesday, December 15 at noon Pacific time. The details haven’t been shared yet, but our guess is that this will certainly be a “bring-a-hack friendly” event. We’ll share more details when we get them this week, but for now, hop over to the Remo event page and reserve your spot.

On the Buzzword Bingo scorecard, “Artificial Intelligence” is a square that can almost be checked off by default these days, as companies rush to stretch the definition of the term to fit almost every product in the never-ending search for market share. But even those products that actually have machine learning built into them are only as good as the data sets used to train them. That can be a problem for voice-recognition systems; while there are massive databases of utterances in just about every language, the likes of Amazon and Google aren’t too willing to share what they’ve leveraged from their smart-speaker-using customer base. What’s the little person to do? Perhaps the People’s Speech database will help. Part of the MLCommons project, it has 86,000 hours of speech data, mostly derived from audiobooks, a clever source indeed since the speech and the text can be easily aligned. The database also pulls audio and the corresponding text from Wikipedia and other random sources around the web. It’s a modest dataset compared to what the big players are sitting on, to be sure, but it’s a start.

And finally, divers in the Baltic Sea have dredged up a bit of treasure: a Nazi Enigma machine. Divers in Gelting Bay near the border of Germany and Denmark found what appeared to be an old typewriter caught in one of the abandoned fishing nets they were searching for. When they realized what it was (even crusted in 80 years’ worth of corrosion and muck, some keys still look like they’re brand new), they called in archaeologists to take over recovery. Gelting Bay was the scene of a mass scuttling of U-boats in the final days of World War II, so this Enigma may have been pitched overboard by a U-boat commander before he pulled the plug on his boat. It’ll take years to restore, but it’ll be quite a museum piece when it’s done.