AI Learns To Drive Trackmania

Machine learning has long been a topic of interest for humanity, but only in recent years have we had broad access to enough computing power to enable the average person to dive in. [Yosh] recently decided to put an AI to work learning how to race in Trackmania.

After early experiments with supervised learning, [Yosh] decided to implement a genetic algorithm to produce an AI to drive in the game. The AI takes distances to the track walls as inputs, and produces steering and accelerator values as outputs. Starting with 100 AIs in generation 1, [Yosh] iterated by selecting the AIs that covered the longest distance in 13 seconds. Once the AIs started to get the hang of the first few corners, he changed the training to instead prioritize the lowest time taken to reach each of the checkpoints along the track.
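
For the curious, here's a minimal Python sketch of that select-and-mutate loop. The policy shape, sensor count, mutation strength, and survivor fraction are all assumptions for illustration; [Yosh] doesn't publish those details here.

```python
import random

POP_SIZE = 100        # "100 AIs in generation 1"
N_SENSORS = 8         # assumed number of ray-cast distances to the walls
MUTATION_STD = 0.1    # assumed mutation strength

def random_genome():
    # One weight per wall sensor for each of the two outputs:
    # steering and accelerator.
    return [[random.gauss(0, 1) for _ in range(N_SENSORS)] for _ in range(2)]

def act(genome, wall_distances):
    # Linear policy: weighted sums of the sensor readings become the
    # control values, clamped to sensible ranges.
    steer = sum(w * d for w, d in zip(genome[0], wall_distances))
    accel = sum(w * d for w, d in zip(genome[1], wall_distances))
    return max(-1.0, min(1.0, steer)), max(0.0, min(1.0, accel))

def mutate(genome):
    return [[w + random.gauss(0, MUTATION_STD) for w in row] for row in genome]

def evolve(fitness, generations=100):
    # fitness(genome) -> distance covered in 13 s; later, [Yosh] swapped
    # this for checkpoint split times.
    population = [random_genome() for _ in range(POP_SIZE)]
    for _ in range(generations):
        ranked = sorted(population, key=fitness, reverse=True)
        survivors = ranked[:POP_SIZE // 10]   # keep the best 10%
        population = survivors + [mutate(random.choice(survivors))
                                  for _ in range(POP_SIZE - len(survivors))]
    return max(population, key=fitness)
```

The only game-specific part is the fitness function, which is exactly the knob [Yosh] turned when he switched from raw distance covered to per-checkpoint times.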

The AI improved over time; after 100 generations, it got down to a 23.48s time on the test track, versus 19.63s for [Trabadia], a talented human. We’d love to see how much better the AI could do with more training. [Yosh] is trying more experiments, like providing extra feedback in the fitness function to keep the AI from hitting the walls. It’s not the first time we’ve seen a genetic algorithm used to train a racing AI, either. Video after the break.

Continue reading “AI Learns To Drive Trackmania”

Baby Yoda Becomes Personable Robot

Baby Yoda has been a hit character in Disney’s The Mandalorian, but does not actually exist in real life as far as we know. Instead, [Manuel Ahumada] set about building a robotic replica, complete with artificial intelligence.  (Video, embedded below.)

The first step was to build a basic robotic simulacrum of Baby Yoda, which [Manuel] achieved by outfitting a toy with servos, motors, and a Raspberry Pi. With everything hooked up, Baby Yoda was able to move his head and arms, and scoot around on wheels, all under the control of a Bluetooth gamepad. With that sorted, [Manuel] added brains in the form of a smartphone running Intel’s OpenBot machine learning platform. This allows Baby Yoda to track and follow people it sees through its smartphone camera, and potentially even navigate real-world spaces with future upgrades.
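
The gamepad stage of a build like this is simple enough to sketch. Here's roughly what the glue code on the Pi might look like using the evdev and gpiozero libraries; the GPIO pins, axis mapping, and event device path are all assumptions for illustration, not [Manuel]'s actual wiring.

```python
from evdev import InputDevice, ecodes   # the Bluetooth gamepad appears as an event device
from gpiozero import Servo, Motor       # Raspberry Pi GPIO control

head_pan = Servo(17)                        # neck servo (assumed pin)
left, right = Motor(5, 6), Motor(13, 19)    # wheel motors (forward, backward pins)
pad = InputDevice("/dev/input/event0")      # assumed device path

def scale(v, lo=0, hi=255):
    # Map a raw 0..255 stick reading onto the -1..1 range gpiozero expects
    return 2 * (v - lo) / (hi - lo) - 1

for event in pad.read_loop():
    if event.type != ecodes.EV_ABS:
        continue
    if event.code == ecodes.ABS_X:          # left stick: pan the head
        head_pan.value = scale(event.value)
    elif event.code == ecodes.ABS_RY:       # right stick: drive
        throttle = scale(event.value)
        for motor in (left, right):
            motor.forward(throttle) if throttle >= 0 else motor.backward(-throttle)
```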

It’s a fun build, and we’d love to see the bot let loose at a convention to explore and make friends. We’ve covered OpenBot before, and look forward to seeing it used in more builds. Video after the break.

Continue reading “Baby Yoda Becomes Personable Robot”

AI On The Highway

A couple of announcements caught our attention last week regarding AI-controlled cars. South Korea’s Kakao Mobility and local startup Autonomous A2G launched a limited self-driving taxi service in Sejong City this month, made possible by enabling legislation passed in May. For now, the service is restricted to government employees, and the AI driver is backed up by an engineer who is there to monitor the systems and take over in an emergency. The companies plan to expand the fleet and service areas this year, although no details have been given.

Another announcement comes from the Ministry of Land, Infrastructure and Transport about the ongoing successes of the semi-autonomous truck platooning program. This is a collaboration between the Korean Expressway Corporation, Kookmin University in Seoul, and Hyundai Motors. Previously restricted to a designated test road called the Yeoju Smart Highway, the program is now being tested on public roads at speeds up to 70 kph. This year the program will expand to platoons of four trucks running at 90 kph. We’ve always thought that the long-haul trucking and freight industries would be early adopters of AI technologies, and fields to which AI could offer significant benefits.

Continue reading “AI On The Highway”

Hackaday Links: December 6, 2020

By now you’ve no doubt heard of the sudden but not unexpected demise of the iconic Arecibo radio telescope in Puerto Rico. We have been covering the agonizing end of Arecibo from almost the moment the first cable broke in August, through a eulogy for the doomed telescope, to its final catastrophic collapse this week. That last article contained amazing video of the collapse, including up-close and personal drone shots of the cables breaking. For a more in-depth analysis of the collapse, it’s hard to beat Scott Manley’s frame-by-frame analysis, which really goes into detail about what happened. Seeing the paint spalling off the cables as they stretch and distort under loads far greater than they were designed for is both terrifying and fascinating.

Exciting news from Australia as the sample return capsule from JAXA’s Hayabusa2 asteroid explorer returned safely to Earth Saturday. We covered Hayabusa2 in our roundup of extraterrestrial excavations a while back, describing how it used both a tantalum bullet and a shaped-charge penetrator to blast regolith from the surface of asteroid 162173 Ryugu. Samples of the debris were hoovered up and hermetically sealed for the long ride back to Earth, which culminated in the fiery re-entry and safe landing in the midst of the Australian outback. Planetary scientists are no doubt eager to get a look inside the capsule and analyze the precious milligrams of space dust. In the meantime, Hayabusa2, with 66 kilograms of propellant remaining, is off on an extended mission to visit more asteroids for the next eleven years or so.

The 2020 Remoticon has been wrapped up for most of a month now, but one thing we noticed was how much everyone seemed to like the Friday evening Bring-a-Hack event that was hosted on Remo. To kind of keep that meetup momentum going and to help everyone slide into the holiday season with a little more cheer, we’re putting together a “Holiday with Hackaday & Tindie” meetup on Tuesday, December 15 at noon Pacific time. The details haven’t been shared yet, but our guess is that this will certainly be a “bring-a-hack friendly” event. We’ll share more details when we get them this week, but for now, hop over to the Remo event page and reserve your spot.

On the Buzzword Bingo scorecard, “Artificial Intelligence” is a square that can almost be checked off by default these days, as companies rush to stretch the definition of the term to fit almost every product in the neverending search for market share. But even those products that actually have machine learning built into them are only as good as the data sets used to train them. That can be a problem for voice-recognition systems; while there are massive databases of utterances in just about every language, the likes of Amazon and Google aren’t too willing to share what they’ve leveraged from their smart-speaker-using customer base. What’s the little person to do? Perhaps the People’s Speech database will help. Part of the MLCommons project, it has 86,000 hours of speech data, mostly derived from audiobooks, a clever source indeed, since the speech and the text can be easily aligned. The database also pulls audio and the corresponding text from Wikipedia and other random sources around the web. It’s a drop in the bucket compared to what the big players are sitting on, but it’s a start.
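
Why are audiobooks such a clever source? Because the ground-truth text already exists, alignment becomes a sequence-matching problem rather than open-ended transcription. A toy illustration in Python; the two inputs below are stand-ins, and real pipelines use proper forced aligners:

```python
import difflib

# With the book text in hand, even sloppy ASR output can be snapped
# onto it by ordinary sequence matching.
book = "it was the best of times it was the worst of times".split()
asr  = "it was the best of time it was the worst of times".split()

for tag, i1, i2, j1, j2 in difflib.SequenceMatcher(a=book, b=asr).get_opcodes():
    # "equal" runs yield confident (text, audio) pairs; everything else
    # is a small correction problem, not open-ended transcription.
    print(tag, book[i1:i2], asr[j1:j2])
```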

And finally, divers in the Baltic Sea have dredged up a bit of treasure: a Nazi Enigma machine. Divers in Gelting Bay, near the border of Germany and Denmark, found what appeared to be an old typewriter caught in one of the abandoned fishing nets they were searching for. When they realized what it was (even crusted in 80 years’ worth of corrosion and muck, some keys still look brand new), they called in archaeologists to take over the recovery. Gelting Bay was the scene of a mass scuttling of U-boats in the final days of World War II, so this Enigma may have been pitched overboard by a U-boat commander before he pulled the plug on his boat. It’ll take years to restore, but it’ll be quite a museum piece when it’s done.

Sufficiently Advanced Technology And Justice

Imagine that you’re serving on a jury, and you’re given an image taken from a surveillance camera. It looks pretty much like the suspect, but the image has been “enhanced” by an AI from the original. Do you convict? How does this weigh out on the scales of reasonable doubt? Should you demand to see the original?

AI-enhanced, upscaled, or otherwise modified images are tremendously realistic. But what they’re showing you isn’t reality. When we wrote about this last week, [Denis Shiryaev], an author of one of the methods we highlighted, weighed in via the comments to point out that these modifications aren’t “restorations” of the original. While they might add incredibly fine detail, they don’t recreate or restore reality. The neural net creates its own reality, out of the millions and millions of faces that it’s learned.

And for the purposes of identification, that’s exactly the problem: the facial features of millions of other people have been used to increase the resolution. Can you identify the person in the pixelized image? Can you identify that same person in the resulting up-sampling? If the question put before the jury were “is the defendant a former president of the USA?”, you’d answer differently depending on which image you were shown. And you’d have a misleading level of confidence in your ability to judge the AI-retouched photo. Clearly, informed skepticism on the part of the jury is required.
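
The core of the problem is that pixelization is a many-to-one map, so any upscaler is solving an underdetermined inverse problem. This quick Python sketch makes the point; the filenames are placeholders:

```python
import numpy as np
from PIL import Image

# Pixelization collapses distinct faces toward the same low-res block
# of pixels.
def pixelize(path, size=(16, 16)):
    img = Image.open(path).convert("L")          # grayscale face photo
    return np.asarray(img.resize(size, Image.BILINEAR), dtype=float)

low_a = pixelize("face_a.jpg")                   # one person
low_b = pixelize("face_b.jpg")                   # a different person

# If this number is small, no upscaler, however clever, can tell the
# two originals apart. Any detail it paints in comes from its training
# faces, not from the scene the camera saw.
print("mean per-pixel difference:", np.abs(low_a - low_b).mean())
```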

Unfortunately, we’ve all seen countless examples of “zoom, enhance” in movies and TV shows being successfully used to nab the perps and nail their convictions. We haven’t seen nearly as much detailed analysis of how adversarial neural networks create faces out of a scant handful of pixels. This, combined with the almost magical resolution of the end product, would certainly sway a jury of normal folks. On the other hand, the popularity of intentionally misleading “deep fakes” might help educate the public to the dangers of believing what they see when AI is involved.

This is just one example, but keeping the public interested in, and educated on, the deep workings and limitations of the technology that’s running our world is more important than ever before. Some of the material is truly hard, though. How do we separate the science from the magic?

Hackaday Links: November 8, 2020

Saturday, November 7, 2020 – NOT PASADENA. Remoticon, the virtual version of the annual Hackaday Superconference forced upon us by 2020, the year that keeps on giving, is in full swing. As I write this, Kipp Bradford is giving one of the two keynote addresses, and last night was the Bring a Hack virtual session, which I was unable to attend but seems to have been very popular, at least from the response to it. In about an hour, I’m going to participate in the SMD Soldering Challenge on the Hackaday writing crew team, and later on, I’ll be emceeing a couple of workshops. And I’ll be doing all of it while sitting in my workshop/office here in North Idaho.

Would I rather be in Pasadena? Yeah, probably — last year, Supercon was a great experience, and it would have been fun to get together again and see everyone. But here we are, and I think we’ve all got to tip our hacker hats to the Remoticon organizers, for figuring out how to translate the in-person conference experience to the virtual space as well as they have.

The impact of going to a museum and standing in the presence of a piece of art or a historic artifact is hard to overstate. I once went to an exhibit of artifacts from Pompeii, and was absolutely floored to gaze upon a 2,000-year-old loaf of bread that was preserved by the volcanic eruption of 79 AD. But not everyone can get to see such treasures, which is why Scan the World was started. The project aims to collect 3D scans of all kinds of art and artifacts so that people can potentially print them for study. Their collection is huge and seems to concentrate on classic sculptures — Michelangelo’s David is there, as are the Venus de Milo, the Pieta, and Rodin’s Thinker. But there are examples from architecture, anatomy, and history. The collection seems worth browsing through and worth contributing to if you’re so inclined.

For all the turmoil COVID-19 has caused, it has opened up some interesting educational opportunities that probably wouldn’t ever have been available in the Before Time. One such opportunity is an undergraduate-level course in radio communications being offered on the SDRPlay YouTube channel. The content was created in partnership with the Sapienza University of Rome. It’s not entirely clear who the course is open to, but it was originally designed for third-year undergrads, and the SDRPlay Educators Program is open to anyone in academia, so we’d imagine you’d need some kind of academic affiliation to qualify. The best bet might be to check out the intro video on the SDRPlay Educator channel and plan to attend the webinar scheduled for November 19 at 1300 UTC. You could also drop into the Learning SDR and DSP Hack Chat on Wednesday at noon Pacific; that’s open to everyone, just like every Hack Chat is.

And finally, as if bald men didn’t suffer enough disrespect already, now artificial intelligence is having a go at them. At a recent soccer match in Scotland, an AI-powered automatic camera system consistently mistook an official’s glabrous pate for the ball. The system is supposed to keep the camera trained on the action by recognizing the ball as it moves around the field. Sadly, the linesman in this game drew the attention of the system quite frequently, causing viewers to miss some of the real action. Not that what officials do during sporting events isn’t important, of course, but it’s generally not what viewers want to see. The company, an outfit called Pixellot, knows about the problem and is working on a solution. Here’s hoping the same problem doesn’t crop up in American football.
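
We don't know what Pixellot's pipeline actually looks like, but it's easy to see how a shape-based tracker gets confused: to a circle detector, a bright round head and a ball score much the same. A naive sketch in Python with OpenCV, with "match.mp4" as a placeholder input:

```python
import cv2
import numpy as np

# Not Pixellot's actual pipeline -- just a naive tracker illustrating
# the failure mode. A Hough circle transform rewards anything round and
# high-contrast, which describes a bald head about as well as a ball.
def find_ball(frame):
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    gray = cv2.medianBlur(gray, 5)               # suppress pixel noise
    circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1.2,
                               minDist=50, param1=100, param2=30,
                               minRadius=5, maxRadius=40)
    if circles is None:
        return None
    # Strongest detection wins -- which may well be the linesman's head.
    x, y, r = np.round(circles[0][0]).astype(int)
    return x, y, r

cap = cv2.VideoCapture("match.mp4")
ok, frame = cap.read()
if ok:
    print(find_ball(frame))
```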

Hackaday Links: October 4, 2020

In case you hadn’t noticed, it was a bad week for system admins. Pennsylvania-based Universal Health Services, a company that owns and operates hospitals across the US and UK, was hit by a ransomware attack early in the week. The attack, which appears to have involved the Ryuk ransomware, shut down systems used by hospitals and health care providers to schedule patient visits, report lab results, and do the important job of charting. It’s not clear how much the ransomers want, but given that UHS is a Fortune 500 company, it’s likely a tidy sum.

And as if an entire hospital corporation’s IT infrastructure being taken down isn’t bad enough, how about the multi-state 911 outage that occurred around the same time? Most news reports seemed to blame the outage on an Office 365 outage happening at the same time, but Krebs on Security dug a little deeper and traced the issue back to two companies that provide 911 call routing services. Each of the companies is blaming the other, so nobody is talking about the root cause of the issue. There’s no indication that it was malware or ransomware, though, and the outage was mercifully brief. But it just goes to show how vulnerable our systems have become.

Our final “really bad day at work” story comes from Japan, where a single piece of failed hardware shut down a $6-trillion stock market. The Tokyo Stock Exchange, third-largest bourse in the world, had to be completely shut down early in the trading day Thursday when a shared disk array failed. The device was supposed to automatically failover to a backup unit, but apparently the handoff process failed. This led to cascading failures and blank terminals on the desks of thousands of traders. Exchange officials made the call to shut everything down for the day and bring everything back up carefully. We imagine there are some systems people sweating it out this weekend to figure out what went wrong and how to keep it from happening again.
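
The details of the exchange's storage gear aren't public, but the general shape of the failure is familiar: a standby that only takes over when a health check says so. A toy Python sketch of that pattern, with everything about it assumed for illustration:

```python
import time

HEARTBEAT_TIMEOUT = 2.0   # seconds of silence before promoting the backup

def monitor(primary_alive, promote_backup):
    # primary_alive() and promote_backup() are hypothetical callbacks.
    last_beat = time.monotonic()
    while True:
        if primary_alive():
            last_beat = time.monotonic()
        elif time.monotonic() - last_beat > HEARTBEAT_TIMEOUT:
            # The handoff step -- reportedly the part that failed in Tokyo.
            promote_backup()
            return
        time.sleep(0.5)
```

The trap is that every branch here can lie: a wedged primary can keep answering the health check, and the promotion itself can fail silently, which is consistent with reports that the handoff simply didn't happen.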

With our systems apparently becoming increasingly brittle, it might be a good time to take a look at what goes into space-rated operating systems. Ars Technica has a fascinating overview of the real-time OSes used for space probes, where failure is not an option and a few milliseconds’ error can destroy billions of dollars of hardware. The article focuses on the RTOS VxWorks and goes into detail on the mysterious rebooting error that affected the Mars Pathfinder mission in 1997. Space travel isn’t the same as running a hospital or stock exchange, of course, but there are probably lessons to be learned here.
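
That Pathfinder reboot became the textbook case of priority inversion: a high-priority task blocked on a mutex held by a low-priority task, which in turn was starved by a medium-priority task, until a watchdog reset the spacecraft. Here's a toy tick-based scheduler in Python that reproduces the pattern; the task names and timings are illustrative, not the actual flight software:

```python
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    prio: int        # higher number = higher priority
    uses_bus: bool   # holds the shared "information bus" mutex while running
    left: int        # ticks of work remaining
    start: int = 0   # tick at which the task becomes ready

def run(tasks, inherit=False, max_ticks=25):
    holder = None                                # current mutex holder
    for tick in range(max_ticks):
        def effective(t):
            # Priority inheritance: the holder borrows the priority of
            # the highest-priority task waiting on the mutex.
            if inherit and t is holder:
                waiting = [x.prio for x in tasks if x.uses_bus
                           and x.left and x.start <= tick and x is not t]
                if waiting:
                    return max(t.prio, max(waiting))
            return t.prio
        ready = [t for t in tasks if t.left and t.start <= tick
                 and not (t.uses_bus and holder not in (None, t))]
        if not ready:
            continue
        t = max(ready, key=effective)
        if t.uses_bus and holder is None:
            holder = t                           # take the mutex
        t.left -= 1
        print(f"t={tick:2d}: {t.name}")
        if t is holder and t.left == 0:
            holder = None                        # release the mutex

tasks = [Task("weather (low, needs bus)", 1, True, 4),
         Task("comms (medium)", 2, False, 10, start=1),
         Task("bus manager (high)", 3, True, 2, start=2)]
# With inherit=False the medium-priority comms task starves the mutex
# holder, so the high-priority bus manager sits blocked for over a
# dozen ticks; on Pathfinder, a watchdog then reset the spacecraft.
run(tasks, inherit=False)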

As if 2020 hasn’t dealt enough previews of various apocalyptic scenarios, here’s what surely must be a sign that the end is nigh: AI-generated PowerPoint slides. For anyone who has ever had to sit through an endless slide deck and wondered who the hell came up with such drivel, the answer may soon be: no one. DeckRobot, a startup company, is building an AI-powered extension to Microsoft Office to automate the production of “company compliant and visually appealing” slide decks. The extension will apparently be trained using “thousands and thousands of real PowerPoint slides”. So, great — AI no longer has to have the keys to the nukes to do us in. It’ll just bore us all to death.

And finally, if you need a bit of a palate-cleanser after all that, please do check out robotic curling. Yes, the sport that everyone loves to make fun of is actually way more complicated than it seems, and getting a robot to launch the stones on the icy playing field is a really complex and interesting problem. The robot — dubbed “Curly”, of course — looks like a souped-up Roomba. After sizing up the playing field with a camera on an extendable boom, it pushes the stone while giving it a gentle spin to ease it into exactly the right spot. Sadly, the wickedly energetic work of the sweepers and their trajectory-altering brooms has not yet been automated, but it’s still pretty cool to watch. But fair warning: you might soon find yourself with a curling habit to support.
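
For a taste of just how complicated "simple" curling is, consider only the release speed, before spin and sweeping even enter the picture. A back-of-the-envelope Python sketch, assuming constant sliding friction (a big simplification; real pebbled ice doesn't behave this nicely, and the friction coefficient below is an assumed value):

```python
import math

MU = 0.017        # assumed effective friction coefficient for curling ice
G = 9.81          # gravitational acceleration, m/s^2

def release_speed(d):
    # Constant deceleration a = mu * g, so v^2 = 2 * mu * g * d gives
    # the release speed that stops the stone a distance d down the sheet.
    return math.sqrt(2 * MU * G * d)

print(f"{release_speed(28.0):.2f} m/s for a ~28 m travel")  # plausible delivery distance
```

A few percent error in that speed moves the stone meters, which is why Curly sizes up the sheet with a camera first, and why human sweepers still earn their keep.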