Helping Robots Learn By Letting Them Fail

The [MIT Technology Review] has just released its annual list of the top innovators under the age of 35, and there are some interesting people on this list of the annoyingly accomplished at a young age. Like [Lerrel Pinto], an assistant professor of computer science at New York University. His work focuses on teaching robots to do things in the home by letting them fail.

Continue reading “Helping Robots Learn By Letting Them Fail”

Teaching A Mini-Tesla To Steer Itself

At the risk of stating the obvious, even when you’ve got unlimited resources and access to the best engineering minds, self-driving cars are hard. Building a multi-ton guided missile that can handle the chaotic environment of rush-hour traffic without killing someone is a challenge, to say the least. So if you’re looking to get into the autonomous car game, perhaps it’s best to start small.

If [Austin Blake]’s fun-sized Tesla go-kart looks familiar, it’s probably because we covered the Teskart back when he whipped up this little demon of an EV from a Radio Flyer toy. Adding self-driving to the kart is a natural next step, so [Austin] set off on a journey into machine learning to make it happen. Having settled on behavioral cloning, which trains a model to replicate a behavior from recorded examples of it, he built a bolt-on frame to hold a steering servo made from an electric wheelchair motor, some drive electronics, and a webcam attached to a laptop. Ten or so human-piloted laps around a walking path at a park resulted in a 48,000-image training set, with the steering wheel angle recorded for each frame.
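
Behavioral cloning boils down to plain supervised learning: camera frames in, steering angles out. As a rough illustration (not [Austin]’s actual code; the network, file layout, and hyperparameters here are our own assumptions), a minimal PyTorch version might look something like this:

```python
# Behavioral-cloning sketch (illustrative, not the actual Teskart code).
# Assumes webcam frames on disk plus a CSV of "path,steering_angle" rows.
import csv

import torch
import torch.nn as nn
from torch.utils.data import DataLoader, Dataset
from torchvision.io import read_image

class DrivingDataset(Dataset):
    """Pairs each recorded frame with the steering angle logged at that moment."""
    def __init__(self, csv_path):
        with open(csv_path) as f:
            self.samples = [(path, float(angle)) for path, angle in csv.reader(f)]

    def __len__(self):
        return len(self.samples)

    def __getitem__(self, idx):
        path, angle = self.samples[idx]
        img = read_image(path).float() / 255.0  # (3, H, W), scaled to [0, 1]
        return img, torch.tensor([angle])

# A small CNN that regresses a single number: the steering angle.
model = nn.Sequential(
    nn.Conv2d(3, 24, 5, stride=2), nn.ReLU(),
    nn.Conv2d(24, 36, 5, stride=2), nn.ReLU(),
    nn.Conv2d(36, 48, 5, stride=2), nn.ReLU(),
    nn.AdaptiveAvgPool2d((4, 5)),  # fixed feature size for any frame size
    nn.Flatten(),
    nn.Linear(48 * 4 * 5, 100), nn.ReLU(),
    nn.Linear(100, 1),
)

loader = DataLoader(DrivingDataset("laps.csv"), batch_size=64, shuffle=True)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()

for epoch in range(10):
    for frames, angles in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(frames), angles)  # imitate the human driver
        loss.backward()
        optimizer.step()
```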

The first go-around wasn’t so great, with the Teskart seemingly bent on going off the track. [Austin] retooled by adding two more webcams to capture a little parallax and enrich the training data. After a bug fix, the improved model really seemed to do the trick, with the Teskart pretty much keeping to its lane around the track, no matter how fast [Austin] pushed it. Check out the video below to see the Teskart in action.
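
One common way to exploit side cameras in behavioral cloning, popularized by NVIDIA’s end-to-end driving work, is to treat the off-center views as extra training samples with a small steering correction baked in. Whether the Teskart does exactly this is our assumption, but the idea looks like so:

```python
# Side-camera augmentation sketch. The 0.15 correction is a placeholder
# you would tune for the actual camera spacing and steering geometry;
# positive angles are assumed to steer right.
CORRECTION = 0.15

def augmented_samples(center_img, left_img, right_img, angle):
    """Turn one logged instant into three training samples."""
    return [
        (center_img, angle),              # the lap as actually driven
        (left_img, angle + CORRECTION),   # left view: steer a bit more right
        (right_img, angle - CORRECTION),  # right view: steer a bit more left
    ]
```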

It’s important to note that this isn’t even close to “Full Self-Driving.” The only thing being controlled is the steering angle; [Austin] is controlling the throttle himself and generally acting as the safety driver should the car veer off course, which it tends to do at one particular junction. But it’s a great first step, and we’re looking forward to further development.

Continue reading “Teaching A Mini-Tesla To Steer Itself”

AI Learns To Walk In 3D Training Grounds

AI agents are learning to do all kinds of interesting jobs, even the creative ones we’d rather keep for ourselves. Nevertheless, technology marches on. Working in this area is YouTuber [AI Warehouse], who has been teaching an AI to walk in a simulated environment.

Albert needed some specific guidance to learn how to walk upright, something that humans tend to figure out innately.

The AI controls a vaguely humanoid creature, albeit one with a heavily-simplified body and limbs. It “lives” in a 3D environment created in the Unity engine, which provides the physics simulation needed for the work. Meanwhile, the ML-Agents package provides the brain for Albert, the AI charged with learning to walk.

The video steps through a variety of “deep reinforcement learning” tasks. In these, the AI is rewarded for completing goals designed to teach it how to walk. Albert is given control of his limbs and simply charged with reaching a button some distance away on the floor. After many trials, he learns to do the worm and achieves his goal.

Getting Albert to walk upright took altogether more training. Lumpy ground and walls placed between him and his goal upped the challenge, along with reward incentives to alternate his feet and maintain an upright posture. Over time, he progressed through skipping to something approximating a proper walk cycle.
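
ML-Agents rewards are written in C#, but the shaping idea sketches easily enough in Python. The terms and weights below are illustrative guesses at the kinds of signals described above, not [AI Warehouse]’s actual reward function:

```python
# Illustrative shaped reward for a walking agent. The agent's fields and
# all the weights here are hypothetical, chosen only to show the idea.
def step_reward(agent):
    reward = 0.0
    # Main objective: progress made toward the button since the last step.
    reward += 1.0 * (agent.prev_dist_to_goal - agent.dist_to_goal)
    # Posture term: dot product of the torso's up vector with world up is
    # 1.0 when standing vertical and goes negative when toppled over.
    reward += 0.05 * agent.torso_up_dot_world_up
    # Gait term: reward a footfall only when it alternates with the last
    # one, discouraging hopping or dragging one foot along.
    if agent.foot_down and agent.foot_down != agent.last_foot_down:
        reward += 0.1
    return reward
```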

One may argue that the teaching method required a lot of specific guidance, but it’s a neat feat nonetheless. It’s altogether more complex than learning to play Trackmania, we’d say, and that was impressive enough in itself. Video after the break.

Continue reading “AI Learns To Walk In 3D Training Grounds”

Smart Assistants Need To Get Smarter

Science fiction has regularly portrayed smart computer assistants in a fanciful way. HAL from 2001: A Space Odyssey and J.A.R.V.I.S. from the contemporary Iron Man films are both great examples. They’re erudite, wise, and capable of doing just about any reasonable task that is asked of them, short of opening the pod bay doors.

Cut back to reality, and you’ll only be disappointed at how useless most voice assistants are. It’s been twelve long years since Siri burst onto the scene, with Alexa and Google Assistant following a few years later. Despite all that time on the market, their capabilities remain limited and uninspiring. It’s time for voice assistants to level up.

Continue reading “Smart Assistants Need To Get Smarter”

An Apple ][ terminal session, white text on blue: asked "how are you today?", ChatGPT replies, "As an AI language model, I don't have feelings, but I am functioning optimally. Thank you for asking. How may I assist you?"

Apple II – Now With ChatGPT

Hackers are finding no shortage of new things to teach old retrocomputers, and [Evan Michael] has taught his Apple II how to communicate with ChatGPT.

Written in Python, iiAI lets an Apple II access everyone’s favorite large language model (LLM) through the terminal. The program lives on a more modern computer and is accessed over a serial connection. OpenAI API credentials are stored in a file that iiAI reads when you launch it by typing python3 openai_apple.py. The program should work on any device that supports TTY serial, but so far testing has only happened on [Michael]’s Apple IIGS.
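
The broad shape of such a bridge is straightforward, even if iiAI itself differs in the details. Here’s a minimal sketch (the port name, baud rate, and model choice are our assumptions) using pyserial and the official OpenAI client:

```python
# Minimal serial-to-ChatGPT bridge sketch (illustrative, not iiAI itself).
# pip install pyserial openai; expects OPENAI_API_KEY in the environment.
import serial
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
port = serial.Serial("/dev/ttyUSB0", 9600)  # adjust for your serial setup

while True:
    # Each line typed on the retro terminal becomes one prompt.
    prompt = port.readline().decode("ascii", errors="ignore").strip()
    if not prompt:
        continue
    reply = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    text = reply.choices[0].message.content
    port.write((text + "\r\n").encode("ascii", errors="replace"))
```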

For a really clean setup, you might try running iiAI internally on an Apple II Pi. ChatGPT has also found its way onto the Commodore 64 and MS-DOS, and look here if you’d like more info on how these AI chatbots work.

Continue reading “Apple II – Now With ChatGPT”

C64 Gets ChatGPT Access Via BBS

ChatGPT, powered by GPT-3.5 and GPT-4, has become one of the most popular large language models (LLMs), thanks to its ability to hold passable conversations and generate large tracts of text. Now, that very tool is available on the Commodore 64 via the Internet.

Obviously, a 6502 CPU with just 64 kilobytes of RAM can barely remember a dictionary, let alone work with something as complicated as a modern large language model. Nor is the world’s best-selling computer well-equipped to connect to modern online APIs. Instead, the C64 accesses ChatGPT through the Retrocampus BBS, as demonstrated by [Retro Tech or Die].

For security reasons, the ChatGPT area of the BBS is only available to the board’s Patreon members. Once in, though, you’re granted a prompt, with ChatGPT’s output displayed in glorious PETSCII on the Commodore 64. It’s all handled by a computer acting as a go-between for the BBS clients and OpenAI’s ChatGPT service, set up by board manager [Francesco Sblendorio]. It’s particularly great to see ChatGPT spitting out C64-compatible BASIC.
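
Part of the go-between’s job is converting the model’s ASCII output to PETSCII before it reaches the C64. In the C64’s lowercase (“text”) mode, the usual mapping just shuffles the letter ranges; take this as a sketch of the idea rather than [Francesco Sblendorio]’s exact code:

```python
# ASCII-to-PETSCII conversion sketch for the C64's lowercase/text mode.
def ascii_to_petscii(text: str) -> bytes:
    out = bytearray()
    for ch in text:
        c = ord(ch)
        if 0x61 <= c <= 0x7A:    # ASCII a-z -> PETSCII $41-$5A (lowercase)
            out.append(c - 0x20)
        elif 0x41 <= c <= 0x5A:  # ASCII A-Z -> PETSCII $C1-$DA (uppercase)
            out.append(c + 0x80)
        elif c == 0x0A:          # LF -> PETSCII carriage return
            out.append(0x0D)
        else:                    # digits and most punctuation line up already
            out.append(c & 0x7F)
    return bytes(out)
```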

While this is a fun use of ChatGPT, be wary of leaning on it for tasks where a confidently wrong answer can do real harm. Video after the break.

Continue reading “C64 Gets ChatGPT Access Via BBS”

Neural Network Helps With Radar Pipeline Diagnostics

Diagnosing pipeline problems is important in industry to avoid costly or dangerous failures from cracked, broken, or damaged pipes. [Kutluhan Aktar] has built a system that uses AI to assist in this difficult task.

The core of the system is an MR60BHA1 60 GHz mmWave radar module, which is most typically used for breathing and heart rate detection. Here, it’s repurposed to detect fluctuating vibrations as a sign that a pipeline may be cracked or damaged. It’s paired with an Arduino Nicla Vision module, with the smart camera able to run a neural network model on the captured radar data to flag potential pipe defects and photograph them. The various modules are assembled on a PCB resembling Dragonite, the Dragon/Flying-type Pokémon.
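
The data flow is easy to picture in miniature: buffer a window of vibration readings, run it through a classifier, and act on a confident hit. The stand-in “classifier” below is just a jitter threshold for illustration; the real build runs a trained neural network on the Nicla Vision:

```python
# Toy sketch of the radar-to-classifier data flow. The jitter threshold
# stands in for the trained neural network that the real build runs on
# the Arduino Nicla Vision.
from collections import deque
from statistics import pstdev

WINDOW = 128                  # vibration samples per inference window
buffer = deque(maxlen=WINDOW)

def looks_defective(samples):
    """Stand-in classifier: flag unusually jittery vibration readings."""
    return pstdev(samples) > 0.5  # arbitrary threshold for illustration

def on_radar_sample(amplitude):
    """Feed in each vibration reading from the 60 GHz mmWave module."""
    buffer.append(amplitude)
    if len(buffer) == WINDOW:
        if looks_defective(buffer):
            print("possible pipe defect: trigger the camera and web report")
        buffer.clear()
```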

[Kutluhan] walks us through the whole development process, including the creation of a web interface for the system. Of particular interest is the way the neural network was trained on real defect models that [Kutluhan] built using PVC pipe. We’ve looked at industrial pipelines in detail before, too. Video after the break.

Continue reading “Neural Network Helps With Radar Pipeline Diagnostics”