Teaching A Mini-Tesla To Steer Itself

At the risk of stating the obvious, even when you’ve got unlimited resources and access to the best engineering minds, self-driving cars are hard. Building a multi-ton guided missile that can handle the chaotic environment of rush-hour traffic without killing someone is a challenge, to say the least. So if you’re looking to get into the autonomous car game, perhaps it’s best to start small.

If [Austin Blake]’s fun-sized Tesla go-kart looks familiar, it’s probably because we covered the Teskart back when he whipped up this little demon of an EV from a Radio Flyer toy. Adding self-driving to the kart is a natural next step, so [Austin] set off on a journey into machine learning to make it happen. Having settled on behavioral cloning, which trains a model to replicate a behavior by showing it examples of the behavior, he built a bolt-on frame to hold a steering servo made from an electric wheelchair motor, some drive electronics, and a webcam attached to a laptop. Ten or so human-piloted laps around a walking path at a park resulted in a 48,000-image training set, along with the steering wheel angle at each point.
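
Behavioral cloning boils down to ordinary supervised learning: log camera frames along with the human’s steering angle, then train a network to map one to the other. Here’s a minimal sketch of that idea in PyTorch; the network layout, sizes, and training loop are our own stand-ins, not [Austin]’s actual code.

```python
# Minimal behavioral-cloning sketch (hypothetical, not [Austin]'s actual code):
# regress a steering angle from each camera frame with a small CNN.
import torch
import torch.nn as nn

class SteeringNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 24, 5, stride=2), nn.ReLU(),
            nn.Conv2d(24, 36, 5, stride=2), nn.ReLU(),
            nn.Conv2d(36, 48, 5, stride=2), nn.ReLU(),
            nn.Conv2d(48, 64, 3), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Sequential(nn.Linear(64, 50), nn.ReLU(), nn.Linear(50, 1))

    def forward(self, x):                       # x: (batch, 3, H, W) camera frames
        return self.head(self.features(x))      # predicted steering angle

model = SteeringNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()

def train_step(frames, angles):
    """frames: batch of images; angles: steering angles recorded during the human laps."""
    optimizer.zero_grad()
    loss = loss_fn(model(frames).squeeze(1), angles)
    loss.backward()
    optimizer.step()
    return loss.item()
```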

The first go-around wasn’t so great, with the Teskart seemingly bent on going off the track. [Austin] retooled by adding two more webcams to capture a bit of parallax and hopefully improve the training data. After a bug fix, the improved model really seemed to do the trick, with the Teskart pretty much keeping to its lane around the track, no matter how fast [Austin] pushed it. Check out the video below to see the Teskart in action.
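
We don’t know exactly how [Austin] folded the extra cameras into training, but one common trick in end-to-end driving setups is to treat the side cameras as off-center views and nudge their steering labels back toward the lane center, which teaches the model to recover from drift. A purely hypothetical sketch of that augmentation:

```python
# One common (hypothetical here) way to use side cameras in behavioral cloning:
# treat them as off-center views and nudge the label toward the lane center.
CORRECTION = 0.15  # steering offset in the same units as the recorded angle (tunable)

def augment_with_side_cameras(sample):
    """sample: dict with 'center', 'left', 'right' frames and the recorded 'angle'."""
    return [
        (sample["center"], sample["angle"]),
        (sample["left"],   sample["angle"] + CORRECTION),   # steer back toward center
        (sample["right"],  sample["angle"] - CORRECTION),
    ]
```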

It’s important to note that this isn’t even close to “Full Self-Driving.” The only thing being controlled is the steering angle; [Austin] is controlling the throttle himself and generally acting as the safety driver should the car veer off course, which it tends to do at one particular junction. But it’s a great first step, and we’re looking forward to further development.

Continue reading “Teaching A Mini-Tesla To Steer Itself”

Re-Creating Pink Floyd In The Name Of Speech

For people who have lost the ability to speak, the future may include brain implants that bring that ability back. But could these brain implants also allow them to sing? Researchers believe that, all in all, it’s just another brick in the wall.

In a new study published in PLOS Biology, twenty-nine people who were already being monitored for epileptic seizures participated via postage-stamp-sized arrays of electrodes implanted directly on the surface of their brains. As the participants listened to Pink Floyd’s Another Brick in the Wall, Part 1, the researchers gathered data from several areas of the brain, each attuned to a different musical element, such as harmony or rhythm. Then the researchers used machine learning to reconstruct the audio the participants had heard from their brainwaves alone.

First, an AI model looked at the data generated from the brains’ responses to components of the song, like the changes in rhythm, pitch, and tone. Then a second model rejiggered the piecemeal song and estimated the sounds heard by the patients. Of the seven audio samples published in the study results, we think #3 sounds the most like the song. It’s kind of creepy but ultimately very cool. What do you think?
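
The study’s models are far more involved, but the decode-then-resynthesize idea can be sketched in a few lines: regress neural features onto a spectrogram, then invert the spectrogram back into audio. Everything below (shapes, data, parameters) is a placeholder, not the paper’s actual pipeline.

```python
# Rough sketch of the decode-then-resynthesize idea (placeholder shapes and data,
# not the study's actual pipeline): neural features -> spectrogram -> audio.
import numpy as np
from sklearn.linear_model import Ridge
import librosa

# X: neural features per time bin (n_bins, n_electrodes * n_lags)
# Y: magnitude spectrogram of the song at the same time bins (n_bins, n_freqs)
X_train = np.random.rand(1000, 256)
Y_train = np.random.rand(1000, 129)    # 129 frequency bins -> n_fft of 256
X_test = np.random.rand(200, 256)

decoder = Ridge(alpha=1.0).fit(X_train, Y_train)     # one linear readout per frequency bin
Y_pred = np.clip(decoder.predict(X_test), 0, None)   # predicted magnitude frames

# Invert the magnitude spectrogram to a waveform, estimating phase with Griffin-Lim
audio = librosa.griffinlim(Y_pred.T, n_iter=32, hop_length=64)
```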

Continue reading “Re-Creating Pink Floyd In The Name Of Speech”

Several video clips of a robot arm manipulating objects in a kitchen environment, demonstrating some of the 12 generalized skills

RoboAgent Gets Its MT-ACT Together

Researchers at Carnegie Mellon University have shared a pre-print paper on generalized robot training within a small “practical data budget.” The team developed MT-ACT (Multi-Task Action Chunking Transformer), a system that breaks movement tasks into 12 “skills” (e.g., pick, place, slide, wipe) that can be combined into new and complex trajectories in at least somewhat novel scenarios. The authors write:

Trained merely on 7500 trajectories, we are demonstrating a universal RoboAgent that can exhibit a diverse set of 12 non-trivial manipulation skills (beyond picking/pushing, including articulated object manipulation and object re-orientation) across 38 tasks and can generalize them to 100s of diverse unseen scenarios (involving unseen objects, unseen tasks, and to completely unseen kitchens). RoboAgent can also evolve its capabilities with new experiences.
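
The “action chunking” part of the name is the core trick: rather than predicting a single action per timestep, the policy emits a short chunk of future actions at once, which helps smooth over noisy human demonstrations. Here’s a toy sketch of that idea; the dimensions, token handling, and names are our own simplifications, not the MT-ACT code.

```python
# Toy sketch of action chunking (our own simplification, not the MT-ACT code):
# the policy maps observation tokens to a short chunk of future actions.
import torch
import torch.nn as nn

CHUNK = 8        # actions predicted per step
ACT_DIM = 7      # e.g. joint velocities plus gripper
OBS_DIM = 512    # placeholder for an image/language/proprioception embedding

class ChunkingPolicy(nn.Module):
    def __init__(self):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model=OBS_DIM, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(OBS_DIM, CHUNK * ACT_DIM)

    def forward(self, obs_tokens):                    # (batch, tokens, OBS_DIM)
        z = self.encoder(obs_tokens).mean(dim=1)      # pool over observation tokens
        return self.head(z).view(-1, CHUNK, ACT_DIM)  # (batch, CHUNK, ACT_DIM)

policy = ChunkingPolicy()
chunk = policy(torch.randn(1, 16, OBS_DIM))  # execute the chunk, then re-plan
```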

Continue reading “RoboAgent Gets Its MT-ACT Together”


Hackaday Links: August 13, 2023

Remember that time when the entire physics community dropped what it was doing to replicate the extraordinary claim that a room-temperature superconductor had been discovered? We sure do, and if it seems like it was just yesterday, it’s probably because it pretty much was. The news of LK-99, a copper-modified lead apatite compound, hit at the end of July; now, barely three weeks later, comes news that not only is LK-99 not a superconductor, but that its resistivity at room temperature is about a billion times higher than that of copper. For anyone who rode the “cold fusion” hype train back in the late 1980s, LK-99 had a bit of code smell on it from the start. We figured we’d sit back and let science do what science does, and sure enough, the extraordinary claim seems unable to muster the extraordinary evidence needed to support it, with the significant caveat that a lot of the debunking papers, and indeed the original paper on LK-99, still appear to be preprints that have not yet been peer-reviewed.

So what does all this mean? Sadly, probably not much. Despite the overwrought popular media coverage, a true room-temperature and pressure superconductor was probably not going to save the world, at least not right away. The indispensable Asianometry channel on YouTube did a great video on this. As always, his focus is on the semiconductor industry, so his analysis has to be viewed through that lens. He argues that room-temperature superconductors wouldn’t make much difference in semiconductors because the place where they’d most likely be employed, the interconnects on chips, will still have inductance and capacitance even if their resistance is zero. That doesn’t mean room-temperature superconductors wouldn’t be a great thing to have, of course; seems like they’d be revolutionary for power transmission if nothing else. But not so much for semiconductors, and certainly not today.

Continue reading “Hackaday Links: August 13, 2023”

AI Learns To Walk In 3D Training Grounds

AI agents are learning to do all kinds of interesting jobs, even the creative ones that we’d quite prefer to handle ourselves. Nevertheless, technology marches on. Working in this area is YouTuber [AI Warehouse], who has been teaching an AI to walk in a simulated environment.

Albert needed some specific guidance to learn how to walk upright, something that humans tend to figure out innately.

The AI controls a vaguely humanoid creature, albeit with a heavily simplified body and limbs. It “lives” in a 3D environment created in the Unity engine, which provides the physics simulation needed for the work. Meanwhile, the ML-Agents package provides the brain for Albert, the AI charged with learning to walk.

The video steps through a variety of “deep reinforcement learning” tasks. In these, the AI is rewarded for completing goals which are designed to teach it how to walk. Albert is given control of his limbs, and simply charged with reaching a button some distance away on the floor. After many trials, he learns to do the worm, and achieves his goal.

Getting Albert to walk upright took altogether more training. Lumpy ground and walls between him and his goal upped the challenge, along with rewards for alternating his feet and maintaining an upright posture. Over time, he progressed through skipping to something approximating a proper walk cycle.
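
The rewards themselves live in Unity’s ML-Agents C# scripts, but the shaping described boils down to a handful of weighted terms. A generic sketch, with made-up weights and signal names:

```python
# Generic sketch of the kind of reward shaping described (made-up weights and
# signal names; the real thing lives in Unity's ML-Agents C# scripts).
def shaped_reward(state):
    r = 0.0
    r += 1.0 * state["progress_toward_button"]   # main goal: get closer to the button
    r += 0.5 * state["torso_uprightness"]        # e.g. dot of torso up-vector with world up
    if state["stepped_with_other_foot"]:         # encourage alternating feet
        r += 0.2
    if state["fell_over"]:                       # penalize (and typically end) a fall
        r -= 1.0
    return r
```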

One may argue that the teaching method required a lot of specific guidance, but it’s a neat feat to achieve nonetheless. It’s altogether more complex than learning to play Trackmania, we’d say, and that was impressive enough in itself. Video after the break.

Continue reading “AI Learns To Walk In 3D Training Grounds”

Ecological System Dynamics For Computing

Some of you may remember that the ship’s computer on Star Trek: Voyager contained bioneural gel packs. Researchers have taken us one step closer to a biocomputing future with a study on the potential of ecological systems for computing.

Neural networks are a big deal in the world of machine learning, and it turns out that ecological dynamics exhibit many of the same properties. Reservoir Computing (RC) is a special type of Recurrent Neural Network (RNN) that feeds inputs into a black-box reservoir with fixed dynamics, training only the outputs, which drastically reduces the computational requirements of the system. With some research now embodying these reservoirs in physical objects like robot arms, the researchers wanted to see whether biological systems could be used as computing resources.
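
A classic software example of the idea is the echo state network: a large, fixed, random recurrent matrix does the heavy lifting, and only a linear readout is ever trained. A minimal numpy sketch, with toy sizes and a toy signal just to show the shape of the technique:

```python
# Minimal echo state network sketch: a fixed random reservoir, trained readout only.
import numpy as np

rng = np.random.default_rng(0)
N, IN = 200, 1                                   # reservoir size, input dimension
W_in = rng.uniform(-0.5, 0.5, (N, IN))           # fixed input weights
W = rng.normal(0, 1, (N, N))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))        # scale spectral radius below 1

def run_reservoir(u_seq):
    """Drive the fixed reservoir with an input sequence and collect its states."""
    x, states = np.zeros(N), []
    for u in u_seq:
        x = np.tanh(W @ x + W_in @ np.atleast_1d(u))
        states.append(x.copy())
    return np.array(states)

# Only the linear readout is trained (ridge regression), here to predict the
# input series one step ahead.
u = np.sin(np.linspace(0, 60, 1500))             # toy input signal
X, y = run_reservoir(u)[:-1], u[1:]
W_out = np.linalg.solve(X.T @ X + 1e-6 * np.eye(N), X.T @ y)
prediction = X @ W_out                           # one-step-ahead predictions
```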

Using both simulated and real populations of the ciliate Tetrahymena thermophila responding to temperature stimuli, the researchers showed that ecological system dynamics satisfy the “necessary conditions for computing (e.g. synchronized dynamics in response to the same input sequences) and can make near-future predictions of empirical time series.” Performance is currently lower than that of other forms of RC, but the researchers believe this will open up an exciting new area of research.

If you’re interested in some other experiments in biocomputing, check out these RNA-based logic gates, this DNA-based calculator, or this fourteen-legged state machine.

What Do You Want In A Programming Assistant?

The Propellerheads released a song in 1998 entitled “History Repeating.” If you don’t know it, the lyrics include: “They say the next big thing is here. That the revolution’s near. But to me, it seems quite clear. That it’s all just a little bit of history repeating.” The next big thing today seems to be the AI chatbots. We’ve heard every opinion from the “revolutionize everything” to “destroy everything” camp. But, really, isn’t it a bit of history repeating itself? We get new tech. Some oversell it. Some fear it. Then, in the end, it becomes part of the ordinary landscape and seems unremarkable in the light of the new next big thing. Dynamite, the steam engine, cars, TV, and the Internet were all predicted to “ruin everything” at some point in the past.

History really does repeat itself. After all, when X-rays were discovered, they were claimed to cure pneumonia and other infections, among other supposed miracle cures. Those claims didn’t pan out, but we still use X-rays for the things they are good at. Calculators were going to ruin math classes. There are plenty of other examples.

This came to mind because a recent post from ACM takes the contrary view that chatbots aren’t able to help real programmers. We’ve also seen that, maybe, they can, in limited ways. We suspect it is like getting a new, larger monitor. At first, it seems huge. But in a week, it is just the normal monitor, and your old one, which had been perfectly adequate, seems tiny.

But we think there’s a larger point here. Maybe the chatbots will help programmers. Maybe they won’t. But clearly, programmers want some kind of help. We just aren’t sure what kind of help it is. Do we really want Copilot to write our code for us? Do we want to ask Bard or ChatGPT/Bing for the best way to balance a B-tree? Asking an AI to do static code analysis seems to work pretty well.

So maybe your path to fame and maybe even riches is to figure out (AI-based or not) what people actually want in an automated programming assistant and build that. Home computers languished until someone figured out what people wanted to do with them. Videocassette recorders didn’t make it into the home until companies figured out what people most wanted to watch on them.

How much and what kind of help do you want when you program? Or design a circuit or PCB? Or even a 3D model? Maybe AI isn’t going to take your job; it will just make it easier. We doubt, though, that it can much improve on Dame Shirley Bassey’s history lesson.