A photo of the circuitry along with an oscilloscope

Eight Artificial Neurons Control Fully Autonomous Toy Truck

Recently the [Global Science Network] released a video showing an artificial brain controlling an RC truck.

The video shows a neural network made up of eight artificial neurons, assembled on breadboards, being used to control a fully autonomous toy truck. The truck is equipped with four proximity sensors: one front, one front left, one front right, and one rear. The sensor readings are transmitted to the artificial brain, which determines which way to turn and whether to go forward or backward. The inputs to each neuron, the “synapses”, can be excitatory to increase the firing rate or inhibitory to decrease it. The output commands are then returned wirelessly to the truck via a hacked remote control.

This particular type of neural network is called a Spiking Neural Network (SNN), which uses discrete events, called “spikes”, instead of continuous real-valued activations. In these networks, the timing of a neuron’s firing carries information as well as the strength of the signal. There are other videos on the channel which go into more depth on these topics.
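
To make the idea concrete, here is a rough Python sketch of a single leaky integrate-and-fire neuron with excitatory and inhibitory synapses. It is purely illustrative; the weights, threshold, and sensor mapping are made-up assumptions, not the circuit from the video.

```python
# Minimal leaky integrate-and-fire neuron sketch (illustrative only; not the
# circuit from the video). Excitatory synapses push the membrane potential up,
# inhibitory synapses pull it down; crossing the threshold emits a "spike".

class SpikingNeuron:
    def __init__(self, threshold=1.0, leak=0.9):
        self.potential = 0.0       # membrane potential
        self.threshold = threshold
        self.leak = leak           # fraction of potential retained each step

    def step(self, inputs):
        """inputs: list of (spike, weight) pairs; weight > 0 is excitatory,
        weight < 0 is inhibitory. Returns 1 if the neuron fires, else 0."""
        self.potential *= self.leak
        for spike, weight in inputs:
            if spike:
                self.potential += weight
        if self.potential >= self.threshold:
            self.potential = 0.0   # reset after firing
            return 1
        return 0

# Example: one neuron fed by a front proximity sensor (excitatory) and a
# rear sensor (inhibitory); a steady stream of front "hits" makes it fire.
neuron = SpikingNeuron()
for t in range(10):
    front_hit, rear_hit = 1, 0    # pretend the front sensor keeps triggering
    print(t, neuron.step([(front_hit, 0.4), (rear_hit, -0.6)]))
```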

The name of this experimental vehicle is the GSN SNN 4-8-24-2 Autonomous Vehicle, which is short for: Global Science Network Spiking Neural Network 4 Inputs 8 Neurons 24 Synapses 2 Degrees of Freedom Output. The circuitry on both the vehicle and the breadboards is littered with LEDs which give some insight into how it all functions.

If you’re interested in how neural networks can control behavior you might like to see a digital squid’s behavior shaped by a neural network.

Continue reading “Eight Artificial Neurons Control Fully Autonomous Toy Truck”

Diffuse glow of red, green, and blue LEDs embedded in silicone

Embedded LEDs For Soft Robots Made From Silicone

Over on their YouTube channel, [Science Buddies] shows us how to embed LEDs in soft robots. Soft robots can be made entirely or partially from silicone. The video shows an example of a claw-like gripper made entirely from silicone. You can also use silicone to make “skin”. The skin can stretch, and the degree of stretch can be measured by an embedded sensor made from stretchy conductive fabric.

Because silicone is translucent, LEDs embedded within it emit a diffuse glow when lit. Stranded wire is best for flexibility, and the video demonstrates how to loop the wires back and forth into a spring-like shape so they can expand and contract along the axis that will stretch. Alternatively, you can wire in the LEDs without bending the wires if you run them along an axis that won’t stretch.

The video shows how to make silicone skin by layering a two-part silicone mixture into a mold. A base layer of silicone is followed by a strip of conductive fabric and the LED with its wires. Tape holds the fabric and LED in place while a final layer of silicone is applied to completely cover and seal them.

LEDs embedded in silicone get little airflow to carry heat away, so be sure to use a large series resistor to limit the current through the LED as much as possible and prevent overheating. A 1K series resistor would be a good value to try first. If you need the LED to be brighter, you can decrease the resistance, but make sure you’re not generating too much heat when you do so.
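
For a rough sanity check, the series resistor sets the LED current via Ohm’s law. Here is a quick calculation with assumed supply and forward voltages; your LED and supply will differ.

```python
# Back-of-the-envelope LED current check (assumed example values, not from
# the video): I = (V_supply - V_forward) / R_series
v_supply = 5.0      # volts, assumed supply rail
v_forward = 2.0     # volts, typical-ish red LED forward drop (assumption)
r_series = 1000.0   # ohms, the 1K starting value suggested above

current_ma = (v_supply - v_forward) / r_series * 1000
print(f"LED current: {current_ma:.1f} mA")               # ~3 mA with these numbers
print(f"Power in LED: {v_forward * current_ma:.1f} mW")  # ~6 mW, little heat to shed
```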

If you’re interested in stretchy circuits you might also like to read about flexible circuits built on polyimide film.

Continue reading “Embedded LEDs For Soft Robots Made From Silicone”

A Lockpicking Robot That Can Sense The Pins

A robot that can quickly pick any lock, unsupervised and with the skills of a professional human lockpicker, has been a dream for many years. A major issue with lockpicking robots, however, is the lack of any sensing of the pins – or equivalent – as the pick works its magic inside. One approach to solving this was attempted by the [Sparks and Code] channel on YouTube, who built a robot that uses thin wires in a hollow key, load cells, and servos to imitate the experience of a human lockpicker working their way through a pin-tumbler style lock.

Although the experience was mostly a frustrating series of setbacks and failures, it does show an interesting approach to sensing the resistance from the pin stack in each channel. The goal when picking a pin-tumbler lock is to determine which pin is binding so it can be set, letting the plug rotate slightly, and to sense any false gates from security pins that may also be in the pin stack. This is not an easy puzzle to solve, which is probably why most lockpicking robots end up just brute-forcing all possible combinations.
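
As a purely hypothetical sketch of the sensing idea (not [Sparks and Code]’s actual approach or firmware), one could treat the channel whose wire meets the most resistance under light tension as the binding pin:

```python
# Illustrative sketch only: pick the "binding" channel as the one whose wire
# meets the most resistance while light rotational tension is applied.
# Readings are hypothetical load-cell forces, one per pin channel.

def find_binding_pin(forces_newtons):
    """Return the index of the channel with the highest reaction force,
    i.e. the pin most likely bound between plug and housing."""
    return max(range(len(forces_newtons)), key=lambda i: forces_newtons[i])

readings = [0.12, 0.45, 0.10, 0.11, 0.13]  # made-up forces for a 5-pin lock
print("Probe pin", find_binding_pin(readings), "first")  # channel 1 here
```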

Perhaps a more traditional turner-and-pick approach – with one or more load cells on the pick and turner – or a design inspired by the very effective Lishi decoding tools would be more effective here. Regardless, the idea of making lockpicking robots more sensitive is a good one, albeit a tough nut to crack. The jobs of YouTube-based lockpicking enthusiasts are still safe from the robots, for now.

Continue reading “A Lockpicking Robot That Can Sense The Pins”

Robots Want The Jobs You Can’t Do

There’s something ominous about robots taking over jobs that humans are suited to do. Maybe you don’t want a job turning a wrench or pushing a broom, but someone does. But then there are the jobs no one wants to do, or physically can’t do. Most people will support robots fighting fires, disarming bombs, or cleaning up nuclear reactors. But can you climb through a water pipe from the inside? No? There are robots that can, available from several commercial companies, and others from university researchers on multiple continents.

If you think about it, it makes sense. For years, companies that deal with pipes would shoot large slugs, or “pigs”, through the pipeline to scrape them clean. Eventually, they festooned some pigs with sensors, and thus was born the smart pig. But now that it is possible to make tiny robots, why not send them inside the pipe to inspect and repair?

Continue reading “Robots Want The Jobs You Can’t Do”

Two views of a motor are shown. On the left, a ring of copper-wire-wound stator arms is visible inside a ring of magnets. Inside this, a planetary gearbox is visible, with three mid-sized gears surrounding a small central gear. On the right, the same motor is shown, but with the internal components mostly covered by a black faceplate with brass inserts.

A Budget Quasi-Direct-Drive Motor Inspired By MIT’s Mini Cheetah

It’s an unfortunate fact that when a scientist at MIT describes an exciting new piece of hardware as “low-cost,” it might not mean the same thing as if a hobbyist had said it. [Caden Kraft] encountered this disparity when he was building a SCARA arm and needed good actuators. Actuators like those on MIT’s Mini Cheetah would have been ideal, but they cost about $300 each. Instead, [Caden] designed his own actuator, much cheaper but still with excellent performance.

The actuator [Caden] built is a quasi-direct-drive actuator, which combines a brushless DC motor with an integrated gearbox in a small, efficient package. [Caden] wanted all of the custom parts in the motor to be 3D printed, so a backing iron for the permanent magnets was out of the question. Instead, he arranged the magnets to form a Halbach array; according to his simulations, this gave almost identical performance to a motor with a backing iron. As a side benefit, this reduced the inertia of the rotor and let it reverse more easily.

To increase torque, [Caden] used a planetary gearbox with cycloidal gear profiles, which may be the stars of the show here. These reduced backlash, decreased stress concentration on the teeth, and were easier to 3D print. He found a Python program to generate planetary gearbox designs, but ended up creating a fork with the ability to export 3D files. The motor’s stator was bought commercially and wound by hand, and the finished drive integrates a cheap embedded motor controller.

Continue reading “A Budget Quasi-Direct-Drive Motor Inspired By MIT’s Mini Cheetah”

Cara robot dog

From Leash To Locomotion: CARA The Robotic Dog

Normally when you hear the words “rope” and “dog” in the same sentence, you think about a dog on a leash, but in this robot dog, the rope is what makes it move, not what stops it from going too far. [Aaed Musa]’s latest project is CARA, a robotic dog made mostly of 3D printed parts, with brushless motors and ropes used to tie the motors and legs together.

In a previous post, we covered [Aaed Musa]’s use of rope as a mechanism to make capstan drives, enabling high torque and little to no backlash. Taking that gearbox design, tweaking it a bit, and using three motors, he was able to make a leg capable of moving in all three axes. He had to do a good deal of inverse kinematics math to get the leg moving around as desired; once he had the motion of a step defined, it was time to build the rest of the dog.
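
To give a flavor of that math, here is a simplified planar two-link leg IK sketch using the law of cosines. It is an illustration only; the link lengths are assumptions, and [Aaed Musa]’s real legs move in three axes rather than a single plane.

```python
import math

# Simplified 2-link planar leg IK (illustrative; link lengths are assumptions).
L1, L2 = 0.15, 0.15  # upper and lower leg lengths in meters (hypothetical)

def leg_ik(x, y):
    """Return (hip, knee) angles in radians that place the foot at (x, y)
    in the hip's plane, using the law of cosines. One of the two solutions."""
    d2 = x * x + y * y
    d = math.sqrt(d2)
    if d > L1 + L2 or d < abs(L1 - L2):
        raise ValueError("target out of reach")
    # Knee angle from the law of cosines
    knee = math.acos((d2 - L1**2 - L2**2) / (2 * L1 * L2))
    # Hip angle = angle to target minus the angle of the upper link
    # inside the hip-knee-foot triangle
    hip = math.atan2(y, x) - math.atan2(L2 * math.sin(knee),
                                        L1 + L2 * math.cos(knee))
    return hip, knee

print(leg_ik(0.10, -0.20))  # foot 10 cm forward, 20 cm below the hip
```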

CARA is made primarily of 3D printed parts, with several carbon fiber tubes running its length for rigidity. The legs are all free to move not only forward and back but also somewhat side to side, as in a real dog. Twelve large brushless motors provide the torque needed, each driven by an ODrive S1 motor controller and commanded over CAN by a Teensy 4.1 microcontroller. There is also a small BNO086 IMU to sense CARA’s orientation relative to gravity, and a 24V cordless tool battery powers everything.

Once CARA was assembled, there was some more tuning of the motion its legs take while walking. A few tweaks to the printed parts addressed some structural issues, and then a good deal more inverse kinematics math made full use of the IMU, allowing CARA to handle inclines and move in a much more natural style. [Aaed Musa] does a great job explaining his approach on his site as well as in the video below; we’re looking forward to seeing his future projects!

CARA isn’t alone on this site—be sure to check out the other robot dogs we’ve featured here.

Continue reading “From Leash To Locomotion: CARA The Robotic Dog”

Meet Cucumber, The Robot Dog

Robots can look like all sorts of things, but they’re often more fun if you make them look like some kind of charming animal. That’s precisely what [Ananya], [Laurence] and [Shao] did when they built Cucumber the Robot Dog for their final project in the ECE 4760 class.

Cucumber is controllable over WiFi, which was simple enough to implement since it’s based around the Raspberry Pi Pico W. With its custom 3D-printed dog-like body, it’s able to move around on four wheels driven by DC gear motors, and it can flex its limbs thanks to servos in its various joints. Its ultrasonic sensors let it follow someone with some autonomy, while it can also be driven around manually if so desired. To give it more animal qualities, it can also be posed, or commanded to bark, howl, or growl, with commands issued remotely via a web interface.
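
As a hedged sketch of how such a follow behavior might work (not the team’s actual code), the logic can be as simple as steering toward the closer ultrasonic reading while holding a set distance:

```python
# Illustrative follow logic using two ultrasonic distance readings (cm).
# Names and thresholds are assumptions, not taken from the project.

TARGET_CM = 40   # distance to keep from the person being followed
DEADBAND = 10    # ignore small left/right differences

def follow_step(left_cm, right_cm):
    """Return (drive, turn) commands from two front ultrasonic sensors."""
    nearest = min(left_cm, right_cm)
    drive = "forward" if nearest > TARGET_CM else "stop"
    if left_cm + DEADBAND < right_cm:
        turn = "left"        # person is closer on the left, steer toward them
    elif right_cm + DEADBAND < left_cm:
        turn = "right"       # person is closer on the right
    else:
        turn = "straight"
    return drive, turn

print(follow_step(80, 55))   # -> ('forward', 'right')
```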

The level of sophistication is roughly that of the robot dogs that were so popular in the early 2000s. One suspects it could be pretty decent at playing soccer, too, with the right hands behind the controls. Video after the break.

Continue reading “Meet Cucumber, The Robot Dog”