Really Useful Robot

[James Bruton] is an impressive roboticist, building all kinds of robots from tracked, exploring robots to Boston Dynamics-esque legged robots. However, many of those robots are proof-of-concept builds that explore machine learning, computer vision, or unique movements and characteristics. This latest build makes use of everything he’s learned from those projects but strives to be useful on a day-to-day basis as well, and it kicks off a series he is doing on building a Really Useful Robot. (Video, embedded below.)

While the robot isn’t quite finished yet, his first video in this series explores the idea behind the build and the construction of the robot’s base. He wants this robot to be able to navigate its environment but also carry out instructions such as retrieving a small object from a table. For that it needs a heavy base, built from large 3D-printed panels, housing two encoder-equipped brushless motors that drive the custom wheels, along with a suspension built from casters and a special hinge. Also included in the base is an Nvidia Jetson that runs the robot and handles heavy-lifting tasks such as image recognition.
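For a sense of what those wheel encoders buy a differential-drive base like this, here is a minimal dead-reckoning sketch; the wheel radius, wheel spacing, and encoder resolution are made-up values, since [James] doesn’t spell them out in the video.

```python
import math

# Made-up parameters: [James]'s actual wheel size and encoder resolution
# aren't given in the video.
WHEEL_RADIUS_M = 0.08   # radius of the custom wheels
WHEEL_BASE_M = 0.40     # distance between the two drive wheels
TICKS_PER_REV = 4096    # encoder counts per wheel revolution

def update_pose(x, y, theta, left_ticks, right_ticks):
    """Dead-reckon a new (x, y, heading) from incremental encoder counts."""
    left = 2 * math.pi * WHEEL_RADIUS_M * left_ticks / TICKS_PER_REV
    right = 2 * math.pi * WHEEL_RADIUS_M * right_ticks / TICKS_PER_REV
    distance = (left + right) / 2            # forward travel of the base
    dtheta = (right - left) / WHEEL_BASE_M   # change in heading
    x += distance * math.cos(theta + dtheta / 2)
    y += distance * math.sin(theta + dtheta / 2)
    return x, y, theta + dtheta
```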

As of this writing, [James] has also released his second video in the series, which goes into detail about the robot’s mapping and navigation functions, and we’re excited to see the finished product. Of course, if you want to see some of [James]’s other projects, be sure to check out his tracked rover or his investigations into legged robots.

Continue reading “Really Useful Robot”

How To Improve A Smart Motor? Make It Bigger!

Brushless motors can offer impressive torque-to-size ratios, and when combined with complex drive control and sensor feedback, exciting things become possible that expand the usual ideas of what motors can accomplish. For example, to use a DC motor in a robot leg, one might expect to need a gearbox, a motor driver, plus an encoder for position sensing. If smooth, organic motion is desired, some sort of compliant mechanical design would be involved as well. But motors like the IQ Vertiq 6806 offered by [IQ Motion Control] challenge those assumptions. By combining a high-torque brushless DC motor, advanced controller, and position sensing into an integrated device, things like improved drone performance and direct-drive robotic legs like those of the Mini Cheetah become possible.

IQ Vertiq 6806 brushless DC motor with integrated controller, driver, and position sensing.

First, the bad news: these are not cheap motors. The IQ Vertiq 6806 costs $399 USD each through the Crowd Supply pre-order ($1499 for four), but they aren’t overpriced for what they are. The cost compares favorably with other motors and controllers of the same class. A little more than halfway down the Crowd Supply page, [IQ Motion Control] makes a pretty good case for itself by comparing features with other solutions. Still, these are not likely to be anyone’s weekend impulse purchase.

So how do these smart motors work? They have two basic operating modes, Speed and Position, each of which requires different firmware; which one to use depends on the intended application.

The “Speed” firmware is designed with driving propeller loads in mind, and works a lot like any other brushless DC motor with an ESC (electronic speed control) on something like a drone or other UAV. But while the unit can be given throttle or speed control signals like any other motor, it can also do things like accept commands in terms of thrust. In other words, an aircraft’s flight controller can command the motors directly in thrust units, instead of a speed control signal whose actual effect is subject to variances like motor voltage level.
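As a rough illustration of the difference (the class, method names, and units below are assumptions, not the real IQ Vertiq API), a flight controller’s view of the two command styles might look something like this:

```python
# Hypothetical interface: the real IQ Vertiq command set isn't shown in
# the article, so the class, method names, and units here are illustrative.
class SmartMotor:
    def set_speed(self, rad_per_s: float) -> None:
        """Classic ESC-style command: spin at a target angular velocity."""

    def set_thrust(self, newtons: float) -> None:
        """Thrust-unit command: the onboard controller chooses whatever
        speed currently produces this propeller thrust, compensating for
        factors like sagging battery voltage."""

motor = SmartMotor()
motor.set_speed(1200.0)  # what a conventional ESC setup boils down to
motor.set_thrust(4.5)    # what the "Speed" firmware additionally allows
```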

The “Position” mode has the motor function like a servo with adjustable torque, which is perfect for direct drive applications like robotic legs. The position sensing also allows for a few neat tricks, like the ability to use the motors as inputs. Embedded below are two short videos showcasing both of these features, so check them out.
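In the same hypothetical vein (again, illustrative names and units, not the real API), Position mode adds servo-style commands and position readback, which is also what enables the motor-as-input trick:

```python
# Still a sketch of the idea, not the real IQ API.
class SmartServoMotor:
    def set_position(self, radians: float, max_torque_nm: float) -> None:
        """Servo-style move with an adjustable torque ceiling, the mode
        suited to direct-drive robot legs."""

    def get_position(self) -> float:
        """Read the integrated position sensor."""
        return 0.0  # placeholder reading

# The position sensing allows using a motor as an input device: turn one
# motor by hand and command a second one to mirror it.
leader, follower = SmartServoMotor(), SmartServoMotor()
follower.set_position(leader.get_position(), max_torque_nm=0.8)
```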

Continue reading “How To Improve A Smart Motor? Make It Bigger!”

Open Source Self-Driving Smartphone Robot

Our smartphones are incredibly powerful computers in their own right, yet we don’t often see them directly integrated into projects. Intel Intelligent Systems Lab has done exactly that with the release of OpenBot, an open source smartphone-based self-driving robot.

Most of the magic happens on the smartphone, which runs an app built on TensorFlow Lite and integrates the camera and array of sensors on the smartphone, as well as the data from ultrasonic sensors and wheel encoders on the robot. The robot itself is relatively simple, with four geared DC motors and motor drivers wired to an Arduino Nano that interfaces with the Android phone over serial.
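To give a flavor of that serial link (the actual message format is defined by OpenBot’s firmware, so the framing below is purely illustrative), a host-side test script might drive the motors like this:

```python
import serial  # pyserial

# Illustrative only: the real framing lives in OpenBot's Arduino firmware,
# so this command format is an assumption, not the actual protocol.
link = serial.Serial("/dev/ttyUSB0", 115200, timeout=0.1)

def drive(left_pwm: int, right_pwm: int) -> None:
    """Send one motor command for the Nano to parse and act on."""
    link.write(f"c{left_pwm},{right_pwm}\n".encode())

drive(192, 192)   # roll forward
drive(64, -64)    # spin in place
```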

The app created by the Intel ISL team comes preloaded with three AI models: one for person following and two for different modes of autonomous navigation. By connecting a Bluetooth controller to the smartphone and driving the robot around manually while collecting data, you can train a custom autonomous driving policy to suit your specific environment.

This looks like an excellent way to get a taste of autonomous robots on a small budget, while still being a viable base for more demanding applications. We’ve seen only a few smartphone-based robots, like DriveMyPhone and SmartiPresense, which don’t have AI capabilities but are intended for telepresence applications. We’ve always wondered why we don’t see more projects with cellphones, so we welcome the example.

Continue reading “Open Source Self-Driving Smartphone Robot”

Robots Learning To Understand Their Surroundings

Today it is pretty easy to build a robot with an onboard camera and have fun manually driving through that first-person view. But builders with dreams of autonomy quickly learn there is a lot of work between camera installation and autonomously executing a “go to chair” command. Fortunately we can draw upon work such as the View Parsing Network by [Bowen Pan, Jiankai Sun, et al].

When a camera image comes into a computer, it is merely a large array of numbers representing red, green, and blue color values, and our robot has no idea what that image represents. Over the past few years, computer vision researchers have found pretty good solutions for the problems of image classification (“is there a chair?”) and segmentation (“which pixels correspond to the chair?”). While useful for building an online image search engine, this is not quite enough for robot navigation.
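To make the distinction concrete, here is a sketch using an off-the-shelf segmentation model from torchvision (not the paper’s network; it assumes a recent torchvision release, and the pretrained model uses the Pascal VOC label set, where “chair” is class index 9):

```python
import torch
import torchvision

# Off-the-shelf example of the two problems (not the View Parsing Network).
model = torchvision.models.segmentation.deeplabv3_resnet50(weights="DEFAULT")
model.eval()

frame = torch.rand(3, 240, 320)  # stand-in for a normalized camera image
with torch.no_grad():
    logits = model(frame.unsqueeze(0))["out"][0]   # (num_classes, H, W)
pixel_labels = logits.argmax(dim=0)          # segmentation: a label per pixel
has_chair = bool((pixel_labels == 9).any())  # classification: is a chair present?
```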

A robot needs to translate those pixel coordinates into a real-world layout, and this is the problem the View Parsing Network offers to solve. Detailed in Cross-view Semantic Segmentation for Sensing Surroundings (DOI 10.1109/LRA.2020.3004325), the system takes in multiple camera views looking all around the robot. Results of image segmentation are then synthesized into a 2D top-down segmented map of the robot’s surroundings. (“Where is the chair located?”)
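The network learns this camera-to-top-down mapping end-to-end, but the classic geometric version gives some intuition: assuming a flat floor and a known camera height, a segmented floor pixel can be back-projected into robot-frame coordinates. The intrinsics and camera height below are made-up example values:

```python
import numpy as np

# Flat-ground back-projection for intuition; the View Parsing Network
# learns this mapping instead of computing it geometrically.
fx, fy, cx, cy = 300.0, 300.0, 160.0, 120.0   # example pinhole intrinsics (pixels)
CAM_HEIGHT_M = 0.5                            # example camera height above the floor

def pixel_to_ground(u, v):
    """Map an image pixel (u, v) on the floor plane to (forward, left)
    meters in the robot frame, assuming the camera looks straight ahead."""
    ray = np.array([(u - cx) / fx, (v - cy) / fy, 1.0])  # x right, y down, z forward
    if ray[1] <= 0:
        return None                # pixel is at or above the horizon
    scale = CAM_HEIGHT_M / ray[1]  # stretch the ray until it hits the floor
    point = scale * ray
    return point[2], -point[0]     # (forward, left)
```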

The authors documented how to train a view parsing network in a virtual environment, and described the procedure to transfer a trained network to run on a physical robot. Today this process demands a significantly higher skill level than “download Arduino sketch” but we hope such modules will become more plug-and-play in the future for better and smarter robots.

[IROS 2020 Presentation video (duration 10:51) requires free registration, available until at least Nov. 25th 2020. One-minute summary embedded below.]

Continue reading “Robots Learning To Understand Their Surroundings”

Rebuilding A Hero (the Robot, Not The Sandwich)

When [Scott Baker] found a Heathkit Hero Junior on eBay, he grabbed it. He had one as a kid, but it was long since sold. The robot arrived with no electronics, so the first order of business is to give it some new, modern brains, including an ATmega328 and a Raspberry Pi. You can see the start of the project in the video below.

So far, you can see a nice teardown of the chassis and what’s left of the little robot’s drive system. This wasn’t the big Hero-1 that you probably remember, but it was still a pretty solid platform, especially for the time it was on the market.

Continue reading “Rebuilding A Hero (the Robot, Not The Sandwich)”

Wheels Or Legs? Why Not Both?

Out of the thousands of constraints and design decisions to consider when building a robot, the way it moves is perhaps the most fundamental. The method of movement constrains the design and use case for the robot more than almost any other parameter. A team of researchers at Texas A&M led by [Kiju Lee] is trying to have their cake and eat it too by building a robot with wheels that transform into legs, known as a-WaLTR (Adaptable Wheel-and-Leg Transformable Robot).

a-WaLTR was designed to conquer one of wheeled robots’ biggest obstacles: stairs. By adding a bit of smarts to determine whether a given terrain is better handled by wheels or legs, a-WaLTR can convert its segmented wheels into simple legs. Rather than implementing complex and error-prone articulated legs, the team stuck with robust appendages that remind us a little of whegs.

The team will show off their prototype at DARPA OFFSET Sprint-5 in February 2021, part of a program focused on building robots that can form adaptive human-swarm teams.

Thanks to the rise of 3D printers and hobbyist electronics there are more open-source experimental robot designs than ever. We’ve seen smaller versions of Boston Dynamics’ famous Spot as well as simpler quadruped bots with more servos. a-WaLTR isn’t the first transforming robot we’ve seen, but we’re looking forward to seeing more unique takes on robotic locomotion in the future.

Thanks to [Qes] for sending this one in!

Flexible Actuators Spring Into Action

Most experiments in flexible robot actuators are based around pneumatics, but [Ayato Kanada] and [Tomoaki Mashimo] have been working on using a coiled spring as the moving component of a linear actuator they call the flexible ultrasonic motor (FUSM). [Yunosuke Sato] built on top of their work, assembling a pair of FUSMs into a closed-loop actuator with motion control in two dimensions.

A single FUSM is pretty interesting by itself, as its coiled spring is the only mechanical moving part. An earlier paper published by [Kanada] and [Mashimo] laid out how to push the spring through a hole in a metal block acting as the stator of this motor. Piezoelectric devices attached to that block minutely distort it in a controlled manner, resulting in linear motion of the spring.

For closed-loop feedback, the electrical resistance from the free end of the spring to the stator block can be measured and converted to linear distance to within a few millimeters. However, the acting end of the spring might be deformed by stretching or bending, which makes calculating its actual position difficult. Accounting for such deformation is a future topic for this group of researchers.
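The idea is that more free spring between the contact point and the stator means more coil wire in the circuit, hence more resistance. A toy version of that conversion, with made-up calibration constants, might look like:

```python
# Sketch of the resistance-based position feedback idea; both constants
# below are assumed calibration values, not figures from the paper.
OHMS_PER_METER = 12.0   # assumed resistance per meter of extended spring
R_OFFSET_OHMS = 0.35    # assumed lead/contact resistance at zero extension

def spring_extension_m(measured_ohms: float) -> float:
    """Convert a resistance reading into linear travel of the spring tip."""
    return (measured_ohms - R_OFFSET_OHMS) / OHMS_PER_METER
```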

This work was presented at IROS 2020, which, like many other conferences this year, moved online and became IROS On-Demand. After a no-cost online registration, we can watch the 12-minute recorded presentation on this project or any other at the conference. The video includes gems such as an exaggerated animation of stator block deformation to illustrate how a FUSM works, and an example of the position calculation challenge where the intended circular motion actually resulted in an oval.

Speaking of conferences that have moved online, we have our own Hackaday Remoticon coming up soon!

Continue reading “Flexible Actuators Spring Into Action”