A Crust-Cutting, Carrot-Chopping Robot

[3DprintedLife] sure does hate bread crust. Not the upper portion of homemade bread, mind you — just that nasty stuff around the edges of store-bought loaves. Several dozen hours of CAD later, [3DprintedLife] had themselves a crust-cutting robot that also chops vegetables.

This De-Cruster 9000 is essentially a 2-axis robotic guillotine over a turntable. It uses a Raspberry Pi 4 and OpenCV to seek and destroy bread crusts with a dull dollar store knife. Aside from the compact design, our favorite part has to be the firmware limit switches baked into the custom control board. The stepper drivers have this fancy feature called StallGuard™ that constantly reads the back EMF to determine the load the motor is under. If you have it flag you right before the motor hits the end of the rail and stalls, bam, you have a firmware limit switch. Watch it remove crusts and chop a lot of carrots with faces after the break.
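The project's actual firmware isn't reproduced here, but the trick is easy to sketch. Below is a minimal Arduino-style example of StallGuard-based sensorless homing, assuming a TMC2209-class driver, the TMCStepper library, and a board with a spare hardware serial port; the pin assignments, motor current, and stall threshold are placeholders you would tune for your own hardware.

```cpp
// Minimal sensorless-homing sketch: a TMC2209's StallGuard output acts as a
// firmware limit switch. Assumes the TMCStepper Arduino library; pins and
// the SGTHRS threshold are made up and must be tuned for your machine.
#include <TMCStepper.h>

#define DIAG_PIN  2    // TMC2209 pulls DIAG high on a stall (interrupt-capable pin)
#define STEP_PIN  3
#define DIR_PIN   4
#define EN_PIN    5    // driver enable (active low)
#define R_SENSE   0.11f
#define DRIVER_ADDR 0b00

TMC2209Stepper driver(&Serial1, R_SENSE, DRIVER_ADDR);

volatile bool stalled = false;
void stallISR() { stalled = true; }

void setup() {
  pinMode(EN_PIN, OUTPUT);
  pinMode(STEP_PIN, OUTPUT);
  pinMode(DIR_PIN, OUTPUT);
  pinMode(DIAG_PIN, INPUT);
  Serial1.begin(115200);       // UART link to the driver

  driver.begin();
  driver.toff(4);              // enable the driver stage
  driver.rms_current(600);     // motor current in mA (placeholder)
  driver.microsteps(16);
  driver.TCOOLTHRS(0xFFFFF);   // keep StallGuard active at homing speed
  driver.SGTHRS(80);           // stall sensitivity: higher = more sensitive
  attachInterrupt(digitalPinToInterrupt(DIAG_PIN), stallISR, RISING);

  digitalWrite(EN_PIN, LOW);   // energize the motor
  digitalWrite(DIR_PIN, LOW);  // head toward the end of the rail
  while (!stalled) {           // step until the load spike flags a stall
    digitalWrite(STEP_PIN, HIGH);
    delayMicroseconds(160);
    digitalWrite(STEP_PIN, LOW);
    delayMicroseconds(160);
  }
  digitalWrite(EN_PIN, HIGH);  // the stall is "home": no physical switch needed
}

void loop() {}
```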

This is far from the only dangerous-looking robot we’ve seen lately. Remember this hair-cutting contraption?

Continue reading “A Crust-Cutting, Carrot-Chopping Robot”

Artistic Robot Has Paints, Will Travel

Creativity is a very human trait, and one that many try to emulate with robots. Some focus on the cerebral side of things, working with neural networks and machine learning to produce new artistic output. Others work on the mechanical side, building ‘bots that can manipulate tools in the real world for artistic purposes. [Technovation]’s latest build falls into the latter category – a small Arduino-powered ‘bot that likes to paint.

The robot moves around on two wheels, each driven by a stepper motor for accurate movement. The paintbrush itself is controlled by another stepper, which rotates it between the paint pots and the canvas, while a servo dips the brush into the pots and applies it to the canvas. An Arduino Uno runs the show, with the robot currently programmed to paint random lines of various colors on the canvas.
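[Technovation]’s own code isn’t shown here, but the control loop is easy to picture. As a rough sketch, assuming the AccelStepper and Servo libraries plus invented pins, step counts, and servo angles, the random-line routine might look something like this:

```cpp
// Rough sketch of the painting routine: two steppers drive the wheels, a third
// swings the brush between pots and canvas, and a servo dips/presses the brush.
// All pins, step counts, and angles are invented placeholders.
#include <AccelStepper.h>
#include <Servo.h>

AccelStepper leftWheel(AccelStepper::DRIVER, 2, 3);   // STEP, DIR
AccelStepper rightWheel(AccelStepper::DRIVER, 4, 5);
AccelStepper brushArm(AccelStepper::DRIVER, 6, 7);
Servo dipServo;                                       // raises and lowers the brush

const long POT_POSITIONS[3] = {0, 200, 400};  // arm steps for three paint pots
const long CANVAS_POSITION  = 800;            // arm steps over the canvas

void moveArmTo(long pos) {
  brushArm.moveTo(pos);
  while (brushArm.distanceToGo() != 0) brushArm.run();
}

void dipBrush() {              // lower the brush into the paint, then lift it
  dipServo.write(30);
  delay(400);
  dipServo.write(90);
  delay(400);
}

void setup() {
  dipServo.attach(9);
  leftWheel.setMaxSpeed(400);  leftWheel.setAcceleration(200);
  rightWheel.setMaxSpeed(400); rightWheel.setAcceleration(200);
  brushArm.setMaxSpeed(400);   brushArm.setAcceleration(200);
  randomSeed(analogRead(A0));
}

void loop() {
  moveArmTo(POT_POSITIONS[random(3)]);   // pick a random paint pot
  dipBrush();                            // load the brush
  moveArmTo(CANVAS_POSITION);
  dipServo.write(30);                    // brush down on the canvas
  long strokeSteps = random(100, 600);   // drive forward a random distance
  leftWheel.move(strokeSteps);
  rightWheel.move(strokeSteps);
  while (leftWheel.distanceToGo() || rightWheel.distanceToGo()) {
    leftWheel.run();
    rightWheel.run();
  }
  dipServo.write(90);                    // brush up, ready for the next stroke
}
```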

By virtue of its roving design, it could theoretically paint on arbitrarily large canvases. It’s a platform that could prove highly capable when paired with a neural network and perhaps some machine vision to allow it to concoct more complex artworks. We’ve seen other paint bots before, too. Video after the break.

Continue reading “Artistic Robot Has Paints, Will Travel”

Really Useful Robot

[James Bruton] is an impressive roboticist, building all kinds of robots from tracked exploration platforms to Boston Dynamics-esque legged machines. However, many of those robots are proof-of-concept builds that explore machine learning, computer vision, or unique movements and characteristics. His latest build makes use of everything he’s learned along the way but strives to be useful on a day-to-day basis as well, and it kicks off a series he is doing on building a Really Useful Robot. (Video, embedded below.)

While the robot isn’t quite finished yet, his first video in this series explores the idea behind the build and the construction of the base of the robot itself. He wants this robot to be able to navigate its environment but also carry out instructions such as retrieving a small object from a table. For that it needs a heavy base which is built from large 3D-printed panels with two brushless motors with encoders for driving the custom wheels, along with a suspension built from casters and a special hinge. Also included in the base is an Nvidia Jetson for running the robot, and also handling some heavy lifting tasks such as image recognition.

As of this writing, [James] has also released the second video in the series, which goes into detail about the robot’s mapping and navigation functions, and we’re excited to see the finished product. Of course, if you want to see some of [James]’s other projects, be sure to check out his tracked rover or his investigations into legged robots.

Continue reading “Really Useful Robot”

How To Improve A Smart Motor? Make It Bigger!

Brushless motors can offer impressive torque-to-size ratios, and when paired with sophisticated drive control and sensor feedback, they make possible things that expand the usual ideas of what a motor can accomplish. For example, to use a DC motor in a robot leg, one might expect to need a gearbox, a motor driver, plus an encoder for position sensing. If smooth, organic motion is desired, some sort of compliant mechanical design would be involved as well. But motors like the IQ Vertiq 6806 offered by [IQ Motion Control] challenge those assumptions. By combining a high-torque brushless DC motor, an advanced controller, and position sensing into a single integrated device, things like improved drone performance and direct-drive robotic legs like those of the Mini Cheetah become possible.

IQ Vertiq 6806 brushless DC motor with integrated controller, driver, and position sensing.

First, the bad news: these are not cheap motors. The IQ Vertiq 6806 costs $399 USD each through the Crowd Supply pre-order ($1499 for four), but they aren’t overpriced for what they are. The cost compares favorably with other motors and controllers of the same class. A little further than halfway down the Crowd Supply page, [IQ Motion Control] makes a pretty good case for itself by comparing features with other solutions. Still, these are not likely to be anyone’s weekend impulse purchase.

So how do these smart motors work? They have two basic operating modes, Speed and Position, each of which requires different firmware; which one to use depends on the intended application.

The “Speed” firmware is designed with driving propeller loads in mind, and works a lot like any other brushless DC motor with an ESC (electronic speed control) on a drone or other UAV. But while the unit can be given throttle or speed control signals like any other motor, it can also accept commands in terms of thrust. In other words, an aircraft’s flight controller can command the motors directly in thrust units, instead of sending a speed control signal whose actual effect is subject to variances like motor voltage level.
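[IQ Motion Control]’s actual protocol isn’t detailed here, but the idea behind thrust-unit commands is easy to sketch: a propeller’s static thrust is roughly proportional to the square of its rotational speed (T ≈ kf·ω²), so a thrust command can be inverted into a speed setpoint that the motor’s own controller holds, absorbing things like battery-voltage sag. A toy illustration, with an invented thrust constant:

```cpp
// Toy illustration of thrust-unit commands: invert the propeller's static
// thrust model T = kf * w^2 to get the speed setpoint the controller should
// hold. kf is an invented constant you'd measure by bench-testing the prop.
#include <cmath>
#include <cstdio>

const double KF = 1.8e-6;  // thrust constant, N/(rad/s)^2 (made up)

// Desired thrust in newtons -> speed setpoint in rad/s.
double thrustToSpeed(double thrustN) {
  return std::sqrt(thrustN / KF);
}

int main() {
  // The flight controller asks for thrust; the smart motor holds whatever
  // speed produces it, regardless of battery voltage.
  for (double t : {1.0, 2.0, 4.0}) {
    std::printf("thrust %.1f N -> setpoint %.0f rad/s\n", t, thrustToSpeed(t));
  }
}
```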

The “Position” mode has the motor function like a servo with adjustable torque, which is perfect for direct drive applications like robotic legs. The position sensing also allows for a few neat tricks, like the ability to use the motors as inputs. Embedded below are two short videos showcasing both of these features, so check them out.
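The details of the position API live in [IQ Motion Control]’s documentation, so the sketch below uses an invented, simulated SmartMotor class (not IQ’s real library) purely to show the “motor as input” trick: read the sensed angle of an unpowered motor and mirror it with a powered one.

```cpp
// Invented SmartMotor stand-in (not IQ's actual API) simulating the "motor as
// input" trick: one unpowered motor's sensed angle drives another's setpoint.
#include <cstdio>

class SmartMotor {
  double angle_ = 0;      // simulated shaft angle, radians
public:
  void   spinBy(double d)        { angle_ += d; }    // stand-in for a hand turn
  double readAngle() const       { return angle_; }  // position sensor readout
  void   setPosition(double rad) { angle_ = rad; }   // idealized position loop
};

int main() {
  SmartMotor knob, output;   // knob is unpowered; output mirrors it
  for (int i = 0; i < 5; ++i) {
    knob.spinBy(0.1);                      // user turns the knob a little
    output.setPosition(knob.readAngle());  // output motor follows along
    std::printf("knob %.2f rad -> output %.2f rad\n",
                knob.readAngle(), output.readAngle());
  }
}
```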

Continue reading “How To Improve A Smart Motor? Make It Bigger!”

Open Source Self-Driving Smartphone Robot

Our smartphones are incredibly powerful computers in their own right, yet we don’t often see them directly integrated into projects. Intel Intelligent Systems Lab has done exactly that with the release of OpenBot, an open source smartphone-based self-driving robot.

Most of the magic happens on the smartphone, which runs an app built on TensorFlow Lite that integrates the camera and the array of other sensors on the phone, along with data from the ultrasonic sensors and wheel encoders on the robot. The robot itself is relatively simple: four geared DC motors and motor drivers wired to an Arduino Nano, which interfaces with the Android phone over serial.
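The real firmware lives in the OpenBot repository, but to give a flavor of the Nano’s role, here is a stripped-down sketch with an invented "L,R\n" command format and made-up pins (not OpenBot’s actual protocol) that parses wheel commands from the phone and applies them through an H-bridge driver:

```cpp
// Stripped-down sketch of the Nano's job (invented "L,R\n" protocol and pin
// choices, not OpenBot's actual firmware): parse left/right wheel commands
// arriving from the phone over serial and drive the motors.
const int LEFT_PWM  = 5;   // PWM pins to the motor driver
const int RIGHT_PWM = 6;
const int LEFT_DIR  = 7;   // direction pins
const int RIGHT_DIR = 8;

void driveWheel(int pwmPin, int dirPin, int cmd) {  // cmd in -255..255
  digitalWrite(dirPin, cmd >= 0 ? HIGH : LOW);
  analogWrite(pwmPin, constrain(abs(cmd), 0, 255));
}

void setup() {
  pinMode(LEFT_DIR, OUTPUT);
  pinMode(RIGHT_DIR, OUTPUT);
  Serial.begin(115200);      // USB serial link to the Android phone
}

void loop() {
  if (Serial.available()) {
    // Expect lines like "120,-80\n": left and right wheel commands.
    int left  = Serial.parseInt();
    int right = Serial.parseInt();
    if (Serial.read() == '\n') {
      driveWheel(LEFT_PWM, LEFT_DIR, left);
      driveWheel(RIGHT_PWM, RIGHT_DIR, right);
    }
  }
}
```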

The app created by the Intel ISL team comes preloaded with three AI models that can do either person following or two different modes of autonomous navigation. By connecting a Bluetooth controller to the smartphone and driving the robot around manually in your specific environment while collecting data, you can train a custom autonomous driving policy for that environment.

This looks like an excellent way to get a taste of autonomous robots on a small budget while still being a viable base for more demanding applications. We’ve seen a few smartphone-based robots before, like DriveMyPhone and SmartiPresense, which don’t have AI capabilities but are intended for telepresence applications. We’ve always wondered why we don’t see more projects built around cellphones, so we welcome the example.

Continue reading “Open Source Self-Driving Smartphone Robot”

Robots Learning To Understand Their Surroundings

Today it is pretty easy to build a robot with an onboard camera and have fun manually driving it through that first-person view. But builders with dreams of autonomy quickly learn there is a lot of work between installing a camera and autonomously executing a “go to chair” command. Fortunately we can draw upon work such as the View Parsing Network by [Bowen Pan, Jiankai Sun, et al].

When a camera image comes into a computer, it is merely a large array of numbers representing red, green, and blue color values, and our robot has no idea what that image represents. Over the past few years, computer vision researchers have found pretty good solutions for the problems of image classification (“is there a chair?”) and segmentation (“which pixels correspond to the chair?”). While useful for building an online image search engine, this is not quite enough for robot navigation.

A robot needs to translate those pixel coordinates into a real-world layout, and this is the problem the View Parsing Network sets out to solve. Detailed in Cross-view Semantic Segmentation for Sensing Surroundings (DOI 10.1109/LRA.2020.3004325), the system takes in multiple camera views looking all around the robot. The results of image segmentation are then synthesized into a 2D top-down segmented map of the robot’s surroundings. (“Where is the chair located?”)
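The VPN itself is a learned model, but the underlying geometry can be sketched with classical inverse perspective mapping, the old-school cousin of what the network learns rather than the paper’s method: pick four ground-plane points whose image and map coordinates are known, and let OpenCV warp a segmented camera frame into a top-down view. The point correspondences below are invented.

```cpp
// Classical inverse perspective mapping (not the learned VPN): warp a
// segmented camera frame onto a top-down ground-plane map using four known
// point correspondences. The pixel/map coordinates here are invented.
#include <opencv2/opencv.hpp>

int main() {
  cv::Mat segmented = cv::imread("segmented_frame.png");  // per-pixel labels
  if (segmented.empty()) return 1;

  // Four ground-plane points in the camera image (pixels)...
  std::vector<cv::Point2f> imagePts = {
      {220, 470}, {420, 470}, {380, 300}, {260, 300}};
  // ...and where they land on a 1 cm/pixel top-down map.
  std::vector<cv::Point2f> mapPts = {
      {200, 500}, {300, 500}, {300, 200}, {200, 200}};

  cv::Mat H = cv::getPerspectiveTransform(imagePts, mapPts);
  cv::Mat topDown;
  cv::warpPerspective(segmented, topDown, H, cv::Size(500, 500),
                      cv::INTER_NEAREST);  // nearest: don't blend class labels
  cv::imwrite("top_down_map.png", topDown);
  return 0;
}
```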

The authors documented how to train a view parsing network in a virtual environment, and described the procedure to transfer a trained network to run on a physical robot. Today this process demands a significantly higher skill level than “download Arduino sketch,” but we hope such modules will become more plug-and-play in the future, for better and smarter robots.

[IROS 2020 Presentation video (duration 10:51) requires free registration, available until at least Nov. 25th 2020. One-minute summary embedded below.]

Continue reading “Robots Learning To Understand Their Surroundings”

Rebuilding A Hero (the Robot, Not The Sandwich)

When [Scott Baker] found a Heathkit Hero Junior on eBay, he grabbed it. He had one as a kid, but it was sold long ago. The robot arrived with no electronics, so the first order of business was to give it some new, modern brains, including an ATMega328 and a Raspberry Pi. You can see the start of the project in the video below.

So far, you can see a nice teardown of the chassis and what’s left of the little robot’s drive system. This wasn’t the big Hero-1 that you probably remember, but it was still a pretty solid platform, especially for the time it was on the market.

Continue reading “Rebuilding A Hero (the Robot, Not The Sandwich)”