Designing An Advanced Autonomous Robot: Goose

Robotics is hard. Maybe not quite as difficult as astrophysics or understanding human relationships, but designing a competition-winning bot from scratch was never going to be easy. OK, so [Paul Bupe, Jr]'s robot, named ‘Goose’, did not quite win the competition, but we’re very interested to learn what golden eggs it might lay in the aftermath.

The bot’s mechanics are based on a fairly standard dual tracked drive system, which makes controlling a turn much easier than with steered wheels: simply run the tracks at different speeds. Why make life more difficult than it already is? But what we’re really interested in is the design of the control system and the rationale behind those design choices.
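For anyone wanting to experiment, skid-steer turning reduces to simple differential mixing of a forward speed and a turn rate. Here is a minimal sketch; the track width is an assumption of ours, not a measurement from Goose:

```python
# Hypothetical skid-steer mixer: convert a forward speed and turn rate
# into left/right track speeds. The 0.2 m track width is invented.
def track_speeds(v, w, track_width=0.2):
    """v: forward speed (m/s), w: turn rate (rad/s, CCW positive)."""
    left = v - w * track_width / 2.0
    right = v + w * track_width / 2.0
    return left, right

# Spin in place: equal and opposite track speeds.
print(track_speeds(0.0, 1.0))   # (-0.1, 0.1)
```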

The diagram on the left might look complicated, but essentially the system is based on two ‘brains’: the Teensy microcontroller (MCU) and a Raspberry Pi, though most of the grind is performed by the MCU. Running at 96 MHz, the MCU is fast enough to process data from the encoders and IMU in real time, enabling the bot to respond quickly and smoothly to its sensors. More complicated and ‘heavier’ tasks such as LIDAR processing and computer vision (CV) are performed on the Pi, which runs the Robot Operating System (ROS) and communicates with the MCU by means of a couple of ‘nodes’.
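We don’t have [Paul]’s source in front of us, but a Pi-side bridge node in this style is typically only a few lines of rospy. Here is a hedged sketch that relays newline-delimited telemetry arriving from the MCU over USB serial; the port, baud rate, and topic name are all assumptions:

```python
#!/usr/bin/env python
# Hypothetical MCU-to-ROS bridge: read telemetry lines from the Teensy
# over USB serial and republish them as a ROS topic.
import rospy
import serial
from std_msgs.msg import String

def main():
    rospy.init_node('mcu_bridge')
    pub = rospy.Publisher('mcu/telemetry', String, queue_size=10)
    port = serial.Serial('/dev/ttyACM0', 115200, timeout=1)  # assumed port/baud
    while not rospy.is_shutdown():
        line = port.readline().decode('ascii', 'ignore').strip()
        if line:
            pub.publish(String(data=line))

if __name__ == '__main__':
    main()
```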

The competition itself dictated that the bot should travel in circuits within the walls of a large box, whilst avoiding particular objects. GPS was never going to work indoors, and dead reckoning alone would not keep the machine on track, so it relied heavily on LiDAR point cloud data to effectively pinpoint the location of the robot at all times. Now we really get to the crux of the design, where all the available sensors are combined and fed into a ‘particle filter algorithm’.
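For readers new to the idea, a particle filter tracks a cloud of candidate poses, weights each one by how well it explains the latest scan, and then resamples in favour of the winners. Here is a deliberately simplified single update step; likelihood() is a placeholder for whatever function scores a pose against the map, and the noise figure is invented:

```python
import numpy as np

# One simplified particle filter step. particles: (N, 3) array of
# (x, y, heading) hypotheses; control: motion estimate from the
# encoders/IMU; likelihood(pose, scan): placeholder scoring function.
def particle_filter_step(particles, weights, control, scan, likelihood):
    # Predict: apply the motion estimate plus invented process noise.
    particles = particles + control + np.random.normal(0, 0.02, particles.shape)
    # Update: re-weight each particle by how well it explains the scan.
    weights = weights * np.array([likelihood(p, scan) for p in particles])
    weights /= weights.sum()
    # Resample: clone likely particles, discard unlikely ones.
    idx = np.random.choice(len(particles), len(particles), p=weights)
    return particles[idx], np.full(len(particles), 1.0 / len(particles))
```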

What we particularly love about this project is how clearly everything is explained, without too many fancy terms or acronyms. [Paul Bupe, Jr] has obviously taken the time to reduce the overall complexity to more manageable concepts that encourage us to explore further. Maybe [Paul] will even find the time to produce individual tutorials for each of the robot’s systems?

We could well be reading far too much into the name of the robot, ‘Goose’ being Captain Marvel’s bizarre ‘trans-species’ cat that ends up laying a whole load of eggs. But could this robot help establish a de facto standard for small robots?

We’ve seen other competition robots on Hackaday, and hope to see a whole lot more!

Video after the break:

Continue reading “Designing An Advanced Autonomous Robot: Goose”

Robot Harvesting Machine Is Tip Of The Agri-Tech Iceberg

Harvesting delicate fruit and vegetables with robots is hard, and we humans increasingly don’t want to do these jobs ourselves. The pressure to find engineering solutions is intense, and more and more machines of different shapes and sizes have recently been emerging in an attempt to alleviate the problem. Additionally, each crop is often quite different from the next, so a strawberry-picking machine cannot, for example, be used for harvesting lettuce.

A team from Cambridge University, UK, recently published the details of their lettuce-picking machine, written in a nice easy-to-read style and packed full of useful practical information. Well worth a read!

The machine uses YOLO3 detection and classification networks to get localisation coordinates for each crop and then check whether it’s ready for harvest or diseased. A standard UR10 robotic arm then positions the harvesting mechanism over the lettuce, using force feedback through the arm joints to detect when it touches the ground. A pneumatically actuated cutting blade then attempts to cut the lettuce at exactly the right height below the head in order to satisfy the very exacting requirements of the supermarkets.
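That force-feedback step is a neat trick: lower the tool until the arm feels resistance, and you have a ground reference for the cut height without any extra sensors. A purely illustrative sketch follows, with an invented arm interface and force threshold (the UR10’s actual API differs):

```python
# Hypothetical ground-seek routine: creep downward until the z-axis
# force reading says we've touched soil, then use that height as the
# cutting reference. The arm object, 15 N threshold, and 2 mm step
# are all invented for illustration.
def find_ground(arm, step_m=0.002, force_limit_n=15.0):
    while arm.tcp_force_z() < force_limit_n:
        arm.move_down(step_m)
    return arm.z_position()   # reference height for the blade
```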

Rather strangely, the main control hardware is just a standard laptop, which handles two consumer-grade USB cameras with a combined detection and classification time of about 0.212 seconds. The software is ROS (Robot Operating System) with custom nodes written in Python by members of the team.
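A custom node in this style doesn’t need to be elaborate. Here is a minimal, hypothetical rospy sketch that subscribes to one of the USB cameras and republishes detection results; the topic names and the run_yolo() stub are stand-ins of ours, not the team’s code:

```python
#!/usr/bin/env python
# Hypothetical detection node: subscribe to a camera topic, run the
# detector on each frame, publish the results as a string message.
import rospy
from sensor_msgs.msg import Image
from std_msgs.msg import String

def run_yolo(img_msg):
    # Stand-in for the team's YOLO3 detect/classify pass.
    return []

def callback(msg, pub):
    detections = run_yolo(msg)
    pub.publish(String(data=str(detections)))

def main():
    rospy.init_node('lettuce_detector')
    pub = rospy.Publisher('harvest/detections', String, queue_size=1)
    rospy.Subscriber('usb_cam/image_raw', Image, callback, callback_args=pub)
    rospy.spin()

if __name__ == '__main__':
    main()
```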

Although the machine is slow and under-powered, we were impressed that it seemed to work quite well. This particular project has been ongoing for several years now, and the machine has been rebuilt 16 times! These types of machines are currently (2019) very much in their infancy, and we can expect to see many more attempts at cracking these difficult engineering tasks in the next few years.

We’ve covered some solutions before, including: Weedinator, an autonomous farming ‘bot, MoAgriS, an indoor farming rig, a laser-firing fish-lice remover, an Aussie farming robot, and of course the latest and greatest from FarmBot.

Video after the break:

Continue reading “Robot Harvesting Machine Is Tip Of The Agri-Tech Iceberg”

Dashing Diademata Delivers Second Generation ROS

A simple robot that performs line-following or obstacle avoidance can fit all of its logic inside a single Arduino sketch. But as a robot’s autonomy increases, its corresponding software gets complicated very quickly. It won’t be long before diagnostic monitoring and logging come in handy, or before you want to encapsulate feature areas and orchestrate how they work together. This is where tools like the Robot Operating System (ROS) come in, so we don’t have to keep reinventing these same wheels. And Open Robotics just released ROS 2 Dashing Diademata for all of us to use.

ROS is an open source project that’s been underway since 2007 and updated regularly, with each release named after a turtle species. What makes this one worthy of extra attention? Dashing marks the first long-term support (LTS) release of ROS 2, a refreshed second generation of ROS. All the high-level concepts stayed the same, meaning almost everything in our ROS orientation guide is still applicable in ROS 2. But there were big changes under the hood reflecting technical advances over the past decade.
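To illustrate what “the concepts stayed the same” means in practice, here is a minimal ROS 2 publisher written against the rclpy client library. The publish/subscribe model is exactly what a ROS 1 user expects; only the API surface changed. Node and topic names here are arbitrary:

```python
# Minimal ROS 2 (rclpy) publisher node; arbitrary node/topic names.
import rclpy
from rclpy.node import Node
from std_msgs.msg import String

class Talker(Node):
    def __init__(self):
        super().__init__('talker')
        self.pub = self.create_publisher(String, 'chatter', 10)
        self.create_timer(1.0, self.tick)   # fire once per second

    def tick(self):
        msg = String()
        msg.data = 'hello from Dashing'
        self.pub.publish(msg)

def main():
    rclpy.init()
    rclpy.spin(Talker())

if __name__ == '__main__':
    main()
```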

ROS was built in an age when a Unix workstation cost thousands of dollars, XML was going to be how we communicated all data online, and an autonomous robot cost more than a high-end luxury car. Now we have the $35 Raspberry Pi running Linux, XML has fallen out of favor due to its processing overhead, and some autonomous robots are high-end luxury cars. For these and many other reasons, the people of Open Robotics decided it was time to make a clean break from legacy code.

The break has its detractors, as it meant leaving behind the vast library of freely available robot intelligence modules released by researchers over the years. Popular ones were (or will be) ported to ROS 2, and there is a translation bridge sufficient to work with some, but the rest will be left behind. However, this update also resolved many of the deal-breakers preventing adoption outside of research, making ROS more attractive for commercial investment, which should bring more robots mainstream.

Judging by responses to the release announcement, there are plenty of people eager to put ROS 2 to work, but it is not the only freshly baked robotics framework around. We just saw Nvidia release their Isaac Robot Engine tailored to make the most of their Jetson hardware.

Little Lamp To Learn Longer Leaps

Reinforcement learning is a subset of machine learning where the machine is scored on its performance by an “evaluation function”. Over the course of a training session, behavior that improves the final score is positively reinforced, gradually building towards an optimal solution. [Dheera Venkatraman] thought it would be fun to use reinforcement learning to make a little robot lamp move. But before that could happen, he had to build the hardware and prove its basic functionality with a manual test script.
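As a rough illustration of the loop involved, here is a toy sketch: perturb the control policy, score each attempt with the evaluation function, and keep whatever scores better. Real RL algorithms are considerably smarter than this crude random search, and evaluate() is whatever scoring the caller supplies (distance hopped, say):

```python
import random

# Toy policy search, a crude stand-in for real reinforcement learning.
# policy: list of joint-angle parameters; evaluate: caller-supplied
# scoring function, e.g. distance hopped in simulation.
def train(policy, evaluate, episodes=1000, noise=0.1):
    best = evaluate(policy)
    for _ in range(episodes):
        candidate = [p + random.gauss(0, noise) for p in policy]
        score = evaluate(candidate)
        if score > best:              # reinforce improvements
            policy, best = candidate, score
    return policy
```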

The lamp takes its inspiration from the hopping logo of Pixar Animation Studios, and this particular form of locomotion has a few counterparts in the natural world. But hoppers of the natural world don’t take the shape of a Luxo lamp, making this project an interesting challenge. [Dheera] published all of his OpenSCAD files for this 3D-printed lamp so others can join in the fun. Inside the lamp head is an LED ring to illuminate where we expect a light bulb, while also leaving room in the center for a camera. The articulation servos are driven by a PCA9685 I2C PWM driver board, and he has written and released code to interface such boards with the Robot Operating System (ROS) orchestrating our lamp’s features. This completes the underlying hardware components and associated software foundations for this robot lamp.
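Driving hobby servos through a PCA9685 from Python is pleasantly simple. Below is a minimal sketch using the legacy Adafruit_PCA9685 library rather than [Dheera]’s ROS interface; the channel number and pulse widths are guesses of ours, not his calibration:

```python
# Sketch: drive one servo channel on a PCA9685 over I2C. Channel and
# pulse widths are illustrative, not [Dheera]'s values.
import Adafruit_PCA9685

pwm = Adafruit_PCA9685.PCA9685()   # default I2C address 0x40
pwm.set_pwm_freq(50)               # 50 Hz standard servo frame

def set_servo(channel, pulse_ms):
    # The chip divides each 20 ms frame into 4096 counts.
    counts = int(pulse_ms / 20.0 * 4096)
    pwm.set_pwm(channel, 0, counts)

set_servo(0, 1.5)                  # centre the head joint
```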

Once all the parts had been printed, the electronics wired, and everything assembled, [Dheera] hacked together a simple “Hello World” script to verify that his mechanical design was good enough to get started. The video embedded after the break was taken at OSH Park’s Bring-A-Hack afterparty to Maker Faire Bay Area 2019. The motion sequence was frantically hand-coded in 15 minutes, but these tentative baby hops will serve as a great baseline. The future hopping performance of control algorithms trained by reinforcement learning will show how far this lamp has come since this humble “Hello World” hop.

[Dheera] had previously created the shadow clock and is no stranger to ROS, having created the ROS topic text visualization tool for debugging. We will be watching to see how robot Luxo evolves; hopefully it doesn’t find a way to cheat! Want to play with reinforcement learning, but prefer wheeled robots? Here are a few options.

Continue reading “Little Lamp To Learn Longer Leaps”

ROS Gets Quick Sensor Debugging In The Terminal

Sensors are critical in robotics. A robot relies on its sensor package to perform its programmed duties. If sensors are damaged or non-functional, the robot can perform unpredictably, or even fail entirely. [Dheera Venkatraman] has been working to make debugging sensor issues easier with the rosshow package for Robot Operating System.

Normally, if you want to be certain a camera feed is working on a robot, you’d have to connect a monitor and other peripherals, check manually, then put everything away again when you’re finished. [Dheera] decided this was altogether too much of a pain for basic sensor checks.

Instead, rosshow uses the power of SSH to speed things along. Log in to the robot, fire off a few command line instructions, and rosshow will start displaying sensor data in the terminal on your remote machine. It’s achieved through the use of Unicode Braille art in the terminal. Sure, you won’t get a full-resolution feed from your high-definition camera, and the display from the laser scanner isn’t exactly perfect. But it’s enough to provide instant verification that sensors are connected and working, and it will speed up those routine is-it-connected checks by an order of magnitude.
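The Braille trick works because each glyph in the U+2800 block encodes a 2x4 grid of dots as an eight-bit mask, so a single character cell holds eight “pixels”. Here is a standalone toy renderer in the same spirit (not rosshow’s actual code):

```python
# Render a binary image as Unicode Braille, one 2x4 pixel block per
# character. Dot bit positions follow the Unicode Braille block layout.
BRAILLE_BASE = 0x2800
DOTS = [(0, 0, 0x01), (1, 0, 0x02), (2, 0, 0x04), (3, 0, 0x40),
        (0, 1, 0x08), (1, 1, 0x10), (2, 1, 0x20), (3, 1, 0x80)]

def to_braille(img):
    # img: 2D list of 0/1; height a multiple of 4, width a multiple of 2.
    lines = []
    for y in range(0, len(img), 4):
        row = ''
        for x in range(0, len(img[0]), 2):
            mask = sum(bit for dy, dx, bit in DOTS if img[y + dy][x + dx])
            row += chr(BRAILLE_BASE + mask)
        lines.append(row)
    return '\n'.join(lines)

# A 4x4 checkerboard becomes two Braille characters.
print(to_braille([[1, 0, 1, 0], [0, 1, 0, 1], [1, 0, 1, 0], [0, 1, 0, 1]]))
```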

Robot Operating System is a particularly useful platform if you’re thinking about the software platform for your next build. If you do put something together, be sure to let us know.

Buy Or Build An Autonomous Race Car To Take The Checkered Flag

Putting autonomous vehicles on public roads takes major resources beyond most of our means. But we can explore all the same general concepts at a smaller scale by modifying remote-control toy cars, limited only by our individual budgets and skill levels. For those of us whose interest and expertise lie in software, Amazon Web Services just launched AWS DeepRacer: a complete package for exploring machine learning on autonomous vehicles.

At a hardware level, the spec sheet makes it sound like they’ve bolted their AWS DeepLens machine vision computer onto a 1/18th scale monster truck chassis. But the hardware is only the tip of the iceberg. The software behind DeepRacer is AWS RoboMaker, a set of tools for applying AWS to robot development, covering everything from running digital simulations to training neural networks in the cloud. Don’t know enough about machine learning? No problem! Amazon has also just opened up their internal training curriculum to the world. And to encourage participation, Amazon is running a DeepRacer League with races taking place both digitally online and physically at AWS Summit events around the world. They’ve certainly offered us a full plate at their re:Invent conference this week.
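Central to DeepRacer training is a user-written reward function that scores every simulation step. Below is a hedged example in the shape AWS documents, rewarding the car for hugging the centre line; the params keys are from our reading of the docs and may have changed since:

```python
# Example DeepRacer-style reward function: favour staying close to the
# track centre line. The params dict and its keys follow AWS's
# documented examples as we understand them.
def reward_function(params):
    track_width = params['track_width']
    distance = params['distance_from_center']
    if distance <= 0.1 * track_width:
        return 1.0                 # hugging the centre line
    if distance <= 0.5 * track_width:
        return 0.5                 # drifting, but recoverable
    return 1e-3                    # nearly off the track
```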

But maybe someone prefers not to use Amazon, or prefers to build their own hardware, or to run their own competitions. Fortunately, Amazon is not the only game in town, merely the latest entry in an existing field. The DeepRacer League’s predecessor was the Robocar Rally, and the DeepRacer itself follows the Donkey Car. A do-it-yourself autonomous racing platform we first saw at Bay Area Maker Faire 2017, the Donkey Car has since built up its documentation and software tools, including a simulator. The default Donkey Car code is fairly specific to the car, but builders are certainly free to use something more general like the open source Robot Operating System and the Gazebo robot simulator. (Which is what AWS RoboMaker builds on.)

So if the goal is to start racing little autonomous cars, we have options to buy pre-built hardware or enjoy the flexibility of building our own. Either way, it’s just another example of why this is a great time to get into neural networks, with or without companies like Amazon devising ways to earn our money. Of course, this isn’t the only Amazon project trying to build a business around an idea explored by an existing open source project. We had just talked about their AWS Ground Station offering which covers similar ground (sky?) as our 2014 Hackaday Prize winner SatNOGS.

Is 2018 Finally The Year Of Windows On The Robot?

Microsoft is bringing ROS to Windows 10. ROS stands for Robot Operating System, a software framework and large collection of libraries for developing robots, which we recently wrote an introductory article about. It has long been supported primarily under Linux and Mac OS X, and even then works best under Ubuntu. My own efforts to get it working under the Raspbian distribution on the Raspberry Pi led me to instead download an Ubuntu image for the Pi. So having it running with the support of Microsoft on Windows will add some welcome variety.

TurtleBot 3 at ROSCon 2018. Photo: Evan Ackerman/IEEE Spectrum

To announce it to the world, they had a small booth at the recent ROSCon 2018 in Madrid, where they showed a Robotis TurtleBot 3 running the Melodic Morenia release of ROS under Windows 10 IoT Enterprise on an Intel Coffee Lake NUC, with a ROS node incorporating hardware-accelerated Windows Machine Learning.

Why are they doing this? It may be to help promote their own machine learning products to roboticists and manufacturers. Their recent blog entry says:

We’re looking forward to bringing the intelligent edge to robotics by bringing advanced features like hardware-accelerated Windows Machine Learning, computer vision, Azure Cognitive Services, Azure IoT cloud services, and other Microsoft technologies to home, education, commercial, and industrial robots.

Initially they’ll support ROS1, the version most people will have used, but they also have plans for ROS2. Developers will use Microsoft’s Visual Studio toolset. Thus far it’s an experimental release, but you can give it a try by starting with the details here.

[Main Image Credit: Microsoft]