ROS Gets Quick Sensor Debugging In The Terminal

Sensors are critical in robotics. A robot relies on its sensor package to perform its programmed duties. If sensors are damaged or non-functional, the robot can perform unpredictably, or even fail entirely. [Dheera Venkatraman] has been working to make debugging sensor issues easier with the rosshow package for Robot Operating System.

Normally, if you want to be certain a camera feed is working on a robot, you'd have to connect a monitor and other peripherals, check manually, then put everything away again when you're finished. [Dheera] considered this altogether too much of a pain for basic sensor checks.

Instead, rosshow uses the power of SSH to speed things along. Log in to the robot, fire off a few command-line instructions, and rosshow will start displaying sensor data in the terminal on your remote machine. It's achieved through the use of Unicode Braille art in the terminal. Sure, you won't get a full-resolution feed from your high-definition camera, and the display from the laser scanner isn't exactly perfect. But it's enough to provide instant verification that sensors are connected and working, and it will speed up those routine is-it-connected checks by an order of magnitude.
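The Braille trick is easy to replicate, too. Each character in the Unicode Braille block (U+2800 through U+28FF) encodes a 2×4 grid of dots, so a monochrome bitmap maps neatly onto a grid of terminal cells. Here's a minimal Python sketch of the idea, our own illustration rather than rosshow's actual code:

```python
# Minimal sketch of terminal "Braille art" rendering, in the spirit of
# rosshow (not its actual code). Each Braille glyph (U+2800-U+28FF)
# encodes a 2x4 grid of dots, so a WxH bitmap becomes (W/2)x(H/4) chars.

# Bit positions of the eight Braille dots within a 2x4 cell, indexed [y][x]
DOT_BITS = [[0x01, 0x08],
            [0x02, 0x10],
            [0x04, 0x20],
            [0x40, 0x80]]

def braille_render(bitmap):
    """bitmap: rows of 0/1 values; width should be a multiple of 2
    and height a multiple of 4."""
    h, w = len(bitmap), len(bitmap[0])
    lines = []
    for ty in range(0, h, 4):
        line = []
        for tx in range(0, w, 2):
            code = 0x2800  # blank Braille cell
            for dy in range(4):
                for dx in range(2):
                    if bitmap[ty + dy][tx + dx]:
                        code |= DOT_BITS[dy][dx]
            line.append(chr(code))
        lines.append("".join(line))
    return "\n".join(lines)

# Example: a coarse blob, like a thresholded region from a camera frame
W, H = 60, 24
bitmap = [[1 if (x - W / 2) ** 2 / 4 + (y - H / 2) ** 2 < 100 else 0
           for x in range(W)] for y in range(H)]
print(braille_render(bitmap))
```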

Robot Operating System is particularly useful if you're thinking about the software platform for your next build. If you do put something together, be sure to let us know.

Buy Or Build An Autonomous Race Car To Take The Checkered Flag

Putting autonomous vehicles on public roads takes major resources beyond most of our means. But we can explore all the same general concepts at a smaller scale by modifying remote-control toy cars, limited only by our individual budgets and skill levels. For those of us whose interest and expertise lie in software, Amazon Web Services just launched AWS DeepRacer: a complete package for exploring machine learning on autonomous vehicles.

At a hardware level, the spec sheet makes it sound like they've bolted their AWS DeepLens machine vision computer onto a 1/18th-scale monster truck chassis. But the hardware is only the tip of the iceberg. The software behind DeepRacer is AWS RoboMaker, a set of tools for applying AWS to robot development, covering everything from running digital simulations to training neural networks in the cloud. Don't know enough about machine learning? No problem! Amazon has also just opened up their internal training curriculum to the world. And to encourage participation, Amazon is running a DeepRacer League with races taking place both digitally online and physically at AWS Summit events around the world. They've certainly offered us a full plate at their re:Invent conference this week.
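The machine learning part is approachable too: DeepRacer training is driven by a user-written Python reward function that scores the car's state at every simulation step. A minimal sketch along the lines of Amazon's published examples, with the details illustrative rather than a tested competition entry:

```python
# Illustrative DeepRacer-style reward function (a sketch based on
# Amazon's published examples, not a tested competition entry).
# The simulator calls this every step with a dict of state variables.
def reward_function(params):
    # Punish leaving the track outright
    if not params["all_wheels_on_track"]:
        return 1e-3

    # Reward hugging the track centerline, in three broad bands
    track_width = params["track_width"]
    distance = params["distance_from_center"]

    if distance <= 0.1 * track_width:
        return 1.0   # right down the middle
    elif distance <= 0.25 * track_width:
        return 0.5   # drifting, but acceptable
    elif distance <= 0.5 * track_width:
        return 0.1   # nearly off the edge
    return 1e-3
```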

But maybe you'd prefer not to use Amazon, or to build your own hardware, or to run your own competitions. Fortunately, Amazon is not the only game in town, merely the latest entry in an existing field. The DeepRacer League's predecessor was the Robocar Rally, and the DeepRacer itself follows the Donkey Car. A do-it-yourself autonomous racing platform we first saw at Bay Area Maker Faire 2017, Donkey Car has since built up its documentation and software tools, including a simulator. The default Donkey Car code is fairly specific to the car, but builders are certainly free to use something more general like the open source Robot Operating System and Gazebo robot simulator. (Which is what AWS RoboMaker builds on.)

So if the goal is to start racing little autonomous cars, we have options to buy pre-built hardware or enjoy the flexibility of building our own. Either way, it’s just another example of why this is a great time to get into neural networks, with or without companies like Amazon devising ways to earn our money. Of course, this isn’t the only Amazon project trying to build a business around an idea explored by an existing open source project. We had just talked about their AWS Ground Station offering which covers similar ground (sky?) as our 2014 Hackaday Prize winner SatNOGS.

ROS on Windows 10

Is 2018 Finally The Year Of Windows On The Robot?

Microsoft is bringing ROS to Windows 10. ROS stands for Robot Operating System, a software framework and large collection of libraries for developing robots, which we recently wrote an introductory article about. It has long been supported primarily under Linux and Mac OS X, and even then, best under Ubuntu. My own efforts to get it working under the Raspbian distribution on the Raspberry Pi led me to instead download a Pi Ubuntu image. So having it running with the support of Microsoft on Windows will add some welcome variety.

TurtleBot 3 at ROSCon 2018, Photo: Evan Ackerman/IEEE Spectrum

To announce it to the world, they had a small booth at the recent ROSCon 2018 in Madrid. There they showed a Robotis TurtleBot 3 robot running the Melodic Morenia release of ROS under Windows 10 IoT Enterprise on an Intel Coffee Lake NUC, with a ROS node incorporating hardware-accelerated Windows Machine Learning.

Why are they doing this? It may be to help promote their own machine learning products to roboticists and manufacturers. Their recent blog entry says:

We’re looking forward to bringing the intelligent edge to robotics by bringing advanced features like hardware-accelerated Windows Machine Learning, computer vision, Azure Cognitive Services, Azure IoT cloud services, and other Microsoft technologies to home, education, commercial, and industrial robots.

Initially, they’ll support ROS1, the version most people will have used, but also have plans for ROS2. Developers will use Microsoft’s Visual Studio toolset. Thus far it’s an experimental release but you can give it a try by starting with the details here.

[Main Image Credit: Microsoft]

Simple Quadcopter Testbed Clears The Air For Easy Algorithm Development

We don’t have to tell you that drones are all the rage. But while new commercial models are being released all the time, and new parts get released for the makers, the basic technology used in the hardware hasn’t changed in the last few years. Sure, we’ve added more sensors, increased computing power, and improved the efficiency, but the key developments come in the software: you only have to look at the latest models on the market, or the frequency of Git commits to Betaflight, Butterflight, Cleanflight, etc.

With this in mind, for a Hackaday Prize entry, [int-smart] is working on a quadcopter testbed for developing algorithms, specifically for localization and mapping. The aim of the project is to eventually make it as easy as possible to get off the ground and start writing code, as well as to integrate mapping algorithms with ArduPilot through ROS.

The initial idea was to use a BeagleBone Blue and some cheap hobby hardware that is fairly standard for a drone of this size: 1250 KV motors and SimonK ESCs, mounted on an F450 Flame Wheel-style frame. However, it looks like an off-the-shelf solution might be even simpler if it can be made to work with ROS. A Scanse Sweep LIDAR sensor provides point cloud data, which is then munched by some Iterative Closest Point (ICP) processing. If you like math, it's definitely worth reading the project logs, as some of the algorithms are explained there.
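ICP, in a nutshell, alternates between matching each point in one scan to its nearest neighbor in the other, and solving for the rigid transform that best aligns the matched pairs (classically via SVD). Here's a bare-bones 2D sketch in NumPy, leaving out the outlier rejection and subsampling a real LIDAR pipeline would need:

```python
# Bare-bones 2D ICP sketch with NumPy: aligns point cloud src to dst.
# Real LIDAR pipelines add outlier rejection, subsampling, etc.
import numpy as np

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t mapping src -> dst,
    for matched Nx2 arrays, via the standard SVD method."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:  # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, dst_c - R @ src_c

def icp(src, dst, iterations=20):
    cur = src.copy()
    for _ in range(iterations):
        # Match each source point to its nearest neighbor in dst
        d = np.linalg.norm(cur[:, None, :] - dst[None, :, :], axis=2)
        matched = dst[d.argmin(axis=1)]
        # Solve for the best alignment of the matched pairs and apply it
        R, t = best_rigid_transform(cur, matched)
        cur = cur @ R.T + t
    return cur
```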

It might be fun to add FPV to this system to see how the mapping algorithms are performing from the perspective of the drone. And just because it’s awesome. FPV is also a fertile area for hacking: we particularly love this FPV tracker which rotates itself to get the best signal, and this 3D FPV setup using two cameras.

Modular Robotics Made Easier With ROS

A robot is made up of many hardware components, each of which requires its own software. Even a small robot arm with a handful of servo motors uses a servo motor library.

Add that arm to a wheeled vehicle and you have more motors. Then attach some ultrasonic sensors for collision avoidance or a camera for vision. By that point, you've probably split the software into multiple processes: one for the arm, another for mobility, one for vision, and one to act as the brains, interfacing somehow with all the rest. The vision system may be doing object recognition, something which is computationally demanding, and so now you have multiple computers.

Break all this complexity into modules and you have a use case for ROS, the Robot Operating System. As this article shows, ROS can help with designing, building, managing, and even evolving your robot.
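In ROS terms, each of those modules becomes a node, and nodes talk over named topics, so the brains never need to know what's on the other end. Here's a hedged sketch of a trivial collision-avoidance brain; the topic names are made up for illustration:

```python
# Sketch of a "brains" node: subscribes to an ultrasonic range topic
# and publishes velocity commands. Topic names here are illustrative;
# whatever node drives the motors just subscribes to the same topic.
import rospy
from sensor_msgs.msg import Range
from geometry_msgs.msg import Twist

def on_range(msg, pub):
    cmd = Twist()
    if msg.range > 0.5:       # path clear: roll forward
        cmd.linear.x = 0.2    # m/s
    else:                     # obstacle: stop and turn away
        cmd.angular.z = 0.5   # rad/s
    pub.publish(cmd)

if __name__ == "__main__":
    rospy.init_node("brains")
    pub = rospy.Publisher("cmd_vel", Twist, queue_size=1)
    rospy.Subscriber("ultrasound", Range, on_range, callback_args=pub)
    rospy.spin()
```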


Yellow Robot Wheels Rolling Out

Small wheeled robots are great for exploring robotics, and it's easier than ever to get started, thanks to the growing availability and affordability of basic components. One such component is the small motorized wheel assembly commonly shown when searching for "robot wheel": a small DC motor mounted in a gearbox to drive a single plastic wheel (inevitably yellow) on which a thin rubber tire has been mounted for traction. Many projects have employed these little motor + gearbox + wheel modules, such as these three entries for the 2018 Hackaday Prize:

BoxBotics takes the idea of an affordable entry point and runs with it: build robot chassis for these wheels out of cardboard boxes. (Maybe even the exact box that shipped the yellow wheels.) Cardboard is cheap and easy to work with, making cardboard projects approachable to any creative mind. There will be an audience for something like a Nintendo Labo for robotics, and maybe BoxBotics will grow into that offering.

Cing also intends to make a friendly entry point for robotics, but offers a different chassis solution. Instead of cardboard, they use a circuit board. The yellow gearbox is mounted directly to the main circuit board, making the board the robot's physical spine and its copper traces the spinal cord. While less amenable to mechanical creativity than BoxBotics, Cing's swappable modules might be a better fit for those interested in exploring electronics.

ROS Starter Robot caters to those who wish to go far beyond the simple "make it move" level of robot intelligence. It aims to lower the barrier to entry into the world of ROS (Robot Operating System), which has historically been the domain of very capable (but also very expensive) research-oriented robots. This project could become the bridge for aspiring roboticists who wish to grow beyond hobbyist-level software but can't justify the cost typical of research-level hardware.

All three of these projects take the same simple motorized wheel and build very different ideas on top of it. This is exactly the diversity of ideas we want to motivate with the Hackaday Prize, and we hope to see great progress from all prize contestants in the month ahead.

Simulate Your Robot Before You Build It

[Nurgak] shows how you can use some of the great robotics tools out there to simulate a robot before you even build it. To drive this point home, he builds the tutorial around the easily 3D-printed and assembled Robopoly platform.

The robot runs Robot Operating System at its core. ROS is interesting because of its decentralized, input/output-agnostic messaging system. For example, if you leave everything else alone but swap the motor output from actual motors to a simulator, you can see how the robot would respond to any arbitrary input.

[Nurgak] uses another piece of software called V-REP to demonstrate this. V-REP is a simulation suite for robotics with a few ROS nodes built in. So to make a simulated line-following robot, [Nurgak] tells V-REP to send a simulated camera image to the robot's decision-making node in ROS, which then sends the movement messages back to V-REP to drive the pretend robot around.
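The decision node itself neither knows nor cares whether its camera frames come from real hardware or from V-REP. A rough sketch of what such a line-follower brain might look like follows; the topic names depend on how the simulator's ROS plugin is configured, so treat them as placeholders:

```python
# Rough line-follower "decision node" sketch. It neither knows nor
# cares that the image comes from V-REP; topic names are placeholders
# that depend on how the simulator's ROS plugin is configured.
import numpy as np
import rospy
from sensor_msgs.msg import Image
from geometry_msgs.msg import Twist
from cv_bridge import CvBridge

bridge = CvBridge()

def on_image(msg, pub):
    gray = bridge.imgmsg_to_cv2(msg, desired_encoding="mono8")
    # Dark pixels are "line"; find their average column position
    ys, xs = np.where(gray < 64)
    cmd = Twist()
    if xs.size:
        # Steer toward the line's centroid: error normalized to [-1, 1]
        error = (xs.mean() - gray.shape[1] / 2) / (gray.shape[1] / 2)
        cmd.linear.x = 0.1
        cmd.angular.z = -0.8 * error
    pub.publish(cmd)

if __name__ == "__main__":
    rospy.init_node("line_follower")
    pub = rospy.Publisher("cmd_vel", Twist, queue_size=1)
    rospy.Subscriber("camera/image", Image, on_image, callback_args=pub)
    rospy.spin()
```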

He runs through a few more examples, proving that it's entirely possible to become, if not a roboticist, then at least a really good AI programmer, without ever dropping big money on the parts to build a robot.