At its core, the robot runs the Robot Operating System (ROS). ROS is interesting because of its decentralized, input/output-agnostic messaging system. For example, if you leave everything else alone but swap the motor output from actual motors to a simulator, you can see how the robot would respond to any arbitrary input.
[Nurgak] uses another piece of software called V-REP to demonstrate this. V-REP is a simulation suite for robotics and has a few ROS nodes built in. So, to make a simulated line-following robot, [Nurgak] tells V-REP to send a simulated camera image to the robot’s decision-making node in ROS. That node sends movement messages back to V-REP, which drives the pretend robot around.
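A minimal roscpp node for such a line follower might look like the sketch below. The topic names, the mono8 image assumption, and the steering constants are my own illustrations, not [Nurgak]’s actual code; the point is that the node neither knows nor cares whether the image comes from a real camera or a V-REP vision sensor.

```cpp
#include <ros/ros.h>
#include <sensor_msgs/Image.h>
#include <geometry_msgs/Twist.h>

ros::Publisher cmd_pub;

// Called for every camera frame, real or simulated.
void imageCallback(const sensor_msgs::Image::ConstPtr& img)
{
    // Assume a mono8 image: find the centroid of dark (line) pixels
    // in the bottom row and steer toward it.
    const size_t row = (img->height - 1) * img->step;
    long sum = 0, count = 0;
    for (unsigned int x = 0; x < img->width; ++x)
        if (img->data[row + x] < 100) { sum += x; ++count; }

    geometry_msgs::Twist cmd;
    if (count > 0) {
        double error = double(sum) / count - img->width / 2.0;
        cmd.linear.x = 0.2;             // creep forward
        cmd.angular.z = -0.005 * error; // steer back over the line
    }
    cmd_pub.publish(cmd);
}

int main(int argc, char** argv)
{
    ros::init(argc, argv, "line_follower");
    ros::NodeHandle nh;
    cmd_pub = nh.advertise<geometry_msgs::Twist>("cmd_vel", 1);
    ros::Subscriber sub = nh.subscribe("camera/image", 1, imageCallback);
    ros::spin();  // hand control to ROS; callbacks fire as messages arrive
    return 0;
}
```

Point V-REP’s vision sensor at the camera topic and its motors at the velocity topic, and the same binary that would drive real hardware drives the simulation.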
He runs through a few more examples, proving that it’s entirely possible to become, if not a roboticist, then at least a really good AI programmer, without ever dropping big money on parts to build a robot.
If you haven’t jumped on the ESP8266 bandwagon yet, it might be a good time to get started. If you can program an Arduino you have pretty much all of the skills you’ll need to get an ESP8266 up and running. And, if you need a good idea for a project to build with one of these WiFi miracle chips, look no further than [Ben Buxton]’s dated, but awesome, NTP clock.
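To show how small the gap from Arduino really is, here is a generic skeleton of an ESP8266 NTP clock. This is not [Ben Buxton]’s code, and the WiFi credentials are placeholders:

```cpp
#include <ESP8266WiFi.h>
#include <time.h>

void setup() {
  Serial.begin(115200);
  WiFi.begin("your-ssid", "your-password");  // placeholder credentials
  while (WiFi.status() != WL_CONNECTED) delay(500);

  // Let the ESP8266 core's SNTP client keep the clock synced:
  // UTC offset in seconds, DST offset in seconds, NTP server.
  configTime(0, 0, "pool.ntp.org");
}

void loop() {
  time_t now = time(nullptr);  // reads epoch 0 until the first sync lands
  Serial.print(ctime(&now));   // e.g. "Sat Jan  9 12:34:56 2016"
  delay(1000);
}
```

From there it’s all display driving, which is exactly the part your Arduino experience already covers.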
We’re quite used to multitasking computer systems today. Our desktops run email, a couple of browsers in different workspaces, a word processor, and a few other applications, apparently all at once. Looking behind the scenes with a system monitor or task manager reveals a multitude of other programs running in support of our activities. Of course, any given CPU core runs at most one program at a time. Multitasking is simply the practice of switching between active processes fast enough to give the illusion of simultaneity.
The roots of multitasking go way back. In the early days, when computers cost tons of money, the thought of an idle system was anathema. Teletype IO was slow compared to the processor, and leaving the processor waiting idle for a card reader to slurp in the next card was outrageous. The gurus of the time worked to fill that idle time with productive work. That eventually led to systems that would run multiple programs at one time, and eventually to more finely grained multitasking within a program.
Modern multitasking depends on support from the operating system’s underlying API. Each OS uses its own techniques, making it difficult to write portable code. The C++ 2011 standard increased the portability of the language by adding concurrency routines to the standard library. These routines are built on the OS’s API; the Linux version, for instance, uses the POSIX threading library, pthreads. The result is a minimal but useful capability to build upon in later standards. The C++ 2017 standard development activities include work on parallelism and concurrency.
In this article, I’ll work through some of the facilities for and pitfalls in writing threaded code in C++.
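As a preview, here is about the smallest useful example of those facilities; build it on Linux with g++ -std=c++11 -pthread:

```cpp
#include <iostream>
#include <thread>
#include <mutex>

std::mutex io_mutex;  // protects std::cout; unguarded, the output can interleave

void worker(int id)
{
    std::lock_guard<std::mutex> lock(io_mutex);
    std::cout << "hello from thread " << id << '\n';
}   // lock released here as the guard goes out of scope

int main()
{
    // Each std::thread maps onto the platform's native facility,
    // a pthread on Linux as noted above.
    std::thread t1(worker, 1);
    std::thread t2(worker, 2);
    t1.join();  // forgetting to join (or detach) a thread terminates the program
    t2.join();
    return 0;
}
```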
Last fall, I grabbed a robot arm from Robot Geeks when it was on sale at Thanksgiving. The arm uses servos to rotate the base and move the joints and gripper. These work well enough, but I found one aspect of the arm frustrating. When you apply power, the software commands the servos to move to the home position. The movement is sufficiently violent that it can cause the entire arm to jump.
This jump occurs because there is no position feedback to the Arduino controller, leaving it unable to know the positions of the arm’s servos and move them slowly to home. I pondered how to add this feedback using sensors, with the constraint that they couldn’t be large or require replacing existing parts. I decided to try adding an accelerometer to each arm section.
Accelerometers, being affected by gravity when on a planet, provide an absolute reference because they always report the direction of down. With an accelerometer I can calculate the angle of an arm section with respect to the direction of gravitational acceleration.
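The calculation itself is one atan2 call. The sketch below assumes a 3-axis accelerometer with its Z axis along the arm section; the axis assignment depends entirely on how the sensor is mounted, and the math is only valid while the arm is stationary, so that gravity is the only acceleration being measured:

```cpp
#include <cmath>

// Estimate an arm section's tilt from vertical, in degrees,
// given raw 3-axis accelerometer readings (any consistent units).
double tiltDegrees(double ax, double ay, double az)
{
    // Angle between the sensor's Z axis and the gravity vector:
    // atan2 of the horizontal magnitude against the vertical component.
    double horizontal = std::sqrt(ax * ax + ay * ay);
    return std::atan2(horizontal, az) * 180.0 / M_PI;
}
```

At power-up the arm is sitting still, which is exactly the condition where this reading can be trusted to ease each joint toward home.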
Before discussing the accelerometers, take a look at the picture of the arm. An accelerometer would be added to each section of the arm between the controlling servos.
Your quadcopter is hovering nicely 100 feet north of you, its camera pointed exactly on target. The hover is going so well that all the RC transmitter controls are in the neutral position. The wind picks up a bit and now the ’copter is 110 feet north. You adjust its position with your control stick, but as you do, the wind dies and you overshoot the correction. Another gust pushes it off target in more than one direction as frustration passes your lips: ARGGGHH!! You promise yourself to get a new flight computer with position-hold capability.
How do multicopters with smart controllers hold their position? They use a technique called Proportional-Integral-Derivative (PID) control. It’s a concept found in the control systems of just about everything imaginable. To use PID, your copter needs sensors that measure its current position and movement.
The typical sensors used for position control are a GPS receiver and an Inertial Measurement Unit (IMU) made up of an accelerometer, a gyroscope, and possibly a magnetometer (compass). Altitude control would require a barometer or some other means of measuring height above ground. Using sensor fusion techniques to combine the raw data, a computer can determine the position, movement, and altitude of the multicopter. But calculating corrections that are just right, without over- or undershooting the goal, is where PID comes into play.
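The core of a PID loop is only a few lines of code. This is the textbook positional form; the gains and units are illustrative, not tuned for any real aircraft:

```cpp
// One PID controller per axis (e.g. north/south position).
struct Pid {
    double kp, ki, kd;        // proportional, integral, derivative gains
    double integral  = 0.0;   // accumulated past error
    double prevError = 0.0;   // for the derivative term

    // error: setpoint minus measurement (e.g. meters off target)
    // dt:    seconds since the last update
    double update(double error, double dt)
    {
        integral += error * dt;
        double derivative = (error - prevError) / dt;
        prevError = error;
        return kp * error + ki * integral + kd * derivative;
    }
};
```

Each control cycle, the flight computer feeds in the position error from the fused sensor data and applies the output as, say, a pitch correction; the derivative term is what damps the overshoot in the scenario above.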
[Martin Raynsford] wrote a program that converts a black-and-white 2D image to G-code so that his laser cutter could then etch the image. Not satisfied with just that, he used the laser cutter to make a scanner consisting of a stand for his webcam and a tray below it for positioning the paper just right. The result was something he took to a recent Maker Faire, where many kids drew pictures on paper which his system then scanned and laser etched.
[Martin]’s program, written in C#, does the work of grabbing the image from the webcam and scanning it line by line, looking for pixels that surpass a contrast threshold. For each suitable pixel, the program produces G-code that moves the laser to the corresponding coordinate and burns a hole. Looking at the source code (downloadable from his webpage), it’s clear from commented-out code that he did plenty of experimenting, including varying the laser burn time based on the pixel’s brightness.
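[Martin]’s code is C#, but the scan-and-burn idea fits in a few lines of anything. Here it is sketched in C++; the dot pitch, threshold, and laser commands (M3/M5 with a G4 dwell) are assumptions, since real machines and firmware vary:

```cpp
#include <cstdio>
#include <vector>

// Walk a row-major grayscale image and emit G-code that burns a dot
// wherever a pixel is darker than the threshold.
void emitGcode(const std::vector<unsigned char>& image, int width, int height)
{
    const double pitch = 0.2;             // mm between dot centers
    const unsigned char threshold = 128;  // darker than this gets burned

    for (int y = 0; y < height; ++y) {
        for (int x = 0; x < width; ++x) {
            if (image[y * width + x] < threshold) {
                std::printf("G0 X%.2f Y%.2f\n", x * pitch, y * pitch); // rapid move
                std::printf("M3\n");        // laser on
                std::printf("G4 P0.02\n");  // dwell: longer burns darker marks
                std::printf("M5\n");        // laser off
            }
        }
    }
}
```

Varying that dwell per pixel, as [Martin]’s commented-out code hints, is how you’d get grayscale instead of pure black-and-white.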
While it’s a lot of fun writing this code as [Martin] did, after the break we talk about some off-the-shelf ways of accomplishing the same thing.
Sometimes there’s just no place like your desktop. You’ve already got your favorite development tools and references set up or installed, and it’s a pain when you’re trying to work on an unfamiliar, or simply uncustomized, system. On your desktop, everything is at your fingertips. If you want to search the web, the browser is just an alt-tab away. If you need a calculator, it’s right there to run. Your editor already highlights syntax in your favorite colors.
When developing on a Raspberry Pi, you leave all these creature comforts behind unless you spend the time to configure the Pi to your liking. Then it all gets wiped when you install a new distribution, like the recent change from Wheezy to Jessie. Even then it’s frustrating to switch back and forth between the desktop and the Pi because there is always something on the other system that you need. My usual comment is, “dirty word”, literally.
Cross-developing on your desktop is a very workable solution, and we’re going to walk through setting up your desktop and a Pi to do it. This means loading an ARM toolchain targeting the Pi on your desktop and a debugging server on the Pi. This lets you develop and debug in the comfort of your desktop. An added advantage: once you put that Pi in a robot, you can debug it over a wireless link.
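To prove the whole chain works, a trivial program is enough. The commands in the comments assume the Debian/Ubuntu arm-linux-gnueabihf toolchain and gdb-multiarch on the desktop; the hostname and port are placeholders for your own setup:

```cpp
// hello.cpp: smoke test for the cross toolchain and remote debug link.
//
// On the desktop:
//   arm-linux-gnueabihf-g++ -g -o hello hello.cpp
//   scp hello pi@raspberrypi:
//
// On the Pi:
//   gdbserver :2345 ./hello
//
// Back on the desktop:
//   gdb-multiarch hello
//   (gdb) target remote raspberrypi:2345
//   (gdb) break main
//   (gdb) continue
#include <iostream>

int main()
{
    std::cout << "hello from the Pi\n";
    return 0;
}
```

Once this works over WiFi, the same session can debug the robot code while the Pi is rolling around the floor.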