Toyota’s Code Didn’t Meet Standards and Might Have Led To Death

We were initially skeptical of this article by [Aleksey Statsenko] as it read a bit conspiratorially. However, he cites his sources, and we could easily check them for ourselves and reach our own conclusions. There were fatal crashes in Toyota cars due to sudden unexpected acceleration. The courts thought the code might be to blame, two engineers spent a long time looking at it, and it did not meet common industry standards. Past that there’s no definite public conclusion.

[Aleksey] has a tendency to imply that normal legal proceedings and recalls for design defects are signs of a sinister, collaborative dark undercurrent in the world. However, this article does shine a light on an actual dark undercurrent: more things rely on software than ever before. Now, especially for safety-critical code, there are standards. NASA has one, and in the pertinent case of cars there is the Motor Industry Software Reliability Association C standard (MISRA C). Are these standards any good? Are they realistic? If they are, can they even be met?
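
For a flavor of what a standard like MISRA C actually demands, one of its rules requires that every if / else if chain end with a final else, so no unexpected condition can fall through silently. Here’s a minimal, made-up sketch of that idea; the names and the fail-safe behavior are purely illustrative, not drawn from Toyota’s code or quoted from the standard:

    #include <cstdio>

    enum PedalState { PEDAL_IDLE, PEDAL_PRESSED, PEDAL_FAULTED };

    // Illustrative only: every if / else-if chain ends in an else, so an
    // unexpected state can never fall through silently.
    int throttleFor(PedalState state, int requested)
    {
        int throttle;
        if (state == PEDAL_IDLE) {
            throttle = 0;
        } else if (state == PEDAL_PRESSED) {
            throttle = requested;
        } else {
            throttle = 0;                               // fail safe
            std::puts("pedal fault: throttle forced to 0");
        }
        return throttle;
    }

    int main()
    {
        std::printf("%d\n", throttleFor(PEDAL_PRESSED, 42));  // 42
        std::printf("%d\n", throttleFor(PEDAL_FAULTED, 42));  // 0, plus a fault message
        return 0;
    }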

When two engineers sat down, rather dramatically in a secret hotel room, and looked through Toyota’s code, they found that it didn’t even come close to meeting these standards. Toyota insisted that the code met its internal standards, and further that the incidents were the result of user error, not the car.

So the questions remain. If they didn’t meet the standard, why didn’t Toyota get VW’d out of the market? Adherence to the MISRA C standard is entirely voluntary, but should common rules to ensure code quality be made mandatory? Is it a sign that people still don’t take software seriously? What does the future look like? Either way, browsing through [Aleksey]’s article and sources puts a fresh and very real perspective on the problem. When it’s NASA’s bajillion-dollar firework exploding a satellite it’s one thing; when it’s a car any of us can own, it becomes very real.

Simulate Your Robot Before You Build It

[Nurgak] shows how you can use some of the great robotics tools out there to simulate a robot before you even build it. To drive this point home, he builds the tutorial around the easily 3D-printable and buildable Robopoly platform.

The robot runs the Robot Operating System (ROS) at its core. ROS is interesting because of its decentralized, input/output-agnostic messaging system. For example, if you leave everything else alone but swap the motor output from actual motors to a simulator, you can see how the robot would respond to any arbitrary input.

[Nurgak] uses another piece of software called V-REP to demonstrate this. V-REP is a simulation suite for robotics and has a few ROS nodes built in. So, to make a simulated line-following robot, [Nurgak] tells V-REP to send a simulated camera image to the robot’s decision-making node in ROS. That node then sends movement messages back to V-REP, which drives the pretend robot around.
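
To get a feel for what that looks like in practice, here’s a rough sketch (not [Nurgak]’s actual code) of a roscpp decision-making node: it subscribes to a camera image topic, steers toward the darkest pixel in the bottom row, and publishes a velocity command. The topic names and the trivially crude line detection are assumptions for illustration; the point is that the node neither knows nor cares whether the image comes from real hardware or from V-REP.

    #include <ros/ros.h>
    #include <sensor_msgs/Image.h>
    #include <geometry_msgs/Twist.h>

    ros::Publisher cmd_pub;

    // Steer toward the darkest pixel in the bottom row of the (assumed mono8)
    // image -- a crude stand-in for real line detection.
    void imageCallback(const sensor_msgs::Image::ConstPtr& img)
    {
        const size_t rowStart = static_cast<size_t>(img->height - 1) * img->step;
        size_t darkest = 0;
        for (size_t x = 1; x < img->width; ++x) {
            if (img->data[rowStart + x] < img->data[rowStart + darkest]) {
                darkest = x;
            }
        }
        // Offset of the line from the image center, normalized to [-1, 1].
        double error = (static_cast<double>(darkest) - img->width / 2.0)
                       / (img->width / 2.0);

        geometry_msgs::Twist cmd;
        cmd.linear.x  = 0.1;           // creep forward
        cmd.angular.z = -0.5 * error;  // turn toward the line
        cmd_pub.publish(cmd);
    }

    int main(int argc, char** argv)
    {
        ros::init(argc, argv, "line_follower");
        ros::NodeHandle nh;
        cmd_pub = nh.advertise<geometry_msgs::Twist>("/cmd_vel", 1);
        ros::Subscriber sub = nh.subscribe("/camera/image", 1, imageCallback);
        ros::spin();  // identical whether the image topic is hardware or V-REP
        return 0;
    }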

He runs through a few more examples, proving that it’s entirely possible to become, if not a roboticist, then at least a really good AI programmer, without ever dropping big money on parts to build a robot.

Simple Clock from Tiny Chip

If you haven’t jumped on the ESP8266 bandwagon yet, it might be a good time to get started. If you can program an Arduino you have pretty much all of the skills you’ll need to get an ESP8266 up and running. And, if you need a good idea for a project to build with one of these WiFi miracle chips, look no further than [Ben Buxton]’s dated, but awesome, NTP clock.

Continue reading “Simple Clock from Tiny Chip”

Code Craft – Embedding C++: Multitasking

We’re quite used to multitasking computer systems today. Our desktops run email, a couple of browsers in different workspaces, a word processor, and a few other applications, apparently all at once. Looking behind the scenes using a system monitor or task manager program reveals a multitude of other programs running in support of our activities. Of course, any given CPU is running a maximum of one program at a time. Multitasking is simply the practice of switching between active processes fast enough to give the illusion of simultaneity.

The roots of multitasking go way back. In the early days, when computers cost tons of money, the thought of an idle system was anathema. Teletype I/O was slow compared to the processor, and leaving the processor idle while waiting for a card reader to slurp in the next card was outrageous. The gurus of the time worked to fill that idle time with productive work. That eventually led to systems that would run multiple programs at one time, and eventually to more finely grained multitasking within a program.

Modern multitasking depends on support from the underlying API of an operating system. Each OS uses its own techniques, making it difficult to write portable code. The C++ 2011 standard increased the portability of the language by adding concurrency routines to the Standard Template Library (STL). These routines use the API of the OS. For instance, the Linux version uses the POSIX threading library, pthread. The result is a minimal, but useful, capability for building upon in later standards. The C++ 2017 standard development activities include work on parallelism and concurrency.
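
As a minimal taste of what the 2011 standard added, the sketch below starts several workers with std::thread, guards a shared counter with std::mutex, and joins every thread before printing the result. On Linux this rides on top of pthread, but the code itself stays portable:

    #include <iostream>
    #include <mutex>
    #include <thread>
    #include <vector>

    int counter = 0;
    std::mutex counter_mutex;

    void worker(int increments)
    {
        for (int i = 0; i < increments; ++i) {
            std::lock_guard<std::mutex> lock(counter_mutex);  // serialize access
            ++counter;
        }
    }

    int main()
    {
        std::vector<std::thread> threads;
        for (int t = 0; t < 4; ++t) {
            threads.emplace_back(worker, 10000);
        }
        for (auto& th : threads) {
            th.join();  // wait for every worker before reading the result
        }
        std::cout << "counter = " << counter << '\n';  // always 40000
        return 0;
    }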

In this article, I’ll work through some of the facilities for and pitfalls in writing threaded code in C++.

Continue reading “Code Craft – Embedding C++: Multitasking”

Taming Robot Arm Jump with Accelerometers

Last fall, I grabbed a robot arm from Robot Geeks when they were on sale at Thanksgiving. The arm uses servos to rotate the base and move the joints and gripper. These work well enough, but I found one aspect of the arm frustrating. When you apply power, the software commands the servos to move to the home position. The movement is violent enough that it can cause the entire arm to jump.

This jump occurs because there is no position feedback to the Arduino controller, leaving it unable to know the positions of the arm’s servos or to move them slowly to home. I pondered how to add this feedback using sensors, with the limitation that they couldn’t be large or require replacing existing parts. I decided to try adding an accelerometer to each arm section.
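
Once the controller actually knows where a joint is, the fix is straightforward: step the servo toward home in small increments instead of commanding the home position in one jump. Here’s a hedged, Arduino-flavored sketch of that idea; the pin number, angles, and the measureCurrentAngle() placeholder are all stand-ins for whatever feedback ends up on the arm:

    #include <Servo.h>

    Servo shoulder;
    const int HOME_ANGLE = 90;

    // Stand-in for whatever position feedback gets added -- here, an angle
    // that would come from an accelerometer on the arm section.
    int measureCurrentAngle()
    {
        return 30;  // stub value for illustration
    }

    void setup()
    {
        shoulder.attach(9);                 // hypothetical servo pin
        int angle = measureCurrentAngle();  // where the joint really is
        shoulder.write(angle);              // start from the true position
        while (angle != HOME_ANGLE) {       // then creep toward home
            angle += (angle < HOME_ANGLE) ? 1 : -1;
            shoulder.write(angle);
            delay(20);                      // roughly one degree per 20 ms
        }
    }

    void loop() {}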

Accelerometers, being affected by gravity when on a planet, provide an absolute reference because they always report the direction of down. With an accelerometer I can calculate the angle of an arm section with respect to the direction of gravitational acceleration.
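
A rough sketch of that calculation, assuming the two accelerometer axes used lie in the arm section’s plane of motion and the readings are already scaled to g (which physical axis is which depends entirely on how the sensor is mounted):

    #include <cmath>
    #include <cstdio>

    // Tilt of an arm section relative to gravity, from the two accelerometer
    // axes in its plane of motion (readings in g).
    float sectionAngleDegrees(float ax, float ay)
    {
        return std::atan2(ax, ay) * 57.2958f;  // radians to degrees
    }

    int main()
    {
        std::printf("%.1f\n", sectionAngleDegrees(0.0f, 1.0f));  // hanging straight down: 0.0
        std::printf("%.1f\n", sectionAngleDegrees(1.0f, 0.0f));  // horizontal: 90.0
        return 0;
    }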

Before discussing the accelerometers, take a look at the picture of the arm. An accelerometer would be added to each section of the arm between the controlling servos.

Continue reading “Taming Robot Arm Jump with Accelerometers”

Flying with Proportional – Integral – Derivative Control

Your quadcopter is hovering nicely 100 feet north of you, its camera pointed exactly on target. The hover is going so well that all the RC transmitter controls are in the neutral position. The wind picks up a bit and now the ’copter is 110 feet north. You adjust its position with your control stick, but as you do the wind dies and you overshoot the correction. Another gust pushes it off target in more than one direction as frustration passes your lips: ARGGGHH!! You promise yourself you’ll get a new flight computer with position-hold capability.

How do multicopters with smart controllers hold their position? They use a technique called Proportional – Integral – Derivative (PID) control. It’s a concept found in control systems of just about everything imaginable. To use PID your copter needs sensors that measure the current position and movement.

The typical sensors used for position control are a GPS receiver and an Inertial Measurement Unit (IMU) made up of an accelerometer, a gyroscope, and possibly a magnetometer (compass). Altitude control would require a barometer or some other means of measuring height above ground. Using sensor fusion techniques to combine the raw data, a computer can determine the position, movement, and altitude of the multicopter. But calculating corrections that will be just right, without over- or undershooting the goal, is where PID comes into play. Continue reading “Flying with Proportional – Integral – Derivative Control”
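
A bare-bones, single-axis version of that idea looks like the sketch below: each timestep, the controller turns the north-position error into a correction from the proportional, integral, and derivative terms. The gains, the 50 Hz loop, and the toy physics are illustrative assumptions, not anything from a real flight controller:

    #include <cstdio>

    // Minimal single-axis PID controller: turns a position error into a
    // correction each timestep.
    struct Pid {
        float kp, ki, kd;   // tuning gains
        float integral;     // accumulated error (I state)
        float prevError;    // previous error (for the D term)

        float update(float setpoint, float measured, float dt)
        {
            float error = setpoint - measured;
            integral += error * dt;                       // I: accumulated error
            float derivative = (error - prevError) / dt;  // D: rate of change
            prevError = error;
            return kp * error + ki * integral + kd * derivative;
        }
    };

    int main()
    {
        Pid hold = {1.2f, 0.3f, 0.4f, 0.0f, 0.0f};  // illustrative gains, zeroed state
        float position = 110.0f;                    // feet north of the pilot
        const float target = 100.0f;                // where we want to hover
        const float dt = 0.02f;                     // 50 Hz control loop

        for (int step = 0; step < 1000; ++step) {
            float correction = hold.update(target, position, dt);
            position += correction * dt;            // toy physics: correction nudges position
        }
        std::printf("settled at %.2f feet north\n", position);  // close to 100
        return 0;
    }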

Converting Kids’ Hand-Drawings to G-Code

[Martin Raynsford] wrote a program that converts a black-and-white 2D image to G-code so that his laser cutter could then etch the image. Not satisfied with just that, he used the laser cutter to make a scanner that consists of a stand for his webcam and a tray below it for positioning the paper just right. The result was something he took to a recent Maker Faire, where many kids drew pictures on paper which his system then scanned and laser etched.

Screenshot of Martin's scanning and G-code maker program
Martin’s scanning and G-code maker program

[Martin’s] program, written in C#, does the work of taking the image from the webcam using OpenGL and scanning it line by line, looking for pixels that surpass a contrast threshold. For each suitable pixel, the program then produces G-code that moves the laser to the corresponding coordinate and burns a hole. Looking at the source code (downloadable from his webpage), it’s clear from commented-out code that he did plenty of experimenting, including varying the laser burn time based on the pixel’s brightness.
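
The heart of that loop is simple enough to sketch. Here’s a rough C++ rendition of the same scan-and-emit idea (the original is C#); the scale factor, threshold, and the M3/G4/M5 G-code dialect are assumptions, not [Martin]’s actual output:

    #include <cstdint>
    #include <cstdio>
    #include <vector>

    // Walk a grayscale image and emit G-code: a rapid move to each dark pixel,
    // then a short dwell with the laser on to burn the hole.
    void imageToGcode(const std::vector<uint8_t>& pixels, int width, int height,
                      std::FILE* out)
    {
        const uint8_t threshold = 128;   // anything darker than this gets burned
        const double mmPerPixel = 0.5;   // assumed output scale

        for (int y = 0; y < height; ++y) {
            for (int x = 0; x < width; ++x) {
                if (pixels[y * width + x] < threshold) {
                    std::fprintf(out, "G0 X%.2f Y%.2f\n",
                                 x * mmPerPixel, y * mmPerPixel);
                    std::fprintf(out, "M3 S255\n");   // laser on
                    std::fprintf(out, "G4 P0.02\n");  // dwell to burn
                    std::fprintf(out, "M5\n");        // laser off
                }
            }
        }
    }

    int main()
    {
        // Tiny 4x2 test "image" with two dark pixels.
        std::vector<uint8_t> img = { 255,  10, 255, 255,
                                     255, 255,  20, 255 };
        imageToGcode(img, 4, 2, stdout);
        return 0;
    }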

While it’s a lot of fun writing this code as [Martin] did, after the break we talk about some off-the-shelf ways of accomplishing the same thing.

Continue reading “Converting Kids’ Hand-Drawings to G-Code”