Our recent “Retrotechtacular” feature on an early 1970s dead-reckoning car navigation system stirred a memory of another pre-GPS answer to the question that had vexed the motoring public on road trips into unfamiliar areas for decades: “Where the heck are we?” In an age when the tattered remains of long-outdated paper roadmaps were often the best navigational aid a driver had, the dream of an in-dash scrolling map seemed like something Q would build for James Bond to destroy.
And yet, in the mid-1980s, just such a device was designed and made available to the public. Dubbed Etak, the system was simultaneously far ahead of its time and doomed to failure by the constellation of global positioning satellites being assembled overhead as it was being rolled out. Given the constraints it was operating under, Etak worked very well, and even managed to introduce some of the features of modern GPS that we take for granted, such as searching for services and businesses. Here’s a little bit about how the system came to be and how it worked.
Anyone old enough to have driven before the GPS era probably wonders, as we do, how anyone ever found anything. Navigation back then meant outdated paper maps, long detours because of missed turns, and the far too frequent stops at dingy gas stations for the humiliation of asking for directions. It took forever sometimes, and though we got where we were going, it always seemed like there had to be a better way.
Indeed there was, but instead of waiting for the future and a constellation of satellites to guide the way, some clever folks in the early 1970s had a go at dead reckoning systems for car navigation. The video below shows one, called Cassette Navigation, in action. It consisted of a controller mounted under the dash and a modified cassette player. Special tapes, with spoken turn-by-turn instructions recorded for a specific route, were used. Each step was separated from the next by a tone, the length of which encoded the distance the car would cover before the next step needed to be played. The controller was hooked to the speedometer cable, and when the distance traveled corresponded to the tone length, the next instruction was played. There’s a long list of problems with this method, not least of which is no choice in road tunes while using it, but given the limitations at the time, it was pretty ingenious.
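The tone-length trick described above is simple enough to model in a few lines. Here's a minimal sketch of the idea; the scale factor, function names, and the example tape are our own assumptions for illustration, not details of the original hardware:

```python
# Toy model of the Cassette Navigation scheme: each spoken instruction is
# followed by a tone whose duration encodes the distance to travel before
# the next instruction should play. (The 100 m/s scale is an assumption.)

METERS_PER_TONE_SECOND = 100  # hypothetical encoding: 1 s of tone = 100 m

def tone_to_distance(tone_seconds):
    """Decode a tone length into the distance before the next instruction."""
    return tone_seconds * METERS_PER_TONE_SECOND

def play_route(tape, odometer_readings):
    """Replay instructions as the odometer (speedo-cable pulses) advances.

    tape: list of (instruction, tone_seconds) pairs.
    odometer_readings: increasing distance-traveled samples in meters.
    Returns the instructions in the order they would be spoken.
    """
    spoken = []
    next_trigger = 0.0           # distance at which the next step plays
    step = 0
    for distance in odometer_readings:
        while step < len(tape) and distance >= next_trigger:
            instruction, tone = tape[step]
            spoken.append(instruction)
            next_trigger += tone_to_distance(tone)
            step += 1
    return spoken

if __name__ == "__main__":
    tape = [("Turn left on Elm St", 2.0),     # 200 m to next step
            ("Turn right on Oak Ave", 3.5),   # 350 m to next step
            ("Destination on your right", 0)]
    print(play_route(tape, [0, 100, 250, 400, 600]))
```

The cassette itself is the whole route database here, which is exactly the limitation: one tape, one route, one direction.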
Dead reckoning is better than nothing, but it’s a far cry from GPS navigation. If you’re still baffled by how that cloud of satellites points you to the nearest Waffle House at 3:00 AM, check out our GPS primer for the details.
The worst thing about walking around while trying to follow directions is that you have to keep looking down at them to get the next turn. At best, you’ll miss out on the scenery; at worst, you might walk into traffic.
Wouldn’t it be great if you didn’t have to look down? Yes it would, and with Walkity, there’s no need. Walkity is a set of cuffs that slip onto the backs of your shoes, pair with your phone, and use haptic feedback to tell you where to go. Each one has an Arduino Pro Mini, an NRF24L01 to talk to its mate, a Bluetooth module, a vibration motor, and what must be the thinnest, most flexible LiPo currently available on Earth. The specified cell is the PGEB0083559, a 65 mAh cell that is just 0.8 mm thick!
Your smartphone will vibrate in your pocket during navigation, but in our experience that still leaves you guessing which way to turn. Walkity’s feedback is simple and intuitive: the left cuff vibrates to indicate a left turn, the right for a right turn, and both vibrate when you reach your destination. Going the wrong way? Walkity will vibrate vigorously to let you know it’s time to pull over. It’s a great entry for the Human Computer Interface Challenge of the Hackaday Prize!
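That left/right/both mapping is easy to express in code. Here's a sketch of the decision logic in Python; the function names, angle thresholds, and intensity scale are our own inventions, not the project's actual Arduino firmware:

```python
# Sketch of Walkity-style haptic cues: left cuff buzzes for a left turn,
# right cuff for a right turn, both for arrival, and both buzz hard when
# you're heading the wrong way. All names/thresholds are our assumptions.

LEFT, RIGHT, ARRIVED, WRONG_WAY = "left", "right", "arrived", "wrong_way"

def navigation_event(heading_deg, bearing_deg, distance_m, arrive_within_m=5):
    """Classify the walker's situation from heading vs. bearing to target."""
    if distance_m <= arrive_within_m:
        return ARRIVED
    # Signed heading error in (-180, 180]: negative means target is to the left
    err = (bearing_deg - heading_deg + 540) % 360 - 180
    if abs(err) > 120:
        return WRONG_WAY
    if err < -20:
        return LEFT
    if err > 20:
        return RIGHT
    return None               # close enough to on-course: no cue

def haptic_command(event):
    """Map a navigation event to (left_motor, right_motor) intensities 0-2."""
    if event == LEFT:
        return (1, 0)
    if event == RIGHT:
        return (0, 1)
    if event == ARRIVED:
        return (1, 1)
    if event == WRONG_WAY:
        return (2, 2)         # vigorous buzz on both cuffs
    return (0, 0)             # keep walking
```

One nice property of this scheme is that the feedback is entirely relative to the walker's own body, so there is no map to interpret and nothing to look at.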
[Tom Scott] ran across an interesting visual effect created with Moiré patterns and used for guiding ships, though we’re sure it can be adapted for hacks elsewhere. Without the aid of any motors or LED animation, the image changes as the viewer moves: viewed straight on, it shows vertical lines, but from the left it shows a right-pointing arrow, and from the right, a left-pointing arrow. One marine use is guiding a ship to the center point of a bridge: when the pilots see straight, vertical lines, they know where to steer.
US patent 4,629,325, Leading mark indicator, explains how it works and how to make one. Two screens sit one behind the other. The one in front is vertical, but the one behind is split in two and angled. It’s this angle that creates the slants of the arrows when viewed from the left or right. We had to convince ourselves that we understood it correctly, and a quick test with two combs showed that we did. See below for the test in action, as well as [Tom’s] video of the real-world shipping version.
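The effect comes down to parallax: the farther a grating sits behind the front screen, the more it appears to shift sideways as you move off-axis. Here's a toy geometric model of that idea; the depths, tilt values, and sign conventions are our own simplification for illustration, not the patent's actual construction:

```python
import math

# Toy model of the leading-mark moiré (US 4,629,325). Two line gratings
# sit one behind the other; the rear one is split into halves whose depth
# behind the front screen varies with height. Parallax shifts the rear
# pattern sideways by depth * tan(viewing_angle), so the moiré bands
# slant one way in the top half and the other way in the bottom, forming
# an arrow; dead ahead there is no shift and the bands stay vertical.
# Geometry and numbers here are our own assumptions.

def band_shift(height, viewing_angle_deg, base_depth=0.10, tilt=0.05):
    """Sideways moiré-band shift at a given height (arbitrary units).

    height runs from -1 (bottom) to +1 (top); depth grows with height so
    the two halves of the rear screen slant in opposite directions.
    Negative viewing angles mean the observer is left of the centerline.
    """
    depth = base_depth + tilt * height      # top half deeper than bottom
    return depth * math.tan(math.radians(viewing_angle_deg))

def apparent_mark(viewing_angle_deg):
    """Classify what an observer sees: vertical bars or an arrow."""
    top = band_shift(+1.0, viewing_angle_deg)
    bottom = band_shift(-1.0, viewing_angle_deg)
    if abs(top - bottom) < 1e-9:
        return "vertical"
    # Unequal shifts slant the bands into a chevron that points back
    # toward the centerline (sign convention chosen to match the article).
    return "arrow pointing left" if top > bottom else "arrow pointing right"
```

The comb experiment is exactly this model in the flesh: two gratings, a bit of depth between them, and your own head supplying the viewing angle.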
How does one go about programming a drone to fly itself through the real world to a location without crashing into something? This is a tough problem, made even tougher when you’re pushing speeds higher and higher. But as the “MIT” in the byline implies, the problems being tackled here are not trivial.
The folks over at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) have put their considerable skill set to work in tackling this problem. And what they’ve come up with is (not surprisingly) quite clever: they’re embracing uncertainty.
Why Is Autonomous Navigation So Hard?
Suppose we task ourselves with building a robot that can insert a key into the ignition switch of a motor vehicle and start the engine, and can do so in roughly the same time a human could, say 10 seconds. It may not be an easy robot to create, but we can all agree that it is very doable. With foreknowledge of the coordinates of the vehicle’s ignition switch relative to our robotic arm, we can place the key in the switch with 100% accuracy. But what if we wanted our robot to succeed in any car with a standard ignition switch?
Now the location of the ignition switch will vary slightly (and not so slightly) from one model of car to the next. That means we have to deal with this in real time and develop our coordinate system on the fly. This would not be too much of an issue if we could slow down a little, but keeping the process to 10 seconds is extremely difficult, perhaps impossible. At some point, the amount of environmental information and computation becomes so large that the task becomes computationally unwieldy.
This problem is analogous to autonomous navigation. The environment is always changing, so we need sensors to constantly monitor the state of the drone and its immediate surroundings. As the obstacles pile up, another problem arises: there is simply too much information to process in the time available. The only solution is to slow the drone down. NanoMap is a new modeling method that breaks the artificial speed limit normally imposed by on-the-fly environment mapping.
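The heart of NanoMap's "embrace uncertainty" approach is that, rather than fusing everything into one global map, the drone keeps recent depth measurements and accounts for how uncertain the relative pose between "then" and "now" has become when checking clearance. Here's a one-dimensional sketch of that idea; the noise model, constants, and API are illustrative assumptions on our part, not the CSAIL code:

```python
import math

# Sketch of NanoMap's core idea: when checking whether a planned point is
# safe against an obstacle seen n frames ago, inflate the required
# clearance by the accumulated uncertainty in the relative pose between
# that old sensor frame and the drone's current position.
# Noise model and numbers below are our own illustrative assumptions.

POSE_NOISE_STD = 0.05   # assumed odometry drift per frame, in meters

def pose_uncertainty(frames_ago):
    """Std-dev of relative-pose error after n frames of dead reckoning."""
    # Independent per-frame errors add in variance, so sigma grows as sqrt(n).
    return POSE_NOISE_STD * math.sqrt(frames_ago)

def is_safe(planned_point, obstacle, frames_ago, drone_radius=0.3, k=3.0):
    """Check clearance, inflating the obstacle by k-sigma of pose error."""
    clearance_needed = drone_radius + k * pose_uncertainty(frames_ago)
    return abs(planned_point - obstacle) > clearance_needed
```

With these numbers, an obstacle seen one frame ago demands only 0.45 m of clearance, while the same obstacle remembered from 25 frames back demands 1.05 m. Stale measurements constrain the drone less and less, and explicitly modeling that (instead of pretending the map is exact) is what lets the planner keep flying fast.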
This interesting project out of MIT aims to use technology to help visually impaired people navigate through the use of a haptic feedback belt, chest-mounted sensors, and a braille display.
The belt consists of vibration motors controlled by what appears to be a Raspberry Pi (for the prototype, anyway), with a distance sensor and camera connected as well. The core algorithm takes input from the camera and distance sensors to compute the distance to obstacles, and buzzes the appropriate motor to alert the user, which is fairly expected stuff. However, the project has a higher goal: to assist in identifying and using chairs.
Aiming to detect the seat and arms, the algorithm looks for three horizontal surfaces near each other, taking extra care to ensure the chair isn’t occupied. The study found that, used in conjunction with a cane, the system noticeably helped users navigate through realistic environments, as measured by minor and major collisions. Users recorded dramatically fewer collisions as compared to using the system alone or the cane alone. The project also calls for a belt-mounted braille display to relay more complicated information to the user.
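The chair-finding heuristic, three horizontal surfaces clustered together with the seat unoccupied, is simple enough to sketch. The surface representation, thresholds, and function names below are our own assumptions, not the project's actual algorithm:

```python
# Sketch of the chair-finding heuristic: look for three roughly
# horizontal surfaces (seat plus two armrests) close to one another, and
# refuse to suggest a chair whose seat appears occupied. The surface
# tuples and thresholds are illustrative assumptions on our part.

def looks_like_chair(surfaces, max_spread_m=0.8):
    """Decide whether detected horizontal planes form an empty chair.

    surfaces: list of (height_m, distance_m, occupied) tuples, one per
    horizontal surface found in the depth data.
    Returns True if three surfaces cluster within max_spread_m of each
    other in distance and none of them is occupied.
    """
    if len(surfaces) < 3:
        return False
    surfaces = sorted(surfaces, key=lambda s: s[1])   # sort by distance
    for i in range(len(surfaces) - 2):
        trio = surfaces[i:i + 3]
        spread = trio[-1][1] - trio[0][1]
        if spread <= max_spread_m and not any(occ for _, _, occ in trio):
            return True
    return False
```

A real implementation would also sanity-check the surface heights (a seat sits lower than armrests), but even this crude clustering shows why three surfaces are a much stronger cue than one.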
There’s a harsh truth underlying all robotic research: compared to evolution, we suck at making things move. Nature has a couple billion years of practice making things that can slide, hop, fly, swim and run, so why not leverage those platforms? That’s the idea behind this turtle with a navigation robot strapped to its back.
This reminds us somewhat of an alternative universe sci-fi story by S.M. Stirling called The Sky People. In the story, Venus is teeming with dinosaurs that Terran colonists use as beasts of burden, with brain implants that stimulate pleasure centers to control them. While the team led by [Phill-Seung Lee] at the Korea Advanced Institute of Science and Technology isn’t likely to get as much work from the red-eared slider turtle as the colonists in the story got from their bionic dinosaurs, there’s still plenty to learn from a setup like this. Using what amounts to a head-up display for the turtle in the form of a strip of LEDs, along with a food dispenser for positive reinforcement, the bionic terrapin is trained to associate food with the flashing LEDs. The LEDs are then used as cues as the turtle navigates between waypoints in a tank. Sadly, the full article is behind a paywall, but the video below gives you a taste of the gripping action.