Low-cost Autonomous Rover Will Drive Your Projects

[Miguel] wanted to get more hands-on experience with Python, so he created a small robotic platform as a testbed. But as such things sometimes go, it turns out the robot he created is a worthy enough project in its own right. With a low total cost and highly flexible design, it might be exactly what you’re looking for. Who knows, it might even bootstrap that rover project that’s been wandering around the back of your mind.

The robot makes use of an exceptionally simple 3D printed frame. No complicated suspension to worry about, no fasteners to hold together multiple printed parts. It’s just a single printed “L”-shaped piece with mounts for the motors and front sensor board. As designed it simply drags its tail around, which should work fine on smooth surfaces, but might need a bit of tweaking if you plan on taking your new robotic friend on an outdoor adventure.

There’s a big open area on the “tail” to mount a Raspberry Pi, but you could really put whatever board or microcontroller you wish here. In the nose is an HC-SR04 ultrasonic sensor, which [Miguel] is using to perform obstacle avoidance in his Python code. A dual H-Bridge motor driver controls the pair of gear motors in the front to provide propulsion and steering, and a buck converter steps down the 7.4V from the 2S LiPo battery to power the electronics. He’s even included a mini breadboard so you can add circuits or sensors as experimental payloads.
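To give a flavor of what that obstacle-avoidance loop involves, here’s a minimal Python sketch using RPi.GPIO on the Pi. The pin assignments, the 20 cm threshold, and the “pivot right when blocked” behavior are all placeholder assumptions for illustration, not [Miguel]’s actual code.

```python
import time
import RPi.GPIO as GPIO  # only runs on a Raspberry Pi

TRIG, ECHO = 23, 24            # HC-SR04 pins (assumed wiring)
LEFT_FWD, RIGHT_FWD = 17, 27   # H-bridge inputs (assumed wiring)

GPIO.setmode(GPIO.BCM)
GPIO.setup(TRIG, GPIO.OUT)
GPIO.setup(ECHO, GPIO.IN)
GPIO.setup([LEFT_FWD, RIGHT_FWD], GPIO.OUT)

def distance_cm():
    """Fire a 10 µs trigger pulse and time the echo."""
    GPIO.output(TRIG, True)
    time.sleep(0.00001)
    GPIO.output(TRIG, False)
    start = stop = time.time()
    while GPIO.input(ECHO) == 0:
        start = time.time()
    while GPIO.input(ECHO) == 1:
        stop = time.time()
    return (stop - start) * 34300 / 2   # speed of sound, round trip

try:
    while True:
        if distance_cm() < 20:           # obstacle ahead: pivot right
            GPIO.output(LEFT_FWD, True)
            GPIO.output(RIGHT_FWD, False)
        else:                            # path clear: drive straight
            GPIO.output(LEFT_FWD, True)
            GPIO.output(RIGHT_FWD, True)
        time.sleep(0.1)
finally:
    GPIO.cleanup()
```

A real build would likely add PWM speed control on the H-bridge enable pins and some filtering on the ultrasonic readings, but the skeleton is about this simple.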

If you’re looking for a slightly more advanced 3D printed robotics platform, we’ve seen our fair share, from the nearly fully printed Watney to a tank that looks like it’s ready for front-line combat.

Hackaday Prize Entry: Visioneer Sensor HUD

Only about two percent of the blind or visually impaired work with guide animals, and assistive canes have their own limitations. There are wearable devices out there that take sensor data and turn the world into something a visually impaired person can understand, but these are expensive. The Visioneer is a wearable sensor package intended for the benefit of visually impaired persons. The key feature: it’s really inexpensive.

The Visioneer consists of a pair of sunglasses, two cameras, sensors, a Pi Zero, and bone conduction transducers for audio and vibration feedback. The Pi listens to a 3-axis accelerometer and gyroscope, a laser proximity sensor for obstacle detection within 6.5 ft, and a pair of NOIR cameras. This data is processed by neural nets and OpenCV, giving the wearer motion detection and object recognition. A 2200 mAh battery powers it all.
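The write-up doesn’t spell out the recognition pipeline, but OpenCV’s dnn module is one common way to get this kind of object recognition running on a Pi. The sketch below assumes a MobileNet-SSD Caffe model (the file names are placeholders) and a camera exposed as a V4L2 device; the Visioneer’s actual code may look quite different.

```python
import cv2

# Assumed model files -- a stock MobileNet-SSD, not anything from the project
net = cv2.dnn.readNetFromCaffe("MobileNetSSD_deploy.prototxt",
                               "MobileNetSSD_deploy.caffemodel")

cap = cv2.VideoCapture(0)        # assumes the camera shows up as /dev/video0
ret, frame = cap.read()
if ret:
    # Preprocess the frame into the 300x300 blob the network expects
    blob = cv2.dnn.blobFromImage(cv2.resize(frame, (300, 300)),
                                 0.007843, (300, 300), 127.5)
    net.setInput(blob)
    detections = net.forward()   # shape: (1, 1, N, 7)
    for i in range(detections.shape[2]):
        confidence = detections[0, 0, i, 2]
        if confidence > 0.5:
            class_id = int(detections[0, 0, i, 1])
            print(f"object class {class_id}, confidence {confidence:.2f}")
cap.release()
```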

When the accelerometer determines that the person is walking, the software switches into obstacle avoidance mode. If the wearer is standing still, however, the Visioneer assumes they want to interact with nearby objects, and leverages object recognition software and haptic/audio cues to relay the information. It’s a great device, and unlike most commercial ‘glasses-based object detection’ devices, the BOM cost on this project is only about $100. Even if you double or triple that (as you should), it’s still nearly an order of magnitude cheaper than the commercial offerings.
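A rough sketch of that mode switch: watch the variance of the accelerometer magnitude over a short window and flip behaviors accordingly. The threshold and the toy readings below are made up for illustration, not values from the project.

```python
import math

WALK_THRESHOLD = 0.05   # variance in g^2 -- an assumed tuning value

def is_walking(samples):
    """Guess walking vs. standing from the variance of recent accel magnitudes."""
    mags = [math.sqrt(x * x + y * y + z * z) for x, y, z in samples]
    mean = sum(mags) / len(mags)
    return sum((m - mean) ** 2 for m in mags) / len(mags) > WALK_THRESHOLD

# Toy readings: standing still (~1 g) vs. bouncing along while walking
standing = [(0.0, 0.0, 1.0)] * 20
walking  = [(0.0, 0.0, 1.0 + 0.5 * ((i % 4) - 1.5)) for i in range(20)]

for label, window in [("standing", standing), ("walking", walking)]:
    mode = "obstacle avoidance" if is_walking(window) else "object interaction"
    print(f"{label}: switch to {mode} mode")
```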

Visual Scanner Turns Obstacles Into Braille

This interesting project out of MIT aims to help visually impaired people navigate using a haptic feedback belt, chest-mounted sensors, and a braille display.

The belt consists of an array of vibration motors controlled by what appears to be a Raspberry Pi (for the prototype, anyway), with a distance sensor and camera connected as well. The core algorithm takes input from the camera and distance sensors, computes the distance to obstacles, and buzzes the appropriate motor to alert the user. So far, fairly expected stuff. However, the project has a higher goal: to assist in identifying and using chairs.
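As a back-of-the-envelope illustration of that mapping, the Python sketch below picks which belt motor to buzz from an obstacle’s bearing and distance. The five-motor layout and the 1.5 m warning radius are assumptions for the example; the real system drives actual motors from the Pi rather than returning a string.

```python
# Hypothetical layout: five motors spread across the belt, left to right.
MOTORS = ["far-left", "left", "center", "right", "far-right"]
BUZZ_DISTANCE_M = 1.5   # assumed warning radius, not from the paper

def motor_for_obstacle(bearing_deg, distance_m):
    """Map an obstacle bearing (-90..+90 deg, 0 = straight ahead) to a motor."""
    if distance_m > BUZZ_DISTANCE_M:
        return None                          # too far away to warn about
    index = int((bearing_deg + 90) / 180 * (len(MOTORS) - 1) + 0.5)
    return MOTORS[max(0, min(index, len(MOTORS) - 1))]

# Example: the depth pipeline reports an obstacle ahead and to the left
print(motor_for_obstacle(-40, 0.8))   # -> "left"
print(motor_for_obstacle(10, 3.0))    # -> None (outside the warning radius)
```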

Aiming to detect the seat and arms, the algorithm looks for three horizontal surfaces near each other, taking extra care to ensure the chair isn’t occupied. The study found that, used in conjunction with a cane, the system noticeably helped users navigate realistic environments: they recorded dramatically fewer minor and major collisions than with either the system or the cane alone. The project also calls for a belt-mounted braille display to relay more complicated information to the user.
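To make the three-surface chair heuristic concrete, here’s a heavily simplified Python sketch. The height ranges, the lateral-spread limit, and the way detected surfaces are represented are illustrative guesses, not the parameters the MIT team actually used.

```python
SEAT_HEIGHT = (0.35, 0.55)   # metres above the floor -- assumed ranges,
ARM_HEIGHT  = (0.55, 0.80)   # not the study's actual parameters
MAX_SPREAD  = 0.7            # surfaces must sit within this lateral span (m)

def looks_like_chair(surfaces):
    """surfaces: list of (height_m, lateral_offset_m, occupied) tuples for
    horizontal planes found by the camera/depth pipeline."""
    seats = [s for s in surfaces if SEAT_HEIGHT[0] <= s[0] <= SEAT_HEIGHT[1]]
    arms  = [s for s in surfaces if ARM_HEIGHT[0]  <= s[0] <= ARM_HEIGHT[1]]
    for seat in seats:
        near_arms = [a for a in arms if abs(a[1] - seat[1]) <= MAX_SPREAD]
        if len(near_arms) >= 2 and not seat[2] and not any(a[2] for a in near_arms):
            return True
    return False

# One seat-height plane flanked by two armrest-height planes, none occupied
print(looks_like_chair([(0.45, 0.0, False),
                        (0.65, -0.3, False),
                        (0.66, 0.3, False)]))   # -> True
```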

We at HaD have followed along with several braille projects, including a refreshable braille display, a computer with a braille display and keyboard, and this braille printer.
