Only about two percent of the blind or visually impaired work with guide animals, and assistive canes have their own limitations. There are wearable devices out there that take sensor data and turn the world into something a visually impaired person can understand, but these are expensive. The Visioneer is a wearable device designed as a sensor package for the benefit of visually impaired persons. The key feature: it’s really inexpensive.
The Visioneer consists of a pair of sunglasses, two cameras, sensors, a Pi Zero, and bone conduction transducers for audio and vibration feedback. The Pi listens to a 3-axis accelerometer and gyroscope, a laser proximity sensor for obstacle detection within 6.5 ft, and a pair of NoIR cameras. This data is processed by neural nets and OpenCV, giving the wearer motion detection and object recognition. A 2200 mAh battery powers it all.
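To make that data flow concrete, here is a rough sketch (not the project's actual code) of how a single NoIR camera frame could be pushed through OpenCV's DNN module for object recognition on a Pi. The MobileNet-SSD model files, class list, and confidence threshold are assumptions for illustration only.

```python
# Hypothetical object-recognition sketch using OpenCV's DNN module.
# The model files and class list are assumptions, not the Visioneer's firmware.
import cv2

CLASSES = ["background", "aeroplane", "bicycle", "bird", "boat", "bottle",
           "bus", "car", "cat", "chair", "cow", "diningtable", "dog", "horse",
           "motorbike", "person", "pottedplant", "sheep", "sofa", "train",
           "tvmonitor"]

# Hypothetical paths to a pretrained MobileNet-SSD Caffe model.
net = cv2.dnn.readNetFromCaffe("MobileNetSSD_deploy.prototxt",
                               "MobileNetSSD_deploy.caffemodel")

def detect_objects(frame, confidence_threshold=0.5):
    """Return labels of objects detected in a single BGR camera frame."""
    blob = cv2.dnn.blobFromImage(cv2.resize(frame, (300, 300)),
                                 scalefactor=0.007843, size=(300, 300),
                                 mean=127.5)
    net.setInput(blob)
    detections = net.forward()
    labels = []
    for i in range(detections.shape[2]):
        confidence = detections[0, 0, i, 2]
        if confidence > confidence_threshold:
            class_id = int(detections[0, 0, i, 1])
            labels.append(CLASSES[class_id])
    return labels

# Grab one frame from the first camera (e.g. a NoIR camera) and report.
capture = cv2.VideoCapture(0)
ok, frame = capture.read()
if ok:
    print(detect_objects(frame))
capture.release()
```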
When the accelerometer determines that the wearer is walking, the software switches into obstacle avoidance mode. If the wearer is standing still, the Visioneer assumes they want to interact with nearby objects, and leverages object recognition software and haptic/audio cues to relay the information. It’s a great device, and unlike most commercial ‘glasses-based object detection’ devices, the BOM cost on this project is only about $100. Even if you double or triple that (as you should), that’s still almost an order of magnitude of cost reduction.
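As a rough illustration of that mode switch (again, not the actual Visioneer firmware), the loop below uses the accelerometer magnitude to choose between obstacle avoidance and object recognition. The threshold values and the sensor/feedback helpers are hypothetical placeholders.

```python
# Hypothetical mode-switching loop: walking -> obstacle avoidance,
# standing still -> object recognition. All names and thresholds are
# illustrative assumptions, not the project's real code.
import math
import time

WALK_THRESHOLD_G = 1.3    # assumed acceleration magnitude (in g) that indicates walking
OBSTACLE_RANGE_M = 2.0    # roughly 6.5 ft, the laser proximity sensor's stated range

def accel_magnitude(ax, ay, az):
    """Combine the 3-axis accelerometer reading into a single magnitude."""
    return math.sqrt(ax * ax + ay * ay + az * az)

def main_loop(read_accel, read_laser_range, detect_objects, alert):
    """read_accel, read_laser_range, detect_objects, and alert are placeholder
    callables standing in for the real sensor and feedback drivers."""
    while True:
        ax, ay, az = read_accel()
        if accel_magnitude(ax, ay, az) > WALK_THRESHOLD_G:
            # Walking: obstacle-avoidance mode using the laser rangefinder.
            distance = read_laser_range()
            if distance is not None and distance < OBSTACLE_RANGE_M:
                alert("vibrate", intensity=1.0 - distance / OBSTACLE_RANGE_M)
        else:
            # Standing still: run object recognition on a camera frame and
            # announce results over the bone-conduction transducers.
            for label in detect_objects():
                alert("speak", text=label)
        time.sleep(0.1)
```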
That’s a LOT of processing for a simple RasPi; in fact, I expect it’s too much not to be annoying as hell with lag.
Although, to be fair, the writeup here says “two cameras, sensors, a Pi Zero” when in fact, on a closer look, I see it’s two Pi Zeros (which you could predict from the mention of two cameras).
There are two Pis in the final version. In this test model there is only one Pi.
Very cool. Make the released model look more like Geordi La Forge’s headgear in ST:TNG and it’ll even be cool looking.
And in that one-Pi version I assume there is also only one camera.