Brain Implant Offers Artificial Vision To The Blind

Nothing makes you appreciate your vision more than getting a little older and realizing that it used to be better and that it will probably get worse. But imagine how much more difficult it would be if you were totally blind. That is what happened to [Berna Gomez] who, at 42, developed a medical condition that destroyed her optic nerves, leaving her blind in a matter of days and ending her career as a science teacher. But thanks to science, [Gomez] can now see, at least to some extent. Sixteen years later, she volunteered to have a penny-sized device with 96 electrodes implanted in her visual cortex. The research appears in the Journal of Clinical Investigation, and while it is a crude first step, it shows a lot of promise and uses some very novel techniques to overcome certain limitations.

The 96 electrodes are arranged in a 10×10 grid with the four corner electrodes missing. The resolution, of course, is lacking, so the project turns to a glasses-mounted camera to acquire images and process them down to electrode signals, which may not map directly onto the image.
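
To get a feel for just how little 96 electrodes can convey, here is a toy sketch of our own (not from the paper) that reduces a grayscale camera frame to a 10×10 grid with the four corners knocked out. The study's actual encoding is far more sophisticated than simple block averaging, so treat this purely as an illustration of the resolution involved:

```python
import numpy as np

def frame_to_electrodes(gray: np.ndarray) -> np.ndarray:
    """Block-average a grayscale frame down to a 10x10 grid of levels."""
    h, w = (gray.shape[0] // 10) * 10, (gray.shape[1] // 10) * 10
    grid = gray[:h, :w].reshape(10, h // 10, 10, w // 10).mean(axis=(1, 3))
    for r, c in [(0, 0), (0, 9), (9, 0), (9, 9)]:
        grid[r, c] = np.nan  # the four corner electrodes are absent
    return grid  # 96 usable stimulation levels

frame = np.random.randint(0, 256, (480, 640)).astype(float)  # stand-in frame
print(frame_to_electrodes(frame))
```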

Continue reading “Brain Implant Offers Artificial Vision To The Blind”

Quadcopter With Stereo Vision

Flying a quadcopter or other drone can be pretty exciting, especially when using the video signal to do the flying. It's almost like a video game or flight simulator, except the aircraft is physically real. To bring this experience even closer to actual flight, [Kevin] implemented stereo vision on his quadcopter, which also adds an impressive amount of functionality to the drone.

While he doesn’t use this particular setup for drone racing or virtual reality, there are some other interesting things [Kevin] is able to do with it. The two cameras, both ESP32 camera modules, combine to provide stereo vision, which can be used to determine distances to objects. By leveraging Amazon’s cloud computing services to offload some of the processing demands, the quadcopter can recognize faces and hold a fixed distance from a face without needing power-hungry computing onboard.
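
The ranging boils down to the classic pinhole-stereo relationship: distance Z equals the focal length times the camera baseline divided by the disparity, Z = fB/d. [Kevin]'s calibration values aren't published, so the numbers in this quick sketch are invented for illustration:

```python
# Z = f * B / d: focal length (pixels) times baseline (meters) over disparity (pixels).
FOCAL_PX = 600.0    # hypothetical ESP32-CAM focal length, in pixels
BASELINE_M = 0.08   # hypothetical spacing between the two lenses, in meters

def depth_from_disparity(disparity_px: float) -> float:
    """Distance in meters to a feature seen shifted between the two views."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return FOCAL_PX * BASELINE_M / disparity_px

print(depth_from_disparity(40.0))  # a 40-pixel shift works out to 1.2 m
```

The farther away a face is, the smaller the disparity, which is why stereo rigs with short baselines like this one lose accuracy at range.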

There are a lot of other abilities this drone unlocks by offloading its resource-hungry tasks to the cloud. It can be flown with a smartphone or tablet, and it has its own web client where the user can watch the facial recognition being performed. Presumably it wouldn’t be too difficult to use this drone for other tasks where stereoscopic vision is a requirement.

Thanks to [Ilya Mikhelson], a professor at Northwestern University, for this tip about a student’s project.

Braille On A Tablet Computer

Signing up for college classes can be intimidating, from tuition and textbook requirements to finding an engaging professor. Now imagine signing up online when you cannot use your monitor. We wager that roughly ninety-nine percent of the hackers reading this article have it displayed on a tablet, phone, or computer monitor. Conversely, “Only one percent of published books is available in Braille,” according to [Kristina Tsvetanova], who has created a hybrid tablet computer that pairs a Braille display with a touch-screen running Android. The tablet accepts voice commands for launching apps, a feature baked right into Android. The idea came to her after helping a blind classmate sign up for classes.

Details on the mechanism are not clear, but they are calling it smart liquid, so it may be safe to assume hydraulic valves control the raised dots, which they call “tixels”. A rendering of the tablet can be seen below the break. The ability to create a full page of Braille cells suggests they have made the technology pretty compact. We have seen Braille written on PCBs, a refreshable display based on vibrator motors, and a nicely sized Braille keyboard that can fit on the back of a mobile phone.
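
Under the hood, each refreshable cell is just six (or eight) binary actuators, and Unicode’s Braille Patterns block conveniently encodes dots 1 through 8 as bits 0 through 7 starting at U+2800. As a quick illustration of the sort of bitmask a tixel driver might chew on (our own sketch, not the tablet’s firmware), here is how a few standard Grade 1 letters map to cells:

```python
DOTS = {"a": [1], "b": [1, 2], "c": [1, 4], "d": [1, 4, 5]}  # Grade 1 Braille

def cell(dots: list[int]) -> str:
    """Render one Braille cell via the Unicode Braille Patterns block."""
    mask = 0
    for dot in dots:
        mask |= 1 << (dot - 1)  # dot n is bit n-1
    return chr(0x2800 + mask)

print("".join(cell(DOTS[ch]) for ch in "abcd"))  # prints ⠁⠃⠉⠙
```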

Continue reading “Braille On A Tablet Computer”

Real Or Fake? Robot Uses AI To Find Waldo

The last few weeks have seen a number of tech sites reporting on a robot that can find and point out Waldo in those “Where’s Waldo” books. Designed and built by ad agency Redpepper, the robot consists of a UARM Metal arm, with a Raspberry Pi controlling the show.

A Logitech c525 webcam captures images, which are processed on the Pi with OpenCV and then sent to Google’s cloud-based AutoML Vision service. AutoML, trained with numerous images of Waldo, attempts a pattern match. If a pattern is found, the coordinates are fed to PYUARM, and the UARM will literally point Waldo out.
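
Redpepper hasn’t published code, so here is our best guess at the rough shape of that pipeline, with the cloud call and the arm move stubbed out as hypothetical helpers:

```python
import cv2  # OpenCV, as used on the Pi

def find_waldo(frame):
    """Placeholder for the AutoML Vision request; returns (x, y) or None."""
    raise NotImplementedError("upload the frame to AutoML Vision here")

def point_at(x, y):
    """Placeholder for translating pixel coordinates into a PYUARM move."""
    raise NotImplementedError("drive the UARM Metal here")

cap = cv2.VideoCapture(0)  # the Logitech c525
ok, frame = cap.read()
if ok:
    hit = find_waldo(frame)
    if hit is not None:
        point_at(*hit)
cap.release()
```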

While this is a totally plausible project, we have to admit a few things caught our jaundiced eye. The Logitech c525 has a field of view (FOV) of 69°. While we don’t have dimensions for the UARM Metal, it looks like the camera is less than a foot in the air. Amazon states that the “Where’s Waldo Deluxe Edition” measures 10″ x 0.2″ x 12.5″. That means the open book will be 10″ x 25″. The robot is going to have a hard time imaging a surface that large in a single shot. What’s more, the c525 is a 720p camera, so there isn’t a whole lot of pixel density to pattern match against. Finally, there’s the rubber hand the robot uses to point out Waldo. Wouldn’t that hand block at least some of the camera’s view to the left?
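
The field-of-view complaint is easy to sanity-check. Treating the 69° figure as the horizontal FOV, the width of the scene a camera covers is 2h·tan(FOV/2), so from 12 inches up the c525 sees a strip only about 16.5 inches across, well short of a 25-inch open book:

```python
import math

def coverage_width(height_in: float, fov_deg: float) -> float:
    """Width of the scene visible at a given camera height."""
    return 2 * height_in * math.tan(math.radians(fov_deg / 2))

print(round(coverage_width(12, 69), 1))  # ~16.5 inches
```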

We’re not going to jump out and call this one fake just yet — it is entirely possible that the robot took a mosaic of images and used that to pattern match. Redpepper may have used a bit of movie magic to make the process more interesting. What do you think? Let us know down in the comments!

Robot Maps Rooms With Help From IPhone

The Unity engine has been around since Apple started using Intel chips, and has made quite a splash in the gaming world. Unity allows developers to create 2D and 3D games, but there are some other interesting applications of this gaming engine as well. For example, [matthewhallberg] used it to build a robot that can map rooms in 3D.

The impetus for this project was a robotics company that used a series of robots around its business. The robots navigated using computer vision but couldn’t map rooms from scratch. The company hired [matthewhallberg] to tackle this problem, and this robot is a preliminary result. Using the Unity engine and an iPhone, the robot can operate in one of three modes. The first is user control, the second is object following, and the third is 3D mapping.

The robot seems fairly easy to construct and carries only an iPhone, a NodeMCU, some motors, and a battery. Most of the computational work is done remotely, with the robot simply receiving its movement commands from another computer. There’s a lot going on software-wise, with several toolkits and software packages to install and get communicating with one another, but the video below does a good job of showing what you’ll need and how it all works together. If that’s all too much, there are simpler robots that can get you started in the world of computer vision and mapping.
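
The write-up doesn’t spell out the wire protocol between the computer and the NodeMCU, but the link could plausibly be as simple as a few UDP datagrams. Here is a minimal sketch of the desktop side, with the IP address, port, and command strings all invented for illustration:

```python
import socket

ROBOT_ADDR = ("192.168.1.50", 4210)  # hypothetical NodeMCU IP and UDP port

def send_command(cmd: str) -> None:
    """Fire-and-forget a drive command such as 'forward' or 'stop'."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(cmd.encode(), ROBOT_ADDR)

send_command("forward")
```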

Continue reading “Robot Maps Rooms With Help From IPhone”

What Makes A Hacker

I think I can sum up the difference between those of us who regularly visit Hackaday and the world of non-hackers. As a case study, here is a story about necessity being the mother of invention, and about the people who do the inventing.

Hackaday has overlap with sites like Pinterest and Instructables, but there is one vital difference: we choose to create something new and beautiful with the materials at hand. Often these tools and techniques are very simple. We look to make things elegant by reducing unnecessary clutter, not by adding glitter. If something could be built with a 555 timer, we will let you know. If there is a better choice for a processor, we will tell you.

My first real work commute was a forty-minute eastward drive every morning and a forty-minute westward drive every evening. This route pointed my car directly into the sun twice a day. Staring into a miasma of incandescent plasma for an hour and a half a day isn’t fun, and probably isn’t safe, but we can fix that.

Continue reading “What Makes A Hacker”

Arduino Video Isn’t Quite 4K

Video resolution is always on the rise. The days of 640×480 video have given way to 720, 1080, and even 4K resolutions. There’s no end in sight. However, you need a lot of horsepower to process that many pixels. What if you have a small robot powered by a microcontroller (perhaps an Arduino) and you want it to have vision? You can’t realistically process HD video, or even low-grade video, with a small processor. CORTEX systems has an open source solution: a 7-pixel camera with an I2C interface.

The files for SNAIL Vision include a bill of materials and the PCB layout. There’s software for the Vishay sensors used, and there are provisions for mounting a lens holder to the PCB with glue. The design is fairly simple: in addition to the array of sensors, there’s an I2C multiplexer, which also acts as a level shifter, plus a handful of resistors and connectors.
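
The SNAIL Vision files will have the real part numbers and register maps; the addresses below are placeholders. The read loop is conceptually simple either way: select a channel on the multiplexer, read that sensor, and repeat for all seven pixels. Here is a sketch from the Raspberry Pi side using smbus2, assuming a one-byte channel-select mux and a word-wide light-level register:

```python
from smbus2 import SMBus

MUX_ADDR = 0x70     # hypothetical multiplexer address
SENSOR_ADDR = 0x10  # hypothetical Vishay sensor address
LIGHT_REG = 0x04    # hypothetical result register

def read_pixels() -> list[int]:
    """Read all seven light levels through the multiplexer."""
    pixels = []
    with SMBus(1) as bus:
        for channel in range(7):
            bus.write_byte(MUX_ADDR, 1 << channel)  # route one sensor onto the bus
            pixels.append(bus.read_word_data(SENSOR_ADDR, LIGHT_REG))
    return pixels

print(read_pixels())
```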

Continue reading “Arduino Video Isn’t Quite 4K”