D-POINT: A Digital Pen With Optical-Inertial Tracking

[Jcparkyn] clearly had an interesting topic for their thesis project, and was conscientious enough to write up a chunk of it and release it into the wild. The project in question is a digital pen that uses some neat sensor fusion to combine the inputs from a pen-mounted gyro/accelerometer with data from an optical tracking system provided by an off-the-shelf webcam.

A six degrees of freedom (6DOF) tracking system is achieved as a result, with the pen-mounted hardware tracking orientation and the webcam tracking the 3D position. The pen itself is quite neat, with an Alps Alpine HSFPAR003A load sensor measuring the contact pressure transmitted to it from the stylus tip. A Seeed XIAO nRF52840 Sense is on duty for Bluetooth and hosts the needed IMU. This handy little module deals with all the details needed for such a high-integration project, and even manages the charging of a single 10440 lithium cell via a USB-C connector.

Positional tracking uses Visual Pose Estimation (VPE) assisted by ArUco markers mounted on the end of the stylus. A consumer-grade (i.e. uncalibrated) webcam is all that’s required on the hardware side. The software uses the familiar OpenCV stack to unroll the effects of the webcam’s rolling shutter, followed by Perspective-n-Point (PnP) to estimate the pose from the corrected image stream. Finally, a coordinate-space conversion determines the stylus tip position relative to the drawing surface.
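
For a sense of what that looks like in practice, here’s a minimal sketch of the marker-detection-plus-PnP step using OpenCV’s ArUco module (4.7+ API). This is not [Jcparkyn]’s actual code: the camera intrinsics, marker size, and dictionary are placeholder assumptions, and the rolling-shutter correction is omitted entirely.

```python
# Sketch of ArUco detection + PnP pose estimation; intrinsics, marker
# size, and dictionary below are assumptions, not the project's values.
import cv2
import numpy as np

# Rough pinhole model for an uncalibrated consumer webcam (focal length
# is a guess, not a calibrated value).
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
dist = np.zeros(5)      # assume negligible lens distortion
MARKER_SIZE = 0.02      # marker edge length in metres (assumed)

# 3D marker corners in the order SOLVEPNP_IPPE_SQUARE expects, which
# matches ArUco's corner order (TL, TR, BR, BL).
obj_pts = np.array([
    [-MARKER_SIZE / 2,  MARKER_SIZE / 2, 0],
    [ MARKER_SIZE / 2,  MARKER_SIZE / 2, 0],
    [ MARKER_SIZE / 2, -MARKER_SIZE / 2, 0],
    [-MARKER_SIZE / 2, -MARKER_SIZE / 2, 0],
], dtype=np.float32)

detector = cv2.aruco.ArucoDetector(
    cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50))

def estimate_pose(frame):
    """Return (rotation vector, translation vector) for the first marker."""
    corners, ids, _ = detector.detectMarkers(frame)
    if ids is None:
        return None
    ok, rvec, tvec = cv2.solvePnP(obj_pts, corners[0].reshape(4, 2),
                                  K, dist, flags=cv2.SOLVEPNP_IPPE_SQUARE)
    return (rvec, tvec) if ok else None
```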

The sensor fusion is taken care of with a Kalman filter, smoothed with the typical Rauch-Tung-Striebel (RTS) algorithm before being passed on to the final application. This process runs in Python using the NumPy module, as you would expect, but is accelerated using the Numba JIT compiler.
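
For the curious, the filter-then-smooth pattern boils down to something like this toy NumPy version: a one-dimensional constant-velocity model with made-up noise parameters, nothing like the full state the project actually tracks, and without the Numba acceleration.

```python
# Toy Kalman filter + RTS smoother for a 1D constant-velocity model.
# Noise matrices are illustrative assumptions.
import numpy as np

F = np.array([[1.0, 1.0], [0.0, 1.0]])  # state transition (position, velocity)
H = np.array([[1.0, 0.0]])              # we observe position only
Q = np.eye(2) * 1e-4                    # process noise (assumed)
R = np.array([[1e-2]])                  # measurement noise (assumed)

def kalman_rts(zs):
    n = len(zs)
    xs = np.zeros((n, 2)); Ps = np.zeros((n, 2, 2))   # posteriors
    xp = np.zeros((n, 2)); Pp = np.zeros((n, 2, 2))   # priors (predictions)
    x, P = np.zeros(2), np.eye(2)
    for k, z in enumerate(zs):
        # Predict forward one step
        x, P = F @ x, F @ P @ F.T + Q
        xp[k], Pp[k] = x, P
        # Update with the measurement
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (np.atleast_1d(z) - H @ x)
        P = (np.eye(2) - K @ H) @ P
        xs[k], Ps[k] = x, P
    # Rauch-Tung-Striebel backward pass
    for k in range(n - 2, -1, -1):
        C = Ps[k] @ F.T @ np.linalg.inv(Pp[k + 1])
        xs[k] = xs[k] + C @ (xs[k + 1] - xp[k + 1])
        Ps[k] = Ps[k] + C @ (Ps[k + 1] - Pp[k + 1]) @ C.T
    return xs
```

The backward pass is the RTS part: it refines each estimate using later measurements, which is why it can only run after the forward filter has finished.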

Motion tracking is not news to us; we’ve seen many an implementation over the years, such as this one. But digital input pens? Why aren’t they more of a thing?

Thanks to [Oliver] for the tip!

Full Self-Driving, On A Budget

Self-driving is currently the Holy Grail of the automotive world, with a number of companies racing to build general-purpose autonomous vehicles that can get from point A to point B with no user input. While no one has brought one to market yet, at least one company has promised this feature and had customers pay for it, only to continually move the goalposts for delivery because of how challenging the problem has turned out to be. But it doesn’t need to be that hard or expensive to solve, at least in some situations.

The situation in question is driving on a single stretch of highway, and it only focuses on steering, so there’s no accelerator or brake pedal input to handle. The highway is driven normally, with a webcam capturing images of the route and an Arduino capturing data about the steering angle. The idea here is that with enough training, the Arduino could eventually steer the car. But first, some math needs to happen on the training data: the steering wheel spends most of its time not turning the car, so the data has to be rebalanced to keep genuine steering events from being treated as statistical anomalies. After training, the system does a surprisingly good job at “driving” based on this data, and does it on a budget not much bigger than a laptop, a microcontroller, and a webcam.
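
That rebalancing might look something like the sketch below, where near-zero steering angles are aggressively downsampled; the threshold and keep fraction are illustrative guesses, not values from the project.

```python
# Downsample straight-line driving so steering events aren't drowned out.
# Threshold and keep fraction are illustrative assumptions.
import random

STRAIGHT_THRESHOLD = 2.0   # degrees; anything smaller counts as "straight"
KEEP_FRACTION = 0.1        # keep only 10% of straight-ahead samples

def rebalance(samples):
    """samples: list of (image, steering_angle_degrees) pairs."""
    balanced = []
    for image, angle in samples:
        if abs(angle) < STRAIGHT_THRESHOLD and random.random() > KEEP_FRACTION:
            continue  # drop most straight-driving frames
        balanced.append((image, angle))
    return balanced
```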

Admittedly, this project was a proof-of-concept to investigate machine learning, neural networks, and other statistical algorithms used in these sorts of systems, and doesn’t actually drive any cars on any roadways. Even the creator says he wouldn’t trust it himself, but that he was pleasantly surprised by the results of such a simple system. It could also be expanded to handle the brake and accelerator pedals with separate neural networks. It’s not our first budget-friendly self-driving system, either; this one makes it happen with the enormous computing resources of a single Android smartphone.

Continue reading “Full Self-Driving, On A Budget”

IR Camera Is Excellent Hacking Platform

While there have been hiccups here and there, the general trend in electronics is toward lower cost and higher performance. This can be seen in fairly obvious ways, like more powerful and affordable computers, but it also often means that more capable software can be used in other devices without needing expensive hardware to support it. [Manawyrm] and [Toble_Miner] found this was true of a particular inexpensive thermal camera that ships with Linux installed, and found the platform nearly perfect for tinkering with, adding plenty of other features to turn it into a much more capable tool.

The duo have been working on an SC240N variant of the InfiRay C200 infrared camera, which ships with a HiSilicon SoC. Its display is capable of 25 frames per second, making the platform an excellent candidate for modification. A few ports were added to the device, including USB and microSD, which also makes the internal serial port easy to access. From there, the device can be loaded with the U-Boot bootloader to run essentially anything you’d find on any other Linux machine, such as a webcam interface (and a port of DOOM, of course). The duo didn’t stop at software modifications, though. They also equipped the camera with a magnetically attached lens that changes the focal length for improved imaging at close range.

While the internal machinations of this device are interesting, it actually turns out to be a fairly capable infrared camera in its own right. Devices like this certainly don’t need a full Linux environment to work; we’ve seen pocket-sized thermal cameras based on nothing more powerful than an ESP32. Still, including Linux and a little more processing power does tend to simplify the development process dramatically, if you can afford it.

Continue reading “IR Camera Is Excellent Hacking Platform”

Hackaday Prize 2023: Eye-Tracking Wheelchair Interface Is A Big Help

For those with quadriplegia, electric wheelchairs with joystick controls aren’t much help. Typically, sip/puff controllers or eye-tracking solutions are used, but commercial versions can be expensive. [Dhruv Batra] has been experimenting with a DIY eye-tracking solution that can be readily integrated with conventional electric wheelchairs.

The system uses a regular webcam aimed at the user’s face. A Python script uses OpenCV and a homebrewed image segmentation algorithm to analyze the user’s eye position. The system is configured to stop the wheelchair when the user looks forward or up. Looking down commands the chair forward. Glancing left and right steers the chair in the given direction.
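
In rough terms, the mapping from eye position to command could look like the following sketch. The thresholds, and the pupil coordinates themselves, are placeholders standing in for [Dhruv]’s homebrewed segmentation.

```python
# Sketch of mapping a segmented pupil position to a chair command.
# Thresholds are illustrative assumptions, not the project's values.
def gaze_to_command(pupil_x, pupil_y, region_w, region_h):
    """Map pupil coordinates within the tracked eye region to a command."""
    x, y = pupil_x / region_w, pupil_y / region_h
    if x < 0.35:
        return "LEFT"
    if x > 0.65:
        return "RIGHT"
    if y > 0.65:          # looking down drives the chair forward
        return "FORWARD"
    return "STOP"         # looking ahead or up stops the chair
```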

The Python script then sends the requisite commands via a TCP connection to an ESP32, which controls a set of servos that move the wheelchair’s joystick as desired. This allows the device to be retrofitted to a wheelchair without any invasive modifications.
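
The PC-to-ESP32 link could be as simple as this sketch, which assumes the ESP32 listens for newline-terminated command strings on a known port; the address, port, and protocol here are hypothetical.

```python
# Minimal TCP command sender; the ESP32 address, port, and newline
# protocol are hypothetical, not taken from the project.
import socket

ESP32_ADDR = ("192.168.1.50", 3333)  # hypothetical address and port

def send_command(cmd):
    with socket.create_connection(ESP32_ADDR, timeout=1.0) as sock:
        sock.sendall((cmd + "\n").encode("ascii"))

send_command("FORWARD")
```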

It’s a neat idea, though it could likely benefit from some further development; a reverse feature would be particularly important, after all. However, it’s a great project that has likely taught [Dhruv] many important lessons about human-machine interfaces, particularly interfaces beyond the ones we use every day.

This project has a good lineage as well: a similar project, EyeDriveOMatic, won the Hackaday Prize back in 2015.

New Drivers For Ancient Webcam

For those of us who are a little older, the 90s seem like they were just a few years ago. The younger folks might think the 90s were ancient history, though, and they might be right: we’ve been hearing bands like Pearl Jam and The Offspring on the classic rock stations lately. Another measure of how long ago the 90s were is the technological progress made since then, viewed through the lens of something like this webcam from 1999, presuming you load up this custom user-space driver from [benjojo].

Thankfully, the driver for this infamous webcam didn’t need to be built completely from scratch. A legacy driver for Windows XP showed that the camera still physically worked, and an existing Linux driver served as a foundation to build on. From there, a USB interface was set up to communicate with the device. Not a simple task, but apparently much easier than the next step: actually interpreting the information coming from the webcam. This is where a background in digital signal processing comes in handy. First, the resolution and packet size were sorted out, which led to a somewhat recognizable image. From there, a single monochrome image was pieced together, and then, after decoding the Bayer filter and adding color, the webcam was back to its former 90s glory.
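
That last color step is a standard demosaicing job, which OpenCV can do in a single call once the raw mosaic is in hand. Here’s a sketch with an assumed resolution and Bayer pattern; the camera’s actual values may differ.

```python
# Sketch of demosaicing a raw Bayer frame into a color image.
# Resolution and Bayer pattern are assumptions, not the camera's values.
import cv2
import numpy as np

WIDTH, HEIGHT = 352, 288  # assumed CIF-era resolution

def demosaic(raw_bytes):
    mosaic = np.frombuffer(raw_bytes, dtype=np.uint8).reshape(HEIGHT, WIDTH)
    # Try the other COLOR_Bayer* variants if the colors come out swapped
    return cv2.cvtColor(mosaic, cv2.COLOR_BayerGB2BGR)
```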

[benjojo] has hosted all of the code for this project on a GitHub page for anyone who still has one of these webcams sitting around in the junk drawer. The resolution and color fidelity are about what we’d expect for a 25-year-old device that predates Skype, Facebook, Wikipedia, and Firefox. And while there are still some things to be tweaked, such as the colors, white balance, and exposure, once that’s sorted out, the 90s and early 00s nostalgia is free to flood in.

Keeping An Eye On Heating Oil

Energy costs around the world are going up, whether it’s electricity, natural gas, or gasoline. This is leading to a lot of people looking for ways to decrease their energy use, especially heading into winter in the Northern Hemisphere. As the saying goes, you can’t manage what you can’t measure, so [Steve] has built this system around monitoring the fuel oil level for his home’s furnace.

Fuel oil is an antiquated way of heating, but it’s fairly common in certain parts of the world and involves a large storage tank, typically in the home’s basement. Since the technology is so dated, it’s not straightforward to interact with these systems using anything modern. This fuel tank has a level gauge showing how full it is as a percentage. A Raspberry Pi with a small camera module is set up nearby to watch the gauge, running OpenCV to determine the current fuel level and report its findings.
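
One plausible way to read a round gauge with OpenCV is to find the needle with a Hough line transform and map its angle to a percentage. This sketches the general technique rather than [Steve]’s exact pipeline, and the needle sweep angles are assumptions.

```python
# Sketch of analog gauge reading: find the needle as the longest edge
# line, then map its angle to a fill percentage. Sweep angles assumed.
import math
import cv2
import numpy as np

EMPTY_ANGLE, FULL_ANGLE = -135.0, -45.0  # assumed needle sweep in degrees

def read_gauge(frame):
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=60,
                            minLineLength=40, maxLineGap=5)
    if lines is None:
        return None
    # Take the longest detected line as the needle
    x1, y1, x2, y2 = max(lines[:, 0],
                         key=lambda l: math.hypot(l[2] - l[0], l[3] - l[1]))
    angle = math.degrees(math.atan2(y2 - y1, x2 - x1))
    pct = (angle - EMPTY_ANGLE) / (FULL_ANGLE - EMPTY_ANGLE) * 100.0
    return max(0.0, min(100.0, pct))
```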

Since most fuel tanks are hidden in inconvenient locations, this system makes checking in on the fuel level a breeze and helps avoid running out of fuel during cold snaps. [Steve] designed the project to be reproducible even if your fuel tank differs from his. You have other options beyond OpenCV as well; this fuel tank uses ultrasonic sensors to measure the fuel depth directly.

Machine Learning Does Its Civic Duty By Spotting Roadside Litter

If there’s one thing that never seems to suffer from supply chain problems, it’s litter. It’s everywhere, easy to spot and — you’d think — pick up. Sadly, most of us seem to treat litter as somebody else’s problem, but with something like this machine vision litter mapper, you can at least be part of the solution.

For the civic-minded [Nathaniel Felleke], the litter problem in his native San Diego was getting to be too much. He reasoned that a map of where the trash is located could help municipal crews with cleanup, so he set about building a system to search for trash automatically. Using Edge Impulse and a collection of roadside images captured from a variety of sources, he built a model for recognizing trash. To find the garbage, a window-mounted webcam captures images as the car drives around, while a Raspberry Pi 4 runs the model and looks for garbage. When roadside litter is found, the Pi uses a Blues Wireless Notecard to send the GPS location of the rubbish to a cloud database via the card’s cellular modem.
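
The reporting step with Blues’ note-python library looks roughly like the sketch below; the serial port, Notefile name, and body fields are made up for illustration.

```python
# Sketch of queuing a litter sighting to the cloud via a Notecard.
# Serial port, Notefile name, and body fields are illustrative assumptions.
import serial
import notecard

port = serial.Serial("/dev/ttyACM0", 9600)  # hypothetical port
card = notecard.OpenSerial(port)

def report_litter(lat, lon, confidence):
    req = {
        "req": "note.add",
        "file": "litter.qo",   # hypothetical Notefile name
        "sync": True,          # push to the cloud immediately
        "body": {"lat": lat, "lon": lon, "confidence": confidence},
    }
    return card.Transaction(req)
```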

Cruising around the streets of San Diego, [Nathaniel]’s system builds up a database of garbage hotspots. From there, it’s pretty straightforward to pull the data and overlay it on Google Maps to create a heatmap of where the garbage lies. The video below shows his system in action.
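
Pulling the points into a heatmap takes only a few lines with something like folium, which draws on Leaflet/OpenStreetMap rather than Google Maps, though the idea is the same; the input format here is assumed.

```python
# Sketch of rendering collected coordinates as an interactive heatmap.
# Uses folium (Leaflet/OSM) in place of Google Maps; input format assumed.
import folium
from folium.plugins import HeatMap

def make_heatmap(points, out_file="litter_heatmap.html"):
    """points: list of (lat, lon) tuples pulled from the cloud database."""
    m = folium.Map(location=points[0], zoom_start=12)
    HeatMap(points).add_to(m)
    m.save(out_file)
```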

Yes, driving a personal vehicle around specifically to spot litter just adds more waste to the mix, but you could imagine putting something like this on municipal vehicles that are already driving around cities anyway. Either way, we picked up some neat tips, especially those wireless IoT cards. We’ve seen them used before, but [Nathaniel]’s project gives us a path forward on some ideas we’ve had kicking around for a while.

Continue reading “Machine Learning Does Its Civic Duty By Spotting Roadside Litter”