Hackaday Links Column Banner

Hackaday Links: June 29, 2025

In today’s episode of “AI Is Why We Can’t Have Nice Things,” we feature the Hertz Corporation and its new AI-powered rental car damage scanners. Gone are the days when an overworked human in a snappy windbreaker would give your rental return a once-over with the old Mark Ones to make sure you hadn’t messed the car up too badly. Instead, Hertz is fielding up to 100 of these “MRI scanners for cars.” The “damage discovery tool” uses cameras to capture images of the car and compares them to a model that’s apparently been trained on nothing but showroom cars. Redditors who’ve had the displeasure of being subjected to this thing report being charged egregiously high damage fees for non-existent damage. To add insult to injury, if renters want to appeal those charges, they have to argue with a chatbot first, one that offers no path to speaking with a human. While this is likely to be quite a tidy profit center for Hertz, their customers still have a vote here, and backlash will likely lead the company to adjust the model to be a bit more lenient, if not scrap the system outright.

Continue reading “Hackaday Links: June 29, 2025”

A piano is pictured with two hands playing different notes, G outlined in orange and C outlined in blue.

AI Piano Teacher To Criticize Your Every Move

Learning a new instrument is never a simple task on your own; nothing beats the instant feedback of a teacher. In our new age of AI, why not have an AI companion complain when you hit a wrong note? This is exactly what [Ada López] put together with their AI-Powered Piano Trainer.

The basics of the piano come down to rather simple Boolean actions: you either press a key or you don’t. Obviously, this sets up the piano for many fun projects, such as creative doorbells or helpful AI models. [Ada López] started by building a custom dataset of images showing specific notes being played on the piano. These images were then fed into Roboflow and used to train a YOLOv8 model.

Because of the demands of the model, inference runs on a laptop while the Raspberry Pi handles only the video feed, which keeps the feedback to the pianist nearly instant. Placing the Pi and an LCD screen for feedback into a simple enclosure makes it easy to see just how well an AI model thinks you play piano. [Ada López] demos their device by playing Twinkle Twinkle Little Star, but there’s no reason why other songs couldn’t be added!
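For a sense of how such a feedback loop might fit together, here’s a minimal sketch, assuming a YOLOv8 model trained in Roboflow with one class per note, a hypothetical stream URL for the Pi’s camera, and a hard-coded snippet of the song; the project’s actual code will differ in the details.

```python
# Minimal sketch: pull frames from the Pi's camera stream, run a Roboflow-trained
# YOLOv8 model on each one, and compare the detected note against the next note
# expected in the song. Model weights, class names, and the stream URL are
# placeholders for illustration.
import cv2
from ultralytics import YOLO

model = YOLO("piano_notes_yolov8.pt")               # hypothetical exported weights
song = ["C", "C", "G", "G", "A", "A", "G"]          # first phrase of Twinkle Twinkle

cap = cv2.VideoCapture("http://raspberrypi.local:8080/stream")  # hypothetical Pi feed
step = 0
while step < len(song):
    ok, frame = cap.read()
    if not ok:
        break
    result = model(frame, verbose=False)[0]
    detected = {model.names[int(c)] for c in result.boxes.cls}
    if song[step] in detected:
        print(f"Correct: {song[step]}")
        step += 1
    elif detected:
        print(f"Expected {song[step]}, saw {sorted(detected)}")
cap.release()
```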

While there are simpler piano trainers out there that rely on audio cues, this one is a great starting point for anyone else wanting to take up the baton. And if you’d rather do away with the physical keys entirely, then this invisible piano is perfect for you!

Cheap Endoscopic Camera Helps Automate Pressure Advance Calibration

The difference between 3D printing and good 3D printing comes down to attention to detail. There are so many settings and so many variables, each of which seems to impact the other to a degree that can make setting things up a maddening process. That makes anything that simplifies the process, such as this computer vision pressure advance attachment, a welcome addition to the printing toolchain.

If you haven’t run into the term “pressure advance” for FDM printing before, fear not; it’s pretty intuitive. It’s just a way to compensate for the elasticity of the molten plastic column in the extruder, which can cause variations in the amount of material deposited when the print head acceleration changes, such as at corners or when starting a new layer.

To automate his pressure advance calibration process, [Marius Wachtler] attached one of those dirt-cheap endoscope cameras to the print head of his modified Ender 3, pointing straight down and square with the bed. A test grid is printed in a corner of the bed, with each arm printed using a slightly different pressure advance setting. The camera takes a photo of the pattern, which is processed by computer vision to remove the background and measure the thickness of each line. The line with the least variation wins, and the pressure advance setting used to print that line is used for the rest of the print — no blobs, no blebs.
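As a rough sketch of the scoring step, assuming a simple thresholding approach: count bright pixels across each row inside a test line’s region and keep the pressure advance value whose line width varies the least. The file name, ROI coordinates, and PA values below are made up for illustration and aren’t [Marius]’s actual pipeline.

```python
# Rough sketch: threshold the endoscope photo, measure the printed line's width on
# every scan row inside each test segment, and keep the pressure advance value with
# the most consistent width. ROIs and PA values are placeholders.
import cv2
import numpy as np

def width_variation(binary, roi):
    x, y, w, h = roi
    patch = binary[y:y + h, x:x + w]
    widths = [np.count_nonzero(row) for row in patch if np.count_nonzero(row)]
    return np.std(widths) if widths else np.inf

img = cv2.imread("pa_test_grid.jpg", cv2.IMREAD_GRAYSCALE)
_, binary = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# One region per test line, each printed with a different pressure advance value.
segments = {0.02: (10, 10, 40, 200), 0.04: (60, 10, 40, 200), 0.06: (110, 10, 40, 200)}
best_pa = min(segments, key=lambda pa: width_variation(binary, segments[pa]))
print(f"Most consistent line width at pressure_advance = {best_pa}")
```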

We’ve seen other pressure advance calibrators before, but we like this one because it seems so cheap and easy to put together. True, it does mean sending images off to the cloud for analysis, but that seems a small price to pay for the convenience. And [Marius] is hopeful that he’ll be able to run the model locally at some point; we’re looking forward to that.

Continue reading “Cheap Endoscopic Camera Helps Automate Pressure Advance Calibration”

Hackaday Links Column Banner

Hackaday Links: November 24, 2024

We received belated word this week of the passing of Ward Christensen, who died unexpectedly back in October at the age of 78. If the name doesn’t ring a bell, that’s understandable, because the man behind the first computer BBS wasn’t much for the spotlight. Along with Randy Suess, and in response to the Blizzard of ’78, which kept their Chicago computer club from meeting in person, Christensen created an electronic version of a community corkboard. Suess worked on the hardware while Christensen provided the software, leveraging his XMODEM file-transfer protocol. They dubbed their creation a “bulletin board system,” and when the idea caught on, they happily shared their work so that other enthusiasts could build their own systems.

Continue reading “Hackaday Links: November 24, 2024”

Analyzing Feature Learning In Artificial Neural Networks And Neural Collapse

Artificial Neural Networks (ANNs) are commonly used for machine vision purposes, where they are tasked with object recognition. This is accomplished by taking a multi-layer network and using a training data set to configure the weights associated with each ‘neuron’. Due to the complexity of these ANNs for non-trivial data sets, it’s often hard to make heads or tails of what the network is actually matching in a given (non-training) input. A March 2024 study in Science (preprint) by [A. Radhakrishnan] and colleagues provides an approach to elucidate and diagnose this mystery somewhat, using what they call the average gradient outer product (AGOP).

Defined as the uncentered covariance matrix of the ANN’s input-output gradients, averaged over the training dataset, this property reveals which features of the data set are actually used for predictions. These turn out to be strongly correlated with repetitive information, such as the presence of eyes when recognizing whether lipstick is being worn, or star patterns in a car-and-truck data set, rather than anything to do with the (highly variable) vehicles themselves. None of this was perhaps too surprising, but a number of the same researchers have also used the AGOP to elucidate the mechanism behind neural collapse (NC) in ANNs.
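To make that definition concrete, here’s a toy sketch of the AGOP computation in PyTorch, using a made-up MLP and random data purely to show the math: average the outer product of each input-output Jacobian over the training set, then look at the matrix’s top eigenvectors to see which input directions the network actually relies on.

```python
# Toy AGOP sketch: average J(x)^T J(x) over the data, where J(x) is the Jacobian
# of the network's outputs with respect to its inputs. Model and data are stand-ins.
import torch
from torch.autograd.functional import jacobian

model = torch.nn.Sequential(
    torch.nn.Linear(10, 32), torch.nn.ReLU(), torch.nn.Linear(32, 3)
)

def agop(net, inputs):
    d = inputs.shape[1]
    total = torch.zeros(d, d)
    for x in inputs:
        J = jacobian(net, x.unsqueeze(0)).squeeze()   # (outputs, inputs) Jacobian
        total += J.T @ J                              # uncentered outer product
    return total / len(inputs)

X = torch.randn(100, 10)                 # stand-in for the training data
M = agop(model, X)
eigvals, eigvecs = torch.linalg.eigh(M)  # largest eigenvalues last
print(eigvals[-3:])                      # dominant directions ~ the learned features
```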

NC occurs when an overparametrized ANN is trained well past the point where its training error reaches zero. In the preprint paper by [D. Beaglehole] et al., the AGOP is used to provide evidence for the mechanism behind NC during feature learning. Perhaps the biggest takeaway from these papers is that while ANNs can be useful, they’re also incredibly complex and poorly understood. The more we learn about their properties, the more appropriately we can use them.

Art Exhibit Lets You Hide From Self-Driving Cars

In the discussions about how dangerous self-driving cars are – or aren’t – one thing is sorely missing, and that is an interactive game in which you do your best not to be recognized as a pedestrian and subsequently run over. Even if this is a somewhat questionable take, there’s something to be said for the interactive display over at the Asian Art Museum in San Francisco, which has you try to escape the tyranny of machine vision by getting recognized as a crab, a traffic cone, or something else that’s not pedestrian-shaped.

Daniel Coppen, one of the artists behind “How (not) to get hit by a self-driving car,” sets up a cone at the exhibit at the Asian Art Museum in San Francisco on March 22, 2024. (Credit: Stephen Council, SFGate)

The display ran from March 21st to March 23rd, with [Stephen Council] of SFGate taking a swing at the challenge. As can be seen in the above image, he managed to get labelled as ‘fire’ during one attempt while hiding behind a stop sign as he walked across the crossing. Other methods include crawling and (ab)using a traffic cone.

Created by [Tomo Kihara] and [Daniel Coppen], it’s intended to be a ‘playful, engaging game installation’. Both creators make it clear that self-driving vehicles which use LIDAR and other advanced detection methods are much harder to fool, but given how many Teslas are on the road using camera-based systems, it’s still worth demonstrating the shortcomings of the technology.

There’s no shortage of debate about whether or not autonomous vehicles are ready to share the roads with human drivers, especially when they exhibit unusual behavior. We’ve already seen protesters attempt to confuse self-driving systems with methods that aren’t far removed from what [Kihara] and [Coppen] have demonstrated here, and it seems likely such antics will only become more common with time.

Autonomous Racing Drones Are Starting To Beat Human Pilots

Even with all the technological advancements in recent years, autonomous systems have never been able to keep up with top-level human racing drone pilots. However, it looks like that gap has been closed with Swift – an autonomous system developed by the University of Zurich’s Robotics and Perception Group.

Previous research projects have come close, but they relied on optical motion-capture setups in a tightly controlled environment. In contrast, Swift is completely independent of remote inputs and uses only an onboard computer, IMU, and camera for real-time navigation and control. It does, however, require a machine learning model pretrained for the specific track, which maps the drone’s estimated position/velocity/orientation directly to control inputs. The details of how the system works are well explained in the video after the break.
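To make the “estimated state in, control commands out” idea concrete, here’s a toy sketch of that kind of policy network; the dimensions, architecture, and command layout are illustrative guesses, not the actual network described in the paper.

```python
# Toy sketch of a state-to-command policy: a small network mapping the drone's
# estimated position, velocity, and orientation to collective thrust plus body
# rates. All sizes here are assumptions for illustration.
import torch

STATE_DIM = 3 + 3 + 4    # position, velocity, orientation quaternion (assumed layout)
CMD_DIM = 1 + 3          # collective thrust + body-rate setpoints (assumed)

policy = torch.nn.Sequential(
    torch.nn.Linear(STATE_DIM, 128), torch.nn.Tanh(),
    torch.nn.Linear(128, 128), torch.nn.Tanh(),
    torch.nn.Linear(128, CMD_DIM),
)

state = torch.randn(1, STATE_DIM)   # would come from the onboard VIO/IMU estimator
command = policy(state)             # sent to the flight controller each control step
print(command)
```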

The paper linked above contains a few more interesting details. Swift was able to win 60% of the time, and its lap times were significantly more consistent than those of the human pilots. While the human pilots were often faster on certain sections of the course, Swift was faster overall, picking more efficient trajectories across multiple gates where the human pilots seemed to plan at most one gate ahead. On the other hand, human pilots could recover quickly from a minor crash, while Swift had no crash-recovery capability.

The final results are impressive, especially given that all the processing and sensing happens on the drone itself. However, it still requires a well-mapped track, so a human pilot should still come out on top given limited information about a new track. It would also be interesting to see how it handles larger courses with gates that are much further apart.

Continue reading “Autonomous Racing Drones Are Starting To Beat Human Pilots”