
Hackaday Links: November 24, 2024

We received belated word this week of the passing of Ward Christensen, who died unexpectedly back in October at the age of 78. If the name doesn’t ring a bell, that’s understandable, because the man behind the first computer BBS wasn’t much for the spotlight. Along with Randy Suess, and in response to the Blizzard of ’78, which kept their Chicago computer club from meeting in person, Christensen created an electronic version of a community corkboard. Suess worked on the hardware while Christensen provided the software, leveraging his XMODEM file-transfer protocol. They dubbed their creation a “bulletin board system,” and when the idea caught on, they happily shared their work so that other enthusiasts could build their own systems.

Continue reading “Hackaday Links: November 24, 2024”

Analyzing Feature Learning In Artificial Neural Networks And Neural Collapse

Artificial Neural Networks (ANNs) are commonly used for machine vision purposes, where they are tasked with object recognition. This is accomplished by taking a multi-layer network and using a training data set to configure the weights associated with each ‘neuron’. Due to the complexity of these ANNs for non-trivial data sets, it’s often hard to make heads or tails of what the network is actually matching on in a given (non-training) input. A March 2024 study in Science (preprint) by [A. Radhakrishnan] and colleagues provides an approach to elucidate and diagnose this mystery somewhat, using what they call the average gradient outer product (AGOP).

Defined as the uncentered covariance matrix of the ANN’s input-output gradients, averaged over the training dataset, the AGOP reveals which features of the data the network actually relies on for its predictions. These turn out to be strongly correlated with repetitive information, such as the presence of eyes when recognizing whether lipstick is being worn, or star patterns in a car-and-truck data set rather than anything to do with the (highly variable) vehicles themselves. None of this was perhaps too surprising, but a number of the same researchers also used the AGOP to elucidate the mechanism behind neural collapse (NC) in ANNs.
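
For the curious, the AGOP is straightforward to compute once you can take input gradients of a trained model. Below is a minimal NumPy sketch using a toy two-layer network as a stand-in for a real trained ANN; the model and its sizes are purely illustrative, not anything from the paper.

```python
# Minimal sketch of the Average Gradient Outer Product (AGOP), assuming a
# scalar-output model f(x); the toy model and its sizes are illustrative only.
import numpy as np

rng = np.random.default_rng(0)

# Toy "trained model": a fixed two-layer network with a tanh hidden layer.
W1 = rng.normal(size=(16, 5))   # hidden-layer weights (hypothetical)
w2 = rng.normal(size=16)        # output weights (hypothetical)

def f(x):
    return w2 @ np.tanh(W1 @ x)

def grad_f(x):
    # Analytic input gradient of the toy model: d f / d x.
    h = np.tanh(W1 @ x)
    return W1.T @ ((1.0 - h**2) * w2)

def agop(X):
    # Uncentered covariance of input-output gradients, averaged over the data:
    # AGOP = (1/n) * sum_i grad_f(x_i) grad_f(x_i)^T
    grads = np.stack([grad_f(x) for x in X])    # shape (n, d)
    return grads.T @ grads / len(X)             # shape (d, d)

X_train = rng.normal(size=(200, 5))
M = agop(X_train)

# The top eigenvectors of M indicate which input directions (features)
# the model's predictions are most sensitive to.
eigvals, eigvecs = np.linalg.eigh(M)
print("leading eigenvalues:", eigvals[-3:])
```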

NC occurs when an overparametrized ANN is trained well past the point of fitting its training data. In the preprint paper by [D. Beaglehole] et al., the AGOP is used to provide evidence for the mechanism behind NC during feature learning. Perhaps the biggest takeaway from these papers is that while ANNs can be useful, they’re also incredibly complex and poorly understood. The more we learn about their properties, the more appropriately we can use them.

Art Exhibit Lets You Hide From Self-Driving Cars

In the discussions about how dangerous self-driving cars are – or aren’t – one thing is sorely missing: an interactive game in which you do your best not to be recognized as a pedestrian and subsequently run over. Even if this is a somewhat questionable take, there’s something to be said for the interactive display over at the Asian Art Museum in San Francisco, which has you try to escape the tyranny of machine vision by getting yourself recognized as a crab, a traffic cone, or something else that isn’t pedestrian-shaped.

Daniel Coppen, one of the artists behind “How (not) to get hit by a self-driving car,” sets up a cone at the exhibit at the Asian Art Museum in San Francisco on March 22, 2024. (Credit: Stephen Council, SFGate)

The display ran from March 21st to March 23rd, with [Stephen Council] of SFGate taking a swing at the challenge. As can be seen in the image above, he managed to get labelled as ‘fire’ during one attempt while hiding behind a stop sign as he walked across the crossing. Other methods include crawling and (ab)using a traffic cone.

Created by [Tomo Kihara] and [Daniel Coppen], it’s intended to be a ‘playful, engaging game installation’. Both creators make it clear that self-driving vehicles which use LIDAR and other advanced detection methods are much harder to fool, but given how many Teslas are on the road using camera-based systems, it’s still worth demonstrating the shortcomings of the technology.

There’s no shortage of debate about whether or not autonomous vehicles are ready to share the roads with human drivers, especially when they exhibit unusual behavior. We’ve already seen protesters attempt to confuse self-driving systems with methods that aren’t far removed from what [Kihara] and [Coppen] have demonstrated here, and it seems likely such antics will only become more common with time.

Autonomous Racing Drones Are Starting To Beat Human Pilots

Even with all the technological advancements in recent years, autonomous systems have never been able to keep up with top-level human racing drone pilots. However, it looks like that gap has been closed with Swift – an autonomous system developed by the University of Zurich’s Robotics and Perception Group.

Previous research projects have come close, but they relied on optical motion capture in a tightly controlled environment. In contrast, Swift is completely independent of remote inputs and relies only on an onboard computer, an IMU, and a camera for real-time navigation and control. It does, however, require a machine learning model pretrained for the specific track, which maps the drone’s estimated position, velocity, and orientation directly to control inputs. The details of how the system works are well explained in the video after the break.
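
To make the “state in, commands out” idea concrete, here’s a rough Python sketch of such a policy. The state layout, network size, and command ordering are our own assumptions for illustration, not Swift’s actual architecture.

```python
# Minimal sketch of a state-to-command mapping, assuming a small feed-forward
# policy; the state layout and network sizes are illustrative choices only.
import numpy as np

rng = np.random.default_rng(1)

STATE_DIM = 3 + 3 + 9   # position, velocity, rotation matrix (flattened)
CMD_DIM = 4             # collective thrust + three body rates

# Stand-ins for weights that would be pretrained on a specific, mapped track.
W1 = rng.normal(scale=0.1, size=(64, STATE_DIM))
W2 = rng.normal(scale=0.1, size=(CMD_DIM, 64))

def policy(position, velocity, rotation):
    """Map the estimated drone state directly to control commands."""
    state = np.concatenate([position, velocity, rotation.ravel()])
    hidden = np.tanh(W1 @ state)
    return W2 @ hidden          # [thrust, roll_rate, pitch_rate, yaw_rate]

# One control step: estimate the state from IMU + camera (elsewhere), then act.
cmd = policy(np.zeros(3), np.zeros(3), np.eye(3))
print(cmd)
```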

The paper linked above contains a few more interesting details. Swift was able to win 60% of the time, and its lap times were significantly more consistent than those of the human pilots. While the human pilots were often faster on certain sections of the course, Swift was faster overall. It picked more efficient trajectories across multiple gates, whereas the human pilots seemed to plan at most one gate ahead. On the other hand, the human pilots could recover quickly from a minor crash, while Swift had no crash recovery at all.

The final results are impressive, especially given that all the processing and sensing happens on the drone itself. However, it still requires a well-mapped track, so a human pilot should still come out on top when given limited information about a new course. It would also be interesting to see how it handles larger courses with gates that are much farther apart.

Continue reading “Autonomous Racing Drones Are Starting To Beat Human Pilots”

A 3D-printed bottle and a wristband whose embedded QR codes are invisible in normal light but show up clearly when viewed through a smartphone’s IR camera.

Fluorescent Filament Makes Object Identification Easier

QR codes are a handy way to embed information, but they aren’t exactly pretty. New work from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) demonstrates a way to produce high-contrast QR codes that are invisible to the naked eye. [PDF]

If this sounds familiar, you may remember CSAIL’s previous project embedding QR codes into 3D prints via IR-transparent filament. This follow-up to that research improves detection of the tagged objects by using an IR-fluorescent filament instead. Another benefit of the new approach is that while InfraredTags could be any color you wanted as long as it was black, BrightMarkers can be embedded in objects of any color, since the IR-active component is embedded in traditional filament rather than the other way around.

One of the more interesting applications is privacy-preserving object detection since the computer vision system only “sees” the fluorescent objects. The example given is marking a box of valuables in a home to be detected by interior cameras without recording the movements of the home’s occupants, but the possibilities certainly don’t end there, especially given the other stated application of tactile interfaces for VR or AR systems.
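
As a rough illustration of that idea, the sketch below reads frames from an IR-sensitive camera (assumed here to show up as an ordinary video device) and decodes any QR codes it finds with OpenCV; anything that isn’t a fluorescing marker is simply never recognized. The camera index and overall setup are assumptions, not details from the paper.

```python
# Hypothetical sketch of the "only sees the fluorescent objects" idea: read
# frames from an IR camera and decode QR codes, ignoring everything else.
import cv2

cap = cv2.VideoCapture(0)          # index of the IR camera is an assumption
detector = cv2.QRCodeDetector()

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    data, points, _ = detector.detectAndDecode(gray)
    if data:
        # Only the fluorescing marker is visible to the IR camera, so the
        # decoded payload identifies the tagged object without recording
        # the rest of the scene in visible light.
        print("tagged object:", data)
    if cv2.waitKey(1) == 27:       # Esc to quit
        break

cap.release()
```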

We’re interested to see if the researchers can figure out how to tune the filament to fluoresce in more colors to increase the information density of the codes. Now, go forth and 3D print a snake with Snake in a QR code inside!

Continue reading “Fluorescent Filament Makes Object Identification Easier”

A wooden robot with a large Fresnel lens in a sunny garden.

Gardening Robot Uses Sunlight To Incinerate Weeds

Removing weeds is a chore few gardeners enjoy, as it typically involves long sessions of kneeling in the dirt and digging around for anything you don’t remember planting. Herbicides also work, but spraying poison all over your garden comes with its own problems. Luckily, there’s now a third option: [NathanBuildsDIY] designed and built a robot to help him get rid of unwanted plants without getting his hands dirty.

Constructed mostly from scrap pieces of wood and riding on a pair of old bicycle wheels, the robot has a pretty low-tech look to it. But it is in fact a very advanced piece of engineering that uses multiple sensors and actuators while running on a sophisticated software platform. The heart of the system is a Raspberry Pi, which drives a pair of DC motors to move the whole system along [Nathan]’s garden while scanning the ground below through a camera.

Machine vision software identifying a weed in a picture of garden soil.

The Pi runs the camera’s pictures through a TensorFlow Lite model that can identify weeds. [Nathan] built this model himself by taking hundreds of pictures of his garden and manually sorting them into categories like “soil”, “plant” and “weed”. Once a weed has been detected, the robot proceeds to destroy it by concentrating sunlight onto it through a large Fresnel lens. The lens is mounted in a frame that can be moved in three dimensions through a set of servos. A movable lens cover turns the incinerator beam on or off.
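
The inference step itself is the familiar TensorFlow Lite routine. Here’s a minimal sketch, assuming a classifier trained on “soil”, “plant”, and “weed” crops; the model file name, input format, and label ordering are assumptions rather than details of [Nathan]’s actual build.

```python
# Minimal sketch of classifying a camera frame with a TensorFlow Lite model;
# file names, input scaling, and label order are hypothetical.
import numpy as np
from PIL import Image
from tflite_runtime.interpreter import Interpreter   # or tf.lite.Interpreter

LABELS = ["soil", "plant", "weed"]                    # assumed ordering

interpreter = Interpreter(model_path="weed_model.tflite")  # hypothetical file
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]
_, height, width, _ = inp["shape"]

def classify(image_path):
    # Resize the frame to the model's input size and run one inference pass.
    img = Image.open(image_path).convert("RGB").resize((width, height))
    x = np.expand_dims(np.asarray(img, dtype=np.float32) / 255.0, axis=0)
    interpreter.set_tensor(inp["index"], x)
    interpreter.invoke()
    scores = interpreter.get_tensor(out["index"])[0]
    return LABELS[int(np.argmax(scores))]

if classify("camera_frame.jpg") == "weed":
    print("weed detected -- aim the lens here")
```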

Sunlight is focused onto the weed through a simple but clever two-step procedure. First, the rough position of the lens relative to the sun is adjusted with the help of a sun tracker made from four light sensors arranged around a cross-shaped piece of cardboard. Then, the shadow cast by the lens cover onto the ground is observed by the Pi’s camera, and the lens is focused by adjusting its position until the image formed by four holes in the lens cover lands right on top of the target.
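
The tracking step boils down to balancing two pairs of opposing light sensors. The sketch below shows that control loop with the hardware access stubbed out; the sensor readings, deadband value, and servo interface are placeholders, not [Nathan]’s actual code.

```python
# Sketch of the two-axis sun-tracking logic: four light sensors sit around a
# cross-shaped divider, and the lens frame is nudged until opposite sensors
# read the same. Hardware access is stubbed out; a real build would read an
# ADC and drive servos here instead.
import time

def read_sensors():
    # Placeholder: return (top, bottom, left, right) light levels, 0.0-1.0.
    return (0.52, 0.48, 0.50, 0.55)

def nudge(axis, direction, step=1):
    # Placeholder for a small servo move on 'tilt' or 'pan'.
    print(f"move {axis} by {direction * step}")

DEADBAND = 0.02   # ignore differences smaller than this (assumed value)

def track_once():
    top, bottom, left, right = read_sensors()
    if abs(top - bottom) > DEADBAND:
        nudge("tilt", +1 if top > bottom else -1)
    if abs(left - right) > DEADBAND:
        nudge("pan", +1 if left > right else -1)

while True:
    track_once()
    time.sleep(0.5)
```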

Once the focus is correct, the lens cover is removed and the weed is burned to a crisp by the concentrated sunlight. It’s pretty neat to see how well this works, although [Nathan] recommends you keep an eye on the robot while it’s working and don’t let it near any flammable materials. He describes the build process in full detail in his video (embedded below), hopefully enabling other gardeners to make their own, improved weed burner robots. Agricultural engineers have long been working on automatic weed removal, often using similar machine vision systems with various extermination methods like lasers or flamethrowers.

Continue reading “Gardening Robot Uses Sunlight To Incinerate Weeds”

Machine Vision Automates Trainspotting With Unique Full-Length Portraits

As hobbies go, trainspotting is just as valid a choice as any — we don’t judge. But it does present certain logistical challenges, such as having to be in visual range of a train to be able to spot it. There’s also the fact that trains are very large objects, and they tend to move very fast. What’s a railfan to do?

If you’re also technically minded, you might try building an automatic trainspotting bot like [jo-m] has. The hardware end of “Trainbot” looks pretty simple, since it has been tested on both x86 and the Raspberry Pi and supports both Video4Linux and the Pi camera. The magic is in the software, which detects a train entering the frame, records images, and then stitches them together into one long image. The whole thing is written in Go and has some interesting bits, like a custom image patch mapping package.
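
Trainbot itself is in Go, but the general detect-and-stitch idea is easy to sketch. Purely as an illustration (and not [jo-m]’s actual method), the Python snippet below estimates the horizontal shift between consecutive frames with phase correlation and appends a strip of that width to a growing panorama; the video file name is a placeholder.

```python
# Illustrative detect-and-stitch sketch: estimate per-frame horizontal motion
# and concatenate strips into one long image of the passing train.
import cv2
import numpy as np

cap = cv2.VideoCapture("crossing.mp4")      # hypothetical source video
ok, prev = cap.read()
if not ok:
    raise SystemExit("could not read video")
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY).astype(np.float32)
strips = []

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype(np.float32)
    # Phase correlation gives the (dx, dy) translation between frames.
    (dx, dy), _ = cv2.phaseCorrelate(prev_gray, gray)
    shift = int(round(abs(dx)))
    if shift > 0:                            # something is moving through frame
        mid = frame.shape[1] // 2
        strips.append(frame[:, mid:mid + shift])
    prev_gray = gray

if strips:
    panorama = np.concatenate(strips, axis=1)
    cv2.imwrite("train.png", panorama)
```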

Trainbot gives an unusual view of a train, one that most of us who are used to watching a train pass at a crossing have never seen. By stitching together small chunks of the train as it passes, Trainbot can show the entire train in a single image, which would otherwise be impossible without being very, very far away from the track. [jo-m] also built a web interface for Trainbot where you can check out the comings and goings yourself. Each passing train’s image is accompanied by data like its velocity and acceleration, the length of the train, and the time of passage. There’s also a GIF of the original source video, which is pretty cool.

Here in the States, we don’t have a lot of passenger trains to spot, but we do have some really long freight trains. It’d be interesting to see how this works with a train that’s over a mile long; that would be quite an image. Looks like someone at least has the hardware in place to give it a try.