Raytracing makes the design easier, but the building is still as tricky as ever.

A 10″ Telescope, Because You Only Live Once

Why build a telescope? YOLO, as the kids say. Having decided that, one must decide what type of far-seer one will construct. For his 10″ reflector, [Carl Anderson] once again said “Yolo”— this time not as a slogan, but in reference to a little-known type of reflecting telescope.

Telescope or sci-fi laser gun? YOLO, just try it.

The Yolo-pattern telescope was proposed by [Art Leonard] back in the 1960s, and was apparently named for a county in California. It differs from the standard Newtonian reflector in that it uses two concave spherical mirrors of very long radius to produce a light path with no obstructions. (This differs from the similar Schiefspiegler that uses a convex secondary.) The Yolo never caught on, in part because of the need to stretch the primary mirror in a warping rig to correct for coma and astigmatism.

[Carl] doesn’t bother with that, instead using modern techniques to precisely calculate and grind the required toric profile into the mirror. Grinding and polishing was done on motorized jigs [Carl] built, save for the very final polishing. (A quick demo video of the polishing machine is embedded below.)
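
For a sense of what that toric profile means in practice: because the two radii of curvature differ slightly, the surface sag is different along the mirror's two axes. Here's a rough paraxial sketch of that, with made-up radii standing in for [Carl]'s actual prescription:

```python
# Paraxial sag of a toric surface: different radii of curvature along x and y.
# The radii below are purely illustrative, not [Carl]'s actual figures.
R_X = 7620.0   # mm, radius of curvature along the x axis (hypothetical)
R_Y = 7680.0   # mm, slightly different radius along the y axis (hypothetical)

def toric_sag(x, y):
    """Approximate surface depth in mm at point (x, y), measured from the vertex."""
    return x**2 / (2 * R_X) + y**2 / (2 * R_Y)

# Difference in sag between the two axes at the edge of a 10" (127 mm radius) blank:
print(toric_sag(127.0, 0.0) - toric_sag(0.0, 127.0))  # on the order of microns
```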

The body of the telescope is a wooden truss, sheathed in plywood. Three-point mirror mounts allowed for the final adjustment. [Carl] seems to prefer observing by eye to astrophotography, as there are no photos through the telescope. Of course, an astrophotographer probably would not have built an F/15 (yes, fifteen) telescope to begin with. The view through the eyepiece on the rear end must be astounding.

If you’re inspired to spend your one life scratch-building a telescope, but want something more conventional, check out this comprehensive guide. You can go a bit more modern with 3D printed parts, but you probably don’t want to try spin-casting resin mirrors. Or maybe you do: YOLO!

Image Recognition On 0.35 Watts

Much of the expense of developing AI models, and much of the recent backlash to said models, stems from the massive amount of power they tend to consume. If you’re willing to sacrifice some ability and accuracy, however, you can get ever-more-decent results from minimal hardware – a tradeoff taken by the Grove Vision AI board, which runs image recognition in near-real time on only 0.35 Watts.

The heart of the board is a WiseEye processor, which combines two ARM Cortex M55 CPUs with an Ethos U55 NPU to handle AI acceleration. The board connects to a camera module and a host device, such as another microcontroller or a more powerful computer. When the host device sends the signal, the Grove board takes a picture, runs image recognition on it, and sends the results back to the host. A library makes signaling over I2C convenient, but in this example [Jaryd] used a UART.
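
The exact wire protocol depends on the firmware and library in use, so take this as a hypothetical host-side sketch rather than the board's real API; it assumes the board answers a trigger command with one line of JSON describing the detection:

```python
import json
import serial  # pyserial

# Hypothetical trigger/read loop for the host side. The real Grove Vision AI
# protocol (I2C or UART via the vendor library) differs in the details.
PORT = "/dev/ttyUSB0"   # assumed serial port on the host
BAUD = 115200           # assumed baud rate

with serial.Serial(PORT, BAUD, timeout=2) as link:
    link.write(b"SNAP\n")                 # assumed "capture and run inference" command
    reply = link.readline().decode(errors="replace").strip()
    if reply:
        result = json.loads(reply)        # e.g. {"label": "person", "score": 0.87}
        print(result.get("label"), result.get("score"))
```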

To let it run on such low-power hardware, the image recognition model needs some limits; it can run YOLOv8, but it can only recognize one object, runs at a reduced resolution of 192×192, and has to be quantized down to INT8. Within those limits, though, the performance is impressive: 20-30 fps, good accuracy, and as [Jaryd] points out, less power consumption than a single key on a typical RGB-backlit keyboard. If you want another model, there are quite a few available, though apparently of varying quality. If all else fails, you can always train your own.
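
Getting a model inside those limits is mostly an export exercise. With the Ultralytics tooling, an INT8, 192×192 TFLite export boils down to a couple of calls; the model and calibration dataset below are stand-ins rather than [Jaryd]'s exact setup:

```python
from ultralytics import YOLO

# Start from a small pretrained YOLOv8 model and export a quantized copy.
model = YOLO("yolov8n.pt")

# INT8 quantization needs a small calibration dataset; "coco128.yaml" is just a
# placeholder. Ideally you calibrate on images that resemble your deployment scene.
model.export(format="tflite", imgsz=192, int8=True, data="coco128.yaml")
```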

Robot Dinosaur YOLOs Colors And Shapes For Kids

YOLO can mean many things, but in the context of [be_riddickulous]’s AI Talking Robot Dinosaur it refers to the “You Only Look Once” YOLOv11 object-detection algorithm by Ultralytics, the method by which this adorable dino recognizes colors and shapes to teach them to children.

If you’re new to using YOLO or object recognition more generally, [be_riddickulous]’s tutorial is not a bad place to get started. She goes through how many images you’ll need and what types are required for the shape-and-color recognition this project calls for, as well as how to annotate them and train the model, either locally or in the cloud.
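
If you just want the shape of the workflow before diving into the tutorial, training a custom model with the Ultralytics package comes down to a dataset YAML and a couple of calls. The file names here are placeholders, not [be_riddickulous]'s actual dataset:

```python
from ultralytics import YOLO

# "shapes.yaml" is a hypothetical dataset description listing the annotated
# images and class names (e.g. "red_circle", "blue_square", "yellow_triangle").
model = YOLO("yolo11n.pt")                       # small pretrained model to fine-tune
model.train(data="shapes.yaml", epochs=100, imgsz=640)

metrics = model.val()                            # check accuracy on the validation split
model.export(format="onnx")                      # optional: export for deployment elsewhere
```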

The project itself is an adorable paper-mache dinosaur with a servo-actuated mouth hiding some LEDs and a Raspberry Pi camera module to provide images. In operation, the dinosaur “talks” to children using pre-recorded voice lines, inviting them to play a game and put a specific shape, or shape of a specific color (or both) in its mouth. Then the aforementioned object detection (running on a laptop) goes “YOLO” and identifies the shape so the toy can provide feedback on the child’s choice via a speaker in the belly of the beast.
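
The play loop is conceptually simple: grab a frame, run the model, and compare the detections against whatever the dinosaur asked for. A bare-bones sketch of that idea follows; the weights file, camera index, and class names are assumptions, not the project's actual code:

```python
import cv2
from ultralytics import YOLO

model = YOLO("shapes_best.pt")        # hypothetical fine-tuned weights
cam = cv2.VideoCapture(0)             # camera exposed as a normal video device (assumed)
asked_for = "red_triangle"            # whatever the dinosaur just prompted for

ok, frame = cam.read()
if ok:
    results = model(frame)[0]
    labels = {results.names[int(c)] for c in results.boxes.cls}
    print("Well done!" if asked_for in labels else "Try again!")  # the real toy plays a voice line
cam.release()
```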

The link to the game code is currently not valid, but it looks like they used PyGame for the audio output code. A servo motor controls the mouth, but without that code it’s not entirely clear to us what it’s doing. We expect that by the time you read this, there are good odds [be_riddickulous] will have fixed that link and you can see for yourself.

The only thing that holds this back from being a great toy to put in every kindergarten class is the need to have a laptop close by to plug the webcam into. A Raspberry Pi 5 ought to have the horsepower to run YOLOv11, so with a little extra effort the whole thing could be standalone — there might even be room in there for batteries.

We’ve had other hacks aimed at little ones, like a kid-friendly computer to relive the glory days of the school computer lab or one of the many iterations of the RFID jukebox idea. If you want to wow the kiddos with AI, perhaps take a look at this talking Santa plush.

Got a cool project, AI, kid-related, or otherwise? Don’t forget to toss us a tip!

Hackaday Links: September 29, 2024

There was movement in the “AM Radio in Every Vehicle Act” last week, with the bill advancing out of the US House of Representatives Energy and Commerce Committee and heading to a full floor vote. For those not playing along at home, auto manufacturers have been making moves toward deleting AM radios from cars because they’re too sensitive to all the RF interference generated by modern vehicles. The trouble with that is that the government has spent a lot of effort on making AM broadcasters the centerpiece of a robust and survivable emergency communications system that reaches 90% of the US population.

The bill would require cars and trucks manufactured or sold in the US to be equipped to receive AM broadcasts without further fees or subscriptions, and seems to enjoy bipartisan support in both the House and the Senate. Critics of the bill will likely point out that while the AM broadcast system is a fantastic resource for emergency communications, if nobody is listening to it when an event happens, what’s the point? That’s fair, but short-sighted; emergency communications isn’t just about warning people that something is going to happen, but coordinating the response after the fact. We imagine Hurricane Helene’s path of devastation from Florida to Pennsylvania this week and the subsequent emergency response might bring that fact into focus a bit.

Render Yourself Invisible To AI With This Adversarial Sweater Of Doom

Ugly sweater season is rapidly approaching, at least here in the Northern Hemisphere. We’ve always been a bit baffled by the tradition of paying top dollar for a loud, obnoxious sweater that gets worn to exactly one social event a year. We don’t judge, of course, but that’s not to say we wouldn’t look a little more favorably on someone’s fashion choice if it were more like this AI-defeating adversarial ugly sweater.

The idea behind this research from the University of Maryland is not, of course, to inform fashion trends, nor is it to create a practical invisibility cloak. It’s really to probe machine learning systems for vulnerabilities by making small changes to the input while watching for changes in the output. In this case, the ML system was a YOLO-based vision system which has little trouble finding humans in an arbitrary image. The adversarial pattern was generated using a large set of training images, some of which contain the objects of interest — in this case, humans. Each time a human is detected, a random pattern is rendered over the image, and the data is reassessed to see how much the pattern lowers the object’s score. The adversarial pattern eventually improves to the point where it mostly prevents humans from being recognized. Much more detail is available in the research paper (PDF) if you want to dig into the guts of this.
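
To make that loop concrete, here is a heavily stripped-down sketch of the optimization using a stock YOLOv5 model from torch.hub. It leaves out the paper's random patch placement, scaling, and printability constraints, and the synthetic images are stand-ins for a real photo dataset; it illustrates the approach rather than reproducing the authors' code:

```python
import torch

# Load the raw detector (no pre/post-processing wrapper) and freeze its weights.
model = torch.hub.load("ultralytics/yolov5", "yolov5s", autoshape=False).eval()
for p in model.parameters():
    p.requires_grad_(False)

patch = torch.rand(3, 100, 100, requires_grad=True)   # the adversarial pattern being learned
opt = torch.optim.Adam([patch], lr=0.01)

def paste(img, pat, y=270, x=270):
    """Overlay the patch at a fixed spot (the paper randomizes position and scale)."""
    out = img.clone()
    out[:, :, y:y + 100, x:x + 100] = pat.clamp(0, 1)
    return out

for step in range(100):
    imgs = torch.rand(1, 3, 640, 640)        # stand-in for a batch of real training photos
    pred = model(paste(imgs, patch))[0]       # shape [1, N, 85]: xywh, objectness, 80 classes
    person = pred[..., 4] * pred[..., 5]      # objectness times "person" class probability
    loss = person.max()                       # push down the strongest person detection
    opt.zero_grad()
    loss.backward()
    opt.step()
```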

The pattern, which looks a little like a bad impressionist painting of people buying pumpkins at a market and bears some resemblance to one we’ve seen before in similar work, is said to work better from different viewing angles. It also makes a spiffy pullover, especially if you’d rather blend in at that Christmas party.

Laser Zaps Cockroaches Over One Meter

You may have missed this month’s issue of Oriental Insects, in which a project by [Ildar Rakhmatulin] of Heriot-Watt University in Edinburgh caught our attention. [Ildar] led a team of researchers in the development of an AI-controlled laser that neutralizes moving cockroaches at distances of up to 1.2 meters. Noting the various problems with using chemical pesticides for pest control, his team sought out a non-conventional approach.

The heart of the pest controller is a Jetson Nano, which uses OpenCV and YOLO object detection to find the cockroaches, and galvanometers to steer the laser beam. Three different lasers were used for testing, allowing the team to evaluate a range of wavelengths, power levels, and spot sizes. Unsurprisingly, the higher-power 1.6 W laser was the most efficient and the quickest.
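
The interesting glue is turning a detection in camera pixels into galvanometer commands. One common way to do that is a perspective mapping calibrated from a few known laser positions; the numbers and the final output step here are placeholders, not the project's actual calibration:

```python
import cv2
import numpy as np

# Where the laser spot appeared in the camera image for four known galvo commands,
# measured once during calibration. All values below are made up for illustration.
pixel_pts = np.float32([[100, 80], [540, 90], [530, 420], [110, 430]])
galvo_pts = np.float32([[-1.0, -1.0], [1.0, -1.0], [1.0, 1.0], [-1.0, 1.0]])
H = cv2.getPerspectiveTransform(pixel_pts, galvo_pts)

def aim(cx, cy):
    """Map the centre of a detection (pixels) to normalized galvo commands."""
    gx, gy = cv2.perspectiveTransform(np.float32([[[cx, cy]]]), H)[0, 0]
    return float(gx), float(gy)

# Centre of a YOLO bounding box (x1, y1, x2, y2) for a detected roach:
x1, y1, x2, y2 = 300, 200, 340, 230
print(aim((x1 + x2) / 2, (y1 + y2) / 2))   # these would go to whatever drives the galvos
```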

The project is on GitHub (here) and the cockroach machine learning image set is available here. But as [Ildar] points out in the conclusion of the report, this is dangerous. It’s suitable for academic research, but it’s not quite ready for general use, lacking any safety features. This report is full of cockroach trivia, such as the fact that the average speed of a cockroach is 4.8 km/h, and that they run much faster when being zapped. If you want to experiment with cockroaches yourself, a link is provided to a pet store that sells Blattella germanica, the German cockroach that was the target of this report.

If this project sounds familiar, that’s because it improves on a previous project we wrote about last year, which used similar techniques to zap mosquitoes.

Teaching A Machine To Be Worse At A Video Game Than You Are

Is it really cheating if the aimbot you’ve built plays the game worse than you do?

We vote no, and while we take a dim view of cheating in general, there are still some interesting hacks in this AI-powered bot for Valorant. This is a team-based first-person shooter with a lot of action and a Counter-Strike vibe. As [River] points out, most cheat-bots have direct access to the memory of the computer which is playing the game, which gives them an unfair advantage over human players, who have to visually process the game field and make their moves in meatspace. To make the Valorant-bot more of a challenge, he decided to feed video of the game from one computer to another over an HDMI-to-USB capture device.

The second machine has a YOLOv5 model which was trained against two hours of gameplay, enough to identify friend from foe — most of the time. Navigation around the map was done by analyzing the game’s on-screen minimap with OpenCV and doing some rudimentary path-finding. Actually controlling the player on the game machine was particularly hacky; rather than rely on an API to send keyboard sequences, [River] used a wireless mouse dongle on the game machine and a USB transmitter on the second machine.
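
Because the capture dongle enumerates as an ordinary webcam on the second machine, the perception side boils down to reading frames and running the model on them. A minimal sketch of that loop, with the device index and weights path as assumptions rather than [River]'s actual settings:

```python
import cv2
import torch

# The HDMI-to-USB dongle shows up as a normal video device on the second PC.
cap = cv2.VideoCapture(1)                        # assumed device index
model = torch.hub.load("ultralytics/yolov5", "custom",
                       path="valorant_best.pt")  # hypothetical fine-tuned weights

while True:
    ok, frame = cap.read()
    if not ok:
        break
    results = model(frame)                       # run detection on the captured frame
    for *box, conf, cls in results.xyxy[0].tolist():
        print(model.names[int(cls)], round(conf, 2), box)  # friend/foe boxes in pixels
cap.release()
```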

The results are iffy, to say the least. The system tends to get the player stuck in corners, and doesn’t recognize enemies that pop up at close range. The former is a function of the low-res minimap, while the latter has to do with the training data set — most human players engage enemies at distance, so there’s a dearth of “bad breath range” encounters to train on. Still, we’re impressed that it’s possible to train a machine to play a complex FPS game at all, let alone this well.