This Machine-Vision Ekranoplan Might Just Follow You Home

What is it that’s not quite either a plane or a boat, but has characteristics of both? There are probably a lot of things that fit that description, but the one that [Nick Rehm] is working on is known as an ekranoplan. Specifically, he’s looking to make the surface-skimming ground-effect vehicle operate autonomously.

If you think you’ve heard about ekranoplans around here before, you’d be right — we’ve covered a cool LIDAR-controlled model ekranoplan that [rctestflight] worked on about a year ago, and more recently, [ThinkFlight]’s attempts to make an autonomous ekranoplan that can follow behind a boat. The latter is where [Nick] enters the collaboration, and the featherweight foam ground-effect vehicle shown in the video below is his test platform.

After sorting out the basic airframe design and getting the LIDAR integrated, he turned his attention to the autonomous bit, which relies on a Raspberry Pi 4 running ROS and a camera with a wide-angle lens. The Pi uses machine vision algorithms to find an “AprilTag” fiducial marker in the scene, which gives the flight controller information about the orientation of the ekranoplan relative to the tag. [Nick] tested tag tracking using an electric longboard, and the model ekranoplan did an admirable job of not only managing the ground effect, but also staying on target right behind him. And hats off to [Nick] for keeping all the balls in the air and not breaking his neck in the process.
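[Nick]’s actual code isn’t reproduced here, but the core of the tag-tracking step — detect an AprilTag, recover the relative pose, and hand the offsets to the flight controller — is compact enough to sketch. Here’s a minimal illustration using the pupil-apriltags library; the camera intrinsics and tag size below are placeholder values you’d replace with your own calibration.

```python
# Minimal AprilTag-following sketch (an illustration, not [Nick]'s code).
# Assumes the pupil-apriltags package and a calibrated camera;
# CAMERA_PARAMS and TAG_SIZE_M are placeholder values.
import cv2
from pupil_apriltags import Detector

TAG_SIZE_M = 0.16                             # printed tag edge length, metres (assumed)
CAMERA_PARAMS = (600.0, 600.0, 320.0, 240.0)  # fx, fy, cx, cy (assumed)

detector = Detector(families="tag36h11")
cap = cv2.VideoCapture(0)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    detections = detector.detect(
        gray,
        estimate_tag_pose=True,
        camera_params=CAMERA_PARAMS,
        tag_size=TAG_SIZE_M,
    )
    for det in detections:
        # pose_t is the tag's position in the camera frame (metres);
        # the lateral and vertical offsets would become steering and
        # height-hold corrections for the flight controller.
        x, y, z = det.pose_t.flatten()
        print(f"tag {det.tag_id}: offset x={x:+.2f} y={y:+.2f} range {z:.2f} m")
```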

We’re looking forward to seeing what [Nick] built here end up in [ThinkFlight]’s big ekranoplan build. Ground-effect vehicles like these are undeniably cool, and it seems like they’ve got the potential to solve some interesting transportation problems.

Continue reading “This Machine-Vision Ekranoplan Might Just Follow You Home”

a 3D printed box with a Terminator head watching a camera

Machine Vision Helps You Terminate Failing 3D Print Jobs

If you’re a 3D printer user, you’re probably familiar with that dreaded feeling of returning to your printer a few hours after submitting a big job, only to find that it threw an error and stopped printing, or worse, turned half a spool of filament into a useless heap of twisted plastic. While some printers come with remote monitoring facilities, [Kutluhan Aktar]’s doesn’t, so he built a device that keeps a watchful eye on his 3D printer and notifies him if anything’s amiss.

The device does this by tracking the movement of the print head using a camera and looking for any significant changes in motion. If, for example, the Y-axis suddenly stops moving and doesn’t resume within a reasonable amount of time, it will generate a warning message and send it to its owner through Telegram. If all three axes stop moving, then either the print is finished or some serious error occurred, both of which require user intervention.

The camera [Kutluhan] used is a HuskyLens AI camera that can detect objects and output a set of 3D coordinates describing their motion. A set of QR-like AprilTags attached to the moving parts of the 3D printer help the camera to identify the relevant components. The software runs on a Raspberry Pi housed in a 3D-printed enclosure with a T-800 Terminator head on top to give it a bit of extra presence.
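The project write-up has the real code; the watchdog logic itself is simple enough to sketch. In this hedged example, read_axis_positions() is a hypothetical stand-in for pulling tag coordinates off the HuskyLens, the alert goes out through the standard Telegram Bot API sendMessage endpoint, and the token, chat ID, and thresholds are all placeholders.

```python
# Hedged sketch of the stall-watchdog idea (not [Kutluhan]'s exact code).
# read_axis_positions() is a hypothetical stand-in for the HuskyLens
# coordinate readout; BOT_TOKEN and CHAT_ID are placeholders.
import time
import requests

BOT_TOKEN = "123456:ABC..."   # placeholder Telegram bot token
CHAT_ID = "000000"            # placeholder chat ID
STALL_SECONDS = 120           # how long an axis may sit still (assumed)
MIN_MOVEMENT = 3              # pixels of motion that count as "moving" (assumed)

def read_axis_positions():
    """Hypothetical helper: return (x, y, z) tag coordinates from the camera."""
    raise NotImplementedError

def notify(text):
    # Standard Telegram Bot API call.
    requests.get(
        f"https://api.telegram.org/bot{BOT_TOKEN}/sendMessage",
        params={"chat_id": CHAT_ID, "text": text},
        timeout=10,
    )

last_pos = read_axis_positions()
last_move = time.time()
while True:
    time.sleep(5)
    pos = read_axis_positions()
    if any(abs(a - b) >= MIN_MOVEMENT for a, b in zip(pos, last_pos)):
        last_move = time.time()
        last_pos = pos
    elif time.time() - last_move > STALL_SECONDS:
        notify("3D printer: no axis movement detected -- check the print!")
        last_move = time.time()   # reset so the same warning isn't spammed
```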

[Kutluhan]’s description of the project covers lots of detail on how to set up the camera and hook it up to a Telegram bot that enables it to send automated messages, so it’s an interesting read even if you’re not planning to 3D print something to check on your 3D printer. After all, software like Octoprint has many similar features, but having an independent observer can still be a good safety feature to prevent some types of catastrophic failure.

Continue reading “Machine Vision Helps You Terminate Failing 3D Print Jobs”

Camera held in hand

Review: Vizy Linux-Powered AI Camera

Vizy is a Linux-based “AI camera” built around the Raspberry Pi 4 that uses machine learning and machine vision to pull off some neat tricks, with a design centered on hackability. I found it ridiculously simple to get up and running, and just as easy to make changes of my own and start getting ideas.

Person and cat with machine-generated tags identifying them
Out of the box, Vizy is only a couple lines of Python away from being a functional Cat Detector project.

I was running pre-installed examples written in Python within minutes, and editing that very same code in about 30 seconds more. Even better, I did it all without installing a development environment, or even leaving my web browser, for that matter. I have to say, it made for a very hacker-friendly experience.
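Vizy’s own Python API handles this sort of thing, and I won’t try to reproduce it from memory here. But just to show how low the bar for a basic cat detector has gotten, here’s a generic sketch in plain OpenCV using the cat-face Haar cascade that ships with the library — nothing Vizy-specific about it.

```python
# Generic cat detector in plain OpenCV -- not Vizy's API, just an
# illustration of how little code a basic detector needs.
import cv2

# The cat-face Haar cascade ships with OpenCV itself.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalcatface.xml"
)

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in cascade.detectMultiScale(gray, 1.1, 4):
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        cv2.putText(frame, "cat", (x, y - 6),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 255, 0), 2)
    cv2.imshow("cat detector", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
```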

Vizy comes from the folks at Charmed Labs; this isn’t their first stab at smart cameras, and it shows. They also created the Pixy and Pixy 2 cameras, of which I happen to own several. I have always devoured anything that makes machine vision more accessible and easier to integrate into projects, so when Charmed Labs kindly offered to send me one of their newest devices, I was eager to see what was new.

I found Vizy to be a highly polished platform with a number of truly useful hardware and software features, and a focus on accessibility and ease of use that I really hope to see more of in future embedded products. Let’s take a closer look.

Continue reading “Review: Vizy Linux-Powered AI Camera”

Wordle bot

Solving Wordle By Adding Machine Vision To A 3D Printer

Truth be told, we haven’t jumped on the Wordle bandwagon yet, mainly because we don’t need to be provided with yet another diversion — we’re more than capable of finding our own rabbit holes to fall down, thank you very much. But the word puzzle does look intriguing, and since the rules and the interface are pretty simple, it’s no wonder we’ve seen a few efforts like this automated Wordle solver crop up lately.

The goal of Wordle is to find a specific five-letter, more-or-less-common English word in as few guesses as possible. Clues are given at each turn by color-coding the letters to indicate whether they appear in the word and whether they’re in the right position. [iamflimflam1]’s approach was to mount a Raspberry Pi camera over the bed of a 3D printer and fit a phone stylus in place of the print head. A phone running Wordle is placed on the printer bed, and OpenCV is used to find both the screen of the phone and the position of the phone on the printer bed. From there, the robot uses the stylus to enter an opening word, analyzes the colors of the boxes, and homes in on a solution.
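The machine-vision and motion-control plumbing lives in the source code; the solving logic itself boils down to filtering a word list against each round of feedback. A rough sketch of that filtering step (not [iamflimflam1]’s actual code), using ‘g’, ‘y’, and ‘b’ for green, yellow, and gray, looks something like this:

```python
# Rough sketch of the Wordle filtering step (not [iamflimflam1]'s code).
# Feedback is a 5-char string: 'g' = green, 'y' = yellow, 'b' = gray.

def score(guess, answer):
    """Compute Wordle-style feedback for a guess against a known answer."""
    result = ["b"] * 5
    remaining = list(answer)
    for i, (g, a) in enumerate(zip(guess, answer)):   # greens first
        if g == a:
            result[i] = "g"
            remaining.remove(g)
    for i, g in enumerate(guess):                     # then yellows
        if result[i] == "b" and g in remaining:
            result[i] = "y"
            remaining.remove(g)
    return "".join(result)

def matches(word, guess, feedback):
    """Would this candidate have produced this feedback for this guess?"""
    return score(guess, word) == feedback

candidates = ["crane", "slate", "stale", "caste", "crate"]  # toy word list
guess, feedback = "crane", "bbgbg"   # e.g. colors read back by OpenCV
candidates = [w for w in candidates if matches(w, guess, feedback)]
print(candidates)   # -> ['slate', 'stale']
```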

The video below shows the bot in use, and source code is available if you want to try it yourself. If you need a deeper dive into Wordle solving algorithms, and indeed other variant puzzles in the *dle space, check out this recent article on reverse engineering the popular game.

Continue reading “Solving Wordle By Adding Machine Vision To A 3D Printer”

Invisible 3D Printed Codes Make Objects Interactive

An interesting research project out of MIT shows that it’s possible to embed machine-readable labels into 3D printed objects using nothing more than an FDM printer and filament that is transparent to IR. The method is being called InfraredTags; by embedding something like a QR code or ArUco markers into an object’s structure, that label can be detected by a camera and interactive possibilities open up.

One simple proof of concept is a wireless router with its SSID embedded into the side of the device, and the password embedded into a different code on the bottom to ensure that physical access is required to obtain the password. Mundane objects can have metadata embedded into them, or provide markers for augmented reality functionality, like tracking the object in 3D.

How are the codes actually embedded? The process is straightforward with the right tools. The team used a specialty filament from vendor 3dk.berlin that looks nearly opaque in the visible spectrum, but transmits roughly 45% in IR. The machine-readable label gets embedded within the walls of a printed object either by using a combination of IR PLA and air gaps to represent the geometry of the code, or by making a multi-material print using IR PLA and regular (non-IR transmitting) PLA. Both provide enough contrast for an IR-sensitive camera to detect the label, although the multi-material version works a little better overall. Sadly, the average mobile phone camera by itself isn’t sufficiently IR-sensitive to passively read these embedded tags, so the researchers used readily available cameras without IR-blocking filters, like the Raspberry Pi NoIR.
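The reading side needs nothing exotic once you have an IR-sensitive camera: stretch the low contrast, then hand the frame to an ordinary code detector. Here’s a hedged sketch using OpenCV’s built-in QR decoder — not the paper’s reference implementation, and the contrast-enhancement settings are guesses you’d tune for your filament:

```python
# Hedged sketch of reading a low-contrast embedded code (not the
# InfraredTags reference implementation). Assumes a frame captured
# from an IR-sensitive camera such as the Raspberry Pi NoIR.
import cv2

frame = cv2.imread("noir_frame.png")          # placeholder capture
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

# The IR contrast between filament and air gaps is modest, so
# stretch it locally with CLAHE before attempting a decode.
clahe = cv2.createCLAHE(clipLimit=3.0, tileGridSize=(8, 8))
enhanced = clahe.apply(gray)

detector = cv2.QRCodeDetector()
text, points, _ = detector.detectAndDecode(enhanced)
if text:
    print("decoded:", text)
else:
    print("no code found in this frame")
```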

The PDF has deeper details of the implementation for those of you who want to know more, and you can see a demonstration of a few different applications in the video, embedded below. Determining the provenance of 3D printed objects is a topic of some debate in the industry, and it’s not hard to see how technology like this could be used to covertly identify objects without compromising their appearance.

Continue reading “Invisible 3D Printed Codes Make Objects Interactive”

Robot with glowing eyes

Spatial AI And CV Hack Chat

Join us on Wednesday, December 1 at noon Pacific for the Spatial AI and CV Hack Chat with Erik Kokalj!

A lot of what we take for granted these days existed only in the realm of science fiction not all that long ago. And perhaps nowhere is this more true than in the field of machine vision. The little bounding box that pops up around everyone’s face when you go to take a picture with your cell phone is a perfect example; it seems so trivial now, but just think about what’s involved in putting that little yellow box on the screen, and how implausible it would have been just 20 years ago.

Erik Kokalj

Perhaps even more exciting than the development of computer vision systems is their accessibility to anyone, as well as their move into the third dimension. No longer confined to flat images, spatial AI and CV systems seek to extract information from the position of objects relative to others in the scene. It’s a huge leap forward in making machines see the way we see, and make decisions based on that information.

To help us along the road to incorporating spatial AI into our projects, Erik Kokalj will stop by the Hack Chat. Erik does technical documentation and support at Luxonis, a company working on the edge of spatial AI and computer vision. Join us as we explore the depths of spatial AI.

Our Hack Chats are live community events in the Hackaday.io Hack Chat group messaging. This week we’ll be sitting down on Wednesday, December 1st at 12:00 PM Pacific time. If time zones have you tied up, we have a handy time zone converter.

SLA printer rigged for time lapse

Silky Smooth Resin Printer Timelapses Thanks To Machine Vision

The fascination of watching a 3D printer go through its paces does tend to wear off after you’ve spent a few hours doing it, in which case those cool time-lapse videos come in handy. Trouble is, they tend to look choppy and unpleasant unless the exposures are synchronized to the motion of the gantry. That’s easy enough to do on FDM printers, but resin printers are another thing altogether.

Or are they? [Alex] found a way to make gorgeous time-lapse videos of resin printers that have to be seen to be believed. The advantage of his method is that it’ll work with any camera and requires no hardware other than a little LED throwie attached to the build platform of the printer. The LED acts as a fiducial that OpenCV can easily find in each frame, one that indicates the Z-axis position of the stage when the photo was taken. A Python program then sorts the frames, so it looks like the resin print is being pulled out of the vat in one smooth pull.
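Conceptually the pipeline is short: find the brightest blob in every frame, use its height as the sort key, and write the frames back out in platform order. Something like this simplified sketch (not [Alex]’s actual code — his repo has the real thing) captures the idea, using OpenCV’s classic blur-then-minMaxLoc bright-spot trick:

```python
# Simplified version of the LED-as-fiducial frame sort (not [Alex]'s code).
# Assumes the time-lapse frames live in ./frames as JPEGs.
import glob
import os
import cv2

def led_height(path):
    """Return the image-space y coordinate of the brightest spot (the LED)."""
    gray = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2GRAY)
    # Blur first so a single hot pixel can't win over the LED blob.
    blurred = cv2.GaussianBlur(gray, (41, 41), 0)
    _, _, _, max_loc = cv2.minMaxLoc(blurred)
    return max_loc[1]   # smaller y = higher in the frame

# Largest y (lowest platform position) first, so the print appears
# to rise smoothly out of the vat over the finished video.
frames = sorted(glob.glob("frames/*.jpg"), key=led_height, reverse=True)

os.makedirs("sorted", exist_ok=True)
for i, path in enumerate(frames):
    cv2.imwrite(f"sorted/{i:05d}.jpg", cv2.imread(path))
```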

To smooth things out further, [Alex] also used frame interpolation to fill in the gaps where the build platform appears to jump between frames, using real-time intermediate flow estimation, or RIFE. The details of that technique alone were worth the price of admission, and the results are spectacular. [Alex] kindly provides his code if you want to give this a whack; it’s almost worth buying a resin printer just to try.

Is there a resin printer in your future? If so, you might want to look over [Donald Papp]’s guide to the pros and cons of SLA compared to FDM printers.

Continue reading “Silky Smooth Resin Printer Timelapses Thanks To Machine Vision”