Finding Plastic Spaghetti With Machine Learning

Among 3D printer owners, “spaghetti” is the common term for the tangled mess of stringy plastic that’s often the result of a failed print. Fear of their print bed turning into a hot plate of PLA spaghetti is enough to keep many users from leaving their machines operating overnight or while they’re out of the house. Accordingly, we’ve seen a number of methods that allow the human operator to watch their print remotely to make sure everything is progressing smoothly.

But unless you plan on keeping your eyes on your phone the entire time you’re out of the house, there’s still a chance some PETG pasta might sneak its way out. Enter the Spaghetti Detective, an open source project that lets machine learning take over when you can’t sit watching the printer all day. Their system plugs into Octoprint to monitor your print in real time and pause it if it starts looking particularly stringy. The concept is still under development, but judging by the gallery of results submitted by users, the system seems to have a knack for identifying non-edible noodles.

Once the software comes out of beta, it looks like the team is going to try to monetize it by providing hosting and monitoring services for a monthly fee, but as it’s an open source project, you’re also able to run the software on your own machine. The documentation notes, though, that the lowly Raspberry Pi doesn’t have quite what it takes to handle the image recognition routines, so you’ll need a proper computer if you want to self-host the service. Could be a good use for that old laptop you’ve got kicking around the lab.

As demonstrated in the video after the break, the system’s “spaghetti confidence” is shown with a simple-to-understand gauge: green means a good-looking print, and red means the detective is getting a sniff of the stringy stuff. If your print dips into the red too much, Octoprint is commanded to pause the print. The user can then look at the last image from the printer and decide either to cancel the print entirely or to resume it if the Spaghetti Detective got a little overzealous.
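For the curious, pausing a job programmatically is a one-call affair against Octoprint’s REST job API. Here’s a minimal sketch of how a detector score could trigger that pause; the host, API key, threshold, and detector hookup are all hypothetical stand-ins, not the Spaghetti Detective’s actual code:

    import requests

    OCTOPRINT_URL = "http://octopi.local"  # hypothetical OctoPrint host
    API_KEY = "YOUR_OCTOPRINT_API_KEY"     # generated in OctoPrint's settings
    RED_THRESHOLD = 0.8                    # hypothetical "in the red" cutoff

    def pause_print():
        """Ask OctoPrint to pause the running job via its REST job API."""
        requests.post(
            f"{OCTOPRINT_URL}/api/job",
            json={"command": "pause", "action": "pause"},
            headers={"X-Api-Key": API_KEY},
            timeout=5,
        )

    def on_new_score(spaghetti_confidence):
        # Score from the detector: 0.0 means clean, 1.0 means full pasta.
        if spaghetti_confidence > RED_THRESHOLD:
            pause_print()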

Frankly, it’s a brilliant idea, and we’re very interested to see where it goes from here. Assuming you’ve got Octoprint controlling your 3D printer, there are already some very clever monitoring systems out there, but since spaghetti isn’t the only thing a rogue 3D printer can cook up, having an extra line of defense sounds like a good idea to us.

10 thoughts on “Finding Plastic Spaghetti With Machine Learning”

  1. I don’t think this is the best approach – it’s throwing way too much horsepower at the problem without much thought. Given the number of printers I’ve peered into, illumination is usually an issue, and the lovely spaghetti can be pretty much anywhere in the print plane of the head. Since no one wants to train a model by deliberately blowing up a lot of their own prints, differences in machines, filament colors, and everything else enter the mix. I would argue that the very nature of a failed spaghetti print makes it amenable to another analysis, basically one of line detection. The system’s going to get a ridiculously high edge count in the area where the spaghetti is, much more than in any other situation. Simple techniques for blur analysis, like computing the variance of the Laplacian, would work (see the sketch after this thread), as discussed here – https://www.pyimagesearch.com/2015/09/07/blur-detection-with-opencv/

    1. I totally agree. It seems like a better solution, and one that could be run on a Raspberry Pi. It could be an Octoprint plugin. Do you have any links to projects trying this approach?
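A minimal sketch of the variance-of-the-Laplacian check suggested above, using OpenCV; the webcam index, baseline, and spike factor are illustrative assumptions, since a real setup would calibrate them against frames of a known-good print:

    import cv2

    TEXTURE_BASELINE = 120.0  # illustrative: typical variance on a healthy print
    SPIKE_FACTOR = 2.0        # illustrative: flag if edge content doubles

    def laplacian_variance(frame_bgr):
        """Variance of the Laplacian: a cheap measure of high-frequency edge content."""
        gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
        return cv2.Laplacian(gray, cv2.CV_64F).var()

    cap = cv2.VideoCapture(0)  # first attached webcam
    ok, frame = cap.read()
    cap.release()
    if ok and laplacian_variance(frame) > TEXTURE_BASELINE * SPIKE_FACTOR:
        print("Edge content spiked -- possible spaghetti, worth a look")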

  2. I wonder if an alternative approach wouldn’t be better. Even older slicers have been able to generate a render of the work at any layer in the print. There are existing tools that will take a picture of the print after each layer with the print head off to the side so it doesn’t interfere with the photo. Combining the two should allow a tool to compare the render and actual print to see if things are going off the rails.

    1. That is exactly what I proposed a little over a year ago. You wouldn’t even have to move the print head, as the issues would show up under it. All the software would have to do is render what the whole object looks like from the camera’s perspective, trim the rendered image to fit with the current layer height, use that as a mask against an image of the background, and look for differences.
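A minimal sketch of that render-versus-photo comparison, assuming the slicer render and the camera frame are already the same size and registered to the same viewpoint (the registration is the hard part in practice; the filenames and thresholds here are hypothetical):

    import cv2
    import numpy as np

    CHANGED_FRACTION_LIMIT = 0.02  # illustrative: tolerate 2% of pixels differing

    render = cv2.imread("expected_layer.png", cv2.IMREAD_GRAYSCALE)  # hypothetical file
    photo = cv2.imread("camera_frame.png", cv2.IMREAD_GRAYSCALE)     # hypothetical file

    diff = cv2.absdiff(render, photo)
    _, changed = cv2.threshold(diff, 40, 255, cv2.THRESH_BINARY)  # 40 is illustrative
    fraction = np.count_nonzero(changed) / changed.size
    if fraction > CHANGED_FRACTION_LIMIT:
        print(f"{fraction:.1%} of pixels diverge from the render -- possible failure")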

  3. Wouldn’t it be enough to have a very basic classifier? Class A: nozzle touching part (good); class B: nozzle pushing out plastic in thin air (bad). For this to work one doesn’t need a neural network – some image conditioner and a trained SVM would suffice, and it could be performed on an RPi3 at some 20–30 fps. The problem would be equipping the print head with a suitably mounted camera that looks exactly at the right spot. The main benefit, as I see it, would be that it’s an early warning system: see it before the entire roll of filament has become ramen.
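A minimal sketch of that two-class idea, using HOG features as the “image conditioner” and a linear SVM via scikit-learn; the crop size and training data are illustrative assumptions:

    import cv2
    import numpy as np
    from sklearn.svm import LinearSVC

    # HOG over a 64x64 crop around the nozzle: winSize, blockSize, blockStride,
    # cellSize, nbins.
    hog = cv2.HOGDescriptor((64, 64), (16, 16), (8, 8), (8, 8), 9)

    def features(crop_gray):
        """Condition a grayscale nozzle-area crop into a fixed-length HOG vector."""
        resized = cv2.resize(crop_gray, (64, 64))
        return hog.compute(resized).ravel()

    def train(crops, labels):
        # labels: 0 = nozzle touching part (good), 1 = extruding into thin air (bad)
        X = np.stack([features(c) for c in crops])
        clf = LinearSVC()
        clf.fit(X, labels)
        return clf

    # Per frame: clf.predict([features(new_crop)]) yields 0 (good) or 1 (bad).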

  4. For April Fool’s Day people need to post photos of perfect cubes on their print beds, with captions like “This was supposed to be a bird’s nest. I hate print failures!”

  5. My approach would be to use a camera mounted close in on the head, with only a few mm of FOV, looking exactly at the feed coming out; if it bends away from where it’s supposed to be, alert.
    You might be able to draw a box around where it should be and alert if it moves out of that box.
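A minimal sketch of that box check, assuming a light-colored filament against a darker background; the box coordinates and tolerance would have to be calibrated for the actual camera mount:

    import cv2
    import numpy as np

    EXPECTED_BOX = (280, 200, 80, 120)  # x, y, w, h -- calibrated per camera mount
    STRAY_PIXEL_LIMIT = 150             # illustrative tolerance

    def stray_filament_pixels(frame_bgr):
        """Count bright filament-ish pixels that fall outside the expected box."""
        gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
        _, mask = cv2.threshold(gray, 200, 255, cv2.THRESH_BINARY)  # light filament
        x, y, w, h = EXPECTED_BOX
        mask[y:y + h, x:x + w] = 0  # ignore everything inside the expected box
        return int(np.count_nonzero(mask))

    # if stray_filament_pixels(frame) > STRAY_PIXEL_LIMIT: raise the alert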
