One of the best things about hanging out with other hackers is the freewheeling brainstorming sessions that tend to occur. Case in point: I was at the Electronica trade fair and ended up hanging out with [Stephen Hawes] and [Lucian Chapar], two of the folks behind the LumenPnP open-source pick and place machine that we’ve covered a fair number of times in the past.
Among many cool features, it has a camera mounted on the parts-moving head to find the fiducial markings on the PCB. But of course, this means a camera mounted on an almost general-purpose two-axis gantry, and that set the geeks’ minds spinning. [Stephen] was talking about how easy it would be to turn into a photo-stitching macrophotography rig, which could yield amazingly high-resolution photos.
Meanwhile [Lucian] and I were thinking about how similar this gantry was to a 3D printer, and [Lucian] asked why 3D printers don’t come with cameras mounted on the hot ends. He’d even shopped this idea around at the East Coast RepRap Festival and gotten some people excited about it.
So here’s the idea: computer vision near the extruder gives you real-time process control. You could use it to home the nozzle in Z. You could use it to tell when the filament has run out, or when the steppers have skipped steps. If you had it really refined, you could use it to compensate for other printing defects. In short, it would be a simple hardware addition that would open up a universe of computer-vision software improvements, and best of all, it’s easy enough for the home gamer to do – you’d probably only need a 3D printer.
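Just to make the simplest version concrete, here’s a rough sketch of the “has extrusion stalled?” idea using a cheap USB nozzle camera and OpenCV: watch a small window just under the nozzle and raise a flag if nothing moves for too long. The camera index, region of interest, and thresholds are all made-up placeholders, not tested values.

```python
# Hedged sketch: flag possible filament run-out or skipped steps by watching
# for frame-to-frame motion in a small region just below the nozzle.
# CAM_INDEX, ROI, and both thresholds are hypothetical placeholders.
import time
import cv2
import numpy as np

CAM_INDEX = 0                  # hypothetical: whichever /dev/video the nozzle cam is
ROI = (200, 240, 320, 380)     # hypothetical: y0, y1, x0, x1 window under the nozzle
MOTION_THRESHOLD = 2.0         # mean abs pixel change considered "still extruding"
STALL_SECONDS = 10             # how long the scene may stay static before we worry

def roi_gray(frame):
    y0, y1, x0, x1 = ROI
    return cv2.cvtColor(frame[y0:y1, x0:x1], cv2.COLOR_BGR2GRAY).astype(np.float32)

def watch_for_stall():
    cap = cv2.VideoCapture(CAM_INDEX)
    ok, frame = cap.read()
    if not ok:
        raise RuntimeError("nozzle camera not found")
    prev = roi_gray(frame)
    last_motion = time.time()
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        cur = roi_gray(frame)
        change = float(np.mean(np.abs(cur - prev)))   # crude motion metric
        prev = cur
        if change > MOTION_THRESHOLD:
            last_motion = time.time()
        elif time.time() - last_motion > STALL_SECONDS:
            print("No motion under the nozzle -- possible run-out or skipped steps")
            last_motion = time.time()                 # don't spam the alert
    cap.release()

if __name__ == "__main__":
    watch_for_stall()
```

A real version would obviously need to know when the printer is actually supposed to be extruding, but even this crude motion check shows how little code the hardware addition asks for.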
Now I’ve shared the brainstorm with you. Hope it inspires some DIY 3DP innovation, or at least encourages you to brainstorm along below.
Start by integrating a FLIR thermal camera. It would allow material studies with data from the cooling print: thermal density, load, etc. This is brainstorming, right? Scrumming, punching the clown.
… OUCH … stop doing that, will you!
Use multiple cameras. There are cheap, lightweight endoscope-class ones, cylinders 6 to 8 mm in diameter. Most are 640×480, some even 1080p. One can be angled to look at the nozzle; even at just VGA resolution it can easily give a better view than the standard-issue eyeball. Another can be pointed straight down, for optical touch-off, as in the sketch below.
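For the optical touch-off, something like this could work (all numbers invented): a fixed-focus endoscope pointed straight down comes into sharpest focus at one particular nozzle-to-bed distance, so stepping Z and looking for the sharpness peak gives a repeatable Z reference. The serial port, step size, and the focus-to-nozzle offset below are placeholder assumptions you’d have to calibrate yourself.

```python
# Hedged sketch of optical Z touch-off: sweep Z, score image sharpness with the
# variance of the Laplacian, and take the Z of peak sharpness minus a calibrated
# offset. PORT, CAM_INDEX, the sweep range, and the offset are all assumptions.
import time
import cv2
import numpy as np
import serial

PORT = "/dev/ttyUSB0"                      # hypothetical printer serial port
CAM_INDEX = 1                              # hypothetical downward-facing endoscope
Z_START, Z_END, Z_STEP = 5.0, 0.5, -0.1    # sweep range in mm (made up)
FOCUS_TO_NOZZLE_OFFSET = 1.2               # mm between best focus and nozzle tip

def sharpness(frame):
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    return cv2.Laplacian(gray, cv2.CV_64F).var()   # classic focus metric

def sweep_for_focus():
    printer = serial.Serial(PORT, 115200, timeout=2)
    cap = cv2.VideoCapture(CAM_INDEX)
    best_z, best_score = None, -1.0
    z = Z_START
    while z >= Z_END:
        printer.write(f"G1 Z{z:.2f} F300\n".encode())   # move, then let it settle
        time.sleep(1.0)
        ok, frame = cap.read()
        if ok:
            score = sharpness(frame)
            if score > best_score:
                best_z, best_score = z, score
        z += Z_STEP
    cap.release()
    printer.close()
    return best_z - FOCUS_TO_NOZZLE_OFFSET if best_z is not None else None

if __name__ == "__main__":
    print("Estimated bed Z:", sweep_for_focus())
```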
The cam on the gantry can also serve for focus stacking. A narrow depth of field, combined with a stack of images, can be a partial workaround for the need for a telecentric lens.
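The per-pixel merge is the easy part, roughly like this (filenames are just examples): score each slice’s local sharpness with the Laplacian and keep, at every pixel, whichever slice scores highest.

```python
# Rough focus-stacking sketch: pick the sharpest slice per pixel.
import cv2
import numpy as np

def focus_stack(paths):
    images = [cv2.imread(p) for p in paths]
    grays = [cv2.cvtColor(im, cv2.COLOR_BGR2GRAY) for im in images]
    # absolute Laplacian response, slightly blurred so the decision is less noisy
    sharpness = [cv2.GaussianBlur(np.abs(cv2.Laplacian(g, cv2.CV_64F)), (9, 9), 0)
                 for g in grays]
    best = np.argmax(np.stack(sharpness), axis=0)    # index of sharpest slice per pixel
    stack = np.stack(images)                         # (N, H, W, 3)
    rows, cols = np.indices(best.shape)
    return stack[best, rows, cols]                   # gather the sharpest pixels

if __name__ == "__main__":
    result = focus_stack(["slice_00.png", "slice_01.png", "slice_02.png"])
    cv2.imwrite("stacked.png", result)
```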
For CNC machines, a side-mounted cam on the bed can be used for optical sensing of the length and dimensions of a tool. By mechanically tracking the tool outline through the center of the image, we can again cheat the need for a telecentric lens, trading a little bit of time for a big chunk of money. A minimal version is sketched below.
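Minimal version: silhouette the tool against a bright background, threshold, find the lowest dark pixel near the image center, and convert pixels to millimetres with a pre-measured scale. The scale factor and reference row here are invented calibration values.

```python
# Sketch of side-camera tool measurement; MM_PER_PIXEL and REFERENCE_ROW are
# hypothetical calibration values (e.g. from imaging a known-length gauge pin).
import cv2
import numpy as np

MM_PER_PIXEL = 0.02    # hypothetical scale
REFERENCE_ROW = 400    # hypothetical pixel row for a known reference tool length
CENTER_BAND = 20       # only look this many pixels either side of image center

def tool_tip_row(frame):
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    cx = mask.shape[1] // 2
    band = mask[:, cx - CENTER_BAND: cx + CENTER_BAND]
    rows = np.where(band.any(axis=1))[0]      # rows containing the tool silhouette
    return int(rows.max()) if len(rows) else None

def tool_length_offset(frame):
    row = tool_tip_row(frame)
    if row is None:
        return None
    return (row - REFERENCE_ROW) * MM_PER_PIXEL   # positive = longer than reference

if __name__ == "__main__":
    frame = cv2.imread("tool_side_view.png")
    print("Tool length offset (mm):", tool_length_offset(frame))
```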
With angled laser lines, a 3D scan of the object on the bed can be done (or by using time-of-flight cams like the Kinect or Arducam’s new Kickstarter offering). That can be handy for automatic exclusion zones around fixtures, making the machine somewhat more self-aware and able to refuse to blindly crash a tool into the vise just because the G-code said so. A bit of machine vision can go a long way.
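The core of the laser-line version is just “brightest row per column,” something like the sketch below; the baseline row and mm-per-row triangulation scale are placeholders that would come from calibration against a flat bed.

```python
# Stripped-down laser-line profile extractor; BASELINE_ROW and MM_PER_ROW are
# hypothetical calibration constants for the stripe geometry.
import cv2
import numpy as np

BASELINE_ROW = 240    # hypothetical stripe position on a bare, flat bed
MM_PER_ROW = 0.05     # hypothetical triangulation scale

def laser_profile(frame):
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype(np.float32)
    gray = cv2.GaussianBlur(gray, (3, 3), 0)     # tame sensor noise a little
    peak_rows = np.argmax(gray, axis=0)          # brightest row in each column
    return (BASELINE_ROW - peak_rows) * MM_PER_ROW   # one height sample per column

if __name__ == "__main__":
    frame = cv2.imread("laser_line_capture.png")
    profile = laser_profile(frame)
    print("max height along this stripe: %.2f mm" % profile.max())
```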
For laser engraving, a camera can assist with autocalibration of the material’s response: run through a range of beam settings, then automatically detect the cutoff below which the surface ignores the beam, and the power at which maximum darkness is reached.
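In code that might look roughly like this: burn a row of test patches at increasing power, photograph them, measure mean darkness per patch, then report the first power that visibly marks the surface and the power beyond which darkness stops increasing. Patch positions and both thresholds are invented for illustration.

```python
# Hedged sketch of engraving power autocalibration from a photo of a test card.
import cv2
import numpy as np

MARK_DELTA = 15      # darkness change (0-255) considered "the beam did something"
PLATEAU_DELTA = 3    # darkness gain per step below which we call it saturated

def patch_darkness(image, box):
    x, y, w, h = box
    gray = cv2.cvtColor(image[y:y + h, x:x + w], cv2.COLOR_BGR2GRAY)
    return 255.0 - float(gray.mean())    # higher = darker burn

def calibrate(image, powers, boxes):
    darkness = [patch_darkness(image, b) for b in boxes]
    baseline = darkness[0]               # assume the first patch is unmarked
    threshold_power = saturation_power = None
    for i in range(1, len(powers)):
        if threshold_power is None and darkness[i] - baseline > MARK_DELTA:
            threshold_power = powers[i]
        if threshold_power is not None and darkness[i] - darkness[i - 1] < PLATEAU_DELTA:
            saturation_power = powers[i - 1]
            break
    return threshold_power, saturation_power

if __name__ == "__main__":
    img = cv2.imread("power_test_card.png")
    powers = [5, 10, 15, 20, 25, 30, 35, 40]                # percent, example sweep
    boxes = [(50 + 60 * i, 50, 40, 40) for i in range(8)]   # example patch positions
    print(calibrate(img, powers, boxes))
```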
For general surface engraving, EDM machining, or ink deposition with a pen, such a 3D scan, if sufficiently accurate, can provide the Z-map needed to follow the surface. If too coarse, it can still provide contours for fine probing with a touch probe.
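Following the surface from a coarse scan is basically mesh bed leveling: bilinearly interpolate the measured grid to get a Z offset for whatever XY the toolpath visits. Grid origin, pitch, and the example heights below are illustrative only.

```python
# Minimal sketch: bilinear interpolation of a scanned Z-map at arbitrary XY.
import numpy as np

GRID_ORIGIN = (0.0, 0.0)   # XY of heightmap[0][0], in mm (assumed)
GRID_PITCH = 10.0          # mm between scan points (assumed)

def z_offset(heightmap, x, y):
    hm = np.asarray(heightmap, dtype=float)
    gx = (x - GRID_ORIGIN[0]) / GRID_PITCH
    gy = (y - GRID_ORIGIN[1]) / GRID_PITCH
    # clamp to the grid so points just outside still get a sane value
    gx = min(max(gx, 0.0), hm.shape[1] - 1.001)
    gy = min(max(gy, 0.0), hm.shape[0] - 1.001)
    i, j = int(gy), int(gx)
    fy, fx = gy - i, gx - j
    top = hm[i, j] * (1 - fx) + hm[i, j + 1] * fx
    bot = hm[i + 1, j] * (1 - fx) + hm[i + 1, j + 1] * fx
    return top * (1 - fy) + bot * fy

if __name__ == "__main__":
    scan = [[0.00, 0.05, 0.12],
            [0.02, 0.08, 0.15],
            [0.05, 0.11, 0.20]]      # 3x3 example scan, mm above nominal Z
    print("Z offset at (12.5, 7.0):", round(z_offset(scan, 12.5, 7.0), 4))
```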
Just some thoughts…
This video does a great job of showing some of the challenges of building 3D-printer nozzle cameras from endoscope cameras: https://youtu.be/GAp23w_dnNc
Everything a hot-end camera could do would be better done with encoders on the motors, torque sensors, and perhaps a pressure sensor on the nozzle to check the output flow. That would give direct analogue and digital readings with no need for fancy, compute-hungry, error-prone image-recognition software.