Ask Hackaday MRRF Edition: 3D Printers Can Catch Fire

[Jay] out of the River City Labs Hackerspace in Peoria, IL cleared out a jam in his printer. It’s an operation most of us who own a 3D printer have performed. He reassembled the nozzle, but in the process forgot to tighten down the grub screw that holds the heater cartridge in place. He started a print, saw the first layer go down right, and left the house at 8:30 for work. When he came back from work at 10:30 he didn’t see the print he expected, but was instead greeted by acrid smoke and a burnt-out printer.

The approximate start time of the fire can be guessed by the height of the print before failure.

As far as he can figure, sometime around the thirty-minute mark the heater cartridge vibrated out of the block. The printer saw a drop in temperature and increased the power to the cartridge. Since the cartridge was now hanging in air while the thermistor that reads the temperature was still attached to the block, the printer kept pouring on power. Eventually the cartridge, with no place to dump the energy being fed into it, burst into flame. This resulted in the carnage pictured. Luckily the Zortrax is a solidly built all-metal printer, so there wasn’t much fuel for the fire, but the damage is total and the fire could easily have spread.
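This failure mode is exactly what the thermal-runaway protection in modern printer firmware (Marlin, for example) is designed to catch: if the heater is being driven hard but the thermistor isn’t responding, kill the heaters. Here’s a minimal sketch of that check in Python; the class name, thresholds, and timing are all invented for illustration, not lifted from any real firmware.

```python
class ThermalRunawayGuard:
    """Fault if the heater runs near full power but the thermistor
    reading fails to rise, suggesting the cartridge has left the block."""

    def __init__(self, window_s=40, min_rise_c=2.0, duty_threshold=0.9):
        self.window_s = window_s            # how long we tolerate no progress
        self.min_rise_c = min_rise_c        # required temperature gain in that window
        self.duty_threshold = duty_threshold
        self.baseline_temp = None
        self.elapsed = 0.0

    def update(self, temp_c, heater_duty, dt_s):
        """Return True if safe, False once a runaway fault is detected."""
        if heater_duty < self.duty_threshold:
            # Heater isn't working hard; nothing to watch for.
            self.baseline_temp = None
            self.elapsed = 0.0
            return True
        if self.baseline_temp is None:
            self.baseline_temp = temp_c
        self.elapsed += dt_s
        if temp_c - self.baseline_temp >= self.min_rise_c:
            # Temperature is climbing as expected; restart the window.
            self.baseline_temp = temp_c
            self.elapsed = 0.0
            return True
        return self.elapsed < self.window_s

guard = ThermalRunawayGuard()
# Cartridge hanging in air: full power, block temperature stuck at 60 C.
ok = True
for _ in range(50):                 # 50 ticks of one second each
    ok = guard.update(temp_c=60.0, heater_duty=1.0, dt_s=1.0)
print(ok)  # False: the guard trips once 40 s pass with no temperature rise
```

The real thing lives in the temperature interrupt and latches the fault, but the logic is the same: full power with a flat thermistor curve means the heat is going somewhere it shouldn’t.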

Which brings us to the topics of discussion.

How much can we trust our own work? We all have our home-builds, and once you’ve put a lot of work into a printer, you want to see it print a lot of things. I regularly leave the house with a print running and have a few other home projects going 24/7. Am I being arrogant? Should I treat my home work with a lesser degree of trust than something built by a larger organization? Or is the chance about the same? Continue reading “Ask Hackaday MRRF Edition: 3D Printers Can Catch Fire”

Using Photogrammetry To Design 3D Printed Parts

[Stefan] is building a fixed wing drone, and with that comes the need for special mounts and adapters for a GoPro. The usual way of creating an adapter is pulling out a ruler and calipers, measuring everything, making a 3D model, and sending it off to a 3D printer. Instead of doing things the usual way, [Stefan] is using photogrammetric 3D reconstruction to build a camera adapter that fits perfectly in his plane and holds a camera securely.

Photogrammetry requires taking a few dozen pictures with a camera, using software to turn these 2D images into a 3D model, and building the new part from that model. The software [Stefan] is using is Pix4D, a piece of software that is ordinarily used to create large-scale 3D models from drone footage.

With the 2D images turned into a 3D model, [Stefan] imported the .obj file into MeshLab where the model could be cropped, smoothed, and the file size reduced. From there, creating the adapter was as simple as a little bit of OpenSCAD and sending the adapter model off to a 3D printer.
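The cleanup step in MeshLab boils down to throwing away geometry outside the region of interest and reindexing what survives. Here’s a toy numpy sketch of that cropping operation; the function name, box criterion, and example mesh are our own, not MeshLab’s internals.

```python
import numpy as np

def crop_mesh(vertices, faces, box_min, box_max):
    """Keep only faces whose vertices all fall inside an axis-aligned box,
    reindexing the surviving vertices. A toy stand-in for MeshLab's crop."""
    inside = np.all((vertices >= box_min) & (vertices <= box_max), axis=1)
    keep_face = inside[faces].all(axis=1)      # a face survives if all 3 corners do
    new_index = -np.ones(len(vertices), dtype=int)
    kept = np.nonzero(inside)[0]
    new_index[kept] = np.arange(len(kept))
    return vertices[kept], new_index[faces[keep_face]]

# Two triangles; one pokes outside the unit box and is dropped.
verts = np.array([[0.1, 0.1, 0.1], [0.9, 0.1, 0.1],
                  [0.5, 0.9, 0.1], [2.0, 0.0, 0.0]])
tris = np.array([[0, 1, 2], [1, 2, 3]])
v, f = crop_mesh(verts, tris, np.array([0, 0, 0]), np.array([1, 1, 1]))
print(len(v), len(f))  # 3 1
```

Smoothing and decimation are more involved, but the same pattern applies: scanned meshes arrive noisy and oversized, and every step trims them toward something a CAD tool like OpenSCAD can digest.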

Just last week we saw photogrammetry used in another 3D object scanner. The results from both of these projects show real promise for modeling, especially with objects that are difficult to measure by hand.

Hacklet 88 – Projector Projects

Everyone loves a big screen TV. Back in the old days, anything over 27″ was considered big. These days, if you’re not sporting at least 50″, you’ll end up with display envy. One thing hasn’t changed though, those who want to go really, really big get into projectors. Hacking and projectors seem to go hand in hand. Anyone else remember those old DIY projection setups where the user would put their TV in a box upside down? This week’s Hacklet is all about projector hacks!

We start with [Chaz] and his Projector Hush Box. [Chaz] had a good projector, but still found himself with a problem. Projectors generate a lot of heat, which is dissipated via a fan. For whatever reason, projector companies seem to pick the loudest fans available. [Chaz’s] solution is to put the projector inside a box. Done right, this makes for a quiet projector. Done wrong, it makes an oven. [Chaz’s] projector hasn’t caught fire yet, so we think he did it right. Two quiet and efficient PC fans direct air through the box, and around baffles which keep the noise down. An anti-reflective coated glass window lets the light out but keeps the noise in. Sound deadening foam helps cut the sound down even further.
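Sizing the fans for a box like this is a straightforward energy balance: the airflow has to carry the projector’s waste heat out at an acceptable temperature rise, Q = P / (ρ · cp · ΔT). A quick back-of-envelope in Python, with made-up numbers standing in for [Chaz’s] actual projector:

```python
def required_airflow_cfm(heat_w, delta_t_c):
    """Volumetric airflow needed to carry heat_w watts out of an enclosure
    while letting the exhaust run delta_t_c above intake (sea-level air)."""
    rho = 1.2        # air density, kg/m^3
    cp = 1005.0      # specific heat of air, J/(kg*K)
    m3_per_s = heat_w / (rho * cp * delta_t_c)
    return m3_per_s * 2118.88    # m^3/s -> cubic feet per minute

# Hypothetical 250 W projector, 10 C allowable rise inside the box.
cfm = required_airflow_cfm(250, 10)
print(round(cfm, 1))  # roughly 44 CFM
```

Two quiet 120 mm PC fans can move that much air comfortably, which is presumably why the build works: the baffles cost some static pressure, but there’s margin to spare.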

Next up is [ric866] with a 100w LED projector conversion. The killer with projectors these days is the bulbs. In some cases it’s more cost-effective to buy a new projector than to replace the bulb in an aging one. That’s how [ric866] ended up with a pair of old NEC projectors – one with a working bulb, and one without. Bulbs for this model aren’t cheap at £100. [ric866] found a cheap replacement in a 100 Watt LED. The LED in question only cost £8.99 from everyone’s favorite auction site. LEDs may be efficient, but anyone who’s played with powerful LEDs can tell you they still get hot. [ric866] had to cut up the projector’s case a bit to fit in a heat sink and fan. He also had to spend some time bypassing the various case interlock switches. The final product’s color calibration looks to be a bit off, but not too shabby for a quick mod!

[Tom_VdE] is serious about recycling. He isn’t one to let an old laptop go to waste when it can be turned into a projector! Remember the “TV in a box” kit we mentioned up in the title? This is the modern version of that same idea. [Tom] tore down the laptop’s LCD and placed it in a CRT monitor case with the appropriate lenses. A setup like this needs length and focus adjustments. [Tom] managed all that by building a collapsible baffle out of plywood. A build like this needs a lot of light, so [Tom] is using a 100 Watt LED (or two). A water cooling system will keep the LEDs from melting down. [Tom] is still in the prototype phase, but we can’t wait to see his first movie night with this upcycled laptop.

Finally, we have [Alex] who built Automatic projector calibration, project #161 on Hackaday.io. [Alex] took his inspiration from [Johnny Chung Lee] to build a system which can map a projector to any angle, size, or position. The secret is phototransistors embedded in the corners of a rectangular piece of foamboard. An Arduino reads the phototransistors while the projector runs a calibration routine. [Alex] switched over to a scanning line from [Johnny’s] original binary pattern. The scan isn’t quite as fast as the binary, but it sure looks cool. Once the positions of the sensors are known, it’s just a matter of mapping the entire screen to a smaller piece of real estate. Toss in a few neat transitions, and you’ve got an awesome demo.
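Once the four sensor corners are located in projector coordinates, mapping the full screen onto that quadrilateral is a classic four-point homography. Here’s a sketch of solving it with the direct linear transform; all the coordinates are hypothetical, and [Alex’s] actual code on the Arduino side works differently.

```python
import numpy as np

def homography(src, dst):
    """Solve the 3x3 projective map taking four src points to four dst
    points via the direct linear transform. src/dst: 4x2 arrays."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        rows.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    # The homography is the null vector of this 8x9 system.
    _, _, vt = np.linalg.svd(np.array(rows))
    return vt[-1].reshape(3, 3)

def apply(H, pt):
    """Map a point through H, dividing out the projective scale."""
    x, y, w = H @ np.array([pt[0], pt[1], 1.0])
    return x / w, y / w

# Full 1920x1080 projector frame mapped onto a tilted quadrilateral of sensors.
corners = np.array([[0, 0], [1920, 0], [1920, 1080], [0, 1080]], float)
sensors = np.array([[300, 200], [1500, 260], [1440, 900], [260, 840]], float)
H = homography(corners, sensors)
print(apply(H, (960, 540)))  # image centre lands inside the sensor quad
```

In practice you’d hand this matrix to the GPU as a texture transform, and anything rendered full-screen snaps neatly onto the foamboard.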

If you want to see more projector projects, check out our new projector project list! If I missed your project, don’t be shy, just drop me a message on Hackaday.io. That’s it for this week’s Hacklet. As always, see you next week. Same hack time, same hack channel, bringing you the best of Hackaday.io!

3D Scanning Entire Rooms With A Kinect

Almost by definition, the coolest technology and bleeding-edge research is locked away in universities. While this is great for post-docs and their grant-writing abilities, it’s not the best system for people who want to use this technology. A few years ago, and many times since then, we’ve seen a bit of research that turned a Kinect into a 3D mapping camera for extremely large areas. This is the future of VR, but a proper distribution has been held up by licenses and a general IP rights rigamarole. Now, the sources for this technology, Kintinuous and ElasticFusion, are available on Github, free for everyone to (non-commercially) use.

We’ve seen Kintinuous a few times before – first in 2012 where the possibilities for mapping large areas with a Kinect were shown off, then an improvement that mapped a 300-meter-long path through a building. With the introduction of the Oculus Rift, inhabiting these virtual scanned spaces became even cooler. If there’s a future in virtual reality, we’ll need a way to capture real life and make it digital. So far, this is the only software stack that does it on a large scale.

If you’re thinking about using a Raspberry Pi to take Kintinuous on the road, you might want to look at the hardware requirements. A very fast Nvidia GPU and a fast CPU are required for good results. You also won’t be able to use it with robots running ROS; these bits of software simply don’t work together. Still, we now have the source for Kintinuous and ElasticFusion, and I’m sure more than a few people are interested in improving the code and bringing it to other systems.

You can check out a few videos of ElasticFusion and Kintinuous below.

Continue reading “3D Scanning Entire Rooms With A Kinect”

Laser Cut-and-Weld Makes 3D Objects

Everybody likes 3D printing, right? But it’s slow compared to 2D laser cutting. If only there were a way to combine multiple 2D slices into a 3D model. OK, we know that you’re already doing it by hand with glue and/or joints. But where’s the fun in that?

LaserStacker automates the whole procedure for you. They’ve tweaked their laser cutter settings to allow not just cutting but also welding of acrylic. This lets them build up 3D objects out of acrylic slices with no human intervention, by first making a cutting pass at one depth and then selectively re-welding the slices together at another. They’ve also built up some software, along with a library of functional elements, that makes designing these sorts of parts easier.
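Our guess below that the welding pass works by defocusing the beam is easy to sanity-check with geometry: away from focus, the spot diameter of a focused beam grows roughly linearly along the focusing cone. A back-of-envelope sketch with invented optics numbers (not LaserStacker’s actual setup):

```python
import math

def spot_diameter_mm(defocus_mm, waist_mm=0.1, lens_focal_mm=50.8, beam_mm=7.0):
    """Rough geometric estimate of laser spot size a given distance above
    (or below) focus, from the focusing cone's half-angle. All numbers
    here are illustrative guesses for a typical CO2 cutter."""
    half_angle = math.atan((beam_mm / 2) / lens_focal_mm)  # cone half-angle, rad
    return waist_mm + 2 * defocus_mm * math.tan(half_angle)

print(round(spot_diameter_mm(0), 2))   # in focus: a ~0.1 mm cutting spot
print(round(spot_diameter_mm(10), 2))  # 10 mm out of focus: over 1 mm wide
```

A spot an order of magnitude wider spreads the same power over two orders of magnitude more area, which is the difference between vaporizing acrylic and gently melting two faces together.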

There’s hardly any detail on their website about how it works, so you’ll have to watch the video below the break and make some educated guesses. It looks like they raise the cutter head upwards to make the welding passes, probably spreading the beam out a bit. Do they also run it at lower power, or slower? We demand details!

Anyway, check out the demo video at 3:30 where they put the slice-to-depth and heal modes through their paces. It’s pretty impressive.

Continue reading “Laser Cut-and-Weld Makes 3D Objects”

Converting Live 2D Video To 3D

Here’s some good news for all the fools who thought 3D TV was going to be the next big thing back in 2013. Researchers at MIT have developed a system that converts 2D video into 3D. The resulting 3D video can be played on an Oculus Rift, a Google Cardboard, or even that 3D TV sitting in the living room.

Right now, the system only works on 2D broadcasts of football, but this is merely a product of how the researchers solved this problem. The problem was first approached by looking at screencaps of the game FIFA 13. Using an analysis tool called PIX, the researchers both stored the display data and extracted the corresponding 3D map of the pitch, players, ball, and stadium. To generate 3D video of a 2D football broadcast, the system then looks at every frame of the 2D broadcast and searches for a 3D dataset that corresponds to the action on the field. This depth information is then added to the video feed, producing a 3D broadcast using only traditional 2D cameras.
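The lookup at the heart of the system can be caricatured as nearest-neighbour search over stored game frames: find the reference frame that best matches the broadcast frame, then borrow its depth map. A toy Python sketch of that idea, with random arrays standing in for real frames; the actual matching in the MIT work is far more sophisticated than a raw pixel difference.

```python
import numpy as np

def transfer_depth(frame, reference_frames, reference_depths):
    """Find the stored game frame closest to the broadcast frame (by mean
    absolute pixel difference) and reuse its depth map."""
    errors = [np.abs(frame - ref).mean() for ref in reference_frames]
    best = int(np.argmin(errors))
    return reference_depths[best], best

rng = np.random.default_rng(0)
refs = [rng.random((8, 8)) for _ in range(5)]           # stand-in "FIFA 13" frames
depths = [np.full((8, 8), i, float) for i in range(5)]  # one flat depth map each
broadcast = refs[3] + rng.normal(0, 0.01, (8, 8))       # noisy copy of frame 3
depth, idx = transfer_depth(broadcast, refs, depths)
print(idx)  # 3: the noisy broadcast frame matches its source
```

Scale the reference set up from five toy frames to every camera angle FIFA can render of a pitch, and you have the gist of how a 2D broadcast picks up a plausible depth channel.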

Grab your red and blue filter shades and check out the product of their research below.

Continue reading “Converting Live 2D Video To 3D”

Teardown Of Intel RealSense Gesture Camera Reveals Projector Details

[Chipworks] has just released the details on their latest teardown on an Intel RealSense gesture camera that was built into a Lenovo laptop. Teardowns are always interesting (and we suspect that [Chipworks] can’t eat breakfast without tearing it down), but this one reveals some fascinating details on how you build a projector into a module that fits into a laptop bezel. While most structured light projectors use a single, static pattern projected through a mask, this one uses a real projection mechanism to send different patterns that help the device detect gestures faster, all in a mechanism that is thinner than a poker chip.

It does this by using an impressive miniaturized projector made of three tiny components: an IR laser, a line lens, and a resonant micromirror. The line lens takes the point of light from the IR laser and turns it into a flat horizontal line. This is then bounced off the resonant micromirror, a single-piece MEMS device twisted by an electrostatic torsional drive. The system is described in more detail in this PDF of a presentation by the makers, ST Micro. This combination of lens and rapidly moving mirror creates a pattern of light that is projected, and the reflection is detected by the IR camera on the other side of the module, which is used to create a 3D model that can be used to detect gestures, faces, and other objects. It’s a neat insight into how you can miniaturize things by approaching them in a different way.