1.5 Million Dollars Buys 850,000 LEDs And 29 Raspberry Pis

You think you like RGB LEDs? Columbus, OH art professor [Matthew Mohr] has more blinkenlove than you! His airport/convention-center-scale installation piece is an incredible 850,000 RGB LEDs wrapped around a 14-foot-tall face-shaped sculpture that projection-maps participants’ faces onto the display. To capture images, there is also a purpose-built room with even illumination and a slew of Raspberry Pi cameras that photograph a person’s face from many angles simultaneously.

Besides looking pretty snazzy, the scale of this is just crazy. For instance, if you figure that the usual strip of 60 WS2812s can draw just about 9.6 watts full on, that scales up to 136 kW(!) for the big head. And getting the control signals right? Forgeddaboutit. Prof. [Mohr], if you’re out there, leave us some details in the comments.
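For the curious, the back-of-the-envelope math works out like this; a quick sketch, assuming the stock WS2812 figure (the custom LEDs mentioned in the edit below will draw differently):

```python
# Back-of-the-envelope power estimate for the installation.
# Assumes the usual WS2812 figure: a 60-LED strip at full white
# draws about 9.6 W. The brighter custom LEDs will draw more.

WATTS_PER_STRIP = 9.6       # W, 60 WS2812s at full white
LEDS_PER_STRIP = 60
NUM_LEDS = 850_000

watts_per_led = WATTS_PER_STRIP / LEDS_PER_STRIP    # 0.16 W each
total_watts = watts_per_led * NUM_LEDS

print(f"{total_watts / 1000:.0f} kW")               # -> 136 kW
```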

(Edit: He did! And his website is back up after being DOSed. And they’re custom LEDs that are even brighter to compete with daylight in the space.)

What is it with airports and iconic LED art pieces? Does anyone really plan their stopovers to see public art? How many of you will fly through Columbus on purpose now?

Hackaday Prize Entry: Cheap, Open LiDAR

[adam] is a caver, meaning that he likes to explore caves and map their inner structure. This is still commonly done using traditional tools: notebooks (the paper kind), tape measures, compasses, and inclinometers. [adam] wanted to upgrade his equipment, but found that industrial LiDAR 3D scanners are quite expensive. His Hackaday Prize entry, the Open LIDAR, is an affordable alternative to the expensive industrial 3D scanning solutions out there.

The 3D scan of a small cave near Louisville, from [caver.adam]’s Sketchfab repository
LiDAR (Light Detection And Ranging) is a technology that senses the distance between a sensor and an object by measuring the time of flight of a reflected light beam. By acquiring a two-dimensional array of distance readings, it can be used for 3D scanning. Looking at how industrial LiDAR scanners capture the environment using fast-spinning mirrors, [adam] realized that he could achieve essentially the same thing with a cheap laser rangefinder strapped to a pan-and-tilt gimbal.

The gimbal he designed for this task uses stepper motors to aim an SF30-B laser rangefinder. An Arduino controls the movement and lets the eye of the sensor scan an object or an entire environment. By sampling the distance readings returned by the sensor, a point cloud is created, which can then be converted into a 3D model. [adam] plans to drive the stepper motors in microstepping mode to increase the resolution of his scanner. We’re looking forward to seeing the first renderings of 3D cave maps captured with the Open LIDAR.
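The math for turning gimbal angles plus a range reading into points is plain spherical-to-Cartesian conversion. Here’s a minimal sketch of the idea (our illustration, not [adam]’s code; the angle conventions, step sizes, and the read_range() stand-in are all assumptions):

```python
import math

def polar_to_xyz(pan_deg, tilt_deg, range_m):
    """Convert a pan/tilt pose plus a rangefinder reading into a
    Cartesian point. Assumes pan sweeps around the vertical axis
    and tilt is measured up from the horizontal plane."""
    pan, tilt = math.radians(pan_deg), math.radians(tilt_deg)
    x = range_m * math.cos(tilt) * math.cos(pan)
    y = range_m * math.cos(tilt) * math.sin(pan)
    z = range_m * math.sin(tilt)
    return (x, y, z)

def frange(start, stop, step):
    """Like range(), but for floating-point degrees."""
    while start < stop:
        yield start
        start += step

def scan(read_range, pan_step=0.9, tilt_step=0.9):
    """Sweep a grid of angles; read_range(pan, tilt) stands in for
    whatever returns the SF30-B's distance reading at that pose."""
    return [polar_to_xyz(pan, tilt, read_range(pan, tilt))
            for tilt in frange(-45, 45, tilt_step)
            for pan in frange(0, 360, pan_step)]
```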


Hacked Turntable Rotates Humans For 3D Scanning

If you were around in the ’70s, you’ll probably remember the Disco Body Shaper or the Aerobic Body Shaper, exercise devices that were all the rage of the day. They were basically Lazy Susan turntables on which humans could stand and twist away to burn fat. The results were suspect, but [Daniel Kucera] thought one of them would be ideal in 2016 for building a heavy-duty turntable to allow full-body scanning.

He had already tried a few other ideas and failed, so this was worth a shot, since it cost just ten bucks to buy one. The plan was to use a motor to provide friction drive along the circumference of the turntable platform. For this, he used a high-torque motor with a gear on the output shaft. From the looks of it, he attached a Meccano plate to the base and mounted the motor to this plate. A large spring keeps the motor pressed against the rim of the turntable, and a strip of rubber scavenged from a bicycle tube was glued along the side of the turntable to give the gear drive some friction. The turntable sits on two thick pieces of foam to provide clearance for the motor. We aren’t sure a toothed gear is the best choice to drive this thing, but a hacker’s gotta use what he’s got. He’s clocking 190 seconds for a full rotation, but he hasn’t yet posted any scan results from the Android scanner software he is working on. This one, for sure, doesn’t qualify for an “it’s not a hack” comment.
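As a sanity check on that 190-second figure, the friction drive’s reduction is just the ratio of the two rolling diameters. A rough sketch, with placeholder dimensions since the write-up doesn’t give the actual gear and platform sizes:

```python
# Friction-drive estimate: the motor's gear rolls along the rubber
# rim, so the reduction is rim diameter / gear diameter.
# Both diameters below are illustrative guesses, not measurements.

GEAR_DIA_MM = 20.0          # drive gear on the motor shaft
PLATFORM_DIA_MM = 800.0     # turntable rim
ROTATION_S = 190            # observed time for one full turn

ratio = PLATFORM_DIA_MM / GEAR_DIA_MM     # 40:1 reduction
motor_rpm = ratio * 60 / ROTATION_S       # gear revolutions per minute

print(f"reduction {ratio:.0f}:1, motor turning about {motor_rpm:.1f} RPM")
```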

Super Detailed 3D Scans With Photogrammetry

Photogrammetry is a real word, and [shapespeare] built himself a nice setup for taking high-res 3D scans with it. A good set of images for photogrammetry is in sharp focus, well lit, precisely indexed, and shot against a uniform background. The background was handled by a 3D-printed stand and some copier paper, and four adjustable LED lamps from Ikea provide even lighting.

In order to precisely index the object, he built an indexing setup with an Arduino and a stepper motor (housed in the self-proclaimed most elegant of 3D-printed enclosures). The Arduino rotates the platform a measured increment and then, using [Sebastian Setz]’s very neat IR camera control library, snaps a photo. This process repeats until the object has been photographed from every angle.
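The indexing arithmetic is simple: divide one revolution into however many photo stations you want and convert each increment into stepper steps. A sketch of that logic (the step counts and shot count are assumptions, and the real rig runs this on the Arduino rather than in Python):

```python
# Turntable indexing: divide a full revolution into SHOTS stations
# and work out the stepper steps per increment. Values are assumed
# for illustration; pick SHOTS so the division comes out even.

STEPS_PER_REV = 200     # a typical 1.8-degree stepper
MICROSTEPS = 16         # driver microstepping setting
SHOTS = 32              # photos per revolution; 3200 / 32 = 100 exactly

steps_per_shot = STEPS_PER_REV * MICROSTEPS // SHOTS

for shot in range(SHOTS):
    # rotate_stepper(steps_per_shot)   # advance one increment
    # trigger_ir_shutter()             # snap a photo, then repeat
    print(f"shot {shot:2d} at {360 * shot / SHOTS:5.1f} deg "
          f"({steps_per_shot} steps per move)")
```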

Once the photos have been taken, they need to be run through a photogrammetry processor. [shapespeare] uses Agisoft Photoscan, but says Autodesk Memento and 123D Catch do pretty well too. After all this work, it appears that [shapespeare] used his new powers to 3D print a giant decking screw. Cool.

3D Scanning Entire Rooms With A Kinect

Almost by definition, the coolest technology and bleeding-edge research are locked away in universities. While this is great for post-docs and their grant-writing abilities, it’s not the best system for people who want to use the technology. A few years ago, and many times since then, we’ve seen research that turned a Kinect into a 3D mapping camera for extremely large areas. This is the future of VR, but a proper distribution has been held up by licenses and a general IP-rights rigamarole. Now the source for this technology, Kintinuous and ElasticFusion, is available on GitHub, free for everyone to use (non-commercially).

We’ve seen Kintinuous a few times before: first in 2012, when the possibilities for mapping large areas with a Kinect were shown off, then an improvement that mapped a 300-meter-long path through a building. With the introduction of the Oculus Rift, inhabiting these virtual scanned spaces became even cooler. If there’s a future in virtual reality, we’ll need a way to capture real life and make it digital. So far, this is the only software stack that does it on a large scale.

If you’re thinking about using a Raspberry Pi to take Kintinuous on the road, you might want to look at the hardware requirements. A very fast Nvidia GPU and a fast CPU are required for good results. You also won’t be able to use it with robots running ROS; these bits of software simply don’t work together. Still, we now have the source for Kintinuous and ElasticFusion, and I’m sure more than a few people are interested in improving the code and bringing it to other systems.

You can check out a few videos of ElasticFusion and Kintinuous below.


We Should 3D Scan People

In a perfect futuristic world, you’d have pre-emptive 3D scans of your specific anatomy. They’d be useful for comparing changes in your body over time, and as a pristine blueprint to aid in the event of a catastrophe. As with all futuristic worlds, there are some problems with actually getting there. The risks may outweigh the rewards, and cost is an issue, but having 3D imaging of a sick body’s anatomy does have some real benefits. Take a journey with me down the rabbit hole of 3D technology and Gray’s Anatomy.



Making A Bobblehead Of You

Bobbleheads: you remember them, small figures with spring-mounted, comically large heads. They brought joy to millions of car drivers every day, as at least 97.5% of all registered cars in the 1960s had bobbleheads mounted to the dash. Years later, bobblehead popularity has waned, but [Luis] is trying to bring them back, this time not as your iconic sports hero but as YOU!

[Luis] uses software called Skanect along with his Kinect to scan a person’s geometry. There is a free version of Skanect, but it is limited to exporting STL files no larger than 5,000 faces. That means 3D-printed bobblehead scans of large objects (including people) come out looking noticeably faceted. [Luis] came up with a workaround that results in a much finer-detailed scan: instead of scanning an entire person with one scan, he does four separate scans. Since each individual scan can contain 5,000 faces, the resulting merged model can be up to 20,000 faces. Check out the comparison; the difference between the two scanning methods is quite noticeable. MeshMixer is the software used to merge the STL files of the four separate scans.
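[Luis] does the merge in MeshMixer, but the same stitch-the-scans-together step can be scripted. A rough sketch using the trimesh Python library as a stand-in (file names are placeholders, and it assumes the four STLs were exported already aligned):

```python
# Merge four partial scans into one model, in the spirit of the
# MeshMixer step. Note trimesh's concatenate() just joins the
# geometry; it doesn't fuse overlapping surfaces the way a manual
# MeshMixer cleanup would.

import trimesh

parts = [trimesh.load(f"scan_{i}.stl") for i in range(1, 5)]  # placeholders
merged = trimesh.util.concatenate(parts)

print(f"{len(merged.faces)} faces")  # up to ~20,000 from 4 x 5,000
merged.export("bobblehead_body.stl")
```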

Once the full body is assembled in MeshMixer, it is time to separate the head from the body. A cylindrical hole is then made in the bottom of the head and the top of the body. This hole is just slightly larger than the spring used to support the head. The parts are then printed, painted and assembled. We have to say that the end result looks pretty darn good.