Behold A 3D Display, Thanks To A Speeding Foam Ball

We’ve seen 3D image projection tried in a variety of different ways, but this is a new one to us. This volumetric display, built by the Interact Lab at the University of Sussex, creates a 3D image by projecting light onto a tiny foam ball, which zips around in the air fast enough to create a persistence of vision effect. (Video, embedded below.) How is this achieved? With a large array of ultrasonic transducers performing what researchers call ‘acoustic trapping’.

This is the same principle behind acoustic levitation devices which demonstrate how lightweight objects (like tiny polystyrene foam balls) can be made to defy gravity. But this 3D display is capable of not only moving the object in 3D space, but doing so at a high enough speed and with enough control to produce a persistence of vision effect. The abstract for their (as yet unreleased) paper claims the trapped ball can be moved at speeds of up to several meters per second.
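To put that speed claim in context, here’s a rough back-of-envelope sketch. The persistence-of-vision window and the path length are our own illustrative assumptions, not figures from the paper:

```python
# Rough estimate: how fast must the trapped ball move to retrace a
# closed path once per persistence-of-vision window? The numbers here
# are illustrative assumptions, not measurements from the paper.

POV_WINDOW_S = 0.1  # ~100 ms: a commonly cited ballpark for POV

def required_speed(path_length_m: float, window_s: float = POV_WINDOW_S) -> float:
    """Minimum average speed to trace the full path within one POV window."""
    return path_length_m / window_s

# A small figure a couple of centimetres tall might need ~20 cm of total
# path to draw its outline:
print(required_speed(0.20))  # 2.0 m/s
```

A 20 cm outline redrawn every 100 ms already demands 2 m/s, which squares nicely with the abstract’s claim of “several meters per second” for anything more elaborate.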

It has a few other tricks up its sleeve, too. The array is capable of simultaneously creating sounds as well as providing a limited form of tactile feedback by letting a user touch areas of high and low air pressure created by the transducers. These areas can’t be the same ones being occupied by the speeding ball, of course, but it’s a neat trick. Check out the video below for a demonstration.
Continue reading “Behold A 3D Display, Thanks To A Speeding Foam Ball”

Watch A 3D Printer Get Designed From The Ground Up

Too often when you see a build video, you only get to see the final product. Even if there’s footage of the build itself, it’s usually only the highlights as a major component is completed. But thankfully that’s not the case with the “V-Baby” CoreXY 3D printer that [Roy Berntsen] has been working on.

Watching through his playlist of videos, you’re able to see him tackle his various design goals. For example, he’d like the final design to be both machinable and printable, which is possible, but certainly adds complexity and time. He also transitions from a triangular base to a rectangular one partway through. These decisions, and the reasons behind them, are all documented and discussed.

Towards the end of the series we can see the final testing and torturing process as he ramps up to a final design release. This should definitely demystify the process for anyone attempting their first 3D printer design from scratch.

Hang Ten With Help From The Surf Window

Unless you live somewhere surf-blessed like Hawaii or Costa Rica, it’s unlikely you’ll be able to surf every day. It’s not easy to plan surf sessions or even surf trips to most locations because the weather conditions need to be just right: not only the wave height (swell), but also the wind speed and direction, the tide, water and air temperature, and even the amount and type of marine life present can all impact your surf session. You’ll want something that can tell you at a glance whether conditions are good.

This project from [luke], called the Surf Window, shows the surf conditions at the local beach with just one glance. Made out of various pieces of wood, each part represents one of the weather conditions at the beach. A rotating seagull gives the wind direction, for example, and the wave height is represented by moving 3D waves. All of the parts are connected through various motors and linkages to an Arduino Mega +WiFi R3, which grabs all of its information from Magicseaweed, a surf forecasting site.
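The fun part of a build like this is the mapping from forecast numbers to physical positions. Here’s a minimal sketch of that step in Python; the field names, ranges, and sample values are all hypothetical (the real build does this on the Arduino, fed from Magicseaweed):

```python
def scale(value, in_lo, in_hi, out_lo, out_hi):
    """Linearly map value from one range to another, clamped -- the same
    idea as Arduino's map(), plus clamping for out-of-range forecasts."""
    value = max(in_lo, min(in_hi, value))
    return out_lo + (value - in_lo) * (out_hi - out_lo) / (in_hi - in_lo)

# Made-up sample of the kind of fields a surf forecast might return.
forecast = {"swell_m": 1.8, "wind_deg": 225}

# Assume 0-4 m of swell sweeps the wave mechanism's servo through 0-180 deg.
wave_servo_deg = scale(forecast["swell_m"], 0.0, 4.0, 0, 180)
# The seagull simply points in the wind direction.
seagull_deg = forecast["wind_deg"] % 360

print(wave_servo_deg, seagull_deg)  # 81.0 225
```

Clamping matters here: a freak ten-foot swell should pin the wooden waves at their maximum rather than drive a linkage past its travel.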

The Surf Window can show the current conditions at virtually any surfable beach in the world, so if you really want to know how Jaws, Mavericks, or even Reef Road is breaking right now, you could use this to give you a more nuanced look. Don’t forget to take the correct board for the conditions!

Continue reading “Hang Ten With Help From The Surf Window”

Knitting Software Automatically Converts 3D Models Into Machine-knit Stuffies

We’ve seen our fair share of interesting knitting hacks here at Hackaday. There has been a lot of creative space explored while mashing computers into knitting machines and vice versa, but for the most part the resulting knit goods all tend to be a bit… two-dimensional. The mechanical reality of knitting and hobbyist-level knitting machines just tends to lend itself to working with a simple grid of pixels in a flat plane.

However, a team at the [Carnegie Mellon Textiles Lab] has been taking the world of computer-controlled knitting from two dimensions to three, with software that can create knitting patterns for most any 3D model you feed it. Think of it like your standard 3D printing slicer software, except instead of simple layers of thermoplastic it generates complex multi-dimensional chains of knits and purls, with yarn and 100% stuffing infill.

The details are discussed and very well illustrated in their paper entitled Automatic Machine Knitting of 3D Meshes and a video (unfortunately not embeddable) shows the software interface in action, along with some of the stuffing process and the final adorable (ok they’re a little creepy too) stuffed shapes.

Since the publication of their paper, [the Textiles Lab] has also released an open-source version of their autoknit software on GitHub. Although the compilation and installation steps look non-trivial, the actual interface seems approachable by a dedicated hobbyist. Anyone comfortable with 3D slicer software should be able to load a model, define the two seams necessary to close the shape (these will need to be sewn by hand after stuffing), and output the knitting machine code.

Previous knits: the Knit Universe, Bike-driven Scarf Knitter, Knitted Circuit Board.

Get Great 3D Scans With Open Photogrammetry

Not long ago, photogrammetry — the process of stitching multiple photographs taken from different angles into a 3D whole — was hard stuff. Nowadays, it’s easy. [Mikolas Zuza] over at Prusa Printers has a guide showing off cutting-edge open-source software that’s not only more powerful, but also easier to use. They’ve also produced a video, which we’ve embedded below.

Basically, this is a guide to using Meshroom, which is based on the AliceVision photogrammetry framework. AliceVision is a research platform, so it’s got tremendous capability but doesn’t necessarily focus on the user experience. Enter Meshroom, which makes that power accessible.

Meshroom does all sorts of cool tricks, like showing you how the 3D reconstruction looks as you add more images to the dataset, so that you’ll know where to take the next photo to fill in incomplete patches. It can also reconstruct from video, say if you just walked around the object with a camera running.

The final render is computationally intensive, but AliceVision makes good use of CUDA on Nvidia graphics cards, so you can cut your overnight renders down to a few hours if you’ve got the right hardware. But even if you have to wait for the results, they’re truly impressive. And best of all, you can get started building up your 3D model library using nothing more than that phone in your pocket.

If you want to know how to use the models that come out of photogrammetry, check out [Eric Strebel]’s video. And if all of this high-tech software foolery is too much for you, try a milk-based 3D scanner.

Continue reading “Get Great 3D Scans With Open Photogrammetry”

Three Dimensions: What Does That Really Mean?

The holy grail of display technology is to replicate what you see in the real world. This means video playback in 3D — but when it comes to displays, what is 3D anyway?

You don’t need me to tell you how far away we are from succeeding in replicating real life in a video display. Despite all the hype, there are only a couple of different approaches to faking those three dimensions. Let’s take a look at what they are, and why they can call it 3D, but they’re not fooling us into believing we’re seeing real life… yet.

Continue reading “Three Dimensions: What Does That Really Mean?”

Use Nodes To Code Loads Of G-code For 3D CNC Carving

Most CNC workflows start with a 3D model, which is then passed to CAM software to be converted into the G-code language that CNC machines love and understand. G-code, however, is simple enough that rudimentary coding skills are all you need to start writing your very own programmatic CNC toolpaths. Any language that can output plain text can put you in direct control of powerful motors and rapidly spinning blades.
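To show just how little it takes, here’s a short Python sketch that emits a sine-wave toolpath as plain-text G-code. The feed rates, depth, and dialect details are assumptions for the sake of the example — sanity-check everything against your own machine before cutting anything:

```python
# Sketch: generate G-code for a single sine-wave pass, as plain text.
# Feeds, depth, and safe height are illustrative assumptions only.
import math

def sine_wave_path(length_mm=100, amplitude_mm=10, depth_mm=-2, steps=50):
    """Return G-code tracing one sine period along X at a fixed cut depth."""
    lines = [
        "G21 ; units: millimetres",
        "G90 ; absolute coordinates",
        "G0 Z5 ; lift to safe height",
        "G0 X0 Y0 ; rapid to start",
        f"G1 Z{depth_mm} F300 ; plunge",
    ]
    for i in range(steps + 1):
        x = length_mm * i / steps
        y = amplitude_mm * math.sin(2 * math.pi * x / length_mm)
        lines.append(f"G1 X{x:.3f} Y{y:.3f} F800 ; cutting move")
    lines.append("G0 Z5 ; retract")
    return "\n".join(lines)

print(sine_wave_path(steps=8))
```

Swap the `sin()` for any function you like — or nest a few — and you have exactly the kind of programmatic pattern generation described below, no CAM package required.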

[siemenc] shows us how to use Grasshopper – a visual node-based programming system for Rhino 3D – to output G-code that makes some interesting patterns and shapes in wood when fed to a ShopBot. Rhino is commercial software and a bit expensive, so it’s not too widely available, but [siemenc] walks through some background, theory, and procedures that could be useful and inspirational no matter what software or programming language you’re using to create your bespoke G-code.

For links to code and related blog posts, plus more lovely pictures of intricately carved plywood, check out [siemenc]’s personal site as well.

[via Bantam Tools]