A 3D Scanner that Archimedes Could Get Behind

3D-scanning seems like a straightforward process — put the subject inside a motion control gantry, bounce light off the surface, measure the reflections, and do some math to reconstruct the shape in three dimensions. But traditional 3D-scanning isn’t good for subjects with complex topologies and lots of nooks and crannies that light can’t get to. Which is why volumetric 3D-scanning could become an important tool someday.

As the name implies, volumetric scanning relies on measuring the change in volume of a medium as an object is moved through it. In the case of [Kfir Aberman] and [Oren Katzir]’s “dip scanning” method, the medium is a tank of water whose level is measured to a high precision with a float sensor. The object to be scanned is dipped slowly into the water by a robot as data is gathered. The robot removes the object, changes the orientation, and dips again. Dipping is repeated until enough data has been collected to run through a transformation algorithm that can reconstruct the shape of the object. Anywhere the water can reach can be scanned, and the video below shows how good the results can be with enough data. Full details are available in the PDF of their paper.

While optical 3D-scanning with the standard turntable and laser configuration will probably be around for a while, dip scanning seems like a powerful method for getting topological data using really simple equipment.
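The core measurement behind each dip is simple: every small descent step displaces a volume of water equal to the object's cross-sectional area at the waterline times the newly wetted length. Here's a minimal sketch of that bookkeeping; the tank geometry, units, and function names are our own illustration, not taken from the paper:

```python
# Sketch: turn successive water-level readings from one dip into per-slice
# cross-sectional areas. Tank geometry, units, and names are assumptions
# for illustration, not taken from the paper.

def slice_areas(levels, tank_area=400.0, dip_step=0.1):
    """levels: water level (cm) after each dip step of `dip_step` cm.
    Returns the object's cross-sectional area (cm^2) per slice."""
    areas = []
    for prev, cur in zip(levels, levels[1:]):
        rise = cur - prev                  # water-level rise this step (cm)
        dv = tank_area * rise              # added submerged volume (cm^3)
        # The newly wetted length of the object is the dip step plus the
        # rise of the surface, so area = volume / (step + rise).
        areas.append(dv / (dip_step + rise))
    return areas
```

For example, a 40 cm² cylinder in a 400 cm² tank raises the level by 1/90 cm per 0.1 cm step, and the formula recovers 40 cm² for every slice.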

Thanks to [bmsleight] for the tip.

48 thoughts on “A 3D Scanner that Archimedes Could Get Behind”

    1. Well that approach is basically simulating a depth map by occluding the object based on the depth of the water, so it is still vision-based.

      The approach above doesn’t need a camera, just equipment sensitive enough to measure the volume of liquid in the tank.

      Makes me wonder how it would deal with a model that would trap air inside itself at certain angles…

  1. Thinking about how they calculate all the possible permutations of how much water is displaced and the shape of the displacing object is hurting my brain.
    That must be phenomenally complex.

  2. I’m a bit surprised they’re using water for this. Isopropanol is cheap, readily available, and only has about a third the surface tension of water. I would expect it to increase measurement accuracy, especially in small volumes of liquid in larger-diameter tanks where the meniscus might represent a significant source of error.

    I also wonder about cavities that form air pockets during submersion in some orientations. Do they have a way to compensate for them?

    1. Isopropanol is also a solvent, though, and there are ways to lower water’s surface tension without making the bath flammable and faster-evaporating. At least the surface-tension effect is consistent if the medium stays the same, but differing geometry is going to affect it as the object is lowered in.

      Air pockets that suddenly fill up as the object is dipped could be a significant problem.

      Doing this to absorbent objects like a tissue-paper elephant could be a big issue, and many materials that you typically think of as not absorbing any water will actually absorb half a percent or so if left submerged long enough.

          1. Interesting, this seems to be the more common name for Dimethicone (which I called _S_imethicone, but that is wrong; Simethicone is Dimethicone with added silicon dioxide or something like that. Chemistry is strange stuff…).

  3. I fail to see how this alone is able to reconstruct the topology. It seems like it could augment or refine existing data sets, but the video appears to show it capturing surface-geometry-level detail on its own?

    1. It’s like those grid puzzles where you deduce which cells are filled from the count of filled cells per row/column. Also like computed tomography.

      Distilling the 2D cross-section into a single displacement quantity is why it requires thousands of immersions to achieve detail where a laser line scanner would only take hundreds.

        1. The same way you convert any other group of lower-dimensionality records of a thing into a higher-dimension representation.

          CT data starts out as 1D projections of each 2D slice (look up a sinogram).
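The grid-puzzle analogy above can be made concrete with the two-projection version of the problem: rebuilding a binary grid from just its row and column sums. Below is a toy Gale–Ryser-style greedy sketch of that idea; it is not the paper's reconstruction, which uses many more dip orientations:

```python
# Toy discrete tomography: reconstruct a binary grid from its row and
# column sums (Gale-Ryser greedy fill). This is the "grid puzzle" case
# with only two projections; dip scanning uses many more orientations.

def reconstruct(row_sums, col_sums):
    n = len(col_sums)
    remaining = list(col_sums)
    grid = [[0] * n for _ in row_sums]
    # Take rows largest-first; put each row's ones in the columns that
    # still need the most. Produces a valid grid whenever one exists.
    for i, r in sorted(enumerate(row_sums), key=lambda t: -t[1]):
        for j in sorted(range(n), key=lambda j: -remaining[j])[:r]:
            grid[i][j] = 1
            remaining[j] -= 1
    return grid
```

With only two projections many grids share the same sums, which is exactly why the dip method needs so many orientations to pin down one shape.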

  4. I really love this way of thinking outside the box. I’m sure that this dipping method will be a positive contribution to solving the problems of 3D scanning. Thanks for posting.

    1. I don’t think either of those things is actually much of a problem, given that they’re doing 1000+ scans. You essentially get massive oversampling that will smooth out any per-scan noise.

  5. That is a very innovative application of tomosynthesis; however, the scan times would be very long, since “sloshing” would limit the dip rate. There would be limits on the types of objects that can be scanned too, unless you come up with a removable hydrophobic coating you can dip-seal the object with first. If you can use a tank liquid with a low freezing point, low viscosity, and chemical inertness, you might find a sealant compound that works at those temperatures but melts and then evaporates away at higher temperatures, leaving the object as it was before the scanning operation.

    1. Yes, I was wondering about the sloshing and whether they need to wait a long time for it to settle. Maybe you could use something like OpenCV to get a picture of the waves and then calculate what the flat surface would look like (you would probably need to do this across the whole 2D surface, which might be tricky). You could probably calibrate this first with a few runs to get a decent algorithm.

  6. I’m not sure this makes sense; since you only measure volume, you only get one dimension per dip.
    It doesn’t make sense to me that this works with all objects in any reasonable time. I would think symmetrical protrusions would make it rather hard to determine the shape that way, requiring ever more dips, or else an immense number of dips and crazy measurement precision. Makes me think they present it as a bit too easy.

    Also, the video claims we normally only use light/vision, but contact probes have been in use for decades too, so that claim is incorrect.

    1. It’s the multiple dips at different angles that give you the data you need for a reconstruction. It’s very similar to how CT data is reconstructed. Look up sinograms to give you a better idea of the math behind this.

      1. Well, I get the concept, but it remains a very slow process, surely needing very small increments and many angles for certain types of objects. And even then I imagine there can be objects that defy the algorithm/medium.
        It’s like scanning with a needlepoint laser, except much slower, because you would need to wait 5+ seconds for everything to settle at each tiny point.

        Thanks for replying, though. Although I have to say, with CT scans you do get a two-dimensional cross-section of data points AFAIK, not the series of single one-dimensional data points this method seems to produce.

  7. Archimedes today would add a load cell to the dip arm, making this apparatus scan not only the shape but also the density distribution inside it, which would answer questions not asked here: “Is there a weight hidden somewhere inside this elephant?”, or “Is this part of homogeneous composition?”, or “Is this die rigged?”

    1. Buoyancy is a function of volume, not of how mass is distributed within the volume. A vertical-only load cell would give data redundant with the fluid-level change, but measuring the torque from buoyancy would give some information about the distribution of volume along the horizontal axes.
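That distinction is easy to check numerically: the vertical buoyant force depends only on total submerged volume, while the torque about the dip arm also depends on where that volume sits. A quick sketch with made-up numbers, not values from the article:

```python
# Buoyant force vs. torque for two different volume distributions with
# the same total volume. Numbers are illustrative, not from the article.

RHO_G = 9810.0  # water density * g, in N per m^3

def force(slices):
    """Vertical buoyant force (N); depends only on total volume."""
    return RHO_G * sum(v for v, _ in slices)

def torque(slices):
    """Torque about the arm (N*m); depends on where the volume is."""
    return RHO_G * sum(v * x for v, x in slices)

# (volume m^3, horizontal offset m) slices; same total volume, shifted.
uniform  = [(0.001, 0.0), (0.001, 0.1)]
lopsided = [(0.0005, 0.0), (0.0015, 0.1)]
```

Both distributions produce the same 19.62 N of buoyant force, but different torques, which is exactly the extra information a torque measurement would add.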

  8. I’m having a difficult time wrapping my head around the mathematics of this method…
    Could someone let me know whether it would accurately reproduce these objects, or would they just appear to be a sphere?

    1. I believe the method would still work. As far as I understand, these shapes have a constant MAXIMUM width on any arbitrary axis, so they would be possible to distinguish from a sphere: at any point other than the maximum width, the slice area may differ from that of a sphere.

      1. It was bugging me, and I’ve no way to test it out. I think there is some geometry that would mess things up.
        The maximum width thing went right over my head.
        Thanks for responding!
