Revolving Camera Mount Helps To Capture 3D Video-game Assets


Here’s a camera rig that makes it a snap to produce photorealistic 3D models of an object. It was put together rather inexpensively by an indie game company called Skull Theatre. They published a couple of posts which show off how the rig was built and how it’s used to capture the models.

They’re using 123D, a software suite which is quite popular for digitizing items. The rig has a center table where an object is placed, and a movable jig which holds three different cameras (or one camera for three rotations). You can see the masking tape on the floor which marks the location for each shot. These positions are mapped out in the software so that it has an easy time putting them all together. The shaft which connects the jig to the base is adjustable to accommodate large or small items.
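
For a sense of how those taped positions translate into numbers, here’s a minimal sketch (Python; the radius, heights, and shot count are hypothetical, since the posts don’t give exact dimensions) of laying out evenly spaced shots around a centered object:

```python
import math

def shot_positions(radius_m, heights_m, shots_per_ring=12):
    """One ring of evenly spaced camera positions per jig height,
    all aimed at an object centered at the origin."""
    positions = []
    for z in heights_m:
        for i in range(shots_per_ring):
            theta = 2 * math.pi * i / shots_per_ring
            positions.append((radius_m * math.cos(theta),
                              radius_m * math.sin(theta),
                              z))
    return positions

# Three cameras (or one camera over three passes) at low/mid/high heights:
for x, y, z in shot_positions(radius_m=1.0, heights_m=[0.3, 0.9, 1.5]):
    print(f"shot at x={x:+.2f} y={y:+.2f} z={z:.2f}")
```

Consistent, known spacing like this is what lets the software register the shots against each other quickly.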

One thing that we found interesting is the team’s technique for dealing with reflections. They use a matte spray to make those surfaces less reflective. This helps 123D do its job, but also allows them to map reflective surfaces more accurately using the game engine.

24 thoughts on “Revolving Camera Mount Helps To Capture 3D Video-game Assets”

    1. Because modeling something ‘virtually’ can be an absolute pain in the ass, and it’s often easier to create a physical model and then digitize it.

      If you’ve ever spent time 3D modeling something in extensive detail, you’ll know how difficult it can be to, say, model the tread on a tire.

      1. CAD makes things easy, at least as far as more mechanical or geometric forms go, like a tire tread or even a soda bottle.

        Very organic forms are different, I’ll concede. Plastiline clay and urethane foam reign supreme there.

    2. Honestly, having spent a decent amount of time playing with 3D modeling software (and having worked professionally with CAD software), I’ve always found texture creation to be by far the hardest part of creating models for entertainment purposes. Geometry editing tools have always seemed straightforward to me, but stuff like calculating texture coordinates was always a real pain to wrap my head around, and it makes a MASSIVE difference to the quality of the results if you don’t get it right.
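
      For what it’s worth, the simplest texture-coordinate schemes are just projections of vertex positions, which is also why they fall apart on anything complicated. A toy sketch (Python; my own illustration, nothing from 123D or any particular tool) of spherical UV mapping:

      ```python
      import math

      def spherical_uv(vertices):
          """Project each (x, y, z) vertex onto a sphere around the origin
          and use longitude/latitude as (u, v) texture coordinates."""
          uvs = []
          for x, y, z in vertices:
              r = math.sqrt(x * x + y * y + z * z) or 1.0
              u = 0.5 + math.atan2(y, x) / (2 * math.pi)  # longitude -> [0, 1]
              v = 0.5 - math.asin(z / r) / math.pi        # latitude  -> [0, 1]
              uvs.append((u, v))
          return uvs

      print(spherical_uv([(1, 0, 0), (0, 1, 0), (0, 0, 1)]))
      ```

      This works tolerably on near-spherical meshes and stretches or seams badly everywhere else, which is exactly why hand-tuned (or captured) texture coordinates matter so much.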

      1. Just to make the point I forgot to include in my last post: 123D Catch captures the texture info as well as the geometry of the model. From my perspective, this is far, far more important than the fact that it saves you from having to model the geometry, especially if it also outputs the texture coordinates in a usable fashion.

    1. 123D uses the image and the background to do its magic. If you use it on an object against a completely white background it won’t work as well; it needs some texture to help it stitch. Similarly, if you rotate the object under the cameras, the background doesn’t change but the object does, which confuses it.

      1. That’s an interesting question though, because the build linked in this summary uses a rotating platform with a stationary camera, and seems to work just as well with 123D.

    2. Every problem they said they had could easily be resolved by keeping the camera stationary and putting the object on a turntable instead. Objects too small? Move the camera closer to the turntable. Objects too big? Move the camera away from the turntable. The lighting setup they’re using ensures the consistency of the lighting no matter how much they rotate the turntable. Colour temperature is irrelevant as long as they use the same white balance settings for each picture, as it can easily be altered later. They say that they “set our camera on full manual (except for the focus) with a high f-stop for good depth of field”, but leaving focus on automatic will actually reduce the quality of the pictures due to slight variations in focus from picture to picture. The camera settings for every picture should be identical, including focus, and they should aim for as large a depth of field as possible to ensure maximum sharpness (see the sketch after this comment for the numbers).

      Putting the object on a turntable and rotating that for each picture, instead of rotating the camera around the object, would have been much easier. However, it wouldn’t have been as much fun to build the rig, and they seem more than happy with the end results, so none of this really matters anyway.
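
      To put numbers on the depth-of-field point above: the standard thin-lens approximations are enough to see why stopping down helps. A quick sketch (Python; textbook formulas, with an assumed circle of confusion and example lens values, since the comment doesn’t name the camera):

      ```python
      def hyperfocal_mm(focal_mm, f_number, coc_mm=0.019):
          """Hyperfocal distance H = f^2 / (N * c) + f (thin-lens approximation).
          coc_mm is the circle of confusion; 0.019 mm is a common value for
          APS-C sensors (an assumption, since the camera isn't named)."""
          return focal_mm ** 2 / (f_number * coc_mm) + focal_mm

      def dof_limits_mm(focal_mm, f_number, subject_mm, coc_mm=0.019):
          """Near and far limits of acceptable sharpness at a subject distance."""
          h = hyperfocal_mm(focal_mm, f_number, coc_mm)
          near = subject_mm * (h - focal_mm) / (h + subject_mm - 2 * focal_mm)
          far = (subject_mm * (h - focal_mm) / (h - subject_mm)
                 if subject_mm < h else float("inf"))
          return near, far

      # 35 mm lens focused 1 m away: stopping down widens the sharp zone.
      for n in (4, 8, 11):
          near, far = dof_limits_mm(35, n, 1000)
          print(f"f/{n}: sharp from {near / 1000:.2f} m to {far / 1000:.2f} m")
      ```

      With the whole object inside the sharp zone and focus locked, every frame contributes equally sharp detail to the reconstruction.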

      1. Blender is nice, but it isn’t an end-all, be-all program. It’s not a viable CAD program in any way, for example, nor is it good for sketching, and correct me if I’m wrong, but it also doesn’t offer image capture and model slicing like 123D.

        Autodesk has a program for just about anything design-related that Adobe doesn’t provide. Blender, on the other hand, offers a decent polygonal modeler, a game engine, and an okay renderer.

    1. Hi electronic SDC,
      for us (archaeologists) it is important to document objects in 3D (a virtual replica) and to model them (a 3D reconstruction). These two phases of the workflow are different: we document in 3D at the beginning (e.g. during the excavation) and we reconstruct in 3D only at the end of the process, after we have studied all the data. It is not a matter of time, but a matter of workflow :). Anyway, archaeological reconstructions are very different from archaeological documentation, because of the 4th dimension (x, y, z, t); e.g. the documentation of some ruins could become the reconstruction of a castle.
      Sorry for the long post, but that’s why we also need these tools.
      Ciao.

      1. Very interesting, thanks. For archaeologists it totally makes sense when you put it that way. For video game assets, though, it still seems like too much work for what you get.

          1. Paleontologists and ichnologists need the same things as archaeologists. I have been working with PPT (the Python Photogrammetry Toolbox) a bit and have some trouble getting the lighting correct. I have very few points in the “shade” part of the object.

          Not sure how to get around this; is it possible that I have not used the correct camera settings? I think the post by M C may be valuable. Luca, could you confirm this?

          The second question I have relates to the lenses: would this technique work with a macro lens? Or will there be problems with the focal length or field of view?

          I want to be able to do this on hundreds of specimens (at multiple scales) over the next 2 years.

          Thanks

          1. Hi bootstrap:
            in 2011 my friends and I tried a test with PPT in which we did exactly what MC says. We put the object (a human skull) on a rotating table (from Ikea) and kept the camera at a fixed point. Here is the result:

            http://arc-team-open-research.blogspot.it/2011/07/python-photogrammetry-toolbox-ppt-and.html

            But the point of the test was just to see if we were able to trick the software (to let it “think” that we were going around the object rather than turning the object itself). It worked only because we erased the background with a black panel (a software version of that masking is sketched below). Here is another work done with the same technique (you can use the photoset for a test):

            http://arc-team-open-research.blogspot.it/2012/11/taung-project-3d-with-sfm-ibm.html
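
            For anyone wanting to do the background erasing in software instead of with a physical panel, a minimal sketch (Python with OpenCV; the file names and region of interest are placeholders, not anything from the linked posts):

            ```python
            import cv2
            import numpy as np

            def black_out_background(image_path, roi, out_path):
                """Keep only a rectangular region of interest (the object on
                the turntable) and paint the rest black, mimicking the black
                panel that hides the unchanging background. roi = (x, y, w, h)."""
                img = cv2.imread(image_path)
                mask = np.zeros(img.shape[:2], dtype=np.uint8)
                x, y, w, h = roi
                mask[y:y + h, x:x + w] = 255
                cv2.imwrite(out_path, cv2.bitwise_and(img, img, mask=mask))

            # Placeholder file name and ROI; in practice the ROI would be
            # chosen per photo set so the object fills the kept region.
            black_out_background("shot_01.jpg", (400, 200, 1200, 1400),
                                 "shot_01_masked.jpg")
            ```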

            Anyway, we later developed the technique further and saw that it is better to go around the object:

            http://arc-team-open-research.blogspot.it/2013/04/scanning-skulls-to-forensics-with-ppt.html

            Regarding the light and the camera lenses, these parameters should not influence the workflow: it is not “pure photogrammetry” but Structure from Motion, at least in the first step, so you should be able to reconstruct a point cloud with Bundler (the first step of PPT) even from pictures coming from different cameras and in different lighting conditions (a minimal illustration of that matching step follows at the end of this comment).
            For more info you can visit the blog ATOR (http://arc-team-open-research.blogspot.it/) and search for “PPT” using the search tool.

            I hope it was useful. Have a nice day!
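
            As promised above, a minimal stand-in for that first step (Python with OpenCV’s SIFT; this illustrates feature matching in general, not PPT’s actual code, and the file names are placeholders):

            ```python
            import cv2

            def match_features(path_a, path_b, ratio=0.75):
                """Detect SIFT keypoints in two photos and keep matches that
                pass Lowe's ratio test; SfM tools triangulate 3D points from
                matches like these across many views."""
                img_a = cv2.imread(path_a, cv2.IMREAD_GRAYSCALE)
                img_b = cv2.imread(path_b, cv2.IMREAD_GRAYSCALE)
                sift = cv2.SIFT_create()
                kp_a, des_a = sift.detectAndCompute(img_a, None)
                kp_b, des_b = sift.detectAndCompute(img_b, None)
                matcher = cv2.BFMatcher()
                good = [m for m, n in matcher.knnMatch(des_a, des_b, k=2)
                        if m.distance < ratio * n.distance]
                return good

            # Placeholder file names for two adjacent shots in a photo set.
            print(len(match_features("view_01.jpg", "view_02.jpg")), "good matches")
            ```

            Matches that survive the ratio test are what the SfM step triangulates into a point cloud, which is why differences in lighting and lenses matter less here than in classical photogrammetry.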
