3D Render Live With Kinect And Bubble Boy

[Mike Newell] dropped us a line about his latest project, Bubble Boy, which uses the Kinect's point cloud functionality to render polygonal meshes in real time. In the video, [Mike] goes through the entire process, from installing the libraries to grabbing the code off of his site. Currently the rendering looks like a clump of dough (nightmarishly clawing at us with its nubby arms).

[Mike] is looking for suggestions on more efficient mesh and point cloud code, as he is unable to run at any higher resolution than what is shown in the video. You can hear his computer fan spool up after just a few moments of rendering! Anyone good with point clouds?

Also, check out his video after the jump.

[vimeo http://vimeo.com/22542088 w=470]

13 thoughts on “3D Render Live With Kinect And Bubble Boy”

  1. Rendering a solid from a point cloud is a pretty well-documented problem. One nice technique is described in this paper from NVIDIA:

    http://developer.download.nvidia.com/presentations/2010/gdc/Direct3D_Effects.pdf

    It’s used to render particle fluid simulations, but it can be applied to just about any point cloud. Since it runs entirely on the GPU, it’s pretty scalable. I was able to render about 30K particles without any problems using this technique.

    Hope it helps!
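
    As a starting point, here is a rough Processing sketch of the most basic version of that idea: skip meshing entirely and splat the raw Kinect samples as screen-space points in one batched draw. It assumes the SimpleOpenNI library and leaves out the depth-smoothing and shading passes from the slides.

        import SimpleOpenNI.*;        // assumes the SimpleOpenNI Kinect library

        SimpleOpenNI kinect;
        int step = 4;                 // sample every 4th depth pixel to keep the count down

        void setup() {
          size(800, 600, P3D);
          kinect = new SimpleOpenNI(this);
          kinect.enableDepth();
          stroke(255);
          strokeWeight(2);            // each sample becomes a small screen-space splat
        }

        void draw() {
          background(0);
          kinect.update();
          translate(width / 2, height / 2, -1000);
          rotateX(PI);                // flip the cloud right side up

          PVector[] cloud = kinect.depthMapRealWorld();
          int w = kinect.depthWidth();
          int h = kinect.depthHeight();

          beginShape(POINTS);         // one batched draw instead of one point() call per sample
          for (int y = 0; y < h; y += step) {
            for (int x = 0; x < w; x += step) {
              PVector p = cloud[y * w + x];
              if (p.z > 0) vertex(p.x, p.y, p.z);   // skip depth holes
            }
          }
          endShape();
        }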

  2. I’ve messed with using MeshLab for converting point cloud data into usable models. Let me tell you, it can be pretty processor intensive. As in, my nice Core i7 CAD machine doesn’t like doing it.

    But there has been some work on creating models on the fly from the Kinect. I’m sure with some clever work it could be done, but unfortunately I don’t know how.

  3. Your problem is Processing. Java is wicked slow; you should be using C, C++, or (preferably) Haskell, which compiles to native code. Anything that is interpreted, runs in a virtual machine, or uses any execution path other than compilation to machine code will be slow.

  4. I think a really simple way to do it would be to generate the mesh once and then deform it, instead of continuously generating new meshes. If you really do need to regenerate meshes (to respond to changes such as people walking in and out of frame), you can spread the regeneration over a few frames and re-sync the model and skeleton once the new mesh is ready, just to correct errors. By dividing the work across updates and deforming existing meshes instead of regenerating them, the frame rate should go up considerably. A rough sketch of the deform-only approach is below.
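
    One hedged way to picture it in Processing (again assuming the SimpleOpenNI library): build the grid triangulation once in setup(), then only push new Kinect positions into the existing vertices each frame, so there is no per-frame re-meshing cost.

        import SimpleOpenNI.*;            // assumes the SimpleOpenNI Kinect library

        SimpleOpenNI kinect;
        PShape mesh;                      // built once; only its vertices get updated
        int step = 8;                     // grid spacing in depth pixels
        int depthW = 640, depthH = 480;   // standard Kinect depth resolution
        int cols, rows;

        void setup() {
          size(800, 600, P3D);
          kinect = new SimpleOpenNI(this);
          kinect.enableDepth();
          cols = depthW / step;
          rows = depthH / step;

          // Build the triangulation once, with placeholder positions.
          mesh = createShape();
          mesh.beginShape(TRIANGLES);
          mesh.noStroke();
          mesh.fill(180);
          for (int y = 0; y < rows - 1; y++) {
            for (int x = 0; x < cols - 1; x++) {
              mesh.vertex(x, y, 0);     mesh.vertex(x + 1, y, 0);     mesh.vertex(x, y + 1, 0);
              mesh.vertex(x + 1, y, 0); mesh.vertex(x + 1, y + 1, 0); mesh.vertex(x, y + 1, 0);
            }
          }
          mesh.endShape();
        }

        void draw() {
          background(0);
          kinect.update();
          PVector[] cloud = kinect.depthMapRealWorld();

          // Deform: write the new Kinect positions into the existing vertices in the
          // same order they were created. Depth holes (z == 0) will show up as
          // stretched triangles; a real version would filter or clamp them.
          int i = 0;
          for (int y = 0; y < rows - 1; y++) {
            for (int x = 0; x < cols - 1; x++) {
              int[][] corners = {{x, y}, {x + 1, y}, {x, y + 1}, {x + 1, y}, {x + 1, y + 1}, {x, y + 1}};
              for (int[] c : corners) {
                PVector p = cloud[c[1] * step * depthW + c[0] * step];
                mesh.setVertex(i++, p.x, p.y, p.z);
              }
            }
          }

          translate(width / 2, height / 2, -1000);
          rotateX(PI);                    // flip the mesh right side up
          shape(mesh);
        }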

  5. @ferdie – you’re right! I’m sorry, I’ll get a download up there later this afternoon!

    @Franklyn – good idea, so just monitor for a change in the object and regenerate that specific area as opposed to the whole thing, right? That may take some insane logic, but it might be worth a shot.

    @UltimateJim – thanks for the advice, I’ll look into how to render NURBS from a point cloud… seems fairly straightforward.
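
    The change-detection part might look something like this (a hypothetical helper, not from my actual code): diff the new depth frame against the previous one in coarse tiles and only re-mesh the tiles that moved.

        int tile = 40;              // tile size in depth pixels (640x480 divides evenly by 40)
        int[] prevDepth;            // depth map from the previous frame

        // Returns one flag per tile: true means that tile changed enough to be
        // worth re-meshing; everything else can keep its old geometry.
        boolean[] dirtyTiles(int[] depth, int w, int h, int threshold) {
          int tx = (w + tile - 1) / tile;
          int ty = (h + tile - 1) / tile;
          boolean[] dirty = new boolean[tx * ty];
          if (prevDepth != null) {
            for (int y = 0; y < h; y++) {
              for (int x = 0; x < w; x++) {
                if (abs(depth[y * w + x] - prevDepth[y * w + x]) > threshold) {
                  dirty[(y / tile) * tx + (x / tile)] = true;
                }
              }
            }
          }
          prevDepth = depth.clone();
          return dirty;
        }

    Calling it once per frame with the raw depth map (e.g. SimpleOpenNI’s depthMap()) would give a short list of regions to regenerate instead of the whole mesh.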

  6. Not exactly, I was thinking more like generating one point cloud and then using a skeletal structure (that you can track) to basically move the points around instead. But it seems like what you want is more of a real-time 3D scanner.
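
    In its simplest form that could be rigid binding: attach each point to its nearest tracked joint once, then translate it by that joint's motion every frame. This is only a sketch of the idea; the joint positions are assumed to come from whatever skeleton tracker is in use, and the function names here are made up.

        PVector[] restJoints, restPoints;    // captured once, in the same pose
        int[] binding;                       // index of the joint each point follows

        // Bind every point to its nearest joint in the rest pose.
        void bindToSkeleton(PVector[] points, PVector[] joints) {
          restPoints = points;
          restJoints = joints;
          binding = new int[points.length];
          for (int i = 0; i < points.length; i++) {
            float best = Float.MAX_VALUE;
            for (int j = 0; j < joints.length; j++) {
              float d = PVector.dist(points[i], joints[j]);
              if (d < best) { best = d; binding[i] = j; }
            }
          }
        }

        // Move each point by the translation of the joint it is bound to.
        PVector[] deform(PVector[] currentJoints) {
          PVector[] out = new PVector[restPoints.length];
          for (int i = 0; i < restPoints.length; i++) {
            PVector offset = PVector.sub(currentJoints[binding[i]], restJoints[binding[i]]);
            out[i] = PVector.add(restPoints[i], offset);
          }
          return out;
        }

    bindToSkeleton() would run once on a captured frame, then deform() every frame with the latest joint positions, instead of re-scanning the whole cloud.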
