Handheld 3D Scanning, Using Raspberry Pi 4 And Intel RealSense Camera

Raspberry Pi 4 (with USB 3.0) and Intel RealSense D415 depth sensing camera.

When the Raspberry Pi 4 came out, [Frank Zhao] saw the potential to make a realtime 3D scanner that was completely handheld and self-contained. The main sensor is an Intel RealSense D415 depth-sensing camera, which combines two IR cameras with an RGB camera. The Raspberry Pi 4 runs a piece of software called RTAB-Map — intended for robotic applications — which uses the camera's data to map the environment in 3D and localize the device within that 3D space. Everything gets recorded in realtime.
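
Depth cameras like the D415 deliver a per-pixel depth image, and turning those pixels into the 3D points that SLAM software consumes is a pinhole-model back-projection. Here's a minimal Python sketch of that step — the intrinsics below are made-up illustration values, not the D415's actual factory calibration:

```python
def deproject_pixel(u, v, depth_m, fx, fy, cx, cy):
    """Back-project pixel (u, v) with depth in meters to a camera-space 3D point."""
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    return (x, y, depth_m)

# Hypothetical intrinsics for illustration only (not real D415 calibration):
fx, fy = 600.0, 600.0   # focal lengths in pixels
cx, cy = 320.0, 240.0   # principal point (image center for a 640x480 frame)

point = deproject_pixel(400, 300, 1.5, fx, fy, cx, cy)
print(point)  # prints (0.2, 0.15, 1.5)
```

Doing this for every valid depth pixel in every frame, then transforming the points by the camera's estimated pose, is what accumulates into the point cloud described below.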

This handheld device can act as a 3D scanner because the data gathered by RTAB-Map consists of a point cloud of an area as well as depth information. Combined with the pose of the sensing unit (i.e. the location of the camera within that area), the point cloud can be exported as a mesh, even with a texture derived from the camera footage. An example is shown below the break.
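
RTAB-Map handles the export itself, but as a rough illustration of what a colored point-cloud file looks like, here's a toy ASCII PLY writer (a common interchange format for scan data) — a sketch for illustration, not RTAB-Map's actual exporter:

```python
import io

def write_ply(fileobj, points):
    """Write a colored point cloud as ASCII PLY.

    points: iterable of (x, y, z, r, g, b), with colors as 0-255 integers.
    """
    points = list(points)
    fileobj.write("ply\nformat ascii 1.0\n")
    fileobj.write(f"element vertex {len(points)}\n")
    fileobj.write("property float x\nproperty float y\nproperty float z\n")
    fileobj.write("property uchar red\nproperty uchar green\nproperty uchar blue\n")
    fileobj.write("end_header\n")
    for x, y, z, r, g, b in points:
        fileobj.write(f"{x} {y} {z} {r} {g} {b}\n")

# Two made-up points, 1.5 m from the camera, colored red and green:
buf = io.StringIO()
write_ply(buf, [(0.0, 0.0, 1.5, 255, 0, 0), (0.1, 0.0, 1.5, 0, 255, 0)])
```

A mesh export adds face (triangle) elements on top of the vertex list; the texture from the camera footage ends up as the per-vertex colors or as a separate image referenced by the mesh file.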

As far as 3D scanning goes, it’s OK if you’re thinking these results are not perfect. It’s true that the results don’t hold a candle to photogrammetry. But considering the low resolution of the Intel RealSense’s RGB camera and the fact that RTAB-Map is a SLAM (simultaneous localization and mapping) sandbox and not 3D scanning software, these results are amazing from a handheld device that is essentially outputting this in its spare time.

While this project might appear to consist of only a few components and some open software, getting it all to work together was a challenge. Check out the project’s GitHub repository to take advantage of [Frank]’s hard work, and watch the video embedded below.

14 thoughts on “Handheld 3D Scanning, Using Raspberry Pi 4 And Intel RealSense Camera”

    1. It’s clear there are plenty of apertures for airflow around the display and through the camera. Sure could use a guard to protect those fan blades though — would have been easy to design it right into the case too.

      Neat concept, though I’m surprised how noisy the output is.

  1. That’s INCREDIBLE…

    Seriously, to do this in real time this cheaply… off the shelf.
    Wow!

    There’s so much potential here.

    I feel sorry for the author in a way; I hope he understands the rabbit hole he’s just entered and just how far it’s gonna go.

    I’ll say it again, incredible.

    1. I haven’t worked on this in a long time. I was going to bring it to Pi Day at the library, but COVID-19 got that cancelled.

      The company I work for (thanks to Hackaday) does plenty of research into SLAM and I sometimes help out.

      I also took this thing to the last blood drive, the IR camera is great for showing veins!

    2. Honestly, I did similar things with 3D SLAM in ROS in 2011… without a RealSense… so frankly not impressed.

      An RPi just doesn’t have the horsepower to do much stuff like this anyway.

      You’d be better off investing in a 20+MP camera + photogrammetry software.

  2. I use the XYZ (2.0) scanner (the one with the terrible software) with Artec Studio 14. Yeah, that license is very expensive compared to the scanner, but the quality of the scans gets very high. We should compare the quality of those scans to some photogrammetry scans; I believe they are very comparable, or my scans could be even better. The benefit of Artec Studio is that their software has a lot of editing tools, and the funny thing is the XYZ scanner is compatible with the MacBook Pro 2016/17/18/19, where the Artec scanners need a Thunderbolt to USB adapter (no, not a USB-C to USB-A adapter, but an active one that bridges Thunderbolt’s PCI-E to USB).

  3. Neat and fun work, but what the video does not show is a native point cloud without the color/image overlay on it. I see a lot of these types of projects (again, great work), but when you put the image overlay on top of a point cloud, you don’t get to see how accurate, or far too often inaccurate, the results actually are.

  4. I wanted to see the results with the object spinning and the camera fixed.
    Interesting concept. Need to research the price, and whether the SW can be improved or replaced with different SW.

  5. I’ve been wondering about being able to create a 3D model (like a room) from a ToF camera’s live/video data, which I happen to have access to. Looks like the software might work quite nicely. Gotta try it.
