When the Raspberry Pi 4 came out, [Frank Zhao] saw the potential to make a realtime 3D scanner that was completely handheld and self-contained. The main sensor is an Intel RealSense D415 depth-sensing camera, which pairs two IR cameras with an RGB camera. The Pi runs a piece of software called RTAB-Map — intended for robotic applications — which uses the camera data to map the environment in 3D and localize itself within that 3D space. Everything gets recorded in realtime.
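If you want to poke at the sensor side yourself, a minimal sketch like the one below (not [Frank]'s actual pipeline, and 640×480 at 30 fps is just a plausible configuration) shows how the D415's depth and color streams can be pulled with the pyrealsense2 library — the same kind of RGB-D data RTAB-Map consumes:

```python
# Minimal sketch, not [Frank]'s actual pipeline: grab synchronized depth and
# color frames from a RealSense D415 with pyrealsense2.
import pyrealsense2 as rs

pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)   # stereo-IR depth
config.enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 30)  # RGB for texturing
pipeline.start(config)

try:
    frames = pipeline.wait_for_frames()   # blocks until a full frameset arrives
    depth = frames.get_depth_frame()
    color = frames.get_color_frame()
    # get_distance() returns the range in meters at a given depth-frame pixel.
    print("Distance at image center: %.2f m" % depth.get_distance(320, 240))
finally:
    pipeline.stop()
```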
This handheld device can act as a 3D scanner because RTAB-Map gathers both a point cloud of an area and depth information. Combined with the origin of the sensing unit (i.e. the location of the camera within that area), the point cloud can be exported as a mesh, with a texture derived from the camera footage applied on top. An example is shown below the break.
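To give a feel for that export step, here's an illustrative sketch only — it assumes Open3D rather than RTAB-Map's own export tooling, and the file names and identity pose are placeholders. A per-frame point cloud is moved into the world frame using the camera pose the SLAM side estimated, then turned into a mesh:

```python
# Hedged sketch, assuming Open3D and placeholder inputs (not RTAB-Map's exporter).
import numpy as np
import open3d as o3d

cloud = o3d.io.read_point_cloud("frame_000.ply")  # hypothetical per-frame cloud
pose = np.eye(4)                                   # 4x4 camera pose from SLAM (placeholder)

cloud.transform(pose)        # sensor frame -> world frame
cloud.estimate_normals()     # Poisson surface reconstruction needs normals

mesh, _densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
    cloud, depth=9)
o3d.io.write_triangle_mesh("scan_mesh.ply", mesh)
```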
As far as 3D scanning goes, it’s OK if you’re thinking these results are not perfect. True, they don’t hold a candle to photogrammetry. But considering the low resolution of the RealSense’s RGB camera, and the fact that RTAB-Map is a SLAM (simultaneous localization and mapping) toolkit rather than dedicated 3D scanning software, the results are amazing for a handheld device that is essentially producing them in its spare time.
While this project might appear to consist of only a few components and some open-source software, getting it all to work together was a challenge. Check out the project’s GitHub repository to take advantage of [Frank]’s hard work, and watch the video embedded below.
You would want some exhaust vents on the opposite side of that fan to get air flow across the heat sink.
It’s clear there are plenty of apertures for airflow around the display and through the camera. Sure could use a guard to protect those fan blades though — would have been easy to design it right into the case too.
Neat concept, though I’m surprised how noisy the output is.
That’s the best thing you can say about this super impressive design? Was the color OK?
That’s INCREDIBLE…
Seriously, to do this in real time this cheaply….off the shelf
Wow!
There’s so much potential here,
I feel sorry for the author in a way, I hope he understands the rabbit hole he’s just entered and just how far it’s gonna go.
I’ll say it again, incredible.
I haven’t worked on this in a long time. I was going to bring it to Pi Day at the library, but COVID-19 got that cancelled.
The company I work for (thanks to Hackaday) does plenty of research into SLAM and I sometimes help out.
I also took this thing to the last blood drive, the IR camera is great for showing veins!
Honestly, I did similar things with 3D SLAM in ROS in 2011… without a RealSense… so frankly not impressed.
An RPi just doesn’t have the horsepower to do much stuff like this anyway.
You’d be better off investing in a 20+MP camera + photogrammetry software.
I was impressed. For this size and cost, it’s pretty impressive.
I use the XYZ (2.0) scanner (the one with the terrible software) with Artec Studio 14. Yeah, that license is very expensive compared to the scanner, but the quality of the scans gets very high. We should compare the quality of those scans to some photogrammetry scans; I believe they are very comparable, or my scans could even be better. The benefit of Artec Studio is that the software has a lot of editing tools, and the funny thing is the XYZ scanner is compatible with the MacBook Pro 2016/17/18/19, whereas the Artec scanners need a Thunderbolt-to-USB adapter (no, not a USB-C to USB-A one, but an active adapter that converts the PCIe in Thunderbolt to USB).
I also use the XYZ scanner 2.0 with Artec Studio… the license is really expensive, but they released HD mode for AS. It looks quite promising (https://www.artec3d.com/portable-3d-scanners/hd-mode) and I hope it will work with my XYZ scanner, so I’ll spend much less time on editing with this feature.
Did it work?
I’m in the market for a cost-effective solution, and the XYZ + Artec HD combo seemed promising.
This IS interesting, although I won’t be giving up on my Alicevision yet. Yes, Frank, whether I use it or not, thank you for the hard work.
Neat and fun work, but what the video does not show is a native point cloud without the color/image overlay on it. I see a lot of these types of projects (again, great work); however, when you put the image overlay on top of a point cloud, you don’t get to see how accurate, or far too often inaccurate, the results actually are.
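If you want to make that check on your own exports, something like this quick sketch (assuming Open3D, and the file name is just an example) strips the color so the raw geometry and its noise are visible:

```python
# Hedged sketch: drop the RGB overlay and paint the cloud a flat gray so
# surface noise isn't hidden behind the texture.
import open3d as o3d

cloud = o3d.io.read_point_cloud("exported_scan.ply")  # example file name
cloud.paint_uniform_color([0.6, 0.6, 0.6])            # discard per-point colors
o3d.visualization.draw_geometries([cloud])            # inspect raw geometry only
```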
This is a cool demo even though I shot you down in the above comment.
I hope you can find ways to refine it further!
I wanted to see the results with the object spinning and the camera fixed.
Interesting concept. Need to research the price, and whether the SW can be improved or replaced with different SW.
That would confuse the algorithms used here. You definitely need different software. Many DIY “turntable” style 3D scanners must use a white background.
I’ve been wondering about being able to create a 3D model (like a room) from a ToF camera’s live/video data, which I happen to have access to. Looks like the software might work quite nicely. Gotta try it.
Did you ever try this and/or get this working?
Can we get the file for the case of this scanner?
Is this RealSense-to-Pi 4 connection compatible with newer versions of Python?