[QLRO] wanted a 3D scanner, but didn’t like any of the existing designs. Some were too complex. Some were simple but required you to do things by hand. That led to him designing his own that he calls AAScan. You can see the thing operating in the video below.
In general, you can either move the camera around the object or move the object while the camera stays fixed. This design chooses the latter. You’ll need a stepper motor with a driver board and an Arduino to rotate the turntable. You also need a computer running Python and Meshroom. The phone has to run Python as well; [QLRO] used QPython on an Android device.
The base of the turntable has a crazy, high-contrast pattern. Presumably, this gives Meshroom plenty of features to track so it can tell when the orientation of the object changes. We wondered if some sort of logarithmic-scaled grid might work even better.
The source code includes two Python files: one for the client and one for the server. You’ll have to change a few things if you are on Windows instead of Linux, or if your Arduino doesn’t show up on the same port as his did. The Python code is surprisingly simple. The PC just sends the string “chez” to the phone via the network and “go” to the Arduino via the serial port. The phone sees that string and takes a picture, while the Arduino turns the platform 1/180th of the way around. As you might expect, then, the whole thing repeats 180 times.
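Just to make the flow concrete, here’s a minimal sketch of what the PC side could look like (the phone address, baud rate, reply handshake, and timing are our assumptions, not [QLRO]’s actual client code):

```python
# Hypothetical PC-side control loop -- the phone address, baud rate, and
# reply handshake are assumptions, not lifted from [QLRO]'s code.
import socket
import time

import serial  # pyserial

PHONE_ADDR = ("192.168.1.50", 8000)  # assumed address of the QPython server
ARDUINO_PORT = "/dev/ttyACM0"        # on Windows this might be "COM3"

arduino = serial.Serial(ARDUINO_PORT, 9600, timeout=5)
time.sleep(2)  # most Arduinos reset when the serial port opens

for shot in range(180):
    # Tell the phone to take a picture.
    with socket.create_connection(PHONE_ADDR) as s:
        s.sendall(b"chez")
        s.recv(16)  # assumption: the phone replies once the photo is saved
    # Tell the Arduino to advance the turntable 1/180th of a turn (2 degrees).
    arduino.write(b"go")
    time.sleep(1)  # let the platform settle before the next shot
```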
The phone holder is printed, but honestly, it looks like any kind of articulated phone tripod would work just as well. We might even be tempted to just take a stepper motor and cut the platform out of cardboard and call it a day. Still, you probably have a 3D printer if you want to make this, so why not use it?
If you want to go more complex, you might try this 3D printed scanner. Or, try OpenScan.
Shame that Meshroom still requires CUDA, whereas OpenCL runs on Intel, AMD, and ARM GPUs, or even on CPUs.
You are not alone.
https://github.com/alicevision/AliceVision/issues/439
Maybe it could run on the NVIDIA Jetson Nano? But server.py would need to be changed to get images from a camera attached to the Jetson Nano if you wanted an all-in-one approach.
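For what it’s worth, the capture side could probably be swapped for something like this (a sketch assuming OpenCV and a camera on device index 0; CSI cameras on the Nano typically need a GStreamer pipeline string instead):

```python
# Hypothetical capture routine for a camera attached to the Jetson Nano,
# using OpenCV; the device index and filename pattern are made up.
import cv2

cam = cv2.VideoCapture(0)

def take_picture(index):
    ok, frame = cam.read()
    if not ok:
        raise RuntimeError("camera read failed")
    cv2.imwrite(f"scan_{index:03d}.jpg", frame)
```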
I’d rather have an architecture-agnostic approach.
agreed
I understand there’s a lot of processing involved in generating 3D scans, but you would think the fidelity would be amazing, especially if your camera is fixed in space and you know the exact angle of your object. These scanners seem to just generate garbage, IMO. Maybe I’m just a fidelity snob, though.
I was playing with photogrammetry last year, and I ended up deliberately lowering the resolution of the photos because otherwise it was waaay too slow. The complexity of the problem grows really fast with resolution (imagine every feature on every photo being compared to every other feature on every other photo).
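To put rough numbers on that scaling (a back-of-envelope sketch, not how Meshroom’s matcher actually works):

```python
# Back-of-envelope: brute-force matching compares every feature in every
# photo against every feature in every other photo, so cost scales with
# photo pairs times features squared.
def match_ops(n_photos, features_per_photo):
    pairs = n_photos * (n_photos - 1) // 2
    return pairs * features_per_photo ** 2

print(f"{match_ops(180, 10_000):.2e}")  # full res: ~1.6e12 comparisons
print(f"{match_ops(180, 2_500):.2e}")   # quarter the features: ~16x cheaper
```

Halving the image resolution roughly quarters the feature count, which cuts the matching work by about 16×.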
There are quite a lot of nonlinearities to take into account. It would be trivial if you had perfect images, but you’re dealing with a combination of lens distortions, parallax, and simply a lack of optical resolution. Now add in the difficulty of reliably matching feature points (which can also deform in weird ways due to geometry, compounded with the aforementioned distortions) and you have an interesting mathematical problem.
And we haven’t even included the “let’s make it prettier for human perception” filters added by essentially all phone cameras.
This is the main reason why structured-light scanners easily outperform photogrammetry: you know where to expect a feature, which significantly tightens the problem space.
And considering all that, it’s freakin’ amazing how the eyes and the human mind manage to make it all work.
One problem I see with moving the object and not the camera is that the lighting on a given part of the object will change as it rotates. Would the reconstruction look better if the scan were performed in an isolated box, where the light either moves with the object or is close to even from all directions?
I would think the opposite: keeping both the lighting and the camera fixed means consistent lighting in each photo, resulting in nearly uniform lighting in the render…
Think of a white cylinder: what gives you the visual cues that it’s round? Now light it “perfectly” with diffuse light from all directions, and you have what looks like a white bar in front of you.
Then do what many others do (FabScan, OpenPiScan) and use a controllable LED light ring around your camera lens, so you can dial in the color and lumens to optimize results.
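With an addressable ring, that’s only a few lines (a sketch assuming a 16-pixel NeoPixel ring on a Raspberry Pi GPIO pin, driven with Adafruit’s CircuitPython neopixel library; the pin and pixel count are made up):

```python
# Hypothetical LED ring control: a 16-pixel NeoPixel ring on GPIO18,
# using Adafruit's CircuitPython neopixel library.
import board
import neopixel

ring = neopixel.NeoPixel(board.D18, 16, brightness=0.5)
ring.fill((255, 240, 220))  # slightly warm white; tune color per material
```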
The “phone holder” is a 3d printed clamp :D
I’m seeking help with my 3D scanner. I tried to contact the supplier but got no answer. The problem is that I have a calibration file with a .clb extension, and I would like to know if it is possible to modify that file. Thank you in advance.