The old gen 1 Kinect has seen a fair bit of use in the field of making 3D scans out of real-world scenes. Now that Xbox 360 Kinects are winding up at yard sales and your local Goodwill, you might even have a chance to pick one up for pocket change. Until now, though, scanning objects in 3D has only been practical in a studio or workshop setting; for a mobile, portable scanner, you'd need to lug around a computer and a power supply, and that's not really something you can fit in a backpack.
Now, finally, that may be changing. [xxorde] can now get depth data from a Kinect sensor with a Raspberry Pi, and most likely with just about every other ARM board out there as well. It's a kernel driver that's small, fast, and does just one thing: it turns the Kinect into a webcam that displays depth data.
Of course, a portable Kinect 3D scanner has been done before, but that was with an absurdly expensive Gumstix board. With a Raspi or BeagleBone Black, this driver could be the beginning of a very cheap 3D scanner, one far more useful than the current commercial or DIY desktop scanners. And since the driver presents the Kinect as an ordinary webcam, grabbing depth frames is straightforward, as the sketch below shows.
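Here's a minimal sketch of reading the stream with OpenCV, assuming the driver exposes depth frames as a standard V4L2 grayscale device on /dev/video0; the actual device node and pixel format will depend on your setup.

    # Minimal sketch: read depth frames from a Kinect-as-webcam driver.
    # Assumes /dev/video0; adjust if the Kinect enumerates elsewhere.
    import cv2

    cap = cv2.VideoCapture(0)
    if not cap.isOpened():
        raise RuntimeError("could not open the depth device")

    while True:
        ok, frame = cap.read()  # pixel values are depth readings, not brightness
        if not ok:
            break
        # Stretch the raw values so near/far differences are visible on screen
        view = cv2.normalize(frame, None, 0, 255, cv2.NORM_MINMAX).astype("uint8")
        cv2.imshow("kinect depth", view)
        if cv2.waitKey(1) == 27:  # Esc quits
            break

    cap.release()
    cv2.destroyAllWindows()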
Hmm, can someone point out what the difference is between this driver and the one that has been in the Linux kernel for more than two years (see for example: http://blog.jozilla.net/2012/03/29/getting-up-and-running-with-the-kinect-in-ubuntu-12-04/), which I've previously used with the Kinect on a RasPi?
I am probably completely wrong, but ISTR the kernel drivers are x86-only.
Well, if it’s a USB camera driver in the mainline kernel, it most likely runs on any platform (it wouldn’t have been mainlined otherwise).
The difference is that Antonio’s driver (already in the kernel) gives you only the camera images, not the depth data.
<3 <3 <3 <3 <3
this + OctoPrint on another Pi are my killer app.
Does it need to be on another Pi? Even if the Kinect uses all the CPU capacity, just scan, store the scan, then switch over into printing mode.
Are you new to HaD?
Never do with common sense what you can do by adding more Pis or Arduinos!
Here are helpful links:
http://www.raspberrypi.org/forums/viewtopic.php?f=37&t=71919
http://ariandy1.wordpress.com/2013/02/27/getting-raspberry-pi-openni-and-asus-xtion-pro-live-to-work/
http://mewgen.com/Ge107_files/20120921%20Setting%20up%20Rasberry%20pi%20for%20the%20Xtion%20and%20kinect.html
http://answers.ros.org/question/62867/raspberry-pi-openni-usb-interface-not-supported/
http://www.pcl-users.org/Xtion-Pro-Live-Raspberry-PI-Streaming-td4024213.html
Would this work for Asus Xtion too?
Probably.
This is just the gspca-based Kinect kernel driver that has existed since 2011, with minor modifications to return the depth stream instead of the RGB “web cam” stream.
The interesting part would have been to see what a severely underpowered device is supposed to do with the data, and how well getting a depth frame that isn’t missing parts actually works on a Raspi…
And in case anyone is wondering why it includes a copy of the gspca code: the original code only returns the RGB stream because the gspca framework is meant for webcams, where only the first USB endpoint is used. The Kinect, however, has separate endpoints for the RGB camera and the depth camera. That’s why this modified version needs a patched gspca, and it’s also why there is no code for the depth stream in the kernel.
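If you want to see that endpoint layout for yourself, a few lines of pyusb will print it. This is just a sketch: 0x045e is Microsoft’s vendor ID, and 0x02ae should be the Kinect camera’s product ID, but check lsusb against your own unit, since hardware revisions may differ.

    # List the Kinect camera's USB endpoints with pyusb.
    # 0x045e = Microsoft vendor ID, 0x02ae = Kinect camera (verify with lsusb).
    import usb.core

    dev = usb.core.find(idVendor=0x045E, idProduct=0x02AE)
    if dev is None:
        raise SystemExit("Kinect camera not found")

    for cfg in dev:
        for intf in cfg:
            for ep in intf:
                print(f"endpoint 0x{ep.bEndpointAddress:02x}, "
                      f"max packet {ep.wMaxPacketSize}")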
“severely underpowered device”
“for a lot of projects being able to update a face tracker five times a second is more than enough”
http://hackaday.com/2013/03/04/using-opencv-with-the-raspberry-pi/
” work on a raspi”
Yes, because Raspberry Pi computers feature “raspberry logic” and some programs are not compatible.
I dunno what you’re getting at here, plz explain.
Does anyone know what is going on with the Kinect 2 from the new Xbox One? When will it get released? I only noticed that there was a sign-up period for 500 developers to receive early access. But when will tinkerers get to tinker with it?
The RasPi probably won’t be powerful enough to generate a mesh and the Kinect is terrible when used as a handheld device. It loses its bearings too quickly from my experience.
“The RasPi probably won’t be powerful enough to generate a mesh”
http://www.roylongbottom.org.uk/Raspberry%20Pi%20Benchmarks.htm
Raspberry Pi can draw 16,000 3-D triangles on the screen @ 10 Hz update rate
Raspberry Pi 3-D graphics subsystem BLOWS AWAY the “mind-boggling” 3-D performance of many early Silicon Graphics workstations costing > $100K
It’s not outputting the graphics that’s the problem. I think the problem is feeding the Pi a 2D image of your subject, overlaid with a laser grid, and expecting it to turn that into even a point cloud, and from there into a polygonal representation.
That all needs a lot of power, and I don’t think even the Xbox 360 bothers doing it; a lot of the time it just generates skeletons from the centres of mass of the large moving things (i.e. people) in the image.
There’s a LOT of work in getting a real representation of the world, from the view a Kinect gives. Even if you can move it about.
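For a sense of what just the first step looks like, here’s a minimal sketch of back-projecting a depth frame into a point cloud with a pinhole camera model. The intrinsics are ballpark figures for the Kinect’s depth camera, not calibrated values, and the depth units are assumed to be millimeters.

    # Back-project an HxW depth image (mm) into an Nx3 point cloud (meters).
    # FX/FY/CX/CY are rough Kinect depth-camera values; calibrate for real use.
    import numpy as np

    FX = FY = 594.0          # approximate focal length in pixels
    CX, CY = 320.0, 240.0    # principal point for a 640x480 frame

    def depth_to_points(depth_mm: np.ndarray) -> np.ndarray:
        h, w = depth_mm.shape
        u, v = np.meshgrid(np.arange(w), np.arange(h))
        z = depth_mm.astype(np.float64) / 1000.0   # mm -> m
        x = (u - CX) * z / FX
        y = (v - CY) * z / FY
        points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
        return points[points[:, 2] > 0]            # drop pixels with no reading

This part is actually cheap; it’s the registration and meshing that come afterwards that eat the CPU.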
The Ras Pi’s 3D graphics don’t matter at all in this. They’re not reprogrammable in some CUDA-like way to help with the maths. It’s the main CPU that would do all the work, and it’s underpowered for real high-end stuff. Not that it hasn’t brought comparatively “large” amounts of MIPS to places that never had them before, but it’s not a powerful number cruncher.
It’s also, as you kind of prove yourself, pretty irrelevant to compare CPU power of a few years ago to now, especially when you’re talking about 3D graphics. The Ras Pi blows the late 1990s away, but so do most phones.
With the V3D stuff that was released a while back, the 3D cores could be programmed to aid in this task, but it currently lacks any standard CUDA-like API, so you would have to do all that coding directly in V3D assembly.
Yes, well done. My washing machine “blows away” the performance of a decrepit SGI too, but neither that nor the Pi is any match for a modern GPU, and neither is close to enough power to run a 3D scan.
You may well be able to record a scan and leave it to process over many hours, but to get the best results you need realtime feedback on whether it is maintaining its lock on its surroundings.
I don’t think the comparison to old SGIs matters. The software I’ve seen for working with PrimeSense-based devices often struggles to run well on a brand-new quad-core i7. I don’t see how the RasPi can hold its own against that, unless there’s some special “raspberry magic” going on.
The PS4 equivalent is USB 3.0 only (literally none of the USB 2.0 data lines are used).
Has anyone fixed the Raspberry Pi’s isochronous USB transfers yet? Or is it still an unsolvable hardware problem? I couldn’t get USB cams, USB audio, or Bluetooth A2DP to work stably enough because of this (i.e. working without lost frames, crashing drivers, or jerky audio).
Yes, I also have problems when I try to extract tractor trailers from the ditch with my bicycle.
That’s a rather broad metaphor for the very specific problem the chap is mentioning. Do you know much about the Pi’s problems with isochronous USB transfers? Are you saying this mode is impossible because of a lack of CPU speed? If that’s what you’re saying, it’d be more helpful to say so specifically, since it would eliminate other possible causes.
It’s not a CPU thing; I’ve used the same Logitech USB 2.0 webcam on other, slower platforms before (a Philips TriMedia processor). The Raspberry Pi’s isochronous USB transfer problem was known from the start. Some blame the LAN9512 Ethernet/USB hub chip for taking priority and interrupting the data stream when it thinks it has some Ethernet data, but the strange thing is that it also happens on the Model A. The problem becomes more apparent when multiple USB devices are connected, for example a keyboard and a webcam.
From the forums, it looks like they tried to work around this problem in software, but it’s actually a USB hardware stack issue on the CPU chip itself.
Also, it looks like the BeagleBone Black does not have this problem and can work with USB webcams properly. So my opinion is: don’t do this Kinect stuff on the Raspberry Pi. Use a BeagleBone Black or something else, but not the Raspberry Pi.
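If you’d rather check this on your own hardware than take anyone’s word for it, here’s a rough diagnostic sketch: grab frames for ten seconds and report the effective rate and any failed reads. Running it on a Pi and on a BeagleBone Black with the same camera makes the difference visible. The device index is an assumption; adjust for your setup.

    # Rough frame-drop diagnostic: count good and failed reads over ten seconds.
    import time
    import cv2

    cap = cv2.VideoCapture(0)  # assumed device index; adjust as needed
    good, bad = 0, 0
    start = time.time()
    while time.time() - start < 10.0:
        ok, _ = cap.read()
        if ok:
            good += 1
        else:
            bad += 1
    cap.release()

    elapsed = time.time() - start
    print(f"{good} frames in {elapsed:.1f}s "
          f"({good / elapsed:.1f} fps), {bad} failed reads")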
Does it have to use a Kinect, or can I just use a webcam plus software to construct a 3D mesh from 2D images? It would be a significantly cheaper build.
Has anyone here succeeded in getting the Raspberry Pi to construct a 3D model with a Kinect?
hi
is this project dead???