[Bharath] recently uploaded the source code for an OpenCV-based pattern recognition platform that can be used for augmented reality, or even robots. It is written in C++ and uses the OpenCV library to detect and interpret marker patterns within a single frame.
The program starts by focusing on one object at a time. This approach avoids building additional arrays holding information about every blob in the image, which could otherwise cause problems.
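The detection details are in the source itself, but as a rough illustration of the “one object at a time” idea, a minimal OpenCV sketch might threshold the frame and keep only the largest contour as the marker candidate instead of cataloguing every blob. The filenames, thresholds, and overall approach below are our own assumptions, not [Bharath]’s implementation:

```cpp
// Hypothetical sketch: isolate a single marker candidate per frame
// instead of keeping arrays describing every blob in the image.
#include <opencv2/opencv.hpp>
#include <algorithm>
#include <vector>

int main() {
    cv::Mat frame = cv::imread("frame.png");          // one captured frame
    if (frame.empty()) return 1;

    cv::Mat gray, bin;
    cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);
    cv::threshold(gray, bin, 0, 255, cv::THRESH_BINARY_INV | cv::THRESH_OTSU);

    std::vector<std::vector<cv::Point>> contours;
    cv::findContours(bin, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);
    if (contours.empty()) return 0;

    // Keep only the largest blob as the marker candidate for this frame.
    auto biggest = std::max_element(contours.begin(), contours.end(),
        [](const std::vector<cv::Point> &a, const std::vector<cv::Point> &b) {
            return cv::contourArea(a) < cv::contourArea(b);
        });

    cv::rectangle(frame, cv::boundingRect(*biggest), cv::Scalar(0, 255, 0), 2);
    cv::imshow("marker candidate", frame);
    cv::waitKey(0);
    return 0;
}
```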
Although this implementation does not track marker information through multiple frames, it does provide a nice foundation for integrating pattern recognition into computer systems. The tutorial is straightforward and easy to read. The entire program and source code can be found on GitHub under a zero license, so anyone can use it. A video of the program is after the break:
Continue reading “Open Source Marker Recognition for Augmented Reality”
Virtual reality, by its very nature, pushes the boundaries of what we perceive as existence, tricking the mind into believing that the computer-generated environment the user is thrust into is actually a real place. So, in the spirit of seeing what is possible in VR, a developer named [Jacques] hooked up a Raspberry Pi to an Oculus Rift. Using OpenGL ES, the same graphics rendering API found on just about every mobile platform these days, he rendered a floating, rotating cube.
All of his tests were done on a Release build using the official vertex and fragment shaders, with no attempt to optimize anything; not that there would be much to do anyway. Rendering the scene twice, once for each eye, came in at 16 milliseconds per frame. From there, frame times climbed to 27 ms with a texture applied, then to 36 ms, and finally to 45 ms.
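Purely as a hedged sketch of what “rendering the scene twice” looks like in OpenGL ES, the per-frame draw might split the display into two viewports, one per eye. The helper functions, eye offset, and overall structure below are illustrative assumptions, not [Jacques]’s code:

```cpp
// Hypothetical per-frame stereo draw: the scene is rendered twice, once into
// each half of the Rift's display. Assumes an EGL/GLES2 context, a compiled
// shader program, and the two helpers below are set up elsewhere.
#include <GLES2/gl2.h>

extern void drawCube(const float *mvp);                           // assumed helper
extern void buildMvp(float *mvp, float eyeOffset, float angle);   // assumed helper

void renderStereoFrame(int width, int height, float angle) {
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    float mvp[16];

    // Left eye: left half of the screen, camera shifted slightly left.
    glViewport(0, 0, width / 2, height);
    buildMvp(mvp, -0.032f, angle);        // roughly half a typical IPD, in metres
    drawCube(mvp);

    // Right eye: right half of the screen, camera shifted slightly right.
    glViewport(width / 2, 0, width / 2, height);
    buildMvp(mvp, +0.032f, angle);
    drawCube(mvp);
}
```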
The code used can be found on [Jacques]’s GitHub account. A simple improvement would be to use a Banana Pi for better processing speed. However, don’t expect any spectacular results with this type of setup; really, the project only proves that it’s possible to shrink a VR experience down into something that could become portable. In the same vein, the Pi + Oculus integration can produce an uncomfortable lagging effect if things are not lined up properly. But once the energy and computing power issues are addressed, VR devices could transform into a more fashionable product like Google Glass, where a simple flip of a switch would toggle the view between VR and AR into something more mixed. And then a motion-sensing input camera like this Kinect-mapping space experiment could allow people all over the world to jump into the perspectives of other reality-pushing explorers. That’s all far down the line, of course, but this project lays the foundation for what the future might hold.
To see [Jacques]’s full set up, view the video after the break.
Continue reading “Testing VR Limits with a Raspberry Pi”
Google Glass this, Oculus Rift that, CastAR… With all these new vision devices coming out, the world of augmented reality is fast becoming, well, a reality!
Here’s a really cool concept [Ryan Smith] came up with for 3D printing. Using [Jeri Ellsworth’s] CastAR, [Ryan Smith] has created a clever technical illusion to demonstrate visual prototyping on his MakerBot. He used a laser cutter to perforate the front plastic panel of the MakerBot, creating a semi-transparent overlay; combined with the CastAR’s projector, it produces a holographic visual effect.
The glasses track a reference object (in this case, the gear) and then project an animation of interfacing gears on top of the existing part. [Ryan] sees this as the next step in 3D printing for artists and makers because it gives you a 3D preview of your part. For example, if you’re not fully sure what scale you want to print at, you could place a mating object, or your hand, behind the screen and visually check how everything fits together!
Continue reading “CastAR and Holographic Print Preview for 3D Printers!”
[Julie Wang] has created an augmented reality system on a Field Programmable Gate Array (FPGA). Augmented reality is nothing new – heck, these days even your tablet can do it. [Julie] has taken a slightly different approach though. She’s not using a processor at all. Her entire system, from capture, to image processing, to VGA signal output, is instantiated in an FPGA.
Using the system is as simple as holding up a green square of cardboard. Viewing the world through an old camcorder, [Julie’s] project detects and tracks the green square. It then adds a 3D image of Cornell’s McGraw Tower on top of the green, and the tower moves with the cardboard, appearing to really be there. [Julie] injected a bit of humor into the project with an option to swap the tower out for an image of her professor, [Bruce Land].
[Julie] starts with an NTSC video signal, which is captured by a DE2-115 board with an Altera Cyclone IV FPGA. Once the signal is inside the FPGA, [Julie’s] design runs it through a median filter. A color detector finds an area of green pixels, which is passed to a corner follower and a corner median filter. The tower or Bruce image is loaded from ROM and overlaid on the video stream, which is then output via VGA.
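The real design does all of this with dedicated logic rather than code, but purely as a rough software analogue of what the color detector and corner tracking boil down to, a C++ sketch with made-up thresholds might look like this:

```cpp
// Rough software analogue of the FPGA's green detector and corner tracking
// (the actual project uses pure logic and state machines; the pixel format
// and thresholds here are our own assumptions).
#include <algorithm>
#include <cstdint>
#include <cstdio>
#include <vector>

struct Pixel { uint8_t r, g, b; };

int main() {
    const int W = 640, H = 480;
    std::vector<Pixel> frame(W * H, {0, 0, 0});
    // Paint a fake "green square" so the example has something to find.
    for (int y = 200; y < 280; ++y)
        for (int x = 300; x < 380; ++x)
            frame[y * W + x] = {20, 200, 30};

    // Scan the frame once, keeping the extreme coordinates of green pixels,
    // roughly what the corner follower reduces to after median filtering.
    int minX = W, minY = H, maxX = -1, maxY = -1;
    for (int y = 0; y < H; ++y) {
        for (int x = 0; x < W; ++x) {
            const Pixel &p = frame[y * W + x];
            bool isGreen = p.g > 120 && p.g > p.r + 40 && p.g > p.b + 40;
            if (isGreen) {
                minX = std::min(minX, x); maxX = std::max(maxX, x);
                minY = std::min(minY, y); maxY = std::max(maxY, y);
            }
        }
    }

    if (maxX >= 0)
        std::printf("green square corners: (%d,%d) to (%d,%d)\n", minX, minY, maxX, maxY);
    return 0;
}
```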
The amazing part is that there is no microprocessor involved in any of the processing. Logic and state machines control the show. Great work [Julie], we hope [Bruce] gives you an A!
Continue reading “Augmented Reality with an FPGA”
No good at pool? Never fear, Cassapa is here! [Alex Porto] has created an augmented reality system for playing pool, and it means almost anyone can make those cool trick shots!
Ca-what? Cassapa (“caçapa”) is a Portuguese word for a pool table pocket. A webcam mounted directly above the pool table feeds dedicated image recognition software, which identifies the positions of the pockets, rails, balls, and cue, and uses them to calculate the game physics. A projector then beams the predicted shot onto the table, letting you make tiny adjustments, updated in real time, to line up the perfect shot.
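We don’t have details of Cassapa’s internal pipeline, but as a hedged OpenCV-style sketch of the general idea, the balls could be found with a Hough circle transform and the cue’s direction extrapolated into a straight aiming line. Every function choice and parameter below is our own guess, not Cassapa’s code:

```cpp
// Illustrative sketch only: find balls on an overhead table image and
// extend a detected cue edge into a straight aiming line.
#include <opencv2/opencv.hpp>
#include <vector>

int main() {
    cv::Mat table = cv::imread("table.png");      // overhead webcam frame
    if (table.empty()) return 1;

    cv::Mat gray;
    cv::cvtColor(table, gray, cv::COLOR_BGR2GRAY);
    cv::medianBlur(gray, gray, 5);

    // Balls show up as circles of roughly known radius.
    std::vector<cv::Vec3f> balls;
    cv::HoughCircles(gray, balls, cv::HOUGH_GRADIENT, 1, 20, 100, 30, 8, 20);
    for (const auto &b : balls)
        cv::circle(table, cv::Point(cvRound(b[0]), cvRound(b[1])), cvRound(b[2]),
                   cv::Scalar(0, 0, 255), 2);

    // The cue appears as a long straight edge; take the first detected line
    // segment and extend it across the table as the predicted aiming line.
    cv::Mat edges;
    cv::Canny(gray, edges, 50, 150);
    std::vector<cv::Vec4i> lines;
    cv::HoughLinesP(edges, lines, 1, CV_PI / 180, 80, 100, 10);
    if (!lines.empty()) {
        cv::Vec4i cue = lines[0];
        cv::Point p1(cue[0], cue[1]), p2(cue[2], cue[3]);
        cv::Point dir = p2 - p1;
        cv::line(table, p1, p1 + dir * 10, cv::Scalar(0, 255, 0), 2); // extrapolate
    }

    cv::imshow("prediction overlay (sketch)", table);
    cv::waitKey(0);
    return 0;
}
```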
Unfortunately, having a big projector shining down on your pool table won’t exactly make anyone believe you’re actually good at pool. Although if you could combine this with Google Glass or any other vision augmenting goggles… that would be pretty cool. Well, you’d still be terribly dishonest and a cheater — but anyway, take a look at the video after the break.
Continue reading “Cassapa: Augmented Pool”
[William Steptoe] is a post-doctoral research associate at University College London. This means he gets to play with some really cool hardware. His most recent project is an augmented reality update to the Oculus Rift. This is much more than hacking a pair of cameras onto the Rift, though. [William] has created an entire AR/VR user interface, complete with dockable web browser screens.

He started with a stock Rift and a room decked out with a professional motion capture system. The Rift was made wireless with the addition of an ASUS Wavi and a laptop battery system; [William] found that the wireless link added no appreciable latency. To move into the realm of augmented reality, [William] added a pair of Logitech C310 cameras. The C310 lenses’ field of view was a bit narrow for what he needed, so lenses from a Genius WideCam F100 were swapped in. The Logitech cameras were stripped down to the board level and mounted on 3D-printed brackets that clip onto the Rift’s display. ShapeLock was added to the mounts so the convergence of the cameras can be easily set.
Stereo camera calibration is a difficult and processor-intensive process. Add to that multiple tracking systems (both the 6DOF head tracking on the Rift and the video tracker built into the room) and you’ve got quite a difficult computational problem. [William] found that he needed to use a Unity shader running on his PC’s graphics card to get the system to operate in real time. The results are quite stunning. We didn’t have a Rift handy to view the 3D portions of [William’s] video, but the sense of presence in the room still showed through. Videos like this make us excited for the future of augmented reality applications, whether with the Rift, the upcoming CastAR, or other systems.
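For a feel of why this is so heavy, here is a generic OpenCV sketch of the per-eye undistortion and rectification step, the kind of per-frame work [William] pushed onto the GPU. The camera matrices, distortion values, and baseline below are placeholders, not his calibration data or his shader:

```cpp
// Generic illustration of per-eye undistortion/rectification with OpenCV.
// All intrinsics and the stereo baseline are placeholder values.
#include <opencv2/opencv.hpp>

int main() {
    cv::Size size(640, 480);

    // Placeholder intrinsics/distortion for left and right cameras (these
    // would normally come from a chessboard calibration).
    cv::Mat K1 = (cv::Mat_<double>(3, 3) << 500, 0, 320, 0, 500, 240, 0, 0, 1);
    cv::Mat K2 = K1.clone();
    cv::Mat D1 = cv::Mat::zeros(1, 5, CV_64F), D2 = D1.clone();
    cv::Mat R = cv::Mat::eye(3, 3, CV_64F);                  // rotation between cameras
    cv::Mat T = (cv::Mat_<double>(3, 1) << -0.064, 0, 0);    // ~64 mm baseline

    cv::Mat R1, R2, P1, P2, Q;
    cv::stereoRectify(K1, D1, K2, D2, size, R, T, R1, R2, P1, P2, Q);

    // Build the remap tables once...
    cv::Mat map1x, map1y, map2x, map2y;
    cv::initUndistortRectifyMap(K1, D1, R1, P1, size, CV_32FC1, map1x, map1y);
    cv::initUndistortRectifyMap(K2, D2, R2, P2, size, CV_32FC1, map2x, map2y);

    // ...but every captured frame from each camera still needs a full remap,
    // which is the kind of per-frame cost that calls for a GPU shader.
    cv::Mat left = cv::imread("left.png"), right = cv::imread("right.png");
    if (left.empty() || right.empty()) return 1;
    cv::Mat leftRect, rightRect;
    cv::remap(left, leftRect, map1x, map1y, cv::INTER_LINEAR);
    cv::remap(right, rightRect, map2x, map2y, cv::INTER_LINEAR);

    cv::imshow("rectified left eye", leftRect);
    cv::waitKey(0);
    return 0;
}
```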
Continue reading “Oculus Rift Goes from Virtual to Augmented Reality”
[Scott] sent in this tantalizing view of what could be the future of breadboarding. His day job is at EquipCodes, where he’s working on augmented reality systems for the industrial sector. Most of EquipCodes’ augmented reality demos involve large electric motors and power transmission systems. When someone suggested a breadboard demo, [Scott] created a simple 555 LED blinker circuit as a proof of concept. The results are stunning. An AR glyph tells the software which circuit it is currently viewing. The software then shows a layout of the circuit, and each component can be selected to bring up further information.
The system also acts as a tutor for first-time circuit builders, showing them where each component and wire should go. We couldn’t help but think of our old Radio Shack 150-in-1 circuit kit while watching [Scott] assemble the 555 blinker. A breadboard would be a lot more fun than all those old springs! The “virtual” layout can even be overlaid on the real one, so any misplaced components would show up before power is turned on (and the magic smoke escapes).
Now we realize this is just a technology demonstrator. Any circuit to be built would have to exist in the software’s database. Simple editing software like Fritzing could be helpful in this case. We’re also not sure how easy it would be working with a tablet between you and your circuit. A pair of CastAR glasses would definitely come in handy here. Even so, we’re excited by this video and hope that some of this augmented reality technology makes its way into our hands.
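We have no insight into how EquipCodes actually structures that database; conceptually, though, each glyph ID just needs to map to a parts list with expected breadboard positions, which detected placements can be checked against. Every name and value in this toy sketch is made up:

```cpp
// Toy sketch of the idea: a glyph ID looks up a known circuit, and detected
// part placements are checked against it before power is applied.
#include <iostream>
#include <map>
#include <string>
#include <vector>

struct Placement {
    std::string part;   // e.g. "555 timer", "10k resistor", "LED"
    int row;            // breadboard row the part should occupy
};

using Circuit = std::vector<Placement>;

int main() {
    // "Database" of known circuits, keyed by the AR glyph's ID.
    std::map<int, Circuit> circuits = {
        {42, {{"555 timer", 10}, {"10k resistor", 8}, {"100uF capacitor", 12}, {"LED", 15}}},
    };

    int detectedGlyph = 42;                       // reported by the AR tracker
    Circuit expected = circuits.at(detectedGlyph);

    // Placements the vision system thinks it sees on the real breadboard.
    Circuit seen = {{"555 timer", 10}, {"10k resistor", 9}, {"100uF capacitor", 12}, {"LED", 15}};

    // Flag anything that doesn't match before the magic smoke can escape.
    for (size_t i = 0; i < expected.size(); ++i) {
        if (i >= seen.size() || seen[i].part != expected[i].part || seen[i].row != expected[i].row)
            std::cout << "Misplaced: " << expected[i].part
                      << " (expected row " << expected[i].row << ")\n";
    }
    return 0;
}
```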
Continue reading “Augmented Reality Breadboarding”