Navigating with your phone can be a hassle: the phone displays a tiny map that you’re never supposed to look at while driving, but of course you do. [Mikeasaurus] has the ultimate solution: Direction Projection! Mike has created an augmented reality system with no glass heads-up display and no goggles à la Microsoft HoloLens. The road ahead is his canvas. A standard projector mounted atop his car displays maps and turn indicators, all from his phone. Linking the phone and projection system would normally involve HDMI or analog video cables strung through the roof. [Mikeasaurus] simplifies that by using a Chromecast, which lets him stream his phone’s screen over WiFi.
The projector itself is the HD25-LV, a 3500-lumen model from Optoma. The HD25-LV is capable of 1080p, though in this situation brightness is much more important than resolution. [Mikeasaurus] mounted the projector along with a gel cell battery and a 900-watt DC-to-AC inverter to power it. A mobile WiFi hotspot rounds out the rooftop kit. Leaving an expensive setup like that on top of a car is a recipe for disaster, be it from rain, rocks, or theft. [Mikeasaurus] thought ahead and strapped his setup down inside a roof-mounted cargo box. A plastic-covered hole in the front of the box allows the projector to shoot down on the road while protecting its lens. We’d want to add a vent and fan to make sure the projector gets a bit of airflow as well.
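The build log doesn’t give exact power figures, but a quick back-of-envelope sketch shows why the battery and inverter sizing matters. The projector wattage, battery capacity, and inverter efficiency below are all illustrative assumptions, not numbers from the build:

```python
# Back-of-envelope runtime estimate for a rooftop projector rig.
# All figures here are assumptions for illustration, not from the build log.
def runtime_hours(battery_ah, battery_v, load_w, inverter_eff=0.85):
    """Hours a battery can run an AC load through a DC-to-AC inverter."""
    usable_wh = battery_ah * battery_v     # nominal battery energy
    draw_w = load_w / inverter_eff         # inverter losses raise DC-side draw
    return usable_wh / draw_w

# A 3500-lumen projector might pull ~300 W; assume a 35 Ah, 12 V gel cell.
print(round(runtime_hours(35, 12, 300), 2))  # roughly 1.19 hours
```

In practice you’d also avoid deep-discharging a gel cell, so real runtime would be shorter still.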
On the road, the system actually works. Understandably, it’s not going to work very well during the day, but at night the system really shines! Just don’t tailgate – you wouldn’t want the driver in front of you to know exactly where you’re going, would you?
Continue reading “Direction Projection is a beacon in the night”
A little more than a year ago, castAR, the augmented reality glasses with projectors and retro-reflective surfaces made it to Kickstarter. Since then we’ve seen it at Maker Faire, seen it used for visualizing 3D prints, and sat down with the latest version of the hardware. Now, one of the two people we trust to do a proper teardown finally got his developer version of the castAR.
Before [Mike] digs into the hardware, a quick refresher on how the castAR works: inside the glasses are two 720p projectors that shine an image on a piece of retroreflective fabric. This image reflects directly back to the glasses, where polarized lenses (like the kind you’ll find in 3D TV glasses) separate the image into left and right views for each eye. Add some head tracking capability to the glasses, and you have a castAR.
The glasses come with a small bodypack that powers them, adds two jacks for the accessory sockets, and switches the HDMI signal coming from the computer. The glasses are where the real fun starts, with two cameras, two projectors, and a few very big chips. The projector itself is a huge innovation; [Jeri] is on record as saying the lens manufacturers told her the optical setup shouldn’t work.
As far as chips go, there’s an HDMI receiver and an Altera Cyclone FPGA. There’s also a neat little graphic from Asteroids on the board. Video below.
Continue reading “CastAR Teardown”
At long last I had the opportunity to try out the CastAR, a glasses-based Augmented Reality system developed by Technical Illusions. The hardware has been in the works now for a couple of years, but every time we have come across a demo we were thwarted by the long lines that accompany them. This time I was really lucky. [Jeri] gave us a private demo in a suite at the Palazzo during CES 2015. Reflecting on the experience, CastAR is exactly the type of Virtual Reality hardware I’ve been longing for.
Continue reading “CastAR Hands-On and Off-Record Look at Next Version”
Most of the incredible flight simulator enthusiasts with 737 cockpits in their garage are from the US. What happens when they’re from Slovenia? They built an A320 cockpit. The majority of the build comes from an old Cyprus Airways aircraft, with most of the work being wiring up the switches and lights, and figuring out how to display the simulated world outside the cockpit.
Google Cardboard is the $4 answer to the Oculus Rift – a cardboard box and smartphone you strap to your head. [Frooxius] missed being able to interact with objects in these 3D virtual worlds, so he came up with this thing. He adapted a symbol tracking library for AR, and is now able to hold an object in his hands while looking at a virtual object in 3D.
Heat your house with candles! Yes, it’s the latest Indiegogo campaign that can be debunked with 7th grade math. This “igloo for candles” will warm a room by 2 or 3 degrees, which is a little less heat than a person with an average metabolism puts out.
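The 7th grade math in question, using rough textbook figures (a paraffin candle releases somewhere around 80 W of heat; a resting adult dissipates roughly 100 W):

```python
# A candle's heat output vs. a resting human's -- rough textbook values,
# not measurements of the Indiegogo product.
CANDLE_W = 80    # heat released by one burning paraffin candle, approx.
HUMAN_W = 100    # heat dissipated by a resting adult, approx.

print(CANDLE_W / HUMAN_W)  # 0.8 -- a candle heats a bit less than a person
```

No clay igloo changes that arithmetic; it just moves the same 80 W around the room.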
Last week, we saw a post that gave the Samsung NX300 the ability to lock the pictures taken by the camera with public key cryptography. [g3gg0] wrote in to tell us he did the same thing with a Canon EOS camera.
The guys at Flite Test put up a video that should be handy for RC enthusiasts and BattleBot contenders alike. They’re tricking out transmitters, putting push buttons where toggle switches should go, on/off switches where pots should go, and generally making a transmitter more useful. It’s also a useful repair guide.
[Frank Zhao] made a mineral oil aquarium and put a computer in it. i7, GTX 970, 16GB RAM, and a 480GB SSD. It’s a little bigger than most of the other aquarium computers we’ve seen thanks to the microATX mobo, and of course there are NeoPixels and a bubbly treasure chest.
Pinball machines are fascinating pieces of mechanical and electrical engineering, and now [Yair Moshe] and his students at the Israel Institute of Technology have taken the classic game one step further. Using computer vision and a projector, this group of engineers has created an augmented reality pinball game that takes pinball to a whole new level.
Once the laptop, webcam, and projector are set up, a course is drawn on a whiteboard which the computer “sees” to determine the rules of the game. Any course you can imagine can be drawn on the whiteboard too, with an interesting set of rules that no regular pinball game could take advantage of. Most notably, the ball can change size when it hits certain types of objects, which makes for a very interesting and unconventional style of play.
The player uses their hands to control the flippers as well, but not with buttons. The computer watches the position of the player’s hands and flips the flippers when it sees a hand in the right position. [Yair] and his students recently showed this project off at DLD Tel Aviv and even got [Shimon Peres], former President of Israel, to play some pinball at the conference!
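The flipper-trigger logic can be sketched very simply: once the vision pipeline reports hand positions, each flipper fires when a hand enters its watch zone. This is a minimal illustration of that idea, with made-up zone coordinates rather than anything from the actual project:

```python
# Hypothetical trigger zones (x-ranges in pixels of a 640-wide camera frame).
FLIPPER_ZONES = {
    "left":  (0, 200),     # left flipper fires when a hand is in this range
    "right": (440, 640),   # right flipper fires for this range
}

def flippers_to_fire(hand_xs):
    """Return which flippers should flip for the detected hand x-coordinates."""
    fired = set()
    for x in hand_xs:
        for name, (lo, hi) in FLIPPER_ZONES.items():
            if lo <= x <= hi:
                fired.add(name)
    return fired

print(sorted(flippers_to_fire([120, 500])))  # ['left', 'right']
```

The real system does the hard part, reliably finding the hands in a live video feed, before any logic like this runs.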
[Bharath] recently uploaded the source code for an OpenCV-based pattern recognition platform that can be used for augmented reality, or even robots. It was built in C++ and uses the OpenCV library to detect marker patterns within a single frame.
The program starts out by focusing on one object at a time. This approach was chosen to avoid building additional arrays holding information about every blob in the image, which could cause problems.
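A toy way to picture the “one object at a time” idea: rather than cataloguing every blob in the frame, scan a thresholded image and keep only the largest connected component. The tiny binary grid below stands in for a thresholded camera frame; the real project does this in C++ with OpenCV on live video:

```python
def largest_blob(img):
    """Return the pixel count of the largest 4-connected blob of 1s."""
    h, w = len(img), len(img[0])
    seen = set()
    best = 0
    for sy in range(h):
        for sx in range(w):
            if img[sy][sx] and (sy, sx) not in seen:
                stack, size = [(sy, sx)], 0
                seen.add((sy, sx))
                while stack:  # flood-fill this connected component
                    y, x = stack.pop()
                    size += 1
                    for ny, nx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1)):
                        if 0 <= ny < h and 0 <= nx < w \
                           and img[ny][nx] and (ny, nx) not in seen:
                            seen.add((ny, nx))
                            stack.append((ny, nx))
                best = max(best, size)
    return best

frame = [[0, 1, 1, 0],
         [0, 1, 0, 0],
         [0, 0, 0, 1]]
print(largest_blob(frame))  # 3
```

Tracking a single dominant blob keeps the per-frame bookkeeping small, which is exactly the trade-off described above.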
Although this implementation does not track marker information through multiple frames, it provides a nice foundation for integrating pattern recognition into computer systems. The tutorial is straightforward and easy to read. The entire program and source code can be found on Github under a CC0 license, so anyone can use it. A video of the program is after the break:
Continue reading “Open Source Marker Recognition for Augmented Reality”
Virtual reality, by its very function, pushes the boundaries of what we perceive as existence, tricking the mind into believing that the computer-generated environment the user is thrust into is actually a real place. So in the spirit of seeing what is possible in VR, a developer named [Jacques] hooked up a Raspberry Pi to an Oculus Rift. He used OpenGL ES, the same graphics rendering API found on most mobile platforms these days, to render a floating, rotating cube.
All his tests were done on a release build using the official vertex and fragment shaders, with no attempt to optimize anything; not that there would be much to do anyway. The scene was rendered twice (once per eye) at 16 milliseconds per frame. Adding a texture brought that to 27 ms per frame, then 36 ms, and finally 45 ms.
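To put those frame times in perspective, here they are converted to frames per second (the millisecond figures come from the write-up; the conversion is just arithmetic):

```python
# Convert [Jacques]'s per-frame render times into frames per second.
def fps(ms_per_frame):
    """Frames per second for a given frame time in milliseconds."""
    return 1000.0 / ms_per_frame

for ms in (16, 27, 36, 45):
    print(f"{ms} ms/frame -> {fps(ms):.1f} FPS")
# 16 ms/frame -> 62.5 FPS
# 27 ms/frame -> 37.0 FPS
# 36 ms/frame -> 27.8 FPS
# 45 ms/frame -> 22.2 FPS
```

Only the untextured 16 ms case lands near the 60 Hz the Rift expects; the textured runs fall well short, which is where the lag [Jacques] noticed comes from.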
The code used can be found on [Jacques]’s Github account. A simple improvement would be using a Banana Pi for better processing speed. However, don’t expect any spectacular results with this type of setup. Really, the project only proves that it’s possible to shrink a VR experience into something that could become portable. In the same vein, the Pi + Oculus integration can produce an uncomfortable lagging effect if things aren’t lined up properly. But once the energy and computing power issues are addressed, VR devices could transform into a more fashionable product like Google Glass, where a simple flip of a switch would toggle the view between VR and AR into something more mixed. And then a motion-sensing input camera like this Kinect-mapping space experiment could allow people all over the world to jump into the perspectives of other reality-pushing explorers. That’s all far down the line, but this project lays the foundation for what the future might hold.
To see [Jacques]’s full set up, view the video after the break.
Continue reading “Testing VR Limits with a Raspberry Pi”