Facebook To Slurp Oculus Rift Users’ Every Move

The web is abuzz with the news that the Facebook-owned Oculus Rift has buried in its terms of service a clause allowing the social media giant access to the “physical movements and dimensions” of its users. This data will likely be used to direct advertising to those users and, most importantly for the advertisers, to measure the degree of interaction between user and advert. It’s a dream come true for the advertising business: instead of relying on eye-tracking or other engagement studies with limited subsets of users, they can take these metrics from their entire user base and hone their offerings ever more precisely to maximize revenue.

Hardly a surprise, you might say, given that Facebook is no stranger to criticism on privacy matters. It does, however, represent a hitherto unseen level of intrusion into a user’s personal space: it may even be possible to guess the nature of their activities from their movements, which opens up fresh potential for nefarious uses of the data.

Fortunately for us there is a choice, even if our community doesn’t simply circumvent the data-slurping powers of these headsets; a rash of other virtual reality products is in the offing from Samsung, HTC, and Sony among others, and of course there is Google’s budget offering. Sadly, though, privacy concerns are unlikely to register with the non-tech-savvy end user, so competition alone will not stop big business’s relentless desire to get this close to you. Instead, vigilance is the key: spotting such attempts when they make their way into the small print, and shining a light on them even when the organisations in question would prefer they remained hidden.

Oculus Rift development kit 2 image: By Ats Kurvet – Own work, CC BY-SA 4.0, via Wikimedia Commons.

3D Printed Eyeglasses, VR Lenses

[Florian] is hyped for Google Cardboard, Oculus Rifts, and other head mounted displays, and with that comes an interest in lenses. [Florian] wanted to know if it was possible to create these lenses with a 3D printer. Why would anyone want to do this when these lenses can be had from dozens of online retailers for a few dollars? The phrase ‘because I can’ comes to mind.

The starting point for the lens was a CAD model, a 3D printer, and silicone mold material. Clear casting resin fills the mold, cures, and turns into a translucent lens-shaped blob. This is essentially how any cast lens is made, and by finely sanding, polishing, and buffing this lens with grits ranging from 200 to 7000, the bit of resin slowly takes on an optically clear shine.

Do these lenses work? Yes, and [Florian] managed to build a head mounted display that can hold an iPhone up to his face for viewing 3D images and movies. The next goal is printing prescription glasses, and [Florian] seems very close to achieving that dream.

The last time we saw home lens making was more than a year ago. Is anyone else dabbling in this dark art? Let us know in the comments below and send in a tip if you have a favorite lens hack in mind.

Castrol Virtual Drift: Hacking Code at 80MPH with a Driver in a VR Helmet

Driving a brand new 670 horsepower Roush Stage 3 Mustang while wearing virtual reality goggles. Sounds nuts, right? That’s exactly what Castrol Oil’s advertising agency came up with. They didn’t want to just make a commercial, though – they wanted to do the real thing. Enter [Adam and Glenn], the engineers who were tasked with getting data from the car into a high-end gaming PC. The computer was running a custom simulation built on the Unreal Engine. El Toro Field provided a vast expanse of empty tarmac to drive the car on without worry of hitting any real-world obstacles.

The Oculus Rift was never designed to be operated inside a moving vehicle, so it presented a unique challenge for [Adam and Glenn]. Every time the car turned or spun, the Oculus’ on-board Inertial Measurement Unit (IMU) would think driver [Matt Powers] was turning his head. At one point [Matt] was trying to drive while the game engine had him sitting in the passenger seat, turned sideways. The solution was to install a 9-degree-of-freedom IMU in the car, then subtract that IMU’s movements from those reported by the one in the Rift.
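Conceptually, the correction is a quaternion composition: the driver’s head orientation relative to the car is the car IMU’s inverse orientation multiplied by the Rift’s orientation. Here’s a minimal sketch of that math, assuming both IMUs report unit quaternions in a shared world frame (the real rig would also need calibration for mounting offsets and drift):

#include <cstdio>

// Head orientation relative to the car: rel = conj(car) * head.
// Assumes unit quaternions from both IMUs in a common world frame.
struct Quat { float w, x, y, z; };

Quat conjugate(const Quat& q) { return { q.w, -q.x, -q.y, -q.z }; }

Quat multiply(const Quat& a, const Quat& b) {
  return { a.w*b.w - a.x*b.x - a.y*b.y - a.z*b.z,
           a.w*b.x + a.x*b.w + a.y*b.z - a.z*b.y,
           a.w*b.y - a.x*b.z + a.y*b.w + a.z*b.x,
           a.w*b.z + a.x*b.y - a.y*b.x + a.z*b.w };
}

int main() {
  Quat car  = { 0.707f, 0.0f, 0.707f, 0.0f };  // car yawed 90 degrees
  Quat head = { 0.707f, 0.0f, 0.707f, 0.0f };  // head turned with the car
  Quat rel  = multiply(conjugate(car), head);
  // rel is ~identity: the driver hasn't turned his head inside the car,
  // so the Rift should not rotate the view.
  std::printf("rel = (%.2f, %.2f, %.2f, %.2f)\n", rel.w, rel.x, rel.y, rel.z);
}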

GPS data came from a Real Time Kinematic (RTK) GPS unit. Unfortunately, the GPS had a 5 Hz update rate – not nearly fast enough for a car moving close to 100 MPH. The GPS was relegated to aligning the virtual and real worlds at the start of the simulation. The rest of the data came from the IMUs and the car’s own CAN bus. [Adam and Glenn] used an Arduino with a Microchip MCP2515 CAN bus interface to read values such as steering angle, throttle position, brake pressure, and wheel spin. The data was then passed on to the Unreal Engine. The Arduino code is up on GitHub, though the team had to sanitize some of Ford’s proprietary CAN message data to avoid a lawsuit. It’s worth noting that [Adam and Glenn] didn’t have any support from Ford on this; they just sniffed the CAN network to determine each message ID.
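The repo has the real thing; purely for flavor, here is what the receive path typically looks like with the popular mcp_can Arduino library. The message ID and scaling below are placeholders, since the actual Ford-specific values were sanitized:

#include <SPI.h>
#include <mcp_can.h>

MCP_CAN CAN0(10);  // MCP2515 chip select on pin 10 (a common wiring)

// Placeholder ID: the real Ford message IDs were sanitized from the repo.
const unsigned long ID_STEERING = 0x010;

void setup() {
  Serial.begin(115200);
  // 500 kbps is typical for an automotive high-speed CAN bus.
  while (CAN0.begin(MCP_ANY, CAN_500KBPS, MCP_16MHZ) != CAN_OK) delay(100);
  CAN0.setMode(MCP_NORMAL);
}

void loop() {
  unsigned long rxId;
  byte len;
  byte buf[8];
  if (CAN0.checkReceive() == CAN_MSGAVAIL) {
    CAN0.readMsgBuf(&rxId, &len, buf);
    if (rxId == ID_STEERING) {
      // Byte layout and scaling are vehicle-specific; invented here.
      int angle = (buf[0] << 8) | buf[1];
      Serial.print("steer:");
      Serial.println(angle);
    }
  }
}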

The final video has the Hollywood treatment. “In-game” footage has been replaced with pre-rendered sequences, which look so good we’d think the whole thing was fake – that is, if we didn’t know better.

Click past the break for the final commercial and some behind the scenes footage.

Continue reading “Castrol Virtual Drift: Hacking Code at 80MPH with a Driver in a VR Helmet”

ANUBIS, A Natural User Bot Interface System

[Matt], [Andrew], [Noah], and [Tim] have a pretty interesting build for their capstone project at Ohio Northern University. They’re using a Microsoft Kinect and a Leap Motion to create a natural user interface for controlling humanoid robots.

The robot the team is using for this project is a tracked humanoid they’ve affectionately come to call Johnny Five. Johnny takes commands from a computer, Kinect, and Leap Motion to move the chassis, arm, and gripper around in a way that’s somewhat natural, and surely a lot easier than controlling a humanoid robot with a keyboard.

The team has also released all their software on GitHub under an open source license. You can grab it there, or take a look at some of the pics and videos from the Columbus Mini Maker Faire.
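To give a flavor of the natural-interface approach (this is our own hypothetical sketch, not the team’s code), reading a hand from the Leap Motion C++ SDK and turning it into robot commands can be as simple as:

#include <iostream>
#include "Leap.h"

// Hypothetical mapping from Leap Motion hand tracking to robot commands.
// The output "protocol" is invented for illustration only.
int main() {
  Leap::Controller controller;

  while (true) {  // demo loop; a real app would pace and smooth this
    Leap::Frame frame = controller.frame();
    if (frame.hands().isEmpty()) continue;

    Leap::Hand hand = frame.hands()[0];
    Leap::Vector palm = hand.palmPosition();  // millimeters above the sensor
    float grab = hand.grabStrength();         // 0.0 = open, 1.0 = fist

    // Palm height drives arm elevation, grab strength drives the gripper.
    std::cout << "arm " << palm.y << " grip " << grab << std::endl;
  }
}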

CES: Building Booths and Simulating Reality

My first day on the ground at CES started with a somewhat amusing wait at the taxi stand of McCarran International Airport. Actually, I’m getting ahead of myself… it started with a surprisingly efficient badge-pickup booth in the baggage claim of the airport. Wait in line for about three minutes, show them the QR code emailed to you during online registration, and you’re ready to move on to the quarter-mile-long, six-switchback-deep line for cabs. Yeah, there are a lot of people here for this conference.

It’s striking just how huge this thing is. Every hotel on the strip is crawling with badge-wearing CES attendees. Many of the conference halls in the hotels are filled with booths, meaning the thing is spread out over a huge geographic area. We bought three-day monorail passes and headed to the convention center to get started.

Building the Booths

[Sophi] knows [Ben Unsworth], who put his heart and soul into this year’s IEEE booth. His company, Globacore, builds booths for conferences, and this one sounds like it was an exceptional amount of fun to work on. He was part of a tiny team that built a mind-controlled drag strip based on Emotiv Insight brainwave-measuring hardware shipped directly from the first factory production run. This ties in with the display screens above the track to form a leaderboard. We’ll have a keen eye out for hacks this week, but the story behind building these booths may be the best hack to be found.

[Ben] told us hands-down the thing to see is the new Oculus hardware called Crescent Bay. He emphatically compared it to the Holodeck, which is a comparison we don’t throw around lightly. It seems like a lot of people feel that way, because the line to try it out is wicked long. We downloaded their app, which allows you to schedule a demo, but all appointments are already taken. Hopefully our Twitter plea will be seen by their crew.

In the meantime we tried out Samsung’s Oculus-powered Gear VR. It uses a Galaxy Note 4 as the screen, along with lenses and a variety of motion tracking and user controls. The demo was a Zelda-like game where you view the scene from overhead, using a handheld controller to command the in-game character and the headset’s motion tracking to look around the playing area. It was a neat demo. I’m not quite sold on long gaming sessions with the hardware, but maybe I just need to get used to full immersion first.

Window to another Dimension

The midways close at six o’clock, and we made our way to the Occipital booth just as they were winding down. I’ve been 3D scanned a few times before, but those systems used turntables and depth cameras on motorized tracks to do the work. This one uses a depth-camera add-on for an iPad which they call the Structure Sensor.

It is striking how quickly the rig can capture a model. This high-speed performance is parlayed into other uses, like creating a virtual world inside the iPad which the user navigates by using the screen as if it were a magic window into another dimension. Their demo was something along the lines of the game Portal, and it has us thinking that the Wii U controller has the right idea for entertainment but needs the performance that Occipital offers. I liked this experience more than the Oculus demo because you are not shut off from the real world as you make your way through the virtual one.
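The magic-window trick itself boils down to driving a virtual camera with the tablet’s tracked pose: every frame, world-space points are transformed into the device’s camera space, so tilting or walking with the iPad reveals a different slice of the scene. A toy sketch of that transform (our own illustration, not Occipital’s SDK):

#include <cstdio>

// Transform a world-space point into the camera space of a tracked
// device, given the device's position and orientation quaternion.
struct Vec3 { float x, y, z; };
struct Quat { float w, x, y, z; };

Vec3 cross(const Vec3& a, const Vec3& b) {
  return { a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x };
}

// Rotate v by unit quaternion q: v' = v + 2w(u x v) + 2(u x (u x v)).
Vec3 rotate(const Quat& q, const Vec3& v) {
  Vec3 u = { q.x, q.y, q.z };
  Vec3 t = cross(u, v);
  t = { 2*t.x, 2*t.y, 2*t.z };
  Vec3 tt = cross(u, t);
  return { v.x + q.w*t.x + tt.x, v.y + q.w*t.y + tt.y, v.z + q.w*t.z + tt.z };
}

Vec3 worldToCamera(const Quat& camQ, const Vec3& camPos, const Vec3& p) {
  Quat inv = { camQ.w, -camQ.x, -camQ.y, -camQ.z };  // conjugate = inverse
  Vec3 d = { p.x - camPos.x, p.y - camPos.y, p.z - camPos.z };
  return rotate(inv, d);
}

int main() {
  Quat q = { 1, 0, 0, 0 };   // device looking straight ahead
  Vec3 pos = { 0, 0, 1 };    // one meter back from the origin
  Vec3 p = worldToCamera(q, pos, { 0, 0, 0 });
  std::printf("origin in camera space: (%.1f, %.1f, %.1f)\n", p.x, p.y, p.z);
}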

We shot some video of the hardware and plan to post more about it as soon as we get the time to edit the footage.

Find Us or Follow Us

We’re wearing our Hackaday shirts, and that stopped [Josh] in his tracks. He’s here on business with his company Evermind, but like any good hacker he is carrying around one of his passion projects in his pocket. What he’s showing off are a couple of prototypes for a CAN bus sniffer and interface device that he’s built.

We’ll be at CES all week. You can follow our progress through the following Twitter accounts: @Hackaday, @HackadayPrize, @Szczys, and @SophiKravitz. If you’re here in person you can Tweet us to find where we are. We’re also planning a 9am Thursday Breakfast meetup at SambaLatte in the Monte Carlo. We hope you’ll stop by and say hi. Don’t forget to bring your own hardware!

Using the Oculus Rift as a Multi-Monitor Replacement

[Jason] has been playing around with the Oculus Rift lately and came up with a pretty cool software demonstration. It’s probably been done before in some way, shape, or form, but we love the idea anyway, and he’s actually released the program so you can play with it too!

It’s basically a 3D Windows Manager, aptly called 3DWM — though he’s thinking of changing the name to something a bit cooler, maybe the WorkSphere or something.

As he shows in the following video demonstration, the software allows you to set up multiple desktops and windows on a virtual sphere created by the Oculus — essentially a virtual multi-monitor setup. There are a few obvious cons that make it a bit impractical at the moment: the inability to see your keyboard (though this shouldn’t really be a problem), the inability to see people around you… and of course the hardware’s lack of proper resolution. But besides that, it’s still pretty awesome!
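Laying windows out on a sphere mostly comes down to converting an azimuth/elevation slot into a 3D position around the viewer. A toy sketch of that placement math (names and constants are ours for illustration, not from 3DWM):

#include <cmath>
#include <cstdio>

// Place a window quad on a sphere around the viewer from an
// azimuth/elevation slot. Each window would also be rotated to face
// the viewer at the origin.
struct Vec3 { float x, y, z; };

const float kPi = 3.14159265f;

Vec3 placeWindow(float azimuthDeg, float elevationDeg, float radius) {
  float az = azimuthDeg * kPi / 180.0f;
  float el = elevationDeg * kPi / 180.0f;
  // Convention: -z is straight ahead, +x is to the viewer's right.
  return { radius * std::cos(el) * std::sin(az),
           radius * std::sin(el),
          -radius * std::cos(el) * std::cos(az) };
}

int main() {
  // Three windows fanned across 90 degrees at eye level.
  for (int i = 0; i < 3; ++i) {
    Vec3 p = placeWindow(-45.0f + 45.0f * i, 0.0f, 2.0f);
    std::printf("window %d: (%.2f, %.2f, %.2f)\n", i, p.x, p.y, p.z);
  }
}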

In future development he hopes to add Kinect 2 and Leap Motion controller integration to make it even more intuitive to use — maybe more Minority Report style.

Continue reading “Using the Oculus Rift as a Multi-Monitor Replacement”

Seeing The World Through Depth Sensing Cameras

The Oculus Rift and all the other 3D video goggle solutions out there are great if you want to explore virtual worlds with stereoscopic vision, but until now we haven’t seen anyone exploring real life with digital stereoscopic viewers. [pabr] combined the Kinect-like sensor in an ASUS Xtion with a smartphone in a Google Cardboard-like setup for 3D views the human eye can’t naturally experience: a third-person view, a radar-like display, and seeing what the world would look like with your eyes 20 inches apart.

[pabr] is using an ASUS Xtion depth sensor connected to a Galaxy SIII via the USB OTG port. With a little bit of code, the output from the depth sensor can be pushed to the phone’s display. The hardware setup consists of a VR-Spective, a rather expensive bit of plastic, but with the right mechanical considerations, a piece of cardboard or some foam board and hot glue would do quite nicely.
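The post doesn’t say exactly how [pabr]’s “little bit of code” is put together, but the Xtion is usually driven through OpenNI2, and a minimal desktop sketch for pulling depth frames looks something like this (illustrative only, not [pabr]’s Android code):

#include <OpenNI.h>
#include <cstdio>

// Minimal OpenNI2 example: open the Xtion's depth stream and sample
// the center pixel of each frame (depth is reported in millimeters).
int main() {
  if (openni::OpenNI::initialize() != openni::STATUS_OK) return 1;

  openni::Device device;
  if (device.open(openni::ANY_DEVICE) != openni::STATUS_OK) return 1;

  openni::VideoStream depth;
  if (depth.create(device, openni::SENSOR_DEPTH) != openni::STATUS_OK) return 1;
  depth.start();

  openni::VideoFrameRef frame;
  for (int i = 0; i < 100; ++i) {
    depth.readFrame(&frame);
    const openni::DepthPixel* pixels =
        static_cast<const openni::DepthPixel*>(frame.getData());
    int center = (frame.getHeight() / 2) * frame.getWidth() + frame.getWidth() / 2;
    std::printf("center depth: %d mm\n", pixels[center]);
  }

  depth.stop();
  depth.destroy();
  device.close();
  openni::OpenNI::shutdown();
}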

[pabr] put together a video demo of his build, along with a few examples of what this project can do. It’s rather odd, and surprisingly not a superfluous way to see in 3D. You can check out that video below.

Continue reading “Seeing The World Through Depth Sensing Cameras”