Modulated Pilot Lights Anchor AR To Real World

We’re going to go out on a limb here and say that wherever you are now, a quick glance around will probably reveal at least one LED. They’re everywhere – we can spot a quick half dozen from our desk, mostly acting as pilot lights and room lighting. In those contexts, LEDs are pretty mundane. But what if a little more flash could be added to the LEDs of the world – literally?

That’s the idea behind LightAnchors, which bills itself as a “spatially-anchored augmented reality interface.” LightAnchors comes from work at [Chris Harrison]’s lab at Carnegie Mellon University, which seeks new ways to interface with computers, and it leverages the ubiquity of LED point sources and the high-speed cameras on today’s smartphones. LightAnchors are basically beacons of digitally encoded data that a smartphone can sense and decode. The target LED is modulated using amplitude-shift keying, and each packet contains a data payload and parity bits along with pre- and post-amble sequences. Software on the phone uses the camera to isolate the point source, track it, and pull out the data, which is then used to create an overlay on the scene. The video below shows a number of applications, ranging from displaying guest login credentials through the pilot lights on a router to modulating the headlights of a rideshare vehicle so the next fare can find the right car.
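To make the modulation scheme concrete, here’s a minimal sketch of what such a beacon could look like on an Arduino. To be clear, this is purely our own illustration: the bit period, brightness levels, framing pattern, and per-byte parity are assumptions, not the parameters from the paper.

```cpp
// Hypothetical LightAnchors-style beacon: data is amplitude-shift keyed
// onto an ordinary status LED. Packet framing, timing, and brightness
// levels here are illustrative guesses, not the values from the paper.
#include <Arduino.h>

const int LED_PIN = 9;              // PWM-capable pin driving the LED
const unsigned long BIT_US = 8333;  // ~120 bit/s; slow enough (we assume)
                                    // for a 240 fps phone camera to sample

void sendBit(bool bit) {
  // ASK: '1' is full brightness, '0' is dim. The LED never fully turns
  // off, so to the naked eye it still reads as a steady pilot light.
  analogWrite(LED_PIN, bit ? 255 : 64);
  delayMicroseconds(BIT_US);
}

void sendByte(uint8_t b) {
  bool parity = false;
  for (int i = 7; i >= 0; i--) {
    bool bit = (b >> i) & 1;
    parity ^= bit;
    sendBit(bit);
  }
  sendBit(parity);  // simple even-parity bit after each byte
}

void setup() {
  pinMode(LED_PIN, OUTPUT);
}

void loop() {
  // Pre-/postamble: a fixed pattern the phone decoder can lock onto
  const bool frameMark[] = {1, 0, 1, 0, 1, 1, 0, 0};
  for (bool b : frameMark) sendBit(b);
  sendByte('H');  // payload bytes to be overlaid on the scene
  sendByte('I');
  for (bool b : frameMark) sendBit(b);
  delay(50);      // idle gap between packets
}
```

On the phone side, a decoder would threshold the tracked point’s brightness frame by frame, hunt for the framing pattern, and verify parity before trusting the payload.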

An academic paper (PDF link) goes into greater depth on the protocol, and demo Arduino code for creating LightAnchors is thoughtfully provided. It strikes us that the two main hurdles to adoption of LightAnchors would be convincing device manufacturers to support them, and advertising the fact that what looks like a pilot light might actually be something more, but the idea sure beats fixed markers for AR tracking.

Continue reading “Modulated Pilot Lights Anchor AR To Real World”

Ask Hackaday: Is Anyone Sad Phone VR Is Dead?

It’s official: smartphone-based VR is dead. The two big players in this space were Samsung Gear VR (powered by Oculus, which is owned by Facebook) and Google Daydream. Both have called it quits, with Google omitting support from their newer phones and Oculus confirming that the Gear VR has reached the end of its road. Things aren’t entirely shut down quite yet, but when the end comes, it will sure leave a lot of empty headsets lying around. These things exist in the millions, but did anyone really use phone-based VR? Are any of you sad to see it go?

Google Cardboard, lowering cost and barrier to entry about as low as they could go.

In case you’re unfamiliar with phone-based VR, this is how it works: the user drops their smartphone into a headset, puts it on their head, and optionally uses a wireless controller to interact with things. The smartphone takes care of tracking motion and displaying 3D content while the headset itself takes care of the optics and holds everything in front of the user’s eyeballs. On the low end was Google Cardboard and on the higher end was Daydream and Gear VR. It works, and is both cheap and portable, so what happened?

In short, phone-based VR had constraints that limited just how far it could go when it came to delivering a VR experience, and these constraints kept it from being viable in the long run. Here are some of the reasons smartphone-based VR hit the end of the road: Continue reading “Ask Hackaday: Is Anyone Sad Phone VR Is Dead?”

Literal Stretch-Sensing Glove Reconstructs Your Hand Poses

Our hands are rich forms of gestural expression, but capturing these expressions without hindering the hand itself is no easy task – even in today’s world of virtual reality hardware. Fret not, though, as researchers at the Interactive Geometry Lab have recently developed a glove that’s both comfortable and straightforward to fabricate while capturing not simply gestures but entire hand poses.

Like many hand-recognition gloves, this “stretch-sensing soft glove” mounts the sensors directly in the glove such that movements can be captured while hands are out of plain sight. However, unlike other gloves, the sensors are custom-made from two stretchable conductive layers sandwiching a plain layer of silicone. The result is a grid of 44 capacitive stretch sensors. The team feeds this data stream into a neural network for pose processing, and the result is a system capable of reconstructing hand poses at a 60 Hz refresh rate.
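The paper describes custom acquisition electronics, but as a rough sketch of the data path, here’s what scanning such a grid and streaming frames to a host for inference might look like on an Arduino. The 4×11 grid shape, the pin choices, and the RC charge-time measurement (each row/column pair bridged through a megohm-range resistor, as in the classic CapacitiveSensor trick) are all our assumptions, not the paper’s readout circuit.

```cpp
// Illustrative scan of a 44-sensor capacitive stretch grid (we assume a
// 4 x 11 row/column layout). A high-value resistor on each drive line
// makes the rise time on the sense side grow with capacitance, giving a
// crude proxy for how much each sensor is stretched.
#include <Arduino.h>

const int ROWS = 4, COLS = 11;
const int rowPins[ROWS] = {2, 3, 4, 5};  // drive lines
const int colPins[COLS] = {A0, A1, A2, A3, A4, A5, 6, 7, 8, 9, 10};  // sense lines

unsigned long chargeTime(int sendPin, int sensePin) {
  pinMode(sensePin, OUTPUT);
  digitalWrite(sensePin, LOW);        // discharge the sensor plate
  delayMicroseconds(10);
  pinMode(sensePin, INPUT);           // release it...
  digitalWrite(sendPin, HIGH);        // ...and charge it through the sensor
  unsigned long t0 = micros();
  while (digitalRead(sensePin) == LOW) {
    if (micros() - t0 > 1000) break;  // timeout for open/broken traces
  }
  digitalWrite(sendPin, LOW);
  return micros() - t0;               // more stretch -> more capacitance -> longer time
}

void setup() {
  Serial.begin(115200);
  for (int r = 0; r < ROWS; r++) {
    pinMode(rowPins[r], OUTPUT);
    digitalWrite(rowPins[r], LOW);
  }
}

void loop() {
  // One comma-separated 44-value frame per line; a host-side neural
  // network (as in the paper) maps each frame to a full hand pose.
  for (int r = 0; r < ROWS; r++) {
    for (int c = 0; c < COLS; c++) {
      Serial.print(chargeTime(rowPins[r], colPins[c]));
      Serial.print((r == ROWS - 1 && c == COLS - 1) ? '\n' : ',');
    }
  }
  delay(16);  // roughly the paper's 60 Hz reconstruction rate
}
```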

In their paper [PDF], the research team details a process for making the glove with a conventional CO2 laser cutter. They first cast a conductive silicone layer onto a conventional sheet of silicone. Then, on each of two such samples, they selectively etch away the conductive layer to form the capacitive grid pattern. Finally, they sandwich these layers together with an additional insulating layer and glue the stack into a hand-shaped textile pattern. The resulting process is a classy use of the laser cutter for fabricating flexible capacitive circuits without any further specialized hardware processes.

While we’re no stranger to retrofitting gloves with sensors or etching unconventional materials, the fidelity of this research project is in a class of its own. We can’t wait to see folks extend this technique into other wearable stretch sensors. For a deeper dive into the glove’s capabilities, have a look at the video after the break.

Continue reading “Literal Stretch-Sensing Glove Reconstructs Your Hand Poses”

Tinker Pilot Project Cranks Cockpit Immersion To 11

One of the more interesting ideas being experimented with in VR is 1:1 mapping of virtual and real-world objects, so that virtual representations can be physically interacted with in a normal way. Tinker Pilot is a VR spaceship simulator project by [LLUÍS and JAVI] that takes this idea and runs with it, aiming for the ability to map a cockpit’s joysticks, switches, and other hardware to real-world counterparts. What does that mean? It means a virtual cockpit with flight sticks, levers, and switches that have working physical versions that actually exist exactly where they appear to be.

A few things about the project design caught our eye. One is the serial communications protocol intended to interface easily with microcontrollers, allowing for feedback between the program and any custom peripherals. (By the way, this is the same approach Kerbal Space Program took with KSPSerialIO, which enables custom mission control hardware at whatever level of complexity a user may wish to implement.)
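The wire format isn’t spelled out in detail, so purely as an illustration, here’s what the microcontroller end of such a link might look like: a panel that reports switch changes up to the sim and accepts indicator commands back. The line-based framing and the “SW”/“LAMP” message names are invented for this sketch, not taken from Tinker Pilot.

```cpp
// Hypothetical microcontroller end of a sim <-> panel serial link in the
// spirit of Tinker Pilot's protocol (and KSPSerialIO). The line-based
// framing and the "SW"/"LAMP" message names are invented for this sketch.
#include <Arduino.h>

const int SWITCH_PIN = 2;  // a physical toggle on the panel
const int LAMP_PIN = 9;    // an indicator lamp the sim can drive
int lastState = -1;

void setup() {
  Serial.begin(115200);
  pinMode(SWITCH_PIN, INPUT_PULLUP);
  pinMode(LAMP_PIN, OUTPUT);
}

void loop() {
  // Panel -> sim: report switch changes as lines like "SW 0 1"
  int state = (digitalRead(SWITCH_PIN) == LOW) ? 1 : 0;
  if (state != lastState) {
    Serial.print("SW 0 ");
    Serial.println(state);
    lastState = state;
  }

  // Sim -> panel: "LAMP 1" lights the indicator, "LAMP 0" clears it
  if (Serial.available()) {
    String line = Serial.readStringUntil('\n');
    line.trim();  // drop any trailing carriage return
    if (line.startsWith("LAMP ")) {
      digitalWrite(LAMP_PIN, line.endsWith("1") ? HIGH : LOW);
    }
  }
}
```

Scaling this up to a full cockpit is then mostly a matter of giving every switch, axis, and lamp its own ID.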

The possibilities are demonstrated starting around 1:09 in the teaser trailer (embedded below), in which a custom controller is drawn up in CAD, then 3D-printed and attached to an Arduino. Finally, the 3D model is imported into the cockpit as a 1:1 representation of the actual working unit, complete with visual positional feedback.

Unlike this chair experiment we saw, which attached a Vive Tracker to the chair itself, there is no indication that Tinker Pilot needs positional trackers on individual controls. In a cockpit layout, controls can reasonably be expected to remain in fixed positions relative to the cockpit, meaning that they can be set up once as 1:1 representations of the physical layout and otherwise left alone. The kind of experimentation available today even to individual developers or small teams is remarkable, and it’s fascinating to see these ideas being explored.

Continue reading “Tinker Pilot Project Cranks Cockpit Immersion To 11”

Everything You Probably Didn’t Know About FOV In HMDs

VR headsets have been seeing new life for a few years now, and the field of view (FOV) is one of the specs everyone’s keen to know. Valve Software has published a highly technical yet accessibly presented document that explains why FOV is a complex thing when it comes to head-mounted displays (HMDs). FOV is relatively simple for things such as cameras, but it gets much more complicated and harder to define or measure when lenses are used to put images right up next to eyeballs.

Simulation of how FOV can be affected by eye relief [Source: Valve Software]
The document goes into some useful detail about head-mounted displays in general and their design trade-offs, and naturally it talks about the brand-new Valve Index VR headset in particular. The Index uses proprietary lenses combined with a slight outward cant to each eye’s display, and Valve explains precisely what benefits are gained from each design point. Eye relief (the distance from eye to lens), lens shape and mounting (which limit how close the eye can physically get), and adjustability (because faces and eyes come in different configurations) all have a role to play. It’s a situation where every millimeter matters.
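As a back-of-the-envelope illustration of the eye relief trade-off, you can treat the lens as a flat circular window of radius r viewed from distance d, which gives a monocular FOV of roughly 2·atan(r/d). Real HMD optics bend light, this toy model ignores the display and distortion entirely, and the numbers below are made up rather than Index specs, but it still shows how a few millimeters of eye relief cost real degrees:

```cpp
// Toy model: the lens as a flat circular window of radius r seen from
// eye relief d gives FOV ~ 2*atan(r/d). Ignores lens power/distortion;
// radius and relief values are illustrative, not Valve Index specs.
#include <cmath>
#include <cstdio>

int main() {
  const double kPi = 3.14159265358979323846;
  const double lensRadiusMm = 25.0;  // assumed aperture radius
  for (double reliefMm = 8.0; reliefMm <= 20.0; reliefMm += 4.0) {
    double fovDeg = 2.0 * std::atan(lensRadiusMm / reliefMm) * 180.0 / kPi;
    std::printf("eye relief %4.1f mm -> ~%5.1f deg per eye\n", reliefMm, fovDeg);
  }
  return 0;
}
```

Even in this crude model, moving the eye from 12 mm to 16 mm behind the lens drops the per-eye figure from roughly 129 degrees to roughly 115 degrees, which is a big part of why eye relief adjustability gets so much attention.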

If there’s one main point Valve is trying to make with this document, it’s summed up as “it’s really hard to use a single number to effectively describe the field of view of an HMD.” They plan to publish additional information on the topics of modding as well as optics, so keep an eye on their Valve Index Deep Dive publication list.

Valve’s VR efforts remain interesting from a hacking perspective, and as an organization they seem mindful of the community’s keen interest in modifying and extending their products. The Vive Tracker was self-contained and had an accessible hardware pinout for the express purpose of making hacking easier. We also took a look at Valve’s AR and VR prototypes, which give some insight into how and why they chose the directions they did.

Open Source Headset With Inside-Out Tracking, Video Passthrough

The folks behind the Atmos Extended Reality (XR) headset want to provide improved accessibility with an open ecosystem, and they aim to do it with a WebVR-capable headset design that is self-contained, 3D-printable, and open-sourced. Their immediate goal is to release a development kit, then refine the design for a wider release.

An early prototype of the open source Atmos Extended Reality headset.

The front of the headset has a camera-based tracking board to provide all the modern goodies like inside-out head and hand tracking as well as the ability to pass through video. The design also provides for a variety of interface methods such as eye tracking and 6 DoF controllers.

With all that, the headset gives users maximum flexibility to experiment with and create different applications while working to keep development simple. A short video showing off the modular design of the HMD and optical assembly is embedded below.

Extended Reality (XR) has emerged as a catch-all term to cover broad combinations of real and virtual elements. At one end of the spectrum are completely virtual experiences, as in virtual reality (VR); toward the other end are things like augmented reality (AR), in which virtual elements are integrated with real ones in varying ratios. With the ability to sense the real world and pass video through from the cameras, developers can choose to integrate as much or as little of it as they wish.

Terms like XR are a sign that the whole scene is still rapidly changing, and it’s fascinating to see that development in this area remains within reach of small developers and individual hackers. The Atmos DK 1 developer kit aims to be released sometime in July, so anyone interested in getting in on the ground floor should read up on how to get involved with the project, which currently points people to their Twitter account (@atmosxr) and invites developers to their Discord server. You can also follow along on their newly published Hackaday.io page.

Continue reading “Open Source Headset With Inside-Out Tracking, Video Passthrough”

Virtual Reality For Alzheimer’s Detection

You may think of Alzheimer’s as a disease of the elderly, but the truth is that people who suffer from it have had it for years, sometimes decades, before they notice. Early detection can help doctors minimize the impact the condition has on the brain, so there is a growing emphasis on testing middle-aged adults for the earliest signs of the illness. It turns out that one of the first noticeable symptoms is a decline in the ability to navigate, and [Dennis Chan] and his team at the Cambridge Biomedical Research Centre are now using virtual reality to measure how well people can navigate, as a way to assess Alzheimer’s earlier than is possible with other techniques.

Current tests mostly measure your ability to remember things, but by the time memory loss is apparent, the disease has often already progressed. This test instead has the subject walk to a series of cones and remember their locations, and it has already proven more effective than the standard memory-based test.

Continue reading “Virtual Reality For Alzheimer’s Detection”