Tricking The Brain Into Seeing Boosted Contrast In Stereo Imagery

Last year a team of researchers published a paper detailing a method for boosting visual contrast and image quality in stereoscopic displays. The method, called Dichoptic Contrast Enhancement (DiCE), works by showing each eye a slightly different version of an image, tricking the brain into fusing the two views in a way that boosts perceived image quality. It only works on stereoscopic displays like VR headsets, but it’s computationally simple and easily implemented. The trick could offset some of the limitations of displays used in headsets, for example making them appear capable of deeper contrast levels than they can physically deliver. That matters because higher contrast is generally perceived as more realistic and three-dimensional, both important factors in VR headsets and other stereoscopic displays.

Stereoscopic vision works by having the brain fuse together what both eyes see, a process called binocular fusion. The small differences between what each eye sees mostly convey a sense of depth, but DiCE uses some of the quirks of binocular fusion to trick the brain into perceiving enhanced contrast in the visuals. That perceived higher contrast in turn leads to a stronger sense of depth and better overall image quality.

Example of DiCE-processed images, showing each eye a different dynamic contrast range. The result is greater perceived contrast and image quality when the brain fuses the two together.

To pull off this trick, DiCE displays a different contrast level to each eye in a way designed to encourage the brain to fuse them together positively. In short, using a separate dynamic contrast range for each eye yields a greater overall perceived contrast range in the fused image. That’s simple in theory, but in practice there were a number of problems to solve. Chief among them was the fact that if the difference between what each eye sees is too great, the result is discomfort due to binocular rivalry. The hard scientific work behind DiCE came from experimentally determining the sweet spots, and pre-computing filters independent of viewer and content so the effect could be applied in real time with consistent results.
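
As a rough sketch of the idea, the snippet below splits a single image into two tone-remapped views, one per eye. To be clear, the real DiCE filters are experimentally tuned and precomputed; the simple gamma-style curves and the strength parameter here are assumptions purely for illustration.

```python
import numpy as np

def dice_split(img, strength=0.3):
    """Split one image into two tone-remapped dichoptic views.

    NOTE: an illustrative stand-in, not the published DiCE filters.
    The gamma-style curves and `strength` value are assumptions.
    img: float array with values in [0, 1].
    """
    # Lift one eye's tones and deepen the other's, so the fused
    # percept spans a wider apparent dynamic range than either view.
    left = np.clip(img ** (1.0 - strength), 0.0, 1.0)
    right = np.clip(img ** (1.0 + strength), 0.0, 1.0)
    return left, right

# Example: split a synthetic gradient into left/right views.
gradient = np.linspace(0.0, 1.0, 256)
left_view, right_view = dice_split(gradient)
```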

Things like this are reminders that we experience the world only through the filter of our senses, and our perception of reality has quirks that can be demonstrated by things like this project and other “sensory fusion” edge cases like the Thermal Grill Illusion, which we saw used as the basis for a replica of the Pain Box from Dune.

A short video overview of the method is embedded below, and a PDF of the publication can be downloaded for further reading. Want a more hands-on approach? The team even made a DiCE plugin freely available on the Unity Asset Store.

Continue reading “Tricking The Brain Into Seeing Boosted Contrast In Stereo Imagery”

P-51 Cockpit Recreated With Help Of Local Makerspace

It’s surprisingly easy to misjudge tips that come into the Hackaday tip line. After filtering out the omnipresent spam, a quick scan of tip titles often forms an impression that turns out to be completely wrong. Such was the case with a recent tip that seemed, from the subject line, to be a flight simulator cockpit. The mental picture I had was of a model cockpit hooked to Flight Simulator or some other off-the-shelf flying game, many of which we’ve seen over the years.

I couldn’t have been more wrong about the project that Grant Hobbs undertook. His cockpit simulator turned out to be so much more than what I thought, and after trading a few emails with him to get all the details, I felt like I had to share the series of hacks that led to the short video below and the story about how he somehow managed to build the set despite having no previous experience with the usual tools of the trade.

Continue reading “P-51 Cockpit Recreated With Help Of Local Makerspace”

Building A Limitless VR Desktop

[Gabor Horvath] thinks even two monitors offer too little space to really lay out his windows properly. That’s why he’s building a VR Desktop straight out of our deepest cyberpunk fantasies.

The software currently runs on Windows and Android. The user can arrange multiple windows in a sphere around them; as their head moves, the window directly in front comes into focus. Imagine how many Stack Overflow windows you could have open at the same time!
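
A minimal sketch of how such head-driven focus might be computed is below. The cosine falloff model and its parameter are assumptions for illustration, not the project’s actual implementation.

```python
def focus_weights(head_forward, window_dirs, falloff=4.0):
    """Weight each window by how closely it aligns with the gaze.

    NOTE: a hypothetical model; the project's real focus behavior
    isn't published in this detail. Vectors are unit-length 3-tuples.
    """
    weights = []
    for d in window_dirs:
        # Cosine of the angle between the head's forward vector
        # and the direction of the window on the surrounding sphere.
        cos_angle = sum(h * w for h, w in zip(head_forward, d))
        # Windows behind the user get zero weight; a power falloff
        # makes the frontmost window dominate.
        weights.append(max(cos_angle, 0.0) ** falloff)
    return weights

# Looking down +Z with windows ahead, to the right, and behind:
print(focus_weights((0, 0, 1), [(0, 0, 1), (1, 0, 0), (0, 0, -1)]))
# -> [1.0, 0.0, 0.0]
```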

Another exciting possibility is that the digital work-spaces can be shared among multiple users. Pair programming isn’t so bad, and doing it effectively while remote now seems a little more plausible. Even pair CAD might work, depending on how it’s done. Imagine sharing your personal CAD session on another user’s screen and seeing theirs beside yours, allowing for simultaneous design.

Overall it’s a very cool tech demo that could turn into something more. It makes us wonder how long it is before tech workers on their way to lunch are marked by a telltale red circle on their face.

Modulated Pilot Lights Anchor AR To Real World

We’re going to go out on a limb here and say that wherever you are now, a quick glance around will probably reveal at least one LED. They’re everywhere – we can spot a quick half dozen from our desk, mostly acting as pilot lights and room lighting. In those contexts, LEDs are pretty mundane. But what if a little more flash could be added to the LEDs of the world – literally?

That’s the idea behind LightAnchors, which bills itself as a “spatially-anchored augmented reality interface.” LightAnchors comes from [Chris Harrison]’s lab at Carnegie Mellon University, which seeks new ways to interface with computers, and it leverages the ubiquity of LED point sources and the high-speed cameras on today’s smartphones. LightAnchors are basically beacons of digitally encoded data that a smartphone can sense and decode. The target LED is modulated using amplitude-shift keying, and each packet contains a data payload and parity bits along with pre- and post-amble sequences. Software on the phone uses the camera to isolate the point source, track it, and pull out the data, which is used to create an overlay on the scene. The video below shows a number of applications, ranging from displaying guest login credentials through the pilot lights on a router to modulating the headlights of a rideshare vehicle so the next fare can find the right car.
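
As a rough illustration of that framing, here is a minimal Python sketch of how such a beacon frame might be assembled and modulated. The actual preamble pattern, payload width, parity scheme, and brightness levels are defined in the paper; the values below are assumptions for demonstration only.

```python
# Illustrative ASK beacon framing in the spirit of LightAnchors.
# NOTE: the real preamble, payload width, and parity scheme are
# specified in the paper; these values are assumptions.
PREAMBLE = [1, 0, 1, 0, 1, 1]
POSTAMBLE = [0, 1, 1, 0]

def build_frame(payload_bits):
    """Assemble preamble + payload + even-parity bit + postamble."""
    parity = [sum(payload_bits) % 2]  # even parity over the payload
    return PREAMBLE + payload_bits + parity + POSTAMBLE

def modulate(frame, high=255, low=64):
    """Map bits to LED brightness levels for amplitude-shift keying.

    On hardware, these values would drive a PWM pin once per symbol
    period, fast enough for a smartphone camera to sample reliably.
    """
    return [high if bit else low for bit in frame]

frame = build_frame([1, 0, 0, 1, 1, 0, 1, 0])  # one byte of payload
print(modulate(frame))
```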

An academic paper (PDF link) goes into greater depth on the protocol, and demo Arduino code for creating LightAnchors is thoughtfully provided. It strikes us that the two main hurdles to adoption of LightAnchors would be convincing device manufacturers to support them, and advertising the fact that what looks like a pilot light might actually be something more, but the idea sure beats fixed markers for AR tracking.

Continue reading “Modulated Pilot Lights Anchor AR To Real World”

Ask Hackaday: Is Anyone Sad Phone VR Is Dead?

It’s official: smartphone-based VR is dead. The two big players in this space were Samsung Gear VR (powered by Oculus, which is owned by Facebook) and Google Daydream. Both have called it quits, with Google omitting support from their newer phones and Oculus confirming that the Gear VR has reached the end of its road. Things aren’t entirely shut down quite yet, but when they are, a lot of empty headsets will be left lying around. These things exist in the millions, but did anyone really use phone-based VR? Are any of you sad to see it go?

Google Cardboard, lowering cost and barrier to entry about as low as it could go.

In case you’re unfamiliar with phone-based VR, this is how it works: the user drops their smartphone into a headset, puts it on their head, and optionally uses a wireless controller to interact with things. The smartphone takes care of tracking motion and displaying 3D content while the headset itself takes care of the optics and holds everything in front of the user’s eyeballs. On the low end was Google Cardboard and on the higher end was Daydream and Gear VR. It works, and is both cheap and portable, so what happened?

In short, phone-based VR had constraints that limited just how far it could go when it came to delivering a VR experience, and these constraints kept it from being viable in the long run. Here are some of the reasons smartphone-based VR hit the end of the road: Continue reading “Ask Hackaday: Is Anyone Sad Phone VR Is Dead?”

Literal Stretch-Sensing Glove Reconstructs Your Hand Poses

Our hands are rich forms of gestural expression, but capturing these expressions without hindering the hand itself is no easy task, even in today’s world of virtual reality hardware. Fret not, though: researchers at the Interactive Geometry Lab have recently developed a glove that’s both comfortable and straightforward to fabricate while capturing not simply gestures but entire hand poses.

Like many hand-recognition gloves, this “stretch-sensing soft glove” mounts the sensors directly in the glove so that movements can be captured while hands are out of plain sight. Unlike other gloves, however, the sensors are custom-made from two stretchable conductive layers separated by a plain layer of silicone. The result is a grid of 44 capacitive stretch sensors. The team feeds this data stream into a neural network for gesture processing, and the result is a system capable of reconstructing hand poses at a 60 Hz refresh rate.
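
To make that pipeline concrete, here is a minimal Python sketch of the inference step: 44 sensor readings in, a pose out. The paper’s actual network architecture and pose parameterization aren’t reproduced here; the layer sizes and the joint-angle output are assumptions.

```python
import numpy as np

N_SENSORS = 44        # capacitive stretch sensors in the glove
N_JOINT_ANGLES = 21   # hypothetical hand-pose parameterization

# Random weights stand in for a trained model; the paper's actual
# architecture and output representation are not reproduced here.
rng = np.random.default_rng(0)
W1 = rng.normal(0.0, 0.1, (N_SENSORS, 128))
W2 = rng.normal(0.0, 0.1, (128, N_JOINT_ANGLES))

def predict_pose(capacitances):
    """One forward pass: 44 capacitance readings -> joint angles."""
    hidden = np.tanh(capacitances @ W1)
    return hidden @ W2

# At a 60 Hz refresh, this would run once per sensor sample.
sample = rng.uniform(0.0, 1.0, N_SENSORS)
print(predict_pose(sample).shape)  # (21,)
```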

In their paper [PDF], the research team details the process of making the glove with a conventional CO2 laser cutter. They first cast a conductive silicone layer onto a conventional sheet of silicone. Then, on two such samples, they selectively etch away the conductive layer using the unique capacitive grid patterns. Finally, they sandwich these layers together with an additional insulating layer and glue the stack into a hand-shaped textile pattern. The process is a classy use of the laser cutter for fabricating flexible capacitive circuits without any further specialized hardware.

While we’re no stranger to retrofitting gloves with sensors or etching unconventional materials, the fidelity of this research project is in a class of its own. We can’t wait to see folks extend this technique into other wearable stretch sensors. For a deeper dive into the glove’s capabilities, have a look at the video after the break.

Continue reading “Literal Stretch-Sensing Glove Reconstructs Your Hand Poses”

Tinker Pilot Project Cranks Cockpit Immersion To 11

One of the more interesting ideas being experimented with in VR is 1:1 mapping of virtual and real-world objects, so that virtual representations can be physically interacted with in a normal way. Tinker Pilot is a VR spaceship simulator project by [LLUÍS and JAVI] that takes this idea and runs with it, aiming for the ability to map a cockpit’s joysticks, switches, and other hardware to real-world representations. What does that mean? It means a virtual cockpit with flight sticks, levers, and switches whose working physical versions exist exactly where they appear to be.

A few things about the project design caught our eye. One is the serial communications protocol intended to interface easily with microcontrollers, allowing for feedback between the program and any custom peripherals. (By the way, this is the same approach Kerbal Space Program took with KSPSerialIO, which enables custom mission control hardware at whatever level of complexity a user may wish to implement.)
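
As an illustration of this kind of link, here is a minimal Python sketch of a host-side loop using pyserial. Tinker Pilot defines its own protocol; the line-based name=value framing, the message names, and the port and baud rate below are all assumptions for demonstration.

```python
import serial  # pyserial

def run_peripheral_link(port="/dev/ttyUSB0", baud=115200):
    """Relay state between a custom panel and the simulator host.

    NOTE: the name=value framing, message names, port, and baud
    rate are illustrative assumptions, not Tinker Pilot's actual
    protocol.
    """
    with serial.Serial(port, baud, timeout=1) as link:
        while True:
            raw = link.readline().decode("ascii", errors="ignore").strip()
            if not raw:
                continue  # read timed out with no message
            name, _, value = raw.partition("=")
            if name == "THROTTLE":
                # Forward the physical lever position to the sim...
                print("throttle:", float(value))
            # ...and push sim state back so panel LEDs can mirror it.
            link.write(b"GEAR_LIGHT=1\n")
```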

The possibilities are demonstrated starting around 1:09 in the teaser trailer (embedded below) in which a custom controller is drawn up in CAD, then 3D-printed and attached to an Arduino, and finally the 3D model is imported into the cockpit as a 1:1 representation of the actual working unit, with visual positional feedback.

Unlike the experiment we saw that attached a Vive Tracker to an office chair, there is no indication that Tinker Pilot needs positional trackers on individual controls. In a cockpit layout, controls can reasonably be expected to remain in fixed positions relative to the cockpit, meaning they can be set up as 1:1 representations of a physical layout and otherwise left alone. The kind of experimentation available today even to individual developers or small teams is remarkable, and it’s fascinating to see these ideas being explored.

Continue reading “Tinker Pilot Project Cranks Cockpit Immersion To 11”