It’s official: smartphone-based VR is dead. The two big players in this space were Samsung Gear VR (powered by Oculus, which is owned by Facebook) and Google Daydream. Both have called it quits, with Google omitting support from their newer phones and Oculus confirming that the Gear VR has reached the end of its road. Things aren’t entirely shut down quite yet, but when they are, a lot of empty headsets will be left lying around. These things exist in the millions, but did anyone really use phone-based VR? Are any of you sad to see it go?
In case you’re unfamiliar with phone-based VR, here’s how it works: the user drops their smartphone into a headset, puts it on their head, and optionally uses a wireless controller to interact with things. The smartphone handles motion tracking and renders the 3D content, while the headset itself takes care of the optics and holds everything in front of the user’s eyeballs. On the low end was Google Cardboard, and on the higher end were Daydream and Gear VR. It works, and it’s both cheap and portable, so what happened?
In short, phone-based VR had constraints that limited just how far it could go when it came to delivering a VR experience, and those constraints kept it from being viable in the long run.
Like many hand-recognition gloves, this “stretch-sensing soft glove” mounts the sensors directly in the glove so that movements can be captured while the hands are out of plain sight. Unlike other gloves, however, the sensors are custom-made from two stretchable conductive layers separated by a plain layer of silicone, forming a grid of 44 capacitive stretch sensors. The team feeds this datastream into a neural network for gesture processing, and the result is a system capable of reconstructing hand poses at a 60 Hz refresh rate.
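The paper is about the fabrication and the learning pipeline rather than firmware, but scanning a grid like this is familiar microcontroller territory. As a rough, hypothetical sketch (the 4 × 11 layout, pin assignments, and timing are all invented for illustration, not taken from the paper), a row/column scan might look something like this:

```cpp
// Hypothetical row/column scan of a capacitive stretch-sensor grid,
// Arduino-style. The 4 x 11 = 44 layout and all pin assignments are
// invented for illustration; the paper covers its own readout electronics.
const int NUM_ROWS = 4;
const int NUM_COLS = 11;
const int rowPins[NUM_ROWS] = {2, 3, 4, 5};           // drive lines
const int colPins[NUM_COLS] = {A0, A1, A2, A3, A4, A5,
                               A6, A7, A8, A9, A10};  // sense lines

void setup() {
  Serial.begin(115200);
  for (int r = 0; r < NUM_ROWS; r++) {
    pinMode(rowPins[r], OUTPUT);
    digitalWrite(rowPins[r], LOW);
  }
}

void loop() {
  // One full frame: excite each drive line and sample every sense line.
  for (int r = 0; r < NUM_ROWS; r++) {
    for (int c = 0; c < NUM_COLS; c++) {
      digitalWrite(rowPins[r], HIGH);    // inject a charge pulse
      int raw = analogRead(colPins[c]);  // voltage coupled through the
                                         // sensor's mutual capacitance,
                                         // which shifts as it stretches
      digitalWrite(rowPins[r], LOW);     // discharge before the next cell
      Serial.print(raw);
      Serial.print(r == NUM_ROWS - 1 && c == NUM_COLS - 1 ? '\n' : ',');
    }
  }
  delay(16);  // roughly the 60 Hz frame rate the pose estimator runs at
}
```

Each frame of 44 readings would then be handed to the neural network, which does the heavy lifting of turning raw capacitances into joint angles.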
In their paper [PDF], the research team details the process of making the glove with a conventional CO2 laser cutter. They first cast a conductive silicone layer onto a plain silicone sheet. Then, for each of two such samples, they selectively etch away the conductive layer to leave behind one half of the capacitive grid pattern. Finally, they sandwich the two layers together with an additional insulating layer between them and glue the stack into a hand-shaped textile pattern. The result is a classy use of the laser cutter for fabricating flexible capacitive circuits without any further specialized hardware processes.
While we’re no stranger to retrofitting gloves with sensors or etching unconventional materials, the fidelity of this research project is in a class of its own. We can’t wait to see folks extend this technique into other wearable stretch sensors. For a deeper dive into the glove’s capabilities, have a look at the video after the break.
One of the more interesting ideas being experimented with in VR is 1:1 mapping of virtual and real-world objects, so that virtual objects can be physically interacted with in a normal way. Tinker Pilot is a VR spaceship simulator project by [LLUÍS and JAVI] that takes this idea and runs with it, aiming for the ability to map a cockpit’s joysticks, switches, and other hardware to real-world counterparts. What does that mean? It means a virtual cockpit whose flight sticks, levers, and switches have working physical versions that actually exist exactly where they appear to be.
A few things about the project design caught our eye. One is the serial communications protocol intended to interface easily with microcontrollers, allowing for feedback between the program and any custom peripherals. (By the way, this is the same approach Kerbal Space Program took with KSPSerialIO, which enables custom mission control hardware at whatever level of complexity a user may wish to implement.)
The possibilities are demonstrated starting around 1:09 in the teaser trailer (embedded below) in which a custom controller is drawn up in CAD, then 3D-printed and attached to an Arduino, and finally the 3D model is imported into the cockpit as a 1:1 representation of the actual working unit, with visual positional feedback.
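To get a feel for the microcontroller side of such a setup, here’s a minimal hedged sketch of the kind of two-way link KSPSerialIO popularized; the single-letter packet format and pin choices are our own invention, not Tinker Pilot’s actual protocol:

```cpp
// Hypothetical two-way serial link between one cockpit switch and the
// simulator. The packet format ('S' = switch report, 'L' = lamp command)
// is invented here for illustration; Tinker Pilot defines its own.
const int SWITCH_PIN = 7;
const int LAMP_PIN = 13;
int lastState = -1;

void setup() {
  Serial.begin(115200);
  pinMode(SWITCH_PIN, INPUT_PULLUP);
  pinMode(LAMP_PIN, OUTPUT);
}

void loop() {
  // Report switch changes up to the simulator.
  int state = (digitalRead(SWITCH_PIN) == LOW) ? 1 : 0;
  if (state != lastState) {
    Serial.write('S');
    Serial.write((uint8_t)state);
    lastState = state;
  }
  // Accept feedback packets, e.g. the sim lighting a warning lamp.
  while (Serial.available() >= 2) {
    if (Serial.read() == 'L') {
      digitalWrite(LAMP_PIN, Serial.read() ? HIGH : LOW);
    }
  }
}
```

The appeal of this approach is that the simulator doesn’t care what’s on the other end of the wire, so a panel can grow one switch at a time.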
Unlike the experiment we saw that attached a Vive Tracker to a chair, there is no indication that Tinker Pilot needs positional trackers on individual controls. In a cockpit layout, controls can reasonably be expected to remain in fixed positions relative to the cockpit, meaning they can be set up once as 1:1 representations of the physical layout and otherwise left alone. The kind of experimentation available today even to individual developers and small teams is remarkable, and it’s fascinating to see these ideas being put to the test.
VR headsets have been seeing new life for a few years now, and when it comes to head-mounted displays, the field of view (FOV) is one of the specs everyone’s keen to discover. Valve Software has published a highly technical yet accessibly-presented document that explains why FOV is a complicated thing to pin down for head-mounted displays. FOV is relatively simple for things like cameras, but it gets much harder to define or measure when lenses are used to put images right up next to eyeballs.
The document goes into some useful detail about head-mounted displays in general, the design trade-offs, and naturally talks about the brand-new Valve Index VR headset in particular. The Index uses proprietary lenses combined with a slight outward cant to each eye’s display, and they explain precisely what benefits are gained from each design point. Eye relief (distance from eye to lens), lens shape and mounting (limiting how close the eye can physically get), and adjustability (because faces and eyes come in different configurations) all have a role to play. It’s a situation where every millimeter matters.
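To see why eye relief alone moves the needle so much, a crude model helps: treat the visible lens edge as the limiting aperture, so monocular FOV is roughly 2 × atan(lens radius / eye relief). Here’s that back-of-the-envelope math with made-up numbers (real HMD optics are far messier, which is Valve’s whole point):

```cpp
// Back-of-the-envelope monocular FOV estimate for an HMD, treating the
// lens edge as the limiting aperture and ignoring distortion entirely.
// All numbers are invented for illustration.
#include <cmath>
#include <cstdio>

const double PI = 3.14159265358979;

double fovDegrees(double lensRadiusMm, double eyeReliefMm) {
  return 2.0 * atan(lensRadiusMm / eyeReliefMm) * 180.0 / PI;
}

int main() {
  // Moving the eye a few millimeters closer to the lens buys a lot of FOV,
  // which is why adjustable eye relief matters.
  printf("25 mm lens radius, 18 mm eye relief: %.0f degrees\n", fovDegrees(25, 18));
  printf("25 mm lens radius, 12 mm eye relief: %.0f degrees\n", fovDegrees(25, 12));
  return 0;
}
```

In this toy model, moving the eye 6 mm closer gains roughly 20 degrees of FOV, which is why those millimeters matter.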
If there’s one main point Valve is trying to make with this document, it’s summed up as “it’s really hard to use a single number to effectively describe the field of view of an HMD.” They plan to publish additional information on the topics of modding as well as optics, so keep an eye on their Valve Index Deep Dive publication list.
The folks behind the Atmos Extended Reality (XR) headset want to provide improved accessibility with an open ecosystem, and they aim to do it with a WebVR-capable headset design that is self-contained, 3D-printable, and open source. Their immediate goal is to release a development kit, then refine the design for a wider release.
The front of the headset has a camera-based tracking board to provide all the modern goodies like inside-out head and hand tracking as well as the ability to pass through video. The design also provides for a variety of interface methods such as eye tracking and 6 DoF controllers.
With all that, the headset gives users maximum flexibility to experiment with and create different applications while working to keep development simple. A short video showing off the modular design of the HMD and optical assembly is embedded below.
Extended Reality (XR) has emerged as a catch-all term to cover broad combinations of real and virtual elements. On one end of the spectrum are completely virtual elements such as in virtual reality (VR), and towards the other end of the spectrum are things like augmented reality (AR) in which virtual elements are integrated with real ones in varying ratios. With the ability to sense the real world and pass through video from the cameras, developers can choose to integrate as much or as little as they wish.
You may think of Alzheimer’s as a disease of the elderly, but the truth is people who suffer from it have had it for years — sometimes decades — before they notice. Early detection can help doctors minimize the impact the condition has on your brain, so there’s starting to be an emphasis on testing middle-aged adults for the earliest signs of the illness. It turns out that one of the first noticeable symptoms is a decline in your ability to navigate. [Dennis Chan] at Cambridge Biomedical Research Centre and his team are now using virtual reality to determine how well people can navigate as a way to assess Alzheimer’s earlier than is possible with other techniques.
Current tests mostly measure your ability to remember things, but by the time memory loss is noticeable, the disease has often already progressed. This test instead has the subject walk to different cones and remember their locations, and it has already proven more effective than the standard memory test.
Consider the complexity of the appendages sitting at the end of your arms. Together, your two hands contain over a quarter of all the bones in your body, use dozens of muscles both in the hand itself and extending up the forearm, and are capable of almost infinite variation in the movements they can create. They are exquisite machines.
And yet when it comes to virtual reality, most simulations treat the hands like inert blobs. That may be partly due to their complexity; doing motion capture from so many joints can be computationally challenging. But this pressure-sensitive hand motion capture rig aims to change that. The product of an undergraduate project by [Leslie], [Hunter], and [Matthew], the idea was to provide an economical and effective way to capture gestures for virtual reality simulators, which generally focus on capturing large motions from the whole body.
The sensor consists of a sandwich of polyurethane foam with strain gauge sensors embedded within. The user slips a hand into the foam and rests the fingers on the sensors. A Teensy and twenty lines of code translate finger motions within the sandwich into five axes of joystick movement, which are then sent to Unreal Engine, where they drive a 3D model of a hand in a VR game of “Rock, Paper, Scissors.”
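The original twenty lines aren’t shown, but with Teensy’s built-in USB joystick support the whole mapping really can be that short. A hedged reconstruction (pin assignments and the finger-to-axis mapping are our guesses) might look like:

```cpp
// Hypothetical Teensy reconstruction of the project's ~20 lines of code:
// five strain-gauge channels become five USB joystick axes. Pin choices
// are invented; build with a USB Type that includes "Joystick" in Teensyduino.
const int gaugePins[5] = {A0, A1, A2, A3, A4};  // one channel per finger

void setup() {
  Joystick.useManualSend(true);  // batch all axes into one USB report
}

void loop() {
  // Teensy joystick axes expect 0-1023, matching analogRead() directly.
  Joystick.X(analogRead(gaugePins[0]));           // thumb
  Joystick.Y(analogRead(gaugePins[1]));           // index
  Joystick.Z(analogRead(gaugePins[2]));           // middle
  Joystick.Zrotate(analogRead(gaugePins[3]));     // ring
  Joystick.sliderLeft(analogRead(gaugePins[4]));  // pinky
  Joystick.send_now();
  delay(10);  // ~100 Hz updates, plenty for hand gestures
}
```

Because the device enumerates as a plain USB joystick, the game engine side needs no custom driver at all, which is likely part of why the firmware stays so small.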
[Leslie] and her colleagues have a way to go on this; testers complained that the flat hand posture was unnatural, and that the foam heated things up quickly. Maybe something more along the lines of these gesture-capturing gloves would work?