VR Headset With Custom Face Fitting Gets Even More Custom

The Bigscreen Beyond is a small and lightweight VR headset that achieves its small size and weight in part by requiring custom fitting based on a facial scan. [Val’s Virtuals] managed to improve the fit even further by redesigning the facial interface and fine-tuning it against a 3D scan of one’s own head. The new designs distribute weight more evenly while also providing an optional flip-up connection.

It may be true that only a minority of people own a Bigscreen Beyond headset, and even fewer of them are willing to DIY their own custom facial interface. But [Val]’s workflow and directions for using Blender to combine a 3D scan of one’s face with his redesigned parts, yielding a custom-fitted, foam-lined facial interface, are good reading and worth keeping in mind for anyone who designs wearables that could benefit from custom fitting. It’s all spelled out in the project’s documentation; look for the .txt file among the 3D models.
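
As a rough illustration of how such a Blender step can be scripted, here’s a minimal sketch using Blender’s bpy API to boolean-subtract a face scan from an interface frame. The file names are placeholders, and [Val]’s actual workflow involves manual alignment and cleanup that isn’t shown here.

    import bpy

    # Import the face scan and the redesigned interface frame. File names are
    # placeholders; bpy.ops.import_mesh.stl is the legacy STL importer
    # (Blender 4.x renamed it to bpy.ops.wm.stl_import).
    bpy.ops.import_mesh.stl(filepath="face_scan.stl")
    scan = bpy.context.selected_objects[0]
    bpy.ops.import_mesh.stl(filepath="interface_frame.stl")
    frame = bpy.context.selected_objects[0]

    # Boolean-subtract the scan from the frame so the contact surface takes on
    # the shape of the wearer's face (the two meshes must already be aligned).
    mod = frame.modifiers.new(name="FitToFace", type='BOOLEAN')
    mod.operation = 'DIFFERENCE'
    mod.object = scan
    bpy.context.view_layer.objects.active = frame
    bpy.ops.object.modifier_apply(modifier=mod.name)

    # Export only the fitted frame, ready for 3D printing
    bpy.ops.object.select_all(action='DESELECT')
    frame.select_set(True)
    bpy.ops.export_mesh.stl(filepath="fitted_interface.stl", use_selection=True)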

We’ve seen a variety of DIY approaches to VR hardware, from nearly scratch-built headsets to lens experiments, and one thing that’s clear is that better comfort is always an improvement. With newer iPhones able to do 3D scanning and 1:1 scale scanning in general becoming more accessible, we have a feeling we’re going to see more of this DIY approach to ultra-customization.

VR Headset With HDMI Input Invites A New Kind Of Cyberdeck

Meta’s Quest VR headset recently got the ability to accept and display video over USB-C, and it’s started some gears turning in folks’ heads. [Ian Hamilton] put together a quick concept machine consisting of a Raspberry Pi 400 that uses a VR headset as its monitor, which sure seems like the bones of a new breed of cyberdeck.

With passthrough on, one still sees the outside world.

The computer-in-a-keyboard nature of the Pi 400 means that little more than a mouse and the VR headset are needed to get a functional computing environment. Well, that and some cables and adapters.

What’s compelling about this is that the VR headset is much more than a glorified monitor. In the VR environment, the external video source (in this case, the Raspberry Pi) is displayed in a window just like any other application. Passthrough can also be turned on, so that the headset’s external cameras display one’s surroundings as the background. This means there’s no loss of environmental awareness while using the rig.

[Note: the following has been updated for clarity and after some hands-on testing]

Here’s how it works: the Quest has a single USB-C port on the side, and an app (somewhat oddly named “Meta Quest HDMI link”) running on the headset takes care of accepting video over USB and displaying it in a window. The expected video signal is UVC (USB Video Class), which is what most USB webcams and similar video devices output. (There is another way to do video over USB-C, DisplayPort alt mode, which both the video source and the USB-C cable must support. That is not what’s being used here; the Quest doesn’t support it, nor does it accept HDMI directly.) In [Ian]’s case, the Raspberry Pi 400 outputs HDMI, and a Shadowcast 2 capture card accepts that HDMI on one end and outputs UVC video on the other, which is then fed into the Quest over a USB-C cable.
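
To make the UVC part concrete: a UVC device enumerates as a generic webcam, so no vendor driver is needed on the host side. As an illustrative sketch (not part of [Ian]’s setup), here’s how the same capture card’s output could be read on a desktop with OpenCV in Python; device index 0 is an assumption that varies by system.

    import cv2

    # The HDMI-to-UVC capture card shows up as an ordinary video device
    # (e.g. /dev/video0 on Linux); index 0 is an assumption, adjust to suit.
    cap = cv2.VideoCapture(0)
    if not cap.isOpened():
        raise RuntimeError("No UVC device found; is the capture card plugged in?")

    while True:
        ok, frame = cap.read()  # one BGR frame from the UVC stream
        if not ok:
            break
        cv2.imshow("UVC feed", frame)
        if cv2.waitKey(1) & 0xFF == ord('q'):  # press q to quit
            break

    cap.release()
    cv2.destroyAllWindows()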

As a concept it’s an interesting one for sure. Perhaps we’ll see decks of this nature in our next cyberdeck contest?

Meta Cancels Augmented Reality Headset After Apple Vision Pro Falls Flat

The history of consumer technology is littered with things that came and went. For whatever reason, consumers never really adopt the tech, and it eventually dies. Some of those concepts persistently hang on, however, such as augmented reality (AR). Most recently, Apple launched its Vision Pro ‘mixed reality’ headset at an absolutely astounding price, to a largely negative response and disappointing sales numbers. This impending market flop now seems to have prompted Meta (née Facebook) to reconsider bringing a similar AR device to market.

To most, this news will come as little surprise, considering that Microsoft’s AR product (HoloLens) explicitly targets (government) niches with substantial budgets, and Google’s smart glasses have crashed and burned despite multiple attempts at the market. In a consumer market where virtual reality products are already struggling not to become another 3D-display debacle, it seems that amid all this sci-fi-adjacent ‘cool technology,’ a lot of executives and marketing critters forgo the basic question: ‘why would anyone use this?’

Continue reading “Meta Cancels Augmented Reality Headset After Apple Vision Pro Falls Flat”

Meta Doesn’t Allow Camera Access On VR Headsets, So Here’s A Workaround

The cameras at the front of Meta’s Quest VR headsets are off-limits to developers, but developer [Michael Gschwandtner] created a workaround (LinkedIn post) and shared implementation details with a VR news site.

The view isn’t a pure camera feed (it includes virtual and UI elements) but it’s a clever workaround.

The demo shows object detection via MobileNet V2, which we’ve seen used for machine vision on embedded systems like the Raspberry Pi. In this case it is running locally on the VR headset, automatically identifying objects even though the app cannot directly access the front-facing cameras to see what’s in front of it.

The workaround is conceptually simple, and leverages the headset’s ability to cast its video feed over Wi-Fi to other devices. This feature is normally used for people to share and spectate VR gameplay.

First, [Gschwandtner]’s app turns on passthrough video, meaning the camera feed from the front of the headset is used as the background in VR, creating a mixed-reality environment. Then the app essentially spawns a Chromium browser and casts the headset’s video feed to itself. It is this video that is used, in a roundabout way, to access what the cameras see.

The resulting view isn’t taken directly from the cameras; it’s more like snapshotting the through-the-headset view, which means it contains virtual elements like the UI. Still, with passthrough turned on, it’s a pretty clever workaround that runs entirely on-device.
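
As an illustrative sketch of the inference half (not [Gschwandtner]’s actual code), here’s how frames pulled from any video stream can be run through a stock MobileNet V2 in Python. For brevity this uses the ImageNet classifier variant rather than a detection head, and the stream source is a placeholder.

    import cv2
    import numpy as np
    import tensorflow as tf

    # Placeholder source standing in for the headset's self-cast view; any
    # cv2-readable stream URL or device index works the same way.
    cap = cv2.VideoCapture("cast_stream.mp4")

    # Stock ImageNet-trained MobileNet V2, small enough for mobile-class hardware
    model = tf.keras.applications.MobileNetV2(weights="imagenet")

    ok, frame = cap.read()
    if ok:
        # MobileNet V2 expects 224x224 RGB input scaled to [-1, 1]
        rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
        img = cv2.resize(rgb, (224, 224)).astype(np.float32)
        batch = tf.keras.applications.mobilenet_v2.preprocess_input(img[None, ...])
        preds = model.predict(batch)
        top3 = tf.keras.applications.mobilenet_v2.decode_predictions(preds, top=3)[0]
        for _, label, score in top3:
            print(f"{label}: {score:.2f}")

    cap.release()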

Meta has been hesitant to give developers direct access to camera views on its VR headsets. John Carmack (former Meta consulting CTO) thinks opening them up is worthwhile and can be done safely, but Meta isn’t there yet.

Robust Speech-to-Text, Running Locally On Quest VR Headset

[saurabhchalke] recently released whisper.unity, a Unity package that implements Whisper locally on the Meta Quest 3 VR headset, bringing nearly real-time transcription of natural speech to the device in an easy-to-use way.

Whisper is a robust, free, and open-source neural network capable of quickly recognizing and transcribing multilingual natural speech with nearly human-level accuracy. This package implements it entirely on-device, meaning it runs locally and doesn’t interact with any remote service.

It used to be that voice input for projects was a tricky business with iffy results and a strong reliance on speaker training and wake-words, but that’s no longer the case. Reliable and nearly real-time speech recognition is something that’s easily within the average hacker’s reach nowadays.
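
To give a sense of just how low that bar is, here’s a minimal sketch using the reference openai-whisper Python package rather than the whisper.unity binding itself; the model size and audio file name are placeholders.

    import whisper  # pip install openai-whisper

    # "base" is a small multilingual model; everything runs on the local machine
    model = whisper.load_model("base")

    # Transcribe a recorded clip: no cloud service, no wake-word, no speaker training
    result = model.transcribe("voice_note.wav")
    print(result["text"])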

We covered Whisper getting a plain C/C++ implementation, which opened the door to running on a variety of platforms and devices. [Macoron] turned whisper.cpp into a Unity binding, which served as inspiration for this project, and [saurabhchalke] adapted it into a Quest 3 package. So if you are doing any VR projects in Unity and want reliable speech input with a side order of easy translation, it’s never been simpler.

Re-imagining Telepresence With Humanoid Robots And VR Headsets

Don’t let the name of the Open-TeleVision project fool you; it’s a framework for improving telepresence and making robotic teleoperation far more intuitive than it otherwise would be. It accomplishes this in part by taking advantage of the remarkable technology packed into modern VR headsets like the Apple Vision Pro and Meta Quest. There are loads of videos on the project page, many of which demonstrate successful teleoperation across vast distances.

Teleoperation of robotic effectors typically takes some getting used to. The camera views are unusual, the limbs don’t move the same way arms do, and intuitive human things like looking around to get a sense of where everything is don’t translate well.

A stereo camera with gimbal streaming to a VR headset complete with head tracking seems like a very hackable design.

To address this, researchers provided the user with a robot-mounted, real-time stereo video stream (through which the user can turn their head and look around normally) and mapped the user’s arm and hand movements to humanoid robotic counterparts. This provides the feedback needed to manipulate objects and perform tasks in a much more intuitive way. In short, when our eyes, bodies, and hands look and work more or less the way we expect, it turns out tasks are far easier to perform.

The research paper goes into detail about the different systems, but in essence, an RGB stereo depth camera is perched on a 3D-printed gimbal atop a humanoid robot frame (such as the Unitree H1) equipped with high-dexterity hands. A VR headset takes care of displaying the real-time stereoscopic video stream and letting the user look around, while the user’s hand and finger tracking is mapped to the robot’s dexterous hands. This lets a person look at, manipulate, and handle things without in-depth training. Perhaps slower and more clumsily than they would like, but in an intuitive way all the same.
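
The head-tracking half of that design boils down to a small transform. Here’s a hedged Python sketch of the general idea, not the project’s actual code: convert the headset’s orientation quaternion to yaw and pitch angles, then clamp them to the gimbal’s mechanical range (the limits below are assumptions).

    import math

    def head_to_gimbal(quat):
        """Map a headset orientation quaternion (w, x, y, z) to yaw/pitch angles
        in degrees for a two-motor gimbal. A real driver would also handle
        wrap-around, rate limiting, and servo calibration."""
        w, x, y, z = quat
        yaw = math.degrees(math.atan2(2 * (w * z + x * y), 1 - 2 * (y * y + z * z)))
        pitch = math.degrees(math.asin(max(-1.0, min(1.0, 2 * (w * y - z * x)))))
        # Clamp to the gimbal's assumed mechanical range
        return max(-90.0, min(90.0, yaw)), max(-45.0, min(45.0, pitch))

    # Identity quaternion: looking straight ahead
    print(head_to_gimbal((1.0, 0.0, 0.0, 0.0)))  # (0.0, 0.0)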

Interested in taking a closer look? The GitHub repository has the necessary code, and while most of us will never be mashing ADD TO CART on something like the Unitree H1, the reference design for a stereo camera streaming to a VR headset and mirroring head tracking with a two-motor gimbal looks like the sort of thing that would be useful for a telepresence project or two.

Continue reading “Re-imagining Telepresence With Humanoid Robots And VR Headsets”

Giving People An Owl-like Visual Field Via VR Feels Surprisingly Natural

We love hearing about a good experiment, and here’s a pretty neat one: researchers used a VR headset, an off-the-shelf VR360 camera, and some custom software to glue them together. The result? Owl-Vision squashes a full 360° of undistorted horizontal visual perception into 90° of neck travel to either side. One can see all around oneself without needing to physically turn one’s head any further than is natural.
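
The core remapping amounts to a fixed gain on head yaw. Based on the numbers in the write-up (90° of neck travel to either side covering the full 360°), a minimal sketch looks like this:

    # 90 degrees of physical neck travel to either side covers the full
    # 360-degree view, i.e. a fixed yaw gain of 2
    YAW_GAIN = 360.0 / 180.0

    def virtual_yaw(head_yaw_deg):
        """Map physical head yaw to the yaw of the rendered view."""
        return head_yaw_deg * YAW_GAIN

    # Turning the head 90 degrees shows what is directly behind the wearer
    assert virtual_yaw(90.0) == 180.0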

It’s still a work in progress, and the paper is currently paywalled, but the demonstration video at that link (also embedded below) gives a solid overview of what’s going on.

Continue reading “Giving People An Owl-like Visual Field Via VR Feels Surprisingly Natural”