VR Headset With Custom Face Fitting Gets Even More Custom

The Bigscreen Beyond is a small and lightweight VR headset that in part achieves its small size and weight by requiring custom fitting based on a facial scan. [Val's Virtuals] managed to improve the fit even further by redesigning the facial interface and using a 3D scan of one's own head to fine-tune the result. The new designs distribute weight more evenly while also providing an optional flip-up connection.

It may be true that only a minority of people own a Bigscreen Beyond headset, and fewer still are willing to DIY their own custom facial interface. But [Val]'s workflow and directions for using Blender to combine a 3D scan of one's face with his redesigned parts to create a custom-fitted, foam-lined facial interface are good reading, and worth keeping in mind for anyone designing wearables that could benefit from custom fitting. It's all spelled out in the project's documentation: look for the .txt file among the 3D models.
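
For a rough idea of what the Blender step can look like in script form, here is a minimal sketch of the core operation: import the face scan and an interface part, then use a Boolean difference so the foam-lined surface follows the scanned contour. The file and object names are hypothetical, and [Val]'s actual directions are for working interactively in Blender rather than from a script.

```python
import bpy

# Hypothetical file names -- substitute your own face scan and the
# redesigned interface part. (Blender 4.1+ renamed the STL operators
# to bpy.ops.wm.stl_import / stl_export.)
bpy.ops.import_mesh.stl(filepath="face_scan.stl")
bpy.ops.import_mesh.stl(filepath="facial_interface.stl")

scan = bpy.data.objects["face_scan"]              # objects are named after the files
interface = bpy.data.objects["facial_interface"]

# Carve the face contour out of the interface with a Boolean difference.
mod = interface.modifiers.new(name="FitToFace", type='BOOLEAN')
mod.operation = 'DIFFERENCE'
mod.object = scan

bpy.context.view_layer.objects.active = interface
bpy.ops.object.modifier_apply(modifier=mod.name)

# Export just the fitted part for printing.
bpy.ops.object.select_all(action='DESELECT')
interface.select_set(True)
bpy.ops.export_mesh.stl(filepath="fitted_interface.stl", use_selection=True)
```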

We’ve seen a variety of DIY approaches to VR hardware, from nearly scratch-built headsets to lens experiments, and one thing that’s clear is that better comfort is always an improvement. With newer iPhones able to do 3D scanning and 1:1 scale scanning in general becoming more accessible, we have a feeling we’re going to see more of this DIY approach to ultra-customization.

VR Headset With HDMI Input Invites A New Kind Of Cyberdeck

Meta’s Quest VR headset recently got the ability to accept and display video over USB-C, and it’s started some gears turning in folks’ heads. [Ian Hamilton] put together a quick concept machine consisting of a Raspberry Pi 400 that uses a VR headset as its monitor, which sure seems like the bones of a new breed of cyberdeck.

With passthrough on, one still sees the outside world.

The computer-in-a-keyboard nature of the Pi 400 means that little more than a mouse and the VR headset are needed to get a functional computing environment. Well, that and some cables and adapters.

What’s compelling about this is that the VR headset is much more than just a glorified monitor. In the VR environment, the external video source (in this case, the Raspberry Pi) is displayed in a window just like any other application. Pass-through can also be turned on, so that the headset’s external cameras display one’s surroundings as background. This means there’s no loss of environmental awareness while using the rig.

[Note: the following has been updated for clarity and after some hands-on testing]

Here's how it works: the Quest has a single USB-C port on the side, and an app running on the headset (somewhat oddly named "Meta Quest HDMI link") takes care of accepting video over USB and displaying it in a window within the headset. The video signal it expects is UVC (USB Video Class), which is what most USB webcams and similar video devices output. (There is another way to do video over USB-C, DisplayPort alt mode, which requires support from both the video source and the USB-C cable. That is not what's being used here; the Quest doesn't support it, nor is it accepting HDMI directly.) In [Ian]'s case, the Raspberry Pi 400 outputs HDMI, and he uses a Shadowcast 2 capture card to accept HDMI on one end and output UVC video on the other, which is then fed into the Quest over a USB-C cable.
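
Since the capture card presents itself as a generic UVC device, it's easy to sanity-check the HDMI-to-UVC leg on any computer before involving the headset at all. A minimal OpenCV sketch (the device index is an assumption and may differ on your machine) treats the card like a webcam and grabs a frame from whatever is feeding it HDMI:

```python
import cv2

# The capture card should enumerate like any other webcam. Index 0 is an
# assumption -- try 1, 2, ... if a built-in camera claims the first slot.
cap = cv2.VideoCapture(0)
if not cap.isOpened():
    raise RuntimeError("No UVC device found at index 0")

ok, frame = cap.read()
if ok:
    print(f"Got a {frame.shape[1]}x{frame.shape[0]} frame from the HDMI source")
    cv2.imwrite("hdmi_frame.png", frame)  # keep one frame as proof it works
cap.release()
```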

As a concept it’s an interesting one for sure. Perhaps we’ll see decks of this nature in our next cyberdeck contest?

Meta Doesn’t Allow Camera Access On VR Headsets, So Here’s A Workaround

The cameras at the front of Meta's Quest VR headsets are off-limits to developers, but developer [Michael Gschwandtner] created a workaround (LinkedIn post) and shared implementation details with a VR news site.

The view isn’t a pure camera feed (it includes virtual and UI elements) but it’s a clever workaround.

The demo shows object detection via MobileNet V2, which we’ve seen used for machine vision on embedded systems like the Raspberry Pi. In this case it is running locally on the VR headset, automatically identifying objects even though the app cannot directly access the front-facing cameras to see what’s in front of it.
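
The Quest app does object detection, but the underlying model family is what makes it feasible on-device. As a rough desktop-side illustration (not the app's actual code), here's plain MobileNet V2 image classification with torchvision's pretrained weights on a single frame; the file name is hypothetical, and a detection variant would swap in a MobileNet-backed detector:

```python
import torch
from PIL import Image
from torchvision import models

# Pretrained MobileNet V2 (ImageNet classes) -- small enough for mobile hardware.
weights = models.MobileNet_V2_Weights.DEFAULT
model = models.mobilenet_v2(weights=weights).eval()
preprocess = weights.transforms()

img = Image.open("passthrough_frame.jpg")          # hypothetical snapshot
with torch.no_grad():
    logits = model(preprocess(img).unsqueeze(0))

best = logits.argmax(dim=1).item()
print("Best guess:", weights.meta["categories"][best])
```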

The workaround is conceptually simple, and leverages the headset’s ability to cast its video feed over Wi-Fi to other devices. This feature is normally used for people to share and spectate VR gameplay.

First, [Gschwandtner]'s app sets up passthrough video, which means the camera feed from the front of the headset is used as the background in VR, creating a mixed-reality environment. Then the app essentially spawns a Chromium browser and casts the headset's video feed to itself. It is this video that is used, in a roundabout way, to access what the cameras see.

The resulting view isn't taken directly from the cameras; it's more akin to snapshotting a through-the-headset view, which means it contains virtual elements like the UI. Still, with passthrough turned on it's a pretty clever workaround, and one that is contained entirely on-device.

Meta is hesitant to give developers direct access to camera views on its VR headsets, and while John Carmack (former Meta consulting CTO) thinks doing so is worthwhile and can be done safely, Meta hasn't opened that door yet.

Robust Speech-to-Text, Running Locally On Quest VR Headset

[saurabhchalke] recently released whisper.unity, a Unity package that implements Whisper locally on the Meta Quest 3 VR headset, bringing nearly real-time transcription of natural speech to the device in an easy-to-use way.

Whisper is a robust, free, and open-source neural network capable of quickly recognizing and transcribing multilingual natural speech with near-human accuracy, and this package implements it entirely on-device, meaning it runs locally and doesn't talk to any remote service.
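
whisper.unity wraps this up for Unity and the Quest, but the same model is easy to try anywhere. A minimal sketch using the openai-whisper Python package (the audio file name is hypothetical) shows the entire transcription loop; the model weights are downloaded once and cached, after which everything runs locally:

```python
import whisper  # pip install openai-whisper

# "base" is a small multilingual model; larger ones trade speed for accuracy.
model = whisper.load_model("base")

# Transcribe a local recording -- no remote service involved.
result = model.transcribe("meeting_audio.wav")
print(result["text"])

# The multilingual models can also translate speech to English directly.
print(model.transcribe("meeting_audio.wav", task="translate")["text"])
```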

Meta Quest 3

It used to be that voice input for projects was a tricky business with iffy results and a strong reliance on speaker training and wake-words, but that’s no longer the case. Reliable and nearly real-time speech recognition is something that’s easily within the average hacker’s reach nowadays.

We covered Whisper getting a plain C/C++ implementation, which opened the door to running it on a variety of platforms and devices. [Macoron] wrapped whisper.cpp in a Unity binding, which served as inspiration for this project, in which [saurabhchalke] packaged it up for the Quest 3. So if you are doing any VR projects in Unity and want reliable speech input with a side order of easy translation, it's never been simpler.

Re-imagining Telepresence With Humanoid Robots And VR Headsets

Don’t let the name of the Open-TeleVision project fool you; it’s a framework for improving telepresence and making robotic teleoperation far more intuitive than it otherwise would be. It accomplishes this in part by taking advantage of the remarkable technology packed into modern VR headsets like the Apple Vision Pro and Meta Quest. There are loads of videos on the project page, many of which demonstrate successful teleoperation across vast distances.

Teleoperation of robotic effectors typically takes some getting used to. The camera views are unusual, the limbs don’t move the same way arms do, and intuitive human things like looking around to get a sense of where everything is don’t translate well.

A stereo camera with gimbal streaming to a VR headset complete with head tracking seems like a very hackable design.

To address this, researchers provided the user with a robot-mounted, real-time stereo video stream (through which the user can turn their head and look around normally) and mapped the user's arm and hand movements to their humanoid robotic counterparts. This provides the feedback needed to manipulate objects and perform tasks in a much more intuitive way. In short, when our eyes, bodies, and hands look and work more or less the way we expect, it turns out it's far easier to perform tasks.

The research paper goes into detail about the different systems, but in essence, a stereo RGB-and-depth camera sits on a 3D-printed gimbal atop a humanoid robot frame like the Unitree H1, which is equipped with high-dexterity hands. A VR headset takes care of displaying the real-time stereoscopic video stream and letting the user look around. Hand tracking for the user is mapped to the dexterous hands and fingers. This lets a person look at, manipulate, and handle things without in-depth training. Perhaps slower and more clumsily than they would like, but in an intuitive way all the same.
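
As a back-of-the-envelope illustration of the head-tracking half of that loop (not the project's code; the axis conventions, mechanical limits, and serial protocol here are all assumptions), the core job is converting the headset's orientation quaternion into yaw and pitch angles and sending those to the two gimbal motors:

```python
import math
import serial  # pyserial; the command format below is a made-up example

def quat_to_yaw_pitch(w, x, y, z):
    """Convert a unit orientation quaternion to yaw/pitch in degrees (ZYX convention)."""
    yaw = math.degrees(math.atan2(2 * (w * z + x * y), 1 - 2 * (y * y + z * z)))
    pitch = math.degrees(math.asin(max(-1.0, min(1.0, 2 * (w * y - z * x)))))
    return yaw, pitch

gimbal = serial.Serial("/dev/ttyUSB0", 115200)  # hypothetical gimbal controller

def update_gimbal(headset_quat):
    yaw, pitch = quat_to_yaw_pitch(*headset_quat)
    # Clamp to an assumed mechanical range before commanding the two motors.
    yaw = max(-90.0, min(90.0, yaw))
    pitch = max(-45.0, min(45.0, pitch))
    gimbal.write(f"Y{yaw:.1f} P{pitch:.1f}\n".encode())
```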

Interested in taking a closer look? The GitHub repository has the necessary code, and while most of us will never be mashing ADD TO CART on something like the Unitree H1, the reference design for a stereo camera streaming to a VR headset and mirroring head tracking with a two-motor gimbal looks like the sort of thing that would be useful for a telepresence project or two.


Giving People An Owl-like Visual Field Via VR Feels Surprisingly Natural

We love hearing about a good experiment, and here's a pretty neat one: researchers used a VR headset, an off-the-shelf VR360 camera, and some custom software to glue them together. The result? Owl-Vision squashes a full 360° of undistorted horizontal visual perception into 90° of neck travel to either side. One can see all around oneself without needing to physically turn one's head any further than is natural.
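
The remapping at the heart of it is an angular gain on head yaw: ±90° of physical neck rotation covers ±180° of rendered view direction, a gain of two. A toy sketch of that one-line idea (the real system works on the full VR360 sphere and handles distortion, which this ignores):

```python
GAIN = 2.0  # 90 degrees of neck travel per side -> 180 degrees of view per side

def view_yaw_from_head_yaw(head_yaw_deg: float) -> float:
    """Amplify physical head yaw so the full 360-degree view fits in +/-90 degrees of neck travel."""
    return max(-180.0, min(180.0, head_yaw_deg * GAIN))

# Turning your head 45 degrees to the left shows what is actually 90 degrees to the left.
print(view_yaw_from_head_yaw(-45.0))   # -> -90.0
```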

It's still a work in progress, and there's currently no free way to access the paper, but the demonstration video at that link (also embedded below) gives a solid overview of what's going on.


Bats Can No Longer Haunt Apple VR Headsets Via Web Exploit

Bug reporting doesn't usually have a lot of visuals. Not so with the visionOS bug [Ryan Pickren] found, which fills a user's space with screeching bats after they visit a malicious website. Even better, closing the browser doesn't get rid of them! Better still? It doesn't need to be bats; it could be spiders. Fun!

The bug has been fixed, but here's how it worked: the Safari browser build for visionOS allowed a malicious website to fill the user's 3D space with animated objects without interaction or permission. The code to trigger this is remarkably succinct, and is actually a new twist on an old feature: Apple AR Quick Look, an HTML-based mechanism for rendering 3D augmented-reality content in Safari.

How about spiders, instead?

Leveraging this old feature is what lets an untrusted website launch an arbitrary number of animated 3D objects — complete with sound — into a user’s virtual space without any interaction from the user whatsoever. The icing on the cake is that Quick Look is a separate process, so closing Safari doesn’t get rid of the pests.

Providing immersive 3D via a web browser is a valuable way to deliver interactive content on both desktops and VR headsets; a good example is the fantastic virtual BBC Micro which uses WebXR. But on the Apple Vision Pro the user is always involved and there are privacy boundaries that corral such content. Things being launched into a user’s space in an interaction-free way is certainly not intended behavior.

The final interesting bit about this bug (or loophole) is that it defies easy classification and highlights a new sort of issue. While it seems obvious from a user experience and interface perspective that a random website spawning screeching crawlies into one's personal space is not ideal, is this a denial-of-service issue? A privilege escalation that technically isn't? It's certainly unexpected behavior, but that doesn't really capture the potential psychological impact such bugs can have. Perhaps the invasion of personal space and user boundaries will become a quantifiable aspect of bugs on these new platforms. What fun.