The future is VR, or at least that’s what we were being told two years ago. Until that future arrives, there’s still plenty of time to experiment with virtual worlds, the Metaverse, and other high-concept sci-fi tropes from the 80s and 90s. Interactive telepresence is what the Black Mirror Project is all about. Their plan is to create interactive software based on the JanusVR platform for building immersive VR experiences.
The Black Mirror project makes use of glTF, the runtime 3D asset delivery format, to create environments ranging from simple telepresence to the mind-bending realities the team unabashedly compares to [Neal Stephenson]’s Metaverse.
For their hardware implementation, the team is looking at UDOO X86 single-board computers, with SSDs for data storage as well as a bevy of sensors — gesture, light, accelerometer, magnetometer — supplying the computer with data. There’s an Intel RealSense camera in the build, and the display is unlike any other VR setup we’ve seen before. It’s a tensor display with multiple projection planes and variable backlighting that has a greater depth of field and wider field of view than almost any other display.
After I stripped the buzzwords, all I ended up with was…
https://upload.wikimedia.org/wikipedia/commons/f/fe/France_in_XXI_Century._Correspondance_cinema.jpg
This is a better article than the post. Nice picture.
Agreed. There was a lot of ‘wut’ in that post.
Not ‘wut just went over my head?’ no.
So, what are they doing exactly? Is this art, a VR platform, hardware for VR, or what?
I’m also very confused:
“From there we started with a more open concept in designing a Hardware Reference Design platform based on Open Hardware and Software that anyone could build and be able to use our tools to build a device suitable for public or private use for many people as possible.”
So they are making tools for people to make VR Experiences?
Isn’t this what the Oculus SDK etc. already do? I feel like the author has more contact with them than they’re letting on, and has actually experienced what they’re talking about on their project page, which they’ve just used as a dump for a load of articles. That kind of behaviour reminds me of being in school and being told that the best way to make a bibliography for your project was to create a blog and just link everything there, so you had a record of it and could refer to it all easily.
And then they just saw the Hackaday Prize and went ‘well, why not enter?’
“There’s an Intel RealSense camera in the build, and the display is unlike any other VR setup we’ve seen before. It’s a tensor display with multiple projection planes and variable backlighting that has a greater depth of field and wider field of view than almost any other display.”
Me, want!
It’s transcendental hyper-reality by virtualisation of the environmental continuum cyberwotsit…
… they’re actually *gasp* Living on video !!!
https://youtu.be/pL9Tx-fC7x0
That was meant to end up under Olsen’s comment; not sure if it was me or WP being flaky.
Sorry, but that is just a load of random buzzwords which don’t even make sense when put together. They don’t even have a proof of concept; just look at the “demo” video.
I do wonder how e.g. the UDOO, which doesn’t have a dedicated GPU (nor any means to attach one), is going to handle that “tensor display” (which is basically several displays stacked together). Current high-end GPUs have enough to do with an Oculus Rift or Vive, which have only a single display to drive. And these folks want to use the built-in Intel GPUs? I wish them luck, they will need it. And that’s just the most obvious thing that has no chance of working in their project.
Then how do they plan to power all this? Carry a few car batteries on their backs? Or are they planning to use the Atom version of the UDOO? (Good luck with that, too.)
Apart from the display (which I don’t think they will be able to build), all of this could be handled much more simply and cheaply by an off-the-shelf laptop. But then they wouldn’t have a project, would they?
On the software side: glTF is a basic file format, like many others. There is nothing magic about it. Yet another random technology pick which makes zero sense; a normal person would choose a 3D engine and then use whatever formats that engine actually supports, not pick a file format first.
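For what it’s worth, a glTF asset really is mostly plain JSON describing a scene graph (plus binary buffers for the mesh data). Here’s a tiny hand-written fragment just to illustrate the point; the node names are made up and have nothing to do with their project:

```python
import json

# A minimal, hand-written glTF 2.0 document: just JSON describing a scene graph.
# (Mesh/buffer data omitted; a real asset would also reference binary buffers.)
gltf_doc = """
{
  "asset": { "version": "2.0" },
  "scene": 0,
  "scenes": [ { "nodes": [0] } ],
  "nodes": [ { "name": "room", "children": [1] },
             { "name": "avatar", "translation": [0, 1.6, 0] } ]
}
"""

gltf = json.loads(gltf_doc)
for node in gltf["nodes"]:
    print(node["name"], node.get("translation", "at origin"))
```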
These guys obviously have no experience in this field and very little idea of what is actually needed to make any virtual reality work.
They’re probably very good at winning grants though ;)
I don’t think the rendering for the tensor display is any more difficult than for a conventional display.
All you need to do is divide up the objects by z-distance and render each group to a different display plane. Someone who writes 3D drivers could figure this out “easily” (i.e. it’s way too hard for me).
The hard part would be dealing with objects that cover a wide range of distances – the ground in your typical video game, for example. There might be annoying discontinuities as it splits across the screens.
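As a rough sketch of that split-by-depth idea: the layer boundaries, object names and distances below are all made up for illustration, not anything from the project, and a real renderer would obviously draw geometry rather than print names.

```python
# Hypothetical sketch: bin scene objects into depth layers, one per display plane.
from dataclasses import dataclass

@dataclass
class SceneObject:
    name: str
    z_distance: float  # distance from the viewer, in metres (assumed)

# Near/far boundaries for each projection plane (made-up values).
LAYER_BOUNDS = [(0.0, 1.0), (1.0, 3.0), (3.0, 10.0), (10.0, float("inf"))]

def assign_layers(objects):
    """Group objects by which depth slab they fall into."""
    layers = [[] for _ in LAYER_BOUNDS]
    for obj in objects:
        for i, (near, far) in enumerate(LAYER_BOUNDS):
            if near <= obj.z_distance < far:
                layers[i].append(obj)
                break
    return layers

scene = [SceneObject("hand", 0.4), SceneObject("table", 1.5),
         SceneObject("wall", 4.0), SceneObject("skybox", 1e6)]

for i, layer in enumerate(assign_layers(scene)):
    # Each layer would be drawn to its own projection plane; here we just print it.
    print(f"display plane {i}: {[o.name for o in layer]}")
```

Something like a ground plane would straddle several of those slabs at once, which is exactly where the discontinuities mentioned above would show up.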
I’m still impressed that they seem to be rendering to solid surfaces moving at a large proportion of the speed of light, such that they have to use tensor math to know where to aim.
(Tensor seems to be marketing speak for time variant here, you know, like an oldschool CRT scanning beam.)
It is not a question of the math being hard. The problem is that it requires 4x (or however many layers they want to draw) as much fill rate as a normal display does. Consider that you need a 90-120Hz refresh rate to not feel sick, at least 1080×1200 per-eye resolution (that’s what the Vive/Rift use, and the image is still pixelated), and antialiasing, without which the picture looks terrible in the HMD (which typically means rendering at a higher resolution and averaging it down, effectively increasing the required fill rate yet again), and suddenly you discover that your GeForce GTX 1060 is at its limits.
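A rough back-of-the-envelope estimate, with the resolution, refresh rate, layer count and supersampling factor all assumed rather than taken from their specs:

```python
# Back-of-the-envelope fill-rate estimate; every number here is an assumption.
width, height = 1080, 1200   # per-eye panel resolution, Vive/Rift class
eyes = 2
refresh_hz = 90
layers = 4                   # stacked projection planes in the "tensor display"
supersample = 1.4 ** 2       # typical supersampling area factor

pixels_per_second = width * height * eyes * refresh_hz * layers * supersample
print(f"{pixels_per_second / 1e9:.1f} Gpix/s of fill rate, before any shading cost")
# ~1.8 Gpix/s here, versus ~0.46 Gpix/s for a single-layer HMD at the same
# settings: a lot to ask of an integrated Intel GPU.
```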
And these guys want to draw 4x (or more) as much using a UDOO that can only have the integrated Intel GPU? Even if they dedicated one UDOO to each layer (which I suspect they are intending to do, because their materials show a stack of these UDOO computers), it would struggle, never mind the synchronization problems (the image streams would need to be frame-locked together, otherwise you would see disturbing artifacts from the layers redrawing at different times).
This just ain’t happening.