Stepping Inside Art In VR, And The Workflow Behind It

The process of creating something is always chock-full of things to learn, so it’s always a treat when someone takes the time and effort to share it. [Teadrinker] recently published the technique and workflow behind bringing art into VR, explaining exactly how they created Art Plunge (free on Steam), a virtual reality art gallery that lets one step inside paintings.

Extending a painting’s content to fill in the environment is best done by using other works by the same artist.

The writeup walks through not just how to obtain high-resolution images of paintings, but also how to address things like adjusting dynamic range and color grading to better match the intended VR experience. When it comes to aesthetic details like brightness and lighting, there is little that is objectively correct in technical terms, so guidance on what does and doesn’t work well, and how to tailor it to VR, is useful information.
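For the curious, here is a rough idea of what that kind of grading boils down to. This is a minimal Python sketch, not [Teadrinker]’s actual pipeline; the gamma, black-lift, and gain values are placeholders you would tune by eye in the headset:

```python
# Sketch: taming a museum scan's dynamic range before it goes into a VR scene.
# Values are illustrative; the Art Plunge writeup is the real reference.
import numpy as np
from PIL import Image

def grade_for_vr(path, gamma=0.9, black_lift=0.02, gain=0.95):
    img = np.asarray(Image.open(path).convert("RGB")) / 255.0
    img = img ** gamma                             # lift midtones for dim HMD optics
    img = black_lift + (1.0 - black_lift) * img    # avoid crushed blacks
    img = np.clip(img * gain, 0.0, 1.0)            # leave headroom so highlights don't clip
    return Image.fromarray((img * 255).astype(np.uint8))

graded = grade_for_vr("painting_scan.png")  # hypothetical input file
graded.save("painting_vr.png")
```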

Also intriguing is the attention paid to creating a sense of awe for viewers. The quality, the presentation, and even the choice of sounds all matter for creating something that not only inspires awe, but does so in a way that preserves and cultivates a relationship between the art and the viewer, staying true to the original. Giving a viewer a sense of presence, after all, takes more than stereoscopic 3D images or fancy light fields.

You can get a brief overview of the process in a video below, but if you have the time, we really do recommend reading the whole breakdown.

Continue reading “Stepping Inside Art In VR, And The Workflow Behind It”

Explore Neural Radiance Fields In Real-time, Even On A Phone

Neural Radiance Fields (NeRF) is a method of reconstructing complex 3D scenes from sparse 2D inputs, and the field has been growing by leaps and bounds. Viewing a reconstructed scene is still nontrivial, but there’s a new innovation on the block: SMERF is a browser-based method of enabling full 3D navigation of even large scenes, efficient enough to render in real time on phones and laptops.
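To give a sense of what is happening under the hood, here is a minimal numpy sketch of the volume-rendering quadrature at the heart of NeRF. In a real system the densities and colors come from the trained network; here they are random stand-ins:

```python
# Minimal sketch of NeRF-style volume rendering along one camera ray.
import numpy as np

def composite(densities, colors, deltas):
    """Alpha-composite samples along a ray into a single RGB value."""
    alphas = 1.0 - np.exp(-densities * deltas)       # opacity of each segment
    # transmittance: how much light survives to reach each sample
    trans = np.cumprod(np.concatenate(([1.0], 1.0 - alphas[:-1])))
    weights = trans * alphas
    return (weights[:, None] * colors).sum(axis=0)   # final ray color

# 64 stand-in samples along a ray (a trained network would supply these)
d = np.random.rand(64) * 5.0      # volume densities
c = np.random.rand(64, 3)         # per-sample RGB
dt = np.full(64, 0.03)            # distance between samples
print(composite(d, c, dt))
```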

Don’t miss the gallery of demos, which will run on anything from powerful desktops to smartphones. Notable is the distinct lack of the blurry, cloudy, or distorted patches that tend to appear in under-observed parts of a NeRF scene (such as indoor corners and ceilings). The technical paper explains SMERF’s approach in more detail.

NeRFs as a concept first hit the scene in 2020 and the rate of advancement has been simply astounding, especially compared to demos from just last year. Watch the short video summarizing SMERF below, and marvel at how it compares to other methods, some of which are themselves only months old.

Continue reading “Explore Neural Radiance Fields In Real-time, Even On A Phone”

Quest 3 VR Headset Can Capture 3D Video (Some Tampering Required)

The Quest 3 VR headset is an impressive piece of hardware. It is also not open, at least not in the way most of us understand the word. One consequence is that developers and users generally cannot directly access the feed of the two color cameras on the front of the headset. However, [Hugh Hou] shares a method of doing exactly that to capture 3D video on the Quest 3 for later playback on different devices.

The Quest 3 runs Android under the hood, and Developer Mode plus some ADB commands does the trick.

The process takes a few steps: enabling developer mode on the hardware, then using ADB (Android Debug Bridge) commands to switch on the necessary functionality, but it’s nothing the average curious hacker can’t handle. The directions are written out in the video’s description, along with a few handy links. (The video is embedded below just under the page break, but view it on YouTube to access the description and all the info in it.)
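For flavor, here is what that kind of workflow looks like wrapped in Python. To be clear, the property name and on-device path below are placeholders, not the real ones; those live in [Hugh]’s video description:

```python
# Sketch of the ADB side of the process. Developer Mode must already be on.
# The setprop key and capture path are PLACEHOLDERS, not the actual values.
import subprocess

def adb(*args):
    """Run one adb command against the connected headset."""
    return subprocess.run(["adb", *args], check=True,
                          capture_output=True, text=True)

adb("devices")                                                 # confirm the Quest 3 shows up
adb("shell", "setprop", "debug.example.capture.enable", "1")   # placeholder property
adb("pull", "/sdcard/Oculus/VideoShots/", "./captures/")       # typical recordings folder
```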

He also provides excellent guidance on practical matters like capturing stable shots, editing the videos, and injecting the metadata needed for optimal playback on different platforms, including hassle-free uploading to a service like YouTube. [Hugh] is no stranger to this kind of video and camera work, really knows his stuff, and it’s great to see someone provide detailed instructions.
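As a taste of what metadata injection can look like, here is one common approach: tagging a side-by-side file with the Matroska stereo flag via ffmpeg. This is a generic technique, not necessarily the one [Hugh] uses for every platform, so check his instructions for the specifics:

```python
# Tag a side-by-side 3D capture as stereoscopic without re-encoding.
# Assumes ffmpeg is installed and a file named quest3_sbs.mp4 exists.
import subprocess

subprocess.run([
    "ffmpeg", "-i", "quest3_sbs.mp4",
    "-c", "copy",                                  # no re-encode, just remux
    "-metadata:s:v:0", "stereo_mode=left_right",   # Matroska StereoMode flag
    "quest3_sbs_tagged.mkv",
], check=True)
```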

This kind of 3D video comes down to recording two different views, one for each eye. There’s another way to approach 3D video, however: light fields are also within reach of enterprising hackers, and while they need more hardware, they yield far more compelling results.

Continue reading “Quest 3 VR Headset Can Capture 3D Video (Some Tampering Required)”

3D Design With Text-Based AI

Generative AI is the new thing right now, proving to be a useful tool for professional programmers, writers of high school essays, and all kinds of users in between. It has also been shown to be effective in generating images, as DALL-E has demonstrated with its impressive image-creating abilities. It should surprise no one that this type of AI continues to make inroads into other areas, this time with a program from OpenAI called Shap-E which can generate 3D models.

Like most of OpenAI’s offerings, it takes plain language as its input and can generate relatively simple 3D models from that text. The examples given by OpenAI include some bizarre models produced from prompts such as a chair shaped like an avocado or an airplane that looks like a banana. It can generate textured meshes and neural radiance fields, each of which has advantages when it comes to available computing power, training methods, and other considerations. The 3D models it produces have a Super Nintendo-style feel to them, but we can only expect this technology to grow as quickly as other AI has been lately.

For those wondering about the name, it’s apparently a play on the 2D image generator DALL-E, itself a combination of the names of the famous robot WALL-E and the artist Salvador Dalí. The Shap-E program is available for anyone to use from this GitHub page. Even though this code comes from OpenAI itself, plenty speculate that the coming AI revolution will be driven largely by open-source efforts rather than by OpenAI or Google, though the future there remains hazy.
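Getting a first model out of it is pleasantly brief. The snippet below is condensed from the text-to-3D sample notebook in that repository, so treat it as a sketch rather than gospel; the API may have shifted since, and the sampler parameters are the notebook’s defaults:

```python
# Condensed from shap-e's sample_text_to_3d notebook; verify against the repo.
import torch
from shap_e.diffusion.sample import sample_latents
from shap_e.diffusion.gaussian_diffusion import diffusion_from_config
from shap_e.models.download import load_model, load_config

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
xm = load_model("transmitter", device=device)        # decodes latents to 3D
model = load_model("text300M", device=device)        # text-conditioned model
diffusion = diffusion_from_config(load_config("diffusion"))

latents = sample_latents(
    batch_size=1,
    model=model,
    diffusion=diffusion,
    guidance_scale=15.0,
    model_kwargs=dict(texts=["an airplane that looks like a banana"]),
    progress=True,
    clip_denoised=True,
    use_fp16=True,
    use_karras=True,
    karras_steps=64,
    sigma_min=1e-3,
    sigma_max=160,
    s_churn=0,
)
# latents can then be decoded to a textured mesh or NeRF via the xm model.
```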

Holograms Display Time With ESP32

Holograms and holographic imagery are typically viewed through the lens of science fiction, with perhaps the most iconic examples being Princess Leia’s message to Obi-Wan in Star Wars, or the holodecks from Star Trek. In reality, holograms have been around for a surprising amount of time, with early holographic images produced in the late 1940s. Modern holographic systems have plenty of uses beyond imagery, too, and the technology is accessible enough that it’s possible to build a holographic display around an ESP32.

In this build, [Fiberpunk] demonstrates the construction and operation of a holographic clock. The image is three-dimensional, somewhat transparent, and driven by an ESP32 microcontroller. The display is based around a beamsplitter prism which, when viewed from the front, is almost completely invisible to the viewer. The ESP32 is housed in a casing beneath this prism, and [Fiberpunk] has two firmware versions available for the device. The first is the clock, which displays an image along with the time; the second is more of a demonstration that can show more in-depth 3D videos using gcode models and also has motion-sensing controls.
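The clock half of such a firmware is simple enough to sketch in MicroPython. To be clear, this is not [Fiberpunk]’s code: the Wi-Fi credentials are placeholders and the print call stands in for whatever display driver sits under the beamsplitter:

```python
# MicroPython sketch: sync the RTC over NTP, then feed a time string to the
# display each second. Credentials and the draw call are placeholders.
import network, ntptime, time

wlan = network.WLAN(network.STA_IF)
wlan.active(True)
wlan.connect("your-ssid", "your-password")   # placeholder credentials
while not wlan.isconnected():
    time.sleep(0.5)
ntptime.settime()                            # set the RTC from an NTP server

while True:
    t = time.localtime()
    stamp = "{:02d}:{:02d}:{:02d}".format(t[3], t[4], t[5])
    print(stamp)                             # replace with a draw call to the display
    time.sleep(1)
```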

For anyone interested in holography, a platform like this might make an excellent entry point to explore, and with the source for this build available, getting started is even easier. It’s almost certainly less expensive than these 3D printers that can turn out custom holographic images, and it has the added benefit of being customizable and programmable as well.

Continue reading “Holograms Display Time With ESP32”

A Look At Sega’s 8-Bit 3D Glasses

From around 2012 onwards, there was a 3D viewing and VR renaissance in the entertainment industry. That hardware has grown in popularity, even if it’s not yet mainstream. However, 3D tech goes back much further, as [Nicole] shows us with a look at Sega’s ancient 8-bit 3D glasses [via Adafruit].

[Nicole]’s pair of Sega shutter glasses are battered and bruised, but she notes more modern versions are available using the same basic idea. The technology is based on liquid-crystal shutters, one for each eye. By showing the left and right eyes different images, it’s possible to create a 3D-vision effect even with very limited display hardware.

The glasses can be plugged directly into a Japanese Sega Master System, which hails from the mid-1980s. The console sends out AC signals to trigger the liquid-crystal shutters via a humble 3.5mm TRS jack. Games like Space Harrier 3D, which were written to use the glasses, effectively run at a half-speed refresh rate, because the 60 Hz NTSC or 50 Hz PAL refresh rate is split in half to serve each eye. Unfortunately, the glasses don’t work on modern LCD screens, as their inherent display lag throws off the timing of the pulses the console sends to the glasses.
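The drive scheme itself is easy to illustrate. Here is a hedged MicroPython sketch of the alternating-shutter timing (each eye open for one field, so 30 Hz per eye from 60 Hz NTSC); note that real LC shutters want an AC waveform across the cells rather than the bare DC toggle shown here, so treat this as a demonstration of the timing only:

```python
# MicroPython sketch: alternate left/right shutters at the video field rate.
# A bare GPIO toggle is NOT a proper AC drive for real LC shutter cells.
from machine import Pin
import time

left = Pin(25, Pin.OUT)    # placeholder pin numbers
right = Pin(26, Pin.OUT)
FIELD_US = 16_667          # one 60 Hz NTSC field, in microseconds

while True:
    left.value(1); right.value(0)   # left eye sees this field
    time.sleep_us(FIELD_US)
    left.value(0); right.value(1)   # right eye sees the next
    time.sleep_us(FIELD_US)
```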

It’s a neat look at an ancient bit of display tech that had a small resurgence with 3D TVs in the 2010s. By and large, it seems humans just aren’t that into 3D, at least short of a full VR experience. Meanwhile, if you’re wondering what 8-bit 3D looked like, we’ve got a 3D video (!) after the break.

Continue reading “A Look At Sega’s 8-Bit 3D Glasses”

Building The World’s Largest Nintendo 3DS

While the Nintendo 3DS was capable of fairly impressive graphics (at least for a portable system) back in its heyday, there’s little challenge in emulating the now-discontinued handheld on a modern computer or even a smartphone. One thing that’s still difficult to replicate, though, is the stereoscopic 3D display the system was named for. That didn’t stop [BigRig Creates] from building this giant 3DS with almost all of the features of the original console present.

The main hurdle is that the stereoscopic effect Nintendo used to let the 3DS display 3D graphics without special glasses doesn’t work well at long distances, and doesn’t work at all for more than one viewer. To get around those limitations, this build uses a 3D TV with active glasses. The TV is mounted to a bar stool with the help of some counterweights, and a second touch-sensitive screen, courtesy of McDonald’s, makes up the other display.

The computer driving this massive handheld console runs Citra, and handles the scaled-up controls as well. To recreate the system’s analog touch pad, a custom joystick tipped with conductive filament interacts with a smartphone hidden inside the case, and opposing rubber bands pull the stick back to center when it’s not being pushed.

Plenty of 3DS games run faithfully on this arcade-sized replica, and since Citra supports various 3D displays, graphics upscaling, and the touchscreen interface, almost everything from the original console is reproduced here. A few games don’t work exactly right, but all in all it’s a remarkable build and, as far as we can tell, the largest 3DS in the world. Don’t forget that even though this console is out of production, there’s still a healthy homebrew scene to take part in.

Continue reading “Building The World’s Largest Nintendo 3DS”