Explore Neural Radiance Fields In Real-time, Even On A Phone

Neural Radiance Fields (NeRF) is a method of reconstructing complex 3D scenes from sparse 2D inputs, and the field has been growing by leaps and bounds. Viewing a reconstructed scene is still nontrivial, but there’s a new innovation on the block: SMERF is a browser-based method of enabling full 3D navigation of even large scenes, efficient enough to render in real time on phones and laptops.
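
For those new to the idea, the core of a NeRF is surprisingly compact: a model maps 3D positions (and viewing directions) to density and color, and a pixel's color comes from sampling that model along a camera ray and alpha-compositing the results. Here's a minimal back-of-the-napkin sketch of that rendering step; the `field` function and `render_ray` name are hypothetical stand-ins rather than SMERF's actual interface, and real renderers process huge batches of rays on the GPU.

```python
# Minimal sketch of NeRF-style volume rendering along one ray.
# Assumes a trained `field(points, dirs) -> (density, rgb)` model (a hypothetical
# placeholder, not SMERF's real API); density is (n,), rgb is (n, 3).
import numpy as np

def render_ray(field, origin, direction, near=0.1, far=6.0, n_samples=128):
    # Sample points along the ray between the near and far planes.
    t = np.linspace(near, far, n_samples)
    points = origin + t[:, None] * direction            # (n_samples, 3)
    dirs = np.broadcast_to(direction, points.shape)     # viewing direction per sample

    density, rgb = field(points, dirs)

    # Convert densities to per-sample opacities, then alpha-composite front to back.
    delta = np.diff(t, append=t[-1] + (t[-1] - t[-2]))  # spacing between samples
    alpha = 1.0 - np.exp(-density * delta)              # opacity of each segment
    transmittance = np.cumprod(np.concatenate([[1.0], 1.0 - alpha[:-1]]))
    weights = transmittance * alpha                     # contribution of each sample

    return (weights[:, None] * rgb).sum(axis=0)         # final pixel color
```

SMERF's contribution, as we understand it, is making that per-ray evaluation cheap enough to run in a web browser by distilling a large model into a grid of small submodels, each specialized to a region of the scene.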

Don’t miss the gallery of demos, which will run on anything from powerful desktops to smartphones. Notable is the distinct lack of the blurry, cloudy, or distorted artifacts that tend to appear in under-observed areas of a NeRF scene (such as indoor corners and ceilings). The technical paper explains SMERF’s approach in more detail.

NeRFs as a concept first hit the scene in 2020, and the rate of advancement since has been simply astounding, especially compared to demos from just last year. Watch the short video summarizing SMERF below, and marvel at how it compares to other methods, some of which are themselves only months old.


Synthesizing 360-degree Views From Single Source Images

ZeroNVS is one of those research projects that is rather more impressive than it may look at first glance. On one hand, the 3D reconstructions — we urge you to click that first link to see them — look a bit grainy and imperfect. But on the other hand, each was reconstructed using a single still image as its only input.

Most results look great, but some — like this bike visible through a park bench — come out a bit strange. A valiant effort for a single-image input, all things considered.

How is this done? It’s NeRF (neural radiance fields), which leverages machine learning, but with yet another new twist. Existing methods mainly focus on single objects against masked-out backgrounds, but this new approach works on a variety of complex, in-the-wild images without the need to train new models.

There are a ton of sample outputs on the project summary page that are worth a browse if you find this sort of thing at all interesting. Some of the 360-degree reconstructions look rough, some are impressive, and some are a bit amusing. For example, indoor shots tend to reconstruct rooms that look good but lack doorways.

There is a research paper for those seeking additional details and a GitHub repository for the code, but the implementation requires some significant hardware.

High Quality 3D Scene Generation From 2D Source, In Realtime

Here’s some fascinating work presented at SIGGRAPH 2023: a method for radiance field rendering using a novel technique called Gaussian Splatting. What does that mean? It means synthesizing a 3D scene from 2D images, in high quality and in real time, as the short animation above demonstrates.

Neural Radiance Fields (NeRFs) are a method of leveraging machine learning to, in a way, do what photogrammetry does: synthesize complex scenes and views based on input images. But NeRFs work in a fraction of the time, and require only a fraction of the source material. There are different ways to go about this, and unsurprisingly there tends to be a clear speed-versus-quality tradeoff. But as the video accompanying this new work seems to show, clever techniques can deliver the best of both worlds.
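
To give a rough sense of why this is so much faster than classic NeRF ray-marching: instead of querying a neural network at hundreds of points per ray, Gaussian Splatting stores the scene as millions of 3D Gaussians and renders by projecting (“splatting”) them onto the image and alpha-blending them front to back. The toy sketch below shows only the per-pixel compositing idea; the names and data layout are ours, and the real implementation does all of this in a tiled GPU rasterizer.

```python
# Toy sketch of the per-pixel compositing step in Gaussian Splatting.
# `splats` is assumed to be a list of (depth, rgb, alpha) tuples for the Gaussians
# overlapping this pixel (a hypothetical structure, not the paper's API); the real
# renderer handles every pixel at once in a tiled CUDA rasterizer.

def composite_pixel(splats):
    # Sort the projected Gaussians front to back by depth.
    splats = sorted(splats, key=lambda s: s[0])

    color = [0.0, 0.0, 0.0]
    transmittance = 1.0  # how much light still reaches the current splat

    for _, rgb, alpha in splats:
        weight = transmittance * alpha
        color = [c + weight * ch for c, ch in zip(color, rgb)]
        transmittance *= (1.0 - alpha)
        if transmittance < 1e-4:      # early exit once the pixel is effectively opaque
            break
    return color
```

Because this is rasterization-style blending rather than dense sampling through a network, it maps very naturally onto GPU hardware, which is where the real-time performance comes from.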

A short video summary is embedded just below the page break. Interested in deeper details? The research PDF is here. The amount of development this field has seen is nothing short of staggering, and the results are certainly higher in quality than what was state-of-the-art for NeRFs only a year ago.


3D Design With Text-Based AI

Generative AI is the new thing right now, proving to be a useful tool for everyone from professional programmers to writers of high school essays, and for all kinds of applications in between. It has also been shown to be effective at generating images, as the DALL-E program has demonstrated with its impressive image-creating abilities. It should surprise no one, then, that this type of AI continues to make inroads into other areas, this time with a program from OpenAI called Shap-E which can generate 3D models.

Like most of OpenAI’s offerings, this takes plain language as its input and can generate relatively simple 3D models from that text. The examples given by OpenAI include some bizarre models produced from text prompts such as a chair shaped like an avocado or an airplane that looks like a banana. It can generate either textured meshes or neural radiance fields, each of which has its own advantages when it comes to available computing power, training methods, and other considerations. The 3D models it generates have a Super Nintendo-style feel to them, but we can only expect this technology to improve rapidly, as other AI tools have been doing lately.
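
If you'd like to try it yourself, the GitHub repository ships example notebooks, and the text-to-3D path boils down to something like the sketch below. This is condensed from the repo's sample notebook as we understand it, so treat the exact function names and parameters as approximate; they may shift between versions, and you'll want a CUDA-capable GPU with a decent amount of memory.

```python
# Rough text-to-3D sketch based on the shap-e repo's example notebook;
# exact signatures may differ between versions of the code.
import torch
from shap_e.diffusion.sample import sample_latents
from shap_e.diffusion.gaussian_diffusion import diffusion_from_config
from shap_e.models.download import load_model, load_config
from shap_e.util.notebooks import decode_latent_mesh

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

# Load the decoder ("transmitter") and the text-conditional latent diffusion model.
xm = load_model('transmitter', device=device)
model = load_model('text300M', device=device)
diffusion = diffusion_from_config(load_config('diffusion'))

# Sample a 3D latent from a plain-language prompt.
latents = sample_latents(
    batch_size=1,
    model=model,
    diffusion=diffusion,
    guidance_scale=15.0,
    model_kwargs=dict(texts=["a chair that looks like an avocado"]),
    progress=True,
    clip_denoised=True,
    use_fp16=True,
    use_karras=True,
    karras_steps=64,
    sigma_min=1e-3,
    sigma_max=160,
    s_churn=0,
)

# Decode the latent into a textured mesh and save it for your slicer or renderer.
mesh = decode_latent_mesh(xm, latents[0]).tri_mesh()
with open('avocado_chair.ply', 'wb') as f:
    mesh.write_ply(f)
```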

For those wondering about the name, it’s apparently a play on the 2D image generator DALL-E, which is itself a mashup of the names of the famous robot WALL-E and the famous artist Salvador Dalí. The Shap-E program is available for anyone to use from this GitHub page. Even though this code comes from OpenAI itself, plenty are speculating that the coming AI revolution will be driven largely by open-source efforts rather than by OpenAI or Google, though on that front the future remains somewhat hazy.