
NERF – Neural Radiance Fields

Making narrative film just keeps getting easier. What once took a studio is now within reach of the dedicated hobbyist. And Neural Radiance Fields are making it a dramatic step easier. The guys from [Corridor Crew] give an early peek.

Filming and editing have reached the cell phone and laptop stage of easy. But sets, costumes, actors, lighting, and so on haven’t gotten substantially cheaper, and making your own short film is still a major project.

Enter 3D graphics. With a good gaming laptop, anybody can make a photorealistic scene in Blender and place live-action actors in it. But it takes a lot of both skill and work. And often the scene you’re recreating exists as a real place, but you can’t get permission to film there or haul actors, props, crew, and so on to the set.

A new technology, NERF, for “NEural Radiance Fields”, has cut those headaches down considerably. Instead of building a 3D model of the scene and using that to predict what reaches the camera, the software starts with video of the scene and machine-learns a “radiance field” – a model of how light is reflected by the scene. Continue reading “NERF – Neural Radiance Fields”
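To get a feel for what a radiance field actually computes, here is a minimal, purely illustrative numpy sketch. The `toy_radiance_field` function is a made-up stand-in for the trained network (a real NeRF learns this mapping from the footage); the rendering loop is the standard volume-rendering blend that NeRF-style methods use along each camera ray.

```python
# Minimal sketch of the radiance-field idea (not any particular implementation):
# a function maps (3D position, view direction) to (color, density), and a ray
# is rendered by blending samples along it, weighted by how much light survives.
import numpy as np

def toy_radiance_field(position, direction):
    """Stand-in for the trained network: returns (rgb, density).
    A real NeRF learns this mapping from the input video frames."""
    rgb = 0.5 + 0.5 * np.sin(position)           # fake, view-independent color
    density = np.exp(-np.linalg.norm(position))  # fake density falling off with distance
    return rgb, density

def render_ray(origin, direction, near=0.0, far=4.0, n_samples=64):
    """Classic volume-rendering quadrature used by NeRF-style methods."""
    t = np.linspace(near, far, n_samples)
    delta = np.append(np.diff(t), 1e10)          # spacing between samples
    color = np.zeros(3)
    transmittance = 1.0                          # fraction of light not yet absorbed
    for ti, di in zip(t, delta):
        rgb, sigma = toy_radiance_field(origin + ti * direction, direction)
        alpha = 1.0 - np.exp(-sigma * di)        # opacity of this ray segment
        color += transmittance * alpha * rgb
        transmittance *= (1.0 - alpha)
    return color

print(render_ray(np.zeros(3), np.array([0.0, 0.0, 1.0])))
```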

NeRF: Shoot Photos, Not Foam Darts, To See Around Corners

Readers are likely familiar with photogrammetry, a method of creating 3D geometry from a series of 2D photos taken of an object or scene. To pull it off you need a lot of pictures, hundreds or even thousands, all taken from slightly different perspectives. Unfortunately, the technique suffers where there are significant occlusions caused by overlapping elements, and shiny or reflective surfaces that appear to be different colors in each photo can also cause problems.
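For a refresher on what photogrammetry is doing under the hood, here is a hedged numpy sketch of linear triangulation: given the same point spotted in two photos with known camera matrices, recover its 3D position. The camera parameters below are invented for the example; real pipelines estimate them from the photos themselves.

```python
# Toy linear triangulation (DLT): the geometric core of photogrammetry.
import numpy as np

def triangulate(P1, P2, x1, x2):
    """P1, P2: 3x4 projection matrices; x1, x2: (u, v) pixel coordinates.
    Solves A X = 0 in a least-squares sense for the homogeneous 3D point X."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]                          # back to Euclidean coordinates

# Two toy cameras one unit apart, both looking down +z (made-up intrinsics).
K = np.array([[500, 0, 320], [0, 500, 240], [0, 0, 1]], float)
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

point = np.array([0.2, -0.1, 3.0, 1.0])          # ground-truth 3D point
x1 = (P1 @ point)[:2] / (P1 @ point)[2]
x2 = (P2 @ point)[:2] / (P2 @ point)[2]
print(triangulate(P1, P2, x1, x2))               # ~ [0.2, -0.1, 3.0]
```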

But new research from NVIDIA marries photogrammetry with artificial intelligence to create what the developers are calling an Instant Neural Radiance Field (NeRF). Not only does their method require far fewer images, as few as a few dozen according to NVIDIA, but the AI is able to better cope with the pain points of traditional photogrammetry: filling in the gaps of the occluded areas and leveraging reflections to create more realistic 3D scenes that reconstruct how shiny materials looked in their original environment.
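The “Instant” part comes largely from the multiresolution hash encoding described in NVIDIA’s paper: instead of feeding raw coordinates to a big network, points are looked up in small hash tables of learned features at several grid resolutions and a tiny network does the rest. The numpy sketch below is a simplified illustration of that encoding idea only; the table sizes, constants, and hash are placeholders, not NVIDIA’s actual implementation.

```python
# Simplified multiresolution hash encoding (illustrative, not the real code).
import numpy as np

N_LEVELS, TABLE_SIZE, N_FEATURES = 4, 2**14, 2
rng = np.random.default_rng(0)
# One small table of trainable feature vectors per resolution level.
tables = rng.normal(0.0, 1e-2, size=(N_LEVELS, TABLE_SIZE, N_FEATURES))

def hash_corner(ijk):
    """Spatial hash of integer grid coordinates into a table index."""
    h = 0
    for c, p in zip(ijk, (1, 2654435761, 805459861)):
        h ^= int(c) * p
    return (h & 0xFFFFFFFFFFFFFFFF) % TABLE_SIZE

def encode(x):
    """Map a point in [0,1]^3 to a concatenated multi-level feature vector."""
    features = []
    for level in range(N_LEVELS):
        res = 16 * 2 ** level                    # coarser to finer grids
        pos = x * res
        base = np.floor(pos).astype(np.int64)
        frac = pos - base
        feat = np.zeros(N_FEATURES)
        for corner in range(8):                  # trilinear blend of the 8 cell corners
            offset = np.array([(corner >> d) & 1 for d in range(3)])
            weight = np.prod(np.where(offset, frac, 1.0 - frac))
            feat += weight * tables[level, hash_corner(base + offset)]
        features.append(feat)
    return np.concatenate(features)              # this, not raw xyz, feeds a tiny MLP

print(encode(np.array([0.3, 0.7, 0.1])).shape)   # (N_LEVELS * N_FEATURES,)
```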


If you’ve got a CUDA-compatible NVIDIA graphics card in your machine, you can give the technique a shot right now. The tutorial video after the break will walk you through setup and some of the basics, showing how the 3D reconstruction is progressively refined over just a couple of minutes and then can be explored like a scene in a game engine. The Instant-NeRF tools include camera-path keyframing for exporting animations with higher quality results than the real-time previews. The technique seems better suited for outputting views and animations than models for 3D printing, though both are possible.
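If you want to skim the workflow before committing to the video, it boils down to two steps: recover camera poses for your photos, then hand them to the trainer/viewer. Below is a hedged Python wrapper around that process; the script name, binary name, and flags are recalled from the instant-ngp repository and may have changed between versions, so treat them as assumptions and check the project’s documentation.

```python
# Hedged sketch of the Instant-NeRF (instant-ngp) workflow, for illustration only.
import subprocess

IMAGES = "data/my_scene/images"   # hypothetical folder of photos or video frames

# Step 1 (assumed): estimate camera poses with COLMAP and write the
# transforms.json file the trainer expects, via the bundled helper script.
subprocess.run([
    "python", "scripts/colmap2nerf.py",
    "--run_colmap",               # let the script call COLMAP itself
    "--images", IMAGES,
    "--aabb_scale", "16",         # how far outside the unit cube to model
], check=True)

# Step 2 (assumed): launch the interactive viewer/trainer on the folder that now
# holds transforms.json, then refine, keyframe a camera path, and export renders.
subprocess.run(["./instant-ngp", "data/my_scene"], check=True)
```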

Don’t have the latest and greatest NVIDIA silicon? Don’t worry, you can still create some impressive 3D scans using “old school” photogrammetry — all you really need is a camera and a motorized turntable.

Continue reading “NeRF: Shoot Photos, Not Foam Darts, To See Around Corners”


3D Print A Piece Of Nintendo History Before The Real One Is Gone

Nintendo wasn’t always in the video game business. Long before Mario, the company was one of the foremost producers of Hanafuda playing cards in Japan. From 1930 until 1959, Nintendo ran its printing business from a four-story art deco-style building that featured distinctive plaques at the front entrance. We now have a chance to print those former Nintendo HQ plaques at home thanks to [Mr. Talida], who shared some 3D models on Twitter. [Mr. Talida], a self-described “retro video game archivist”, recreated the plaques via photogrammetry from a number of reference photos he took during a visit to the Kyoto site late last year.

These 3D models come at a crucial time, as the old Nintendo HQ building, which sat dormant for years, is set to be turned into a boutique hotel next year. According to JPC, the hotel will feature twenty rooms, a restaurant, and a gym, and is expected to be completed by summer 2021 (although that estimate was from the “before” times). The renovation is expected to retain as much of the original exterior’s appearance as possible, but the Nintendo plaques almost assuredly will not be included. For a first-person tour of the former Nintendo headquarters building, check out the video from the world2529 YouTube channel below.

It is encouraging to see examples of this DIY style of historical preservation. Many companies have proven themselves to be less-than-stellar stewards of their own history. Though if his Twitter timeline is any indication, [Mr. Talida] is up to something further with this photogrammetry project. A video export exhibiting a fully textured 3D model of the old Nintendo headquarters’ entrance was published recently, along with the words, “What have I done.”

Continue reading “3D Print A Piece Of Nintendo History Before The Real One Is Gone”

Virtual Reality Gets Real With 3 Kinect Cameras

No, that isn’t a scene from a horror movie up there; it’s [Oliver Kreylos’] avatar in a 3D office environment. If he looks a bit strange, it’s because he’s wearing an Oculus Rift, and his image is being stitched together from 3 Microsoft Kinect cameras.

[Oliver] has created a 3D environment which is incredibly realistic, at least to the wearer. He believes the secret is in the low latency of the entire system. When coupled with a good 3D environment, like the office shown above, the mind is tricked into believing it is really in the room. [Oliver] mentions that he finds himself subconsciously moving to avoid bumping into a table leg that he knows isn’t there. In [Oliver’s] words, “It circumnavigates the uncanny valley.”

Instead of pulling skeleton data from the 3 Kinect cameras, [Oliver] is using video and depth data. He’s stitching and processing this data on an i7 Linux box with an NVIDIA GeForce GTX 770 video card. Powerful hardware for sure, but not the cutting-edge monster rig one might expect. [Oliver] also documented his software stack. He’s using the Vrui VR Toolkit, the Kinect 3D Video Capture Project, and the Collaboration Infrastructure.
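The core step in merging several Kinect feeds is back-projecting each depth pixel into 3D using the camera intrinsics and then moving it into a shared world frame so the point clouds line up. Here is a minimal numpy sketch of that step (not [Oliver]’s code; the intrinsics below are ballpark Kinect-class numbers, and a real rig gets them from calibration).

```python
# Depth image -> 3D point cloud in a shared world frame (toy example).
import numpy as np

# Rough focal lengths and principal point, in pixels; real values come from calibration.
FX, FY, CX, CY = 580.0, 580.0, 320.0, 240.0

def depth_to_points(depth_m, R, t):
    """depth_m: HxW depth image in meters. R (3x3) and t (3,) place this
    camera in the shared world frame."""
    h, w = depth_m.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth_m
    x = (u - CX) * z / FX                        # pinhole back-projection
    y = (v - CY) * z / FY
    pts_cam = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    pts_cam = pts_cam[pts_cam[:, 2] > 0]         # drop pixels with no depth reading
    return pts_cam @ R.T + t                     # into the common world frame

# Toy example: a flat wall 2 m away seen by a camera sitting at the world origin.
depth = np.full((480, 640), 2.0)
cloud = depth_to_points(depth, np.eye(3), np.zeros(3))
print(cloud.shape)                               # one 3D point per valid pixel
```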

We can’t wait to see what [Oliver] does when he gets his hands on the Kinect One (and some good Linux drivers for it).

Continue reading “Virtual Reality Gets Real With 3 Kinect Cameras”

3-Sweep: Turning 2D Images Into 3D Models

As 3D printing continues to grow, people are developing more and more ways to get 3D models. From hardware-based scanners like the Microsoft Kinect to software-based tools like 123D Catch, there are a lot of ways to create a 3D model from a series of images. But what if you could make a 3D model out of a single image? Sound crazy? Maybe not. A team of researchers has created 3-Sweep, an interactive technique for turning objects in 2D images into 3D models that can be manipulated.

To be clear, the recognition of 3D components within a single image is a bit out of reach for computer algorithms alone. But by combining the cognitive abilities of a person with the computational accuracy of a computer, the researchers have been able to create a very simple tool for extracting 3D models. This is done by outlining the shape, similar to how one might model in a CAD package; once the outline is complete, the algorithm takes over and creates a model.
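To get a sense of what the computer does with such an outline, here is a hedged numpy sketch of the underlying geometric primitive: sweeping a user-traced 2D profile around an axis to generate 3D surface points. The actual 3-Sweep system goes much further (fitting sweeps to image edges, handling cuboids and curved axes); this is only the simplest sweep, with a made-up profile standing in for what a user might trace.

```python
# Sweep a 2D profile (radius as a function of height) around the z axis.
import numpy as np

def sweep_profile(radii, heights, n_segments=32):
    """Revolve a profile around the z axis, returning an
    (len(radii) * n_segments, 3) array of surface vertices."""
    angles = np.linspace(0, 2 * np.pi, n_segments, endpoint=False)
    verts = []
    for r, z in zip(radii, heights):
        ring = np.stack([r * np.cos(angles), r * np.sin(angles),
                         np.full(n_segments, z)], axis=1)
        verts.append(ring)
    return np.vstack(verts)

# Toy vase-like profile a user might have traced over a photo.
heights = np.linspace(0.0, 1.0, 20)
radii = 0.3 + 0.1 * np.sin(3 * heights)
print(sweep_profile(radii, heights).shape)       # (20 * 32, 3)
```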

The software debuted at SIGGRAPH Asia 2013 and has caused quite a stir on the internet. Watch the fascinating video that demonstrates the software process after the break!

Continue reading “3-Sweep: Turning 2D Images Into 3D Models”