NeRF: Shoot Photos, Not Foam Darts, To See Around Corners

Readers are likely familiar with photogrammetry, a method of creating 3D geometry from a series of 2D photos taken of an object or scene. To pull it off you need a lot of pictures, hundreds or even thousands, all taken from slightly different perspectives. Unfortunately the technique suffers where there are significant occlusions caused by overlapping elements, and shiny or reflective surfaces that appear to be different colors in each photo can also cause problems.

But new research from NVIDIA marries photogrammetry with artificial intelligence to create what the developers are calling an Instant Neural Radiance Field (NeRF). Not only does their method require far fewer images, as few as a few dozen according to NVIDIA, but the AI is better able to cope with the pain points of traditional photogrammetry: filling in occluded areas and leveraging reflections to create more realistic 3D scenes that reconstruct how shiny materials looked in their original environment.
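
At its core, a NeRF stores the scene not as a mesh but as a learned function that can be queried at any 3D point for color and density; images are produced by sampling that function along camera rays and compositing the results, and training adjusts the field until rendered rays reproduce the input photos. Here's a rough, minimal sketch of that rendering step in plain Python. It is not NVIDIA's Instant-NeRF (which pairs a multiresolution hash encoding with a small CUDA-accelerated network), and the radiance_field() function below is a made-up placeholder standing in for the trained network:

```python
# Toy illustration of the core NeRF idea: sample points along a camera ray,
# query a learned radiance field for (color, density) at each point, and
# alpha-composite the samples into a single pixel color. radiance_field() is
# a made-up placeholder; in a real NeRF it would be the trained network.
import numpy as np

def radiance_field(points):
    """Stand-in for the learned field: RGB in [0, 1] and non-negative density."""
    rgb = 0.5 + 0.5 * np.sin(points)                         # fake colors
    sigma = np.exp(-np.linalg.norm(points - 2.0, axis=-1))   # fake density bump near (2, 2, 2)
    return rgb, sigma

def render_ray(origin, direction, near=0.0, far=6.0, n_samples=64):
    """Numerically integrate the volume rendering equation along one ray."""
    t = np.linspace(near, far, n_samples)
    points = origin + t[:, None] * direction                 # (n_samples, 3) sample positions
    rgb, sigma = radiance_field(points)
    delta = np.diff(t, append=far)                           # spacing between samples
    alpha = 1.0 - np.exp(-sigma * delta)                     # opacity of each segment
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alpha[:-1]]))  # light surviving to each sample
    weights = trans * alpha
    return (weights[:, None] * rgb).sum(axis=0)              # composited pixel color

pixel = render_ray(np.zeros(3), np.array([0.0, 0.0, 1.0]))
print(pixel)
```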

If you’ve got a CUDA-compatible NVIDIA graphics card in your machine, you can give the technique a shot right now. The tutorial video after the break will walk you through setup and some of the basics, showing how the 3D reconstruction is progressively refined over just a couple of minutes and then can be explored like a scene in a game engine. The Instant-NeRF tools include camera-path keyframing for exporting animations with higher quality results than the real-time previews. The technique seems better suited for outputting views and animations than models for 3D printing, though both are possible.
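
The camera-path keyframing boils down to a simple idea: save a handful of camera poses you like, interpolate between them, and render one frame per step. A minimal sketch of that loop might look like the following, where render_view() is a hypothetical stand-in for whatever actually renders the NeRF from a given viewpoint and the keyframe values are arbitrary; this is not taken from NVIDIA's tool:

```python
# Sketch of camera-path keyframing: smoothly blend between a handful of saved
# camera poses and render one frame per step. render_view() is a hypothetical
# stand-in for the NeRF renderer; the keyframe values here are arbitrary.
import numpy as np

keyframes = [  # (camera position, look-at target) pairs chosen by the user
    (np.array([3.0, 0.0, 1.0]), np.array([0.0, 0.0, 0.5])),
    (np.array([0.0, 3.0, 2.0]), np.array([0.0, 0.0, 0.5])),
    (np.array([-3.0, 0.0, 1.0]), np.array([0.0, 0.0, 0.5])),
]

def render_view(position, target):
    """Placeholder: in practice this would call the NeRF renderer and save a frame."""
    print(f"rendering frame from {position.round(2)} toward {target.round(2)}")

frames_per_segment = 30  # roughly one second per segment at 30 fps
for (p0, t0), (p1, t1) in zip(keyframes, keyframes[1:]):
    for s in np.linspace(0.0, 1.0, frames_per_segment, endpoint=False):
        position = (1 - s) * p0 + s * p1   # linear blend between neighboring keyframes
        target = (1 - s) * t0 + s * t1
        render_view(position, target)
```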

Don’t have the latest and greatest NVIDIA silicon? Don’t worry, you can still create some impressive 3D scans using “old school” photogrammetry — all you really need is a camera and a motorized turntable.

NVIDIA Releases Drivers With Openness Flavor

This year, we’ve already seen sizeable leaks of NVIDIA source code and a release of open-source drivers for NVIDIA Tegra. It seems NVIDIA decided to amp it up, and has just released open-source GPU kernel modules for Linux. The GitHub repository, named open-gpu-kernel-modules, has people rejoicing, and we are already testing the code out, making memes, and speculating about the future. The driver is currently described as experimental and only “production-ready” for datacenter cards – but you can already try it out!

The Driver’s Present State

Of course, there’s nuance. This is new code, unrelated to the well-known proprietary driver, and it will only work on cards from the RTX 2000 and Quadro RTX series onward (aka Turing and newer). The good news is that performance is comparable to the closed-source driver, even at this point! A peculiarity of this project is that a good portion of the features that AMD and Intel drivers implement in the Linux kernel are instead provided by a binary blob running inside the GPU. This blob executes on the GSP, a RISC-V core that’s only present on Turing and newer GPUs – hence the series limitation. Now, every GPU loads a piece of firmware, but this one’s hefty!

Caveats aside, this driver already provides more coherent integration into the Linux kernel, with massive benefits that will only grow going forward. Not everything’s open yet – NVIDIA’s userspace libraries and its OpenGL, Vulkan, OpenCL, and CUDA drivers remain closed, for now. The same goes for the old proprietary driver, which, I’d guess, will be left to rot – fitting, as “leaving to rot” is what that driver has done to generations of old but perfectly usable cards.

NVIDIA Unveils Jetson AGX Orin Developer Kit

When you think of high-performance computing powered by NVIDIA hardware, you probably think of applications leveraging the capabilities of the company’s graphics cards. In many cases, you’d be right. But naturally there are situations where the traditional combination of x86 computer and bolt-on GPU simply isn’t going to cut it; try packing a modern gaming computer onto a quadcopter and let us know how it goes.

For these so-called “edge computing” situations, NVIDIA offers the Jetson line of ARM single-board computers, which include a scaled-down GPU that gives them vastly better machine learning performance than something like the Raspberry Pi. Today during their annual GPU Technology Conference (GTC), NVIDIA announced the immediate availability of the Jetson AGX Orin Developer Kit, which the company promises can deliver “server-class AI performance” in a package small enough for use in IoT or robotics.

As with the earlier Jetsons, the palm-sized development kit acts as a sort of breakout board for the far smaller module slotted into it. This gives developers access to the full suite of the connectivity and I/O options offered by the Jetson module in a desktop-friendly form that makes prototyping the software side of things much easier. Once the code is working as intended, you can simply pop the Jetson module out of the development kit and install it in your final hardware.

NVIDIA is offering the Orin module in a range of configurations, depending on your computational needs and budget. At the high end is the AGX Orin 64 GB at $1,599 USD, which offers a 12-core ARM Cortex-A78AE processor, 32 GB of LPDDR5 RAM, 64 GB of onboard flash, and an Ampere GPU with 2048 CUDA cores and 64 Tensor cores, which all told enables it to perform an incredible 275 trillion operations per second (TOPS).

At the other end of the spectrum is the Orin NX 8 GB, a SO-DIMM module that delivers 70 TOPS for $399. It’s worth noting that even this low-end flavor of the Orin is capable of more than double the operations per second of 2018’s Jetson AGX Xavier, which until now was the most powerful entry in the product line.

The Jetson AGX Orin Developer Kit is available for $1,999 USD, and includes the AGX Orin 64 GB module. Interestingly, NVIDIA says the onboard software is able to emulate any of the lower-tier modules, so you won’t necessarily have to swap out the internal module if your final hardware will end up using one of the cheaper versions. Of course, the inverse is that even folks who only planned on using the more budget-friendly units will either have to shell out for an expensive dev kit or try to spin their own breakout board.

While the $50 USD Jetson Nano is far more likely to be on the workbench of the average Hackaday reader, we have to admit that the specs of these new Orin modules are very exciting. Then again, we’ve covered several projects that used the previously top-of-the-line Jetson Xavier, so we don’t doubt one of you is already reaching for their wallet to pick up this latest entry into NVIDIA’s line of diminutive powerhouses.

This Week In Security: Ukraine, Nvidia, And Conti

The geopolitics surrounding the invasion of Ukraine are outside the scope of this column, but the cybersecurity ramifications are certainly fitting fodder. The challenge here is that almost everything of note that has happened in the last week has been initially linked to the conflict, but in several cases, the reported link hasn’t withstood scrutiny. We do know that the Vice Prime Minister of Ukraine put out a call on Twitter for “cyber specialists” to go after a list of Russian businesses and state agencies. Many of the sites on the list did go down for some time, the digital equivalent of tearing down a poster. In response, the largest Russian ISP stopped announcing BGP routes to some of the targeted sites, effectively ending any attacks against them from the outside.

A smattering of similar events has unfolded over the last week, like electric car charging stations in Russia refusing to charge and instead displaying a political message, “GLORY TO UKRAINE”. Not all the attacks have been so trivial. Researchers at Eset have identified HermeticWiper, a bit of malware with no purpose other than to destroy data. It has been found on hundreds of high-value targets, likely causing much damage. It is likely the same malware that Microsoft has dubbed FoxBlade; Microsoft has published details about its response.

AI Generating Paintings Off To A Flying Art

The philosophical question of “What is art?” has an ethereal, transient quality to it. A definition seems to slip away as you get close to an answer. Embracing that quality, [Max Fischer] has created an AI-powered painting that paints a new piece of art at the push of a button. When the button below the screen is pushed, a new image is generated and the old one is forever lost, which, in a way, makes the frame a piece of art itself.

What really makes this project stand out is the sheer quality of the documentation in the GitHub repo. The instructions are incredibly detailed. Everything from setting up the Jetson to building the control box out of half-inch MDF (12 mm for the sane part of the world) is laid out with copious pictures. Despite the ease of generating images ahead of time, [Max] took the hard Hackaday route and did all inference locally and in real-time. To handle the processing requirements, an NVIDIA Jetson Xavier NX single-board computer was used. He trained StyleGAN on high-resolution abstract art, and a new image is generated whenever the button below the screen is pushed. To prevent screen burn-in, a PIR sensor was added to turn the screen off when no one is around.
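
As a rough sketch of that control loop (not [Max]'s actual code), the logic amounts to: poll the button to kick off a fresh inference, and use the PIR sensor to blank the display when the room is empty. The generate_image() and set_display_power() functions below are hypothetical placeholders for the StyleGAN call and the screen-control method, the pin numbers are arbitrary, and the GPIO calls assume NVIDIA's RPi.GPIO-compatible Jetson.GPIO library:

```python
# Rough sketch of the control loop described above: a button press triggers a
# new image from the trained network, and a PIR sensor blanks the display when
# nobody is around. generate_image() and set_display_power() are placeholders.
import time
import Jetson.GPIO as GPIO

BUTTON_PIN = 18   # physical pin numbers; adjust to match your wiring
PIR_PIN = 22      # assumes both inputs read high when active

def generate_image():
    """Placeholder: run StyleGAN inference and push the result to the display."""
    print("generating a new painting...")

def set_display_power(on):
    """Placeholder: however your particular panel gets switched on and off."""
    print("display on" if on else "display off")

GPIO.setmode(GPIO.BOARD)
GPIO.setup(BUTTON_PIN, GPIO.IN)
GPIO.setup(PIR_PIN, GPIO.IN)

try:
    display_on = True
    while True:
        if GPIO.input(BUTTON_PIN):              # button pressed: new artwork
            generate_image()
            time.sleep(0.5)                     # crude debounce
        occupied = bool(GPIO.input(PIR_PIN))    # PIR high while motion is detected
        if occupied != display_on:              # only toggle on state changes
            set_display_power(occupied)
            display_on = occupied
        time.sleep(0.1)
finally:
    GPIO.cleanup()
```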

Here at Hackaday, we’ve seen several projects putting old laptop screens or monitors into a nice wooden case and mounting them to the wall. Since 32″ laptops are rather hard to find, [Max] opted to take a different approach and instead got a 32″ Samsung Frame for relatively cheap.

For all its detail, [Max] did leave one thing out of the readme: the AI model that generates the art. [Max] hints that he wants others to create their own picture frames with their own art generation. So what are you waiting for? Go make some art.

Hackaday Links: May 30, 2021

That collective “Phew!” you heard this week was probably everyone on the Mars Ingenuity helicopter team letting out a sigh of relief while watching telemetry from the sixth and somewhat shaky flight of the UAV above Jezero crater. With Ingenuity now in an “operations demonstration” phase, the sixth flight was to stretch the limits of what the craft can do and learn how it can be used to scout out potential sites to explore for its robot buddy on the surface, Perseverance.

While the aircraft was performing its 150 m move to the southwest, the stream from the downward-looking navigation camera dropped a single frame. By itself, that wouldn’t have been so bad, but the glitch caused subsequent frames to come in with the wrong timestamps. This apparently confused the hell out of the flight controller, which commanded some pretty dramatic moves in the roll and pitch axes — up to 20° off normal. Thankfully, the flight controller was designed to handle just such an anomaly, and the aircraft was able to land safely within five meters of its planned touchdown. As pilots say, any landing you can walk away from is a good landing, so we’ll chalk this one up as a win for the Ingenuity team, who we’re sure are busily writing code to prevent this from happening again.

If wobbling UAVs on another planet aren’t enough cringe for you, how about a blind mechanical demi-ostrich drunk-walking up and down a flight of stairs? The work comes from Oregon State University and Agility Robotics, and the robot in question is called Cassie, an autonomous bipedal bot with a curious, bird-like gait. Without cameras or lidar for this test, the robot relied on proprioception, which senses the angles of its joints and the feedback from its motors when it touches a solid surface. And over ten tries up and down the stairs, Cassie did pretty well — she failed only twice, with just one counting as a face-plant, if indeed she had a face. We noticed that the robot often did that little move where you misjudge the step and land with the instep of your foot hanging over the tread; that one always has us grabbing for the handrail, but Cassie was able to power through it every time. The paper describing how Cassie was trained is pretty interesting — too bad ED-209’s designers couldn’t have read it.

So this is what it has come to: NVIDIA is now purposely crippling its flagship GPU cards to make them less attractive to cryptocurrency miners. The LHR, or “Lite Hash Rate,” cards include newly manufactured GeForce RTX 3080, 3070, and 3060 Ti cards, which will now have reduced Ethereum hash rates baked into the chip from the factory. When we first heard about this a few months ago, we puzzled a bit — why would a GPU card manufacturer care how its cards are used, especially if it’s selling a ton of them? But it makes sense that NVIDIA would like to protect its brand with its core demographic — gamers — and having miners snarf up all the cards and leave none for gamers is probably a bad practice. So while it makes sense, we’ll have to wait and see how the semi-lobotomized cards are received by the market, and how the changes impact other non-standard uses for them, like weather modeling and genetic analysis.

Speaking of crypto, we found it interesting that police in the UK accidentally found a Bitcoin mine this week while searching for an illegal cannabis growing operation. It turns out that something that uses a lot of electricity, gives off a lot of heat, and has people going in and out of a small storage unit at all hours of the day and night usually is a cannabis farm, but in this case it turned out to be about 100 Antminer S9s set up on janky looking shelves. The whole rig was confiscated and hauled away; while Bitcoin mining is not illegal in the UK, stealing the electricity to run the mine is, which the miners allegedly did.

And finally, we have no idea what useful purpose this information serves, but we do know that it’s vitally important to relate to our dear readers that yellow LEDs change color when immersed in liquid nitrogen. There’s obviously some deep principle of quantum mechanics at play here, and we’re sure someone will adequately explain it in the comments. But for now, it’s just a super interesting phenomenon that has us keen to buy some liquid nitrogen to try out. Or maybe dry ice — that’s a lot easier to source.

AI Upscaling And The Future Of Content Delivery

The rumor mill has recently been buzzing about Nintendo’s plans to introduce a new version of their extremely popular Switch console in time for the holidays. A faster CPU, more RAM, and an improved OLED display are all pretty much a given, as you’d expect for a mid-generation refresh. Those upgraded specifications will almost certainly come with an inflated price tag as well, but given the incredible demand for the current Switch, a $50 or even $100 bump is unlikely to dissuade many prospective buyers.

But according to a report from Bloomberg, the new Switch might have a bit more going on under the hood than you’d expect from the technologically conservative Nintendo. Their sources claim the new system will utilize an NVIDIA chipset capable of Deep Learning Super Sampling (DLSS), a feature which is currently only available on high-end GeForce RTX 20 and GeForce RTX 30 series GPUs. The technology, which has already been employed by several notable PC games over the last few years, uses machine learning to upscale rendered images in real-time. So rather than tasking the GPU with producing a native 4K image, the engine can render the game at a lower resolution and have DLSS make up the difference.
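
DLSS itself is proprietary, running on the Tensor cores and (in its later versions) fed motion vectors from the game engine, but the underlying "render low, upscale with a trained network" idea can be sketched with a toy sub-pixel convolution upscaler (ESPCN-style) in PyTorch. This is purely illustrative: an untrained network like this produces garbage until it has been trained on pairs of low- and high-resolution frames.

```python
# Toy super-resolution module: convolutions extract features from the
# low-resolution frame, then PixelShuffle rearranges channels into a
# frame that is `scale` times larger in each spatial dimension.
import torch
import torch.nn as nn

class ToyUpscaler(nn.Module):
    def __init__(self, scale=2):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(32, 3 * scale * scale, kernel_size=3, padding=1),
            nn.PixelShuffle(scale),   # channels -> larger spatial resolution
        )

    def forward(self, frame):
        return self.body(frame)

# A quarter-resolution frame in, a full-resolution frame out (batch, channels, height, width).
low_res = torch.rand(1, 3, 540, 960)
high_res = ToyUpscaler(scale=2)(low_res)
print(high_res.shape)                 # torch.Size([1, 3, 1080, 1920])
```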

The implications of this technology, especially for computationally limited devices, are immense. For the Switch, which doubles as a battery-powered handheld when removed from its dock, the use of DLSS could allow it to produce visuals similar to those of the far larger and more expensive Xbox and PlayStation systems it’s in competition with. If Nintendo and NVIDIA can prove DLSS to be viable on something as small as the Switch, we’ll likely see the technology come to future smartphones and tablets to make up for their relatively limited GPUs.

But why stop there? If artificial intelligence systems like DLSS can scale up a video game, it stands to reason the same techniques could be applied to other forms of content. Rather than saturating your Internet connection with a 16K video stream, will TVs of the future simply make the best of what they have using a machine learning algorithm trained on popular shows and movies?
