AI Upscaling And The Future Of Content Delivery

The rumor mill has recently been buzzing about Nintendo’s plans to introduce a new version of their extremely popular Switch console in time for the holidays. A faster CPU, more RAM, and an improved OLED display are all pretty much a given, as you’d expect for a mid-generation refresh. Those upgraded specifications will almost certainly come with an inflated price tag as well, but given the incredible demand for the current Switch, a $50 or even $100 bump is unlikely to dissuade many prospective buyers.

But according to a report from Bloomberg, the new Switch might have a bit more going on under the hood than you’d expect from the technologically conservative Nintendo. Their sources claim the new system will utilize an NVIDIA chipset capable of Deep Learning Super Sampling (DLSS), a feature which is currently only available on high-end GeForce RTX 20 and RTX 30 series GPUs. The technology, which has already been employed by several notable PC games over the last few years, uses machine learning to upscale rendered images in real time. So rather than tasking the GPU with producing a native 4K image, the engine can render the game at a lower resolution and have DLSS make up the difference.
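DLSS itself is proprietary, but the core idea of learned super-resolution is simple enough to sketch. What follows is a minimal, hypothetical PyTorch example of an ESPCN-style upscaler that takes a low-resolution render and fills in the missing detail; the network, its untrained weights, and the 540p-to-1080p figures are purely illustrative, and the real thing also leans on motion vectors and temporal data from previous frames.

# Conceptual sketch only: a tiny ESPCN-style super-resolution network.
# This is NOT DLSS; NVIDIA's implementation is proprietary and also consumes
# motion vectors and prior frames. Resolutions and scale factor are
# illustrative assumptions.
import torch
import torch.nn as nn

class TinyUpscaler(nn.Module):
    def __init__(self, scale=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 64, 5, padding=2), nn.ReLU(),
            nn.Conv2d(64, 32, 3, padding=1), nn.ReLU(),
            # Produce scale^2 * 3 channels, then rearrange them into pixels.
            nn.Conv2d(32, 3 * scale * scale, 3, padding=1),
        )
        self.shuffle = nn.PixelShuffle(scale)

    def forward(self, low_res):
        return self.shuffle(self.features(low_res))

# Render at 960x540 and let the network produce a 1920x1080 frame.
frame = torch.rand(1, 3, 540, 960)       # stand-in for a rendered frame
upscaled = TinyUpscaler(scale=2)(frame)  # -> torch.Size([1, 3, 1080, 1920])
print(upscaled.shape)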

The current model Nintendo Switch

The implications of this technology, especially for computationally limited devices, are immense. For the Switch, which doubles as a battery-powered handheld when removed from its dock, the use of DLSS could allow it to produce visuals similar to those of the far larger and more expensive Xbox and PlayStation systems it’s in competition with. If Nintendo and NVIDIA can prove DLSS to be viable on something as small as the Switch, we’ll likely see the technology come to future smartphones and tablets to make up for their relatively limited GPUs.

But why stop there? If artificial intelligence systems like DLSS can scale up a video game, it stands to reason the same techniques could be applied to other forms of content. Rather than saturating your Internet connection with a 16K video stream, will TVs of the future simply make the best of what they have using a machine learning algorithm trained on popular shows and movies?

Continue reading “AI Upscaling And The Future Of Content Delivery”

Real Time Object Detection For $59

There was a time when getting a machine to identify objects in a camera feed was difficult, even without trying to do it in real time. But now, you can do it with a Jetson Nano board for under $60. How well does it work? Watch [Murtaza’s] video below and see what you think.

The first few minutes of the video piqued our interest, and a good thing, too, because those 50 lines of code get a 50-plus-minute video! It is worth watching, though, because there’s a lot of good information about how to apply this technique in your own projects.
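For anyone wondering what those 50 lines look like, demos like this typically lean on NVIDIA’s jetson-inference library, whose detectNet wrapper hands you a pretrained SSD-MobileNet detector in just a few calls. Here’s a rough sketch of that approach; the model name and camera/display URIs are assumptions you’d adjust for your own hardware.

# Rough sketch of real-time detection using NVIDIA's jetson-inference
# Python bindings. The model name and camera/display URIs are assumptions;
# adjust them for your hardware.
from jetson_inference import detectNet
from jetson_utils import videoSource, videoOutput

net = detectNet("ssd-mobilenet-v2", threshold=0.5)  # pretrained COCO detector
camera = videoSource("csi://0")                     # or "/dev/video0" for a USB cam
display = videoOutput("display://0")

while display.IsStreaming():
    img = camera.Capture()
    if img is None:                  # capture timed out, try again
        continue
    detections = net.Detect(img)     # draws boxes and labels onto the image
    display.Render(img)
    display.SetStatus("Object Detection | {:.0f} FPS".format(net.GetNetworkFPS()))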

Continue reading “Real Time Object Detection For $59”

Video Ram Transplant Doubles RTX 3070 Memory To 16 GB

Making unobtainium graphics cards even more unobtainable, [VIK-on] has swapped out the RAM chips on an Nvidia RTX 3070. This makes it the only 3070 in the world working with 16 GB.

If this sounds familiar, it’s because he tried the same trick with the RTX 2070 back in January but couldn’t get it working. When he first published the video showing the process of desoldering the 3070’s eight Hynix 1 GB memory chips and replacing them with eight Samsung 2 GB chips, he hit the same wall — the card would boot and detect the increased RAM, but was unstable and would eventually crash. Helpful hints from his viewers led him to use an EVGA configuration GUI to lock the operating frequency, which fixed the problem. Further troubleshooting (a YouTube comment in Russian, plus a machine translation of it) showed that the “max performance mode” setting in the Nvidia tool also stabilizes performance.

The new memory chips don’t self-report their specs to the configuration tool. Instead, a set of three resistors is used to electronically identify which hardware is present. The problem was that [VIK-on] had no idea which resistors did what, or what the different configurations accomplished. It sounds like you can just start moving zero-ohm resistors around and watch the effect in the GUI, as they configure both the brand of memory and the size available. The fact that this board isn’t currently sold with a 16 GB option, yet the configuration tool has settings for one once the resistors are set correctly, is a bit of kismet.

So did it make a huge difference? That’s difficult to say. He runs some benchmarks in the video; both Unigine Superposition and 3DMark Time Spy results are shown. However, we didn’t see any tests run prior to the chip swap, which would have been the key to characterizing the true impact of the hack. That said, reworking these chips with a handheld hot air station and working your way through the resistor configuration is darn impressive no matter what the performance bump ends up being.

Continue reading “Video Ram Transplant Doubles RTX 3070 Memory To 16 GB”

Machine Learning Helps You Track Your Internet Misery Index

We all seem to intuitively know that a lot of what we do online is not great for our mental health. Hang out on enough social media platforms and you can practically feel the changes your mind inflicts on your body as a result of what you see — the racing heart, the tight facial expression, the clenched fists raised in seething rage. Not on Hackaday, of course — nothing but sweetness and light here.

That’s all highly subjective, of course. If you’d like to quantify your online misery more objectively, take a look at the aptly named BrowZen, a machine learning application by [Nick Bild]. Built around an NVIDIA Jetson Xavier NX and a web camera, BrowZen captures images of the user’s face periodically. The expression on the user’s face is classified using a model trained to recognize expressions associated with emotions like anger, surprise, fear, and happiness. The app records your mood along with the website you’re currently looking at, and stores the results in a database. Handy charts let you know which sites are best for your state of mind; it’s not much of a surprise that Twitter induces rage while Hackaday pushes [Nick]’s happiness button. See? Sweetness and light.
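The loop itself is simple enough to outline. The snippet below isn’t [Nick]’s code, just a hypothetical sketch of the same idea: grab a webcam frame on a timer, hand it to some expression classifier, and log the verdict alongside the active website in SQLite. The classify_emotion() and active_site() helpers are placeholders for whatever model and browser integration you have available.

# Hypothetical sketch of a BrowZen-style mood logger; this is not [Nick]'s code.
# classify_emotion() and active_site() are placeholders for a real expression
# model and a way to read the browser's current URL.
import sqlite3
import time

import cv2

def classify_emotion(frame):
    """Placeholder: run a facial-expression model on the frame here."""
    return "neutral"

def active_site():
    """Placeholder: query the browser for the URL currently in focus."""
    return "https://hackaday.com"

db = sqlite3.connect("browzen.db")
db.execute("CREATE TABLE IF NOT EXISTS mood (ts REAL, site TEXT, emotion TEXT)")

camera = cv2.VideoCapture(0)
try:
    while True:
        ok, frame = camera.read()
        if ok:
            db.execute("INSERT INTO mood VALUES (?, ?, ?)",
                       (time.time(), active_site(), classify_emotion(frame)))
            db.commit()
        time.sleep(60)  # sample roughly once a minute
finally:
    camera.release()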

Seriously, we could see something like this being very useful for psychological testing, marketing research, or even medical assessments. This adds to [Nick]’s array of AI apps, which range from tracking which surfaces you touch in a room to preventing you from committing a fireable offense on a video conference.

Continue reading “Machine Learning Helps You Track Your Internet Misery Index”

Add An Extra 8GB Of VRAM To Your 2070

Most of us make do with the VRAM that came with our graphics cards. We can just wait until the next one comes out and get a little more memory. After all, it’d be madness to try to delicately desolder and replace components as timing-sensitive as RAM chips, right?

[VIK-on] took it upon himself to do just that. The inspiration came when a leaked diagram suggested that the RTX 2000 line could support 16 GB of RAM by using 2 GB chips. NVIDIA never did release a 16 GB version of the 2070, so this card is truly one of a kind. After some scouring of the internet, the GDDR6 chips were procured and carefully soldered on with a hot air gun. A few resistors had to be moved to accommodate the new RAM chips. During power-on, [VIK-on] saw all 16 GB enumerate and was able to run some stress tests. Unfortunately, the card wasn’t stable and started having black screen issues and wonky clocks. Whether it was a bad solder joint or a firmware issue is hard to say, but he is pretty convinced it’s a BIOS error. Switching the resistors back to the 8 GB configuration yielded a stable system.

While a little more recent, this isn’t the only RAM upgrade we’ve covered in the last few months. Video after the break (it’s not in English but captions are available).
Continue reading “Add An Extra 8GB Of VRAM To Your 2070”

Jetson Emulator Gives Students A Free AI Lesson

With the Jetson Nano, NVIDIA has done a fantastic job of bringing GPU-accelerated machine learning to the masses. For less than the cost of a used graphics card, you get a turn-key Linux computer that’s ready and able to handle whatever AI code you throw at it. But if you’re trying to set up a lab for 30 students, the cost of even relatively affordable development boards can really add up.


Which is why [Tea Vui Huang] has developed jetson-emulator. This Python library provides a work-alike environment to NVIDIA’s own “Hello AI World” tutorials designed for the Jetson family of devices, with one big difference: you don’t need the actual hardware. In fact, it doesn’t matter what kind of computer you’ve got; with this library, anything that can run Python 3.7.9 or better can take you through NVIDIA’s getting started tutorial.

So what’s the trick? Well, if you haven’t guessed already, it’s all fake. Obviously it can’t actually run GPU-accelerated code without a GPU, so the library [Tea] has developed simply pretends. It provides virtual images and even “live” camera feeds to which randomly generated objects have been assigned.

The original NVIDIA functions have been rewritten to work with these feeds, so when you call something like net.Classify(img) against one of them, you’ll get a report of what faux objects were detected. The output will look just like it would if you were running on a real Jetson, right down to providing fictitious dimensions and positions for the bounding boxes.
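The payoff is that a lesson looks almost exactly like the real “Hello AI World” classification example. Something along these lines is the general shape of it, though the module paths and filename below are assumptions based on the project’s drop-in goal; on actual hardware the equivalents come from jetson.inference and jetson.utils.

# Sketch of an emulated "Hello AI World" classification run. The module paths
# and image filename are assumptions based on jetson-emulator's drop-in goal;
# on a real Jetson you'd import jetson.inference and jetson.utils instead.
import jetson_emulator.inference as inference
import jetson_emulator.utils as utils

net = inference.imageNet("googlenet")       # emulated recognition network
img = utils.loadImage("virtual_image.jpg")  # emulator hands back a virtual image

class_idx, confidence = net.Classify(img)   # "detects" a randomly assigned object
class_desc = net.GetClassDesc(class_idx)

print("recognized as '{:s}' (class #{:d}) with {:.1f}% confidence".format(
    class_desc, class_idx, confidence * 100))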

If you’re a hacker looking to dive into machine learning and computer vision, you’d be better off getting a $59 Jetson Nano and a webcam. But if you’re putting together a workshop that shows a dozen people the basics of NVIDIA’s AI workflow, jetson-emulator will allow everyone in attendance to run code and get results back regardless of what they’ve got under the hood.

Attempting To Generate Photorealistic Video With Neural Networks

Over the past decade, we’ve seen great strides made in the area of AI and neural networks. When trained appropriately, they can be coaxed into generating impressive output, whether it be in text, images, or simply in classifying objects. There’s also much fun to be had in pushing them outside their prescribed operating region, as [Jon Warlick] attempted recently.

[Jon]’s work began using NVIDIA’s GauGAN tool. It’s capable of generating pseudo-photorealistic images of landscapes from segmentation maps, where different colors of a 2D image represent things such as trees, dirt, mountains, or water. After spending much time toying with the software, [Jon] decided to see if it could be pressed into service to generate video instead.

The GauGAN tool is only capable of taking in a single segmentation map, and outputting a single image, so [Jon] had to get creative. Experiments were undertaken wherein a video was generated and exported as individual frames, with these frames fed to GauGAN as individual segmentation maps. The output frames from GauGAN were then reassembled into a video again.
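In pipeline terms, that’s just a frame-splitting and re-muxing job wrapped around a per-frame image-to-image call. Here’s a hypothetical sketch of the idea using OpenCV; run_gaugan() stands in for however you actually invoke the model (the public GauGAN demo is web-based, while the underlying SPADE code can be run locally), and none of this is [Jon]’s actual toolchain.

# Hypothetical sketch of the frame-by-frame workflow: split a segmentation-map
# video into frames, push each one through GauGAN, and reassemble the output.
# run_gaugan() is a placeholder for however you invoke the model; this is not
# [Jon]'s actual toolchain.
import cv2

def run_gaugan(segmentation_frame):
    """Placeholder: return a generated landscape for one segmentation map."""
    return segmentation_frame

reader = cv2.VideoCapture("segmentation_maps.mp4")
fps = reader.get(cv2.CAP_PROP_FPS)
size = (int(reader.get(cv2.CAP_PROP_FRAME_WIDTH)),
        int(reader.get(cv2.CAP_PROP_FRAME_HEIGHT)))
writer = cv2.VideoWriter("generated.mp4", cv2.VideoWriter_fourcc(*"mp4v"), fps, size)

while True:
    ok, frame = reader.read()
    if not ok:
        break
    # Each frame is treated as an independent segmentation map, which is why
    # consecutive output frames share no temporal coherence.
    writer.write(run_gaugan(frame))

reader.release()
writer.release()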

The results are somewhat psychedelic, as one would expect. GauGAN’s single-image workflow means there is only coincidental coherence between consecutive frames, creating a wild, shifting vista. While it’s not a technique we expect to see used for serious purposes anytime soon, it’s a great experiment in seeing how far the technology can be pushed. It’s not the first time we’ve seen such technology used to create full motion video, either. Video after the break.

Continue reading “Attempting To Generate Photorealistic Video With Neural Networks”