
Hackaday Links: May 30, 2021

That collective “Phew!” you heard this week was probably everyone on the Mars Ingenuity helicopter team letting out a sigh of relief while watching telemetry from the sixth and somewhat shaky flight of the UAV above Jezero crater. With Ingenuity now in an “operations demonstration” phase, the sixth flight was to stretch the limits of what the craft can do and learn how it can be used to scout out potential sites to explore for its robot buddy on the surface, Perseverance.

While the aircraft was performing its 150 m move to the southwest, the stream from the downward-looking navigation camera dropped a single frame. By itself, that wouldn’t have been so bad, but the glitch caused subsequent frames to come in with the wrong timestamps. This apparently confused the hell out of the flight controller, which commanded some pretty dramatic moves in the roll and pitch axes — up to 20° off normal. Thankfully, the flight controller was designed to handle just such an anomaly, and the aircraft was able to land safely within five meters of its planned touchdown. As pilots say, any landing you can walk away from is a good landing, so we’ll chalk this one up as a win for the Ingenuity team, who we’re sure are busily writing code to prevent this from happening again.
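
Just to make the failure mode concrete, here’s a minimal sketch (purely illustrative, and nothing like the actual flight software) of the sort of timestamp sanity check a visual navigation pipeline can apply, so that one bad frame can’t drag the frames behind it out of sync:

```python
# Hypothetical illustration only -- not Ingenuity's flight code.
# A navigation filter that matches features between frames has to trust
# that each image's timestamp reflects when it was actually taken, so a
# simple defense is to reject frames whose timestamps don't make sense.

def filter_frames(frames, max_gap_s=0.5):
    """Yield only frames whose timestamps advance monotonically.

    frames: iterable of (timestamp_s, image) tuples in arrival order.
    """
    last_ts = None
    for ts, image in frames:
        if last_ts is not None and (ts <= last_ts or ts - last_ts > max_gap_s):
            # Timestamp went backwards or jumped too far: skip the frame
            # rather than let the estimator match features against the
            # wrong moment in time.
            continue
        last_ts = ts
        yield ts, image
```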

If wobbling UAVs on another planet aren’t enough cringe for you, how about a blind mechanical demi-ostrich drunk-walking up and down a flight of stairs? The work comes from Oregon State University and Agility Robotics, and the robot in question is called Cassie, an autonomous bipedal bot with a curious, bird-like gait. Without cameras or lidar for this test, the robot relied on proprioception: sensing the angles of its joints and the feedback from its motors when it touches a solid surface. And over ten tries up and down the stairs, Cassie did pretty well; she failed only twice, and only one of those counted as a face-plant, if indeed she had a face. We noticed that the robot often did that little move where you misjudge the step and land with the instep of your foot hanging over the tread; that one always has us grabbing for the handrail, but Cassie was able to power through it every time. The paper describing how Cassie was trained is pretty interesting; too bad ED-209’s designers couldn’t have read it.
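
For a rough idea of what proprioception-only control looks like in practice, here’s a minimal sketch of the kind of observation vector a blind walking policy might consume. The names and sizes are illustrative guesses, not Cassie’s actual interface:

```python
import numpy as np

# Illustrative sketch of a proprioception-only observation vector for a
# learned walking policy: no cameras or lidar, just joint encoders,
# motor feedback, and an IMU. Sizes are made up for the example.

def proprioceptive_observation(joint_pos, joint_vel, motor_torque,
                               body_orientation, body_angular_vel):
    """Stack the robot's internal sensing into one policy input."""
    return np.concatenate([
        joint_pos,          # where each joint currently is
        joint_vel,          # how fast each joint is moving
        motor_torque,       # the resistance felt when a foot hits a tread
        body_orientation,   # roll/pitch/yaw from the IMU
        body_angular_vel,   # body rotation rates from the IMU
    ])

# Example: ten actuated joints plus a 3-axis IMU -> a 36-element vector.
obs = proprioceptive_observation(np.zeros(10), np.zeros(10), np.zeros(10),
                                 np.zeros(3), np.zeros(3))
print(obs.shape)  # (36,)
```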

So this is what it has come to: NVIDIA is now purposely crippling its flagship GPUs to make them less attractive to cryptocurrency miners. The LHR, or “Lite Hash Rate,” cards include newly manufactured GeForce RTX 3080, 3070, and 3060 Ti models, which will now have reduced Ethereum hash rates baked into the chip at the factory. When we first heard about this a few months ago, we puzzled a bit: why would a GPU maker care how its cards are used, especially if it’s selling a ton of them? But it makes sense that NVIDIA would like to protect its brand with its core demographic, gamers, and having miners snarf up all the cards and leave none for gamers is probably bad for business. So while it makes sense, we’ll have to wait and see how the semi-lobotomized cards are received by the market, and how the changes affect other non-standard uses, like weather modeling and genetic analysis.

Speaking of crypto, we found it interesting that police in the UK accidentally uncovered a Bitcoin mine this week while searching for an illegal cannabis growing operation. Usually, something that uses a lot of electricity, gives off a lot of heat, and has people coming and going from a small storage unit at all hours of the day and night is a cannabis farm; in this case, it turned out to be about 100 Antminer S9s set up on janky-looking shelves. The whole rig was confiscated and hauled away; while Bitcoin mining is not illegal in the UK, stealing the electricity to run the mine is, which the miners allegedly did.

And finally, we have no idea what useful purpose this information serves, but we do know that it’s vitally important to relate to our dear readers that yellow LEDs change color when immersed in liquid nitrogen. There’s obviously some deep principle of quantum mechanics at play here, and we’re sure someone will adequately explain it in the comments. But for now, it’s just a super interesting phenomenon that has us keen to buy some liquid nitrogen to try out. Or maybe dry ice — that’s a lot easier to source.
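
One commonly cited explanation is that a semiconductor’s bandgap widens as it cools, so the emitted photons get more energetic and the color shifts toward shorter wavelengths. Here’s a rough back-of-the-envelope sketch using the Varshni relation; the material parameters are illustrative placeholders, not measured values for any real yellow LED:

```python
# Rough sketch: the bandgap widens as temperature drops (Varshni relation),
# so the peak emission wavelength gets shorter. Eg0, alpha, and beta below
# are illustrative numbers, not data for a specific LED die.

def emission_wavelength_nm(T, Eg0=2.19, alpha=5e-4, beta=200.0):
    """Very approximate peak wavelength (nm) at temperature T (kelvin)."""
    Eg = Eg0 - alpha * T**2 / (T + beta)   # bandgap in eV
    return 1239.84 / Eg                    # photon energy (eV) -> wavelength (nm)

print(round(emission_wavelength_nm(300)))  # ~590 nm, yellow at room temperature
print(round(emission_wavelength_nm(77)))   # ~569 nm, noticeably greener in LN2
```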

AI Upscaling And The Future Of Content Delivery

The rumor mill has recently been buzzing about Nintendo’s plans to introduce a new version of their extremely popular Switch console in time for the holidays. A faster CPU, more RAM, and an improved OLED display are all pretty much a given, as you’d expect for a mid-generation refresh. Those upgraded specifications will almost certainly come with an inflated price tag as well, but given the incredible demand for the current Switch, a $50 or even $100 bump is unlikely to dissuade many prospective buyers.

But according to a report from Bloomberg, the new Switch might have a bit more going on under the hood than you’d expect from the technologically conservative Nintendo. Their sources claim the new system will utilize an NVIDIA chipset capable of Deep Learning Super Sampling (DLSS), a feature currently available only on high-end GeForce RTX 20 and RTX 30 series GPUs. The technology, which has already been employed by several notable PC games over the last few years, uses machine learning to upscale rendered images in real time. So rather than tasking the GPU with producing a native 4K image, the engine can render the game at a lower resolution and have DLSS make up the difference.
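
As a rough sketch of the idea (a toy model, not DLSS itself, which is a proprietary network that also ingests motion vectors and runs on dedicated Tensor cores), learned upscaling boils down to feeding a low-resolution frame through a trained network that emits a larger one:

```python
import torch
import torch.nn as nn

# Toy super-resolution model in the DLSS spirit: render small, let a
# network fill in the detail. Untrained and illustrative only.

class ToyUpscaler(nn.Module):
    def __init__(self, scale=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3 * scale * scale, 3, padding=1),
            nn.PixelShuffle(scale),   # rearranges channels into a larger image
        )

    def forward(self, low_res):
        return self.net(low_res)

# A 1080p frame (batch of 1, RGB) upscaled to 4K dimensions.
frame = torch.rand(1, 3, 1080, 1920)
print(ToyUpscaler(scale=2)(frame).shape)  # torch.Size([1, 3, 2160, 3840])
```

All of the visual quality in a real system comes from training against high-resolution ground truth; the point here is simply that the expensive rendering happens at the lower resolution.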

The current model Nintendo Switch

The implications of this technology, especially for computationally limited devices, are immense. For the Switch, which doubles as a battery-powered handheld when removed from its dock, the use of DLSS could allow it to produce visuals similar to those of the far larger and more expensive Xbox and PlayStation systems it competes with. If Nintendo and NVIDIA can prove DLSS is viable on something as small as the Switch, we’ll likely see the technology come to future smartphones and tablets to make up for their relatively limited GPUs.

But why stop there? If artificial intelligence systems like DLSS can scale up a video game, it stands to reason the same techniques could be applied to other forms of content. Rather than saturating your Internet connection with a 16K video stream, will TVs of the future simply make the best of what they have using a machine learning algorithm trained on popular shows and movies?

Continue reading “AI Upscaling And The Future Of Content Delivery”

Real Time Object Detection For $59

There was a time when getting a machine to identify objects in a camera feed was difficult, even without trying to do it in real time. But now you can do it with a Jetson Nano board for under $60. How well does it work? Watch [Murtaza’s] video below and see what you think.

The first few minutes of the video piqued our interest, and a good thing, too, because those roughly 50 lines of code come wrapped in a 50-plus minute video! It is worth watching, though, because there’s a lot of good information about how to apply this technique in your own projects.
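
For a taste of what real-time detection on the Nano involves, NVIDIA’s jetson-inference library gets you most of the way there in a handful of lines. This sketch follows their stock detectnet example and isn’t necessarily the approach [Murtaza] takes in the video:

```python
import jetson.inference
import jetson.utils

# Load a pretrained SSD-MobileNet-v2 detector and open the camera and display.
net = jetson.inference.detectNet("ssd-mobilenet-v2", threshold=0.5)
camera = jetson.utils.videoSource("csi://0")        # or "/dev/video0" for a USB cam
display = jetson.utils.videoOutput("display://0")

while display.IsStreaming():
    img = camera.Capture()
    detections = net.Detect(img)                    # draws boxes on img by default
    display.Render(img)
    display.SetStatus("Object Detection | {:.0f} FPS".format(net.GetNetworkFPS()))
```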

Continue reading “Real Time Object Detection For $59”

Video RAM Transplant Doubles RTX 3070 Memory To 16 GB

Making unobtainium graphics cards even more unobtainable, [VIK-on] has swapped out the RAM chips on an Nvidia RTX 3070, making it the only 3070 in the world working with 16 GB of memory.

If this sounds familiar, it’s because he tried the same trick with the RTX 2070 back in January but couldn’t get it working. When he first published the video showing the process of desoldering the 3070’s eight Hynix 1 GB memory chips and replacing them with eight Samsung 2 GB chips, he hit the same wall: the card would boot and detect the increased RAM, but it was unstable and would eventually crash. Helpful hints from his viewers led him to use an EVGA configuration GUI to lock the operating frequency, which fixed the problem. Further troubleshooting (a YouTube comment in Russian, and a machine translation of it) showed that the “max performance mode” setting in the Nvidia tool also stabilizes the card.

The new memory chips don’t self-report their specs to the configuration tool. Instead, a set of three resistors is used to electronically identify which hardware is present. The problem was that [VIK-on] had no idea which resistors did what, or what the different configurations accomplished. It sounds like you can just start moving zero-ohm resistors around and watch the effect in the GUI, as they configure both the brand of memory and the size available. The fact that this board is not currently sold with a 16 GB option, yet the configuration tool has settings for it once the resistors are correctly configured, is kismet.

So did it make a huge difference? That’s difficult to say. He runs some benchmarks in the video; both Unigine 2 Superposition and 3DMark Time Spy results are shown. However, we didn’t see any tests run prior to the chip swap, which would have been the key to characterizing the true impact of the hack. That said, reworking these chips with a handheld hot air station and working your way through the resistor configuration is darn impressive, no matter what the performance bump ends up being.

Continue reading “Video RAM Transplant Doubles RTX 3070 Memory To 16 GB”

Machine Learning Helps You Track Your Internet Misery Index

We all seem to intuitively know that a lot of what we do online is not great for our mental health. Hang out on enough social media platforms and you can practically feel the changes your mind inflicts on your body as a result of what you see — the racing heart, the tight facial expression, the clenched fists raised in seething rage. Not on Hackaday, of course — nothing but sweetness and light here.

That’s all highly subjective, of course. If you’d like to quantify your online misery more objectively, take a look at the aptly named BrowZen, a machine learning application by [Nick Bild]. Built around an NVIDIA Jetson Xavier NX and a web camera, BrowZen captures images of the user’s face periodically. The expression on the user’s face is classified using a facial recognition model that has been trained to recognize facial postures related to emotions like anger, surprise, fear, and happiness. The app captures your mood and which website you’re currently looking at and stores the results in a database. Handy charts let you know which sites are best for your state of mind; it’s not much of a surprise that Twitter induces rage while Hackaday pushes [Nick]’s happiness button. See? Sweetness and light.
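
The overall loop is simple enough to sketch out. This is only an illustration of the idea rather than [Nick]’s actual code, and classify_emotion() and get_active_url() are hypothetical stand-ins for a trained expression model and an active-site lookup:

```python
import sqlite3
import time

import cv2  # pip install opencv-python

# Illustrative sketch, not BrowZen itself: snap a webcam frame every
# minute, classify the expression, and log it alongside the current site.

db = sqlite3.connect("misery.db")
db.execute("CREATE TABLE IF NOT EXISTS mood (ts REAL, url TEXT, emotion TEXT)")
camera = cv2.VideoCapture(0)

def classify_emotion(frame):
    return "neutral"                  # stand-in for a real expression classifier

def get_active_url():
    return "https://hackaday.com"     # stand-in for an active-window/site lookup

while True:
    ok, frame = camera.read()
    if ok:
        db.execute("INSERT INTO mood VALUES (?, ?, ?)",
                   (time.time(), get_active_url(), classify_emotion(frame)))
        db.commit()
    time.sleep(60)                    # sample once a minute
```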

Seriously, we could see something like this being very useful for psychological testing, marketing research, or even medical assessments. This adds to [Nick]’s array of AI apps, which range from tracking which surfaces you touch in a room to preventing you from committing a fireable offense on a video conference.

Continue reading “Machine Learning Helps You Track Your Internet Misery Index”

Add An Extra 8GB Of VRAM To Your 2070

Most of us make do with the VRAM that came with our graphics cards; we can just wait until the next one comes out and get a little more memory. After all, it would be madness to try to delicately solder new components as timing-sensitive as RAM chips onto a graphics card, right?

[VIK-on] took it upon himself to do just that. The inspiration came when a leaked diagram suggested that the RTX 2000 line could support 16 GB of RAM by using 2 GB chips. NVIDIA never did release a 16 GB version of the 2070, so this card is truly one of a kind. After some careful scouring of the internet, he procured the GDDR6 chips and soldered them on with a hot air gun. A few resistors had to be moved to accommodate the new RAM chips. At power-on, [VIK-on] saw all 16 GB enumerate and was able to run some stress tests. Unfortunately, the card wasn’t stable and started having black screen issues and wonky clocks. Whether that was down to a bad solder joint or a firmware issue is hard to say, but he’s pretty convinced it’s a BIOS problem. Switching the resistors back to the 8 GB configuration yielded a stable system.

While a little more recent, this isn’t the only RAM upgrade we’ve covered in the last few months. Video after the break (it’s not in English, but captions are available).

Continue reading “Add An Extra 8GB Of VRAM To Your 2070”

Jetson Emulator Gives Students A Free AI Lesson

With the Jetson Nano, NVIDIA has done a fantastic job of bringing GPU-accelerated machine learning to the masses. For less than the cost of a used graphics card, you get a turn-key Linux computer that’s ready and able to handle whatever AI code you throw at it. But if you’re trying to set up a lab for 30 students, the cost of even relatively affordable development boards can really add up.


Which is why [Tea Vui Huang] has developed jetson-emulator. This Python library provides a work-alike environment to NVIDIA’s own “Hello AI World” tutorials designed for the Jetson family of devices, with one big difference: you don’t need the actual hardware. In fact, it doesn’t matter what kind of computer you’ve got; with this library, anything that can run Python 3.7.9 or better can take you through NVIDIA’s getting started tutorial.

So what’s the trick? Well, if you haven’t guessed already, it’s all fake. Obviously it can’t actually run GPU-accelerated code without a GPU, so the library [Tea] has developed simply pretends. It provides virtual images and even “live” camera feeds to which randomly generated objects have been assigned.

The original NVIDIA functions have been rewritten to work with these feeds, so when you call something like net.Classify(img) against one of them you’ll get a report of what faux objects were detected. The output will look just like it would if you were running on a real Jetson, down to providing fictitious dimensions and positions for the bounding boxes.
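
For reference, NVIDIA’s “Hello AI World” classification example boils down to a few calls like those below; jetson-emulator aims to be a work-alike for exactly this flow, though its precise import path and helpers may differ from this sketch:

```python
import jetson.inference   # on real hardware; jetson-emulator provides a stand-in
import jetson.utils

net = jetson.inference.imageNet("googlenet")   # load a classification network
img = jetson.utils.loadImage("my_image.jpg")   # any test image
class_idx, confidence = net.Classify(img)      # the emulator returns faux results here
print(net.GetClassDesc(class_idx), confidence)
```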

If you’re a hacker looking to dive into machine learning and computer vision, you’d be better off getting a $59 Jetson Nano and a webcam. But if you’re putting together a workshop that shows a dozen people the basics of NVIDIA’s AI workflow, jetson-emulator will allow everyone in attendance to run code and get results back regardless of what they’ve got under the hood.