NVIDIA’s A.I. Thinks It Knows What Games Are Supposed To Look Like

Videogames have always existed in a weird place between high art and cutting-edge technology. Their consumer-facing nature forces them to be both eye-catching and affordable, while remaining tasteful enough to sit on retail shelves (both physical and digital). Running in real time is a necessity, so game creators can’t pre-render the incredibly complex visuals found in feature films. These pieces of software constantly ride the line between exploiting the hardware of the future and supporting the hardware of the past, where their true user base resides. Each pixel formed and every polygon assembled draws from the finite supply of floating point operations that today’s silicon can deliver. Compromises must be made.

Environmental model textures are often among the first things in a game to fall victim to compromise. Maintaining a viable framerate is paramount to a game’s playability, and elements of the background can end up getting pushed to “the background”. The resulting environments look blurrier than they would have if artists had been given more time, or more computing resources, to polish their creations. But what if you could update that ten-year-old game to take advantage of today’s processing capabilities and screen resolutions?

NVIDIA is currently using artificial intelligence to revise the textures of many classic videogames, bringing them up to spec with today’s monitors. Their neural network is able to fundamentally alter how a game looks without any human intervention. Is this a good thing?
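For contrast with what the neural network does, consider the classical alternative. A toy sketch (not NVIDIA’s method, just a baseline): nearest-neighbour upscaling simply repeats each texel, so no new detail is invented — exactly the shortcoming learned super-resolution tries to address.

```python
# Toy baseline: nearest-neighbour upscaling of a tiny "texture".
# Each source texel is repeated factor x factor times, so the result
# is blockier, not more detailed -- a learned super-resolution model
# instead tries to hallucinate plausible new detail.

def upscale_nearest(texture, factor):
    """Upscale a 2D list of texel values by an integer factor."""
    out = []
    for row in texture:
        stretched = [texel for texel in row for _ in range(factor)]
        for _ in range(factor):
            out.append(list(stretched))
    return out

texture = [[10, 20],
           [30, 40]]

# Each texel becomes a 2x2 block of identical values.
print(upscale_nearest(texture, 2))
```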

Continue reading “NVIDIA’s A.I. Thinks It Knows What Games Are Supposed To Look Like”

Digital License Plates Are Here, But Do We Need Them?

It’s a story as old as time: you need to swap between your custom license plates, but you can’t find a screwdriver and you’re already running late for a big meeting at the Business Factory. You called AAA to see if they could come out and do it for you, but as luck would have it something must be wrong with your phone because the line was disconnected as soon as you explained the situation. As if life in the First World couldn’t get any more difficult.

Luckily, a company called Reviver Auto has come up with a thoroughly modern solution to this age-old problem. Assuming you live in Arizona, California, or Michigan and are willing to pay $800 USD (plus a small monthly service fee), you can join the Rplate revolution! Less a license plate and more of a “cool-looking, multi-functional digital display and connected vehicle platform”, the Rplate will ensure you never again find yourself stuck on the side of the road with an unfashionable license plate.

What’s that? You’ve had the same license plate for years, possibly decades, and have never given it much thought? Well, in that case the Rplate might be sort of a tough sell. Did we mention that someday you might be able to display the current weather on it while your car is parked? Of course, if you can see the license plate you’re already outside, so…

This all might sound like an out-of-season April Fools’ joke, but as far as I can tell from reading the Reviver Auto site and watching their promotional videos, this is essentially the value proposition of their line of Rplate digital license plates. There are some admittedly interesting potential extensions of the technology if they can convince other companies and systems to plug into their ecosystem, but given the cost of the Rplate and the few states where it’s currently legal to use, that seems far from a given at this point.

But of course we’re fans of weird and wonderful technology here at Hackaday, so we should give this device a fair shake. On the surface it might seem to be a solution looking for a problem, but that’s often said of technology ahead of its time. So what exactly is the Rplate, how does it work, and where does it go from here?

Continue reading “Digital License Plates Are Here, But Do We Need Them?”

AI On Raspberry Pi With The Intel Neural Compute Stick

I’ve always been fascinated by AI and machine learning. Google TensorFlow offers tutorials and has been on my ‘to-learn’ list since it was first released, although I always seem to neglect it in favor of the shiniest new embedded platform.

Last July, I took note when Intel released the Neural Compute Stick. It looked like an oversized USB stick, and acted as an accelerator for local AI applications, especially machine vision. I thought it was a pretty neat idea: it allowed me to test out AI applications on embedded systems at a power cost of about 1W. It requires pre-trained models, but there are enough of them available now to do some interesting things.

You can add a few of them in a hub for parallel tasks. Image credit: Intel Corporation.

I wasn’t convinced I would get great performance out of it, and forgot about it until last November when they released an improved version. Unambiguously named the ‘Neural Compute Stick 2’ (NCS2), it was reasonably priced and promised a 6-8x performance increase over the last model, so I decided to give it a try to see how well it worked.


I took a few days off work around Christmas to set up Intel’s OpenVINO Toolkit on my laptop. The installation script provided by Intel wasn’t particularly user-friendly, but it worked well enough and included several example applications I could use to test performance. I found that face detection was possible with my webcam in near real-time (something like 19 FPS), and pose detection at about 3 FPS. So in accordance with the holiday spirit, it knows when I am sleeping, and knows when I’m awake.
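Those FPS figures come from timing an inference loop. A minimal sketch of how one might measure throughput — the `infer` callable below is a stand-in for a real model invocation, not the OpenVINO API:

```python
import time

def measure_fps(infer, frames=100):
    """Time `infer` over a number of frames and return frames per second.
    `infer` stands in for a real inference call (e.g. running one webcam
    frame through a face-detection network)."""
    start = time.perf_counter()
    for _ in range(frames):
        infer()
    elapsed = time.perf_counter() - start
    return frames / elapsed

# Stand-in "model": a trivial computation instead of a neural network.
fps = measure_fps(lambda: sum(range(1000)))
print(f"{fps:.1f} FPS")
```

Swapping the lambda for a CPU-only inference call and then for one dispatched to the stick gives a like-for-like comparison of the accelerator’s benefit.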

That was promising, but the NCS2 was marketed as allowing AI processing on edge computing devices. I set about installing it on the Raspberry Pi 3 Model B+ and compiling the application samples to see if it worked better than previous methods. This turned out to be more difficult than I expected, and the main goal of this article is to share the process I followed and save some of you a little frustration.

Continue reading “AI On Raspberry Pi With The Intel Neural Compute Stick”

Now That’s What I Call Crypto: 10 Years Of The Best Of Bitcoin

On January 3rd, 2009, the Genesis Block was created. This was the first entry on the Bitcoin blockchain. Because of the nature of Bitcoin, all transactions lead back to this block. This is where Bitcoin began, almost exactly ten years ago.

The Genesis Block was created by Satoshi, a person or persons we know nothing about. In the decade since, we’ve seen the astonishing rise and meteoric descent of Bitcoin, and then it happened again after the bubble was re-inflated.

Due to the nature of Bitcoin, blockchains, and ledgers, the entire history of Bitcoin has been recorded. Every coin spent and every satoshi scrupled has been recorded for all to see. It’s time for a retrospective, and not just because I wanted to see some art based on the covers of Now That’s What I Call Music albums. No, ten years is a lot of stories to tell.
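Because everything chains back to the Genesis Block, its contents are easy to check for yourself. The field values below are the well-known public ones; serializing the 80-byte header and hashing it twice with SHA-256, as Bitcoin does, reproduces the famous genesis hash.

```python
import hashlib
import struct

# Well-known public fields of the Bitcoin Genesis Block header.
version = 1
prev_block = bytes(32)  # no previous block: all zeros
merkle_root = bytes.fromhex(
    "4a5e1e4baab89f3a32518a88c31bc87f618f76673e2cc77ab2127b7afdeda33b")
timestamp = 1231006505  # 2009-01-03 18:15:05 UTC
bits = 0x1d00ffff
nonce = 2083236893

# Serialize the 80-byte header: integers little-endian, hashes byte-reversed.
header = (struct.pack("<I", version) + prev_block + merkle_root[::-1]
          + struct.pack("<III", timestamp, bits, nonce))

# Bitcoin hashes the header with SHA-256 twice; the result is displayed
# byte-reversed, which is why the famous hash starts with all those zeros.
block_hash = hashlib.sha256(hashlib.sha256(header).digest()).digest()[::-1].hex()
print(block_hash)
# 000000000019d6689c085ae165831e934ff763ae46a2a6c172b3f1b60a8ce26f
```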

Continue reading “Now That’s What I Call Crypto: 10 Years Of The Best Of Bitcoin”

Drone Sightings, A New British Comedy Soap Opera

There’s a new soap opera that I can’t stop watching. Actually, I wish I could change the channel, but this is unfortunately happening in real life. It’s likely the ups and downs of drone sightings would be too far-fetched for fiction anyway.

If you aren’t British, maybe you know a little of our culture through the medium of television. We don’t all live in stately homes like Downton Abbey, of course; instead we’re closer to the sometimes comedic sets, bad lighting, and ridiculously complicated lives of the residents of Coronation Street, or of Albert Square in EastEnders, which you may have flashed past late at night on a high-number channel.

Our new comedy soap lacks the regional accents of Emmerdale or Hollyoaks, but has no less of the farce about it. Here at Hackaday we’ve brought you news of the UK’s peculiar habit of bad reporting and shoddy investigation of questionable drone sightings several times over the last year or two. Most recently we covered a series of events before Christmas that closed Gatwick, London’s second airport, for several days over what turned out to be nothing of substance.

Unfortunately it didn’t end there. We’re back once more to catch up with the latest events down on the tarmac, and come away with a fresh set of reasonable questions unanswered by the popular coverage of the matter.

Continue reading “Drone Sightings, A New British Comedy Soap Opera”

Video: Putting High Speed PCB Design To The Test

Designing circuit boards for high speed applications requires special considerations. This you already know, but what exactly do you need to do differently from common board layout? Building on where I left off discussing impedance in 2-layer Printed Circuit Board (PCB) designs, I wanted to start talking about high speed design techniques as they relate to PCBs. This is the world of multi-layer PCBs, where the impedance of the Power Delivery Network (PDN) and the integrity of the signals themselves (Signal Integrity, or SI) become very important factors.
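To get a rough sense of the numbers involved, the classic IPC-2141 closed-form approximation estimates the characteristic impedance of a surface microstrip from the trace and dielectric geometry. The board parameters below are illustrative assumptions for a 2-layer FR-4 board, not values from any particular design; for real work you’d reach for a field solver.

```python
import math

def microstrip_z0(er, h, w, t):
    """IPC-2141 approximation for surface microstrip impedance (ohms).
    er: substrate dielectric constant; h: dielectric height; w: trace
    width; t: copper thickness (all lengths in the same unit). Only a
    rough estimate, reasonable for roughly 0.1 < w/h < 2.0."""
    return 87.0 / math.sqrt(er + 1.41) * math.log(5.98 * h / (0.8 * w + t))

# Illustrative (assumed) 2-layer FR-4 numbers: er ~ 4.5, 1.6 mm
# dielectric, 2.9 mm trace, 35 um (1 oz) copper.
z0 = microstrip_z0(er=4.5, h=1.6, w=2.9, t=0.035)
print(f"Z0 ~ {z0:.1f} ohms")  # lands near the common 50 ohm target
```

Note how the impedance depends on the ratio of trace width to dielectric height — which is exactly why moving to thin multi-layer stackups changes the trace widths needed for controlled impedance.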

I put together a few board designs to test out different situations that affect high speed signals. You’ve likely heard of vias and traces laid out at right angles having an impact. But have you considered how the glass fabric weave in the board itself impacts a design? In this video I grabbed some of my fanciest test equipment and put these design assumptions to the test. Have a look and then join me after the break for more details on what went into this!

Continue reading “Video: Putting High Speed PCB Design To The Test”

The Short And Tragic Story Of Life On The Moon

The Moon is a desolate rock, completely incapable of harboring life as we know it. Despite being our closest celestial neighbor, conditions on the surface couldn’t be more different from the warm and wet world we call home. Variations in surface temperature are so extreme, from a blistering 106 C (223 F) during the lunar day to a frigid -183 C (-297 F) at night, that even robotic probes struggle to survive. The Moon’s atmosphere, if one is willing to call the wispy collection of oddball gases including argon, helium, and neon at nearly negligible concentrations an atmosphere, does nothing to protect the lunar surface from being bombarded with cosmic radiation.

Von Kármán Crater

Yet for a brief time, very recently, life flourished on the Moon. Of course, it did have a little help. China’s Chang’e 4 lander, which made a historic touchdown in the Von Kármán crater on January 3rd, brought with it an experiment designed to test if plants could actually grow on the lunar surface. The device, known as the Lunar Micro Ecosystem (LME), contained air, soil, water, and a collection of seeds. When it received the appropriate signal, LME watered the seeds and carefully monitored their response. Not long after, Chinese media proudly announced that the cotton seeds within the LME had sprouted and were doing well.

Unfortunately, the success was exceptionally short-lived. Just a few days after the LME experiment’s success was announced, it was revealed that all the plants which had sprouted were dead. The timeline here is a bit hazy; it was not even immediately clear whether the abrupt end of the LME experiment was intentional or due to some hardware failure.

So what exactly do we know about Chang’e 4’s Lunar Micro Ecosystem, and the lifeforms it held? Why did the plants die? But perhaps most importantly, what does all this have to do with potential future human missions to that inhospitable rock floating just a few hundred thousand kilometers away from us?

Continue reading “The Short And Tragic Story Of Life On The Moon”