NVIDIA’s A.I. Thinks It Knows What Games Are Supposed To Look Like

Videogames have always existed in a weird place between high art and cutting-edge technology. Their consumer-facing nature forces them to be both eye-catching and affordable, while remaining tasteful enough to sit on retail shelves (both physical and digital). Running in real-time is a necessity, so it’s not as if game creators can pre-render the incredibly complex visuals found in feature films. These pieces of software constantly ride the line between exploiting the hardware of the future and supporting the past, where their true user base resides. Each pixel formed and every polygon assembled comes out of the finite supply of floating-point operations today’s silicon can deliver. Compromises must be made.

Often one of the first areas in a game to fall victim to compromise is environmental model textures. Maintaining a viable framerate is paramount to a game’s playability, and elements of the background can end up getting pushed to “the background”. The resulting environments look blurrier than they would have if artists had been given more time, or more computing resources, to optimize their creations. But what if you could update that ten-year-old game to take advantage of today’s processing capabilities and screen resolutions?

NVIDIA is currently using artificial intelligence to revise textures in many classic videogames to bring them up to spec with today’s monitors. Their neural network is able to fundamentally alter how a game looks without any human intervention. Is this a good thing?

Continue reading “NVIDIA’s A.I. Thinks It Knows What Games Are Supposed To Look Like”

Nvidia Transforms Standard Video Into Slow Motion Using AI

Nvidia is back at it again with another awesome demo of applied machine learning: artificially transforming standard video into slow motion – they’re so good at showing off what AI can do that anyone would think they were trying to sell hardware for it.

Though most modern phones and cameras have an option to record in slow motion, it often comes at the expense of resolution, and always at the expense of storage space. For really high frame rates you’ll need a specialist camera, and you often don’t know that you should be filming in slow motion until after an event has occurred. Wouldn’t it be nice if we could just convert standard video to slow motion after it was recorded?

That’s just what Nvidia has done, all nicely documented in a paper. At its heart, the algorithm takes two frames and artificially creates one or more frames in between. This is not a hand-coded interpolation algorithm; it’s a fully fledged deep-learning system. The Convolutional Neural Network (CNN) was trained on over a thousand videos – roughly 300k individual frames.

Since none of the parameters of the CNN are time-dependent, it’s possible to generate as many intermediate frames as required, something which sets this solution apart from previous approaches. In some of the shots in their demo video, 30fps video is converted to 240fps; this requires the creation of 7 additional frames for every pair of consecutive frames.

The video after the break is seriously impressive, though if you look carefully you can see the odd imperfection, like the hockey player’s skate or dancer’s arm. Deep learning is as much an art as a science, and if you understood all of the research paper then you’re doing pretty darn well. For the rest of us, get up to speed by wrapping your head around neural networks, and trying out the simplest Tensorflow example.

Continue reading “Nvidia Transforms Standard Video Into Slow Motion Using AI”

CUDA Is Like Owning A Supercomputer

The word supercomputer gets thrown around quite a bit. The original Cray-1, for example, operated at about 150 MIPS and had about eight megabytes of memory. A modern Intel i7 CPU can hit almost 250,000 MIPS and is unlikely to have less than eight gigabytes of memory, and probably has quite a bit more. Sure, MIPS isn’t a great performance number, but clearly, a top-end PC is way more powerful than the old Cray. The problem is, it’s never enough.

Today’s computers have to process huge numbers of pixels, video data, audio data, neural networks, and long-key encryption. Because of this, video cards have become what in the old days would have been called vector processors. That is, they are optimized to do operations on multiple data items in parallel. There are a few standards for using the video card for computation, and today I’m going to show you how simple it is to use CUDA — the NVIDIA proprietary library for this task. You can also use OpenCL, which works with many different kinds of hardware, but as I’ll show you, it’s a bit more verbose.
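As a taste of what’s in the full article (this is a generic vector-add sketch of our own, not the article’s listing, and you’ll need `nvcc` and an NVIDIA card to build it), here’s about the smallest useful CUDA program: one GPU thread per array element.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Kernel: each thread computes exactly one element of c = a + b.
__global__ void add(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // global thread index
    if (i < n) c[i] = a[i] + b[i];                  // guard the tail
}

int main() {
    const int n = 1 << 20;                     // a million floats
    float *a, *b, *c;
    cudaMallocManaged(&a, n * sizeof(float));  // unified memory: the same
    cudaMallocManaged(&b, n * sizeof(float));  // pointers work on CPU and GPU
    cudaMallocManaged(&c, n * sizeof(float));
    for (int i = 0; i < n; i++) { a[i] = 1.0f; b[i] = 2.0f; }

    // Launch enough 256-thread blocks to cover all n elements.
    add<<<(n + 255) / 256, 256>>>(a, b, c, n);
    cudaDeviceSynchronize();                   // wait for the GPU to finish

    printf("c[0] = %.1f\n", c[0]);
    cudaFree(a); cudaFree(b); cudaFree(c);
    return 0;
}
```

The triple-angle-bracket launch is the whole trick: the same `add` runs a million times in parallel. The OpenCL equivalent needs explicit platform, device, context, and queue setup before you ever get to the kernel — hence “a bit more verbose”.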
Continue reading “CUDA Is Like Owning A Supercomputer”

Hackaday Links Column Banner

Hackaday Links: Not A Creature Was Stirring, Except For A Trackball

Hey, did you know Hackaday is starting an Open Access, peer-reviewed journal? The Hackaday Journal of What You Don’t Know (HJWYDK) is looking for submissions detailing the tools, techniques, and skills that we don’t know, but should. Want to teach everyone how to make sand think? Write a paper and tell us about it! Send in your submissions here.

Have you noticed OSH Park updated their website?

The MSP430 line of microcontrollers is super cool, low power, and cheap. Occasionally, TI pumps out a few MSP430 dev boards and sells them for the rock-bottom price of $4.30. Here ya go, fam. This one is loaded up with the MSP430FR2433.

lol, Bitcoin this week.

Noisebridge, the San Francisco hackerspace and one of the first hackerspaces in the US, is now looking for a new place. Why, you may ask? Because San Francisco real estate. The going price per square foot is triple what their current lease provides. While we hope Noisebridge will find a new home, we’re really looking forward to the hipster restaurant that’s only open for brunch that will take its place.

The coolest soundcards, filled with DOS blips and bloops, were based on the OPL2 and OPL3 sound chips. If you want one of these things, you’re probably going to be digging up an old ISA SoundBlaster soundcard. The OPL2LPT puts this classic sound chip on the parallel port, for computers that don’t have an easily-accessible ISA bus, like those cool vintage laptops. The 8-Bit Guy recently took a look at this neat piece of hardware, and apart from requiring a driver to work with any OPL2-compatible game, this thing actually works.

NVIDIA just did something amazing. They created a piece of hardware that everyone wants but isn’t used to turn electricity into heat and Bitcoin. This fantastic device, that is completely original and not at all derivative, is sold in the NVIDIA company store for under five dollars. Actually, the green logo silk/art on this PCB ruler is kinda cool, and I’d like to know how they did that. Also, and completely unrelated: does anyone want ten pounds of Digikey PCB rulers?

Hackaday Links Column Banner

Hackaday Links: November 19, 2017

[Peter]’s homebuilt ultralight is actually flying now and not in ground effect, much to the chagrin of YouTube commenters. [Peter Sripol] built a Part 103 ultralight (no license required, any moron can jump in one and fly) in his basement out of foam board from Lowes. Now, he’s actually doing flight testing, and he managed to build a good plane. Someone gifted him a ballistic parachute so the GoFundMe for the parachute is unneeded right now, but this gift parachute is a bit too big for the airframe. Not a problem; he’ll just sell it and buy the smaller model.

Last week, rumors circulated of Broadcom acquiring Qualcomm for the sum of One… Hundred… Billion Dollars. It looks like that’s not happening now. Qualcomm rejected a deal for $103B, saying the offer, ‘undervalued the company and would face regulatory hurdles.’ Does this mean the deal is off? No, there are 80s guys out there who put the dollar signs in Busine$$, and there’s politicking going on.

A few links posts ago, I pointed out there were some very fancy LED panels available on eBay for very cheap. The Barco NX-4 LED panels are 32×36 arrays of RGB LEDs, driven very quickly by some FPGA goodness. The reverse engineering of these panels is well underway, and [Ian] and his team almost have everything figured out. Glad I got my ten panels…

TechShop is gone. With a heavy heart, we bid adieu to a business with a whole bunch of tools anyone can use. This leaves a lot of people with TechShop memberships out in the cold, and to ease the pain, Glowforge, Inventables, Formlabs, and littleBits are offering some discounts so you can build a hackerspace in your garage or basement. In other TechShop news, the question on everyone’s mind is, ‘what are they going to do with all the machines?’ Nobody knows, but the smart money is on a liquidation auction. Yes, in a few months, you’ll probably be renting a U-Haul and driving to TechShop one last time.

3D Hubs has come out with a 3D Printing Handbook. There’s a lot in the world of filament-based 3D printing that isn’t written down. It’s all based on experience, passed on from person to person. How much of an overhang can you really get away with? How do you orient a part correctly? God damned stringing. How do you design a friction-fit between two parts? All of these techniques are learned by experience. Is it possible to put this knowledge in a book? I have no idea, so look for that review in a week or two.

Like many of us, I’m sure, [Adam] is a collector of vintage computers. Instead of letting them sit in the attic, he’s taking gorgeous pictures of them. The collection includes most of the big-time Atari and Commodore 8-bitters, your requisite Apples, all of the case designs of the all-in-one Macs, some Pentium-era PCs, and even a few of the post-97 Macs. Is that Bondi Blue? Bonus points: all of these images are free to use with attribution.

Nvidia is blowing out their TX1 development kits. You can grab one for $200. What’s the TX1? It’s a really, really fast ARM computer stuffed into a heat sink that’s about the size of a deck of cards. You can attach it to a MiniITX breakout board that provides you with Ethernet, WiFi, and a bunch of other goodies. It’s a step above the Raspberry Pi for sure and is capable enough to run as a normal desktop computer.

Hackaday Links: August 27, 2017

Hulk Hands! Who remembers Hulk Hands? These were a toy originally released for the 2003 Hulk movie and were basically large foam clenched fists you could wear. Hulk Hands have consistently been re-released for various Marvel films, but now there’s something better: it’s the stupidest tool ever. Two guys thought it would be fun and not dangerous at all to create cast iron Hulk Hands and use them as demolition and renovation equipment. This is being sold as a tool comparable to a sledgehammer or a wrecking bar.

New Pogs! We’re up to 0x0C. Is your collection complete?

[Peter] is building an airplane out of foam in his basement. He’s also documenting it as a five- or six-part series on his YouTube channel. Part two is now up. This update covers the tail surfaces, weighing and balancing the fuselage, and a general Q&A with YouTube comments. Yes, [Peter] still has a GoFundMe up for a parachute, and it’s already about half funded. With any luck, he’ll have the $2600 for a parachute before he builds the rest of the plane. Another option is a ballistic parachute system — a parachute for the whole plane, like a Cirrus. That would be a bit more than $4000, so we’ll see how far the GoFundMe goes.

Hey, remember the Nvidia Jetson TX1? It’s a mini-ITX motherboard running a fast ARM core with a GPU housing 256 CUDA cores. It’s cool, and the new version — the TX2 — is designed for ‘machine learning at the edge’. They’re on sale now, for only $199.

Primitive Technology has another video out. This time, he’s improving his bow string blower into something that kinda, sorta resembles a modern forge. The experiment was a success when it comes to pottery — he’s now able to fire clay at a much higher temperature, bringing him reasonably close to modern ceramics. At least, as close as you can get starting with the technology of a pointed stick. It was marginally successful when it came to creating iron. He’s using iron-bearing bacteria (!) for his source of ore and was able to smelt millimeter-sized pellets of iron. This guy needs a source of copper or tin. Zinc is also surprisingly possible given his newfound capabilities for ceramics.

Hackaday Links: May 14, 2017

Maker Faire Bay Area is next weekend, and you know what that means: we’re having a meetup on Saturday night. If you’re in the area, it’s highly recommended you attend. It’s a blinky bring-a-hack with booze. You can’t beat it. I heard the OPShark is showing up. All hail the OPShark. You’re gonna want to RSVP if you’re going k thx.

It only took twelve years, but [ladyada] finally got herself on the cover of Make.

Nvidia has the Jetson, an extremely powerful single board computer + GPU meant for machine learning, image processing, and robotics applications. If you want to do fancy ML stuff with low power devices, I’d highly recommend you check the Jetson out. Of course, the Jetson is only the brains of any Machine Learning robot; you also need some muscle. To that end, Nvidia released the Isaac robotic simulator. It’s a simulator for standard bits of hardware like quadcopters, hovercrafts (?), robotic arms, and yes, selfie drones. What does this mean? Standardized hardware means someone is going to produce 3rd party hardware, and that’s awesome.

This is just an observation, but fidget spinners are just now hitting the mainstream. We didn’t know what they were for a year ago, and we don’t know now.

A Hebocon is a shitty robot battle. DorkbotPDX just had their first Hebocon and the results were… just about as shitty as you would expect. Since this is a shitty robot battle, a MakerBot made an appearance. This robot, SpitterBot, was designed to blow extruded filament all over its opponent. Did the MakerBot win? Yes, SpitterBot won the ‘Poorest Quality’ award.

Supplyframe, Hackaday’s parent company, hosts monthly-ish electronics get-togethers in the San Francisco office. The focus of these meetups is to find someone cool who built something awesome and get them to talk about it. The March meetup featured [Pete Bevelacqua] who built a Vector Network Analyzer from scratch. The video is worth a watch.