Using the GPU from JavaScript

Everyone knows that writing programs that exploit the GPU (Graphics Processing Unit) in your computer’s video card requires special arcane tools, right? Well, thanks to [Matthew Saw], [Fazil Sapuan], and [Cheah Eugene], perhaps not. At a hackathon, they turned out a JavaScript library that allows you to create “kernel” functions to execute on the GPU of the target system. There’s a demo available with a benchmark which, on our machine, sped up a 512×512 calculation by well over five times. You can download the library from the same page, and there’s also a GitHub page.

The documentation is a bit sparse but readable. You simply define the function you want to execute and the dimensions of the problem; you can specify one, two, or three dimensions, as suits your problem space. When you execute the associated function, it will try to run the kernel on your GPU in parallel. If it can’t, you’ll still get the right answer, just more slowly.
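
To give a flavor of the API, here’s a minimal sketch of a two-dimensional kernel in the gpu.js style: a 512×512 matrix multiply where each output cell gets its own GPU thread. Names like createKernel and setOutput follow the library’s current documentation and may differ slightly from the hackathon build, so treat this as an approximation rather than gospel:

```typescript
import { GPU } from 'gpu.js';

// Two random 512x512 input matrices.
const size = 512;
const randomMatrix = () =>
  Array.from({ length: size }, () => Array.from({ length: size }, () => Math.random()));
const a = randomMatrix();
const b = randomMatrix();

const gpu = new GPU();

// Each invocation of the kernel body computes one output cell,
// indexed by this.thread.x and this.thread.y.
const multiply = gpu
  .createKernel(function (m: number[][], n: number[][]) {
    let sum = 0;
    for (let i = 0; i < 512; i++) {
      sum += m[this.thread.y][i] * n[i][this.thread.x];
    }
    return sum;
  })
  .setOutput([size, size]); // a two-dimensional problem space

// Runs on the GPU when one is available, otherwise falls back to the CPU.
const result = multiply(a, b) as number[][];
console.log(result[0][0]);
```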

World’s Worst Bitcoin Mining Rig

Even if we don’t quite understand what’s happening in a Bitcoin mine, we all pretty much know what’s needed to set one up. Racks of GPUs and specialized software will eventually find a few of these vanishingly rare virtual treasures, but if you have enough time, even a Xerox Alto from 1973 can be turned into a Bitcoin mine. As for how much time it’ll take [Ken Shirriff]’s rig to find a Bitcoin, let’s just say that his Alto would need to survive the heat death of the universe. About 5000 times. And it would take the electricity generated by a small country to do it.

Even though it’s not exactly a profit center, it gives [Ken] a chance to show off his lovingly restored Alto. The Xerox machine is the granddaddy of all modern PCs, having introduced almost every aspect of the GUI world we live in. But with a processor built from discrete TTL chips and an instruction set that doesn’t even have logical OR or XOR functions, the machine isn’t exactly optimized for SHA-256 hashing. The fact that [Ken] was able to implement a mining algorithm at all is impressive, and his explanation of how Bitcoin mining is done is quite clear and a great primer for cryptocurrency newbies.
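
For those newbies, the job every miner (the Alto included) grinds through boils down to a brutally simple loop: hash a candidate block header with SHA-256 twice, then check whether the result falls below the network’s difficulty target. Here’s an illustrative sketch of that loop, our own TypeScript using Node’s crypto module rather than [Ken]’s Alto code; the header bytes and target are placeholders, not real network data:

```typescript
import { createHash } from 'crypto';

// Double SHA-256, the hash at the heart of Bitcoin mining.
function doubleSha256(data: Buffer): Buffer {
  const first = createHash('sha256').update(data).digest();
  return createHash('sha256').update(first).digest();
}

// Placeholder 80-byte block header and an artificially easy target;
// real mining uses live network data and a far, far smaller target.
const header = Buffer.alloc(80);
const target = Buffer.alloc(32, 0xff);
target[0] = 0x00; // demand at least one leading zero byte

for (let nonce = 0; nonce < 0xffffffff; nonce++) {
  header.writeUInt32LE(nonce, 76);             // the nonce occupies the last 4 bytes
  const hash = doubleSha256(header).reverse(); // compare as a big-endian number
  if (Buffer.compare(hash, target) <= 0) {
    console.log(`valid nonce found: ${nonce}`);
    break;
  }
}
```

A modern mining ASIC churns through that loop trillions of times per second; the Alto has to do the same arithmetic with TTL chips.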

[Ken] seems to enjoy sending old computer hardware to the Bitcoin mines — he made an old IBM mainframe perform the trick a while back. But if you don’t have a room-size computer around, perhaps reading up on alternate uses for the blockchain would be a good idea.

[via Dangerous Prototypes]

Microchip’s PIC32MZ DA — The Microcontroller With A GPU

When it comes to displays, there is a gap between a traditional microcontroller and a Linux system-on-a-chip (SoC). The SoC that lives in a smartphone will always have enough RAM for a framebuffer and usually has a few pins dedicated to an LCD interface. Today, Microchip has announced a microcontroller that blurs the lines between what can be done with an SoC and what can be done with a microcontroller. The PIC32MZ ‘DA’ family of microcontrollers is designed for graphics applications and comes with a boatload of RAM and a dedicated GPU.

The key features are that framebuffer RAM and the 2D GPU. The PIC32MZ DA family includes packages with 32 MB of integrated DRAM designed to be used as framebuffers, and it supports 24-bit color on SXGA (1280 x 1024) panels. There’s also a 2D GPU in there with support for sprites, blitting, alpha blending, line drawing, and filling rectangles. No, it can’t play Crysis — just to get that meme out of the way — but it is an excellent platform for GUIs.
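
A quick back-of-the-envelope check (our arithmetic, not Microchip’s numbers) shows why 32 MB is roomy: a full 24-bit SXGA frame needs just under 4 MB, so even a double-buffered display leaves most of the DRAM free for assets and application data.

```typescript
// Rough framebuffer budget for the PIC32MZ DA's 32 MB of DRAM.
// Illustrative arithmetic only, not figures from the datasheet.
const width = 1280;        // SXGA
const height = 1024;
const bytesPerPixel = 3;   // 24-bit color

const frameBytes = width * height * bytesPerPixel; // 3,932,160 bytes
const frameMB = frameBytes / (1024 * 1024);        // 3.75 MB

console.log(`single buffer: ${frameMB.toFixed(2)} MB`);
console.log(`double buffer: ${(2 * frameMB).toFixed(2)} MB of the 32 MB DRAM`);
```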

Neural Networks: You’ve Got It So Easy

Neural networks are all the rage right now with increasing numbers of hackers, students, researchers, and businesses getting involved. The last resurgence was in the 80s and 90s, when there was little or no World Wide Web and few neural network tools. The current resurgence started around 2006. From a hacker’s perspective, what tools and other resources were available back then, what’s available now, and what should we expect for the future? For myself, a GPU on the Raspberry Pi would be nice.

Hallucinating Machines Generate Tiny Video Clips

Hallucination is the erroneous perception of something that’s actually absent – or, in other words, a possible interpretation of training data. Researchers from MIT and UMBC have developed and trained a generative machine-learning model that learns to generate tiny videos at random. The hallucination-like clips, just 64×64 pixels, are somewhat plausible, but also a bit spooky.

The machine-learning model behind these artificial clips is capable of learning from unlabeled “in-the-wild” training videos and relies mostly on the temporal coherence of subsequent frames as well as the presence of a static background. It learns to disentangle foreground objects from the background and extracts the overall dynamics from the scenes. The trained model can then be used to generate new clips at random (as shown above), or from a static input image (as shown in pairs below).
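
If the disentangling sounds abstract, the compositing step at the heart of it is simple: the generator produces a moving foreground, a static background, and a soft mask that decides, pixel by pixel, which one shows through. The sketch below is our own illustration of that blend, written in TypeScript with made-up frame types rather than taken from the team’s Torch7 code:

```typescript
// Illustrative compositing step for the two-stream idea described above.
type Frame = Float32Array; // 64*64 pixel values in [0, 1]

function composite(foreground: Frame[], mask: Frame[], background: Frame): Frame[] {
  return foreground.map((fgFrame, t) => {
    const out = new Float32Array(fgFrame.length);
    for (let i = 0; i < fgFrame.length; i++) {
      const m = mask[t][i]; // 1 = moving foreground, 0 = static background
      out[i] = m * fgFrame[i] + (1 - m) * background[i];
    }
    return out;
  });
}
```

In the real model those three pieces come out of the trained generator; here they are just plain arrays to show how the blend produces a clip with a fixed backdrop and moving subjects.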

Currently, the team limits the clips to a resolution of 64×64 pixels and a length of 32 frames in order to reduce the amount of training data required, which still comes to 7 TB. Despite obvious deficiencies in terms of photorealism, the little clips were judged “more realistic” than real clips by about 20 percent of the participants in a psychophysical study the team conducted. The code for the project (Torch7/LuaJIT) can already be found on GitHub, together with a pre-trained model. The project will also be shown in December at the 2016 NIPS conference.

Peering Inside the GPU Black Box

Researchers at Binghamton University have built their own graphics processing unit (GPU) that can be flashed into an FPGA. While “graphics” is in the name, this GPU design aims to provide a general-purpose computing peripheral, a GPGPU testbed. Of course, that doesn’t mean that you can’t play Quake (slowly) on it.

The Binghamton crew’s design is not only open, but easily modifiable. It’s a GPGPU where you not only know what’s going on inside the silicon, but also have open-source drivers and interfaces. As Prof. [Timothy Miller] says,

 It was bad for the open-source community that GPU manufacturers had all decided to keep their chip specifications secret. That prevented open source developers from writing software that could utilize that hardware. With contributions from the ‘open hardware’ community, we can incorporate more creative ideas and produce an increasingly better tool.

That’s where you come in. [Jeff Bush], a member of the team, has a great blog with a detailed walk-through of a known GPU design. All of the Verilog and C++ code is up on [Jeff]’s GitHub, including documentation.

If you’re interested in the deep magic that goes on inside GPUs, here’s a great way to peek inside the black box.

DOE Announces a High Performance Computing Fortran Compiler Agreement

The U.S. Department of Energy’s National Nuclear Security Administration (NNSA) and its three national labs this week announced they have reached an agreement for an open-source Fortran front-end for High Performance Computing (HPC). The agreement is with IBM? Microsoft? Google? Nope, the agreement is with NVIDIA, a company known for making graphics cards for gamers.

The heart of a graphics card is the graphics processing unit (GPU), which is an extremely powerful computing engine. It actually has more raw horsepower than the computer’s CPU, although not as much as many claim. A number of years ago, NVIDIA branched into providing compiler toolsets for its GPUs; the obvious goal is to drive sales. NVIDIA will use its existing Fortran compiler as a starting point and integrate it with the existing LLVM compiler infrastructure. That Fortran, it just keeps chugging along.

You can try out GPU programming on your Raspberry Pi. Yup, even it has one, a Broadcom VideoCore. Just follow the directions from Raspberry Pi Playground. You’re going to get your hands dirty with assembly language, so this is not for the faint-hearted. One of the big challenges with GPUs is exchanging data with them, which gets into DMA processing. You could also take a look at [Pete Warden]’s work on using the Pi’s GPU.

Still wondering about the performance of CPU vs GPU? Here’s Adam Savage taking a look…
