[Chris]’s build starts with some extruded aluminum and a handful of GPUs. He wanted to build something that didn’t take up too much space in his small apartment. Once the main computer was in place, each GPU was mounted upright in the rack, with each set getting its own dedicated fan. After adding a fan controller and some plexiglass, the rig was up and running, although [Chris] did have to finagle the software a little bit to get all of the GPUs to work properly.
While this build did use some tools that might only be available at a makerspace, like a mill and a 3D printer, the hardware is still within reach of anyone with a little cash burning a hole in their pocket. And if Ethereum keeps going up in value like it has been since the summer, the rig might eventually pay for itself, provided your electric utility doesn’t charge too much for power.
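Whether a rig like this ever breaks even comes down to simple arithmetic: coin revenue versus the electricity bill. Here is a rough sketch of that math; every number below is an illustrative assumption, not a figure from [Chris]’s build.

```python
# Back-of-the-envelope mining economics. All figures are assumed
# placeholders; plug in your own rig's draw, payout, and power price.

RIG_POWER_W = 800            # assumed total draw of GPUs plus fans
ELECTRICITY_USD_PER_KWH = 0.12
ETH_EARNED_PER_DAY = 0.01    # assumed payout for the rig's hash rate
ETH_PRICE_USD = 300.0        # spot price when you run the numbers
HARDWARE_COST_USD = 2500.0

daily_power_cost = RIG_POWER_W / 1000 * 24 * ELECTRICITY_USD_PER_KWH
daily_revenue = ETH_EARNED_PER_DAY * ETH_PRICE_USD
daily_profit = daily_revenue - daily_power_cost

if daily_profit > 0:
    print(f"Pays for itself in roughly {HARDWARE_COST_USD / daily_profit:.0f} days")
else:
    print("The utility company wins; the rig never breaks even")
```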
We keep seeing more and more TensorFlow neural network projects. We also keep seeing more and more things running in the browser. You don’t have to be Mr. Spock to see this one coming. TensorFire runs neural networks in the browser and claims that WebGL allows it to run as quickly as it would on the user’s desktop computer. The main page is a demo that stylizes images, but if you want more detail you’ll probably want to visit the project page instead. You might also enjoy the video from one of the creators, [Kevin Kwok], below.
TensorFire has two parts: a low-level language for writing massively parallel WebGL shaders that operate on 4D tensors, and a high-level library for importing models from Keras or TensorFlow. The authors claim it will work on any GPU and, in some cases, will actually be faster than running native TensorFlow.
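The import path starts with an ordinary Keras or TensorFlow model. Below is a minimal sketch of that side of the workflow, assuming a tiny, made-up Keras classifier saved to disk; whether TensorFire’s importer reads the HDF5 file directly or needs a conversion step isn’t shown here.

```python
# Train and save a tiny Keras model; a browser-side importer would then
# pull the weights into WebGL textures. The model is a made-up example,
# not one of the TensorFire demos.
import numpy as np
from keras.models import Sequential
from keras.layers import Conv2D, Flatten, Dense

model = Sequential([
    Conv2D(8, (3, 3), activation='relu', input_shape=(64, 64, 3)),
    Flatten(),
    Dense(10, activation='softmax'),
])
model.compile(optimizer='adam', loss='categorical_crossentropy')

# Dummy data just to produce trained weights for the demo.
x = np.random.rand(32, 64, 64, 3).astype('float32')
y = np.eye(10)[np.random.randint(0, 10, 32)]
model.fit(x, y, epochs=1, verbose=0)

model.save('tiny_classifier.h5')  # saved weights for a browser-side loader
```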
The documentation is a bit sparse but readable. You simply define the function you want to execute and the dimensions of the problem; you can specify one, two, or three dimensions, as suits your problem space. When you execute the associated function, it will try to run the kernels on your GPU in parallel. If it can’t, you’ll still get the right answer, just slowly.
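Independent of TensorFire’s actual JavaScript API, the general pattern looks something like the sketch below. The `run_kernel` helper and its arguments are hypothetical, meant only to illustrate the idea of a per-element function applied over an N-dimensional index space with a plain sequential fallback.

```python
# Hypothetical illustration of the kernel-dispatch idea. A real WebGL
# backend would compile the kernel into a shader; the fallback path here
# is just an ordinary loop over the index grid.
import numpy as np

def run_kernel(kernel, shape):
    """Apply `kernel(index) -> float` at every point of an up-to-3D grid."""
    out = np.empty(shape, dtype=np.float32)
    for idx in np.ndindex(*shape):   # sequential fallback path
        out[idx] = kernel(idx)
    return out

# Example: a 2D problem that squares the sum of its coordinates.
result = run_kernel(lambda idx: float((idx[0] + idx[1]) ** 2), (4, 4))
print(result)
```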
Even if we don’t quite understand what’s happening in a Bitcoin mine, we all pretty much know what’s needed to set one up. Racks of GPUs and specialized software will eventually find a few of these vanishingly rare virtual treasures, but if you have enough time, even a Xerox Alto from 1973 can be turned into a Bitcoin mine. As for how much time it’ll take [Ken Shirriff]’s rig to find a Bitcoin, let’s just say that his Alto would need to survive the heat death of the universe. About 5000 times. And it would take the electricity generated by a small country to do it.
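For a sense of where numbers like that come from: finding a block takes, on average, difficulty × 2^32 hash attempts, so the expected time is just that count divided by your hash rate. Here is a quick sketch with stand-in figures in the Alto’s general ballpark; neither number is [Ken]’s exact measurement.

```python
# Expected time for a very slow miner to find one block. The difficulty
# and hash-rate values are illustrative stand-ins.
SECONDS_PER_YEAR = 365.25 * 24 * 3600

difficulty = 1e12    # assumed network difficulty
hash_rate = 1.5      # hashes per second, roughly Alto territory

expected_hashes = difficulty * 2**32
expected_years = expected_hashes / hash_rate / SECONDS_PER_YEAR
print(f"Expected time to find a block: {expected_years:.2e} years")
```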
Even though it’s not exactly a profit center, it gives [Ken] a chance to show off his lovingly restored Alto. The Xerox machine is the granddaddy of all modern PCs, having introduced almost every aspect of the GUI world we live in. But with a processor built from discrete TTL chips and an instruction set that doesn’t even have logical OR or XOR functions, the machine isn’t exactly optimized for SHA-256 hashing. The fact that [Ken] was able to implement a mining algorithm at all is impressive, and his explanation of how Bitcoin mining is done is quite clear and a great primer for cryptocurrency newbies.
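The operation [Ken] had to squeeze onto the Alto is simple to state: hash a candidate block header twice with SHA-256 and check whether the result, read as a number, falls below the network’s target. A minimal sketch of that inner loop follows; the header bytes and target are made up and far easier than the real network’s.

```python
# One round of the brute-force search at the heart of Bitcoin mining:
# double SHA-256 over an 80-byte header whose nonce field is incremented
# until the hash falls below the target. Header contents are placeholders.
import hashlib
import struct

header_prefix = b'\x00' * 76   # version/prev-hash/merkle/time/bits stand-in
target = 2**240                # toy target, vastly easier than the real one

for nonce in range(1_000_000):
    header = header_prefix + struct.pack('<I', nonce)
    digest = hashlib.sha256(hashlib.sha256(header).digest()).digest()
    if int.from_bytes(digest[::-1], 'big') < target:
        print(f"Share found at nonce {nonce}: {digest[::-1].hex()}")
        break
```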
When it comes to displays, there is a gap between a traditional microcontroller and a Linux system-on-a-chip (SoC). The SoC that lives in a smartphone will always have enough RAM for a framebuffer and usually has a few pins dedicated to an LCD interface. Today, Microchip has announced a microcontroller that blurs the lines between what can be done with an SoC and what can be done with a microcontroller. The PIC32MZ ‘DA’ family of microcontrollers is designed for graphics applications and comes with a boatload of RAM and a dedicated GPU.
The headline features are exactly that RAM and GPU. The PIC32MZ DA family includes packages with 32 MB of integrated DRAM designed to be used as a framebuffer, with support for 24-bit color on SXGA (1280 × 1024) panels. There’s also a 2D GPU with support for sprites, blitting, alpha blending, line drawing, and rectangle fills. No, it can’t play Crysis (just to get that meme out of the way), but it is an excellent platform for GUIs.
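Some quick arithmetic shows why 32 MB is comfortable for that job:

```python
# How much of the 32 MB of on-die DRAM an SXGA framebuffer actually needs.
width, height = 1280, 1024
bytes_per_pixel = 3                 # 24-bit color
frame_bytes = width * height * bytes_per_pixel

dram_bytes = 32 * 1024 * 1024
print(f"One frame: {frame_bytes / 2**20:.2f} MiB")          # about 3.75 MiB
print(f"Frames that fit in 32 MB: {dram_bytes // frame_bytes}")  # 8
# Double buffering plus sprite assets still leaves most of the DRAM free.
```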
Neural networks are all the rage right now with increasing numbers of hackers, students, researchers, and businesses getting involved. The last resurgence was in the 80s and 90s, when there was little or no World Wide Web and few neural network tools. The current resurgence started around 2006. From a hacker’s perspective, what tools and other resources were available back then, what’s available now, and what should we expect for the future? For myself, a GPU on the Raspberry Pi would be nice.
Hallucination is the erroneous perception of something that is actually absent, or, in other words, a possible interpretation of training data. Researchers from MIT and UMBC have developed and trained a generative machine-learning model that learns to produce tiny videos at random. The hallucination-like 64×64 pixel clips are somewhat plausible, but also a bit spooky.
The machine-learning model behind these artificial clips is capable of learning from unlabeled “in-the-wild” training videos and relies mostly on the temporal coherence of subsequent frames as well as the presence of a static background. It learns to disentangle foreground objects from the background and extracts the overall dynamics from the scenes. The trained model can then be used to generate new clips at random (as shown above), or from a static input image (as shown in pairs below).
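That foreground/background split boils down to blending two streams with a learned mask while the background stays fixed across time. Below is a rough sketch of just the compositing step; the shapes and array names are illustrative, not the team’s actual Torch code.

```python
# Compositing step of a two-stream video generator: a moving foreground,
# a per-pixel/per-frame mask, and a single static background repeated
# across time. Shapes are (frames, height, width, channels).
import numpy as np

frames, h, w, c = 32, 64, 64, 3

foreground = np.random.rand(frames, h, w, c)   # stand-in for the dynamic stream
mask = np.random.rand(frames, h, w, 1)         # values in [0, 1], learned in practice
background = np.random.rand(1, h, w, c)        # one static frame

video = mask * foreground + (1.0 - mask) * np.repeat(background, frames, axis=0)
print(video.shape)  # (32, 64, 64, 3)
```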
Currently, the team limits the clips to a resolution of 64×64 pixels and a length of 32 frames in order to reduce the amount of required training data, which still comes to 7 TB. Despite obvious deficiencies in photorealism, the little clips were judged “more realistic” than real clips by about 20 percent of the participants in a psychophysical study the team conducted. The code for the project (Torch7/LuaJIT) can already be found on GitHub, together with a pre-trained model. The project will also be shown in December at the 2016 NIPS conference.