TensorFlow Lite demos

Smarter Phones In Your Hacks With TensorFlow Lite

One way to run a compute-intensive neural network on a hack has been to put a decent laptop onboard. But wouldn’t it be great if you could go smaller and cheaper by using a phone instead? If your neural network was written using Google’s TensorFlow framework, you’ve had the option of using TensorFlow Mobile, but it doesn’t use any of the phone’s accelerated hardware, so it might not have been fast enough.

TensorFlow Lite architecture

Google has just released a new solution, the developer preview of TensorFlow Lite for iOS and Android, and announced plans to support the Raspberry Pi 3. On Android, the bottom layer is the Android Neural Networks API, which makes use of the phone’s DSP, GPU, and/or any other specialized hardware to speed up computations. Failing that, it falls back on the CPU.

Currently, fewer operators are supported than with TensorFlow Mobile, but more will be added. (Most of what you do in TensorFlow is done through operators, or ops. See our introduction to TensorFlow article if you need a refresher on how TensorFlow works.) The Lite version is intended to be the successor to Mobile. As with Mobile, you’d only do inference on the device. That means you’d train the neural network elsewhere, perhaps on a GPU-rich desktop or on a GPU farm over the network, and then make use of the trained network on your device.
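To make the train-elsewhere, infer-on-device split concrete, here’s a minimal sketch of the export side in Python. It assumes a current TensorFlow 2.x install with the built-in tf.lite converter (the 2017 developer preview shipped its own converter tooling instead), and the toy model and file names are just placeholders.

```python
import tensorflow as tf

# Train (or load) a model on a desktop machine with a beefy GPU.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(10,)),
    tf.keras.layers.Dense(2, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
# model.fit(train_x, train_y, epochs=10)   # training happens off-device

# Convert the trained network to the flatbuffer format TensorFlow Lite expects.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()

with open("model.tflite", "wb") as f:
    f.write(tflite_model)   # ship this file to the phone, which only runs inference
```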

What are we envisioning here? How about replacing the MacBook Pro on the self-driving RC cars we’ve talked about with a much smaller, lighter and less power-hungry Android phone? The phone even has a camera and an IMU built-in, though you’d need a way to talk to the rest of the hardware in lieu of GPIO.

You can try out TensorFlow Lite fairly easily by going to their GitHub and downloading a pre-built binary. We suspect that’s what was done to produce the first of the demonstration videos below.

Continue reading “Smarter Phones In Your Hacks With TensorFlow Lite”

Interactive Visual Programming With Vvvv

Did you ever feel the urge to turn the power of image processing and OCR into music? Maybe you wanted to use motion capture to illustrate the dynamic movement of a kung-fu master in stunning images like the one above? Both projects were created with the same software.

vvvv (pronounced ‘four vee’, ‘vee four’, and sometimes even ‘veeveeveevee’) calls itself ‘a multi purpose framework’, which is as vague and correct as calling a computer ‘a device that performs calculations’. What can it do, and what does the framework look like? I’d like to show you.

Since its first release in 1998 the project has never officially left beta. This doesn’t mean the recent beta releases are unstable; it’s just that the people behind vvvv refrain from declaring their software ‘finished’. It also provides an excuse for some quirks, such as requiring 7-Zip to unpack the binaries and a UI that takes some getting used to. vvvv requires DirectX and as such is limited to Windows.

With the bad stuff out of the way, let’s take a look at what vvvv can do. First, as implied by its close relationship with DirectX, it’s really good at producing graphics. An example of interactive video is embedded below the break. With its dataflow/visual programming approach it also lends itself to rapid prototyping and live coding. Modifications to a patch, as programs are called in this context, immediately affect the output.

The name ‘patch’ harks back to the days of analog synthesizers, and working with vvvv indeed has some similarities with signal processing that will make the DSP nerds among you feel right at home.

Continue reading “Interactive Visual Programming With Vvvv”

Solving Mazes With Graphics Cards

What if we told you that you likely have more computers than you think? And we are not talking about things that are computers while not looking like one, like most modern cars or certain lightbulbs. We are talking about the powerful machine hiding in your desktop computer called a ‘graphics card’. In an ordinary gaming rig, a graphics card that is much more powerful than the machine it’s built into is a common occurrence. In his tutorial, [Viktor Chlumský] demonstrates how to harness your GPU’s power to solve a maze.

Software that runs on a GPU is called a shader. In this example, a shader finds its way through a maze. We also get a glimpse of the limitations that make this field of software special: [Viktor]’s solution has to work with only four variables, because all information is stored in the red, green, blue, and alpha channels of an image. The alpha channel represents the boundaries of the maze. The red and green channels are used to broadcast waves from the beginning and end points of the maze. Where these two waves meet lies the shortest solution, which is captured in the blue channel.
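[Viktor]’s actual shader runs this logic once per pixel, per frame, on the GPU; the sketch below is only an illustrative CPU-side re-creation of the wave idea in Python/NumPy, with the maze, start, and end points made up for the example.

```python
import numpy as np

def step(wave, walls):
    """Grow a wavefront by one cell in the four cardinal directions,
    refusing to enter wall cells (the maze's 'alpha channel')."""
    grown = wave.copy()
    grown[1:, :]  |= wave[:-1, :]   # spread down
    grown[:-1, :] |= wave[1:, :]    # spread up
    grown[:, 1:]  |= wave[:, :-1]   # spread right
    grown[:, :-1] |= wave[:, 1:]    # spread left
    return grown & ~walls

# A tiny made-up maze: 1 = wall, 0 = open corridor.
walls = np.array([
    [0, 1, 0, 0, 0],
    [0, 1, 0, 1, 0],
    [0, 1, 0, 1, 0],
    [0, 0, 0, 1, 0],
    [1, 1, 0, 1, 0],
], dtype=bool)

red = np.zeros_like(walls)          # wave spreading from the entrance
green = np.zeros_like(walls)        # wave spreading from the exit
red[0, 0] = True
green[4, 4] = True

# The shader runs this update for every pixel in parallel, once per frame;
# here we simply loop until the two wavefronts touch (or we give up).
for _ in range(walls.size):
    if (red & green).any():
        break
    red, green = step(red, walls), step(green, walls)

blue = red & green                  # meeting cells lie on a shortest path
print(np.argwhere(blue))
```

Written as a fragment shader, the same update would read neighboring texels instead of slicing arrays, which is why the four color channels are all the storage there is.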

Despite the GPU having tons of cores and plenty of memory, programming shaders feels a lot like working on microcontrollers. See for yourself in the maze-solving walkthrough below.

Continue reading “Solving Mazes With Graphics Cards”

Cryptocurrency Mining Post-Bitcoin

While the age of using your own computer to mine Bitcoin during spare CPU cycles has long passed, average folks aren’t entirely shut out of the cryptocurrency game yet. Luckily, Bitcoin isn’t the only game in town anymore, and with GPUs coming down in price it’s possible to build a mining rig for other currencies like Ethereum.

[Chris]’s build starts with some extruded aluminum and a handful of GPUs. He wanted to build something that didn’t take up too much space in his small apartment. Once the main computer was installed, each GPU was mounted upright in the rack, with each set getting its own dedicated fan. After installing a fan controller and some plexiglass, the rig was up and running, although [Chris] did have to finagle the software a little bit to get all of the GPUs to work properly.

While this build did use some tools that might only be available at a makerspace, like a mill and a 3D printer, the hardware is still within reach for someone with a little cash burning a hole in their pocket. And, if Ethereum keeps going up in value like it has been since the summer, the rig might pay for itself eventually, provided that your electric utility doesn’t charge too much for power.

And if you missed it, we just ran a feature on Ethereum. Check it out.

Neural Nets In The Browser: Why Not?

We keep seeing more and more TensorFlow neural network projects. We also keep seeing more and more things running in the browser. You don’t have to be Mr. Spock to see this one coming. TensorFire runs neural networks in the browser and claims that WebGL allows it to run as quickly as it would on the user’s desktop computer. The main page is a demo that stylizes images, but if you want more detail you’ll probably want to visit the project page instead. You might also enjoy the video from one of the creators, [Kevin Kwok], below.

TensorFire has two parts: a low-level language for writing massively parallel WebGL shaders that operate on 4D tensors, and a high-level library for importing models from Keras or TensorFlow. The authors claim it will work on any GPU and, in some cases, will actually be faster than running native TensorFlow.
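The import side belongs to TensorFire itself and isn’t reproduced here; the snippet below only shows the generic Keras export a browser-side importer of that era would typically consume, with the architecture and trained weights saved as separate files. The model and file names are placeholders standing in for a real style-transfer network.

```python
import tensorflow as tf

# Placeholder model standing in for a real style-transfer network.
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(8, 3, activation="relu", input_shape=(256, 256, 3)),
    tf.keras.layers.Conv2D(3, 3, activation="sigmoid"),
])

# Export the architecture and the weights separately.
with open("stylize.json", "w") as f:
    f.write(model.to_json())
model.save_weights("stylize_weights.h5")
```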

Continue reading “Neural Nets In The Browser: Why Not?”

Using The GPU From JavaScript

Everyone knows that writing programs that exploit the GPU (Graphics Processing Unit) in your computer’s video card requires special arcane tools, right? Well, thanks to [Matthew Saw], [Fazil Sapuan], and [Cheah Eugene], perhaps not. At a hackathon, they turned out a JavaScript library that allows you to create “kernel” functions to execute on the GPU of the target system. There’s a demo available with a benchmark which, on our machine, sped up a 512×512 calculation by well over five times. You can download the library from the same page. There’s also a GitHub page.

The documentation is a bit sparse but readable. You simply define the function you want to execute and the dimensions of the problem. You can specify one, two, or three dimensions, as suits your problem space. When you execute the associated function it will try to run the kernels on your GPU in parallel. If it can’t, it will still get the right answer, just slowly.

Continue reading “Using The GPU From JavaScript”

World’s Worst Bitcoin Mining Rig

Even if we don’t quite understand what’s happening in a Bitcoin mine, we all pretty much know what’s needed to set one up. Racks of GPUs and specialized software will eventually find a few of these vanishingly rare virtual treasures, but if you have enough time, even a Xerox Alto from 1973 can be turned into a Bitcoin mine. As for how much time it’ll take [Ken Shirriff]’s rig to find a Bitcoin, let’s just say that his Alto would need to survive the heat death of the universe. About 5000 times. And it would take the electricity generated by a small country to do it.

Even though it’s not exactly a profit center, it gives [Ken] a chance to show off his lovingly restored Alto. The Xerox machine is the granddaddy of all modern PCs, having introduced almost every aspect of the GUI world we live in. But with a processor built from discrete TTL chips and an instruction set that doesn’t even have logical OR or XOR functions, the machine isn’t exactly optimized for SHA-256 hashing. The fact that [Ken] was able to implement a mining algorithm at all is impressive, and his explanation of how Bitcoin mining is done is quite clear and a great primer for cryptocurrency newbies.
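For the cryptocurrency newbies, the work the Alto is grinding through boils down to the loop below, sketched here in Python rather than Alto microcode: double-SHA-256 an 80-byte block header over and over with a changing nonce until the result, read as a 256-bit number, falls below the network target. The header field values are placeholders, not a real block.

```python
import hashlib
import struct

def double_sha256(data: bytes) -> bytes:
    """Bitcoin hashes everything twice through SHA-256."""
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

# An 80-byte header: version + previous hash + merkle root + time + bits + nonce.
# These values are placeholders, not a real block.
version     = struct.pack("<I", 2)
prev_hash   = bytes(32)
merkle_root = bytes(32)
timestamp   = struct.pack("<I", 1_500_000_000)
bits        = struct.pack("<I", 0x1D00FFFF)

# The difficulty-1 target; real blocks must beat a far smaller number.
target = 0xFFFF << 208

for nonce in range(2**32):
    header = version + prev_hash + merkle_root + timestamp + bits + struct.pack("<I", nonce)
    digest = double_sha256(header)
    # Bitcoin compares the hash as a little-endian 256-bit integer.
    if int.from_bytes(digest, "little") < target:
        print("found:", nonce, digest[::-1].hex())
        break
```

Even at this minimum difficulty a hit takes about four billion tries on average, which is exactly why 1973-era hardware is hopeless at the real thing.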

[Ken] seems to enjoy sending old computer hardware to the Bitcoin mines — he made an old IBM mainframe perform the trick a while back. But if you don’t have a room-size computer around, perhaps reading up on alternate uses for the block chain would be a good idea.

[via Dangerous Prototypes]