OpenGL Machine Learning Runs On Low-End Hardware

If you’ve looked into GPU-accelerated machine learning projects, you’re certainly familiar with NVIDIA’s CUDA architecture. It also follows that you’ve checked the prices online, and know how expensive it can be to get a high-performance video card that supports this particular brand of parallel programming.

But what if you could run machine learning tasks on a GPU using nothing more exotic than OpenGL? That’s what [lnstadrum] has been working on for some time now, as it would allow devices as meager as the original Raspberry Pi Zero to run tasks like image classification far faster than they could using their CPU alone. The trick is to break down your computational task into something that can be performed using OpenGL shaders, which are generally meant to push video game graphics.
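To make that a bit more concrete, here’s a minimal sketch of the technique. It’s not code from the project itself, and the uniform names are ours, but it shows the general shape: the input data is packed into a texture, a full-screen quad gets drawn, and a GLES 2.0 fragment shader does the per-pixel math — in this case, a 3×3 convolution of the sort you’d find in a convolutional neural network layer.

```cpp
// A GLES 2.0 fragment shader that applies a 3x3 convolution to a texture.
// It's run by drawing a full-screen quad into a framebuffer; the "computed"
// result is then read back or fed into the next shader pass. Names like
// u_input, u_texelSize, and u_kernel are illustrative, not Beatmup's own.
const char* kConvFragmentShader = R"(
precision mediump float;
uniform sampler2D u_input;   // input feature map packed into a texture
uniform vec2 u_texelSize;    // 1.0 / texture dimensions
uniform float u_kernel[9];   // 3x3 convolution weights
varying vec2 v_texCoord;     // interpolated from the quad's vertices

void main() {
    vec4 sum = vec4(0.0);
    // Constant loop bounds and index expressions keep this legal in
    // GLSL ES 1.00, which is all a GLES 2.0 device guarantees.
    for (int dy = -1; dy <= 1; dy++) {
        for (int dx = -1; dx <= 1; dx++) {
            vec2 offset = vec2(float(dx), float(dy)) * u_texelSize;
            sum += texture2D(u_input, v_texCoord + offset)
                 * u_kernel[(dy + 1) * 3 + (dx + 1)];
        }
    }
    gl_FragColor = sum;  // one output pixel of the convolved feature map
}
)";
```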

An example of X2’s neural net upscaling.

[lnstadrum] explains that OpenGL releases from the last decade or so — OpenGL 4.3 on the desktop and OpenGL ES 3.1 on mobile, specifically — include so-called compute shaders designed for running arbitrary code on the GPU. Unfortunately that’s not an option on boards like the Pi Zero, which only supports the OpenGL for Embedded Systems (GLES) 2.0 standard from 2007.
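For comparison, on hardware that does support GLES 3.1, the same sort of per-element work can be expressed directly as a compute shader, with no quad-drawing tricks required. Here’s a rough sketch of what that looks like — again, illustrative rather than lifted from the project:

```cpp
// A GLES 3.1 compute shader: arbitrary code dispatched over a 2D grid,
// reading and writing images directly. This is exactly what GLES 2.0
// devices like the Pi Zero lack, forcing the fragment-shader workaround.
const char* kComputeShader = R"(
#version 310 es
layout(local_size_x = 8, local_size_y = 8) in;
layout(rgba8, binding = 0) uniform readonly highp image2D u_input;
layout(rgba8, binding = 1) uniform writeonly highp image2D u_output;

void main() {
    ivec2 p = ivec2(gl_GlobalInvocationID.xy);
    vec4 v = imageLoad(u_input, p);
    imageStore(u_output, p, v * 0.5);  // any per-element math goes here
}
)";
// Launched from the host side with, e.g.:
//   glDispatchCompute(width / 8, height / 8, 1);
```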

Constructing the neural net so it would run on these more constrained platforms was much more difficult, but the end result has far wider applicability to show for it. During tests, both the Raspberry Pi Zero and several older Android smartphones were able to run a pre-trained image classification model at a respectable rate.

This isn’t just some thought experiment: [lnstadrum] has released an image processing framework called Beatmup built on these concepts that you can play around with right now. The C++ library has Java and Python bindings, and according to the documentation, should run on pretty much anything. Included in the framework is a simple tool called X2 which can perform AI image upscaling on everything from your laptop’s integrated video card to the Raspberry Pi, making it a great way to check out this fascinating application of machine learning.

Truth be told, we’re a bit late to the party on this one, as Beatmup made its first public release back in April of this year. It might have flown under the radar until now, but we think there’s a lot of potential in this project, and we hope to see more of it once word gets out about the impressive results it can wring out of even the lowliest hardware.

[Thanks to Ishan for the tip.]

Testing VR Limits With A Raspberry Pi


Virtual reality, by its very nature, pushes the boundaries of what we perceive as existence, tricking the mind into believing that the computer-generated environment the user is thrust into is a real place. So in the spirit of seeing what’s possible in VR, a developer named [Jacques] hooked up a Raspberry Pi to an Oculus Rift. Using OpenGL ES, the graphics rendering API found on just about every mobile platform these days, he rendered a floating, rotating cube.

All his tests were done on a Release build using the official vertex and fragment shaders, with no attempt to optimize anything; not that there would be much to do anyway. Rendering the scene twice, once per eye, took 16 milliseconds per frame. Adding a texture pushed that to 27 ms per frame, with subsequent tests coming in at 36 ms and then 45 ms.
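That “rendered twice” is the heart of any Rift-style renderer: the same scene is drawn once per eye into opposite halves of the display, with the camera nudged sideways by roughly the distance between your pupils. Here’s a rough sketch of what that inner loop looks like — the scene helpers are hypothetical placeholders, not [Jacques]’s actual code:

```cpp
#include <GLES2/gl2.h>

// Hypothetical helpers standing in for the app-specific scene code.
void setEyeView(float eyeOffsetX);  // rebuilds the view matrix for one eye
void drawCube();                    // binds buffers/shaders and draws

// Per-frame stereo rendering, GLES 2.0 style: the same scene is drawn
// twice, once per eye, into the left and right halves of the display.
void renderStereoFrame(int screenWidth, int screenHeight, float ipd) {
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    // Left eye: camera shifted half the inter-pupillary distance left.
    glViewport(0, 0, screenWidth / 2, screenHeight);
    setEyeView(-ipd * 0.5f);
    drawCube();

    // Right eye: same scene, camera shifted right.
    glViewport(screenWidth / 2, 0, screenWidth / 2, screenHeight);
    setEyeView(+ipd * 0.5f);
    drawCube();
}
```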

The code used can be found on [Jacques]’s GitHub account. A simple improvement would be using a Banana Pi for better processing speed. However, don’t expect any spectacular results with this type of setup; really, the project only proves that it’s possible to shrink a VR experience down to something that could become portable. In the same vein, the Pi and Oculus combination can produce an uncomfortable lagging effect if things are not lined up properly.

But once the energy and computing power issues are addressed, VR devices could transform into a more fashionable product like Google Glass, where a simple flip of a switch would toggle the view between VR, AR, and something more mixed. And then a motion-sensing camera like the one in this Kinect-mapping space experiment could allow people all over the world to jump into the perspectives of other reality-pushing explorers. That’s all far down the line, of course, but this project lays the foundation for what the future might hold.

To see [Jacques]’s full set up, view the video after the break.

Continue reading “Testing VR Limits With A Raspberry Pi”