OpenGL In 500 Lines (Sort Of…)

How difficult is OpenGL? How difficult can it be if you can build a basic renderer in 500 lines of code? That’s what [Dmitry] did as part of a series of tiny applications. The renderer is part of a course, and the 500-line limit is there so students can realistically build their own rendering software from scratch. [Dmitry] feels that you can’t write efficient code for things like OpenGL without first understanding how they work.

For educational purposes, the system uses few external dependencies. Students get a class that can work with TGA format files and a way to set the color of a single pixel. The rest of the renderer is up to the student, guided by nine lessons ranging from Bresenham’s algorithm to ambient occlusion. One of the last lessons switches gears to OpenGL so you can see how it all applies.
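For a flavor of what the first lesson asks for, here’s a minimal, self-contained sketch of Bresenham’s line algorithm built on nothing but a set-pixel primitive. The ASCII framebuffer and set_pixel() below are stand-ins for the TGA image class the course actually supplies:

```cpp
#include <cstdio>
#include <cstdlib>
#include <utility>

// Toy stand-in for the course's TGA image class: a tiny ASCII
// framebuffer, plus the one primitive students are given.
const int W = 40, H = 16;
char fb[H][W];

void set_pixel(int x, int y, char c) {
    if (x >= 0 && x < W && y >= 0 && y < H) fb[y][x] = c;
}

// Integer-only Bresenham: walk the major axis and let an accumulated
// error term decide when to step along the minor axis.
void line(int x0, int y0, int x1, int y1, char c) {
    bool steep = std::abs(y1 - y0) > std::abs(x1 - x0);
    if (steep) { std::swap(x0, y0); std::swap(x1, y1); }   // iterate over y
    if (x0 > x1) { std::swap(x0, x1); std::swap(y0, y1); } // left to right
    int dx = x1 - x0, dy = std::abs(y1 - y0);
    int error = dx / 2, ystep = (y0 < y1) ? 1 : -1, y = y0;
    for (int x = x0; x <= x1; x++) {
        steep ? set_pixel(y, x, c) : set_pixel(x, y, c);   // un-swap on write
        error -= dy;
        if (error < 0) { y += ystep; error += dx; }
    }
}

int main() {
    for (auto &row : fb) for (auto &px : row) px = '.';
    line(1, 1, 38, 10, '#');  // shallow slope
    line(5, 14, 12, 2, '*');  // steep slope
    for (auto &row : fb) { fwrite(row, 1, W, stdout); putchar('\n'); }
}
```

The appeal of the algorithm is that the loop only ever does integer adds and compares: the error term tracks how far the true line has drifted from the current row, and a sign flip means it’s time to step the minor axis.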

Continue reading “OpenGL In 500 Lines (Sort Of…)”

WebGPU… Better Than WebGL?

As browsers become more like operating systems, we are seeing deeper features built into them. For example, you can now run a form of assembly language in the browser. Sophisticated graphics have been possible using WebGL since around 2011, but some people find it hard to use. [Surma] was one of those people and tried a newer API that is just now surfacing to do the same job: WebGPU.

[Surma] liked it better and shares a lot of information in the post, which, oddly, doesn’t use WebGPU for graphics very much. Instead, the post focuses on using GPU cores for fast computation, something else you can do with WebGPU. If your goal is to draw on the screen, though, you need to know the basics, and the post links to a site with examples of doing just that.

Continue reading “WebGPU… Better Than WebGL?”

Firefox Brings The Fire: Shifting From GLX To EGL

You may (or may not) have heard that Firefox is moving from GLX to EGL for the Linux graphics stack. It’s an indicator of which way the tides are moving in the software world. Let’s look at what it means, why it matters, and why it’s cool.

A graphics stack is a complex system with many layers. But on Linux, there needs to be an interface between something like OpenGL and a windowing system like X11. X11 provides a fundamental framework for drawing and moving windows around a display, capturing user input, and determining focus, but little else. An X11 server is just a program that manages all the windows. Each window belongs to a client program, and clients connect to the server over a Unix domain socket or across the network.
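To make the client-server relationship concrete, here’s a minimal sketch of an X11 client using libxcb. Everything here is plain window management, with no OpenGL involved yet:

```cpp
#include <unistd.h>
#include <xcb/xcb.h>

// Minimal X11 client: connect to the server (over a Unix domain socket
// or TCP, depending on $DISPLAY), then ask it to create and show a window.
// Build with: g++ x11_client.cpp -lxcb
int main() {
    xcb_connection_t *conn = xcb_connect(nullptr, nullptr);
    if (xcb_connection_has_error(conn)) return 1;

    const xcb_setup_t *setup = xcb_get_setup(conn);
    xcb_screen_t *screen = xcb_setup_roots_iterator(setup).data;

    xcb_window_t win = xcb_generate_id(conn);
    xcb_create_window(conn, XCB_COPY_FROM_PARENT, win, screen->root,
                      0, 0, 320, 240, 1,  // x, y, width, height, border
                      XCB_WINDOW_CLASS_INPUT_OUTPUT, screen->root_visual,
                      0, nullptr);

    xcb_map_window(conn, win);  // request: show the window
    xcb_flush(conn);            // requests are buffered until flushed

    pause();  // a real client would loop on xcb_wait_for_event() here
    xcb_disconnect(conn);
}
```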

OpenGL focuses on what to draw within the confines of the screen space given by the window system. GLX (which stands for OpenGL Extension to the X Window System) was originally developed by Silicon Graphics. It has changed over the years, gaining hardware acceleration support and DRI (the Direct Rendering Infrastructure). DRI is a way for OpenGL to talk directly to the graphics hardware when the server and the client are on the same computer. At its core, GLX does three things: it provides OpenGL functions to X11 applications, it extends the X protocol so 3D rendering commands can be sent to the server, and it adds an X server extension that reads those rendering commands and passes them to OpenGL.
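In practice, the classic GLX bootstrap looks roughly like the sketch below, using the legacy glXChooseVisual path for clarity (modern code would go through GLX 1.3 FBConfigs):

```cpp
#include <GL/glx.h>
#include <X11/Xlib.h>
#include <unistd.h>

// Legacy-style GLX bootstrap: ask the X server for a GL-capable visual,
// create a window that uses it, then bind an OpenGL context to it.
// Build with: g++ glx_demo.cpp -lGL -lX11
int main() {
    Display *dpy = XOpenDisplay(nullptr);
    if (!dpy) return 1;

    int attribs[] = { GLX_RGBA, GLX_DEPTH_SIZE, 24, GLX_DOUBLEBUFFER, None };
    XVisualInfo *vi = glXChooseVisual(dpy, DefaultScreen(dpy), attribs);
    if (!vi) return 1;

    // The window must be created with the visual GLX picked for us.
    Colormap cmap = XCreateColormap(dpy, DefaultRootWindow(dpy),
                                    vi->visual, AllocNone);
    XSetWindowAttributes swa = {};
    swa.colormap = cmap;
    Window win = XCreateWindow(dpy, DefaultRootWindow(dpy), 0, 0, 320, 240, 0,
                               vi->depth, InputOutput, vi->visual,
                               CWColormap, &swa);
    XMapWindow(dpy, win);

    // With DRI, GL calls made through this context go straight to the
    // hardware rather than being funneled through the X server.
    GLXContext ctx = glXCreateContext(dpy, vi, nullptr, GL_TRUE);
    glXMakeCurrent(dpy, win, ctx);

    glClearColor(0.1f, 0.2f, 0.3f, 1.0f);
    glClear(GL_COLOR_BUFFER_BIT);
    glXSwapBuffers(dpy, win);  // double-buffered, so present the frame
    sleep(2);

    glXMakeCurrent(dpy, None, nullptr);
    glXDestroyContext(dpy, ctx);
    XCloseDisplay(dpy);
}
```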

EGL (Embedded-System Graphics Library) is a successor of GLX, but it started with a different environment in mind. Initially, the focus was embedded systems, and platforms such as Android, the Raspberry Pi, and BlackBerry lean heavily on EGL for their graphical needs. Eventually, Wayland settled on EGL as well, since GLX drags in X11 dependencies and EGL offers closer access to the hardware.
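For comparison, here’s a sketch of the EGL equivalent of the GLX bootstrap above, with no X11 anywhere in sight. This version renders off-screen to a pbuffer; a Wayland or Android client would wrap its native window with eglCreateWindowSurface() instead:

```cpp
#include <EGL/egl.h>

// EGL bootstrap: the same job as the GLX sequence, minus X11.
// Build with: g++ egl_demo.cpp -lEGL
int main() {
    EGLDisplay dpy = eglGetDisplay(EGL_DEFAULT_DISPLAY);
    eglInitialize(dpy, nullptr, nullptr);  // version out-params are optional

    EGLint attribs[] = { EGL_SURFACE_TYPE, EGL_PBUFFER_BIT,
                         EGL_RENDERABLE_TYPE, EGL_OPENGL_ES2_BIT, EGL_NONE };
    EGLConfig cfg;
    EGLint n;
    eglChooseConfig(dpy, attribs, &cfg, 1, &n);
    if (n < 1) return 1;

    // Off-screen surface for simplicity.
    EGLint pbuf[] = { EGL_WIDTH, 320, EGL_HEIGHT, 240, EGL_NONE };
    EGLSurface surf = eglCreatePbufferSurface(dpy, cfg, pbuf);

    eglBindAPI(EGL_OPENGL_ES_API);
    EGLint ctxAttribs[] = { EGL_CONTEXT_CLIENT_VERSION, 2, EGL_NONE };
    EGLContext ctx = eglCreateContext(dpy, cfg, EGL_NO_CONTEXT, ctxAttribs);

    eglMakeCurrent(dpy, surf, surf, ctx);
    // ... GLES 2.0 calls go here ...
    eglMakeCurrent(dpy, EGL_NO_SURFACE, EGL_NO_SURFACE, EGL_NO_CONTEXT);
    eglTerminate(dpy);
}
```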

When Martin Stránský initially added Wayland support to Firefox, he used EGL instead of GLX. Additionally, the Wayland implementation had zero-copy GPU buffer sharing via DMABUF (a Linux kernel subsystem for sharing buffers). Unfortunately, Firefox couldn’t turn on this WebGL performance improvement for X11 (the code existed but was never stable enough). Nevertheless, features kept coming, making Wayland (and consequently EGL) more of a first-class citizen. Now EGL will be enabled by default in Firefox 94+ with Mesa 21+ drivers (Mesa is an implementation of OpenGL, Vulkan, and other specifications that translates commands into instructions the GPU can understand).

Continue reading “Firefox Brings The Fire: Shifting From GLX To EGL”

OpenGL Machine Learning Runs On Low-End Hardware

If you’ve looked into GPU-accelerated machine learning projects, you’re certainly familiar with NVIDIA’s CUDA architecture. It also follows that you’ve checked the prices online, and know how expensive it can be to get a high-performance video card that supports this particular brand of parallel programming.

But what if you could run machine learning tasks on a GPU using nothing more exotic than OpenGL? That’s what [lnstadrum] has been working on for some time now, as it would allow devices as meager as the original Raspberry Pi Zero to run tasks like image classification far faster than they could using their CPU alone. The trick is to break down your computational task into something that can be performed using OpenGL shaders, which are generally meant to push video game graphics.

An example of X2’s neural net upscaling.

[lnstadrum] explains that OpenGL releases from the last decade or so actually include so-called compute shaders specifically for running arbitrary code. But unfortunately that’s not an option on boards like the Pi Zero, which only meets the OpenGL for Embedded Systems (GLES) 2.0 standard from 2007.
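The workaround, broadly speaking, is to use the fragment shader as the compute unit: pack the data into textures, render a full-screen quad into a framebuffer-attached texture, and let each fragment compute one output value. Here’s a rough sketch of what one such GLES 2.0 shader could look like (an illustration of the technique, not [lnstadrum]’s actual code), implementing a 3×3 convolution followed by a ReLU:

```cpp
// Sketch of "compute in a fragment shader" on GLES 2.0: input values
// live in a texture, one output value is computed per fragment, and the
// result is rendered into a texture that feeds the next layer.
const char *kConvReluShader = R"(
precision mediump float;
uniform sampler2D u_input;   // previous layer, packed into RGBA texels
uniform vec2 u_texelSize;    // 1.0 / texture dimensions
uniform mat3 u_kernel;       // 3x3 convolution weights
varying vec2 v_uv;           // passed through from a trivial vertex shader

void main() {
    vec4 sum = vec4(0.0);
    for (int dy = -1; dy <= 1; dy++) {
        for (int dx = -1; dx <= 1; dx++) {
            vec2 offs = vec2(float(dx), float(dy)) * u_texelSize;
            sum += texture2D(u_input, v_uv + offs) * u_kernel[dx + 1][dy + 1];
        }
    }
    gl_FragColor = max(sum, 0.0);  // ReLU activation
}
)";
```

Chain a few dozen passes like this, ping-ponging between two textures, and you have a surprisingly capable inference pipeline on hardware that has never heard of CUDA.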

Constructing the neural net in such a way that it would be compatible with these more constrained platforms was much more difficult, but the end result has far more interesting applications to show for it. During tests, both the Raspberry Pi Zero and several older Android smartphones were able to run a pre-trained image classification model at a respectable rate.

This isn’t just some thought experiment: [lnstadrum] has released an image processing framework called Beatmup using these concepts that you can play around with right now. The C++ library has Java and Python bindings and, according to the documentation, should run on pretty much anything. Included in the framework is a simple tool called X2 which can perform AI image upscaling on everything from your laptop’s integrated video card to the Raspberry Pi, making it a great way to check out this fascinating application of machine learning.

Truth be told, we’re a bit behind the ball on this one, as Beatmup made its first public release back in April of this year. It might have flown under the radar until now, but we think there’s a lot of potential for this project, and hope to see more of it once word gets out about the impressive results it can wring out of even the lowliest hardware.

[Thanks to Ishan for the tip.]

Hypnotic Visuals Synthesizer

Ever wanted to make some seriously trippy retro graphics to go along with your lo-fi hip hop beats? Now you can, with [teafella]’s aptly named Hypno Video Synthesizer, a Raspberry Pi-based video synthesizer that digitally emulates and extends analog video workflows through colorization, shape generation, and feedback, all patched through a compact interface. The device allows music creators to perform with live visuals, or to act as a unique visual source for a video setup. Once the CV input is plugged in, all it requires is a composite display and power to start working.

Hypno takes input through a control voltage (CV) jack using an MCP3008 ADC over SPI, with the incoming -5 V to +5 V signals scaled into the ADC’s 0 V to 5 V range. The device attaches on top of a Raspberry Pi, using Raspbian for the operating system and the Pi Zero’s GPIO to interface with an OpenGL engine. The knob positions are read through a multiplexer into a single channel of the ADC, with the values offset in software based on the CV inputs.
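A sketch of how such a CV read could work (an illustration, not [teafella]’s actual code), assuming an analog front end that shifts the ±5 V signal into the MCP3008’s 0-5 V range:

```cpp
#include <wiringPi.h>
#include <wiringPiSPI.h>
#include <cstdint>
#include <cstdio>

const int SPI_CHANNEL = 0;
const int SPI_SPEED_HZ = 1000000;

// Clock one sample out of the MCP3008 over SPI (adcChannel: 0..7).
int readMCP3008(int adcChannel) {
    uint8_t buf[3] = { 1,                                 // start bit
                       (uint8_t)((8 + adcChannel) << 4),  // single-ended mode
                       0 };
    wiringPiSPIDataRW(SPI_CHANNEL, buf, 3);
    return ((buf[1] & 3) << 8) | buf[2];                  // 10-bit result
}

int main() {
    wiringPiSetup();
    wiringPiSPISetup(SPI_CHANNEL, SPI_SPEED_HZ);
    for (;;) {
        int raw = readMCP3008(0);                       // 0..1023
        float volts = raw * (10.0f / 1023.0f) - 5.0f;   // undo the front end's offset/scale
        printf("CV in: %+.2f V\n", volts);
        delay(100);
    }
}
```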

Using the Hypno ends up being fairly straightforward, as the controls are organized into two mirrored sides for the two oscillators A and B, with global controls in the center. There are knobs that control polarization, rotation, shape, feedback modes (regular, hyper digital, zooming, rotating zoom), clock in/clock out, frequency, root hue, and master gain, as well as RGB LEDs that provide visual feedback.

A single jack outputs the composite result, although a micro-HDMI plug on the back can also be used. For advanced functionality, Hypno allows for patching, which mixes effects on top of one another and enables effects such as oscillator cross modulation. There are also alt-controls that open up self-modulation and other shapes. Examples include bipolar drift (smoothly scrolling the oscillator) and mirroring (mirroring the oscillator’s shape in different patterns for kaleidoscope-esque tiled madness).

The software is written in C++ and GLSL. The main engine renders a single plane in OpenGL, drawing the output of a GLSL shader across it, while the CV and knob inputs are fed into shader uniforms that drive the visuals.

[teafella], a self-professed Arduino user, uses WiringPi for the GPIO interactions. The shader system is inspired by analog video synthesis: every shape has a simulated “scan” over the screen with a function mapped to it, which can also be transformed into polar coordinates.
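Put together, the plumbing described above might look roughly like this sketch (not Hypno’s actual code): knob and CV values land in shader uniforms each frame, and the fragment shader scans the full-screen plane, optionally remapped into polar coordinates:

```cpp
#include <GLES2/gl2.h>

// Oscillator fragment shader: scan the plane with a periodic function,
// blending between a cartesian scan (follows x) and a polar one
// (follows the radius), which is what bends bars into rings.
const char *kOscillatorShader = R"(
precision mediump float;
uniform vec2  u_resolution;
uniform float u_time;      // clock
uniform float u_freq;      // frequency knob + CV
uniform float u_polarize;  // 0.0 = cartesian, 1.0 = polar

void main() {
    vec2 p = gl_FragCoord.xy / u_resolution - 0.5;
    float r = length(p);
    float a = atan(p.y, p.x);
    float scan = mix(p.x, r, u_polarize);
    float v = 0.5 + 0.5 * sin(6.2831 * (scan * u_freq - u_time) + a);
    gl_FragColor = vec4(v, v * 0.5, 1.0 - v, 1.0);
}
)";

// Per-frame update on the C++ side, with values freshly read from the ADC.
void updateUniforms(GLuint program, float t, float freq, float polarize) {
    glUseProgram(program);
    glUniform1f(glGetUniformLocation(program, "u_time"), t);
    glUniform1f(glGetUniformLocation(program, "u_freq"), freq);
    glUniform1f(glGetUniformLocation(program, "u_polarize"), polarize);
}
```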

The setup for Hypno is fully compatible with analog CV equipment such as Eurorack synthesizers, which makes it easy for music creators to plug and play. Here are a couple of sample outputs generated from soundtracks fed into Hypno:

Too many combinations to even imagine? Check out a demo of Hypno in action!

Continue reading “Hypnotic Visuals Synthesizer”

OpenGL Shaders And An LED Cube

Back in February at the Hacker Hotel camp in the Netherlands, among the many pieces of work around the venue was a rather attractive LED cube. Very pretty, but LED cubes have been done many times before.

If a casual attendee had taken the time to ask, though, they might have found something a little more interesting, for while the cube in question might have had the same hardware as the others, it certainly didn’t have the same software. [Polyfloyd] had equipped his LED cube with OpenGL shaders to map arbitrary images to the cube’s pixels in 3D space.

Hardware-wise it’s the same collection of AliExpress LED panels and Raspberry Pi driver board that the other cubes use, in this case mounted on a custom laser-cut frame. Driver software comes from an open-source library, around which he’s put a wrapper that accepts input through a UNIX pipe. This can take the RGB output of an OpenGL shader, of which he has created both 2D-to-3D and spherical projection versions. The must-see demo is a global map of light pollution, and the result is a rather impressive piece of work.
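Something in the spirit of the spherical projection version might look like the following fragment shader (a guess at the approach, not [Polyfloyd]’s actual code): each LED’s 3D position is treated as a direction from the cube’s center and looked up in an equirectangular panorama, such as that light pollution map:

```cpp
// Spherical (equirectangular) projection sketch: turn an LED's direction
// into longitude/latitude, then into texture coordinates.
const char *kSphericalShader = R"(
precision mediump float;
uniform sampler2D u_panorama;  // equirectangular source image
varying vec3 v_pos;            // LED position, cube centered on the origin

void main() {
    vec3 d = normalize(v_pos);
    float u = atan(d.y, d.x) / 6.2831853 + 0.5;               // longitude -> [0,1]
    float v = asin(clamp(d.z, -1.0, 1.0)) / 3.1415927 + 0.5;  // latitude  -> [0,1]
    gl_FragColor = texture2D(u_panorama, vec2(u, v));
}
)";
```

From there the rendered frame would presumably be read back with glReadPixels() and written down the UNIX pipe for the panel driver library to display.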

If LED cubes are your thing, don’t forget this recent Hackaday Prize entry.

OpenGL On The Raspi

Perhaps we’ve been concentrating too much on the hardware side of the Raspberry Pi. Sure, connecting the Raspi to the outside world through GPIO pins is cool, but let’s not forget we’re dealing with a full-fledged Linux box here. [chris] is doing his best to keep us in check by bringing the power of OpenGL graphics to the Raspberry Pi.

Previously, OpenGL ES was only available under X.Org, but [chris] successfully added support for Raspbian. There’s a great physics demo [chris] put together showing off 128 spheres and cubes bouncing around a plane.

Right now, [chris] is looking for people to contribute samples and tutorials for making accelerated 3D graphics on the Raspi. You can grab all the code over at [chris]’ Git repository and contact him over on the Raspberry Pi forums if you’d like to help out.

As with any graphics demo, check out the videos after the break.

Continue reading “OpenGL On The Raspi”