What if we told you that you likely have more computers than you think? And we are not talking about things that are computers while not looking like one, like most modern cars or certain lightbulbs. We are talking about the powerful machine hiding in your desktop computer: the graphics card. In the ordinary gaming rig, a graphics card that is far more powerful than the machine it’s built into is a common occurrence. In his tutorial, [Viktor Chlumský] demonstrates how to harness your GPU’s power to solve a maze.
Software that runs on a GPU is called a shader. In this example, a shader is shown that finds its way through a maze. We also catch a glimpse of the limitations that make this field of software special: [Viktor]’s solution has to work with only four variables, because all information is stored in the red, green, blue and alpha channels of an image. The alpha channel represents the boundaries of the maze. The red and green channels are used to broadcast waves from the beginning and end points of the maze. Where the two waves meet marks the shortest solution, which is captured in the blue channel.
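This isn’t [Viktor]’s actual shader, but a rough CPU sketch in plain C of what each rendering pass does, under our reading of the description above: alpha encodes walls, the red and green waves flood outward one cell per pass, and blue is set where they meet. The grid size, seed points, and pass count are made up for the demo.

```c
#include <stdio.h>
#include <string.h>

#define W 16
#define H 16

/* One RGBA texel: a = 0 marks a wall, r/g carry the waves from the
   start and end points, b records where the two waves meet. */
typedef struct { float r, g, b, a; } Pixel;

static Pixel img[H][W], next[H][W];

/* One simulated "shader pass": every pixel looks at its four
   neighbours, just as the fragment shader samples adjacent texels. */
static void pass(void) {
    memcpy(next, img, sizeof img);
    for (int y = 0; y < H; y++) {
        for (int x = 0; x < W; x++) {
            if (img[y][x].a == 0.0f) continue;        /* wall: waves stop */
            const int dx[] = {1, -1, 0, 0}, dy[] = {0, 0, 1, -1};
            for (int i = 0; i < 4; i++) {
                int nx = x + dx[i], ny = y + dy[i];
                if (nx < 0 || ny < 0 || nx >= W || ny >= H) continue;
                if (img[ny][nx].r > 0.0f) next[y][x].r = 1.0f;  /* red wave spreads */
                if (img[ny][nx].g > 0.0f) next[y][x].g = 1.0f;  /* green wave spreads */
            }
            if (next[y][x].r > 0.0f && next[y][x].g > 0.0f)
                next[y][x].b = 1.0f;                  /* waves met here */
        }
    }
    memcpy(img, next, sizeof img);
}

int main(void) {
    for (int y = 0; y < H; y++)
        for (int x = 0; x < W; x++)
            img[y][x].a = 1.0f;                       /* open field... */
    for (int y = 0; y < H; y++)
        img[y][W / 2].a = 0.0f;                       /* ...split by one wall... */
    img[H / 2][W / 2].a = 1.0f;                       /* ...with a single gap */
    img[1][1].r = 1.0f;                               /* wave source: entrance */
    img[H - 2][W - 2].g = 1.0f;                       /* wave source: exit */
    for (int i = 0; i < W + H; i++)
        pass();                                       /* enough passes to meet */
    printf("blue at the gap: %.0f\n", img[H / 2][W / 2].b);
    return 0;
}
```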
Despite the GPU’s tons of cores and large memory, programming shaders feels a lot like working on microcontrollers. See for yourself in the maze solving walk-through below.
If we got you hooked on GPU programming, keep your eyes peeled; there’s more on the way. Until then, you may enjoy creating pixel shaders from the command line or using them to drive WS2811 LEDs with VGA.
“Despite the GPU’s tons of cores and large memory, programming shaders feels a lot like working on microcontrollers.”
Half the fun right there, not to mention there might be an ARM core, sound processor, or even DRM engine buried in there somewhere. Effectively a computer within a computer.
Hmm, if my card has its own z80 inside it….
More like 1024 z80s.
I first did (by accident) a maze solver like this on a Z80 machine (TRS-80), colloquially known as the Trash-80 even though it was a very good machine for its time.
The TRS-80 had a very limited character set, so I ended up using characters like “>”, “<”, “\”, “/” to draw the maze diagonally, and I wanted the computer to make random mazes that a user would solve by moving a token with the keyboard. This meant that I had to logically or mathematically define what is common to all completable mazes.
The lucky accident is that my maze was diagonal.
So for a 10 by 10 diagonal maze, you compress the logic down to 10 rows of odd or even parity and 10 columns of odd or even parity. Then you evaluate the parity of the rows added to the columns, and if the resulting parity is odd, the maze is solvable.
So to make a solvable 10 x 10 maze: for each row in turn, you pick the first 9 columns at random and choose only the last character to make the parity of that row odd. You do this for 9 rows, choose the characters in the last row so that the parity of each column is odd, and then choose the last column of the last row so that the whole maze is odd.
This logic is reversible and can be used to solve the maze with just 20 parity variables (bits) for a 10 x 10 maze.
For a standard box maze, you just mathematically rotate the maze array or parity array by 45 degrees to achieve the same result.
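A minimal C sketch of that construction, as we read the description above; the 0/1 cells stand in for the two diagonal wall characters, and the final check is the commenter’s 20-parity-bit solvability test.

```c
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N 10

/* Each bit stands in for one of the two diagonal wall characters. */
static int maze[N][N];

static int row_parity(int r) {
    int p = 0;
    for (int c = 0; c < N; c++) p ^= maze[r][c];
    return p;
}

static int col_parity(int c) {
    int p = 0;
    for (int r = 0; r < N; r++) p ^= maze[r][c];
    return p;
}

int main(void) {
    srand((unsigned)time(NULL));
    /* First N-1 rows: N-1 random cells, last cell forces odd row parity. */
    for (int r = 0; r < N - 1; r++) {
        int p = 0;
        for (int c = 0; c < N - 1; c++) {
            maze[r][c] = rand() & 1;
            p ^= maze[r][c];
        }
        maze[r][N - 1] = p ^ 1;
    }
    /* Last row: each cell forces odd parity for its column; the last
       row then comes out with odd parity as well. */
    for (int c = 0; c < N; c++) {
        int p = 0;
        for (int r = 0; r < N - 1; r++) p ^= maze[r][c];
        maze[N - 1][c] = p ^ 1;
    }
    /* The solvability test: all 2*N parity bits must be odd. */
    int ok = 1;
    for (int i = 0; i < N; i++) ok &= row_parity(i) & col_parity(i);
    printf("maze %s solvable by the parity rule\n", ok ? "is" : "is NOT");
    return 0;
}
```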
Pretty interesting.
Now, who solves a maze by starting at the midpoint of the known solution? And with the direction to go written on the walls?
How about exploring from one end, and without beacons? Or both ends?
I’d like to see a GPU running an OS. There are a few applications where a (nearly) standalone GPU might make sense.
Unfortunately, GPUs and CPUs are geared towards different problems. AMD’s Fusion might come close to bringing the strengths of both together.
https://www.infoworld.com/article/3099204/hardware/amd-mulls-a-cpugpu-super-chip-in-a-server-reboot.html
Well, you could run an operating system on the card’s embedded processors* (as someone already said GPUs have built-in cores used for debugging, startup/power management, audio and maybe DRM), but running it on the actual graphics engine would be 99% impossible since the architecture is completely different.
* https://hackaday.com/2012/10/08/stm32-driving-a-pcie-video-card/ (no docs, but it’s possible)
It’d be a nuisance to have to run everything on the GPU, but there is KGPU (https://github.com/wbsun/kgpu), which gives the kernel the ability to run on a GPU core.
Interestingly it mentions that running the entire OS on a current (not so current now) GPU would not work well.
How about the Xeon Phi? Technically, it’s not a GPU, but it comes on a PCIe card just like a GPU and is designed to compete against GPUs, like Nvidia’s Tesla products. Anyway, the Xeon Phi has its own memory space, runs its own Linux kernel, and connects to the host system through a virtual network connection.
72 Cores! Nice.
Many modern GPUs support OpenCL or CUDA, so you don’t need to learn shader stuff anymore, IMHO.
Yeah, I was very surprised to see that. I’m by no means an expert, but I have written a few CUDA programs and definitely didn’t have any shader stuff to mess with.
Doesn’t OpenCL use GLSL?
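For what it’s worth, OpenCL kernels are written in OpenCL C (a C dialect), not GLSL. As a taste of the non-shader route, here is a minimal vector-add host program in C; error checking is omitted for brevity, and it assumes a single GPU platform and device.

```c
/* Build with something like: cc vecadd.c -lOpenCL */
#include <CL/cl.h>
#include <stdio.h>

/* The kernel itself is OpenCL C, compiled at runtime from this string. */
static const char *src =
    "__kernel void add(__global const float *a, __global const float *b,"
    "                  __global float *c) {"
    "    size_t i = get_global_id(0);"
    "    c[i] = a[i] + b[i];"
    "}";

int main(void) {
    float a[4] = {1, 2, 3, 4}, b[4] = {10, 20, 30, 40}, c[4];
    cl_platform_id plat; cl_device_id dev;
    clGetPlatformIDs(1, &plat, NULL);
    clGetDeviceIDs(plat, CL_DEVICE_TYPE_GPU, 1, &dev, NULL);
    cl_context ctx = clCreateContext(NULL, 1, &dev, NULL, NULL, NULL);
    cl_command_queue q = clCreateCommandQueue(ctx, dev, 0, NULL);
    cl_program prog = clCreateProgramWithSource(ctx, 1, &src, NULL, NULL);
    clBuildProgram(prog, 1, &dev, NULL, NULL, NULL);
    cl_kernel k = clCreateKernel(prog, "add", NULL);
    /* Copy the inputs to the device, reserve space for the result. */
    cl_mem ba = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR, sizeof a, a, NULL);
    cl_mem bb = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR, sizeof b, b, NULL);
    cl_mem bc = clCreateBuffer(ctx, CL_MEM_WRITE_ONLY, sizeof c, NULL, NULL);
    clSetKernelArg(k, 0, sizeof ba, &ba);
    clSetKernelArg(k, 1, sizeof bb, &bb);
    clSetKernelArg(k, 2, sizeof bc, &bc);
    size_t n = 4;
    clEnqueueNDRangeKernel(q, k, 1, NULL, &n, NULL, 0, NULL, NULL);
    clEnqueueReadBuffer(q, bc, CL_TRUE, 0, sizeof c, c, 0, NULL, NULL);
    printf("%g %g %g %g\n", c[0], c[1], c[2], c[3]);
    return 0;
}
```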
What is the goal of this project? Solving a maze, or playing with algorithms for GPUs?
For solving mazes, proven algorithms are available.
The simplest is: “Follow the right wall.” This works well when you have to cross a maze (start and exit outside the maze). You will have to go through many dead ends, but the algorithm always leads you out.
The main idea for one-way-through mazes is that there is only one big wall on your left and one big wall on your right along the way, each with many dead ends.
So I solved mazes with graphics programs: “color the right wall blue and the left wall red.” Then the shortest way to the exit is to follow the corridor with different colors on each side. Corridors with the same color on the left and right are dead ends.
– When you come upon a wall not yet colored, the maze contains more than one route. Every time you find an unpainted wall, color it with a new color and you will easily see all possible routes.
This strategy is easy to implement, even on a Z80 (my first computer: a Sharp MZ-80).
I found that all strategies are ultimately based on “follow the right wall”; it’s just that someone else runs through the maze, the GPU or my graphics app, and shows you the shortest way.
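For reference, here is a toy right-hand-rule walker in C. The grid and the start/exit positions are made up for illustration; it shows only the basic “follow the right wall” rule, not the wall-coloring trick described above.

```c
#include <stdio.h>

/* A toy grid: '#' = wall, '.' = open, 'S' = entrance, 'E' = exit. */
#define W 7
#define H 7
static const char *maze[H] = {
    "#S#####",
    "#.#...#",
    "#.#.#.#",
    "#...#.#",
    "###.#.#",
    "#...#.#",
    "#####E#",
};

static int open_at(int x, int y) {
    if (x < 0 || y < 0 || x >= W || y >= H) return 0;
    char c = maze[y][x];
    return c == '.' || c == 'S' || c == 'E';
}

int main(void) {
    /* Directions: 0 = up, 1 = right, 2 = down, 3 = left. */
    const int dx[] = {0, 1, 0, -1}, dy[] = {-1, 0, 1, 0};
    int x = 1, y = 0, dir = 2;                    /* start at 'S', heading down */
    int steps = 0;
    while (maze[y][x] != 'E' && steps < 1000) {
        int right = (dir + 1) % 4;
        if (open_at(x + dx[right], y + dy[right]))
            dir = right;                          /* keep right hand on the wall */
        else
            while (!open_at(x + dx[dir], y + dy[dir]))
                dir = (dir + 3) % 4;              /* blocked: turn left until open */
        x += dx[dir];
        y += dy[dir];
        steps++;
    }
    printf("reached the exit in %d steps\n", steps);
    return 0;
}
```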