Interfacing A GPU With A CPU


[Quinn Dunki] pulled together many months’ worth of work by interfacing her GPU with the CPU. This is a major milestone in her Veronica project, which aims to build a computer from the ground up.

We’ve seen quite a number of posts from her regarding the AVR-powered GPU. So far the development of that component has been happening separately from the 6502-centered CPU, but putting them together is anything but trivial. The timing issues that were so important to consider when developing the GPU get even hairier when it comes to writing to the VRAM from an external component. Her first thought was to share a portion of the external RAM between the CPU and GPU as a way to push rendering commands from one to the other. This proved troublesome both in timing and in the number of pins available on the AVR chip. She ended up using something of a virtual register on the AVR chip that can receive commands from the CPU asynchronously. Timing dictates that these commands be written only during vertical blanking, so this virtual register also acts as a status register to let the CPU know when it can send the next command.
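Stripped to its essentials, that handshake is just a poll-then-write loop on the CPU side. Here’s a minimal sketch in C of how it might look (the register address and bit layout below are made up for illustration; Veronica’s real code lives in 6502 assembly):

```c
#include <stdint.h>

/* Hypothetical memory-mapped addresses -- not Veronica's real map. */
#define GPU_STATUS (*(volatile uint8_t *)0xDFF0) /* read: busy flag     */
#define GPU_CMD    (*(volatile uint8_t *)0xDFF1) /* write: command byte */

#define GPU_BUSY   0x01 /* set while the GPU is outside vertical blanking */

/* Send one command byte to the GPU's virtual register. */
static void gpu_send(uint8_t cmd)
{
    /* Spin until the GPU reports it can accept a command,
       i.e. it is inside the vertical blanking interval. */
    while (GPU_STATUS & GPU_BUSY)
        ;
    GPU_CMD = cmd;
}
```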

Her post is packed with the theory behind the design, timing tests on the oscilloscope, and a rather intimidating schematic. But the most important part is the video showing her success in the end.

Leveraging The GPU To Accelerate The Linux Kernel

Powerful graphics cards are pretty affordable these days. Even though we rarely do high-end gaming on our daily machine, we still have a GeForce 9800 GT. That goes to waste on a machine used mainly to publish posts and write code for microcontrollers. But perhaps we can put the GPU to good use when it comes to compile time. The KGPU package enlists your graphics card to help the kernel do some heavy lifting.

This won’t work for just any GPU. The technique uses CUDA, which is a parallel computing package for NVIDIA hardware. But don’t let lack of hardware keep you from checking it out. [Weibin Sun] is one of the researchers behind the technique. He posted a whitepaper (PDF) on the topic over at his website.
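The wrinkle is that CUDA is a user-space API, so kernel code can’t call it directly. The general shape of the workaround is a kernel module that queues work for a user-space helper process that owns the GPU. A rough sketch of that pattern (all names here are hypothetical, not KGPU’s actual interface):

```c
#include <stddef.h>

/* Hypothetical request structure: the kernel module fills one in and
 * hands it to a user-space helper, which runs the work through CUDA
 * and marks it complete. */
enum gpu_service { GPU_SVC_AES_ENCRYPT, GPU_SVC_CHECKSUM };

struct gpu_request {
    enum gpu_service svc;      /* which GPU routine to run           */
    const void *in;            /* input buffer shared with helper    */
    void       *out;           /* output buffer                      */
    size_t      len;           /* payload size in bytes              */
    volatile int done;         /* helper sets this when GPU finishes */
};

/* Kernel side: queue the request, then block until the helper has
 * run it on the GPU. (A real module would sleep on a wait queue
 * rather than spin.) */
static void gpu_submit_and_wait(struct gpu_request *req)
{
    req->done = 0;
    /* ...enqueue req on a ring buffer shared with the helper... */
    while (!req->done)
        ;
}
```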

Add this to the growing list of non-graphics applications for today’s graphics hardware.

UPDATE: Looks like we won’t be trying this out after all. Your GPU must support CUDA 2.0 or higher. We found ours on this list and it’s only capable of CUDA 1.0.

[Thanks John]

25 GPUs Brute Force 348 Billion Hashes Per Second To Crack Your Passwords

It’s our understanding that the video game industry has long been a driving force in new and better graphics processing hardware. But gamers aren’t the only beneficiaries of these advances. As we’ve heard before, a graphics processing unit is uniquely qualified to compute cryptographic hashes quickly (we’ve seen this with Bitcoin mining). This project strings together 25 GPU cards in 5 servers to form a super fast brute force attack. It’s so fast that the actual specs are beyond our comprehension. How can one understand 348 billion hashes per second?

The testing was done on a collection of password hashes using the LM and NTLM protocols. NTLM is a bit stronger and fared better than LM, but that’s not actually saying much. An eight-character NTLM password falls in 5.5 hours, while a 14-character LM hash lasts only about six minutes before the solution is discovered. Of course this type of hardware is only good if you have a copy of the password hashes themselves. Login systems will lock out after a certain number of attempts and have measures in place to slow down automated attacks like this one.
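Those headline numbers square with simple keyspace arithmetic. A quick sanity check, assuming the full 95-character printable-ASCII alphabet for the eight-character NTLM case (our assumption; the article doesn’t spell out the character set):

```c
#include <stdio.h>
#include <math.h>

int main(void)
{
    double keyspace = pow(95.0, 8.0); /* 95 printable ASCII chars, 8 positions */
    double rate     = 348e9;          /* hashes per second, per the article    */
    double seconds  = keyspace / rate;

    printf("keyspace:   %.3g hashes\n", keyspace);        /* ~6.6e15    */
    printf("worst case: %.1f hours\n", seconds / 3600.0); /* ~5.3 hours */
    return 0;
}
```

That lands right around the quoted 5.5 hours. LM fares so much worse because it uppercases the password and splits it into two independently-cracked seven-character halves, so even 14 characters buys you almost nothing.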

[via Boing Boing]

GPU Programming For Easy & Fast Image Processing

If you ever need to manipulate images really fast, or just want to make some pretty fractals, [Reuben] has just what you need. He developed a neat command line tool that sends code to a graphics card and generates images using pixel shaders. As opposed to making these images with a CPU, a GPU processes every pixel in parallel, making image processing much faster.

All the GPU coding is done by writing a bit of code in GLSL. [Reuben]’s command line utility takes that code, sends it to the graphics card, and returns the image calculated by the GPU. It’s very simple to make pretty Mandelbrot set images and sine wave interference patterns this way, but [Reuben]’s project can do much more than that. By sending an image to the GPU and performing a few operations, [Reuben] can do very fast edge detection and other algorithmic processing on pre-existing images.
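The per-pixel logic behind a fractal shader is tiny. Here’s roughly what a Mandelbrot kernel computes for each pixel, written out in C for clarity (in [Reuben]’s tool the equivalent loop would live in GLSL, where the GPU runs it for every pixel at once):

```c
#include <stdio.h>

/* Escape-time Mandelbrot iteration for one pixel. (cx, cy) is the
 * pixel mapped into the complex plane; the return value is the
 * iteration count, which gets mapped to a color. */
static int mandelbrot(double cx, double cy, int max_iter)
{
    double x = 0.0, y = 0.0;
    int i;
    for (i = 0; i < max_iter && x * x + y * y <= 4.0; i++) {
        double xt = x * x - y * y + cx; /* z = z^2 + c, real part */
        y = 2.0 * x * y + cy;           /* imaginary part         */
        x = xt;
    }
    return i;
}

int main(void)
{
    /* A point inside the set never escapes, so it hits the cap. */
    printf("%d\n", mandelbrot(-0.5, 0.0, 256)); /* prints 256 */
    return 0;
}
```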

So far, [Reuben] has tested his software with a few NVIDIA graphics cards under Windows and Linux, although it should work with any graphics card with pixel shaders.

Although [Reuben] is sending code to his GPU, it’s not quite on the level of the NVIDIA CUDA parallel computing platform; [Reuben] is only working with images. Cleverly written software could get around that, though. Still, even if [Reuben]’s project is only used for image processing, it’s still much faster than any CPU-bound method.

You can grab a copy of [Reuben]’s work over on GitHub.

ATmega324 Acts As A GPU For Homebrew Computer

[Quinn Dunki’s] homebrew computer project is moving up another evolutionary rung. She needs a more versatile user interface and this starts with the data output. Up to this point a set of 7-segment digits has served as a way to display register values. But her current work is aimed at adding VGA output to the system.

She starts off her write-up by justifying the protocol choice. Although composite video would be easier to get up and running (we see it in a lot of AVR projects), [Quinn] doesn’t have a screen that will display composite video. But there’s also a lot of info out there about VGA signal generation. She delved into the specifics and even found a great AVR-based example over at Lucid Science.

The version seen above uses the 40-pin ATmega324. It’s a lot bigger than necessary for the example she put together, but in the future she plans to add video memory and will be glad to have all of those extra I/O pins. When it comes to video sync, timing is everything. She wrote the display code in assembly so she could look up the cycle count of each instruction and make sure the loop runs with near-perfect timing.
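To get a feel for how tight that budget is: assuming a 20 MHz AVR clock (typical for these AVR VGA hacks, though her post is the authority on Veronica’s actual clock), standard 640×480@60 Hz timing works out like this:

```c
#include <stdio.h>

int main(void)
{
    /* Standard 640x480@60 VGA timing: 25.175 MHz pixel clock,
     * 800 pixel clocks per scanline (visible + porches + sync). */
    double line_us = 800.0 / 25.175; /* ~31.78 us per line   */
    double sync_us =  96.0 / 25.175; /* ~3.81 us hsync pulse */

    double f_cpu = 20e6; /* assumed AVR clock -- not stated in the article */
    printf("cycles per scanline: %.0f\n", line_us * 1e-6 * f_cpu); /* ~636 */
    printf("cycles per hsync:    %.0f\n", sync_us * 1e-6 * f_cpu); /* ~76  */
    return 0;
}
```

Miss that count by even a couple of cycles per scanline and the sync pulses drift visibly, which is why hand-counted assembly wins over compiled C here.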


Beefing Up Your Laptop’s Gaming Chops With An External GPU

If you’re not willing to shell out for a reasonably powerful laptop, it seems there’s not a ton that can be done to boost your gaming performance. That is, unless you have an empty ExpressCard slot and the right chipset.

[Phatboy69] recently put together an external video card for his notebook, with fantastic results. His Vaio Z128GG had an Nvidia GT330M graphics card onboard, which is decent but nothing to write home about. Using an ExpressCard-to-PCIe adapter, he added an external Nvidia GTX580 to his system, and he couldn’t be more pleased with the results. While the card does take a performance hit when connected to his laptop in this way, he claims that his graphics performance has increased ten-fold, which isn’t too shabby.

There are many variables on which this process is heavily dependent, but with the right amount of tweaking, some great laptop gaming performance can be had. That said, it really does take the portability factor of your notebook down to about zero.

If this is something you might be interested in, be sure to check out this thread over at the Notebook Review Forums – it’s where [Phatboy69] found all the information he needed to get his system up and running properly.

[Thanks, Henry]

Paddle Controller For GPU Overclocking

[Fred] likes to squeeze every cycle possible out of his graphics card. But sometimes pushing the clock speed too high causes corruption. He figured out a way to adjust the clock speed by turning a knob, even while applications are still running.

The actuator seen above is a Griffin PowerMate 3.0. It’s a USB peripheral meant to be used for anything you can imagine. [Fred] wrote an AutoHotKey script that captures input from the spinner, processes that information, and then adjusts the GPU clock speed in the background. Since the clock on his ATI Radeon 5800 can be adjusted using the AMD GPU Clock Tool, it’s an easy choice for this application. Now better graphics are at the tips of his fingers. See for yourself in the video after the break.
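Functionally, the script just accumulates knob detents, clamps the result to a safe range, and hands the new frequency to the clock tool. A rough sketch of that logic in C (the real version is an AutoHotKey script, and the command-line syntax below is invented for illustration):

```c
#include <stdio.h>
#include <stdlib.h>

#define STEP_MHZ   5    /* clock change per knob detent       */
#define MIN_MHZ  400    /* clamp range -- pick limits safe    */
#define MAX_MHZ  900    /* for your own card                  */

static int clock_mhz = 725; /* current core clock */

/* Called with +1/-1 for each clockwise/counter-clockwise detent. */
static void on_knob_tick(int direction)
{
    clock_mhz += direction * STEP_MHZ;
    if (clock_mhz < MIN_MHZ) clock_mhz = MIN_MHZ;
    if (clock_mhz > MAX_MHZ) clock_mhz = MAX_MHZ;

    /* Hand the new frequency to an external clock utility. The flag
     * below is hypothetical, not AMD GPU Clock Tool's real syntax. */
    char cmd[64];
    snprintf(cmd, sizeof cmd, "gpuclocktool --core %d", clock_mhz);
    system(cmd);
}

int main(void)
{
    on_knob_tick(+1); /* one clockwise detent -> 730 MHz */
    return 0;
}
```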

Of course if you don’t want to shell out for the fancy hardware you could always build your own paddle controller.
