FFT On The Raspi’s GPU

The Raspberry Pi has been around for two years now, and still there’s little the hardware hacker can actually do with its integrated GPU. That just changed: the Raspberry Pi Foundation has announced a library for Fourier transforms that runs on the GPU.

For those of you who haven’t yet taken your DSP course, Fourier transforms take a function (or audio signal, radio signal, or what have you) and break it down into its constituent frequencies. It’s damn useful for everything from software defined radios to guitar pedals, and the new GPU_FFT library is about ten times faster at this task than the Raspi’s CPU.

You can get a copy of the GPU_FFT library by running rpi-update on your Pi. If you happen to build anything interesting – something with a software defined radio or even a guitar pedal – you’re more than welcome to send it in to the Hackaday tips line. We’d love to see what you’re up to.
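If you want to give it a spin, below is a minimal sketch modeled on the hello_fft demo that ships alongside the library. We’re working from memory here, so treat the exact function and struct names as assumptions and check them against the demo source before building anything on top of this.

```cpp
// Minimal GPU_FFT sketch, modeled on the hello_fft demo. Names are from
// memory and may differ slightly in your copy of the library.
#include <math.h>
#include "mailbox.h"   // mbox_open(): talks to the VideoCore mailbox
#include "gpu_fft.h"

int main(void) {
    const int log2_N = 12;           // 4096-point transform
    const int N = 1 << log2_N;

    int mb = mbox_open();            // handle for GPU communication
    struct GPU_FFT *fft;
    if (gpu_fft_prepare(mb, log2_N, GPU_FFT_FWD, 1, &fft) != 0)
        return 1;                    // out of GPU memory, unsupported size, etc.

    struct GPU_FFT_COMPLEX *in = fft->in;
    for (int i = 0; i < N; i++) {    // fill the input with a bin-100 test tone
        in[i].re = cosf(2.0f * M_PI * 100.0f * i / N);
        in[i].im = 0.0f;
    }

    gpu_fft_execute(fft);            // runs on the GPU; blocks until complete

    struct GPU_FFT_COMPLEX *out = fft->out;
    (void)out;                       // out[k] holds bin k; expect a peak at k == 100

    gpu_fft_release(fft);            // hand the GPU memory back
    return 0;
}
```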

A GPU For An Arduino

As the creator of the Gameduino, a shield that adds a VGA port and graphics capability to any Arduino, [James] knows a little something about generating high quality video with a microcontroller. His latest project, the Gameduino 2, blows his previous projects out of the water. He’s created an Arduino shield with a built-in touchscreen that has the same graphics performance as the Quake box you had in the late 1990s.

The power behind this shield comes from a single-chip graphics solution called the FTDI EVE. This isn’t the first time we’ve heard about the FTDI EVE, but it is the first project or product we’ve seen built around this very cool embedded graphics engine. The Gameduino 2 uses an FT800 graphics chip over an SPI connection to give a 480×272 TFT touch panel the same graphical capabilities as a Voodoo 2 graphics card. In the video, [James] puts thousands of sprites on screen, spins up simple 3D animation, and pulls off some extremely impressive 2D effects, all driven by nothing more than an Arduino.
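To give you an idea of how simple the programming model is, here’s what a hello-world looks like in the style of the GD2 Arduino library [James] published for the shield. The calls below follow the library’s published examples, but we’re quoting them from memory, so double-check the names against the actual GD2 headers.

```cpp
// Hello-world for the Gameduino 2, in the style of the GD2 library examples.
#include <EEPROM.h>
#include <SPI.h>
#include <GD2.h>

void setup() {
  GD.begin();                    // bring up the FT800 over SPI
}

void loop() {
  GD.ClearColorRGB(0x103000);    // dark green background color
  GD.Clear();                    // start a new display list
  GD.cmd_text(240, 136, 31, OPT_CENTER, "Hello, world");  // co-processor text
  GD.swap();                     // swap in the finished frame
}
```

The FT800’s co-processor builds the actual display list from high-level commands like these, so the Arduino only ever pushes a handful of bytes per frame over SPI.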

While the Gameduino 2 is designed to be a game console you program yourself, we’re thinking this would be even more useful as a display for standalone projects.

An Open Source GPU

Unless you’re bit-banging a CRT interface or using a bunch of resistors to connect a VGA monitor to your project, odds are you’re using proprietary hardware as a graphics engine. The GPU on the Raspberry Pi is locked up under an NDA, and the dream of an open source graphics processor has yet to be realized. [Frank Bruno] at Silicon Spectrum thinks he has the solution to that: a completely open source GPU implemented on an FPGA.

Right now, [Frank] and Silicon Spectrum have a very lightweight 2D and 3D engine well-suited to everything from servers to embedded devices. If their Kickstarter meets its goal, they’ll release the design to the world, giving every developer and hardware hacker out there a complete, fully functional, open source GPU.

Given the difficulties [Bunnie] had finding a GPU that doesn’t require an NDA to develop for, we’re thinking this is an awesome project that gets away from the closed-source binary blobs found on the Raspberry Pi and other ARM dev boards.

A Macbook Air And A Thunderbolt GPU

When Intel and Apple released Thunderbolt, hallelujahs from the Apple choir were heard. Since very little in Apple’s hardware lineup is upgradeable, an external video card is the best of all possible worlds. Unfortunately, Intel doesn’t seem to be taking kindly to the idea of external GPUs. That hasn’t stopped a few creative people like [Larry Gadea] from figuring it out on their own. Right now he’s running a GTX 570 through the Thunderbolt port of his MacBook Air, and displaying everything on the internal LCD. A dream come true.

[Larry] is doing this with a few fairly specialized bits of hardware: first a Thunderbolt to ExpressCard/34 adapter, then an ExpressCard to PCI-E adapter. Couple those with a power supply, a GPU, and a whole lot of software configuration, and [Larry] had a real Thunderbolt GPU on his hands.

There are, of course, a few downsides to running a GPU through a Thunderbolt port. The current Thunderbolt spec is equivalent to a PCI-E 4X slot, a quarter of what is needed to get all the horsepower out of high-end GPUs. That being said, it’s an elegant-yet-kludgy way to get better graphics performance out of the MBA.

Demo video below.

Interfacing A GPU With A CPU

[Quinn Dunki] pulled together many months’ worth of work by interfacing her GPU with the CPU. This is one of the major milestones in her Veronica project, which aims to build a computer from the ground up.

We’ve seen quite a number of posts from her regarding the AVR-powered GPU. So far the development of that component has happened separately from the 6502-centered CPU, and putting them together is anything but trivial. The timing issues that were so important when developing the GPU get even hairier when it comes to writing to the VRAM from an external component.

Her first thought was to share a portion of the external RAM between the CPU and GPU as a way to push rendering commands from one to the other. This proved troublesome, both in timing and in the number of pins available on the AVR chip. She ended up implementing something of a virtual register on the AVR that can receive commands from the CPU asynchronously. Timing dictates that these commands be applied only during vertical blanking, so the virtual register also acts as a status register, letting the CPU know when it can send the next command.
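To make the handshake concrete, here’s a rough sketch of what the AVR side of such a scheme could look like. To be clear, this is our own illustration, not [Quinn]’s code: the pin assignments, interrupt choice, and helper functions are all invented.

```cpp
// Hypothetical AVR-side sketch of the virtual-register handshake described
// above. All names and pin choices are invented for illustration.
#include <avr/io.h>
#include <avr/interrupt.h>

volatile uint8_t command = 0;   // last command byte latched from the 6502
volatile uint8_t busy    = 0;   // doubles as the status bit the CPU polls

static void wait_for_vblank(void) { /* watch the sync counter; elided */ }
static void execute_command(uint8_t cmd) { /* draw into VRAM; elided */ (void)cmd; }

ISR(PCINT0_vect) {              // fires when the 6502 strobes its write line
    command = PINB;             // latch the byte off the shared data bus
    busy = 1;                   // status: command pending, hold off the CPU
}

int main(void) {
    PCICR  |= (1 << PCIE0);     // enable pin-change interrupts, bank 0
    PCMSK0 |= (1 << PCINT0);    // watch the CPU's write strobe pin
    sei();

    for (;;) {
        wait_for_vblank();             // beam is off-screen; VRAM is safe
        if (busy) {
            execute_command(command);  // apply the queued command now
            busy = 0;                  // status: ready for the next one
        }
    }
}
```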

Her post is packed with the theory behind the design, timing tests on the oscilloscope, and a rather intimidating schematic. But the most important part is the video showing her success in the end.

Leveraging The GPU To Accelerate The Linux Kernel

Powerful graphics cards are pretty affordable these days. Even though we rarely do high-end gaming on our daily machine, we still have a GeForce 9800 GT. That goes to waste on a machine used mainly to publish posts and write code for microcontrollers. But perhaps we can put the GPU to good use when the kernel has heavy lifting to do: the KGPU package enlists your graphics card as a coprocessor for exactly that kind of work.

This won’t work with just any GPU. The technique uses CUDA, NVIDIA’s parallel computing platform, so it’s NVIDIA hardware only. But don’t let a lack of hardware keep you from checking it out. [Weibin Sun] is one of the researchers behind the technique, and he posted a whitepaper (PDF) on the topic over at his website.

Add this to the growing list of non-graphics applications for today’s graphics hardware.

UPDATE: Looks like we won’t be trying this out after all. Your GPU must support CUDA 2.0 or higher. We found ours on this list and it’s only capable of CUDA 1.0.

[Thanks John]

25 GPUs Brute Force 348 Billion Hashes Per Second To Crack Your Passwords

It’s our understanding that the video game industry has long been a driving force behind new and better graphics processing hardware. But gamers aren’t the only beneficiaries of these advances. As we’ve heard before, a graphics processing unit is uniquely suited to computing hashes at tremendous speed (we’ve seen this with bitcoin mining). This project strings together 25 GPU cards in 5 servers to form a super fast brute force attack. It’s so fast that the actual specs are beyond our comprehension. How can one understand 348 billion hashes per second?

The rig was tested against a collection of password hashes generated with the LM and NTLM protocols. NTLM is a bit stronger and fared better than LM, but that’s not saying much. An eight-character NTLM password falls in about 5.5 hours, while a 14-character LM hash lasts only about six minutes before the solution is discovered. Of course, this type of hardware is only useful if you have a copy of the password hashes themselves. Login systems will lock out after a certain number of attempts and have measures in place to slow down automated attacks like this one.
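That 5.5-hour figure holds up to a quick sanity check. Assuming the full 95-character printable-ASCII alphabet (our assumption; the character set used in the actual test isn’t specified here), an eight-character keyspace holds 95^8, or roughly 6.6 quadrillion candidates:

```cpp
// Back-of-the-envelope check on the quoted crack time, assuming a
// 95-character printable-ASCII alphabet and 348 billion hashes per second.
#include <cmath>
#include <cstdio>

int main() {
    double keyspace = std::pow(95.0, 8);         // ~6.6e15 candidate passwords
    double rate     = 348e9;                     // hashes per second
    double hours    = keyspace / rate / 3600.0;  // worst-case exhaustive search
    std::printf("%.1f hours\n", hours);          // prints about 5.3
    return 0;
}
```

The worst case comes out around 5.3 hours, right in line with the quoted figure.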

[via Boing Boing]