Programming FPGAs With Python

If you’ve ever wanted to jump into the world of FPGAs but don’t want to learn yet another language, you can now program an FPGA with Python. PyCPU converts very, very simple Python code into either VHDL or Verilog, and the resulting hardware description can then be synthesized and loaded onto an FPGA.

The portion of the Python language supported by PyCPU is extremely minimal, with integers being the only built-in data type supported. Of course, ifs and whiles are still included, along with assignments and the usual operators. A new addition is a way to get digital IO access from Python, an obvious requirement if you’re going to be programming silicon.
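To give a feel for the constraints, here’s a hypothetical sketch of the kind of code that fits such a subset – only integers, ifs, whiles, assignments, and operators – not necessarily something PyCPU would accept verbatim. The read_pin/write_pin helpers are made-up placeholder names rather than PyCPU’s actual IO interface, and the little harness at the bottom is ordinary desktop Python for testing, not anything you’d synthesize.

```python
# Hypothetical sketch of a PyCPU-style kernel: integers only,
# if/while, assignments, and arithmetic/bitwise operators.
# read_pin/write_pin are placeholder names, not PyCPU's real API.

def count_button_presses(read_pin, write_pin, iterations):
    count = 0
    i = 0
    while i < iterations:
        button = read_pin(0)       # sample a digital input
        if button == 1:
            count = count + 1      # plain integer arithmetic only
        write_pin(1, count & 1)    # drive an LED with the low bit
        i = i + 1
    return count

# Pure-software sanity check with stub IO functions:
if __name__ == "__main__":
    samples = iter([0, 1, 1, 0, 1])
    outputs = []
    total = count_button_presses(lambda pin: next(samples),
                                 lambda pin, value: outputs.append(value),
                                 5)
    print(total, outputs)  # expect 3 presses and [0, 1, 0, 0, 1]
```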

PyCPU surely won’t replace VHDL or Verilog anytime soon, but if you’re looking to get into FPGAs and the ‘telling a chip what to be’ paradigm it offers, it’s certainly a tool worth looking into.

Hats off to [hardsoftlucid] for sending this in. Our wonderful commenters (we mean that, really) noticed a few mistakes when this was first posted. Those mistakes have been corrected.

The First Step To Running IPhone Apps In Linux

[Christina] has been working on a project she calls Magenta to put Darwin/BSD on top of Linux. What does that mean? Well, hopefully it’s the first step towards running iPhone/iPad apps on a Linux machine.

Before you get too excited, there are a few caveats; Magenta only works on ARMv7 platforms, none of the fancy iOS frameworks are included, and it’s currently impossible to run iOS apps with this build. Think of this project as a very, very early version of Wine. If you’d like to take Magenta for a spin, [Christina] put the source up here.

Although [Christina]’s project is entirely useless for anyone wanting Siri on their Android phone right now, it should eventually be possible to add all those fancy iOS frameworks to Magenta and create an open source OS able to run iPhone apps.

We really have to admire [Christina]’s work on this. It’s an amazingly impressive project, and her final goal of recreating the iOS stack would be a boon to the jailbreaking scene. Cue the sound of millions of iPhone clones marching out of China…

via [OleRazzleDazzle] on the reddits

Tracking Small Changes In Video To See Someone’s Pulse

[Gil] sent in an awesome paper from this year’s SIGGRAPH. It’s a technique for detecting subtle changes in a video feed, developed by [Hao-Yu Wu, et al.] at the MIT CS and AI Lab and Quanta Research. To get a feel for what this paper is about, check out the video and come back when you’ve picked your jaw up off the floor.

The project works by detecting and amplifying very small changes in color across several frames of video. In the demo, the researchers were able to detect someone’s pulse by noting the minute changes in the color of their skin as blood is pumped through their face.

A neat side effect of detecting small changes in color is the ability to also detect motion. In the video, there’s an example of detecting someone’s pulse by exaggerating the expanding artery in their wrist, and of magnifying the movement of a shadow cast by the sun over the course of 15 seconds. This is Batman-level tech here, and we can’t wait to see an OpenCV library for it.
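To make the idea concrete, here’s a minimal Python/NumPy sketch in the spirit of the paper – not the researchers’ code. It averages the green channel of each frame over a patch of skin, band-passes the resulting time series around plausible heart rates, and reads off the dominant frequency. The real method does this per spatial frequency band and amplifies the filtered signal back into the video, which this toy version skips entirely.

```python
import numpy as np

def estimate_pulse_bpm(frames, fps, band=(0.8, 3.0)):
    """Rough pulse estimate from a stack of video frames (T, H, W, 3).

    Averages the green channel over the region, band-passes the
    time series to plausible heart rates (0.8-3 Hz), and returns
    the dominant frequency in beats per minute.
    """
    # One brightness sample per frame: mean green value over the patch.
    signal = frames[:, :, :, 1].mean(axis=(1, 2))
    signal = signal - signal.mean()

    # Frequency-domain band-pass: zero every bin outside the heart-rate band.
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    spectrum[(freqs < band[0]) | (freqs > band[1])] = 0

    # The strongest remaining bin is the pulse estimate.
    return freqs[np.argmax(np.abs(spectrum))] * 60.0

# Synthetic sanity check: a faint 1.2 Hz flicker (72 bpm) buried in noise.
if __name__ == "__main__":
    fps, seconds = 30, 10
    t = np.arange(fps * seconds) / fps
    flicker = 0.5 * np.sin(2 * np.pi * 1.2 * t)
    frames = 128 + flicker[:, None, None, None] + np.random.randn(len(t), 8, 8, 3)
    print(round(estimate_pulse_bpm(frames, fps)))  # ~72
```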

Even though the researchers have shown an extremely limited set of use cases – just pulses and breathing – we’re seeing a whole lot of potential applications. We’d love to see an open source version of this tech turned into a lie detector for the upcoming US presidential debates, and the motion exaggeration is perfect for showing why every sports referee is as blind as a bat.

If you want to read the actual paper, here’s the PDF. As always, video after the break.

Continue reading “Tracking Small Changes In Video To See Someone’s Pulse”

GPU Programming For Easy & Fast Image Processing

If you ever need to manipulate images really fast, or just want to make some pretty fractals, [Reuben] has just what you need. He developed a neat command line tool to send code to a graphics card and generate images using pixel shaders. Unlike a CPU, a GPU processes every pixel in parallel, making image processing much faster.

All the GPU coding is done by writing a bit of code in GLSL. [Reuben]’s command line utility takes that code, sends it to the graphics card, and returns the image calculated by the GPU. It’s very simple to make pretty Mandelbrot set images and sine wave interference patterns this way, but [Reuben]’s project can do much more than that. By sending an image to the GPU and performing a few operations on it, [Reuben] can do very fast edge detection and other algorithmic processing on pre-existing images.
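To get a sense of the per-pixel math a fragment shader runs, here’s a CPU-side NumPy sketch of the escape-time Mandelbrot iteration – our own example, not [Reuben]’s GLSL. Every pixel’s loop is independent of its neighbors, which is exactly why handing the job to a GPU, where all pixels run in parallel, pays off so handsomely.

```python
import numpy as np

def mandelbrot(width=400, height=300, max_iter=64):
    """Escape-time Mandelbrot image; every pixel's computation is
    independent, which is what makes it a natural fit for a pixel shader."""
    # Map each pixel to a point c on the complex plane.
    re = np.linspace(-2.5, 1.0, width)
    im = np.linspace(-1.2, 1.2, height)
    c = re[None, :] + 1j * im[:, None]

    z = np.zeros_like(c)
    counts = np.zeros(c.shape, dtype=np.int32)
    for i in range(max_iter):
        mask = np.abs(z) <= 2.0           # pixels that haven't escaped yet
        z[mask] = z[mask] ** 2 + c[mask]  # the per-pixel iteration
        counts[mask] = i                  # iteration count doubles as intensity
    return counts

if __name__ == "__main__":
    img = mandelbrot()
    print(img.shape, img.min(), img.max())
```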

So far, [Reuben] has tested his software with a few NVIDIA graphics cards under Windows and Linux, although it should work with any graphics card with pixel shaders.

Although [Reuben] is sending code to his GPU, it’s not quite on the level of the NVIDIA CUDA parallel computing platform; [Reuben] is only working with images. Cleverly written software could get around that, though. Still, even if [Reuben]’s project is only used for image processing, it’s still much faster than any CPU-bound method.

You can grab a copy of [Reuben]’s work over on GitHub.

Getting A Textured 3D Scan From Just A Webcam

Here’s an oldie but a goodie that passed us by the first time it went around the Internet. [Qi Pan], a (former) PhD student at Cambridge, made a 3D modeling program using only a simple webcam. Not only does this make very fast work of building 3D models, the real texture is also rendered onto the virtual object.

The project is called ProFORMA, and to get some idea of exactly how fast it is, the model of a church seen above was captured and rendered in a little over a minute. To achieve that speed, ProFORMA has the webcam capture a series of keyframes. Each time the model is rotated by about 10°, another keyframe is taken and the tracked corners are triangulated with some very fancy math.
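For the curious, the ‘fancy math’ is multi-view geometry. As a rough illustration – our own sketch, not ProFORMA’s implementation, which does considerably more on its way to a textured mesh – linear triangulation of a single tracked corner from two keyframes looks something like this:

```python
import numpy as np

def triangulate_point(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from two keyframes.

    P1, P2 are 3x4 camera projection matrices; x1, x2 are the corner's
    normalized image coordinates in each view.
    """
    A = np.stack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # Homogeneous least squares: right singular vector with the
    # smallest singular value, then dehomogenize.
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]

if __name__ == "__main__":
    P1 = np.hstack([np.eye(3), np.zeros((3, 1))])               # reference keyframe
    P2 = np.hstack([np.eye(3), np.array([[-1.0], [0], [0]])])   # translated keyframe
    point = np.array([0.0, 0.0, 5.0])
    x1 = P1 @ np.append(point, 1)
    x1 = x1[:2] / x1[2]
    x2 = P2 @ np.append(point, 1)
    x2 = x2[:2] / x2[2]
    print(triangulate_point(P1, P2, x1, x2))  # ~[0, 0, 5]
```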

Even though [Qi]’s project is from 2009, it seems like it would be better than ReconstructMe, the Kinect-based 3D scanner we saw a while ago. There’s a great video of [Qi] modeling a papercraft church after the break, but check out the actual paper for a better idea of how ProFORMA works.

Continue reading “Getting A Textured 3D Scan From Just A Webcam”

Detecting ASCII Art Across The Internet

As a web developer and designer, [Victor] has a habit of putting a very nice ASCII signature in an HTML comment at the top of every web page he designs. He was inspired by seeing others do the same, which piqued his curiosity: who else is doing this? His idea was to scan through a chunk of the Internet and see which other web pages had ASCII signatures hidden in an HTML comment. With a lot of very clever work, [Victor] managed to collect some interesting ASCII art that would have been missed without looking at the source of millions of web pages.

After gathering a list of the top million domains from Alexa, [Victor] wrote a script to download the HTML for all of those pages in parallel. After that, it was just a matter of detecting the ASCII art in all the HTML files. There were a few earlier ASCII art detection algorithms, but nothing that suited [Victor]’s use case. The best results came from looking only at the first comment on each page (if the signature were buried any deeper, its author probably didn’t intend it to be found with a quick glance at the source) and keeping only comments at least 3 lines tall and 40 characters wide. After discarding everything with HTML tags in it, [Victor] had an awesome gallery of ASCII art from web pages all around the Internet.
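[Victor]’s actual script isn’t reproduced here, but the filtering rule he describes translates into a few lines of Python. The regex and the exact reading of ‘40 characters wide’ (at least one line that long) are our own guesses at a reasonable implementation:

```python
import re

# First <!-- ... --> comment in the page, possibly spanning multiple lines.
COMMENT_RE = re.compile(r"<!--(.*?)-->", re.DOTALL)

def first_comment_ascii_art(html, min_lines=3, min_width=40):
    """Look only at the first comment, require at least 3 lines and a
    40-character-wide line, and reject anything containing HTML tags."""
    match = COMMENT_RE.search(html)
    if not match:
        return None
    comment = match.group(1).strip("\n")
    lines = comment.splitlines()
    if len(lines) < min_lines:
        return None
    if max((len(line) for line in lines), default=0) < min_width:
        return None
    if re.search(r"<[a-zA-Z!/]", comment):   # looks like markup, not art
        return None
    return comment

if __name__ == "__main__":
    page = """<!--
    ##############################################
    #   a hypothetical forty-plus character      #
    #   signature block, three lines tall        #
    ##############################################
    -->
    <html><body>hello</body></html>"""
    print(first_comment_ascii_art(page))
```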

What did he find? Well, there are far too many ASCII signatures for [Victor] to put up on his web page, but he did provide a nice sample of what he found. They’re mostly logos, although there’s a Hypnotoad and an Aperture Science sentry turret in there.

If you’d like to try out [Victor]’s script, he made everything available on GitHub.

Finding The Average Of Every Font

An old book – the smell, the texture of the slowly rotting paper, and the smudges and margin notes accrued over decades – is one of the finer things in life, taken for granted much too often. We’re bombarded with high-precision vector typefaces all day, but [Dan]’s Avería font is beautiful in its irregularity. [Dan] made a font that is the average of all the fonts installed on his computer, and the result looks surprisingly great.

[Dan] started his journey down the generative font path by making images of every letter of all his fonts and mashing them together with a PHP script. The result was a terribly blurry font, and unfortunately this had been done before. [Dan] wanted a font with clearly defined edges, though, so the obvious solution would be to take the grayscale result of his first experiment, set a threshold, and make a monochromatic image. This plan didn’t pan out, and [Dan] needed a cleverer way to go about things.

The solution to the problem is astonishingly simple; [Dan] took the perimeter of each font’s version of a glyph and divided it into hundreds of points. These points could then be averaged in 2D space, making a real ‘average’ font.
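Here’s a rough Python/NumPy sketch of that averaging idea – ours, not [Dan]’s code: resample every font’s outline for a glyph to the same number of evenly spaced perimeter points, then average the corresponding points. Lining up starting points and winding directions is glossed over here, and real glyphs have multiple contours (the hole in an ‘o’, the dot on an ‘i’), but the core trick is just this point-wise mean.

```python
import numpy as np

def resample_outline(points, n=200):
    """Resample a closed glyph outline to n points, evenly spaced along
    its perimeter, so outlines from different fonts can be compared
    point-for-point."""
    pts = np.asarray(points, dtype=float)
    closed = np.vstack([pts, pts[:1]])                 # close the loop
    seg = np.linalg.norm(np.diff(closed, axis=0), axis=1)
    dist = np.concatenate([[0.0], np.cumsum(seg)])     # arc length at each vertex
    targets = np.linspace(0.0, dist[-1], n, endpoint=False)
    x = np.interp(targets, dist, closed[:, 0])
    y = np.interp(targets, dist, closed[:, 1])
    return np.column_stack([x, y])

def average_glyph(outlines, n=200):
    """Point-wise average of one character's outline across many fonts."""
    resampled = np.stack([resample_outline(o, n) for o in outlines])
    return resampled.mean(axis=0)

if __name__ == "__main__":
    # Two toy "fonts" for the same glyph: a square and a wider rectangle.
    square = [(0, 0), (1, 0), (1, 1), (0, 1)]
    rect = [(0, 0), (2, 0), (2, 1), (0, 1)]
    print(average_glyph([square, rect], n=8))
```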

Even though this project isn’t the usual ‘Arduino doing something’ fare, [Dan] came up with a really clever way of producing something really cool. It’s enough of a hack in our book. Tip o’ the hat to [Aleks Clark] for sending this one in.