Racing The Beam On A Thin Client, In FPGAs

A few years back, a company by the name of Pano Logic launched a line of FPGA-based thin clients. Sadly, the market never materialized, and the majority of the stock ended up on eBay, where it was eventually snapped up by eager hackers. [Tom] is one of those very hackers, and decided to try some raytracing experiments with the hardware.

[Tom] has one of the earlier Pano Logic clients, with VGA output and a Xilinx Spartan-3E 1600 FPGA under the hood. With limited RAM inside the FPGA, and no desire to code a custom DRAM controller for the memory on the board, there just wasn’t room for a framebuffer. Instead, the raytracer “races the beam” – calculating each pixel on the fly, fast enough to keep ahead of the monitor’s scanout.
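
If you haven’t seen the trick before, here’s a minimal Verilog sketch of what “racing the beam” looks like, assuming standard 640×480 VGA timing; the checkerboard stand-in and all module and signal names are ours for illustration, not [Tom]’s actual raytracing core:

```verilog
// Minimal "race the beam" sketch: no framebuffer, the pixel color is
// computed on the fly from the current beam position.
// Constants are for standard 640x480@60 VGA (25.175 MHz pixel clock).
module beam_racer (
    input  wire       clk_pix,  // ~25 MHz pixel clock
    output reg        hsync,
    output reg        vsync,
    output wire [2:0] rgb       // one bit per channel, for simplicity
);
    // Horizontal: 640 visible + 16 front porch + 96 sync + 48 back porch = 800
    // Vertical:   480 visible + 10 front porch +  2 sync + 33 back porch = 525
    reg [9:0] hcount = 0;
    reg [9:0] vcount = 0;

    always @(posedge clk_pix) begin
        if (hcount == 799) begin
            hcount <= 0;
            vcount <= (vcount == 524) ? 10'd0 : vcount + 1'b1;
        end else begin
            hcount <= hcount + 1'b1;
        end
        hsync <= ~(hcount >= 656 && hcount < 752);  // negative polarity
        vsync <= ~(vcount >= 490 && vcount < 492);
    end

    wire visible = (hcount < 640) && (vcount < 480);

    // Stand-in for the per-pixel ray computation: a checkerboard.
    // A real raytracer evaluates the ray/scene intersection here,
    // pipelined deeply enough to keep up with the pixel clock.
    wire checker = hcount[5] ^ vcount[5];
    assign rgb = visible ? {3{checker}} : 3'b000;
endmodule
```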

This approach means that resource management is key, and [Tom] notes that even seemingly minor changes to the raytraced scene demand inordinately large increases in computation. Simply adding a shadow and a directional light increased core logic utilisation from 66% to 92%!

While the approach may not scale, [Tom] was able to implement the classic demo of a reflective sphere bouncing on a checkered plane, and even used an onboard CPU core to add some camera motion to liven things up. It’s a real nuts-and-bolts walkthrough of how to work with limited resources on an FPGA platform. Code is available on GitHub if you fancy taking a further peek under the hood.

If you’re new to FPGAs yourself, why not check out our FPGA bootcamp?

Getting Started With Free ARM Cores On Xilinx

We reported earlier on Xilinx offering free-to-use ARM Cortex-M1 and Cortex-M3 cores. [Adam Taylor] posted his experiences getting things working, and [Geek Til It Hertz] made a video based on the same material, which you can see as the second video below.

The post covers using the Arty A35T or Arty S50 FPGA boards (based on Artix FPGAs) and the Xilinx Vivado software. Although Vivado allows you to do conventional FPGA development, it can also compose function blocks into complete CPU systems, and that’s really what’s going on here.

Continue reading “Getting Started With Free ARM Cores On Xilinx”

MIPI CSI-2 Implementation In FPGAs

[Adam Taylor] always has interesting FPGA posts and his latest is no exception. He wanted to use a Zynq for image processing. Makes sense. You can do the high-speed parallel parts in the FPGA fabric and do higher-level processing on the built-in CPU. The problem is, of course, you need to get the video data into the system. [Adam] elected to use the Mobile Industry Processor Interface (MIPI) Camera Serial Interface Issue 2 (CSI-2).

This high-speed serial interface is optimized for data flowing in one direction. The camera, acting as the master, sends data over one or more serial data lanes along with a clock. To increase throughput, data transfers on both the rising and falling clock edges. The host side also has a fairly standard I2C master for sending commands to the camera which, for the purposes of I2C, is the slave.
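
As a rough illustration of that double-data-rate sampling, here’s a Verilog sketch with hypothetical signal names; a real CSI-2 D-PHY front end is considerably more involved (differential lanes, byte alignment, lane merging) and would use the FPGA’s dedicated IDDR/ISERDES primitives:

```verilog
// Illustrative DDR capture: one CSI-2 data lane sampled on both edges
// of the lane clock, then presented as a 2-bit pair per clock cycle.
module ddr_capture (
    input  wire       lane_clk,  // clock from the camera (master)
    input  wire       lane_d,    // one serial data lane
    output reg  [1:0] bits       // {bit from rising edge, bit from falling edge}
);
    reg d_rise, d_fall;

    always @(posedge lane_clk)
        d_rise <= lane_d;

    always @(negedge lane_clk)
        d_fall <= lane_d;

    // Recombine in one domain; a production design would use an IDDR
    // primitive so both samples come out safely on the same edge.
    always @(posedge lane_clk)
        bits <= {d_rise, d_fall};
endmodule
```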

Continue reading “MIPI CSI-2 Implementation In FPGAs”

RISC-V CPU Gets A Peripheral

One of the ways people use FPGAs is to have part of the FPGA fabric hold a CPU. That makes sense, because CPUs are good at some jobs that are hard to do with an FPGA, and vice versa. Now that the RISC-V architecture is freely available, it makes sense to use it as an FPGA-based CPU. [Clifford Wolf] created PicoSoC, a RISC-V CPU made to work as an SoC, or System on Chip, with a Lattice 8K evaluation board. [Mattvenn] ported that over to a TinyFPGA board, which also contains a Lattice FPGA, and shows an example of interfacing it with a WS2812 intelligent LED peripheral. You can see a video about the project below.
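
To give a flavor of how a peripheral like that hangs off the CPU, here’s a sketch of a memory-mapped register on PicoSoC’s simple valid/ready native bus; the address and the 24-bit GRB color register are our own assumptions for illustration, not [Mattvenn]’s exact design:

```verilog
// Sketch of a memory-mapped register on PicoSoC's native bus
// (valid/ready handshake with byte write strobes). The address and
// the 24-bit GRB color register are illustrative assumptions.
module ws2812_reg (
    input  wire        clk,
    input  wire        resetn,
    input  wire        mem_valid,
    input  wire [31:0] mem_addr,
    input  wire [31:0] mem_wdata,
    input  wire [3:0]  mem_wstrb,
    output reg         mem_ready,
    output reg  [31:0] mem_rdata,
    output reg  [23:0] grb          // feeds a WS2812 bit-timing engine
);
    localparam [31:0] ADDR = 32'h0300_0000;  // hypothetical address
    wire sel = mem_valid && (mem_addr == ADDR);

    always @(posedge clk) begin
        if (!resetn) begin
            mem_ready <= 1'b0;
            grb       <= 24'd0;
        end else begin
            mem_ready <= sel && !mem_ready;  // one-cycle ready pulse
            if (sel && mem_wstrb != 4'b0)
                grb <= mem_wdata[23:0];      // write: latch LED color
            mem_rdata <= {8'd0, grb};        // read: current color
        end
    end
endmodule
```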

True to the open source nature of RISC-V, the project uses the open source IceStorm toolchain, which we’ve talked about many times before. [Matt] thoughtfully provides the firmware precompiled, so you don’t have to install gcc for RISC-V unless you want to write your own software. Which, of course, you will.

Continue reading “RISC-V CPU Gets A Peripheral”

Quick Face Recognition With An FPGA

It’s the 21st century, and according to a lot of sci-fi movies we should have perfected AI by now, right? Well, we are getting there, and this project from a group of Cornell University students, titled “FPGA kNN Recognition”, is a graceful attempt at facial recognition.

For the uninitiated, the k-nearest neighbors (kNN) algorithm is a very simple classification algorithm: it measures the similarity between a data point being examined and sets of already-labeled data to predict where that data point belongs. In this project, the authors use a camera to take an image and then save its histogram instead of the entire image. To train the system, the camera takes mug-shots of sorts, creating a database of histograms tagged as belonging to the same face. This is repeated for a number of faces, and the accompanying video shows it to be a relatively quick process.
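
The histogram step is simple enough to sketch in Verilog. This is our own illustration, not the students’ code, and the bin count and counter width are assumptions:

```verilog
// Sketch of the training-side histogram capture: stream pixels in and
// count how many land in each bin. The bin count (one per 8-bit gray
// level) and counter width are assumptions, not the students' values.
module pixel_histogram #(
    parameter PIX_W = 8,   // 8-bit grayscale pixels -> 256 bins
    parameter CNT_W = 19   // wide enough for a 640x480 frame
) (
    input  wire             clk,
    input  wire             clear,     // wipe counts before a new frame
    input  wire             pix_valid,
    input  wire [PIX_W-1:0] pix,
    input  wire [PIX_W-1:0] rd_bin,    // read out the finished histogram
    output wire [CNT_W-1:0] rd_count
);
    reg [CNT_W-1:0] bins [0:(1<<PIX_W)-1];
    integer i;

    always @(posedge clk) begin
        if (clear)
            // Clearing every bin in one cycle keeps the sketch short; a
            // real design would sweep an address counter instead.
            for (i = 0; i < (1 << PIX_W); i = i + 1)
                bins[i] <= 0;
        else if (pix_valid)
            bins[pix] <= bins[pix] + 1'b1;  // bump this pixel's bin
    end

    assign rd_count = bins[rd_bin];
endmodule
```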

Classification, or ‘guess who’, takes an image from the camera and compares it with all the faces already stored. The system selects the one with the highest similarity, and the claimed results are pretty fantastic, though that is not even the brilliant part. The implementation is done on an FPGA, which means the whole process has been pipelined to reduce computation time. That makes the project worth a look, especially for people getting into FPGA-based development. There are hardware implementations of a k-distance calculator, a sorter, and a selector. Be sure to read through the text for the sorting algorithm, as we found it quite interesting.
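
Here’s a hedged sketch of what the distance step of a hardware kNN might look like, streaming two histograms bin-by-bin and accumulating an L1 distance; the exact metric and widths the Cornell team used may well differ:

```verilog
// Sketch of the distance step in a hardware kNN: stream a query
// histogram and one stored histogram bin-by-bin and accumulate the
// sum of absolute differences (an L1 distance). Widths and the bin
// count are assumptions for illustration.
module l1_distance #(
    parameter BINS  = 256,
    parameter WIDTH = 16
) (
    input  wire             clk,
    input  wire             start,       // pulse to begin a comparison
    input  wire [WIDTH-1:0] query_bin,   // streamed in, one bin per cycle
    input  wire [WIDTH-1:0] stored_bin,
    output reg  [WIDTH+8:0] distance = 0,
    output reg              done = 1'b0
);
    reg [8:0] idx  = 0;
    reg       busy = 1'b0;

    wire [WIDTH-1:0] diff = (query_bin > stored_bin)
                          ? (query_bin - stored_bin)
                          : (stored_bin - query_bin);

    always @(posedge clk) begin
        done <= 1'b0;
        if (start) begin
            idx      <= 0;
            distance <= 0;
            busy     <= 1'b1;
        end else if (busy) begin
            distance <= distance + diff;  // fold in one bin per cycle
            if (idx == BINS - 1) begin
                busy <= 1'b0;
                done <= 1'b1;             // distance is now complete
            end
            idx <= idx + 1'b1;
        end
    end
endmodule
```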

Arduino recently released the MKR Vidor 4000 board, which has an FPGA on it, and there are many open source boards out in the wild that you can easily get started with today. We hope to see some of these in conference badges in the coming years.

Continue reading “Quick Face Recognition With An FPGA”

FPGA Testbenches Made Easier

You finally finish writing the Verilog for that amazing new DSP function that will revolutionize human society and make you rich. Does it work? Your first instinct, of course, is to blow it into your FPGA of choice and see if it works. If it does, that was a great idea. If it doesn’t, it was a terrible idea, because it is typically hard to look inside the FPGA. That’s why you’ll typically simulate your logic on a desktop computer before you commit it to the FPGA. But that means you have to delay gratification long enough to write a testbench: a piece of hardware description language (HDL) code that exercises the function you wrote. In this post I’ll show you a small piece of software that can read your Verilog module and automatically create most of a testbench for you. The code originally came from GitHub, but I wanted to make some changes to it, so I forked it, and I’ll tell you about the changes I made. This isn’t specific to a particular FPGA; any Verilog project can use the tool to generate a simple starter testbench.

Writing a testbench isn’t that hard. You usually write it in the same language as the original code, but since it won’t reside in silicon, you can do things in the simulator that you can’t get away with in code you’ll synthesize. However, it is a bit painful to always write more or less the same code, especially if you have a lot of modules to test. And it is a good idea to test small modules before linking them together, and then to test them linked together, too. With this little Python script, it is very easy to generate a simple testbench and then elaborate it further. It isn’t life-changing, but it does save some time. If you want to try this out, you’ll need something to run the Python script on, of course. You’ll also need a Verilog simulator, or you can use EDA Playground to try all this out in your browser.
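
To give you an idea of the output, here’s the general shape of the kind of starter testbench the script produces: instantiate the device under test, generate a clock, apply reset, dump a waveform, and finish. The `counter` DUT and the exact details here are hypothetical; the generated file will differ:

```verilog
// The general shape of a generated starter testbench: instantiate the
// DUT, wiggle a clock, apply reset, dump a waveform, and stop. The
// `counter` module here is a hypothetical DUT, not part of the tool.
`timescale 1ns / 1ps
module counter_tb;
    reg        clk = 0;
    reg        rst = 1;
    wire [7:0] count;

    // Device under test
    counter dut (
        .clk   (clk),
        .rst   (rst),
        .count (count)
    );

    always #5 clk = ~clk;  // 100 MHz clock

    initial begin
        $dumpfile("counter_tb.vcd");  // open the result in gtkwave
        $dumpvars(0, counter_tb);
        #20 rst = 0;                  // release reset after two cycles
        #500 $finish;                 // run long enough to see activity
    end
endmodule
```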

Continue reading “FPGA Testbenches Made Easier”

X-Ray Vision For FPGAs: Using Verifla

Last time I talked about how I took the open source Verifla logic analyzer and modified it to have some extra features. As promised, this time I want to show it in action, so you can incorporate it into your own designs. The original code didn’t actually capture your data. Instead, it created a Verilog simulation that would produce identical outputs to your FPGA. If you were trying to do some black box simulation, that probably makes sense. I just wanted to view data, so I created a simple C program that generates a VCD file you can read with common tools like gtkwave. It is all on GitHub along with the original files, even though some of those are not updated to match the new code (notably, the PDF document and the examples).

If you have enough pins, of course, you can use an external logic analyzer. If you have enough free space on the FPGA, you could put something like SUMP or SUMP2 in your design, which would be very flexible. However, since those analyzers are made to be configurable from the host computer, they carry a lot of circuitry that will compete with yours for FPGA space. Verifla is configured at compile time instead, which is not as convenient but gives it a smaller footprint.
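
To make that compile-time tradeoff concrete, here’s a minimal sketch of a capture core in the same spirit: the probe width, buffer depth, and trigger value are all fixed as parameters at synthesis. The names are ours, not Verifla’s actual interface:

```verilog
// Minimal flavor of a compile-time-configured capture core: probe
// width, depth, and trigger pattern are parameters fixed at synthesis,
// which is what keeps the footprint small. Names are illustrative.
module capture_core #(
    parameter WIDTH = 8,
    parameter DEPTH = 256,
    parameter [WIDTH-1:0] TRIGGER = 8'hA5  // fixed at compile time
) (
    input  wire             clk,
    input  wire [WIDTH-1:0] probe,       // the signals under observation
    output reg              full = 1'b0  // buffer filled, ready to dump
);
    reg [WIDTH-1:0] buffer [0:DEPTH-1];  // infers block RAM
    reg [$clog2(DEPTH)-1:0] waddr = 0;
    reg armed = 1'b1;

    always @(posedge clk) begin
        if (armed && probe == TRIGGER)
            armed <= 1'b0;               // trigger hit: start capturing
        if (!armed && !full) begin
            buffer[waddr] <= probe;
            if (waddr == DEPTH - 1)
                full <= 1'b1;
            waddr <= waddr + 1'b1;
        end
    end
endmodule
```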

Continue reading “X-Ray Vision For FPGAs: Using Verifla”