The Raspberry Pi is the Arduino of 2016, and that means shields, hats, add-ons, and other fun toys that can be plugged right into the GPIO pins of a Pi. For this year’s Hackaday Prize, [Valentin] is combining the Pi with the next age of homebrew computation. He’s developed the Flea Ohm, an FPGA backpack or hat for the Pi Zero.
The Flea Ohm is based on Lattice’s ECP5 FPGA featuring 24k LUTs and 112kB of BRAM. That’s enough for some relatively interesting applications, but the real fun comes from the added 32MB or 128MB of SDRAM, a micro SD card slot, a USB + PS/2 host port, and an LVDS output.
The combination of Raspberry Pis and FPGAs is extremely interesting and seems to be one of the best FPGA learning platforms anyone could imagine. Another Hackaday Prize entry, the ZinqBerry, does a similar trick, but instead of a Pi hat, it drops a Xilinx Zynq (an FPGA paired with an ARM Cortex-A9 core) onto a board with Ethernet, HDMI, and USB.
Whether it’s a Flea or a Zinq, the age of FPGA’d Raspberry Pis is quickly approaching, and hopefully we’ll see them as finalists in the Hackaday Prize. You can check out a video of the Flea Ohm below.
Playing around with FPGAs used to be a daunting prospect. You had to fork out a hundred bucks or so for a development kit, sign the Devil’s bargain to get your hands on a toolchain, and only then could you start learning. In the last few years, a number of forces have converged to bring the FPGA experience within the reach of even the cheapest and most principled open-source hacker.
[Ken Boak] and [Alan Wood] put together a no-nonsense FPGA board with the goal of getting the price under $30. They basically took a Lattice iCE40HX4K, an STM32F103 ARM Cortex-M3 microcontroller, and some SRAM, and put it all together on a single board.
The Lattice part is a natural choice because the IceStorm project created a full open-source toolchain for it. (Watch [Clifford Wolf]’s presentation). The ARM chip is there to load the bitstream into the FPGA at boot-up, and it also brings USB connectivity, ADC pins, and other peripherals into the mix. There’s enough RAM on board to get a lot done, and between the ARM and the FPGA, there are more GPIO pins than we can count.
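That boot-time handoff is essentially Lattice’s SPI slave configuration sequence: hold the FPGA in reset, release it with its chip select low, stream the bitstream bytes over SPI, then clock in a few extra bytes until CDONE goes high. Here’s a rough sketch of how an MCU might do it; the pin names and the GPIO/SPI/delay helpers are placeholders, not the board’s actual firmware API.

```c
/* Rough sketch of iCE40 SPI slave configuration, as an MCU might do it.
 * gpio_write(), spi_send(), delay_us() and the pin names are placeholders,
 * not this project's real firmware. SPI runs in mode 3, MSB first. */
#include <stdint.h>
#include <stddef.h>

extern void gpio_write(int pin, int level);        /* hypothetical GPIO helper */
extern int  gpio_read(int pin);                    /* hypothetical GPIO read   */
extern void spi_send(const uint8_t *buf, size_t n);/* hypothetical SPI write   */
extern void delay_us(uint32_t us);

enum { PIN_CRESET, PIN_SS, PIN_CDONE };            /* placeholder pin IDs      */

int ice40_configure(const uint8_t *bitstream, size_t len)
{
    uint8_t dummy[8] = {0};

    gpio_write(PIN_SS, 0);         /* select the FPGA's SPI config port      */
    gpio_write(PIN_CRESET, 0);     /* hold the FPGA in reset                 */
    delay_us(1);
    gpio_write(PIN_CRESET, 1);     /* release reset; FPGA samples SS low     */
    delay_us(1200);                /* let it clear internal config memory    */

    spi_send(bitstream, len);      /* clock the whole bitstream in           */
    spi_send(dummy, sizeof dummy); /* extra clocks so CDONE can assert       */

    gpio_write(PIN_SS, 1);
    return gpio_read(PIN_CDONE);   /* high means configuration succeeded     */
}
```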
Modeling an open processor core? Sure. High-speed digital signal capture? Why not. It even connects to a Raspberry Pi, so you could use the whole affair as a high-speed peripheral. With so much flexibility, there’s very little that you couldn’t do with this thing. The trick is going to be taming the beast. And that’s where you come in.
Every year, new models of laptops arrive on the shelves. This means that old laptops usually end up in landfills, which isn’t exactly ideal. If you don’t want to waste an old or obsolete laptop, though, there’s a way to reuse at least the screen from one. Simply grab an FPGA off the shelf and get to work.
[Martin] shows us all how to perform this feat on our own, and goes into great detail about how all of the electronics involved work. Once everything was disassembled and the FPGA was wired up, it took him a substantial amount of time just to turn the display on. From there it was all downhill: [Martin] can now get any pattern to show up on the screen, within reason. The only limit to his display now seems to be the lack of external RAM. He currently uses the setup to drive an impressive-looking clock.
This is a big step from days past, when it was next to impossible to repurpose a laptop screen. Eventually someone discovered a way to drive these displays, and now there are cheap electronics from China that can usually get a screen like this running. It’s impressive to see it done from scratch, though, and the amount of detail in the videos is a great way to understand how everything is working.
For humans, moving our arms and hands onto an object to pick it up is pretty easy, but for manipulators, it’s a different story. Once we’ve found the object we want our robot to pick up, we still need to plan a path from the robot’s hand to the object while lugging the remaining limbs along for the ride without snagging them on any obstacles in the way. The space of all possible joint configurations is called the “joint configuration space.” Planning a collision-free path through it is called path planning, and it’s a tricky problem to solve quickly in the world of robotics.
These days, roboticists have nailed down a few algorithms, but running them takes hundreds of milliseconds. The result? Robots spend most of their time “thinking” about moving, rather than executing the actual move.
It’s worth asking: why is this problem so hard, and how did hardware make it faster? There are a few layers here, but it’s worth investigating the big ones. Planning a path from point A to point B usually happens probabilistically (randomly sampling intermediate configurations until a route to the goal emerges), and if a path exists, the algorithm will find it. The issue, however, arises when we need to lug our remaining limbs through the space to reach that object. This is captured by the swept volume, the entire shape that our ’bot’s limbs envelop while getting from A to B. What we need is not just a collision-free path for the hand, but for the entire set of joints.
Encoding a map on a computer is done by discretizing the space into 3D voxels at sufficient resolution. If a voxel is occupied by an obstacle, it gets one state; if it’s not, it gets the other. To decide whether a path is OK, the set of voxels that represents the swept volume needs to be compared against the voxels that represent the environment. Here’s where the FPGA kicks in with the speed boost. In the hardware implementation, voxel occupancy is encoded in bits, and the entire volume comparison is done in parallel. Nifty to have custom hardware for this, right?
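In software terms, that comparison is just an AND of two occupancy bitmaps, which a CPU has to grind through word by word. A toy sketch of the idea, with a made-up grid size and packing:

```c
/* Toy model of the swept-volume check: one bit per voxel, collision if any
 * bit is set in both the environment map and the swept-volume map.
 * A CPU loops over the words; the FPGA evaluates them all in parallel. */
#include <stdint.h>
#include <stdbool.h>

#define GRID  64                             /* 64x64x64 voxels (made up) */
#define WORDS (GRID * GRID * GRID / 64)      /* packed into 64-bit words  */

typedef struct { uint64_t bits[WORDS]; } voxel_map;

static inline void set_voxel(voxel_map *m, int x, int y, int z)
{
    unsigned i = (unsigned)((z * GRID + y) * GRID + x);
    m->bits[i >> 6] |= 1ULL << (i & 63);
}

/* True if the swept volume touches any occupied voxel of the environment. */
bool path_collides(const voxel_map *env, const voxel_map *swept)
{
    for (int w = 0; w < WORDS; w++)
        if (env->bits[w] & swept->bits[w])
            return true;
    return false;
}
```

The hardware version essentially unrolls that loop into combinational logic, which is where the big speedup comes from.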
We applaud the folks at Duke University for getting this up and running, and we can’t wait to see custom “robot path-planning chips” hit the market some day. For now, though, if you’d like to sink your teeth into seeing how FPGAs can parallelize conventional algorithms, check out our linear-time sorting feature from a few months back.
Slowly, very slowly, the time when we don’t subject embedded beginners to AVRs and PICs is coming. At a glacial pace, FPGA development platforms are becoming ever more capable and less expensive. [Eric Brombaugh] has been playing around with both ARMs and FPGAs for a while now and decided to combine these two loves into a single board that’s capable of a lot.
This board is fittingly called an STM32F303 + ice5 development board, and does exactly what it says on the tin. There’s an STM32F303 on board providing a 32-bit CPU running at 72 MHz, 48 kB of SRAM, a quarter meg of Flash, and enough peripherals to keep anyone happy. The FPGA side of this board is a Lattice iCE5 with about 3k lookup tables (LUTs) and one-time-programmable non-volatile configuration memory.
The connections between the ARM and FPGA include a dedicated SPI port and enough GPIOs to implement full-duplex I2S and a USART. As with all good projects, [Eric] has shared all the files, schematics, and BOMs required to make this board your very own reality, and has provided a few links to the development toolchains. While the FPGA is from Lattice’s iCE40 family, it isn’t supported by the open-source Project IceStorm toolchain. Still, it’s a very capable board for ARM and FPGA development.
To decode the device’s packets he reached for his RTL-SDR receiver and took a look at the signal in software. GQRX confirmed the presence of the carrier and allowed him to record a raw I/Q file, which he could then supply to Inspectrum to analyse the packet structure. He found it to be a simple on-off keying scheme, with bits expressed through differing pulse widths. He was then able to create a GNU Radio project to read and decode the packets in real time.
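If you’d rather skip GNU Radio, pulse-width OOK like this is simple enough to decode with a few lines of code: measure how long the carrier stays on and threshold the pulse length. The sketch below works on a pre-sliced on/off sample stream; the sample rate and threshold are invented for illustration, not taken from his project.

```c
/* Sketch of pulse-width OOK decoding: measure how long the carrier stays on,
 * and call a long pulse a 1 and a short pulse a 0 (or vice versa, depending
 * on the protocol). Sample rate and threshold below are illustrative only. */
#include <stdint.h>
#include <stddef.h>

#define SAMPLE_RATE_HZ 250000
#define THRESH_US      600        /* pulses longer than this decode as 1 */

/* samples[] holds 1 where the carrier was detected, 0 where it was not. */
size_t decode_ook(const uint8_t *samples, size_t n, uint8_t *bits, size_t max)
{
    size_t nbits = 0, run = 0;
    for (size_t i = 0; i < n; i++) {
        if (samples[i]) {
            run++;                              /* still inside a pulse  */
        } else if (run) {                       /* pulse just ended      */
            uint32_t us = (uint32_t)(run * 1000000ULL / SAMPLE_RATE_HZ);
            if (nbits < max)
                bits[nbits++] = (us > THRESH_US) ? 1 : 0;
            run = 0;
        }
    }
    return nbits;
}
```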
Emulating the transmitter was then a fairly straightforward process of generating a 350MHz clock using the on-board PLL and gating it with his generated data stream to provide modulation. The result was able to control his fan with a short wire antenna; indeed, he was worried that it might also be doing so for other similar fans in his apartment complex. You can take a look at his source code on GitHub if you would like to try something similar.
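Conceptually, that modulator is nothing more than an AND gate: the PLL’s carrier reaches the output pin only while the data stream is high. As a purely illustrative software model of the same idea (the real design gates a clock net, not samples):

```c
/* One-sample OOK modulator model: the antenna output is the carrier gated
 * by the data bit. In the FPGA this is literally carrier_clk AND data_bit. */
static inline int ook_out(int carrier_sample, int data_bit)
{
    return carrier_sample & data_bit;
}
```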
It’s worth pointing out that a transmitter like this will radiate significant harmonics at multiples of its base frequency, and thus is likely to cause interference without a filter on its output. It will also be breaking all the rules set out by whoever the spectrum regulator is where you live, despite its low power. However, it’s an interesting project to read about, with its reverse engineering and slightly novel use of an FPGA.
It always surprises us that magnetic levitation seems to have two main purposes: trains and toys. It is reasonably inexpensive to get floating Bluetooth speakers, globes, or just floating platforms for display. The idea is reasonably simple, especially if you only care about levitation in two dimensions. You let an electromagnet pull the levitating object (which is, of course, ferrous). A sensor detects when the object is at a certain height and shuts off the magnet. The object falls, which turns the magnet back on, repeating the process. If you do it right, the object will reach equilibrium and hover near the sensor.
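That on/off scheme is a classic bang-bang controller, which you could sketch as a single comparison; the names here are made up for illustration:

```c
/* Bang-bang levitation: energize the coil when the object sags below the
 * setpoint, release it when the object rises above. Names are illustrative. */
static inline int coil_on(int sensed_height, int setpoint)
{
    return sensed_height < setpoint;   /* too low -> pull it back up */
}
```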
Some students at Cornell University decided to implement the control loop to produce levitation using an Altera FPGA. An inductive sensor determines the position of an iron ball, and the device uses a standard proportional-integral-derivative (PID) control loop. Both the control loop and the PWM generation happen in the FPGA hardware. You can see a video of their result below.
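For a sense of what the FPGA evaluates each control period, here is a rough fixed-point model of a PID loop driving a PWM duty cycle. The gains, scaling, and 8-bit PWM range are placeholders, not the students’ actual numbers.

```c
/* Sketch of the PID + PWM loop the FPGA implements in hardware. Gains,
 * scaling and the 8-bit PWM range are placeholders, not the real design. */
#include <stdint.h>

#define KP 40          /* proportional gain (illustrative) */
#define KI 1           /* integral gain                    */
#define KD 160         /* derivative gain                  */
#define PWM_MAX 255    /* 8-bit PWM duty range             */

static int32_t integral, prev_error;

/* Called once per control period with the sensed and desired positions;
 * returns the PWM duty cycle that drives the electromagnet. */
uint8_t pid_update(int32_t position, int32_t setpoint)
{
    int32_t error = setpoint - position;
    integral += error;
    int32_t derivative = error - prev_error;
    prev_error = error;

    /* scale the fixed-point sum back down into the PWM range */
    int32_t out = (KP * error + KI * integral + KD * derivative) / 256;
    if (out < 0) out = 0;
    if (out > PWM_MAX) out = PWM_MAX;
    return (uint8_t)out;
}
```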