Giving The World A Better SID

Here’s a business plan for you, should you ever run into an old silicon fab sitting in a dumpster: build Commodore SID chips. The MOS 6581 and 8580 are synthesizers on a chip, famously used in the demoscene, and even today command prices of up to $40 USD per chip. There’s a market for this, and with the right process, this could conceivably be a viable business plan.

Finding a silicon fab in a dumpster is a longshot, but here’s the next best thing: the FPGASID project, an effort to re-create the now-unobtainium MOS 6581 found in the Commodore 64.

The Commodore SID chip has been out of production for a while now, and nearly every available SID chip has already been snapped up by people building MIDIbox SIDs, or by Elektron for their SidStation, which has itself been out of production for nearly a decade. There is a demand for SID chips, one that has been filled by “clones” or recreations using ATmegas, Propellers, and nearly every other microcontroller architecture available. While these clones can get the three voices of the SID right, there’s one universal problem: the SID had analog filters, and no two SIDs ever sounded alike.

From the audio samples available on the project page for the FPGASID, the filters might be a solved problem. The output from the FPGASID sounds a lot like the output from a vintage SID. Whether or not this is what everyone agrees a SID should sound like is another matter entirely, but this is the best attempt so far to drag the synth on a chip found in the Commodore 64 into modern times.

The files, firmware, and FPGA special sauce aren’t available yet, but the FPGASID is in alpha testing, with a proper release tentatively scheduled for early 2017. Maybe now it’s time to dig out those plans for the Uber MIDIbox, with octophonic SID goodness.

New Part Day: Pynq Zynq

FPGAs are the future, and there’s a chip out there that brings us the future today. I speak, of course, of the Xilinx Zynq, a combination of a high-power ARM A9 processor and a very capable FPGA. Now the Zynq has been made Pynq with a new dev board from Digilent.

The heart of this board is, of course, the Xilinx Zynq, packing a dual-core ARM Cortex-A9 processor and an FPGA with 1.3 million reconfigurable gates. This is a dev board, though, and with that comes memory and peripherals. To the board, Digilent added 512MB of DDR3 RAM, a microSD slot, HDMI in and out, Ethernet, USB host, and GPIOs, some of which match the standard Arduino configuration.

This isn’t the first Zynq board out there by any measure. Last year, [antti] had a lot of fun with the Zynq and created the ZynqBerry, a Zynq in a Raspberry Pi form factor, and a Zynq Arduino shield. Beyond those, we’ve seen the Zynq in a few research projects, but not so much on basic dev boards. The Pynq Zynq is among the first that will be produced in massive quantities.

There is, of course, one downside to the Pynq Zynq, and that is the price. It’s $229 USD, or $65 with an educational discount. That’s actually not that bad for what you’re getting. FPGAs will always be more expensive than an SoC stolen from a router or cell phone, no matter how powerful it is. That said, putting a powerful ARM processor and a hefty FPGA in a single package is an interesting proposition. Adding HDMI in and out even more so. Already we’ve seen a few interesting applications of the Zynq like synthesizers, quadcopters, and all of British radio. With this new board, hopefully a few enterprising FPGA gurus will pick one up and tell the rest of us mere mortals how to do some really cool stuff.

Tearing into Delta Sigma ADCs Part 2

In part one, I compared the different types of Analog to Digital Converters (ADCs) and discussed the roles and properties of Delta Sigma ADCs. I covered a lot of the theory behind these devices, so in this installment I set out to find a design or two that would help me demonstrate the important points, like oversampling, noise shaping, and the relationship between signal-to-noise ratio and resolution.

Modulator Implementation

Check out part one to see the block diagrams of what got us here. The schematics shown below are a couple of implementations I played with, depicting single-order and dual-order Delta Sigma modulators.

Basically, I used a clock-enabled, high-speed comparator, with both polarities available in case I got the logic backwards given my current burnout-to-grey-matter ratio. The video includes the actual schematic used.
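
If you’d like to poke at the same idea in software before warming up the soldering iron, here’s a minimal sketch of a single-order Delta Sigma modulator in Python. This is not the circuit above, just an illustration of the loop it implements: the clocked comparator’s 1-bit output is fed back and subtracted from the input, the difference is accumulated, and the density of ones in the output bitstream tracks the input signal. All of the parameters are made up for the example.

```python
import numpy as np

# Illustrative parameters only -- not taken from the article or video.
fs = 1_000_000      # oversampled modulator clock, Hz
f_in = 1_000        # input test tone, Hz
n = 65_536          # modulator clocks to simulate

t = np.arange(n) / fs
x = 0.5 * np.sin(2 * np.pi * f_in * t)   # keep the input well inside +/-1

integrator = 0.0
feedback = -1.0                  # 1-bit DAC output, +/-1
bits = np.zeros(n)
for i in range(n):
    integrator += x[i] - feedback              # delta, then sigma (accumulate)
    bits[i] = 1.0 if integrator > 0 else 0.0   # clocked comparator
    feedback = 1.0 if bits[i] else -1.0        # fed back to the summing node

# Crude decimation: average blocks of the 1-bit stream back into multi-bit
# samples. A real converter would use a sinc/CIC filter, but this shows how
# oversampling buys back resolution.
osr = 256
recovered = bits.reshape(-1, osr).mean(axis=1) * 2.0 - 1.0
```

Average more bits per output sample (a higher oversampling ratio) and the recovered waveform gets cleaner, which is exactly the signal-to-noise-versus-resolution trade we’re after; the noise shaping shows up if you take an FFT of the raw bitstream and watch the quantization noise pile up well above the signal band.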

Since I wasn’t designing for production, I accepted the need for three supply voltages: my bench supply was capable of providing them, and this widget is destined for the drawer with the other widgets made for just a few minutes of video time anyway.

Continue reading “Tearing into Delta Sigma ADCs Part 2”

DEF CON: BSODomizing In High Definition

A few years ago, [Kingpin] a.k.a. [Joe Grand] (a judge for the 2014 Hackaday Prize) designed the most beautiful electronic prank ever. The BSODomizer is a simple device with a pass-through connection for a VGA display and an infrared receiver. Plug the BSODomizer into an unsuspecting coworker’s monitor, press a button on a remote, and watch Microsoft’s blue screen of death appear. It’s brilliant, devious, and actually a pretty simple device if you pick the right microcontroller.

The original BSODomizer is getting a little long in the tooth. VGA is finally dead. The Propeller chip used to generate the video only generates text, and can’t reproduce Microsoft’s fancy new graphical error screens. HDMI is the future, and FPGAs have never been more accessible. For this year’s DEF CON, [Kingpin] and [Zoz] needed something to impress an audience that is just learning how to solder. They’ve revisited the BSODomizer, and have created the greatest hardware project at this year’s DEF CON.

Continue reading “DEF CON: BSODomizing In High Definition”

The Perfect Storm: Open ARM + FPGA Board

Playing around with FPGAs used to be a daunting prospect. You had to fork out a hundred bucks or so for a development kit, sign the Devil’s bargain to get your hands on a toolchain, and only then could you start learning. In the last few years, a number of forces have converged to bring the FPGA experience within the reach of even the cheapest and most principled open-source hacker.

[Ken Boak] and [Alan Wood] put together a no-nonsense FPGA board with the goal of getting the price under $30. They basically took a Lattice iCE40HX4K, an STM32F103 ARM Cortex-M3 microcontroller, and some SRAM, and put it all together on a single board.

The Lattice part is a natural choice because the IceStorm project created a full open-source toolchain for it. (Watch [Clifford Wolf]’s presentation). The ARM chip is there to load the bitstream into the FPGA on boot up, and also brings USB connectivity, ADC pins, and other peripherals into the mix. There’s enough RAM on board to get a lot done, and between the ARM and FPGA, there are more GPIO pins than we can count.
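
As a concrete picture of that boot-up step, configuring an iCE40 over its slave SPI port is a simple handshake: hold the FPGA in reset with its SPI chip select low, release reset, give it a moment to clear its configuration memory, then clock in the bitstream followed by some dummy clocks until CDONE goes high. The Python sketch below shows that flow driven from a Raspberry Pi rather than this board’s STM32, with made-up pin numbers and file name, and the timing recalled from the Lattice programming guide rather than taken from this board’s firmware, so treat it as a sketch, not gospel.

```python
import time
import spidev                # real library, but the wiring here is hypothetical
import RPi.GPIO as GPIO

CRESET = 22                  # to iCE40 CRESET_B (made-up pin numbers)
CDONE = 27                   # from iCE40 CDONE
SS = 17                      # to iCE40 SPI_SS_B, driven as a plain GPIO

GPIO.setmode(GPIO.BCM)
GPIO.setup([CRESET, SS], GPIO.OUT, initial=GPIO.HIGH)
GPIO.setup(CDONE, GPIO.IN)

spi = spidev.SpiDev()
spi.open(0, 0)
spi.no_cs = True             # we toggle SS ourselves
spi.mode = 0b11              # iCE40 slave SPI expects CPOL=1, CPHA=1
spi.max_speed_hz = 1_000_000

with open("bitstream.bin", "rb") as f:   # hypothetical file name
    bitstream = list(f.read())

# Reset into slave-SPI configuration mode: SS must be low when reset releases.
GPIO.output(SS, GPIO.LOW)
GPIO.output(CRESET, GPIO.LOW)
time.sleep(0.001)
GPIO.output(CRESET, GPIO.HIGH)
time.sleep(0.002)            # let the iCE40 clear its internal config memory

# Clock in the bitstream, then extra dummy clocks so configuration can finish.
spi.writebytes2(bitstream)
GPIO.output(SS, GPIO.HIGH)
spi.writebytes2([0x00] * 16)

print("CDONE:", "high, configured" if GPIO.input(CDONE) else "low, failed")
spi.close()
GPIO.cleanup()
```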

Modeling an open processor core? Sure. High-speed digital signal capture? Why not. It even connects to a Raspberry Pi, so you could use the whole affair as a high-speed peripheral. With so much flexibility, there’s very little that you couldn’t do with this thing. The trick is going to be taming the beast. And that’s where you come in.

FPGA Drives Old Laptop Screen

Every year, new models of laptops arrive on the shelves. This means that old laptops usually end up in landfills, which isn’t exactly ideal. If you don’t want to waste an old or obsolete laptop, though, there’s a way to reuse at least the screen out of one. Simply grab an FPGA off the shelf and get to work.

[Martin] shows us all how to perform this feat on our own, and goes into great detail about how all of the electronics involved work. Once everything was disassembled and the FPGA was wired up, it took him a substantial amount of time just to turn the display on. From there it was all downhill: [Martin] can now get any pattern to show up on the screen, within reason. The only limit to his display now seems to be the lack of external RAM. He currently uses the setup to drive an impressive-looking clock.

This is a big step from days past, when it was next to impossible to repurpose a laptop screen. Eventually someone discovered a way to drive these displays, and now there are cheap electronics from China that can usually get a screen like this running. It’s impressive to see it done from scratch, though, and the detail in the videos is a great way to understand how everything works.

Continue reading “FPGA Drives Old Laptop Screen”

Manipulators get a 1000x FPGA-based speed bump

For humans, moving our arms and hands onto an object to pick it up is pretty easy, but for manipulators, it’s a different story. Once we’ve found the object we want our robot to pick up, we still need to plan a path from the robot’s hand to the object, all the while lugging the remaining limbs along for the ride without snagging them on any obstacles in the way. The space of all possible joint configurations is called the “joint configuration space.” Planning a collision-free path through this space is called path planning, and it’s a tricky problem to solve quickly in the world of robotics.

These days, roboticists have nailed down a few algorithms, but executing them takes hundreds of milliseconds. The result? Robots spend most of their time “thinking” about moving, rather than executing the actual move.

Robots have been lurching along pretty slowly for a while, but recently researchers at Duke University [PDF] pushed much of the computation into hardware on an FPGA. The result? Path planning for a 6-degree-of-freedom arm takes under a millisecond to compute in hardware!

It’s worth asking: why is this problem so hard, and how did hardware make it faster? There are a few layers here, but it’s worth investigating the big ones. Planning a path from point A to point B usually happens probabilistically (randomly iterating toward the finishing point), and if a path exists, the algorithm will find it. The issue, however, arises when we need to lug our remaining limbs through the space to reach that object. This is called the swept volume, and it’s the entire shape that our ’bot’s limbs sweep through while getting from A to B. We need not just a collision-free path for the hand, but for the entire set of joints.

[Image: the swept volume of a manipulator. Image credit: Robot Motion Planning on a Chip]

Encoding a map on a computer is done by discretizing the space into 3D voxels at a sufficient resolution. If a voxel is occupied by an obstacle, it gets one state; if it’s not, it gets another. To compute whether or not a path is OK, the set of voxels representing the swept volume needs to be compared against the voxels representing the environment. Here’s where the FPGA kicks in with the speed bump. In the hardware implementation, voxel occupation is encoded in bits, and the entire volume calculation is done in parallel. Nifty to have custom hardware for this, right?
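
To make that concrete, here’s a rough software analogy in Python. Everything in it is invented for illustration (grid size, obstacle, swept volume) and it is not the paper’s implementation; the point is that the collision check is just an AND between two sets of occupancy bits, and the FPGA’s trick is wiring up that AND for every precomputed motion so all of them are evaluated at once.

```python
import numpy as np

# Hypothetical 64x64x64 voxel grid; True means the voxel is occupied.
GRID = (64, 64, 64)
obstacles = np.zeros(GRID, dtype=bool)
obstacles[20:30, 20:30, 0:40] = True            # a box-shaped obstacle

# Pretend this is the precomputed swept volume of one candidate motion.
swept_volume = np.zeros(GRID, dtype=bool)
swept_volume[10:25, 22:28, 5:35] = True

# A motion is collision-free iff no voxel is in both sets. In software this
# AND-and-reduce loops over ~260k voxels; in the FPGA each precomputed swept
# volume gets its own comparator logic, so every candidate motion in the
# roadmap is checked in parallel in a handful of clock cycles.
collides = np.any(swept_volume & obstacles)
print("collision" if collides else "clear")
```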

We applaud the folks at Duke University for getting this up-and-running, and we can’t wait to see custom “robot path-planning chips” hit the market some day. For now, though, if you’d like to sink your teeth into seeing how FPGAs can parallelize conventional algorithms, check out our linear-time sorting feature from a few months back.

Continue reading “Manipulators get a 1000x FPGA-based speed bump”