Did You Know Yosys Knows VHDL Too?

We’ve been fans of the Yosys / Nextpnr open-source FPGA toolchain for a long while now, and like [Michael] we had no idea that their oss-cad-suite installer sets up everything so that you can write in Verilog or VHDL, your choice. Very cool!

Verilog and VHDL are kind of like the C and Ada of the FPGA world. Verilog will seem familiar if you’re used to writing code for computers. For instance, it will turn integer variables into the wires that carry the binary values for you. VHDL looks odd from a software programmer’s perspective because it’s closer to the hardware and strongly typed: an 8-bit integer isn’t the same thing as eight wires in VHDL. VHDL is a bigger jump if you have software in your brain, but it’s also a lot closer to describing how the hardware actually works.
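To see what that strictness looks like in practice, here’s a minimal, generic VHDL sketch (not tied to any particular project): the counter’s value lives in an integer, and turning it into eight physical wires takes an explicit conversion.

```vhdl
-- In VHDL, an integer and an 8-bit bundle of wires are different types;
-- moving a value from one to the other takes an explicit conversion.
library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

entity counter8 is
  port (
    clk   : in  std_logic;
    count : out std_logic_vector(7 downto 0)  -- eight physical wires
  );
end entity;

architecture rtl of counter8 is
  signal value : integer range 0 to 255 := 0;  -- a number, not wires
begin
  process(clk)
  begin
    if rising_edge(clk) then
      if value = 255 then
        value <= 0;
      else
        value <= value + 1;
      end if;
    end if;
  end process;

  -- The explicit conversion chain: integer -> unsigned -> std_logic_vector.
  count <= std_logic_vector(to_unsigned(value, 8));
end architecture;
```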

We learned Verilog because it’s what Yosys supported. But thanks to GHDL, a VHDL analyzer and synthesizer, and the ghdl-yosys-plugin, you can write your logic in VHDL too. Does this put an end to the FPGA-language holy wars? Thanks, Yosys.

[Michael] points out that this isn’t really news, because the oss-cad-suite install has been doing this for a while now, but it was news to us, just as it was to him, and we thought we’d share it with you all.

Want to get started with FPGAs and the open-source toolchain? Our own [Al Williams] wrote up a nice FPGA Boot Camp series that’ll take you from bits to blinking in no time.

Making USB Blaster Clones Work For Linux

The last time we checked in with [Downtown Doug Brown], he had some cheap Altera USB Blaster clones that didn’t want to work under Linux. The trick back then was to change the device’s 24 MHz clock to 12 MHz. This month, he’s found some different clones that misbehave, and this time the clock swap doesn’t fix them. What’s the problem?

He also picked up a Terasic clone, which does work on Linux and which [Doug] considers the best of the clones. The units are superficially similar, so what follows is a lot of USB tracing and dumping of the CPLD chip’s configuration.

Continue reading “Making USB Blaster Clones Work For Linux”

Tiny Tapeout 3: Get Your Own Chip Design To A Fab

Custom semiconductor chips are generally big projects run by big companies with big budgets. Thanks to Tiny Tapeout, students, hobbyists, or anyone else can quickly get their designs onto an actual fabricated chip. [Matt Venn] has announced the opening of a third round of the Tiny Tapeout project for March 2023.

In 2022, Tiny Tapeout 1 piloted the fabrication of user designs onto custom chips, known as application-specific integrated circuits (ASICs). Following the success of the pilot round, Tiny Tapeout 2 became the first paid version delivering guaranteed silicon, and it drew 165 submissions. Most were designed in a hardware description language such as Verilog or Amaranth, but ASICs can also be designed in the visual schematic-capture tool Wokwi.

Each submitted design must fit within 150 by 170 microns. That footprint can accommodate around one thousand standard cells, which is certainly enough to explore a digital system of real interest. Examples from Tiny Tapeout 2 include digital neurons, FPGAs, and RISC-V processor cores.

Once the 250 designs are submitted, they’ll be combined into a large grid along with a controller. The controller will receive input signals and pump the inputs via a scan chain through the entire grid to each design, and the results from each design continue along the scan chain to be output from the grid. Since all 250 designs will be combined onto one chip, each designer will receive everybody else’s design along with their own. This shared process opens a huge opportunity for experimentation.
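To make the scan-chain idea concrete, here’s a minimal, purely illustrative VHDL sketch of one stage. It is not the actual Tiny Tapeout controller (the real interface lives in the project repository), just a generic picture of how a serial chain can deliver a parallel word to each design and carry its outputs back out.

```vhdl
-- Conceptual scan-chain stage: serial data is shifted in past every design,
-- latched onto that design's parallel inputs, and the design's outputs are
-- captured and shifted back out on the same chain.
library ieee;
use ieee.std_logic_1164.all;

entity scan_stage is
  generic (WIDTH : natural := 8);
  port (
    clk        : in  std_logic;
    scan_in    : in  std_logic;   -- serial bit from the previous stage
    scan_out   : out std_logic;   -- serial bit to the next stage
    latch_en   : in  std_logic;   -- apply the shifted word, capture the outputs
    design_in  : out std_logic_vector(WIDTH-1 downto 0);  -- to this user design
    design_out : in  std_logic_vector(WIDTH-1 downto 0)   -- from this user design
  );
end entity;

architecture rtl of scan_stage is
  signal shift_reg : std_logic_vector(WIDTH-1 downto 0) := (others => '0');
  signal in_latch  : std_logic_vector(WIDTH-1 downto 0) := (others => '0');
begin
  process(clk)
  begin
    if rising_edge(clk) then
      if latch_en = '1' then
        in_latch  <= shift_reg;   -- present the shifted-in word to the design
        shift_reg <= design_out;  -- grab the design's outputs for shifting out
      else
        shift_reg <= shift_reg(WIDTH-2 downto 0) & scan_in;  -- keep shifting
      end if;
    end if;
  end process;

  design_in <= in_latch;
  scan_out  <= shift_reg(WIDTH-1);
end architecture;
```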

To get started on your own ASIC design right away, visit Tiny Tapeout. Also check out the talk [Matt] gave at Supercon 2022, Bringing Chip Design to the Masses, along with his Zero to ASIC videos. And we’re not saying anything official, but he’ll probably be giving a workshop at Hackaday Berlin.

Continue reading “Tiny Tapeout 3: Get Your Own Chip Design To A Fab”

Ztachip Accelerates Tensorflow And Image Workloads

[Vuong Nguyen] clearly knows his way around artificial-intelligence accelerator hardware, having created ztachip: an open-source accelerator platform for AI and traditional image-processing workloads. Ztachip (pronounced “zeta-chip”) contains an array of custom processors and is not tied to one particular architecture. It implements a new tensor programming paradigm that [Vuong] has created, which can accelerate TensorFlow tasks but is not limited to them. In fact, it can run TensorFlow work in parallel with non-AI tasks, as the video below shows.

A RISC-V core, based on the VexRiscv design, is used as the host processor that handles distributing the application. VexRiscv itself is quite interesting: written in SpinalHDL (a Scala-based hardware description language), it’s super configurable and produces a Verilog core ready to drop into the design.

A Digilent Arty A7, an Arducam, and a VGA PMOD are all you need

From a hardware design perspective, the RISC-V core hooks up to an AXI crossbar, with all the AXI-Lite buses muxed as is usual in the AMBA AXI ecosystem. The Ztachip core and a DDR3 controller are also connected, together with a camera interface and VGA video output.

Other than the FPGA-specific DDR3 controller and AXI crossbar IP, the rest of the design is generic RTL. This is good news. The demo below deploys onto an Artix-7-based Digilent Arty A7 with a VGA PMOD module, and little else is needed. Pre-built Xilinx IP is provided, but targeting a different FPGA shouldn’t be a huge task for the experienced FPGA ninja.

Ztachip top level architecture

The magic happens in the Ztachip core, which is mostly an array of Pcores. Each Pcore has both vector and scalar processing capability, making it super flexible. The Tensor Engine (internally this is the ‘dataplane processor’) is in charge here, sending instructions from the RISC-V core into the Pcore array together with image data, as well as streaming video data out. That camera is only a 0.3 MP Arducam, and the video is VGA resolution, but give it a bigger FPGA and those limits could be raised.

This domain-specific approach uses a highly modified C-like language (with a custom compiler) to describe the application that is to be distributed across the accelerator array. We couldn’t find any documentation on this, but there are a few example algorithms.

The demo video shows a real-time mix of four algorithms running in parallel: object classification (Google’s TensorFlow MobileNet-SSD, a pre-trained AI model), Canny edge detection, Harris corner detection, and optical flow, which gives it a Predator-style motion vision.

[Vuong] reckons it is 5.5x more computationally efficient than a Jetson Nano and 37x more efficient than Google’s Edge TPU. These are bold claims, to say the least, but who are we to argue with a clearly incredibly talented engineer?

We cover many AI-related topics, like this AI-assisted tap-typing gadget, for starters. And not wanting to forget the original AI hardware, the good old-fashioned neuron, we’ve got that covered as well!

Continue reading “Ztachip Accelerates Tensorflow And Image Workloads”

Using VHDL To Generate Discrete Logic PCB Designs

VHDL and Verilog are hardware description languages, used to describe and define logic circuits. They’re typically used to design ASICs and to program FPGAs, essentially using software to define hardware. However, [Tim] has done something altogether more creative, building tools that take VHDL or Verilog and spit out PCB designs for discrete logic.

Yes, you read that correctly. The basic idea is to take VHDL source code and generate a PCB layout that implements the desired logic using resistor-transistor logic. From there, the PCB design files can be shipped off to a manufacturer for pick-and-place assembly at a fraction of the cost of producing a bespoke ASIC.
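For a sense of what goes into the front end of such a flow, here’s the kind of tiny, generic VHDL description that a tool like this could map onto a handful of resistor-transistor gates. It’s purely illustrative, not taken from [Tim]’s repository.

```vhdl
-- A one-bit full adder: roughly a dozen discrete gates once synthesized,
-- which is about the scale where a board full of transistors stays sane.
library ieee;
use ieee.std_logic_1164.all;

entity full_adder is
  port (
    a, b, cin : in  std_logic;
    sum, cout : out std_logic
  );
end entity;

architecture rtl of full_adder is
begin
  sum  <= a xor b xor cin;
  cout <= (a and b) or (a and cin) or (b and cin);
end architecture;
```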

The drawbacks are obvious: tons of individual discrete parts are required, the size penalty is hilariously bad, and power usage is almost certainly orders of magnitude higher than doing the same logic on an ASIC or even an FPGA. Oh, and everything’s much slower, too.

However, as an academic exercise or simply for fun, it’s an awesome bit of work. The idea that one can define a complicated logic circuit and have a PCB implementing the logic whipped up by automated tools is amazing, and we absolutely want to see more of this type of thing.

We’ve seen similar work done with VHDL synthesis into 74-series logic design. If you’ve been developing your own fancy digital-logic-fu, be sure to drop us a line!

[Thanks to Yann Guidon for the tip!]

Capacitive Touch Controller For FPGAs

Most projects that interface with the real world need some sort of input device. Obviously this article is being written on a standardized “human interface device,” but as computers get smaller, the problem gets more complicated. We can’t hook up a USB keyboard to every microcontroller, since we often only need a few buttons, but even buttons can be a little too cumbersome for some applications. For something even simpler, we would like to turn your attention to capacitive touch controllers.

Granted, these devices are really only simpler from a hardware perspective. Rather than a switch, which can fail when its moving parts break or its contacts corrode, a capacitive touch button only needs a conductive area on something like a PCB, along with a few passive components. The real difficulty is in the logic, so this project aims to make it simpler to bring this sort of input to any FPGA that needs it. It can operate in stand-alone mode or within a custom user interface, and it was written in platform-independent VHDL without the need for any dependencies or macros.
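As a flavor of what the logic side involves, here’s a generic sketch of one common charge-time measurement approach (not the linked project’s actual code): the FPGA drives the pad low, releases it so an external pull-up resistor can charge it back up, and counts clock cycles until the pad reads high. A fingertip adds capacitance and stretches that count.

```vhdl
-- Generic charge-time touch sensing sketch, not the linked project's code:
-- discharge the pad, release it, then count how long the external pull-up
-- takes to bring it high. A finger adds capacitance and stretches the count.
library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

entity touch_sense is
  port (
    clk     : in    std_logic;
    pad     : inout std_logic;  -- PCB pad with an external pull-up resistor
    touched : out   std_logic   -- high when the charge time exceeds the threshold
  );
end entity;

architecture rtl of touch_sense is
  type state_t is (discharge, charge, evaluate);
  signal state   : state_t := discharge;
  signal counter : unsigned(15 downto 0) := (others => '0');
  constant THRESHOLD : unsigned(15 downto 0) := to_unsigned(200, 16);  -- tune per board
begin
  process(clk)
  begin
    if rising_edge(clk) then
      case state is
        when discharge =>                 -- pad is actively driven low below
          counter <= (others => '0');
          state   <= charge;
        when charge =>
          if pad = '1' then               -- pull-up finally won; time measured
            state <= evaluate;
          else
            counter <= counter + 1;       -- still charging, keep counting
          end if;
        when evaluate =>
          if counter > THRESHOLD then
            touched <= '1';
          else
            touched <= '0';
          end if;
          state <= discharge;
      end case;
    end if;
  end process;

  -- Drive the pad low only while discharging; tri-state it otherwise so the
  -- external pull-up can charge the pad (plus any fingertip) back up.
  pad <= '0' when state = discharge else 'Z';
end architecture;
```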

The project’s page goes into a great amount of detail on how capacitive touch sensors like these work in general, and describes the operation of this specific code as well. Everything is open source, so it’s ready to be put to work right away. If you need capacitive touch capabilities on something like a microcontroller, though, take a look at this tiny Atmel-powered musical instrument instead.

Custom RISC-V Processor Built In VHDL

While ARM continues to make inroads into the personal computing market against traditional chip makers like Intel and AMD, it’s not a perfect architecture, and it does have some disadvantages. It’s a great step on the road to software and hardware freedom, but it’s not completely free, since building an ARM core requires a license. There is one completely open-source and free architecture, though: RISC-V. Its design and philosophy allow anyone to build and experiment with it, like this project, which implements a RISC-V processor in VHDL.

Since the processor is written in VHDL, a language for designing and simulating integrated circuits, you can download the code and synthesize it for virtually any FPGA. The processor itself, called NEORV32, is designed as a system-on-chip, complete with GPIO and other peripherals alongside the full RISC-V processor implementation. The project’s creator, [Stephan], also struggled when first learning about RISC-V, so he went to great lengths to make sure that this project is fully documented, easy to set up, and works out of the box.
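To give a feel for how little board-level glue the SoC needs, here’s a hypothetical top-level wrapper that instantiates NEORV32 and wires a few GPIO bits to LEDs. The generic and port names below are written from memory and may not match the current release, so treat them as placeholders and check the NEORV32 documentation for the authoritative interface.

```vhdl
-- Hypothetical board wrapper around the NEORV32 SoC. Generic/port names and
-- widths are illustrative; consult the project's docs for the real interface.
library ieee;
use ieee.std_logic_1164.all;

library neorv32;
use neorv32.neorv32_package.all;

entity fpga_top is
  port (
    clk_i  : in  std_ulogic;                     -- board oscillator
    rstn_i : in  std_ulogic;                     -- active-low reset button
    led_o  : out std_ulogic_vector(7 downto 0)   -- eight board LEDs
  );
end entity;

architecture rtl of fpga_top is
  signal con_gpio_o : std_ulogic_vector(63 downto 0);  -- width depends on the release
begin
  -- Instantiate the SoC; peripherals are switched on and off via generics.
  neorv32_inst: entity neorv32.neorv32_top
    generic map (
      CLOCK_FREQUENCY => 100_000_000  -- clock speed of clk_i in Hz
    )
    port map (
      clk_i  => clk_i,
      rstn_i => rstn_i,
      gpio_o => con_gpio_o
    );

  led_o <= con_gpio_o(7 downto 0);  -- low GPIO bits drive the LEDs
end architecture;
```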

Of course, since it’s completely open source and requires no pesky licensing agreements like an ARM platform might, it can easily be modified or augmented in any way you might need. All of the code and documentation is available on the project’s GitHub page. This is the real benefit of fully open-source hardware (or software), one we can all get behind, even if there are still limited options for RISC-V personal computers for the time being.

How does this compare to VexRiscv or PicoSoC? We don’t know yet, but we’re always psyched to have choices.