Two New FPGA Families, Designed In China

The two largest manufacturers of FPGAs are, by far, Altera and Xilinx. Together they control over 80% of the market, with Lattice and others picking up the tail end. The impact of this can be seen in EE labs and on Alibaba; nearly every FPGA dev board, every tutorial, and every bit of coursework is based on Altera or Xilinx chips.

There’s a new contender from the east. Gowin Semiconductor has released two lines of FPGAs in just under two years. That’s incredibly fast for a company that appears to be gearing up to take on the Altera and Xilinx duopoly.

The FPGA line released last week, the GW1N family, comprises two devices with 1,152 and 8,640 LUTs. These FPGAs are built on a 55 nm process and are meant to compete with the low end of Altera’s and Xilinx’s offerings. They join the GW2A family, introduced last May, which features devices ranging from 18,000 to 55,000 LUTs along with DSP blocks. Packages will range from easily solderable QFN32 and LQFP100 to BGA packages with more pins than an eighteenth-century seamstress at the royal ball.

For comparison, Xilinx’s Spartan-6 LX family begins with devices featuring 3,840 LUTs and 216 kb of block RAM, with larger devices featuring 147,443 LUTs and up to 268 kb of block RAM. Altera’s Cyclone IV E devices are similarly equipped, ranging from 6,272 to 114,480 LUTs. Between the two families Gowin has introduced, nearly the entire low-end FPGA market is covered, and Gowin is improving on the current offerings: the GW1N chips feature random-access on-chip Flash memory, which neither the low-end devices from Altera nor the devices from Lattice provide.

The toolchain for Gowin’s new FPGAs is based nearly entirely on Synopsys’ Synplify Pro, with dedicated tools from Gowin for transforming HDL into a bitstream for the chip. This deal was inked last year. As for when these devices will make it to market, Gowin is hoping to send out kits to well-qualified devs soon, and the devices may soon show up in the warehouses of distributors.

Gowin’s FPGAs, in contrast to the vast, vast majority of FPGAs, are designed and fabbed in China. This gives Gowin a unique home-field advantage in the land where everything is made. With LVDS, DSP blocks, and the other peripherals these FPGAs carry, Gowin’s offerings open up a wide variety of options to developers and product engineers a few miles away from the Gowin plant.

The GW1N and GW2A families are fairly small as FPGAs go. That limitation is one of capability, though, not of units shipped. It’s nearly tautological that the largest market for FPGAs would be consumer goods, and Gowin is focusing on what will sell well before digging into higher-end designs. We will be seeing these chips show up in devices shortly, and with that comes a new platform to tinker around with.

If you’re looking to make your mark on the world of open source hardware and software, you could do worse than to start digging into the synthesis and bitstream of these Gowin chips. Just months ago, Lattice’s iCE40 bitstream was reverse engineered, and already there are a few boards capitalizing on a fully open source toolchain for programmable logic. With more capable FPGAs coming out of China that could be stuffed into every imaginable product, it’s a golden opportunity for hardware hackers and developers alike.

[Thanks for the tip, Antti]

73 thoughts on “Two New FPGA Families, Designed In China”

  1. The technology is not bleeding edge, but then again they are going for the value market that the big guys don’t really want to keep supporting with toolchain updates. So good luck, guys. The more competition, the better.

    Ultimately the FPGA is only as good as the toolchain and the routing/resource utilization it achieves. The designers are twice removed from the actual hardware – by the place & route tools and by the language abstractions. At least they don’t try to roll their own.

  2. The closed nature of FPGA toolchains is what stifles their broader adoption in the hobbyist field. The FPGA market is too fragmented, and every vendor uses their own proprietary toolchain.

    Lattice actually has a nice selection at the lower end with their MachXO2 and iCE40 lines. So I am not sure there is a market for this company, especially when you consider the poor track record Chinese hardware companies have with software.

    If they could open source their bitstream and toolchain, I could see them having something unique, but I am not holding my breath. Otherwise there is little reason to pick them over Lattice, for instance.

    Unless of course we’re talking orders of magnitude cheaper and a usable toolchain.

    1. “Closed nature of FPGA toolchains is what stifles their broader adoption in the hobbyist field”.

      Wow. Almost FUD. 8-)

      The nature of FPGA development makes it a niche of a niche, and the complexities of hardware development make it unattractive to all but the most dedicated.

      I’m currently working on an open DisplayPort implementation, and it is going to take a very long while. You can’t just stand up 3 Gb/s transceivers and a protocol stack in a few hours without using closed IP. The DisplayPort 1.1 specification is 228 pages, and that has minimal detail about actual implementation. The Xilinx 7-series Transceiver User Guide is another 500+ pages, the EDID spec is 90+ pages, and so on.

      And debugging FPGA designs on a hobbyist budget is very hard. Most of the time it is gut feel and hard-won experience that finds and fixes problems…

      1. Not everyone cares about implementing things at DisplayPort levels of complexity. Besides, that type of stuff isn’t within reach of the lower-end CPLDs anyway. I am talking about a community coming together on a common platform for the simple glue-logic stuff hobbyists care about, things just beyond a bunch of shift registers.

        An open-sourced bitstream would enable automatic integration of higher-level tools like MyHDL and would increase adoption in this market. I mean, do you really think we would have Arduino if it weren’t for the open source avr-gcc? And look at what that’s done to the adoption of microcontrollers in the hobbyist market.

        But even for performance computing, OpenCL seems to be going places. An open bitstream for an FPGA arch is no different.

        1. Most hobbyists only care about avr-gcc being free (of charge) on their platform. Very few people need the source.

          Same for FPGA tools. You can download them for free for Windows/Linux machines. Very few people would benefit from the source.

          1. @Artenz It’s not that the hobbyists need to care, or need the source. The point is that the designers of the micro-controller platforms which use avr-gcc were able to make their hardware and toolchains and environments for hobbyists because avr-gcc was open source.

      2. “debugging FPGA designs on a hobbyist budget is very hard”
        I don’t disagree that this is the current state of FPGA development, but I suspect this is where an open source toolchain would make things easier in the future; more transparency certainly isn’t going to hinder the debugging process, and will probably encourage better tools to be made.

        Although FPGA development is a niche currently, so was microcontroller development not that many years ago, and having open toolchains is what’s made projects like Arduino possible and grown that niche significantly. The more recent introduction of parts like the Xilinx Zynq, where programmable logic sits next to an ARM core capable of running Linux, means things like hot-swappable video decoding/encoding co-processors become a reality. That will increase the popularity of targeting designs at FPGAs, making them less of a niche and bringing the software development and FPGA design worlds closer together.

        All the best with the open DisplayPort implementation, sounds like an ace project.

    2. All the major players have free tools for devices priced realistically for hobbyists, and Verilog/VHDL is pretty much as portable between different FPGAs as C is between different MCUs.
      I believe it was Xilinx that said the major cost of their FPGA development was the software, not the hardware.
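
      For example, a minimal vendor-neutral Verilog blinker like the sketch below (the 12 MHz clock and all names are just assumptions) synthesizes unchanged with the Xilinx, Altera, or Lattice tools; only the pin-constraint file is vendor specific:

        // Minimal sketch: a vendor-neutral LED blinker.
        // Assumes a 12 MHz input clock; adjust WIDTH for other rates.
        module blink #(
            parameter WIDTH = 23            // 2^23 / 12 MHz ~ 0.7 s blink period
        ) (
            input  wire clk,
            output wire led
        );
            reg [WIDTH-1:0] counter = 0;    // power-on init works on SRAM-based FPGAs

            always @(posedge clk)
                counter <= counter + 1'b1;

            assign led = counter[WIDTH-1];  // top bit toggles slowly
        endmodule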

      1. The open source community tends to embrace and extend technology organically, sort of like Processing -> Wiring -> Arduino. This is why I don’t see why they don’t open source it. As you said, they already aren’t making any money on the software for the low-end parts. They can keep their high-end IP closed source, but let’s see what the community can do with the lower-end parts.

    3. “Closed nature of FPGA toolchains is what stifles their broader adoption in the hobbyist field.”
      Utter nonsense. Toolchains for any FPGA that a hobbyist is likely to want are freely available and reasonably well documented.
      The simple fact is that few hobbyists have the combination of a use for an FPGA and the skills to design for it. Open tools/chips would make no difference to that.
      Lack of cheaper/easier-to-use packaging options is much more of a barrier.

      1. I couldn’t disagree more. Arduino’s success was entirely based around openness and a community coming together. Not something a single company could ever pull off.

        The parts that are interesting to the hobbyist (3.3 V CPLDs) already come in relatively easy-to-use packages like TQFP-100.

        Remember, I am not talking about people who already do FPGA development. I am talking about the folks who don’t want Windows or anything proprietary in their toolchain; there is a huge market of people whose needs aren’t being addressed.

        Sure, more logic and fewer pins would be nice, but that’s not what’s holding back FPGA adoption.

        1. I also agree with sirmo. Even though open sourcing will cause fragmentation, the benefits of open sourcing the low-end products will easily overshadow the negative effects. It is the (relative) beginners who will come up with the ideas and implementations that will explode the applications of FPGAs.

          1. I doubt it. FPGAs are not really all that useful for beginners. The level of complexity is much higher than with an Arduino, and the benefits of FPGAs don’t really shine until you master some of that complexity.

        2. I disagree with your analysis of Arduino. By the time Arduino came along, the general AVR community was already far larger, the whole thing was better documented, and the entire toolchain was and still is freely available. The success of the Arduino is entirely about dumbing things down to the entry level in a few easy steps:
          1) Provide a hardware platform that you can buy off the shelf and that works with the software.
          2) Provide expansion modules (shields) which are plug and play, complete with documented drivers.
          3) Provide a website with design guidelines for basic things like operating I/O complete with pictures showing where to place a wire on a breadboard, not even full on schematics.
          4) Show the many awesome things you can do with the Arduino both advanced, and really stupidly simple stuff that make EEs bash their head against the wall shouting “Why didn’t he use a 555 and a transistor for that!”

          I’ll tell you why for the question in number 4: Documentation and ease of use.

          The Arduino was not the cheapest, the best, or the most useful; it did not have the biggest community or the most support, and it had, by a long shot, not the best toolchain. It was just damn simple.

          1. Arduino is Visual Basic & RAD (Rapid Application Development) all over again.

            (Although they could have dumbed it down a little more by making keywords case insensitive.)

          2. You’re missing the point. Arduino made the microcontroller accessible to the non-EE crowd. And it achieved that not despite but because of open source. Open Source is not just about releasing your code to the public; it’s also about bringing people together and allowing everyone a way to participate in shaping it. Open Source is first and foremost a grassroots movement.

            Because of avr-gcc, Processing, and later Wiring (all open source projects), the Arduino folks were able to present and package it in a fashion that was easy for a non-EE crowd to understand. A lot of software devs familiar with Processing, or just Open Source enthusiasts, jumped in and created a great beginner-friendly community, because they themselves were learning microcontrollers and hardware at the same time. This also gave them the perspective necessary to present it all in a fresh way that was beginner friendly.

            This is where the great documentation and ease of use come from: from the ecosystem Open Source kickstarted, where everyone felt a sense of ownership.

            Biggest contributors aren’t even affiliated with Arduino: https://github.com/arduino/Arduino/graphs/contributors

            Let’s face it, hardware guys don’t know how to simplify software; hardware guys aren’t good at software. Software guys are good at software. And this is precisely why I think open sourcing the FPGA toolchains could open FPGAs to a whole new demographic of software-first folks, who could take FPGAs to an entirely new level of accessibility, just like the Arduino did.

  3. While we’re still not at a point where a company is saying, “Hey, we can get a competitive advantage by using tools that are open,” more competition is always good, and having chipsets come out of China has borne some unusual (and low-cost) fruit like the ESP8266. Hopefully this will do good things for the FPGA world.

  4. If someone wants to do something really useful in the FPGA world, how about some smaller packages that are 2-layer-PCB friendly? QFP or QFN in 48 and 64 pins, maybe even SSOP28, would fill some gaps that none of the major suppliers care about.

    1. One cool thing to keep in mind with existing FPGAs is that, despite the ridiculous number of pins, you can frequently get away with ignoring the inner ones. By making sure that you route your input and output signals to pins around the edge of the chip, you can keep your layer count down. This also works with higher power CPUs like TI Sitaras.

      I don’t disagree though, I’d love to have some easy to use FPGAs that can get plunked down into a board without having to do anything too exotic. Lattice does make a few, but I haven’t spent any time playing with them yet.

      1. My biggest problem is that, at first, FPGAs can seem at once kind of ‘magical’ and ‘esoteric’. Any number of embedded MCUs today reach into the GHz range, so at first it might seem as if ‘how do FPGAs even have a place?’ (and they really, definitely do). Their ‘innate virtue’ is harder to explain up front – it’s not that they’re ‘impossible’ to learn, but one’s first question is ‘why?’…

        And in a strange way, it is actually the FPGA/CPLD that chooses ‘you’, and most often it has something to do with both coordinated signaling and any number of simultaneous peripherals – in which case you probably need all the pins in the forest you can get. Or, if you start in at a higher level, it is about immense parallel processing without having to make an ASIC.

      2. For I/O pins, you are correct, but there are usually config pins or others that might be clustered together and might not give you a choice of not routing them.

        The ball pitch also limits whether you can get at least one track between columns of vias. Once you have to go beyond a 2-layer PCB for a proto, those eval boards start to look attractive. There are also hidden costs in stencils and solder paste when working with BGAs.

        I like QFN. I can hot air rework a part and put it back without having to worry about bent pins etc.

      3. It’s not [just] the number of layers – it’s also the tolerances.
        Some of the smaller BGA devices are 0.8mm or less between pads. That puts constraints on which PCB manufacturer you can use, and how much fabrication will cost.
        Spreading the pads out to about 1.2mm [a tad under 0.05 inch] would lower manufacturing costs.

    2. CPLDs are often too limited in the number of gates, and the problem is not accessing the inner BGA balls, it is the requirement of using 4-mil PCBs with buried vias and the like.

      Having packages with < 50 pins that work on standard 6-mil, 2- or 4-layer PCBs would be definitely great!

      1. For more capable CPLDs, you could look at Altera’s MAX 10 series (it has PLLs, ADCs, memory interfaces, and the option of running from a single voltage). Unfortunately the only hand-solderable package is a 144-pin QFP. I’m assuming Xilinx has something similar, but I’m not familiar with their line.

        With regard to FPGAs requiring blind/buried vias, it’s simply not the case. With a 1 mm ball pitch, you can get two traces in a routing channel with 5-mil trace/space, and if you limit yourself to the outer 4 rings of I/O, you can fan it out on a 4-layer board using only the top and bottom layers for routing. I fanned out an 1152-pin BGA with 6 routing layers without any fancier technology than OSH Park supports (other than layer count).

    3. @ [Mikes electric stuff]

      I agree. I would love to see a CPLD or a small (instant-on) FPGA in traditional DIP packages so they could be programmed to replace older chips. But sadly this is never going to happen.

      The first problem is voltage. The only 5 V tolerant (LVTTL) chips I know of that are still in production are the Xilinx XC9572XL/XC9536XL/XC95144XL and the Altera EPM240/EPM570. The most traditional packages they are available in are *QFP44 for Xilinx and *QFP100 for Altera.

      I can hand solder QFP44 easily with an old $5 soldering iron better suited to auto wiring. And being a CPLD, I can go as low as a workshop-grade single-sided board with links on the other side, because I can re-route pins on the CPLD. You can also fit the 44-pin chip on a double-sided breakout board in the standard 0.6″ DIP pinout.

      I would assume it’s not that hard to hand solder QFP100, but I haven’t tried. Workshop-quality PCBs are fine for QFP100. I think it would end there; I don’t think you could reliably hand solder QFP144. I wouldn’t bother to try with the equipment I have.

      I have seen many QFP44-to-DIP44 adaptors, but they have the chip mounted diagonally instead of square, so they’re wider than the standard 0.6″ DIP.

      1. Standard salesman’s reply… Certainly sir, what colour do you want the Unicorn… “Purple”… no problem sir…. and invisible too you say… I’m sure the engineering team should be able to manage that within a week…

      2. Cute. Do you feel threatened by this in some way or are you just a natural asshole?

        A simple FPGA isn’t something magical, and there are enough papers describing different designs that a competent team could produce a blueprint in a reasonable time period. The big problem, then, is manufacturing – mostly the cost of fabbing chips and testing/verifying them. But even that happens a number of times per year for non-profit academic projects (sometimes even for free, sponsored by foundries).

        The software could also evolve – here too there are papers describing some kinds of layout. The first software available could be extremely low level, using the native LUTs/routing directly, or (as a simple abstraction) schematic entry. Support for other hardware description languages could then evolve.
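
        To make “extremely low level” concrete: a behavioral Verilog model of a generic 4-input LUT is tiny. This is an illustrative sketch, not any vendor’s actual primitive; the INIT convention here is an assumption:

          // Illustrative model of a generic 4-input LUT (not a real vendor
          // primitive). The 16-bit INIT parameter is this cell's slice of
          // the configuration bitstream: each input combination selects
          // one bit of the truth table.
          module lut4 #(
              parameter [15:0] INIT = 16'h0000
          ) (
              input  wire [3:0] in,
              output wire       out
          );
              assign out = INIT[in];
          endmodule

          // e.g. INIT = 16'h8000 gives a 4-input AND:
          // out is 1 only when in == 4'b1111.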

        But an open source FPGA wouldn’t be even close to a modern FPGA chip – even if the project could skip a lot of evolutionary steps, the hardware and software experience inside the existing FPGA companies is hard to overstate.

    1. I think it would be huge in the hobbyist space. HDL is a barrier to entry that could be bridged with a higher-level compiler, something along the lines of MyHDL. And I am not even talking about the benefits of being multiplatform. This is 2015; people don’t all run Windows. In fact, every open source developer I know ditched Windows years ago.

      1. Everybody dreams of something like that, but it is never going to be the case. Writing HDL is not writing software. Think of all the things that disappear when you shift to an HDL – dynamic resource allocation, unbounded loops, serialisation of operations, datatypes that grow and shrink at runtime (e.g. strings), runtime debugging, runtime function calls. You can’t just pick up a program written in a high-level language and map it onto the FPGA’s logic.

        The guts of the problem is that software works by having a very large state vector (maybe many GBs of memory and multiple TBs of disk) that evolves relatively slowly over time – maybe up to 128 bits can change per cycle. FPGAs have smaller state vectors (e.g. a few thousand flip-flops and registers on smaller parts) that evolve very rapidly (every bit can change every clock cycle if required).
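
        A short Verilog fragment (module and signal names are just for illustration) shows that difference: every register below updates on every clock edge, concurrently, rather than one statement at a time as in software:

          // Illustrative only: all three registers update in the same
          // clock cycle. There is no "first" or "second" statement as
          // in sequential software.
          module concurrent_update (
              input  wire       clk,
              input  wire [7:0] a, b,
              output reg  [7:0] sum, diff, last_a
          );
              always @(posedge clk) begin
                  sum    <= a + b;   // non-blocking assignments: all
                  diff   <= a - b;   // right-hand sides are sampled
                  last_a <= a;       // together, then all registers
              end                    // update simultaneously
          endmodule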

        1. It is an entirely different paradigm, yes. But it can be abstracted to a higher level. Sure, you would lose flexibility in the process, but for some it could provide a lower barrier to entry.

          As I’ve mentioned something like this already exists, see the example: http://www.myhdl.org/examples/flipflops.html

          Also, I am not entirely sure Verilog and VHDL are the best we can come up with in terms of HDLs. Why not open it up and see? There are a lot of great languages that were born because of open source.

          FPGA vendors aren’t really interested in language design anyways.

          1. I totally agree that VHDL/Verilog can be improved upon, but writing a new front-end does not require any of the existing tools to be open source.

          2. Sure, but you can easily make the compiler for another language output VHDL or Verilog, or even output the netlist directly, to be synthesized by whatever FPGA design suite you use. Integrating it with the vendor tools makes almost no sense, since it would then only work with tools from that vendor.

        2. What is there to be opened!? The VHDL and Verilog language specs are available. The great thing about FPGA/ASIC work is that there are only a few languages, so all of the designers can master both of them and not waste time making the same mistakes the software guys do trying to keep up with the language-of-the-month race.

          Stop trying to make the tools open source. Make what you do with the HDL code open source. The existing Windows + Linux tools already cover 90+% of the users; the fringes have diminishing returns. You need a huge amount of computing power and resources to compile an FPGA design, so it’s not like you are going to run it on anything less than a 64-bit machine. The NIH is a waste of time.

          It’s too much effort to earn that single-digit-millions market. Even the FPGA vendors gave up supporting their older FPGA lines because it is a great drain for such a small market. Good luck convincing them to open source their tools for the single-digit jelly-bean parts.

          If you really want to make a language compiler open source, go the gcc path: make your own compiler and make it good enough that no one else wants to write their own. Now go and do that, and you can help everyone else while working toward your open source goal. Mind you, the effort of making an HDL toolchain is a lot greater than that of making a programming-language compiler.

    2. There is a *huge* difference between open-sourcing the toolchain and having a toolchain that works on Linux.
      The latter does not require open sourcing… In fact, Xilinx folks told us years ago that they write the fitting tools on Linux, then port to Windows.

      If you need something badly enough, maybe you should ditch your political agenda…

      1. Open sourcing a toolchain makes it multiplatform almost by default. I would like to see an open source toolchain for all of those reasons.

        What political agenda is there in wanting an Open Source and multiplatform tool? To have an agenda would be the opposite.

        I am suggesting it in case they are interested in increasing their adoption, as that would clearly be something that would raise eyebrows and generate interest – something Gowin could leverage to provide value over the current offerings, similar to Espressif opening up the ESP8266 SDK.

        I don’t really stand to gain anything whether they do it or not; I just think they have nothing to lose. Especially considering IP protection laws aren’t enforced in China anyway.

      2. Except for this part: “The toolchain for Gowin’s new FPGAs is based nearly entirely on Synopsys’ Synplify Pro.”
        Since Synopsys makes money on their toolchain, it is unlikely they, or anyone in a similar boat, will open source their golden goose.

        To be honest, the hobby market for FPGAs is tiny. One of the places I work for uses 7-8 FPGAs on a server card. We use bleeding-edge FPGAs that easily cost 3-4 figures apiece, and we are not buying single quantities from a store either. So there isn’t enough money in the hobby market for the vendors to notice you.

        In the long run, the more time you are distracted by the political agenda, the less time you have for hacking. If you want something badly enough, learn to hold your nose and use a Windows box, or run the tools in Linux. Anything outside of real hardware or HDL hacking is a distraction.

        A developer who cannot adapt to the tools/environment/OS for a job is not a serious one.

        1. Agreed, the hobbyist market for FPGAs is tiny. There was a time when the hobbyist market for microcontrollers was tiny too. FPGA vendors have acknowledged that there is some sort of market out there that is price sensitive on tools, so they’ve released free versions capable of implementing smaller designs. It may well be that the vendor who identifies open source toolchains as important will win a real advantage in the market, the way Atmel has won a huge number of developers with avr-gcc and all that has come from that.

          My point is that dismissing the desire for open source toolchains for programmable logic is short-sighted. Yes, if you want to design something with an FPGA today, you need to accept the use of existing closed-source Windows tools. But it’s entirely possible that an open source toolchain for programmable logic might open up business opportunities we can’t anticipate today, and that’s exciting to contemplate.

          1. A couple of minor corrections… ‘Free’ versions of the design tools can now work with some quite hefty chips – one I’m using at the moment has 215,360 logic cells, 269,200 flip-flops, a few 6.6 Gb/s transceivers, and 740 DSP blocks.

            Xilinx, Altera, and Lattice’s design tools run on both Windows and Linux, so if you are developing for these devices you are not limited to Windows for your OS. At the moment I recommend Linux because of Win7/Win8/Win10 issues.

            But your argument stands that an open source toolchain might inspire something that hasn’t already been envisaged – however, after 23 years, FPGAs are relatively mature tech.

            The real game-changer might be what comes out of Intel’s acquisition of Altera.

    3. I would love this as well, but of the open source chips that do exist, an even smaller number use an open standard-cell library. Although, since we are talking about open source: even if someone just built a chip but provided enough specification of its routing and timing behaviour, I am sure some open source tools would be developed. Good luck finding a corporation that would build a chip for that purpose. Quite sad indeed; it would have to be a group effort by those who want such a device.

    4. An open source FPGA? In this modern era, it can be accomplished.

      How much are you willing to wager?

      Would you like to collaborate financially, or via a crowdfunded project?

      I would like to help. Perhaps it is the alternative to getting NSA malware? If we all crowdfund together, it may become possible to get an open source FPGA.

  5. If you look at how many different copyright messages get spat out when you do an FPGA compile, it is blatantly obvious that there is zero chance of anything ever being open sourced, because there are too many players whose business is selling their software to the FPGA manufacturers. Opening the tools would be of negligible benefit to vendors and would help potential competitors. It just isn’t going to happen, so arguing about how nice it would be is a waste of time. There are much more useful things to be done in the FPGA world, but at the front end.

    1. Also, opening up the tools requires the manufacturer to open up all the internal timing details of their devices. That’s probably something they don’t want to have end up in competitors’ hands.

    1. Shortest definition I can come up with that will give any useful information…

      An FPGA is a (relatively) expensive silicon chip that is conceptually like a really big solderless breadboard filled with basic logic chips. You use software and a special programming language called a Hardware Description Language (HDL) to define how everything is wired together, and that configuration can be changed on a whim – hence the name Field Programmable Gate Array.

      A small FPGA might be equivalent to a solderless breadboard the size of a small room (100k logic gates); a larger FPGA is like a solderless breadboard the size of a basketball court, all filled with little 1980s-style 0.4″ × 1.7″ dual-in-line IC packages, just waiting to be wired together.

      They fall halfway between general-purpose chips (like an ATmega microcontroller) and fully custom Application Specific Integrated Circuits (ASICs). What they lack in raw clock speed (they run at roughly 1/10th the clock of a PC’s CPU or an ASIC) they can make up by running things in parallel – but only for tasks that have inherently high levels of parallelism.
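
      As a tiny illustration of “wiring things together” with an HDL (a sketch, not from any vendor’s docs), here is a three-input majority voter in Verilog; on a LUT-based FPGA the synthesizer will typically pack the whole module into a single look-up table:

        // A 3-input majority voter: y is high when at least two inputs
        // are high. On a LUT-based FPGA this whole module typically
        // fits in one 4-input LUT.
        module majority3 (
            input  wire a, b, c,
            output wire y
        );
            assign y = (a & b) | (a & c) | (b & c);
        endmodule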

  6. Hmm. The biggest hurdle to any adoption is information about them :-/
    I can’t readily find anywhere to source them, there’s no sign of eval boards, and the only datasheets are not that complete (while I can’t read 90% of the document, the tables of data are quite obvious in their content).

    Anyone have anything more tangible about these devices?

    1. That’s because you’re most likely using it for the wrong thing. The languages are very different because they do completely different things. Microprocessors like the Arduino’s execute tasks sequentially; the code represents instructions. FPGAs execute tasks in parallel; the code represents hardware. Trying to program one as if it were the other doesn’t work.
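
      To make “the code represents hardware” concrete (a sketch – the module and signal names are made up): the two always blocks below are not executed top to bottom; they describe two separate circuits that run at the same time:

        // Two independent processes: both circuits exist and run
        // simultaneously, unlike two loops in software, which would
        // take turns on the CPU.
        module two_machines (
            input  wire       clk,
            output reg  [7:0] count,
            output reg  [7:0] ring
        );
            initial begin            // power-on values; honored by SRAM FPGAs
                count = 0;
                ring  = 8'h01;
            end

            // Circuit 1: an 8-bit counter
            always @(posedge clk)
                count <= count + 1'b1;

            // Circuit 2: a ring rotator, moving one hot bit around
            always @(posedge clk)
                ring <= {ring[6:0], ring[7]};
        endmodule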

  7. The perfect FPGA for me would be: hand solderable, around 64-72 I/O pins, analog and digital flexibility similar to the Cypress PSoC chips, an embedded hard ARM M4F with a hard UART, and enough LUTs/slices to implement a soft M0 or two.
