Will We Soon Be Running Linux On SiFive Cores Made By Intel?

There’s an understandably high level of interest in RISC-V processors among our community, but while we’ve devoured the various microcontroller offerings containing the open-source core, it’s fair to say we’re still waiting on the promise of more capable hardware at anything like an affordable price. This could however change, as the last week or so has seen a flurry of interest surrounding SiFive, the fabless semiconductor company that has pioneered RISC-V technology. Amid speculation of a $2 billion buyout offer from the chip giant Intel, it has been revealed that the company best known for the x86 line of processors has licensed the SiFive portfolio for its 7nm process. This includes their latest and fastest P550 64-bit core, bringing forward the prospect of readily available high-power RISC-V computing. Your GNU/Linux box could soon have a processor implementing an open-source ISA, without compromising too much on speed and, we hope, price.

All this sounds pretty rosy, but there is of course a downer for open-source hardware enthusiasts. These chips may rely on some open-source technologies, but sadly they will not themselves be open-source chips, as there will be plenty of proprietary IP contained within them. We can thus only hope that Intel see fit to provide the same level of Linux support for them as they do for their x86 ranges, and that we’re not left in the same situation with respect to ongoing support as we are with so many other chips. Meanwhile it’s worth remembering that SiFive are not the only player in the world of RISC-V cores, so it’s likely that competitors to the P550 and its stablemates will not be far behind.

If you’d like a more in-depth explanation of the true open-source nature of a RISC-V chip, we’ve featured something on that theme before.

Header image: Gareth Halfacree, CC BY-SA 2.0.

38 thoughts on “Will We Soon Be Running Linux On SiFive Cores Made By Intel?”

    1. The beauty is that the ISA is open, so if we want to jump ship to another provider or even climb onto our own H/W (FPGA) or dedicated silicon (rich, are you?) then we can. And with RISC-V you can choose what goes onto the silicon, adding things (core quantity/speed) including your own secret sauce and dropping blocks that you don’t need (FP, GPU). Yes, Intel’s (and SiFive’s) hardware is closed source, but the ISA matters more long term. It certainly lags ARM by a few years, but it is a more logical design, cherry-picking the best and ejecting the cruft. I’m eager for it to succeed, similar to how I was eager for the Linux kernel to succeed. Its future as the top dog is by no means secured, but it deserves to win…

  1. yeah i don’t really see the advantage for users from ‘open source’ instruction sets or chip designs. i understand it’s a big deal to the manufacturers and vendors but from the software developer perspective, you’ll hardly be able to tell the difference between developing for ARM vs RISC-V vs OpenRISC. you can get a good description of the instruction set from each of them, hopefully you can find a compiler with good support, and so on. and you will still be banging your head against the wall all day long to interface with all the proprietary I/O modules and accelerators and so on.

    the kind of openness we need out of a chip is documented peripherals. and the kind of openness we need out of a completed product is an unlocked bootloader (and/or an unlocked stock OS). i don’t know what kind of openness we actually need from an ISA, even though i think these open ISAs really do have the potential for shaking up the chip industry itself.

    1. Absolutely, Intel, AMD, Samsung, etc. have their big vats of “magic sauce” that go far beyond the spec sheets, things that are necessary for robustness and interoperability, and they are certainly not going to share that stuff. They spent zillions of dollars sorting it out and they want to protect their investment. I’m not expecting any “open source” implementations of gigabit Ethernet or USB 3 or 4K HDMI to happen anytime soon.

    2. The kind of openness the market needs is standards.
      And that people, companies and projects follow those standards.

      https://xkcd.com/927/

      Though.
      Developers can only really follow standards that they can access and understand.

      A lot of documentation is locked behind tons of paperwork, licenses, and membership fees, not to mention NDAs.
      But a lot of documentation is also wonderfully poorly written and uninformative, to the point that it isn’t of any real help. Mainly by being exceptionally strict in staying on topic, and not mentioning related functions or features that one does need to keep in mind, or might be better served by.

      Open source projects partly solve the “lack of documentation” by simply providing the source code itself. But considering how oddly cryptic code can be at times, even with proper comments, this isn’t really any major help.

      Closed-source projects, on the other hand, only give what they give. But at least they don’t have the excuse, “Just look into the source code,” or “fix the problem yourself!” when one stumbles into some problem, or is interfacing with the program or hardware.

    3. There is a huge advantage! If you implement the ISA you have an entire slew of pre-written compilers! You couldn’t implement the modern ARM ISA and be able to share it, which means you’d need to write your own compiler too! This cuts the effort of making a new chip in half!

      1. Considering how some people write a C compiler as a “challenge” for themselves (ie, a bit of a sport),
        usually targeting some random architecture they selected on a whim, or at times at random.

        I don’t suspect that someone who is more familiar with the ISA in question would have much problem with making the compiler. Especially considering how one would need to amend the ISA a few times to optimize it during its development, and therefore need to remake the compiler.

        Having designed computer architectures as a hobby for the last decade and a half myself, I don’t actually see the point about “but one needs to make a compiler for it!” as much of an issue.

        Though, sometimes one puts things into the ISA to improve performance that to a degree aren’t even supported by most common programming languages. So there is also that… And sometimes when one finds these “fun” solutions, they a lot of times redefine the whole ISA itself, so one is back to square one. (Feature creep at its finest!)

          1. Your personal opinion is noted, but how many of the compilers you wrote were capable of building Linux? How many were even compliant with any standard? Did any compile C++ code or do templating? The name of the game is reusing existing code, and your inability to grasp its significance does not diminish its importance.

          1. I myself fiddle with architecture design. Not compilers.
            But I have stumbled across people who do code C compilers by the book and do use them at times.

            I also have to say that I don’t have an inability to grasp the significance of supporting a rich code base (since that is your interpretation, not my statement). After all, it is one of the reasons X86 still proliferates, and to a degree the story is the same for both ARM and PowerPC, among others.

            I rather said that making a compiler isn’t particularly hard. (Not that one even has to build one from scratch…)

            But an architecture does not need to have full C support on day one, or even in the first year or two of its inception. And some architectures don’t have much C support at all, for architectural reasons.

            Not to mention that a lot of the code base in existence has dependencies that can be hard or at times “impossible” to port over to an alternate architecture. So even with full C/C++ support, the code still won’t work.

            For a general purpose architecture, there is however a larger need to support a wide code base compared with more application-specific architectures.

            But in the end, it depends on what type of application and market the architecture is being designed for.

            2. You don’t need to write an entire compiler from scratch for a new ISA. All the frontend stuff that parses the language and translates it into an intermediate form remains the same for all ISAs. The part you’ll need to fill in is the backend code generation, and maybe some additional tweaks and libraries to take advantage of features of the ISA.

            https://en.wikipedia.org/wiki/LLVM
            >LLVM is a set of compiler and toolchain technologies, which can be used to develop a front end for any programming language and a back end for any instruction set architecture

            It is not like GCC only supports x86/x64… There are people who add support for new ISAs.

            As for building Linux on a new ISA, it takes more than a compiler. You are essentially supporting a new machine. You would need a BSP to initialize the chip, the memory, the boot environment, support for the hardware peripherals, etc.
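
            To make that reuse concrete, here is a minimal sketch. It assumes a clang toolchain with the relevant backends and sysroots installed; the identical C source below builds for three different ISAs just by swapping the target triple, because only the backend code generation differs.

                /* portability.c -- the same source builds for any ISA the
                 * compiler has a backend for. A sketch, assuming clang with
                 * the x86-64, AArch64, and RISC-V backends installed:
                 *
                 *   clang --target=x86_64-linux-gnu  portability.c -o demo-x86
                 *   clang --target=aarch64-linux-gnu portability.c -o demo-arm
                 *   clang --target=riscv64-linux-gnu portability.c -o demo-rv
                 */
                #include <stdio.h>

                int main(void)
                {
                    /* The frontend parses this identically for every target. */
                    printf("Hello from a retargetable compiler\n");
                    return 0;
                }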

    4. > yeah i don’t really see the advantage for users from ‘open source’ instruction sets or chip designs.

      So you don’t see the advantage of running silicon that doesn’t have black boxes carrying obscene privacy implications? Neat.

      1. an ‘open source ISA’ will not get rid of the black boxes! in a modern ARM SoC like you find in phones or TV boxes, the core itself is hardly a black box at all…you may not know how the transistors work but you certainly can find documentation for the ISA. it’s the I/O peripherals and accelerators that are black boxes. and when RISC-V is more popular, the same will probably still be true.

        1. Indeed, only a few I/O blocks and peripherals are publicly documented rather than black boxes.

          Most such stuff is where the real IP and NDAs come in. Want a PCIe driver? Well, that will be a bit of paperwork, and a fair sum of money.

          On the other hand, want an I2C driver? Well, develop it yourself, it is trivial!
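
          (As a hedged illustration of just how trivial: below is a minimal bit-banged I2C master write in C. The sda_set/scl_set/sda_read/delay helpers are hypothetical stand-ins for whatever GPIO API the target chip provides, and clock stretching, arbitration, and error handling are all omitted.)

              /* Minimal bit-banged I2C master write -- a sketch only.
               * sda_set()/scl_set() drive the open-drain lines (1 = released
               * high), sda_read() samples SDA, delay() waits one bus
               * half-period. All four are hypothetical GPIO helpers. */
              extern void sda_set(int level);
              extern void scl_set(int level);
              extern int  sda_read(void);
              extern void delay(void);

              void i2c_start(void)
              {
                  sda_set(1); scl_set(1); delay();
                  sda_set(0); delay();      /* SDA falls while SCL is high */
                  scl_set(0); delay();
              }

              void i2c_stop(void)
              {
                  sda_set(0); scl_set(0); delay();
                  scl_set(1); delay();
                  sda_set(1); delay();      /* SDA rises while SCL is high */
              }

              /* Clock out one byte MSB-first; return 1 if the slave ACKed. */
              int i2c_write_byte(unsigned char byte)
              {
                  int ack;
                  for (int i = 7; i >= 0; i--) {
                      sda_set((byte >> i) & 1); delay();
                      scl_set(1); delay();
                      scl_set(0); delay();
                  }
                  sda_set(1); delay();      /* release SDA for the ACK bit */
                  scl_set(1); delay();
                  ack = (sda_read() == 0);  /* slave pulls SDA low to ACK */
                  scl_set(0); delay();
                  return ack;
              }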

          Though, to be fair, the LVDS drivers used for PCIe, HDMI, SFP, etc. are somewhat public domain. But the magic to make them low noise, have tight specs, not waste a ton of power, etc.? Those are the real black boxes full of magic. And laying them out on a chip with good impedance matching and such is a bit of an art…

          Then there are the protocols, encoding schemes, etc.
          AES encryption is public domain, but a circuit implementation that has fast throughput or high power efficiency is, on the other hand, not. Same story for PNG de-/compression. And a fair bit of other stuff too.

          Then there is the DisplayPort standard. It is “royalty free” as in only costing 5 grand annually to have a chip (or product) with a DisplayPort implementation. Honestly not an all too crazy expense, to be fair, but yet again: if you want an “off the shelf” implementation of it, then there are licensing costs again, and I’ll get into why in this crazy long comment…

          And this is generally the thing as far as chip design goes.
          One can typically get a relatively cheap license to “implement” a standard and get to officially say one supports it. (If the implementation one makes lives up to the specs.)

          But the problem with hardware is that implementations aren’t really something one can just shuffle off to another fab or process, since an implementation tends to be dependent on certain process parameters that aren’t easily accounted for parametrically.

          In short, a design meant to be built on, for example, TSMC’s 22 nm node won’t just casually port over to GlobalFoundries’ 32 nm node. Or to Intel’s 28 nm one. Aside from needing a potentially bigger chip, there are far more differences to consider. And yes, I know I am using old nodes here, but the fact still remains that on-chip circuit designs aren’t even remotely as easily portable as software is.

          The reasons why one can’t just scale a design to fit on another process usually come down to simple factors like the characteristics of the transistors being made, the capacitance and resistance of interconnects, and the interconnect, materials, and process options available at the fab in question.

          For example, one fab might offer you 1 metal layer at a “node” while another offers you 2 before going up to the “next” larger node for the next set of metal layers, of which they in turn provide x amount, before going to the next size up.

          So the sizes of their offers will impact what you can do. Then there are the other manufacturing processes, and the materials that they are able to work with. One can technically go in and say, “do this” and that will “work”. But the closer to the cutting edge one gets, the more specialized the fab’s offers are, and the less flexible they are in what amendments they are willing to make.

          In short, if one comes into a fab and is a special butterfly, then they likely just say NO. Since they have plenty of other customers in their queue willing to pay and be less of a bother to deal with. (Unless one goes to a 1-5 µm fab; they practically just nod and agree with anything, since making odd novelty chips is their thing these days…)

          Some people might though say, “But what about Verilog and VHDL code? Can’t I just recompile that for the other process?”

          And the answer there is both yes, and no. (It is also about here that I hope people haven’t just skimmed through this and made some reply that is irrelevant, or already explained by this book of a comment, or just highly opinionated and simply ignores the information provided…)

          No, since Verilog and VHDL largely only deal with logic. (They can do way more than that, but stick with me!) Yes, porting the logic and taping out the fabric is “trivial”. (Putting that in quotes since it really isn’t! Just look at larger FPGA project routing times, and then think of doing an actual chip at the transistor level where half the stuff isn’t premade for you… But compared to the below, it really is trivial…)

          But circuit optimization is about more than just 1s and 0s and timing delays: clock jitter, rise and fall times, parasitic capacitance, resistance, crosstalk, etc… The more of these parameters we want to characterize and control, the longer our compiling will take.

          And eventually, one starts looking at either renting or building a supercomputer just to do that work. (Intel, for example, does have a supercomputer in house. nVidia likely has some fancy GPU cluster, and AMD likely runs an in-house EPYC cluster. And I personally wouldn’t be surprised if they all built them from “scrap”/defective components; they are cheap, especially if they are yours already.)

          For chip makers that deal with smaller/simpler chips, one can just look at building a couple of racks for one’s cluster. Maybe a single workstation might suffice if one isn’t in a hurry. (Or one just rents a computer for the once-in-a-blue-moon large project.)

          Some look at FPGA acceleration, but this is lackluster as hell when one actually runs the numbers… It doesn’t take a particularly complex chip to make even a gigantic FPGA seem pathetic. Though FPGAs run way faster than software simulation, step away from logic and into the analog domain and FPGAs suddenly aren’t even applicable… (Unless one makes a circuit simulation accelerator on the FPGA, but GPUs do that rather well already, with typically higher power efficiency too…)

          But in the end, getting an “off the shelf” implementation that will just run on a manufacturing process of one’s choosing is far from cheap. And the ones that have done it usually want some money back from having spent their time on working things out. Be it manually designing the circuit, or letting a “decent” computer go ham at it for a few weeks.

          So there are reasons why I/O and peripherals will be black boxes.
          First, licenses for the idea.
          Second, licenses for the logical implementation of the idea. (Optimized code, in other words.)
          Third, the optimized hardware design for a specific manufacturing process.
          And lastly, it just takes tons of effort that always needs to be redone if one moves to a new process…

          But this is even true for implementations of the core components of the ISA itself. Making a full adder that runs at 5% higher clock speed for the same power on a specific node is still a fair bit of work. And it is obviously more work for more complex functions.

  2. Is this going to be another Intel Quark? You know, where Intel simultaneously marketed the product to hobbyists and even made a low-cost Arduino-compatible dev board. But then they didn’t make the documentation needed to program it public until after the product had failed, and even then never bothered to remove the “confidential” watermarks. I hope they don’t do that again, because that was just disappointing.

  3. Personally I have found little interest of my own in RISC-V, as in interest in using it as a go-to computing platform. And it is the open source nature surrounding RISC-V that gives me that opinion.

    A lot of people look at open source and say that it is inherently good.

    I for one don’t care if a chip is open source.
    If it supports publicly documented standards, then that is a positive thing. But it is also here that most open source projects fall flat, by implementing their own standards or diverging from the existing ones. Linux as an example has more incompatible distros than I care to poke a stick at, not to mention software targeting said distros.

    One can look over at ARM as an example of where RISC-V is heading: if one compiles code to run on one ARM platform, then it isn’t all that likely that it will run on another ARM platform. Even if the two platforms use the same ARM core.

    Meanwhile, for X86, you can install almost any X86 software on almost any X86 platform. UEFI did rock the boat a bit, but other than that there isn’t much to watch out for.

    It is actually impressive how a PC104 system can run the exact same software as a laptop, a blade server, or almost any desktop. One can’t say the same for ARM systems, due to their being too fragmented in comparison.

    But unlike the closed-source ARM, RISC-V is open source and encourages companies to do their own stuff, so it will fragment like a fine vase falling off a 5th-floor balcony. I will be surprised if it doesn’t.

    One can though ask. “But is fragmentation a bad thing?”

    In the embedded world, no, it isn’t. It is actually a bit of a benefit here, since in the embedded world RISC-V has some strong advantages over existing solutions, thanks to standardizing the ISA while not restricting the possibilities for the fancy IO solutions commonly seen among microcontrollers.

    In the world of general computers, it partly is a problem on the other hand. Though it depends on how techy the end user is. I for one don’t want to amend software just to get it to run on the system I choose to run. Nor do I want to run 3+ different computing platforms to make life “easier”. (Even if I might be running multiple systems already, partly for that reason. But I am a nerd, not Joe Average… Though personally I would likely set up a RISC-V system or two, since it would still be interesting to poke at. But that doesn’t change my opinion when viewing the topic from the market perspective.)

    Even in the world of servers, a lot of places generally just want to stick to as few solutions as possible, which they can then dice up as needed with virtualization. Application-specific servers are getting rarer; there used to be far more servers with fancy application-specific accelerators, but running a few more CPUs/GPUs is simpler and cheaper from a maintenance and upkeep perspective. And among servers, and companies in general, if a product provides advantages, then most companies don’t mind paying for it. (So the typical stance that “open source is cheaper” doesn’t fly, especially if it requires more effort due to a lack of compatibility. Tech staff is far more expensive than licenses for a lot of companies.)

    I don’t think that RISC-V is going to venture much outside of the embedded world. And to a degree, I don’t even suspect that it will majorly compete with ARM in the mobile world. Though I could be wrong.

    But I wouldn’t be surprised in the slightest if Intel started using RISC-V in their chipsets, or as their management engine, or as a network controller and so forth. Neither would I be surprised if other companies do the same. (And I wouldn’t be surprised if Western Digital has RISC-V in their HDDs already.)

    1. meh, the interoperability you cite is only present on x86 because people have bothered to work on projects like WINE, WSL, dosbox, VMware, etc…a rich variety of virtualization and emulation layers. there’s no reason that couldn’t happen on ARM and to a large extent it already has where it has been desired. those are software questions. the fragmentation of ARM environments isn’t forced by the instruction set, it’s just a result of the vastly different problems being solved by people who build ARM OSes.

      even as a developer, the only time i really care about the fragmentation of the ARM instruction set is when i’m building embedded software…and then i’m glad of it! otherwise it’s just like PC, for the most part you are either targeting 32-bit or 64-bit and you don’t care about the details. if you’re making a videogame, you might need to learn all the different options for floating point / vector / graphics acceleration, but that’s true on any platform if you’re trying to get the best performance.

      1. The interoperability of X86 largely comes from the fact that the larger system is heavily standardized, on top of the X86 ISA being almost fully forward compatible, to an almost insane degree.

        ARM on the other hand just gives you a core. So stuff like interconnects, networking, IO, etc. is all for the chip maker and system developer to do as they please with. Not to mention that different ARM cores are also a bit different at the ISA level, and aren’t always forward compatible either.

        RISC-V is even more loose in regard to altering stuff surrounding the ISA, as well as the ISA itself, so I would be surprised if interoperability is going to be a common thing. Unless a standard is formalized in a market segment.

        1. I should also probably quickly clarify that when I talked about Linux distros, that was only an introduction to the subject. A look at how it is in software.

          When I went on to talk about “ARM platforms”, I do mean in hardware.

          Ie, two “ARM platforms” don’t have to be compatible even if you try to run the same OS on them, since the OS might not run on one of the platforms. Usually due to IO issues, but sometimes due to ISA-related reasons even if the base ARM core is the same. (ARM after all does allow chip makers to add some custom instructions for application-specific needs or for IO. (Last I checked at least. My sources can be wrong.))
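
          RISC-V bakes the same escape hatch into the opcode map: the custom-0/custom-1 major opcodes are reserved for vendor extensions. As a hedged sketch of what exercising one looks like from C on a GNU toolchain (the funct values below are made up purely for illustration), the .insn directive lets the assembler emit an encoding it has no mnemonic for:

              /* Sketch: invoking a hypothetical vendor instruction on RISC-V.
               * 0x0b is the custom-0 major opcode reserved for vendor use;
               * the funct3/funct7 values of 0 here are purely illustrative. */
              static inline long vendor_op(long a, long b)
              {
                  long result;
                  __asm__ volatile (
                      ".insn r 0x0b, 0x0, 0x0, %0, %1, %2" /* rd, rs1, rs2 */
                      : "=r"(result)
                      : "r"(a), "r"(b));
                  return result;
              }

          A binary containing that encoding only runs on silicon that actually implements it, which is exactly the hardware-level fragmentation described above.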

    2. On the contrary, the Linux API is remarkably stable compared to Windows or macOS or AIX or Solaris. Each of these has always suffered major incompatibility problems with new versions, while Linux programs from the 1990s still compile and run with no issues. You can’t say that about any other operating system. And you can’t expect consistency between Debian and Red Hat any more than you can expect consistency between AIX and Solaris. Stick to one OS and you’re good.

      1. I think I will have to reiterate.

        My statement in regards to fragmentation of Linux is largely irrelevant in my comment as a whole. It is just an audience introduction to what fragmentation is and some of the consequences it brings. It is an example of an experience that a lot of people relate to.

        But it doesn’t mean I say that fragmentation on Linux isn’t a thing that one can relatively easily work oneself around. Since yes, one can just recompile and be back on track, and yes, the Linux API is indeed very stable; that however doesn’t fix the numerous dependency issues/differences of the various distros…

        But I use the fragmentation of the many Linux distros as a jumping off point for diving into fragmentation of instruction set architectures on a hardware level. Ie, the Linux example is just there to give people something to relate to.

        And the methods of getting around the distro differences on Linux are similar to the methods one would employ for dealing with the various distros of the many RISC-V implementations that are likely to crop up over time. Or, in the case of ARM, already exist.

        We can compare the RISC-V ISA to the API of the Linux kernel. That part will remain largely static.
        While the various core complexes, interconnects, caching systems, IO/peripherals, accelerators, etc. are more like the numerous libraries and other dependencies that we can include in our OS distribution.

        I will though clarify that I am greatly oversimplifying here.

      1. Oh, one of the reasons why is four magical words: “spectre resistant speculative processor”. A speculative processor that is immune (at the level of digital logic) to attacks based on mis-speculated instructions leaking data or changing timing. So no performance hit from working around a hardware problem in software, like ARM, Intel, AMD, ….

        How could you possibly go wrong with advanced technology based on un-patented ideas that were originally implemented in a 1964 mainframe designed by Seymour Cray (the CDC 6600)?

        Intel will probably have an optional e.g. https://en.wikipedia.org/wiki/Spectre_(security_vulnerability)#Controversy

    1. This is my concern, and what a lot of people in these comments seem to miss. If Intel buys it out, it will absolutely be another black box with another OOB coprocessor, and there will be literally no point in using it. Given its track record, there’s no way Intel will leave it alone or be altruistic. Intel’s buyouts are like Apple’s: buy potential competitors, “work” on product and either abandon it or roll it into existing implementations, close it off and keep selling privacy nightmares to the unaware masses.

      It’s alarming how many people can’t grasp this concept.

      OpenPOWER it is.

  4. I’ve said it before and I’ll say it again: people’s hopes are WAY too high for “Libre” RISC-V chips.

    The ISA is open but the chips will not be. The ONLY reason companies like WD are interested is to get free core designs from the research community that they can use (instead of ARM cores they have to license). That’s it.

    Not only that, but while the ISA is “open” (anyone can implement it and it’s free of patents) the various standards bodies around it are not.

    As Libre-SoC (formerly Libre-RISCV) found out, all standards discussion happens behind closed doors on secret mailing lists. NDAs are required to learn what the procedure to propose an extension even involves.

    It seems the companies gathered around RISC-V are really not that welcoming of community involvement. This has been a sharp contrast to, say, OpenPOWER, who are readily aiding the development of Libre-SoC’s (GPU-focused) vector extensions (for example, by allocating them their own operating mode).

    If you want a “Libre” RISC-V chip, you’re just as on your own for getting it made as always. You won’t see any appearing in products.

    1. 64 bit, 2.4 GHz, and .25 mm^2 is about as informative as saying:
      “My car has 19 inch rims, engine does 5000 RPM peak, and the car weighs 1600 kg.”

      Ie, it states almost nothing.

      The performance of a CPU is dependent on a lot more than just instruction width and clock speed.

      Now, admittedly, I am being pedantic.
      Though also biased, due to designing computer architectures as a hobby for years.

      There is so much more nuance to the performance of a processor than the specs above.

      As an example.
      Just changing something as small as the number of instructions in the out-of-order queue can have major impacts on application performance, core clock speed, and power efficiency, not to mention implementation density.

      Increasing the queue length can increase performance (since the core can find more non-serially-dependent instructions), reduce clock speed (since looking at a bigger queue increases switching complexity and also adds complexity in regards to checking for dependencies), reduce power efficiency (since we run the execution units harder, increasing peak current and therefore conductive losses), while making the design less dense (the queue got bigger and pushed the other stuff aside a little).

      But other changes between architectures can make even bigger differences.
      Like how an architecture that only has basic instructions (adding, subtracting, OR, NOR, AND, and maybe an XOR) will be beaten in most applications by an architecture that has a multiply instruction, not to mention divide and bit shifts. (A sketch of what that costs follows below.)
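
      RISC-V happens to be a live example: the base RV32I/RV64I sets have no multiply instruction; that only arrives with the M extension. On a core without it, the compiler (or its runtime helper library) has to synthesize multiplication from shifts and adds, roughly like this hedged sketch:

          #include <stdint.h>

          /* Shift-and-add multiply -- roughly what a compiler's runtime
           * helper must do when the ISA lacks a hardware multiply
           * (e.g. RISC-V RV32I without the M extension): one conditional
           * add plus two shifts per bit, versus a single mul instruction. */
          uint32_t soft_mul(uint32_t a, uint32_t b)
          {
              uint32_t product = 0;
              while (b != 0) {
                  if (b & 1)      /* add shifted multiplicand per set bit */
                      product += a;
                  a <<= 1;
                  b >>= 1;
              }
              return product;
          }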

      Though, most real architectures have a fair bit more complicated instructions than that. In general, an architecture with instructions catering to a specific workload will tend to perform better in said workload, be it having better performance and/or power efficiency. (Unless the workload in question isn’t compiled to use those instructions, which is actually somewhat common….)

      The main reason I am being pedantic about this is due to all the people who ONLY look at the number of bits, core count and clock speed to make “informed” judgments of what is and isn’t better than something else. Completely overlooking implementation differences, or even architectural differences.

      In the end, clock speed, core count, and even the number of bits are all largely worthless information, especially when comparing between architectures. Look at actual benchmark scores. Preferably benchmarks compiled to actually use the hardware at hand. (Unless one intends to run unoptimized code.)

      1. Granted. It’s like specifying a camera’s performance by megapixels. Let me put it another way: on a combined SPECint2006 and SPECfp2006, it packs 3 times more performance per square millimeter than ARM’s A75.

        1. And that there is a far more useful piece of information.
          Even more so if we knew the core count and clock speed of the ARM A75 implementation being benchmarked. (Benchmarks are relative, after all.)

          One should also generally look at a wide portfolio of benchmarks, or run the scope of applications of interest, since a given architecture can have certain pros and cons depending on the exact code being executed. And sometimes background processes will affect things too.

  5. After the compilers made by Intel produced inferior code if they could not detect the “GenuineIntel” string in the processor identification, just to throw a stick in the wheels of AMD, I stopped buying Intel products.

  6. There are 2 Xiangshan RISC-V processors being developed by the Chinese Academy of Sciences, which are reported to be Open Source.

    The first, Yanqi Lake, which comes out next month, is said to be the equivalent of Arm’s A72 or A73. It runs Debian Linux.

    The second, South Lake, coming out next year, is said to be close to the i9 in performance.

    The Intel/SiFive collaboration won’t be available until 2023… And it’s proprietary, and (only!) A75 equivalent.

  7. “Will We Soon Be Running Linux On SiFive Cores Made By Intel?” Not unless the SiFive device has TPM2.0, the user has a Microsoft account, and the Internet connection is persistent so it can spy on you 24×7.
