What grew out of a university research project is finally becoming real silicon. RISC-V, the ISA that's completely Big-O Open, is making inroads in dev boards, Arduino-ish things, and some light Internet of Things gadgets. That's great and all, but it doesn't mean much until you can find RISC-V cores in actual products. The great hope for RISC-V in this regard looks to be Western Digital, the storage manufacturer. They're going to put RISC-V in all their drives, and they've just released their own RISC-V core, the SweRV.
Last year, Western Digital made the amazing claim that they will transition their silicon consumption over to RISC-V, putting one billion RISC-V cores per year into the marketplace. This is huge news, akin to Apple saying they're not going to bother with ARM anymore. Sure, these cores won't necessarily be user-facing, but at least we're getting something.
As far as technical specs for the Western Digital SweRV core go, it's a 32-bit in-order core, with a target implementation process of 28nm, running at 1.8GHz. Performance per MHz is good, and if you want a chip or device to compare the SweRV core to (this is an inexact comparison, because we're just talking about a core here and not an entire CPU or device), we're looking at something between a decade-old iPhone or a very early Raspberry Pi and a modern-ish tablet. Again, an inexact comparison, but no direct comparison can be made at this point.
Since Western Digital put the entire design for the SweRV core on Github, you too can download and simulate the core. It’s just slightly less than useless right now, but the design is proven in Verilator; running this on a cheap off-the-shelf FPGA dev board is almost a fool’s errand. However, this does mean there’s progress in bringing RISC-V to the masses, and putting Open cores in a Billion devices a year.
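If you want to kick the tires yourself, the usual Verilator flow applies: Verilate the RTL into a C++ model, then drive it from a tiny testbench that wiggles the clock and reset. The sketch below shows the general shape of that; the module and port names are guesses at the repo's top-level wrapper, so check the actual sources before trusting any of it.

```cpp
// tb_swerv.cpp: a minimal Verilator harness sketch. Module and port names
// here are assumptions, not taken verbatim from the SweRV repo.
// Build, roughly: verilator --cc swerv_wrapper.sv --exe tb_swerv.cpp
//                 make -C obj_dir -f Vswerv_wrapper.mk
#include "Vswerv_wrapper.h"   // model class generated by Verilator
#include "verilated.h"

int main(int argc, char** argv) {
    Verilated::commandArgs(argc, argv);
    Vswerv_wrapper* dut = new Vswerv_wrapper;

    dut->rst_l = 0;                        // assumed active-low reset
    for (int cycle = 0; cycle < 1000 && !Verilated::gotFinish(); cycle++) {
        if (cycle > 10) dut->rst_l = 1;    // release reset after a few cycles
        dut->clk = 0; dut->eval();         // low phase
        dut->clk = 1; dut->eval();         // high phase
    }

    dut->final();                          // run any final blocks
    delete dut;
    return 0;
}
```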
Very nice! Hopefully this will creep its way into other things over time.
It's funny how ARM, who used to be the disruptive player, are now on the other end of the equation. ARM will need to morph into something other than a vendor of CPU IP to survive. Also interesting to see how the "Libre" movement that started many years ago is having profound effects in the market; I guess that was the intent of Stallman et al.
Yeah right, dream on. RISC-V is hardly the only architecture around; if you really want to get rid of license costs, stuff like Z80, MCS51, MIPS, OpenRISC and plenty of other ISAs are available to chip vendors, with actually shipping silicon. It doesn't look like ARM is exactly dying, despite all that.
BTW, there is nothing "libre" about the actual RISC-V chips (except maybe the SiFive reference designs) – the ISA may be open and royalty free, but that doesn't mean that the actual silicon the companies are making with it is. When it comes to embedded stuff, the core being open means very little – that only matters for implementing stuff like compilers. What matters a lot more is the peripherals around that core and the specific implementation that governs things like energy consumption, for example. Try asking Western Digital for the sources for their chips (not just the core, which is on GitHub – and even that is just WD's good will, they didn't have to publish it, the RISC-V license doesn't require it) and you will see how far you get …
Another big problem that will hamper RISC-V is fragmentation – ARM has the huge advantage that the cores are standardized, regardless of vendor. So if you compile something for Cortex M3 (for ex), it will work on any other Cortex M3 (modulo peripheral access) – one compiler per architecture is enough. RISC-V allows arbitrary non-standard ISA extensions which will require specific tooling support.
ARM is not going anywhere any time soon, especially in the high-end market. It is certainly good to have viable competition, but a little less of the uncritical hype and Kool-Aid would help here.
“Another big problem that will hamper RISC-V is fragmentation – ARM has the huge advantage that the cores are standardized, regardless of vendor. So if you compile something for Cortex M3 (for ex), it will work on any other Cortex M3 (modulo peripheral access) – one compiler per architecture is enough. RISC-V allows arbitrary non-standard ISA extensions which will require specific tooling support.”
This is the *weirdest* argument I’ve ever heard. ARM has the same issue, which is obvious based on the fact that you say “if you compile something for Cortex-M3” and not “for ARM”. The difference is that there are a limited number of options there, based on what ARM wants to do, and they make sure that differences are clear enough so compilers can easily figure it out and handle it.
But "modulo peripheral access" is such a hilarious caveat there, because exactly how is peripheral fragmentation any different from extension fragmentation? It isn't. ARM GPUs are plenty fragmented, obviously, so why isn't that a problem? Because hardware abstraction and drivers handle it. Which is exactly what would happen if competing non-standard extensions proliferate.
Fragmentation in communities doesn’t come because of the *possibility* of non-standard extensions. It comes because the leading foundation doesn’t lead well.
You sound a bit like Bill Gates when he said (it must have been the early/mid-'90s) that a free operating system kernel is well and good, but the free software movement couldn't possibly build something as complex as a web browser.
Some time after that, his own company's browser was pushed aside by a… free software browser. As an especially bittersweet twist, a free software browser born from the ashes of something whose parent company was driven bankrupt by Microsoft's monopolistic (and most probably illegal) practices.
The rest, as they say, is…
This is a bit crazy. In ARM-land the most basic instructions you have in your program vary a lot between the Cortex-M0 (which is pretty much Thumb1), the other Cortex-M CPUs (which are Thumb2), and the A7/9/15, which do Thumb2 but also traditional ARM32. And then the range of 64-bit ARM CPUs uses a totally different instruction set again. Most of them can also run ARM32 and Thumb2 software, but newer ones, and especially third parties such as Apple and Cavium, are dropping support for that.
In RISC-V the basic instruction set that you need to run application programs is identical from 32 bit Cortex-M0 competitors such as the SiFive 2-series and PULP zero-riscy right up to 64 bit Cortex-A55 competitors such as the SiFive 7-series. RISC-V 32 bit and 64 bit programs have exactly the same opcodes but are not binary compatible due to the different register widths. They are compatible at the assembly-language source code level with the addition of a couple of macros.
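To make the "couple of macros" part concrete, the trick is just to pick the register-width load/store mnemonic at build time. A minimal sketch of the idea (not lifted from any particular project; riscv-gcc and clang define __riscv_xlen for you):

```cpp
// Choose the natural-width load/store mnemonic once, so the same source
// assembles for both RV32 and RV64. REG_L/REG_S are illustrative names.
#if __riscv_xlen == 64
#  define REG_L "ld"
#  define REG_S "sd"
#else
#  define REG_L "lw"
#  define REG_S "sw"
#endif

// Load a register-width value via inline assembly using the macro above.
static inline long load_native(const long *p) {
    long v;
    __asm__ (REG_L " %0, 0(%1)" : "=r"(v) : "r"(p));
    return v;
}
```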
The kinds of things you see as non-standard extensions in RISC-V are not instructions that are going to be useful in bash or gcc or the Linux kernel — they are in general highly specialised things that you can perfectly well handle by hand-writing an assembly language function or two that use them, and putting that in a library.
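For example, if some vendor bolts on a custom ALU instruction, a wrapper can be as small as this. The GNU assembler's .insn directive emits the raw encoding; the opcode and funct values below are invented purely for illustration, a real extension's manual would give the actual ones:

```cpp
#include <cstdint>

// Hypothetical wrapper for a vendor-specific R-type instruction placed in
// the custom-0 opcode space (0x0b). The funct3/funct7 values are made up.
static inline uint32_t vendor_op(uint32_t a, uint32_t b) {
    uint32_t result;
    __asm__ volatile (".insn r 0x0b, 0x0, 0x0, %0, %1, %2"
                      : "=r"(result)
                      : "r"(a), "r"(b));
    return result;
}
```

Put a handful of those in a library and everything else builds with a bone-stock toolchain, which is the point.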
I think MIPS managed to show that binary compatibility across the whole range isn’t something anyone really cares about. Especially on the microcontroller end of things you just get saddled with a load of compatibility baggage for no real benefit.
The only thing you are right about is that the BSD-like licensing is a problem. It benefits freeloaders and discourages contribution. It will create fragmentation, as many companies will use an open core and closed peripherals.
I have yet to see how this will benefit the average user. Will the RISC-V be cheaper in production? Will the design tools be somehow better than they are for existing architectures? Will the computations-per-watt be better? The benefit of open source software is that anyone can take it, modify it, and use it…with only a tiny investment in an average laptop computer. Who will be able to spend enough money to tweak, improve or otherwise modify an actual processor chip? These chips will continue to be built by a handful of vendors and they almost certainly will not release their masks as open-source IP, so the physical chips will not be open-source. And trying to create a competitive processor chip from perfectly good Verilog is still an enormous task.
“Who will be able to spend enough money to tweak, improve or otherwise modify an actual processor chip?”
That's a good question. There are some people trying to make silicon more available to small users, e.g. https://www.crowdsupply.com/chips4makers/retro-uc. Obviously the unit price cannot compete with mass-produced silicon, and the design costs would need to be shared in a "group buy". People use OSS because it is basically free as in beer; OSH will never be free beer.
In practice, almost no one modifies Android and loads it onto their phone. Does that mean Open Source phone software has made no difference for users?
Android is just one example of open-source software, but it is a good comparison to the RISC-V hype. In both cases the fact that the software/hardware is "open-source" has very little value for the average person. I certainly did not suggest that "open-source" is a bad idea…it's a great idea…but just because something is open-source doesn't necessarily make it the "next great thing". Your retro-uc example is also instructive…for only $42 I can get a cloned processor (with no peripherals!) from the last century.
I am not sure where the idea that "RISC-V is a benefit to the average user" came from, that seems to be your assumption. No one has said it would?
Did you read the first comment on this post, from Jeff? You might also look at some of Brian Benchoff's other posts about how "RISC is going to change the world" (https://hackaday.com/2019/02/04/openisa-launches-free-risc-v-vegaboard/), as if RISC-V was going to start some kind of revolution? As if RISC was a brand new idea in 2018?
@kjoehass "RISC is going to change the world" is likely a (mis)quote reference to the so-terrible-it's-good 1995 movie Hackers ("RISC architecture is gonna change everything") https://www.imdb.com/title/tt0113243/
Sheesh, kids these days… Now get off my lawn.
Even for the non-average user, RISC-V is still dealing with market inertia. And when it does reach the average user it's going to be the same situation: "pick any one" of whatever is current.
Sorry for being blunt, but this is utter nonsense: without Android as it is[1] there would be no Lineage OS (née CyanogenMod), and those offering GNU/Linux on deprecated smartphone hardware (e.g. postmarketOS) would have an even more difficult job reverse-engineering the proprietary residuals.
This is something I, as an end user, profit from. Hugely.
[1] And no, I’m *not* a Google fan. Not by a long shot.
Benefit to average users. I would guess Western Digital made the move because it reduces cost. Hard drive processors are not the power issue here. It's the moving parts.
I read the WD press release linked above, and I don’t see any specific claims that it will reduce costs, either to WD or to consumers. And WD has not actually done anything yet…they announced “plans to transition” in the future. As far as I can tell, their actual plan is to outsource software and hardware development by “leveraging” the open-source community.
WD will almost certainly save money, but the savings per drive will be counted in pennies. It adds up if you make millions of devices, but you won’t see it in retail prices.
Some of the benefits of RISC-V (per some PDFs from a few years back when Rocket and Hurricane were published) were that comparable performance could be achieved with 14% less silicon, and that it does not share ARM's heavy penalty for switching in and out of compressed instructions. The lack of license costs and smaller size make it attractive for giving things more power than a microcontroller for a minimal increase in cost. I seem to recall WD and Samsung mentioning using them for hardware encryption, and Nvidia using them for resource management in their GPUs and possibly to replace the FPGA used for G-Sync.
TL;DR consumer gets more features for less
Note that I referred to "the average user", not to large companies like Nvidia. I'll believe it when I can buy a RISC-V processor from DigiKey or Mouser that has comparable performance to an ARM or AVR or whatever at a lower cost. Shrinking the CPU silicon by 14% may not make much difference when you add cache and memory management functions, and I wonder if the lack of a license fee may be swamped by the increased unit cost due to low-volume production, or the increased software development cost due to a lack of mature tools.
TL;DR Consumer product vendors shave a few cents, little value to the hackaday community
Ah, so by “average user” you mean “average hackaday reader”, or hobby MCU user. I concede this development is of little interest to people who want to flash LEDs with an AVR, but there are plenty of other articles for that.
Not all hackaday readers have such a narrow interest by any means, but perhaps there should be a tag “this is not the article average users are looking for, move on”.
Benefit for the end user? Close to none.
Benefit for the manufacturer: huge. You have a well-defined arch, licensed under a BSD-like license, with plenty of tools and good community support. That means faster time to market, and that means less internal support (but not none; upstreaming is still a big hurdle).
It's much the same as with the Linux kernel: there were plenty of other OSes that could have fulfilled Linux's role, but in the end the community support changed everything.
'It's just slightly less than useless right now' sounds like it is more useless than useless. Just like how my comment is useless and a waste of data.
Well, I was thinking that Cisco already uses RISC-V in some of its products.
I don't think we need another architecture, open or not.
The market today is totally dominated by ARM and Intel.
To keep the price down, let's keep it that way.
Um, is this a troll?
I’m honestly finding it hard to tell the trolling from real commenters today. Perhaps that is just normal Hackaday and I never noticed before.
It's not a troll.
With a new architecture you will have higher costs.
And a new learning curve.
You are trolling nonetheless.
RISC-V is already out there competing directly against ARM. Get over it. Those of us who care are using it. How does that hurt you? Keep writing assembler on ARM until you are as rare a bird as a COBOL programmer, if you want, but some of us don’t want an Intel Management Engine or an Alternate Execution Bridge (C3) hidden in our processors.
There is absolutely nothing stopping anyone from adding execution modes, coprocessors, ROMs, backdoors, or anything else in their RISC-V processors.
I understand the sentiment – standardization cuts costs and annoyance. On the other hand, two is definitely too few. There is plenty of room for some specialty architectures – IBM’s POWER comes to mind if for no other reason than the single threaded execution speed. TMS430 makes ARM look positively power hungry. Various DSPs do one thing well, but man do they do it well. And I fear the last 360/370/390/Zseries engineer’s parents haven’t been born yet, for compatibility reasons I must begrudgingly admit.
MSP430 (which I assume you meant) once had a power consumption advantage, but nowadays they’re pretty average. Last time I looked (which granted was over a year ago), the newest devices from the likes of Energy Micro/Silabs totally whipped every MSP430 on the market.
Just depends on your usage. The MSP430's advantage at this point obviously isn't architectural, it's the fact that many of them are FRAM-based. Which means you get a huge amount of nonvolatile storage that costs you virtually no power to maintain and is essentially instant to write, with microamps of current needed. That's orders of magnitude different from flash-based devices, which take milliseconds and milliamps to erase/program flash. (Which, of course, is why the MSP430 FRAM devices are usually targeted as battery-powered dataloggers nowadays.)
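To put the datalogger point in code terms: the log buffer just lives in FRAM, and writing it is an ordinary store rather than an erase/program cycle. A rough sketch, assuming msp430-gcc with the TI-provided FR-series linker scripts (the .persistent section name and all the identifiers here are illustrative, and some parts also want their FRAM write protection cleared first):

```cpp
#include <stdint.h>

#define LOG_LEN 256

// Placed in FRAM via the linker script's .persistent section: the contents
// survive power loss, and a write is just a store, no flash erase/program.
__attribute__((section(".persistent"))) static uint16_t log_buf[LOG_LEN] = {0};
__attribute__((section(".persistent"))) static uint16_t log_idx = 0;

// Record one ADC sample; the write costs microseconds and microamps,
// not the milliseconds and milliamps a flash page program would.
void log_sample(uint16_t adc_value) {
    log_buf[log_idx] = adc_value;
    log_idx = (uint16_t)((log_idx + 1u) % LOG_LEN);
}
```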
TMS. sigh. I’m suddenly feeling… old. Sorry about that. :-)
I like TMS430 as a name, though I guess it implies it's less than half as good as the TMS1000. Tangent, but I do wonder why we don't see more MSP430 LaunchPad projects. They borrowed the Arduino IDE (Energia) to the point where some of the menus still say Arduino. Prices are comparable, in a brand-name sense at least, not china-mart clone prices. Some of the LaunchPads come with toys (modules, in 'Duino speak) built in. Is it the <3.3V bus? A phobia towards 16-bit processors? FRAM sounding like an automotive part, maybe? Wondering.
@profumple
I see it more as it being too close to Arduino. Arduino dominates its niche and breaking into that is hard.
And if Arduino is not enough, people seem to make the jump to 32-bit ARM instead of the slightly faster competition.
This leaves the PIC32, dsPIC, MSP430 and others in the dust. Which is most unfortunate.
Personally I would not be sad to see the death of the general-purpose 16-bit MCU. Their niche has been squashed between high-end 8-bitters and low-end 32-bitters. Also, they've all had to add some ugly hacks to extend their address space, which makes them awkward to develop for.
It’s quite possible Western Digital has no intention of ever deploying on RISC-V and is throwing it into the market as a means to pressure ARM during future contract negotiations.
If the investors behind ARM had held your perspective, it’s very probable the current boom in mobile (cellphones & tablets) would not have occurred.
I am curious where RISC-V goes in the future.
Though I wouldn't be surprised if it stays mostly in system-controller types of applications.
And the open source part of it likely isn’t all that important, considering how there are other open source ISAs on the market.
In the end, it will likely aid fairly well in the development cycle of larger projects, like hard drive and SSD controllers in Western Digital's case, probably some network chipsets for companies like Cisco, or on-board management for GPUs in Nvidia's case.
I don't suspect that it will replace existing architectures like ARM, x86, PowerPC (discontinued, to my knowledge, but IBM still has their Z architecture for mainframe computers), and various other architectures that most people have likely never heard of, but which are still catered towards specifications that are important in their respective domains.
But regardless of most of that, the most important part of an architecture (open source or not) is that one can get a comprehensive guide on how to develop for the platform, what creature comforts it has, and how it wants its data served. (And of course whether it is little or big endian.) And the RISC-V community, from what I have seen, does provide a decent chunk of documentation.
PowerPC lives on as the POWER9. I say “lives on”, there are a handful of different instructions, but they’re really, really close. PowerPC was a branch off of POWER, eons ago. Also there are embedded versions of the PowerPC that were big in telecom and as engine management computers in cars – there may still be.
At one point, Zseries and POWER shared some very small functional blocks on the dies, probably with some minor tweaks. I doubt it’s fair to say there’s much at all in the way of commonality, but it seems like maybe some parts of the “level somethingorother” (L3?) cache may have been shared. I honestly can’t remember now.
Looking at the differences between different CPU architectures can be a fun thing.
But also a fairly confusing one at the same time.
Been fiddling with some of my own, though, if anything comes of that effort in the future only time will tell.
Others have pointed out other open architectures, and I'm not knowledgeable enough to know why someone would pick RISC-V over one of the other open CPU designs.
But with the hype around it, I hope we see the development of more of the open source parts needed to make computers out of this stuff. Things like open source host buses, open source peripheral buses (preferably software-enumerable), and hopefully open source peripherals in the future.
I’m not sure if these things come with memory controllers but hopefully they come with something that can be paired with RAM modules as well.
A part of me wants the old school PC building experience when the market was exploring lots of different things like various sound cards, IO controllers, and other accelerators.
The open source architectures of old are:
MIPS (1981) It has honestly been fairly popular, it is just very "old". (Three years younger than the "first" x86 processor.)
SPARC (1985) It made up a sizable chunk of the top500 list of supercomputers in the late '90s and into the early 2000s. (Even as of November 2018 a SPARC system was still clinging to 18th place on the list.)
Power (aka PowerPC) (1991) This one also takes up a good portion of the top500 list of supercomputers, despite the ISA not being royalty free. (Both the Xbox 360 and PS3 used PowerPC. (But IBM stopped production when PowerPC wasn't selling well in the server market, forcing both Sony and Microsoft to find a new partner for making the processors for their respective consoles. (They both went to AMD and are now using x86 for the Xbox One and PS4, respectively.)))
MMIX (1999) The key feature of this ISA was that its machine code would have similarities to normal programming, so as to make it easier to develop for the platform. (It has no known hardware implementations, but there is a Verilog softcore for it.)
LatticeMico32 (2006) It is a softcore made by Lattice Semiconductor, optimized for FPGAs.
OpenRISC (2010) Exists mostly as a softcore. Someone made a crowdfunding campaign to make a processor using the ISA, but the project never reached its goal. And there are no known implementations of it in hardware.
RISC-V (2010) Started at the University of California, Berkeley as a three month summer project by Krste Asanović, with later help from David A. Patterson.
It focuses on being fairly flexible in its scope of applications, while keeping a measure of simplicity and at the same time aiming to have the features needed to be a useful architecture in practice.
Though why and how it has developed into what it is today is not something I can find much information about. But it has gotten funding from DARPA, and a lot of companies have joined in on the effort (likely due to RISC-V using a BSD license). Simply stated, RISC-V just seems to have gone viral.
(Though, one can easily write another few paragraphs for each one of these ISAs, looking into their differences, development history, where they are used, and so forth. But this is just a “short” comment, and not a whole Hack A Day article.)
I'd love to read an article on this. I have a reasonable working knowledge of x86 and Intel platform architecture (I worked in BIOS and UEFI for a number of years). I've gotten a little ARM experience as well, enough that I see what I believe are some of the technical reasons the ISA hasn't really been able to displace Intel in the PC space (though AMD has taken something of a swing at it and enabled more work on the part of Cavium and Qualcomm). But why none of these other architectures?
The info on PowerPC is enlightening and I think shows why a fully open source processor would be desirable. But IIRC, Sun released SPARC as open source (and I thought that included the masks for fabrication).
I have to wonder if RISC-V has gone viral just because it's new and licensed under the BSD license.
Did MIPS, SPARC and Power start as open source architectures, like RISC-V and OpenRISC did?
And with no license fees and a permissive license?
Because I recall the news when SPARC was open sourced. So listing "1985" for SPARC might make it sound old, but has it been open source since the beginning?
This is a good question; my research was quick and I'll admit that it may contain factual errors.
SPARC was actually released in 1987, and handed over to the SPARC International trade group in 1989, and according to them it is open source and royalty free. https://sparc.org/faq/
PowerPC, however, is not royalty free according to every source I can find, though it is still open source.
On a closer look, MIPS on the other hand is scheduled to be open source in the first quarter of 2019, so any day now practically. (So MIPS currently shouldn’t be on the list…)
But why entertain the world with a new architecture?
The ones doing the heavy lifting of programming don't need another platform?
Why do programmers want a new platform? So they can build a hardware startup and be in control of their whole platform. Open is very empowering. The other options have a lot of known cons, compared to a design you control and where you can just include whatever is convenient for your use case.
And also, for the programmer it is just another embedded RISC processor, there isn’t anything exotic about it. But it is much more comfortable than the very old architectures that tend to also be free. So this gives you all the normal stuff you’d want to have.
Why entertain the world with it? Because it will generate lots of opportunities for related products and services for WD. If they offer a regular product, it is a new area for them and they have to do more work. If they do an Open hardware thing instead, they don't have to invest as much, and they automatically become the top place to go for consulting on extending their design. In this case, many parts were already done for them too; they just added the parts of the work that are easy for them to add. This is the type of easy growth that open systems encourage and enable.
Because more competition means lower prices.
If you only have one supplier, that supplier has no incentive to keep prices down. But they do have incentive to keep prices high.
You can buy Arm microcontrollers for less than a dollar.
Three reasons for a new open architecture are freedom, security, and privacy.
Do you really want to build your shiny house in the middle of a swamp with no secure foundation?
Intel – ME – Management Engine ( https://libreboot.org/faq.html#intel )
AMD – PSP – Platform Security Processor ( https://libreboot.org/faq.html#amd )
With holes being poked in them because people keep equating OSH with OSS.
If you buy your chips rather than make them, the only thing that changes is who you decide to trust.
Amen to that.
But you do have the option to make your own, and with the cheap military access to fabs anyone can afford to do a small test batch (for personal use) at a very low price.
Seriously? Where can you fabricate a 32-bit processor with hundreds of I/Os in a reasonably modern process for less than $250k? What is this “cheap military access to fabs” that you are talking about?
Look it up, it is not difficult to find if you know that something exists. And you even have the name.
I have fabricated chips at MOSIS and I have fabricated chips, including 32-bit processors, at captive fabs, both private and government, in the US. My experience suggests that you need a significant fraction of a million dollars to really fabricate a processor that can compete with current commercial offerings. The software tools to do the chip logic design, verification, and layout will be six figures…I’ve used Synopsys, Mentor Graphics, and Silvaco. Quite frankly, I think the notion of the average hackaday user modifying and fabricating a real microprocessor is pure fantasy.
I never said "average" and I also never said create a new product line of chips that could "compete with current commercial offerings". I did say that it would take a lot of invested time. You have said things that contradict yourself, but maybe that has to do with the industry you are in. What I did say was a small prototype run at a reasonable price, and I never said it would be easy.
Any individual could easily afford to use the MOSIS service, as long as they know what they are doing and can pay for a minimum run size of 40 chips.
So how much does MOSIS charge for a device with hundreds of thousands of transistors, several hundred I/Os, in a reasonably state-of-the-art process? This isn’t some educational project with 40 pins in a 2um process. If you have a choice between buying your own private jet or building some chips, and that fits your notion of “easily afford”, then yes you can!
Cheap *and* convenient!
@kjoehass
If you actually spent less than 60 seconds to look it up, you would know that you could get down to a 12nm process. But the smaller the fabrication process used, the more you pay; those slots are in high demand. Ultimately what the MOSIS service provides is a short runtime on the fab: when they are changing from one customer's (e.g. AMD) production run to another's, there may be a few minutes free to make additional profit, and your design would typically be mixed and matched with others to be masked and etched on the same silicon wafer. So they would run a minimum of 40 wafers. The price would be based on the area of the die and the fabrication process (12nm, …, 0.5 µm), and you only have specific fabrication timeslots for each process available in the next two months of the schedule across all participating fabrication plants. And then you would need to pay additional costs for assembly (attaching gold bond wires and packaging). Asking for a price is not easy with so many variables, which is why they have a quote system. But it is not going to cost crazy private-jet money to get 40 chips produced; what it will take is a lot of time. That would be a bit like claiming that it would take private-jet money to get a 12-layer PCB produced. The fabs don't care what they run, they only care about making a profit; the MOSIS service takes on the complex task of merging multiple designs from multiple customers onto single wafers and then presenting exactly what the fabs want (no delays).
I’ve done it, several times. I know what it takes to design, verify and fabricate a microprocessor. I speak from years of experience in this field.
I believe you, Joe. And the reality is that the current open source tools ( http://opencircuitdesign.com/ ) that could generate designs accepted by the MOSIS service will limit you to a technology node of 180 nm at best. But a really smart individual could do it; it is not impossible. I was looking at it from the perspective of a work of passion to produce 40 custom-made chips (not RISC-V chips) for a tiny group of people, and possibly you are looking at it in terms of a large-scale commercial production line?
I'd much rather have a blazing fast 68000-type 64-bit core; the instruction set was so much easier to pick up and run with, which is why I wrote in assembly when I was 12 rather than BASIC or C. I don't mind new architectures, but I have to admit RISC-V isn't doing it for me at all, no decent processor silicon being the major point really!!