Once upon a time, about twenty years ago, there was a Linux-based router, the Linksys WRT54G. Back then, the number of useful devices running embedded Linux was rather small, and this was a standout. Getting a hacker device that wasn’t a full-fledged computer onto a WiFi network was also fairly difficult at the time. This one relatively inexpensive WiFi router got you both in one box, so it was no surprise that we saw rovers with WRT54Gs as their brains, among other projects.
Of course, some people just wanted a better router, and thus the OpenWRT project was born as a minimal Linux system that let you do fancy stuff with the stock router. Years passed, OpenWRT was ported to newer routers, and features were added. The software grew, and as far as we know, current versions won’t even run in the minuscule RAM of the original hardware that gave it its name.
Enter the ironic proposal that OpenWRT – the free software group that developed their code on a long-gone purple box – is developing their own hardware. Normally, we think of the development flow going the other way, right? But there’s a certain logic here as well. The software stack is now tried-and-true. They’ve got brand recognition, at least within the Hackaday universe. And in comparison, developing some known-good hardware to work with it is relatively easy.
We’re hardware hobbyists, and for us it’s often the case that the software is the hard part. It’s also the part that can make or break the user experience, so getting it right is crucial. On our hacker scale, we often choose a microcontroller to work with a codebase or tools that we want to use, because it’s easier to move some wires around on a PCB than it is to re-jigger a software house of cards. So maybe OpenWRT’s router proposal isn’t backwards after all? How many other examples of hardware designed to fit into existing software ecosystems can you think of?
“How many other examples of hardware designed to fit into existing software ecosystems can you think of?”
I think this is a little different – what you’re saying here is basically the entire x86 hardware tree for the past 20 years. Or most cheap Android/Chromebook builds, which take the existing Google reference designs with only minor changes (*especially* the embedded controller firmware). Manufacturer reference designs becoming “de facto” standards due to software compatibility is pretty common.
This isn’t that – it’s an ad-hoc software ecosystem *developing into* a reference hardware design, and yeah, to me that’s far less common.
It used to be that hardware dictated software. OSes had to be lean to run within the restrictive RAM/CPU designs of that time (TRS-80s/C64s, early PCs). The expense of the hardware and its limitations were a factor. Now the OSes are bloated and dictating the hardware (RAM and specific CPUs – thank you M$ for helping to generate e-waste while forcing a half-baked OS on us).
Come now – modern Windows is many things, but half-baked? I gotta disagree – it’s fairly mature these days (7 & 10 anyway…)
*cough*
7 & 10[…]
*cough*
I’d put espressif and the esp series in this category, except that they had a willingness to build a base, invite the community in, and follow along as products thrived according to the tensions held in balance by trying things out and iterating the things that thrive.
I wonder, though, if the community isn’t also setting standards by participating in design in the open, making things and flailing around until an idea catches hold and the existing products selected by the features that enable those ideas. Then when our communal compatriots that write standards sit down and write the next one, they include those features. Conference rooms may be ruthless in trimming the nice-to-haves but our ideas make it into the first round more than you might guess.
I think I recall many software-package specific commands making it into the silicon all the way back to the 386. What is new to our community is that we can now potentially get the features and software we need into a compelling standard that could become the bare minimum for all future professional products. Companies who make routers have products in the pipeline that should be scrapped because the decision makers thought they were leading when they cut components and functionality sets. They woke up one morning and discovered that they now look like chumps for ignoring the consumer needs.
I expect similar outcomes in industrial automation and automotive/cam. Boardrooms keep putting out terrible ideas and implementations trying to save money, or give the right friends high margin contracts, and standards making is a beautiful way to call them out. It will be great for consumers and makers and right to repair enthusiasts when crappy specs shrivel in the harsh sunlight of an unmet standard.
Hang on… is Hackaday proposing this, or is OpenWRT saying that they ARE building their own hardware?
If I’ve missed a source link, please forgive me … but I’m wanting to support this and can’t find where to support it!
https://hackaday.com/2024/01/13/openwrt-to-mark-20-years-with-reference-hardware/
I just think it’s crappy that manufacturers put out whatever shovelware firmware they want onto a router with decent hardware, knowing that anyone who cares will just throw OpenWRT on it, and the manufacturer is off the hook for support. The community does the hard part for free.
If OpenWRT can get some money back into the project and support development by making actual sales, and take that revenue away from the purveyors of buggy junk, isn’t that a good thing?
All true.
Now add “race to bottom”
The proposed H/W is mediocre and obsolete by today’s standards. Not suitable for most homes today.
Are you kidding? You must be new to this.
Hardware that you CAN put good software such as OpenWRT on, rather than something that is locked behind an encrypted bootloader or whatever… That alone is something to be thankful for!
The fact that they wasted their time implementing their own software poorly… that’s their problem.
The OpenGL specification led to early fixed-function GPUs designs mapping to those abstractions.
Would it matter if I mentioned that I know the brother of the chap who practically started the Open WRT march? That was Jim Buzzbee, his brother Bill built his own processor and went along to make it even better. There’s a whole story behind the creation of the project inside the archives of Linux Journal, Jim also turned his talents to the ill-fated NSLU2 device.
I haven’t heard of it.
https://en.m.wikipedia.org/wiki/NSLU2
“How many other examples of hardware designed to fit into existing software ecosystems can you think of?”
A pretty substantial portion of the Open Compute Project is basically very large datacenter operators telling ODMs how they want things done: how BMCs are going to cooperate, what abstractions switches are going to provide so you can install your own environment, what counts as an acceptable firmware update mechanism for an NVMe drive, and the like.
That’s a consortium of companies often large enough to have their own gravitational fields, so the dynamics are rather different; but the fact remains that, for reasonably mature flavors of hardware and reasonably sophisticated users of software, the users do not feel entirely well served by what the ODMs do when left to their own devices (whether it’s proprietary value “adds” like eccentric vendor BMC behavior and FRU locking, or aggressive cost optimization that ruins platform driver stability to save a nickel that can’t cover the cost of the software hassle).
“How many other examples of hardware designed to fit into existing software ecosystems can you think of?”
Hardware-accelerated media codecs follow this path. It’s only once a codec is mature enough to have broad traction and acceptance that designers deem it worthy enough to dedicate precious silicon to offload the CPU load. But that whole intermediate time between the research papers and the hardware implementation is filled with ever-improving software implementations, architecture ports, optimizations, usage in games, streaming platforms, etc.
That’s a good one!
I kind of wish Rockbox had gone down this route, since most modern MP3 players leave a lot to be desired. Most people (the ones not convinced that the smartphone is the mark of the beast) just use their phones, so the demand for a well-designed MP3 player just isn’t there. Cross-compiling for other SoCs or even doing custom hardware is very approachable now. Letting such an awesome piece of software go the way of the dodo just seems like such a waste.
The link between hardware and software is – the driver.
The 16550 UART, for example, provides a fixed interface, and it’s up to the driver to interface that with the rest of your software. Drivers can have bugs, so they get updates.
This reverses that – the drivers can be as simple (bullet proof?) as you want, and it’s up to the hardware to implement those features. If you want to revise your hardware after the fact, you’d better have an FPGA in the mix.
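The 16550 is a good illustration of that fixed hardware interface: the register offsets and status bits are baked into silicon, and every driver on top programs to them. Here is a minimal sketch of a polled-transmit routine in C, with the register offsets taken from the 16550 register map; the `base` pointer would normally refer to memory-mapped hardware registers, but here it can point at a plain byte array so the logic can be exercised without the chip.

```c
#include <stdint.h>

/* Byte-wide register offsets of the classic 16550 UART, base-relative. */
enum {
    UART_THR = 0, /* Transmit Holding Register (write side of offset 0) */
    UART_LSR = 5, /* Line Status Register */
};
#define UART_LSR_THRE 0x20 /* LSR bit 5: transmit holding register empty */

/* Polled transmit: spin until the UART can accept a byte, then hand it over.
 * On real hardware, `base` would be the mapped register block. */
static void uart_putc(volatile uint8_t *base, uint8_t c)
{
    while (!(base[UART_LSR] & UART_LSR_THRE))
        ;                   /* wait for the transmitter to drain */
    base[UART_THR] = c;     /* write the byte to the holding register */
}
```

The point is that this loop hasn’t needed to change in decades: the hardware contract is frozen, so the driver stays simple. Flip that around, as in the OpenWRT case, and it’s the hardware that must keep honoring a software-defined contract.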
“Of course, some people just wanted a better router, and thus the OpenWRT project was born as a minimal Linux system that let you do fancy stuff with the stock router. Years passed, and OpenWRT was ported to newer routers, and features were added. Software grew, and as far as we know, current versions won’t even run on the minuscule RAM of the original hardware that gave it its name.”
But to be fair, isn’t that the nature of Linux? It always seemed that way to me, at least. I’ve been observing Linux since the late 90s.
Linux consumes all resources, like memory. Because, by Linux’s logic, free resources equals wasted resources.
And it grows and grows and grows. Thanks to its monolithic design, too.
Other OSes, by contrast, go for a more modular design.
They load drivers and services as needed, they’re not hogging memory.
Linux is not designed for small ’embedded’ devices. The trend is to make the kernel bigger, not smaller. OpenWRT devs compared resource consumption across versions, and it kept increasing.
In the pre-OpenWRT days, I was using LinuxAP, which could run in 4MB RAM and 1MB flash, but it was a real sport to compile any additional binary for it. You had to buy an expensive SRAM PCMCIA card and flash them via a jumper for boot select. Now I’m wondering whether I shouldn’t try to load FreeRTOS on them.
In the pre-openwrt days, there were also a bunch of routers with uClinux on a Samsung s3c4510:
https://flickr.com/photos/zoobab/145044819/
I’ve always seen that relation as Hardware and Software taking turns at leading. Both sides having resulted in creation of the other in some circumstances.
The hardware guys get +1 itis. They release the same thing but inflate the numbers a little and tell everyone to throw the old stuff away and buy the new stuff. Even though there is objectively little difference.
It’s called mortgage driven development. No matter what, sell more stuff to pay the mortgage even though it’s all the same stuff with inflated numbers.
Purple?