Open Source Needs A New Mission: Protecting Users

[Bruce Perens] isn’t very happy with the current state of Free and Open Source Software (FOSS), and an article by [Rupert Goodwins] expounds on this to explain Open Source’s need for a new mission in 2024 and beyond. He suggests a shift in focus from software to data.

The internet as we know it and all the services it runs are built on FOSS architecture and infrastructure. None of the big tech companies would be where they are without FOSS, and certainly none could do without it. But FOSS has its share of what can be thought of as loopholes, and in the years during which the internet has exploded in growth and use, large tech companies have found and exploited all of them. A product doesn’t need to disclose a single line of source code if it’s never actually distributed. And Red Hat (which [Perens] asserts is really just IBM) has simply stopped releasing public distributions of CentOS.

In addition, the inherent weak points of FOSS remain largely the same. These include funding distributions, lack of user-focused design, and the fact that users frankly don’t understand what FOSS offers them, why it’s important, or even that it exists at all.

A change is needed, and it’s suggested that the time has come to move away from a focus on software, and shift that focus instead to data. Expand the inherent transparency of FOSS to ensure that people have control and visibility of their own data.

While the ideals of FOSS remain relevant, this isn’t the first time the changing tech landscape has raised questions about how things are done, like the intersection of bug bounties and FOSS.

What do you think? Let us know in the comments.

42 thoughts on “Open Source Needs A New Mission: Protecting Users”

    1. I would love to see a way for there to be open standards in a productive way (https://xkcd.com/927/ comes to mind).
      That said, I’d love to see this go beyond just software and into all the various pieces of tech that are needed to build a computer too. There is a lot beyond an ISA needed to do that.

    2. Simply remind your government to do its job, like the EU kicking Meta and the rest in the…. I still don’t know why Ashton Kutcher is regularly hanging around the EU Parliament trying to force each EU member country to mass-spy on its people… ah, I guess to sell his software.

  1. I think that cat got out of the bag fifteen or twenty years ago. It had a litter and died of old age already. Sorry but people’s data is too pwned to fix now, unless this is just another “safety” euphemism for FRAND like the other comment implied (this is probably the case).

  2. FOSS may protect users from vendor-lock, but it doesn’t really protect all that well against data leakage or scams. To do that, you still need skilled people running the services, whether volunteer or paid.

    I don’t think users will ever care much about the difference between free-to-use and FOSS. Volunteer-built software will always be a bit rough around the edges. Fortunately there is a growing number of companies that manage to make money while releasing their product as open source.

    1. Even if they did care, it’s not reasonable to expect that they should do something about it.

      If the Open Source alternative sucks, then you’re sucking lemons either way. The question is, which way do you actually get your job done?

  3. Who was it that said “I don’t like anarchy – it has too many rules”? A large portion of the problems of FOSS is that people turn software into politics and attempt to control what other people can do with or to it.

    It can be things like refusing to maintain a stable driver ABI because “All drivers should be open source instead of binary blobs.” – i.e. trying to own other people’s work. Well, the result is that you don’t get very many good drivers from the OEMs because you refuse to play ball with them. Who’s at fault there? They who refuse to give up their trade secrets, their business model, and break licensing agreements with third parties, or the “freedom warriors”?

    1. I’ve bricked no fewer than six distros because of this, including the one I was planning to use to bail on Windows when 10 goes EOL. If my $800 GPU now performs like a $300 GPU because the drivers suck, I either have no incentive to use Linux or no incentive to buy fancy hardware. It’s lose-lose all the way.

      1. Worse – if one distro gets one thing right, like having proper out-of-box support for your GPU, then they do another thing wrong like insisting on using a different sound stack that gimps your surround sound setup. Then you go whacking moles getting everything working, and six months later they upgrade the distro, change stuff, and it breaks down again – but you can’t keep to the old version because the new versions of the software you use are all on the next version libraries, so you have to upgrade to keep up.

        1. GPUs are awkward mostly because Nvidia sucks for Linux support, yet makes up a huge share of the market. It can still work, but it goes wrong rather too often, and when it does it is usually an Nvidia problem, not a Linux one. However, get an AMD card and, other than the old Volcanic Islands(?) era that marks the crossover point from the Radeon to the AMDGPU driver, it should just work. Same for Intel on integrated graphics at least; not sure about Arc, but being Intel I expect it’s perfect… Intel and AMD actually support their chips really well.

          The only way surround sound can legally be shipped is, effectively, that it can’t be – at least for the Dolby stuff, as the licensing BS gets in the way. But any other surround sound system should just work, and you can toggle the Dolby stuff on pretty easily if you know how. Which, as you did it yourself, is fine legally – for the distro, anyway; it is you on the hook should you not legally have a licence to use it (which you should, as the mobo would come with one). But once you properly create the config file that makes it work, you won’t have to touch it for years – upgrade to your heart’s content, the config doesn’t get changed. It only becomes a pain again when you want to install something else, and because it’s been forever you don’t just remember how…

    2. The decision not to have a stable driver API is so that we can have higher performance and more features as time goes on, and so that drivers continue to have active maintainers or are removed. Linux has features at the driver level now that nobody thought of a few years ago, and they drive performance and functionality. In contrast, other OSes have a pretty severe problem with old unmaintained drivers that have had the same bugs for decades and perform as if they are connected to a 1980s OS, because that’s what the API was written for.

      The modern version of this screw-up is the pre-FCC-approved WiFi module, all of which have absolutely horrid firmware that can’t be fixed and breaks the network in various ways while working just well enough to be shipped. It’s bad enough that more groups went into Open Hardware just to replace it.

      If Apple and MS actually tolerate not having the continuing participation that Linux requires from driver writers, it’s got to be causing them big problems.

  4. If users and companies use the free stuff, it’s in their best interests to be actively involved in helping to keep the stuff running; riding roughshod over these things doesn’t win you friends or customers. Binary blobs are probably necessary in a competitive world, as long as support is plentiful and isn’t ditched early.

    No one is obliged to do anything they don’t want to but if you don’t encourage the open source, you’re probably going to lose out to long development cycles and stuffy roadmaps that can’t adapt quickly enough to true innovations that come from open source and mad lads/lasses trying stuff for the giggles.

    How many open source projects and innovations have we seen folded into commercial products a few years down the line with a slightly different badge on them?

    1. > it’s in their best interests to be actively involved in helping to keep the stuff running

      Too many cooks spoils the soup. If everyone “gets involved”, the whole thing gets bogged down in logistics.

      1. Of course but actively involved doesn’t necessarily mean ‘let everyone have at it’ all of the time. Forks, branches and pull requests seem to do a good job of allowing people to keep their gatekeeper status whilst allowing others to contribute in meaningful ways. It’s probably not a perfect system but it does allow for participation with less logistics.

        1. Users are not developers, nor should they be. You’re still suggesting a scenario where a million users should somehow get “involved” with the software by forking and branching and contributing code to it – while being almost entirely incompetent and uninterested in doing so. What good could ever come out of that?

          The best contribution that a user can have on the success of a piece of software is to pay money for it – to keep actual competent developers and managers working on it.

          1. This is kind of funny, as I kind of have a hard time envisioning that others don’t have/work like me.
            I’m an electrical engineer, and I code. But I also assume that every store clerk, architect, and farmer is using software/thinking about their process as a significant part of their daily work.
            There is always some data to keep track of, and where there is data, coding will help manage this data.

            2. In the typical case, 99% of your users effectively can’t “code”. Even among people who know at least some basics of programming, most simply use Excel formulas to handle their data. It’s because people are cognitively lazy, uninterested, have other stuff to do, and their particular tasks don’t demand it, so they only learn and remember what is absolutely necessary.

            There is a Grand Canyon sized skill and knowledge gap between “can code if necessary” and “can maintain desktop productivity software”.

        2. The issue with “community contribution” in open source is that moderating the community becomes a chore of its own, and useful information gets lost in the noise, to the point that the people who actually maintain the software or a branch of it will find it completely useless. All their time will be spent dealing with people rather than code.

  5. FOSS already does this by default; it’s really just a popularity game – FOSS needs to gain market share. One place where FOSS is typically weak is UX: things don’t work as expected, or the UI diverges from the target platform (for example, I hate GIMP’s save dialog under Windows because it diverges from what the user expects), or the very common OK/Cancel dialog where the buttons are backwards from the local convention. Linux distros have this problem where they give you clear instructions on how to do X, but the instructions are often sensitive to dependencies and version incompatibilities and fail for reasons unknown, followed by very little guidance about what to do when things go wrong or how to roll back the change in a non-destructive way. Users just want software that works and is mostly self-explanatory.

    1. Proton is a good example of it done right – way better at making games work than Wine alone, which could require significant tweaking. And those tweaks, if done incorrectly, can potentially be system-bricking. Ask me how I know. Proton? I install it and I’m playing my games, flawlessly in many cases. GJ Valve.

    2. I’d argue that at this point the core Linux distros, and the core desktop software folks actually need, are now – and have been for quite some time – in ‘just works’ and ‘self-explanatory’ territory. And you are far, far less likely to get problems with dependencies and versions on Linux as anything other than a more advanced user – use the package management system of your distro and everything just works, and changes that break previous workflows are pretty darn rare.

      If anything, I’d say the big distros are much more self-explanatory than Windoze now – the OS that still can’t manage to put all the stuff that used to be under Control Panel in one place, or explain which bits should be where consistently between versions anymore, whose audio stack can’t be told to stop trying to be ‘helpful’ unhelpfully every time a device changes, and …

      Windows might feel like it ‘just works’ better to a historical Windon’t user, but really that is nothing to do with it being better and entirely to do with how familiar the users are with its idiosyncrasies. Seriously, try putting a Mac user in front of a PC and vice versa…

      Which is probably why you like Proton: though it is darn good, the real magic is that the user does nothing but click launch in Steam – same UI as always, very familiar – and now it just works in nearly all cases, even though the game is supposed to be Windon’t-native only.

      1. Windows is better for the intermediate user. Not the expert, not your grandma, but every day folks who understand something but aren’t necessarily interested in getting any deeper than necessary. It’s not done most optimally, but there are buttons and gadgets for everything somewhere.

        Linux is pretty much either or. Either you do it the way they make you do it, stay in the walled garden and never venture outside of the software repository or default settings, or break everything and get a master’s degree in IT just to understand what you broke.

        1. >Linux is pretty much either or. Either you do it the way they make you do it, stay in the walled garden and never venture outside of the software repository or default settings, or break everything

          With the world of Flatpak/Snap-type packaging that really isn’t true at all; programs get a little larger (more in line with Windows programs), but it is basically impossible to break anything, and many programs seem to be shipped that way now. And sticking with the better-known distros (and the ones based on them), for a normal user any software that you might want that isn’t already in the repo is probably available as a downloadable .rpm or .deb, exactly the same way as it is on Windows with an .exe installer: straight off the website, pre-packaged to just install. That is very, very unlikely to break anything either – it’s almost certain to just work, or to tell you very precisely, in a way you can ask the web for help with, why it didn’t install.
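          For what it’s worth, that ‘straight off the website’ flow really is just a command or two – a sketch, with made-up package file names for illustration:

          ```shell
          # Debian/Ubuntu family: install a downloaded .deb and let apt
          # resolve its dependencies (the leading ./ tells apt it's a local file)
          sudo apt install ./example-app_1.2.3_amd64.deb

          # Fedora/RHEL family: same idea for a downloaded .rpm
          sudo dnf install ./example-app-1.2.3.x86_64.rpm
          ```

          Or skip the terminal entirely: on most desktop distros, double-clicking the downloaded file opens the graphical software centre, which does the same thing.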

          If anything, a Windows installer that does go wrong is more likely to go wrong in ways where you need a master’s degree and the M$ secret handshakes, as it doesn’t have a community that knows how everything works and can generally give you two lines of bash script that identify what went wrong and how to fix it… I’m fairly sure almost nobody at M$ themselves knows their own OS well enough to do that.

          You have to actually work at it to break stuff in the more normal Linux distros now, as the polish level is so high and the GUI config tools for the ‘Windon’t converts’ who are not comfortable with a terminal are so complete – so unless you try… Now, if you want to go back 10, probably more like 20, years, or really insist on doing complex expert-user things without actually being competent, I could agree with you. But on that latter point, if you do that in Windoze or Mac you are just as screwed, if not more so.

          1. >With the world of flatpack/snap type concepts that really isn’t true at all,

            Fortunately – but then again, how many distros and users/devs actively fight against these concepts because they find it “restrictive”, don’t like the particular way they do it, or just don’t like the organization/people behind the particular implementation? The fact that there is snap AND flatpak already illustrates the point: for anyone to distribute software to the widest audience, they have to do both, and there are still others…

            There’s no sane default. You can pick which partisan group you want to join, but they’re all pretty much balkanized to do their own thing in slightly incompatible ways.

          2. >If anything a windows installer that does go wrong is more likely to go wrong in ways you need master’s degree

            There’s a fundamental difference in how “applications” are installed in Windows vs. Linux, because the latter is trying to integrate all software into the structure and hierarchy of the operating system itself – whereas in Windows a program simply resides in some folder on your hard drive, or anywhere really, and does its own thing. That’s possible in Linux as well, but it’s not the default way you do things.

            Of course that paradigm has been muddled over the years, but in principle there isn’t that much that CAN go wrong because the windows installer is basically just copying files to a folder and writing a couple registry entries for itself. That’s why it’s very very rare to have an installer go wrong.

          3. >But on that latter point if you do that in Windoze or Mac you are just as screwed, if not more so.

            Interestingly enough, there are many “complex expert” things that you can do with ease on Windows because so many people have used it that they’ve bothered to include the option/tools to do it. In fact, I think that function comes from the driver for that class of devices, so it’s not a function of the OS itself, but comes from the fact that driver packages can easily add GUI elements to the OS – something which is sorely lacking in Linux.

            Such as, “If you see this USB serial port adapter, always assign it this COM port number”. If you go into the device properties in Device Manager, there’s a menu that lets you pick the number. How do you do that in Linux? It’s probably not too difficult, but you’ll spend the next working day figuring it out.
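            For the record, the usual Linux answer to that particular example is a one-line udev rule. A minimal sketch, assuming a hypothetical FTDI-based adapter (USB vendor ID 0403, product ID 6001 – substitute the IDs `lsusb` reports for yours); the symlink name is made up:

            ```shell
            # /etc/udev/rules.d/99-usb-serial.rules
            # Whenever this specific USB serial adapter appears, also create a
            # stable /dev/ttyUSB_mydevice symlink pointing at whichever ttyUSBn it got
            SUBSYSTEM=="tty", ATTRS{idVendor}=="0403", ATTRS{idProduct}=="6001", SYMLINK+="ttyUSB_mydevice"
            ```

            Then `sudo udevadm control --reload-rules` and re-plug the device. Which rather proves the point above: it is one line, but you could well spend the working day discovering that this is the line.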

    3. Linux fixed dependencies and incompatibility a long time ago, if you just use snap packages.
      We fixed UX(mostly) with GNOME and Cinnamon. Grandma can use it just fine, only some more obscure things like web serial being broken by default because of permissions still need a command line.

      The problem is that if you ask “Hey, how do I install Linux?” all the hackers and tinkerers respond, advising icky nonsense like Manjaro, where installing most anything requires a ton of commands to first set up the AUR, then half an hour of waiting for it to compile – or maybe Mint, which is very good but still subject to Linux’s biggest issue of mutable dependencies for large applications.

      The public doesn’t understand what “advanced” distros are, they think that the stuff all the smart people are using must be full of features and really great, like the “Pro” edition of proprietary stuff.

      People think advanced distros are like a really fancy power tool, but they’re actually like an antique hand saw that purists enjoy, but most pro carpenters and beginners alike probably use electric if they can afford it. The enthusiast distros have *less* capability, because they’re meant for people who want to DIY everything, because their user base does very very different things with computers than average users or even devs, and they value not having anything unnecessary on their system.

      Stick with Ubuntu and use Snap packages and you’ll find incompatible packages and third party repos are mostly not a thing.

  6. It’s difficult to hear that Perens isn’t happy with the way corps are abusing users with FOSS-licensed code. He quit the OSI over their approval of the CAL, the license that covers the software we’re building at work, whose explicit purpose was to protect users’ right to access their data that’s generated and held by the licensed software — sort of an AGPL, but for data. I don’t know his exact reasoning; I gathered that he thought this was overreaching the definition of open-source and turning it political. Maybe he’s softening?

    1. I don’t believe that the CAL is an Open Source license or that the OSI was correct in accepting it. Although the user data provision is well-meant, it is a use restriction (contrary to the OSD prohibition on such) and requires technical competence that a naive user of the software can simply not be expected to have. Fundamental to Open Source is that simple users, like individuals and small businesses, should be able to simply use the software without reading the license, having a lawyer, or being able to perform some arbitrary technical requirement. All of which CAL requires. If you modify the code, you _are_ expected to be able to do those things.

      That I propose another paradigm isn’t really “softening”. I think that Open Source should continue and should have the same rules, and CAL doesn’t belong in that space. Post-Open is also about having users who don’t have to have a lawyer or be able to perform arbitrary technical requirements, so CAL would not work for the below $5M entities who do not modify the software (the number is not cast in stone at this time). I can see having a data-return requirement for the deep pockets entities that Post-Open treats differently (and Open Source treats the same), but I am not convinced that this is our mission.

  7. Companies have a payroll to pay. The money has to come from somewhere.

    Data privacy means no selling / making money.

    If universities (GOVT MONEY) begin or continue to help develop FOSS projects or improvements to existing ones that may be one way to keep funding professional developers.

    Most of what is available that is selfhosted could use a lot of improvement in terms of ease of setup / use / maintenance. That would be the best solution, don’t expose your data in the first place. As that body of software grows, improves and gets more interesting perhaps more people will adopt it.

    1. Data privacy means the companies have to make money honestly – provide goods or services instead of selling every bit of information they can beg, borrow, or steal about people.

      Data privacy laws need teeth. A data breach should cost the companies so much that they don’t keep any more information than is absolutely necessary.

      It needs to reach the point that companies view personal data as toxic waste – don’t gather it, don’t keep it, dispose of it as quickly as possible.

      1. I think the toxic waste level might be overkill, but on the whole I agree. That is a business model I’d be able to get behind, and one that really has worked and should still be able to. However as some of that data is genuinely useful there has to be some balance to be struck, and a debate on just how personal should count as private.

        I’ve however given up on hoping for that any time soon, trying to have data privacy to the extent I’d like just isn’t very compatible with the internet as it stands now. But you can at least keep away from the cloud services, ‘smart home’ crap like TVs and echo dot things etc so you are not actively helping everyone collect all your data…

    2. In the world of FOSS, much of the money comes from the computer giants like Google, Microsoft, and IBM, and from the smaller companies that are using it. They are paying for developers’ time because the FOSS software is so useful for all of them in providing the services they do, and none of them has to spend all the money to develop the entire thing in-house, or buy in a software product that they can never leave, tweak, fix, or control, and that will become the bulk of their operating costs the way Adobe or CAD suites tend to price themselves (‘oh no, you can’t keep using the version 5 you paid for, that won’t activate anymore’, etc.). They are still making money if they do it right – probably more money, now that the development costs are so widely shared – and with the confidence that no massive upset to their business model can be foisted on them by some bean counters in another company looking to bump their profit…

      1. The companies that are trying to push SaaS are simply pissing off their clients and shooting themselves in the foot. However, the open source alternatives suffer greatly from design by committee where they’re trying to cater to everyone and nobody is really taking responsibility or putting in too much effort. There’s too much inertia and too much politics, and you can’t really vote with your wallet because the community behind the software doesn’t care if you leave – they’ll just say goodbye and good riddance.

        Which is why the real competition to Adobe isn’t GIMP but Affinity. The success of the latter depends on pleasing customers who want Photoshop or Illustrator etc. but don’t want to pay $400 a month to use it, so there’s a real incentive to get things right.

        1. Loads of folks have loved GIMP for ages, and Blender is the biggest tool of its sort and open source – you might have issues with some projects’ style, that is fine; some of them certainly are less smoothly managed than others.

          But don’t just tar all Open Source as rubbish for being community driven. There is a reason FOSS software has almost entirely taken over in so many areas, like web servers – it can be really, really darn good, to the point that it is way ahead of any single company’s ‘alternative’, and that is often in part because it has so many folks invested in it and able to create or sponsor the features they wish to see. There is plenty of incentive to get things right.

    1. Unfortunately FSF has been about the least effective of any organization at reaching the common person. This is because they are philosophy-first rather than user-first. Users consider their immediate needs first, and you reach them by having software they want to run. If you are lucky some will eventually appreciate the freedom and privacy, but we need to be clear that philosophy is our business, and not resent the fact that many of them will never get it.

  8. “A change is needed, and it’s suggested that the time has come to move away from a focus on software, and shift that focus instead to data. Expand the inherent transparency of FOSS to ensure that people have control and visibility of their own data.”

    Uh oh… Pooh Bear’s not gunna like this!
