Linux’s Marketing Problem

The cult classic movie Office Space is a scathing critique of life for software engineers in a cubicle farm, and it got a lot of things right even when it didn’t mean to. One of those is the character of Tom Smykowski, whose job is to “deal with the customers so the engineers don’t have to”. The movie treats Tom and his job as something of a punchline, but his role is actually very important for most real businesses that rely on engineers or programmers for their core products.

Engineers can have difficulty relating to customers, and often don’t have the time (or even willingness) to handle the logistics of interacting with them in the first place. Customers may get frustrated trying to understand engineers or to communicate their own ideas clearly to them. A person like Tom Smykowski is often necessary to bridge the gap and smooth out the rough edges on both sides, but in the Linux world there are very few Toms to rely on. The customers, or users, have to deal directly with the engineers in many situations, and it’s not working out very well for either group. Linux has a marketing problem, and it needs a marketing solution if it ever wants to increase its market share in the PC realm.

If you’ve ever gone further into the diverse and layered world of Linux than installing a pre-packaged distribution like Ubuntu or Mint, you’ve probably come across someone who claims that the proper way to refer to “Linux” is actually as “GNU/Linux”, or has gone on a rant about how binary blobs are dangerous, or any number of other topics. Any of these points may in fact be valid, but they will instantly turn away anyone who is simply looking for a quality piece of software and may not yet care about the finer points of FOSS or the motivations of the people involved in creating it. These developers and coders should truly be commended and respected for the creations they have brought into the world, but they can’t be expected to market their products effectively, because they aren’t marketers. Their beliefs about software are passionately and firmly held, but leading with those beliefs isn’t a good way of interacting with the outside world. The core problem is that people with deep knowledge of a subject often have difficulty relating that knowledge to the general public, and they need some help.

2099: The Year of Linux on the Desktop

Let’s look a little deeper into this problem as it relates to Linux and take a broad overview of current operating system usage rates. For desktops and laptops, Windows has 87% of the market, with macOS trailing at around 10% and Linux under 4%. Both Microsoft and Apple have huge marketing budgets and also benefit from some institutional advantages here. But if we look at systems that do not rely on marketing for sales, such as the supercomputing or server worlds, Linux is dominant in every way. Virtually 100% of supercomputers now run Linux. How you define a web server is contentious, and Linux figures range from 70% to 98% depending on whether you count cloud services and subdomains, but by any measure Linux runs the vast majority of the web. Even smartphones are dominated by the Linux-powered Android, with about 65% of devices, 20% using iOS, and the rest an amalgamation of fading BlackBerries, Windows Phones, and others.

From these numbers we can infer that there is some intrinsic benefit to working in a Linux environment. Not only is it dominant where raw computing ability is needed, whether in a supercomputer or a web server, but it must also be able to work effectively as a personal computer, otherwise Android wouldn’t be so popular on smartphones and tablets. From there it follows that the only reason Microsoft and Apple dominate the desktop world is that they have a marketing group behind their products, one which provides customers with a comfortable customer-service layer between themselves and the engineers and programmers at those companies, and which also drowns out the message that Linux even exists in the personal computing realm.

You Can’t Handle the Linux

To give an example of how frustrating it can be to get through jargon in the Linux world, take a look at Puppy Linux, a version of Linux specifically designed to run on a jump drive or on legacy hardware. It’s been around since the early 2000s, so it’s not new to the game. Its main features are its small size and the ability to save its state to the jump drive it’s installed on, preserving the settings and files between reboots and across different machines.

[Image caption: “Cute, but he can bite!”]

The installation process is not straightforward, despite the distribution’s age, and requires either two separate jump drives or a single jump drive and a computer with Puppy already installed. You would expect the website for the distribution to have directions, or at least link to them. Instead, the front page is largely a treatise on how Puppy Linux isn’t actually a “distribution” per se, plus a technical description of what does and doesn’t count as a true Linux distribution.

Confusingly, underneath this paragraph is a set of download links labeled “Official Distributions”. This is a perfect example of the customer having too much direct interaction with the engineers. It’s as if we have to listen to a lecture on the difference between Phillips and Torx screws before being allowed to use a screwdriver for a simple task. We need to know how to install and use the software first, and then we can investigate its nuances and ideology once we know how to use it.

Of course we’re picking on Puppy Linux a little to help illustrate a point, but this trend is far from rare in the Linux world. On the other hand, a counterexample of how even a simple buffer between users and developers can work, and work well, can be found at Canonical, the company that manages the Ubuntu distribution. Their home page is informative, easy to understand, and not cluttered by jargon. The download page is clearly located, as are directions for installing the software. There are some hiccups though, like the 64-bit versions being labeled as “AMD” despite being able to run on Intel hardware, which is a needless holdover from a forgotten time when 32-bit processors were the norm. Nonetheless, it’s a great example of how smooth a Linux distribution can be when a group of people who understand people’s needs and wants act as a Tom Smykowski-like layer between the creators of the software and its users.

The Problem is Choice

Part of the problem, too, is that Linux and most of its associated software are free and open source. What is often a strength for the quality, flexibility, and customizability of the software becomes a weakness when there’s no revenue coming in to fund a marketing group that could address this core communication issue between potential future users and the creators of the software. Canonical, Red Hat, SUSE and others have all had varying degrees of success, but this illustrates another problem: the splintered nature of open-source software causes fragmentation not just in the software itself but also in the resources behind it.

Imagine if there were hundreds of different versions of macOS, and every Apple user had to learn about them and then decide which one best fit their needs. Instead, Apple has maintained its unity and is all the better for it, from a user’s point of view. Apple also has an annual operating budget of $71 billion compared to Canonical’s $6.2 million, which surely doesn’t hurt either and further cements the point that marketing (and budget size) matters.

[Image: “Penguins” by TomaLaPlaya]

Now, I am making a few assumptions here, namely that “the Linux community” is a monolithic bloc rather than a loose confederation of people with specific, often unrelated, interests within the computing world. There is no single point of contact for all things Linux-related, which makes it difficult to generalize about the community as a whole. For that matter, there is no single “goal” of the Linux community, and nobody in it may even care that it holds only a 1–2% market share in the personal computing arena.

As an electrical engineer and someone who occasionally has difficulty with pointers when stumbling through code, I am admittedly on the outskirts of the community as a whole, but this critique comes from a place of respect and admiration for everyone who has made it possible for me to use free software, even if I have to work hard to figure things out sometimes. I have been using Linux exclusively since I ditched XP for 5.10 Breezy Badger and would love to live in a world where I’m not forced into the corporate hellscape of a Windows environment every day for no other reason than most people already know how to use Windows.

With a cohesive marketing strategy, I think this could become a reality, but it won’t happen through passionate essays on “free as in freedom” or the proper way to pronounce “GNU” or the benefits of using Gentoo instead of Arch. It’ll only come if someone can unify all the splintered groups around a cohesive, simple message and market it to the public. We need someone who can turn something like a “Jump to Conclusions Mat” into a million dollars.

265 thoughts on “Linux’s Marketing Problem”

      1. Spot on 🎯
        Funny how every comment below arguing with the article reaffirms it.
        I’ll argue every day of the week that the Nokia N95 was better than the iPhone 1 …interface isn’t everything… but Linux (without a complete OS/GUI replacement like Android) is unsuitable for >95% of the population. Telling them to learn to use it is missing the point

        1. It was suitable for my 90+ year old grandparents to read and write emails and use their banking websites… and they were not technology-aware people at all.

          So hell no: give it a nice interface (like Ubuntu, for example) and it is way easier than the Mac stuff, let alone the Windows b***sh*t (pay for an antivirus? get popups every day from pre-installed crapware trying to sell you unwanted stuff? I’m fed up with explaining things like “we changed the mobo, so it’s normal that it says you’re a pirate”).

          1. Agreed. I set my mom up with linux on her old macbook pro, which apple no longer supports or updates – even though it’s still a reasonably capable rig in terms of hardware specs. My mom knows nothing of advanced computer use, and uses linux happily, even growing to prefer it to OSX. The problem is installing it and setting everything up to be user friendly. My mom is fortunate enough to have me to go in and set everything up so that anything she might need is just a desktop icon click away… and once that’s all set, linux just runs like a fox. But if it was up to her to install and configure everything, she’d be at the apple store shelling out a couple grand for a new, soon-to-be-unsupported macbook, and her old macbook would be a paperweight.

            Point is – linux is clearly the best OS for a lot of reasons, but lacks any distro that holds your hand through the process. They say OSX is more user friendly than linux… but OSX is built on UNIX, which is almost identical to Linux. The difference is that OSX, which is basically a desktop environment that runs on UNIX, was designed by a well-oiled machine to hide the inner workings away from the average person, offering standardized and slick UI apps to configure things that a linux user would likely need to get their hands dirty under the hood to accomplish… or sort through a countless number of options for distros, desktop environments etc.

            The amount of choices you have in linux is overwhelming for the common person, who just wants to turn on their computer and have it all set up and working straight away. Couple that with all of the aggressive marketing and commercial third-party development that favors corporate platforms like MS or OSX for the obvious financial reasons, where linux is a community of ‘do it yourselfers,’ and it’s a no-brainer why a relatively small percentage of people use it, even though it’s technically a better system.

            If Linux wants to dominate the desktop world, they just need to offer a distro that makes all of the desktop environment decisions and has wine set up to run windows apps with little to no effort from the user, then market that distro so people know it exists. They do that, they’ll rule the world.

    1. Hijacking the first post…

      Linux doesn’t have a lack of marketing problem. Marketing is the problem of Linux.

      It’s a system built by specialists and experts to serve specialist needs in the industry and IT. It is not a suitable system for desktop users from the standpoint of system philosophy, construction, or function.

      The problem is the people who aren’t quite developers, and aren’t quite regular users either, who believe that by pitching Linux as the next best thing since sliced bread to ordinary users, they will either make the users see the light and convert to the philosophies and standards of Linux or OSS, abandoning their casual consumerist ways and becoming just as nerdy as they are, OR that getting enough people to use Linux will make the actual developers of Linux bend over backwards to accommodate these users – without actually being paid to do so.

      Neither is going to happen. Linux for the desktop is a square peg in a round hole. Sometimes the square is small enough to fall through, or the user simply accepts the fact that they have to hammer the corners off and make dents in the round hole – but the idea itself is stupid from the beginning.

      1. Linux is actually Unix, indeed it’s taken over the server market from Unix.

        Unix was developed as a next step, and to legitimize the work it was promoted as a means to handle patent processing at Bell. So it got practical use among non-computer types.

        Some of its design is inherently secure, and some of that has been copied into later operating systems.

        It was written about as the next best thing to sliced bread in the early eighties.

        GNU and Linux came about because some wanted that, but Unix was so expensive. And then Linux overtook Unix, so you barely hear about Unix. But it is the same operating system, robust because of its design, and because it’s had fifty years to become better. (Windows may have stabilized, but I gather for a time they started from scratch with a few iterations, rather than building on something good.)

        It’s a different type of operating system, but not lesser.

        Those who feel it needs to bend to their vision, usually colored by Windows, would ruin Linux by making it something else.

        A great thing about Linux is that it’s in pieces. The kernel is separate from everything else, so you can run a command line or a GUI on top, or write your own interface to do what you want. My TV set has the Linux kernel inside, then some software that handles TV functions runs on top.

        That’s some of the reliability. A utility or app can crash, if written badly, and usually the operating system can keep running. No blue screen of death, no regular rebooting.

        Michael

        1. Linux isn’t really UNIX. It is POSIX commands with the executables rewritten from the ground up, but many of the Microsoft command line utilities are also POSIX, even down to the command line arguments. There are significant differences between the structure of UNIX and Linux, but because of the open source aspect of Linux and the profit driven development of *both* Windows and UNIX, the more active open development community has advanced Linux to outpace both of the others. Profit driven software development moves only when there are significant deficiencies that create business risk or an upgrade would sell more software. Otherwise stagnant IMHO.

          1. Windows hasn’t maintained its sham POSIX layer for nearly two decades. They only added it so that they could win a US government contract, and even then it was hardly functional. As far as I know, it’s no longer an officially supported component of Windows.

            And, no, Linux is not “POSIX commands with the executables rewritten from the ground up”. Linux is a KERNEL, not the commands. The Linux kernel aspired to POSIX compliance, and was written to support the UNIX ABI, with syscall compatibility in mind. It has since eclipsed any of the many UNIX variants that inspired it.

            The Linux COMMAND-LINE is based partly on GNU, which WAS a “rewritten from the ground up” implementation of the semantics of most traditional UNIX command-line utilities, but did not have a viable kernel at the time – GNU’s “native” kernel is called the HURD, and has been in constant development for about 35 years, but never reached stability or attained a significant user base.

          2. Yeah, I’d say that’s about right. Pretty good summary. Still, as one who was on the Solaris annual thousand dollar seat plans and gave up many years ago on Windows for serious computing because of stability, I think Linux was a true gift to the future of computing. Thank you Mr. Torvalds and the folks at MIT artificial intelligence lab as well as all the contributors to the current code. I contributed to code changes in an IP address manager a few years ago and it was very interesting to see the very high level of quality and integrity of the process.

          3. My main point to the previous poster was that Linux wasn’t randomly designed by hobbyists, but based on Unix, which has been around for fifty years. The philosophy didn’t rise up in the early nineties with Linux. Unix was created by “computer professionals” and was desired by many in the early eighties at least.

            And aside from design, Linux isn’t just “hackers” as too many here want to portray it. Some people get paid for key parts, and some must have “computer science” background.

            Yes, anyone can write utilities and even apps, but that used to be common with commercial operating systems. But since the Linux kernel is strong, a badly written app won’t take much down with it. I’ve paid money for commercial OS apps that weren’t great, or in one case had a fatal bug, and that happened more than once.

            And since it’s open source, people can dig in and find and fix things. Not everyone, but it beats that time in 1984 when I wanted an assembler listing with a margin, so I had to disassemble code to find what to patch, then write a patch, and find a place to put it. Then legally I couldn’t help others by passing the patch around (the license said I wasn’t allowed to disassemble the code).

            I did patch a small Linux app a few years ago. It wasn’t a bug, just a bit that the author admitted was lacking. I’ve barely used C but figured out what was needed and where to put it. Anyone with some skill can do that, and open source means one doesn’t have to start from scratch.

            I wasn’t implying that Linux copied Unix code, but I always thought it was very much about being Unix. And while Linux has advanced in thirty years, it is still functionally very much like Unix. But even if internally the kernel is different, Linux follows Unix philosophy, so again it’s not a random design by “hackers”.

            Michael

          4. Linux wasn’t POSIX compliant from the get-go; it got worked in around the ’96–’97 timeframe on most distros, though there were some claiming full POSIX compliance earlier.

          5. >”My main point to the previous poster was that Linux wasn’t randomly designed by hobbyists”

            But nobody said so.

            You’re just ranting past the point. Linux as an ecosystem is more business-to-business and developer-to-developer than a system built for users. The structure and the particular solutions are geared towards system administrators and specialist applications, and few of the people involved really care whether it’s useful or comfortable for regular users, because it is not in their interest to care about it.

            When engineers of any kind make tools for themselves, they know them inside out. They know which button to press and which wires you shouldn’t cross. They become so familiar with the systems they’ve built that they simply take it for granted, and are often genuinely surprised that somebody else doesn’t see how brilliant or simple it is. But that’s just the point. It isn’t brilliant or simple – that’s just the mindset you’re in. Everything is easy once you already know how to do it, and it’s exactly because you know how to do it this way that everything else seems worse.

            There’s a tremendous amount of work that needs to be done to sanitize any system for the “regular guy” – to have sane defaults and a nice interface – and to work with the OEMs and other hardware and software vendors in the field to ensure that things operate together more or less seamlessly. If you’re only creating solutions for your own field, none of that matters. As long as the devices and software -you- need are there, the rest of it can go rot for all you care.

            And that’s where Linux is. There are all these people adapting it for their own use, and almost nobody who’s looking at the big picture and trying to make it into something that everyone could use. Some are, but they lack the resources to do anything. There are many more people who are simply trying to market it as such anyway. Well, because these “more people” aren’t developers or software engineers but regular users, they can’t do anything about it either. They’re at the mercy of the actual developers, who don’t care about the users because nobody’s paying them to care.

            For example, if my chosen distro doesn’t carry the software I need in its repository, I’m suddenly having to fight the system just to install software. If the kernel doesn’t include drivers for the hardware I want to use, I’m having to fight the system just to find and install the correct drivers. I’m really up against the maintainers and the “community” who feel that my individual needs aren’t important enough, because they’ve really built the system for themselves and the software and hardware they want to use. They don’t bother to include things that somebody else would like to have because that’s just extra work for no pay. This is not a user-friendly system.

          6. You can contrast it with other systems like Android, OSX, Windows, where the point of the operating system is to sell it to as many people as possible, which means to build it into a platform that others can use with as little effort as you can manage.

            That means you aim for stability of the interfaces and fewer possible permutations – ideally you’d have just one default configuration that everyone uses for all eternity. That way you minimize the effort one needs to spend in order to target or use your platform. It’s like standardizing to the Metric system instead of having twenty two different inches in different countries.

            Of course, this is literally the antithesis of what Linux is about. This is why Linux is never going to be the desktop OS for regular users.

      2. Every time I see these comments I wonder when you actually used Linux as a general purpose OS, if ever.

        As the article states, modern Linux is not difficult to use; an easy-to-learn graphical interface was achieved a long time ago. Easy installers exist, though many end users never install operating systems, so that is still a barrier to entry.

        Your post reads as someone who has only ever interacted with Linux servers or embedded devices and thinks you have to manually enter IRQ settings or back and front porch configuration for X; those days are long gone. And all that is really needed is marketing.

        Currently any child can install Linux, potentially even more readily than Windows. The difficulty is roughly the same as installing OSX only the driver support is better. Installing software from a repository is as easy as using any app store, and for software that isn’t listed you can download installers from a web page like the other OSs.

        If we can successfully say the above things and demonstrate them users will come. Switching operating systems is always rough though, so the community needs to improve how welcome those new users feel as well as how readily they can find solutions to issues.

        1. Yes, it is very easy to install if you happen to have the magical hardware that works with your particular LiveCD, and/or you don’t demand much of your computer beyond the point of “turns on, shows a picture, makes a sound.”

          Linux and Linux distributions as an ecosystem are a whole clusterfxxx where, if something doesn’t work, it really doesn’t work, and you’re up the creek in no time. It’s an expert system that is designed to be hackable like a bunch of meccano – so it can run a server or a supercomputer, but when it comes to consumer hardware it really really just doesn’t work all that well and you need to be an expert yourself to even understand why.

          1. Translation… “I am a gamer and making my video card work is hard”

            Yah, sorry about that. What would you like Linux devs to do to make that better for you? It’s mostly a decision of the video accelerator industry.

            But pretty much everything else just works. Look at just how many PCs offices buy which include the Microsoft tax just to use Outlook plus the most basic parts of MS Word.

          2. I rarely have to resort to special measures these days to get Debian, Ubuntu or even CentOS/RedHat to work correctly on just about any hardware – server, desktop or laptop. Usually, the install is completed in about 10 minutes, after answering a few mundane details like what you want to call your computer and what type of system it will be – server, desktop or whatever. I’m not even exaggerating. If you install Debian with a GUI, you will most likely not have to install anything extra afterwards and it will boot up with your graphics, sound, network, wifi and kitchen sink working properly FIRST TIME. No special commands, incantations or goats sacrificed. I’ve done this hundreds of times, and it literally “just works”.

            Try that with windows.

          3. If Linux is hackable like a bunch of Meccano, then that makes it more suitable to a diversity of uses and systems and architectures and CPUs and weird hardware, not less. If it were intrinsically for servers and supercomputers, it could more easily be monolithic and static and unhackable, and it wouldn’t matter because there’s less diversity (and more money) in those ecosystems.

            I find the situation quite the opposite as what you describe. If something doesn’t work under Linux there’s a lot more information that can be had about it, a lot of people motivated and most importantly _empowered_ to make it work (because it _should_ work, dammit), and the ultimate recourse of figuring it out and fixing it yourself is always a possibility, which simply isn’t an option for proprietary systems. When something doesn’t work under MS-Windows, or MacOS, it just doesn’t work. Period.

            Fortunately, most things that most people use most of the time are well-supported and work well. If you’re just a consumer, and Microsoft or Apple told you that your piece of 10 year old hardware was no longer ‘supported’ even though it was still perfectly functional and serviceable, you’d shrug and buy a new one. Nothing to be done about it. But with Linux, there’s a pretty fair chance that somebody who knows just enough and cares just enough has updated that driver to keep it working just a little longer. And if not, then you’re not in any worse shape than if you were using a proprietary OS, and you don’t have to pay a premium for the privilege of being shamed by a giant corporation for not being a servile enough consumer.

          4. >” then that makes it more suitable to a diversity of uses”

            Then why aren’t all appliances you buy at the hardware store simply built out of meccano, or other similar things like standard aluminum extrusion? After all, if it’s more hackable like meccano, then it must be suitable for more uses.

            Not really.

            Also, if something doesn’t work under linux, there’s generally a lot more information about WHY it doesn’t work, but often very few solutions for how to make it work. Partial solutions are a dime a dozen, but I’m not happy about my printer or my speakers “half-working” – I want them working like I paid for them to work.

          5. >”It’s mostly a decision of the video accelerator industry.”

            …to stand back and wait to see whether the Linux community would finally settle down and provide a proper software delivery chain and stable standardized interfaces, so they wouldn’t have to target 40+ slightly different variations of the same platform that all want to own the drivers for themselves rather than let the industry support their own products.

          6. I’ve used Ubuntu and Linux Mint exclusively for the last few years (started about a year before Microsoft ‘killed off’ XP). Seeing most of the comments just shows that ‘the nerds’ would rather argue amongst themselves about the basic system and where it started rather than try to be more positive about the ‘LINUX’ experience. Win 10 STILL has BSODs so I completely gave up on it. It may have the largest market share but it’s really a pretty crap OS and, in my opinion, has gotten worse. It took about 14 yrs to get XP reliable enough that you didn’t need to re-install every few months, and the best anyone can come up with on Win10 is to re-install after an update screws your system. With all the money they have, you would think they could actually make a stable OS? On my admittedly older laptop and desktop I have had zero issues with hardware, wireless and even Nvidia graphics.

      3. I disagree, Linux is not some specialty system. Yes it was developed by special interests, but that has brought a great deal of flexibility in its design.
        In my opinion, Linux is already perfectly suitable for desktop use. It has its hiccups, but most of them are minor and not at all related to its design or intended use. The real problem lies with lack of support from the companies that consumers typically deal with. As Brian mentioned in his article, Android is wildly popular. Is that somehow in spite of Linux not being suitable, or is it the result of some marketing effort by a large company?
        So far my experience with using Linux on the desktop has come in the form of Ubuntu and Mint. I have installed it for a handful of incredibly computer illiterate friends and family and it works just fine for them. Drivers were the most annoying problem I encountered with some really old graphics cards and some Brother printers, which really wasn’t that bad to work around. Neither of those have anything to do with Linux not being suitable for the desktop. But they are a result of the lack of support from those companies. Companies choose not to put much effort into supporting Linux because it doesn’t have many desktop users, which is in no small part because of marketing.
        I think your comment is a perfect example of what Brian was talking about. Let’s say for example that someone just wants to use their computer to browse the web, and occasionally print something. Would you tell that person that Linux is a specialty system, that it is not intended for that use case, and that the idea of using it was stupid from the beginning? The reality is it already works just fine for most people.

        1. >The real problem lies with lack of support from the companies that consumers typically deal with.

          Which is an issue of Linux being hostile to such companies by actively resisting their ways of doing business. You have questions like: why can’t I have binary drivers that interoperate between minor kernel revisions? Because there’s no stable ABI for them, because the devs are demanding you give out the source code rather than the driver itself. The OEMs don’t want to do that, so it’s just people banging their heads into a wall because of idealism and politics.

          >As Brian mentioned in his article, Android is wildly popular. Is that somehow in spite of Linux not being suitable, or is it the result of some marketing effort by a large company?

          It’s rather because Google abstracts everything the user sees away from ever having to deal with Linux at all, and standardizes the base system to the point that equipment manufacturers are actually able to target their hardware for the platform. Other than that, Android is actually pretty crap.

          Like if I have a 32 GB SD card in my phone, why can’t I install this software? Why am I constantly getting full on the overstuffed internal flash? Oh, because some files just have to exist within the rigid hierarchy of the filesystem, because that’s the way Richard Stallman intended it and he saw it was good.

          1. Though on a real linux system you can symlink different parts of the root filesystem to directories on different drives. I had a machine set up like that back in the day when hard drive space was expensive, so I had it spread over a 40MB and two 120MB drives.
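
            The idea, roughly, is to move a directory tree onto the second drive and leave a symlink behind so the rest of the system never notices. A minimal sketch of the technique (the mount point and directory below are made up for illustration, not the original setup):

              # assume the second drive is already mounted at /mnt/disk2
              sudo rsync -a /usr/share/ /mnt/disk2/usr-share/    # copy the tree to the other drive
              sudo rm -rf /usr/share                             # remove the original, only after verifying the copy
              sudo ln -s /mnt/disk2/usr-share /usr/share         # leave a symlink in its place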

          2. But that’s just a band-aid on the problem.

            The rigid hierarchy of the filesystem was designed for a different purpose. The insistence on doing things “correctly”, e.g. having all shared libraries, splitting things into this and that, had benefits in disk and memory usage, and the ability to update libraries for all software at the same time – a feature which was nifty to have in a server application back in the 90’s when system resources were at a premium – but at the same time this makes software less portable and easier to break due to various differences in which libraries and provisions are in use.

            The very first major question I ran into when I had my first encounters with Linux was, “How do I download software?”. Well, back then, if it didn’t come with the stack of diskettes or on the CD, there really was no easy way to do it. You had to find the correct dependencies and download them all, or you could compile from source using the libraries that were present, if you had them all. A little bit later, people “solved” this issue by introducing software repositories and automating package management.

            But the whole distribution model using software repositories was just another band-aid solution that simply hid the main issue. For a software vendor, deploying applications for Linux in general is still a nightmare. It really can’t be done in any reasonable sense unless you target a very specific niche of customers. You have to rely on the distributions to include your software, which means there’s a third party between you and your customers who really decides whether the software is even available in the first place.

            In this sense, every Linux distribution is its own little kingdom with walls put up against outsiders, and the whole spiel about “freedom of choice” is just more division and more incompatibility – which drives away the software vendors, which drives away the users.

      4. > It’s a system built by specialists and experts to serve specialist
        > needs in the industry and IT.

        Close, but only close. Linux is built by experts mostly to serve themselves.

        For technical advantages that’s quite good, because these experts want the best and eventually get the best. But there’s (almost) nobody who works for these non-technical people, because these non-technical people can’t participate in the GNU/Linux ecosystem due to the lack of, well, technical skills.

        There are a few exceptions, though. One of them is GNOME. Also distributions like Ubuntu or Mint or Budgie. These still want to serve themselves, but they either have a business case for serving non-skilled people or they have high aesthetic expectations for the user interface.

  1. One of the things that appeals most to me about running Linux rather than MacOS or Windows is that it gives me a break from being limited by what end-users find intuitive; it’s software written for hackers by hackers, and there is no need to compromise on functionality or flexibility to accommodate non-techie users.

    In theory there is nothing to say that such a configurable system can’t be given a user interface that end users can live with, but it seems entirely possible that if that interface is baked in to the level where you MUST use the point-and-click way rather than the command line to do XYZ, developers and geeks will find it less appealing.

    It might be a better use of the communication / marketing effort to give end-users the basics they need to grok bash and python and basic command line utilities, as well as the basics of the UNIX philosophy where each tool does just one thing (and does it well) but can be chained and composed together with pipes and scripts etc., so that instead of staying in the GUI rut, where options are limited by screen space, they can learn command line options and hotkeys and script basics to become power users (and maybe, just maybe, eventually become developers by accident).
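
    To make the “small tools chained with pipes” idea concrete, a typical one-liner composes several single-purpose utilities into a job none of them can do alone. The log file name here is invented purely for the example:

      # show the ten most frequent lines containing ERROR in a log
      grep ERROR app.log | sort | uniq -c | sort -rn | head -n 10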

    1. The short answer is, no. The longer answer is, rejecting convenience is a very hard sell. The very reason most users pay thousands of dollars for hardware and software is to make the tasks *they* want to do as easy as possible. If they still have to do a lot of the grunt work, what’s the point? Learning markup languages and running documents through a typesetting processor might be more powerful, but it’s far more understandable to present the user with a sheet of paper onto which they can type, point to what they typed, and tell the computer, “Make it bigger. Bold. Little smaller. Great!”

      1. Convenience is having a shell I can trust. I can live with an optional GUI and maybe will even use it sometimes. But I distrust every action I can only do by pushing a mouse around.
        Why does everyone accept needing a driver’s license, but expect to power on a computer and get along with it within seconds, despite it being thousands of times more complex than driving?
        Learn about your tools (chainsaw, shell, axe, compiler, …) and you’ll have superpowers!
        No learn – no complain!

        1. There is a small point here. Computers have transformed human life and save us countless hours, yet the majority of users refuse to put in even a few minutes of maintenance and training. Even applying basic updates didn’t catch on until people started losing money.

          1. You imply that users fail to apply updates due to ignorance/laziness, but in some cases we avoid updates because we’ve had too many painful experiences where updates broke obscure functionality that we depend on, and vendors are making it harder and harder to downgrade back to a working state.

        2. Everyone accepts that you need a license to operate an automobile because even the smallest car is basically a 2000 lbs whistling missile of death and a computer is not.

          Operating a car recklessly routinely leads to death or maiming, whereas a recklessly operated computer routinely leads to little more than wasted time.

        3. Your comment perfectly illustrates Bryan’s point. This is not about *your* needs as a typical geek. It’s about the typical non-geek, for whom the computer is just a means to an end. You cannot and should not expect all users to be power users or even to care to become power users. Yet the stereotypical nerd response always boils down to a smug “Pshaw! All you need to do to succeed is put in more effort. You shall be enlightened!”. The ordinary user doesn’t care about becoming enlightened. They just want to get a job done in order to move on to more interesting things. Sadly many geeks just can’t seem to grasp the reality that computers are just a utilitarian tool for most people.

          1. Good point. For most users the most important thing about their computer is the collection of personal files stored on it. Think about storage device health, and keep those valuable files on a separate, backed-up partition; then it’s easy to change the computer and/or the OS that’s used to operate on those files.

      2. “The very reason most users pay thousands of dollars for hardware and software is to make the tasks *they* want to do as easy as possible.”

        Yes, my wife is an Apple fan; Win8 and Win10 made her hate Windows.
        If she has a problem with her iPhone, iPad, or iLaptop (whatever it is called): Apple support is just a phone call away, and they treat her like a human, and help her resolve the problem.

        1. Yes, that’s exactly why I struggle with Linux – AND it’s so damn hard to install applications.

          Mac OS wins the installation race hands down. Put in a CD or double-click a disk image, then drag the application to the Applications folder (or a shortcut).

          That’s 90% of installs not from the App Store

          Contrast that to Linux or Windows.

          I get that there are people who like to know every last detail of the install files – but I don’t.

          But given the market share of Windows, it’s apparent most people don’t care about ease of use.

          Mind you, under Tim Cook he is doing everything he can to make Apple a has-been.

          1. It’s not that people don’t care about ease of use, but that there is an appropriate level between “build it yourself out of sheet steel”, and, “pay $10 because you have no option to do it yourself”.

          2. Sorry, I clicked “report comment” by mistake. Hope it doesn’t affect your comment!

            This might depend on the particular application, but for most things I want to use, Arch and Debian are faster and MUCH more pleasant for installing apps. I’ll take sudo pacman -S or sudo apt-get install over fumbling around an “App store” any day of the week.

    2. It’s people like you, who want to force end users into being another smelly geek enmeshed in technical minutiae, who ensure Linux will remain the OS of basement dwellers and IT admins.

      You just don’t get that 99% of end users don’t need what you recommend, and it’s a major turn-off for normies who just want to get the job done and go home.

      1. The point, though, is that although it should be intuitive for the end user, that should not limit others. Even if that means an option like “expert mode” or something, people who want to use the command line for everything, for example, should be able to do so.

        1. The problem is that once you start crossing wires under the hood, the nice buttons and the UX meant for the regular user breaks down and no longer works correctly, so the expert user has to ditch the “normie features” in order to do what they want, and the non-expert user cannot use any of the expert features because it would break the rest of it.

          It is like trying to design a car that works as a farm tractor AND seats five and gets 50 MPG on the highway. Can’t be done.

          1. The problem arises when people who don’t know what they’re doing start fiddling with those “expert mode” settings based on some nonsense they read on the internet.

            For example, the top three results from a quick search:

            10 Super Cool Ways to Make Windows 10 Run Faster like Bolt
            10 easy ways to speed up Windows 10
            19 Tips & Tricks To Speed Up Windows 10 And Make It Faster

            Will following these ‘guides’ make their system run faster? – I doubt it. Performance tuning is a little more complicated than that and performance enhancing drugs don’t work on computers.

            Is there a chance they’ll break something and need to call support? – Probably. It would depend on how much they think they know.

            PS: Oh… Bother. The comment system seems to be playing up again. Probably this end as usual :)

          2. The fundamental problem is that there shouldn’t be two ways to do one thing.

            If you have a plain-text configuration file with a specific structure and syntax, either you write it yourself or the computer writes it for you. If you mess with the file manually, the parser that makes the magic GUI button change a setting will either fail, or over-write and break your custom edits to the file. Maintaining coherence becomes hard and things break down too easily when you’re trying to do both.

          3. The car that works as a farm tractor, seats 5, and gets 50mpg is called a Lada Niva Diesel. Well, not sure if it does 50mpg but it definitely could if it were in production with a modern diesel engine, possibly not the 1980s non-turbo Peugeot unit.

          4. Another car that could do it was the Ford Model T.

            But even the Niva doesn’t really go across fields like a tractor does. It’s merely possible if you absolutely must.

          5. By modern standards, the Ford Model T is pathetic: aerodynamically horrid and incapable of seating 5 people safely. Despite the car’s weighing only 1200 pounds, its primitive low compression engine would typically deliver 15 mpg.

      2. Is that why Microsoft randomly shuffles stuff around in control panel with every new release of Windows? So that even a Windows admin with 30 years of experience can’t find it?

        Meanwhile, a Unix user from 30 years ago would not have much problem using a Linux box.

          1. No – you don’t need the control panel because all the settings are listed hierarchically in the group policy editor.

            After all, both of them just poke bits in the registry.

        1. There is truth here too. Don’t imagine that the CLI is inherently difficult. It may not be intuitive, but it is static and simple. This means that the first line of support for most users (the first result on Google, the most tech-savvy person in the room) is incredibly effective when using the CLI.

          People that know nothing about computers take to linux surprisingly well. I’ve lived with a few such people who have commented on how easy it is to solve a “problem” using trustworthy forums and shell commands. It’s the simplicity of knowing exactly what keys to press, instead of being told to go into settings to find a switch that was moved into the control panel or broken down into a separate radio button.

          1. I’ve noticed that Mint is a LOT easier to fix than anything I’ve used before. Someone on a forum will usually give you a line to copy and paste into a terminal and there, it’s done. No need to give some stranger remote access to your computer, or to print out reams of instructions that have to be manually entered without any mistakes with multiple re-boots in between entries (or to search and find the problem has been known about but not fixed for 10+ yrs – Microsoft). The most common advice for a Windows problem seems to be ‘nuke it and start over’.

        2. Why does the GIMP GUI (unless it’s been given a drastic update since I last tried it) have just one menu button with everything else cascaded from it, with unrelated functions grouped together and functions that should be on the same submenu widely separated in different submenus?

          My guess is that whoever made that menu just grouped functions by contributor to the source instead of giving any thought at all to functional groups, or putting major categories like File, Edit, Filters etc. on a menu bar.

          The GIMPshop GUI tried to fix that but it still had some “WTH?” issues the last time I tried it.

          1. When was the last time you used it? I use it regularly and it has multiple menus: the expected File and Edit, along with Image, Layer, Tools and Filters. As far as I know it’s been like this since 2.8, at least that’s when I started using it.

          2. Not only does GIMP have a conventional menu bar, the same menus can be entered by right-clicking in the image window.

            Many of the menu items in GIMP can be moved from one category to another, or put in both categories, with a bit of editing. If you think “Curves” should be in “Filters” rather than “Colors”, you can do it. You can also make new categories, and several plug-in developers have done so.

            Plug-in developers choose the default UI location of their contribution, although within a category sorting is alphabetical unless forced to be otherwise.

          3. >”Many of the menu items in GIMP can be moved”

            I’d rather take sane defaults.

            When the default is jumbled up, everyone ends up making their own configuration and then when I sit on someone else’s system, I can’t find anything – or when I have to re-install, update, etc. I always have to spend an insane amount of time just getting everything back to where it was.

        3. @Miroslav – I used to think the lack of random, arbitrary and pointless reshuffling of admin settings was one of Linux’s greatest strong points.

          Then came Hal, Pulseaudio and now Systemd. The Windows Control Panel is starting to look quite sane and stable.

      3. There is real confusion about what constitutes a computer. Let’s face it, most people are really using portable terminals that have common local apps like word processing, rudimentary spreadsheets, email terminal programs and the like, and these users probably should use the de facto standard terminal environment, Windows.
        You’d be crazy to use Windows on a computer or any serious system. It wouldn’t be a month before your whole system crashed because a mouse program update popup seized control of some process, or some automated Windows program overloaded metered bandwidth, or the computer just stopped.
        There are too many forced downloads and system updates where Microsoft doesn’t ask permission for this operating system to be used for any serious computer systems, to say nothing of the architecture that is a Garden of Eden for viruses…

    3. You represent a tiny portion of the population that is willing to stare at a bash prompt and RTFM for hours on end just to do basic shit. The rest of the population wants an interface that is intuitive and gets the job done (there’s a reason smartphones are so successful). Sure, it might waste a few CPU cycles to do it, and maybe it doesn’t do it absolutely perfect, but it’s 95% of the way there and that’s good enough.

      1. Linux is not for everyone. Most people are not very smart … so let them use Photoshop to resize 800 images by hand.

        Instead of using the CLI to do it in 30 minutes or less (a rough sketch of what that looks like follows below).

        Even Microsoft is bringing CLI back. THAT should tell you something.
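
        For anyone curious what the CLI version of that job might actually look like, one common approach uses ImageMagick (assuming it is installed; the target size and file pattern here are only examples):

          mkdir -p resized
          for f in *.jpg; do
              convert "$f" -resize '1024x1024>' "resized/$f"   # shrink to fit within 1024 px, never enlarge
          done
          # or, editing in place with ImageMagick's batch tool:
          # mogrify -resize '1024x1024>' *.jpg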

        1. Microsoft never actually removed it; they just made PowerShell the default CLI.

          And please, don’t suggest bash on Windows. It was a daft idea to begin with and an even worse implementation.

        2. Incidentally, Photoshop has batch processing.

          It has a feature where you press a record button, then point and click what you want to be done to the picture, and when you press the stop button it has recorded a script that can be automatically repeated to any number of files.

          It’s far more powerful than what you could do with the usual CLI tools, because you can also automate other stuff, like adding effects and adjustments, watermarks, texts, etc. to every picture. You basically do whatever you do, and it repeats what you just did.

          1. This is why you don’t use examples of software you do not use ;) With this comment as a hint I could probably find the feature in the GUI of Photoshop (I use PaintDotNet) and use it within 15 minutes. Resizing even a single picture in the CLI… I don’t know where to start: google it and hope someone has a non-outdated tutorial on this exact use case and that the required libraries are available for my distro.

          2. You solved 1 problem … while paying several hundred bucks to Adobe. I paid $0 :)

            There are hundreds of computing problems like that, and in Windows each one of them requires shelling out $$$ to some company … that might not be there in a month or 2.

            While all the tools are there on Linux and BSDs, for free. And I can modify them as I wish.

          3. >”You solved 1 problem … while paying several hundred bucks to Adobe. ”

            I already have Photoshop, because I need to do more with pictures than just resize them. Other graphics suites also have automated tools, even for free. It’s hard to imagine a person who works with photo files and only needs a CLI program to resize them.

        3. Ever heard of batch processes? Kind of hilarious to compare one of the industry leaders in image processing to some CLI program. But hey, I’m not as smart as you, right?

          This is precisely the elitist bullshit attitude that makes me really dislike the Linux community.

      2. The last time I tried Ubuntu (Lubuntu on an older Compaq laptop, but the “Light” version was still really slow), it had no way at all through the GUI to change the date and time display format. I had to look up how to open a terminal window, what command to type in, and formatting codes with % signs etc. (illustrated after this comment). This was not an early release at all; it was after Ubuntu had been around for some years – and nobody had gotten around to making that basic task *simple*. In Windows I can right-click or double-click the time, then click Adjust Date and Time, then… adjust the date and time. A few seconds to do what took me quite a while *just to hunt down what I had to do* to make the time and date display in 12-hour and US formats in Ubuntu.

        To gain traction on the consumer desktop, a Linux distro aimed at that user market must have all the small things like that just as easy and quick to do as they are on Windows and Mac.

        On Windows, especially with the inclusion of PowerShell, one CAN do just about everything from a command line *but you don’t have to*. Same deal with Mac OS X.

        On both there are some things that IIRC can only be done from a command prompt window or PowerShell, but they’re things the vast majority of users will never ever *need* to do.
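
        For readers who have never seen them, the “% signs” mentioned above are strftime-style format codes, the same ones the standard date command understands. For example:

          date +"%I:%M %p"    # 12-hour time, e.g. "03:42 PM"
          date +"%m/%d/%Y"    # US-style date, e.g. "07/24/2020"
          date +"%H:%M"       # 24-hour time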

        1. Wow, I guess that was back when you had to buy the Linux system on CDs because the modem download speed was just sooooo slow.

          Just to throw some fat on the fire: I’m not a Windows user, but I admin several machines running Windows 10 and I must admit that MS have improved it a lot. I’d go as far as to say that they’ve taken note of developments in the Linux desktop world – virtual desktops, standard apps integrated – while taking inspiration from the smartphone’s desktop.

    4. I am a developer (In theory at least, my current job involves many other tasks besides computer work). The chances of me willingly installing something besides Kubuntu for desktop use are very low.

      I’ve read plenty about the UNIX philosophy, I see that many people love it.. and I have basically no interest in actually using any of that kind of software.

      GUI interfaces can do things text can’t. Well designed GUIs have almost nothing to memorize, which is important for programs you might only use once a week or less.

      And if you read the appropriate forums, every few months someone wrecks a drive with dd (an example of how easily that happens follows after this comment). They rarely make mistakes like that with Etcher.

      I don’t know why the command line is seen as a matter of education rather than preference. Everything I learn about minimal software makes me like the full featured versions even more.
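
      To make the dd point concrete: writing an installer image is a one-liner, but nothing stops you from pointing it at the wrong disk. The device and file names below are placeholders only:

        lsblk                                   # identify the USB stick first, e.g. /dev/sdX
        sudo dd if=installer.iso of=/dev/sdX bs=4M status=progress conv=fsync
        # one wrong letter in of= and dd will happily overwrite the system drive,
        # which is exactly the class of mistake a GUI tool like Etcher guards against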

      1. GUIs definitely have something to memorize; most users have simply already learned the basics. Copy-paste isn’t a concept that you “just know”, for example. Also, honestly, I prefer reading a manpage to, e.g., clicking through MS Outlook’s account settings (urgh).

        That said I agree that a good GUI should be the default setting. More advanced users can instead download the minimal installer image and install the UI of their choice. And some distros do it pretty well already, so I don’t know where some complaints come from. I know from experience that e.g. KDE Plasma 4 was already pretty intuitive for beginners coming from Windows, and since then Linux made leaps forward – Pop! OS, Linux Mint, Kubuntu etc. “just work” nowadays unless the hardware is simply not compatible, program installations are as easy as with a mobile appstore and updates often only require a click and password to install.

        Yes, it could be better. But I think the main issue nowadays is not the UI itself but the approachability of the whole Linux ecosystem. I can imagine beginners have a hard time knowing where to start, and of course for the average user the main barrier is the installation itself – as long as it doesn’t sit on a shelf in the shop next door, most people won’t use it.

          1. This is so domain-specific that making the statement is ridiculous. Professionals use and want what they were trained to use.

            Professionals that continue to develop themselves only need a GUI for things where it presents an advantage, which it absolutely does not in all cases.

          2. More professionals are trained using straightforward and powerful tools rather than esoteric and obtuse tools.

            When you look at the stage where people are becoming professionals, the GUI wins, and that’s the reason why these days you can find Windows running on industrial PLC controllers, point-of-sale systems, ATMs, electronic billboards, etc., even though according to the Linux aficionados that’s a stupid idea.

            When it comes to training the professionals who maintain these systems, you can get twice as much done in half the time because you don’t have to deal with the opaqueness of command line tools and trawling through config files full of obscure script. A lot more people can do it, and it’s far cheaper to train them.

            Since you’re no longer connecting to your equipment by serial port or phone line, you can actually remote desktop into the PLC and use it as if you were using your laptop – which makes troubleshooting a ton easier because you can pull all sorts of graphs and diagrams out of it immediately… etc.

    5. “It might be a better use of the communication / marketing effort to give end-users the basics they need to grok bash and python and basic command line utilities, as well as the basics of the UNIX philosophy …”

      This comment exactly illustrates the consumer-engineer clash the article talks about. This is not going to happen. It’s just not. People like my mom or cousin or friend’s wife or uncle or… who’ve only ever used computers through GUIs and who see anything beyond that as wizardry and have zero interest in anything requiring CLI work will never want to learn it. They just want a computer that works when they want to check email and watch YouTube videos and print things.

      These are folks who don’t even want to change their own oil in their cars.

      Sure, it would be great if they could learn these skills, but they won’t, and they won’t want to. For Linux to grow among “regular consumers” it will need a marketing push promising it is a system that just works, and it will need to be able to deliver on that promise.

      Sure, it might work for a few people, but, in general, trying to teach folks about command lines and Python when they just want to check Facebook and print recipes would be a huge waste of time and money.

      1. In other words, it needs a concentrated effort, a few billion dollars in development and market research, and greasing some palms in the OEM market to catch up with the competition.

      2. I agree, but don’t over generalise the customer base. They want certain things, we have to provide solutions that are good enough.

        Technical education is a good thing to push, but it’s better oriented towards technical or technically minded users, so it’s all about scope of interaction. A math professor transitioning from MS Office will want a graphical equation editor, for instance, even if they are already familiar with TeX.

        It may be good to develop marketing with a mind to this as tiers of interaction based not only on technical aptitude, but also willingness to learn. Great documents and pages that lay out simple and straightforward solutions while also providing rabbit holes to more technical methods. Just make sure those technical methods also point to simple education in each topic as well.

        The above is a big project, but honestly it’s something that the open source communities can provide.

    6. I think the problem is that there’s different types of users, and a lot of people think “intuitive” just means “easy”.
      It’s not a question of easy vs difficult. Even the geekiest of geeks benefits from good intuitive design.
      What defines intuition is things being where you would expect them to be, and behaving how you would expect them to behave. It doesn’t necessarily imply simplicity or minimalism.

      Unfortunately, what is currently being pushed is simplicity aimed at the lowest common denominator, and an equal part of visual minimalism being mis-applied in a way that makes things much less intuitive.

      Monochromatic icons everywhere. What used to be a lightbulb that turned yellow (on) or black (off) to toggle visibility of a layer becomes a dark grey icon of an eye, which turns light grey when it’s off.

      This is very _unintuitive_. To anyone who has used computers for any length of time, a black icon generally looks like an option that is turned off, and a light grey icon is a function that is unavailable/inapplicable, i.e. “greyed out”.

      It’s visually simpler, but it’s less intuitive, and unfortunately this seems to be the current trend. Hide all the scary things where even the power users won’t be able to intuitively find them, and make sure that you don’t put anything on the screen that an absolute beginner might have to think about.

      If we need to dumb things down this way to sell Linux to the masses, will it retain any of the things we like about it?

      I hate Windows because it’s so condescending in its design. When you plug in a network cable, Windows asks you if it’s a “home network”, “office network”, or “public network”. What does that even mean?

      Oh, “Something went wrong” I guess I’ll just have to wait and hope the problem magically fixes itself.

      I want an OS that tells me what I need to know, and if I don’t understand it, I’ll learn about it.

  2. The problem is not “too much choice”, it’s wanting to accommodate people who don’t care what Linux is, and frankly are very content in their lives without that knowledge. Don’t take this the wrong way, but nobody needs to know they’re using Linux.

    Look at Android. It’s Linux, but with a thick layer of Java VM and all the visual goodies to keep normal users away from scary, scary Linux. And it’s working great; Android is the most popular mobile operating system on the planet.

    Linux will get popular on desktop as soon as they make Wine work better, and somebody comes and builds an amazing desktop environment from scratch, essentially something that keeps the user away from the scary knowledge of Linux, and gives them a nice, pre-setup environment. KDE is close. GNOME is close.

    1. My approach is to “find a distribution I like and stick with it”
      Right now, that means Mint MATE. Your mileage may vary, and that’s fine.

      I strongly agree that a good Windows interface (currently, WINE) is vital for Linux acceptance. Because there are some mandatory (for a given user) apps whose manufacturers do not support anything but Windows and, to a lesser extent, MacOS. Linux needs a way to support those, and, for many users, not having that ability to run Windows programs is a deal breaker.

      I am always pleased, though, when I discover that some embedded gadget uses Linux.

    2. It’s not that it’s “scary”, it’s that it’s a pain in the ass more than anything. I remember way back when I had to recompile the kernel just to get sound going, I mean, wtf? Granted that was a long time ago and I’m sure things are WAY more polished and convenient now. But that same mentality still lingers in Linux and its user base. I think deep down Linux users want to feel like they’re special, that they spent the time to learn all the quirks and because of that they are superior to “normal” people who just want stuff to work without jumping through hoops.

      1. Hmm, someone still has to recompile the kernel to get sound going. That’s the way Linux is designed; the fact that most of the distributions support so many devices out of the box these days is just a big, big credit to the distribution maintainers.

        1. Which is the way it is, because the kernel devs refuse to provide a stable interface for third party binary drivers.

          They don’t necessarily have to roll everything into the kernel. They just want to.

      2. The people who do Linux are trying to own everything for themselves.

        One of the main problems is the lack of a stable driver API/ABI that would allow long term support by OEMs. It is deliberately not given, because the people involved actively resist closed binary drivers from OEMs. This is because of the ideology that the OEMs should instead release open source drivers that can be included with the kernel, so the hackers could hack away with the stuff all they want, or keep supporting old devices that are no longer in production.

        The OEMs however either can’t give the source code because it depends on licensed software, or they don’t want to give out the drivers because then a competitor could make exactly the same devices without spending any time or money on developing the software, which means they’re able to undercut the original in price and steal the market. That’s why they won’t give the source code, and they won’t bother supporting the hardware on Linux. It’s just too much work because the actual platform is hostile to the OEMs.

        Linux and OSS as a platform are a sort of la-la-land, a political utopia that obstinately refuses to work with the very people it then blames for all its problems.

          1. It’s rather that I wouldn’t want to use software that’s done for the purpose of hacking, just as I wouldn’t want the electrician in my house to leave wires hanging everywhere just because they might want to come in later and re-do it completely differently.

            When you build things to work like meccano, they work, look, and feel like things made out of meccano.

        1. But that is our choice and the way we want it. It’s more than just offering a stable interface, too. Internal kernel interfaces should have the freedom to evolve and improve. Furthermore, how can you guarantee a binary driver behaves properly? Proprietary software tends to be a hack job, and drivers are worse.

          Besides, there IS a stable interface, for the version you coded against ;)

          As for driver interfaces, that’s why we have usb-hid, ahci, 8250, etc. One driver for many similar devices. If we learned anything from the last 20 years, it’s that we made the right choice and are doing quite well. OEMs are getting better with their support, and things have improved globally.

          1. And there’s also the aspect of being able to see what the driver’s doing so when there’s a bug, it can be found, and the offending code fixed, rather than “I guess email intel/nvidia/ati/…’s support, because there seems to be a bug in the binary blob”, or “Don’t upgrade your kernel to fix that exploit, because company X hasn’t released a blob for that version yet.”

            Fortunately, yes, there are some nice standardizations occurring on a lot of things, and these days more often than not you can plug in a new device and it’ll work as expected.

          2. Yes, but none of that bothers me as a -user-

            I don’t care what you do with the kernel – I just want to plug in my doodad and have it work. If it doesn’t, if you break support every couple revisions or don’t fully support my devices because you insist on using “generic” drivers, then I don’t like you or your OS. It’s that simple.

            Part of the game for OEMs is product differentiation. You add a feature, you do a little more, or slightly differently, though you’re still using the same basic chip as everyone else. Linux fails miserably in these cases, because the one-size-fits-all driver does not account for product differentiation.

            The OEMs have to rely on unrelated and uninterested third parties to provide the features they want to offer to their customers, which means the Linux developers and maintainers actually decide whether my printer supports borderless printing, or whether my scanner compresses the image to JPEG before or after transferring it over USB (for faster scanning). That’s not what I want as a consumer, and that’s not what the OEMs want as businesses.

            It’s all about the developers wanting to do things their way, because the target audience is not the OEMs or the users, but themselves.

      3. A lot of that stuff has been streamlined. Most distributions are packaged with kernels that have almost everything compiled as modules, and will usually work out of the box for the average user’s needs.

        Occasionally there are still problems when a brand-new device isn’t immediately supported, but it’s gotten to the point where I rarely have to worry about a piece of hardware not working in Linux.

        It’s also been a fair while since it’s been necessary for an end user to recompile their kernel for any reason.

        I don’t think there are too many Linux users who want to gatekeep the OS with unnecessary difficulty.

        There are probably lots of us who don’t want simplification at the expense of usability.
        I don’t want things made easier for people who aren’t interested in learning how things work if it’s going to make things more difficult for me. To be frank, Windows is doing a fine job of that, and if you want it to be like Windows, why not just stick with Windows?

        I think a lot of people misinterpret this as “git good noob”, but really it’s not that.

        I want my OS to be really efficient and easy for me to use. I run a lightweight window manager, and run things either using keyboard shortcuts I’ve configured, or from a terminal window. For me that’s _way_ faster and less error prone than dragging files around with a mouse.

        Windows just doesn’t work for the way I like to do things, and if cloning their UI is what it’ll take to get new users, maybe I’m fine with not getting new users.

        That said, I believe what’s best is a middleground, where there’s tools that make a beginner-friendly front-end while keeping the back end clean and efficient for people who want to set things up that way.

        Where I see it breaking down right now is things like GTK3 with its horrible trendy minimalist icons, and ignoring compatibility with existing standards (title bars don’t behave the same as other apps, etc.).
        It may be making things more like Windows/Mac/phones, but it’s making things worse for the power user.
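
        As a side note on the “almost everything compiled as modules” point above, here is a quick sketch of how you can see what a stock distro kernel has loaded for you; snd_hda_intel is only a common example and may not match your hardware:

        ```sh
        # List currently loaded kernel modules and filter for sound drivers.
        lsmod | grep snd
        # Show details (description, parameters, license) for one of them.
        modinfo snd_hda_intel
        ```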

        1. Of course, as you know, Linux is designed in small pieces, so someone could write a GUI or whatever to ride on top of the kernel. Though that would likely confuse people as yet another option.

          The failure comes when people want things dumb, and tamper with the basic structure, rather than add something that rides on top.

    3. Yep, a polished GUI and a suite of well-oiled apps would do wonders for Linux. But progress is glacial; I thought the Linux big dogs would have had a decent GUI almost 20 years ago, but they despised making Linux accessible to mere mortals who just wanted to get their work done.

      1. More like there was no need because everyone was fine with using a CLI or a TUI. If there was no demand, it doesn’t make sense for anyone to go out and make a GUI.

        People often talk about snobbish Linux users who want to secure their own space, but in my 10 years of using Linux, I have maybe seen one or two such users. I think it’s just a stereotype without much truth.

        1. Oh believe me, they exist.

          I’ve been on the receiving end of their hysterical rants on more than one occasion for daring to ask about intended use before making a recommendation.

          Apparently, I should recommend without having to think about it :)

          1. Damn. I keep getting caught out by the formatting options. What the last line should have read was:

            Apparently, I should recommend {Insert name of distro here} without having to think about it :)

        2. Well, GUIs and apps are what sell Apple and MS products, while Linux is relegated to the back office.

          Linux users make the mistake in thinking that everyone else loves the CLI and tinker with the OS and generally just screw around with Linux.

          1. “Linux users make the mistake in thinking that everyone else loves the CLI and tinker with the OS and generally just screw around with Linux”

            Do they? Name 3 of them.

          2. Windows users make the mistake of assuming that we Linux users all use nothing but the CLI.

            It’s not really a CLI vs GUI thing at all.

            I’m primarily a linux user, and like most users, the majority of what I run are GUI apps. The difference is that for things like file management, being able to use things like wildcards and pathnames from a CLI is way faster and less error prone, so I use the right tool for the job. Also, I run a very lightweight window manager, and have shortcut keys for all my common tasks. I remember a half dozen keystrokes for my every-day programs, and I can launch them in a fraction of a second, rather than hunting around a start menu with my mouse, or even right-clicking a pinned icon on the start bar to launch another browser window.

            I just haven’t found any way to make Windows as efficient.
            Menus are slow. When I want to run something in Windows, I click the start menu, start typing, and then wait for it to search and recommend an app.

            If it’s a common app, I’d probably pin it to the bar, but even there, there’s only a half dozen programs that are 99% of my computer use. Why not just assign them to keystrokes instead?

            Microsoft could actually learn some things from some of the linux window manager conventions.

            Alt-dragging to move or resize windows, for example. Rather than carefully aiming for the couple of pixels at the edge of a window, just hold alt and click+drag with your mouse. So much quicker to manage things.

            Being able to move keyboard focus to another window without necessarily raising it. It’s not something everyone looks for, but I use it all the time.

            And all the “aero snap” stuff in windows is awful. I’m constantly either maximizing or half-screening windows apps because of those terrible features. (Thankfully you can turn them off)
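
            To make the wildcard point above concrete, here is a tiny sketch of the kind of one-line file management being described; the paths and file names are made up for illustration:

            ```sh
            # Move every 2019 log file into an archive directory in one go,
            # instead of selecting hundreds of files by hand in a file manager.
            mkdir -p ~/archive/logs-2019
            mv ~/project/logs/2019-*.log ~/archive/logs-2019/
            ```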

          3. Windows is, nowadays, mostly relegated to people who don’t know any better, and/or are too lazy to learn … Most people are like that, sadly.

            It is actually frightening to see a world in which so many sell their freedom for (in)convenience. And give away their private information through Windows 10 “user experience” spyware … and then complain with a straight face about Linux and BSDs – which they haven’t used in more than a decade, obviously.

      2. “but they despised making Linux accessible to mere mortals who just wanted to get their work done”

        I think that’s more a symptom of the underlying malaise, than a direct cause.

        The sad fact is that much of the so-called ‘open source community’ is a poisonous, disease ridden cesspit filled mostly with misogynistic sociopaths.

        Riding herd on them is why Linus looks like a deranged tyrant to the uninformed.

        1. Sometimes it’s hard to tell who’s just doing a random project, and who’s some kind of nutcase that thinks computers are the meaning of life and the point of owning one is to mess with the OS for no reason.

          Most code seems to be experimental hackery rather than a polished product, but every readme file acts like it’s the best thing ever.

        2. And if you’re a noob, don’t dare ask for help on a forum. Those tyrants will tear you to bits for daring to ask a question. “Have you even Googled” or “that’s been answered hundreds of times” without any reference were my favourite answers. Enough to drive one to tears. I tried to like Linux, but as a mate said: it’s the biggest collection of unfinished software on the planet.

      3. “but they despised making Linux accessible” – they probably thought they had made it accessible. During my initial years as a software developer I found, several times, that the way I thought, and my own insight into the systems, meant I’d design a UI differently from a project manager, who would in turn design a UI differently from an end user. I also encountered many UIs which other developers had designed which were surely faithful representations of the backend system, but in no way helpful; they required that I consult code to work out how to use them.

      4. That’s just terrible … just wanting to get the work done :)

        I am confused (that isn’t hard). KDE, Gnome, Cinnamon (as examples of GUIs) do the job nicely. Just pick one that fits your workflow. That to me is a ‘strength’ of Linux: the user gets to pick (and jump between them, as multiple GUIs can be installed on the same system). I don’t like Gnome, for example, so I get to pick… say, Cinnamon. I like LXDE for its simplicity. With Windows/Apple you are ‘stuck’ with what you’re given. As in Windows, you can pick your browser (I like Firefox), add the LibreOffice suite, and you have all that a high percentage of users need in the applications category. Easy to install and work with. These days you don’t even have to drop to the command line to manage your system, unless you want to.

        We’ve been Windows free for years now… and I haven’t missed Windows at home one bit.

        1. Exactly. We’re a Neon household here – that’s on KDE reference laptops – and we use Plasma all day. It’s far easier, more logical and easier to maintain than Windows and I for one prefer it to MacOS (not that I can afford their hardware anyway).

          By the way, there’s Linux under there somewhere. Not that either of us is aware of it most of the time.

          No viruses, no BSOD, no nag-ware, no “go away for an hour while I update”… === no stress. It just works.

    4. Agree, my mom is an 80-something artist, and she loves Linux. The best part of spending over a decade Windows-free is that when people ask if I can help them, I can say: I can’t, I don’t know, I don’t use Windows.

      1. Modified quote: The best part of using Linux is not helping other people use their computers.

        That just about sums up why Linux will never be for regular people without a corporate giant (e.g. Google) making it palatable. Linux users/developers aren’t interested in consumer goals and use. Their own interests will always come first. And the financial realities of open source will always make that so. Why spend time working on making it into something that you don’t (and maybe no one) care about, especially when not getting paid?

    5. KDE is pretty much there already, besides accessibility and a few trash features (never have I ever wanted to drag a .desktop file into something from the taskbar). The only thing missing is an app store for AppImages.

      Nothing seems as well supported as Ubuntu, but the packages are always old, and the repo doesn’t even have a decent backup app (Back In Time seems to be the only good one).

      1. I left KDE for LXDE back when they went to the ‘new and improved’ version (4?) years ago. Recently, when I needed to update to the latest Ubuntu 19.04 (now upgraded to 19.10) on a new drive on my development machine, I decided to try Kubuntu just to revisit KDE again. It really isn’t that bad. I would like Dolphin to default to ‘details’ mode instead of icon view (why is that the default? I really, really dislike that mode). Haven’t found that switch yet. Other than that, very usable. Oh, I like to put my most-used apps on the desktop and it insists they go along the top. I would like to be able to move them anywhere on my desktop. Minor thing.

        As for backup, I have scripts written to back up my data with rsync. The one GUI app that worked well was TimeShift. I actually had to use it once to restore a Linux Mint machine (upgrade gone bad) and it worked like a charm. Back to working on the previous version in no time.
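
        For readers wondering what those home-grown rsync backups typically look like, a minimal sketch; the source and destination paths are placeholders, and real scripts usually add excludes and logging:

        ```sh
        #!/bin/sh
        # Mirror the home directory to a mounted backup drive, preserving
        # permissions, ACLs and extended attributes, and deleting files at
        # the destination that no longer exist at the source.
        rsync -aAX --delete /home/user/ /mnt/backup/home/user/
        ```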

    6. I can certainly get behind the idea of improving Wine; the only reason I’ve been able to go full Linux and quit Windows is that, for the few critical Windows legacy programs I need, Wine IS good enough. For everyone else, Wine could stand to be better. As for “hiding the scary Linux”, Mint and Ubuntu both do a pretty good job of letting a user stay in GUI land except on rare occasions (mostly just during install and setup). I say this as a user who does run some terminal commands but always prefers to try a GUI way first. With the way Win 10 is going though, and MacOS too, I can see Linux being the only way for users to have a computer easily do the tasks THEY want, as opposed to the walled garden of tasks which M$ or Appl£ chooses to permit.
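
      For anyone who has not tried it, running a legacy Windows program under Wine really can be this short when it works; the package name and program path below are only an example and will differ on other systems:

      ```sh
      # Install Wine from the distro repositories (Ubuntu/Mint shown; the
      # package name can differ between releases), then run the installer.
      sudo apt install wine
      wine ~/old-software/LegacyCadTool/setup.exe
      ```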

  3. “…Canonical, the company that manages the Ubuntu distribution. Their home page is informative, easy to understand, and not cluttered by jargon.”

    I clicked on the link; here is what is written at the top of the page:

    “Ubuntu 19.10 includes enhanced K8s capabilities for edge and AI/ML, OpenStack Train live-migration extensions for easier infrastructure deployments and more. ”

    What are “enhanced K8s capabilities”?
    What is “AI/ML”?
    What is “OpenStack Train”?

    No jargon at all!!!! Great layman marketing!!

    1. This is a company that makes it clear: if you are not a Linux guru, we don’t want your business. These people are stuck on stupid.

      If Gates and Jobs were like the clown posse at Canonical there would have been no computer revolution.

    2. Those sentences are referring to the homepage of Canonical rather than Ubuntu. The Canonical homepage does have less jargon, but I suspect if you’ve ended up there you probably already know about Ubuntu and would have visited the Ubuntu homepage. I don’t think the jargon in the “what’s new section” is bad, but perhaps it should have a lower billing.

  4. I gave up on Linux in 2008. I loved using it on the corporate network, but when I quit my job I had no reason to use it at home. Cheap dedicated hardware did all the network stuff I needed, and for my main desktop machine it simply never worked. The high maintenance and uncertainty simply were not worth it, and this was true for every distro.

    What got me back in was the Raspberry Pi and Raspbian. I understand that it does not handle all use cases by a long shot, but for more and more of the projects I do, when asking which platform would be most suitable, the RPi often wins out.

    As far as I can tell, it is the only total package where you do not have to worry about either hardware or software versions, with an ecosystem that is as solid as it gets, in both hardware and software. I do not have to reinvent the wheel when it comes to maintenance and roll-out systems. I know that when it runs on my 3B+, it will be excellent on a 4, and I know that support for these dirt-cheap machines will last longer than for most mid-range consumer systems.

    If you want to give Linux a new shiny image, let the Raspberry Pi ecosystem be the poster boy.

    1. That boils down to the same reason MacOS has fewer OS problems: the OS is prepared, tested, and debugged for a small set of configurations. It doesn’t need to support everything, and knows what hardware it should expect to find.

      Since Linux (and Windows) has to support a lot of different hardware configurations, bugs and strange situations appear. To remove all of those would require much more testing (just see the total failure that is MS’s Windows 10 scheme of making users test their software).

      1. That is certainly part of it and the reason why my desktop and mobile devices tend to be Macs, but there is something different about the ecosystem and community as well with Raspbian. Personally I do not need to wade through a jungle of various technical philosophies and preferences like SystemD hate and preaching the *NIX mantras before I figure out what I need to do for what I want. There is less tribalism there which makes for a more productive environment.

  5. Agree.
    The splintered directions of different distros amaze and dismay at the same time.
    A few years ago (at CES), a vendor created a GUI in Visual Basic, designed for seniors.
    Simple desktop, auto configuration, no updates needed.
    Of course it failed, as he surely had zero advertising budget.
    Today I see a similar product advertised in AARP magazines.
    Someone thinks there is a need.
    Is it not time for the Linux mavens to step up and actually produce a usable distro?
    “Linux Desktop, 2090 for sure”

    1. The problem is that I, as a developer, find Linux perfectly easy to use. I use multiple distros at home and at work. The first paragraph of this article talks about a character who can talk both user and developer, in my experience those people are few and far between.
      Even if we had all of the “Tom Smykowskis” we would possibly need how would we get them in contact with both users and developers in the open source software world?
      One of the best “Tom Smykowskis” I’ve worked with learned how to speak user first, and took several years to learn how to speak developer despite having a technical degree. Regardless, all of his learning was hard won, and done face to face with both groups.

      1. The problem isn’t that you don’t have marketing people. The problem is that you HAVE marketing people you didn’t even recruit.

        Linux is by developers for developers and engineers for engineers – or businesses to businesses – it’s done by people who build it for themselves, for problems they need solved. It’s not for casual users and general use – like tractors aren’t meant to be driven on the highway and a Toyota Prius can’t cross a potato field. It’s special purpose and the whole philosophy of the system reflects that.

        The problem is that there is a group of people in between the professionals and the casual users, who are either politically motivated wannabe specialists – you know the type who writes Microsoft with a dollar sign – or people who just don’t want to pay for the commercial options so they’re trying their damnedest to make other people shoehorn Linux into serving their purposes instead.

        It’s this in-between group who are trying to foist Linux onto the rest of us, thinking that if they just market it to everyone and preach the gospel, then people would naturally find how great it is and learn to use it (not a chance), or the developers would bend backwards and make it usable (not a chance).

        1. The point being: if I need to be told how great something is, it’s probably not that great.

          When I’m at work trying to figure out how to do X, and I search for an industrial PC to do stuff for me, it comes with Linux because Linux is useful for that purpose. Sometimes it comes with Windows because it needs to run with a user interface, and Windows is more useful for that purpose. Why would I go against the designer’s intent and install Linux on the windows based PLC, and Windows on the Linux based PLC? That’s just asking for trouble.

          But on the desktop, there are people who insist that I should do just that. That it’s better if I install Linux on my machine; no it isn’t – half my stuff won’t work and half the software I want isn’t there. Why would you say such things that are obviously not true?

          Because you want to fake it until you make it. That’s marketing in a nutshell, and the reason why Linux does not need marketing. Where it works, there’s no question that it works and no question why you should use it. Just use it. Where it doesn’t, trying to make it look better than it actually is just makes it look worse.

    2. Look at Ubuntu’s home page. I have no idea WTF they are peddling; it’s just a bunch of geek gibberish. It’s quite clear they don’t want normies using their software, only Linux geeks.

      Linux has more than a marketing problem, it has a culture problem and is why it never went mainstream.

      1. If you have no clue what they’re trying to say, then most likely there’s something wrong with your reading and comprehension skills…

        Again, Sugar Labs awaits :).
        (Before you go full apeshit on me for the preceding remark, just know that I am both a Windows and Linux user)

  6. Awesome write up, Thanks :)

    I’m also an electrical engineer and dumped MS windows back in late 1999 for a full-time “Linux” box.

    The Open Source Initiative has tried hard to build bridges between devs, closed source software, and everyday people, but it has largely not worked. I really feel “Linux” should be smashing the desktop market, but here we are. Software like Fusion 360, Altium Designer/Protel, etc. should run natively under “Linux” like it does under MacOS.

    1. The Open Source Initiative has done no such thing. Rather, it’s been just a bunch of people whining about why closed source developers and OEMs won’t throw their businesses to the wind by giving everything away freely on a silver platter to the OSS crowd.

    2. I think the real issue is that they don’t see enough gain in business. For the most part, if you’re a linux user who needs to run software that’s windows only, you’ll grumble about it, and run it on a windows machine anyway, or possibly run it in a VM, or wine. Net result: 1 sale.

      The alternative is to spend a bit of extra dev hours to make it compatible with linux. It’s probably not as much work as many windows fanboys suggest, but it’s definitely significant. Result: happy linux user, and 1 sale.

      The unfortunate reality is that for a company whose primary motivation is selling a product, there’s likely not enough payback to make the bean-counters happy, so they’ll develop for the OS that gives them the most ROI.

      I mean, I use almost exclusively Linux, but haven’t found a good alternative for Fusion 360, so I still end up using it. Autodesk probably doesn’t care about the inconvenience of me having to run their one app on a different machine.

      1. I have a cdrom of Wordperfect for Linux, even ran it briefly years ago. So there was a time when companies released commercial software for linux, but I haven’t noticed any recently. Though maybe esoteric software for very niche purposes.

          1. It was available by itself for a while. I got it in a book; it wasn’t free. There was also a WordPerfect for Linux for Dummies which I think included a CD-ROM, but there were other ways of buying it.

            A later iteration did something like use an MSDOS version under an emulator on Linux.

          2. Right. I didn’t mean that it was only available as a package,
            but that does give a pretty good reason for them to have made it, since the two products would synergize, to use the marketing parlance.

  7. Great article and spot on in many ways. I would use the term “business entity” rather than “marketing” as people tend to equate marketing with advertising.

    What these entities do for real products is MAKE DECISIONS. Linux is full of such unresolved issues – Gnome vs KDE, systemd vs init, etc. When real money is on the line someone has to step up and commit to something.

    In the instances where Linux has been successful, there is an additional entity making those decisions. Amazon’s customers don’t get to decide what distro Amazon uses, Amazon does. Yes, your phone runs Linux, but the manufacturer of the phone has decided what distro and packages are installed.

  8. It is not like Windows is a brainless install; it too suffers from different flavors and the 32/64-bit thing. What gets users past that issue is that it comes pre-loaded. You don’t need to get involved in whether it is 32-bit or 64-bit, and most home users do not know there are different versions of the OS. They have Windows 7 or Windows 10, no idea of which edition. And installing Windows is not without its occasional issues, like the installer not seeing some SSDs. But again, all of that has been taken care of for you. You are jousting the wrong dragon by saying Linux is too connected to its engineers. The fact is that when people have issues with their Windows computers, few of them call Microsoft; most of them call the computer manufacturer. For Linux to get a bigger piece of the “normal” desktop, it needs to be pre-installed and ready to rock and roll, and the computer manufacturers need to support it much the same way they support Windows now.

    On the flip side, with Windows 7 going EOL, I think Linux has a real opportunity to snatch a few more percent of the desktop away from MS. A lot depends on what people do with their computers, though. For people who just surf the web and do email, Linux should be a pretty painless transition. When people run applications, that is where things start getting sticky, from iTunes to CAD to graphics programs. Yes, there is WINE, but it is not a given that anything will run under it. The reason I run Windows on most of my desktops is that I run a few programs that won’t run under WINE, that Linux does not have nice native versions of, and that of course have no Linux port. The lack of programs having Linux versions is no doubt because of Linux’s small market share, so getting some manufacturers on board with selling and supporting systems with Linux pre-installed would logically prod software vendors to release Linux ports of their offerings.

    1. My work computer runs Win10, but you (almost) wouldn’t know it, it has all the look and feel of Win7.
      It came to me like that (corporate dictated policy), and while I whined and complained initially (uselessly too),
      it was an easy transition.

      1. Indeed. Microsoft specifically engineered it to be an easy transition for the end users.

        The IT people who have to configure, deploy AND secure it… Well, that’s another thing entirely…

        1. I’ve gotta call BS here. As a long-time Alpha/Beta slug for M$ (since Win95), I can say they originally attempted to force users into the smartphone look. My early Beta Windows 10 was unusable. In fact, it was so early, it had yet to actually recover from sleep. As you may know, Windows 7 will ignore the recovery data if an error occurs. Not so the early 10. A sleeping dog best left alone, I say.
          So attempting to recover from sleep, the OS claims I need to restore/fix as the sleep had crashed.
          The OS then enters the black hole of recover/restore/fix.
          I let it run just to see what would happen.
          Sadly, the “FIX” erased the entire drive, leaving only one empty folder C:\windows
          Mildly annoyed, I reported the error to M$, and received zero reply..
          Oh well.

          1. heh. Programmers are incapable of letting sleeping dogs lie :)

            But seriously, you have to expect problems with beta software. All of the showstoppers should be gone by RTM but with so many different hardware/software combinations out there in the real world something is bound to slip through.

            As for the ‘smartphone look’: that was a deranged attempt by Ballmer to force his way into a market which was already dominated by others.

            God only knows how much money he actually cost the company, but I understand from “One Who Was There ™”, that the last shareholder meeting before he ‘stepped down’ was “somewhat acrimonious”.

          2. “you have to expect problems with beta software”

            True, even with the Windows 95 beta software, it was so not ready for prime time it's difficult to describe.
            I received a lifetime supply of floppies with the 95 beta on it. And a CD.
            If you recall, Windows 95 used a "Boot Floppy".
            I booted the floppy, and inserted the "Non Boot-able" CD.
            Microsoft in their wisdom neglected to insert the "Path Command" in the boot floppy.
            So DOS couldn't find the correct folders on the CD.
            I finished the install with the floppies, and after 95 was up and running, I figured out the path command was missing.
            Interesting side note, the system requirements allowed me to buy a new tower with enough power to actually run 95.
            But, the good news is, I still have all my beta software on the shelf.
            I await the Smithsonian phone call anytime.

  9. “From there it follows that the only reason that Microsoft and Apple dominate the desktop world is because they have a marketing group behind their products, which provides customers with a comfortable customer service layer”

    I don’t follow that. Out-of-the-box experience and compatibility are still my major gripes with Linux, even though I like Linux. I bought a Dell XPS 15 a couple of years ago, scrubbed the default Windows install and reinstalled it from scratch alongside Debian.
    As much as I dislike Win10, it immediately picked up all of the hardware features and linked Windows functions to the keyboard shortcuts. Debian runs, but it fails to properly idle the GPU which kills the battery, and the experience with third party GPU drivers is a pain. It fails to properly work with the keyboard shortcuts, so the brightness and volume keys at best have bizarrely coarse control or don’t work at all. The Bluetooth driver in Debian is totally unreliable and gives a uselessly vague error every time it fails.

    It doesn’t matter whose fault these are, a consumer won’t care. A casual user who doesn’t want to tinker under the hood will compare the Win10 vs Debian experience and say “Debian doesn’t work with half of this stuff, Windows did automatically”. Regardless of Win10’s flaws or MS’s approach to privacy and control, the average user will say “My BT headphones just work on Win10 but Debian just refuses to connect and I don’t know why”.

    No amount of customer service budget gets around the fact that Linux by and large was designed for people who are willing to tinker, which is not what most people want. Web servers and supercomputers are not set up by ordinary people doing ordinary consumer tasks, so Linux’s power there isn’t relevant to its potential in the consumer market. The moment the command line gets involved, most people turn off because that’s not what they want. With macOS and Windows the CLI is optional, with Android the CLI isn’t even available by default, but with Linux the CLI is largely the standard way of doing things. It’s not a mass-consumer-focussed experience.

    1. Quite so. It’s about knowing your market and adapting your products to fit that market.

      BTW:

      Debian isn’t the only one with “uselessly vague” error messages. I got a “Spool file not found” error during a recent deployment and no, it was nothing to do with the printer queue. The license manager couldn’t find the input file due to a typo in the answer file.

      1. Yep. For example, when I was updating the BIOS a while back on my motherboards, they specifically said to download ‘x’ driver in Windows before updating the BIOS. On Linux, you just update the BIOS and it ‘works’. For the past few years, Linux has worked out of the box. No drivers to install. It just works. That has been my experience, and I’ve updated all my systems to the latest AMD Ryzen (3000 series). Again, no drivers to worry about.

        1. Poe’s Law, I suppose.

          Linux doesn’t use the BIOS for anything, so it doesn’t matter what you do with it. This is a special case. Windows does, because it actually interoperates with some of the functions of your motherboard, like power saving modes, or special features provided by the hardware, which Linux simply ignores.

          1. Not a gamer, but looking at Steam stats, there are over 9000 games released on that platform as of 2018. That’s not counting the traditional arcade, board, and text games and puzzles that are out there. I occasionally will play Tux Racer for example. Surely there is something out there that would tickle a gamer’s fancy. Seems I read EA was returning to Steam as well.

          2. To be clear, that means play games that are only available for windows.

            No, you probably can’t run a lot of AAA titles from EA or ubisoft, but if you just want to play games in a general sense, over half of the games on my steam list will just work, either natively or automatically through steam’s compatibility libs. It’s certainly not as bad as it was years back, when you were limited to Tux Racer, Frozen bubble, or any of the other casual games that were knocked off.

    2. Uselessly vague error messages?

      I’m not going to say that they don’t exist in linux, but they’re few and far between.

      Windows is _far_ worse in this regard.

      “Getting windows ready for you”

      “Oops, something went wrong”

      “Windows crashed, We’re gathering some personal data to send off to microsoft, then your computer will reboot without any explanation of what actually went wrong. Cross your fingers I guess?”

  10. Why does Linux need more users?

    It provides a version of a very old operating system to those who want it. I run it because I can, it cost me very little in terms of time to get used to it. I didn’t come from Windows, so I’m not constantly comparing it or wishing for something more Windows-like. My identity isn’t tied up in numbers, so I don’t need more people using Linux to validate my use of it. It runs forever and I have no problems.

    I’ve run Linux since early June of 2001, but I wanted Unix in the early eighties and remember when Stallman wrote about GNU in 1986, wanting it even then. It took time till I got hardware to run it.

    It’s a shock to see all the negativity about Linux here. For all the “it’s not a hack” comments here, people don’t really seem to have the spirit of hacking when it comes to operating systems; they want their Windows.

    Michael

    1. My main gripe with Linux is, that if I buy $2000 worth of computer parts and slap them together, and then install Linux on it, it works no better than if I had bought the cheapest piece of junk because it literally cannot use most of the features of my GPU, my sound card, my monitor, my headset, my TV card, my printer…

      And the very idea of having to go through a “distro” or a “repository” to have easily accessible software is exactly the same problem as with the Google/Apple app-stores – it’s a gilded cage, except in the case of Linux distributions it’s a 17th century zoo with rusted iron bars. There’s nothing in there that is useful or interesting.

      I can understand someone whose computing habits are stuck in the 90’s has no trouble tolerating Linux, but making the desktop cube spin just doesn’t impress anyone. Ubuntu brown is the color of ass-dough.

      1. You’re kidding, right? On my systems, I’ve not run into your problems. I currently run a 4K monitor, HD monitors, Nvidia GPUs, and an AMD Vega with no problem on a variety of servers/desktops. Sound works out of the box, and so does the headset. Printers were very easy to install (I’ve had Epson, HP, and now Brother, both laser and EcoTank). On all the Dell laptops I’ve had, Linux worked out of the box, connected just fine to the internet, etc.

        I only update when I want updates. The only time I have to reboot is when I want to, and that is usually only on a new kernel. At work our Windoze 10 systems are rebooted every week and people think that is ‘normal’.

        Why do you need to be impressed? Use it to get real work done which it does at this address.

        1. Works for me ™

          When people say “sound works”, what they usually mean is “the default stereo output works, and I have no global equalizer.”

          I bought a 5.1 surround sound set and installed a custom APO in Windows 10. I have a global equalizer that applies to all the sounds, I can adjust each speaker individually, including effects and delays. I have absolutely no hope of doing any of that in Linux. I have tried.

          I have a Hauppauge DVT card. Last time I tried, the best I could get out of it was a green screen and a kernel panic under Linux.

          I have an Epson printer. It “works”. I have no color management under Linux and there’s no way to enable borderless printing. The integrated scanner doesn’t work.

          I have an AMD GPU. The drivers under Linux simply suck, but that’s okay since there are barely any games I could play anyhow. I haven’t tried to see if gsync or freesync would even work for the monitors. It’s not worth the effort.

          I’ve told windows I’m on a metered connection. It doesn’t update/reboot randomly after that until I tell it to.

          1. Also, screen tearing is STILL an issue in Linux in the year 2019. With different combinations of distros, GPUs, drivers, and compositors, your system may or may not be capable of doing vsync, which means video content in browsers, or even full-screen video, can have horizontal tearing and skipping.

            This is a problem that was solved when Windows XP was new.

          2. I could make similar but equally important arguments about sound in Windows vs. Linux.

            I’ve got a fairly complicated setup, and I like to route different apps to different audio outputs at the same time. Sometimes when I’m messing with sound recording stuff, or sometimes just when I want to chat with a friend on some earbuds with a mic while we’re working on a project, while I’ve got spotify or netflix running on my main speakers.

            That all works pretty nicely out of the box on Linux (I can switch apps between devices while they’re running, etc.), whereas on Windows it’s just not as configurable. I switch audio devices. Some of them move to follow my settings immediately. Others require me to restart the app. I didn’t see an easy way to move a single instance of an app’s audio to another output. IIRC, I did find a way to set some apps to default to a device, but I couldn’t have one browser window playing sounds through my speakers while another is on my headset, etc.

            Yeah, you could argue that these are niche cases, but for whatever reason, they are things that I decided would be useful to me, and which I couldn’t make windows do.

            Likewise for my video configuration. I’ve got a 2160p monitor and a pair of small 2048×1536 displays on the PC, and another 1080p monitor that I use for watching TV when I’m not sitting at the desk.

            I can configure them in linux so I have a desktop spanning the 3 monitors on the desk with a 1080p viewport mirrored to the 1080p, and switch to a straight 1080p mirrored on the 4k display and the 1080p monitor to play netflix or youtube or whatever (without windows popping up on the other monitors, etc)

            I can change these metamodes with a keystroke on my keyboard without messing with any settings in control panel, etc.

            Pretty sure there’s no good way to do that in windows, either.

            Again, it’s a niche thing, but I can make it do what I want it to without a whole lot of work, and I think that’s important, niche or not.

            Windows, OTOH, is probably easier to get doing the simplest stuff, but for something like this, I don’t think you really could. (Can you script display layouts in Windows?)
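
            As a sketch of what “scripting a display layout” can look like on the Linux side, here is the kind of xrandr one-shot that can be bound to a keyboard shortcut; the output names are hypothetical, so check yours with `xrandr -q` first:

            ```sh
            #!/bin/sh
            # "TV mode": run the 4K panel at 1080p, mirror it onto the
            # living-room display, and switch the two side monitors off.
            xrandr --output DP-0   --mode 1920x1080 \
                   --output HDMI-0 --mode 1920x1080 --same-as DP-0 \
                   --output DVI-0  --off \
                   --output DVI-1  --off
            ```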

        2. Linux sound is in a very sad state. I have a 4k monitor with speakers which I DON’T want sound to go to, and a 7+1 sound system that I DO want sound to go to. In perhaps 100 boots, the 7+1 system has worked maybe twice, and changing the setup either dynamically or with startup files doesn’t help. The sound system seldom behaves the same on consecutive boots. No 2 internet advice sites for solving this problem offer the same suggestions.

      2. If you think of it as an insult, then I’m stuck in the seventies.

        I didn’t get a computer until 1979; I didn’t have the money earlier. People got them earlier and other people were more capable, but I had a computer before most people. That’s “elite”, and too often I keep seeing things about how things should be dumbed down so more people could be involved. I didn’t influence computing, but I could comment on it rather than wait till things became dumb enough. It wasn’t just cheaper hardware and “easier” software; most people didn’t have a use for a computer until online shopping or Facebook came along.

        But of course I’m influenced by that long history; that doesn’t mean I live in the past. My needs are different, maybe even simpler than many. I’ve never been Windows-centric, so I don’t judge from the Windows viewpoint. I know there is a different way.

        I can’t afford a $2000 computer. But three years ago I bought a refurbished i7, and no problems. I don’t need fancier graphics so I’ve never bothered with the GPU in the CPU. My fifteen-year-old scanner worked, so did the monitor I found on the sidewalk and repaired, and the laser printer I got last year was running with just fifteen minutes of configuring.

        I tried Debian very briefly in 2000, then went to Slackware, often claimed to be the “hardest” distribution. I actually switched to it because I found a clearance copy of “Slackware Linux for Dummies”, which says a lot.

        In the old days we had to go to the store and buy software, either that or write it. Some of it was bad; I had one app that crashed if I did something, not obscure enough to not be a problem. Linux comes with endless utilities and apps, “out of the box”, even multiple programs to do the same thing. If there really isn’t what I need, which is often more a decision of those who put the distribution together, it’s easy to download software; beats going to the store.

        And if your hardware doesn’t work, again it’s likely because we live in a Microsoft world. Hardware caters to that, so details are hard to get, especially if there’s no money to buy such details. So Linux either can’t support it, or it takes time to reverse engineer the hardware and then write a driver. This isn’t a Linux fault; it’s the fault of hardware that wants to be secret, except when dealing with Microsoft.

        People expect things without understanding.

        Michael

  11. The Desktop is dying, proprietary vendors decided it was better to lock people in the cloud with SaaS subscriptions, or free apps spying on you.

    The Desktop has been transformed into a terminal hooked to a mainframe.

  12. Checking my back end shelf, I see I still have a box copy of “TurboLinux Workstation 4.0” (c) 1999.
    Requires 16 meg ram, 32 recommended.
    Perhaps I’ll see if it will run on modern hardware later this week.

  13. A lot of the problem is the community directing people to hard-to-use stuff. If you ask “How do I flash an SD card”, they’ll tell you to use dd, and you might never know Etcher exists and think flashing is way harder than it is.

    If you say “What distro should I use”, you’ll get 20 different answers, and some might even be security-focused things like Qubes or DIYish distros like Arch. You might never know Kubuntu et al. work just fine out of the box.

    Want to back things up? Download Back In Time. But if you ask the community, they’ll give you a ten-line rsync script and an intro to UNIX philosophy.

    Everyone treats the things that actually do just work out of the box like Windows does as something for the new users, and everyone feels the need to “graduate” to the “advanced” stuff.

    I wonder if they tell carpenters to graduate to a rusty old brace and bit for the full experience?
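
    For context, the dd incantation in question usually looks something like the sketch below; the image name is a placeholder, and /dev/sdX stands for whatever device letter the card actually gets, which is exactly where the mistakes happen:

    ```sh
    # Find the SD card first; picking the wrong device here is how people
    # wreck their drives, which is why Etcher is easier to recommend.
    lsblk
    # Then write the image to the whole card (not a partition).
    sudo dd if=image-to-flash.img of=/dev/sdX bs=4M status=progress conv=fsync
    ```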

      1. I wonder if they tell carpenters to graduate to a rusty old brace and bit for the full experience?

      Probably. Some of them do seem to feel the need to drag others into ‘their world’.

      1. Nothing’s wrong with them, and they’re actually pretty cool as living history artifacts, but they’re just not what the average person wants to use for general purpose stuff.

        A Ryobi drill/driver is an incredibly practical and easy-to-use tool. I’m sure a brace and bit has some cool features, but they also probably take far more skill, and probably more wear and tear on your body.

        1. I prefer a “cordless” drill that is ready when I need it.
          I got tired of drills that have batteries, only to find out when I need to use one that its batteries have died, either because the drill wasn’t being constantly charged, or because it was being constantly (over)charged.
          B^)

          1. Have you used any more recent stuff, like the Ryobi EverCharge chargers, or just corded drills?

            There’s something to be said for having something on hand that still works when the batteries don’t, like the old rule that redundancy shouldn’t be just different versions of the same thing.

        2. A brace and bit is more controllable if you’re doing accurate work, like clock or jewelry stuff – and it makes no noise so it’s more comfortable to use. The same points apply for fine woodworking – a battery drill just goes wherever it wants to go.

          And the point about batteries is the biggest selling point. Usually cordless drills are used so infrequently that the batteries simply die.

          The counterpoint is that it’s tedious and you need the correct drill bits to really get anything done. Any bigger job becomes a chore. With a cordless drill, you can just brute-force it when you just need a hole, and it doesn’t take you a minute of winding to drill through 2 millimeters of aluminum.

          1. That makes sense if you’re doing real precision stuff. Amazing how the older stuff always still has a place somewhere!

            Most of the time in prop building and basic shelving and stuff we try to make the construction not show at all rather than try to make it look good, so it doesn’t particularly matter if the battery drill does some funny business; we’ve already planned for that.

            Ryobi has some pretty nice EverCharge lights designed to just stay plugged in when you aren’t using them, so you could keep them from getting over-discharged that way, but lithium is way better anyway than the old NiCad.

    2. That’s an interesting point, but I’m not sure there’s a “right” answer.

      If I teach someone how to use dd, and they understand what it does, they can sit down on almost any unix-like os, and dd will work.

      Also, that’s the only way I know to do it off-hand. I expect some desktop environments have GUI tools for the job, but then I have to know about whatever desktop environment they’re using, and if they use another PC that’s running something else, that knowledge is probably not going to be useful.

      If someone asks a carpenter to teach them to make a table, do you think they should start with a trip to IKEA?

      I don’t think people would suggest using dd because it’s the “full experience”, so much as because it’s the “Linux” way of doing things, and will be re-usable knowledge, as opposed to being the Gnome way of doing things, and god help you if you sit down on a box running kde.
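
      For what it’s worth, the whole job really is a one-liner once you know it. Something like this (purely illustrative, assuming the image is distro.iso and the card shows up as /dev/sdX; check with lsblk first, because pointing dd at the wrong device will happily overwrite the wrong disk):

      $ lsblk                                      # identify the SD card, e.g. /dev/sdX
      $ sudo dd if=distro.iso of=/dev/sdX bs=4M status=progress conv=fsync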

      Yeah, it’s harder, and that certainly doesn’t help attract windows/mac/phone users to a linux desktop, but I don’t think I really want linux to “standardize” on a specific DE and set of GUI tools just for the purposes of attracting new users. It’s not that I want to keep new users away, but oversimplification to help new users (specifically, new users who aren’t willing to accept the learning curve) often comes at the expense of power users who enjoy the capabilities offered by a more DIY system where they can pick and choose a custom environment that works best for them.

      Another side of this problem, which I think thankfully has reduced in recent years, is excessive advocacy. The push to get more users running linux at any cost.

      I’m somewhat cautious when recommending linux to friends. Is it someone who will benefit from the advantages of linux, or is this someone who wants windows and just doesn’t want to pay for it?

      There are many people who will ask for “help” with Linux, and do nothing but complain about how it’s done differently than windows. These people should probably just use windows. You shouldn’t advocate linux to them, because the only way they’ll ever be happy about it is if you turn linux into a free version of windows, and I hope that’s not what people actually want to see.

      1. I think Ubuntu+KDE+systemd has done a very good job already of being a free version of Windows. Usually people asking for distro suggestions give enough info to tell if they want to learn the linux way, or the Ubuntu way.

        The current way of doing things, with separate Windowlike distros and power user distros seems to work pretty well, and I think a lot of people who want a free version of Windows really will see some benefits from using Ubuntu.

        It’s free, for one thing, and the design is unaffected by vendor lock-in, and the Discover software center is a pretty easy-to-understand app store, with AppImages available for everything else.

        It doesn’t have to affect the power users so long as the distros stay separate. There’s always going to be a little bit of controversy with things like systemd, but even then, Devuan is alive and well.

        It adds fragmentation, but do you really want us GUI devs mucking up your distros?

        We probably both want fairly similar stuff from most of the kernel and device drivers, and maybe some basic utils, so it’s not total fragmentation, but I’d rather the GUI apps be developed by people who understand the full GUI workflow and aren’t going to try to sprinkle UNIX philosophy on it, and you’re probably happy with dd as is.

  15. You likely won’t. The more recent hardware may need drivers for it, and those didn’t exist in 1999.

    That has been an issue. In the early days of Linux, compatible hardware was limited, so there were lists. As things grew, more drivers were added, but some hardware revealed no details, requiring a lot of exploring to uncover what was needed before the driver could be written. I think that’s improved, but the “binary blobs” exist because hardware companies want to be Linux compatible but don’t want to reveal details, so they release a closed-source binary driver.

    Michael

  16. I’ve got Windows machines, Macs, a FreeBSD server, and GNU/Linux systems at home.

    Macs in the OS X/macOS 10.x era (really, NeXT machines) are great, because they’re just BSD machines with a fancy GUI, working drivers, and decent engineering things (CoreAudio is still, hands down, the *best* audio layer, IMO, when doing audio engineering work.) – they earned a well-deserved reputation for being the go-to for ‘creative’ types who want to get straight to getting their work done. Whether that’s still the case after Apple pivoted to iOS, I’m not entirely sure (I haven’t bought an Apple device since my mid-2012 MBP that’s still going strong).

    I feel like this all really does miss the point, though. ‘Linux on the desktop’ is kind of a meme – for a time it was ‘whenever Duke Nukem Forever releases’. The big Linux distros (Ubuntu, Fedora, etc.) are fine for day-to-day usage, once they’re installed and set up. You sit someone down and point them at a web browser and LibreOffice and they generally know what they’re doing.

    But while Microsoft and Apple and Google all want you to use their systems so they can either sell you the upgrade treadmill or invade your privacy and profit off of what they find, Linux distros are fine going off and doing their own thing. I’m sure Canonical and Red Hat would love to have a bigger piece of the desktop pie, but by and large, the people working on the various parts of the ecosystem are doing so to scratch their own itches.

    That’s what the whole ‘open source’ thing is really about – being able to work together to scratch an itch that there’s potentially no profit in, or is being underserved by the major players (or who are seriously overcharging for what they offer). Its continued existence at all for all these years is a testament to that working just fine as-is.

    Yeah, it’d be nice to have more games and applications native (thank Valve for the big push on the former). Yeah, it’d be nice if there was more… *cohesion* between all the different moving parts (though big desktop environments like GNOME, KDE Plasma, XFCE, etc. aren’t horrible when taken as a whole), but it’s always been something for people who want the ability to tinker. To go ‘no, I don’t like the way that piece works, so I’m going to change it out’. That’s why there’s debates between vi and Emacs, or systemd and old-school init, GNOME vs KDE, etc. – because the entire setup lets you choose your path, and get something that works best for you and your needs in the end.

    Any effort to ‘unify’ these ecosystems is going to get pushback, because there will always be people who prefer the ‘other’ way of doing things. Open source means if they care enough, they can keep hacking away to maintain those ‘other’ paths, and the fragmentation persists. That’s just the nature of the beast. It’s not for everyone, and it doesn’t have to be.

    That’s not to say it couldn’t be *better*. Of course it could. But when a bunch of people are working on something in their free time, usually not getting paid for it… they’re only going to work on the things they want to, that fit their use case and need ‘fixing’ for their workflow.

    And yeah, there’s an extremely large set of these people who are, for lack of a better term, ‘techbros’, that leave a downright toxic aura around them and the projects they work on. But that’s a problem that’s widespread across all of IT and much of society in general – Linux isn’t much better or worse than, say, your average video game studio or publisher, it feels like.

    I think at this point, with the proliferation of mobile devices, the success of Android and iOS (the latter being a BSD core, just like macOS), chasing the ‘desktop’ dream is kind of a fool’s errand. Open source won the server space, and it won the mobile space. The big thing keeping MS afloat on the desktop side is really inertia – they got big early on, people are used to Windows, and they have a built-up library of software for it and expectations. Unless MS takes a *huge* mis-step (like, astronomically huge, worse than Vista and WinME combined), they can safely ride that out as desktops become less and less relevant.

    They’re already making plenty of stuff for Android, so I’m sure they’ll be fine even after that point. :)

    1. >” You sit someone down and point them at a web browser and LibreOffice and they generally know what they’re doing.”

      Yeah, if you make the assumption that everyone’s just going to browse twitter and write down cookie recipes.

      What about instant messaging, video conferencing, CAD, visual design, photo editing, video editing, games, entertainment? As soon as you step outside of the Libre-office and web-browser baby pen, things start to break down worse and worse the more you try to use your computer like a normal person.

      1. If you define “normal person” as a windows user who wants to run specific windows apps, then yes, it’s not windows.

        I have 2 PCs, and I spend 95% of the time using linux, and only occasionally start the windows box up when I want to run Fusion 360 for some CAD/CAM, and occasionally games that aren’t available on linux.

        When I set my Steam game list to “linux and steamos”, it’s well over half. That’s significantly worse, no doubt, but there’s certainly not a lack of games available.

        I don’t know what you refer to by “Entertainment” either, but I have spotify, netflix, Amazon prime, and all that junk. I mean, the majority of stuff people do these days is in browsers anyway, so it’s certainly not lacking.

      2. What about instant messaging, video conferencing, CAD, visual design, photo editing, video editing, games, entertainment?

        I haven’t done it, but I think instant messaging and video conferencing are possible in Linux.

        I’ve used CAD programs in Linux. What’s visual design?

        I’ve done a lot of photo editing in Linux, and it continues to improve.

        I’ve done video editing in Linux and it was a PITA, but it was also a PITA in Windows. I’ll admit that about half of all Linux video editors crash, usually when they start, and the documentation is inexcusable.

        Some games work in Linux, some run under wine. It’s hit or miss.

        What do you mean by entertainment besides games? Linux plays videos from disks, drives, and the internet, and obviously music also.

        Because source code is usually available, I’ve been able to improve speed and function on some things that would be nearly impossible in Windows.

  17. The problem arises when people who don’t know what they’re doing start fiddling with those “expert mode” settings based on some nonsense they read on the internet.

    For example, the top three results from a quick search:

    10 Super Cool Ways to Make Windows 10 Run Faster like Bolt
    10 easy ways to speed up Windows 10
    19 Tips & Tricks To Speed Up Windows 10 And Make It Faster

    Will following these ‘guides’ make their system run faster? – I doubt it. Performance tuning is a little more complicated than that and performance enhancing drugs don’t work on computers.

    Is there a chance they’ll break something and need to call support? – Probably. It would depend on how much they think they know.

  18. A few years ago I persuaded my wife to try Linux. It lasted about a week and my wife wanted Windows back. It only takes one or two applications to send people back to Windows; in her case it was Office. I did install it through Wine, but it didn’t feel the same. Myself, I have used Linux as my main OS for years now, and would never go back to Windows.

    1. Yeah. The problem there is persuading someone who is happy with the way they’re doing things to try another way that will do the same thing, but probably a little more awkwardly.

      The people who need to switch to linux are the ones who can already see the benefit to them. If you’ve got 10 reasons why linux will be better for you, and a couple where it’ll be worse, you might be willing to put up with running in wine, or using an open source alternative, etc, but if every question is how to make it more like windows, then you may as well stick with windows.

      1. I think your post sums up perfectly why Linux will always have difficulty in getting new people to switch. Like you quite correctly pointed out, why do the same thing if it turns out to be more awkward? I personally switched to Linux for the benefits of having more security, feeling more in control of my computing experience, and learning something new along the way.

  19. This isn’t going to happen. For personal but also business reasons – why should e.g. RedHat be helping out their competitor Canonical or Suse? (they actually do by working on common projects, just not in marketing).

    Also, the article is clearly written from a point of view of someone very much used to the Windows/Mac world – there is only one way of doing things, there is only one source where the OS comes from and there is only one company to blame if something goes wrong, with all decisions down to the color scheme made for me by someone who knows better. Trying to shoehorn Linux distributions into this model is bound to fail and won’t happen on the global (across distributions) scale.

    The diversity is Linux’s strength and not a weakness – you can pretty much always find a variant of the system that does what you want and fits with your way of doing things. If there is only one, it is either my way or the highway – you don’t like the changes Microsoft or Apple have made to their respective systems? Well, tough, there is no alternative “Windows distribution” you could use. With Linux you always have alternatives, including the possibility of rolling your own (or paying someone to do it for you).

    However, that’s really a false problem. Pick a distribution that does what you want (ideally something mainstream and not Puppy Linux, which is explicitly not for people new to the system!) and stick with it. You aren’t jumping from Windows 10 to OSX and back on the same machine all the time either, right? (That is actually much closer to what Linux distribution differences are about.) There is no “The Linux” except for the kernel. But there are tons of *Linux-derived* OSes around – Ubuntu, RedHat, Suse, Puppy, Slackware, Mageia, Raspbian…. Once you start looking at the problem from this angle, it starts to make a lot more sense.

    (For the hardware guys around here – you don’t have a generic “ARM MCU” either, do you? ST Micro is different from NXP which is different from Atmel, despite all using the same Cortex M3/4 cores licensed from ARM. And nobody claims that this hampers adoption of the ARM architecture, does it? Linux is similar to this.)

    What is an issue is the level of documentation – but then if you want first class documentation you need to also be willing to pay for it, whether in money or effort (even though a lot of projects have excellent documentation already). Complaining that “Linux won’t happen because documentation sucks” and at the same time expecting someone else do the work for free for you is a bit dishonest, isn’t it?

    Never mind that documentation for proprietary systems with orders of magnitude larger budgets is nothing to write home about either – have you ever tried to find something in Windows 10 documentation? Right, there isn’t any to speak of (for end users), there is just mostly useless help and Microsoft forums. Compare that with e.g. Arch Linux documentation or docs for any of the major distributions! And yet nobody yells “Windows 10 will never happen on desktop because the documentation sucks!”.

    1. heh. I’ve been hearing complaints about Linux’s ‘lack’ of documentation for years and it usually boils down to their specific use case not being covered.

      Windows doesn’t lack documentation but most of it is so obtuse you need a translator to make sense of it. Just ask anyone who’s new to the deployment toolkit :)

  20. I guess I’m one of those people that just doesn’t care. What does desktop Linux gain from going from 2% market share to 20%? Popularity doesn’t translate to additional developers or contributions. Users that never compile from source aren’t going to see the requests for donations on the project pages. So it leaves me scratching my head. Why try to win a popularity contest against the rich kids?

    Linux came to dominate commercial applications because its price undercut the competition, and its license drove its growth by forcing reciprocity. Most desktop users never see or consider the price of their operating system, and the most important thing for a desktop user isn’t technical. It’s compatibility.

    1. A 20% market share for desktop Linux would mean business opportunities for software companies and thus more developers being paid to provide software for Linux. It would also mean that more people who have Linux as their religion would be able to make a living at it.

  21. Linux doesn’t have a “marketing” problem; rather, it suffers from a lack of professional level software that -isn’t- designed specifically for uber-geeks. The best example I can give you is photo-editing/management software. Sure, I know all about RawTherapee, DarkTable, and the host of others that someone is going to list to prove me wrong. But those packages are geek-level, not professional level. As an example: RawTherapee has so many ways of sharpening an image. Why? Because it’s open-source and everybody gets to contribute but there’s no one person who can say “no”. Therein lies the Linux problem. Open-source lets everybody in on the act regardless of whether their contribution is even worth the bandwidth.

    If the Linux users are really concerned about “marketing”, then let’s see the various Linux distros craft a crowdfunding program so we can pony up and contribute $$ to Skylum, Affinity and maybe Adobe for Linux versions of Luminar, Affinity Photo and Lightroom Classic. (What’s that I hear? Oh, that costs money and everybody wants free beer.)

    Please understand I would -love- to switch to Linux full-time but let’s not kid ourselves about open-source. Much of it is good but some of it is just laughable.

    1. There are some very difficult cases for sharpening without too much ringing or noise amplification. I have over 25 sharpeners in GIMP, including blind deconvolution, and each one has some case where it’s best. Other sharpening techniques are available as multi-step processes and they’re important also. Furthermore, I have an improvement for unsharp mask in mind that I’m going to implement some day.

      Complaining about too much choice is something that USSR communists sometimes did when they visited America in 1985. If you’re bewildered and can’t choose, then just try something and stick with it for a while.

  22. In my opinion the biggest problem with Linux is that it’s not user-friendly. End users of Linux don’t care because they were raised on Linux, and they don’t know any better. User experience is something Linux developers seldom care about, or even know about…

    A typical (e-)book on Linux for beginners has a table of contents that looks like this:
    1. What is Linux and what is ?
    2. Installing and configuring your new distro.
    3. Our file system is better.
    4. UI and adding apps.
    5. Basic configuration, introduction to CLI and text file editing
    Starting from chapter 6 till the end: things one can do with the CLI, making the GUI a useless addition for displaying multiple terminal windows next to each other.
    Index.
    One chapter that’s always missing is “How to get back to using sane OS, like Windows”.

    The problem with multiple distributions of Linux is that there are multiple development teams that basically make the same Linux but in slightly different ways. If, for example, they all worked on one distro for normal people, one for developers and one for servers/networks (and maybe two or three more for different users), Linux would dominate the market. The other problem is that the families of Linux distros aren’t too compatible with each other. A Red Hat distro app can’t be installed easily (with a few clicks) on a Debian system. You must compile it first, if the developers didn’t make a Debian package. Major developers of software won’t make multiple versions of their software and won’t share source code, as it’s usually their IP that keeps them afloat. Too many choices are as bad as no choice…

    I’d really like to switch to Linux. But I just can’t stand it. The most consistent thing about all linuxes is consistently bad user experience. You know what made Microsoft Windows so successful? Each new Windows (with a few bad exceptions – ME broke my motherboard) makes it easier to use, to work “out of the box”, to look modern for its times. Each new Windows relied less and less on the CLI for everyday use and for fixing problems. Under Windows the only config files I edit are for some older games and for some mods in Minecraft or KSP. The only CLI tool I used in recent memory is the System File Checker utility. The only thing I had to change in the registry was a setting for power management of the PICKit3 USB interface. With Win98SE I used to get deep into OS guts and I used plenty of naked DOS back then. Now I have Win10 and I enjoy it. Because it’s good and it just works. I tried Linux multiple times, it hurt. Everything is harder on Linux.

    User experience?
    It’s like this: you spend 80% of your time looking at gray letters on a black screen, typing everything with arcane commands, editing arcane files, each file and command having different, arcane rules to follow. Every mistake is punished by death of the file system or an eternity in the land of the CLI. To install an app you type a command. To run an app, you type a command. To use it, you type a command with an invocation that looks like summoning a demon from hell. To remove the app, just type another command. There is a reason why pros use mechanical keyboards – they use Linux, the ultimate typing-exercise OS. Everything worth doing on Linux is done with the keyboard. Sound not working because there are 15 different sound subsystems and none of them is compatible with everything? Type some more in config files to spawn a proper sound daemon and invoke its voice powers to play some music. Linux books and guides read like the Necronomicon. If you are afraid of being damned to hell, just forget about sound. Listen to your hard drive spinning in search of another config text file and the soothing sounds of your keyboard clicking and clacking while you try to summon your video daemon in the native resolution of your monitor and in more colors than 16. One of the missing commandments was this:
    Thou shall not use Linux.
    And the fact it’s missing is proof that God doesn’t exist and every religion is just made up…

    I’m typing this post in Notepad with colors inverted because it’s easier on the eye and I’m visually impaired. What is keyboard shortcut to invert colors of GUI under any Linux, any window manager? For Win10 it’s [Ctrl]+[Win]+[C].

    Windows – it works! Bi***es!

    1. I presume you’re trolling but I’ll bite. I switched to linux (specifically Mint) because it just works. I’m an automotive technician / autospark. I have lost hours and hours to Windows updates screwing up the delicately set up diagnostic / coding / programming / flashing tools for various marques. Every Windows update would usually break something on some vintage application I need for work, and naturally I would only find out about it when I needed it.

      Nowadays, I run Mint, and a virgin XP or 7 VM for each application, deny them the internet, and I can get on with my job.

      Ideally I would run them natively under wine but having the individual basic VMs makes it easy to backup and clone stuff.

      My main diagnostic laptop is now just an old chromebook with a large SSD and 16GB of RAM; it hosts VMs like a champ, and if I drop it in the workshop, £100 on ebay gets another. Windows on its own brought many much faster laptops to their knees with updates while running the applications natively.

      Every morning I walk into work, and linux just works.

      1. I agree with you: new Windows and old software don’t mix. And the more niche the software is, the worse it gets. I have a Plustek scanner with software written for Win95. I bought it in the early WinXP era. I can use it with some fiddling. But I have a webcam and two camcorders that won’t work with my current Win version – it’s a form of planned obsolescence and it’s a specialty of vendors of less common and usually expensive hardware.

        Your problem however is caused by something else – the monopoly. Here’s how it screws you more than Windows Update:
        The software you use is made by a single company for a single family of cars. They can keep it running because they stockpiled old hardware and run old OSes disconnected from the net. They could rewrite it for a modern OS, but that costs money and time. And if the software is complex, it might take lots and lots of work to rebuild it from scratch. But they know that people like you need it to do their jobs. So they shift the costs of keeping it running onto the end users. There is no alternative software for you to use, so the companies that made the software hold you by the balls and can just ignore you. And no other company can reverse engineer the software to make an alternative because of patents, IP, etc. But on the bright side, they will pay much more in the future, the way banks and financial institutions pay now for not upgrading from COBOL and running their spaghetti code on old hardware, and now on emulators…

        Also VMs work on Windows too. I used them for some Linux distros…

      2. Agree. I don’t know about current Windows, but the registry used to fill up to massive amounts, and the antivirus stuff would knock back performance even on new hardware. Linux provides choices, lots of choices, and devs are willing to listen and respond to users’ requests if possible; it requires a willingness to be involved rather than demanding action. For those who like Windows, that’s fine. If you are willing to separate from the herd, give Linux a fair try. Do it on some old hardware, try different distros, and if you don’t get on with it, no loss.

    2. “It’s like this: you spend 80% of time looking at gray letters on black screen, typing everything with some arcane commands, editing arcane files, each file and command having different, arcane rules to follow. Every mistake is punished by death of file system or eternity in the land of CLI.”

      WTF is it you’re trying to do? And with what?

      “To run an app, you type a command”

      Sounds like you’ve installed a headless system. Why not use the gui?

      “Windows – it works!”

      Except when it decides to install an update while you’re trying to work…

      I recently installed Linux on a laptop I was given by a Windows user because it was “too slow”. I installed Linux, installed a few applications I need, and it runs very nicely. I turn it on, and it works. I haven’t touched a single text file in the process.

      “Now I have Win10 and I enjoy it. Because it’s good and it just works. I tried Linux multiple times, it hurt.”

      But really, if you like Windows so much, why did you feel the need to try Linux “multiple times”?

      1. It’s just that my computers were never built for Linux. A friend of mine, who is a Linux Guru told me that’s the root of my problems.

        The only time Linux worked without a single problem was back in 2001, when I installed a Red Hat that came with a computer magazine. It worked, except for games. And I wanted to play games. A few years later I tried Ubuntu and OpenSUSE. It took me 8 hours to convince Ubuntu to switch to a higher resolution than 640×480 at 8-bit color. The network card didn’t work at all. At least sound worked. But no MP3 playback, because you need a codec that wasn’t included. OpenSUSE worked a bit better; at least I had the native resolution. Network worked, but no sound that time. I managed to get sound working and tried some games and software with Wine. Well, even older games ran on that machine as if it was 1995. Graphics looked flat, worse than Duke Nukem 3D. Half of the apps I used didn’t work at all, the other half was unstable. Oh well, back to Windows. I tried Linux a few more times – it was always the same: always, just always, something refused to work. I tried a Linux distro for old computers on an old laptop; it didn’t even boot properly, the kernel was too new for its CPU. I wanted to use LinuxCNC with my 3020 mill, but it doesn’t support the USB controller used in it. I tried a Linux for creators – no sound unless I plugged in my old USB headphones (they eventually broke). And the movie editor had half of its options unavailable, just placeholders for future updates.

        The last time I tried Linux was a few months back with Mint. I needed a live USB to connect to the internet and record a podcast. Under Windows my sound chip didn’t have a working microphone input. No updated drivers, and the older one didn’t work. Well, Mint worked okay, except for one thing: it was too white and green and I couldn’t read the screen very well. I tried to find a way to change the UI colors, increase UI scaling, etc., but couldn’t find it. I ran out of time and spent some money on USB headphones instead…

        And why do I keep trying to switch over to Linux? Because it IS a good OS, when it works. And it would be faster and better on my current hardware, with no bloat I never use. But it just takes too much time and work to set it up. I switched over to mostly FOSS software or software that has Linux versions anyway. And I could play games with a VM running a debloated and lightweight version of Win10. But I admit: Linux gets better and better. In a few more years the main distros might be a valid replacement for Windows for the average user. By average I mean someone who just wants to use his computer and not bother with configuration even on odd hardware.

        And as for automatic updates, wasn’t that a Linux thing first? Didn’t it ever break something, some dependency, by accident? On Win10 I set the update time to 3 o’clock A.M. It installs updates when I turn it off and turn it on the next day. No nag screens, no unexpected restarts, no problem…

    3. Major Linux distros default to a GUI installation; Firefox and LibreOffice are just clicks away.

      Black on White, White on Black, and several other possibilities are available in the profile settings for Konsole. Font style and size are also adjustable. Local and full-screen zooming can be achieved in KDE in a number of ways.

  23. A friend of mine is currently helping 3 other people to switch to Linux.
    The main reason is that M$ is cancelling support for the OS running on their PCs in the near future (beginning of 2020?) and they are unwilling to buy another Windoze version for their current PCs.

    I haven’t touched that stuff for a number of years.
    Back in the 90’s their FUD:
    https://en.wikipedia.org/wiki/Fear,_uncertainty,_and_doubt
    campaigns cost me a month of my life because I had to rewrite an FTP server to be compatible with both the RFCs and M$ wickedness. I will not forget, nor forgive, as long as they do not pay me back the month I wasted on their deliberate incompatibilities, which will probably never happen.

    1. Hopefully he knows what he’s getting into.

      If those people want to switch to linux as an alternative, and are willing to learn, this is great.

      If they’re expecting it’s like windows but it’s free, your friend is eventually going to get sick of dealing with it, and they’ll all end up back on windows.

  24. @Koen

    $ for photo in *.JPG; do djpeg ${photo} | pamscale -xyfit 720 720 | cjpeg -quality 60 -progressive -outfile ${photo/.JPG/-small.jpg}; done

    Yes I have used that exact command a lot because I just want to reduce the size of some photos for inclusion in an article and not spend all day faffing around with a GUI image editor.

    1. Yes, once you figure out what you need, it makes it easy to do multiple operations.

      The mind boggles at how many programming languages are included even in the “bad” distributions. Not just Python and C, but scripting languages like the shells and Awk and sed. If one doesn’t fit you, there are always others.

  25. I use Linux now as I got fed up of being the unofficial Microsoft technical support.

    These days, it has changed so much from when I last used it, I don’t recognise anything — trying to guide someone over the phone is pretty much not going to happen for me. I haven’t known the Windows UI that well since the turn of the century, which is about when I started using Linux seriously.

    Today, I see Microsoft building the Windows Subsystem for Linux. As a free software developer, why should I spend my unpaid hours trying to support the pariah of operating systems when I can just build for POSIX, which will neatly net the BSDs, Linux and MacOS X?

    Linux/BSD/MacOS X has pseudo TTYs, just one type of file descriptor with a common API, a network stack that just works, file systems that support a rich tapestry of object types, and a lowest-common-denominator environment which makes writing a bootstrap script for your project a cinch. Need some specialised set-up for your project? Just put it in a ./configure script. Write that script in something basic like ksh and it’ll work practically anywhere.
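
    To give a sense of scale (a purely hypothetical sketch; all this imaginary ./configure does is find a C compiler and write the result to config.mk), such a script can be tiny:

      #!/bin/sh
      # hypothetical minimal configure: pick the first C compiler available
      for cc in cc gcc clang; do
          if command -v "$cc" >/dev/null 2>&1; then
              echo "CC=$cc" > config.mk
              echo "configure: using $cc"
              exit 0
          fi
      done
      echo "configure: no C compiler found" >&2
      exit 1

    The same script runs unchanged under any POSIX shell on Linux, the BSDs and MacOS X, which is exactly the point.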

    Some argue bash on Windows is a daft idea? I say maintaining two different versions of the same script that essentially do the same thing is a daft idea!

    If Microsoft says their WSL is as good as they claim, the Linux version of my software should work on that too without modification.

    Now I accept the above is meaningless to the average computer user that just wants to “get stuff done”. Guess what, I as a software developer, just want to “get stuff done” too. Like a user, I’ll use the path of least resistance. Ultimately, software doesn’t write itself, and if Microsoft make it too difficult to support their platform, people like me stop supporting it.

    If we all follow open and documented standards when interfacing our pieces of software, this actually does not matter. With TCP/IP, we now see very disparate machines able to exchange network traffic. With the demise of ActiveX controls, Flash, Java applets, and the rise of decent web standards, we’re now seeing applications that just work, on any platform whether it be Windows, iOS, Android, MacOS X, Linux, *BSD, … etc.

    If I use a word processor that works the way I want, and completely meets some open standard for documents and share a document with you… provided your word processor of choice fully supports that open standard, it actually does not matter.

    Open standards matter more than open source.

  26. “Virtually 100% of supercomputers use Linux now. How you define a webserver is contentious, and Linux figures range from 70% to 98% depending on whether you count cloud services and subdomains, but anyway Linux runs the vast majority of the web. Even smartphones are dominated by the Linux-powered Android, with about 65% of devices, 20% using iOS, and the rest being an amalgamation of fading Blackberries, Windows Phones, and others.”

    You’ve left out the embedded world. Linux also dominates embedded systems. If it’s not a microcontroller, or a highly specialised domain, it’s likely running Linux.

    “From these numbers we can infer that there is some intrinsic benefit to working in a Linux environment. Not only does it have dominance when raw computing ability is needed, either in a supercomputer or a webserver, but it must have some ability to effectively work as a personal computer as well, otherwise Android wouldn’t be so popular on smartphones and tablets. From there it follows that the only reason that Microsoft and Apple dominate the desktop world is because they have a marketing group behind their products, which provides customers with a comfortable customer service layer between themselves and the engineers and programmers at those companies.”

    Behind all this is an unstated assumption – that having the great unwashed running Linux on their desktops would be a Good Thing.

    The issue is more than marketing, it’s also customer support. When Aunt Mabel is having trouble accessing cat videos, who’s she going to call, and who will pay for that?

    But really, who cares? Linux dominates the server and embedded world, why should we care about its share of the desktop world [which is already in decline]? How would we benefit if this share was increased?

  27. 20 years ago:
    – To run a program in Linux you must type its name;
    – Linux lacks a lot of device drivers;
    – You have to learn everything again to use Linux;
    – Linux GUIs have a lot of inconsistencies;
    – You have to spend hours configuring your system to make it usable.

    Hey, that’s Windows 10 today!

      1. Are you sure…?
        – Crappy start menu, programs in alphabetical order, better start typing to find something..
        – Device drivers..hmmmm better start buying new peripherals
        – All user interface remodelled, better start typing (again) to find this or that configuration menu…
        – “tablet” style menus mixed with regular windows menus all over the place…
        – Thank god there are now at least two debloaters to turn off Cortana, web search, annoying whistles and data-mining features, uninstall crappy games, and correct 100% disk usage, so I will not have to lose a lot of time doing that!

        I have nothing against windows, I have several machines running it, my beef is on Windows 10.

    1. Reading is not enough, one must be able to understand it too!

      I’ll say it again, without the irony: Windows 10 is crap!

      Most previous Windows releases were fine, with XP and 7 being good operating systems, pretty stable and very usable.

      But then Windows 8.1 and again Windows 10 screwed that up, bringing up annoyances that the “competition” had 20 years back.

      Did you get it now or do I need to draw that?

  28. Well well. The same arguments I’ve read/heard for many years. Sigh. I now run Linux 18.04 and Windows 7 on 4 machines. What I’d like to do is change the size of my mouse cursor on 18.04. I know, I’ve tried ALL THE SUGGESTIONS ON ALL THE FORUMS AND THEY DON’T WORK.

    Otherwise, wtf. I haven’t seen anyone on this largish thread note that WIN 7 is effectively the last Windows you can truly own a copy of. You rent WIN 10 and cannot resist updates. Bought an ADOBE product lately? Nope, you can rent one.

    How about any late version of WORD? Excel? Right, you rent them too. The basic structure of consumer software has changed, IMHO not for the better.

    If I sound like a crotchety old guy, that’s because I am. I wrote my first computer program in machine language on a Bendix G-15D in 1958 and have been cursing computers ever since. Just hang in there, the only thing that’s constant is change. Stay flexible.

    1. We need to write sci-fi movies differently now. We know that when the heroes return from outer space after the global apocalypse there really won’t be spooky automated radio jock-in-the-box DJs playing Golden Earring and ZZ Top, because all of the SmartNet, Meraki, and DNA accounts will have expired, the Windows annual accounts will have expired, and all displays will be stuck on a screen saying please reboot computer for updates to take effect. Every one of these technology companies is subject to ups and downs, the stock market, and mergers and acquisitions, and could be Sears or Kodak at any time. It is a great threat to permit IT companies to tether purchased hardware functionality to a Sun licensing renewal model. If you buy it you should own it, and it should work until you turn it off.

    2. Cursors are provided by the desktop. In KDE, you get to choose. I’m looking at a decade-old version of KDE right now, and there are 3 size choices. I don’t know what you’d do if you need something outside that range, perhaps edit some bitmap files.

    1. What matters is ensuring M$ doesn’t try any more crookedness with things like “secure boot” or other such initiatives designed to block Linux. So long as we can protect ourselves from those who would like to wipe out our OS entirely, we’ll be fine, however many users we have.

  29. I think we are overcomplicating the problem.

    Linux can be anything you want it to be.

    I believe the problem lies in at least 3 things:
    1. There are already 2 different companies that satisfy the needs of consumers, one being Microsoft and the other being Apple.
    2. Apart from Red Hat, which focuses on enterprise markets, there isn’t a large Linux company that has solutions entirely focused on the consumer… that means there isn’t anyone to blame for computer trouble.
    3. People pay someone to choose the right solution for them. They don’t want to think about the benefits of whatever. If it works for them, then that’s what they are going to get.

  30. “Any of these points may in fact be valid, but will instantly turn away anyone who is simply looking for a quality piece of software…”

    But that I think is where you miss the point. As a long time desktop Linux user myself I would like to see all the major applications ported so that I never HAVE to use something else. I also lived through the days when everything on the web was Flash and Linux’s latest Flash version was too old to view it. More market share is the answer to both of those things.

    But… what is scarier than the idea of never getting more users? Getting outnumbered on our own platform by people that just don’t care about the technical differences that make Linux different or even better than the more mainstream alternatives. That would change the direction of development, if not among the hobbyist contributors then at least among the profit-motivated corporate ones. That matters because they overtook the hobbyists in programming hours a long time ago. This could easily make the future of Linux look more like the Windows we all fled. Then where do we go? I tried FreeBSD myself but didn’t want to give up Netflix.

    But… wait… apparently RedHat is running the show now and their goal is Linuxd, the cheap, low quality version of MacOSX. Oh well. Might as well bring on all the newbies.

  31. Nobody will ever use Windows because who wants to learn all those arcane DOS commands?

    Yes, it sounds that ignorant when people say it about Linux. Many Linux users do a lot at the commandline because they like it that way. Nobody does it because they HAVE to. A good desktop distro has gui menus for pretty much everything although most users still never need them because default settings are just fine.

    I’ve dealt with users so computer ignorant that you could stick them in front of a Mac or a PC and they didn’t know the difference. They still got things done because they didn’t need to know. They just knew what picture represented the one application they needed. Click and done, no more interaction with the OS necessary. This is no different with a decent Linux desktop. Just click the little picture and get on with life.

    I put kids on Raspbian because I hope it encourages them to actually learn something. I just wish there was an AMD64 Raspbian. I’m probably going to have to switch to some other Debian based distro and install packages to make it more Raspbian like soon.

    Older adults get Ubuntu because it has good hardware support and they probably aren’t going to learn anything anyway so whatever is the easiest path to get them to their web browser is best.

    But do you want to get stuck being the only one a user knows who can support their computer that you put Linux on? Surely if you install Windows then there will be someone else that will step in from time to time and provide support. Yeah right! Do you want to be stuck supporting Windows 3 days later when they already have it filled up with crapware? Or if they had money for a Mac would you be supporting their computer in the first place?

    Yes, Linux is easiest

    1. There is a Raspbian desktop for AMD64 you may download. It works well as a live Linux booting from USB Flash Drive.
      https://www.raspberrypi.org/downloads/raspberry-pi-desktop/ Debian Buster kernel 4.19 Version: September 2019

      I liked your comments “Not Amused”:
      Look at Lick Installer 1.3.3 for MS Windows computers
      http://puppylinux-or-pcbsd.blogspot.com/2019/11/ms-windows-users-consider-lick.html
      A good PuppyLinux to use in “Frugal Mode” from a usb flash drive.
      http://puppylinux-or-pcbsd.blogspot.com/2019/04/fatdog64-800-release.html
      https://distrotest.net

      Yes, Linux is easiest

  32. I reject the thesis that there is a problem.

    Why does Linux need desktop market share?

    It’s a tool. It has a sufficiently active community that it is perpetually prepared for and good at a huge variety of different jobs – more so than a multi-purpose tool has any business being. It’s a jack of many trades and a master of most of them. People are building billion dollar products with it, making careers out of working on it or with it, and tinkering with it in their spare time for fun.

    There is no problem here.

    1. Were you trying to use Linux as a desktop back when everything on the web was Flash and Linux was years behind on Flash support? That is the price of insufficient desktop market share. It sucks. You don’t want to go there.

      Just think how little corporate support we would have to lose today before streaming services like Netflix and Hulu became out of reach from the Linux desktop. It would be FreeBSD!

    2. I agree. There really isn’t a problem. As you say, there are plenty of users that support the Linux platform. I like it and use it. It does all I need it to do and more. I don’t use Netflix or Hulu or any other subscription site, nor do I desire to. My VAX, Prime, Windows and DOS/CPM days are behind me (at home, that is) and Linux is ‘the’ go-to OS and has been a favorite since I first loaded a stack of Slackware floppies back when….

      Win7 was the last OS I ever bought or will buy intentionally. I say that because I have bought laptops and then laid down Linux over the Windows that was on it.

    3. “Why does Linux need desktop market share?”

      Well, it technically doesn’t need market share, but if it has no market share, that means no one is using it as a desktop. Which I honestly think is a shame.

      Linux is pretty much the only alternative desktop “OS” out there besides windows (I don’t count OS X because Apple doesn’t want it on hardware they haven’t sold)

      And with the recent blunders of windows, there is a real need for desktop alternatives, which the likes of KDE and Gnome could easily take on.

      The problem is no real “problem” for linux, I agree. It can be perfectly healthy as an environment and project just by being a kernel for servers and commercial applications. But there is a real, growing need for desktop alternatives. I’m honestly fed up with windows constantly “fixing” what already works (countless iterations of settings, which constantly get progressively worse to navigate, etc.) and implementing malware-like practices like advertising embedded in a product that you are already throwing hundreds of dollars at.

      “Linux” doesn’t need desktop users. It’s the users that need Linux. But as the article points out, even though linux is a pretty viable alternative to windows for most things, novice users are turned off by the marketing, which is true. I mean, just go to the front page of Ubuntu.org and read it right now. It’s not exactly selling it well to consumers. Focusing on a pretty narrow field like ML in the main paragraph of the page is a big blunder for a distro which supposedly targets desktop markets. It is not the way you sell a desktop distro.

  33. Take this quote from “Whats new in Ubuntu 19.10” on the ubuntu landing page:

    > Ubuntu 19.10 includes enhanced K8s capabilities for edge and AI/ML, OpenStack Train live-migration extensions for easier infrastructure deployments and more.

    Those aren’t the highlights of 19.10 for me at all! Yikes! The new version of Gnome comes with some serious performance improvements compared to what was shipped in 18.04. That is what end users who aren’t ML enthusiasts care about!

    What should have been their landing page is the /desktop page, and even that isn’t superb. Why not highlight the Gnome applications “store”? That things are actually as easy to install, if not easier, than on OSX and windows… Why not showcase some productivity apps instead of a very bland media player? Am I supposed to be impressed that I can play back mp4s?

    Anyways, I must say that I really have started to grow fond of Linux. I have been using it through ssh for the last 10 years, but only recently have I started using it as a desktop, and I’m actually impressed by how good Gnome can feel if you spend a few hours tweaking stuff (dash-to-dock etc).

  34. Part of the problem is any Linux discussion immediately starts using acronyms and terms that are obscure to the non-Linux expert. That’s fine if it’s aimed at the expert.

    It’s a bit like starting teaching someone to drive by starting with compression ratios and the inner workings of a locking differential.

    What is needed is a simple idiot sheet of simple steps to get it up and running.

    Or another analogy, learn to ride the bike first, then worry about how it works.

    1. “What is needed is a simple idiot sheet of simple steps to get it up and running.”

      Actually, what’s needed is to design it so well from a user experience perspective that they don’t need a cheat sheet at all… Linux people are great at nuts and bolts but truly awful at UI/UX design.

      1. Well, in Unix days there were books aimed at the user. Only the things the user needed, in more detail than a chapter or two.

        But then they never had to be sysop, that was done by someone else.

        The problem with Linux books is they are mostly about the install, and administration, with minor space devoted to using it.

        That’s not a Linux fault, it’s the fault of the books. Separating out the user stuff would give more space for detail.

  35. “Even smartphones are dominated by the Linux-powered Android, with about 65% of devices, 20% using iOS, and the rest being an amalgamation of fading Blackberries, Windows Phones, and others.”

    Ironically, this is the exception that proves the rule. People don’t buy Android phones for Linux – that just happens to be what Android runs ON. The end user almost never sees it. You could swap out Linux and put Unix or Windows or whatever you wanted under it – as long as the Android part works, no one would care.

    Overall, well said. I’ve been saying pretty much the same thing for over a decade. Most people DON’T want choice – they want the illusion of choice. They want to be able to pick and go, not agonise over every detail.

    And what a lot of Linux types miss – it’s not about the OS.. it’s not even about the apps entirely… it’s about workflow. People learn how to do things in a certain way and once they get good at it, they can be very fast and very accurate. It’s not enough to be better – it has to be non-disruptive.

    A classic example is OpenOffice or LibreOffice vs Microsoft Office. It doesn’t matter that it does the same things (mostly). It’s about doing things the same way so the end user doesn’t have to waste hours relearning something they already know. That’s why, even though Gimp is as good as if not better than Photoshop, and Inkscape is as good as if not better than Illustrator, they’re almost unknown… because they don’t work the same way and have different workflows.

    This is the hard part for FOSS types to get: free isn’t free. It’s actually worth paying a monthly fee to make sure your workflow isn’t upset. Why do you think everyone went spastic when Microsoft changed the Start menu?

    While we’re at it – why does everyone go spastic because there’s *gasp* 8 different versions of Windows (most of which you’ll never see!)

    Linux is an operating system written by geeks for geeks. And that’s fine. There’s literally NOTHING wrong with this. Until the geeks think everyone else should be using it too…

  36. frequent Linux AND Windows user here – I’ve been dual-booting weekly for a decade or so (was an exclusive Windows user before that), so maybe my experience will be of use.

    tl;dr: reliability/ease of use and the ecosystem are also issues for Linux

    For the first ~5 years of Linux usage (I was using Ubuntu), I would often break my Linux distro (a bad “cp” once, messing with packages, etc.), or an update would break my Optimus configuration resulting in no screen (intel+Nvidia laptops weren’t perfectly supported, to say the least). As a result, I would have to reinstall it nearly every year.
    For the last 5 years, I’ve been using Manjaro (KDE flavour) and even though some updates would prevent the system from booting, it’s been overall more stable than what I was used to with Ubuntu. The Optimus setup is configured nearly flawlessly each time. Never had a kernel panic. Quite great.
    I have Manjaro on another Optimus laptop, and the experience is quite similar.

    On the other hand, Windows has been nearly flawless since then. Maybe I had to reinstall once after a disc crash (not really the OS’s fault), and twice had some BSOD episodes caused by a bad driver (avoid the “Killer” wireless crap if you can). Once a virus episode, cleaned up right away.
    The laptop I’m currently using received windows 7->8->10 upgrades flawlessly.

    So far, windows had been a bit more stable than Linux, at least for my usage.
    I wouldn’t give a Linux computer to my grandmother. Windows or maybe MACOS, even though I have no experience in the latter (but having used an iPhone in the past, I know how Apple cares about simplifying or “dumb-ing” everything down for 99% of the population).

    As for the ecosystem, apart from browsing the web there is not a ton of common usage I have for both.
    I use Windows mainly because it’s the only OS that can run the software I work with (Altium Designer, Solidworks, Fusion 360), or the games I play (I probably have ~30 games in my steam library).
    Yes, I could try to fiddle with wine, use a VM, or try the equivalents but either performance would be fucking horrible or I’d have to change my (and my company’s) workflows, if there’s a viable alternative at all.
    Of course, I’d love to see all those programs run at full speed on Linux, but I can understand that supporting a platform very few in your target audience use is hard to justify financially. Chicken and egg problem…

    I do all the rest on Linux (FPGA, MCU, or buildroot programming stuff). I love that I can install the toolchain for my MCU, or build the firmware and then load it, all from the command line. It’s just 10x more efficient and easier than having to learn a horrible Eclipse-like IDE per MCU or FPGA manufacturer. Plus, you get exposed to the raw source code and tools, without any magic tool blocking you from fully understanding what is going on and how to master it.
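    Just to give a flavour of that command-line flow (purely illustrative – the package, config, and file names below are only an example for an STM32-style ARM Cortex-M part, not a universal recipe), the whole install/build/flash cycle can be three commands:

    sudo apt install gcc-arm-none-eabi openocd
    make
    openocd -f interface/stlink.cfg -f target/stm32f1x.cfg -c "program build/firmware.elf verify reset exit"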

    To be honest, I love that I have more control of my Linux OS than on Windows, even if it’s a tad more complex to use and less stable.
    I *would* give up Windows without looking back, IF all the games and programs would run natively on Linux.
    Maybe marketing could push software publishers to support Linux more widely and break users out of this Microsoft OS dependency chain, but I don’t see that as the only hurdle to winning over the remaining 96% of the desktop market share.

  37. OK… well, since I’ve seen many, many opinions on this, perhaps my 2 cents’ worth of wisdom might matter to some people here.

    Well… I’ve been hearing this argument a whole lot lately… people thinking that if GNU/Linux were like Windows and Mac OS… one OS, one desktop, no choice… it would win a bigger piece of the user base because it’s better.

    That, in my opinion and in my opinion only, is completely wrong. If that were the case, we wouldn’t be talking about it right now, simply because users would have already chosen the one desktop and the one distro they love and elevated Linux’s user share.

    Choice is not the problem. And generally speaking GNU/Linux does not have a marketing problem. Marketing has a problem with GNU/Linux.

    There is choice in GNU/Linux because no two desktops or laptops are the same. You’ve got really low-specced machines that run Windows 10 and an antivirus and are slower than molasses going uphill in the winter. They run way better when you install a distro with XFCE or LXDE or even LXQt on them and give them new life… and if you want them to go even faster… you introduce an SSD. It’s not rocket science, people… it’s practical thinking.

    The very same practical thinking that is missing in the Windows world. And the same kind of practical thinking is walled off by the “we-decide-what’s-good-for-your-computing-needs-you-don’t-know-shit-about-it-anyway” ideal that Mac OS and iOS use to approach everyday desktop usage.

    The same choice that this article professes is not good for marketing GNU/Linux as a “product” is what produced NginX (and Lighttpd before it) as a better and faster web server than Apache. The same kind of choice allows GNU/Linux to give new life to machines that Windows makes slower than a crawling snail.

    So people don’t like calling it GNU/Linux? OK, but does anyone bother to find out why the proper name of the entire OS is a combination of two things?

    Or is it just easier to call it Linux and forget that behind the entire OS there was, IS, and always will be a huge community of volunteers and people who are FOR freedom? Freedom to copy it and give it to friends to try out, freedom to use it as many times as you want on as many devices, virtual machines, containers, embedded devices, routers, toasters, or whatever you want. Freedom from EULAs that restrict your usage of your software… even after you’ve paid for it.

    Yes, it might not be as polished and full of eye-candy and 1-click solutions as Windows and Mac OS. But again, in my opinion, that’s what’s screwed up so many users… we’ve spoiled them and did not teach them to look under the hood… just at what’s on the surface. Personally, I like the fact that GNU/Linux is not yet commercialized like Windows and Mac OS. And it also gives me hope that the user base is not growing at a faster pace. If that happens, in my opinion it’s going to hurt rather than benefit the GNU/Linux community.

    I am sorry if this reads like an essay… not my intention… but yes, I do feel strongly about “marketing” GNU/Linux as a product competing with Windows and Mac OS rather than as an alternative that’s different – not the one-size-fits-all kind of thing that Windows is pushed as.

    Hope this makes sense to people.

  38. Thank you for a thought-provoking article; I find that the title misses the point in order to provoke. In reality, Linux doesn’t have a marketing budget comparable with Microsoft’s or Apple’s, both of which are fruits of human ingenuity that have succeeded in making a lot of people very wealthy by taking the computing knowledge of the time and packaging it, ultimately, for the non-computer-literate user. Both MS and Apple found a way to exploit both the world’s economic system and human behaviour.

    For me, Linux is about more than that. Linus Torvalds, when asked if he had regrets about not following the same financial model as MS or Apple, replied (to paraphrase him) “not at all, and by the way I am not by any means short of money”. Linux is about humanity (cue Ubuntu), sharing, and education. The world-wide open source community does an amazing job of developing software to support the many Linux systems available.

    Personally, I became interested in Linux about 20 years ago and finally dropped Windows during the XP era, when it appeared simpler and more secure to run and maintain a Linux system than the commercial alternatives. Today it is, as many commentators note, easier and quicker to set up a mainstream Linux system than Windows; I can’t speak about Apple, as I’ve never owned one of their machines. Today it’s an uphill battle to persuade people away from a commercial OS that they’ve grown up with and that has been cleverly integrated into the business and education world, but let’s not give up; like the climate change issue, with enough optimism the tipping point may be reached.

    Regarding the myriad of Linux distributions available, I suggest that educating (marketing) Linux is a good way forward. Hammer home the information presented above about the strength of Linux in powering the world’s mission-critical computer services, while reminding people that the predominant smartphone OS is in fact a Linux-based system.

    Of course, part of me doesn’t want Linux to become mainstream, because then the hackers will be targeting us and my online life will be less secure.

  39. Agreed on most of the points. The latest hilarious FLOSS schism is init fundamentalists vs. systemd. As if normal users care about init systems at all.

    Apple maintained its unity and is all the better for it, from a user’s point of view. They also have an annual operating budget of $71 billion compared to Canonical’s $6.2 million.

    Or perhaps Apple is the poster child of inefficiency? They’ve achieved only 2.5× Linux’s market share with a budget that’s roughly four orders of magnitude larger (about 11,000× on the figures above).

  40. All Linux needs is apps (snaps, maybe?) that address the needs of the many instead of the needs of a few overly-geeky hobbyists. Want an example? RawTherapee is powerful but very complicated; it contains redundant functions simply because there was no one who could say “No!” to contributors. What is needed in place of RawTherapee? Adobe Lightroom. Adobe won’t port it? Okay; then a couple of developers need to slim down RawTherapee’s user interface to match the features of Lightroom (without the “ten almost-similar functions” complexity) and, most importantly, match the workflow of Lightroom.

    Most photographers will not switch to Linux for this very reason: Photography apps in Linux just plain suck when compared to Lightroom or even Apple’s Photos (as much as I dislike it).

  41. I’m only just now seriously considering using Linux after so many years of being interested and lurking from the outside, and I am learning more about it from reading your comments than from actual conversations with people who genuinely want to help me learn about using Linux. Several of you have inadvertently shared helpful details while arguing about how easily your parents or children have picked it up. From an outsider’s perspective, I can confirm that those who already know Linux assume so much that it is almost impossible for them to bridge the knowledge gap and explain it to those who are just getting started.

    If, from what I understand so far, Linux is the better choice, then we do need more people like Tom Smykowski to bridge that gap.

    1. Out of curiosity, have you identified any similarities and differences so far in your research? In which areas do you think you will have the steepest learning curve when you make the jump to GNU/Linux (or Linux, if you prefer its shortened name)?

  42. It’s really funny that I stumbled on your article while searching:

    “game linux flying penguins reach target”

    (I was searching for a game called “MTP Target”, which, it seems, no longer exists).

    It’s funny because it’s been 15 years now that I’ve been wishing Linux (and FOSS in general) was more widely used.

    More recently (for the past two years) I’ve been working with developers of free software (especially ones revolving around libre currencies) and trying to educate them about marketing, all the while doing actual work.

    A simple thing I’ve noticed in FOSS is:

    Never ever will you see a USP or benefit statements on the project website (I’ve just checked, and it’s true for Mint too, alas).

    Developers always talk about the features of their software, and rarely, if ever, about what it is good for. They don’t seem to understand what a need is.

    I can empathize with that because I, too, have been that geek who did not understand a single thing about how to talk to normal human beings. I’ve been learning marketing these past few years, and I can really see now that I have an edge over developers in UX design and website creation.

    I don’t think we can fix the whole world of FOSS. We can’t really create an army of marketing-savvy FOSS enthusiasts to go talk to each project and tell them “you’d better do it this way, trust me, I know”. I don’t trust people to become aware on their own that they’re doing it wrong.

    Instead, I suggest we create training content for FOSS developers. That was already my second priority for 2021 so… I was searching for a name…

    What about…

    “Pimp My FOSS” ?

    Anyone willing to work on that with me this year (2021)?

    We could start really small, writing a simple tutorial on how to craft a USP, and go on from there.

    What do you say?

    Mail me at:

    PimpMyFOSS@borispaing.fr

    In response to Boris’s comment, here’s one USP (Unique Selling Point, I guess):

    A Linux system is built around a philosophy of cohesion, not competition. Linux system updates handle all system and application files via a unified tool, in a user-friendly manner.
    Microsoft systems have competing update systems: the OS file updates compete for resources with the application file updates. This inflates the hardware resource requirements, leading to update sessions that take a long time and frequently render the computer unusable, after which one is berated with reboot requests.
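    As a rough illustration (the exact tool varies by distribution; these commands are only an example for a Debian-based system), a single pair of commands refreshes the kernel, system libraries, and every installed application together:

    sudo apt update && sudo apt full-upgrade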

    I use Linux because the OS puts me in control, rather than taking control, which is the Microsoft approach.

    I was reminded of that difference recently while rescuing a friend’s laptop by replacing Win10 with Linux Mint Xfce. There are many more USPs, but it’s time to work now.
