Computer Speed Gains Erased By Modern Software

[Julio] has an older computer sitting on a desk, and recorded a quick video showing how quickly it can do seemingly simple things, like open default Windows applications such as the command prompt and Notepad. Compared to his modern laptop, which seems to struggle with even these basic tasks despite its impressive hardware, the antique machine is a speed demon. His video set off a huge debate about why modern personal computers often appear slower than machines of the past.

After going through plenty of plausible scenarios for what is causing the slowdown, [Julio] settles on a nuanced point about abstraction. Plenty of application developers try to minimize development time while maximizing the number of platforms their programs run on, which often means using a compatibility layer that abstracts the software away from the hardware and increases the overhead needed to run it. Things like this are possible thanks to the computing power of modern machines, but not without a cost in latency. Natively developed applications would be expected to have quite good response times, but fewer applications are developed natively now, including some that might seem like they would be. Notepad, for example, is now based on UWP.
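
As a rough illustration of the kind of overhead stacked abstraction can add, here is a minimal Python sketch, purely illustrative and not taken from any real compatibility layer, that times the same trivial operation called directly versus routed through a few pass-through wrappers:

```python
import time

def native_append(buf, value):
    """The 'native' path: do the work directly."""
    buf.append(value)

# Three pass-through wrappers standing in for compatibility/abstraction layers.
def layer3(buf, value): native_append(buf, value)
def layer2(buf, value): layer3(buf, value)
def layer1(buf, value): layer2(buf, value)

def bench(fn, n=1_000_000):
    """Time n calls of fn against a fresh list."""
    buf = []
    start = time.perf_counter()
    for i in range(n):
        fn(buf, i)
    return time.perf_counter() - start

print(f"direct : {bench(native_append):.3f} s")
print(f"layered: {bench(layer1):.3f} s")
```

The absolute numbers are meaningless; the point is that every extra hop costs something, and real compatibility layers do far more work per hop than these empty wrappers do.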

While there are plenty of plausible reasons for these apparent slowdowns, it’s likely a combination of many things; death by a thousand cuts. Desktop applications built on a browser compatibility layer, software companies cutting their own costs by not abiding by best programming practices or by simply leaning on modern computing power, and of course the fact that modern software often needs more hardware resources to run safely and securely than its equivalents from the past.

284 thoughts on “Computer Speed Gains Erased By Modern Software”

  1. I cannot agree more.

    I have an elderly Lenovo laptop that I never turn off because it sometimes takes over an hour to reboot.

    Switching between applications can take tens of seconds.

    Perhaps it is RAM limited or the hard drive is severely fragged.

    But I suspect that it is down to the relentless “improvements” in modern software.

    1. While I do agree that abstraction has a penalty there is also always the “weakest link” in the equation.

      You can get cheap SSDs now for under $100. Then re-paste the CPU and probably buy a fresh battery that’s compatible with the laptop. After all of that, your one-hour boot time probably goes down significantly.

      Also a re-install or running the several suites of disk clean/maintenance tools can always help.

      1. Maybe instead of running maintenance tools or reinstalling Windows you could get a proper operating system which doesn’t require this. I’ve been running and upgrading the same Ubuntu install for almost 10 years; in that period I completely changed the hardware platform twice, and twice swapped the SSD for a bigger one, moving my OS over with a sector copy of the old drive. Never needed a so-called ‘fresh install’.
        This could also spare you from re-doing the CPU paste, because in a world without telemetry, when you do nothing your computer does nothing, so the CPU is not subject to constant frying at 100% usage.

          1. He’s not wrong, though. I’ve seen multiple instances of Windows installs slowly grinding themselves to a halt over time, requiring a reinstall from scratch to restore usability. I’ve never seen that behavior on a Linux box (or a MacOS/X box either, for that matter).

          2. @Jeremy I haven’t used Linux (as a desktop os) in about 4 years, but it always felt like if I didn’t just install each big version jump as a fresh install then I would get all sorts of messed up packages and driver issues etc.

            Since I had /home partitioned it was relatively painless but I haven’t had that experience with Windows.

            I’m not saying either of our anecdotes speak to the whole but I certainly have had Linux issues.

          3. Everyone would use Linux if it wasn’t so complicated to install even simple software. Also Linux does not have that fresh feeling that Windows has. I feel bored and somewhat confused when using Linux. Don’t feel that way with windows.

          4. Between the suggestions of “try a different brand of OS” and “occasionally re-paste your laptop’s CPU and swap SSD/battery”, it’s the former you find absurd/offensive/stupid?

          5. @Jeremy
            Linux has the exact same problems as Windows: bloat, dependency hell, slow apps, installations that corrupt the system, driver issues. I’ve had more issues with Linux than Windows. But both need periodic reinstalls and need some tweaks.

          6. Actually, one of the big problems that slows down Windows is the Registry; the other is legacy DLLs. The problem is that the Registry keeps growing and has a large number of abandoned entries. In addition, there is an ever-growing number of legacy DLLs that get loaded at boot. Fragmentation used to be an issue, but I think they finally fixed NTFS; it could also just be that SSDs made the problem invisible. Back in XP days, I worked for a company that wrote a Windows optimization tool, so I’m not just inventing this stuff.
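
            For the curious, a minimal read-only sketch using Python’s standard-library winreg module can tally subkeys and values under a few registry locations that tend to accumulate entries over time. The specific paths below are only examples, not a definitive list of what slows Windows down:

            ```python
            # Windows-only, read-only: rough tally of registry entries in a few
            # areas that tend to accumulate stale data. Paths are examples only.
            import winreg

            PATHS = [
                (winreg.HKEY_LOCAL_MACHINE,
                 r"SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall"),
                (winreg.HKEY_LOCAL_MACHINE,
                 r"SOFTWARE\Microsoft\Windows\CurrentVersion\SharedDLLs"),
                (winreg.HKEY_CURRENT_USER, r"SOFTWARE"),
            ]

            def count(hive, path):
                """Return (subkeys, values) directly under one registry key."""
                with winreg.OpenKey(hive, path) as key:
                    n_subkeys, n_values, _last_write = winreg.QueryInfoKey(key)
                    return n_subkeys, n_values

            for hive, path in PATHS:
                try:
                    subkeys, values = count(hive, path)
                    print(f"{path}: {subkeys} subkeys, {values} values")
                except OSError as err:
                    print(f"{path}: not readable ({err})")
            ```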

          7. I’ve been a relatively hardcore Linux programmer for 23 years… and I use Windows on my laptop. Mostly for simplicity, but mainly because I hate OpenOffice, Gimp and half-baked, poorly maintained software that claims to be “as good”. Period.

            Btw..Whoever still claims “LaTex is cool” can comb his pubes

            I wouldn’t run a server in windows. But let’s be honest : laptops/desktops are just a hassle to run in Linux. All you need is Putty, Notepad++, Office and WinSCP. Maybe even VSCode if you like colors.

          8. @keredson
            Reinstalling the OS is sloppy design of said OS.
            Cleaning your hardware, a physical object that good old time will corrode and that can’t fix itself, is maintenance.

          9. Use linux, you’ll be amazed how fast the system will perform compared to windows. The os is sluggish and very inefficient, always has been like that, nothing that microsoft ever released has been anything but horribly mediocre. But that is a well known fact.

          10. I use both Windows and Linux daily. Windows is very slow to start. However I can’t compare them fairly because all my linuxes have SSDs, either as only storage, or just for non-/home, and company windows machine is HDD only. Linux is a rolling release and I update daily. I rarely have problems with it, occasionally there is some dependency clash, but it is usually an update timing problem which resolves itself the same day.
            But for me it was never about performance anyway. Freedom just feels right for my values.

        1. You don’t need to run maintenance tools or reinstall Windows either, so let’s stop pretending that’s a thing except for lazy/dumb IT people or for the wannabees who comment and don’t have a clue.
          I’ve reinstalled Windows exactly 2x in the last 20 years to “fix” an issue with it. Once was simply because it took less time than the actual fix would take and I didn’t want to waste the client’s money.
          Another time was because I was under a deadline and needed the machine back ASAP. I took a drive image and eventually figured out the actual fix because I try not to be a hack professional.
          But Windows certainly doesn’t “require” reinstalls, if anything it’s gotten more resilient over time. You can upgrade a Win7 PC these days directly to Win10/11 without losing any data or breaking a sweat. Modern Windows barely cares what hardware it’s on or what the previous MS OS was. If you want to gripe about something in Windows, there is plenty to choose from but this area isn’t one of them.

          1. You are right. Even Windows does not need regular reinstallation, except when there is a large new update or a new version (irritating).
            Still, I worked on the helpdesk of a large company for over 20 years, and there were times when a reinstallation was the only way.

          2. It does require reinstalls. Case in point is my upgrade from a 9900K to a 13600K. The old install worked, mind you, and a normal user wouldn’t notice much difference, but there was something off… I kept seeing 110 W at idle and games were still a bit microstuttery at times. Process Lasso helped with the stutter but power consumption was still higher than I’d like… and yes, there was no malware; I have a comprehensive security suite. Reinstalling Windows made all the jitters go away and now my PC is idling at 60-70 W. So yes, reinstalling Windows is kinda needed at times… especially when there are architecture shifts in your hardware.

          3. I had a Windows 95a install that I kept through multiple hardware upgrades and (IIRC) five complete system changes. There was a specific procedure involving putting both drives into another PC or booting from a live OS CD-R. Too long to detail here.

            It died coincidentally with my switching of the critical stop sound. No idea what glitched at that precise moment. What made it really bad was I used a very authentic sounding breaking glass noise.

            What it would do is, in Windows Explorer, I could make exactly three clicks on things with the left mouse button and it’d hard crash with the breaking glass sound. Made it impossible to move or copy or do much of anything in Explorer. IIRC it would also crash if I clicked three icons on the Desktop.

            Fortunately I was able to use a web browser without Windows crashing. I found the last, unfinished, release of Total Commander for Windows and was able to use it to copy off everything I wanted to save so I could wipe the drive and do a fresh install. The experience made me really miss XTREE. (Sit me in front of a DOS box with XTREE now and I’m certain I’d have no idea how to run it.)

            What was the problem? ONE corrupted character in one Registry key. A DOS-mode Regedit registry export and re-import would choke on the corrupted character, which must have given me some idea of where it was stopping so I could find it in GUI Regedit.

            I tried every Registry fixer and cleaner and method of forcibly deleting the bad key but Windows protected it like a mother wolf guarding a dead pup.

            Windows 9x was also good about protecting corrupted files and folders from being deleted. Unlocker was a very welcome tool that could pry Windows hands off those and nuke them. Before Unlocker (which hasn’t had an update in forever but still works on all Windows from 95 through 10) the only way to get rid of those was wiping the drive. Couldn’t get rid of them with any DOS mode tools. Totally the opposite of how Windows should have acted. The response should have been “This file/folder is corrupted and completely unreadable and unrecoverable. Would you like to delete it?”

          4. This seems to have descended into a flame war, but I am just curious, not having been an IT guy since about 2007 (and lazy/dumb being a fair description of my approach at the time): what do you do about the hard drive slowly filling up over time from Windows updates, orphaned install files, etc.? Where do the slowdowns that Windows is known for come from generally, and how do you fix them?

          5. Seph – We are talking about Windows reinstalls to fix a problem, not because you changed the underlying hardware and just expected it to work. Does it often work just fine? Sure. Should you expect your OS install to be portable across disparate hardware or major upgrades – probably not.
            And your situation should have been fixable with some benchmarking tools and a look into system processes (among other things). But yes, sometimes a reinstall is the way to go if you don’t need the data, don’t mind not knowing the underlying issue (this is my annoyance point), and don’t have a lot of time.

          6. Yep… I spent hours trying to fix it… When instead I could have had it all back up and running within 30-40 minutes (with all my softwares and games).

            Sometimes it really is just more time efficient to reformat and reinstall rather than waste hours.

          7. Steve – Windows has built-in disk cleanup tools you can call to deal with files that are no longer needed at both the user and system levels, which includes update files, optimization files, etc. In addition you can use tools like TreeSize Free, SpaceMonger, WizTree, etc. to find non-Windows files that are taking up space, where you can personally decide whether they are necessary or not.
            A drive that’s filling up shouldn’t affect performance unless the drive is nearly COMPLETELY full, in which case it’s time for a larger one unless you don’t need some of that data, and at this point everyone should be using an SSD of some sort so it’s not like data being stored on the inside tracks as opposed to the outside ones is really a concern anymore or affecting the performance of the underlying hardware.
            Windows has been around long enough and is popular enough that, between the built-in diagnostic tools and some trusted popular third-party ones, it’s pretty easy to diagnose issues without resorting to a fresh install. Malware attacks are different, obviously, as it is hard to trust any OS install after an invasive malware attack, especially for sensitive data.
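
            The core of what those disk-usage tools do is simple enough to sketch in a few lines of Python; this is only a rough stand-in for TreeSize/WizTree, not how those programs are actually implemented:

            ```python
            # Minimal stand-in for disk-usage tools: sum file sizes per top-level
            # directory under a root and print the largest consumers.
            import os
            import sys
            from collections import defaultdict

            def usage_by_top_dir(root):
                totals = defaultdict(int)
                for dirpath, _dirnames, filenames in os.walk(root, onerror=lambda e: None):
                    rel = os.path.relpath(dirpath, root)
                    top = rel.split(os.sep)[0] if rel != "." else "."
                    for name in filenames:
                        try:
                            totals[top] += os.path.getsize(os.path.join(dirpath, name))
                        except OSError:
                            pass  # permission errors, broken links, files in flight
                return totals

            if __name__ == "__main__":
                root = sys.argv[1] if len(sys.argv) > 1 else "."
                ranked = sorted(usage_by_top_dir(root).items(), key=lambda kv: kv[1], reverse=True)
                for top, size in ranked[:15]:
                    print(f"{size / 1e9:8.2f} GB  {top}")
            ```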

          8. Seph – 100% agree with you that a reinstall can be more time efficient. I just hate doing it because then I’ll likely never know the underlying issue and/or be able to help someone else in the future. For those of us not in the biz (or not as stubborn), it probably doesn’t matter though.
            Win10 is exceptionally good (normally) at moving to disparate hardware. I’ve imaged older AMD machines to modern Intel rigs just to see how bad it would be as a test, and for the most part it works just fine even when going from an ancient SATA HDD installation on AMD hardware to a newer NVMe rig with an Intel CPU. Certainly not best practice, but considering how inflexible older Windows OSes were with stuff like this… pretty neat!

          9. Oh yeah that’s amazing how it just reconfigures itself now. I remember having to reinstall windows when upgrading processors back in Pentium 4 days 🤔.

        2. Windows 11 was my call to make the move to Linux. I run older quad core rigs with SSDs but this was not good enough for the new MS OS. KDE Neon runs beautifully, looks a lot like Windows, and is easy to maintain. Never looked back.

      2. I’ll fix that: “you can now get cheap 2TB SSD for under $80.”

        I will sit here and tell him with a straight face that an E5 1650v2 is not comparable at all to modern hardware. It’s great, I’ve built a half dozen of them. But Skylake is a lot quicker and an Alder lake is another ball game. i3 12100T is going to whup your 1650v2 into next week.

        I am serious. Also please tell me you are running an NVMe drive on the E5 platform.

        I also want to ask him did he run ‘bloatynosy’ or ‘thisiswin11’?

      I’ve tried 10 LTSC and it’s wonderful. (I ran 2000 for years, and the 2000 theme on Win7 for many years after that even; it’s great.)

      1. You’re totally missing the point. SSDs are awesome, yes, but software should NOT expect them to be standard. SSDs are not supposed to compensate for bad performing OSes and applications, get it? 😃👍

          1. ‘If you have nothing meaningful to say, please do the world a favor and keep quiet. Thanks for your understanding.’

            Joshua snarks without a hint of self awareness.

            Should modern OS’s also work well with drum memory? Eight inch floppies? 4k of RAM? Punch card readers?

          2. Nah, when it comes to storing large swaths of data HDDs are still king, unless SSDs get significantly cheaper. For $124.99 I can get a 7200 rpm 6 TB WD Black w/ 256 MB of cache, and a 2 TB Silicon Power A55 is $64.97; so for roughly $130 I can get either a 6 TB HDD that performs okay, or a similarly performing 4 TB of SSD that takes up two SATA ports. I think it’s a no-brainer which one should choose, seeing as not everyone can shell out hundreds upon hundreds for a decently sized SSD.

          3. I think the main point is that newer versions of the same software now run slower on SSDs than their older versions did on HDDs. Software should take advantage of SSDs, but not to the point where they use SSDs as RAM and actually run slower than before.

          4. @Arlyst
            >People like the person complaining basically want to halt progress because they feel their old systems should be working forever..

            Well, the old system works, for all intents and purposes and occasional maintenance notwithstanding, forever.
            My fiancée and I have a friend, an older doctor, who still uses an old Win95(!) machine to do all his personal office work. Why change what’s not broken and does the job for someone?

            I sometimes use old hardware on purpose, to focus, to slow down, to write something and not be distracted. And I am always amazed how blazingly fast stuff just works. Yes, I/O is slower on these machines; spinning rust at five megabytes per second is slower than even a cheap SSD at 150, but with program sizes in the kilobytes rather than the hundreds of megabytes, that benefit is eaten up fast.

        1. Then the whole world bowed down and changed their ways because you said so; software companies relentlessly worked to optimize already working code, foregoing the market demand for new features, because you deemed it more important than the revealed preferences of everyone else. What an arrogant comment, of course we want optimized software, but don’t chastise everyone else for dealing in the real world rather than your dream.

          1. Well, speed is also a feature.

            And in my experience, most software has not added any features the last 5-15 years.

            There are exceptions, but those programs are also pretty fast so kind of a moot point.

            And it’s fully possible to make fast software without “working relentlessly to optimize”, in fact it’s the same as for hardware, you just remove the biggest bottlenecks.

            And if you remove a bottleneck in software, it runs faster for millions of people, compared to just you for hardware.

            Thinking about it, the business case for spending some time optimizing software would probably pay for itself in a great way (for the world economy).
            And it might just be justifiable for big companies also using their own software, such as Microsoft. A second saved is a penny not spent, multiplied by the number of users, multiplied by uses per day, multiplied by 365 days a year.

            That’s a lot of pennies.
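
            Putting rough, purely assumed numbers on that argument (none of these figures are measured; they only show the shape of the multiplication):

            ```python
            # Back-of-the-envelope version of "a second saved is a penny not spent".
            # Every input below is an illustrative assumption, not a measured figure.
            seconds_saved_per_use = 1
            uses_per_day          = 20
            users                 = 100_000_000
            days_per_year         = 365
            value_per_hour_usd    = 30   # assumed value of an hour of a user's time

            hours_saved = seconds_saved_per_use * uses_per_day * users * days_per_year / 3600
            print(f"hours saved per year : {hours_saved:,.0f}")
            print(f"nominal value        : ${hours_saved * value_per_hour_usd:,.0f}")
            ```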

          2. I also say so. There is no good reason why a newer version of an OS should be slower than the previous, it’s not as if Windows has become a 3D VR environment to need gaming hardware to run.

          3. We have energystar ratings for consumer computer hardware.
            Perhaps it’s about time we had the same type of rating for consumer software and operating systems?

            What’s the point of having energy efficient hardware when the software squanders it all?

          4. Peter, I can assure you, based on personal and current experience, MOST microsoft employees are
            1. Wasting most of their time at work, Microsoft contracts all the real work out
            and ALL are
            2. Using linux for critical work
            3. Using software slower than anything Microsoft provides customers
            There is no benefit to microsoft to invest resources into this kind of thing for employees.

            Further there is no business case for anything absent a monetization scheme; corporations, schools, governments, professionals etc. that actually buy licenses for microsoft office products are not going to switch away regardless, and anyone outside that bubble that can avoid wasting money on office by installing a free alternative, will. You may not call what microsoft adds “a feature”, and most of it is not anything users want, but someone in HR got their promotion getting their company to buy the new cloud feature despite everyone still using slack.

            You can say, “hey, you are agreeing with me, this is all stupid and microsoft should fix it”, that is you trying to fight the tide, microsoft has no incentive to change and that is exactly the one thing that matters. Find a better strategy, buy used midrange hardware more regularly to keep up, get so used to reformatting your drive you keep a thumb drive on a necklace, use FOSS, or don’t and be forever confused that the world doesn’t work in a way you find rational.

          5. You can also only optimize so much before you run into a wall. A HDD no matter what you do is going to be slow unless what you are loading uses a very small truncated file structure with data sizes optimized for an HDD.

            Complexities of a modern software end up causing file sizes, amount of files, and their structure to be impossible to optimize well for an HDD no matter how many hoops developers jump through

            People like the person complaining basically want to halt progress because they feel their old systems should be working forever..

            Maybe we should optimize for tape drives while we are at it too… Then we’ve got the bozo who thinks basic computer maintenance things that everyone should be doing are somehow a reason to switch to a different operating system, which is just laughable, because that maintenance still needs to be done when it’s something you need to do to the computer physically.

            I’d wager money that said person rarely blows out their laptop fans, and is likely running a system where they should probably have been replaced by now; if they aren’t willing to check the battery and re-do the thermal paste, that computer is going to die spectacularly and it’ll be their own damn fault.

            Heck, they could take it somewhere to have this stuff done for them too. Ultimately, though, people need to realize that things absolutely do change with time, and things get more complex on the software and operating system front as we go forward. I mean, we went from machines that were basically glorified typewriters that simply input data to be analyzed by huge room-sized computers that basically spit out results on paper, to more and more complex systems, all of which have required leaving certain things in the past.

            Also to the person saying you need gaming hardware to run a modern system this is wildly untrue. It just so happens that you need to run hardware for modern systems that drivers are actually made for and you can’t sit there acting like a hardware company should support something indefinitely when the hardware becomes obsolete.

            You can still get a good performing device with built-in graphics that is best for office work, but it just so happens that advances in hardware have brought those up to the low tier of gaming.

          6. The point is that the increase in hardware requirements has nothing to do with more features and an improved user experience. It is from software development practices like abstraction and generic libraries which reduce the time and skill required by the coder at the expense of hardware efficiency.

        2. OSes from the spinning-rust era naturally got optimized by their developers for the awkward, complicated timing behaviour hard drives imposed, often at the cost of resilience (W2K-era systems were not exactly forgiving about repeated sudden power outages or forced reboots) and features.

          You can get quite some performance from spinning hard drives by reordering requests over significant time spans and aggressive write-back caching. Advanced hardware RAID controllers do these things, and they can achieve near-SSD performance with reasonably large arrays, but woe betide you if you tweaked these screws up to eleven and neglected to invest in/maintain power failure protections (UPS and BBU/FBWC). Legacy OSes will have done similar things, on a slightly more conservative level.

          (Note: journaling file systems will not save you; they will make the mess far worse once you are reordering writes in cache!)

          These kinds of optimizations will just add unneeded complexity and failure potential in an SSD era.

          1. Journaling did in fact prevent these problems for most users most of the time. Large RAM caches and massively parallel writes exacerbate it, but both are features of modern Windows, not legacy systems.

            For most users, the fact that older systems were designed to assume IOPS were limited results in much faster responses on modest or old hardware… for excruciatingly obvious reasons. Spinning rust is going to be around a long time, and designing an OS that is crippled at sub-100 MB/s read/write and sub-100 IOPS is just bad.

          2. @S.O. ext3 might have some resilience, OG NTFS or Reiser tended to fail REALLY bad with out of order journal writes.

            Throughput is less important than one would think – an SSD at a reliable 50MB/s (eg iscsi target) will make W10/11 reasonably usable. Latency/IOPS, yes, that is what matters.

          3. However, I had two bad Samsung EVO 2TB SSDs in a row. Each lasted only 7 months before S.M.A.R.T. warnings and Samsung replacement took 2 months. Eventually Amazon just refunded me.

          4. Hi, reading some of your comments has made me laugh. It has nothing to do with anything HDD related; you can reinstall Windows as much as you like, but it’s the CPU that is the bottleneck. The only way to keep the computer running fast is to do some coding of the software built into the CPU. OK, it might be hard for some of you, but just read the manual on the CPU. I have been coding CPUs for a long time and have had success in unlocking features that were not meant to be unlocked, with no problems. Hope this helps clear some of the bad air.

        3. There’s only so much optimization you can do for hardware that can do a handful of random seeks per second.

          And chances are the HDD is pretty old anyway, no reason not to replace this with a SATA SSD.

        4. So let’s break some things down here. Software does not expect them to be standard, but almost all modern operating systems have some form of live-scanning antivirus and a lot of indexing going on, which for a standard HDD takes up a massive amount of its seeking and slows down the entire system.

          You can see this by simply running the operating system off an SSD of any sort and then installing programs on the HDD; you won’t have as many problems. Or, heck, bring up Task Manager with nothing at all actually open and things sitting idle, and you will see quite high disk usage if your HDD is your boot drive…

          The whole thing here is largely related to how modern operating systems operate for security and indexing and the way a hard drive reads and writes. You also run into fragmentation on a HDD which can mean different parts of a program are scattered around and not in efficient places for the HDD to seek which also makes things take longer.

          As you see, there are a ton of things that simply cause an HDD to be slow that are outside of software: they relate to security, indexing, how things fragment or get placed on the HDD, and simply drives never being able to maintain their top speed at all times as a result of other factors. Even the highest-end HDD suffers from this (and a lot of laptops use slower HDDs).

          Transfer speeds on an HDD typically used in older laptops are going to peak around 75-100 MB/s if you are lucky (and that’s with it completely empty and fresh). Heck, even age can slow them down quite a bit after a while. In the real world these speeds are going to be lower, and then these other factors come to light.

          The operating systems are not bad performing. You’d see many of the same problems if you loaded a live scan antivirus software onto windows XP or did better indexing/superfetch on the OS.

          You can’t keep technologies moving forward by clinging to dated technologies just because you think you have a good opinion when your opinion is in fact just whinging about advancements.

          Software and operating systems are more complex, more secure, and have done a lot of things to try and ease user interactions with indexing and SuperFetch. Programs can have a lot of large files that an HDD needs to find, or a lot more small files, both of which an HDD simply isn’t great for, because to put it bluntly HDDs suck for anything that wasn’t from the Windows XP and very early Vista days.

          Software complexity, file structure, file number, and file size alone preclude an HDD with an OS on it from operating anywhere near peak efficiency and even without an OS on there they aren’t going to perform well for those tasks simply because they have a limiting factor due to how they operate.

          TLDR : HDDs are limited because of how they operate which causes slowdowns due to a multitude of factors many of which are not related to software, but the things that are software related are unavoidable if you want security and to have many of the modern things we have

          1. If you buy a car based on the paint color and number of cup holders stop reading this comment. What slows down the OS is all the un-needed cr@pware and spyware/malware/usertracking built in and installed. Do you really want/need your computer constantly connecting to google/micro$oft/apple/amazon/facebook? Even when you don’t have a facebook account? Pretty and time wasting animations and sounds for opening and closing windows? Software updates for the sake of software updates is a great way to break things that are working and slow things down with useless and un-warranted overhead. Most software and OS “features” seem to lean more towards being malware and revenue generating for the perpetrators than benefiting end users.

      2. Old SSD’s aren’t necessarily the best in the world, & in some cases a cheap HDD may be faster than a cheap SSD, especially if the SSD doesn’t have a Dram Cache.

        1. I was with you on storage per dollar meaning that HDDs are still relevant for large storage needs, but they are absolutely terrible in comparison to SSDs for responsiveness.
          Even a very budget SSD like Micro Center’s in-house Inland brand (they give them out for free with digital coupon fairly often) will beat a decent HDD in terms of latency. Often, SSDs are orders of magnitude faster in iops, which makes a huge difference in how people perceive the responsiveness of their system.
          Having said that, a combination of solid state system drive and HDD (or perhaps multiple HDDs) can definitely make sense if you need a responsive system and best bang-for-your-buck storage in one machine.

        2. Latency is what matters for an OS drive, and most any flash storage better than a bargain bin SD card or USB stick will beat even a 15kRPM SAS disk into the ground on that. A 100MByte/s SSD will make a huge difference with W10/W11 compared to a 100MByte/s HDD.

          1. Absolutely. Even a cheap SSD in an old machine makes it feel like a new PC for users. An SSD is essentially a requirement for Win10/Win11. I always strongly recommended that anyone upgrading to Win10 also clone their install to an SSD.
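
            A quick way to convince yourself that latency, not sequential throughput, is what you actually feel: a rough Python probe that times many small reads at random offsets in an existing large file. OS caching will flatter repeat runs, and this is no substitute for a proper tool like fio; you point it at whatever large file sits on the drive you want to test.

            ```python
            # Rough random-read latency probe: times many small reads at random
            # offsets in an existing large file on the drive under test.
            import os
            import random
            import sys
            import time

            def probe(path, reads=2000, block=4096):
                size = os.path.getsize(path)
                flags = os.O_RDONLY | getattr(os, "O_BINARY", 0)  # O_BINARY only exists on Windows
                fd = os.open(path, flags)
                try:
                    start = time.perf_counter()
                    for _ in range(reads):
                        os.lseek(fd, random.randrange(0, max(1, size - block)), os.SEEK_SET)
                        os.read(fd, block)
                    elapsed = time.perf_counter() - start
                finally:
                    os.close(fd)
                print(f"{reads} x {block} B random reads: "
                      f"{elapsed * 1000 / reads:.3f} ms avg, ~{reads / elapsed:.0f} reads/s")

            if __name__ == "__main__":
                probe(sys.argv[1])  # pass a multi-gigabyte file on the drive being tested
            ```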

    2. First thing I would do is turn off that laptop, remove the HDD and image it. With the image safely stored elsewhere, I would then copy the image to a SSD and see if the elimination of random reads from the HDD solves the issue. Often I have found that modern OSs tend to not like older HDDs and are often written with the expectation that they will be installed on an SSD. HDDs are the most common bottleneck in situations like this. A cheap SSD can often make an older machine feel like new.

      Second, and I think this is a good thing for every PC user to do regularly, is to wipe the machine and do a clean reinstall. Assuming it’s performing better afterwards, which I suspect it will be, then you were probably dealing with OS bloat (pretty common with Windows). I make a point to wipe my C: drive and do a clean install on my primary machine every 6 months or so. It saves a lot of effort by not having to deal with most issues that crop up over time.

      This thing where people ignore basic maintenance and other quality of life procedures when dealing with their personal machines is baffling to me. I have machines that are 15+ years old that run perfectly fine with some TLC and well managed expectations.

      1. Jack – this is 2023.

        We should not have to go out of our way to do basic remedial maintenance on computers.

        My Huawei cell phone boots from absolute cold start in 30 seconds. It finds Wifi and 4G almost immediately.

        Why are we left with this legacy of laptops and PCs that need human intervention and hardware upgrades to keep them running smoothly?

        Someone writing OSes for PCs needs a kick up the ass – a rude introduction to the 2020s.

        1. We are left with legacy hardware that needs upgrades and intervention because it no longer occupies the part of the hardware spectrum that developers are targeting. Seems pretty obvious to me.

          30s to boot seems slow to me. My PC boots in about 20s, so I guess I win?

          It sounds to me that the real problem you’re having is you don’t want to put in the effort it takes to maintain legacy hardware. What you really want is for devs to cater to your specific needs and write OSes that work perfectly on old hardware. It’s not going to happen. Be prepared to do some work.

          1. No, it’s not like this.
            Of course an SSD is much faster and recommended, but HDDs used not to be so slow: meaning the current configurations (software & hardware) render them almost unusable as OS drives. And this is mainly Windows, BIOS, drivers, and other low-level software to blame.

            Let’s get one thing right: in the ’80s/’90s hardware was not enough, so programmers did incredible, almost unbelievable things with their software. It was super efficient and super conservative. Now, maaaany developers program like crap. Windows is a disaster, and you don’t need to see the code to know it. Software lacks design. If a decent analyst/programmer had a quick look at the DBs of one of these “marvelous” Content Management Systems (for example Drupal), he/she would want to cry.

            Before, one person made a full program or game. Now you have 1000 hired workers to do it, as they build one layer on top of another. This software, even if carefully developed, will be bloated with functionality that may never be used. Microsoft recently removed features from File Explorer in Win 11 that you can revert through the registry: meaning not only did they not really remove those features, they also added more bloat to let you choose whether to see them or not. Pointless, brainless.

          1. #facts…I like to read before I comment…you’d b surprise how many lamen are here misleading the hell outta of ppl with nonsense….I blame Apple for ppl comparing their phone to a PC…because they have reduced consumers to ninnies that overpay for the same device…. Iphone, ipad, ipad pro etc…(Ipad was decisively given ipadOs to seem different) n they would never give the mac’s touchscreen because u would see the sham. (Their PC/MacBooks weakest link in their company for this reason and not paid attention to). Point is Windows is fragmented because hardware is not standardized n constantly changing…yet they are innovating…there will b bugs n bloat…they are reacting and anticipating…while trying to lead. Linux is reactive and MacOs is uniform …you’re comparing… homeschooled, parochial n public school students 🤷🏾‍♂️

          2. Jay Gatsby – it’s “laymen” firstly.
            Secondly, if you think no one is paying attention to MacBook offerings, you must have been asleep for the last few years. The M-series chips make MacBooks serious performers and anyone who wants powerful hardware coupled to a reliable OS is taking a serious look at MacBooks.
            We are a Windows house and we still bought a family member a MacBook last holiday season because it was the most performant option available at the time. MacBooks are seriously capable with the newer M chips, but especially for content creation vs the previous Apple offerings.

        2. Your phone uses solid state storage and boots a much lighter weight/less capable OS that is modified and compiled for that specific hardware configuration. It doesn’t have to be flexible to different CPUs, GPUs, audio/IO/etc chipsets, storage controllers, etc, and it doesn’t have to retain backwards compatibility for 30 years of apps before it.

          Your laptop likely has a failing HDD that’s causing the majority of its issues. If it could run 7 then it can almost certainly manage 10, and a memory upgrade will handle the extra “weight” of modern apps.

      2. Your (admittedly justifiable) advice to periodically reboot a computer you tend to leave on, or to wipe and reinstall your computer’s OS to restore performance, is an outcome of choosing Windows as your OS. Amazingly, Microsoft has successfully convinced its user base that this kind of crummy behavior is to be expected. Well, it’s not.

        I have used Debian-derived Linux distributions for 20+ years and never have problems like this. I have systems that have run 24/7 for a year or two at a time, with no need to reboot or rebuild.

        1. Windows will happily run without needing a reinstall and will basically run forever unless it needs to install updates so there’s very little difference.
          I have a friend who refused to run updates and never turned off his Win7 installation – he had uptimes well beyond a year at least 3x.
          Windows Server has no problem with long uptimes either, at which point usually a system update will cause the uptime count to be interrupted.
          So while people like to harp on this about Windows, it’s largely untrue except for the fact that Windows tends to favor reboots after updates more than some Linux distros might.

          1. I run both *nix and Windows as desktops and servers and I rarely run into show-stopper bugs in either.

            It really boggles my mind how many “IT Pros” cannot handle basic maintenance to whatever OS they claim to be an expert in.

            I have a Windows 3.11 install on an Am386DX-40 that hasn’t been reinstalled since… oh… 1994? It survived my childhood, my grandad abusing it, etc. I just finished upgrading the memory and replacing the RTC in the system and plan on more mods. But guess what? Same OS install. A heavily modified Windows 3.11 with CalmiraXP running, DOS upgraded to 6.22, win32s, etc.

            I have a Windows XP install that I had upgraded to from Windows 98se (along with a fat32 to ntfs conversion) that has been migrated across probably 10 different machines including between AMD and Intel chipsets. Still runs great.

            I have migrated Debian (my usual Linux choice) across hardware many times and have Debian server installs that are well over 15 years old that have been also upgraded from whatever original install base to whatever was/is current. And this includes switching from 32bit to 64bit.

            Same with macOS (OSX). Ancient installs that have been heavily abused and upgraded a ton.

            /occasionally/ I have run into bad enough corruption issues to warrant just going ahead and reinstalling or reimaging just to save time (across Linux/BeOS/macOS/Windows/etc). But this idea that you should just randomly wipe and reload every couple years is absurd.

            It really shows a lack of knowledge, in both breadth and depth, that people have WRT their operating systems.

            I will admit to the backup/imaging > wiping and reinstalling > reloading/restoring method for /clients/ who were in a time crunch. But.. it shouldn’t be the go to.

            I think this mindset has also poisoned the knowledge well for a lot of IT people and the moment you are operating outside a narrow view of “the documentation says X should happen but G is happening!” They fall apart and just nuke it. I see it a lot and it drives me insane. Especially when the fix is relatively quick and simple often.

            /rant

          2. Helf – could not agree more with you and FWIW, I appreciated the stroll down memory lane.
            Nuke and reinstall has its place, but I feel that it should be reserved for time crunch situations, spots where it’s the cheaper option for the client, or times when you have a nasty malware attack and need to be relatively sure you can trust the system again.
            Moving 3.x installs and 98/XP installs is honestly pretty damned impressive. Older Windows OSes were generally more resistant to migrating to new hardware in my limited experience (with the case of 3.x anyway, I was pretty young) and I had honestly forgotten about the FAT32 to NTFS conversions, but you’re absolutely right. Did many of those back in the day!
            It’s honestly impressive how little modern Windows cares what hardware you are running it on and how quickly it can go and grab (mostly) appropriate hardware drivers without user intervention.
            Definitely progress in that area.

            But yes, I greatly dislike when people feel the need to rag on Windows in order to make their OS of choice sound better. Each OS has its own particular merits and may be better suited for the person or their specific situation, but that doesn’t mean we need to crap on all the other OSes that exist. For the most part, modern operating systems are extremely reliable and can run for a long time uninterrupted if you do a reasonable amount of caretaking and don’t do anything incredibly stupid.
            And if nothing else, reloading the OS should not be a regular or “maintenance” event.

      3. Caveat when imaging a drive wholesale: unlike legacy (MBR) style partition tables, GPT technically needs some fixing after copying to a larger medium (the backup partition table will be in the wrong spot; it can be fixed with the gdisk utility on Linux).

    3. Not sure why I can’t seem to comment, only reply. Anyway, my example is that I rely on Word 2003 for production purposes, because later versions of Office are slow as mud even with things like recognizing a right click. I create and edit massive amounts of text on a daily basis, so every 5 milliseconds makes a lot of difference to me. Should Word 2003 not run anymore on a future Windows version, I hope to be able to run it using WINE. Heck, WINE might even run on WSL by then.

        1. I’ve noticed the same on my Threadripper QEMU/KVM passthrough setup. While it’s “native”, it’s still under a hypervisor, and it’s markedly faster than running Windows on bare metal. I chalk it up to better memory/NUMA management, but it’s still surprising.

          Same with my steam/proton chroot. Windows sucks.

      1. I wouldn’t give Microsoft a pass for this. But a lot of software is very large, so large that no single developer understands the entire codebase. Microsoft in particular maintains so much backwards compatibility that software written for Windows 30 years ago will run on Windows today without a recompile.

        This means more and more on top of the old. It happens with Linux too, but in that space things get replaced in incompatible ways more often.

        Not to mention that 16-256 colors in 800×600 is a lot less overhead than true color with transparent compositing at 4k+ … And that’s just rendering. It doesn’t include security isolation, monitoring, memory clearing, etc that modern systems do for security.

        Windows 9x was a giant security hole where any process could access anything everywhere.

        Hell, it gets so complex that Rust was created as a new approach just to avoid whole classes of bugs, with correctness and safety by default. Similar for Go and other patterns since then, some more or less effective than others.

        We now simulate or create entire isolated environments for safety. It’s a huge deal. This is where most of the performance has gone. A lot of it is abstraction, sure, but just as much is for improved security.

      2. I don’t have direct experience but instead of WINE, have you considered using “Bottles”? It’s supposed to be able to run more Windows apps than WINE… or so I’ve read.

    4. Uhh.. You really just need to reinstall your OS on that thing. Under no circumstances is any of that acceptable. It has nothing to do with bloat and everything to do with the state of the install. Seriously…

      1. Even that likely isn’t necessary. Removing unused services and applications, removing old security software and enabling Defender, a proper defrag, etc. Though the HDD itself could be dying.

        An SSD and 8+GB RAM is really necessary for most modern OS options to run well.

    5. Booting a computer now and then cleans some garbage from memory. Cleaners like EasyCleaner and CCleaner give a huge boost to Windows. The Registry database of Windows is the main reason for slowdown.
      Another thing is that most Win programs are built with libraries compiled into the executable rather than shared with other programs. This eats RAM. One problem here is the thousands of programming languages used, which cannot share libraries.
      Some programs, like Firefox and Chrome load pages in memory and don’t rely on the links.
      My own Firefox eats sometimes over 1GB, when I have a large project open.

      1. All these time-wasting housekeeping manoeuvres (including Windows defrag) to keep a Windows system going are one of the main reasons Linux is superior. No such baloney to ever have to do. No anti-virus or spyware scanning either.

        The Windows registry is a complete nightmare and always has been. When the foundations of an OS are poorly designed, you can put lipstick on this pig and seemingly make it workable but it’s still a darn pig.

    6. Reading through many of the comments here, I’m struck by how unfounded and technically untrue many of the statements are against Linux. For one thing, there isn’t such a thing as “Linux” in the same sense as when we say “Windows”. (Actually, a better term is “Windoze”.) Linux comes in many different distributions. A Ubuntu-based Linux OS isn’t the same as a Debian one or something based on a different desktop environment.

      No one in their right mind would install the latest Ubuntu distro onto a 15-year old computer with only 2GB or less of RAM. Instead, Linux’s diversity of distros is its strength. You pick the one that’s suitable for the hardware specs you have. Any modern computer is fully capable of handling any Linux distro that’s available today. But older hardware requires informed selection choices. No one expects you to instantly know what will work and which ones won’t. The OS is free so just try it out and see how it goes.

      Testing Linux out is just a matter of getting a USB stick and using an app like “Rufus” to make the .ISO file bootable. There is no such thing as a “live” Windows version (yeah, yeah, it can be done but it’s a painful ordeal and not commonly done). But with Linux, I used an app called “Ventoy” and the only thing I need to do is install it properly on a USB stick and then just dump (i.e. copy) any Linux .ISO I want onto the stick. “Ventoy” makes ’em all bootable without any extra steps.

      Now concerning the comment about it being so difficult to install apps on Linux, I will call “total bullshit” on that statement. Linux has a myriad of ways to install software. You can get something called an “AppImage” which is very much like a Windows .exe file. The app isn’t truly installed into Linux. It just sits there and you can launch it any time you want.

      Then there’s the “normal” software repository method. To a Windows user, it’s similar to (but better than) the Micro$oft app store. You also have Snap files and Flathub. Depending on the distribution you install, these may all be available with the OS or be very easy to install yourself. What is terrible about the way Windows handles software is that you can grab apps all willy-nilly over the web, never totally sure if they contain malware, and updates have to be done on an ad-hoc basis, typically being notified of a newer version and interrupting your workflow in order to get the update completed. Totally stupid method. By contrast, Linux’s software manager takes care of ALL apps under one roof and gets them done all at one time, in the background, without ever affecting your workflow.

      In all of my 20+ years using Linux, I’ve NEVER run into the same serious hassles that I had with all versions of Windows. I am also a 40+ computer tech and have seen numerous OS’ on a variety of hardware platforms and Micro$oft wins the lottery for the most headaches acquired per day.

      Linux may not be for everyone. I get that. If you are a gamer, there are ways to get Linux to play many popular games. But, it may not be as easy or streamlined as in Windows. (I’m not a gamer and never will be, so I will admit I have a limited degree of knowledge on that subject.) Suffice to say, that when you have a massive money-hungry company like Micro$oft dominating the market, it’s only natural that every other software and hardware company out there will bend their will towards M$. But don’t try to compare apples with oranges. Or Apple with Micro$oft. These are two different things.

      Ultimately, any time someone scoffs at how terrible Windows is and suggests using Linux, some Windows lover will lambast that Linux user for promoting Linux and demean them with the “fanboy” tag. To that, all I can say is “grow up”. We Linux aficionados realize that there are no TV commercials or magazine ads promoting Linux to common, brainwashed Windows users. Therefore, our only means of getting the word out is to suggest (not demand) that they give Linux a fair shake.

      I’ve got a 25-year-old IBM ThinkPad running Puppy Linux. Its Win3.1 counterpart is completely incapable of surfing the web. I’ve also got a 15-year-old HP Presario chirping along just fine using Linux Lite, and it is my main machine. I am typing this reply on my wife’s 6 or 7 year old Acer Cloudbook using an old version of Peppermint Linux (ver 7). And my new Lenovo laptop runs the latest Linux Mint. I totally got rid of Win 11 before even installing that piece of crap. I couldn’t be happier, thank you.

      So for those people with slow-as-molasses Windows machines, and all the usual spyware issues, keep doin’ what your doin’. I couldn’t care less. But if you want a real smooth user experience, Linux is truly the only way to go… or buy an expensive Apple product that’s locked into its own eco-system. Anything is better than Windows IMO.

    7. Hi, I have an elderly Lenovo laptop (a thinkpad from 2007, making it the first with the Lenovo badge.) I swapped the hard disc for an SSD, found an 8gb kit of DDR2 memory, and installed Windows 10.
      Now I wouldn’t say it’s the fastest machine on the planet… but it handles modern software, including Office 365, just fine.

      I would suggest your issues are memory and storage related.

    8. Conspiracy theorist here. At least some of this seems to be by design, or at least conflicting motives between the software company and the user. We all have lightning-fast freeware that does exactly what you want (like XnView or Everything), yet it seems utility for enterprise solutions would conflict with the need to sell or constantly upgrade with subscription pricing. While developers often cite that they want more security, I think this is an excuse. Truly secure systems could be built into the hardware at lightning-fast speeds. Software security seems mostly for the purpose of hiding code from competitors and manipulating the user into certain behaviors that are profitable for the company but disastrous for user productivity. Solutions to this rest more with regulators and a Congress “representing consumer/public interest” than with developers. In fact smarter developers make things worse if they aren’t fighting for the right team… which is why software gets worse every year for people who want to use their computers to maximum efficiency. There is too much phoning home to Redmond and perhaps the NSA (NSAKEY), which violates privacy and the 4th Amendment, and is also a form of fraud: you buy a computer and software, yet it is normalized that it can be changed and altered at the whim of the company who supposedly sold it to you, and CPU cycles are being used by that company without your permission. If we hacked into Bill Gates’ servers and used the cycles to mine bitcoin, we would be in jail, but the reverse criminal activity of constantly checking certificates, cloud computing, etc. has been normalized. We need better laws.

    9. What model is that laptop? Also, as mentioned, get an SSD instead, turn off Windows Defender, System Restore and all the other security trash in Windows, and you’ll notice a big difference.

  2. In general, I find that Linux boots faster than Windows. I have no idea why (and choose not to dig into the topic). My laptops of choice are ex-corporate refurbs, ideally ones I have experience with from work (so I know they work well with Windows). I’ll buy them 3-5 years after they come out, replace the HDD (if so fitted) with an SSD and max out the RAM. “Zippy” is what I’d call them after I install Linux (MINT is my current preference).
    My experience, yours may vary, but I strongly advise staying away from consumer laptops and going with refurbed corporate ones.

    1. There are a few reasons. One is that open source development is often as much hobby as job, so developers are more likely to dig into the nuts and bolts to optimise things that interest them rather than just meeting a target and delivering. Another big one, related to this, is that most Linux-based operating systems support much older hardware than Windows does, for a variety of reasons (e.g. to run a system for playing around with on otherwise defunct hardware). Having deleted anything but the most modern hardware (relatively speaking) from the support list, Microsoft, for instance, is free to leave Windows 11 relatively unoptimised. You can also see this effect when comparing Windows to MacOS – on similar hardware MacOS flies compared to Windows because Apple cares a lot more about optimisation (and yes, optimisation is easier when you’re literally only supporting one CPU and GPU, but Microsoft could still limit themselves to effectively 2 x86 architectures and optimise a lot more than they choose to).

      1. I think your first statement’s mostly it: I don’t think it’s the OS specifically, it’s just the applications themselves. There are commercial applications that have both Windows and Linux ports and the Linux port is so poorly written it’s an utter dog in comparison.

        And a ton of applications I use on Linux are *incredibly* old, whereas Windows basic apps seem to be rewritten constantly.

        1. That’s because those apps are “ported” with very thick “compatibility” layers. A properly designed app will have internal APIs to isolate different elements. It’s then easy to support different UIs, say, by implementing that API using native Windows, MacOS, Gtk, etc. primitives. The downside is that you have to give serious thought to what the API really needs in terms of the UI instead of creating a thin facade for one platform and then require thick adapters when supporting platforms with different concepts. (E.g., that’s why docker on macs is actually run in a Linux VM running under MacOS – the platforms are very similar but docker requires something MacOS lacks. “Capacities”, iirc.)

          With those crappy ports you’ll usually (always?) find that they were written for windows and then a very clunky adapter was added to convert the calls to Gtk. This means that some individual windows calls will be converted to multiple Gtk calls, and the system won’t be able to replace multiple Windows calls with a single Gtk call that does the same thing.
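
          A tiny Python sketch of the pattern being described here: the application talks to a small internal UI interface, and each platform gets its own thin native backend. All names below are illustrative, not from any real toolkit or from the ports in question.

          ```python
          # Sketch of an internal UI API with thin per-platform backends.
          # Class and method names are illustrative only.
          from abc import ABC, abstractmethod

          class UIBackend(ABC):
              @abstractmethod
              def show_message(self, title: str, text: str) -> None: ...

          class WindowsBackend(UIBackend):
              def show_message(self, title, text):
                  # would call the native Win32 MessageBox here
                  print(f"[win32] {title}: {text}")

          class GtkBackend(UIBackend):
              def show_message(self, title, text):
                  # would build and run a Gtk message dialog here
                  print(f"[gtk] {title}: {text}")

          def report_save_failure(ui: UIBackend, path: str) -> None:
              # Application code depends only on the internal API, so adding a new
              # platform means writing one more backend, not a call-by-call adapter.
              ui.show_message("Save failed", f"Could not write {path}")

          report_save_failure(GtkBackend(), "/tmp/report.txt")
          ```

          Contrast that with the “thick adapter” approach the comment describes, where every individual Windows call gets translated one-for-one at runtime.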

          1. “That’s because those apps are “ported” with very thick “compatibility” layers.”

            …which is exactly what the article is saying is slowing down software.

            In the specific case of what I’m talking about, God only knows what’s slowing it down, because the “slow” part is the command line.

      2. Apple also isn’t living with 30+ years of backward compatibility. They’ve also had their own security issues.

        I use Linux as my daily driver personally, and currently an M2 Max for work. Windows has a ton of cruft, to be sure. It’ll also run an exe from 30+ years ago without a recompile. There’s something to be said for that.

        1. Linux and Mac will also run a Windows app from 30 years ago: QEMU (which UTM uses), DOSBox, and these days Wine/Proton all do a good job. And it’s more secure to boot, compared to letting an app built with no idea of multi-user or Internet connectivity run as admin/system.

          Windows just has a lot of feature creep. The project is too large and has been mismanaged for decades. .NET is one of those debacles where you just have a layer on top of layers of old cruft, and Microsoft has been forcing their devs to use it for a while now, resulting in the bloated Windows 8/10/11.

          If you get a new language/paradigm, like Cocoa, Swift or Objective-C did, you don’t need to carry over the old mistakes. The fact that current .NET can still call old and broken DLLs, OLE, WinForms and do things like dependency injection is a problem, not a feature.

    2. “What Intel giveth, Microsoft taketh away.”

      The average age of working PCs in our house is around 9 years. All run Linux, mainly Mint, with a cheap SSD for the OS, the original HDD for data, and max 8 GB RAM. Don’t need more.

      I’m typing on a 2013 model Chromebook Pixel with Mint OS. No more new consumer grade cheapo PCs for me, learned my lesson.

      1. For mobile computing, I have a “laptop pool” of full-disk-encrypted laptops with Ubuntu GNU/Linux on them. They are ex-corporate X-series thinkpads and I max out the ram on them generally. Their performance is fine despite them being at least 10 years old.

      2. Everything in our house is 3rd and 4th Gen OptiPlexes and Latitudes running Win10 with SATA SSDs and 8GB RAM… they are plenty zippy unless you try to play a 4K movie or do something similar that requires software decoding because the CPU and GPU don’t have dedicated decode for it.
        Windows isn’t the problem, people being stupid with their computers is generally the larger issue.

    3. linux is tiny and, depending on the distro, doesn’t install a bunch of crap you don’t need. my windows install is like 33.7 gigs in the windows folder alone. a modern linux distro isn’t even half that.

  3. As a programmer, … er, I mean software developer, I can agree with these conclusions and to some extent justify them.

    On one hand, laziness. In the “old days”, speed was always a concern and we took great pains to make software run fast on those mc68020-powered Sun3 workstations. These days I often do things that would have been stupid, even reckless, in the old days. Read an entire 100 MB file into memory and scan it multiple times? No problem, and it only takes a second or two. Unthinkable in the old days (see the sketch at the end of this comment).

    On the other hand, consider the time spent making a piece of software run in less than a second when a quick approach will yield something that works but runs in 4 seconds. Who cares? The extra effort truly isn’t worth it. But this leads to a mindset that emphasizes getting things done quickly rather than spending time to write software that is optimized for efficiency. And here we are.

    You can guess where the corporate mindset puts its priority. Getting it “done” quickly comes first, not having nasty bugs draws second place, and making it run fast isn’t even on the map.
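
    A minimal sketch of the two habits described above (in Python, with a hypothetical log file name), just to make the mindset shift concrete:

    ```python
    # "Modern lazy" style: slurp the whole (say, 100 MB) file into memory
    # and scan it as many times as is convenient.
    def count_errors_and_warnings_lazy(path="big.log"):        # "big.log" is a stand-in
        with open(path, errors="replace") as fh:
            lines = fh.read().splitlines()                      # whole file in RAM
        errors = sum(1 for line in lines if "ERROR" in line)    # pass 1
        warnings = sum(1 for line in lines if "WARN" in line)   # pass 2
        return errors, warnings

    # "Old days" style: one streaming pass, near-constant memory use.
    def count_errors_and_warnings_frugal(path="big.log"):
        errors = warnings = 0
        with open(path, errors="replace") as fh:
            for line in fh:
                if "ERROR" in line:
                    errors += 1
                if "WARN" in line:
                    warnings += 1
        return errors, warnings
    ```

    On modern hardware the lazy version is usually “fast enough”, which is exactly the mindset being described; multiplied across every layer of an application, those small indulgences add up.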

    1. I remember the really old days, when I built an editor inside our application that could handle files greater than 64 KB… what a hassle.

      Then, in the not-so-old days, my Windows program was some resources and data that I moved into standard GUI objects. GUI design was driven by what was there and what that stuff could do. Make fancy-looking stuff? Please no, it was all a nightmarish hellscape of individual draw routines.

      Today I craft a class with INotifyPropertyChanged, have some XAML with XML namespaces, and wonderful-looking things happen everywhere, regardless of Windows, Linux, Android or macOS.

      I like the new times more.

        1. yeah a funny thing about the last 20 years is that the browser is the heaviest program, by far. if a laptop gets too slow, or too unsupported, it’s always been because of the browser. but even within that world, slack held a special place. my battery life went down by about 30% just by leaving a slack tab open. and it didn’t do anything that ircII didn’t do for me in 1995.

    2. A program that runs in 4 seconds instead of 1 second is OK if it’s only run occasionally. When it’s the target of a half dozen nested loops, come back in an hour.

      1. Well, I expected someone to say something like this and it is certainly true. That is why I said “program”. Even today if I am writing code in a library that may have unanticipated uses I will use more care than when writing a throwaway python script. The key point is the mindset that develops as you get used to it not mattering if you do quick and dirty coding.

    3. The issue is that you’re not writing the program but merely a single function of it. The runtime of your function is 4 s, which is easy to disregard.

      But there is so much stuff running on top and below, before and after, and each thing is its own function that takes 4 s to run instead of 1, and suddenly the user is sitting there waiting multiple minutes for a seemingly simple process to finish.

      And the other factor is overabundance of resources. No software engineer is coding or running tests on a generic computer with barely enough RAM to open a few tabs in chrome. It’s always either a high-end computer or even an entire farm of top-of-the-line servers. No one gives a damn about how their product is going to perform out in the field.

      1. With modern computers, even if you optimize, you have to pick what you want to optimize for: speed or resource usage. I was messing around the other day, and an app manipulating some images took 4-5 minutes using a couple hundred MB of RAM; I multi-threaded it for speed, and it now finishes in 20 seconds, but uses over a GB of RAM.
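
        That tradeoff is easy to reproduce. A rough Python sketch (process_image and the file list are stand-ins, and a process pool is used because Python threads wouldn’t help CPU-bound work): the sequential loop holds one image at a time, while the pooled version finishes much sooner but keeps several images in memory at once.

        ```python
        # Rough sketch of the time-vs-memory tradeoff: parallel workers finish
        # sooner, but each worker holds its own image in RAM at the same time.
        from concurrent.futures import ProcessPoolExecutor

        def process_image(path):
            # stand-in for the real work: decode, manipulate, re-encode
            with open(path, "rb") as fh:
                return len(fh.read())

        paths = ["img_%03d.png" % i for i in range(200)]   # hypothetical file list

        def sequential():
            # low memory: only one image is in flight at any moment
            return [process_image(p) for p in paths]

        def parallel(workers=8):
            # much better wall-clock time, but up to `workers` images in memory at once
            with ProcessPoolExecutor(max_workers=workers) as pool:
                return list(pool.map(process_image, paths))

        if __name__ == "__main__":   # guard required for process pools on Windows/macOS
            parallel()
        ```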

    4. Exactly… Getting it running correctly is much more important. And depending on what it is, you may have to scale horizontally anyway. If it takes 2 years to release something that runs on half of one server, or 4 months to get something that does the job but takes 3 servers, sooner is better and cheaper than the dev cost.

  4. There has been a continuous trend since the advent of computing to make coding faster at the expense of needing more machine time. First came assemblers, then compilers, then API layers, then virtual processors (like the JVM or .NET runtime) etc. The thinking is that machine time has become progressively cheaper while developer time is progressively more expensive.

    While old code may indeed be vastly more efficient at what it does, it is also much less functional and/or portable than the modern equivalent. Meanwhile the point at which hardware becomes unusable due to age has stretched further back into time due to the capability of even 8 to 10 year old systems these days. Upgrade them with an SSD and most are still quite usable. It seems to be a fair trade-off.

    1. Yes and no; I agree with nearly all of your statements. It does not make sense to develop highly complex systems in assembler; higher-level languages have their advantages.
      Only “It seems to be a fair trade-off.” sits a bit badly with me. Every bit of energy that isn’t used is good energy. We are blasting tons of megawatts into the world for displaying GUI frames rendered with bad efficiency.
      That’s why I liked the Gentoo thinking: just compile as efficiently as possible for your computer.
      And yes, I am working in a company developing in Java. I’m really not sure whether the IML is a bad thing or a good thing. But it should be the last layer, in any case.

    2. Explain Javascript on the server then Sweeney…

      It’s terrible at both development and runtime performance.
      On the one hand it sucks huge glistening slimy wet donkey balls for debugging. Worse than COBOL in 1965 using punched cards and printed memory dumps.
      On the other hand it crawls like a 1 MHz 6502.
      On the gripping hand it’s a dependency nightmare with libraries referenced by URL. Said libraries can and do use specific, hardcoded, obsolete versions of other libraries.

      The only thing I can say about it is: Javascript programmers deserve it, no lube.

      1. It’s an attempt to solve a portability issue. Not a good attempt, I’ll admit, but the issue genuinely exists. How do you run code from within any web browser was the question, and having been handed a hammer, Javascript developers tried to make everything into a nail.

        There are plenty of examples in the past of languages that performed poorly but were popular (heck, even the 8 bit era was ruled by interpreted BASIC that likewise sucked compared to assembler). These languages either adapted or were swept away. Javascript hung on because there wasn’t a cross platform alternative, that was until the advent of WASM. How long it continues in the face of that is an interesting question.

    3. heh there is some of this and there is some of not-this, as well.

      “While old code may indeed be vastly more efficient at what it does, it is also much less functional and/or portable than the modern equivalent” there is a lot of awful code today that is not only slow but also unfunctional and dysfunctional and unportable and dysportable.

      there are many penalties for overuse of recycled code with underuse of comprehension. poor performance is not the only penalty.

      in fact, one of the starkest penalties is unmaintainability. often, it costs programmers more time to work with poorly-performing code because of the factoring decisions that were made. of course, we solve this problem by simply not spending the time. so in the end maybe we still saved programmer time but at the expense of having the program be buggy and unfeatured forever.

      i’m pretty much describing every android app.

  5. Thank our “friends” at microsoft…. 40+ years of absolutely no planning for anything but profit. Bill got rich, but the world got a big steaming pile of “bill” for their hard earned cash. Sad that monumental failures such as his can’t be added up and the bill given to the cause of it all.

    1. Microsoft is a big company – not everything they are doing is bad… .NET Core, for example: it runs on all major operating systems and hardware, plus the evolution of C# has made it one of the most powerful programming languages around _and_ influenced another great language, TypeScript. VSCode also comes to mind.

      I’m not a msft purist. I run MacOS, Windows, and Linux (xubuntu, alpine) daily. Each wins and loses on different metrics.

      1. MS gave up on VB6 and its community. That betrayal should never be forgotten or forgiven.
        Instead, VB.NET was introduced: a completely different language, without the advantages that programmers loved about VB Classic.

        1. The only reason there were so many VB6 programmers is because most of them wouldn’t be programmers otherwise. VB6 was a steaming pile of dung (“on error resume next”).

          1. So that’s where all the Javascript ‘programmers’ came from?

            Don’t believe you. They’re worse now.

            The fact is that no ‘programmer’ only uses one language. That’s the tell. Doesn’t matter what the language is, if it’s the answer to everything, the answerer is a fool.

          2. No, I think that’s not the case. VB Classic was reasonable to understand by mentally healthy people who’re not one trick ponies. That’s why VB Script and VBA were popular, too.

            The problem is, I think, that guys like you who use street language aren’t capable of anything meaningful in life besides coding abstruse C/C++ code. You can’t even change. People who used VB Classic or Delphi were still capable of normal social interactions.

        2. I was a VB6 developer, and was happy to see it killed off. VB.NET fixed many of the things I hated about VB6. The only thing that VB6 had going for it was familiarity, and that’s something that a developer has to deal with constantly. Things always change, keep up with the changes or move over to management.

          1. “I was a VB6 developer [..]”
            Who wasn’t at some point?

            “The only thing that VB6 had going for it was familiarity, and that’s something that a developer has to deal with constantly.”

            No, it wasn’t the only thing. VB Classic was the true successor to the classic BASIC language. It had natural-language elements and allowed for both procedural and object-oriented programming.

            I say “oriented”, because it wasn’t object “based”. That’s a questionable paradigm that’s been forced onto programmers; it’s not optional anymore.

            VB Classic appealed to all the programmers who previously loved to work with home computer BASIC, GW-BASIC and QBASIC.

            The Visual Studio 6 IDE also allowed for easy prototyping. You, as a developer, could create experimental prototypes for boss or client.
            That’s why it was a real Rapid Application Development environment (RAD IDE).

            VB6 and Delphi were not seldom used by electronics hobbyists to write utilities and control programs.

            All these things are no longer being covered by VB.NET.
            In .NET, VB is no longer an independent language with a future. It’s an afterthought, a neglected legacy.

            The automatic VB to VB.NET translator was introduced in VB.NET 2005, then dumbed down in VB.NET 2008, then scrapped altogether.

            Seriously, what’s wrong with you? As an ex-VB6 developer, why are you so blind as to not see these crucial grievances? The other VB6 developers did sign a petition for MS to bring back VB6 – or more precisely, release an update. VB7 was something VB6 developers kept asking for a decade after VB6 went EOL.

            Seriously, I’m quite disappointed by that attitude. VB Classic was beloved by a silent majority, by “normal” users that wanted to write their own programs, like in the DOS and home computer days. It’s as if those users were being left behind on purpose by MS or IT as a whole.

            I hope you understand the dimension of all of this. Microsoft was once the leader of BASIC programming languages. It was their original business in the 1970s. That’s why the attitude against VB Classic/BASIC users is twice as sour.

          2. Sounds like you have learned nothing since the days of VB6 then. I also used (and loved) Delphi, and Turbo Pascal before it. It is perfectly possible to write p*ss poor block structured code in them also, and in VB.NET if you so chose. VB.NET is still a RAD tool that lets you knock up quick prototypes, and VB6 needed a bunch of support files to run (it couldn’t create free standing binary code like Delphi). The support files for .NET may be larger, but cause less issues with version compatibility.

            BASIC was never a great language. It encouraged writing un-maintainable messes of code and performed poorly. The reason it was popular back in the 8 bit days was due to the fact that BASIC interpreters were small and needed few resources. That and the fact that it was easy to learn made it the de facto choice. Microsoft spotted that and made a BASIC interpreter their first product. They didn’t stop there though, and have been involved with many other languages since then.

            Modern programming is less about the language and more about understanding the APIs and system resources. Once you get past that idea then the important thing is to create readable code that prevents anything external from messing with its inner workings. OOP is one, but not the only way of doing this latter thing.

          3. @Sweeney:
            >[…] VB.NET is still a RAD tool that lets you knock up quick prototypes, […]

            No, it doesn’t. I can’t throw a button and an input box into an editor, double-click the button in said editor and tell it what to do when clicked. In all major modern languages that can do a proper native GUI, or are a hidden browser, I have to write tons of boilerplate to get shit done.
            Qt? Fiddle with signals.
            Gtk? Fiddle with callbacks.
            NodeJs? Fiddle with a server client architecture to open a frigging file.

            VB6 was easy to use and lived up to the BASIC spirit: open it and throw something together, polish later. VB.NET doesn’t.

            There are reasons why I prefer good old C or C++ and a text interface nowadays. It gets shit done without any boilerplate forced on the user.

          4. @Bastet I’ve no idea what you’ve been trying, but yes you can do precisely that. Pick Windows Forms as your target environment and VB.NET will work exactly like VB6 in terms of the design workflow. You get the ability to write command line code, libraries, visual components, services, Windows Presentation Foundation code etc, but this doesn’t mean you can’t create in the traditional RAD style also.

    2. Does anyone here actually work in a corporate environment? Because most of those still run on-prem hardware for both the server and workstations and they’re almost all Windows boxes.
      Works just fine. If anything, hardware lasts longer productively running Windows for workplace tasks than ever before. Probably part of why Win11 instituted a surprisingly high bar for certified hardware even though Win10 will install and run on stuff as old as (and probably older than) a Core2 build if you have reasonably fast storage in there to install it on and run it from.
      Hardware refreshes on leased units used to be much more necessary after 3-5 years than they are now. Many more people ask to keep their old workstations and that almost never happened in the bad old days.
      I’m not sure many people here bashing Windows are basing their opinions on much more than very limited experience with home machines.

      1. Corporate programmer here!

        We have windows _and_ linux servers for cross-compilation to “sane” hardware running embedded software.

        Linux is more reliable (one was up for over a year before needing a reboot), faster for the same hardware, and doesn’t need a nasty s/w upgrade multiple times a month.
        The windows stuff is heading for the cloud anyway, which adds yet another layer of inefficiency (not to mention needing 100% internet connectivity). It’s getting slower every year.

        On my personal laptop I run linux in a VM when I get hacked off with all the memory leaks in the Windoze apps (corporate says we must have a windoze laptop). It runs everything faster.

    3. a weird irony here, i don’t know how to parse it into like a true/false evaluation of “absolutely no planning”, but microsoft has a lot of high quality planning. by the late 90s, they already had an internal plan for dot net to become what we now know as android. but somehow, android got there first even though android started from a standstill after a giant megacorp had already had a decade of progress on a good plan.

      and today they’re doing the same thing with putting the browser in the cloud. running a browser remotely is a fantastic battery life hack. i believe it will take over the world in some form in 5-10 years. and here, microsoft has already announced the plan, years ahead of the pack. and yet i know, when it becomes everyday, microsoft will not be there.

      i honestly don’t know how a company can repeatedly have such good plans and such poor execution. i just thank god that microsoft hasn’t been part of my daily life since the 90s.

  6. It’s a tale as old as time. I first heard it as “What Andy [Grove] giveth, Bill [Gates] taketh away” in the 90s.

    It’s not too dissimilar to the phenomenon of adding lanes to a highway to “solve” congestion. More lanes means less congestion, which makes that route more attractive to drivers, which means more congestion…

  7. It takes effort to make software perform well. It might be fast when simple, but as you add complexity (aka selling points) it gets slower. In development you try to speed things up to an acceptable level – but what defines that? Typically, it’s the performance of the software on the developers’ and salesmen’s machines.
    And those machines are modern. It’s rare to test on an older machine unless that’s known to be an important part of the customer base.
    It’s clearly a waste of effort to improve the performance beyond what a typical user expects, so optimisation and tweaking stops at the point where it’s acceptable. And slower machines don’t feature.

    1. Your point about acceptable performance on the developer’s machine is part of the problem; it should never be considered a benchmark for acceptable end-user performance, because I’ve yet to meet a developer who uses a system that comes even remotely close to that of the average end user. I’ve always considered this the “developer’s dilemma”: a problem requiring unwavering diligence that is hard to maintain in the corporate world.

        1. No, I don’t think so. Rather, force developers to TEST their software on 10 year old hardware.

          Development itself requires constant changes and re-compiles. That would take ages to complete on old hardware; it would slow development down until it came to a halt.

          Developing and running are two different things, really.
          Like cooking and eating.

          That’s why in the 90s, developers had access to different PCs.

          Say, a powerful 486DX-50 PC with SVGA graphics, QEMM and lots of extras (maybe OS/2, too) for development and a bunch of older PCs for testing:

          A Turbo XT, a 286-12 PC, a 386SX-16 PC, a 386DX-40 PC, a 486SX PC and so on. Maybe a very fast PC, too, beyond the development PC (a Pentium). That might help to find timing issues.

          Doing so helps troubleshooting, too, because developers then experience the same things their users do.

  8. Architectural efficiency of software depends on the language used.

    Object-oriented languages are by nature slower due to the overhead of encapsulation and dynamic memory allocation.

    Procedural languages like C are faster because there is less indirection.

    This difference in efficiency is more evident the faster a CPU goes relative to the memory bus speed because code bloat turns memory use into a performance bottleneck.
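
    A toy way to feel that indirection cost, here in Python rather than C (so it only illustrates the idea and is not a fair benchmark of language design): summing a contiguous array of numbers versus chasing a pointer into a separately allocated object for every element.

    ```python
    # Toy illustration of indirection/allocation overhead: a flat, contiguous
    # array of doubles vs. one heap-allocated object (and pointer) per value.
    import array
    import timeit

    class Sample:
        __slots__ = ("value",)        # still one separate allocation per instance
        def __init__(self, value):
            self.value = value

    N = 1_000_000
    flat = array.array("d", (float(i) for i in range(N)))    # contiguous memory
    boxed = [Sample(float(i)) for i in range(N)]             # one pointer per element

    print("flat :", timeit.timeit(lambda: sum(flat), number=10))
    print("boxed:", timeit.timeit(lambda: sum(s.value for s in boxed), number=10))
    ```

    The flat version typically wins by a wide margin, and most of the gap is memory traffic and pointer chasing, which is the bottleneck described above.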

  9. The point of faster hardware is so you can run more complex applications than before, not just to run the old applications faster. The speed benefit is secondary.

  10. My take is that “the software gets slower faster than the hardware gets faster”… at least for Windows. I enjoy my Linux computer booting in ca. 10 seconds, much faster than before. Well, I use mainly Debian, not Ubuntu.

    1. You know, I keep hearing this statement repeated over and over ad nauseam but I’ve never found it to be true. With the advent of today’s hardware, all of our software should have sub 2-second response times (the human factors threshold established in the 1950s and 1960s with interactive terminals).

      1. And even if it is responsive to a basic desktop, the instant you open a web browser and try to do anything with media/ads, it goes “ker-flop.” That’s not Linux-specific, mind you, it’s the same with Windows. You can make a 10+ year old computer perfectly usable no problem until it hits the disaster that is autoplaying media on random websites.

        1. Then disable autoplay. In fact, use NoScript to disable all Javascript except the ones actually needed for a site to work.

          I wonder if there’s a browser that sacrifices rendering quality and smoothness of animation in favor of making the UI feel responsive.

          1. That’s… functionally the same thing as saying “don’t use those websites.” In some cases, yes, you can do a lot of work to figure out how to work around it. In other cases it’s integrated into the functionality of the website itself – disabling media playing on a media website would, uh, defeat the purpose.

            The point I’m making is that going to old computers and saying “look how fast all this old stuff is!” misses the point that a huge portion of processing power in modern computers is going to supporting the disaster that is the modern Web.

  11. It may be another example of “the tragedy of the commons” effect. If people are given unlimited free access to a resource, they will maximize its use. This ends up harming the community as a whole.

    Computer speed and memory are unlimited and free to software developers; so they make no attempt to limit how much they use. It’s the end users that have to pay for the extra speed and memory required to run inefficient software. But since we all need computers, we all pay the price to run bloated software.

    1. This effect is exactly what is going on today with cheap gasoline. At today’s prices, nobody is motivated to conserve, drive fuel-efficient vehicles, etc. We need to see prices (in the USA anyway) go up to at least $10 per gallon before any changes will take place. I look forward to that day.

      1. Is cheap gas truly the same problem?
        Or is the car industry analogous to the tech industry with similar economic conditions driving similar choices by consumers?

        We need both computers and cars, but I posit that people buy each based on what they can afford and often do so based on sticker price because long-term costs are harder to predict than monthly payments. Raising gas prices to force users into a new electric vehicle with its steeper upfront cost and likely shorter lifespan [ignoring maintenance costs] is somewhat analogous to requiring software developers to write less efficient code to force users into upgrading to a new computer that will likely have to be replaced more frequently than ever before, e.g. range vs. battery pack lifespan.

      2. Go ahead and look forward to it. I’ll be waiting to hear you cry on that day when food prices triple because the farmers needed to buy gasoline and diesel to grow your food.

        For the record, I am motivated to conserve with a 40 MPG car with a well insulated house and an off-grid solar system and heat pump upgrade in our near future. That said, our gasoline bill is still 15% of our take-home pay.

        1. Paraphrasing the movie “Cabaret”: Diesel makes the world go round…

          All our foods, transport and consumer stuff – are only possible with a very high content of refined petroleum.

          But as we convert all our systems to automation and AI – we need smarter, more efficient software. Windows approaches 50, should be retired.

          1. Erm, you’re about 10 years ahead.
            Windows 1.0 came out in 1985, so it’s coming up on 40.
            Linux is based on Unix, which has its origins in the 1960s. The age of the original version of an OS has little to do with how good it is. Also, the idea that current versions of the Windows source share anything with the original Windows 1.0 is in itself laughable.

      3. SevereTireDamage says they look forward to $10/gal prices. Wow. Literally wishing harm to the vast majority of low and middle income people in the US who rely on “cheap” gas to make ends meet (and with wages not keeping pace with inflation, getting pinched worse every day even with stable fuel prices). A nice way of saying “let them eat cake”.
        Punishing people for not driving fuel efficient cars they cannot afford doesn’t drive technological change since their pocket book isn’t sufficient to take the hit (so they do not contribute to change).

        Even worse, the analogy fails because PC technology is optional, whereas transportation is not. PC technology improvements have (since the beginning) been driven by people with disposable income wanting to get their DOOM (et al) frame rates higher (don’t ask me how I know).

        1. Not sure software/hardware/internet is optional anymore. Can’t apply for a job, pay bills, go to school…the list is long. I would say it’s easier to get by without a car in a city with public transit than to not have a way to interact with the world digitally, which is why libraries are so vitally important for people who can’t afford or don’t have the infrastructure for a computer. The biggest democratizing factor has been cell phones that are financed on a month to month basis as that lowers the bar of entry to computing, and much of that computing takes place on another computer in “the cloud”.

          Also, regarding Doom, I’m not sure what the frame rate is on a disposable pregnancy test — but here’s something to check out if you have Doom nostalgia. 🙂

          https://www.popularmechanics.com/science/a33957256/this-programmer-figured-out-how-to-play-doom-on-a-pregnancy-test/

        2. We drive a Skoda Octavia Variant from ’99 and it uses under 7 liters per 100 kilometers, even with me occasionally driving 180 on the A42. It cost us 1000€. Don’t tell me that you can’t afford a cheap, economical used car.

          The current Super (95 octane) price over here in Germany is around 1.90€ per liter, by the way, and we still live.

      4. Good news: they’re printing money so fast, it’s a sure thing. Bank on it.
        The $10 won’t be worth more than current gas prices.

        You could move someplace that overtaxes gas like that already. They also have terrible underpowered cars, just like you want.
        Vote with your feet.
        They might not take you though.

        Also be aware they are printing money as fast as anywhere outside the turd world. Gas price ratios between euro and dollar are likely to remain similar until they hang a bunch of dirt-bags in Brussels.

    2. What makes you think computer speed and memory are unlimited to developers?
      My machine when I was a mechanical engineer was much more powerful (for the time) than what I use now as a software developer. Also, my machine is bloated from all the security software and whatever else IT puts on there. It ends up running like a 10 year old PC.

      Software is becoming more bloated, but it is not because developers are running amazing machines. In most cases it is because performance isn’t a high enough priority. This is how I would list the priorities when developing software:

      1) does what it is supposed to do
      2) the code is maintainable
      3) UI/UX
      4) Performance

      Software can always be better so once these items are considered “acceptable” you move on to the next thing. Otherwise, software would never get released. Generally this means performance isn’t improved unless customers complain.

  12. Do they still rate / pay professional coders on the number of lines of code written?
    Many times it would result in a lot of spaghetti code that served no purpose other than to impress management.

      1. In the history of computing there has never been a metric so stupid that no PHB used it.

        I doubt that LOC was ever actually used outside the insurance industry. But there, I’m sure it is still used, somewhere.

        1. IBM used to measure their code by KLOCs, then they figured out that it incentivised managers to push for bulk rather than features. That’s how function point analysis was born.

  13. It’s hard to quantify, I think. Windows Vista felt very slow and sluggish, but that could be because the GUI wasn’t getting priority while the many background processes meant to help the PC and the user get more done ended up shooting it in the foot.
    Modern OSes run more stuff in the background, like AI (Cortana and other heuristics), antivirus, intrusion detection etc. And there are many applications that do the same thing, not necessarily playing nice together. Take TrueImage for example, offering protection against ransomware while many antiviruses offer similar services. It’s also a problem that software offers flexibility but fails to guide the user in setting up the environment properly: setting up scanning for updates, setting up work hours so the OS doesn’t do disk-intensive stuff while you need the performance, etc.
    Then there’s the ‘old installation’ syndrome, where things get progressively slower over time as more apps are installed and uninstalled, leaving debris behind.
    And HDDs with some edge-case bad sectors can make a system seem extremely slow; and yes, an SSD makes it a lot faster.

    I know as a software engineer that we ‘first make it work, then make it beautiful or fast’ but the last part never happens. It’s a trade off between performance and functionality.

    Responsibility for optimization has also shifted, for example to the JIT compilers of many languages and bytecode-based platforms like Java and C#. An application originally built against .NET 2.0 may well run a lot faster when recompiled for .NET 7.0, depending on what resources it uses, and be safer too thanks to security bug fixes.

    It’ll be interesting to see when more capable AIs crop up that can do the optimization for us: just indicate the general idea in text or pseudo code and let the AI figure it out. Trading clock cycles to save clock cycles, so to speak. And it could be an ongoing process, getting faster and faster the longer the AI spends speeding things up, at the cost of power/clock cycles. Advances in AI will also improve optimization capabilities.

    1. “It’s hard to quantify I think. Windows vista felt very slow and sluggish but that could be due to the GUI not getting the priority and many background processes helping the PC and the user get more done but shooting itself in the foot.”

      The main issue was lack of RAM. The minimum spec said something like 512 MB, which was unrealistic, just as XP’s 64 MB minimum spec was. Vista was a powerful OS that needed matching (equally powerful) hardware.

      Second, Vista had a completely different GUI that ran as a shader program on the GPU. Unfortunately, most simple-minded people used the “Vista Basic” design, because of “less bloat”. Like in the Windows 95 days, when Active Desktop was killing performance.

      They didn’t understand how Vista worked, and still don’t. With a cheap GeForce FX 5200 graphics card of the time (among the earliest with Shader Model 2), Vista’s Aero Glass interface was rendered on the GPU, freeing the PC’s CPU for the applications.

      That’s also why RAM was so important. To make things work, Vista kept a copy of the graphics card memory in PC RAM. Think of it like a double-buffer mechanism. Vista’s WDDM 1.0 didn’t support 2D/GDI acceleration at the time (Seven fixed that), so software rendering was sometimes required, depending on the application type.

      Anyway, the main problem was underpowered hardware! Really! I was there when Vista was released. The PCs Vista was forced to run on were barely XP capable! These were recycled Windows 98SE PCs, essentially. A horrible user experience on either OS. Vista ran on its knees. When Windows 7 was out, the situation had been somewhat relaxed. Windows 7 was being installed on what were true Windows Vista PCs, essentially.

        1. No, I was a Vista user since the beta program and I read the technical documents. I’m not one of those people who repeat old falsehoods over and over again; I think for myself, draw my own conclusions and have my own opinions. And the truth I discovered is that Vista was being heavily misunderstood. If properly installed, it worked as intended. I was there and I know what I saw. I don’t need your approval or that of anyone else. Period.

      1. Vista added the idea of a compositing engine, but at least in part due to pressure from Intel (who wanted to push their 945 integrated graphics) they did it by software in memory, requiring 512M of system RAM if you turned Aero on. It wasn’t until much later releases that they actually properly offloaded it to the GPU.

        Much of the bad press was due to the push to put Vista Compatible stickers on machines that really weren’t up to the task, which as I mentioned was due to pressure from Intel. If you got past that, and ignored the Aero bling, then it really wasn’t that compelling an improvement from XP. Most folks chose to wait until Windows 7 rather than upgrade.

  14. Omg, some of these comments here are killing me. Wash. Rinse. Repeat. Having been a dev since the early 90s, I’ve read this same article / story at least a half dozen times. Dvorak used to beef about bloat constantly. Get over it: the number of lines of code in an OS and other basic libraries these days is orders of magnitude larger. Shipping on time and managing stability often outweighs optimization.

    1. The 90s… Even back then I felt that “primitive” Windows 3.1x applications outperformed those written with the latest Win32 development tools. File size was smaller, too: a few KBs vs MBs. Just compare an application written in QuickC 1.0 for Windows with one in Visual C++ 6.0, or Turbo Pascal for Windows 1.5 vs Delphi 5. The difference is huge. The responsiveness of the Windows 3.x applications is better, too.

        1. Um, what do you mean? I ran things on period-correct hardware. I mean, it happened back then, in the 90s.

          How could I possibly not have run things on period-correct hardware? I had no access to hardware from the future, after all. 🤷‍♂️

  15. There are things like performance/functionality tradeoffs, but then there are also cases like Steam taking its perfectly functional (probably largely native) UI and replacing it with a web UI with horrible performance and no significant new features (it just takes much more space to show the same information as before)…

  16. NetApp Inc. – I do not know if they still do, but they used to use multiple Intel chips running their own custom clustered OS (ONTAP). It was designed to run at the bare-metal layer with maximum data throughput. Everything else was secondary.

    The problem with modern general-purpose OSes is that they are designed with many layers of abstraction, and each layer eats away performance (more CPU and more RAM). In most cases the loss of performance comes from programmers not writing their own code but pulling in many, many libraries/abstractions, from each of which they might only end up using several hundred lines of code, while the domain-specific knowledge of how that library does what it does is missing. This is understandable, because it is difficult to understand everything, and unless you are paying many people with domain-specific knowledge, this increases costs (time spent learning new knowledge).

    https://xkcd.com/2347/

    Specific tasks focused on one function can still achieve amazing performance when programmed at a bare-metal level, with minimal layers of abstraction, on modern hardware. The problem is that this is complex, takes time and requires great software developers, and most are mediocre. Anyone who has ever worked as part of a large team knows that there are usually one or two, possibly three, truly exceptional people in a team, 90% mediocre (at most things, with a few peaks in their knowledge) and a brain-dead idiot or three who are usually there due to some form of nepotism. And as scary as it is, this pattern translates to most jobs in life, doctors and surgeons included. Anyhow, for bare-metal development you mostly need the cream of the crop.

  17. You can experience what is possible with carefully optimised advances in math, insight and algorithms by looking at the sizecoding and demo scenes. Same old hardware, more functionality plus more performance. Looking at more mainstream targets, you sometimes find this in the low-power embedded segment.

  18. A long time ago I had a video of my Mac SE cold booting into the OS, then loading Excel and a fairly chunky worksheet, before my “modern” (at the time) quad core got past the Windows splash screen. Then I opened a JPEG on it that I took with my crappy $30 feature phone (so 640×480)… curb your enthusiasm.

    1. That’s because Steve Jobs demanded that the original Macintosh boot in less than 10 seconds and he would not release the developers from the project until they made it happen!

  19. When I left Google almost 5 years ago most PCs had at most 4 processors, but Google gave its developers six Xeon processors and 64 GB of RAM. $3,500 MacBooks were used as dumb terminals because people coded on their desktop machine and compiled and tested on clusters with a thousand or more machines. I worked on the most popular service at Google and it took 230,000 compiles & runs to perform a regression test! The total cost to run one CPU thread in a data center for a year was $20. There should be something called “Fred Brooks’s law” which says that software gets slower by a factor of two every 18 months!

  20. In other news, liquid water is wet. This has always been the case. Sometimes justifiable, sometimes not so much. Not to be rude about the author nor the article, but part of the charm of the OP is that this is similar to teenagers thinking they invented sex.

  21. The biggest reason for slowdowns is the personal data collection built into every system and added program. For a better idea, just hit Ctrl+Alt+Delete in Windows, bring up the Task Manager, and look at what’s really going on. As I write this I count 8 Firefox subprocesses running.

    1. Yes, exactly that. But it’s not just a Windows / Google thing; all programs now include some sort of data mining and reporting, if only to help developers find which functions of their software are used most…

      There is also the 2023 expectation of being able to find anything, including inside the content of files, using search (be it Windows search or GNOME search), leading to a rebuild of the search database at every boot.

      Sometimes I wonder what it would be like to reinstall Windows 3.11 on my 2016 laptop.

  22. Same thing with phones. It was a hard time after the old Nokias when I had to move to “new” full-screen phones. New phones are still laggy as hell compared to old phones.

  23. I have an old 386 with a handful of MB of RAM and a 210 MB HDD on which I have installed Windows 3.1. It’s impressive how fast it opens Excel spreadsheets, and LabVIEW runs better on it than on my 1k€ laptop.

    1. UEFI is a bloated OS that’s pre-installed in flash ROM, so it’s easy to “boot” into full-fledged games already; they would merely be loaded as a payload. There’s even a graphics and network API.

  24. I’ve long advocated that developers should be forced, for one day a week, to use the lowest-spec (say, 2nd-percentile) machine found among their userbase.

    The problem would fix itself real fast.

    1. We are like cats.

      You can’t force us to do anything except quit your chickenshit org.

      Give them a super slow VM for testing, and they will stop all testing. ‘We ship no code to QA until it compiles AND links.’

    2. I worked at DEC (DECades ago). The VMS developers got the hottest computers; for the rest of us, the best we could do was prototypes, or stuff that would barely run. And thus, VMS could barely run on a VAXstation 2000, which maxed out at 4 MB (and DEC stopped a third party from selling larger memory boards).

  25. The abstraction layers and current coding best practices also produce a security feature as well as an attack surface. The OS and the hardware (including firmware) are abstracting at multiple levels. The hard drives still talk like they use addresses, but the internals aren’t like they used to be. The processor does safety checks, and the compilers have abandoned silicon-specific optimizations in favor of cross-platform compatibility. The software no longer takes as many optimizing shortcuts. On the other hand, what we used to do fast on bare metal was a gaping security hole anyone could walk through trivially. We haven’t solved all the issues with security by far, but that old hardware that ran fast ought not to be put on any modern internet connection, because the way to make it secure is to update it until it is no longer attractively fast.

    The performance we have looks bad on paper, but I never want to go back to our primitive code in a world that would eat it alive.

  26. oses have certainly become more storage-performance-dependent. a consumer laptop built to cost is going to skimp on the storage, using an older gen3 nvme or sata drive.

    was testing windows 11 on my previous rig, and it performs like molasses. it’s 8th gen intel, with 3rd gen nvme storage. the machine was snappy on win10. i have an even older 4th gen intel machine with sata storage running windows 10 ltsb that outperforms it. and these are all well built machines with no significant bottlenecks.

    it begs the question, “what the hell is windows 11 doing?” the os’s place is to load what i need to run my software and get the hell out of my way. linux does this quite well. too bad ltsb versions of windows are not sold on a prosumer license and you need an enterprise license to use it (legally).

  27. Boot XFCE-based Linux and fire up Lapce as a text editor and see which is faster, the new or the old. There’s still modern, relatively unabstract software development that screams speed and is way more modern in features and UI.

  28. Try a software *downgrade*. Did you know that you can quite satisfactorily surf the web, watch videos and listen to music… on a Raspberry Pi 3B!?!? At less than 2 W of energy consumption, you can have it all with Android 10.

    1. “Did you know that you can quite satisfactorily surf the web, watch videos and listen to music…on a Raspberry PI 3B!?!?”

      Oh god, no! I’ve essentially been doing this for years. I started with a Pi 3, then moved to a Pi 4 (because the Pi 3 was more valuable: it can decode old video formats in silicon and can boot old non-Linux SD images of older projects).

      Here’s my user experience: command line programs work fine, and so do Qt applications.
      DOSBox-X with shaders is incredibly slow; Cool Retro Term is, too.
      Websites are very sluggish, even with AdBlock installed; they take 2-5 seconds to render (like GMX.DE) due to bloat and tracking. I used wired networking, no WLAN.

      So no, personally, I can’t recommend it. Unless you have nerves of steel. The Pi is good as a Pi-hole system, though.

  29. As someone who grew up with pre-Windows OSes and has been working in computer hardware/software/cyber security for over 20 years… there are many reasons!
    For example, programmers used to squeeze every ounce of memory/processing out of machines… Now a common thing I hear is “just use more memory, it’s cheap.”
    As for Windows, a large performance hit comes from the virtualization layer, the antivirus program, the hard drive… The old OSes did just what you asked; the new OSes do so much behind the scenes… I mean, just look at how many active logs Windows 11 has compared to Win XP/Win 7… And that’s just logging.

  30. Although the slowness is mostly related to graphical interfaces, it is quite easy to find the culprits: object-oriented languages first, then the growing dependency on languages that are non-compiled or generally much slower than C. Each of those added a huge layer of complexity that translated into more code to be executed, and by extension more time needed for execution.

  31. An example of this is the ES&S ExpressVote XL voting machine. It uses a Windows OS yet doesn’t even need an OS: it is fundamentally a simple state machine. This makes some operations slow. For example, it takes a few seconds after the ballot card is deposited in the ballot box for the indicator light on the stalk to change state.

  32. Slow reaction to user input is highest on my list of pet peeves. I must flood a text buffer at least a dozen times a day, and I probably only do about 60 WPM unless I’ve had wayyy too much coffee. It’s particularly annoying when you’ve got a really long thought and it hangs 3 words in: what the actual flaming hell is it doing??? Maybe you keep typing hoping it’s just display lag, then suddenly a whole paragraph appears, or maybe you just get the flashing cursor at the end of the words you can see. In particularly egregious situations it recognises things in spurts and randomly misses chunks in the middle.

    Point and click isn’t immune from this type of lag either: mouse tracking lag means it sometimes registers a click on a position an inch or two back along the mouse path, the wrong thing gets clicked, and of course it’s also the thing that will slow the system down the most for the next whole minute, and you can’t ctrl-W/X/Z/F4 it fast enough. It’s also amazing how it’s super snappy to do the wrong thing with a millisecond grazing click that will take forever to back out of, but for the thing you actually want, you’ve hit the button 10 freaking times, held the click for seconds, and it won’t freaking launch… gah.

    I mind less about things taking a long time to happen when you have definitively told it to do something, but when you are still in the middle of telling it and it goes off on a digital daydream I want to kill something.

      1. I recommend trying out an Atari ST. Its input delay is very short compared to a Windows PC:
        the characters are drawn on screen almost instantly.
        No, seriously. A 1985 Atari “PC” is superior in this respect to any PC made in the past 30 years.

  33. It’s pretty sad when I think about how the fastest-booting computer I ever owned had only 256 MB of RAM and a 333 MHz processor, and ran Windows 98. It took 15 seconds to boot. From an HDD.

    It’s even more sad to think about how more and more of the CPU power and RAM capacity is allocated to the OS, yet it runs slower than ever before. What good are my 16 GB of RAM, quad-core 3 GHz CPU and SATA 3 SSD when a computer from 1999 smokes it?

    And consumers don’t even really notice. Or care. Lol

    1. 15 seconds should not be the fastest booting computer you have. My aged Latitudes with older SATA SSDs can boot Win10 in under 15 seconds in most cases (maybe not after an update) and they aren’t even remotely modern.
      Win10 hibernates much of the kernel and prioritizes booting quickly, so it should not be hard to boot it very quickly on recent machines, especially with fast start enabled.

  34. “Software is getting slower more rapidly than hardware becomes faster.” Professor N. Wirth of ETH Zurich popularized this adage, now named Wirth’s law, in his 1995 paper “A Plea for Lean Software”.

    1. That’s very informative, thank you. 😃👍

      Though, I must also say that he wasn’t the first one to notice.
      Everyone in IT with a little bit of sensitivity knew that. It was an open secret, so to say.

      My own father, for example, worried about this development before Wirth wrote it down.
      Anyway, I don’t mean to discredit him. I just mention this for the historians who may read these comments in the future.

      Because these days, people seem to seek out idols, people who are special and above others.

      Because, from what I read online, it seems that people of today – who grew up with social media – tend to disbelieve that their ancestors were fully capable of drawing the same logical conclusions independently of each other, without any exchange of information. But that’s another story, likely.

  35. Code bloat on all platforms is obscene.
    A 350 MB app on Android just to access my Bank of America Account!?
    The push by every developer to make an app that I have to download, install, and maintain, when it really would be better addressed as a web page.
    Web pages that offer about 3 lines of content, but with 90% of the screen as ads.

    Every time I use the “inspect” button in Google Chrome I’m shocked at how much crap is going on behind the scenes in the browser. A HUGE amount of stuff just to manage a constant pipeline of annoying ads. Much of it is so poorly constructed that defective ad servers either stall or slow the web page loading. Or the webpage is constantly, jerkily reformatting every time an overloaded ad server catches up with a tardy ad.

    Money ruins everything; ads have ruined the internet.
    Google is NOT a software company, they are an advertising company. They just use software to pump them out.

    9 of 10 SPAM emails I receive come from Gmail accounts.
    Google has ruined email as well. Google doesn’t care.

    Google has ruined YouTube. It’s now 30-sec content with 30-sec ads. Repeat

  36. Well, when one considers that RAM back in the day (and even later on) was expensive as all get-out,
    programmers back then were pretty much forced to write tight, efficient code to get as much use out of the available RAM as they could. Nowadays there are machines with multiple gigs of RAM, multi-terabyte hard drives, and available libraries like .NET 7. There really isn’t an incentive to write tight, efficient code anymore because there will always be enough RAM and storage. The reliance on the .NET architecture is another reason.
    Why write a routine that’s already in the .NET library regardless of whether you can write a more efficient version or not? Sure, hardware has improved immensely since the days of the 286 and EGA graphics, and programmers have (and rightfully so) taken advantage of hardware improvements.
    For a good example of this, look at the original game DOOM, then a game called DOOM Eternal.
    Both good games, both released at different times. The original DOOM was great for what it was, running on the hardware of its time. Should that original code be rewritten today, would you spend the time writing only for the hardware the original DOOM ran on, or would you take advantage of the latest hardware and graphics and write for that? I liked VB6, I even own the Enterprise edition of it, and while I still retain the knowledge to code in VB6, advances in technology and software development, as well as 16-bit software not being able to run under a 64-bit operating system, forced me to learn VB.NET. There are rumors that VB.NET is not as popular as it once was. Looking at Microsoft’s help, all of the examples are in C# now.
    A lot of people were quite upset when VB6 was tossed along the wayside. The software had to evolve.
    Hardware was getting more advanced. So, as a programmer today, would you write code in VB6 knowing it can’t run on a 64-bit OS? Does having multiple gigs of RAM and storage mean I should write bloated, inefficient code “just because”? Not to me. Write the best code you can. Look at Windows, and think about the seemingly limitless combinations of hardware it has to run on. Technology marches on.

  37. OK, so this is off. 1) Software can get slow when you’re running older hardware because it has to emulate tasks that are usually embedded in newer hardware, such as AI and accelerators. My less than 2 year old laptop running Windows 11 developer edition runs butter smooth. The Ryzen 7 4800H with my dedicated graphics screams; basic tasks open in no time at all. 2) Windows doesn’t fragment half as badly as it used to, thanks to improvements in code. New advancements and upgrades to hardware and rendering, along with audio, video, gaming and personalization, all need more computing power and performance to run properly. All of this requires an enormous driver list: the more accessories, and the older they are, the bigger the driver files and the more updates required, and most don’t clean up after themselves, so running a cleaner regularly does help. 3) SSDs are great, but for decent capacity an HDD is still much cheaper and the newer ones are faster; with an HDD you get more for your money. For the same price as a new 256 GB SSD you can get a 2-4 TB HDD. In my 30+ years building and programming PCs I’ve found SSDs to be more prone to dying than HDDs, and when an SSD dies, it dies. An HDD can be repaired; as long as you don’t break the platters you can get the data. But if you want a fast PC, even a cheap 256 GB or 512 GB drive to run your operating system and maybe a couple of games on will speed up your system, though I recommend an HDD for backup in case your SSD fails.

      1. That’s not what he is saying at all…. He recommends SSDs for boot drives but still wants you to have HDDs for proper data storage.

        Although I do disagree with OP. I don’t know what jank SSDs he’s running that die on him, but I still have a perfectly fine Samsung 830 in use, which I believe is one of the earliest models from Samsung. I do monitor the temps and health religiously, though, so I guess that plays a role as well.

        1. Well Seph, I’m old school, way back to the 70’s :) SSD is new to me. Never had one.
          Yes, SSD’s may be fast etc. but the technology I have still works. To each their own.
          I’ll have to find out the model of SSD my friend used. He even took it out of his system and plugged it into a USB adapter. No dice. At least with a HDD, I know it’s something I can fix if needed. I’m of the opinion however, that you can have the fastest, most efficient system money can buy, but if the software is poorly written, you negate the advantages of a fast system. For instance, my AMD FX-8150 system is coming up on I believe 14 years old. It still runs FireFox quite well, still loads webpages and I can still play Star Trek Online. Recently when online with a client, the system would start randomly freezing or just power completely off. I figure as much as I like my system, it’s time to upgrade.
          I don’t know what to expect with this new system, but I’m sure it will be leaps and bounds above what I currently have.

          1. It depends a lot now on what your use case is. Getting a top-of-the-line CPU makes little sense unless you are going to be constantly video editing, compiling code, or something more. For general purpose computing there is little sense in spending that much money. Get a decent PC, but do get yourself a nice quantum dot monitor and you will be happier than with a 7950X. RAM speeds and configuration also matter a lot now, depending on the platform. For AMD you want at least 6200 MHz and only two slots populated; with more than two sticks the controller downclocks the RAM.

            As for SSDs… you cannot go wrong with top-of-the-line WD or Samsung for gaming use, and a mid-range SSD isn’t bad either. Just make sure that your main OS drive is one with a higher TBW rating and a DRAM cache. Also try to keep the temps low; I find that the health deteriorates fast when I let SSDs run without at least a heat sink on them.

          2. Back in the ’70’s – an IBM 2314 disk cartridge (iirc) was 14″ diameter, held 1 MB, and on the IBM 1130, about as fast as a 3.5″ floppy. And less reliable. (The metal vent plate would come loose, and get sucked against the disk.)

    1. “For the same price of a new 256gbSSD you can get a 2-4 tb HDD.”
      I see 256 GB SSDs regularly going for $20-30; please let me know where you’re getting a 2-4 TB HDD for that price. In fact, by now only 3.5″ HDDs are significantly cheaper than SSDs per TB; 2.5″ HDDs are struggling to stay competitive.

  38. Improvements are rarely seen, IMO, although when UEFI came along I built a PC for a friend about a week after I built my own. I had a 9-second boot time from cold; he, on the other hand, was sub 2 seconds (and I think 1 second of that was LCD warm-up). I used basically the same hardware and software for the process; the only key difference was the motherboard, everything else was identical.

    But yes, this focus has been lost on PCs for years; this is where tablets have had the focus.

    On my work PC, I can count on having to wait 10-12 minutes for startup (Active Directory and corporate scripts).

    We are not really speeding up anymore, we are just parallelizing. My 266 MHz, 32 MB RAM, Voodoo2 machine started Quake 2 faster than I can today with a Ryzen 3600, 32 GB RAM, and a GTX 1080, even with the entire game launching from ramfs.

  39. One thing we paid a massive performance penalty for, but that was certainly worth it: Unicode/UTF-8, combined with a sprawl of text-based rather than fixed-record-length binary file formats.

    Fixed length binary: You can often literally read the file in one go and type-pun it into your internal representation. Comes with issues though :)
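
    A minimal sketch of that idea in C (the record layout and file name are invented for illustration, not taken from any real format):

      #include <stdio.h>
      #include <stdint.h>

      /* Invented fixed-width record: every entry in the file is exactly
         sizeof(struct record) bytes, so record N sits at offset N * sizeof. */
      struct record {
          char     name[32];      /* fixed width, padded, no parsing needed */
          uint32_t quantity;
          uint32_t price_cents;
      };

      int main(void) {
          struct record r;
          FILE *f = fopen("inventory.dat", "rb");   /* made-up file name */
          if (!f) return 1;
          fseek(f, 7L * sizeof r, SEEK_SET);        /* jump straight to record #7 */
          if (fread(&r, sizeof r, 1, f) == 1)       /* one read, type-punned into the struct */
              printf("%.32s: %lu\n", r.name, (unsigned long)r.quantity);
          fclose(f);
          /* the "issues" are exactly what this bakes in: struct padding, endianness, ... */
          return 0;
      }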

    Text based formats in the ASCII/ISO-8859 era: You could at least take advantage of text in memory being naively indexable and comparable at the binary level, mostly. Bugs and complications tended to discourage users from attempting to use non-English, non-Latin languages (you had to do things like set the language on your text based printers in the 1980s. In the 1970s, there were terminals that needed hardware changes to change the language specific character set.).

    Unicode/UTF-8: None of the problems. None of the speed, either.
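
    To make the “none of the speed” part concrete, here is a rough C sketch (mine, not from the comment) of why “give me character N” stops being a single array lookup once the text is UTF-8:

      #include <stddef.h>

      /* ASCII/ISO-8859: character N is simply s[N], one indexed load.
         UTF-8: code point N has to be found by walking the bytes and
         skipping continuation bytes (the ones shaped like 10xxxxxx). */
      const char *utf8_index(const char *s, size_t n) {
          for (; *s; s++) {
              if (((unsigned char)*s & 0xC0) != 0x80) {   /* lead byte of a code point */
                  if (n == 0)
                      return s;
                  n--;
              }
          }
          return NULL;   /* string has fewer than n+1 code points */
      }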

    And another thing to do with text: we use printer-grade fonts that need floating-point calculations, a far cry from the fixed bitmap fonts of yesteryear. No one would think twice these days about using floating point for anything graphical, rather than restricting functionality to what can be done in integer space.

    On an MS-DOS era system, displaying text from a file (not a plain text file!) could be as efficient as literally loading a fixed offset and length from that file… and type punning it straight into the video card’s memory (VGA cards even today have these old-school text modes; that’s why plain VGA consoles can be insanely fast compared to framebuffer consoles…). Pop up an (ASCII art) dialog and make it go away again? Map the 4K video memory page holding your text to a variable, save the complete content into another variable, render your dialog, and when finished just restore those 4K into video memory.
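
    Roughly what that trick looked like, assuming a 16-bit real-mode DOS compiler in the Borland/Turbo C style where MK_FP from dos.h builds a far pointer (a sketch of the technique, not a tested program):

      #include <dos.h>   /* MK_FP: build a far pointer from segment:offset */

      #define SCREEN_BYTES (80 * 25 * 2)   /* 80x25 text mode, char + attribute per cell */

      void popup_and_restore(void) {
          unsigned char far *vram = (unsigned char far *)MK_FP(0xB800, 0x0000);
          static unsigned char saved[SCREEN_BYTES];
          unsigned i;

          for (i = 0; i < SCREEN_BYTES; i++)   /* save the complete screen content */
              saved[i] = vram[i];

          /* ... render the dialog by poking chars/attributes straight into vram ... */
          vram[(12 * 80 + 30) * 2]     = '!';
          vram[(12 * 80 + 30) * 2 + 1] = 0x1F;   /* white on blue */

          for (i = 0; i < SCREEN_BYTES; i++)   /* dismiss: just copy it back */
              vram[i] = saved[i];
      }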

    1. I would certainly think twice about using floating point in graphics programming. Much can be done faster and more accurately in integer arithmetic; oftentimes it is just a matter of optimizing the calculations.
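
      As a small illustration of the kind of integer trick being alluded to (my own sketch, not something from the comment): 16.16 fixed-point arithmetic lets you interpolate a value across pixels with nothing but integer adds and shifts:

        #include <stdint.h>
        #include <stdio.h>

        /* 16.16 fixed point: upper 16 bits integer part, lower 16 bits fraction. */
        typedef int32_t fix16;
        #define TO_FIX(x) ((fix16)((x) * 65536L))
        #define TO_INT(x) ((int)((x) >> 16))

        int main(void) {
            fix16 v    = TO_FIX(0);          /* value at the left edge    */
            fix16 step = TO_FIX(10) / 64;    /* rise of 10 over 64 pixels */
            int i;
            for (i = 0; i < 64; i++)
                v += step;                   /* one add per pixel, no floats */
            printf("%d\n", TO_INT(v));       /* prints 10: exact, since TO_FIX(10)/64 has no remainder */
            return 0;
        }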

  40. Back in the 20th Century I had a Toshiba Tecra 800 laptop, and it didn’t have the top optional CPU. It came with a super slow 4200 RPM hard drive (IIRC 40 gig) and took over 5 minutes to boot Windows 98. I splurged on a 5400 RPM 120 gig drive and just copying the OS to it resulted in boot time dropping to just over 2 minutes.

    That was still super ultra slow compared to the first 100% new desktop my parents got around then. From touching the power button to having Windows 98 100% loaded and ready to use, hard drive thrashing stopped, was 45 seconds. *Heaven!*

    I haven’t timed my 6-core Ryzen 5 with 32 gigs of DDR4 and a PCIe 4.0 x4 NVMe boot drive, but it takes way longer than 45 seconds to complete bootup.

    What’s quite annoying about Windows 10 is how bloated it’s become. At launch it was the first Windows to have *lower* system requirements than the previous version, and those reqs weren’t rainbows and unicorn farts. Windows 10’s early years would run very well on pretty much any dual core x64 CPU with 4 gigs RAM.

    But the boot time of ALL PCs ever is beaten handily by 1980s microcomputers like the Commodore 64, TI-99/4A etc. Flip the power switch and they’re ready, fast as turning on a light. Why can’t internet routers be made to restart that fast? It’s all solid state stuff in the box, or brick, or cylinder. Instead of s l o w l y copying the system software from a storage chip to RAM, why not develop ultra-fast writeable solid state storage and execute the OS directly from there, like the light-switch-fast booting 1980s micros? Any stuff that needs to be saved for settings, or regularly updated, could be put into some type of flash storage.

    With the technology available now, there’s no reason the T-Mobile gizmo in the back bedroom shouldn’t be able to reboot in four seconds instead of four minutes. And why do all these things pack up every so often, stop letting data through, and need to be restarted? How has the internet device industry not figured out how to make a 100% won’t eff up and freeze up system after all this time?

    1. 45 seconds to completely boot up? What tf do you have running at startup that takes that long? Mine is up and running within 10 seconds on a 13600K, 3200 C16, and a WD 850X… and that’s WITH a 10G fiber NIC delaying the boot.

      Just because something’s wrong with your PC doesn’t mean that’s the average :)

      1. Are you just gloating with random specs?

        If you don’t have anything nice to say, just shut up. Your computer does not boot in ten seconds, you never even measured it.

          1. Nope. I am telling you that you are misconfigured… and telling you my specs as a point of reference. A 13600K is a run-of-the-mill midrange CPU, nothing to gloat about. But it did show that you do not really understand much about current hardware and got defensive.

            Reading the rest of your RANT about devices not starting *fast enough* for you makes me certain that you are talking out of your ass like a grandpa and are the one who needs to shut up. I’ve been running my own smart home and have yet to restart any of the IoT devices because they *randomly* hung up… and yes, my PC does start in 10 seconds, and my Surface starts even faster than that. Don’t be green with envy when you are obviously not good at deploying your stuff, and don’t be rude to others for no reason.

        2. Your comment seems unwarranted. I’m not Seph, but even my ancient Latitudes (e6230, e6430, etc) with older SATA SSDs can boot Windows in around 12 seconds.
          And that’s on crap older hardware.
          If your Windows install is taking even 30 seconds to boot, something is wrong. Anyone at work with a machine taking that long to boot would send me a ticket.

      2. Well when my Windows 10 boots, on a good day, it’s relatively quick. On other days there’s this blue spinning circle that can sit there for many, many minutes, or even longer. It would be one thing if Microsoft gave a progress-bar and informed me that “Windows is tidying up some recent updates and this will take 5 minutes”. At least I’d know to go get coffee. But instead all Microsoft can offer is a blue spinning circle. I’ve asked various IT folks, and they know of the problem and can’t figure out why Microsoft refuses to keep users informed.

        Oh, and on a less good day, Windows will stall on the opening screen. When I finally decide it’s broken, I do Ctrl-Alt-Del and it *immediately* brings up the password screen, and I can log in. So what was it doing that was so damn important?

    2. “With the technology available now, there’s no reason the T-Mobile gizmo in the back bedroom shouldn’t be able to reboot in four seconds instead of four minutes. And why do all these things pack up every so often, stop letting data through, and need to be restarted? How has the internet device industry not figured out how to make a 100% won’t eff up and freeze up system after all this time?”
      Those companies think they can do better than OpenWRT. The answer is to replace the stock firmware with OpenWRT if there’s an official build for your device. All of the routers I have run either OpenWRT or DD-WRT, they just work.

      1. As an old DD-WRT user (10+ years) I got impatient because of the lack of Wi-Fi 6 support, so I switched to an Asus router and installed Asuswrt-Merlin. It’s been more stable than DD-WRT so far, and I get to have Wi-Fi 6.

  41. Modern “devs,” as they are trained at our universities, completely depend on runtime environments, “high-level” languages and tons of libs. Finishing code fast is embraced just as much as fulfilling only the bare minimum necessary.

    Most of them have no clue about hardware, memory management, OS specifics, code optimization and the like. Some are unable to even write a simple sorting algorithm (the kind of thing sketched at the end of this comment).

    They just orchestrate 20 Java libs and call it a day when 149 MB of JARs fulfill the work of 100 lines’ worth of code.

    Pathetic.
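
    For reference, the “simple sorting algorithm” in question is about this much plain C (an insertion sort, included here purely as a sketch of the baseline being described):

      /* Insertion sort: shift larger elements right, drop the key into place. */
      void insertion_sort(int *a, int n) {
          int i, j, key;
          for (i = 1; i < n; i++) {
              key = a[i];
              for (j = i - 1; j >= 0 && a[j] > key; j--)
                  a[j + 1] = a[j];
              a[j + 1] = key;
          }
      }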

    1. +1

      That’s as if students in math class were being taught to use scientific calculators rather than doing calculations on paper.

      I mean, of course, both have their place.

      But the basics should never be omitted.
      They should be shown and explained once, at least.

      If the students decide to work with calculators only at some points, it’s fine.
      But they should be given the chance to decide for themselves.

      Schools should provide all sorts of education and support them on their way, without trying to manipulate them.
      That’s the only way to help them to evolve into mature, independent and self-thinking individuals.

    2. … I don’t know what shitty university you encountered… but in my computer science degree we are taught not only theoretical computer science but in-depth data structures, algorithms, machine code, compilers, ecc, data encoding and more over 3 semesters. That’s separate from actual programming classes :)

      And yes… you have to be well versed enough to have both the conceptual knowledge AND the implementation at the tips of your fingers in order to pass those exams.
