Optimizing Linux For Slow Computers

It’s interesting to consider what constitutes a power user of an operating system. For most people in the wider world, a power user is someone who knows their way around Windows and Microsoft Office, and can help them get their print jobs to come out right. For those of us in our community, and in particular for Linux users, it’s a more difficult thing to nail down. If you’re a LibreOffice power user like your Windows counterpart, you’ve only really scratched the surface. Even if you’ve made your Raspberry Pi do all sorts of tricks in Python from the command line, or spent a career shepherding websites onto virtual Linux machines loaded with Apache and MySQL, are you a power user compared to the person who knows their way around the system at a lower level and has an understanding of the kernel? Probably not. Power usership is like climbing a mountain of false summits: there is always another layer above you.

So while some of you readers will be au fait with your OS at its very lowest level, most of us will be somewhere intermediate. We’ll know our way around our OS in terms of the things we do with it, and while those things might be quite advanced, we’ll rely on our distribution packager to take care of the vast majority of the hard work.

Linux distributions, at least the general-purpose ones, have to be all things to all people, which means that the way they work has to deliver acceptable performance across multiple use cases, from servers through desktops to portable and even mobile devices. Those low-level power users we mentioned earlier can tweak their systems to extract any remaining performance, but the rest of us? We just have to put up with it.

To help us, [Fabio Akita] has written an excellent piece on optimizing Linux for slow computers. By which he means optimizing Linux for desktop use on yesterday’s laptop that came with Windows XP or Vista, rather than on that ancient 486 in the cupboard. To a Hackaday scribe using a Core 2 Duo, and no doubt to many of you too, it’s an interesting read.

In it he explains the problem as one of responsiveness rather than raw hardware performance, and investigates the ways in which a typical distro can take away your resources without your realising it. He looks at RAM versus swap, examines schedulers, and tackles the thorny question of window managers head-on. Some of the tweaks that deliver the most are the easiest, for example the Great Suspender plugin for Chrome, or making Dropbox less of a resource hog. It’s not a hardware hack by any means, but we suspect that many readers will come away from it with a faster machine.
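
As a taste of the genre, and as a sketch from us rather than a line lifted from [Fabio]’s article: one classic knob is the kernel’s swappiness setting, which governs how eagerly idle pages are pushed out to swap, and lowering it is a common first tweak on a RAM-starved desktop:

    # check the current value (0 = avoid swap, 100 = swap eagerly; 60 is a typical default)
    cat /proc/sys/vm/swappiness
    # try a lower value for desktop use
    sudo sysctl vm.swappiness=10
    # make it survive a reboot
    echo "vm.swappiness=10" | sudo tee -a /etc/sysctl.conf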

If you’re a power user whose skills are so advanced you have no need for such things as [Fabio]’s piece, share your wisdom on sharpening up a Linux distro for the rest of us in the comments.

Via Hacker News.

Header image, Tux: Larry Ewing, Simon Budig, Garrett LeSage [Copyrighted free use or CC0], via Wikimedia Commons.

100 thoughts on “Optimizing Linux For Slow Computers”

    1. I was going to say the same thing, you beat me to it.

      A lot of time is spent optimizing other distros trying to make them run as smooth as Slackware. Much time can be saved by simply running Slackware from the start.

      1. Forgot to add that I’m using Slackware 11 or 12 on a 133MHz Pentium IBM Thinkpad 760ED with 64MB of RAM, running COMMAND LINE (GUI is way too slow), using LILO to choose between Windows 98, Slackware or FreeDOS (I HATE FREEDOS – I’d rather use MSDOS)

        1. P.S. I’m NOT using it right now. I use it mostly with Win98 to use MS Word, Excel, Access, Visio, Solitaire, note taking – I also have a Cardbus USB Adapter and it reads USB Flash Drives.

          1. I’d suggest staying away from MS Word, Excel, Access, Visio as well.

            With Visio and Access, I’ve had data in older versions that wasn’t supported in newer versions.

          2. Eugene, all you have to do is export your Access tables to CSV format. I think it shouldn’t be much of a problem to import the CSV files into your newer version of Access.

          3. The replies must be nested too deep, I can’t reply in the right place. I’m not saying Office 2000 is bad; what I’m saying is you run the risk of later having issues opening your documents in newer versions of MS Office. Visio was one big one that simply refused to import old versions, and Access 1.0 to 2.0 was the same. Sure, I could export data, but all my forms and other code I had to manually copy and paste over. I had a couple of documents which had issues going from Office 97 to Office 2000 as well. I run into this at work too, where I find an old document stashed away and Office 2013 throws errors and warnings opening it, and some of the embedded objects go missing. I found MS Office to not be the most future-proof. In contrast, Open/LibreOffice can open a doc I wrote years ago with no issues. I know I have some dating back to my Linux migration in 2002 that work fine.

    1. Been there, done that, almost. BlueCat Linux running on a 386SX25 with 4MB of memory. The application recorded OBD-II data, 3 axes of accelerometer data, a serial-attached LCD interface, GPS, and a cellphone module running a PPP link back to a server. It collected data for the NHTSA on how people drive, crash detection, and some other things. Kinda cool, but a BeagleBone Black or RPi3 would have made it a lot easier :)

      1. Ohh yes… dial-up Internet. And kids these days complain when they get less than 1Mbps! Luxury!

        I still have the 14.4kbps modem that comprised the physical layer of our first Internet connection… and yes I have turned it on and used it this century, to do UUCP networking with some SCO OpenServer boxes as it was the nearest I had to the Netcomm modems they typically used.

    2. Been there, done that. I remember MCC Interim and TAMU and loading up 18 3.5″ floppy disks. Anyway, my first thought would be to upgrade what you already have. I got a rather slow laptop a few months ago and discovered I could replace the CPU. Did that, added more memory, upgraded the hard drive and now it’s somewhat respectable. After that, skip the large distros and go lightweight.

      1. Yeah, I stick with Lubuntu on most “this century” systems and Puppy on any “last century” ones still worth using.

        Laptop CPU upgrades are very doable if you find you’ve got a socketed chip, and not the fastest one ever available on that model. Make sure you’re going to stay within the max TDP the cooling solution was designed for, though. I’ve even got one overclocked: an Aspire 1690 with a 1.6GHz Pentium M, so I banged the bus up from 400 to 533 with a socket mod and it’s happy at 2.0GHz…. I think that was a ~25W CPU in the first place though, and the TDP limit was up to ~31W, so it had some room.

    3. I found a couple of root/boot floppies the other day, actually.

      Running Linux wasn’t too bad on a 386. It was wanting to recompile the kernel that hurt! Kick off the compile and go to sleep, and you’d often still be waiting for it to finish in the morning. Hope you selected the right options…
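
      For those who never had the pleasure, the 2.x-era ritual went roughly like this (from memory, so treat it as a sketch; the exact steps varied by kernel version):

        cd /usr/src/linux
        make menuconfig      # choose your options carefully...
        make dep             # dependency pass, pre-2.6 kernels only
        make zImage          # the overnight part on a 386
        make modules && make modules_install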

      1. Linux on floppy,

        Well there is Basic Linux 3.5[Link]

        A bit old, but I once got a 486DX?/Pentium? (1st gen) palmtop, kinda like this one but widescreen and uglier.

        I got it from the boot market for £0.50 LOL.
        When I finally got bored, I tore it apart for fun and found the CPU PCB. The northbridge-like chip and the CPU dies were bonded to a custom PCB that reminded me of today’s Core i-series laptop BGA chips.

        (Hope this makes it through moderation, thanks)

  1. Or you could just fire up Windows XP.

    A lot of people don’t know this, but Microsoft still maintains it. It’s called “Windows Embedded Standard 2009 POSReady”. It’s basically a branch of Windows Embedded Standard 2009 (which itself was derived from Windows XP Embedded), but it comes with all the usual components you’d find in a Windows XP Professional installation. It’s pretty much what SP4 would have been, had they released another service pack. It requires no online activation, and receives patches through Windows Update regularly (until 2019, I believe).

    The only real downside is that it’s 32-bit. However, it runs fine on my T42p, which is still a capable system for web browsing (using Pale Moon) and email and office stuff with Office 2007 (yes, there are versions of that out there that don’t require activation either). No problems doing all that stuff in 512MB of RAM. It’s not as “useful” as my workstation running Windows 7 and all the snazzy 3D stuff I use on a daily basis, but it’s still a great system to grab and use wherever when I need something more flexible and usable than any of my modern-day handhelds.

    Not posting links for obvious reasons, but this stuff isn’t too hard to find.

    1. The last time I ran Windows XP was 2002. Between the malware (we didn’t have much browser choice then), the inability to resume reliably, USB issues, constant swapping even though I had plenty of RAM free, and the constant need to reboot, if not for updates then because it became unstable, XP was such a letdown after the performance and stability of Windows 2000 that it’s what finally drove me to Linux. I still have that same old laptop from 2002 and it still runs Linux just fine.

      1. You should give it another try then. I personally had very few problems with XP. Linux on the other hand often wouldn’t find the drivers for the wifi or power or screen, so I’d have to bring out the Cat5 and then spend the rest of the week making it work like Windows, copying marked strings of commands (often with errors, so no copy pasta) with a community of folks that were less than helpful. Both OSes have come a long way. Linux is waaaaay more user friendly these days than it used to be. That being said, it still has room to grow. The various “distros” are a game in themselves.
        As far as compact Linux for older machines, I have had luck with DSL and Puppy Linux (which almost always works great), and lately Linux Lite (I never could get CrunchBang to work reliably, fwiw).
        You can also try the Raspberry Pi’s PIXEL OS flavor. It ran fine, but graphically you go back to Win 3 lol.
        Good luck :)

        1. Kids still have XP laptops, and it still has all the same problems. Suspend and gamble on whether it will resume. Plug in USB and maybe it will find drivers and work. Constantly having to run anti-malware tools, constantly hearing them gripe about forced reboots. Son opens Minecraft and it starts swapping, and we get to watch the hard disk light flash instead of playing a game.

          Rebuilding XP machines averages about every 6 months with the kids, even with me constantly keeping them locked down and maintained. XP is just too much work.

          I’ve had work-provided laptops with XP, same issues: bluescreens at random, failure to resume, random times when it just slows down and starts swapping RAM to disk. XP was a sad, ridiculous product that was forced on us for nothing else but to integrate IE into the OS to win the court case. It wasn’t technically better than W2K, it was only a political move.

          My daughter’s old netbook had Windows 7 but caught the Windows 10 virus because I hadn’t applied that week’s trick to block the Windows 10 upgrade, and it took 28 minutes to boot Windows 10. I bought her a Dell Latitude and took the little netbook, and it takes Slackware 28 seconds to boot.

      2. 2000 is pretty good, and still an option if one wants to run uncooperative older software (compatibility mode). Running it under a VM makes some things easier (hardware control, drivers, etc.)

    2. Yep… an OS that might have updates for a little longer if you tweak the registry to make it lie to the mothership… that insists on activation whenever the hardware changes significantly… and has practically no applications available as most vendors are dropping support for Windows XP now.
      Versus an OS that leaves you alone, will be receiving updates long after the hardware dies, and has current software available.
      About the only use I have for Windows XP is the little bit of Windows CE development I was doing… and even that is something I don’t care to re-visit… and some packet radio software I use for one event (which may get ditched or replaced some day).
      The one machine I have, a Toshiba Satellite Pro 6100, a P4M 2GHz with 1GB RAM, dual-boots Windows XP and Gentoo Linux, and under the latter, is still quite a snappy machine despite entering its teenage years… made more so due to the replacement of the failing HDD with an mSATA SSD (yes, while you can’t buy new IDE HDDs, you can buy mSATA→IDE adaptors for laptops).

      The machine is somewhat usable under Windows XP, but a recent Linux distribution offers a much wider selection of contemporary software.

      1. No!!! a REAL power user uses a butterfly to flap its wings, in turn creating miniature vortices that disturb the light at the planetary alignment moment relative to when some moon-dust reflects off of Venus, filtered through optics and focused by an electron microscope to flip the all-important configuration bit directly onto an HDD platter in the sectors containing the /etc/my_fav_configuration.conf files.

        1. Ob. old timer’s joke:

          Four engineers sitting around a table.

          “You kids these days with high level languages. In my days we used C and liked it.”

          “C! That was way beyond us. We used assembler and were good at it.”

          “We didn’t have assembler, we used 1s and 0s!”

          “You had 1s?????”

  2. Two useful hardware changes you can make to an old machine are more RAM and a cheap SSD for the OS. It is amazing how much of a system’s work is just shoving data from A to B, so these two changes make a very noticeable difference. If I have more than one old box of the same type I’ll take all the RAM out of one and put it in the other to double it, then I only need to spend money on the SSD, with the small ones being fairly cheap these days.

      1. … and if you want to do something comparable for something ancient, cheap 44-pin IDE to CompactFlash adapters are available; even a lowly 133x speed is going to feel like a rocket ship on really old systems, while the 2000-2005 pre-SATA era might do better with 400x. Happiness starts at about 1/3 the speed you “think” your old HDD does, because it really barely ever achieves that, and you lose it all in seek times anyway. Noughties-era PATA probably managed a best of about 60MB/sec, so anything over 20MB/sec is cookin’.
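
        If you’d rather measure than guess, hdparm’s quick benchmark gives the sequential-read number (seek-heavy real use will be worse):

          sudo hdparm -t /dev/sda   # buffered sequential reads from the platter
          sudo hdparm -T /dev/sda   # cached reads, for comparison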

        1. IDK why I was stuck in laptop/notebook frame of mind, but 40 pin adapters are available too.

          By the way, for desktop machines, that approx 1/3 factor also applies to offloading swap onto a separate spindle. If you’ve got ANY drive laying around that’s 1/3 faster or better than your main drive, get it in there and put swap on it. “But it’s slower!!??” Trust me, it won’t be: your main drive has its head dancing all over the place, while even a slower drive dedicated to swap typically has less head movement to contend with, and now your fast drive is dedicated to loading stuff instead of being Windows’ bitch for swapping.
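
          Mechanically it’s just a matter of swap priorities; a sketch, assuming the spare drive turns up as /dev/sdb with a partition set aside for it:

            sudo mkswap /dev/sdb1
            sudo swapon -p 10 /dev/sdb1   # higher priority than the existing swap
            # to make it permanent, an /etc/fstab line like:
            # /dev/sdb1  none  swap  sw,pri=10  0  0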

          1. Oh, I know what I was thinking: yeah, PATA-to-SATA adapters can be got for less-ancient desktops, whereas you can’t cram them into laptops so much. There were some PATA 2.5″ SSD solutions around, but they tended to be spendy and are hard to find now, I think. But the CF to 44-pin adapter plus a CF card will fit.

          2. An mSATA (SATA in mini-PCIe form factor) to PATA adapter is $3.50;
            look for “mSATA SSD to 44 Pin IDE Converter Adapter as 2.5 Inch IDE HDD 5 Volt For Laptop” on scambay.

            A 32GB mPCIe SSD is ~$20 (Newegg, for example) and will give an old laptop (a T60, for example) a new life.
            The biggest difference doesn’t come from transfer speed, but from access time and IOPS, so even PATA 133 literally flies with a crappy (200/50 MB/s read/write) SSD.

    1. But an older computer may have limits on RAM. I used a 1GHz Pentium from 2003 to 2012; it started out with 256 megs and eventually I upgraded, but the best it could take was 512 megs total.

      And once a computer is a certain age, it’s past the time when RAM for it is cheap. So even if you can upgrade, it might require effort and expense.

      But I never ran out of RAM in that computer, and there were times when I didn’t even have swap running. Linux of course uses spare RAM as a buffer, so until you are doing really intensive stuff, things won’t slow down because of swapping or a “too slow” hard drive.

      Use console programs rather than GUI programs; they will use less RAM and be faster.

      Michael

      1. Yah, RAM going obsolescent tends to decline in price in a large curve, until it hits a rock bottom, this is usually before users notice they are gonna need it, then it bounces up to a highish price again then gradually declines until it’s really completely useless and going for $10 a bagful on eBay… though the high capacity modules will remain difficult to find. When your RAM is getting phased out (Hint, DDR4 is here) look for the price minima and buy every last byte your system will take, because it’s gonna get spendy in a year or two.

        A particular problem I seem to have is 1GB DDR SODIMMs going bad. However, for Linux there’s what used to be a tool called “badram” that’s now integral, where you can pass parameters to the kernel to avoid memory blocks known to be bad. Memtest can tell you what these are, and I think new versions can output the parameters in a format suitable to pass to the kernel. I’ve only looked into this and haven’t got round to messing with it personally yet. In theory, it lets you keep using your big but faulty sticks, which may be irreplaceable for practical purposes. You could also possibly score faulty modules larger than the ones you have, free or very cheap.
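
        For the record, on a current kernel the BadRAM idea survives as the memmap= boot parameter; a sketch, assuming memtest reported a bad 64K region at the (hypothetical) address 0x12340000:

          # reserve the region so the kernel never hands it out:
          #   memmap=64K$0x12340000
          # the '$' usually needs escaping if set via /etc/default/grub, something like:
          #   GRUB_CMDLINE_LINUX_DEFAULT="memmap=64K\\\$0x12340000"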

        Another possibility, probably only suited to 1GHz or faster systems, is zram, which is RAM compression: when you get to a state where your CPU is always waiting for RAM or swap to feed it data, using memory compression will squeeze more data through the buses at a time and also give you more room in actual RAM.
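
        Setting zram up is only a few commands on a modern kernel; a sketch, as the module options and sysfs knobs vary a little between kernel versions:

          sudo modprobe zram num_devices=1
          echo lz4 | sudo tee /sys/block/zram0/comp_algorithm   # if your kernel lets you choose
          echo 512M | sudo tee /sys/block/zram0/disksize
          sudo mkswap /dev/zram0
          sudo swapon -p 100 /dev/zram0    # prefer zram over any disk swap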

      1. Having upgraded a Core i5 laptop with 8GB RAM to a 2TB SATA SSD… then recently a P4 laptop with 1GB RAM to a mSATA SSD via an IDE adaptor… I can assure you it does make a big difference in loading times.
        For sure, Linux will run faster anyway due to the mantra “free RAM is wasted RAM”, a viewpoint that heaped Windows Vista with a lot of scorn back in the day (although it also used a lot of RAM anyway)… but access speeds still do play a big part.

  3. I start with a bare Debian netinstall and then add what I need from there. If I need more than a text terminal I put in LXDE. There are some great desktop-experience enhancements in the article that will even help my setup, though.
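
    For anyone wanting to replicate that, the minimal-GUI step is roughly this (a sketch; the package names are today’s Debian ones):

      # after a bare netinstall, X plus a minimal LXDE and nothing else:
      sudo apt-get install --no-install-recommends xorg lxde-core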

    1. +1 I’ve been building embedded Linux systems since 1999 and have used almost every build system that existed, starting from dedicated bash scripts, Scratchbox, Buildroot, OpenEmbedded, OpenWrt, etc. Now you can trash all of those cross-build systems and simply use Debian directly on the target. It works so well that you almost don’t cross-compile anymore (just barebox and the kernel), but edit and compile directly on the target, even on a small ARM Cortex-A5 @ 500MHz. On two projects we have multiple people hacking from remote locations, logged directly into the target board. Debian rocks even on very small systems.

      And for “real-time” applications, there’s no need to use another distribution. Just learn to use mlockall(), clock_gettime(CLOCK_MONOTONIC) and timerfd(), if not a good event loop library.
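
      To make those calls concrete, here is a minimal, untested sketch of the pattern: lock everything into RAM, then pace a loop off a CLOCK_MONOTONIC timerfd (error handling trimmed for brevity):

        #include <stdio.h>
        #include <stdint.h>
        #include <string.h>
        #include <time.h>
        #include <unistd.h>
        #include <sys/mman.h>
        #include <sys/timerfd.h>

        int main(void)
        {
            /* Lock all current and future pages into RAM so the kernel can
               never swap us out mid-loop (the main source of soft-RT jitter). */
            if (mlockall(MCL_CURRENT | MCL_FUTURE) != 0)
                perror("mlockall");

            /* A timer that ticks every 10 ms on the monotonic clock. */
            int tfd = timerfd_create(CLOCK_MONOTONIC, 0);
            struct itimerspec its;
            memset(&its, 0, sizeof its);
            its.it_value.tv_nsec    = 10 * 1000 * 1000;  /* first expiry */
            its.it_interval.tv_nsec = 10 * 1000 * 1000;  /* then every 10 ms */
            timerfd_settime(tfd, 0, &its, NULL);

            for (int i = 0; i < 1000; i++) {
                uint64_t expirations;
                /* Blocks until the next tick; a count > 1 means we overran. */
                read(tfd, &expirations, sizeof expirations);
                /* ... periodic work goes here ... */
            }
            close(tfd);
            return 0;
        }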

  4. Years ago I created an international network of PBXs using low-end HP consumer-grade machines. I started with a CentOS single-disk server and started cutting from there. By the time I was done I had a real hot rod of a system. It worked quite well until the company was bought out and they replaced it with something that cost thousands of times more and had almost no features. Outside of that, what they replaced it with was fine (smile).

    1. Honestly if I bought out a company that was running important services on home PCs like you’d find at a big box store, I’d plan on replacing them with something that has a professional support contract and a warranty. It’s always best practice to use server-grade hardware with redundant power supplies and mirrored storage, especially for something as critical as a PBX. The hardware savings don’t make up for the potential cost of downtime.

      1. Compared to something you can re-image in 2 minutes and find replacement hardware for anywhere, because it’s not hardware-dependent? Versus losing 36 hours while a custom vendor-specific server part is overnighted from the opposite coast?

        1. Haha, oh wow. You actually believe that IT management cares about actual turnaround time and getting the end-user back online as quickly as possible. You think they’d ever choose a system that worked and was instantly fixable, over a system that did everything “by the book” and gave them someone to kick around if it went wrong?

  5. I really “enjoy” compiling the FreeBSD/OpenBSD kernel and world after specifying system tweaks such as CPU types and optimization flags in /etc/make.conf (a minimal make.conf sketch appears below).
    I manage to run BSD Unix on:
    – Raspberry Pi 1 and 2 (compilation time – around 2 weeks);
    – Marvell OpenRD and Sheevaplugs (KirkWood 88F6281) (compiling time – around one week);
    – 80486@100MHz/64MB EDO RAM – back in the days when I was a student, for everyone to upload/download courses and documentation or study *NIX (compiling time – 28 days);
    And of course on “normal” computing systems such as:
    – Sun UltraSparc (compiling time – a few hours);
    – PowerPC (compiling time – a few hours);
    – Intel Pentium 1, PRO and II (compiling time – a couple of days);
    – Intel Celeron/300A – I could watch DivX and DVD using a S3 Trio3D/2X video card (compiling time – a few hours);
    – Intel Pentium III (900MHz) (compiling time – a few hours);
    – Intel Core 2 Duo (my day-to-day working laptop which I am using now to post this message);
    – Intel Core i5 – my home desktop computer (buildworld/buildkernel compiling time – one hour);
    And of course my beloved Felix-Coral “half-breed” Mainframe from the Communist Era shown here as a test/example article for the Hackaday Blog:
    https://hackaday.io/project/10579-retro-futuristic-automobile-control-panel/log/49862-test-example-post-mainframes-behind-the-iron-curtain

    With Linux I managed to do that on x86-32 and x86-64 only.
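
    For anyone curious, those /etc/make.conf tweaks amount to a few lines; a minimal sketch (the CPUTYPE value is just an example, pick the one matching your hardware):

      CPUTYPE?=pentium3
      CFLAGS=-O2 -pipe
      COPTFLAGS=-O2 -pipe   # flags used for kernel builds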

      1. I used to play DivX just fine on a 300MHz Pentium II on Linux… on Windows 2000 however, I found even a dual PIII 1GHz rig couldn’t keep up (but the same machine again under Linux was fine).

        400mHz though would be quite a feat! Must be using static RAM to run that slow!

    1. Strange. I did a FreeBSD buildworld on a stock 1st-gen RPi B and it “only” took ~3 days. What speed SD card did you have? Class 10 or slower? Or were you also building various ports as well?

  6. One thing I do is use a dedicated drive for swap; it makes things go faster on old hardware.

    My DX4-100 only has 64MB of RAM, yet running X and DR-DOS in DOSEMU at the same time as running Predict and decoding a WEFAX image with acfax works no problem, and switching between tasks is quite smooth.

    Another thing to do is go for a custom kernel and get rid of as much crap as possible (a modern shortcut for this is sketched below).
    Before anyone asks: yes, it does take a while to compile on a DX4-100, but that’s why we have every horizontal surface occupied with a machine!
    ;-)
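
    On anything recent enough to run a modern kernel tree, much of that trimming can be automated; a sketch (boot a generic kernel first so the modules you actually use are loaded):

      cd /usr/src/linux
      make localmodconfig     # drop every module not currently loaded
      make -j2 && sudo make modules_install install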

  7. No slowness on my main PC. Quad-core Phenom II. According to tracking, the six-core FX CPU I ordered from South Korea should arrive tomorrow. Apparently AMD only sold the 95W version of this CPU in Asia. A chip identical in all the important aspects, including benchmarks, sold in the North American market was 125W TDP. WTH is up with that?

    At any rate, the one apparently shipped over by stand-up paddleboard runs no hotter than the older quad-core Phenom II it’s replacing.

      1. I recall running Minix2 on a 286 which was a pretty amazing experience for the time. When the alternative was DOS, having a multitasking kernel and process isolation was great for productivity (and the lack of games didn’t hurt).

  8. Writing on my E6400 here:
    Core2Duo T9400, 2.5GHz, 6M cache.
    GM45 northbridge (with the Intel ME disabled and the option ROM socket empty).
    8GB RAM (2x 4GB SODIMM DDR2 – the rare sort, ya know!).
    960GB SSD (more reliable than mechanical HDDs, the way I abuse them).

    I’ve seen (with my own eyes) desktop Core i3 systems with a higher clock run more sluggishly.

    Feels not much more responsive than the Z3770.
    OK, the Intel Z3770 Atom SoC makes up in performance by having a better GPU subsystem than the GM45 (AFAIK).

    I’m eyeing up the Core 2 Quad as an upgrade, the QX9300 BTW.

    On the Linux side of things: still in KDE mode, as I only just pulled the SSD out of my multimedia laptop before setting off to work this morning.
    However I usually use LXDE, set my 2nd logical CPU to parked to force all execution onto a single core (some power saving, but minute), and lower the max frequency variable in the cpufreq area to a comfortable max speed (roughly the sysfs pokes sketched below).
    With about 20%-40% wear across all 3 battery packs, I still see about 4H battery life in LXDE and 2.5 to 3H in KDE. Plenty enough.

    Oh, and because I sometimes look for underground music producers, some producers’ music is in a kind of oldskool open-source-esque .mod format (the Amiga/OctaMED format), and I use Schism in SDL via console mode to save even more power.
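
    For reference, the parking and frequency capping above boil down to a couple of sysfs writes; a sketch (paths assume the cpufreq driver is loaded, and the 1.6GHz cap is just an example):

      echo 0 | sudo tee /sys/devices/system/cpu/cpu1/online    # park the 2nd logical CPU
      echo 1600000 | sudo tee /sys/devices/system/cpu/cpu0/cpufreq/scaling_max_freq   # cap at 1.6GHz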

    1. Just checked: the QX9300 has gone up in price again. I seem to recall them being a bit cheaper; no wonder those eBay listings don’t move.

      Maybe I’ll upgrade if a QX9300 falls in my lap…. Maybe…

        1. If only my laptop had an LGA socket (and a battery with some oomph), then a pin-modded Xeon would be in the works.

          But a QX9300 is my only quad-core option.
          Or a T9900 (dual core) for the raw clock cycles at a better TDP-to-performance ratio, though less cache would be a performance hit IMHO.

          T9400 – 6M cache, 2.533GHz, 35W (current)
          QX9300 – 12M cache, 2.533GHz, 45W (expensive ATM)
          T9900 – 6M cache, 3GHz, 35W (not worth the price ATM)

          If there was a way to power down the physical 2nd CPU die of the QX9300……

  9. While I prefer Slackware for my own use, Linux Mint was the route to migrate my dear wife away from Windows (and my need to support it). It’s quite intuitive for a Windows user to learn, and at any rate all she needs a computer for is web browsing and a bit of word processing. Converting her brought the frustration level in my life way down. Having said that, it is not a distro for power users.

  10. Thanks a lot for reviewing my post. I’m glad it sparked discussion and I hope it helps. I also wrote another post, previous to that one, talking more about Arch Linux and customizing for the desktop.

    Those customizations help a lot, but a mechanical hard drive can’t be helped. It’s a slow device that will make us suffer I/O congestion through its short queue. An SSD is really the way to go.

    And the reason is the Web. Our routines have changed. Years ago we would only have a couple of light apps open, and a few web pages that were built with a lot less bloat. Now the norm is to have dozens of tabs consuming hundreds of megabytes each. We starve for RAM very quickly and swap to disk way more often, causing a lot more page faults and totally destroying the meager hard drive.

    Bottom line: if you can, buy an SSD even before you add more RAM.
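
    An easy way to check whether a machine is already in that spiral is to watch the si/so (swap in/out) columns while working:

      vmstat 1   # nonzero si/so during normal use means you’re thrashing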

  11. Okay, silly question for those gathered above… I have an old laptop, a Pentium II 300MHz, with 160MB RAM (32MB soldered on-board + 128MB PC100 SDRAM; it will not take a 256MB module) and a 160GB HDD (which is working fine, so no mSATA upgrade for it just now).

    Years ago (maybe about 10), I used to use this machine daily for university (yes, I was poor). I had the machine running Gentoo with the FVWM desktop, some applications from KDE 4, and the Firefox web browser. The machine struggled with YouTube videos (which used Adobe Flash) in the browser, but youtube-dl was able to download those and mplayer could play them fine, so I put it down to Flash not being so, well, flash.

    Fast forward a few years ago, I set the machine up to do APRS digipeating… and as an extra, thought I’d get Firefox going on it to be able to check the weather bureau radar for approaching weather. (Looking out a window only works so well at 4AM when you’re surrounded by hills, and on a bicycle, being prepared is paramount.)

    Well, Firefox 30-something (not sure exactly which version it was) is slower than a 5-day Ashes test. I found there were some tweaks I could do, but they really didn’t improve the situation much at all and the machine is unusable. I gave up for now and left it running Xastir, until recently. (There’s now an APRS digi on Mt. Coot-tha, so my rig is redundant and I’ve shut it down for the time being.)

    Has anyone had any luck running a contemporary browser on such an old beast with limited RAM? (SSDs won’t help here!) Notably, one that has some basic JavaScript and graphics support, as the BOM radar needs it to work.

