Improving Raspberry Pi Disk Performance

Usually, you think of solid state storage as faster than a rotating hard drive. However, in the case of the Raspberry Pi, the solid state “disk drive” is a memory card that uses a serial interface. So while a 7200 RPM SATA drive might get speeds in excess of 100MB/s, the Pi’s performance is significantly less.

[Rusher] uses the Gluster distributed file system and Docker on his Raspberry Pi. He measured write performance to be a sluggish 1MB/s (and the root file system was clocking in at just over 40MB/s).
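Numbers like these are easy to reproduce yourself with dd; here is a rough sketch (the path is arbitrary, this is one common way to get such figures rather than necessarily how [Rusher] measured his, and conv=fsync forces a flush at the end so you are timing the media rather than the page cache):

dd if=/dev/zero of=/path/to/testfile bs=1M count=100 conv=fsync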

There is an endless number of settings you could tweak, but [Rusher] heuristically picked a few he thought would have an impact. After some experimentation, he managed 5MB/s on Gluster writes and pushed the normal file system up to 46MB/s.
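If you want to experiment along the same lines, this kind of tuning is done through sysctl. A minimal sketch follows; the exact parameters and values [Rusher] settled on are in his write-up, and the numbers here are purely illustrative of the vm.* write-buffering knobs involved:

sudo sysctl -w vm.dirty_background_ratio=5   # start background writeback sooner
sudo sysctl -w vm.dirty_ratio=10             # cap how much dirty data can pile up in RAM
# add the same lines (minus "sysctl -w") to /etc/sysctl.conf to make them stick across reboots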

There are several other settings we might have investigated, related more to the actual buffering and reading of the memory card and to I/O scheduling. However, [Rusher] shows you his methodology, so whether you want to use it as a starting point for further exploration or to work with a different file system, it is still worth a look.
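If you do want to poke at that side of things yourself, the usual levers are the block device's I/O scheduler and its read-ahead. A hedged starting point, assuming the SD card shows up as mmcblk0 (which scheduler names are offered depends on your kernel):

cat /sys/block/mmcblk0/queue/scheduler          # list the available schedulers; the current one is in brackets
echo deadline | sudo tee /sys/block/mmcblk0/queue/scheduler
sudo blockdev --setra 256 /dev/mmcblk0          # read-ahead in 512-byte sectors, so 256 = 128KB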

We tend to think of the Pi as an embedded board. But in reality, it is just another Linux platform and there’s been a lot of work done on optimizing Linux performance for different situations. We’ve looked at Docker-based clusters before, too.

31 thoughts on “Improving Raspberry Pi Disk Performance”

  1. Perhaps this is a bit pedantic, but SD cards actually use a 4-bit parallel data bus in their native (not SPI) mode, and SATA is actually 1-bit bidirectional serial communication. The “S” in SATA stands for Serial.

    SATA is so fast because the communication speed is 1500, 3000 or 6000 Mbit/sec, depending on which version of SATA they support. SATA uses 8b/10b encoding to synchronize the bitstream, which translates into raw transfer speeds of 150, 300 and 600 MByte/sec.

    SD cards can use clocks up to 25, 50, 100 or 208 MHz, depending on which version of the SD spec they support. Because these are 4-bit synchronous clocked data, raw transfer rates are 12.5, 25, 50 and 104 MByte/sec. The Raspberry Pi obviously isn’t using the faster modes….

    1. The linked article gives the impression it’s a filesystem limitation. The rootfs (ext3/4????) clocks in at an expected 46MB/s (yep bytes, not bits….. damn standards, too many of them), which is close enough to the 50 MByte/sec you mentioned.

      Though 104MB/s (or faster if possible, via OC, etc.) would be good on the Pi.
      Wonder if 266MHz cards will be released (133MB/s)

    2. “UHS-II
      Specified in version 4.0, further raises the data transfer rate to a theoretical maximum of 156 MB/s (full duplex) or 312 MB/s (half duplex) using an additional row of pins (a total of 17 pins for full-size and 16 pins for micro-size cards).”

        1. Seems the problem is that the raspi is locked to 3.3V and the higher speed modes need a lower voltage interface.
          At least, that is what a search turned up when I was trying to find out whether it has the pins or whether they were reassigned for other uses.

  2. Also, be sure “noatime” is set on the root filesystem. And I use /tmp a lot, so I set that to tmpfs in /etc/fstab.
    e.g.
    /dev/mmcblk0p7 / ext4 defaults,noatime 0 1
    tmpfs /tmp tmpfs defaults 0 0
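    If you don’t want to reboot to pick that up, a remount works too (adjust the device if your root isn’t mmcblk0p7):
    sudo mount -o remount,noatime /
    mount | grep ' / '    # check that noatime shows up in the mount options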

    1. Not to mention if you partition things right, you can have a read-only rootfs, tmpfs for minimal logging (because you’ll have shut it off for the most part, right?) and important-read-write configuration (maybe even for write-once files) on a partition marked noatime, sync – just in case the power unexpectedly fails :)
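      Roughly what that fstab can end up looking like, as a sketch only (made-up partition layout and sizes):
      /dev/mmcblk0p2  /         ext4   defaults,noatime,ro    0 1
      tmpfs           /var/log  tmpfs  defaults,size=32m      0 0
      /dev/mmcblk0p3  /config   ext4   defaults,noatime,sync  0 2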

  3. I use a lot of pies in my “LAN of things” here on the homestead, and I have most of the hardware revs they’ve made. They’ve become a little coy of late with schematics, but in the ones I’ve seen they do hook to all 4 bits on the SD card, so good speed is theoretically possible. I have noticed that on upgrade to pixel (with a class 10 card “extreme” in the slot) that things suddenly got a LOT faster (15 second boot), and there were a lot of “kernel hack firmware” type messages during the upgrade, so perhaps Paul above was correct, but they’re fixing it. If you pay attention, PaulS is almost always correct.

    At any rate, you’re *nuts* if you do serious pounding on any SD card file system that is more important than pictures or audio (where a bit error isn’t that big a deal). It only takes one bit error, much less a sector fail, to make something completely useless and unbootable.

    For things that have to be rock solid and reliable, I use a 2TB spinner (Seagate, and yes, I know, but I have a couple of years on 4 of them here – 24/7/365, no failures) mounted over / (root), to which I’ve copied all the root partition files from the SD card (on another machine). Ext4. Makes a decent NAS, but on the Pi the ethernet and USB share one bus, so you’re limited to roughly 100 Mbit/second total anyway.
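    For anyone wanting to copy that setup: the usual recipe is to copy the root partition over on another machine, then point the kernel at the new root from the SD card’s boot partition. A sketch with made-up device and mount point names (the SD card still supplies /boot):
    sudo rsync -axHAX /mnt/sdroot/ /mnt/usbroot/
    Then change root= in the card’s /boot/cmdline.txt to something like root=/dev/sda1 rootfstype=ext4 rootwait, and fix up /etc/fstab on the new root to match.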

    A USB-3 “extreme” stick for this is indeed somewhat faster, and unlike almost all USB2 sticks, will actually keep up with the full bus speed for writing. So far, no errors on good quality USB drives either.

    Fastest of all is to replace the spinner or stick with a “real” SSD.
    (For all SATA drives I’m using a StarTech USB3SATA cable)

    None of this seems particularly rate-dependent: it’s all latency first, then read/write rate second. I see a direct correlation between what amounts to “seek time” and inverse speed. For solid state stuff, the write time (rate) becomes the bottleneck in many applications. SSDs and some USB sticks have internal cache, which hides some waits for shorter things.

    If you want a lashup to last, move that once-a-second log writing stuff to a tmpfs or to an external device!
    Use noatime in /etc/fstab on all solid state stuff.

    The other advantage of putting / elsewhere than the SD card is that a backup of that part is now just a file copy (on another machine, copying while linux is using that media to run is not smart). The FAT boot partition stuff changes very rarely and since it’s almost never written anyway, one copy will back things up for a long time and is small and fast to do.
    The file copy for / is far faster, and then allows nice tools like rsync to make it faster yet. No dd for many gigabytes or even terabytes! That would take till the death of the solar system. This lets me keep a lot of pre-customized pi images on my NAS and restore to a new project quickly. If you’re like me, you use your editor of choice, your network setup of choice, dev tools, webservers, database, ditto – and usually it’s not the defaults that ship with the iso. This junk can take most of a day to customize (including removing the cruft I never use that IS in the default), and being able to just copy it in is a boon.
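    Concretely, the backup of / then boils down to a single rsync run while the pi isn’t booted from that media (destination path made up for the example):
    sudo rsync -axHAX --delete /mnt/piroot/ /nas/backups/pi-root/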

      1. They are admittedly getting better….but would you bet years’ worth of data on that claim? You first. I’ve been had. Those are at best MTBF numbers – means have outliers, and probably don’t include write amplification. Marketer-speak.
        See Bunnie’s blog on hacking SD cards for more…

    1. I am indeed deeply concerned about data aq and storage and a bunch of other data-related stuff. Yes, there are machines far better than pies for some of it, and some not nearly as good. It’s a horses for courses thing, right tool for the job at hand. For example, I’m off the grid, and want some things on 24/7 – but any big box would run my batteries down too much. It would also cost more – I have around 20 machines on my network. You want 20 server or gaming class machines always-on even on Power Company power? $$$ matter, and you don’t always need the CPU horsepower if a machine idles a lot, or is used for dumb simple tasks.
      I have pies (got to come up with a better plural for this or at least agree on one) – as NASes. I have them as cameras, both normal and IR, with passive and active motion detection and recording, as well as streaming when wanted.
      I have pies controlling arduinos to collect data in my LAN of things, where the pi runs mysql, NGINX, perl and so on, logging and displaying (using gnuplot) historical data, as well as controlling things most people have to pay rent to get (water and power systems for example).
      I have some pies running cameras, arduinos and high level code to remote control my fusion reactor, which is now at an output level that’s dangerous to be in the room with. Cost is little object there, but guess what – the pi is a lot harder to crash from radiation than a small-geometry Intel box….and it’s a lot easier to stuff into an EMI-tight box and not have it overheat.

      As always, and as in most things – it’s a question of balance.

        1. It’s fun. Documented in various places. Hackaday doesn’t seem to like one posting links, but my real name (youtube) is Doug Coulter, and my website is coultersmithing.com/forums. Not all is there, but a lot is.

          1. Brian, I run a board myself, I *really* do understand – and when I’ve gotten record hits on mine, they’ve often come from here, thanks (macona here is on my board with another name). It’s the reason I only allow new members on mine if I know them some other way first – then I add them as admin. The bots get past captchas I can’t read myself. HI, A.M.A – good to know we’re in a good crowd. fusor.net is the “other” fusor board.

  4. Whenever I need anything involving I/O bandwidth, the very last thing I reach for is a Raspberry Pi; everything except the SD card (which has its own issues) is massively bottlenecked by being all squeezed through a single USB host port. There are many better choices of SBC; personally I like Allwinner A20 boards (a bit old but they’re very effective workhorses), e.g. a Banana Pi M1, running Armbian. Has Gig-E (can easily do 50MBytes/sec over it), plus SATA, 3x independent USB host ports, etc. The RPi is a poor choice for anything I/O bound.

    1. I’d agree. I like the hardkernel xu4 for mid-range stuff – for the high end we all know what to use. The attraction of the pi is performance per buck and per watt-hour, and the fact I can leave some of them on and almost not notice how much power they use, headless. If what they have is good enough, I’m done. If I have a huge download, no need to keep some fast machine tied up and turned on when the machine is loafing anyway – I use one of the LAN pies for that, and copy it later. Since I’m in the boonies and on DSL, the relative speedup between my DSL speed (300k bytes/sec on a good day) and the copy speed from a pi share – around 10 MBytes/sec – seems great. It’s all relative. Even for some compute-intensive things, if you’re not in a huge hurry for the answer, the fact that it can work on your job 24/7 can get it done “fast enough so it’s there when you need it” with a little planning ahead.
      One significant advantage of the pi over say, the odroid or the other things you mention is the fact that there are so many, I’ll never find myself lacking for cheap used spares or a community that solves some of the ickier sysadmin problems for me. Somehow, I think the chances of them going out of business, or pi’s disappearing even if they do, are extremely low compared to most other alternatives. I’ve been around long enough to have been bitten by that one.

  5. Hmmm. I read the article, and I am completely confused. All of those settings that were changed began with “vm.”. The problem is that he is using something that swaps out to VM. In an ideal world, everything would fit in the 1 GB of RAM, so you would not have to swap at all.

    Optimizing the performance of your virtual memory is like trying to find the shoes that let you run fastest with a broken leg.

    1. “In an ideal world, everything would fit in the 1 GB of RAM, so you would not have to swap at all.”

      The only way you wouldn’t want to swap is if all of your programs *and* all of your storage would fit in RAM, entirely. Swapping isn’t just for kicking out program RAM for more programs. It’s also for kicking them out for more disk cache.

    2. FWIW, all Linux program loading, shared library loading, and some of its file I/O (anything that uses mmap(2) instead of read(2) and write(2)) goes through the VM subsystem instead of the more “normal” I/O methods. Dates back to the 4.X BSD days and was done to prevent a kernel-space-to-user-space copy of each block. Also lets an exec(2) start running the new process before all of it is read from disk without too much fancy footwork. Windows NT kernels have done this since 3.51 or before, also.

    3. most Debian-based distros now use “vm.” default settings that assume 4GB or more of memory, and which need changing for smaller systems. Look up any article on ‘speeding up ubuntu’, because when it bites you, you’ll know.

      Optimising the performance of your VM *subsystem* is *critical* on systems with less memory than the upstream linux devs expect. Otherwise the system swaps far too much, vastly increasing the odds that it will eventually no longer boot.
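      The usual first stops are swappiness and the memory headroom thresholds; for example, something along these lines in /etc/sysctl.conf or a file under /etc/sysctl.d/ (illustrative numbers, not a recommendation):
      vm.swappiness=10          # prefer dropping cache over swapping on a 1GB board
      vm.min_free_kbytes=8192   # keep a little headroom so allocations don't stall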

      One other thing about booting from sd cards: Even setting them ‘read only’ doesn’t actually prevent data loss on poweroff: The flash arrays often need to be rewritten on *reads* – so they will slowly wear whatever you do, and every power loss risks damage to the data if it catches the controller in the middle of this.

      Some SSDs are engineered to avoid this failure mode, but AFAIK, no SD cards are: Most are intended for data acquisition from cameras. Big bulk write, big bulk read, not many small updates / reads.

      1. I didn’t get deep enough into the subsystems of linux to know that about the vm. stuff – thanks for the tip. Glad I guessed right.

        “read disturb” is one reason I mentioned Bunnie’s fabulous look into SD cards. If you’re designing for “production” vs “desktop piddling around” – you care about such things. While more and more fits on these devices when they add more levels, the effective life drops like a rock, and ECC and over-provisioning are about the only reasons they survive even one boot.
        Here’s the link for those who haven’t seen it: http://www.bunniestudios.com/blog/?p=3554
        The upshot of this is, as you say, that there is “magic” going on inside one of these – their little CPU and system are always at work trying to present the illusion of reliable storage, and an unplanned power down can mess that up. There isn’t room on an SD card to put, say, an ultracap to hold power up long enough (as some enterprise grade SSDs do) for an unforeseen power loss at “just the wrong moment”, so there’s always some risk.

        I’m using a number of different small computers for different tasks here. I use a pi when I need fancy linux stuff. If I need real time stuff – to the microsecond – I use an arduino uno or a teensy, and my own opsys. Any pre-emptive opsys like linux, well, pre-empts, which tosses any time critical thing out the window. It may keep up on average, but…you use the right tool for the job. In the case of the master controllers for each building here – almost always a pi – yes, the entire working set fits in ram, and they are not gp machines. Arduinos gather data (status of systems and weather) or take commands to “do stuff” like adjust heaters or open/shut water valves. The pi shoves the data aq into a MySQL database, and runs NGINX with perl CGIs to serve up views on this data, or take commands from the user (me). Trying to do that on a smaller uP would be a real thrash; why re-write things like gnuplot or a web server? A bigger one would go to waste – as is, the pies are mostly just waiting for the next input from some sensor or gaggle of them on an arduino – or the odd hit when I surf to their web page on the lan.

        I really care about reliability, and do multiple backups. But…SD cards are a PITA to do a full backup on, especially when they get large. AFAIK, that requires dd and imaging the entire thing even if it’s largely empty, unless you use the workaround I use.
        The only things left on the SD card are the MBR and the FAT partition that holds /boot. If you only need dd for 512 bytes, well, that’s fast. You can do a file copy for the fat partition if you like, which only gets the needed stuff, or simply use a tiny (and lower level count, more reliable) SD card and dd that. The / file partition, ext4, I put out on external storage of whatever sort, and every so often, take the setup down, remove that storage and make a copy of it.
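        In practice that works out to something like this, assuming the card shows up as /dev/mmcblk0 with the FAT boot partition on mmcblk0p1 (names made up for the example):
        sudo dd if=/dev/mmcblk0 of=pi-mbr.img bs=512 count=1    # just the MBR / partition table
        sudo mount /dev/mmcblk0p1 /mnt/piboot && cp -a /mnt/piboot/. ~/backups/pi-boot/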

        My data on weather, how my systems performed and so on is irreplaceable, I’ve got the only copy of that (I’m probably the only guy interested in it as well, but, hey, I am). So I care. Now when I’m goofing around, like here, running the fusor op position etc, you can bet I’m using big Intel boxen that would make most gamers blush – but I can do those things only when I feel like it and have the spare power to burn. One thing about solar – once the batteries are full, and the sun still out, any extra power just falls on the floor unless you can find a use for it – like say, a pi based system that knows about this and runs a water distiller, or an electric heater (turning off the propane to save on that, or whatever)…and so forth. It’s a famine or flood situation.

        Some people might find this of interest. An old-fart retired guy and his toys (me).
        http://www.coultersmithing.com/forums/viewtopic.php?f=59&t=957
