Give Your Raspberry Pi SD Card A Break: Log To RAM

The fragility of SD cards is the weak link in the Raspberry Pi ecosystem. Most of us seem to have at least one Pi tucked away somewhere, running a Magic Mirror, driving security cameras, or even taking care of a media library. But chances are, that Pi is writing lots and lots of log files. Logging is good — it helps when tracking down issues — but uncontrolled logging can lead to problems down the road with the Pi’s SD card.

[Erich Styger] has a neat way to avoid SD card logging issues on the Raspberry Pi; he calls it a solution to reduce “thrashing” of the SD card. The problem is that flash memory segments wear out after a fairly low number of erase cycles, and the SD card’s wear-leveling algorithm will eventually cordon off enough of the card to cause file system issues. His “Log2Ram” is a simple Unix shell script that sets up a mount point for logging in RAM rather than on the SD card.

The idea is that any application or service sending log entries to /var/log will actually be writing them to virtual log files, which won’t rack up any activity on the SD card. Every hour, a cron job sweeps the virtual logs out to the SD card, greatly reducing its wear. There’s still a chance of losing log data before it’s swept to disk, but if you have a relatively stable system it’s a small price to pay for the long-term health of a Pi that’s out of sight and out of mind.
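
The mechanics are simple enough to sketch in a few lines of shell. This is only an illustration of the idea, not [Erich]’s actual script, and the paths and sizes here are made up:

    # one-time setup: keep a persistent copy, then mount a RAM-backed tmpfs over /var/log
    mkdir -p /var/hdd.log
    rsync -a /var/log/ /var/hdd.log/
    mount -t tmpfs -o nosuid,noexec,mode=0755,size=40M log2ram /var/log
    rsync -a /var/hdd.log/ /var/log/    # repopulate the RAM copy so services find their files

    # hourly cron job and shutdown hook: flush the RAM logs back to the SD card
    rsync -a /var/log/ /var/hdd.log/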

One thing we really like about [Erich]’s project is that it’s a great example of shell scripting and Linux admin concepts. If you need more information on such things, check out [Al Williams’] Linux-Fu series. It goes back quite a way, so settle in for some good binge reading.

82 thoughts on “Give Your Raspberry Pi SD Card A Break: Log To RAM”

  1. SD cards are cheap.
    A partial solution is to clone the SD card, and tape the cloned card to your credit-card-sized computer. (I have BBBs & Cubies, and am thinking about Hardkernel boards, which have exchangeable eMMC modules, but I don’t like Broadcom blobs.)

    Then when the card fails, simply swap the cards.
    It’s also easy to use the 2nd card to experiment / configure a new distribution without having to overwrite the old one.

    1. Always nice to have a backup, but the concern over preserving SD cards is fair. While a few card purchases won’t break the bank, relatively speaking, they remain one of the most expensive components in an SBC, and the most fragile, meaning they represent a recurring cost. SD cards are also among the most expensive in terms of $/GB compared to other storage solutions. They’re basically as expensive as SSDs. These costs pile up if you maintain multiple SBCs.

    2. They’re only cheap if they’re consumer grade. Some industrial-grade ones, like the ones Hilscher uses in their Raspberry Pi PLC, are $160 USD a pop. Not to mention the cost of having a system go down or the cost of having a tech fix something. Reducing the frequency with which commercial or industrial applications go down will save much more money than the cost of the SD card. It’s also nice for hobbyist use if you can extend the life of your card, because who wants to go back and fix stuff instead of making new stuff?

    3. That is, if you’re not planning to make changes to the system for years (which for most of us makers is not the case).
      We keep adding, modifying and breaking stuff, and in that case you would need to re-image the card periodically, like once a month, to have an up-to-date backup.
      After that it becomes more hassle than it’s worth. It is better to disable syslog completely if you don’t need it, or to log remotely.
      Logging is the only constant I/O generator on the SD card.

  2. That is a good idea, but as an extra step, clone the SD card, then remove the original and run on the clone. I had a telephone exchange going well for a couple of years until the SD card wore out. So, I got the clone and put it in, only to find I had cloned just the boot sector :(
    I never got the telephone exchange going again. It took such a long time for me to get the original working correctly and I lost it all.
    Best thing to do is to make the SD card read-only, I think. Never write to it once you have your app going well.
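
    For what it’s worth, cloning the whole device rather than a single partition avoids that particular trap; something like:

      # /dev/mmcblk0 is the whole card (not just one partition); adjust to your system’s device name
      dd if=/dev/mmcblk0 of=pi-backup.img bs=4M status=progress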

    1. Something I’ve been trying lately is: Experiment to my heart’s content, poke around until the thing works. Establish a known-working config.

      Then pull that card, set it aside, and start over on a fresh card. This time, knowing that there’s a light at the end of the tunnel, document every step along the way. Save all the sources and links needed to rebuild this again. (Which includes downloading everything I can, since it’ll inevitably link-rot away in a decade.) Hopefully arrive at the same working config.

      If I get lost on that second try, I can always refer back to the .bash_history on the first card. And often I end up changing things on the second one, but as long as I end up with a working document, I’m good.

      Assuming all that’s OK, blow away the first card and rebuild it from my document, checking that it really does capture all the steps. Now I don’t need to save both cards, as long as I save the document and its folder of reference material on my archive drive somewhere.

      1. Better yet, set it up with something like Ansible (or Chef/Salt/Puppet/etc) so that the steps are in code and you can both revert changes and start from scratch with very little work.

        1. Last time I looked at Chef, it seemed to be targeted about ten skill levels over my head. I’ve gained a little experience since then, but the howto guides are still completely greek to me and assume a starting level of knowledge far, far beyond what I possess. Ansible is like that but worse.

          Bear in mind that I’m writing down my steps because they’re not obvious to me. If I had the sort of skills to use these tools, I suspect I wouldn’t actually need them. They may be excellent tools but I’m clearly not an excellent candidate for their use.

          Case in point: I’m looking right now at a “set up your raspberry pi with ansible” page, and it starts right off with a “git clone” command, no hint of where to do this (do I run this on the pi itself, or on my desktop? in what directory? as what user?), et cetera. I’m sure that’s all well and good, for someone else.

          Notepad, on the other hand, works for total idiots.

          1. I tend to copy the command-line instructions to a file; one can make a bash script out of it as one goes along. Put in comments and echoes etc.
            I prefer that to cloning as well.

    2. “Best thing to do is to make the SD card read-only, I think. Never write to it once you have your app going well.”

      I do believe that’s one of the modes of operation of Alpine Linux.
      Sadly, installing it on an SBC isn’t “for dummies” friendly, and the package selection is a bit sparse.
      Not to mention the use of busybox instead of bash, and musl instead of glibc, tends to be a roadblock for the less Linux-familiar types when it comes to compiling a program from source.

      1. Exactly, that is how Alpine works, loading the entire OS into RAM.

        I started to use it a couple of weeks ago, and it is really perfect for headless purposes. It is also quite fast, though it takes some 45 seconds to boot, since everything has to be unpacked into RAM.

        I am quite new to Alpine, but knowing Debian, and sort of mixing it with Gentoo, it did not take a long time to learn how to use it – and the Wiki is quite good.
        Also the package manager is quite ok.

        Another good thing is that there only needs to be one bootable FAT partition on the SD card, and 100MB is sufficient.

        Really recommended for headless video players, and the like.
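
        Since everything lives in RAM in that diskless mode, any config changes you want to survive a reboot have to be committed back to the boot media with Alpine’s lbu tool, if I have understood it right:

          lbu commit    # saves the tracked /etc changes as an apkovl overlay on the boot media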

    3. ” It took such a long time for me to get the original working correctly and I lost it all.”

      Sort of like my experience with my WindowsCE PDA.
      When the main and backup batteries failed, I just went to my backup file to restore my Contacts, Calendar, and files,
      only to find out that only the OS had been saved.

  3. Having your Pi run like a live distro will help save your card as well. If you use ssh on it you could also have your logs put onto a shared drive.
    I like the idea of using RAM for the log files; combining that with a read-only / live-CD-style build, plus having your logs saved to the ‘cloud’, may be quite fun to set up.

    1. We use OverlayFS with a ram disk to do this, as it is one of the few ways to get systemd to play nice… ;-)

      You can extract the kernel settings from the OS (URL in the user name), as it uses a multi-partition approach so user /home is not as write heavy. Unfortunately, to compile some programs on a Pi you need a swap file, but you want to avoid using it as long as possible.

      Industrial SLC flash SD cards are still around, and Digikey carries the cards…
      Note, some MLC / TLC endurance cards use internal wear-leveling to try to extend service life. USB thumb drives are usually the lowest-quality, cheapest type. ;-)

      We were thinking about trying JFFS2 for the user /home area… and wouldn’t mind hearing people’s feedback about trying the filesystem on a resource constrained machine.

      1. OverlayFS is a great idea; there are a couple of examples that mount the whole system with OverlayFS as the lower and RAM as the upper.
        I use it on a directory basis for zlog and zdir.
        I suggest you have a look at https://github.com/StuartIanNaylor/zram-config

        Does zswap, zdir and zlog in one utility.

        Log2Ram does some strange things and works with an extremely tight ram allocation.

        zram-config uses zram as an OverlayFS upper with the original, read-only, as the lower. Due to the copy-up nature of COW, only writes end up in zram, and this massively reduces the memory footprint.
        It also uses the logrotate olddir directive, so logrotate ships old logs to persistent storage rather than choking precious memory.
        It also doesn’t copy every complete file that has changed every hour; that is pointless really, since if you have a system crash and the write-on-stop never happens, the critical info from the last hour is lost anyway.
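
        Roughly speaking, the zlog mount ends up shaped something like this (a hand-typed illustration of the layering, not the exact commands the script runs; the paths are made up):

          # create a compressed RAM block device and use it as the writable overlay layer
          modprobe zram
          DEV=$(zramctl --find --size 250M --algorithm lz4)   # e.g. /dev/zram1
          mkfs.ext4 -q "$DEV"
          mkdir -p /opt/zram/log && mount "$DEV" /opt/zram/log
          mkdir -p /opt/zram/log/upper /opt/zram/log/workdir /var/log.bind
          mount --bind /var/log /var/log.bind   # the original logs become the read-only lower
          mount -t overlay overlay -o lowerdir=/var/log.bind,upperdir=/opt/zram/log/upper,workdir=/opt/zram/log/workdir /var/log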

        But any directory can be configured with zram-config, and because of the OverlayFS no copy of all the files is needed on boot.
        Large directories can be used with an extremely small zram memory footprint, because zram is the upper in an OverlayFS mount with the original, read-only, in the lower.

        The utility also does zswap, and includes important sysadmin config of mem_limit and tuning parameters such as swappiness and page-cluster, since zram is not HDD-like media; it is a memory technology with near-memcpy speed.
        You can create any number of zswaps and zdirs and a zlog via /etc/ztab. Multiple zswaps always bemuse me, as zram has been multi-stream since kernel 3.15, but hey, if you want more than one you can.

        Looking to build a community around https://github.com/StuartIanNaylor/zram-config, so for any ideas or issues please post away and get involved; I claim no ownership, it just annoyed me that so many utils for this application are flawed.
        Hack, clone and copy as you wish.

  4. How about using Redis or sqlite, or any other in-memory database, perhaps Apache Ignite? Using such solutions has the added bonus of both having developed tools, and saving to the SD card/hard drive when the need arises (at regular intervals, or triggered by a shutdown/restart event). And also the ability to deploy as a microservice later on.

  5. Why do you need a shell script for the /var/log mount? I just add that single line to /etc/fstab and use the systemd tmpfiles feature to recreate the needed directories (some software does not auto-create its subdirectories in /var/log, so I just add those to /etc/tmpfiles.d/).
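
    Something along these lines (the size and the example subdirectories are just placeholders):

      # /etc/fstab
      tmpfs  /var/log  tmpfs  defaults,noatime,nosuid,mode=0755,size=64M  0  0

      # /etc/tmpfiles.d/varlog.conf: recreate subdirectories some daemons expect at boot
      d /var/log/nginx  0755 root adm  -
      d /var/log/samba  0755 root root -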

  6. Two alternatives: one is to mount the log dir on a remote system (NFS or CIFS), the other is to tell rsyslog to send its logs to another, remote rsyslog. I like to use the remote rsyslog when I can, but the RAM disk works well with logrotate (I’ll read the article proper shortly to see if it’s done that way).
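
    The rsyslog forwarding variant is a one-liner in a drop-in config (the address is whatever your central log host happens to be):

      # /etc/rsyslog.d/90-remote.conf: forward everything to the central syslog box
      # (a single @ means UDP; use @@ for TCP)
      *.*  @192.168.1.10:514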

  7. I’d like to point to the “pro” or “endurance” SD cards. They are designed for far more writes; having replaced several RPis’ cards, I’ve yet to experience SD issues, whereas I used to have problems monthly. The price is not that much higher.

    1. For my smart security cameras I do this. The trick is having BIOS support for your intended USB port and drive type. A workaround is to bootstrap off of a small flash device that turns the boot over to the USB-mounted spinning disk. I use these little 1TB laptop hard drives, and as long as you keep them from getting too cold or too hot, they work pretty well. But network storage (and even network boot) is also a great option.

    2. This probably works best, but with an HDD (mechanical) drive, especially ones made for NAS or video surveillance applications. If you use an ordinary USB flash stick then you’re in for the same problems as with SD cards. I’ve destroyed a few SanDisk Extreme USB drives by using them as router cache; they lasted for 1-2 years before becoming extremely slow and even unwritable. With thousands of writes per day they wear out relatively quickly.

  8. I recommend taking a look at log2zram instead, on kernels built with zram support (most modern ARM kernels seem to be, in my experience):
    https://github.com/StuartIanNaylor/log2zram

    Instead of tmpfs, it uses zram to compress the log files (dramatically in some cases!). In my limited testing the CPU overhead is pretty small, and it results in much less RAM usage. This is roughly the same way Armbian does RAM logging on all their builds.

    1. I would have a look at https://github.com/StuartIanNaylor/zram-config as, to be honest, there is a lot about log2ram I don’t like.
      zram-config works with an /etc/ztab where zswap, zdir and a zlog can be defined.

      Log2Ram does some strange things with limited resources and can potentially fail.
      Firstly, the hourly write is a backup for a full system crash, where the write-out on stop may not happen.
      But essentially in most crashes it’s pointless, as the important info about the crash is lost anyway somewhere in the last hour.
      So writing complete logs on change every hour is pointless and negates the rationale of log2ram.
      Also it keeps all the old logs in memory, and that is just a simple logrotate directive away from being shipped elsewhere.

      I gave up on log2ram. I will support log2zram if you have any issues, but strongly suggest using zram-config instead.

      The zlog in zram-config uses zram as the OverlayFS upper, with the original /var/log moved to a bind mount as the lower.
      Due to the copy-up nature of COW, only changes are held in zram, and this drastically reduces memory requirements.
      It also includes the olddir logrotate directive and ships old logs to persistent storage, further reducing the memory footprint.
      This also means all the files do not need to be copied on start, because of the use of OverlayFS with zram on top of the original logs (writes go on top, reads come from the lower).
      Zdir works the same way, but because we only need file writes in zram, large directories can be moved to zram with tiny memory footprints.

      zram-config uses a /etc/ztab where any number of zswaps, zdir and a zlog can be defined.

      I really suggest trying https://github.com/StuartIanNaylor/zram-config and coming to join the community, as I am looking for suggestions, people who can support it, and generally a joint effort; I claim no ownership, apart from the frustration that the various utils for this are essentially flawed.

  9. One additional thing (common in embedded Linux distros but not in big Linux distros) is to mount any flash-backed file systems (SSDs, SD cards, eMMC, etc.) with the “noatime” option, preventing the kernel from rewriting the inode every time something opens a file, even just for reading. This saves a huge amount of block writes (and thus page erases) and extends the useful lifetime of the flash drive.
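
    On Raspbian that’s a one-word addition to the root entry in /etc/fstab, if it isn’t already there (the PARTUUID below is a placeholder):

      PARTUUID=xxxxxxxx-02  /  ext4  defaults,noatime  0  1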

  10. Most of the embedded router OSes mount their flash as read-only and use RAM drives for logs, etc. All the needed pieces are there.
    https://oldwiki.archive.openwrt.org/doc/techref/filesystems
    > OverlayFS: used to merge two filesystems, one read-only and the other writable. flash.layout explains how this is used in OpenWrt; OverlayFS was mainlined in Linux kernel 3.18.
    SSDs seem to be much better than SD media when it comes to wear leveling, and they work just fine for a desktop OS.

  11. I’ve taken another approach:
    – set the SD card as read-only (see the Adafruit script) and write logs to memory
    – rotate (once only) directly onto a USB memory stick
    BTW, as I rotate only once a day, I may lose a day in the worst case. Not critical for my needs; it’s just home automation and the logs are for statistics.
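
    A logrotate stanza for that looks roughly like this (the paths here are just examples; olddir on a different filesystem needs copytruncate or similar):

      /var/log/homeauto/*.log {
          daily
          rotate 7
          missingok
          copytruncate
          olddir /mnt/usbstick/oldlogs
      }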

  12. The Pi doesn’t have that much RAM; filling it up with logs is not a great use of it.

    Instead, set up a central syslog server and have all your devices send their logs to the one place. It’s much better than having to go to each device to look for logs, and it avoids any flash wear and the memory consumption involved.
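
    On the receiving end it’s a couple of lines in rsyslog.conf to listen on the network (UDP shown here; the same idea works for TCP with imtcp):

      # /etc/rsyslog.conf on the central log host
      module(load="imudp")
      input(type="imudp" port="514")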

    This was a solved problem 30 years ago.

  13. Is SD cards going bad in Pis a thing? I agree on all the technical details, writing a log file to flash is bad for flash. But I have consciously decided that I don’t care, and I have never paid the price. Are you all paying a price?

    1. Yeah, I’ve had the odd one go bad for what seems like no reason, but only when I have been writing to them fairly heavily. That said, I follow the same procedure as a gentleman above: once the system’s 100% I clone it and tape a spare, good SD to the board.

    2. There really is no wear-levelling standard on SDHC; even bad-block records can be called wear levelling.
      SD is vastly better than it used to be, but much innovation goes into size increases and speed rather than write count.

      SD cards are cheap and plentiful, but that is not the real cost; for some IoT/maker projects the cost is the hassle of access and the maintenance of replacement.

      You can use several technologies that will give you many, many years maintenance-free, and for some that is priceless.

    1. I’ve only played with a Pi a bit, but the only problem with using USB memory devices is if you also use Ethernet on the Pi. Ethernet piggybacks on the USB bus and there’s contention for its capacity. USB + WiFi doesn’t have that problem, AFAIK.

      I’m sure someone can explain this better than me. A USB thumb drive for logging was the first thing I thought of when reading this article.

      1. Are you going to create the script to bind-mount /var/log to your USB thumb drive?
        A thumb drive is no more reliable, and you don’t need one, as the above is a script that does all that for you, uses durable memory, and just copies out on stop (shutdown).

        It’s a script that is an easy install, costs nothing, needs nothing, and it will do it for you.
        That’s all.

  14. Even if you don’t care about wearing out the SD card, I expect this to give you better performance, since the RPi is very bandwidth-choked, particularly for concurrent writes.

  15. In these days of cheap SSDs and tiny, cheap USB drives, you are better off just booting from USB and running from a USB-attached SSD or a thumb drive rather than an SD card anyway.

  16. To be honest I hadn’t thought about logging causing issues on the SD cards, but it makes sense.

    Personally I think I’d rather have some network share and make a mount point that the Raspberry Pis can log to. Wouldn’t that cause even less wear and tear on the Pis’ SD cards? Plus I could run analytics on the log files at will, since they’ll be on either SSDs or even spinning disks and not hit the SD card at all.

  17. Three dumb points possibly, related to the Hackaday coverage rather than the creator…

    1. I didn’t see any mention of the write speed of the SD card. It would be super embarrassing to have your flush to disk run longer than the time allotted to the task. I am guessing the RAM limit will preclude that concern, but would it if using the compression solutions?
    2. If flushing compressed to combat #1, I presume you can stream the logs to SD without an un/compress cycle, depending on support and the lib in use.
    3. If you are trying to save write ops to the SD, what are you actually saving by moving log files to RAM, other than write speed? Unless you are writing circular logs or otherwise ditching log content frequently, the flush-to-SD action will still perform the exact same number of writes as without, even if compressed. If the logs are worth saving to disk in the first place, they are unlikely to be jettisoned between flush actions, except in emergency RAM-availability scenarios or extreme failures. How does this really save the SD if the write is simply delayed for an hour? The speed of multiple progs writing simultaneously is the only win I see, i.e. the real definition behind classic disk thrashing based on too much head movement.

    1. I don’t really like log2ram, as yeah, it does some funny things like writing out complete log files every hour even if there was just a single-line change.
      If you are going to use Log2Ram I suggest deleting the hourly cron job, as really it’s pointless to have the hour before’s logs but not the logs of a critical crash; with watchdog routines and a cheap UPS you can still be mission-critical and just use the write on stop.
      Some apps such as Pi-hole have very active logs, and with high-frequency writes each write is often another block write.
      You can do this in RAM and then just write out on shutdown.
      I have seen some scripts that mount the whole system via OverlayFS with the lower read-only and RAM for the write upper, and block wear is none.
      It depends on the application and write quantity. SD cards are cheap, but the time and hassle to replace them might not be, so there are a few ways to accomplish this, and for logs Log2Ram is one.

  18. Re #3: it saves far more wear than you’d think. It has to do with block sizes and trimming. Small files like logs, which add literal bytes when they write each line to a file, do this continually. File systems organize storage space by blocks of bytes. The storage medium has its own internal block sizes. When you write to the SD, it writes an entire block. When you add to that file, it rewrites that block, which is a write cycle. In many systems, it writes to a new block and marks the old block as freed, causing a write cycle to new blocks. With SD cards the blocks tend to be small (512-1024 bytes), but so are log file entries. 5 entries to a log file could easily be 5 full write cycles. It’s not as big a deal on magnetic hard drives, or even on a good SSD, but SD write cycles are super low comparatively.

    Tiny writes to flash memory are murder to memory cells.

  19. Funny, we used to do this in the bad old days to extend laptop life by spinning down the hard drive for extended periods of time. With enough RAM and good caching, you could go a long time without the spinning rust platters, and then the limiting factor was the frequency of logging.

    1. Funny, we still do it in the good new days too. My main server spins no platters unless someone is accessing the data on one of the disks. With Ubuntu Server it took a bit of work to get rid of everything that regularly accessed the boot drive… Uptime is currently a bit over 2 years…

  20. I suggest you have a look at https://github.com/StuartIanNaylor/zram-config as it uses OverlayFS and zram for the zlog.
    This means that, due to the copy-up of COW, the upper zram mount only contains writes, whilst the lower bind mount of the original is read-only.
    It further reduces the memory footprint by shipping old logs to persistent storage, and doesn’t do the pointless full copy of any changed log each hour.
    Also, any directory can be moved in the same way, and large directories can be held in tiny memory footprints because of the zram write upper and read-only lower of an OverlayFS, without needing to copy all the files to memory on boot.
    It uses /etc/ztab, where any number of zswaps or zdirs and a zlog can be created.

    Please come and join the community at https://github.com/StuartIanNaylor/zram-config, as I welcome ideas, collaboration and hopefully community support.
    I claim no ownership of zram-config; it just became what it is due to the frustration of so many issues with the available utils.
    Come join, give https://github.com/StuartIanNaylor/zram-config a go, and hack/copy, share and improve.

    1. I tried this on my Raspberry Pi that runs OpenHAB. It installed perfectly. Unfortunately, however, my OpenHAB system stopped working (lights weren’t responding to switch events), so I had to uninstall it.

        1. Works for me with the OpenHabianPi img with no conflicts. Maybe it’s because the default ztab had a dir set up expecting /home/pi and that failed, but it should have no effect on openHABian as far as I can tell, from as much as I have looked at it.

          [06:19:41] openhabian@openHABianPi:~$ zramctl
          NAME ALGORITHM DISKSIZE DATA COMPR TOTAL STREAMS MOUNTPOINT
          /dev/zram0 lz4 600M 4K 76B 4K 4 [SWAP]

          1. Hi Stuart,
            I’m looking to add your zram-config to openHABian as a default.
            Could I ask you to help with building a config ?
            I’m not sure if I need to create the bind_dir and how oldlog_dir works (how does it know which logs to rotate?)
            Here’s my current idea:
            # swap alg mem_limit disk_size swap_priority page-cluster swappiness
            swap lz4 200M 600M 75 0 90

            # /var/lib/openhab2 200M persistence, cache, tmp (currently 34+119+24 MB)
            # dir alg mem_limit disk_size target_dir bind_dir
            dir lz4 50M 250M /var/lib/openhab2 /openhab2.bind

            # log alg mem_limit disk_size target_dir bind_dir oldlog_dir
            log zstd 50M 250M /var/log /log.bind /volatile/oldlogs
            dir lz4 10M 1000M /var/cache /cache.bind
            dir lz4 10M 50M /var/tmp /tmp.bind

          2. I’m actually seeing the problem that the directory /var/lib/openhab2 is read-only. Did you mix up the upper and lower layers in OverlayFS? But it’s only for this directory. Strange …

  21. The article suggests that Log2Ram is the only method for protecting your SD card, but if you have enough RAM and you don’t need compression, use tmpfs.
    It’s more mature and battle-tested overall, and a core part of almost all Linux distros.

    best regards
