Rackmount RasPi Leaves No Excuse To Lose Data

RasPi backup server

[Frank] knows how important backups are for data security, but his old method of plugging a hard drive in to take manual backups every so often is not the most reliable or secure way of backing up data. He realized he was going to need a secure, automated solution. He didn’t need a full-sized computer with a ton of power; why waste electricity for something so simple? His solution was to use a Raspberry Pi as the backup computer.

The main problem he faced with the Pi was finding a way to make it rack mountable. [Frank] started with an empty 1U server case. He then had to bend a few metal plates in order to securely mount the backup drive into the case. A couple of small rubber pads help dampen any vibrations caused by the hard drive.

The computer power supply was able to put out the 12V needed for the hard disk, but not the 5V required to run the Pi. [Frank’s] solution was to use an LM2596-based switching supply to turn the 12V into 5V. He soldered the power supply wires directly to the Pi, thinking that a USB plug might vibrate loose over time. Mounting the Pi to the computer case should have been the trickiest part, but [Frank] made it easy by simply gluing the Pi’s plastic case to the inside of the computer case. When all was said and done, the backup server pulls 29W under full load, 9W with the disk spinning, and only about 2W in an idle state.

On the software side of things, [Frank’s] backup box uses bash shell scripts to get the job done. The Pi connects to his main server via VPN and then the bash scripts use rsync to actually collect the files. The system not only saves backups every night, but also keeps week-old backups just in case. If you are really paranoid about your backups, try hooking up a custom battery backup solution to your Pi. If a Pi just isn’t doing it for you, you can always try one of many other methods.
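
The post doesn’t publish [Frank’s] actual scripts, but a nightly rsync run with a week of history can be sketched with rsync’s --link-dest option. The host, paths, and seven-slot layout below are illustrative assumptions, not his setup:

    #!/bin/bash
    # Minimal sketch of a pull-style nightly backup with a week of history.
    # SRC, DEST, and the day-numbered layout are placeholders.
    SRC="mainserver:/srv/data/"
    DEST="/mnt/backup"
    DAY=$(date +%u)   # 1..7, so each weekday slot is overwritten a week later

    # Hard-link unchanged files against the previous run to save disk space.
    rsync -a --delete --link-dest="$DEST/latest" "$SRC" "$DEST/day-$DAY/"

    # Point "latest" at the snapshot we just made.
    ln -sfn "$DEST/day-$DAY" "$DEST/latest"

Run from /etc/cron.daily (or a crontab entry), something like this gives the nightly snapshots and the week-old fallback copies described above.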

35 thoughts on “Rackmount RasPi Leaves No Excuse To Lose Data”

  1. Nice. How do you plug the RPi into a drive… some USB-SATA adapter? I wanted to build a similar device but was looking for a board with built-in SATA. Think the Cubieboard might be the ideal solution.

    1. We have three Dell PowerEdge 2850s at work, and their disks are mounted upside down for some unknown reason. The machines turn 10 this year and we’ve only had one disk failure out of a total of 18 disks. Statistically speaking, that’s actually better than the service record of our ‘properly’ mounted disks. Which isn’t an endorsement. Mount your disks properly, people!

  2. samba share with a pi is about 9MB/s if your pi drive is ext4. If the pi’s drive is ntfs you’ll be limited to about 5MB/s

    Btw this is with the pi overclocked to the max setting — yes a samba transfer is all it takes to max out the cpu

    1. That’s part of it, but considering what you have to work with, 9MB/s isn’t unexpected.

      The Pi only has a Fast Ethernet adapter, which will never transfer faster than 12.5 MB/s (100 Mbit/s ÷ 8, and that figure still includes packet overhead). Unless you’re pulling files off the Internet with a <100meg WAN connection, the bottleneck is the Pi's NIC.
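
      One quick way to confirm the NIC is the ceiling (assuming iperf is installed on both ends; the host name is a placeholder):

        iperf -s                  # on the Pi
        iperf -c raspberrypi.lan  # from another machine; expect roughly 94 Mbit/s on Fast Ethernet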

    2. I ran into similar issues when building my NAS. The Pi only has a 100MBit ethernet connection. That’s about 12.5 MBytes/s theoretical max. Couple that with the fact that the USB connectors and the Ethernet share the same USB controller… It becomes clear that 9MB/s is actually quite fast for the Pi and it’s not really the CPU’s fault…

      The Pi is just not fit for any form of networked storage/backup utility. Sure, as a UPnP renderer or internet cache/proxy server or something like that it’s perfect. Not for high throughput network-based storage…

      I ended up buying an HP ProLiant MicroServer N54L… It has orders of magnitude more capability than a Pi for only about three times the price of a bare board. Power consumption is about 40W under load. Never been happier!

  3. Why use a VPN? For fun? I see a rollover serial-to-USB cable on the floor. Checked the other photos and he has a full-on Cisco lab. He knows how to set up proper VLANs, ACLs, and such. There is no need for a VPN.

    Near the end he states, ‘With SCP (over plain ethernet, without VPN), I’m able to reach ~1.5MB/s from disk to disk on small files, which is “reasonable”. If you enable compression, you drop to 1.0MB/s (and more latency).’

    That tells me two things: first, he may not be using a VPN after all. Second, he needs to study more. Those compression options are ancient, from the days when nearly all network traffic was plaintext. Nowadays almost everything is already compressed, and running a compression algorithm over such data again makes the transmitted size LARGER, not smaller, hence the slower speeds.
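
    You can reproduce the comparison yourself; the host and file below are placeholders:

      time scp    server:/tmp/testfile .   # plain copy
      time scp -C server:/tmp/testfile .   # the same copy with compression (-C) enabled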

      1. Enjoy politics? Using hyperbole, ad hominem, reductio ad absurdum, and manipulated context to make a point does not make for compelling rhetoric to an audience of intelligent people.

        The point is that video, voice, images, programs, and some document formats, the vast majority of what we move about networks today, are all compressed. We don’t have two-terabyte drives to store our huge collection of text files. This is freshman level network stuff here.

        Might I suggest a class on ethics, another on logic, and a follow-up on basic network administration for anyone on the low side of Dunning-Kruger on these subjects.

  4. With such a slapdash build, soldering the power wires seems out of place. If the connection was loose enough for the connector to vibrate out, mod the connector; just chink it or something. Also, IMNSHO, any backup chain that has USB somewhere in it is flawed and will fail at some point. Surely a SATA solution isn’t that much more difficult?

  5. Why oh why?
    With all due respect to this guy, this is just another case of RPi fetishism.
    There are other boards with native SATA and Ethernet (even GbE) which would be much, much better for the job. The use of USB is utter fail.
    Also, why not just put OpenWrt on a hackable NAS?
    Oh, and 29W seems like an awful lot for such a spartan build.

    1. It’s the power supply. It defeats the purpose of using the low-power Pi by plugging it into an inefficient PSU.
      I’m using two wall-warts in a similar setup (one for the pi, one for the USB drive) and my Kill-A-Watt reports 5W max. for the pair.

      1. I can only imagine that the 29 watt figure is from when the hard drive is spinning up. A high-capacity 3½ inch drive can draw quite a bit of power on start-up.

  6. Cool project, although I accomplish the same task using a 1TB USB drive stuck to a Pi running BitTorrent Sync. The setup is placed at a family member’s house to provide offsite backup of photos. It’s always on, so as soon as my PC gets a new photo from a camera/smartphone etc., btsync finds the offsite Pi and trickles the new encrypted data to it. Thus a real-time backup without the need for custom bash scripts, rsync, and so on. I like yours for the geeky bits though. Hats off to you, sir.

    1. Since BitTorrent Sync requires a fixed amount of RAM for each file you sync, you quickly run out of memory.
      Imagine a project with a .git directory, where you easily have 40,000 files.
      Each file consumes between 300 and 400 bytes of RAM, so if you have 1 million files you’ll need 300-400MB of RAM.

      And RAM is the one thing the Raspberry Pi doesn’t have.

      Your solution just won’t fly. It will crash as soon as you hit the file-count limit, no matter how few GB of the 1TB external disk you have actually used.
      I hit the memory limit with just 3GB worth of shared files.
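
      If you want a rough feel for the cost before trying it, a back-of-the-envelope estimate (assuming ~350 bytes per file, as above; the path is a placeholder) is easy from the shell:

        # count regular files in the share and convert the RAM estimate to MB
        echo $(( $(find /path/to/share -type f | wc -l) * 350 / 1048576 )) MB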

      Ignorance is bliss. A thread about memory usage:
      http://forum.bittorrent.com/topic/27965-btsync-memory-usage/

      Laszlo

      1. By “ignorance is bliss” I mean the BitTorrent Sync authors. There has been absolutely no movement in that regard (low memory usage) on their part.

        (I.e., being able to specify an "archive" directory that isn’t watched all the time, hence modest RAM usage.)

        After I failed with BitTorrent Sync, I gave git-annex assistant a spin.
        Still no luck.

        Now I will revisit the whole issue with a udev-based swap mount on the external drive, assigning 4GB of swap to the Raspberry Pi. Speed is not an issue; I don’t care if the Pi’s CPU sits in iowait all the time. I want a bulletproof solution.

          1. Well, it’s been running in excess of a year (24/7) and has synced over 50GB of data. The trick to get round the memory issue on the Pi is to set up a cron job which periodically moves the contents of the Pi’s sync folder to another location on that USB hard drive (something like the crontab line below). That way the sync folder can be flushed, and therefore no memory issues.

            You may have guessed from my nickname that I am quite keen on ZFS. The sync folder at the feed end sits on a ZFS array (FreeNAS) in my house, which is just about the best file system in the world for data integrity, snapshots, etc. Good point though, but it works for me, and in total it has cost me 100 pounds (UK). Check that against 1TB of Dropbox space at 1000 pounds per year. I think you will agree that this is a pretty good solution.
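
            A sketch of that cron job; the paths and the hourly schedule are guesses, not my actual setup:

              # m h dom mon dow  command   (crontab -e on the Pi)
              0 * * * * rsync -a --remove-source-files /mnt/usb/sync/ /mnt/usb/archive/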

            1. Zfsforthewin: I still have not given up entirely on the Raspberry Pi solution,
              so I would really like to read more about your setup.
              Please share.

              I will periodically check back on this post, but I would really like to get in touch with you. My email address is a42b42(aT]gmail.com. But if you are afraid of getting too personal, I’m fine with a simple reply to this post with the URL linking to your detailed blogpost.

              Currently this is my plan:
              1. udev rules for when the external hard drive is plugged in, to swap on additional space from a given file (/media/extbig/raspi.swap)
              2. Write a daemon which autostarts btsync when extbig is present, and stops btsync if the hard drive is unplugged.
              3. The daemon also acts as an archiver: it watches the /media/extbig/btsync/archive directory and moves any file present to /media/extbig/archive.
              4. Unarchiving is still an open question.

            Best,
            Laszlo

          2. I can’t reply to my own comment, but here it goes anyway.

            > Currently this is my plan:
            The work has begun.

            > 1. udev rules when plugging external harddrive, to swap on additional space from a
            > given file (/media/extbig/raspi.swap)

            /media/btsync/swapfile, but close enough. The udev rule was quite tricky; udev is utterly complicated, and there is no event that triggers *after* the mountpoint has been created.

            So from the udev shell script I start a background process which simply waits for the path to appear, then finishes the swap setup and starts btsync.

            The dphys-swapfile that ships with Raspbmc is also half-implemented: it’s only a shell script around swapon, and you can’t pass parameters through to swapon, so I implemented my own.

            I needed this because if I unplug the drive, even though I launch an unmount.sh script, I can’t release the swapfile; it just stays stuck in /proc/swaps.
            To work around this buggy kernel behaviour (it’s like a hung NFS disconnect), I assign a priority to the swapfile, so each new drive plug-in creates its new swapfile with a higher priority than the one before.
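
            In case it helps anyone attempting the same, a minimal sketch of that attach script, assuming a filesystem labelled extbig, a swapfile created beforehand with dd/mkswap, and the path above; the rule text, polling loop, and timings are illustrative guesses:

              #!/bin/bash
              # extbig-attach.sh -- launched from a udev rule such as:
              #   ACTION=="add", SUBSYSTEM=="block", ENV{ID_FS_LABEL}=="extbig", \
              #     RUN+="/usr/local/bin/extbig-attach.sh"
              # (udev reaps long-lived RUN processes, so the waiting part is detached)
              SWAPFILE=/media/btsync/swapfile

              wait_and_swap() {
                  # the automounter creates the mountpoint after udev fires, so poll
                  for i in $(seq 1 60); do
                      [ -f "$SWAPFILE" ] && break
                      sleep 1
                  done

                  # pick a priority above everything already in /proc/swaps, so the
                  # newest swapfile wins even if a stale entry is stuck in there
                  PRIO=$(awk 'NR>1 && $5>max {max=$5} END {print max+1}' /proc/swaps)
                  swapon --priority "$PRIO" "$SWAPFILE" && btsync
              }
              wait_and_swap &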

            > 2. Write a daemon which autostart btsync when the extbig present.
            > Stops btsync in case of harddrive unplug.

            Done too. A simple shell script was enough; no need for a real daemon for this.

            > 3. The daemon also acts as an archiver: it watches the /media/extbig/btsync/archive directory and moves any file present to /media/extbig/archive.

            This is still a todo item. Maybe a crontab entry is a simpler approach than a full daemon.

            > 4. Unarchiving is still an open question.

            The same unsolved problem (maybe a crontab job there too?).

            Summary:
            btsync is crazy slow on the Raspberry Pi.
            I started it about 9 hours ago and still have absolutely no idea how far along it is or when it will finish.
            Total memory usage is 720MB at the moment (the swapfile is 4GB), so well above the Raspberry Pi’s memory; mine has 375MB of RAM because I assigned some to the GPU for XBMC.
            So 350MB of the 4GB swapfile is already in use.
            I have no idea what will happen if I unplug the drive, since I don’t know what resides in the swap space. It would be nice to be able to define which programs are allowed to use swap (xbmc.bin, btsync) while every other program must stay in main memory.

            I don’t think that is possible at all under Linux.

            My folder to share is 84.8GB. I estimate a full week for the Raspberry Pi to sync it.
            I will report back after a week on how it went. Maybe I will hit some other problem too.

            Before the swapfile hack, btsync crashed on the Raspberry Pi within 3 hours of startup, so this is definitely some progress.

            The whole hack was ignited by an accidental file deletion.

  7. I’m dead serious about backups. Half-hearted solutions will burn you, and if your backup isn’t convenient, you’ll end up finding ways to limit or lower your requirements.

    Take a proper PC/server as storage, wake-on-LAN it at night, and run your backup to disk. But wait, there’s more: use Bacula. Run the Storage daemon on the PC and the Director daemon on a machine that’s always on. A pre-run script on the Director does the WOL (see the sketch below). Rock solid. And don’t even get me started on on-disk and wire encryption…
    Check it out.
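
    Bacula can hook such a script in via a RunScript block in the Job resource (RunsWhen = Before, RunsOnClient = No, so it runs on the Director). A rough sketch, with placeholder MAC/host and assuming the common wakeonlan utility:

      #!/bin/bash
      # wol-storage.sh -- wake the storage box, then wait for Bacula's
      # storage daemon port (9103) before letting the job proceed.
      MAC="aa:bb:cc:dd:ee:ff"   # placeholder: storage server's NIC
      HOST="storage.lan"        # placeholder hostname

      wakeonlan "$MAC"

      # allow up to 3 minutes for the machine to boot
      for i in $(seq 1 36); do
          nc -z "$HOST" 9103 && exit 0
          sleep 5
      done
      echo "storage host did not wake up" >&2
      exit 1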

  8. Makes no sense whatsoever to build this. If high security and high capacity are needed, use a RAID6 array and an ECC-capable system.
    Otherwise, any old laptop pulls the same or less power than an RPi, uses less space, has a faster CPU, and comes with a screen and usually GBit Ethernet and WLAN. The cost is similar to or less than an RPi, or free if it’s pulled from the trash. So why so much effort to build something without ECC and with the performance of a Pentium II?
