If you have more than one Linux computer, you probably use
ssh all the time. It is a great tool, but I’ve always found one thing about it strange. Despite having file transfer capabilities in the form of
sftp, there is no way to move a file back and forth between the local and remote hosts without starting a new program on the local machine or logging in from the remote machine back to the local machine.
That last bit is a real problem since you often access a server from behind a firewall or a NAT router with an ephemeral IP address, so the remote machine can’t reconnect to you anyway. It would be nice to hit the escape character, select a local or remote file, and teleport it across the interface, all from inside a single ssh session.
I didn’t quite get to that goal, but I did get pretty close. I’ll show you a script that can automatically mount a remote directory on the local machine. You’ll need
sshfs on the local machine, but no changes on the remote machine where you may not be able to install software. With a little more work, and if your client has an
ssh server running, you can mount a local directory on the remote machine, too. You won’t need to worry about your IP address or port blocking. If you can log into the remote machine, you are good.
Combined, this got me very close to my goal. I can be working in a shell on either side and have access to read or write files on the other side. I just have to set it up carefully.
Wait… Is that Cheating?
You might say this is cheating because you are really using two
ssh connections — one for the file system mount and another to log in. That’s true. However, if you have
ssh set up properly, you’ll only authenticate once, and it won’t be as much overhead as two separate connections.
In addition, the script hides the details, so from a user’s point of view, you connect (almost) the same as usual and it just works.
The sshfs program is a user-space file system (FUSE), meaning it runs as a user-space layer over an underlying file system. In this case, the underlying file system is an
ssh server that can do
sftp. This lets you access a file on the remote machine as if it were on the real filesystem on the local machine. If you haven’t used it, it works quite well.
If you have a login set up for a machine
myserver, you simply run
sshfs myserver:/home/admin ~/mounts/myserver from the local machine.
The /home/admin directory on the remote machine will appear at
~/mounts/myserver on the local machine.
There are some options you can use. For example, it is useful to allow the file system to reconnect a broken connection. Read the man page for more.
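For instance, here’s the sort of mount I’d sketch to ride out network hiccups. The -o reconnect flag is sshfs’s own; the ServerAlive settings are ordinary ssh options that sshfs passes through, and the values (and myserver) are just placeholders to adapt:

```shell
# Remount automatically after a dropped connection; probe the link every
# 15 seconds and give up after 3 missed replies (tune to taste).
sshfs -o reconnect,ServerAliveInterval=15,ServerAliveCountMax=3 \
      myserver:/home/admin ~/mounts/myserver
```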
Since sshfs uses the remotely mounted version of the files, all changes made show up on the remote machine, but once you’ve shut
sshfs down, you’ve got nothing on the local box. Let’s fix that.
Before The Script
Before I get into the script, there is a little setup on the client that you could customize if you like. I create a directory
~/remote and then create a subdirectory for each of my remote computers. For example
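In my case that’s nothing more than a couple of mkdir calls; the subdirectory names match the Host entries in ~/.ssh/config (lab and myserver here are just examples):

```shell
# One mount point per remote machine, named after its ssh host alias.
mkdir -p ~/remote/lab ~/remote/myserver
ls ~/remote
```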
The script is called
sshmount and it takes all the same arguments as
ssh. To make life easier, you should have your details in the
~/.ssh/config file for the remote host so that you can use a simple name. For example,
lab might be something like this:
Host lab
    Hostname lab.wd5gnr-dyn.net
    Port 444
    User alw
    ForwardX11 yes
    ForwardX11Trusted yes
    TCPKeepAlive yes
    Compression yes
    ControlMaster auto
    ControlPath ~/.ssh/master-%r@%h:%p
That isn’t strictly necessary, but then you get a nice
~/remote/lab directory and not
~/firstname.lastname@example.org:444 which is annoying to use. There’s nothing magic about any of these parameters, but the ControlMaster and ControlPath settings do make multiple connections more economical, which is important in this case.
You’ll also want to set up automatic login using a certificate if you haven’t already. We did a post on this for the Raspberry Pi, but it really applies to any ssh setup.
The script has a split personality. If you call it via a link to
sshunmount, it will unmount the directory associated with the named remote host. If you call it as anything else (usually
sshmount), it will do three things:
- It checks for a directory under ~/remote that matches the remote host name (e.g., lab). If it fails to find one, it prints an error message and continues on to execute ssh anyway.
- If the directory exists, the script examines the list of mounted file systems to see if it is already mounted. If it is, the script just continues with ssh.
- If the directory is not mounted, the script calls sshfs and then proceeds with ssh.
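The split personality is nothing exotic; it’s the classic trick of dispatching on $(basename "$0"). A toy illustration (the file names here are arbitrary):

```shell
# Write a tiny script, then give it a second name with a symlink.
cat > /tmp/dispatch.sh <<'EOF'
#!/bin/bash
if [ "$(basename "$0")" == "sshunmount" ]; then
  echo "unmount branch"
else
  echo "mount-and-login branch"
fi
EOF
chmod +x /tmp/dispatch.sh
ln -sf /tmp/dispatch.sh /tmp/sshunmount

/tmp/dispatch.sh   # prints: mount-and-login branch
/tmp/sshunmount    # prints: unmount branch
```

The same file behaves differently depending on the name used to invoke it, which is why one script can serve as both sshmount and sshunmount.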
You can find the script on GitHub, but here’s the gist of it (less some comments):
#!/bin/bash
if [ "$1" == "" ]
then
   echo "Usage: sshmount host [ssh_options] - Mount remote home folder on ~/remote/host and log in"
   echo "   or: sshunmount host - Remove mount from ~/remote/host"
   exit 1
fi

# if called as sshunmount...
if [ "$(basename "$0")" == sshunmount ]
then
   echo Unmounting... 1>&2
   fusermount -u "$HOME/remote/$1"
   exit $?
fi

# normal call...
if [ -d "$HOME/remote/$1" ]                 # does directory exist?
then
   if mount | grep "$HOME/remote/$1 "       # already mounted?
   then
      echo Already mounted 1>&2
   else
      sshfs -o reconnect "$1": "$HOME/remote/$1"   # mount
   fi
else
   echo "No remote directory ~/remote/$1 exists" 1>&2
fi
ssh "$@"   # do log in
This gives us half of what I wanted. My local machine has a direct mapping of the remote file system while I’m logged into the system. But getting the local directory mapped to the remote machine is a bit harder.
Reversing the Process
If you want to experiment with having a local directory mounted on the server, you can do that too if you have an
ssh server running on the local machine. Of course, if your local machine is visible to the host and accessible, that’s trivial. Just run
sshfs on the remote machine and mount a directory from the local machine. But in many cases, you won’t have an accessible route from the remote machine through whatever firewalls and routers you are behind, especially on something like a laptop that doesn’t stay in one place.
There is still an answer though. It requires two things. First, you need to add an extra argument when you call
sshmount (you could edit the file if you wanted to always do this):
sshmount MyServer -R 5555:localhost:22
Then after you are on the host, run
sshfs -p 5555 localhost:/home/me ~/local
The -R option creates a listening socket on port 5555 of the remote machine (which, obviously, needs to be otherwise unused) and maps it back to port 22 on the local machine. Assuming there is an
ssh server on port 22, this will allow the server to log back into our local machine over the same connection. No need to know our IP address or have an open port.
The sshfs command, which you could put in your startup files, maps your local
/home/me directory to the remote server’s
~/local directory. If you log in locally too, there are several
SSH_ environment variables you could use to tell if you are starting up remotely, for example
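For example, here’s a guard you might put in a startup file like ~/.bashrc; sshd sets SSH_TTY and SSH_CONNECTION for remote logins, so checking either is one plausible test (the commented-out mount line is from the setup above):

```shell
# Only attempt the reverse mount when this shell came in over ssh.
if [ -n "$SSH_TTY" ] || [ -n "$SSH_CONNECTION" ]; then
  remote_login=yes
  # sshfs -p 5555 localhost:/home/me ~/local   # uncomment on the server
else
  remote_login=no
fi
echo "remote_login=$remote_login"
```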
Of course, you’ll need to change the hosts and directories and port numbers to suit your environment. But once set up, you can have folders on both machines visible to the other. No, I haven’t tried circularly mounting the same directories. That might create a black hole.
Be Careful Out There
You should still be careful going in both directions. Tools that scan the whole file system, for example, could easily get confused. I also wish I had a better answer for cleanly disconnecting the server’s file share when you log out of the last session.
However, for now, the system works well and it is an easy way to share files from within an
ssh session without much work. Another answer might be to just keep directories synchronized and use those directories for transfers. Want more stupid
ssh tricks? We got ’em.