If you have more than one Linux computer, you probably use ssh all the time. It is a great tool, but I’ve always found one thing about it strange. Despite having file transfer capabilities in the form of scp and sftp, there is no way to move a file back or forth between the local and remote hosts without starting a new program on the local machine or logging in from the remote machine back to the local machine.

That last bit is a real problem since you often access a server from behind a firewall or a NAT router with an ephemeral IP address, so it can’t reconnect to you anyway. It would be nice to hit the escape character, select a local or remote file, and teleport it across the interface, all from inside a single ssh session.
I didn’t quite get to that goal, but I did get pretty close. I’ll show you a script that can automatically mount a remote directory on the local machine. You’ll need sshfs on the local machine, but no changes on the remote machine, where you may not be able to install software. With a little more work, and if your client has an ssh server running, you can mount a local directory on the remote machine, too. You won’t need to worry about your IP address or port blocking. If you can log into the remote machine, you are good.

Combined, this got me very close to my goal. I can be working in a shell on either side and have access to read or write files on the other side. I just have to set it up carefully.
Wait… Is that Cheating?
You might say this is cheating because you are really using two ssh connections: one for the file system mount and another to log in. That’s true. However, if you have ssh set up properly, you’ll only authenticate once, and it won’t be as much overhead as two separate connections.

In addition, the script hides the details, so from a user’s point of view you connect (almost) the same as usual and it just works.
About SSHFS
The sshfs program is a FUSE (Filesystem in Userspace) file system, which means it runs as a user-space layer over the underlying file system. In this case, the underlying file system is an ssh server that can do sftp. This lets you access a file on the remote machine as if it were on the real filesystem on the local machine. If you haven’t used it, it works quite well.

If you have a login set up for a machine myserver, you simply run sshfs myserver:/home/admin ~/mounts/myserver from the local machine. Now the /home/admin directory on the remote machine will appear at ~/mounts/myserver on the local machine.

There are some options you can use. For example, it is useful to have the file system reconnect after a broken connection. Read the man page for more.
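For instance, a mount that tries to survive network hiccups might look something like this (the reconnect option plus keepalives that get passed through to the underlying ssh; adjust to taste):

sshfs -o reconnect -o ServerAliveInterval=15 -o ServerAliveCountMax=3 myserver:/home/admin ~/mounts/myserver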
Because sshfs uses the remotely mounted version of the file, all changes made show up on the remote machine, but once you’ve shut sshfs down, you’ve got nothing on the local box. Let’s fix that.
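Shutting it down, for the record, is just a matter of unmounting the FUSE mount:

fusermount -u ~/mounts/myserver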
Before The Script
Before I get into the script, there is a little setup on the client that you could customize if you like. I create a directory ~/remote and then create a subdirectory for each of my remote computers. For example, ~/remote/fileserver and ~/remote/lab.
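That setup is nothing more than a couple of directories, named to match your hosts:

mkdir -p ~/remote/fileserver ~/remote/lab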
The script is called sshmount and it takes all the same arguments as ssh. To make life easier, you should have your details in the ~/.ssh/config file for the remote host so that you can use a simple name. For example, lab might be something like this:
Host lab
    Hostname lab.wd5gnr-dyn.net
    Port 444
    User alw
    ForwardX11 yes
    ForwardX11Trusted yes
    TCPKeepAlive yes
    Compression yes
    ControlMaster auto
    ControlPath ~/.ssh/master-%r@%h:%p
That isn’t strictly necessary, but then you get a nice ~/remote/lab directory and not ~/remote/alw@lab.wd5gnr-dyn.net:444, which is annoying to use. There’s nothing magic about any of these parameters, but ControlMaster and ControlPath do make multiple connections more economical, which is important in this case.
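If you want to see the multiplexing at work, you can ask ssh about the master connection from a second terminal while you are logged in (assuming the config above):

ssh -O check lab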
You’ll also want to set up logging in automatically using a certificate if you haven’t already. We did a post on this for the Raspberry Pi, but it really applies to any ssh setup.
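If you haven’t done that before, a minimal key-based setup looks something like this (the ed25519 key type is just a reasonable default):

ssh-keygen -t ed25519
ssh-copy-id lab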
The Script
The script has a split personality. If you call it via a link named sshunmount, it will unmount the directory associated with the named remote host. If you call it as anything else (usually sshmount), it will do three things:

- It checks for a directory under ~/remote that matches the remote host name (e.g., lab). If it fails to find one, it prints an error message and continues to execute ssh.
- If the directory exists, the script examines the list of mounted file systems to see if it is already mounted. If it is, the script just continues with ssh.
- If the directory is not mounted, the script calls sshfs and then proceeds with ssh.
You can find the script on GitHub, but here’s the gist of it (less some comments):
#!/bin/bash
if [ "$1" == "" ]
then
  echo "Usage: sshmount host [ssh_options] - Mount remote home folder on ~/remote/host and log in"
  echo "   or: sshunmount host - Remove mount from ~/remote/host"
  exit 1
fi

# if called as sshunmount...
if [ "$(basename "$0")" == sshunmount ]
then
  echo Unmounting... 1>&2
  fusermount -u "$HOME/remote/$1"
  exit $?
fi

# normal call...
if [ -d "$HOME/remote/$1" ]               # does directory exist?
then
  if mount | grep -q "$HOME/remote/$1 "   # already mounted?
  then
    echo Already mounted 1>&2
  else
    sshfs -o reconnect "$1:" "$HOME/remote/$1"   # mount
  fi
else
  echo "No remote directory ~/remote/$1 exists" 1>&2
fi
ssh "$@"   # do log in
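To get the unmounting personality, put the script somewhere on your path and make the second name a link to it (the ~/bin path here is just an example):

ln -s ~/bin/sshmount ~/bin/sshunmount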
This gives us half of what I wanted. My local machine has a direct mapping of the remote file system while I’m logged into the system. But getting the local directory mapped to the remote machine is a bit harder.
Reversing the Process
If you want to experiment with having a local directory mounted on the server, you can do that too, provided you have an ssh server running on the local machine. Of course, if your local machine is visible to the host and accessible, that’s trivial. Just run sshfs on the remote machine and mount a directory from the local machine. But in many cases, you won’t have an accessible route from the remote machine through whatever firewalls and routers you are behind, especially on something like a laptop that doesn’t stay in one place.
There is still an answer, though. It requires two things. First, you need to add an extra argument when you call sshmount (you could edit the file if you wanted to always do this):
sshmount MyServer -R 5555:localhost:22
Then after you are on the host, run
sshfs -p 5555 localhost:/home/me ~/local
The -R option creates a listening socket on port 5555 on the remote machine (which, obviously, needs to be otherwise unused) and maps it back to us on port 22. Assuming there is an ssh server on port 22, this allows the server to log back into our local machine over the same connection. There is no need to know our IP address or have an open port.
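You can sanity-check the tunnel once you are on the server; this should land you back on your own local machine (add your local username if it differs on the two ends):

ssh -p 5555 localhost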
The sshfs command, which you could put in your startup files, maps your local /home/me directory to the remote server’s ~/local directory. If you log in locally too, there are several SSH_ environment variables you could use to tell if you are starting up remotely, for example $SSH_CLIENT or $SSH_TTY.
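As a sketch, a guard like this in the server’s shell startup file would only attempt the mount for remote logins (the port and paths are the ones from the example above, and it assumes the server can authenticate back to your client):

# only mount when this is an ssh login and ~/local isn't already a mount point
if [ -n "$SSH_TTY" ] && ! mountpoint -q ~/local; then
  sshfs -p 5555 -o reconnect localhost:/home/me ~/local
fi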
Of course, you’ll need to change the hosts and directories and port numbers to suit your environment. But once set up, you can have folders on both machines visible to the other. No, I haven’t tried circularly mounting the same directories. That might create a black hole.
Be Careful Out There
You should still be careful going in both directions. Tools that scan the whole file system, for example, could easily get confused. I also wish I had a better answer for cleanly disconnecting the server’s file share when you log out of the last session.
However, for now, the system works well and it is an easy way to share files from within an ssh session without much work. Another answer might be to just keep directories synchronized and use those directories for transfers. Want more stupid ssh tricks? We got ’em.
SSH with scripting is one of the most complex and powerful tools in Linux. I used to use it with tunnelling to run screens from a remote UltraSPARC on my Linux desktop. Very nice article on shares.
When connecting to a Linux machine from a Mac, you need to do some name encoding conversion if you’re using characters outside ASCII. It took me quite a while to figure out the fix.
`sshfs -p 22 user@host:/dir/ -o allow_other -ovolname=Linux -o modules=iconv,from_code=UTF-8,to_code=UTF-8-MAC`
Or you could just use emacs.
https://www.gnu.org/software/emacs/manual/html_node/emacs/Remote-Files.html
You won!
;-)
I just wanted to point to tramp, but you were faster.
And (tramp * eshell) is a nice product too!
For less planned access (I happen to be logged in to some machine through an ssh terminal… now I need to transfer a file) is there a good ssh client with x/y/z-modem or Kermit?
Slick idea.
But I think the smart creators of SSH already have such a thing implemented.
> How?
I don’t know. I’m making this reply to push the discussion further, in the hope it yields a solution.
To feed the discussion: typing `~.` (tilde dot) at the client side closes the connection. Perhaps `~` (tilde) is also used as a preamble to a file transfer.
Apparently the ControlMaster option will let you use an existing ssh connection to start another sftp session between the same two hosts after you’ve already logged in:
https://groups.google.com/forum/?_escaped_fragment_=topic/comp.security.ssh/IZhIH9XHFSw
I use mc (Midnight Commander) to do that. It has a vfs where you can change to that remote dir with cd sh:// and then you can do almost everything you can on a local drive/dir.
ehh.. things go missing at post time :( the command is:
cd sh://vdr where vdr is, in my network, the server name and also the username
Have a look at Magic Wormhole, https://github.com/warner/magic-wormhole
easy ad-hoc encrypted file transfer
sshfs, smbfs, pre-shared keys, fuse, and the automounter let you put together a really simple way to access the filesystems of every system within your network segment. Read only.
This setup even properly handles seeking within files, so you can do cool things like tail -f on massive log files, on hundreds of machines, at the same time. It can be rather processor intensive at that level due to all the encryption happening, but on a trusted network segment you can turn that down or even disable it if desired.
Simply awesome.
I think performance would suffer; NFS or rsync would be better.
Indeed, sshfs is not a good solution for transferring a lot of files. Rsync is very good.
PS. sshfs is an abandoned project
Until you blindly trust that rsync never gets it wrong, and its incremental updates determine no changes are needed even though they are.
Now your systems are no longer in sync, but rsync claims otherwise. Especially annoying when relying on a cron’d rsync.
I’m not so sure about sshfs being abandoned. I had to recompile it on Gentoo a few months ago because they changed how some of the options worked and updated it to work with the new(er) FUSE system. I had updated my Ubuntu system and then found my Gentoo system couldn’t automount as it had before and had to update sshfs to get it to work as expected.
I actually got about the same performance out of sshfs as I ever did out of NFS. I had to switch to NFS for a short time while updating my Gentoo system to get it compatible with one of my Ubuntu systems and I noticed no great difference in performance. The biggest bottleneck was my network.
You might want to have a look at rclone (https://rclone.org/).
It’s meant mainly for cloud storage, but also works over ssh, and can copy, sync, mount as drive, etc.
It can even serve mounted filesystems over SFTP, HTTP, WebDAV, FTP and DLNA.
Windows, macOS, Linux, and FreeBSD.
I use rclone for this purpose also. I have ~/cloud and then mount points in there for Google, NextCloud, and all my remote machines. I have to put an echo in my profile though reminding me that when it fails, run rclone config to redo the credentials.
“I also wish I had a better answer to cleanly disconnect the server’s file share when you log out of the last session.”
I have a script for per-project encrypted directories. It mounts the directory, starts a new instance of a graphical terminal with “gnome-terminal --disable-factory”, and after that exits, it unmounts. That way the directory is automatically unmounted when I close the whole terminal window, but I can still have multiple tabs inside the window and close and open them as I want.
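Roughly, the pattern is mount, block on the terminal, then unmount (encfs is just one example of an encrypted FUSE directory here, and the paths are made up):

encfs ~/.project.enc ~/project
gnome-terminal --disable-factory --working-directory="$HOME/project"
fusermount -u ~/project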
You should learn about the escape codes in SSH. Typically, type Enter, then ‘~’ and ‘?’.
This allows modifying the SSH session dynamically, and thus opening a port for forwarding transfers.
Thus, the sequence is
Enter
‘~’
Ctrl+Z
scp myFile me@host:
password
fg
Thanks for the `~?`. It opens a path to the other possibilities of escape with `~`.
Yes I’m aware of that, but doing an scp isn’t nearly as handy as having the remote filesystem live. I do use the escape all the time to set up ad hoc port forwards.
You can also use shoop.
What is `shoop`?
(My web search only yielded a Salt-N-Pepa song about shoop, the slang word)
Shoop: “high-speed encrypted file transfer tool reminiscent of scp, written in rust”
When you have a search collision like that, add linux to the term; “shoop linux” gave me results.
Or just use mc, which is easy to use for copying files back and forth over ssh.
BTW: You can easily automate ssh and other tools by using expect. https://linux.die.net/man/1/expect
I have been using sshfs for years in place of NFS as I prefer the improved security and flexibility. One thing I added to my mount script is a line like:
ssh-add -l | grep -q <signature> || ssh-add
Where <signature> is the signature for your key. This may or may not be necessary depending on how your ssh-agent is invoked for your particular DE.