Linux Fu: Remote Execution Made Easy

If you have SSH and a few other tools set up, it is pretty easy to log into another machine and run a few programs. This could be handy when you are using a machine that might not have a lot of memory or processing power and you have access to a bigger machine somewhere on the network. For example, suppose you want to reencode some video on a box you use as a media server, but it would go much faster on your giant server with a dozen cores and 32 GB of RAM.

Remote Execution

However, there are a few problems with that scenario. First, you might not have the software on the remote machine. Even if you do, it might not be the version you expect or have the same configuration as your local copy. Then there’s the file problem: the input file should come from your local file system, and you’d like the output to wind up there, too. These aren’t insurmountable, of course. You could install the program on the remote box and copy your files back and forth manually. Or you can use Outrun.

There are a few limitations, though. You do need Outrun on both machines and both machines have to have the same CPU architecture. Sadly, that means you can’t use this to easily run jobs on your x86-64 PC from a Raspberry Pi. You’ll need root access to the remote machine, too. The system also depends on having the FUSE file system libraries set up.
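
Setup is otherwise light. Assuming the package name matches the project, and that Python 3 and your distribution’s FUSE packages are already in place, installation should be little more than:

pip3 install outrun      # on both the local and the remote machine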

A Simple Idea

The idea is simple. You could do a video encoding like this:

outrun user@host ffmpeg -i input.mp4 -vcodec libx265 -crf 28 output.mp4

This will work even if ffmpeg isn’t installed on the remote machine, and the input and output files will be on your local box where you expect them. The project’s GitHub page has a screencast showing it in action.

A Complex Implementation

How does this work? A FUSE file system mounts your local filesystem remotely using a lightweight RPC file system. Then a chroot makes the remote machine look just like your local machine but — presumably — faster. There are a few other things done, such as setting up the environment and current directory.

The chroot, by the way, is why you need root on the remote machine. As an ordinary user, you can’t pivot the root file system to make this trick work.
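
You can approximate the idea by hand just to see how the pieces fit. This isn’t what Outrun actually does internally (it brings its own RPC filesystem and plenty of housekeeping), but a rough sketch run on the remote machine might look like this, with the hostname and paths made up:

mkdir -p /tmp/localroot
sshfs me@laptop:/ /tmp/localroot -o allow_root    # expose the local machine's root here; needs user_allow_other in /etc/fuse.conf
sudo chroot /tmp/localroot /bin/bash              # remote CPU, but your local files and programs
# a real setup would also bind-mount /proc, /dev, and /sys inside the chroot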

To improve performance, Outrun caches system directories and assumes they won’t change over the life of the command. It also aggressively prefetches using some heuristics to guess what files you’ll need in addition to the one that the system asked for.

The Future

We wish there were an option to assume the program will execute on the remote machine and only set up the input and output files. This would make it easier to do things like slice a 3D print on a remote PC from a Raspberry Pi running OctoPrint, for example. Of course, this is all open source, so maybe we should go make that fix ourselves.

Then again, you could do something like this pretty easily with sshfs and some other tricks. If you want to run a program on a bunch of remote machines, there are ways to do that, too.
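
For the simpler case where the program already lives on the remote machine, a rough sketch of the sshfs route might look like the following, assuming key-based SSH works in both directions and with every hostname and path invented for the example:

ssh me@bighost 'mkdir -p ~/work && sshfs me@laptop:/home/me/videos ~/work'   # mount your local directory on the big machine
ssh me@bighost 'cd ~/work && ffmpeg -i input.mp4 -vcodec libx265 -crf 28 output.mp4'
ssh me@bighost 'fusermount -u ~/work'                                        # clean up the mount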

20 thoughts on “Linux Fu: Remote Execution Made Easy”

  1. Wouldn’t the Future be Docker and Containers, or am I missing something? I like the way this works, it is lightweight and needs essentially no setup. But the wish for programs to execute on remote machines is surely within the Docker remit?

    1. Yes, you’re missing something. Containers, particularly Docker, are a solution to fixing symptoms, not problems. They help alleviate the issues which arise from bad practices. That doesn’t have anything to do with what’s being offered here.

  2. Anyone remember MOSIX and OpenMOSIX? It provided transparent process migration across the network to be closer to resources. Rather than explicitly telling your system you wanted to run ffmpeg remotely, you’d just run it, and if the scheduler noticed a computer on the network with more idle CPU cycles than yours, the process would be invisibly migrated to that machine. And then it would get migrated again if another better machine became free…

    1. Used OpenMOSIX on a cluster of 4-5 machines in the early 2000s for a few years. The MOSIX global file system and the easy process manager were fun to use on my mix of heterogeneous machines, as I essentially used it as one large desktop with 3 machines hooked to monitors. I was very sad when everything seemed to move to containers, batching won out, and most designs moved to a requirement for similar machines.

    1. Piping through SSH is probably faster and easier and doesn’t care about architectures.

      It requires you to have all the tools on the remote machine, though, and if your command doesn’t support working with standard input and output it gets considerably more complex, as you’ll need to either use temp files or copy them with SFTP before and after execution.

      It could easily be scripted, but your script needs to be tailored for each command since not all commands have the same way of specifying input and output.

      For ffmpeg, though, piping through SSH seems the easiest way since it has great support for pipes.
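
      Roughly something like this, untested and with the hostname and settings made up:

      ssh me@bighost 'ffmpeg -i pipe:0 -f matroska -vcodec libx265 -crf 28 pipe:1' < input.mp4 > output.mkv
      # matroska because the output container has to tolerate a non-seekable pipe;
      # some mp4 inputs with the index at the end may not like being piped in either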

      1. Good point about the availability of the tool. There have been plenty of times I’ve ssh’d somewhere and found I’ve not got the tools I was expecting.

        I was mainly thinking about security – with the SSH approach, it’s the host computer which is vulnerable if the client is compromised, but with this approach, a compromised host has access to the client’s file system.

  3. When you consider the video case mentioned, you have to weigh the speed of doing the job locally against the impact on your network; if the other server is not on your network, the impact on the internet; and lastly the impact of and on the remote network. Sadly, no one cares about the internet anymore, but you may piss people off on both local nets if you saturate them with your video. And that all takes time. I am not sure how well ffmpeg parallelizes, but I suspect not super well. The gain you get tossing it at more cores, I suspect, is marginal. So you go through all these hoops and wind up about where you were speed-wise. This tool sounds like it has uses, but I am not sure resampling videos is it.

  4. > Sadly, that means you can’t use this to easily run jobs on your x86-64 PC from a Raspberry Pi.

    Not without one extra piece; I’ve done big Pi compile jobs (OpenCV, etc.) on my x86 machine by setting up QEMU for user emulation and chrooting into a local copy of a Raspbian image. Running ARM binaries from the command line then transparently invokes QEMU to execute them. Even with the added overhead, it’s a massive win because syscalls are all native, and you have all the memory, cores, and IO speed of the bigger machine.

    Obviously you’d lose the IO speed advantages if you were running over a FUSE network filesystem, but that’s also not the limiting factor for a lot of tasks.
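
    From memory, the recipe on a Debian-ish host looks roughly like this (the image name and loop device are illustrative):

    sudo apt install qemu-user-static binfmt-support
    sudo losetup -fP --show raspbian.img                  # prints the loop device and scans its partitions
    sudo mkdir -p /mnt/rpi
    sudo mount /dev/loop0p2 /mnt/rpi                      # the image's root partition; match the device losetup printed
    sudo cp /usr/bin/qemu-arm-static /mnt/rpi/usr/bin/    # newer binfmt setups may not even need this copy
    sudo chroot /mnt/rpi /bin/bash                        # ARM binaries now run transparently through QEMU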

  5. I think this is over-engineering for most use cases. With sshfs you can mount remote filesystems and work with files as if they are local. You can use SSH to run a command remotely. Sure, you can’t run local programs, but in most situations it shouldn’t be too problematic to install them on that system.

    To do all this you can make a quick and dirty script that mounts the file system, runs the command, and unmounts the file system.
