Linux Fu: Pimp Your Pipes

One of the best things about working at the Linux (or similar OS) command line is the use of pipes. In simple terms, a pipe takes the output of one command and sends it to the input of another command. You can do a lot with a pipe, but sometimes it is hard to work out the right order for a set of pipes. A common trick is to attack it incrementally. That is, do one command and get it working with the right options and inputs. Then add another command until that works. Keep adding commands and tweaking until you get the final results.
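
For instance, you might grow a pipeline that tallies errors in a log one stage at a time, checking the output after each run (the file and field number here are just examples):

grep -i error /var/log/syslog
grep -i error /var/log/syslog | awk '{print $5}'
grep -i error /var/log/syslog | awk '{print $5}' | sort | uniq -c | sort -rn

Each line is one round of the loop: run it, eyeball the output, bolt on the next stage.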

That’s fine, but [akavel] wanted better and used Go to create “up” — an interactive viewer for pipelines.

Pipe Philosophy

Pipes can do a lot. They fit right in with the original Unix philosophy of making each tool do one thing really well. Pipes are really good at letting Linux commands talk to each other. If you want to learn all about pipes, have a look at the Linux Info project’s guide. They even talk about why MSDOS pipes were not really pipes at all. (One thing that write-up doesn’t touch on is the named pipe. Do a “man fifo” if you want to learn more for now, and perhaps that will be the subject of a future Linux Fu.)
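
As a teaser, a named pipe is just a pipe with a file name, so two unrelated processes can share it (all the names below are made up for illustration):

mkfifo /tmp/mypipe
gzip < /tmp/mypipe > backup.tar.gz &
tar -cf - ~/Documents > /tmp/mypipe
rm /tmp/mypipe

The backgrounded gzip blocks until tar starts writing, and no intermediate file ever touches the disk.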

This program — called up — continuously runs and reruns your pipeline as you make changes to it. That way, every change you make is instantly reflected in the output. Here’s a quick video that shows off the interactive nature of up.
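
Basic usage is to seed up with some initial data and then type the rest of the pipeline at its prompt (the ps here is just an example; any command’s output will do):

ps aux | up

Whatever you type at the prompt (grep, cut, wc, and friends) gets run against that captured input as you edit.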

Installing

The GitHub page assumes you know how to install a Go program. I tried doing a build, but I was missing a few dependencies. It turns out the easy way to do it was to run this line:

go get -u github.com/akavel/up

This put the executable in ~/go/bin — which isn’t on my path. You can, of course, copy or link it to some directory that’s on your path or add that directory to your path. You could also set an alias, for example. Or, like I did in the video, just specify it every time.
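
For example, either of these lines in ~/.bashrc (or your shell’s equivalent) will do the trick:

export PATH="$PATH:$HOME/go/bin"
alias up="$HOME/go/bin/up"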

Perfect?

This seems like a neat, simple tool. What could be better? Well, I was a little sad that you can’t use Emacs or vi editing keys on the pipeline, at least not as far as I could tell. This is exactly the kind of thing where you want to back up into the middle and change something. You can use the arrow keys, though, so that’s something. I also wished the scrollable window had a search feature like less.

Otherwise, though, there’s not much to dislike about the little tool. If writing a pipeline is like using a C compiler, up makes it more like writing an interactive Basic program.

17 thoughts on “Linux Fu: Pimp Your Pipes”

  1. I assume it essentially runs the command in the background every time you edit the line. So you may very well pipe into a binary you didn’t intend while building the command. Don’t run it as root, as you’re a typo away from overwriting something important! Other than that, it looks like a really nifty little tool.

      1. A valid concern. But if you quote the source, you should also read it. The corresponding code says:

        if restart || (*unsafeMode && command != lastCommand) {

        Therefore it only gets executed on a change if “unsafeMode” is also activated; otherwise it must be explicitly triggered with ENTER. It seems this was introduced in https://github.com/akavel/up/commit/5a1abb25282ed35499ab5b4897f36da49c0f9b5d, a day after the above video was published, but before this article. To activate this mode, `--unsafe-full-throttle` must be given as a command-line option. Sadly, that commit neglected to update the comment to reflect the change.
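
        In other words, the run-as-you-type behavior is strictly opt-in; you’d have to ask for it explicitly, presumably with something along these lines (the ps is just an example input):

        ps aux | up --unsafe-full-throttle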

      1. They do, except without this tool, you have to explicitly run the command. This tool runs the command as you’re typing it. It might just be me, but I’d be very nervous running this as root.

        1. Like if you want to pipe something through ‘sha512sum’ but it’ll first try running everything through the ‘sh’ shell after you type the ‘h’? It seems like any short command name that’s the start of a longer valid command name will get run whenever you try to type the longer name. That seems dangerous to me. I’ll stick to ^p.

          1. Another example: if you wanted to use ‘rmdir’, which is a touch safer than ‘rm’ since it won’t delete unless the directory is empty, you have to type through ‘rm’ on the way. Perhaps the command should not execute in the background until a non-word character is typed, such as a space or the pipe symbol. This would also have the effect of preventing those annoying ‘command not found’ errors as you type. Bonus points if you can detect that a command is being typed, like ‘grep’, as opposed to the arguments, where it *is* useful to see the effects in real time as characters are being typed.

      2. Say you want to type “rm -fr /home/user/blah/blah”.
        With this tool, when you reach “rm -fr /”, all the data the user can delete will start to be deleted.

        Too extreme?
        Ok, try “rm file.ext.bak”.
        Say goodbye to file.ext first.
        Or “mv a-file another-file”.
        Enjoy cleaning up the leftovers (a, an, ano, anot, anoth…).

        And that’s before mistypes; you have to figure out what happens for every keystroke.
        Because it will be invoked as-is.
        Infinite possibilities.

        rr would be a better name for it. Russian Roulette. Anyone playing with this should do it in a sandbox (full OS or sacrificial account, but make sure it has no access to your normal accounts, at all) that you can discard later instead of having to clean up a mess.

        1. Well, in all fairness, you are piping data into it to start with, so how often do you write something like ls -Slrt | rm…?

          It seems like it would be easy to fork it and either add a blacklist or not trigger the command until you hit a space, tab, or pipe, which would cure all that. In practice, though, I don’t think I’m going to accidentally start deleting files from inside a pipeline I’m trying to debug. At least not like that.

          1. But that only works with known common commands. What about aliases and uncommon commands? Different shells have different built-ins (ksh, bash, etc.), and when you get to distros, there can be huge differences in installed programs.

    1. I only run GNU/Linux, and rarely use root except when messing with drivers, installing system-wide software, Wireshark, etc. (haha)
      But lots of people use sudo or root for every permission error there is!
      I did it a lot myself before I started to dig into the roots of the problems.

  2. Piping was essential in the early days of *nix / BSD.

    There simply wasn’t enough memory to process large files as a block instead of a stream.

    HDD access was slow too, so without piping, even those things that “could” fit in RAM were very slow to process. Especially when consecutive filtering reduced the size of the data.

    You still see the difference today. Just try to load a huge file, like a movie, with Windows Notepad. It tries to load it all into RAM, and when it won’t fit, it starts shuffling back and forth between RAM and virtual memory. It will load in one coffee. Move the scroll bar and that’s another coffee.
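
    The streaming behavior is easy to see at the shell, too (the file name here is invented for the example):

    zcat huge.log.gz | grep ERROR | head -n 5

    head quits after five lines and the upstream stages get stopped with it; no stage ever buffers the whole file.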

    1. > There simply wasn’t enough memory to process large files as a block instead of a stream.
      This is ALWAYS true for data files in any serious computing activity, and it always will be.
