The Space Station Has A Supercomputer Stowaway

The failed launch of Soyuz MS-10 on October 11th, 2018 was a notable event for a number of reasons: it was the first serious incident on a manned Soyuz rocket in 35 years, it was the first time that particular high-altitude abort had ever been attempted, and, most importantly, it ended with the rescue of both crew members. To say it was a historic event is something of an understatement. As a counterpoint to the Challenger disaster, it will be looked back on for decades as proof that robust launch abort systems and rigorous training for all contingencies can save lives.

But even though the loss of MS-10 went as well as could possibly be expected, there are still far-reaching consequences for a missed flight to the International Space Station. The coming and going of visiting vehicles to the Station is a carefully orchestrated ballet, designed to fully utilize the up and down mass that each flight offers. Not only did the failure of MS-10 deprive the Station of two crew members and the experiments and supplies they were bringing with them, but also of a return trip which was to have brought various materials and hardware back to Earth.

But there’s been at least one positive side effect of the return cargo schedule being pushed back. The “Spaceborne Computer”, developed by Hewlett Packard Enterprise (HPE) and NASA to test high-performance computing hardware in space, is getting an unexpected extension to its time on the Station. Launched in 2017, the diminutive 32-core supercomputer was only meant to perform self-tests and be brought back down for a full examination. But now that its ticket back home has been delayed for the foreseeable future, NASA is opening up the machine for other researchers to utilize, proving there’s no such thing as a free ride on the International Space Station.

Continue reading “The Space Station Has A Supercomputer Stowaway”

Linux Fu: Turn A Web App Into A Full Program

I hate to admit it. I don’t really use Linux on my desktop anymore. Well, technically I do. I boot into Linux. Then I do about 95% of my work in Chrome. About the only native applications I use anymore are development tools, the shell, emacs, and GIMP. If I really wanted to, I could probably find replacements for nearly all of those that run in the browser. I don’t use it, but there’s even an ssh client in the browser. Mail client? Gmail. Blogging? WordPress. Notes? OneNote or Evernote. Wouldn’t it be great to run those as actual applications instead of tabs in a browser? You can and I’ll show you how.

Having apps inside Chrome can be a real problem. I wind up with dozens of tabs open — I’m bad about that anyway. Restarting Chrome is a nightmare as it struggles to load 100 tabs all at once. (Related tip: Go to chrome://flags and turn “Offline Auto-Reload Mode” off and “Only Auto-Reload Visible Tabs” on.) I also waste a lot of time searching since I try to organize tabs by window. So I have to find the window that has, say, Gmail in it and then find Gmail among the twenty or so tabs in that window.

What I want is a way to wrap web applications in their own window so that they’d show up in the task bar with their own icon, but external web pages that open from these apps ought to open in Chrome rather than in the same window. If applications were outside of the single browser window, I could move them to different desktops and organize them like they were any other program, including adding them to a launcher. Hopefully, this would leave me with far fewer cluttered browser windows.
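For a taste of the general idea, here’s a minimal sketch that uses nothing more than Chrome’s --app flag and a hand-written launcher entry. The URL, file name, and menu category are only examples, and this isn’t necessarily the exact approach the rest of the article settles on:

# Open a web app in its own window, with no tabs and no address bar.
# (Chromium users can substitute the chromium binary.)
google-chrome --app=https://mail.google.com/

# Give it a launcher entry so it shows up like any other program.
cat > ~/.local/share/applications/gmail.desktop <<'EOF'
[Desktop Entry]
Type=Application
Name=Gmail
Exec=google-chrome --app=https://mail.google.com/
Terminal=false
Categories=Network;
EOF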

Continue reading “Linux Fu: Turn A Web App Into A Full Program”

Linux As A Library: Unikernels Are Coming

If you think about it, an operating system kernel is really just a very powerful shared library that offers services to many programs. Its main purpose is to provide services, and your program probably doesn’t use all of the myriad services the kernel provides. Even a typical system might not fully use everything in a stock kernel. Red Hat has a new initiative to bring a technology called unikernels to the forefront. A unikernel is a single application linked with just enough of the kernel for it to execute. As you might expect, this can result in a smaller system and better security.

It can also lead to better performance. The unikernel doesn’t have to maintain devices and services that are not used. Also, the kernel and the application can run in the same privilege ring. That may seem like a security hole, but if you think about it, the only reason a regular kernel runs at a higher privilege is to protect itself from a malicious application modifying the kernel to do something bad to another application. In this case, there is no other application.

Continue reading “Linux As A Library: Unikernels Are Coming”

Non-Nefarious Raspberry Pi Only Looks Like A Hack

We’re going to warn you right up front that this is not a hack. Or at least that’s how it turned out after [LiveOverflow] did some digital forensics on a mysterious device found lurking in a college library. The path he took to come to the conclusion that nothing untoward was going on was interesting and informative, though, as is the ultimate purpose of the unknown artifacts.

As [LiveOverflow] tells us in the video below, he came upon a Reddit thread – of which we can now find no trace – describing a bunch of odd-looking devices stashed behind garbage cans, vending machines, and desks in a college library. [LiveOverflow] recognized the posted pictures as Raspberry Pi Zeroes with USB WiFi dongles attached; curiosity piqued, he reached out to the OP and offered to help solve the mystery.

The video below tells the tale of the forensic fun that ensued, including some questionable practices like sticking the device’s SD card into the finder’s PC. What looked very “hackerish” to the finder turned out to be quite innocuous after [LiveOverflow] went down a remote-diagnosis rabbit hole to discern the purpose of these devices. We won’t spoil the reveal, but suffice it to say they’re part of a pretty clever system with an entirely non-nefarious purpose.

We thought this was a fun infosec romp, and instructive on a couple of levels, not least of which is keeping in mind how “civilians” might see gear like this in the wild. Hardware and software that we deal with every day might look threatening to the general public. Maybe the university should spring for some labels describing the gear next time.

Continue reading “Non-Nefarious Raspberry Pi Only Looks Like A Hack”

Linux Fu: Pimp Your Pipes

One of the best things about working at the Linux (or similar OS) command line is the use of pipes. In simple terms, a pipe takes the output of one command and sends it to the input of another command. You can do a lot with a pipe, but sometimes it is hard to work out the right order for a set of pipes. A common trick is to attack it incrementally. That is, do one command and get it working with the right options and inputs. Then add another command until that works. Keep adding commands and tweaking until you get the final results.
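For example, a hypothetical hunt through a log file might grow one stage at a time, with a manual rerun after every tweak (the file name and field numbers are just placeholders):

cat /var/log/syslog
cat /var/log/syslog | grep -i error
cat /var/log/syslog | grep -i error | awk '{print $5}'
cat /var/log/syslog | grep -i error | awk '{print $5}' | sort | uniq -c | sort -rn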

That’s fine, but [akavel] wanted better and used Go to create “up” — an interactive viewer for pipelines.

Pipe Philosophy

Pipes can do a lot. They fit right in with the original Unix philosophy of making each tool do one thing really well. Pipes are really good at letting Linux commands talk to each other. If you want to learn all about pipes, have a look at the Linux Info project’s guide. They even talk about why MS-DOS pipes were not really pipes at all. (One thing that write-up doesn’t touch on is the named pipe. Do a “man fifo” if you want to learn more for now, and perhaps that will be the subject of a future Linux Fu.)
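If you want a quick taste before that future installment, a minimal named-pipe experiment in the shell looks something like this (the paths are just examples):

# Create the FIFO, then start a reader in the background.
mkfifo /tmp/mypipe
gzip -c < /tmp/mypipe > /tmp/out.gz &

# Anything written into the FIFO flows straight to the waiting reader.
echo "hello, fifo" > /tmp/mypipe
rm /tmp/mypipe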

This program — called up — continuously runs and reruns your pipeline as you make changes to the pipe. This way, every change you make is instantly reflected in the output. Here’s a quick video which shows off the interactive nature of up.
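In practice, once it is installed (see below), a session looks roughly like this; the log file is just an example, and the pipeline itself is typed and edited inside up while the output updates live:

# Feed some command's output into up, then build the rest of the pipeline interactively.
cat /var/log/syslog | up

# Inside up you might type and tweak something like:
#   grep -i error | awk '{print $5}' | sort | uniq -c | sort -rn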

Installing

The GitHub page assumes you know how to install a Go program. I tried doing a build, but I was missing a few dependencies. It turns out the easy way to do it was to run this line:

go get -u github.com/akavel/up

This put the executable in ~/go/bin — which isn’t on my path. You can, of course, copy or link it to some directory that’s on your path or add that directory to your path. You could also set an alias, for example. Or, like I did in the video, just specify the full path every time.
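In bash, for instance, either of these will do the trick, assuming the default Go layout:

# Put Go's bin directory on the path (for example, in ~/.bashrc)...
export PATH="$PATH:$HOME/go/bin"

# ...or just give the program a short alias instead:
alias up="$HOME/go/bin/up"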

Perfect?

This seems like a neat, simple tool. What could be better? Well, I was a little sad that you can’t use emacs or vi editing keys on the pipeline, at least not as far as I could tell. This is exactly the kind of thing where you want to back up into the middle and change something. You can use the arrow keys, though, so that’s something. I also wished the scrollable window had a search feature like less.

Otherwise, though, there’s not much to dislike about the little tool. If writing a pipeline is like using a C compiler, up makes it more like writing an interactive Basic program.

Linux Fu: Marker Is A Command Line Menu

The command line. You either love it or hate it, but if you do anything with a Unix-like system you are going to have to use it eventually. You might find marker — a system billed as a “command palette for the terminal” — a useful program to install. We couldn’t decide if it was like command history on steroids or more of a bookmark system. In a way, it is a little of both.

Your history rolls off eventually and also contains a lot of small commands (although you can use the HISTIGNORE variable to keep particular commands out of it). With marker, you save specific commands and they stay saved. There are no extraneous commands cluttering the list, nor do the ones you save ever roll off.
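As an aside, a minimal HISTIGNORE setting for bash might look like the line below; the particular patterns are only examples:

# In ~/.bashrc: colon-separated patterns that never get written to history.
export HISTIGNORE="ls:ls *:cd:cd *:pwd:exit:history"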

Of course, you could just make a shell script or an alias if that’s all there was to it. Marker lets you add a description to the command and then you can search through the commands and the descriptions using a fuzzy incremental search. In addition, you can put placeholders into your command lines that are easily replaced. There are some built-in commands to get you started and the same bookmarks will work in bash and zsh, if you use both.

Continue reading “Linux Fu: Marker Is A Command Line Menu”

Can You “Take Back” Open Source Code?

It seems a simple enough concept for anyone who’s spent some time hacking on open source code: once you release something as open source, it’s open for good. Sure, the developer might decide that future versions of the project will close up the source (it’s been known to happen occasionally), but what’s already out there publicly can never be recalled. The Internet doesn’t have a “Delete” button, and once you’ve published your source code and let potentially millions of people download it, there’s no putting the genie back in the bottle.

But what happens if there are extenuating circumstances? What if the project turns into something you no longer want to be a part of? Perhaps you submitted your code to a project with a specific understanding of how it was to be used, and then the rules changed. Or maybe you’ve been personally banned from a project, yet the maintainers have no problem letting your sizable code contributions stick around even after you’ve been kicked to the curb.

Due to what some perceive as a forced change in the Linux Code of Conduct, these are the questions being asked by some of the developers of the world’s preeminent open source project. It’s a situation which the open source community has rarely had to deal with, and certainly never on a project of this magnitude.

Is it truly possible to “take back” source code submitted to a project that’s released under a free and open source license such as the GPL? If so, what are the ramifications? What happens if it’s determined that the literally billions of devices running the Linux kernel are doing so in violation of a single developer’s copyright? These questions are of grave importance to the Internet and arguably our way of life. But the answers aren’t as easy to come by as you might think.

Continue reading “Can You “Take Back” Open Source Code?”