[Drew DeVault] recently wrote up some interesting instructions on how to package up interactive text-based Linux commands for users to access via ssh. At first, this seems simple, but there are quite a few nuances to it and [Drew] does a good job of covering them.
One easy way — but not very versatile — is to create a user and make the program you want to run the default shell. The example used is to make /usr/bin/nethack the shell and now people can log in as that user and play nethack. Simple, right? However, there are better ways to get there.
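Before moving on to those better ways, it is worth seeing just how little the simple version takes. A minimal sketch on a typical distro (the user name here is just a placeholder) might be:

sudo useradd -m -s /usr/bin/nethack player
sudo passwd player

Anyone who logs in over ssh as player now lands directly in nethack instead of a shell, and quitting the game ends the session.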
Continue reading “Linux Fu: Interactive SSH Applications”
It is easy to think that a Linux shell like Bash is just a way to enter commands at a terminal. But, in fact, it is also a powerful programming language as we’ve seen from projects ranging from web servers to simple utilities to make dangerous commands safer. Like most programming languages, though, there are multiple layers of complexity. You can spend a little time and get by or you can invest more time and learn about the language and, hopefully, write more robust programs.
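The title gives away one of the key tools: the trap builtin. As a quick taste (this is a generic sketch, not code from the article), trap lets a script clean up after itself no matter how it exits:

#!/bin/bash
scratch=$(mktemp)                # temporary working file
trap 'rm -f "$scratch"' EXIT     # runs whenever the script exits, even early
echo "working in $scratch"       # ... do the real work with "$scratch" here ...

Set once near the top, that trap means you never leave stray temp files behind, even when something goes wrong partway through.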
Continue reading “Linux Fu: It’s A Trap!”
If you use just about any modern command line, you probably understand the idea of pipes. Pipes are the ability to connect the output from one program to the input of another. For example, you can more easily review contents of a large directory on a Linux machine by connecting two simple commands using a pipe:
ls | less
This command runs ls and sends its output to the input of the less program. In Linux, both commands run at once and output from ls immediately appears as the input of less. From the user’s point of view it’s a single operation. In contrast, under regular old MS-DOS, two steps would be necessary to run these commands:
ls > SOME_TEMP_FILE
less < SOME_TEMP_FILE
The big difference is that ls will run to completion, saving its output to a file. Then the less command runs and reads the file. The result is the same, but the timing isn’t.
You may be wondering why I’m explaining such a simple concept. There’s another type of pipe that isn’t used as often: a named pipe. Normal pipes are attached to a pair of commands. However, a named pipe has a life of its own. Any number of processes can write to it and read from it. Learning the ways of named pipes will certainly up your Linux-Fu, so let’s jump in!
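If you want to get a feel for named pipes before diving in, they only take a couple of commands to play with; here’s a minimal sketch (the path is arbitrary):

mkfifo /tmp/mypipe
cat /tmp/mypipe
echo "hello from a named pipe" > /tmp/mypipe

The mkfifo command creates the pipe as a special FIFO file. Run the cat in one terminal and it blocks, waiting for data; run the echo in a second terminal and the text pops out of the cat in the first. Delete the pipe with rm when you’re done.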
Continue reading “Linux Fu: Named Pipe Dreams”
The story of Linux so far, as short as it may be in the grand scheme of things, is one of constant forward momentum. There’s always another feature to implement, an optimization to make, and of course, another device to support. With developers’ eyes always on the horizon ahead of them, it should come as no surprise to find that support for older hardware or protocols occasionally falls by the wayside. When maintaining antiquated code monopolizes developer time, or even directly conflicts with new code, a difficult decision needs to be made.
Of course, some decisions are easier to make than others. Back in 2012 when Linus Torvalds officially ended kernel support for legacy 386 processors, he famously closed the commit message with “Good riddance.” Maintaining support for such old hardware had been complicating things behind the scenes for years while offering very little practical benefit, so removing all that legacy code was like taking a weight off the developers’ shoulders.
The rationale was the same a few years ago when distributions like Arch Linux decided to drop support for 32-bit hardware entirely. Maintainers had noticed the drop-off in downloads for the 32-bit versions of their distributions and decided it didn’t make sense to keep producing them. In an era where even budget smartphones are shipping with 64-bit processors, many Linux distributions have at this point decided 32-bit CPUs weren’t worth their time.
Given this trend, you’d think Ubuntu announcing last month that they’d no longer be providing 32-bit versions of packages in their repository would hardly be newsworthy. But as it turns out, the threat of ending 32-bit packages caused the sort of uproar that we don’t traditionally see in the Linux community. But why?
Continue reading “The Saga Of 32-Bit Linux: Why Going 64-Bit Raises Concerns Over Multilib”
Windows 10 — the operating system people love to hate or hate to love. Even if you’re a Linux die-hard, it is a fair bet that your workplace uses it and that you have friends and family members who need help, forcing you to use Windows at least some of the time. If you prefer a command line, or even just find yourself somewhere you have to use one, you might find the classic Windows shell a bit anemic. Some of that’s the shell’s fault, but some of it is the Windows console which is — sort of — the terminal program that runs various Windows text-based programs. If you’re on the Creators Update channel of Windows 10, though, there have been some recent improvements to the console and the Linux subsystem that will eventually trickle down to mainstream users.
So what’s new? According to Microsoft, they’ve improved the system call interface to make the following things work correctly (along with “many others”):
- Core tools: apt, sed, grep, awk, top, tmux, ssh, scp, etc.
- Shells: Bash, zsh, fish, etc.
- Dev tools: vim, emacs, nano, git, gdb, etc.
- Languages & platforms: Node.js & npm, Ruby & Gems, Java & Maven, Python & Pip, C/C++, C# & .NET Core & Nuget, Go, Rust, Haskell, Elixir/Erlang, etc.
- Systems & Services: sshd, Apache, lighttpd, nginx, MySQL, PostgreSQL
The changes to the console mostly involve escape sequences, colors, and mouse support. The API changes included things like allowing certain non-administrative users to create symlinks. We’ve made X Windows work with Windows (using a third-party X server) and Microsoft acknowledges that it has been done. However, they still don’t support it officially.
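If you want to try the unofficial GUI route yourself, the usual community recipe is to run a third-party X server on the Windows side (VcXsrv and Xming are popular choices) and point the Linux side at it. Roughly, and assuming the X server accepts local TCP connections:

export DISPLAY=localhost:0
xeyes &

Any X client launched from the Linux shell should then open a window on the Windows desktop. Just remember this is a workaround, not a supported configuration.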
Continue reading “Windows 10 Goes To Shell”
When your thermostat comes with Linux running on it, that’s not a hack. When it doesn’t, and you get Linux on there yourself, it most definitely is. This is exactly what [cz7asm] has done. In a recent video, he shows the Honeywell thermostat booting Linux and running a wide range of software.
While the hardware inside the thermostat doesn’t afford all the luxuries of a typical modern embedded Linux system, it’s got enough room for the basics. The system runs from a 1 MB rootfs in RAM and has a 2.5 MB kernel image, leaving a spare 12 MB for everything else. With just these meager resources, [cz7asm] shows how the system can use a USB network adapter, connecting to telehack.com for some command-line retro fun, and host a web server, although no browser runs yet. There’s also framebuffer support for displaying graphics and animations, and the usual Linux terminal goodness.
All we’ve seen so far is the video, so we hope [cz7asm] posts the code somewhere, because we’re tired of using our thermostat just to run the AC.
You might remember [cz7asm] from his previous thermostatic triumph: running Doom. Check out the video of the latest thermostat adventure after the break.
Thanks to [Piecutter] for the tip!
Continue reading “Running Linux On A Thermostat”
Getting a home music streaming system off the ground is typically a straightforward task. Using Apple devices with AirPlay makes it trivial, but if you’re a computing purist like [Connor] who runs a Linux machine and wants to keep it light on extra packages, things get complicated quickly. His goal is to bring audio streaming to all Linux platforms without the need to install a lot of extra software. This approach is friendly to light-footprint devices like the Raspberry Pi that he used in his proof of concept.
[Connor] created a set of scripts which allow streaming from any UNIX (or UNIX-like) machine, using only dependencies that a typical OS install would already have. His Raspberry Pi is the base station and streams to his laptop, but he notes that this will work between virtually any two UNIX or Linux machines. The only limitation is what FFmpeg can or can’t play.
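We haven’t seen the scripts themselves yet, but the general shape of the trick is easy to sketch using nothing beyond FFmpeg, ssh, and ALSA’s aplay (the host name is a placeholder):

ffmpeg -f alsa -i default -f s16le -ar 44100 -ac 2 - | ssh user@listener 'aplay -f cd -'

The sender grabs audio from its default ALSA device, turns it into raw 16-bit 44.1 kHz stereo PCM on standard output, and ssh carries that stream to aplay on the far end, which plays exactly that format when told -f cd.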
We definitely can appreciate a principled approach to software and its use, although it does seem that most people don’t have this issue at the forefront of their minds. The result is a lot of bulky software that is difficult to maintain, use, or even understand, and that also makes it harder for those of us who avoid that kind of software to find working solutions to other problems. It’s noble that [Connor] was able to create something without sacrificing any principles.