While most of us have likely spun up a virtual machine (VM) for one reason or another, venturing into the world of containerization with software like Docker is a little trickier. The tools Docker provides are powerful, maintain many of the benefits of virtualization, and don’t use as many system resources as a VM, but it can be harder to get the hang of setting up and maintaining containers than it generally is to run a few virtual machines. If you’ve been hesitant to try it out, this guide to getting a Docker container up and running is worth a look.
The guide goes over the basics of how Docker shares system resources between containers, including some discussion of the difference between images and containers, where containers can store files on the host system, and how they use networking resources. From there the guide touches on installing Docker on a Debian Linux system. But where it really shines is in demonstrating how to use Docker Compose to configure a container and get it running. A Docker Compose file configures any number of containers and their options, making it straightforward to deploy those containers to other machines, and understanding it is key to making your experience learning Docker a smooth one.
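If you’ve never seen one, a Compose file is just YAML. A minimal sketch along these lines (the image, port, and paths are illustrative stand-ins, not the guide’s actual Paperless configuration) is enough to define and start a service:

```yaml
services:
  web:
    image: nginx:alpine                  # stand-in image for illustration
    ports:
      - "8080:80"                        # host port : container port
    volumes:
      - ./site:/usr/share/nginx/html:ro  # bind mount: host path : container path
    restart: unless-stopped
```

With that saved as docker-compose.yml, `docker compose up -d` pulls the image and starts the container, and `docker compose down` tears it down again.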
While the guide goes through setting up a self-hosted document management program called Paperless, it’s pretty easy to expand this to other services you might want to host on your own as well. For example, the DNS-level ad-blocking software Pi-Hole, which is generally run on a Raspberry Pi, can be containerized and run on a computer or server you might already have in your home, freeing up your Pi to do other things. And although it’s a little more involved, you can always build your own containers too, as our own [Ben James] discussed back in 2018.
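As a rough, hedged sketch of what that might look like (the image name and mount points follow Pi-Hole’s Docker documentation, the timezone and password values are placeholders, and the exact environment variable names vary between Pi-Hole releases):

```yaml
services:
  pihole:
    image: pihole/pihole:latest
    ports:
      - "53:53/tcp"                    # DNS
      - "53:53/udp"
      - "80:80/tcp"                    # web admin interface
    environment:
      TZ: "Europe/London"              # placeholder: set your own timezone
      WEBPASSWORD: "changeme"          # admin password on older releases; newer ones use a different name
    volumes:
      - ./etc-pihole:/etc/pihole       # settings survive container rebuilds
      - ./etc-dnsmasq.d:/etc/dnsmasq.d
    restart: unless-stopped
```

The bind-mounted directories keep your settings and blocklists across container rebuilds.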
The containers in the main image for this article look a little strange. There are containers of different widths, and there even seems to be one hanging in the air. Surely all containers are standard widths and don’t just float in the air? What is going on?
You should read about what AI art is.
That’s because this is most likely an AI-generated image. If you look closely you can see some other anomalies, like malformed letters on smaller signs or two different containers merged into one.
This looks like an “AI” generated image.
(goes to check the linked article)
Yeah, the linked article has the same picture, together with a midjourney prompt used to generate it.
That’s the “I’m gonna use AI because I’m too cheap to pay for stock images” trend ;-)
Nothing cheap about it. Why spend the time looking for, then paying for, a single-use image when you can get something similar for free? Just like we use FOSS software instead of paying for commercial products.
I love the different-sized containers. It’s like the sixth finger.
I love the image. It has a certain Gilles Tran-vibe to it.
You would think so, but the variation in length, height, and (to a lesser extent) width are real. Before looking at https://en.wikipedia.org/wiki/Intermodal_container and https://en.wikipedia.org/wiki/Containerization consider why they are even separate articles. Then look for things like “pallet-wide”, “internal dimensions”, “sufficient”, “North American”, “almost” …
I’ve always been confused about Docker. I tried it several times but I always ran into the same issue: files.
So say I run a piece of software such as HASS. That software will generate files over time (or I’ll add configuration files to it). Files I need, files I need to keep safe so that if the Docker container is restarted, or I move the files to a new computer with a new Docker image, it can reuse those files and continue. I never understood how that part works. I tried reading manuals, but they all skip that part, which for me is the only important part. So right now, I’m using VMs.
This is where you either bind a host directory to a directory in the container or use a docker volume. Check out https://www.freecodecamp.org/news/docker-mount-volume-guide-how-to-mount-a-local-directory/.
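For example, assuming the official Home Assistant image and its /config directory (the host paths are placeholders, networking options are left out for brevity, and the two options are alternatives, not a sequence):

```sh
# Option 1: bind mount. A host directory you choose appears
# inside the container at /config.
docker run -d --name hass \
  -v /srv/hass-config:/config \
  ghcr.io/home-assistant/home-assistant:stable

# Option 2: named volume. Docker manages the storage location itself.
docker volume create hass-config
docker run -d --name hass \
  -v hass-config:/config \
  ghcr.io/home-assistant/home-assistant:stable
```

Either way, the files live outside the container’s writable layer, so removing or recreating the container doesn’t take your configuration with it.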
The one caveat there is that if you’re not careful, the docker container will operate on these files as the root user (by default docker runs internal programs as the root user). This means new files will be owned by root. You can get around this in a few ways, but generally I try to use the method described at https://www.baeldung.com/ops/docker-set-user-container-host as it lets you set it at container launch rather than build time (can be useful in CI situations).
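A minimal sketch of the launch-time approach (the image name and paths are placeholders, and note that not every image is happy running as an arbitrary non-root user):

```sh
# Run the container's process with your own UID/GID so files it creates
# in the bind mount end up owned by you rather than root.
docker run -d \
  --user "$(id -u):$(id -g)" \
  -v /srv/app-data:/data \
  some-image
```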
I have given up on Docker. It is just too easy for the container to get wiped out without warning.
This is because I am not an expert user, and documentation is vague at best.
I just reinstalled HASS due to a failed update that turned into a spiraling disaster, which meant that the whole container got wiped out by something as innocent-sounding as the run command.
Unfortunately that means you’re doing it wrong.
While it’s possible to run a container as a live system of stuff, where the container updates its files and all data lives within the container itself, that is fundamentally NOT how it should be used.
It’s useful when initially building a container and getting it configured, but instead you should create a Dockerfile and/or a Compose file to do the work. The container itself should be structured as a static image; any changing data is externalized from the container itself.
It could be a docker volume, or a local directory, or another option.
My production containers are structured to be static; although I haven’t yet forced them to be readonly. That’ll happen soonish.
The external volumes or shared directories/files are your solution to persisting data. My database Docker containers externalize the data stores; if I rerun a container, or rebuild it with a compatible version of MySQL, it starts right up with the data as expected.
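A rough sketch of that pattern, with placeholder names and password:

```yaml
services:
  db:
    image: mysql:8.0
    environment:
      MYSQL_ROOT_PASSWORD: changeme   # placeholder, use a real secret
    volumes:
      - dbdata:/var/lib/mysql         # MySQL's data directory, externalized

volumes:
  dbdata:   # the named volume outlives any individual container
```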
If you want to manually build a container there is an option to commit the container, taking a snapshot of its current state. This can be useful when initially working on a container and figuring out just what’s needed and providing a bit more room for experimentation than rerunning a Dockerfile build process over and over.
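For instance (the container and image names here are hypothetical):

```sh
# Snapshot the current filesystem of a running container named "sandbox"
# as a new image...
docker commit sandbox my-experiments:v1

# ...then start a fresh container from that snapshot.
docker run -it my-experiments:v1 /bin/sh
```

A Dockerfile build is still the reproducible way to capture the result once you’ve figured out what you actually need.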
Also, I’ve been using ChatGPT to translate between Docker and Podman. Docker is a bit friendlier, but Podman is somewhat more secure, in theory at least.
Now if I could just get my employer on board with podman…(or docker, but that will never happen).
Not sure if your well-intended but lengthy comment and overly complex setup help here. While it’s all true, someone just getting started really only needs to know this:
By design, Docker containers are ephemeral, i.e. they don’t keep data. A webserver, for example, doesn’t generally need to remember anything; the data comes from a database, right?
But alas, the world is not so simple and we do want to store data. After all, where else would your database container store its data?
Docker volumes solve this. They are a means for ephemeral containers to store data, and they come in two (sigh, arguably three, if you count tmpfs mounts) flavors: bind mounts and volume mounts. While nearly the same, there are subtle differences.
However, when getting started, the simplest thing to learn (in the case of HASS) is `docker run --volume /path/where/to/store/files:/config hass`, where /config is the destination directory that HASS will write files to (a path inside the container), and the other path is a path on the host that will be mapped to the inside of the container. As written above, permissions can be an issue …
I used Docker for a while. It was a lot harder to set up than I expected it to be, and there were a lot-lot-lot of cases where I felt like I would have made a different choice if I really understood what it was doing. And there was a lot of really confusing forward and backward incompatibility from Docker’s own evolution over time (many different ways to skin each cat). And the thing that really got me is that there was no way to make it secure without being a true guru at it: the root within your virtual environment is shockingly close to the root on your host PC, and the user account on the host that you let run Docker for control purposes is effectively identical to root!
It truly is very powerful, and there might be a use case for it *if you endure the effort of really getting up to speed*, but I switched to LXC and found it was much easier to configure and required far less of everything: less configuration, fewer privileges, less RAM, fewer odd-ball kernel features. I’m very happy with LXC.
As an aside, the main Docker daemon busy-waits. It is constantly consuming CPU time even if no events are coming to it. It doesn’t block; it’s a select() or poll() with a timeout. I know that on modern PCs people just waste CPU cycles and I-cache and it’s no big deal (it truly is a small number), but I personally do not agree. The fact that they decided to make a daemon that is always busy tells me something crucial about the soul of the people who designed Docker.
Thanks for picking up my article!
Just checked my analytics today and thought something must be wrong, as there is a 383% increase in visitors :D
My Dockers are no problem so long as I put them on 1 leg at a time and wear a belt…
When I see all the ‘Docker is too hard’ comments, I remember that this is primarily a hardware hacking newsfeed. The software developers get it. Docker was and is revolutionary. Yes, it builds on top of existing tooling like jails, but the Dockerfile syntax and compose system coalesced a lot of ideas into a useful system of tools that solve real problems. Listening to people whine about Docker is like listening to people whine about the command line; you don’t even understand what you’re missing.
Maybe if people made more positive comments about why others may find it difficult, and helped them to understand, then there wouldn’t be people “whining” about it being difficult because they are not day-to-day software developers.
I tried to be a programmer many times and failed. Until one nice chap was willing to explain the concepts I didn’t understand. Imagine that.
“just read the docs” often doesn’t help because the people that get it wrote the docs and make huge assumptions given their expected audience.
Same reason people are in awe at some of the stuff I can do that they can’t.
Docker, hummm
I have a Pi running Docker and a few, can I say, virtual machines or containers?
When it came to setting the date on some of them, Node-RED for instance, the learning curve to discover how was steep. Now Pi-hole needs updating (it has for a year or more) and I realise I do not know how, and I just do not have the time, or maybe the skill, to do this.
So I run really old images until I get the time to work it out.
Docker seems such a good idea but really, for me, I think I should just buy more Pis, just because it is easier.
Oh, I do build some hardware, I do write my own code, simple but working. I have even played with databases and I write scripts within games.
Again, nothing too hard or complex.
Maybe Docker is one step too far, sad because I can see how useful it is.
One day, eh!
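For what it’s worth, updating usually boils down to pulling the newer image and recreating the container from it; the data survives as long as it lives in a volume or bind mount rather than inside the container. A sketch, assuming a hypothetical container named pihole with its settings in a volume (repeat whatever ports and options your original run used):

```sh
# Fetch the newer image, then recreate the container from it.
docker pull pihole/pihole:latest
docker stop pihole && docker rm pihole
docker run -d --name pihole \
  -v pihole-data:/etc/pihole \
  pihole/pihole:latest

# With Compose it's shorter still:
docker compose pull && docker compose up -d
```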
This may help: https://github.com/jesseduffield/lazydocker