It used to be tedious to set up a cross-compile environment. Sure, you can compile on the Raspberry Pi itself, but sometimes you want to use your big computer, and it lets you keep working when your Pi isn’t on hand, like on an airplane with just a laptop. Setting up a cross compiler for any build tools can be tricky, but one simple step makes it easy regardless of what your real computer looks like: install Docker.
Docker is available for Linux, Windows, and Mac OS. It lets developers build images that are essentially preconfigured Linux environments running some service. Like virtual machines, these images can run side by side without interfering with each other. Unlike virtual machines, Docker containers (the running instances) are lightweight because they share the host computer’s kernel and hardware.
The reality is, setting up the Raspberry Pi build environment isn’t actually any easier with Docker. It is just that someone else has already done the work for you, and you can grab their setup and keep it up to date automatically. If you are already running Linux, your package manager probably makes the process pretty easy too (see [Rud Merriam’s] post on that process). The nice thing about the images, though, is that each one is a complete, isolated environment that can move from machine to machine and from platform to platform (the Windows and Mac versions use a variety of techniques to run the Linux software, but it is done transparently).
Installing Docker
If you are not using Linux, you’ll need to figure out how to install Docker. There are several ways to do it under Windows (depending on which version of Windows you use), and I don’t use a Mac. On Linux, though, you should be able to install what you need via your package manager.
On Ubuntu and similar distributions, you might expect to install a package named docker. Makes sense, but no: that package is a system-tray icon manager. What you want is docker.io:
sudo apt-get install docker.io
You’ll see some suggested packages, and you can add the --install-suggests option to apt-get if you want them.
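Spelled out, that option looks like this:

```shell
# pull in docker.io along with apt's suggested extras
sudo apt-get --install-suggests install docker.io
```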
Docker is in two parts: a daemon (a server that runs in the background) and a command-line client named docker. There are a variety of GUI tools for managing Docker if you don’t like the command line. I do like it, so that’s all I know about that.
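The client talks to the daemon over a socket ordinary users can’t write to, which is why most of the commands in this article use sudo. If you’d rather skip sudo, a common setup (a sketch; you must log out and back in for the group change to take effect) is:

```shell
# create the docker group (the package usually creates it already)
sudo groupadd -f docker
# add the current user to it
sudo usermod -aG docker "$USER"
```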
Cross Compile
Docker maintains a repository of images on its website called the Hub. By default, if you don’t have an image locally, the client will look there for it. In this particular case, the image you want is sdthirlwall/raspberry-pi-cross-compiler:legacy-trusty. That’s a mouthful, and the developer provides a nice script that wraps it for everyday use. How do you get the script? You use Docker, of course.
By the way, by default you need to run the Docker client as root. You can also create a special docker group to avoid that, although plain sudo works just as well. Here’s the command to run to get the rpxc script:
sudo docker run sdthirlwall/raspberry-pi-cross-compiler:legacy-trusty >rpxc
Since you probably don’t have that image on your hard drive yet, the first run will take a while to download it and complete the task. Subsequent runs won’t take as long because Docker keeps a local copy.
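Once the download finishes, you can confirm the image is cached locally (prefix with sudo if you’re not in the docker group):

```shell
# list the locally cached copy of the cross-compiler image
docker images sdthirlwall/raspberry-pi-cross-compiler
```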
You can make the rpxc script executable with this command:
chmod +x rpxc
Then move it somewhere on your path (or refer to it by full path, like ./rpxc or ~/Downloads/rpxc, if you prefer). Although you can pull the whole image from the Hub, if you want to look at the files, contribute, or follow development, have a look at the project’s GitHub repo.
In Use
The rpxc script runs any command you like in the new environment. Since it runs Docker, you need to be root or in the docker group, of course. All the usual build tools are prefixed with rpxc, so:
rpxc rpxc-gcc -o hello-world hello-world.c
Or, if you have a Makefile:
rpxc make
The directory you are in when you run rpxc becomes the /build directory inside the container and is the default working directory.
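As a sketch, assuming a hypothetical hello.c sitting in your current directory, the output lands right beside it on the host:

```shell
# compile from the host; the current directory appears as /build in the container
rpxc rpxc-gcc -o hello hello.c

# the binary is written back to the host's current directory;
# 'file' should report an ARM executable, not x86
file hello
```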
If you are lazy, you might prefer to just run:
rpxc bash
Then you can issue commands and do what you like. Do an ls on /usr/local/bin/rpxc* to see all the tools available. You can also use the rpxc script to update the image and itself: use the update command to do both, or specify update-image or update-script.
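For example, a sketch of those housekeeping commands:

```shell
# see every cross tool baked into the image (the glob expands inside the container)
rpxc bash -c 'ls /usr/local/bin/rpxc*'

# refresh both the image and this script...
rpxc update
# ...or update just one of them
rpxc update-image
rpxc update-script
```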
Docking
There are other images you might find interesting. You have to get a free account on the Hub, and at first glance you might think there are only a handful of images. However, try searching for Raspberry, for example. Or Arduino, which turns up a lot of preconfigured Arduino environments. You might enjoy searching for ESP8266, too. There’s even a Docker image for the Eagle PCB layout software. Let us know your favorite images in the comments below.
Thank you.
This site is a bit of a mixed blessing to me. On the one hand, I can spend hours reading about someone else’s project but not get anything done on my own. On the other hand, I sometimes read a tutorial on a subject I was just starting to dig into, or read in a project log about how someone solved an obstacle similar to one I am currently facing. It never ceases to amaze me how often areas of interest overlap and how one subject can be of interest to so many others at the same time.
Thank you and thank you all for taking the time to document.
I usually use proot (with qemu) to run native ARM stuff on my big machine. I use the original raspbian images, mount them, and do a chroot with binfmt. No big magic, no docker-foo with unknown sources from whoever.
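A minimal sketch of that mount-and-chroot workflow (assumptions: qemu-user-static and binfmt_misc support are installed, the mount point exists, and the partition offset comes from fdisk output):

```shell
# find the start sector of the image's Linux partition
fdisk -l raspbian.img

# mount that partition (START_SECTOR comes from the fdisk output above)
START_SECTOR=8192                  # example value only
OFFSET=$((START_SECTOR * 512))
sudo mount -o loop,offset=$OFFSET raspbian.img /mnt/pi

# with binfmt_misc registered, a static qemu lets ARM binaries run directly
sudo cp /usr/bin/qemu-arm-static /mnt/pi/usr/bin/
sudo chroot /mnt/pi /bin/bash
```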
Docker uses far less resources than a traditional VM when run on a Linux environment. It’s faster to boot and can share resources like RAM with the host.
What I love most about docker is how each step in creating an image is a checkpoint. If you change the last step and rebuild the image it will pick the previous checkpoint and apply the new last step to it. This is very handy when doing a lot of apt-get installs etc.
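A toy illustration of that checkpointing (hypothetical image contents; the tag name is made up):

```shell
# each instruction in a Dockerfile becomes a cached layer
cat > Dockerfile <<'EOF'
FROM ubuntu:trusty
RUN apt-get update && apt-get install -y build-essential
RUN echo "this last step changes often"
EOF

# the first build runs every step; if you then edit only the last RUN,
# rebuilding reuses the cached apt-get layer and re-runs just that step
docker build -t cache-demo .
```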
proot only uses a chroot with binfmt. It translates the ARM instructions on-the-fly to host-native instructions. The kernel resources are shared there, too. I have no experience with docker, but I doubt that there is a big difference either way concerning resources.
The big difference is that the approach shown in this post does real cross-compiling, whereas the approach I suggest uses the native toolchain. That makes it possible to build everything exactly as it would build on a raspi, without changes to the code. Cross-compiling usually expects the build system to honor the settings (e.g. setting $CC correctly), which is an additional requirement that not all code (especially some quickly written stuff for your brand-new toy) might fulfill.
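For autotools-style projects, honoring those settings looks roughly like this (a sketch; the target triple shown is just an example, use whatever your toolchain provides):

```shell
# tell the build system to use the cross tools instead of the host compiler
CC=arm-linux-gnueabihf-gcc \
CXX=arm-linux-gnueabihf-g++ \
./configure --host=arm-linux-gnueabihf
make
```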
git clone github.com/raspberrypi/tools?
You need to be inside a VM if your machine runs Windows, but that’s not that much of a hassle…
Good article. Nice to know that Docker is a lightweight alternative to running a VM.
I’ll stick with using the cross compiler though, since I am on the road with that.
I don’t see ANY benefits in the following approach. There’s basically a crosstool-ng-based cross-compiler inside, just packaged inside Docker with a helper script and gcc 4.8, with NO Raspbian/Debian rootfs or any libraries. You can tell crosstool-ng to build a static toolchain and use it with just about the same success.
If you just need a small app, this regular cross compiler without the Docker image is enough. Basically, that’s it. However, once you start messing with a lot of build dependencies, things will quickly land you in the usual Windows-style hell where you have to fetch and compile ALL the package dependencies yourself. This is where Docker comes in handy: run a full-blown Debian rootfs in Docker via qemu-user-static. That’s especially useful if you are hacking on a huge project with a weird build system that knows nothing about cross-compilation, and it can be easily integrated with Jenkins for later packaging.
However, this approach has a huge drawback: the compiler runs under qemu-user-static, which is slow as hell.
The better approach is building a cross-compiler with Linaro ABE and placing a Debian rootfs inside. You can cook one up yourself with these scripts:
https://github.com/RC-MODULE/rcm-toolchain-builder
and
https://github.com/RC-MODULE/skyforge
I’ve been using this approach very successfully: https://resin.io/blog/building-arm-containers-on-any-x86-machine-even-dockerhub Here’s the corresponding GitHub repo: https://github.com/resin-io-projects/armv7hf-debian-qemu
“If you are not using Linux, you’ll need to figure out how to install Linux”
FTFY :)
If you’re actually developing a purpose-built image for the raspberry pi, I’d recommend using Yocto, which is the industry-standard tool that oddly doesn’t seem to get mentioned much. You define package and machine recipes that get the image right where you want it, and it builds a completely trimmed down image that’s super lightweight and secure. All through a really elegant build system that requires very little typing (though quite a bit of reading). Yocto provides a full toolchain that’s custom-built for your machine / libs. Raspberry Pi is supported through the meta-raspberrypi layer. I like that you can have one Yocto environment set-up to target multiple boards, because my work has me switching between four or five different application ARM processors constantly…
If you’re running Ubuntu, I highly recommend that you add Docker’s apt repository, and install from there. Ubuntu’s bundled version is woefully out of date.
See https://docs.docker.com/engine/installation/linux/ubuntulinux/
What is the advantage of this compared to just using Maven to create an uber executable jar and a Maven plugin to FTP the jar file to the Raspberry Pi?
Uhh, Sorry for this post, I was thinking Java all the way….. C
I haven’t investigated this much yet, but I need a library installed and then I need to do a make. If I run the bash example above and run the commands manually, I end up with an x86-compiled program.
————————————————————————
pi@raspberrypi:~ $ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
pi@raspberrypi:~ $ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
armhf/alpine latest 1324905a5239 5 days ago 3.928 MB
sdthirlwall/raspberry-pi-cross-compiler legacy-trusty 9959d27b3f95 6 weeks ago 558.9 MB
pi@raspberrypi:~ $ sudo docker run sdthirlwall/raspberry-pi-cross-compiler:legacy-trusty >rpxc
standard_init_linux.go:175: exec user process caused "exec format error"
pi@raspberrypi:~ $
————————————————————————
Gives "exec format error" on Docker Version: 1.12.1
same problem
Any tips for using Docker with a Beaglebone Black? There’s a nice overview video that talks about the differences between Docker and Vagrant
https://www.youtube.com/watch?v=pGYAg7TMmp0