How Canonical Automates Linux Package Compilation

[Image: PandaBoard]

What do you do when it’s time to port the most popular Linux distribution to a completely different architecture? Canonical employee [David Mandalla] works on their ARM development team and recently shared the answer to that question with his fellow Dallas Makerspace members.

Canonical needed a way to compile more than 20,000 packages for the ARM platform, but they did not want to cross-compile, which is quite time-consuming. Instead, they opted to build a native solution that could handle the load while ensuring that all packages were compiled securely. To tackle this immense task, [David] and his team constructed a 4U server that runs 20 fully-independent ARM development platforms simultaneously.

The server is composed of 21 PandaBoards, small OMAP development boards featuring a dual-core ARM Cortex-A9 processor and just about every connectivity option you could ask for. One board operates as the server head, keeping track of the other 20 build nodes. When someone requests server time to build a package, the head board checks for an unused node and triggers a relay to power-cycle it; on boot, the node automatically reimages itself. Once that pristine, secure environment is ready, it is handed off to the customer who requested it.
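
In rough outline, the head node's dispatch loop might look something like the sketch below. Everything in it is an assumption for illustration: the relay-ctl utility, the hostnames, and the lock-file bookkeeping are hypothetical stand-ins, not Canonical's actual tooling.

    #!/bin/sh
    # Hypothetical dispatcher: find a free PandaBoard, power-cycle it through
    # its relay so it netboots and reimages itself, then hand it out.
    for node in $(seq -f "panda%02g" 1 20); do
        [ -e "/var/lock/$node" ] && continue       # node already allocated
        touch "/var/lock/$node"
        relay-ctl --port "$node" power-cycle       # hypothetical relay utility
        until ssh "build@$node" true 2>/dev/null; do
            sleep 5                                # wait for the fresh image to boot
        done
        echo "$node"                               # hand the clean node to the requester
        exit 0
    done
    echo "no free build nodes" >&2
    exit 1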

If you’re interested in learning more about the build process, [David] has put together a blog post with additional details.

[Thanks Leland]

25 thoughts on “How Canonical Automates Linux Package Compilation”

  1. Cool project, I always love seeing massive ARMaments. ;)
    I’m a bit confused about the speed issue though. Why would cross-compiling take longer than native compiling? You have the same input, the same output, and you’re doing the same thing. It’s not a case of emulation vs virtualisation, it’s the exact same process. At least, that’s what I see. I’m sure that a cluster can be very quick and that the PandaBoards do an excellent job, but I don’t understand why (for a system of similar throughput) cross-compiling would be slower than native compilation. Anyone care to enlighten me?

  2. It is an interesting hardware hack more than a software one — cross-compiling ARM on much speedier x86 machines is no issue.

    I understand that the relay/reboot is a secure imaging scheme to do what other Linux build services out there do with virtualization — that is quite clever.

    Does the board have imaging in its bootup sequence, so that all that's needed is pulling the power to re-init?

  3. I agree with the third comment here; the article lacks any real info. The cross-compiling issue isn't the speed of compilation but just the awkwardness of getting the first compilation going right. Nice build and everything, but an expensive way of doing it. Never mind; at least they are eating their own dog food by running on a PandaBoard. I'm all up for Panda Lurve ;) !!

  4. Small note: cross-compiling doesn't take longer. It's just that a lot of open-source projects don't support cross-compiling that well.

    Take tcpdump, for example: I spent two hours trying to get configure to cross-compile it, and then I gave up, took all the sources, a simple makefile, and a hand-written config.h, and 15 minutes later I had a working build.

    90% of projects require patches/hacks to cross-compile. (Note: I work with ARM and PowerPC hardware for my job; cross-compiling is a daily task for me.)
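
    For anyone who hasn't fought that battle: in rough outline, the manual fallback described above looks like the sketch below, assuming an arm-linux-gnueabi-gcc cross toolchain and an already cross-built libpcap (all names and flags here are illustrative, not the commenter's actual makefile).

        # Skip the broken configure script entirely: write config.h by hand,
        # then drive the cross compiler over the sources directly.
        export CC=arm-linux-gnueabi-gcc
        $CC -O2 -I. -DHAVE_CONFIG_H -c *.c          # compile every source file
        $CC -o tcpdump *.o -lpcap                   # link against cross-built libpcap
        file tcpdump                                # should report an ARM ELF binary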

  5. My guess is that the cross-compilation part is quite fast as long as the package Makefiles are set up for it and correctly distinguish between the host architecture and target architecture build artifacts.

    However, in some cases, it may be necessary to build parts of a package twice because some parts need to run on the host as part of the build itself. For example, a package might rely on tools to generate code or data files at build time. Those tools need to run on the host.

    I would not be surprised if the majority of package build scripts simply assume that the host and target architecture are the same, and in particular that the host is capable of executing the generated code at build time.

    It may be possible to use an ARM emulator to work around some of these issues. However, emulators can be slow and buggy and may not implement all of the required features of the target architecture, such as SIMD extensions like Neon.

    At some point it just starts to make sense to use actual hardware, at least for running the unit tests.

    Many of these problems could probably be solved by improving the build scripts. Of course, then you’re stuck fixing 20,000 individual packages each with their own idiosyncrasies… Ugh.
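
    To make the host/target split concrete, here is a minimal sketch (gen_tables is a hypothetical build-time code generator; the cross toolchain name is illustrative):

        cc -o gen_tables gen_tables.c                    # host compiler: this tool runs on the build machine
        ./gen_tables > tables.c                          # generate source at build time
        arm-linux-gnueabi-gcc -c tables.c -o tables.o    # cross compiler: this object runs on ARM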

  6. Well, the article seems to have done its job.
    I've bought a Panda from Digi-Key!!
    Now I'm a back-order fashion victim; TI will love me.
    Never mind, at least I won't have to cross-compile now :D

  7. The bastard businessman side of me comes out looking at this — an army of PandaBoards running in a 4U as a metric crapton of VPSes oversold for capacity would probably make you a killing. ;)

  8. Is there a reason to have things compiled only on one system rather than using an existing distributed compiler like distcc? It would surely speed up builds and make sure that packages build with all the available CPUs rather than limiting each build to only two cores.
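
    For what it's worth, a distcc setup along those lines might look like the following, with each listed host running the distccd daemon (hostnames are illustrative):

        # make still runs locally, but individual compile jobs are farmed
        # out to the hosts listed in DISTCC_HOSTS.
        export DISTCC_HOSTS="panda01 panda02 panda03 panda04"
        make -j8 CC="distcc gcc"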

  9. @DC – I had this idea approx. a year ago and did the math.
    As far as I can figure, a conventional system is still much more cost-effective, not to mention how hard it would be to sell an ARM-based platform to the average VPS customer…

  10. Hey folks,

    I wrote the original article on this; when I briefly interviewed David about the build, speed was the reason he gave. That being said, compatibility may have been a more important reason, and I might have glossed over that. Thanks for the constructive criticism, though! I'm gonna e-mail David tonight and try to confirm the details.

  11. @Jonimoose

    >>an existing distributed compiler like distcc?

    The Debian build system only recently got the ability to pass -j to make, IIRC, and it depends on the package in question whether it actually uses it. You could probably make distcc and ccache work… but I'm not sure it would make much difference, as Debian at least is already spreading the work of building the latest packages over all their buildds. So it's distributed, just in a different way.

    I don't get the speed argument either… any cheap amd64 box will kick the living shits out of any ARM core you can get… but one of the Debian rules, at least, seems to be that if an arch can't keep up with building itself it isn't valid for inclusion in the main releases (e.g. m68k), and emulators aren't allowed to be buildds. I'm not sure if this applies to Ubuntu… their standards are a little slacker than Debian's.

  12. @metropolis

    >>Is it really faster than using
    >>QEMU on a bunch of normal servers

    Yes. If you actually have a go at using QEMU to create an ARM rootfs or something, you'll realise how slow it is.

    >>emulating an ARM platform?

    ARM isn't like m68k, where the available hardware tops out at 100 MHz or so (ignoring ColdFire here)… you can get 1 GHz+ ARM machines, which are relatively slow next to a 2 GHz+ Core-series machine but still a good deal faster than said x86 machine trying to emulate an ARM machine.
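
    For reference, the usual user-mode-emulation workaround on an x86 host is a couple of commands (assumes the qemu-user-static package and binfmt_misc support; the arm-rootfs path is illustrative):

        # Drop the static ARM emulator into the rootfs; binfmt_misc then runs
        # ARM binaries inside the chroot transparently, at emulation speed.
        sudo cp /usr/bin/qemu-arm-static arm-rootfs/usr/bin/
        sudo chroot arm-rootfs /bin/sh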

  13. They are right about the problems with cross-compilation. I just wonder if they tried the Scratchbox environment, which Nokia used for Maemo/MeeGo development. That environment successfully cross-compiled for ARM all the libraries and apps I tried, on an x86-64 Ubuntu host.

  14. The speed issue really doesn't make sense. The Pandas lack SATA, so any local storage would be an SD card or USB 2.0.
    Networking is limited to 100 Mbit.
    The processors are fast ARM cores, but they will not match an i5 or AMD quad-core CPU.
    Maybe there's a problem with cross-compiling the code, but if the code has issues cross-compiling, I would think it should also have issues compiling on ARM.

  15. Q: What do you do when you want to compile a piece of software and roll it into an easily accessible package so that your potential users can download and use your software with Ubuntu?

    A: Nothing. Don’t even try it yourself. You have to support N+1 backports for different versions of Ubuntu, and the latest one will be obsolete in six months anyways.

    Q: Then how can I distribute my software to the widest possible audience?

    A1: Kiss a lot of community ass and maybe someone with the competence will take care of it, if they find it personally useful, if they bother, sometimes, maybe…

    A2: Develop for some other OS.

  16. Ummm, cross-compiling isn't an issue. Setting up the environment correctly is. Once you have it, it's not really an issue. Yes, it is far easier just to set up native hardware to do it, but on slower architectures like ARM it would be quicker to cross-compile.

    While doing embedded Linux development for 10 years, I only ran into one issue with cross-compiling, and that was due to someone counting on address wrap-around in a linker script. I overcame this quirk by running a 32-bit-compiled cross linker.

  17. Just learn how to cross-compile, or use something like OpenEmbedded or T2 or buildroot or …

    Cross-compiling isn't that hard. It is more time-consuming the first time because you need to patch a lot of bit-rotten makefiles/build systems, but afterwards you can build all the packages and images you want very, very fast. Upstream is also often happy to get patches.
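
    For the curious, a typical buildroot session really is just two commands, run from a buildroot source checkout:

        make menuconfig   # pick the target architecture, toolchain, and packages
        make              # builds a cross toolchain, all packages, and the images under output/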

  18. @cantido

    With e.g. OpenEmbedded you can easily cross-build Debian packages, at least dpkg packages. Of course, you need to set the OpenEmbedded distribution to use the same versions/patches, but once set up you can build for MIPS and tons of other systems, too.
