Embed with Elliot: ARM Makefile Madness

To wrap up my quick tour through the wonderland of make and makefiles, we’re going to look at a pair of possible makefiles for building ARM projects. Although I’m specifically targeting the STM32F407, the chip on a dev board that I have on my desk, it’s reasonably straightforward to extend these to any of the ST ARM chips, and only a bit more work to extend it to any ARM processor.

If you followed along in the first two installments of this series, I demonstrated some basic usages of make that heavily leveraged the built-in rules. Then, we extended these rules to cross-compile for the AVR series of microcontrollers. Now we’re going to tackle a more complicated chip, and that’s going to mean compiling with support libraries. While not required, it’s a lot easier to get an LED blinking on the ARM platforms with some additional help.

One of the main contributions of an IDE like Arduino or mbed or similar is the ease of including external libraries through pull-down menus. If you’ve never built a makefile-based project before, you might be surprised to find that adding libraries to your project isn’t particularly more difficult.

ARM Makefile Take One: Explicit Version

To start off, our ARM makefile is a lot like our AVR version. We need to specify the cross-compilation tools so that the computer doesn’t build files in its native format, and pass some compilation and linker flags via the CFLAGS variable and our own LFLAGS variable, respectively.

## Cross-compilation commands 
CC      = arm-none-eabi-gcc
LD      = arm-none-eabi-gcc
AR      = arm-none-eabi-ar
OBJCOPY = arm-none-eabi-objcopy
SIZE    = arm-none-eabi-size
## Platform and optimization options
CFLAGS = -c -fno-common -Os -g -mcpu=cortex-m4 -mthumb 
CFLAGS += -Wall -ffunction-sections -fdata-sections -fno-builtin
CFLAGS += -Wno-unused-function -ffreestanding
LFLAGS = -Tstm32f4.ld -nostartfiles -Wl,--gc-sections

A number of these options are shared with the AVR makefile from last time — splitting the functions out into their own sections and garbage-collecting the unused ones, for instance. ARM-specific entries include the processor and the “thumb” instruction set options. Finally, in the linker flags, LFLAGS, we pass a memory map (stm32f4.ld) to the linker that tells it where everything needs to go in memory. We didn’t need this file in the AVR case because the chip has a simple and consistent memory layout that GCC can just fill in for us. Not so in the ARM world, but you can find the right memory map for your project in the development libraries that you’re using.
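For reference, the heart of such a linker script is a MEMORY block that names the flash and RAM regions. A minimal sketch for the STM32F407 (1 MB of flash at 0x08000000 and 128 KB of main SRAM at 0x20000000; the real file from your library distribution has more sections and detail) looks like this:

```ld
MEMORY
{
    FLASH (rx)  : ORIGIN = 0x08000000, LENGTH = 1024K
    RAM   (rwx) : ORIGIN = 0x20000000, LENGTH = 128K
}
```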


The rules to build the project aren’t particularly complicated. Because make rules are specified in terms of targets and their dependencies, it’s often easiest to think backwards through a rule chain. In this case, my ARM flash programmer needs a raw binary image file to send to the chip, so we can start there. The object-copy command makes a binary out of an .elf file, the linker makes an .elf file from compiled objects, and a default rule compiles our source code into objects.

## Rules
all: main.bin size flash

%.bin: %.elf
    $(OBJCOPY) --strip-unneeded -O binary $< $@

main.elf: $(OBJS) stm32f4.ld
    $(LD) $(LFLAGS) -o main.elf $(OBJS)

That wasn’t hard, was it? The first rule defined is always the default that gets run when you just type make, so I tend to make it a good one. In this case, it compiles the needed binary file, prints out its size, and flashes it into the chip all in one fell swoop. It’s like the “do-everything” button in any IDE, only I don’t have to move my hand over to the mouse and click. (Remember that $< is makefile-speak for the dependency — the .elf file — and $@ is the variable that contains the target .bin.)
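The size and flash targets named in the all rule aren’t shown above. As a sketch, assuming the open-source st-flash utility (your programmer’s command line will differ), they might look like:

```make
size: main.elf
	$(SIZE) main.elf

flash: main.bin
	st-flash write main.bin 0x8000000
```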

Libraries: Headers

Which brings us to libraries. The rule for making main.elf above relies on a variable OBJS (objects) that we haven’t defined yet, and that’s where we get to include other people’s code (OPC). (Yeah, you know me.) In C, including other modules is simply a matter of compiling both your code and the OPC into object files, including a header file to tell the compiler which functions come from which files, and linking the objects together.

In our case, we’ll be linking against both the CMSIS standard ARM library and ST’s HAL driver library for the F4 processor. Since I’ve done a bit of STM32F4 programming, I like to keep these libraries someplace central rather than re-duplicating them for each sub-project, and that means that I have to tell make where to find both the header files to include, and the raw C source files that provide the functions.

The header files are easy, so let’s tackle them first. You simply pass the -I[your/directory/here] option to the compiler and it knows to go looking there for the .h files. (Make sure you also include the current directory!)

## Library headers
CFLAGS += -I./ 
CFLAGS += -I/usr/local/lib/CMSIS/Device/ST/STM32F4xx/Include/
CFLAGS += -I/usr/local/lib/CMSIS/Include/
CFLAGS += -I/usr/local/lib/STM32F4xx_HAL_Driver/Inc/

In this case, we’re including both the overall CMSIS headers that are shared across all ARM platforms, as well as the specific ones for my chip.

Libraries: Objects

Once the compiler knows where to find the header files, we need to compile the actual C code in the add-on libraries. We do this, as before, by telling make that we want the object files that correspond to the C code in question. We specify the target, and make tries to satisfy the dependencies.

# our code
OBJS  = main.o
# startup files and anything else
OBJS += handlers.o startup.o

## Library objects
OBJS += /usr/local/lib/STM32F4xx_HAL_Driver/Src/stm32f4xx_hal_rcc.o
OBJS += /usr/local/lib/STM32F4xx_HAL_Driver/Src/stm32f4xx_hal_gpio.o
OBJS += /usr/local/lib/STM32F4xx_HAL_Driver/Src/stm32f4xx_hal.o
OBJS += /usr/local/lib/STM32F4xx_HAL_Driver/Src/stm32f4xx_hal_cortex.o
OBJS += /usr/local/lib/CMSIS/Device/ST/STM32F4xx/Source/Templates/system_stm32f4xx.o

Remember from the first makefile example that make knows how to compile .c code into .o object files. So by listing OBJS as a dependency of some other target, each of the source files that corresponds to the listed objects will get automagically compiled. And since we already have a rule that links all of the object files into a single main.elf file, we’re done!

## Rules
main.elf: $(OBJS) stm32f4.ld
    $(LD) $(LFLAGS) -o main.elf $(OBJS)


The rest of the makefile is convenience rules for flashing the chip and so on. Look through it if you’d like, and here’s a helpful tip for doing that: if you want to see what make is thinking, type make -p, which lists all of the rules and some variables, taking the current makefile into account. make --warn-undefined-variables can also help catch typos, like typing OBJ instead of OBJS.

Fancier ARM Makefile: Building up the Core Library

The make system gives you so many cool tools to automate things that it’s really hard to know when to stop. Quite honestly, I should probably stick to a simple makefile with few moving parts like the one above. It’s very nice to have an explicit list of all of the bits of included library code in one place in the makefile: if you want to know what extra stuff is needed to compile and run the program, all you need to look at is the OBJS definitions. But there are a few make tricks that may come in handy later on in your life, and I can’t resist, so here goes.

The above procedure also has one real flaw. It compiles the object files into the same directory as the library source. When you end up (re-)using the code in the CMSIS directory across projects with different processors, for instance, you’ll have object files compiled for one chip being linked in with another unless you’re careful to remove them all first.


It would be better to compile the object files locally, but that leaves many object files floating around in your code directory. OCD programmers of yore hated that kind of clutter, and thus the archive file was born. An archive is just a bunch of object files jammed into one. To build one, you pass the object files to an archiver, and out comes a .a file that you can link against later; the end result is the same as if you’d linked against all of the component object files, with much less typing.
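As a generic sketch (hypothetical files util1.c and util2.c), building an archive and then linking against it looks like:

```make
## Bundle the objects into an archive, then link against the .a
libutils.a: util1.o util2.o
	$(AR) rcs libutils.a util1.o util2.o

main: main.o libutils.a
	$(CC) -o main main.o libutils.a
```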

So let’s take the CMSIS and HAL libraries, all of them, compile them and wrap them up in one big archive called core.a. That way, we’ll only ever have to compile those objects once, until we download new versions of the libraries, and we’ll be free to use any additional parts of the library almost without thinking. (Note that this goes against my advice to be specific about which bits of code we’re including. But it’s so convenient.)


To build the core.a archive, we’ll need an object file for every source file in a few different directories. We could list them out, but there’s a better way. The solution is to use a wildcard to match every .c filename and then edit the names of each file to change the .c to a .o, giving us the complete list of objects.

Here’s a simple snippet that compiles every .c file in the current directory by defining the corresponding object files as dependencies for an executable main program. This makes for a quick and dirty makefile to have on hand because it usually does what you want if you just plunk it down in a directory full of code.

SRCS   = $(wildcard *.c)
OBJS   = $(SRCS:.c=.o)
CFLAGS = -Wall -I./

main: $(OBJS)
    $(CC) $(CFLAGS) -o main $(OBJS)

So we can get our list of object files by wildcarding the various CMSIS and ST library directories:

## Locate the main libraries
CMSIS     = /usr/local/lib/CMSIS
HAL       = /usr/local/lib/STM32F4xx_HAL_Driver
HAL_SRC   = $(HAL)/Src
CMSIS_SRC = $(CMSIS)/Device/ST/STM32F4xx/Source/Templates

CORE_OBJ_SRC         = $(wildcard $(HAL_SRC)/*.c)
CORE_OBJ_SRC        += $(wildcard $(CMSIS_SRC)/*.c)

CORE_LIB_OBJS        = $(CORE_OBJ_SRC:.c=.o)
CORE_LOCAL_LIB_OBJS  = $(notdir $(CORE_LIB_OBJS))

This creates a list of object files for every source file, and it does one other cute trick: the notdir function strips the leading directory information from every file in the list. So instead of a long filename like /usr/local/lib/STM32F4xx_HAL_Driver/Src/stm32f4xx_hal_rcc.o we just have stm32f4xx_hal_rcc.o. We need this because we want to assemble all of the object files together in the current directory, so we have to specify them without their full paths.


We’ve got an object file for each source file in the various libraries, but we still need a rule to make them. If all of the C code were in the current directory, we’d be set, because when make sees stm32f4xx_hal_rcc.o, it tries to build it out of stm32f4xx_hal_rcc.c. Sadly, stm32f4xx_hal_rcc.c lives in /usr/local/lib/STM32F4xx_HAL_Driver/Src/. If only there were a way to tell make to go looking for C code in different directories, just like we told the compiler to go looking for header files in different include directories. Enter the VPATH.


Long story short, a colon-separated list of directories passed to the special VPATH variable treats OPC, located anywhere on your disk, as if it were our own code in the current directory. So with our list of the localized version of all of the core object files, and the VPATH correctly set to point to the corresponding source code, we can compile all the object files and throw them all into a single archive file.
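Concretely, using the directory variables defined above, setting the VPATH is a one-liner:

```make
## Where make should look for the library C sources
VPATH = $(HAL_SRC):$(CMSIS_SRC)
```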

core.a: $(CORE_OBJ_SRC)
    $(MAKE) $(CORE_LOCAL_LIB_OBJS)
    $(AR) rcs core.a $(CORE_LOCAL_LIB_OBJS)
    rm -f $(CORE_LOCAL_LIB_OBJS)

The $(MAKE) special variable expands to the make command itself, so the first recipe line invokes make recursively. Here, we’re effectively saying make stm32f4xx_hal_i2c_ex.o stm32f4xx_hal_dcmi.o stm32f4xx_hal_pcd.o ... for all of the object files. The AR line creates our archive (core.a). Then, we clean up all the extraneous object files that are now included in core.a. Neat and tidy.

Wizardry: Putting it all Together

A recap: We use a wildcard to identify all of the source and get the corresponding object filenames. VPATH points at the source files in the foreign library, so make can find the remote source. We then make each of the object files and throw them all together into an archive, and then clean up afterwards. Now we’re ready to compile our personal code against this gigantic library of functions. But don’t worry, because we passed the CFLAGS to remove unused code, we only end up with the bare minimum. And we never have to re-compile the library code again. In this version of the makefile, OBJS is just the three local object files. Everything else is in the core.a archive.
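For this version of the makefile, that boils the object list down to just our own code (the same three files as in the explicit version above):

```make
## Local code only; everything else lives in core.a
OBJS = main.o handlers.o startup.o
```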

main.elf: $(OBJS) core.a stm32f4.ld
    $(LD) $(LFLAGS) -o main.elf $(OBJS) core.a

The core version of this makefile is also available in one piece for your perusal. I’ve also pushed up the entire project to GitHub so you can make it for an STM32F4 chip, or modify it to work with other platforms. If people are interested, I’ll do some more “getting started with ARM” type topics in the near future.

We’ve barely scratched the surface of make, yet we’ve done a complex compilation that automatically pulls in external code libraries and pre-compiles them into a local archive. There’s no limit to the trouble you can get yourself into with make and once you start, you’ll never stop. Just try to remember to keep things simple if you can, and remember that debugging is twice as hard as writing code in the first place — you’ll want to make it easy on yourself.

47 thoughts on “Embed with Elliot: ARM Makefile Madness”


    Or hardcoded things in general.

    CC = arm-none-eabi-gcc
    AR = arm-none-eabi-ar
    OBJCOPY = arm-none-eabi-objcopy
    SIZE = arm-none-eabi-size

    Can be replaced with:

    CC ?= arm-none-eabi-gcc
    AR ?= arm-none-eabi-ar
    OBJCOPY ?= arm-none-eabi-objcopy
    SIZE ?= arm-none-eabi-size

    Which means you can overrule them from the commandline (if you want to run a specific version, or from a specific path for example)

    This also works great for paths:

    CMSIS ?= /usr/local/lib/CMSIS
    HAL ?= /usr/local/lib/STM32F4xx_HAL_Driver

        1. For me, the prefix is an unnecessary layer of abstraction. A makefile like this is typed once in your lifetime, and then the top few lines are copy/pasted into future makefiles. When you do change architectures/compile toolchains, it’s a single find-replace.

          This sort of “optimization” is exactly the stuff that make lures you into doing. It doesn’t help/hurt/change anything. It’s just generalization for generalization’s sake.

          You’re welcome to do it, of course. I just find that keeping my makefiles from becoming too “cute” is an uphill battle.

      1. it works for you because you use makefile variables (make FOO=baz)—but the makefile variables are supposed to be tied to your environment variables. You need the question mark if you want an environment variable FOO to be reflected in the makefile variable FOO. Note that it can either be an exported environment variable, or something specified on the command line (FOO=baz make).

    1. To clarify. What ?= means is that if there is a variable of that name already (environment or specified on the command line), use its value instead of the makefile value. if you were to define
      CC = FOO
      in your .bashrc (for instance) then
      CC ?= BAR
      in your makefile would have no effect. CC would remain FOO. Whereas if you put
      CC = BAR
      then CC would be set to BAR regardless of your environment variables.

      1. This, and if one gives variables as arguments to a ‘make’ invocation, it’ll override both. Accordingly it isn’t crucial whether to use ‘=’ or ‘?=’; both can be overridden from the command line. See https://www.gnu.org/software/make/manual/html_node/Overriding.html

        For my taste, taking (usually invisible) environment variables into account is asking for trouble. Hard to find out what’s going wrong or even to recognize that something is going wrong at all, unless one knows what to watch out for. That’s why I use ‘=’, too.

  2. Why do you have a -Wl, in the LFLAGS? Just use

    Globbing of things like *.c in a source list makes my skin crawl. Just list sources out. It’s not going to kill someone to add a source file explicitly and the make file becomes more clear and more flexible (to support other build targets)

    Also this pattern makes cross-compiling more readable:

    PREFIX ?= arm-none-eabi-

    AS ?= $(PREFIX)as
    CC ?= $(PREFIX)gcc
    LD ?= $(PREFIX)ld
    #… etc

    Also your flattener is better with a --gap-fill 0xff option. Most micros use NOR flash and the default unprogrammed state is 0xff. Creates less churn on bits when flashing.

    Lastly I frown upon compiling source from another directory. It creates problems with different people having different versions of source in that other directory on their build machines. If you want to build CMSIS as part of your project, include it locally. Use a source control link to pull the right version from a common repository path spec if you must. But put sources in one root. Then use something like this to manage (from my own make system):

    obj/%.o dep/%.d: %.c
        @echo " -CC- Compiling '$<'"
        @mkdir -p $(dir dep/$*.d) $(dir obj/$*.o)
        @$(CC) $(MACH_FLAGS) $(CFLAGS) $(INCLUDES) -MMD -MF dep/$*.d -c $< -o obj/$*.o

    My $.02

    1. On globbing: I agree that I like to see the files explicitly named. I even said so in the article, and it’s with bad conscience that I wrote the second makefile in public. :) But I feel that it’s justified in this particular case — making an everything-included archive/library from a fixed codebase.

      In that particular case *.c says a lot more directly what’s actually being done — everything is being compiled. If I listed all the files out, you wouldn’t know (without checking the contents of the ST libs) that it’s really everything. Plus, the globbing/renaming/VPATH combo is a nice trick to know when you need it.

      As for including all code in a single directory: ok you can do that. I write _a lot_ of tiny microcontroller projects, and duplicating the 20MB-280MB libraries that go with each of them just to support 1-20KB of my code seems silly to me. I’ve never had any problems.

      As for flattening with 0xFF, I’ll have to look into it. Thanks for the tip!

  3. Folks, if you don’t like hardwired paths in makefiles, do everyone a favour and actually use a sane build system such as CMake. That handles searching for dependencies, building, and even installation of the software. Having to hack complex makefile rules in this day and age is really crazy. CMake will even generate a build for your preferred IDE – be it Visual Studio, Eclipse or just a plain command-line make build.

    Specifically for STM32 series there is a good set of pre-built templates here:

    That handles all the standard libraries, ChibiOS, most of the STM32 architectures, etc. And it is very simple to use.

    1. Except cmake has its own large set of problems. Lack of documentation, coupled with unusual syntax, and compounded by the fact that you still need to know what it’s doing with makefiles “under the hood” if you ever want to figure out why it’s not doing what you want it to do.

      (Disclaimer: I want to like cmake. I really do. But every time I try it, I always wind up going back to plain ol’ makefiles in the end, because getting it to work just isn’t worth the trouble.)

    1. Thanks for the (snarky) pointer. I fixed it.

      Believe it or not, we can’t preview the posts in blog mode — so everything that’s blog-specific has to go live before we can see the (ex-post obvious) glitches. Instead of your vitriol, we deserve your pity and comfort!

      1. I wasn’t trying to be vitriolic with my comment, sorry ’bout that :) I can see how it would be interpreted as such. I really like your articles, and you’re probably one of my favorite HaD writers, keep up the good work!

  4. Elliot! Thanks so much for writing this up! I’m currently on ARM with Eclipse, but I’m always afraid it’s going to break down with some tiny little modification, or I’ll forget just the right switch to flip to make things work….

    I want to move toward makefiles and a flatter, more visible toolchain, and this is a great start. Thanks!

  5. Nice overview, and with helpful comments also.
    Just last week I was toying with another pretty good tutorial on the whole build process for ARM32, and reading the same material in two different ways helps me let it sink in.

    The tutorial from:
    also has some more info on the startup code and the memory map. And I also built a working blinking LED with that tutorial for an STM32F103C8T6.

  6. Why does this all appear to be so unnecessarily complex?

    I’m only showing my ignorance, but it appears that to blink an LED one must be an expert in ancient hieroglyphics.

    1. People generally learn this stuff iteratively. Over time simple Makefiles become more sophisticated. Not many sit down with the GNU make book before blinking an LED.

    2. Yeah, right? This was a bit of a tough one. The second makefile is a little bit fancy, but I’ll claim that the first one is only _necessarily_ complex.

      The reality of working with an ARM chip as opposed to an AVR/PIC, is that everything’s turned off by default in ARM instead of turned on by default in the simpler systems. So just to get the chip up and blinking, you have to configure the CPU clocks, turn on the peripheral drivers for the pins, and then configure them. Doing that by hand is a real pain — it’s actually _easier_ to include all these libraries that have simple functions to call.

      The other thing that added to the complexity is that I showed off some of the stuff that’s normally going on behind the scenes in a “less complex” setup. So instead of just buying a laptop, you got a brief tour of the laptop manufacturing plant. There’s just a lot more moving parts.

      1. You need to turn on the clocks for everything before you start writing to a peripheral, or you’ll get nowhere or crash. The clock trees are a bit more complicated, and switching from the internal RC to an external clock with a PLL takes a bit more work. On the other hand, you have a lot of control if you need it.

        The complexity likely comes from how these chips are designed – reusable IP peripheral blocks put together, so things are not as simply integrated as on the 8-bit chips. The IP blocks allow the chip manufacturers to make very large families that cover the low end to the high end.

        The learning curve might be higher, but once you have learned it, it is easy to move from one family to another. I have switched ARM families/vendors.

    3. Pretty much this. No offense to the author, but all this series did was reinforce my burning hatred for the seedy underbelly of build toolchains in general and makefiles (and makefile-making toolchains – it’s makefiles all the way down) in particular. Yes, one can learn all this stuff even if it’s just arbitrary complexity that needs to be memorized by rote “just because”. No, there aren’t enough picoseconds in the expected remaining life of the universe for this sort of s##t, when one could focus on writing the stuff that actually does something useful instead.

      It’s bad enough that regardless of language used one already spends 98% of the time looking up semantics and syntax and APIs and other assorted references instead of figuring out what one actually needs to do to get something done. Yes, I do readily craft linker files by hand whenever needed – no, I would never do that sort of misery willingly if it can be helped. And it should be. Mark my words: as far as computing is concerned, not only did we not yet invent the wheel, we haven’t ever even _seen fire_. Figuring out how to use a bloody rock as a tool is still in our nebulous future. All we managed to figure out so far is how computing should NOT work.

      Now please someone rush to point out how highlighting flaws in anything without immediately offering a solution is invalid – really, I could use some practice on my scornful laughter…

      1. Since beginning of this makefile-making tutorial series I wondered, why would I need to learn this? IDE should know, how to link and compile any program I write without me editing or crafting any makefiles. When I need to add a library, I just add an include directive in my main .c source. IDE is smart enough to find it, and with proper compilation settings (done with GUI menu, not with bloody text editor) it will take from libraries only those functions that I use and will track all necessary dependencies without bothering me with those pesky details no one should give a flying duck. The whole point of modern IDEs is to focus on writing source code and let computer do the rest. As long as paths are set (via GUI) and project is configured (via GUI) IDE can link, optimize and compile even complex projects without bothering user for some hand-crafted makefiles.
        For the blazing balls of Beelzebub, it’s the 21st century, and yet some people are using their computers as if they were just fancy VT100 terminals. Why?

      2. Haha, “burning hatred for the seedy underbelly of build toolchains”
        Your logic is sound… That’s why IDEs tend to cover that up as best as possible, and likely why things like KEIL can go about charging so much for something that can otherwise be handled via open-source tools.
        So, just admit that your burning-hatred puts you at a stage somewhere between an Arduiner and a “seedy-underbelly”-programmer. *Someone* has to understand that seedy underbelly to make Arduiners possible.
        Replace that seedy-underbelly with something entirely different… Which is done, and generally fails every time. Most C-compilers can hide their seedy-underbelly regarding different phases of preprocessing, compilation, building, assembling, linking, etc. Most IDEs can hide the makefile and linker-scripts… But at some point someone down the line will actually want to *see* that seedy-underbelly, the ASM output is quite handy for debugging, the preprocessed output is, as well.
        So… where does this leave us…?
        ‘Spose you could try to make something better… But, again, one of the great things about MAKE (vs. CMAKE and others) is that it’s downright ubiquitous, pretty much operating-system independent, and has existed since the dark ages. Do something with this “wheel” (made of stone, no less), and you’re guaranteed to be rolling on nearly any surface. Do something with those fancy-newfangled rubber-wheels and you’re going to have a really hard time rolling over a surface of thumbtacks.
        So, then… I should probably just shut-up, here… but we could take it further… Line that stone-wheel with a layer of rubber, or crush that stone and melt it down into iron… whoa, we’ve still got a wheel made of stone. Whatever happens, I’m glad we’ve still got wheel-wells, if that rubber tire leaks or shreds we’ve still got *something* to roll on, but if folks (like yourself?) had their way, we’d all be driving-around on wheels made of balloons and roads paved of felt, and nary a soul would know how to reinvent the wheel-well to go off-roading… We’d have to take someone out of cryogenics just to recreate a wheel that’d roll on cement without popping! There would never be discussion about a wall between Mexico and America, the wall could be nothing more than cement vs. felt-paved roads! Wee!

        I gotta give you props, though… You write linker-scripts?! Yet you’re complaining about Make?! Mind-Blown.

  7. For quick use with STM32A0-4 ARM Eclipse is great, it works out of the box with pretty clear step by step tutorial. You can start project with blinking led and trace configured, there is not much problem to connect st-link debugger too.

    ST made StCube which generates drivers + middleware, and it can be pinned to Eclipse too, although I didn’t try connecting them both. @HaD: writing on that might actually be pretty neat.

  8. This looks real handy for future ARM endeavors. Thanks Elliot.
    Also, kinda funny, I’ve a big-ol’ makefile-based system I’ve been working on for quite some time and barely even grasp how it works; there are some great explanations in here for those things ;)

  9. This was all I needed to at least dust off my Nucleo board, install gcc-arm-none-eabi, texane’s stlink repo, STM32F4Cube, etc.

    I’ll need to do a little finagling to organize stm32cubef4.zip’s contents into something useful for strictly gcc compilation. It’s clearly geared toward a bunch of IDEs I’ve never heard of but not straight-up gcc for some reason. You’d think it would have taken minimal effort by comparison to have this up and running with gcc with only a handful of keystrokes but I’m probably missing something. If I get something working I’ll throw a link up.

    I was at least able to download STM32F4-Discovery.bin from my Nucleo using st-flash and am more comfortable with Makefiles thanks to this writeup. Thanks Elliot!

    1. Cube-generated code for UART worked out of the box for me – just copied the sources, added them to the build, and merged cube’s main.cpp into my main. With USB host I had to tinker with the clock setup first.

  10. Thanks for this great article on Makefiles! Please make more. I use makefiles in an intermittent way and tend to forget many subtleties when I stop editing them for a while.

    I discovered platformio yesterday. It works great for offline mbed development. Everything is automated. It downloads the toolchain and the appropriate libraries for you and supports multiple ides.

    for example, to create an mbed-based project for the eclipse ide and the nucleo_f411re target, type “pio init --ide eclipse -b nucleo_f411re -d .” this creates a barebones project based on the specified target board, ide and dev directory. It also creates a platformio.ini config file where you can specify whether you want to use the STM32 spl libs, libopenCM3 or mbed (default). To build the application from the cli type “pio run”.

    I’ve only been playing around with platformio for 24hrs and…..mind blown. The number of supported embedded platforms and frameworks and IDEs is ridiculous.

  11. +1 to more “getting started with ARM” topics!!!

    I’ve been looking for a good source for getting started with ARM microcontrollers and just seem to find a small handful of one-off solutions and hacked libraries (besides mbed). I’d love to learn more about the process of approaching a new ARM chip. For example, how to take a non-mbed-supported ARM chip and make a blink program.

    I’m loving the entire Embed with Elliot series!

  12. As a pretty seasoned STM32 (and ARM) developer, using the STM32F3Cube HAL drivers in quite a few projects, I’d be careful in making a link library out of their code. Most of the HAL files in there have a ‘basic’ file and an ‘_ex’ version. The basic one provides the standard functions for dealing with the peripheral, and the _ex one provides chip specific extensions. This in itself isn’t too bad, except that in many cases they use ‘weak’ function definitions for stubs in the basic driver that get overridden when you link in the _ex files. With the link library, it’s unlikely that the linker will find the real functions, and replace them correctly. The ADC code certainly shows this. Neglecting to link in adc_ex.c on the STM32F303 causes the ADC code to appear as if it works, but the samples returned are always 0…
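    The archive pitfall described here is easy to reproduce on the host with plain gcc and a pair of hypothetical files, sketched below: the weak stub in the archived object satisfies the linker, so the archive member holding the real definition never gets pulled in.

    ```shell
    # basic.c stands in for the basic HAL driver with a weak stub
    cat > basic.c <<'EOF'
    #include <stdio.h>
    __attribute__((weak)) int get_sample(void) { return 0; }  /* stub */
    void run(void) { printf("%d\n", get_sample()); }
    EOF
    # ex.c stands in for the _ex file with the real implementation
    cat > ex.c <<'EOF'
    int get_sample(void) { return 42; }
    EOF
    cat > main.c <<'EOF'
    void run(void);
    int main(void) { run(); return 0; }
    EOF

    gcc -c basic.c ex.c main.c
    ar rcs core.a basic.o ex.o

    # Linking against the archive: ex.o is never pulled in, the stub wins.
    gcc -o demo_ar main.o core.a && ./demo_ar     # prints 0

    # Linking the objects directly: the strong definition overrides the weak one.
    gcc -o demo_obj main.o basic.o ex.o && ./demo_obj   # prints 42
    ```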

    I second the comment above about using the globbing operators to pick up the .c files in a directory and add them to the build. I’ve built some pretty complex systems where some of the .c files are built for the host, and some for the target, and even worked in some projects where files get compiled for completely different processor architectures for the same system. (With completely different compilers to boot.) I cringe when I see globbing used to compile all the files in a directory, as that precludes the ability to easily build intermediate tools that get used in the build, or build certain files for certain architectures without resorting to special file name patterns. It also means that if you accidentally drop a .c file into a directory, it ends up in the build unconditionally. It is far better, and clearer for someone to maintain if you just list the files in the build. There’s no doubt which files end up in the link.

    I moved from 8-bit processors to ARM processors for my hobby projects about 6 years back, and haven’t looked back. The extra complexity in doing things like configuring clocks and PLLs is well worth the effort of figuring out. Having the ability to do 32-bit calculations on a chip running at 50 MHz makes all sorts of numerical methods available, like real-time FFTs and some other pretty serious math, even if you stick to fixed point.

    1. Thanks for the awesome comments!

      Re: Files and _ex files. That’s sketchy. I haven’t run into the same problem, and honestly I have used the old SPL libs more than the newer HAL ones. I’ll have to keep an eye out. Thanks!

      I said this above, but I can kinda defend globbing when the intention is to include everything — it’s a clearer representation of that intention. If I listed all the files, you’d never know if I left one out. I’m totally with you on the general case, though. It’s nice to see an explicit list of the files that the project needs in the makefile.

      Re: speed and etc. Totally! (I really want to do a fixed-point math article soon.)

      1. If I leave a file out, the linker usually complains about missing symbols when it goes to do the final link. :) Doesn’t usually take too long to figure out what the missing file is… and if nothing in it was being referenced in the first place, why should it be in the build? (Weak symbol definitions aside… I still struggle with the usefulness of those except to define stub functions for a user to override, and even then, I would prefer that the build just failed until they are defined properly.)

    2. About these “32-bit calculations” … maybe it surprises some of the ARM addicts, but AVR 8-bit can do 32-bit math just as well. They can even do 64-bit integers, if needed.

      But doing so on an 8-bit is incredibly slow? Surprise, surprise, no, it’s not. About a year back I ported Teacup Firmware, a 3D printing firmware, from ATmega to LPC1114, a Cortex-M0. The performance-critical parts in there are 90% 32-bit integer maths. So one would expect a noticeable speedup, right?

      Unfortunately, there is no such speedup. The ATmega requires 304 clocks for the central loop; the LPC1114 takes 290. The 14-clock difference comes from the ATmega having no 32-bit counter, so some extra code is needed to fake one. All other math runs at the same speed: additions, subtractions, comparisons, shifts.

      How come? I didn’t investigate it down to the bottom, but here are a few hints: ATmegas can load code from flash at 20 MHz, and the LPC1114 isn’t any better at this; one can see processing pauses when trying to execute code faster.

      Another one is: for doing a 32-bit addition, 8 bytes have to be loaded into registers. That’s the same on a 32-bit CPU as on an 8-bit CPU. No gain for the ARM.

      Third one: an ATmega comes with 32 registers, an LPC1114 has only 8. As such, registers have to be loaded and saved just as often to RAM as on the smaller chip. Except when doing simple math, like testing a flag, where the ATmega actually has an advantage: to test the single bit it has to load only one byte, while the ARM loads two.

      That much regarding the 32-bit myth. These ARMs aren’t faster because they’re 32-bit, they’re faster because they run at a faster clock, have better timers and such stuff. Could be done with an 8-bitter just as well.

      1. You are not comparing ARM vs Atmel, but LPC1114 vs Atmel. Most ARM microcontrollers have some sort of flash accelerator using RAM (a small cache).

        “8 bytes have to be loaded into registers. That’s the same on a 32-bit CPU as on a 8-bit CPU. No gain for the ARM.”

        What do you mean ? The ARM loads 4 bytes in one cycle from internal memory.

        1. Yes, the comparison takes ATmega and Cortex-M0, because the comparison means to be 8-bit vs. 32-bit. Both are similarly equipped, except for register width and allowed clock frequency. Comparing ATmega to Cortex-M7 makes IMHO no more sense than comparing Cortex-M7 to Intel i5.

          “The ARM loads 4 bytes in one cycle from internal memory.”

          It takes 2 clock cycles, no matter whether 4 bytes or a single byte is loaded. And only if the prefetch engine can keep up, which is often not the case in computing-intensive code. With prefetch exhausted, one is back to the same 20 MHz as an ATmega.

          I mean, this lack of an advantage of 32-bit CPUs isn’t some theoretical rant; it’s practical, written, performance-measured code. The biggest advantage appears to be smaller binaries: they shrank by some 40%. Again a surprise, this time a positive one :-)

      2. I agree with your architectural points. The speed of a given application depends on both the architecture and the algorithm itself. For my applications, I feel there is an advantage; for others, it may end up being a draw, exactly as your example shows. It comes down to how the memory is organized and accessed, and how the data is processed. The number of clock cycles for fetching instructions and data starts to matter when you try to push the limits of any processor. I’ve been there, and it’s not fun to see your application fall over because you ran out of clock cycles to do something you have to…

        If you are clever, you can take advantage of the features of your chosen architecture to speed up your application, in doing so you may end up making it difficult to move to another architecture. You never know when your supplier relationship may change, or the chip you are using becomes no-longer-obtainable. I worry about that a lot.

        I do like the variety of platforms available from multiple sources of ARM based micros, the fact that they share a common processing core is nice, having written code now for PIC, PIC32, MSP430, PowerPC, and the various flavours of ARM makes me appreciate the fact that ARM has provided the CMSIS stuff. (With its own set of warts… but things are improving over time.) That does make porting from one vendor’s chip to another quite a lot less painful. The availability of pretty decent low level ‘drivers’ for the various peripherals helps a lot too.

  13. A word about these mbed libraries. For one, they’re incredibly complex. No two chips can run the same code, so there are some 50 or 80 individual code paths. avr-libc is much easier to handle; switching between distinct AVRs is trivial compared to switching between ARM variants.

    ARM hardware isn’t fully supported by gcc: only the CPU core is, with no -mmcu flag for individual chips. That’s a big advantage for AVRs, and a big thank-you to the guys who implemented the -mmcu part in gcc.

    Another one is, MBED libraries are quite slow. Fast enough to blink an LED for fun, of course, but what if you want to do that at, say, 500 kHz? Then you have to write custom code. Porting FastIO from AVR to ARM brought a more than 20-fold speed increase and easier handling. Here’s the code:

    https://github.com/Traumflug/Teacup_Firmware/blob/master/pinio.h (upper half of the file)

    With all this lamenting, I moved to ARM, too. There’s no 100 MHz ATmega in sight, but a 100 MHz ARM already exists.

    1. mbed is not the be-all and end-all. It is simply a high-level abstraction layer that makes programming ARM chips easy; it is essentially an alternative to the Arduino platform. And yes, mbed code compiles to huge binary sizes, and using it will limit the speed of applications, as in the case of Arduino. But it speeds up development time significantly and reduces the need to look up 1000+ page reference manuals (the microcontroller reference manual as well as the official vendor-provided library reference). If you don’t need “register-based C” code speed, go with mbed. You can always drop down to the vendor library (middleware) level or the register level for sections of code that need to be further optimized.

      At the very least, mbed can be a starting point. It is also much better written and significantly more flexible than the Arduino libraries. Though I must say, API documentation on the mbed site can be outdated at times, and the mbed community isn’t as big as I’d like.

      Even if you don’t like mbed, still have a look at PlatformIO; it’s so much more. It supports all Arduino platforms, Teensy, ESP8266, ARM boards with vendor libraries, ARM boards with mbed, ARM boards with Arduino, and much more. Almost everything is supported, so don’t let my mbed bias put you off from checking it out. http://platformio.org/
