A lot of microcontroller projects out there need some sense of wall-clock time. Whether you’re making (yet another) crazy clock, logging data, or just counting down the time left for your tea to steep, having access to human time is key.
The simplest solution is to grab a real-time clock (RTC) IC or module. And there’s good reason to do so, because keeping accurate time over long periods is very hard. One second per day is 1/86,400, or around eleven and a half parts per million (ppm), and it’s tricky to beat twenty ppm without serious engineering.
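That seconds-per-day-to-ppm conversion comes up constantly when comparing clocks, so here it is as a quick Python helper (the function name is ours):

```python
SECONDS_PER_DAY = 24 * 60 * 60  # 86,400

def ppm_from_drift(seconds_per_day):
    """Convert clock drift in seconds per day to parts per million."""
    return seconds_per_day / SECONDS_PER_DAY * 1e6

print(f"{ppm_from_drift(1.0):.2f} ppm")    # one second per day is about 11.57 ppm
print(f"{ppm_from_drift(0.432):.2f} ppm")  # half a second per day is about 5 ppm
```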
Good RTC ICs like Maxim’s DS3231, used in the Chronodot, can do that. They pair a crystal oscillator with temperature-compensation logic to get as accurate as five parts per million, or under half a second per day. They even have internal calendar functions, taking care of leap years, the day of the week, and so on. The downside is the cost: temperature-compensated RTCs run around $10 in single quantities, which can break the budget for simple hacks or for installations that need multiple modules. But there is a very suitable alternative.
What we’re looking for is a middle way: a wall-time solution for a microcontroller project that won’t break the bank (free would be ideal) but that performs pretty well over long periods of time under mellow environmental conditions. The kind of thing you’d use for a clock in your office. We’ll first look at the “obvious” contender, a plain-crystal oscillator solution, and then move on to something experimental and touchy, but free and essentially perfectly accurate over the long term: using power-line frequency as a standard.
If you don’t need very high accuracy, perhaps gaining or losing a minute or two per month is no big deal, then there are a ton of cheap crystal oscillator solutions that’ll work for you. In fact, you may even be clocking your microcontroller with one right now: that’s certainly the case with Arduinos, which have a normal-looking crystal on board that’s probably good for somewhere around 10-50 ppm.
The problem with crystal oscillators is temperature. While any given crystal may keep a stable frequency to within a couple parts per million at a single temperature, this frequency changes when the temperature changes. Indeed, if you want very high stability you can hold the temperature of the crystal constant by putting it in a small insulated “oven” and you’re set.
This temperature dependence is why the manufacturers’ specifications are given as ppm over a given temperature range. And the same goes for the simple RTC units that use the ubiquitous 32.768 kHz crystal. But even if the frequency is reasonably stable, say you’re keeping the circuit indoors in a climate-controlled building, the crystal oscillator won’t necessarily be accurate. That is, the frequency of your 8.000 MHz crystal might be, at some temperature, a very stable 8.001 MHz. If you want serious accuracy, you’ll want to calibrate this.
Note that this is also true of the internal RC oscillators that are found in many microcontrollers, although the situation is more complicated there because the frequency depends on operating voltage as well as temperature. If you’re able to hold both temperature and voltage constant, you can get decent performance from the RC oscillators as well.
Calibration isn’t conceptually hard, it’s just a pain in the astable vibrator. If you have patience and a good time source, you can get very good results for indoor use by running the crystal clock for 24 hours or longer and figuring out how many seconds fast or slow your microcontroller runs and adjusting accordingly in code: adding or subtracting leap-milliseconds periodically as necessary. If you repeat this calibration over a few days, possibly picking a hot day and a cold day, you can get decent year-round accuracy for almost no added cost except for a bit of firmware overhead.
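As a concrete example of the bookkeeping, suppose a 24-hour run shows your clock gained 2.5 seconds. A sketch of the leap-millisecond math (the function name is ours) looks like this:

```python
SECONDS_PER_DAY = 24 * 60 * 60

def leap_ms_interval(drift_seconds, run_seconds=SECONDS_PER_DAY):
    """Given that the clock drifted drift_seconds over run_seconds,
    return how often (in seconds) to apply a one-millisecond
    correction: subtract a millisecond if the clock runs fast,
    add one if it runs slow."""
    drift_ms = abs(drift_seconds) * 1000
    return run_seconds / drift_ms

# A clock running 2.5 s fast per day needs to drop 1 ms every ~34.6 s.
print(round(leap_ms_interval(2.5), 1))  # 34.6
```

The firmware side is then just a counter that swallows (or inserts) one millisecond tick each time that interval elapses.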
But all of this calibration and temperature-control is a drag. And it’s worse still if you have to calibrate multiple devices. It’s much better to let someone else take care of accurate timekeeping for you. Someone who could broadcast a timing signal to all of your devices simultaneously. Someone like your local power utility.
In many parts of the world, the AC power runs at a very well-regulated 50 or 60 Hz. While the timing can vary a lot over the day, depending on things like what kind of generators are providing your power and how much aggregate demand there is on the system, power utilities try to maintain a ridiculously accurate long-run average frequency. So much so that power-line-driven clocks can rival the best TCXOs for accuracy over the long run.
For a reliable wall-time solution, using a transformer to step the wall voltage down to something the microcontroller can handle is the way to go. If you’re already running your project off of a wall wart, you might even be able to open up the case and tap the low-voltage AC directly as it comes off the transformer. You might additionally want to run the resulting 50 Hz or 60 Hz sine wave through a lowpass filter to smooth out power-line noise so that you get nice clean low-to-high logic transitions. Maybe even square up the sine waves with a Schmitt trigger.
But what if we went even more minimal? We’ve often noticed that the 50 Hz hum of the power line (50 Hz being the European standard) works its way into any high-impedance input that gets left unconnected, so why not treat the noise as the signal and try to run the microcontroller’s real-time clock off of radiated power-line noise?
We hooked up an alligator clip to an input pin on an AVR ATmega168 and used the chip’s normal input conditioning circuitry in place of any external parts. When the voltage on the pin is high, the input reads a one. Done. Microcontroller logic inputs have high enough input impedance that the voltage imposed on the pin swings back and forth with the received power-line noise signal.
The logic transitions were fairly glitchy, and you can see why from looking at the output of our “antenna” on the scope. We went for quick and dirty debouncing in code: after the first rising transition, we simply ignored the input for one microsecond to ride out the effects of higher-frequency noise.
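The same hold-off idea is easy to prototype on the desktop. Here’s a minimal Python sketch (all names are ours) that counts rising edges in a stream of sampled logic levels while ignoring transitions during a dead time after each accepted edge; on the AVR the equivalent is a couple of lines in the pin-polling loop:

```python
def count_edges(samples, dead_samples):
    """Count rising edges in a stream of 0/1 samples, ignoring all
    transitions for dead_samples samples after each accepted edge
    (a simple software dead-time / glitch filter)."""
    edges = 0
    holdoff = 0
    prev = samples[0]
    for s in samples[1:]:
        if holdoff > 0:
            holdoff -= 1          # still in the dead time: ignore the input
        elif prev == 0 and s == 1:
            edges += 1            # accepted rising edge
            holdoff = dead_samples
        prev = s
    return edges

glitchy = [0, 1, 0, 1, 1, 1, 0, 0, 1, 1]  # one noisy burst, then a clean edge
print(count_edges(glitchy, 0))  # 3 edges: the glitch gets double-counted
print(count_edges(glitchy, 2))  # 2 edges: the hold-off swallows the glitch
```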
With that sorted out, we got an LED blinking at what looked like one second intervals. But how accurate is the powerline signal anyway, and will it be reliable enough to use as a clock?
To find out, we had the microcontroller send a byte to our laptop once per “second” over a USB/serial connection. On the laptop side, a Python routine received the incoming byte and logged the system time when it arrived. Now, the laptop’s internal clock isn’t all that accurate either, so the laptop was synced to an NTP server.
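The original logging script isn’t reproduced here, but a minimal stand-in looks something like the following. The port name, baud rate, and log file name are all placeholders, and it assumes the third-party pyserial package for the serial port:

```python
import time

def log_line(t):
    """Format one log record: the host's idea of the arrival time of a
    tick byte, in seconds with millisecond resolution."""
    return "%.3f" % t

def main(port="/dev/ttyUSB0", baud=9600):
    import serial  # third-party: pip install pyserial
    with serial.Serial(port, baud) as s, open("ticks.log", "a") as f:
        while True:
            s.read(1)  # block until the next once-per-"second" byte arrives
            f.write(log_line(time.time()) + "\n")
            f.flush()

if __name__ == "__main__":
    main()
```

Post-processing is then just a matter of diffing successive timestamps against the expected one-second spacing.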
It might be better to use a GPS’s pulse-per-second output as the reference timing source, but we have bad GPS reception in the office. It should all be good enough. But bear in mind that the difference between the power-line clock signal and the laptop’s time will necessarily be the sum of both parts’ errors.
You’d expect the reception of the power-line noise “signal” to be the Achilles’ heel of this method. After all, the antenna consisted of a random length of alligator-clip lead just draped over our desk, but it seemed to work very well. At 50 Hz, one cycle is twenty milliseconds, so a missed cycle would appear as a sudden twenty-millisecond jump in the computer-microcontroller time discrepancy.
We saw only five jumps of that size during the 24-hour period that we left the experiment running. One hundred milliseconds lost per day is 1.16 ppm, so even this completely ghetto antenna is not the limiting factor here.
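Spotting those missed cycles in the log is straightforward: difference successive sync-error readings and flag any step close to one full cycle. A rough Python version of that check (function name and tolerance are ours):

```python
def count_dropped_cycles(errors, cycle=0.020, tol=0.005):
    """Scan successive sync-error readings (in seconds) and count the
    steps that look like a whole missed (or extra) mains cycle."""
    jumps = 0
    for a, b in zip(errors, errors[1:]):
        if abs(abs(b - a) - cycle) < tol:
            jumps += 1
    return jumps

errors = [0.000, 0.001, 0.021, 0.022, 0.023]  # one 20 ms step in the middle
print(count_dropped_cycles(errors))  # 1
```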
What we did see was a lot of gradual shifting back and forth around our initial starting synchronization. (Note that there’s no significance to the “zero” in time sync error; it’s just the initial difference between our laptop’s clock and the microcontroller’s.)
Power-line time generally runs a few seconds slow during the day; it’s worst in the evening when everyone gets home from work, but it catches up at night when the generators are more lightly loaded. In this sample, the clock got as much as 3.6 seconds ahead in the morning and as much as 2 seconds behind at dinner time. As long as the power company ends up with an average of 4,320,000 cycles per day, which they aim to do, the clock will stay accurate year-round.
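That target is easy to check: at 50 Hz there are exactly 50 × 86,400 cycles in a day, and the firmware’s whole job is to count them:

```python
MAINS_HZ = 50
SECONDS_PER_DAY = 24 * 60 * 60

cycles_per_day = MAINS_HZ * SECONDS_PER_DAY
print(cycles_per_day)  # 4320000

def seconds_from_edges(edge_count, hz=MAINS_HZ):
    """Turn a count of received power-line cycles into elapsed seconds."""
    return edge_count / hz

print(seconds_from_edges(cycles_per_day))  # 86400.0, one full day
```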
The slow variation in clock speed over time means that if you need very precise timing from minute to minute, the powerline method may not work for you.
On the other hand, if you simply need a microcontroller to accurately count up the right number of seconds in a day, using received power-line noise seems to be nearly ideal. Even though the clock ran fast or slow most of the time, it should average out well enough to beat a good TCXO for long-run drift if the utilities are doing their job.
So there you have it: an accurate clock powered by the electric utility company, made out of a crocodile clip hanging over the side of the desk and a few lines of code.
The main downside of using radiated power-line noise instead of a dedicated RTC chip is the lack of the battery backup that most RTCs have these days. And of course it breaks down when the wall power goes out. For a fully featured calendar application you’ll have to take care of leap years and the like yourself, but there are standard libraries for that (AVR version).
Any of you out there use the power-line frequency for timekeeping? How’s it working for you? Any tips or tricks? Does it still work in the US? Can you think of a better antenna design or perhaps some input conditioning circuitry that would improve on our results? Post up in the comments.