NTP, Rust, And Arduino Make A Phenomenal Frequency Counter

Making a microcontroller perform as a frequency counter is a relatively straightforward task, involving counting pulses over a measured time period. The maximum frequency, however, is limited to a fraction of the microcontroller’s clock speed, and the accuracy of the resulting instrument depends on that of the clock crystal, so it will hardly result in the best of frequency counters. It’s something [FrankBuss] has approached with an Arduino-based counter that offloads the timing question to a host PC, and thus claims atomic accuracy due to its clock being tied to a master source via NTP. The Rust code PC-side provides continuous readings whose accuracy increases the longer it is left counting the source. The example shown reaches 20 parts per billion after several hours reading a 1 MHz source.
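In outline, the PC-side arithmetic is simply pulses divided by NTP-disciplined elapsed time. A minimal sketch (hypothetical names, not [FrankBuss]’s actual code) shows why a long run pays off:

```rust
// Sketch of the PC-side frequency calculation (hypothetical names,
// not the project's actual code): the Arduino streams pulse counts,
// and the PC divides by NTP-disciplined wall-clock time.
fn frequency_hz(total_pulses: u64, elapsed_secs: f64) -> f64 {
    total_pulses as f64 / elapsed_secs
}

fn main() {
    // After an hour counting a nominal 1 MHz source:
    let pulses: u64 = 3_600_000_123;
    let elapsed = 3600.0_f64; // seconds, from the NTP-synced clock
    let f = frequency_hz(pulses, elapsed);
    // Fractional offset relative to nominal 1 MHz, in parts per billion:
    let ppb = (f - 1_000_000.0) / 1_000_000.0 * 1e9;
    println!("f = {:.3} Hz ({:.1} ppb)", f, ppb);
}
```

Any fixed timing or counting error gets diluted by the growing denominator, which is why the reading converges over hours.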

It’s clear that this is hardly the most convenient of frequency counters; however, we can see that it could find a use for anyone intent on monitoring the long-term stability of a source, and it could even be used with some kind of feedback to discipline an RF source against the NTP clock with the use of an appropriate prescaler. Its true calling, though, might come not in measurement but in the calibration of another instrument, which can be adjusted to match its reading once it has settled down. There’s surely no cheaper way to satisfy your inner frequency-standard nut.

28 thoughts on “NTP, Rust, And Arduino Make A Phenomenal Frequency Counter”

      1. I wouldn’t say Rust is friendly for beginners. I use it because it makes certain bugs hard to write, like null pointer dereferences or data races in multithreaded code. With C/C++ it is easy to make these bugs.

        But the same program could just as easily have been implemented in Python, and with shorter code. The advantage of Rust shows in bigger programs, but I’m still learning it as well.

    1. Not sure if it’s the best thing since sliced bread, but it’s certainly the best thing since C for a lot of applications where C is still king. The way I see it, Rust is becoming popular due to the fact that it beats everything else on speed and safety. Because it’s becoming popular, and because it’s not too cumbersome to use once you know it, people are bound to start using it for random one-off applications where Python would suffice, and that’s fine. It does the job.

  1. It appears that this currently uses only the count of pulses, which is why it takes longer to get good accuracy for low frequencies.

    There is a simple way to improve this: instead of calculating the elapsed time in the print loop, calculate it in the serial receive loop every time you have a delta != 0. Also initialize start_time the first time you receive delta != 0 and subtract one to get the correct count of intervals.

    In this way instead of calculating (number_of_edges / total_time_elapsed), you are calculating (number_of_edges / time_taken_by_whole_periods).
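The suggestion above can be sketched like this (hypothetical names, not the project’s actual code): open the window at the first serial packet carrying an edge, extend it at every later packet carrying edges, and divide whole intervals (edges minus one) by that span.

```rust
// Hedged sketch of the suggested improvement: N edges bound N-1
// whole periods, so subtract one and divide by the span between
// the first and last packet that actually carried edges.
fn freq_from_packets(packets: &[(f64, u64)]) -> Option<f64> {
    let mut start_time: Option<f64> = None;
    let mut last_time = 0.0;
    let mut edges: u64 = 0;
    for &(t, delta) in packets {
        if delta != 0 {
            if start_time.is_none() {
                start_time = Some(t); // first edge seen: window opens here
            }
            edges += delta;
            last_time = t;
        }
    }
    let start = start_time?;
    if edges < 2 || last_time <= start {
        return None; // need at least two edges to define an interval
    }
    Some((edges - 1) as f64 / (last_time - start))
}

fn main() {
    // A slow 0.5 Hz source sampled by 1 Hz serial packets of (time, delta):
    let packets = [(0.0, 0), (1.0, 0), (2.0, 1), (3.0, 0), (4.0, 1), (5.0, 0), (6.0, 1)];
    println!("{:?} Hz", freq_from_packets(&packets)); // Some(0.5)
}
```

This works best at low frequencies, where each packet carries at most one edge and the packet timestamp approximates the edge time.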

  2. Shame it only does up to ~7 MHz, as I would like to set up my 26 MHz OCXOs.

    But then you can easily add a divide-by-10 to the front end (and further decades) and even control it with the Arduino to autorange it (if f(in) > 6 MHz then interpose another divide-by-10, if f(in) < 0.4 MHz then remove one)….

    Then once I have a nicely trimmed OCXO, use that for the Arduino CPU clock, and NTP convergence will be quicker.
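That autorange rule can be sketched in a few lines (a hypothetical illustration, not anyone’s firmware):

```rust
// Hedged sketch of the decade-prescaler autorange idea: keep the
// frequency seen by the Arduino between 0.4 MHz and 6 MHz by
// switching divide-by-10 stages in and out.
fn adjust_prescaler(divider: u32, f_at_counter_hz: f64) -> u32 {
    if f_at_counter_hz > 6_000_000.0 {
        divider * 10 // interpose another divide-by-10
    } else if f_at_counter_hz < 400_000.0 && divider > 1 {
        divider / 10 // remove one stage
    } else {
        divider
    }
}

fn main() {
    // A 26 MHz OCXO, first seen with no prescaler:
    let mut div = 1u32;
    let f_in = 26_000_000.0;
    loop {
        let next = adjust_prescaler(div, f_in / div as f64);
        if next == div {
            break;
        }
        div = next;
    }
    println!("settled divider: {}", div); // 26 MHz / 10 = 2.6 MHz is in range
}
```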

      1. If you use a divide-by-10, your resolution is now 10 Hz. You’ll have to increase your sampling time to 10 seconds.

      There is a Microchip app note for the PIC16C (54??) frequency counter that uses its internal divider, BUT cleverly recovers the info inside the divider, i.e. finds out what the divider count is by pulsing it and observing when the count overflows.
      The PIC frequency counter is good to about 60 MHz.
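The recovery trick described above can be illustrated like this (hypothetical names, assuming a divide-by-256 prescaler): after the gate closes, feed extra pulses until the prescaler output toggles; the number of pulses needed reveals the residual count left inside it.

```rust
// Hedged sketch of the prescaler-residue recovery trick: with a
// modulo-256 prescaler, if it overflows after P extra pulses, it
// was holding 256 - P counts when the gate closed.
fn recover_residue(pulses_to_overflow: u32, modulus: u32) -> u32 {
    modulus - pulses_to_overflow
}

fn total_count(visible: u32, residue: u32, modulus: u32) -> u64 {
    // The timer only saw post-divider edges; scale back up and add residue.
    visible as u64 * modulus as u64 + residue as u64
}

fn main() {
    // Timer read 1234 overflows; 200 extra pulses tip the prescaler:
    let residue = recover_residue(200, 256);
    println!("residue = {}, total = {}", residue, total_count(1234, residue, 256)); // total = 315960
}
```

This is how the counter extends its resolution beyond what the visible timer register alone would allow.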

    1. It is still useful for many applications. For example, it already helped me to test some load capacitors for a crystal. With 18 pF the frequency was 500,009 Hz (clock output of a microcontroller, crystal frequency divided by 16). With 27 pF it was 499,993 Hz. So I specified 22 pF for the circuit. The crystal was rated at 50 ppm, but I assume at room temperature and when new it is much better; I might test it with a few more of the same type.

  3. I don’t think they understand how NTP works. It sets the internal clock of a PC to (typically) +/-10ms, and then lets it wander off on its own. The chances are that the clock is driven by an ordinary quartz crystal, not even a TCXO. Maybe once per day the NTP process fires, so to a human it looks accurate, but in practice 5-10 parts per million is good enough for this.

    Better to compare it against an accurate standard like a rubidium source or GPS derived clock.

    1. No, this is not how NTP works. It disciplines the PC clock frequency to the NTP server in such a way that the local PC time typically stays within +/-10 ms of the server time, at all times. On Linux, for example, there is a drift file which saves the clock drift and which is used to calculate the real time. You can read all the details in the RFC and the NTP reference implementation, but it is much better than setting the time just once per day. I guess the Windows time server uses a similar algorithm and protocol, maybe even NTP itself.

        1. It all depends on the type of internet connection. I couldn’t find data for the usual DSL or fibre connection, but at least for internet servers it looks like even in 2005 more than 90% were better than 10 ms according to a study, see the first answer here:


          Might be interesting if I buy a GPS clock (they are not that expensive; with an integrated NTP server over Ethernet you can get one for less than $300). Then it would be accurate to better than 1 ms, and I could compare it to a computer synchronized with an NTP server over the internet.

          In the last answer in this article someone mentions a local GPS with a PPS signal (one pulse per second) and claims 50 µs accuracy. I guess this is with Linux kernel driver support. This would be interesting as well. GPS hardware with PPS output is available for less than $100 at eBay.

          1. PS: Looks like you can get a GPS NTP time server for Ethernet for as little as $120, just search for “Network Time Server NTP Time Server for GPS” on eBay; there are multiple vendors for what looks like the same device. I just bought one, will report my measurements when I get it.

          2. A cheapo UBLOX GPS module can be had for sub-$10, and can be set to generate an arbitrary frequency governed by GPS on the 1PPS pin. Jitter is a bit high, but if you set it to, say, 10 MHz and compare counts, then when you reach 10M from the GPS you should be able to get sub-1-ppm accuracy from just the microcontroller, and closer to parts per billion over a couple of minutes.
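The comparison being described works out like this (a hedged sketch with hypothetical names): gate the device-under-test counter on a fixed number of edges from the GPS-governed reference, and the DUT count reads out its frequency directly.

```rust
// Hedged sketch of ratio counting against a GPS-disciplined
// reference: gate on ref_edges edges of a known ref_hz signal,
// and the DUT's edge count scales to its frequency.
fn dut_frequency_hz(dut_edges: u64, ref_edges: u64, ref_hz: f64) -> f64 {
    dut_edges as f64 * ref_hz / ref_edges as f64
}

fn main() {
    // Gate: 10 million edges of the GPS-governed 10 MHz timepulse (one second).
    let f = dut_frequency_hz(9_999_873, 10_000_000, 10_000_000.0);
    println!("DUT = {:.0} Hz", f); // 127 Hz low of nominal 10 MHz
}
```

Averaging over a longer gate (more reference edges) beats down both the 1PPS jitter and the one-count quantization error.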

        2. try chrony and type chronyc tracking, and chronyc sources. it will show clearly that multiple sources are being used and the time of last check in. chrony uses NTP. poll time is configurable.

          it’s easy to get under one second accuracy with internet only.

          asymmetric delay can cause problems, where send and receive delays are different and variably so.

          not really interested in armchair war. chrony performs much better than windows ntp implementation in my experience. very configurable.

          for a while, i ran chrony on raspberry pi with GPS Pulse Per Second wired to a GPIO pin. with a GPS lock all day with a good antenna, the agreement with outside sources was fantastic. fun project. good clock.

          reading time only from internet is not stratum 1, unless my memory fails me, which it might have. “atomic accuracy” is not the same as saying stratum 0/1 though. afaik an atomic clock is a stratum 0 time source and a computer directly connected to it is a stratum 1 time source, making this project in the article a stratum 2 or 3 time source, since it is synchronized to a computer that gets its time over the internet.

          neat project!

      1. The Microsoft NTP implementation was for many years infamous for how much low-hanging fruit it discarded. It might be better these days, I don’t know; I’m no longer responsible for PC labs :)

        From Wikipedia:
        In 1985, NTP version 0 (NTPv0) was implemented in both Fuzzball and Unix, and the NTP packet header and round-trip delay and offset calculations, which have persisted into NTPv4, were documented in RFC 958. Despite the relatively slow computers and networks available at the time, accuracy of better than 100 milliseconds was usually obtained on Atlantic spanning links, with accuracy of tens of milliseconds on Ethernet networks.

        There are actually three separate layers required for reasonable timekeeping on a computer.

        Determining the local clock skew (how many seconds it counts per second, so that it can be corrected), finding the times that various peers consider to be correct, and calculating a correct “consensus time”. All of these are important for precise timekeeping, and at least the reference NTP daemon, chrony, and OpenNTPD can be configured to do all three on Linux (and some other unixes).

        When I last needed to know, the Microsoft implementation ignored clock skew, averaged the peer times, and punted on round-trip estimates and “web of consensus” calculations. Achieving +/- 1 second was pretty easy, and that was good enough for Microsoft’s stated goal of making it easy to correlate logs across a fleet of Windows servers and clients.

        I have managed to keep a large fleet of linux servers, and clients (some of which were embedded and exposed to highly variable temperatures) synched to within 10ms, and the majority of them were within 1ms. There was a GPS-sourced PPS signal I could use during testing (not available during production), and it could be tied to a GPIO to trigger a logging event. It’s kind of impressive to see several thousand logs show up in the central aggregator, all with *exactly* the same millisecond timestamp :)

        The NTP protocol merely gives a way to communicate what different peers think the time is. It’s up to the given time-client’s algorithm to determine a “consensus time”, and different algorithms can achieve this with varying accuracies, depending on how many peers there are, how asymmetric the round-trip delay is, etc.

        There are various NTP algorithms that can be used. Most ntp daemons support more than one, and select whichever algorithm has the highest potential accuracy of the ones that currently have their preconditions met by the current list of active healthy peers. Some daemons simultaneously use all viable algorithms, then select the calculated time that has the least calculated uncertainty.

        And then, there’s SNTP. It’s Simple. The algorithm is simply to go get the time from the possible peers, discard all peers not from the best-available stratum, use a simplified, truncated “consensus time” algorithm based only on the few (possibly only one) peer remaining, and then just set the system clock to that once, and exit.

        It’s really simple, good enough for avoiding confusion with certificate expiries, etc. But, it’s generally quite terrible at figuring out real path delays, especially if most of the available peers are on a relatively low-latency-to-each-other network, with only the last hop to the local computer being high latency. In this case, the local time almost always lags the rest of the world by half the last-hop latency. For whatever reason, openntpd (at least) generally does better, as long as it’s allowed to run long enough.
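That truncated SNTP-style selection can be illustrated in a few lines (a hedged sketch of the idea described above, not any real daemon’s code):

```rust
// Hedged sketch of SNTP-style peer selection: keep only peers at
// the best available stratum, average their measured offsets, and
// apply the result once. No skew tracking, no consensus web.
fn sntp_offset(peers: &[(u8, f64)]) -> Option<f64> {
    // peers: (stratum, measured_offset_secs)
    let best = peers.iter().map(|&(s, _)| s).min()?;
    let kept: Vec<f64> = peers
        .iter()
        .filter(|&&(s, _)| s == best)
        .map(|&(_, o)| o)
        .collect();
    Some(kept.iter().sum::<f64>() / kept.len() as f64)
}

fn main() {
    // Two stratum-1 peers win; the stratum-2 and stratum-3 peers are discarded.
    let peers = [(2u8, 0.010), (1, 0.004), (1, 0.006), (3, -0.200)];
    if let Some(off) = sntp_offset(&peers) {
        println!("step clock by {:.3} s", off);
    }
}
```

Note there is no path-delay correction here, which is exactly why a high-latency last hop biases the result by half that hop’s delay.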

  4. I’m skeptical. The “atomic clock accuracy” just sounds like marketing speak. Having two processors with a link between them would seem to toss any accuracy out the window. Better to just calibrate the crystal on your ‘duino — and then recalibrate it now and then.

    But relentlessly pushing Rust yet again is enough to give the project merit. Keep it up.

    1. It could probably get some accuracy if it timestamped the messages between the hosts. But instead it just runs a loop of “read counter, serial.println(counter)”, which will have wildly varying runtimes, and then rather wildly varying latency to transmit (because of the buffer times in the pc, etc.).

  5. Back when building frequency counters was a thing, you’d beat the reference oscillator against WWV. Heathkit’s first frequency counter in 1971 suggested beating against a local broadcast station. It was easy because many reference clocks were at 1 MHz, and if not, a multiple of it, and the divider chain provided outputs at lower frequencies.

    It wasn’t perfect, only as good as the effort you made, and the frequency response of receivers made it harder to get close: near zero beat it stopped being a very low tone and became a slow pulse.

    Some people put a lot of effort into it. Not just that adjustment stage, but insulating the crystal, or putting it in an oven, or adding temperature-compensating capacitors. Some bought off-the-shelf temperature-compensated oscillators. Some built WWVB receivers and phase comparators to get really close. There were articles about how 1 MHz crystals weren’t the best choice; a higher-frequency crystal, cut differently, gave better temperature stability.

    Some people had been doing this before frequency counters, but the frequency counters made it so easy to measure frequency. Previously, you’d have a 100 kHz crystal oscillator, beat against WWV, so you could get accuracy every 100 kHz, or, through a lot of work, closer.

    Those cheap frequency counter boards, using a microprocessor, may have an odd reference frequency that can’t be beat against a radio standard. Or it’s a cheap crystal. Certainly no calibration on the assembly line, maybe no trimmer capacitor to adjust it. But still better than a hand-calibrated dial.

  6. In the example shown, the counts vary from one second to the next by ~1000 counts on a 1 MHz test signal.
    That’s stupendously awful.
    If your counter is performing that poorly, then you’ve got different things to focus on than getting “atomic clock accuracy”.

    1. Honestly I expected even worse numbers, 1000 counts per second for a 1 MHz signal is already 0.1% accurate. My Linux is not a hard realtime system and USB introduces a lot of jitter as well. Might be interesting to try RTLinux, and maybe on a Raspberry Pi with hardware serial ports, to avoid the USB stack. But as I wrote, if it runs for a few hours, 1000 counts doesn’t matter, and even 1 second NTP accuracy wouldn’t be that bad. For one hour this would introduce an error of less than 0.03% for the NTP part, and much less if you let it run for a day.
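The two figures quoted above check out; as a quick worked example (nothing project-specific, just the arithmetic):

```rust
// Checking the quoted figures: ~1000-count jitter on a 1 MHz
// signal, and a fixed 1 s timing error over a 1 h window.
fn main() {
    let jitter_pct = 1000.0 / 1_000_000.0 * 100.0;
    let ntp_pct = 1.0 / 3600.0 * 100.0;
    println!("per-second jitter: {:.1}%", jitter_pct); // 0.1%
    println!("1 s error over 1 h: {:.4}%", ntp_pct); // ~0.0278%, under 0.03%
}
```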

      1. in a past experiment with raspberry pi, gps receiver feeding pulse per second to gpio18, using chrony software for NTP. there was increased jitter in system clock offset, in the default non RT kernel. testing same setup with RT kernel resulted in even more narrow clock offset spread. so cheap and so accurate, with just a view of the sky.

        happy timekeeping!

  7. I’d be very surprised if the long-term stability of the CH340 (or worse, the blank-chip clones) and its ilk doesn’t play a factor in the results of this project.

    I can’t get those damned chips to stay connected properly for more than 5 minutes even at low baud rates with additional decoupling caps and a foil shield.

    I want to believe they serve a purpose but holy hell they are the worst option by far.

    CP2102 has never been a problem for me, and avoids the FTDI driver screwiness.
