Benchmarking Latency Across Common Wireless Links For MCUs

Although factors like bandwidth, power usage, and range in (kilo)meters are important considerations for wireless communication with microcontrollers, latency is another factor that deserves close attention. This is especially true for projects like controllers, where round-trip latency and an instant response to an input are essential, but where do you find latency numbers in datasheets? This is where [Michael Orenstein] and [Scott] over at Electric UI found a lack of data, especially when taking software stacks into account. In other words, it was time to do some serious benchmarking.

The question to be answered here was specifically how fast a one-way wireless user interaction can be across three payload sizes (12, 128, and 1024 bytes). The effective latency is measured from the moment the input is provided on the transmitter to the moment the receiver has processed it and triggered the relevant output pin. The internal latency was also measured by having a range of framework implementations respond to an external interrupt and drive a GPIO pin high. Even this test on an STM32F429 MCU already showed that, for example, the STM32 low-level (LL) framework is much faster than the stm32duino one.
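That internal-latency test boils down to "how quickly can the framework get from an edge on one pin to a high level on another." Here is a minimal sketch of that idea in stm32duino-style Arduino code; the pin numbers are illustrative and this is not the authors' actual test code. An LL or register-level version would replace the digitalWrite() call with a direct write to the GPIO port's BSRR register, which is a big part of why it measures so much faster.

```cpp
// Minimal interrupt-to-GPIO latency sketch (illustrative, not the article's exact code).
// A test signal on TRIGGER_PIN fires an interrupt; the ISR immediately drives
// OUTPUT_PIN high. A logic analyzer across the two pins measures the framework's
// interrupt-to-GPIO latency.

const int TRIGGER_PIN = 2;   // hypothetical input pin wired to the test signal
const int OUTPUT_PIN  = 3;   // hypothetical output pin watched by the analyzer

void onTrigger() {
  // stm32duino path: this call goes through the Arduino HAL.
  digitalWrite(OUTPUT_PIN, HIGH);
  // An STM32 LL / register-level version would instead write the port's BSRR
  // register directly, e.g. GPIOB->BSRR = (1u << 3);, skipping the HAL layers.
}

void setup() {
  pinMode(OUTPUT_PIN, OUTPUT);
  pinMode(TRIGGER_PIN, INPUT);
  attachInterrupt(digitalPinToInterrupt(TRIGGER_PIN), onTrigger, RISING);
}

void loop() {
  // Reset the output between trigger events so the measurement can be repeated.
  delay(10);
  digitalWrite(OUTPUT_PIN, LOW);
}
```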

Continue reading “Benchmarking Latency Across Common Wireless Links For MCUs”

Modeling Network Latency

The self-hosting community is an interesting and useful part of the Internet dedicated to removing one's own services and data from the cloud and hosting them on one's own servers, often on hardware that can be physically touched. With that kind of network usage, it's not uncommon for people to build their own routers, firewalls, and other network support systems from the ground up. And, if you go deep enough, maybe even a home lab dedicated to testing and improving the network's various layers. This piece of software helps simulate network latency to more accurately assess quality of service, performance, and optimization on one's own network.

The tool, called Speedbump, allows a network administrator to quickly build a test network where characteristics such as base latency and the shape and size of a latency wave can be configured. From there, a TCP proxy sends the network traffic through the virtual network, adding a set amount of delay to anything traveling across it. It can be installed (or built from source) on an existing installation or run from within a Docker container, so there are plenty of options depending on preference. It's also available as a library for any programs written in Go.
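The "base latency plus a wave" idea is simple to picture: every packet is held for a fixed base delay plus a periodic offset, so latency slowly swells and recedes the way it might on a congested link. The following is our own tiny illustration of that delay function, not Speedbump's actual code (which is written in Go), and the parameter names are made up.

```cpp
#include <chrono>
#include <cmath>
#include <cstdio>

// Illustrative delay generator in the spirit of a sine-wave latency profile:
// each packet is delayed by a base latency plus a sinusoidal offset.
// Parameter names and defaults are ours, not the tool's.
std::chrono::milliseconds addedDelay(double t_seconds,
                                     double base_ms = 100.0,
                                     double amplitude_ms = 50.0,
                                     double period_s = 60.0) {
  const double pi = 3.14159265358979323846;
  double wave = amplitude_ms * std::sin(2.0 * pi * t_seconds / period_s);
  return std::chrono::milliseconds(static_cast<long>(base_ms + wave));
}

int main() {
  // Print the delay a packet would see at a few points in the cycle.
  for (double t = 0.0; t <= 60.0; t += 15.0) {
    std::printf("t = %4.0f s -> %lld ms of added delay\n",
                t, static_cast<long long>(addedDelay(t).count()));
  }
  return 0;
}
```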

While this certainly has applications for home labs where self-hosting is done at a high level, it could have professional applications as well. For troubleshooting simpler network issues, we'd always recommend this tool, which allows a more comprehensive network test than the standard "ping" command, and if you haven't heard of self-hosting before, it's probably time to read this primer on it and build a hobby web server from scratch.

Latency Meter For Accurate Gaming

The gaming world experienced a bit of a resurgence in 2020 that is still seen in the present day. Even putting aside the effects of the pandemic, the affordability and accessibility of PC gaming have arguably never been better. Building a gaming PC can have its downsides, though, and one challenging issue to troubleshoot is input lag, or input latency. This is something that's best measured with standalone hardware, and if it's an issue on your setup, you may want to take a look at this latency meter.

Unlike other measurement devices that use the time between a mouse button input and the monitor's display of a bullet or shooting event, this one looks at mouse movement and the resulting change in the scene instead. This makes it much more versatile than other methods since it's independent of specific actions and can be used in any game without needing particular in-game events to perform the measurement. A phototransistor is placed on the monitor's top edge, and the Arduino-based device sends mouse commands to the computer while measuring the time between those commands and the shift in the image on the monitor.
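In rough terms the measurement loop is: jerk the mouse, start a timer, and wait for the light level at the edge of the screen to change. Below is a stripped-down Arduino-style sketch of that idea; it is our illustration of the method rather than the project's actual firmware, and the sensor pin and threshold are made up.

```cpp
#include <Mouse.h>  // requires a board with native USB (e.g. Leonardo / Pro Micro)

// Illustrative motion-to-photon latency loop, not the project's firmware.
const int SENSOR_PIN = A0;   // hypothetical phototransistor at the monitor's top edge
const int THRESHOLD  = 40;   // hypothetical ADC change that counts as "the scene moved"

void setup() {
  Mouse.begin();
  Serial.begin(115200);
}

void loop() {
  int baseline = analogRead(SENSOR_PIN);

  unsigned long start = micros();
  Mouse.move(50, 0);           // send a sudden mouse movement to the PC

  // Wait for the brightness under the sensor to change appreciably,
  // i.e. for the rendered scene to shift in response to the movement.
  while (true) {
    int now = analogRead(SENSOR_PIN);
    if (abs(now - baseline) > THRESHOLD) break;     // scene shifted
    if (micros() - start > 500000UL) break;         // 0.5 s timeout so we never hang
  }

  Serial.print("motion-to-photon latency (us): ");
  Serial.println(micros() - start);

  Mouse.move(-50, 0);          // move back and let things settle before the next sample
  delay(500);
}
```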

The project is open source, so with the right hardware it’s possible to build one to troubleshoot latency issues or just to learn more about a particular hardware configuration’s behavior. Arduinos and other microcontrollers have been doing all kinds of things by pretending to be human interface devices like this for a while now. One of our favorites of late was this effects pedal that replicates musical effects on mice and keyboards.

Computer Speed Gains Erased By Modern Software

[Julio] has an older computer sitting on a desk, and recorded a quick video with it showing how fast this computer can do seemingly simple things, like opening default Windows applications such as the command prompt and Notepad. Compared to his modern laptop, which struggles with even these basic tasks despite its impressive modern hardware, the antique machine seems like a speed demon. His videos set off a huge debate about why modern personal computers often appear slower than machines of the past.

After going through plenty of plausible scenarios for what is causing the slowdown, [Julio] seems to settle on a nuanced point regarding abstraction. Plenty of application developers are attempting to minimize the development time for their programs while maximizing the number of platforms they run on, which often means using a compatibility layer that abstracts the software away from the hardware and increases the overhead needed to run it. Things like this are possible thanks to the computing power of modern machines, but not without the cost of higher latency. Applications developed natively would be expected to have quite good response times, but fewer applications are developed natively now, including things that might seem like they otherwise would be. Notepad, for example, is now based on UWP.

While there are plenty of plausible reasons for these apparent slowdowns, it's likely a combination of many things; death by a thousand cuts. Desktop applications built with a browser compatibility layer, software companies cutting their own costs by perhaps not abiding by best programming practices or by simply leaning on modern computing power instead of optimizing, and of course the fact that modern software often needs more hardware resources to run safely and securely than its equivalents from the past.

Bufferbloat, The Internet, And How To Fix It

There’s a dreaded disease that’s plagued Internet Service Providers for years. OK, there are probably several diseases, but today we’re talking about bufferbloat: what it is, how to test for it, and finally what you can do about it. Oh, and a huge shout-out to all the folks working on this problem. Programmers and engineers like Vint Cerf, Dave Taht, Jim Gettys, and many more have been cracking this nut for our collective benefit.

When your computer sends a TCP/IP packet to another host on the Internet, that packet routes through your computer, through the network card, through a switch, through your router, through an ISP modem, through a couple ISP routers, and then finally through some very large routers on its way to the datacenter. Or maybe through that convoluted chain of devices in reverse, to arrive at another desktop. It’s amazing that the whole thing works at all, really. Each of those hops represents another place for things to go wrong. And if something really goes wrong, you know it right away. Pages suddenly won’t load. Your VoIP calls get cut off, or have drop-outs. It’s pretty easy to spot a broken connection, even if finding and fixing it isn’t so trivial.

That’s an obvious problem. What if you have a non-obvious problem? Sites load, but just a little slower than it seems like they used to. You know how to use a command line, so you try a ping test. Huh, 15.0 ms off to Google.com. Let it run for a hundred packets, and essentially no packet loss. But something’s just not right. When someone else is streaming a movie, or a machine is pushing a backup up to a remote server, it all falls apart. That’s bufferbloat, and it’s actually really easy to do a simple test to detect it. Run a speed test, and run a ping test while your connection is being saturated. If your latency under load goes through the roof, you likely have bufferbloat. There are even a few of the big speed test sites that now offer bufferbloat tests. But first, some history. Continue reading “Bufferbloat, The Internet, And How To Fix It”
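To get a feel for just how badly an over-sized buffer can inflate latency under load, a quick back-of-the-envelope calculation helps. The numbers below are ours and purely illustrative: a buffer that can hold 1 MB of queued packets, sitting in front of a 1 Mbit/s uplink, takes eight seconds to drain, and every new packet, including your pings, has to wait at the back of that queue.

```cpp
#include <cstdio>

// Back-of-the-envelope bufferbloat math: the worst-case queueing delay is simply
// the amount of data the buffer can hold divided by the rate of the link that
// drains it. The numbers below are illustrative, not measurements.
int main() {
  const double buffer_bytes   = 1'000'000.0;  // 1 MB of queued packets
  const double uplink_bits_ps = 1'000'000.0;  // 1 Mbit/s uplink

  double delay_s = (buffer_bytes * 8.0) / uplink_bits_ps;
  std::printf("A full %.0f-byte buffer on a %.0f bit/s link adds %.1f s of latency\n",
              buffer_bytes, uplink_bits_ps, delay_s);  // prints 8.0 s
  return 0;
}
```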

Guitar Effects With No (Unwanted) Delay

MIDI has been a great tool for musicians and artists since its invention in the 1980s. It provides a standard way to interface musical instruments with computers for easy recording, editing, and production of music. It does have a few weaknesses, though, namely that without some specialized equipment, the latency of the signals through the various connected devices can easily get too high to be useful in live performances. It’s not an impossible problem to surmount with the right equipment, as illustrated by [Philip Karlsson Gisslow].

The low-latency MIDI interface that he created is built around a Raspberry Pi Pico. It runs a custom library created by [Philip] called MiGiC, which is specifically built as a guitar-to-MIDI interface. The entire setup consists of a preamp that boosts the guitar’s signal up to 3.3 V before it is fed to the Pi, where the signal is sampled and converted to MIDI. From there, it sends the information to a PC, which is able to play the sound back quickly with no noticeable delay.
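At a high level, the Pico's job is to sample the preamped guitar signal with its on-board ADC, decide what is being played, and push a MIDI message out as quickly as possible. MiGiC's actual pitch tracking is far more sophisticated; the sketch below only shows the plumbing using the Pico SDK, with pin choices, the crude amplitude gate, and the UART MIDI output all being our own illustrative assumptions.

```cpp
#include "pico/stdlib.h"
#include "hardware/adc.h"
#include "hardware/uart.h"

// Bare-bones plumbing sketch: sample the preamped guitar on ADC0 (GPIO26) and
// emit a MIDI note-on over UART0 when the signal crosses a simple amplitude gate.
// The real pitch detection is far more involved; this only shows the data path.

static void midi_note_on(uint8_t note, uint8_t velocity) {
  uart_putc_raw(uart0, (char)0x90);       // note-on, MIDI channel 1
  uart_putc_raw(uart0, (char)note);
  uart_putc_raw(uart0, (char)velocity);
}

int main() {
  uart_init(uart0, 31250);                // classic MIDI baud rate
  gpio_set_function(0, GPIO_FUNC_UART);   // TX on GPIO0 (illustrative pin choice)

  adc_init();
  adc_gpio_init(26);                      // guitar signal, biased around mid-rail
  adc_select_input(0);

  const uint16_t midpoint = 2048;         // 12-bit ADC mid-rail
  const uint16_t gate     = 300;          // hypothetical amplitude threshold

  bool note_active = false;
  while (true) {
    uint16_t sample = adc_read();
    uint16_t swing  = sample > midpoint ? sample - midpoint : midpoint - sample;

    if (!note_active && swing > gate) {
      midi_note_on(60, 100);              // middle C stands in for real pitch detection
      note_active = true;
    } else if (note_active && swing < gate / 4) {
      midi_note_on(60, 0);                // note-on with velocity 0 acts as note-off
      note_active = false;
    }
  }
}
```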

[Philip] also had to do a lot of extra work to port the software to the Pi, which lacks many of the features of its originally intended hardware, a Mac or Windows machine. The results are impressive, especially at the end of the video where he uses the interface to play a drum machine via his guitar. And while MIDI is certainly a powerful application for a guitarist, we have also seen the Pi put to other uses in this musical realm as well.

Continue reading “Guitar Effects With No (Unwanted) Delay”

How Does Starlink Work Anyway?

No matter what you think of Elon Musk, it’s hard to deny that he takes the dictum “There’s no such thing as bad publicity” to heart. From hurling sports cars into orbit to solar-powered roof destroyers, there’s little that Mr. Musk can’t turn into a net positive for at least one of his many ventures, not to mention his image.

Elon may have gotten in over his head, though. His plan to use his SpaceX rockets to fill the sky with thousands of satellites dedicated to providing cheap Internet access ran afoul of the astronomy community, which has decried the impact of the Starlink satellites on observations, both in the optical wavelengths and further down the spectrum in the radio bands. And that’s with only a tiny fraction of the planned constellation deployed; once fully built out, they fear, Starlink will ruin Earth-based observation forever.

What exactly the final Starlink constellation will look like and what impact it will have on observations depend greatly on the degree to which it can withstand regulatory efforts and market forces. Assuming it survives and gets built out into a system that more or less resembles the current plan, what exactly will Starlink do? And more importantly, how will it accomplish its stated goals?

Continue reading “How Does Starlink Work Anyway?”