Network Time Protocol On The ESP32

Network Time Protocol (NTP) is one of the best ways to keep networked computers synchronized to the same time. It’s simple and lightweight, and it not only lets computers maintain a shared time standard, it also lets some computer manufacturers save money on hardware. The Raspberry Pi is perhaps the best-known example of a low-cost computer that skips the extra expense of a real-time clock (RTC). While the Pi sets up NTP essentially automatically, other microcontrollers like the ESP32 don’t, but with a little work they can be configured to use the same time standard.

This project requires the MicroPython implementation for the ESP32. MicroPython is a way of running Python code on microcontrollers and other embedded systems without all of the overhead that Python would normally require. Luckily, the NTP library is built right in, so once MicroPython is running on the ESP32 it’s nearly as easy as calling the library. Of course you will have to make sure there is an internet connection first, then grab the time, sync it to the machine, and set the timezone.
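
Putting those steps together looks something like the sketch below. This is a minimal example rather than [Bhavesh]’s exact code; the Wi-Fi credentials and UTC offset are placeholders you would swap in for your own network and timezone, and the built-in ntptime module sets the ESP32’s clock in UTC, so the timezone ends up as a simple offset applied when reading the time back.

```python
# Minimal sketch, assuming MicroPython on an ESP32 with ntptime built in.
# WIFI_SSID / WIFI_PASS and UTC_OFFSET are placeholders for your own setup.
import network
import ntptime
import time

WIFI_SSID = "your-ssid"      # placeholder
WIFI_PASS = "your-password"  # placeholder
UTC_OFFSET = 1 * 3600        # e.g. UTC+1, in seconds

# Bring up the station interface and wait for a connection
sta = network.WLAN(network.STA_IF)
sta.active(True)
sta.connect(WIFI_SSID, WIFI_PASS)
while not sta.isconnected():
    time.sleep(0.5)

# Grab the time from an NTP server and set the ESP32's clock (in UTC)
ntptime.settime()

# Apply the timezone offset by hand; MicroPython has no timezone database
local = time.localtime(time.time() + UTC_OFFSET)
print("Local time: {:04d}-{:02d}-{:02d} {:02d}:{:02d}:{:02d}".format(*local[:6]))
```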

For a bonus exercise, the project’s creator [Bhavesh] suggests tackling Daylight Saving Time, although this can be a surprisingly difficult problem to solve. In the meantime, there are a few other ways of giving a microcontroller like this one a clock. An RTC module is an obvious choice, but you can also get incredibly accurate time from a GPS module.

German Experiment Shows Horses Beating Local Internet Connections

These days, we’re blessed with wired and wireless networks that can carry huge amounts of data in the blink of an eye. However, some areas are underprovisioned with bandwidth, such as Schmallenberg-Oberkirchen in Germany. There, reporters ran a test last December to see which would be faster: the Internet, or a horse?

The long and the short of it is that Germany faces issues with disparate Internet speeds across the country. Some areas are well served by high-speed fiber services. However, others deemed less important by the free market struggle on with ancient copper phone lines and, consequently, lower speeds.

Thus, the experiment kicked off from the house of photographer [Klaus-Peter Kappest], who started an Internet transfer of 4.5 GB of photos. At the same time, a DVD was handed to messengers riding on horseback to a destination 10 kilometers away. The horses won the day, making the journey in about an hour, while the transfer over [Kappest’s] copper connection was still crawling along at only 61% complete.
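
Running rough numbers from those figures (4.5 GB, roughly an hour, 61% done) gives a sense of the effective bandwidth of the two “links”; the timings in the report are approximate, so treat this as a back-of-envelope sketch only.

```python
# Rough back-of-envelope using the figures from the story above:
# 4.5 GB of photos, about an hour of travel, copper line only 61% done.
payload_bits = 4.5e9 * 8          # 4.5 GB expressed in bits
seconds = 60 * 60                 # roughly an hour on horseback

horse_mbps = payload_bits / seconds / 1e6
copper_mbps = (payload_bits * 0.61) / seconds / 1e6

print(f"Horse-and-DVD 'bandwidth':       ~{horse_mbps:.0f} Mbit/s")
print(f"Copper line over the same hour:  ~{copper_mbps:.1f} Mbit/s")
```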

Obviously, it’s a test that can be gamed quite easily; over a greater distance the Internet connection would have won handily. Similarly, we’ve all heard the quote from [Andrew Tanenbaum]: “Never underestimate the bandwidth of a station wagon full of tapes hurtling down the highway.”

Notably, [Kappest’s] home actually had a fiber line sitting in the basement, but bureaucracy had stymied his attempts to get it connected. The stunt thus also served as a great way to draw attention to his plight, and that of others in Germany suffering similar issues in this digital age.

Top speeds for data transfer continue to rise; an Australian research team set a record last year of 44.2 terabits per second. Naturally, the hard part is getting that technology rolled out across a country. Sound off below with the problems you’ve faced getting a solid connection to your home or office.

Just How Did 1500 Bytes Become The MTU Of The Internet?

[Benjojo] got interested in where the magic number of 1,500 bytes came from, and shared some background on just how and why it seems to have come to be. In a nutshell, the maximum transmission unit (MTU) sets the most data that can be carried in a single network-layer transaction, and 1,500 is kind of a strange number in binary. For the average Internet user, this under-the-hood stuff doesn’t really affect one’s ability to send data, but it does matter from a network management point of view. Just where did this number come from, and why does it matter?

[Benjojo] looks at a year’s worth of data from a major Internet traffic exchange and shows, with the help of several graphs, that being stuck with a 1,500-byte MTU ceiling has a real impact on modern network efficiency and bandwidth usage, because the bandwidth spent on packet headers adds up rapidly when roughly 20% of all packets top out at the 1,500-byte limit. Naturally, solutions exist to improve this situation, but elegant and effective solutions to the Internet’s legacy problems tend to require instant buy-in and cooperation from everyone at once, meaning they end up going in the general direction of nowhere.
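
[Benjojo]’s figures come from real exchange traffic, but as a rough illustration of the per-packet cost, here’s a quick sketch assuming a plain TCP/IPv4 flow over Ethernet with no options or VLAN tags:

```python
# A quick illustration of why the 1,500-byte ceiling matters, assuming
# a plain IPv4 + TCP flow over Ethernet (no options, no VLAN tags).
ETH_OVERHEAD = 14 + 4 + 8 + 12   # header + FCS + preamble + inter-frame gap
IP_HEADER = 20
TCP_HEADER = 20

def overhead_fraction(mtu: int) -> float:
    """Fraction of on-wire bytes that are not application payload."""
    payload = mtu - IP_HEADER - TCP_HEADER
    wire_bytes = mtu + ETH_OVERHEAD
    return 1 - payload / wire_bytes

for mtu in (1500, 9000):   # standard Ethernet vs. a jumbo-frame MTU
    print(f"MTU {mtu}: ~{overhead_fraction(mtu) * 100:.1f}% of the wire is overhead")
```

Roughly 5% of the wire goes to framing and headers at a 1,500-byte MTU, versus under 1% with 9,000-byte jumbo frames, which is exactly the kind of overhead that piles up when a fifth of all packets are full-size.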

So where did 1,500 bytes come from? It appears to be a legacy value, originally derived from a combination of hardware limits and the need to pick a number that would play well on shared network segments: small enough not to cause too much transmission latency when the segment was busy, but large enough not to waste too much on header overhead. The picture is not entirely complete, though, and [Benjojo] asks that if you have any additional knowledge or insight about the 1,500-byte decision, please share it, because the manuals, mailing list archives, and other context from that time are either disappearing fast or already entirely gone.

Knowledge fading from record and memory is absolutely a thing that happens, but occasionally things get saved instead of vanishing into the shadows. That’s how we got IGNITION! An Informal History of Liquid Rocket Propellants, which contains knowledge and history that would otherwise have simply disappeared.

Can You Code Without Google?

Imagine for a moment that something has taken out your phone line, cell, and fibre connection so you have no internet. For some of you this may even be reality, but go with it and imagine yourself deciding to spend your unexpectedly disconnected lockdown time on that code project you always promised yourself. You pull out your laptop and fire up a code editor. Can you write code that works, without the Internet as a handy crib sheet? [Austin Z. Henley] couldn’t, when he tried writing a straightforward web app. He uses it as a hook to muse on the nature of learning, and it’s certainly a thought-provoking subject.

Constantly referring to online knowledge has become an indispensable habit for the engineer and the coder alike. This makes absolute sense, as it provides a reference library many orders of magnitude larger than anything an individual could possibly hold in their head.

This holds true whether the resource takes the form of code snippets from StackOverflow or GitHub, or data sheets from TI or Microchip. Even our calculations have moved online, as it’s often much quicker to punch, say, an impedance calculation into an online calculator. This is not necessarily a bad thing; instead it’s an enabler: skills that used to take months to master due to slow information access can now be acquired in an afternoon. But it does pose an interesting question: in the Internet age, what is the measure of an expert coder? Is it the ability to produce the code effectively with whatever help is available, or is it a guru-like mastery of the code? Maybe it’s both. If you have the Internet, give us your views in the comments.
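
That impedance example is the sort of calculation that fits in a few lines once you remember the formula; a quick sketch, with purely illustrative component values, might look like this:

```python
# The kind of quick calculation that often gets punted to an online tool:
# reactance of a capacitor and an inductor at a given frequency.
import math

def capacitive_reactance(freq_hz: float, cap_f: float) -> float:
    """Xc = 1 / (2 * pi * f * C), in ohms."""
    return 1 / (2 * math.pi * freq_hz * cap_f)

def inductive_reactance(freq_hz: float, ind_h: float) -> float:
    """Xl = 2 * pi * f * L, in ohms."""
    return 2 * math.pi * freq_hz * ind_h

# Example values: a 100 nF capacitor and a 10 uH inductor at 1 MHz
print(f"Xc at 1 MHz, 100 nF: {capacitive_reactance(1e6, 100e-9):.2f} ohms")
print(f"Xl at 1 MHz, 10 uH:  {inductive_reactance(1e6, 10e-6):.2f} ohms")
```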

Displaying Incoming Server Attacks By Giving Server Logs A Scoreboard

In the server world, it’s a foregone conclusion that ports shouldn’t be exposed to the greater Internet if they don’t need to be. There are malicious bots everywhere that will try to access anything connected to a network at random, and it’s best just to close those ports off completely. If you have to keep a port open, like 22 for SSH, it needs to be secured properly and monitored so that the administrator can keep track of it. Usually this is done in a system log and put to one side, but [Nick] wanted a more up-front reminder of just how many attempts were being made to log into his systems.

This build actively monitors attempts to log into his server on port 22 and notifies him via a numerical display and a series of LEDs. It’s based on a Raspberry Pi Zero W housed in a 3D-printed case, and works by interfacing with a program called fail2ban running on the server. fail2ban’s primary job is to block IP addresses that fail a certain number of login attempts, but being FOSS it can be adapted for situations like this. Some Python code running on the Pi gathers the data fed to it by fail2ban and drives the display.
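
The write-up doesn’t spell out every line, so here’s a rough sketch of the data-gathering side only; the log path and jail name below are the common fail2ban defaults rather than [Nick]’s exact setup, and the display hardware is left out entirely.

```python
# Minimal sketch: count how many addresses fail2ban's sshd jail has banned
# by scanning its log. Path and jail name are common defaults and may vary.
import re
from collections import Counter

LOG_PATH = "/var/log/fail2ban.log"   # default location on many systems
JAIL = "sshd"

# Matches lines like: ... [sshd] Ban 192.0.2.1
ban_re = re.compile(r"\[" + JAIL + r"\]\s+Ban\s+(\S+)")

def count_bans(path: str = LOG_PATH) -> Counter:
    """Return a Counter of banned IP addresses found in the log."""
    bans = Counter()
    with open(path) as log:
        for line in log:
            match = ban_re.search(line)
            if match:
                bans[match.group(1)] += 1
    return bans

if __name__ == "__main__":
    bans = count_bans()
    print(f"{sum(bans.values())} bans across {len(bans)} unique addresses")
```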

[Nick] saw immediate results, too. Within 24 hours there were 1,633 login attempts on a server with normal login enabled, a number promptly shown on the display. A video of the counter in action is linked below. You don’t always need a secondary display for real-time information on your server, though: this Pi server has its own display built right into its case.


Can Solid Save The Internet?

We ran an article on Solid this week, a project that aims to do nothing less than change the privacy and security aspects of the Internet as we use it today. Sir Tim Berners-Lee, the guy who invented the World Wide Web as a side project at work, is behind it, and it’s got a lot to recommend it. I certainly hope they succeed.

The basic idea is that instead of handing your photos, your content, and your thoughts over to social media and other sharing platforms, you store your personal data in a Personal Online Data (POD) container and grant those companies revocable access to use it on your behalf. It’s like hosting your own website’s content, but with an API for sharing parts of it elsewhere.

This is a clever legal hack. Today you sign over rights to your data so that Facebook and Co. can display it in your name, which gives them all the bargaining power and locks you into their service. If instead you simply gave Facebook a revocable access token, the power dynamic would shift. Today you can migrate your data and delete your Facebook account, but that’s a major hassle that few undertake.
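
As a toy illustration of that power shift (conceptual code only, not Solid’s actual API): the data never leaves your pod, and the platform only holds a grant that you can pull at any moment.

```python
# Toy model of the idea: data stays in your pod, platforms hold revocable grants.
class Pod:
    def __init__(self):
        self._data = {}       # your photos, posts, etc. live here
        self._grants = set()  # platforms currently allowed to read

    def store(self, key, value):
        self._data[key] = value

    def grant(self, platform: str):
        self._grants.add(platform)

    def revoke(self, platform: str):
        self._grants.discard(platform)   # access disappears instantly

    def read(self, platform: str, key):
        if platform not in self._grants:
            raise PermissionError(f"{platform} has no access grant")
        return self._data[key]

pod = Pod()
pod.store("photo.jpg", b"...")
pod.grant("examplebook")
print(pod.read("examplebook", "photo.jpg"))  # works while the grant stands
pod.revoke("examplebook")                    # the power shifts back to you
```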

Mike and I were discussing this on this week’s podcast, and we were thinking about the privacy aspects of PODs. In particular, whatever firm you use to socially share your stuff will still be able to snoop on you, map your behavior, and target you with ads and other content, because they see it while it’s in transit. But I failed to put two and two together.

The real power of a common API for sharing your content/data is that it will make it that much easier to switch from one sharing platform to another. This means that you could easily migrate to a system that respects your privacy. If we’re lucky, we’ll see competition in this space. At the same time, storing and hosting the data would be portable as well, hopefully promoting the best practices in the providers. Real competition in where your data lives and how it’s served may well save the Internet. (Or at least we can dream.)


That’s It, No More European IPv4 Addresses

When did you first hear concern expressed about the prospect of explosive growth of the internet exhausting the stock of available IP addresses? About twenty years ago, perhaps? Every computer directly connected to the internet must have its own unique address, and the IPv4 scheme used since the 1980s has a 32-bit address space that provides only 4,294,967,296 possibilities. All that growth means IPv4 addresses are now in short supply, and this week RIPE, the body which allocates them in Europe, announced that it has no more to give out. Instead of handing out new address blocks it will now provide only ones that have been relinquished, for example by companies that have gone out of business, and interested parties can join a waiting list.

Is the Internet dead, then? Hardly, because IPv6, the replacement for IPv4, has been with us for decades and has a much larger 128-bit address space. The problem is the huge installed base of IPv4 infrastructure, which has always been cited as the reason to delay adoption, so the vast majority of the internet-connected world has stuck with IPv4. Even in an IPv4 world there are ways to use addresses more efficiently, such as the network address translation (NAT) that many private networks use to share one address between many hosts, so it’s not quite curtains for your smart TV or IoT light bulb, even though the situation will not get any easier.
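
For a sense of just how much bigger that 128-bit space is, Python’s standard ipaddress module tells the story in a couple of lines; the /48 below is a documentation-range prefix chosen purely for illustration.

```python
# A quick sense of scale: the whole IPv4 space versus one routine IPv6 allocation.
import ipaddress

ipv4_all = ipaddress.ip_network("0.0.0.0/0")        # every IPv4 address there is
ipv6_site = ipaddress.ip_network("2001:db8::/48")   # a single site-sized IPv6 prefix

print(f"All of IPv4:       {ipv4_all.num_addresses:,} addresses")
print(f"One IPv6 /48 site: {ipv6_site.num_addresses:,} addresses")
# All of IPv4:       4,294,967,296 addresses
# One IPv6 /48 site: 1,208,925,819,614,629,174,706,176 addresses
```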

The mystery is why, after so many years, we still use IPv4 so much. Your home router, and millions like it, will pick up an IPv4 address from your broadband provider’s pool, and there seems little reason why it could not instead pick up an IPv6 address and provide a gateway between the two. The same goes for addresses outside the domestic arena, and even in our community we find that IPv6 networks at events are labelled as experimental. Perhaps this news will spur the change, but meanwhile we don’t expect to be using an IPv6 address day-to-day any time soon.

We know among Hackaday’s readership there will be people close to the coalface when it comes to IPv6 adoption. As always the comments are open, and we’d like to hear your views.

Header: Robert.Harker [CC BY-SA 3.0].