Listen To The Sound Of The Crystals

We’re all used to crystal resonators — they provide pretty accurate frequency references for oscillators with low enough drift for most of our purposes. As the quartz equivalent of a tuning fork, they work by vibrating at their physical resonant frequency, which means that just like a tuning fork, it should be possible to listen to them.

A crystal in the MHz range might be difficult to listen to, but for a 32,768 Hz watch crystal it’s possible with a standard microphone and sound card. [SimonArchipoff] has written a piece of software that graphs the frequency of a watch crystal oscillator, to enable small adjustments to be made for timekeeping.

Assuming a microphone and sound card that aren’t too awful, it should be easy enough to pick up the oscillation, so the real challenge lies in keeping accurate time. The measured frequency is compared against the sound card’s sample clock, which is by no means perfect, but the trick lies in using the operating system clock to calibrate it. That master clock can in turn be measured against online NTP sources, and thus becomes a known quantity.
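As a rough sketch of how such a measurement might look (this is illustrative, not [SimonArchipoff]’s actual code): the 32,768 Hz tone sits above the 24 kHz Nyquist limit of a 48 kHz capture, so we assume a recording made at 96 kHz, and a hypothetical rate_correction factor stands in for the NTP-derived calibration of the sound card clock.

```python
import numpy as np

def estimate_tone_hz(samples: np.ndarray, nominal_rate: float,
                     rate_correction: float = 1.0) -> float:
    """Estimate the dominant frequency in a recording.

    rate_correction is the ratio of the sound card's true sample rate
    to its nominal one, e.g. derived from the NTP-disciplined OS clock.
    """
    windowed = samples * np.hanning(len(samples))
    spectrum = np.abs(np.fft.rfft(windowed))
    peak = int(np.argmax(spectrum))
    # Parabolic interpolation around the peak bin for sub-bin resolution.
    a, b, c = spectrum[peak - 1], spectrum[peak], spectrum[peak + 1]
    offset = 0.5 * (a - c) / (a - 2 * b + c)
    return (peak + offset) * (nominal_rate * rate_correction) / len(samples)

# e.g. ten seconds captured at a nominal 96 kHz:
# drift_ppm = (estimate_tone_hz(recording, 96_000) / 32_768 - 1) * 1e6
```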

We think of quartz clocks as pretty good, but he points out how little it takes to cause significant drift over month-scale timings: a mere 20 parts per million of error adds up to nearly a minute a month. If your quartz clock’s accuracy is important to you, perhaps you should give it a look. You might need it for your time reference.


Header: Multicherry, CC BY-SA 4.0.

Crunching The News For Fun And Little Profit

Do you ever look at the news, and wonder about the process behind the news cycle? I did, and for the last couple of decades it’s been the subject of one of my projects. The Raspberry Pi on my shelf runs my word trend analysis tool for news content, and since my journey from curious geek to having my own large corpus analysis system has taken twenty years, it’s worth a second look.

How Career Turmoil Led To A Two-Decade Project

A hanging sign surrounded by ornate metalwork, bearing the legend "Cyder house": very much a minority spelling. Colin Smith, CC BY-SA 2.0.

In the middle of the 2000s I had come out of the dotcom crash mostly intact, and was working for a small web shop. When they went bust I was casting around as one does, and spent a while as a Google quality rater while I looked for a new permie job. These teams are employed by the search giant through temporary employment agencies, and in loose terms their job is to be the trained monkeys against whom the algorithm is tested. The algorithm chose X, and if the humans also chose X, the algorithm is probably getting it right. Being a quality rater is not in any way a high-profile job, but with the big shiny G on my CV I soon found myself in demand from web companies seeking some white-hat search engine marketing expertise. What I learned mirrored my lesson from a decade earlier in the CD-ROM business, that on the web as in any other electronic publishing medium, good content well presented has priority over any black-hat tricks.

But what makes good content? Forget an obsession with stuffing bogus keywords into the text, and instead talk about the right things, and do it authoritatively. What are the right things in this context? If you are covering a subject, you need to do so using the right language; that which the majority uses, rather than language only you use. I can think of a bunch of examples which I probably shouldn’t talk about, but an example close to home for me comes in cider. In the UK, cider is a fermented alcoholic drink made from apples, and as a craft cidermaker of many years’ standing I have a good grasp of its vocabulary. The accepted spelling is “Cider”, but there’s an alternative spelling of “Cyder” used by some commercial producers of the drink. It doesn’t take long to realise that online, hardly anyone uses cyder with a Y, and thus pages concentrating on that word will do less well than those talking about cider.

A graph of the word “football” versus the word “soccer” in British news: we Brits rarely use the word “soccer” unless there’s a story about the Club World Cup in America.

I started to build software to analyse language around a given topic, with the aim of discerning the metaphorical cider from the cyder. It was a great surprise a few years later to discover that I had invented for myself the already-existing field of computational linguistics, something that would have saved me a lot of time had I known about it when I began. I was taking a corpus of text and computing the frequencies and collocates (words that appear alongside each other) of the words within it, and from that I could quickly see which wording mattered around a subject, and which didn’t. This led seamlessly to an interest in what the same process would look like for news data with a time axis added, so I created a version which harvested its corpus from RSS feeds. Thus began my decades-long project.
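The core computation is simple enough to sketch in a few lines of Python. This is purely an illustration of the frequency-and-collocates idea rather than my actual tool, and the feed URL is a placeholder:

```python
from collections import Counter
import re

def frequencies_and_collocates(text: str, window: int = 4):
    """Count word frequencies, and pairs of words appearing near each other."""
    words = re.findall(r"[a-z']+", text.lower())
    freq = Counter(words)
    pairs = Counter()
    for i, word in enumerate(words):
        # Every word within `window` positions counts as a collocate.
        for neighbour in words[i + 1 : i + 1 + window]:
            pairs[(word, neighbour)] += 1
    return freq, pairs

# Harvesting the corpus from RSS, e.g. with the feedparser library:
# import feedparser
# entries = feedparser.parse("https://example.com/news/rss").entries
# freq, pairs = frequencies_and_collocates(" ".join(e.summary for e in entries))
```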


The Hackaday Summer Reading List: No AI Involvement, Guaranteed

If you have any empathy at all for those of us in the journalistic profession, have some pity for the poor editor at the Chicago Sun-Times, who let through an AI-generated summer reading list made up of novels which didn’t exist. The fake works all had real authors and thus looked plausible, so we expect that librarians and booksellers throughout the paper’s distribution area were left scratching their heads as to why they weren’t in the catalogue.

Here at Hackaday we’re refreshingly meat-based, so with a guarantee of no machine involvement, we’d like to present our own summer reading list. None of them are new works, but we think you’ll find them as entertaining, informative, or downright useful as we did when we read them. What are you reading this summer?

A Feast Of 1970s Gaming History, And An 8080 Arcade Board

Sometimes a write-up of a piece of retrocomputing hardware goes way beyond the hardware itself and into the industry that spawned it, and thus it is with [OldVCR]’s resurrection of a Blasto arcade board from 1978. It charts the history of Gremlin Industries, a largely forgotten American pioneer in the world of arcade games, and though it’s a long read it’s well worth it.

The board itself uses an Intel 8080, and is fairly typical of microcomputer systems from the late 1970s. Wiring it up requires a bit of detective work, particularly around triggering the 8080’s reset, but eventually it’s up and playing with a pair of Atari joysticks. The 8080 is a CPU we rarely see here.

The history of the company is fascinating, well researched, and entertaining. What started as an electronics business moved into wall games, early coin-op electronic games, and thence into the arcade segment with an 8080-based system that’s the precursor of the one here. They even released a rather impressive computer system based on the same hardware, but since it was built into a full-sized desk it didn’t sell well. For those of us new to Gremlin Industries the surprise comes at the end: they were bought by Sega and became that company’s American operation. In that sense they never went away, as their successor is very much still with us. Meanwhile, if you have an interest in the 8080, we have been there for you.

Track Your GitHub Activity With This E-Ink Display

If you’re a regular GitHub user you’ll be familiar with the website’s graphical calendar display of activity as a grid. For some of you it will show a hive of activity, while for others it will be a bit spotty. If you’re proud of your graph though, you’ll want to show it off to the world, and that’s where [HarryHighPants]’ Git Contributions E-Ink Display comes in. It’s a small desktop appliance with a persistent display that shows the current version of your GitHub graph.

At its heart is an all-in-one board with the display and an ESP32 on the back, plus a small Li-Po cell. It’s all housed in a smart 3D-printed case. The software is the real trick, with a handy web interface from which you can configure your GitHub details.
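We haven’t verified exactly how the firmware grabs the data, but GitHub’s GraphQL API does expose the contribution calendar directly, so a fetch could look something like this sketch, with the user name and token as placeholders:

```python
import requests

QUERY = """
query($login: String!) {
  user(login: $login) {
    contributionsCollection {
      contributionCalendar {
        weeks { contributionDays { date contributionCount } }
      }
    }
  }
}
"""

def contribution_days(user: str, token: str) -> list[dict]:
    """Return one dict per day, each with a date and a contribution count."""
    r = requests.post(
        "https://api.github.com/graphql",
        json={"query": QUERY, "variables": {"login": user}},
        headers={"Authorization": f"Bearer {token}"},
        timeout=10,
    )
    r.raise_for_status()
    weeks = r.json()["data"]["user"]["contributionsCollection"][
        "contributionCalendar"]["weeks"]
    return [day for week in weeks for day in week["contributionDays"]]
```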

It’s a simple enough project, but it joins a growing collection which use an ESP32 as a static information display. The chip is capable of more though, as shown by this much more configurable device.

It’s 2025, And We Still Need IPv4! What Happens When We Lose It?

Some time last year, a weird thing happened in the hackerspace where this is being written. The Internet was up, and was blisteringly fast as always, but only a few websites worked. What was up? Fortunately with more than one high-end networking specialist on hand it was quickly established that we had a problem with our gateway’s handling of IPv4 addresses, and normal service was restored. But what happens if you’re not a hackerspace with access to the dodgy piece of infrastructure and you’re left with only IPv6? [James McMurray] had this happen, and has written up how he fixed it.

His answer came in using a Wireguard tunnel to his VPS, and NAT mapping the IPv4 space into a section of IPv6 space. The write-up goes into extensive detail on the process should you need to follow his example, but for us there’s perhaps more interest in why here in 2025, the loss of IPv4 is still something that comes with the loss of half the Internet. As of this writing, that even includes Hackaday itself. If we had the magic means to talk to ourselves from a couple of decades ago our younger selves would probably be shocked by this.
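The write-up has the specifics of his setup; as a general illustration, embedding the whole IPv4 space into a section of IPv6 space is standard NAT64 fare (RFC 6052), and a minimal sketch using its well-known /96 prefix looks like this:

```python
import ipaddress

def embed_ipv4(v4: str, prefix: str = "64:ff9b::") -> ipaddress.IPv6Address:
    """Embed an IPv4 address in the low 32 bits of an IPv6 /96 prefix."""
    base = int(ipaddress.IPv6Address(prefix))
    return ipaddress.IPv6Address(base | int(ipaddress.IPv4Address(v4)))

print(embed_ipv4("192.0.2.1"))  # -> 64:ff9b::c000:201
```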

Perhaps the answer lies in the inescapable conclusion that while IPv6 answers an address space problem of concern to many in technical spaces, it neither solves anything of concern to most internet users, nor is it worth the switch for so much infrastructure when mitigations such as NAT make IPv4 address exhaustion less of a problem. Will we ever entirely lose IPv4? We’d appreciate your views in the comments. For readers anxious for more, it’s something we looked at last year.

Finally, An Extension To Copyright Law We Can Get Behind

Normally when a government extends a piece of copyright law we expect it to be in the favour of commercial interests with deep pockets and little care for their consumers. But in Denmark they do things differently, it seems, which is why they are giving Danes the copyright over their own features such as their faces or voices. Why? To combat deepfakes, meaning that if you deepfake a Dane, they can come after you for big bucks, or indeed kroner. It’s a major win, in privacy terms.

You might of course ask whether it’s now risky to photograph a Dane. We are not lawyers here, of course, but like any journalists we have to possess a working knowledge of how copyright operates, and we are guessing that the idea in play here is that of passing off. If you take a photograph of a Volkswagen you will have captured the VW logo on its front, but the car company will not sue you, because you are not passing off something that isn’t a Volkswagen as the real thing. So it will be with Danes: if you take a picture of their now-copyrighted face in a crowd you are not passing it off as anything but a real picture of them, so we think you should be safe.

We welcome this move, and wish other countries would follow suit.


Header: Pope Francis, Midjourney, Public domain. (Which is a copyright story all of its own!)