End Of The Eternal September, As AOL Discontinues Dial-Up

If you used the internet at home a couple of decades or more ago, you’ll know the characteristic sound of a modem connecting to its dial-up server. That noise is a thing of the past, as we long ago moved to fibre, DSL, or wireless providers that are always on. It’s a surprise, then, to read that AOL are discontinuing their dial-up service at the end of September this year: partly for the reminder that AOL are still a thing, and partly because in 2025 they still operate a dial-up service at all.

There was a brief period in which, instead of going online via the internet itself, the masses were offered online services through walled gardens of corporate content. Companies such as AOL and CompuServe bombarded consumers with floppies and CD-ROMs containing their software, and even Microsoft dipped a toe in the market with the original MSN service before famously pivoting the whole organisation in favour of the internet in mid-1995. CompuServe was absorbed by AOL, which morphed into the most popular consumer dial-up ISP over the rest of that decade. The dotcom boom saw them snapped up for an exorbitant price by Time Warner, only for the expected bonanza never to arrive, and by 2003 the AOL name had been dropped from the parent company’s letterhead. Over the next decade it dwindled into something of an irrelevance, and it is now owned by Yahoo! as a content and email portal. This dial-up service seems to have been the last gasp of its role as an ISP.

So the Eternal September, so-called because the arrival of AOL users on Usenet felt like an everlasting version of the moment a fresh cadre of undergrads arrived each September, may, at least in an AOL sense, finally be over. If you’re one of the estimated 0.2% of Americans still using a dial-up connection, don’t despair, because there are a few other ISPs still (just) serving your needs.

Microsoft’s New Agentic Web Protocol Stumbles With Path Traversal Exploit

If the term ‘NLWeb’ first brought to mind an image of a Dutch internet service provider, you’re probably not alone. What it actually is – or tries to become – is Microsoft’s vision of a parallel internet protocol with which website owners and application developers can integrate whatever LLM-based chatbot they desire. Unfortunately for Microsoft, the NLWeb protocol has just suffered its first major security flaw.

The flaw is an absolute doozy: a basic path traversal vulnerability that allows an attacker to use appropriately formatted URLs to traverse the filesystem of the remote, LLM-hosting system and extract keys and other sensitive information. Although Microsoft has already patched it, no CVE was assigned, which raises the question of just how many more elementary bugs like this may be lurking in the protocol and its associated software.
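We haven’t reproduced the NLWeb code here, but the class of bug is easy to illustrate. Below is a minimal sketch in Python – purely illustrative, not the NLWeb implementation, with a made-up base directory – showing how naively mapping a URL path onto the filesystem lets ../ sequences walk out of the intended directory, and how resolving and checking the path first closes the hole:

```python
from pathlib import Path

# Made-up directory, purely for the sake of the example
BASE_DIR = Path("/srv/nlweb/static").resolve()

def serve_file_naive(requested: str) -> bytes:
    # Vulnerable: a request for "../../home/user/.env" escapes BASE_DIR entirely
    return (BASE_DIR / requested).read_bytes()

def serve_file_checked(requested: str) -> bytes:
    # Resolve the final path, then refuse anything landing outside BASE_DIR (Python 3.9+)
    target = (BASE_DIR / requested).resolve()
    if not target.is_relative_to(BASE_DIR):
        raise PermissionError("path traversal attempt blocked")
    return target.read_bytes()
```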

As for why a website or application owner might be interested in NLWeb, the marketing pitch appears to position it as an alternative to integrating a local search function. This way any website or app can have its own ChatGPT-style search functionality that is theoretically restricted to just that site’s content, instead of chatbot-loving customers going to ChatGPT or an equivalent site to ask their questions there.

Even aside from the strong ‘solution in search of a problem’ vibe, it’s worrying that right from the outset it seems to introduce pretty serious security issues that suggest a lack of real testing, not to mention apparent ignorance of the fact that a lack of user input sanitization is a leading cause of widely exploited CVEs. It is unknown whether GitHub Copilot was used to write the affected codebase.

VRML And The Dream Of Bringing 3D To The World Wide Web

You don’t have to be a Snow Crash or Tron fan to be familiar with the 3D craze that characterized the rise of the Internet and the World Wide Web in particular. From phrases like ‘surfing the information highway’ to sectioning websites as if to represent 3D real-life equivalents or sorting them by virtual streets like Geocities did, there has always been a strong push to make the Internet a more three-dimensional experience.

This is perhaps not so strange considering that we humans are ourselves 3D beings used to interacting in a 3D world. Surely we could make this fancy new ‘Internet’ technology do something more futuristic than connect us to text-based BBSes and serve HTML pages with heavily dithered images?

Enter VRML, the Virtual Reality Modelling Language, whose 3D worlds would surely herald the arrival of a new Internet era. Though neither VRML nor its successor X3D became a hit, they did leave their marks and are arguably the reason why we have technologies like WebGL today.

Continue reading “VRML And The Dream Of Bringing 3D To The World Wide Web”


ProtoWeb: Browsing The Information Superhighway Like It’s 1995

Feeling nostalgic? Weren’t around in the 90s but wonder what it was like? ProtoWeb has you covered! Over on his YouTube channel [RetroTech Chris] shows you how to browse the web like it’s 1995.

The service that [RetroTech Chris] introduces lives on the web over at protoweb.org. The way it works is that you configure your browser to use the service’s proxy server; the service then intercepts your browsing activity and serves you old content from its cache. For some supported sites, you will instead see present-day content, but presented in the format you would have seen in the 90s. Once you have configured your browser to use the ProtoWeb proxy, you can navigate to http://www.inode.com/, where you will find a directory listing of the sites which have been archived or emulated within the service.
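Browsers aside, any HTTP client can be pointed at the same proxy. Here is a minimal Python sketch of fetching that directory page through ProtoWeb; the proxy address below is a placeholder, so substitute whatever host and port protoweb.org currently lists:

```python
import requests  # third-party library: pip install requests

# Placeholder address – replace with the proxy host and port published on protoweb.org
PROTOWEB_PROXY = "http://proxy.example.invalid:8080"

proxies = {"http": PROTOWEB_PROXY}

# Plain unencrypted HTTP, just as a 1995 browser would have spoken
response = requests.get("http://www.inode.com/", proxies=proxies, timeout=30)
print(response.status_code)
print(response.text[:500])  # the first few hundred characters of the retro page
```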

In his video [RetroTech Chris] actually demos some of the old web browsers running on old hardware, which is a very good recreation of what things were like. If you want the most realistic experience you can even configure ProtoWeb to slow down your network connection to the speed of a 56k dial-up modem. There are some things from the 90s that we miss, but waiting for websites to load isn’t one of them!

We had a look in our own archive to see how far back we here at Hackaday could go, and we found our first post, from September 2004: Radioshack Phone Dialer – Red Box. A red box! Spicy.

Continue reading “ProtoWeb: Browsing The Information Superhighway Like It’s 1995”

BhangmeterV2 Answers The Question “Has A Nuke Gone Off?”

You might think that a nuclear explosion is not something you need a detector for, but clearly not everyone agrees. [Bigcrimping] has not only built one, the BhangmeterV2, but he has its output publicly posted at hasanukegoneoff.com, in case you can’t go through your day without checking if someone has nuked Wiltshire.

The Bhangmeter is based on an off-the-shelf “nuclear event detector”, the HSN-1000L by Power Device Corporation.

The HSN 1000 Nuclear Event Detector at the heart of the build. We didn’t know this thing existed, never mind that it was still available.

Interfacing to the HSN-1000L is very easy: you give it power, and it gives you a pin that stays HIGH unless it detects the characteristic gamma ray pulse of a nuclear event. That gamma ray pulse occurs at the very beginning of a “nuclear event”, preceding the EMP by some microseconds and the blast wave by perhaps many seconds, so the HSN-1000 series seems to be aimed at triggering an automatic shutdown that might help preserve electronics in the event of a nuclear exchange.

[Bigcrimping] has wired the HSN-1000L to a Raspberry Pi Pico 2 W to create the BhangmeterV2. In the event of a nuclear explosion, it will log the time the nuclear event detector’s pin goes low, and the JSON log is pushed to the cloud, hopefully to a remote server that won’t be vaporized or bricked by EMP along with the BhangmeterV2. Since it only detects the gamma ray pulse, the BhangmeterV2 is only sensitive to nuclear events within line-of-sight, which is really not where you want to be relative to a nuclear event. Perhaps V3 will include other detection methods – maybe even a 3D-printed neutrino detector?
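[Bigcrimping]’s actual firmware isn’t reproduced here, but the core of such a detector fits in a few lines of MicroPython on the Pico: watch for the falling edge, timestamp it, and append a JSON record for later upload. The GPIO pin number and log filename below are assumptions for the sake of illustration:

```python
# Illustrative MicroPython sketch for a Raspberry Pi Pico – not the BhangmeterV2 firmware
import json
import time
from machine import Pin

# Assumed wiring: HSN-1000L output on GP15, idles HIGH, drops LOW on a gamma pulse
ned_pin = Pin(15, Pin.IN, Pin.PULL_UP)
LOG_FILE = "events.json"

def log_event(pin):
    # Record the moment the detector output went low
    record = {"event": "gamma_pulse_detected", "ticks_ms": time.ticks_ms()}
    with open(LOG_FILE, "a") as f:
        f.write(json.dumps(record) + "\n")
    # A real build would now push this record to a remote server over Wi-Fi

# Fire the handler on the HIGH-to-LOW transition of the detector output
ned_pin.irq(trigger=Pin.IRQ_FALLING, handler=log_event)

while True:
    time.sleep(1)  # everything interesting happens in the interrupt handler
```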

If you survive the blast this project is designed to detect, you might need a radiation detector to deal with the fallout. For identifying exactly what radionuclide contamination is present, you might want a gamma-ray spectrometer.

It’s a sad comment on the modern world that this hack feels both cold-war vintage and relevant again today. Thanks to [Tom] for the tip; if you have any projects you want to share, we’d love to hear from you whether they’d help us survive nuclear war or not.

Wayback Proxy Lets Your Browser Party Like It’s 1999

This project is a few years old, but it might be appropriate to cover it late since [richardg867]’s Wayback Proxy is, quite literally, timeless.

It does, more-or-less, what it says on the tin: it is an HTTP proxy that retrieves pages from the Internet Archive’s Wayback Machine, or from the Oocities archive of old Geocities sites. (Remember Geocities?) It is meant to sit on a Raspberry Pi or similar SBC between you and the modern internet. A line in a config file lets you specify the exact date. We found this via YouTube in a video by [The Science Elf] (embedded below, for those of you who don’t despise YouTube) in which he attaches a small screen and dial to his Pi to create what he calls the “Internet Time Machine” using the Wayback Proxy. (Sadly [The Science Elf] did not see fit to share his work, but it would not be difficult to recreate the Python script that edits config.json.)
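Indeed, the dial really only has to rewrite that one date field and restart the proxy, so a stand-in script is short. Here is a sketch of the idea in Python; the key name and file location are assumptions, so check them against your own config.json before using it:

```python
import json
import sys

CONFIG_PATH = "config.json"  # path to your Wayback Proxy config – adjust to your install

def set_date(new_date: str) -> None:
    """Point the proxy at a different day, e.g. '19990101'."""
    with open(CONFIG_PATH) as f:
        config = json.load(f)
    config["DATE"] = new_date  # key name assumed; match whatever your config.json actually uses
    with open(CONFIG_PATH, "w") as f:
        json.dump(config, f, indent=4)

if __name__ == "__main__":
    set_date(sys.argv[1])  # usage: python set_date.py 19990101
```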

What’s the point? Well, if you have a retro-computer from the late 90s or early 2000s, you’re missing out on a key part of the vintage experience without access to the vintage internet. This was the era when desktops were being advertised as made to get you “Online”. Using Wayback Proxy lets you relive those halcyon days – or live them for the first time, for the younger set. At least, those parts of the old internet which could be archived, which sadly isn’t everything. Still, for a nostalgia trip, or a living history exhibit to show the kids? It sounds delightful.

Of course it is possible to hit up the modern web on a retro PC (or on a Mac Plus), as long as you’re not caught up in an internet outage, as this author recently was.

Continue reading “Wayback Proxy Lets Your Browser Party Like It’s 1999”

A graph of download speed over time, peaking at around 8 MB/s: the repeated ramp-up and sharp drop-off traces out the characteristic sawtooth.

A Quick Introduction To TCP Congestion Control

It’s hard to imagine now, but in the mid-1980s, the Internet came close to collapsing under the number of users congesting its networks. Computers would send packets as quickly as they could, and when a congested router dropped one, the transmitting computer would immediately send it again. The result was an unintentional denial-of-service that degraded performance significantly. [Navek]’s recent video goes over TCP congestion control, the solution to this problem that allows our much larger modern internet to work.

In a 1987 paper, Van Jacobson described a method to restrain congestion: in a TCP connection, each side of the exchange estimates how much data it can have in transit (sent, but not yet acknowledged) at any given time. The sender and receiver exchange their estimates and use the smaller of the two as the window. During the initial slow-start phase, each time a full window of packets is delivered successfully, the size of the window doubles.

Once packets start dropping, the sender cuts the size of the window, typically in half, then slowly and linearly ramps it back up until packets start dropping again. This is called additive increase/multiplicative decrease, and the overall result is that the size of the window hovers somewhere around the limit. Any time congestion starts to occur, the computers back off. One way to visualize this is to look at a graph of download speed: the process of periodically hitting the congestion limit and cutting back from it tends to create a sawtooth wave.
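To make that sawtooth concrete, here is a toy simulation – not real TCP, with no round-trip times or receiver window, and loss modelled as a fixed link capacity – of slow start followed by additive increase/multiplicative decrease, in Python:

```python
# Toy AIMD simulation: just enough to produce the characteristic sawtooth
LINK_CAPACITY = 64   # window size (in segments) beyond which packets start to drop
ROUNDS = 40          # number of round trips to simulate

window = 1.0
slow_start = True

for rtt in range(ROUNDS):
    if window > LINK_CAPACITY:    # congestion: a packet was dropped this round
        window = window / 2       # multiplicative decrease
        slow_start = False
    elif slow_start:
        window = window * 2       # slow start: double the window each round trip
    else:
        window = window + 1       # congestion avoidance: additive increase
    print(f"RTT {rtt:2d}: window = {window:6.1f} segments")
```

Plot the printed window sizes against time and you get exactly the ramp-and-drop shape shown in the graph above.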

[Navek] notes that this algorithm has rather harsh behavior, and that there are new algorithms that both recover faster from hitting the congestion limit and take longer to reach it. The overall concept, though, remains in widespread use.

If you’re interested in reading more, we’ve previously covered network congestion control in more detail. We’ve also covered [Navek]’s previous video on IPv5. Continue reading “A Quick Introduction To TCP Congestion Control”