The Modern WWW, Or: Where Do We Want To Go From Here?

From the early days of ARPANET until the dawn of the World Wide Web (WWW), the internet was primarily the domain of researchers, teachers and students, while hobbyists ran their own BBS servers that you could dial into but which weren't connected to the internet. Pitched in 1989 by Tim Berners-Lee while working at CERN, the WWW was intended as an information management system that would provide standardized access to documents, using HTTP as the transfer protocol, with HTML – inspired by the SGML standard – and later CSS used to create formatted documents. Even better, it allowed WWW forums and personal websites to pop up, enabling the eternal joy of web rings, animated GIFs and forums on any conceivable topic.

During the early 90s, as the newly opened WWW began to gain traction with the public, the Mosaic browser formed the backbone of the WWW browsers ('web browsers') of the time, including Internet Explorer – which licensed the Mosaic code – and Netscape Navigator, written from scratch by Mosaic's original developers. With the WWW standards set by the – Berners-Lee-founded – World Wide Web Consortium (W3C), the stage appeared to be set for an open and fair playing field for all. What we got instead was the brawl referred to as the 'browser wars', which – though the combatants have changed – continues to this day.

Today it isn't Microsoft's Internet Explorer that rules the WWW and sets the course for new web standards; instead we have Google's Chrome browser partying like it's the early 2000s while wearing an IE mask. With former competitors like Opera and Microsoft having switched to the Chromium browser engine that underlies Chrome, what does this tell us about the chances for alternative browsers and the future of the WWW?


[Image: a cat skull in a domed security camera enclosure, green LEDs illuminating its eye sockets, sitting on a table with other skulls and rocks.]

Cat Skull For Internet Connection Divination

[Emily Velasco] has an internet provider that delivers sub-par connectivity. Instead of repeatedly refreshing a browser tab to test whether the network is up, [Emily] decided to create an internet status monitor by embedding indicator lights in a cat skull…for some reason.

The electronics are straightforward, with the complete parts list consisting of an Arduino Nano 33 IoT connected to a pair of RGB LEDs through 50 Ω resistors. The Nano attempts to connect to a known site (in this case, the Google landing page) every two seconds and sets the LEDs to green if it succeeds or red if it fails.
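The polling logic is simple enough to sketch out before it ever touches a microcontroller. Here's a rough desktop equivalent in Python; the URL and two-second interval come from the write-up, while everything else is illustrative rather than [Emily]'s actual firmware:

```python
import time
import urllib.request

CHECK_URL = "https://www.google.com"  # the 'known site' from the project
INTERVAL_S = 2                        # poll every two seconds

def internet_is_up(url: str = CHECK_URL, timeout: float = 1.5) -> bool:
    """Return True if an HTTP request to the known site succeeds."""
    try:
        with urllib.request.urlopen(url, timeout=timeout):
            return True
    except OSError:  # covers DNS failures, timeouts, refused connections
        return False

while True:
    # The real build drives the RGB LEDs here instead of printing.
    print("GREEN (up)" if internet_is_up() else "RED (down)")
    time.sleep(INTERVAL_S)
```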

The cat skull is thankfully a replica, 3D printed by one of [Emily]'s Twitter acquaintances, and the whole project was housed in a domed security camera enclosure, with the LEDs mounted inside the skull to create a "brain in a jar" effect.

The source is available on GitHub for those wanting to take a look. We’ve featured internet connectivity status indicators in the form of traffic lights here before, as well as various network status monitors and videoconferencing indicator lights.

Archiving The Entirety Of DPReview Before It’s Gone

Despite the popular adage about everything on the internet being there forever, every day pages of information and sometimes entire websites are lost to the sands of time. With the imminent shutdown of the DPReview website, nearly 25 years of reviews and specifications of cameras and related content are at risk of vanishing. Also lost will be the content of forum posts, which can still be requested from DPReview staff until April 6th. All because the owner of the site, Amazon, is looking to cut costs.

As announced on r/photography, the Archive.org team is busy trying to download as much of the site as possible, but due to bottlenecks may not finish in time. One way around these bottlenecks is the Archive Team Warrior, a virtual machine or Docker image that runs on volunteers' distributed systems. In early April an archiving run using these distributed systems is planned, in a last-ditch attempt to retain as much of the decades of content as possible.

The archived content will be made available in the WARC (Web ARChive) format, in order to retain as much information as possible, including metadata and different versions of content.
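WARC files store the raw HTTP requests and responses from a crawl alongside their metadata, which is why so little is lost. As a rough illustration of what that looks like in practice, here's how the records can be walked with the warcio Python library (the filename is hypothetical):

```python
from warcio.archiveiterator import ArchiveIterator  # pip install warcio

with open("dpreview-crawl.warc.gz", "rb") as stream:  # hypothetical filename
    for record in ArchiveIterator(stream):
        if record.rec_type == "response":
            # Each response record retains the original URL and HTTP headers.
            uri = record.rec_headers.get_header("WARC-Target-URI")
            status = record.http_headers.get_statuscode()
            print(status, uri)
```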

The ARPANET Of Things And CMU’s History Of Networked Soda Machines

When the computer science department of Carnegie Mellon University expanded in the 1970s, this created a massive issue for certain individuals who now had to walk quite a distance to the one and only Coke machine. To their dismay, after braving a few flights of stairs they would often find that the Coke machine (refilled at random intervals by grad students) was empty, or worse, held still-warm Coke bottles. What happened next is detailed by the Coke machine itself, straight from CMU's servers.

A follow-up by the IBM Industrious blog adds more feedback from those responsible for what we now refer to as an IoT device, though technically it was an 'ARPANET of Things' (AoT) device at the time, this being the pre-internet era. For the bottle-based 1970s machine, students installed microswitches to keep track of the fill state of each column and how long the bottles had been inside. After about three hours, newly added bottles were registered as being 'COLD', which could be queried from the department's PDP-10 mainframe (CMUA) or via ARPANET using the finger command on the special 'coke' user account with finger coke@cmua.
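The finger protocol behind that query is charmingly minimal: open TCP port 79, send a username followed by CRLF, and read back whatever text the server cares to return (RFC 1288). Here's a little Python sketch of such a client; CMUA is long gone, so the host in the example is purely illustrative:

```python
import socket

def finger(user: str, host: str, port: int = 79) -> str:
    """Send a finger query: the username plus CRLF over TCP port 79."""
    with socket.create_connection((host, port), timeout=10) as sock:
        sock.sendall(f"{user}\r\n".encode("ascii"))
        chunks = []
        while (data := sock.recv(4096)):
            chunks.append(data)
    return b"".join(chunks).decode("ascii", errors="replace")

# The historical query: point it at a host that still speaks finger today.
print(finger("coke", "cmua"))
```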

As time moved on and the Coke machine was replaced in the early 90s with a newer (and very much non-IoT) model, students would once again modify it, much to the chagrin of the Coke company's maintenance people, which meant reverting the modifications prior to each maintenance appointment. The new tracking system used the machine's empty-column lights to provide much the same information as on the 1970s machine, except now collected by a PC-XT class computer that also tracked the status of the M&M snack machine nearby.

Whether CMU CS students can still query such highly relevant information today is not mentioned, but we presume it is an issue of paramount importance that has been addressed in an expedient fashion over the intervening years.

(Thanks to [Daniel T Erickson] for the tip)

End Of An Automation Era As Twitter Closes Its Doors To Free API Access

Over the last few months since Elon Musk bought Twitter there has been a lot of comment and reaction, but not much of relevance to Hackaday readers. Today, though, that has changed, with an announcement from the company that as of February 9th it will end its free API tier. It's of relevance here because Twitter has become one of those glue items for connected projects and has appeared in many featured works on this site. A week's notice of a service termination is exceptionally short, so expect to see a lot of the Twitter bots you follow disappearing.

Twitter bot owners have the option of paying to continue with Twitter, or rebuilding their service to use a Mastodon instance such as botsin.space. If the fediverse is new to you, then the web is not short of tutorials on how to do this.
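For those making the move, posting to Mastodon is refreshingly simple: a status is just an authenticated POST to the instance's REST API. A minimal sketch in Python; the instance is the one mentioned above, and the token is a placeholder you'd generate under your own account's development settings:

```python
import requests

INSTANCE = "https://botsin.space"  # the bot-friendly instance mentioned above
TOKEN = "YOUR-ACCESS-TOKEN"        # placeholder: created in the account's development settings

def toot(text: str) -> None:
    """Post a status via Mastodon's /api/v1/statuses endpoint."""
    resp = requests.post(
        f"{INSTANCE}/api/v1/statuses",
        headers={"Authorization": f"Bearer {TOKEN}"},
        data={"status": text},
    )
    resp.raise_for_status()

toot("Hello, fediverse!")
```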

We feel that Twitter will be a poorer place without some of the creative, funny, or interesting bots which have enriched our lives over the years, and we hope that the spam bots don't simply pay for API access and stay. We can't help feeling that this is a misguided step though, because when content is the hook to bring in the users who are the product, throwing out an entire category of content seems short-sighted. We're not so sure about it as a move towards profitability either, because the payback from a successful social media company is never profit but influence. In short: social media companies don't make money, they make the conversation itself, and that conversation can sometimes be worth more than money if you can avoid making a mess of it.

If the bots from our field depart for Mastodon, we look forward to seeing whether the new platform offers any new possibilities. Meanwhile if your projects don’t Toot yet, find out how an ESP32 can do it.

Header: D J Shin, CC BY-SA 3.0.

Punycodes Explained

When you're restricted to ASCII, how can you represent more complex things like emojis or non-Latin characters? One answer is Punycode, which is a way to represent Unicode characters in ASCII. However, while you could technically encode the raw bits of Unicode into ASCII characters with something like Base64, there's a snag. The Domain Name System (DNS) generally requires that hostnames are case-insensitive, so whether you type in HACKADAY.com, HackADay.com, or just hackaday.com, it all goes to the same place – and since Base64 depends on the difference between upper- and lowercase letters, it's unusable for hostnames.

[A. Costello] at the University of California, Berkeley proposed the idea of Punycode in RFC 3492 in March 2003. It outlines a simple algorithm where all regular ASCII characters are pulled out and stuck on one side with a separator in between, in this case a hyphen. Then the Unicode characters are encoded and stuck on the end of the string.

First, the numeric codepoint and position in the string are multiplied together. Then the number is encoded as a Base-36 (a-z and 0-9) variable-length integer. For example, a greeting and the Greek for thanks, “Hey, ευχαριστώ” becomes “Hey, -mxahn5algcq2”. Similarly, the beautiful city of München becomes mnchen-3ya.
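Python happens to ship with codecs for both raw Punycode and the DNS-oriented IDNA form, so these examples are easy to reproduce. A quick sketch; note that raw Punycode, unlike the DNS form, passes spaces and punctuation through untouched:

```python
# Raw Punycode: ASCII characters first, a hyphen separator, then the encoded rest.
print("münchen".encode("punycode"))         # b'mnchen-3ya'
print("Hey, ευχαριστώ".encode("punycode"))

# The DNS flavor adds the 'xn--' prefix that marks an encoded hostname label.
print("münchen".encode("idna"))             # b'xn--mnchen-3ya'
```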

The GitHub Silverware Drawer Dilemma, Or: Finding Active Repository Forks

A fortunate reality of GitHub and similar sites is that projects abandoned by their maintainer are often continued by someone else who forked the project. Unfortunately, the ease of forking also means that GitHub projects tend to have a lot of forks, with popular projects having hundreds of them. Since GitHub has elected not to provide a way to filter or sort these forks, finding the most active fork can be rather harrowing.

In addition, a popular project’s dead repository tends to score higher in search results than replacement forks. For these particular situations a couple of very useful websites and browser add-ons have been developed. The Lovely Forks add-on by [Utkarsh Upadhyay] seeks to insert information on forks that are notable or newer than the repository one is looking at.

Meanwhile, the Active Forks project by [Samar Dhwoj Acharya] provides a sortable list of a project's forks when given a GitHub repository name. This helps enormously when trying to find the freshest forks. It's similar to the Useful Forks project, which provides a web-based interface in addition to a Chrome extension. Do note that these queries count towards the GitHub API rate-limits, so you may need to add an access token.
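Under the hood, tools like these lean on the GitHub REST API, which can list a repository's forks directly. A minimal sketch of the same idea in Python; the repository is just an example, and unauthenticated requests are limited to 60 per hour:

```python
import requests

def freshest_forks(owner: str, repo: str, count: int = 10):
    """Return a repository's forks, most recently pushed first."""
    url = f"https://api.github.com/repos/{owner}/{repo}/forks"
    # Only the first page (up to 100 forks) is fetched in this sketch.
    resp = requests.get(url, params={"sort": "newest", "per_page": 100})
    resp.raise_for_status()
    forks = sorted(resp.json(), key=lambda f: f["pushed_at"], reverse=True)
    return [(f["pushed_at"], f["stargazers_count"], f["full_name"])
            for f in forks[:count]]

for pushed, stars, name in freshest_forks("octocat", "Hello-World"):
    print(f"{pushed}  {stars:>5}  {name}")
```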

It's a shame that GitHub doesn't offer such functionality by default, but thanks to these projects the days of clicking through a hundred forks to find the freshest one are at least over. For now.