Linksys Velop Routers Caught Sending WiFi Creds In The Clear

A troubling report from the Belgian consumer protection group Testaankoop: several models of Velop Pro routers from Linksys were found to be sending WiFi configuration data out to a remote server during the setup process. That would be bad enough, but not only are these routers reporting private information to the mothership, they are doing it in clear text for anyone to listen in on.

Testaankoop says that while testing out the Pro WiFi 6E and Pro 7 versions of Velop routers, they discovered that unencrypted packets were being sent to a server hosted by Amazon Web Services (AWS). In these packets, they found not only the SSID of the user’s wireless network, but also the encryption key necessary to join it, along with various tokens that could be used to identify the network and the user.
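If you want to check your own network for this kind of leak, a packet sniffer is all it takes. Here’s a minimal Python sketch using scapy — our illustration, not Testaankoop’s actual methodology — that watches outbound TCP traffic for your WiFi passphrase showing up in plaintext; the interface name and passphrase are placeholders you’d substitute with your own:

```python
# A minimal sketch (not Testaankoop's methodology): watch outbound TCP
# traffic for your own WiFi passphrase appearing in plaintext payloads.
# Requires scapy and root privileges; iface and passphrase are placeholders.
from scapy.all import sniff, Raw, IP

PASSPHRASE = b"my-wifi-passphrase"  # substitute your real key

def check_packet(pkt):
    if pkt.haslayer(Raw) and PASSPHRASE in bytes(pkt[Raw].load):
        dst = pkt[IP].dst if pkt.haslayer(IP) else "?"
        print(f"Plaintext passphrase spotted in a packet to {dst}!")

# Sniff traffic on the LAN-facing interface; adjust iface to your setup.
sniff(iface="eth0", filter="tcp", prn=check_packet, store=False)
```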

While the report doesn’t go into too much detail, it seems this information is being sent as part of the configuration process when using the official Linksys mobile application. If you want to avoid having your information bounced around the Internet, you can still use the router’s built-in web configuration menus from a browser on the local network — just like in the good old days.

The real kicker here is the response from Linksys, or more accurately, the lack thereof. Testaankoop says they notified the company of their discovery back in November of 2023 and got no response. There have even been firmware updates for the affected routers since then, but the issue remains unresolved.

Testaankoop ends the review by strongly recommending users avoid these particular models of Linksys Velop routers, which, given the facts, sounds like solid advice to us. They also express their disappointment in how the brand, a fixture in the consumer router space for decades, has handled the situation. If you ask us, things started going downhill once they stopped running Linux on their hardware.

Ask Hackaday: Has Firefox Finally Gone Too Far?

In a world where so much of our lives depends on the use of online services, the web browser used to access those services is of crucial importance. It becomes a question of whether we trust the huge corporate interests which control this software with such access to our daily lives, and it is vital that the browser world remains a playing field with many players in the game.

The mantle has traditionally fallen upon Mozilla’s Firefox browser to represent freedom from corporate ownership, but over the last couple of years even they have edged away from their open source ethos and morphed into an advertising company that happens to have a browser. We’re asking you: can we still trust Mozilla’s Firefox, when the latest version turns on ad measurement by default?

Such is the dominance of Google’s Chromium in the browser world that it has become difficult to find alternatives which aren’t based on it. We can see the attraction for developers: instead of pursuing the extremely hard task of developing a new browser engine, they can use an off-the-shelf one on which someone else has already done the work. As a result, once you have discounted browsers such as the venerable Netsurf or Dillo, which are cool as heck but relatively useless for modern websites, the choices quickly descend into the esoteric. Ladybird and Servo are both promising but still too rough around the edges for everyday use, so what’s left? LibreWolf probably represents the best option: a version of Firefox with a focus on privacy and security.

We’re interested in your views on this topic, because we know you’ll have a lot to say about it. Meanwhile, if you’re a Firefox user who’s upgraded to version 128 and you’re not sure what to do, don’t panic. Find the settings page, go to “Privacy and Security”, and uncheck the “Website Advertising Preferences” checkbox.
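If you’d rather make the change stick across profiles, the checkbox maps to a preference you can pin from a user.js file. Here’s a quick Python sketch that appends the override to each profile on a typical Linux install; the pref name, dom.private-attribution.submission.enabled, is the one we understand backs the new feature in Firefox 128, so verify it in about:config before relying on it:

```python
# Sketch: disable Firefox 128's ad measurement via a user.js override.
# The pref name below is our understanding of the setting behind the
# "Website Advertising Preferences" checkbox; verify it in about:config.
from pathlib import Path

PREF = 'user_pref("dom.private-attribution.submission.enabled", false);\n'

# Typical profile location on Linux; adjust the glob for macOS/Windows.
for profile in Path.home().glob(".mozilla/firefox/*.default*"):
    user_js = profile / "user.js"
    existing = user_js.read_text() if user_js.exists() else ""
    if PREF not in existing:
        with user_js.open("a") as fh:
            fh.write(PREF)
        print(f"Ad measurement disabled in profile {profile.name}")
```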

Ticketmaster SafeTix Reverse-Engineered

Ticketmaster is having a rough time lately. A hacker named [Conduition] managed to reverse-engineer their new “safe” electronic ticket system. Of course, there’s also the recent breach, in which more than half a billion accounts had personal and financial data leaked, with no indication of whether or not that data was fully encrypted. But we’re going to focus on the former, as it’s more technically interesting.

Ticketmaster’s stated goal for the new SafeTix system — which requires the use of a smartphone app — was to reduce fraud and ticket scalping. Essentially, you purchase a ticket using their app, and some data is downloaded to your phone, which then generates a rotating barcode every 15 seconds. When [Conduition] arrived at the venue, cell and WiFi service was totally swamped by everyone trying to load their barcode tickets. After many worried minutes (and presumably a few choice words), [Conduition] managed to get a cell signal long enough to update the barcode and get inside, along with a large contingent of similarly annoyed fans trying to enter with their legally purchased tickets.
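The write-up is worth a read for the details, but the rotating-every-15-seconds behavior is essentially that of a time-based one-time password. As a generic illustration of that mechanism, and explicitly not Ticketmaster’s actual token format, here’s an RFC 6238 TOTP with a 15-second time step in Python:

```python
# Generic RFC 6238 TOTP with a 15-second time step; this illustrates the
# rotating-code mechanism only, NOT Ticketmaster's actual token format.
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, period: int = 15, digits: int = 6) -> str:
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // period          # steps since the epoch
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                    # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10**digits).zfill(digits)

# Hypothetical secret; a real ticket would carry server-issued secrets.
print(totp("JBSWY3DPEHPK3PXP"))  # changes every 15 seconds
```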



The US Surgeon General’s Case For A Warning Label On Social Media

The term ‘Social Media’ may give off a benign vibe, suggesting that it’s a friendly place where everyone is welcome to be themselves, yet reality has borne out that it is anything but. This is the reason why the US Surgeon General [Dr. Vivek H. Murthy] is pleading for a health warning label on social media platforms. Much like the warnings on tobacco products, such a measure isn’t expected to make social media safe for children and adolescents, but it would remind them and their parents about the risks of these platforms.

While this may sound dire for something that is, at its core, about social interaction, there is a growing body of evidence to support the notion that social media can negatively impact mental health. A 2020 systematic review article in Cureus by [Fazida Karim] and colleagues found anxiety and depression to be the most notable negative psychological health outcomes. A 2023 editorial in BMC Psychology by [Ágnes Zsila] and [Marc Eric S. Reyes] concurs, while contrasting these cons with the pros of social media, such as giving individuals an online community where they feel they belong.

Ultimately, it’s important to realize that social media isn’t the end-all, be-all of online social interaction. There are still many dedicated forums, IRC channels, and newsgroups far away from the prying eyes of social media and its pressure to act out a personality. Having more awareness of how social interactions affect oneself and/or one’s children is definitely essential, even if we’re unlikely to return to the ‘never give out your real name’ days of the pre-2000s Internet.

Finding And Resurrecting Archie: The Internet’s First Search Engine

Back in the innocent days of the late 1980s, the Internet as we know it today did not exist yet, but there were still plenty of FTP servers. Since manually keeping track of all of the files on those FTP servers would be a royal pain, [Alan Emtage] set to work in 1986 on an indexing and search service called Archie to streamline the process. As a local tool, it would regularly fetch the file listings from a list of FTP servers, making them available for easy local search.
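As a toy illustration of the concept — a sketch of the idea, not Archie’s actual code — pulling a file listing from an anonymous FTP server and searching it locally takes only a few lines of Python; the server name here is just an example:

```python
# Toy sketch of Archie's basic idea: pull file listings from a list of
# anonymous FTP servers and search them locally. Not Archie's real code;
# the server list is just an example.
from ftplib import FTP

SERVERS = ["ftp.gnu.org"]  # example list; Archie tracked far more hosts
index: dict[str, list[str]] = {}

def crawl(host: str, path: str = "/") -> None:
    with FTP(host) as ftp:
        ftp.login()                   # anonymous login
        index[host] = ftp.nlst(path)  # flat listing; the real Archie recursed

def search(term: str) -> None:
    for host, names in index.items():
        for name in names:
            if term.lower() in name.lower():
                print(f"{host}:{name}")

for server in SERVERS:
    crawl(server)
search("gnu")
```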

After its initial release in 1990, its feature set was expanded to include a World Wide Web crawler by version 3.5 in 1995. Years later, it was assumed that the source for Archie had been lost. That was until the folks over at the [The Serial Port] channel managed to track down a still-running Archie server in Poland.

The name Archie comes from the word ‘archive’ with the ‘v’ stripped, and has no relation to the Archie comics. Even so, the assumed connection inspired the Gopher search engines Jughead and Veronica. Of these, Jughead is still around; Veronica’s original database was lost, but a re-implementation of it lives on. Archie itself enjoyed a period of relative commercial success, with [Alan] starting Bunyip Information Systems in 1992, which lasted until 2003. To experience Archie today, [The Serial Port] has put the Archie documentation online, along with a live server if you’re feeling like reclaiming the early Internet.


The Minimalistic Dillo Web Browser Is Back

Over the decades web browsers have changed from the fairly lightweight and nimble HTML document viewers of the 1990s to today’s top-heavy browsers that struggle to run on a system with less than a quad-core, multi-GHz CPU and gigabytes of RAM. All but a few, that is.

Dillo is one of a small number of browsers that requires only a minimum of system resources and will happily run on an Intel 486 or thereabouts. Sadly, the project more or less ended back in 2016 when the rendering engine’s developer passed away, but with the recent 3.10 release the project seems to be back on track, courtesy of efforts by [Rodrigo Arias Mallo].

A number of forks were started after the Dillo project ground to a halt, but of these only Dillo+ appears to still be active, which makes this revival of the original project a welcome boost, as is its port to Atari systems. As for Dillo’s feature set, it boasts support for a range of protocols, including Gopher, HTTP(S), Gemini, and FTP via extensions. It supports HTML 4.01 and some HTML 5, along with CSS 2.1 and some CSS 3 features, and of course no JavaScript.

On today’s JS-crazed web this means access can be somewhat limited, but maybe it will encourage websites to offer a no-JS fallback for the Dillo users out there. The source code and releases can be obtained from the GitHub project page, with contributions to the project gratefully accepted.

Thanks to [Prof. Dr. Feinfinger] for the tip.

Imperva Report Claims That 50% Of The World Wide Web Is Now Bots

Automation has been a part of the Internet since long before the appearance of the World Wide Web and the first web browsers, but it has become a significantly larger part of total traffic over the past decade. A recent report by cyber security services company Imperva pins the level of automated traffic (‘bots’) at roughly fifty percent of the total, with about 32% of all traffic attributed to ‘bad bots’: automated traffic that crawls and scrapes content to, for example, train large language models (LLMs) and generate automated content, or that performs automated attacks on the countless APIs accessible on the Internet.

According to Imperva, this is the fifth straight year of rising ‘bad bot’ traffic, with the 2023 report noting another increase of a few percent. Meanwhile, ‘good bot’ traffic also keeps increasing year over year. While these are not directly nefarious, they can still throw off analytics and generate increased costs, especially for smaller websites. Most worrisome are the automated attacks by the bad bots, which range from account takeover attempts to the exploitation of vulnerable web-based APIs. It’s not just Imperva making these claims, either: the idea that automated traffic will soon destroy the WWW has floated around since the late 2010s as the ‘Dead Internet theory‘.

Although the idea that the Internet will ‘die’ is probably overblown, the increase in automated traffic makes it increasingly hard to distinguish human-generated content and human commenters from fake content and accounts. This is worrisome given how many of today’s opinions are formed and reinforced on, for example, ‘social media’ websites, while more and more comments, images, and even videos are manipulated or machine-generated.