AI. Where do you stand?

[Yang-Hui He] Presents To The Royal Institution About AI And Mathematics

Over on YouTube you can see [Yang-Hui He] present to The Royal Institution about Mathematics: The rise of the machines.

In this one hour presentation [Yang-Hui He] explains how AI is driving progress in pure mathematics. He says that right now AI is poised to change the very nature of how mathematics is done. He is part of a community of hundreds of mathematicians pursuing the use of AI for research purposes.

[Yang-Hui He] traces the genesis of the term “artificial intelligence” to a research proposal from J. McCarthy, M.L. Minsky, N. Rochester, and C.E. Shannon dated August 31, 1955. He says that his mantra has become: connectivism leads to emergence, and goes on to explain what he means by that, then follows with universal approximation theorems.

He goes on to enumerate some of the key moments in AI: Descartes’s bête-machine, 1617; Lovelace’s speculation, 1842; Turing test, 1949; Dartmouth conference, 1956; Rosenblatt’s Perceptron, 1957; Hopfield’s network, 1982; Hinton’s Boltzmann machine, 1984; IBM’s Deep Blue, 1997; and DeepMind’s AlphaGo, 2012.

He continues with some navel-gazing about what is mathematics, and what is artificial intelligence. He considers how we do mathematics as bottom-up, top-down, or meta-mathematics. He mentions about one of his earliest papers on the subject Machine-learning the string landscape (PDF) and his books The Calabi–Yau Landscape: From Geometry, to Physics, to Machine Learning and Machine Learning in Pure Mathematics and Theoretical Physics.

He goes on to explain about Mathlib and the Xena Project. He discusses Machine-Assisted Proof by Terence Tao (PDF) and goes on to talk more about the history of mathematics and particularly experimental mathematics. All in all a very interesting talk, if you can find a spare hour!

In conclusion: Has AI solved any major open conjecture? No. Is AI beginning to help to advance mathematical discovery? Yes. Has AI changed the speaker’s day-to-day research routine? Yes and no.

If you’re interested in more fun math articles be sure to check out Digital Paint Mixing Has Been Greatly Improved With 1930s Math and Painted Over But Not Forgotten: Restoring Lost Paintings With Radiation And Mathematics.

Continue reading “[Yang-Hui He] Presents To The Royal Institution About AI And Mathematics”

Can Skynet Be A Statesman?

There’s been a lot of virtual ink spilled about LLMs and their coding ability. Some people swear by the vibes, while others, like the  FreeBSD devs have sworn them off completely. What we don’t often think about is the bigger picture: What does AI do to our civilization? That’s the thrust of a recent paper from the Boston University School of Law, “How AI Destroys Institutions”. Yes, Betteridge strikes again.

We’ve talked before about LLMs and coding productivity, but [Harzog] and [Sibly] from the school of law take a different approach. They don’t care how well Claude or Gemini can code; they care what having them around is doing to the sinews of civilization. As you can guess from the title, it’s nothing good.

"A computer must never make a management decision."
Somehow the tl;dr was written decades before the paper was.

The paper a bit of a slog, but worth reading in full, even if the language is slightly laywer-y. To summarize in brief, the authors try and identify the key things that make our institutions work, and then show one by one how each of these pillars is subtly corroded by use of LLMs. The argument isn’t that your local government clerk using ChatGPT is going to immediately result in anarchy; rather it will facilitate a slow transformation of the democratic structures we in the West take for granted. There’s also a jeremiad about LLMs ruining higher education buried in there, a problem we’ve talked about before.

If you agree with the paper, you may find yourself wishing we could launch the clankers into orbit… and turn off the downlink. If not, you’ll probably let us know in the comments. Please keep the flaming limited to below gas mark 2.

Great Trains, Not So Great AI Chatbot Security

A joy of covering the world of the European hackerspace community is that it offers the chance for train travel across the continent using the ever-good-value Interrail pass. For a British traveler such a journey inevitably starts with a Eurostar train that whisks you in comfort through the Channel Tunnel, so a report of an AI vulnerability on the Eurostar website from [Ross Donald] particularly caught our eye. What it reveals goes beyond the train company, and tells us some interesting tidbits about how safeguards in AI chatbots can be circumvented.

The bot sits on the Eurostar website, and is a simple HTML and JavaScript client that talks to the LLM back-end itself through an API. The API queries contain the whole conversation, because as AI toy manufacturers whose products have been persuaded to spout adult context will tell you, large language models (LLM)s as commonly implemented do not have a context memory for the conversation in hand.

The Eurostar developers had not made a bot without guardrails, but the vulnerability lay in those guardrails only being applied to the most recent message. Thus an innocuous or empty message could be sent, with a payload concealed in a previous message in the conversation. He demonstrates the bot returning system information about itself, and embedding injected HTML and JavaScript in its responses.

He notes that the target of the resulting output could only be himself and that he was unable to access any data from other customers, so perhaps in this case the train operator was fortunately spared the risk of a breach. From his description though, we agree they could have responded to the disclosure in a better manner.


Header image: Eriksw, CC BY-SA 4.0.

Automatically Remove AI Features From Windows 11

It seems like a fair assessment to state that the many ‘AI’ features that Microsoft added to Windows 11 are at least somewhat controversial. Unsurprisingly, this has led many to wonder about disabling or outright removing these features, with [zoicware]’s ‘Remove Windows AI’ project on GitHub trying to automate this process as much as reasonably possible.

All you need to use it is your Windows 11-afflicted system running at least 25H2 and the PowerShell script. The script is naturally run with Administrator privileges as it has to do some manipulating of the Windows Registry and prevent Windows Update from undoing many of the changes. There is also a GUI for those who prefer to just flick a few switches in a UI instead of running console commands.

Among the things that can be disabled automatically are the disabling of Copilot, Recall, AI Actions, and other integrations in applications like Edge, Paint, etc. The reinstallation of removed packages is inhibited by a custom package. For the ‘features’ that cannot be disabled automatically, there is a list of where to toggle those to ‘off’.

Naturally, since Windows 11 is a moving target, it can be rough to keep a script like this up to date, but it seems to be a good start at least for anyone who finds themselves stuck on Windows 11 with no love for Microsoft’s ‘AI’ adventures. For the other features, there are also Winaero Tweaker and Open-Shell, with the latter in particular bringing back the much more usable Windows 2000-style start menu, free of ads and other nonsense.

Hackaday Links Column Banner

Hackaday Links: December 7, 2025

We stumbled upon a story this week that really raised our eyebrows and made us wonder if we were missing something. The gist of the story is that U.S. Secretary of Energy Chris Wright, who has degrees in both electrical and mechanical engineering, has floated the idea of using the nation’s fleet of emergency backup generators to reduce the need to build the dozens of new power plants needed to fuel the AI data center building binge. The full story looks to be a Bloomberg exclusive and thus behind a paywall — hey, you don’t get to be a centibillionaire by giving stuff away, you know — so we might be missing some vital details, but this sounds pretty stupid to us.

Continue reading “Hackaday Links: December 7, 2025”

So Long, Firefox, Part One

It’s likely that Hackaday readers have among them a greater than average number of people who can name one special thing they did on September 23rd, 2002. On that day a new web browser was released, Phoenix version 0.1, and it was a lightweight browser-only derivative of the hugely bloated Mozilla suite. Renamed a few times to become Firefox, it rose to challenge the once-mighty Microsoft Internet Explorer, only to in turn be overtaken by Google’s Chrome.

Now in 2025 it’s a minority browser with an estimated market share just over 2%, and it’s safe to say that Mozilla’s take on AI and the use of advertising data has put them at odds with many of us who’ve kept the faith since that September day 23 years ago. Over the last few months I’ve been actively chasing alternatives, and it’s with sadness that in November 2025, I can finally say I’m Firefox-free.

Continue reading “So Long, Firefox, Part One”

Kubernetes Cluster Goes Mobile In Pet Carrier

There’s been a bit of a virtualization revolution going on for the last decade or so, where tools like Docker and LXC have made it possible to quickly deploy server applications without worrying much about dependency issues. Of course as these tools got adopted we needed more tools to scale them easily. Enter Kubernetes, a container orchestration platform that normally herds fleets of microservices in sprawling cloud architectures, but it turns out it’s perfectly happy running on a tiny computer stuffed in a cat carrier.

This was a build for the recent Kubecon in Atlanta, and the project’s creator [Justin] wanted it to have an AI angle to it since the core compute in the backpack is an NVIDIA DGX Spark. When someone scans the QR code, the backpack takes a picture and then runs it through a two-node cluster on the Spark running a local AI model that stylizes the picture and sends it back to the user. Only the AI workload runs on the Spark; [Justin] also is using a LattePanda to handle most of everything else rather than host everything on the Spark.

To get power for the mobile cluster [Justin] is using a small power bank, and with that it gets around three hours of use before it needs to be recharged. Originally it was planned to work on the WiFi at the conference as well but this was unreliable and he switched to using a USB tether to his phone. It was a big hit with the conference goers though, with people using it around every ten minutes while he had it on his back. Of course you don’t need a fancy NVIDIA product to run a portable kubernetes cluster. You can always use a few old phones to run one as well.

Continue reading “Kubernetes Cluster Goes Mobile In Pet Carrier”