
[Yang-Hui He] Presents To The Royal Institution About AI And Mathematics

Over on YouTube you can see [Yang-Hui He] present to The Royal Institution about Mathematics: The rise of the machines.

In this one-hour presentation [Yang-Hui He] explains how AI is driving progress in pure mathematics. He says that right now AI is poised to change the very nature of how mathematics is done. He is part of a community of hundreds of mathematicians pursuing the use of AI for research purposes.

[Yang-Hui He] traces the genesis of the term “artificial intelligence” to a research proposal from J. McCarthy, M.L. Minsky, N. Rochester, and C.E. Shannon dated August 31, 1955. He says that his mantra has become “connectivism leads to emergence”, explains what he means by that, and then follows with universal approximation theorems.
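For reference (this isn’t a slide from the talk), the classical single-hidden-layer form of universal approximation, in the spirit of Cybenko and Hornik, can be stated roughly as follows:

```latex
% Universal approximation, single hidden layer (rough statement):
% for a continuous target f on a compact set K and any tolerance epsilon,
% some finite weighted sum of activations approximates f uniformly on K.
\sup_{x \in K} \left| f(x) - \sum_{i=1}^{N} v_i \, \sigma\!\left( w_i^{\top} x + b_i \right) \right| < \varepsilon
```

Here σ is a fixed sigmoidal (or, in later versions, any non-polynomial) activation, and N, v_i, w_i, and b_i are allowed to depend on f and ε.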

He goes on to enumerate some of the key moments in AI: Descartes’s bête-machine, 1637; Lovelace’s speculation, 1842; Turing test, 1950; Dartmouth conference, 1956; Rosenblatt’s Perceptron, 1957; Hopfield’s network, 1982; Hinton’s Boltzmann machine, 1984; IBM’s Deep Blue, 1997; and DeepMind’s AlphaGo, 2016.

He continues with some navel-gazing about what mathematics is, and what artificial intelligence is. He considers how we do mathematics as bottom-up, top-down, or meta-mathematics. He mentions one of his earliest papers on the subject, Machine-learning the string landscape (PDF), as well as his books The Calabi–Yau Landscape: From Geometry, to Physics, to Machine Learning and Machine Learning in Pure Mathematics and Theoretical Physics.

He goes on to explain Mathlib and the Xena Project. He discusses Machine-Assisted Proof by Terence Tao (PDF) and goes on to talk more about the history of mathematics, particularly experimental mathematics. All in all a very interesting talk, if you can find a spare hour!
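For a flavor of what Mathlib-style formalization looks like, here is a minimal Lean 4 sketch (not from the talk, nor a real Mathlib entry); actual Mathlib contributions follow the same pattern at vastly greater scale:

```lean
-- Minimal illustrative sketch, not from the talk or from Mathlib itself:
-- a tiny Lean 4 theorem of the kind Mathlib collects, proved by appealing
-- to the core lemma Nat.add_comm.
theorem add_comm_example (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```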

In conclusion: Has AI solved any major open conjecture? No. Is AI beginning to help to advance mathematical discovery? Yes. Has AI changed the speaker’s day-to-day research routine? Yes and no.

If you’re interested in more fun math articles be sure to check out Digital Paint Mixing Has Been Greatly Improved With 1930s Math and Painted Over But Not Forgotten: Restoring Lost Paintings With Radiation And Mathematics.


KDE Binds Itself Tightly To Systemd, Drops Support For Non-Systemd Systems

The KDE desktop’s new Plasma Login Manager (PLM) in the upcoming Plasma 6.6 will mark the first time that KDE requires the underlying OS to use systemd, if one wishes for the full KDE experience. This has the FreeBSD community especially upset, but it will also affect Linux distros that do not use systemd. The focus of the KDE team is clear, as stated in the referenced Reddit thread, where a KDE developer replies that the goal is to rely on systemd for more tasks in the future. This means that PLM is just the first step.

In the eyes of KDE, it seems, OSes that do not use systemd are ‘niche’ and not worth supporting, with the niche Linux distros that would be cut out including everything from Gentoo to Alpine Linux and Slackware. Regardless of your stance on systemd’s merits or lack thereof, it seems quite drastic for one of the major desktop environments across Linux and BSD to suddenly make this decision.

It also raises the question of how far this is related to the push towards a distroless and similarly more integrated, singular version of Linux as an operating system. Although there are still many other DEs that will happily run for the foreseeable future on your flavor of GNU/Linux or BSD – regardless of whether you’re more about a System V or OpenRC init-style environment – this might be one of the most controversial divides since systemd was first introduced.

Top image: KDE Plasma 6.4.5. (Credit: Michio.kawaii, Wikimedia)

Print-in-Place Gripper Does It With A Single Motor

[XYZAiden]’s concept for a flexible robotic gripper might be a few years old, but if anything it’s even more accessible now than when he first prototyped it. It uses only a single motor and requires no complex mechanical assembly, and nowadays 3D printing with flexible filament has only gotten easier and more reliable.

The four-armed gripper you see here prints as a single piece and is cable-driven, with a single metal-geared servo powering the assembly. Each arm has a nylon string threaded through it, so when the servo turns, it pulls each string, which in turn makes each arm curl inward, closing the grip. Because of the way the gripper is made, releasing only requires relaxing the cables; an arm’s natural state is to fall open.
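As a rough illustration of the drive side (this is not [XYZAiden]’s code), a MicroPython sketch for a Pico-class board could map an open/close command onto the servo’s pulse width; the GPIO pin and pulse limits below are assumptions you would tune to your own servo and cable lengths:

```python
# Hypothetical sketch: drive a hobby servo that winds the gripper's nylon
# strings. The pin number and pulse widths are assumptions, not project values.
from machine import Pin, PWM

servo = PWM(Pin(15))   # assumed GPIO; standard 50 Hz hobby-servo signal
servo.freq(50)

def set_grip(fraction):
    """0.0 = fully open (cables slack), 1.0 = fully closed (cables pulled)."""
    fraction = min(max(fraction, 0.0), 1.0)
    pulse_us = 1000 + int(fraction * 1000)          # ~1.0 ms to ~2.0 ms pulse
    servo.duty_u16(int(pulse_us / 20000 * 65535))   # 20 ms period at 50 Hz

set_grip(0.0)   # relax the cables, arms fall open
set_grip(0.8)   # pull the cables, arms curl inward around the object
```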

The main downside is that the servo and cables are working at a mechanical disadvantage, so the grip won’t be particularly strong. But for lightweight, irregular objects, this could be a feature rather than a bug.

The biggest advantage is that it’s extremely low-cost, and simple to both build and use. If one has access to a 3D printer and can make a servo rotate, raiding a junk bin could probably yield everything else.

DIY robotic gripper designs come in all sorts of variations. For example, this “jamming” bean-bag style gripper does an amazing, high-strength job of latching onto irregular objects without squashing them in the process. And here’s one built around grippy measuring tape, capable of surprising dexterity.


Top image: three stacked oscilloscope units; the lower two show their screens and input pins, while the top one is reversed, showing its printed back plate.

A Higher-End Pico-Based Oscilloscope

Hackers have been building their own basic oscilloscopes out of inexpensive MCUs and cheap LCD screens for some years now, but microcontrollers have recently become fast enough to actually make such ‘scopes useful. [NJJ], for example, used a pair of Raspberry Pi Picos to build Picotronix, an extensible combined oscilloscope and logic analyzer.

This isn’t an open-source project, but it is quite well-documented, and the general design logic and workings of the device are freely available. The main board holds two Picos, one for data sampling and one to handle control, display, and external communication. The control unit is made out of stacked PCBs surrounded by a 3D-printed housing; the pinout diagrams printed on the back panel are a helpful touch. One interesting technique was to use a trimmed length of clear 3D printer filament as a light pipe for an indicator LED.

Even the protocol used to communicate between the Picos is documented; the datagrams are rather reminiscent of Ethernet frames, and can originate either from one of the Picos or from a host computer. This lets the control board operate as an automatic testing station reporting data over a wireless or USB-connected network. The display module is therefore optional hardware, and a variety of other boards (called picoPods) can be connected to the Picotronix control board. These include a faster ADC, adapters for various analog input spans, a differential analog input probe, a 12-bit logic state analyzer, and a DAC for signal generation.
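The real field layout is in [NJJ]’s documentation; purely to illustrate the Ethernet-frame-like idea, a hypothetical datagram could be packed and parsed along these lines:

```python
# Hypothetical frame layout for illustration only -- the actual Picotronix
# datagram format is defined in [NJJ]'s documentation, not here.
import struct

HEADER = struct.Struct(">BBBH")   # destination, source, frame type, payload length

def pack_frame(dest, src, ftype, payload: bytes) -> bytes:
    return HEADER.pack(dest, src, ftype, len(payload)) + payload

def unpack_frame(frame: bytes):
    dest, src, ftype, length = HEADER.unpack_from(frame)
    return dest, src, ftype, frame[HEADER.size:HEADER.size + length]

# e.g. a host (address 0) asking the sampling Pico (address 2) for a capture
frame = pack_frame(dest=2, src=0, ftype=0x01, payload=b"\x00\x10")
print(unpack_frame(frame))
```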

If this project inspired you to make your own, we’ve also seen other Pico-based oscilloscopes before, including one that used a phone for the display.

Usagi’s New Computer Is A Gas!

[Dave] over at Usagi Electric has a mystery on his hands in the form of a computer. He picked up a Motorola 68000-based machine at a local swap meet: just a few boards, a backplane, and a power supply. The only information provided is the machine’s original purpose: gas station pump control.

The computer in question is an embedded system. It uses a VME backplane, and all the cards are of the 3U variety. The 68k and associated support chips are on one card, memory is on another, and a third card contains four serial ports. The software lives across three different EPROM chips. Time for a bit of reverse engineering!
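A reasonable first step for this kind of job (not necessarily how [Dave] tackles it) is to combine the EPROM dumps into a single image (the byte order and interleave depend on how the board wires the ROMs) and sanity-check the 68000 vector table, whose first two big-endian longwords hold the initial stack pointer and the reset PC:

```python
# A hedged first step for 68k ROM spelunking: after the EPROM dumps have been
# combined into one image, the first two big-endian longwords should be the
# initial supervisor stack pointer and the reset program counter.
import struct

def inspect_vectors(rom_path):
    with open(rom_path, "rb") as f:
        header = f.read(8)
    initial_ssp, reset_pc = struct.unpack(">II", header)
    print(f"Initial SSP: 0x{initial_ssp:08X}")
    print(f"Reset PC:    0x{reset_pc:08X}")

inspect_vectors("combined_rom.bin")   # hypothetical filename
```

If the reset PC points somewhere sensible inside the ROM’s address range, the dumps were probably combined in the right order.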


Teardown Of An Apple AirTag 2 With Die Shots

There are a few possible ways to do a teardown of new electronics like the Apple AirTag 2 tracker, with [electronupdate] opting to go down to the silicon level, with die shots of the major ICs in a recent teardown video. Some high-resolution photos are also found on the separate blog page.

First we get to see the outside of the device, followed by the individual layers of its sandwiched rings, starting with the small speaker, which is surrounded by the antenna for the ultra-wideband (UWB) feature.

Next is the PCB layer, with a brief analysis of the main ICs, before they get lifted off and decapped for an intimate look at their insides. These include the Nordic Semiconductor nRF52840 Bluetooth chip, which also runs the firmware of the device.

The big corroded-looking grey rectangle on the PCB is the UWB chip assembly, with the die shot visible in the heading image. It provides the AirTag’s localization feature, which lets you tell precisely where the tag is. In the die analysis we get a basic explanation of what the visible structures are for. Basically, it uses an array of antennas that allows the chip to determine the time-of-flight and, with it, the direction of the requesting device relative to the tag.
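To put rough numbers on the general idea (this is generic UWB math, not Apple’s actual implementation), two-way time-of-flight gives the range, while the phase difference between antennas in the array gives the direction:

```python
# Back-of-the-envelope UWB ranging and direction finding in general terms,
# not Apple's implementation; channel frequency and spacing are assumptions.
import math

C = 299_792_458.0   # speed of light, m/s

def distance_from_tof(round_trip_s, reply_delay_s):
    """Two-way ranging: subtract the responder's turnaround time, halve the rest."""
    return C * (round_trip_s - reply_delay_s) / 2

def angle_of_arrival(phase_diff_rad, antenna_spacing_m, freq_hz=6.5e9):
    """Direction from the phase difference seen by two antennas in the array."""
    wavelength = C / freq_hz
    return math.degrees(math.asin(phase_diff_rad * wavelength /
                                  (2 * math.pi * antenna_spacing_m)))

print(distance_from_tof(1.05e-7, 1.0e-7))   # ~0.75 m away
print(angle_of_arrival(0.5, 0.02))          # ~10.6 degrees off boresight
```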

In addition to die shots of the BT and UWB chips we also get the die shot of the Bosch-made accelerometer chip, as well as an SPI memory device, likely an EEPROM of some description.

As for disabling the speaker in these AirTag 2 devices: it’s nestled deep inside, well away from the battery, which is said to make disabling it much harder without destructive disassembly. Yet as iFixit demonstrated, it’s actually fairly easy to do non-destructively.


How Vibe Coding Is Killing Open Source

Does vibe coding risk destroying the Open Source ecosystem? According to a pre-print paper by a number of high-profile researchers, this might indeed be the case, based on observed patterns and some modelling. Their warnings mostly center on the way that user interaction is pulled away from OSS projects, and on how it makes starting a new OSS project significantly harder.

“Vibe coding” here is defined as software development assisted by an LLM-backed chatbot, where the developer asks the chatbot to effectively write the code for them. Arguably this turns the developer into more of a customer/client of the chatbot, with no requirement for the former to understand what the latter’s code does, only that the generated code does what the chatbot was asked to create.

This also removes the typically more organic selection process for libraries and tooling, replacing it with whatever was most prevalent in the LLM’s training data. Even for popular projects, visits to their websites decrease as downloads and documentation are replaced by LLM chatbot interactions, reducing the opportunity to promote commercial plans, sponsorships, and community forums. Much of this is also reflected in the plummeting usage of community forums like Stack Overflow.
