Where Is Mathematics Going? Large Language Models And Lean Proof Assistant

If you’re a hacker you may well have a passing interest in math, and if you have an interest in math you might like to hear about the direction of mathematical research. In a talk on this topic [Kevin Buzzard], professor of pure mathematics at Imperial College London, asks the question: Where is Mathematics Going?

The talk starts with him explaining that in 2017 he had a mid-life crisis of sorts: he became disillusioned with the way mathematics research was being done and started looking to computer science for solutions.

He credits Euclid, as many do, with writing down some axioms and starting mathematics, over 2,000 years ago. From axioms came deductions, and deductions became mathematical facts, and math proceeded in this fashion. This continues to be the way mathematical research is done in mathematics departments around the world. The consequence is that mathematics is now incomprehensibly large. The proofs themselves can be similarly enormous; he gives the example of one proof that runs to 10,000 pages and still hasn't been completely written down more than 20 years after it was announced.

The conclusion from this is that mathematics has become so complex that traditional methods of documenting it struggle to cope. He says that a tertiary education in mathematics aims to “get students to the 1940s”, whereas a tertiary education in computer science will expose students to the state of the art.

He investigates the effect “computer as calculator” has had on mathematics since the middle of the 20th century, stating that it is less than you might have thought. More recently, though, we have large language models (LLMs) giving us “computer as generator of mathematics” and interactive theorem provers (ITPs) as “computer as checker of mathematics”, both being new ways to use computers for mathematics research. He notes that each of these technologies has flaws and that neither has, so far, told us anything profound which we didn’t already know. As he puts it, mathematics has not seen a “Deep Blue moment”.
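To give a flavor of what “computer as checker” means in practice, here is a small illustrative Lean 4 sketch (the theorem names are ours, not from the talk): commutativity of addition on the naturals, stated once as a reuse of a library lemma and once proved from scratch. Lean’s kernel accepts a theorem only if the proof actually checks.

```lean
-- Illustrative sketch: a statement plus a proof term.
-- Lean's kernel verifies this; nothing unproven gets through.
theorem my_add_comm (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b          -- reuse the existing library lemma

-- The same fact proved directly, by induction on b.
theorem my_add_comm' (a b : Nat) : a + b = b + a := by
  induction b with
  | zero => simp
  | succ n ih => simp [Nat.add_succ, Nat.succ_add, ih]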

The point is then made that the problem with LLMs is that they hallucinate statements, which introduces errors, while the problem with ITPs is that all the code needs to be hand-written by humans. He floats the “no brainer idea” of combining LLM tech with ITP tech: the LLM proposes mathematics and the ITP verifies it.
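As a sketch of why that division of labor catches hallucinations (our own hedged example, not one from the talk): suppose an LLM emits the plausible-sounding but false claim “every natural number is less than its square”. No proof of it can ever pass Lean’s checker, and in fact its negation is easy to formalize, with n = 0 as the counterexample:

```lean
-- A hallucinated claim: ∀ n, n < n * n.  False at n = 0 (and n = 1).
-- An ITP will reject any purported proof; here we prove the negation.
example : ¬ ∀ n : Nat, n < n * n := by
  intro h
  -- h 0 : 0 < 0 * 0, and 0 * 0 reduces to 0, contradicting irreflexivity
  exact Nat.lt_irrefl 0 (h 0)
```

The checker, not the generator, is the source of trust: however the candidate proof is produced, it only counts once the kernel accepts it.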

He concludes with the idea that LLM + ITP is the best future for mathematics, enabling mathematics to go from “mostly right” to “definitely right”.

If you have a passing interest in math you might also like to read Getting The Most Out Of ISM Transceivers Using Math and Design Scanimations In A Snap With The Right Math.
