DeepMind, the Google AI R&D lab, believes that the key to more capable AI systems might lie in uncovering new ways to solve challenging geometry problems.
To that end, DeepMind today unveiled AlphaGeometry — a system that the lab claims can solve as many geometry problems as the average International Mathematical Olympiad gold medalist. AlphaGeometry, the code for which was open sourced this morning, solves 25 Olympiad geometry problems within the standard time limit, beating the previous state-of-the-art system’s 10.
“Solving Olympiad-level geometry problems is an important milestone in developing deep mathematical reasoning on the path toward more advanced and general AI systems,” Trieu Trinh and Thang Luong, Google AI research scientists, write in a blog post published this morning. “[We] hope that … AlphaGeometry helps open up new possibilities across mathematics, science and AI.”
Why the focus on geometry? DeepMind asserts that proving mathematical theorems, or logically explaining why a theorem (e.g. the Pythagorean theorem) is true, requires both reasoning and the ability to choose from a range of possible steps toward a solution. If DeepMind is right, this problem-solving approach could someday prove useful in general-purpose AI systems.
“Demonstrating that a particular conjecture is true or false stretches the abilities of even the most advanced AI systems today,” read DeepMind press materials shared with TechCrunch. “Toward that goal, being able to prove mathematical theorems … is an important milestone as it showcases the mastery of logical reasoning and the ability to discover new knowledge.”
But training an AI system to solve geometry problems poses unique challenges.
Owing to the complexities of translating proofs into a format machines can understand, there’s a dearth of usable geometry training data. And many of today’s cutting-edge generative AI models, while exceptional at identifying patterns and relationships in data, lack the ability to reason logically through theorems.
DeepMind’s solution was twofold.
In designing AlphaGeometry, the lab paired a “neural language” model — a model architecturally along the lines of ChatGPT — with a “symbolic deduction engine,” which leverages rules (e.g. mathematical rules) to infer solutions to problems. Symbolic engines can be inflexible and slow, especially when dealing with large or complicated data sets. But DeepMind mitigated these issues by having the neural model “guide” the deduction engine through possible answers to given geometry problems.
In lieu of human-written training data, DeepMind created its own synthetic data, generating 100 million “synthetic theorems” and proofs of varying complexity. The lab then trained AlphaGeometry from scratch on the synthetic data and evaluated it on Olympiad geometry problems.
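At a high level, generating such synthetic examples can work by sampling random premises, running a deduction engine to exhaustion, and tracing back each derived fact into a proof. The following toy sketch illustrates that idea; all names, the rule format and the tiny fact set are illustrative assumptions, not DeepMind's actual pipeline.

```python
# Toy sketch of synthetic (theorem, proof) generation: sample random
# premises, deduce everything they imply, then trace back each derived
# fact into a proof. Everything here is illustrative, not DeepMind's code.
import random

# Hypothetical rules: each maps a set of premise facts to a conclusion.
RULES = [
    (frozenset({"p", "q"}), "r"),
    (frozenset({"r"}), "s"),
    (frozenset({"q", "s"}), "t"),
]

def closure_with_trace(premises):
    """Forward-chain, remembering which facts derived each new fact."""
    facts, trace = set(premises), {}
    changed = True
    while changed:
        changed = False
        for body, head in RULES:
            if body <= facts and head not in facts:
                facts.add(head)
                trace[head] = body   # parents of this derived fact
                changed = True
    return facts, trace

def traceback(fact, trace, premises):
    """Collect the derivation steps needed to prove `fact`."""
    if fact in premises:
        return []
    steps = []
    for parent in trace[fact]:
        steps += traceback(parent, trace, premises)
    return steps + [(trace[fact], fact)]

def generate_examples(n, seed=0):
    """Each derived fact becomes one (premises, theorem, proof) example."""
    rng = random.Random(seed)
    examples = []
    for _ in range(n):
        premises = set(rng.sample(["p", "q", "r"], k=2))
        facts, trace = closure_with_trace(premises)
        for theorem in facts - premises:
            proof = traceback(theorem, trace, premises)
            examples.append((premises, theorem, proof))
    return examples
```

Scaled up with a real geometry engine in place of the toy rules, every fact the engine derives, together with its traced derivation, becomes one training example — which is how a corpus of this kind can reach 100 million items without any human-written proofs.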
Olympiad geometry problems are based on diagrams that need “constructs,” such as points, lines or circles, to be added before they can be solved. Applied to these problems, AlphaGeometry’s neural model predicts which constructs might be useful to add; its symbolic engine then uses these predictions to make deductions about the diagrams and identify likely solutions.
“With so many examples of how these constructs led to proofs, AlphaGeometry’s language model is able to make good suggestions for new constructs when presented with Olympiad geometry problems,” Trinh and Luong write. “One system provides fast, ‘intuitive’ ideas, and the other more deliberate, rational decision-making.”
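The division of labor described above can be sketched as a simple loop: deduce until stuck, ask the language model for an auxiliary construct, and repeat. The following self-contained toy stands in for that loop; the rule format, the stubbed "neural model" and all names are illustrative assumptions, not DeepMind's implementation.

```python
# Toy sketch of the neural-guided deduction loop. The "symbolic engine"
# is a forward-chaining rule applier; the "neural model" is stubbed as a
# function proposing one auxiliary construct at a time. All of this is
# illustrative, not AlphaGeometry's actual code.

def deduce(facts, rules):
    """Forward-chain: apply rules until no new fact can be derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

def solve(facts, rules, goal, propose_construct, max_constructs=5):
    """Alternate symbolic deduction with neural construct proposals."""
    facts = set(facts)
    for _ in range(max_constructs + 1):
        facts = deduce(facts, rules)        # fast, rule-based deduction
        if goal in facts:
            return facts                    # proof state reached
        construct = propose_construct(facts)
        if construct is None:               # model has nothing to suggest
            return None
        facts.add(construct)                # add the auxiliary point/line/circle
    return None

# Toy problem: the goal is only reachable after adding construct "M"
# (think: "the midpoint of AB"), which unlocks a further deduction.
rules = [
    (frozenset({"A", "B"}), "AB"),      # deducible from the givens alone
    (frozenset({"AB", "M"}), "goal"),   # needs the auxiliary construct
]
proposals = iter(["M"])
result = solve({"A", "B"}, rules, "goal",
               lambda facts: next(proposals, None))
print("goal" in result)  # True
```

The stubbed proposal function is where a trained language model would sit, ranking candidate constructs by how often similar ones unlocked proofs in the synthetic training data.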
The results of AlphaGeometry’s problem solving, which were published in a study in the journal Nature this week, are likely to fuel the long-running debate over whether AI systems should be built on symbol manipulation — that is, manipulating symbols that represent knowledge using rules — or the ostensibly more brain-like neural networks.
Proponents of the neural network approach argue that intelligent behavior — from speech recognition to image generation — can emerge from nothing more than massive amounts of data and compute. As opposed to symbolic systems, which solve tasks by defining sets of symbol-manipulating rules dedicated to particular jobs (like editing a line in word-processing software), neural networks try to solve tasks through statistical approximation and learning from examples.
Neural networks are the cornerstone of powerful AI systems like OpenAI’s DALL-E 3 and GPT-4. But supporters of symbolic AI argue they’re not the be-all and end-all: symbolic systems might be better positioned to efficiently encode the world’s knowledge, reason their way through complex scenarios and “explain” how they arrived at an answer.
As a hybrid symbolic-neural network system akin to DeepMind’s AlphaFold 2 and AlphaGo, AlphaGeometry suggests that combining the two approaches — symbol manipulation and neural networks — is the best path forward in the search for generalizable AI. Perhaps.
“Our long-term goal remains to build AI systems that can generalize across mathematical fields, developing the sophisticated problem-solving and reasoning that general AI systems will depend on, all the while extending the frontiers of human knowledge,” Trinh and Luong write. “This approach could shape how the AI systems of the future discover new knowledge, in math and beyond.”