To train AlphaGeometry’s language model, the researchers had to create their own training data to compensate for the scarcity of existing geometric data. They generated nearly half a billion random geometric diagrams and fed them to the symbolic engine. This engine analyzed each diagram and produced statements about its properties. Those statements were then assembled into 100 million synthetic proofs to train the language model.
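To make that pipeline concrete, here is a minimal Python sketch of the same idea, not AlphaGeometry’s actual code: a “diagram” is reduced to a random set of parallel-line premises, a toy symbolic engine forward-chains a single deduction rule (transitivity of parallelism, standing in for the engine’s full rule set), and each derived statement is traced back to the premises to yield a (statement, proof) training pair. The function names, the rule, and the scale are all illustrative assumptions.

```python
import random
from itertools import combinations

# Hypothetical sketch of the synthetic-data pipeline described above.
# A "diagram" is just a random set of parallel(a, b) premises; the toy
# symbolic engine forward-chains one rule and records parent pointers
# so a proof can be read back out of any derived statement.

def random_diagram(num_lines=6, num_premises=5, rng=random):
    """Sample random parallel(a, b) premises over a few line names."""
    lines = [f"l{i}" for i in range(num_lines)]
    pairs = list(combinations(lines, 2))
    return {("parallel",) + p for p in rng.sample(pairs, num_premises)}

def deduce(premises):
    """Forward-chain transitivity: parallel(a,b) & parallel(b,c) => parallel(a,c).
    Returns all known facts plus, for each derived fact, its two parents."""
    known = set(premises)
    parents = {}
    changed = True
    while changed:                        # iterate to a fixpoint
        changed = False
        for f1 in list(known):
            for f2 in list(known):
                _, a, b = f1
                _, c, d = f2
                shared = {a, b} & {c, d}
                if len(shared) == 1:      # exactly one line in common
                    x, y = sorted(({a, b} | {c, d}) - shared)
                    new = ("parallel", x, y)
                    if new not in known:
                        known.add(new)
                        parents[new] = (f1, f2)
                        changed = True
    return known, parents

def trace_proof(statement, parents, premises):
    """Walk parent pointers back to the premises, yielding an ordered proof."""
    if statement in premises:
        return [("premise", statement)]
    f1, f2 = parents[statement]
    return (trace_proof(f1, parents, premises)
            + trace_proof(f2, parents, premises)
            + [("derived", statement)])

# Generate (statement, proof) training pairs from many random diagrams.
rng = random.Random(0)
dataset = []
for _ in range(100):                      # the real pipeline used ~500M diagrams
    premises = random_diagram(rng=rng)
    facts, parents = deduce(premises)
    for fact in facts - premises:         # every derived fact is one example
        dataset.append((fact, trace_proof(fact, parents, premises)))
print(len(dataset), "synthetic proof examples")
```

The key design choice, recording which two facts produced each deduction, is what turns a pile of derived statements into proofs a language model can be trained on.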

Roman Yampolskiy, an associate professor of computer science and engineering at the University of Louisville who was not involved in the research, says that AlphaGeometry’s performance represents a significant advance toward more “sophisticated, human-like problem-solving skills in machines.”

“Beyond mathematics, its implications span fields that rely on geometric problem-solving, such as computer vision, architecture, and even theoretical physics,” said Yampolskiy in an email.

However, there is room for improvement. While AlphaGeometry can solve problems found in “elementary” mathematics, it remains unable to grapple with the sorts of advanced, abstract problems taught at university.

“Mathematicians would be really interested if AI can solve problems that are posed in research mathematics, perhaps by having new mathematical insights,” said van Doorn.

Wang says the goal is to apply a similar approach to broader math fields. “Geometry is just an example for us to demonstrate that we are on the verge of AI being able to do deep reasoning,” he says.
