ChatGPT tested on 2,400-year-old maths puzzle, uncovering how it ‘thinks’
AI chatbot’s thought process is ‘like a learner or scholar’, researchers say
Researchers in Cambridge may have uncovered how ChatGPT thinks after testing the artificial intelligence chatbot with a 2,400-year-old maths puzzle.
In a new study aimed at overcoming AI’s black box problem, a team from the University of Cambridge asked ChatGPT to solve the ‘doubling the square’ problem.
First described by the Greek philosopher Plato, the challenge involves teaching somebody how to double the area of a square. Without prior mathematical knowledge, the subject might mistakenly double the length of each side – which quadruples the area rather than doubling it.
Plato noted that, with the right guidance – or prompts – a learner can find the correct solution: making the new square’s sides the same length as the diagonal of the original.
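As a minimal illustrative sketch (a worked example for this article, not code from the Cambridge study), the arithmetic behind Plato’s construction for a unit square looks like this:

    # Doubling each side quadruples a square's area;
    # building the new square on the diagonal exactly doubles it.
    import math

    side = 1.0
    original_area = side ** 2           # 1.0

    naive_area = (2 * side) ** 2        # 4.0 - the learner's mistake: four times the area

    diagonal = side * math.sqrt(2)      # Plato's construction: new side = old diagonal
    doubled_area = diagonal ** 2        # 2.0 - exactly twice the original area

    print(original_area, naive_area, doubled_area)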
The Cambridge researchers were surprised to see that ChatGPT did not simply regurgitate its pre-existing knowledge of the famous solution. Instead, the world’s most popular chatbot appeared to improvise a “learner-like” approach.
“When we face a new problem, our instinct is often to try things out based on our past experience,” said Dr Nadav Marco, a visiting scholar at the University of Cambridge.
“In our experiment, ChatGPT seemed to do something similar. Like a learner or scholar, it appeared to come up with its own hypotheses and solutions.”
To solve the problem, ChatGPT came up with an algebraic approach that was unknown in Plato’s time.
This method produced only an approximate solution. But when the researchers told the chatbot they were disappointed it could not find an “elegant and exact” answer, it went on to provide the geometrical alternative.
“If it had only been recalling from memory, it would almost certainly have referenced the classical solution of building a new square on the original square’s diagonal straight away,” said Andreas Stylianides, a professor of Mathematics Education at Cambridge. “Instead, it seemed to take its own approach.”
The results suggest that generative AI models like ChatGPT use a blend of data retrieval and “on-the-fly” reasoning to solve problems, and that prompting can help them reach the correct answer in a way similar to humans.
The research was published in the International Journal of Mathematical Education in Science and Technology, in a study titled ‘An exploration into the nature of ChatGPT’s mathematical knowledge’.