Quiet-STaR algorithm allows chatbot to think over its possible answer before responding
March 21, 2024 report
This article has been reviewed according to Science X's editorial process and policies. Editors have highlighted the following attributes while ensuring the content's credibility: fact-checked, preprint, trusted source, proofread.
by Bob Yirka , Tech Xplore

A collaboration between AI researchers at Stanford University and Notbad AI Inc. has resulted in the development of an algorithm that allows current chatbots to mull over possible responses to a query before giving their final answer. The team has published a paper on the arXiv preprint server describing the new approach and how well the algorithm worked when paired with an existing chatbot.
As the researchers note, the general approach taken by current chatbots is to generate an answer to a query posed by a human using training data. None of the chatbots currently available to the public stops to ponder multiple possible answers to a query before giving the one it thinks is most likely to be what the human wanted. If a human responded in such a fashion, it would be described as simply blurting out an answer.
In this new study, the research team has given chatbots a means of mulling a bit before answering, and in so doing, claims to have created a way for chatbots to be much more accurate—and to answer questions in more human-like ways.
The algorithm, Quiet-STaR, works by first asking the chatbot to produce multiple answers to a given query. It compares the answers with the original query to decide which appears to be the best. It then directs the chatbot to return that answer to the user. The team also gave the algorithm the ability to learn from its own work, thereby improving its mulling capabilities over time.
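The sample-compare-select loop described above can be sketched as a short best-of-N routine. This is only an illustration of the high-level idea as the article states it; the names `generate` and `score` are hypothetical stand-ins for a real language model and a scoring function, and the actual Quiet-STaR method (per the paper's title) operates on internal rationales rather than whole finished answers.

```python
# Hypothetical sketch of the sample-score-select loop described in the
# article. `generate` and `score` are placeholder callables, not part of
# any real chatbot API.
from typing import Callable, List


def best_of_n(query: str,
              generate: Callable[[str], str],
              score: Callable[[str, str], float],
              n: int = 4) -> str:
    """Draw n candidate answers, score each against the query,
    and return the highest-scoring one."""
    candidates: List[str] = [generate(query) for _ in range(n)]
    return max(candidates, key=lambda answer: score(query, answer))


# Toy usage with stub functions standing in for a model and a scorer.
if __name__ == "__main__":
    drafts = iter(["short", "a fuller, more considered reply", "medium reply"])
    chosen = best_of_n(
        "What is Quiet-STaR?",
        generate=lambda q: next(drafts),          # stub "model"
        score=lambda q, a: len(a),                # stub "quality" metric
        n=3,
    )
    print(chosen)
```

The learning step the team describes — improving the mulling over time — would correspond to feeding the chosen answers back into training, which this sketch does not attempt.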
To test their algorithm, the researchers added it to the open-source Mistral 7B chatbot and tested it using a standard reasoning test—it scored 47.2%. Without the algorithm, Mistral 7B scored just 36.3%. It also did much better on a math test.
The research team notes that their algorithm could be plugged into any of the chatbots currently in use, though it would have to be done by their makers, a move they suggest could improve the accuracy of chatbots in general.
More information: Eric Zelikman et al, Quiet-STaR: Language Models Can Teach Themselves to Think Before Speaking, arXiv (2024). DOI: 10.48550/arxiv.2403.09629
© 2024 Science X Network