Why do AI chatbots lie and make things up?


Chatbots make things up because the algorithms used to train them reward guessing, artificial intelligence (AI) company OpenAI said in a new research paper.
OpenAI calls these errors "hallucinations": the large language models (LLMs) behind the chatbots guess answers when they are unsure instead of admitting that they don't know.
The researchers say hallucinations stem from errors in binary classification, the task of sorting new observations into one of two categories.
Hallucinations persist because LLMs are "optimised to be good test-takers, and guessing when uncertain improves test performance," the report said.
The researchers compared it to students who guess on multiple-choice exams or bluff on written exams because submitting an answer can earn more points than leaving it blank.
LLMs are scored on a points scheme that awards one point for a correct answer and none for a blank or for admitting they don't know the answer.
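The arithmetic behind that incentive can be sketched in a few lines of Python (a simplified illustration with assumed numbers, not code from the paper): when a wrong answer costs nothing, guessing never has a lower expected score than admitting ignorance, whatever the model's confidence.

```python
# A hedged sketch of the scoring incentive described above, not OpenAI's code.
def expected_score(p_correct: float, answer: bool,
                   reward_correct: float = 1.0,
                   penalty_wrong: float = 0.0,   # no cost for being wrong
                   reward_abstain: float = 0.0) -> float:
    """Expected points for answering with confidence p_correct, or abstaining."""
    if not answer:
        return reward_abstain
    return p_correct * reward_correct + (1 - p_correct) * penalty_wrong

for p in (0.9, 0.5, 0.1):
    print(f"confidence {p:.0%}: "
          f"guess={expected_score(p, True):.2f}, "
          f"say 'I don't know'={expected_score(p, False):.2f}")
# Guessing scores at least as well as abstaining at every confidence level,
# so a model optimised for this score learns to guess rather than admit doubt.
```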
The paper comes a few weeks after OpenAI released GPT-5, a model the company says is "hallucination-proof", producing 46 per cent fewer falsehoods than its predecessor, GPT-4o.
However, a recent study from the US company NewsGuard found that ChatGPT models in general spread falsehoods in 40 per cent of their answers. 
Through pre-training and post-training, chatbots learn to predict the next word across large amounts of text.
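In rough terms, that prediction step looks like the sketch below (made-up probabilities for illustration, not OpenAI's model): the chatbot scores possible continuations and emits the most likely one, with no built-in notion of being unsure.

```python
# Toy illustration of next-word prediction; the probabilities are invented.
next_word_probs = {          # imagined model output for "The capital of Australia is ..."
    "Canberra": 0.55,
    "Sydney": 0.40,          # plausible-sounding but wrong
    "I'm not sure": 0.05,
}

prediction = max(next_word_probs, key=next_word_probs.get)
print(prediction)  # the model simply emits its highest-probability guess
```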
OpenAI's paper found that while some things, such as spelling and grammar, follow very clear rules and structure, other subjects or types of data are hard or even impossible for an AI to learn to identify.
For example, algorithms can learn to classify pictures labelled "cat" or "dog", but if the pictures were labelled with each pet's birthday instead, there would be no pattern to learn and the chatbot could not categorise them accurately.
Tasks like that would "always produce errors, no matter how advanced the algorithm is," the report found.
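A small simulation makes the point concrete (a hedged sketch, not an experiment from the paper): when a label carries no signal that the inputs could reveal, accuracy stays pinned near chance regardless of the algorithm.

```python
import random

random.seed(0)
n_pets = 10_000
# "Birthday" labels bear no relationship to anything visible in a photo.
birthdays = [random.randrange(365) for _ in range(n_pets)]
# With no signal to learn from, any classifier's guesses do no better than chance.
predictions = [random.randrange(365) for _ in range(n_pets)]

accuracy = sum(p == b for p, b in zip(predictions, birthdays)) / n_pets
print(f"accuracy on arbitrary labels: {accuracy:.2%}")  # hovers around 1/365, i.e. ~0.3%
```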
One of the report's key findings is that models will never be 100 per cent accurate because "some real-world questions are inherently unanswerable".
To limit hallucinations, OpenAI said, users could instruct the LLM to respond with "I don't know" when it does not know the answer, and the existing points system could be modified so that kind of answer is no longer penalised relative to guessing.
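One way to picture such a change (a sketch with assumed weights, not a scheme specified by OpenAI): if a wrong answer costs a point and "I don't know" scores zero, answering only pays off above a confidence threshold, so uncertain guessing stops being the winning strategy.

```python
REWARD_CORRECT = 1.0
PENALTY_WRONG = -1.0   # hypothetical penalty; any non-zero cost shifts the incentive
REWARD_IDK = 0.0

def best_action(p_correct: float) -> str:
    """Pick the higher-scoring option given the model's confidence."""
    expected_answer = p_correct * REWARD_CORRECT + (1 - p_correct) * PENALTY_WRONG
    return "answer" if expected_answer > REWARD_IDK else "say 'I don't know'"

for p in (0.9, 0.5, 0.3):
    print(f"confidence {p:.0%}: {best_action(p)}")
# With these weights the break-even point is 50 per cent confidence:
# below it, admitting uncertainty scores better than guessing.
```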

