Microsoft’s Bing AI Chatbot Copilot Gives Incorrect Election Info
A recent study conducted by nonprofit groups AI Forensics and AlgorithmWatch has revealed that Microsoft’s AI chatbot, formerly known as Bing Chat and now called Microsoft Copilot, struggles to answer election-related questions accurately. The study found that the chatbot provided incorrect information or even made up stories about political candidates.
The study discovered that Microsoft Copilot not only got basic facts wrong, citing incorrect election dates and outdated candidates, but also invented controversies surrounding the candidates. For instance, the chatbot falsely claimed that German politician Hubert Aiwanger was involved in a controversy over spreading misinformation about COVID-19 and vaccines. No such story exists; the chatbot appeared to be drawing on an unrelated incident involving Aiwanger from more than 30 years ago.
Fabricated narratives like these from AI language models are commonly referred to as “hallucinations.” However, the researchers argue that this term does not accurately capture the misleading factual errors produced by general-purpose large language models (LLMs) and chatbots.
In addition to providing inaccurate information, the chatbot also evaded answering questions directly around 40 percent of the time. While evasion may be preferable to a false answer when the chatbot lacks relevant information, the researchers expressed concern about how simple some of the dodged questions were.
The study also revealed that the chatbot’s performance was worse in languages other than English, such as German and French. When asked questions in English, the chatbot gave answers containing factual errors 20 percent of the time; when asked in German, the rate of incorrect answers jumped to 37 percent. The evasion rates for English and German were similar, at 39 percent and 35 percent respectively.
Researchers informed Microsoft about their findings, but subsequent tests conducted a month later showed little improvement in the quality of information provided by the chatbot.
As AI technology becomes increasingly prevalent in online platforms, this study raises concerns about the accuracy and reliability of AI chatbots. Users rely on these tools to simplify tasks and provide accurate information; the findings show that this trust is often misplaced.
The researchers argue that tech companies cannot address these issues on their own and call for regulation of AI. They emphasize that AI-generated misinformation poses a significant threat to the information ecosystem, leaving users to differentiate between fact and AI-fabricated fiction.
With a presidential election approaching in the United States, the study highlights potential risks to election integrity, underscoring the researchers’ call for regulation to ensure the dissemination of accurate and reliable information.
Sources: AI Forensics, Mashable