UK civil servants square off with ChatGPT in an experiment – Interesting Engineering

Hallucinations and faulty answers aside, AI chatbots are pretty helpful – not just for regular people but also for government agencies. They help answer questions about anything under the Sun and streamline processes in a folksy, conversational manner. But given that government officials often have to deal with sensitive information, can chatbots be relied on?
To test this, the UK Department for Education ran a trial to determine how civil servants could use AI chatbots to improve their productivity, the BBC reported.
According to internal documents obtained by the BBC, the project was conducted in collaboration with Faculty.ai, a London-based AI company whose customers include the UK Home Office, the NHS, OpenAI, and the BBC, among others.
The researchers at Faculty.ai used a premium version of ChatGPT to summarize and compare the "main insights and themes" of several skills training plans that had been sent to the Department for review. These summaries generated by ChatGPT were then compared to summaries written by civil servants. 
The results of the study are yet to be published, but the documents said that language models could analyze long, unstructured documents "where previously the only other option was for individuals to read through all the reports."
The BBC spoke to Renate Samson, a researcher at the Ada Lovelace Institute, a think tank, who said that both central and local government were enthusiastic about using language models to analyze documents and draft reports.
As the UK government explores generative AI, it is also cognizant of the risks, such as feeding confidential information into a chatbot and the privacy issues that entails. The government has published a set of generative AI guidelines for civil servants. Because the technology is evolving rapidly, the guidelines stand for six months, after which they will be reviewed again.
“You are encouraged to be curious about this new technology, expand your understanding of how they can be used and how they work, and use them within the parameters set out within this guidance,” said the guidance.
The two key points that the guide mentions are:
You should never input information that is classified, sensitive, or reveals the intent of the government (that may not be in the public domain) into any of these tools. You should have regard to the principles of GDPR.
Output from generative AI is susceptible to bias and misinformation; it needs to be checked and cited appropriately.
However, not all governments have embraced ChatGPT. China, Russia, Italy, Iran, and North Korea have banned the chatbot, citing concerns over data transfers and the lack of regulation of the technology.