Should AI Chatbots Provide Government Services?


The small Balkan country of Albania recently announced a daring artificial intelligence initiative: it is building an AI-powered chatbot called Diella to serve as the country’s Minister for Public Procurement.   
From now on, the artificial agent will manage all bids for government contracts and purchases, from roadworks to public health services. Diella will make Albania “a country where public tenders are 100% free of corruption,” says Prime Minister Edi Rama.  
AI agents are replacing human experts across politics, law, and medicine. The Estonian government is trialing the use of AI judges in civil disputes. And it is not just governments: individuals are following the trend. A recent report found that more than 10 million British people use chatbots to manage and self-diagnose mental health conditions.
These efforts seem sensible, even a promising answer to pressing problems. Humans are flawed: corruptible, slow, and prone to mistakes, so replacing them with seemingly infallible machines has obvious appeal. Yet however appealing it may initially look, reliance on AI poses serious, as yet unanswered questions about liability and effectiveness.
The motivations for turning to AI chatbots range from increasing transparency to boosting the supply of public services. Albania suffers from rampant corruption. The US Department of State’s most recent report on the country concluded that corruption was present in all branches of government, “including through public procurement and public-private partnerships.” This impedes the country’s accession to the European Union, a goal Prime Minister Rama aims to achieve by 2030.
Other countries are eyeing AI solutions to their bureaucratic backlogs. In Estonia, the number of judges has remained constant over the past two decades, while the number of cases has roughly doubled. In the UK, the Crown Court had a record-breaking 67,000 outstanding cases at the end of 2024.
The need for mental health services is also rising. As of mid-2025, more than a million people are on the UK’s National Health Service waiting list for support.
Attractive as it is, this shift from human to AI experts comes with serious risks. Chatbots can be impressively reliable and autonomous, but they are fallible, and when an AI makes a mistake, it is unclear who is responsible.
Large language models such as ChatGPT or Diella are built on neural networks loosely modelled on the human brain. Trained on vast amounts of data, they learn statistical associations between data points, which is what lets them make autonomous predictions and give largely reliable answers. But it is intrinsic to such systems (as it is to our brains) that they will sometimes give the wrong answer.
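To see why, consider a deliberately simple sketch in Python. It treats a “model” as nothing more than probability distributions over answers, as if learned from data, and samples from them. The prompt, answers, and probabilities below are invented for illustration, and a real system like Diella is vastly more complex, but the failure mode is the same: a system that answers by weighted guesswork will occasionally guess wrong.

```python
import random

# A toy stand-in for a language model: each prompt maps to a probability
# distribution over candidate answers, as if learned from training data.
# The prompt, answers, and probabilities here are invented for illustration.
TOY_MODEL = {
    "Which bid offers the best value?": {
        "Bidder A": 0.85,  # usually the right call
        "Bidder B": 0.10,
        "Bidder C": 0.05,  # occasionally chosen, and wrong
    },
}

def answer(prompt: str) -> str:
    """Sample an answer in proportion to the model's learned probabilities."""
    dist = TOY_MODEL[prompt]
    return random.choices(list(dist), weights=list(dist.values()), k=1)[0]

# Over many runs the model mostly picks the likeliest answer, but the very
# sampling that makes it flexible also guarantees occasional wrong answers.
tally: dict[str, int] = {}
for _ in range(1_000):
    choice = answer("Which bid offers the best value?")
    tally[choice] = tally.get(choice, 0) + 1
print(tally)  # e.g. {'Bidder A': 853, 'Bidder B': 96, 'Bidder C': 51}
```

The numbers are arbitrary; the point is structural. Correctness is a matter of probability rather than guarantee, which is why the question of accountability matters.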
When human experts make mistakes, they can be held accountable. For doctors and lawyers, malpractice law steps in to protect citizens when they act negligently or cause preventable harm. When ministers and judges make mistakes, public backlash can pressure them to step down.
No such accountability exists with AI. If an AI agent makes a mistake, it is difficult to hold anyone to account, and legal cases are proliferating. An Air Canada customer recently sued the company after its chatbot gave him false information about the carrier’s bereavement fare policy. In response, the company argued that it could not be held responsible for the advice given by the chatbot, because the bot was a “separate legal entity that is responsible for its own choices.”
Though the airline lost the case and had to compensate the customer, the larger question of how to manage misleading or false answers from chatbots remains unanswered. AI chatbots are often called “black boxes” because, unlike humans, they cannot explain their decision-making. If the Albanian AI minister Diella makes a public procurement decision that turns out to cause immense harm, it will be unable to explain its reasoning. That is a pressing concern as AI agents replace experts from whom citizens demand considered judgments and explanations.


AI-powered government officials pose practical problems, too. As Prime Minister Rama himself said in his announcement, “Diella is the first cabinet member who isn’t physically present.” It cannot sit at the table during cabinet meetings, and it cannot be locked up if it breaks the law.
While there are risks in replacing bureaucrats with AI agents, the technology could instead boost their productivity. In Germany, AI is being trialed to assist rather than replace lawyers and judges, allowing judges to focus on the complex questions in their jobs, where human judgement is required, and leave administrative tasks to AI. The UK National Health Service is experimenting with chatbots to triage people seeking mental health support, so that those in need can reach a human specialist as quickly as possible.
Albania’s appointment of an AI minister made headlines, and for good reason. The dangers of handing AI systems greater responsibility are real. But the opportunities for this technology to improve public services are real, too.
Clara Riedenstein is a tech policy analyst and writer whose work examines how emerging technologies shape existing political, legal, and social institutions. Her research has been featured in Bandwidth, Tech Policy Press, and European View. Clara holds an MSc in Political Theory Research from Oxford University, where she studied as a C. Douglas Dillon Scholar and focused on the implications of large language models for theories of state and jurisdiction. 
Bandwidth is CEPA’s online journal dedicated to advancing transatlantic cooperation on tech policy. All opinions expressed on Bandwidth are those of the author alone and may not represent those of the institutions they represent or the Center for European Policy Analysis. CEPA maintains a strict intellectual independence policy across all its projects and publications.
