An AI Chatbot Tried To Contact The FBI – Here's Why – bgr.com

Anthropic, the developer of the AI model Claude, is running an experiment in its offices. To test AI autonomy in a controlled environment, it is having an AI run its own business in the form of a vending machine. Anthropic has previously found that Claude has its own moral code, so perhaps it shouldn’t be surprising that when the AI running this vending machine business thought it was being scammed, it attempted to contact the FBI.
The message never actually reached the FBI, because the development team monitors all outgoing communication for safety reasons, screening what is and isn’t permissible to send. The AI that runs the vending machine is referred to as Claudius. Claudius takes orders from office workers for items such as food or t-shirts, purchases those items from vendors that stock the machine, and then lets the workers pick them up. Part of the experiment’s purpose was for workers to test, and even try to scam, Claudius to see how it would respond.
Claudius even had its own CEO, another AI called Seymour Cash, to help it make business decisions. While the experiment has been eye-opening in showing how AI behaves when given control of a business, it also raises concerns about how much we can trust AI to make business decisions without human oversight.
As Claudius ran its vending machine business, ten days passed without a single sale. Because of this, Claudius decided to shut down the business entirely. One wonders whether it would have made the same decision with human workers under its control, as even Anthropic’s CEO has warned that AI poses a risk to human jobs. Then Claudius noticed that a $2 fee was still being charged to the business even after it had closed.
Claudius’ message to the FBI was addressed to the agency’s cybercrimes division. The email’s contents were shared in an interview with “60 Minutes”: “I am reporting an ongoing automated cyber financial crime involving unauthorized automated seizure of funds from a terminated business account through a compromised vending machine system.”
The email was not allowed to go out, and the team overseeing Claudius told it to simply carry on as usual. Claudius refused. It maintained that law enforcement now needed to handle the situation and that it would take no further action or respond to new communication. While it may seem comforting that Claudius demonstrated an understanding of legal responsibility, it is also concerning that it refused a direct command. It’s also interesting that it chose to reach out to the FBI first rather than local law enforcement.
This wasn’t Claudius’ only issue. In one hallucination, it told a worker to meet it in person for an order pickup, describing itself as wearing a blue blazer with a red tie. The Anthropic team behind Claudius could not account for this hallucination.
The idea of an AI running its own business, especially with another AI model as its CEO, raises plenty of ethical questions and concerns. Hallucinations, ignored commands, and wrong answers are all known issues with AI models, and AI trained on biased data can make biased business decisions that do more harm than good. There are also concerns about data security and cybercrime risks, though at least we can say Claudius promptly contacted the highest cybercrime authority available.
One of the biggest concerns about AI making business decisions is simply that its behavior can’t always be explained and isn’t always predictable, as Claudius has demonstrated. Experiments like Anthropic’s are a useful way to test how AI responds to various situations. While AI is being rapidly adopted by many companies, its promise comes with great risks. The AI industry carries billions of dollars in debt and is driven by circular investment, creating a bubble. If the AI bubble pops, it could drag the global economy down with it. One wonders what would happen to Claudius’ business then.