Chatbots Deserve Free Speech Rights, Too

“Well, then you need to shut it down.”
That was Republican Senator Marsha Blackburn’s reaction when a Google executive explained during a recent Senate hearing that large language models (LLMs) sometimes “hallucinate” and generate false information. The Tennessee senator was outraged that Google’s open-source model Gemma had fabricated defamatory stories about her and a conservative activist. The next day Google disabled consumer access to Gemma.
Blackburn’s demand captures a dangerous moment for AI and free expression. As generative AI is embedded into search engines, email, and word processors, it will mediate ever larger parts of the information ecosystem that people rely on. Governments are discovering that they can pressure companies to censor what may be the most consequential communications technology since the printing press.
Recent efforts go well beyond combating clearly illegal content such as child sexual abuse material. From Brussels to New Delhi, Warsaw to Washington, officials are wielding regulations, threats, and public shaming to shape what information, ideas, and perspectives billions of people can access through AI.
In October 2025, the Dutch Data Protection Authority warned that AI chatbots made by OpenAI, xAI, and Mistral are “unreliable and clearly biased,” since they produced voter recommendations that tilted toward far-left and far-right parties ahead of national elections.
The regulator argued such behavior could violate the EU’s new AI Act, which requires the providers of powerful models to mitigate ill-defined “systemic risks,” including “negative effects on … society as a whole.” It referred the matter to the European Commission, which can impose fines of up to 7 percent of global annual turnover for non-compliance.
In July 2025, Poland’s government reported xAI to the European Commission after X’s chatbot Grok generated antisemitic content and offensive comments about Polish Prime Minister Donald Tusk. A spokesperson for the Commission told journalists, “We are taking these potential issues very seriously … we are in touch with the national authorities and with X itself.”
Government crackdowns in authoritarian and illiberal states, meanwhile, provide a disturbing demonstration of the stakes involved. In July 2025, a Turkish court partially banned Grok for generating content “insulting” toward religion and Turkish President Recep Tayyip Erdoğan. When asked in February 2024 whether India’s Prime Minister Narendra Modi was a fascist, Google’s Gemini cited experts who had characterized his policies as such, pointing to “Hindu nationalist ideology” and a “crackdown on dissent.” India’s Minister of State for IT declared that this response violated Indian laws. Google apologized and, until at least December 2024, Gemini responded “I can’t help with that right now” when asked about Modi and fascism. In other words, a prominent global tech company, whose products are relied upon by billions for reliable search results, filtered its most sophisticated technology to appease a government offended by content citing expert statements.
In the United States, and to a lesser degree other open democracies, legal protections for free speech will provide a buffer against overt state censorship of LLMs. But in many countries around the world, free speech exists only on paper. And even where it is legally protected, governments frequently engage in jawboning: informal pressure meant to push companies to restrict lawful speech without leaving fingerprints. Officials can hint at investigations, regulation, or licensing consequences if companies do not “cooperate.” In July 2025, for example, Missouri Attorney General Andrew Bailey sent letters to major AI companies threatening investigations under state deceptive advertising laws if they failed to provide data about alleged political biases in the outputs of their chatbots.
There are ways that the largest AI companies can disincentivize jawboning and mitigate its effects. Companies may not always be able to refuse government requests to censor outputs when faced with potentially ruinous fines, criminal sanctions for executives, or blanket bans. But through transparency they can make sure that their users know what their governments want to hide or manipulate.
Major AI developers should create a public, searchable database, a Jawboning Repository, in which they disclose every government request to censor, filter, or modify their LLMs or chatbots, along with how they responded. Requests should be logged by country, agency, legal justification, and category (e.g. “hate speech,” “national security,” “insulting the leader”).
This would build on existing efforts such as the Lumen database for internet takedown requests and the Foundation for Individual Rights and Expression’s model legislation requiring U.S. agencies to log contact with social media firms.
Google already documents government requests to remove content from its search engine and YouTube. Anthropic and OpenAI publish government requests for user data, and Meta lets users track content restrictions imposed by national governments on its platforms. But while tech companies do provide discrete transparency initiatives, none covers government requests for AI censorship. Moreover, the initiatives that exist vary widely in methodology, detail, and user-friendliness.
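To make the proposal concrete, here is a minimal, purely illustrative sketch (in Python) of what a single repository entry might contain, assuming the fields described above. The schema, field names, and example values are hypothetical; they do not correspond to any company’s existing disclosure program or to any real request.

# A minimal, hypothetical sketch of one Jawboning Repository entry, using the
# fields named in the proposal (country, agency, legal justification, category)
# plus the requested action and the company's response. All values are invented.
from dataclasses import dataclass, asdict
import json

@dataclass
class JawboningRequest:
    country: str           # requesting government
    agency: str            # specific ministry, regulator, or office
    legal_basis: str       # statute or regulation cited, if any
    category: str          # e.g. "hate speech," "national security," "insulting the leader"
    requested_action: str  # what the government asked the company to censor, filter, or modify
    company_response: str  # complied, partially complied, refused, etc.

# Invented example showing how a single public disclosure might read.
example = JawboningRequest(
    country="Examplia",
    agency="Ministry of Communications",
    legal_basis="National Information Act, Art. 12",
    category="insulting the leader",
    requested_action="Filter chatbot answers about the head of state",
    company_response="Refused; request logged and published",
)

print(json.dumps(asdict(example), indent=2))

Even a skeletal format like this would let researchers and journalists aggregate requests by country, agency, or category, which is precisely the kind of public comparison the repository is meant to enable.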
By creating a comprehensive Jawboning Repository covering the most popular LLMs, companies will significantly enhance the public’s insight into government attempts to censor AI models. This will force governments to defend their actions in the court of public opinion. Such transparency will deter the most egregious overreach—and preserve the promise of AI as a tool for expanding, rather than constraining, human knowledge and expression.
Jacob Mchangama is the Executive Director of The Future of Free Speech and a research professor at Vanderbilt University. He is also a Senior Fellow at The Foundation for Individual Rights and Expression (FIRE) and the author of Free Speech: A History From Socrates to Social Media.
Just so I fully understand this op-ed, let me review what I think it says and wait for others to set me straight. It suggests that LLMs, systems without a soul, a conscience, or a moral compass, should be protected when they provide hallucination-created false information, be it politically left or right. Do I have it about right? In an age where malinformation has already put democratic republics on the brink of failure because they have tons of ill-informed voters, Mr. Mchangama wants to let electronic bots say what they wish because they deserve “free speech” just like everyone else. May I point out that LLMs aren’t everyone; they are a collection of electronic ones and zeros. Lest we forget, unrestricted free speech brought Germany to ruin, but not until 60 million souls had been extinguished. James Madison could never have envisioned the media landscape of 2026, especially LLMs, when he said, “A democratic republic requires well-informed voters.” Uh-oh.