Ensuring a human firewall: Ethical standards for AI in government and police communications


By John Guilfoil, MA, APIO
On July 2, 2025, the Westbrook (Maine) Police Department acknowledged it had mistakenly posted an altered, AI-generated photo to Facebook following a drug seizure. An officer had used ChatGPT to add a department badge to the image, unaware the tool would modify other visual elements. The resulting distortions, including garbled text and missing objects, sparked public scrutiny. The department initially denied AI involvement, then reversed course and issued a corrected post with the original photo, along with a commitment to halt AI use in its social media content.
The incident is a cautionary tale for public safety agencies adopting generative AI without proper oversight. As AI tools become more accessible, departments must move beyond informal use and establish ethical guardrails. The article below outlines why that’s essential — and how to do it — before a minor communication error becomes a major breach of public trust.
| RELATED: Maine PD apologizes after posting AI-edited drug bust photos on social media
From spreadsheets to search engines, technology has long promised to make government and policing more efficient. Now, generative artificial intelligence (AI) is reshaping how we think about communication and customer service — and some are even discussing its use in emergency response. But as public agencies rush to adopt this latest technology, the stakes have never been higher.
Global investment in AI is expected to top $1 trillion by 2030, making the technology impossible to ignore. For government, public safety and education leaders, however, this transformation brings not only possibilities but also profound risks.
This is why police departments, which have only recently begun to fully incorporate media relations and social media policies into their SOPs, need governing documents for the use of artificial intelligence software, creating a human firewall to govern and limit its use, before it's too late. Accreditation standards, for example, should include AI policy requirements.
At JGPR, a public relations and public information consultancy that serves more than 500 police departments in 17 states, our approach to AI has been cautious, limited and transparent. AI is never used as a client-facing or constituent-facing work product. However, the software can be used to analyze datasets, draw graphs, generate ideas for infographics, proofread content against Associated Press style and generate story ideas from source materials. In other words, we want AI talking to your PIO — but we always want your PIO talking to your residents, constituents and the news media directly.
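To make the "AI talks to your PIO, your PIO talks to the public" boundary concrete, here is a minimal sketch of a human-review gate in Python. This is not JGPR's actual tooling; generate_draft is a hypothetical stand-in for whatever LLM an agency might use. The point is simply that nothing AI-assisted reaches a publishing channel without an explicit, named human sign-off.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

@dataclass
class Draft:
    """A press release or social media post drafted with AI assistance."""
    body: str
    ai_assisted: bool
    approved_by: Optional[str] = None
    approved_at: Optional[datetime] = None

def generate_draft(source_notes: str) -> Draft:
    # Hypothetical stand-in for a call to whatever LLM the agency uses.
    return Draft(body=f"DRAFT: {source_notes}", ai_assisted=True)

def approve(draft: Draft, reviewer: str) -> Draft:
    """A named human reviews and signs off -- the 'human firewall'."""
    draft.approved_by = reviewer
    draft.approved_at = datetime.now(timezone.utc)
    return draft

def publish(draft: Draft) -> None:
    # Refuse to release AI-assisted content that lacks a human sign-off.
    if draft.ai_assisted and draft.approved_by is None:
        raise PermissionError("AI-assisted content requires human review before release.")
    print(f"Published (approved by {draft.approved_by}): {draft.body}")

draft = generate_draft("Arrests made after Tuesday's drug seizure on Main Street.")
publish(approve(draft, reviewer="PIO J. Smith"))   # released with a named approver
# publish(generate_draft("..."))                   # would raise PermissionError
```

The design choice is deliberate: the refusal lives in the publishing step, not the drafting step, so the AI remains free to assist internally while the last mile to residents and the news media always runs through a person.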
| RELATED: Can AI fix 911’s biggest problems — or make them worse?
Public trust is the foundation of all police, fire and government communication. That trust erodes when people feel like they’re being spoken to by a machine, not a person. The frustration is obvious when a customer is stuck with a chatbot instead of an airline representative or big-box store associate. In government, that same disconnect can undermine confidence in public institutions, especially when constituents are seeking answers or services that directly impact their safety, well-being or quality of life.
There’s enormous potential for AI to improve workflows in government, but the moment a taxpayer believes they’re being misled or kept in the dark about who — or what — is communicating with them, trust is lost. And if this is ever done in a sneaky way — a way that doesn’t clearly disclose the use of AI — it can cause lasting reputational harm.
When crafting an AI ethics policy, keep in mind some clear ethical lines in the sand. Among them, consider these:

In creating a sample “Statement of Principles” for public agencies, JGPR does not call for a ban on generative AI. In fact, the agency encourages the use of large language models (LLMs) like ChatGPT, Copilot and Gemini to generate “first drafts” of content — provided that a human being reviews and edits it.
We should draw a hard line, however, between support tools and decision-makers. Artificial intelligence cannot and must not replace human-to-human communication or judgment in police, fire, EMS or government.
With that take on artificial intelligence, consider dividing your AI usage policies into “Must,” “May” and “Must Not/Must Never,” and then fill in the blanks. Here is a proposed set of AI usage criteria for police departments, offered freely for agency use or as a starting point to redraft, revise, add to or subtract from to fit your needs; a short sketch of how the tiers might be encoded follows the lists below.
We must:

We may:

We must never:

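However your agency fills in those three lists, the tiers themselves can be made concrete. The sketch below is illustrative only, not a complete or recommended policy; its items are drawn solely from principles stated elsewhere in this article, and the hypothetical check function shows how content tools could consult the tiers before anything is posted.

```python
# A tiered AI-usage policy expressed as data. Item lists are
# illustrative examples drawn from this article, not a full policy.
POLICY = {
    "must": [
        "Disclose clearly when AI was used to produce content",
        "Have a human review and edit every AI-generated first draft",
    ],
    "may": [
        "Use LLMs (e.g., ChatGPT, Copilot, Gemini) for first drafts",
        "Use AI to analyze datasets, draw graphs and proofread against AP style",
    ],
    "must_never": [
        "Let AI replace human-to-human communication or judgment",
        "Post AI-altered imagery of evidence or scenes",
    ],
}

def check(action: str) -> str:
    """Return the policy tier for a proposed action, if listed."""
    for tier, items in POLICY.items():
        if action in items:
            return tier
    return "unlisted: route to a human supervisor for a decision"

print(check("Post AI-altered imagery of evidence or scenes"))  # must_never
```

Note the default for anything unlisted: the question goes to a human supervisor. A policy that fails closed in this way is itself part of the human firewall.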
A basic framework and policy are not an attempt to hold back the tide. Rather, they are a way to wade gently into a completely new world of technology-driven communications. There are already AI-powered tools being marketed to streamline constituent interactions, from answering routine questions to helping manage service requests. But AI is not a plug-and-play solution, and it may never deliver the accuracy, transparency and public trust required without ongoing human oversight and clearly defined ethical guardrails.
It is important to remember that, in policing, we are not making products or selling goods. We sell public safety and security — and that is not something we can safely “beta test” in the real world.
About the author
John Guilfoil is a public information officer and founder of JGPR, a Boston-based communications agency that specializes in providing public relations, media relations, website design and crisis support to police and fire departments and municipal agencies. He teaches public relations and journalism at Northeastern University and Lasell University, and he is the author of “Public Relations: A Professional Approach.”
| WATCH: Generative AI in law enforcement: Questions police chiefs need to answer
