Don't use ChatGPT to decide policy, government tells federal public servants – National Post

Guidelines also warn against inputting sensitive information in AI systems
OTTAWA — Government employees shouldn’t rely on tools like ChatGPT at the expense of their own critical thinking, or use generative artificial intelligence to make government policy, according to new guidelines.

Bureaucrats also need to make sure they’re not plugging “protected, classified or other sensitive information” into AI tools, and ensure Canadians know when they’re interacting with artificial intelligence instead of a real person who works for the government.

The guide, released Sept. 6, provides preliminary advice on how federal institutions can use generative-AI tools, which can generate written text, photos, videos or code. Since the emergence of ChatGPT less than a year ago, people in various sectors have been using the technology in the workplace to write emails, create computer code, do research and help write reports, among other uses.

The government’s guide says federal institutions should be exploring how they “could use generative AI tools to support their operations and improve outcomes for Canadians,” noting that given the increasing adoption of these technologies, “their use in government will help keep pace with the evolving digital landscape.”

The guidelines direct public servants not to enter sensitive or personal information into tools that aren’t managed by the government, and not to submit queries into non-government systems that “could undermine public trust if they were disclosed.”

The guidelines also warn against letting AI tools do the work at the expense of an individual’s critical thinking. “Overreliance on generative AI tools can interfere with individual autonomy and judgment. For example, some users may be prone to uncritically accept system recommendations or other outputs, which could be incorrect,” they said.

“Overreliance on AI systems can result in a decline in critical thinking and can limit diversity in thought, thereby stifling creativity and innovation and resulting in partial or incomplete analyses.”

Government employees shouldn’t turn to AI tools to do work they aren’t capable of doing themselves, and should use them as “aids, not as substitutes.”

“Form your own views before you seek ideas or recommendations from AI tools,” the guide says.

The public should also be informed when they are interacting with a machine and when they’re interacting with a human, something the guide said is crucial to maintaining trust in government.

“Conversational agents or chatbots that use generative AI can produce responses that are so human-like that it may be difficult to distinguish them from those of a real person. As a result, clients may be misled into believing that they are interacting with a human,” it explained.

Management should be informed if generative AI is used to help produce content like briefing notes, and the person generating the information is responsible for making sure the content is accurate and “protected, classified or other sensitive information” isn’t plugged into the AI systems.

Similarly, government employees can use generative AI for programming, but need to “consider the security classification of the code.”

Public servants should also use caution in generating content for use in public communication, the guide said, noting it is “critical to ensure that outputs are trusted given the potential reach and impact of public communications.”

Employees can use generative AI for research during policy development, but not to “recommend, make or interpret policy.”

“When deciding on policy positions, make your own value judgments, in consultation with the relevant stakeholders and consistent with applicable laws,” the guidelines say. The prompts used in that process “should not include any information that would pose legal or reputational risks to the government.”

The guide also stipulates that federal institutions “should evaluate the tools for their potential to help employees, not replace them.”

That’s a concern recently expressed by a union representing public service employees in a brief to a House of Commons committee studying an AI-regulation bill. The Professional Institute of the Public Service of Canada said it was “informed in discussions with the Treasury Board that there would be layoffs as a result of AI.”

It said it wasn’t necessarily opposed to some job changes due to AI, but it wanted a one-year notice of layoffs and for anyone affected to be guaranteed new jobs, retraining or additional long-term unemployment insurance.

It also asked for more transparency on use of AI by government for hiring and evaluating public servants. “Employee evaluation with AI can imply some form of worker surveillance. We are asking for complete transparency on how AI is being used as regards to evaluation, monitoring, surveillance and hiring of federal government and agency employees,” the union said.

PIPSC also said AI use that affects employees should be part of the collective bargaining process. “Employers and unions should be able to bargain how AI is being used as regards to employees’ work life,” the union said.


© 2023 National Post, a division of Postmedia Network Inc. All rights reserved. Unauthorized distribution, transmission or republication strictly prohibited.