Impact of ChatGPT and other AI chatbots on the future of work – Chartered Institute of Personnel and Development
Hayfa Mohdzaini, the CIPD’s senior research adviser on data, technology and AI, investigates the potential implications of widely accessible conversational and generative AI
ChatGPT is an artificial intelligence (AI) chatbot developed by OpenAI, designed to provide users with instant, automated responses to their questions. Its capability and potential for broad application means people in many disciplines are finding uses for this technology tool. The ramifications for education and the world of work are the subject of intense interest and debate.
ChatGPT is a conversational AI chatbot that produces plausible human-like responses in text form from user inputs. Because it can create new content based on existing patterns of data, it’s also an example of generative AI.
ChatGPT’s ‘brain’ is powered by a large language model that has been trained on vast sets of text from the open internet and on feedback from people. Your organisation may already have an HR chatbot that uses a language model to provide first-line HR advice.
ChatGPT is the first to market to combine impressive, almost human-like conversational ability with being free, widely accessible and versatile (a premium version that accepts both text and image inputs is available on subscription). You can have conversations with ChatGPT on a vast range of topics, including holidays, history, HR, horticulture and health. However, ChatGPT’s ability to carry out logical reasoning and advanced maths is limited, because its answers are based on language patterns in its training data (mimicking what others have said) rather than on a logical reasoning process. This may improve as ChatGPT’s training data and programming are updated.
ChatGPT’s potential not only sparked consumer interest, but spurred the technology sector – giants like Microsoft and Google as well as startups in strategic alliances – to develop and refine their AI offerings. Conversational AI is becoming mainstream.
ChatGPT and other AI built on large language models could reduce the time and effort needed for many written tasks, such as summarising documents, generating ideas for social media posts, creating interview guides, and drafting job descriptions and templates for employee emails. This will clearly have an impact on jobs built around those activities.
The ability to fine-tune these bots with your organisation’s own data and customise them to your needs will add capability, but with that comes more risk to manage.
With developments advancing rapidly, it’s worth understanding how conversational AI like ChatGPT can help us work smarter and what its limitations are. To create a safe environment to learn how to use and innovate with AI, it’s important that functions across the organisation, including HR, collaborate to establish safeguards, standards and guidelines on appropriate use.
Employers should establish boundaries around acceptable and unacceptable uses. For example, in the case of ChatGPT:
As this is a new tool, review your company’s position and policy regularly. When social media first became mainstream, employers had to scramble to set boundaries that were shifting rapidly and constantly.
If a particular AI tool is deemed too risky for work purposes, employers may wish to ban its use and block access from company-owned devices and networks. This does not stop employees using it on their personal devices outside company networks, so other monitoring measures may be needed. This may not be a desirable or manageable solution long-term, and should be weighed against a ‘regulated use’ policy.
Equip people managers to talk to team members who might use conversational AI at work (regular computer users such as legal assistants, digital content creators and market researchers). Discuss which uses are acceptable, what opportunities they open up, and how this might change work for individuals and teams.
Explore scope to use the time saved for additional tasks or projects, or to retrain or achieve a four-day working week. Technology can impact people in six different ways – Table 2 of the CIPD’s Responsible investment in technology guide gives ideas on what to ask when considering the impact of technology on jobs and tasks.
It could help personalise and speed up learning. Its ability to gather and summarise information and engage in conversation means it could become a personal tutor on some subjects.
Conversational AI could reduce the skills required to become competent at some tasks (deskilling them), increasing the number of people who could do those tasks competently. With conversational AI tools producing draft summaries quickly, the demand for summarising skills might shift towards fact checking, editing, and learning how to use a chatbot for the task. Calculators offer a parallel: while they can calculate quickly and accurately, arithmetic is still considered a core skill. We still need to know enough about sums and other arithmetic to know when to use calculators in our daily lives.
Important skills at work will increasingly veer towards ‘human’ skills and the ability to collaborate with AI. For example:
For the L&D function, conversational AI could improve:
Conversational AI can deliver fast and personalised responses at scale. It could take on time-consuming information gathering and synthesis tasks, releasing people to strategise and plan. It can augment writing capabilities and assist people with communication disabilities. It could reduce bias by suggesting more inclusive wording in job adverts. Workers may then have time to take on another task or project, or to retrain; a four-day working week may even become achievable.
As some tasks may increasingly be delegated to AI, there is of course a risk of job losses, which will need to be considered in dialogue with those affected. The focus should, however, stay fixed on how to use the extra time effectively and how people can adapt to new roles within or outside the company.
Because conversational AI tools like ChatGPT occasionally give inaccurate or inappropriate answers, their ability to scale is also a risk. Humans are inherently biased. While AI is teaching us a lot about our biases, those biases can also be captured in an AI’s training data and programming, perpetuating harmful stereotypes and further disadvantaging particular groups at scale. Rigorously test responses before deciding whether to rely on a tool for a particular task. Exercise real care when using AI to inform decisions about people (eg recruitment and performance assessments): test the tool alongside experienced practitioners and conduct impact assessments periodically.
There are also potential security and privacy risks associated with its use. Particularly as individuals can access these tools outside organisational oversight, robust measures will be needed to ensure sensitive information, such as financial or people data, is not shared or exposed.
The temptation to present AI-generated outputs as your own (plagiarism) is greater with conversational AI tools at your fingertips. We need to rethink how we assess people’s learning where AI can shortcut their efforts. Banning access might be acceptable where we want to test a learner’s ability to recall facts. But if learners are in, or entering, jobs where the use of conversational AI will become essential, it would be prudent to update courses and assessments to help them prepare for this eventuality. AI developers and plagiarism-detection services are building tools to detect AI-written text.
The explosion of generic outputs is already happening. This might result in work that fails to capture the nuances of marginalised and discriminated-against groups. Using conversational AI can save time, but it’s up to individuals to ensure the work is fit for purpose and differentiated from others’. ChatGPT can create a job advert for you in seconds, but you’ll still need to check the final version for accuracy, make sure it stands out from the competition, and tailor it to your target audience.
In the pursuit of organisational efficiency, there’s a risk that some jobs will become less attractive. It’s therefore important to engage with affected individuals and their managers on ways to make work more meaningful.
How we work will change as conversational AI starts to be used regularly at work and in our personal lives. Interactions with AI could become increasingly conversational. Instead of trawling through a long management report, you could ask AI for the key highlights and then ask follow-up questions.
This is a massive change, and the continuing evolution may ultimately be led by individuals rather than by corporate decisions and investment only.
The potential uses shouldn’t be underestimated, because it’s possible to integrate existing AI chatbots, or their underlying language models, with other software. A legal firm has launched a chatbot built on OpenAI’s language model that can help with contract analysis, due diligence and regulatory compliance. Similarly, Microsoft 365 users can soon expect ChatGPT-style help when Copilot becomes available for apps including Word, Excel and PowerPoint, with a promise of conforming with an organisation’s existing security, compliance and privacy policies and processes.
Developments in AI chatbots are likely to continue at pace, and it is important for organisations to stay up to date with the advances, at least as an observer if not a user. The technology has the potential to reshape jobs, generate new roles, change the skills we need, or plug the skills gaps many organisations currently have. It can also take on tasks that enable people professionals to be more productive, innovative and strategic in their focus. But alongside these and other opportunities come risks, which will become clearer as we learn how to use these technologies and see how they shape the future of work.
1 Nov, 2023