Will ChatGPT Mean An End to Human Moderation Jobs? – Techopedia

Kaushik is a technical architect and software consultant, having over 20 years of experience in software analysis, development, architecture, design, testing and training industry. He…
Eddie Wrenn is a reporter and news editor who has worked in national and international newsrooms across the UK and Australia, with a focus on…
Content moderation is the challenging task of evaluating and overseeing user-generated content on digital platforms. This content frequently contains harmful and disturbing material, straining the mental well-being of moderators who are exposed to a constant influx of offensive content. Can AI offer a solution, at the cost of taking their jobs?
Content moderation is not easy. The hateful and vile user-generated content that moderators must manage can take a huge mental toll.
As an example, Meta paid $52 million to its content moderators after employees filed a suit seeking compensation for the health issues they suffered following months of moderating disturbing content.
While exposure to disturbing content remains a job hazard for content moderators, efforts are underway to enable AI to moderate content.
In a blog post, ChatGPT developer OpenAI states:
“Content moderation plays a crucial role in sustaining the health of digital platforms.
“A content moderation system using GPT-4 results in much faster iteration on policy changes, reducing the cycle from months to hours.
“GPT-4 can also interpret rules and nuances in long content policy documentation and adapt instantly to policy updates, resulting in more consistent labeling.
“We believe this offers a more positive vision of the future of digital platforms, where AI can help moderate online traffic according to platform-specific policy and relieve the mental burden of many human moderators.
“Anyone with OpenAI API access can implement this approach to create their AI-assisted moderation system.”
While introducing ChatGPT to content moderation will bring benefits such as faster moderation and policy iteration, it may also result in job losses for content moderators.
The mental health toll on content moderators, and the desire of sizeable digital platforms such as Meta, X, and LinkedIn to expedite moderation, set the context for introducing ChatGPT to content moderation.
In an interview with HBR, Sarah T. Roberts, faculty director of the Center for Critical Internet Inquiry and associate professor of gender studies, information studies, and labor studies at the University of California, Los Angeles (UCLA), said that content moderators, as part of their daily job, contend with alarming content, low pay, outdated software, and poor support.
Roberts describes meeting one content moderator who thinks of herself as someone who takes on other people’s sins for money, because she needs it.
Content moderation is highly challenging because of the pace of content generation and the relative slowness in keeping up with the review.
For example, a report stated that Meta failed to identify 52% of the hateful content generated on its platform. Content moderation also struggles to distinguish between malicious and legitimate content.
A government may want pornographic images removed, for instance, but moderation rules can find it difficult to differentiate exploitative pornography from nudity that serves a legitimate purpose.
The book Custodians of the Internet, for example, discusses a famous photograph of a nude girl fleeing her burning village. Should such images be removed as well?
Two advantages drive ChatGPT’s introduction to content moderation: faster and more accurate content moderation and relieving human content moderators from the trauma of having to moderate hateful and bad content.
Let’s look a bit more deeply into both advantages.
Content moderation is challenging because of the continuous need to adapt to new content policies and updates to existing ones.
Human moderators may each interpret content differently, leading to a lack of uniformity in moderation. They are also relatively slow to respond to the dynamic nature of their organizations’ content policies.
OpenAI’s GPT-4 large language model (LLM) can quickly and consistently apply labels to content, so that appropriate actions can follow. LLMs can also respond faster to policy updates and relabel content if needed.
Meanwhile, ChatGPT is unaffected by even the most hateful and atrocious content. In its blog post, OpenAI suggested ChatGPT can relieve the mental burden of human moderators.
Organizations that need content moderation can access the OpenAI API to implement an AI-driven moderation system. It’s simple and inexpensive.
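In rough terms, the approach OpenAI describes amounts to passing the platform’s policy as instructions and asking the model to return a label. The sketch below illustrates the idea; the policy text, label names, and helper functions are illustrative assumptions, not OpenAI’s actual moderation taxonomy, and the API call itself is shown commented out since it requires an API key.

```python
# Illustrative sketch of policy-as-prompt content labeling.
# The policy, labels, and function names are hypothetical.

POLICY = """\
Label the user content with exactly one of these categories:
S0: no violation
S1: hate speech
S2: harassment
S3: violent content
Respond with the label only."""

VALID_LABELS = {"S0", "S1", "S2", "S3"}

def build_messages(content: str) -> list[dict]:
    """Assemble chat messages: the policy as the system prompt,
    the content to moderate as the user turn."""
    return [
        {"role": "system", "content": POLICY},
        {"role": "user", "content": content},
    ]

def parse_label(model_reply: str) -> str:
    """Extract and validate the label from the model's reply."""
    label = model_reply.strip().split()[0].rstrip(":.")
    if label not in VALID_LABELS:
        raise ValueError(f"unexpected label: {label!r}")
    return label

# The actual API call might look like this (requires an OpenAI API key):
# from openai import OpenAI
# client = OpenAI()
# reply = client.chat.completions.create(
#     model="gpt-4", messages=build_messages("some user post")
# )
# print(parse_label(reply.choices[0].message.content))
```

Because the policy lives in the prompt rather than in training data, updating it is a text edit, which is the “months to hours” iteration speedup OpenAI’s blog post claims.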
However, ChatGPT is not the panacea for content moderation problems because it has limitations, at least in its present state.
Chief among them, ChatGPT’s interpretation of social media content depends on its training data.
There have already been many controversial cases of ChatGPT providing biased or otherwise unacceptable responses.
If the training data provided to ChatGPT contains bias, it may treat harmful social media content as harmless and vice versa.
As a result, malicious content may slip through as a false negative, while legitimate content gets flagged as a false positive. It’s a complex problem that will take time to resolve.
AI is seen as a replacement for human beings in various roles, and content moderation could be an addition to the list.
For example, X, formerly Twitter, employs 15,000 content moderators. All the major social media platforms employ huge numbers of content moderators, who now face an uncertain future.
OpenAI’s claims notwithstanding, AI in content moderation is not novel. In fact, AI has been used to moderate content for many years by platforms like Meta, YouTube, and TikTok.
But every platform admits that perfect content moderation is impossible, even with AI at scale.
Practically, both the human moderators and AI make mistakes.
Given how fast user-generated content is growing, moderating it will remain a huge and complicated challenge.
GPT-4 also continues to generate false information, which complicates the situation further.
Content moderation is not a simple task that AI can just tick off like a checkbox and claim as done.
In this light, OpenAI’s claims sound simplistic and lacking in detail: the introduction of AI isn’t going to work magic.
Human moderators can breathe easier, because their much-touted replacement is still far from ready. But it will undoubtedly play a growing role.
