China's version of ChatGPT has finally been made public. But will censorship limit its power? – ABC News

From composing song lyrics about pandas to generating the "world's cutest cat", China's answer to ChatGPT has just been launched. 
Ernie Bot, a generative artificial intelligence (AI) chatbot, is now fully accessible to the public, following Chinese government approval late last week. 
Unlike other countries, China requires companies to submit security assessments and receive clearance before releasing mass-market AI products.
Authorities have recently accelerated efforts to support companies developing AI as the technology increasingly becomes a focus of competition with the United States.
Professor Haiqing Yu, an expert in China's digital media at RMIT University, said it's part of an AI "great leap forward".
But how powerful will Ernie be in a realm of heavily censored internet use, and how does this fit into China's vision of becoming the world leader in AI?
Ernie, an acronym for Enhanced Representation through Knowledge Integration, is an AI chat product from Chinese tech giant Baidu, China's leading online search provider.
But it's not the only one — four AI start-up companies announced similar public launches last week, while TikTok owner ByteDance and Tencent, which owns WeChat, have also received approvals from the government for AI development, Chinese media reported.
Fan Yang, a researcher at the University of Melbourne's Centre for Automated Decision-Making and Society, said China has been putting more effort and resources into home-grown AI, which has led big tech platforms such as Baidu, Alibaba and Tencent to develop their own models.
"That makes this wave of AI development different to the past waves … which were powered by American companies, including Google and Microsoft," she said.
The pitch from Baidu and others is that because Ernie is built for Chinese people within Chinese culture, it will provide more accurate or insightful responses. But Dr Yang said there is still some distance between the capabilities of ChatGPT and Ernie.
"But also, the thing about AI technology is that the more people who use it, the more feedback they receive, [and] the better they can get."
Professor Yu said now that Chinese chatbots are open to public input, they will be "continuously optimising", and she added China's huge population meant there was an enormous pool of data that could be accessed.
But another issue for Ernie, the experts highlight, is China's great firewall. 
The Economist reported that Ernie has some "controversial views on science", reckoning that COVID-19 came from American vape users and was spread to Wuhan by American lobsters.
But it was "rather quiet" on questions of Chinese politics and often demurred on sensitive questions.
Dr Yang said Chinese and US-built AI platforms would also deliver very different narratives around the Russia-Ukraine war.
She pointed out this is far from China's first foray into the world of AI chatbots.
Xiaoice, a Microsoft spin-off, was developed in 2014 and largely used for romantic companionship.
Both Xiaoice and another chatbot, BabyQ, appeared to be taken offline and "re-educated" in 2017 after giving politically sensitive responses to questions about the Chinese Communist Party (CCP) or Taiwan.
Later, when asked about similar topics, the bots asked to change the subject or deflected by saying they were tired.
Dr Yang pointed out one of the key aspects of China's interim regulations on AI is content moderation — meaning the companies have to take responsibility as content producers to filter out "illegal content".
She said illegal content is not defined and can be ambiguous, "but most of the time illegal content is the kind of content that is not aligned with the CCP's national interest".
Further, all the content Ernie and other Chinese AI chatbots draw on has already been subjected to China's strict censorship regime.
"The content has already been censored, and then they would produce content that can be further censored. So that is where layers of censorship can occur during this process."
Professor Yu said some colleagues in Beijing had early access to Ernie's prototype and were curious to find out if it had already been subjected to state censorship.
"So they deliberately put in those so-called sensitive terms and words. And then – of course, it's all expected — the chatbot tells you, 'This topic is forbidden', or they try to talk about something else," she said.
"Chinese internet is heavily censored, the content is already cleaned … so the end product itself is understandably sanitised.
"It is typical. They are living in China, people are used to this kind of censorship regime. And it's not surprising to them that these chatbots have also been censored."
China has long aimed to become an AI world leader by 2030.
David Yang, a Harvard economics professor, has said Beijing has an edge due to the vast troves of data gathered by the state.
"Autocratic governments would like to be able to predict the whereabouts, thoughts, and behaviours of citizens," he said in the Harvard Gazette earlier this year.
"And AI is fundamentally a technology for prediction."
Professor Yu referred to the Chinese idiom of "walking on two legs" to describe Beijing's approach to AI – both encouraging AI innovation and development in a rush to roll out Chinese models to compete in the "Tech Cold War" with the US, while at the same time tightening regulations – especially around content related to domestic politics.
She pointed out AI products targeting businesses do not need the same government approvals as those intended for content creation and mass public use.
Dr Fan Yang said the real profits of AI for companies such as Baidu are not in products for the public, but in collaboration with the nation state — such as AI on surveillance devices, AI-powered voice recognition systems and implementations of AI in military and defence.
The push for a ChatGPT-style platform, she said, was driven by a kind of "techno nationalism".
Internet users in China were fascinated by ChatGPT when it first launched and, because the OpenAI product is blocked in China, have found creative ways to get around Beijing's censorship firewall to access it, including via the grey market, Dr Yang said.
Earlier this year, hundreds signed an open letter calling for a pause in AI research, fearing an "out of control race" to develop powerful digital minds their creators could not control – a move supported by some Chinese AI experts.
Professor Yu said China had taken both an ambitious and cautious approach to AI, in what she described as a "messy contradiction" that will be all too familiar to the Chinese public.
"They want to balance the conflicting demands of regulation and deregulation," she said.
"China is one of the first countries in the world to regulate AI, and regulate algorithms for generative AI in particular.
"China wants to demonstrate to the world it is a responsible superpower – it's not just about making money, or just about controlling people, it wants to look good on the global stage as a responsible AI superpower."