Meta's Clegg Working to Prevent 'Inappropriate' Replies from New AI …

Oct 14, 2023
(Bloomberg) — Among Meta Platforms Inc.’s suite of new artificial intelligence products is a chatbot trained to talk like a detective but look like Paris Hilton, and an armor-wearing character called the Dungeon Master who’s based on Snoop Dogg.
They’re at the amusing end of a serious push by Meta to incorporate generative AI tools that can respond to users’ messages into apps used by billions of people every month. It falls on Nick Clegg, Meta’s president of global affairs, to make sure his employer does so responsibly, after years of criticism that the company does not adequately safeguard its products.
Related: Tech Titans Including Musk, Zuckerberg Gather on Capitol Hill To Talk AI
“We’ve had thousands of hours of red teaming — stress testing — to make them as safe as we possibly can,” Clegg said on the latest episode of the Bloomberg Originals series AI IRL. “We can’t guarantee that they won’t spew something inappropriate or something unwelcome at any point, but we can keep retraining it.”
Clegg said he’s involved “right from the outset” when decisions are being made by senior leadership about the direction to take the company’s AI tools “in order to anticipate some of the legal, regulatory, moral, ethical, cultural issues which these new products might bump up against.”
Meta has used AI under the hood of its consumer-facing products for years, particularly in determining which posts and ads to show users. It’s also relied heavily on AI to help detect and combat the spread of misinformation. But now, AI isn’t just a solution to the company’s problems; it’s also a big potential headache. Meta must confront a new set of concerns posed by AI tools that may go wildly off script with a user or spread misinformation. Some users have already reportedly found that Meta’s AI-generated stickers, announced alongside the celebrity chatbots last month, can create concerning imagery.
Related: From China to Brazil, Here’s How AI Is Regulated Around the World
Clegg, a former member of the European Parliament and one-time UK deputy prime minister, was hired by Meta in 2018 to run its global lobbying efforts at a time when the company was grappling with various crises around privacy issues and election meddling. Now Meta and others are developing AI systems under intense scrutiny from governments, regulators and ethicists.
So far, the US has sought only voluntary agreements from tech companies to safeguard their AI products. Clegg joined President Joe Biden at the White House in July to announce some of those commitments. Europe, meanwhile, is moving faster than the US to regulate AI. The European Parliament approved a draft law in June that would set boundaries on how AI technology can be used, with massive penalties for violations.
Clegg said Meta’s use of transparent, open-source licenses for some of its datasets and large language models — the technology that underpins AI chatbots — would be key to safeguarding ethical use.
“The more open we can be about how these new technologies operate, the more open we can be about what the risks and the benefits are, what the controls are for users and also how we need to work with others,” he said.