Consumers, Competition and Artificial Intelligence – GOV.UK

A keynote speech delivered by Marcus Bokkerink, Chair of the Competition and Markets Authority, to the AI Fringe Hub.
Original script, may differ from delivered version
Good afternoon, thank you very much for inviting me to share some perspectives about consumers, competition and AI.
I’m Marcus Bokkerink, Chair of the Competition and Markets Authority. We are the UK’s principal competition and consumer protection authority, an independent body responsible to Parliament. We exist to help people, businesses, and the UK economy by promoting competition and tackling unfair behaviour. That means it’s our job to make sure that people can be confident they’re getting great choices and fair deals without being misled; and to make sure that competitive, fair-dealing businesses are free to innovate and thrive.
I think it’s helpful to first remind ourselves why those 2 sides of the same coin – effective competition and protecting consumers from commercial harm – are so important. I will then talk about what fostering competition and protecting consumers means in practice in a digital economy. And last, I will talk about what we at the CMA believe fostering competition and protecting consumers requires in a world where AI starts to underpin more and more of that economy.
And I say economy, because our remit is focused squarely on enabling competition and protecting consumers from commercial harm; online safety, which includes illegal content, cyber bullying and harmful speech, is overseen by Ofcom, with whom we work very closely, not least through the Digital Regulation Cooperation Forum (DRCF). We were the first country in the world to create a joined-up group like the DRCF, which also includes the Information Commissioner’s Office (ICO) and the Financial Conduct Authority (FCA), and it’s one of the strengths of the way we approach digital regulation here. I hope you’ll all be here tomorrow for our DRCF panel event.
Over 30 years working with diverse businesses of all shapes and sizes, I’ve seen repeatedly how the 2 elements of vibrant competition and effective protection of consumers’ interests go hand-in-hand. In open markets where competition thrives and companies play by the rules, the players in a market – be it supermarkets, food brands, vets, social media platforms, search engines, or companies who help us buy and sell goods online, many of which are CMA areas of work by the way – have the incentive to do exactly that: to compete hard and innovate hard to bring us the best choices, the best quality, the best service, the best prices. And above all they compete for our trust and loyalty.
But, where a dominant few corner the market and engage in unfair, harmful practices, the opposite happens – innovators and fair-dealing challenger businesses are stopped in their tracks, and we end up down a road of limited choice, deteriorating quality and service, and ever rising prices.  
That’s why, to deliver good outcomes for the people and the innovating businesses we serve, the CMA is determined to protect both sides of this coin. Tackling misleading practices and unfair behaviour, the ‘symptom’ if you like, through enforcing consumer protection and competition law. And, fostering the conditions for healthy, dynamic competitive markets to be able to flourish – tackling the root cause.
AI is part of a broader force that has been shaping our economy and society over the past 2 decades – the ever-accelerating technology revolution and the multitude of new, often disruptive business models this brings, which are transforming how we live, work, and do business. There’s no doubt the digital revolution brings enormous benefits: so much of the world’s information and creativity, now instantly available. New, more effective and efficient ways to learn, research, create, communicate, innovate, buy, sell, do business with each other.  All potentially turbocharged by AI.
To ensure these opportunities really do come to pass, really do benefit consumers, innovators and fair-dealing businesses, and ensure these benefits are not eroded over time, the CMA has been consciously and explicitly adapting how we protect competition and protect consumers in this new digital reality, like our fellow authorities around the world.
Let me start with protecting consumers, and indeed businesses as customers, from unfair behaviour. Our work here includes, for example, cracking down on fake reviews, which influence hundreds of millions of pounds worth of purchases every year; and so-called ‘dark patterns’, which are intentionally deceptive or manipulative design choices that are forced on us when we search, shop or interact online. These design features of online architecture can be very subtle, steering us toward pre-defined outcomes. Features like pressure selling and drip pricing. Exaggerating the benefits of sharing personal data. Intrusive default settings and misleading opt-outs. Being trapped by ever-repeating subscriptions or hidden advertising. These are all design choices which exploit our psychology and vulnerabilities as online consumers, nudging us toward decisions that aren’t fully our own and aren’t necessarily in our best interests.
And the costs are significant (footnote 1). Misleading or deceptive online practices are causing billions in net detriment across the economy; billions spent on things we didn’t want, didn’t need, or regretted buying (footnote 2); with millions of hours wasted collectively trying to fix these problems (footnote 3). And bad as these numbers are, they represent not just frustration, but heartbreaking stories of financial loss and personal distress.
That’s why one of the priorities we set out in our new CMA strategy earlier this year is a continuous programme of enforcement action and education to protect consumers from fake reviews, pressure selling, and misleading online choice architecture, across all online retail sectors. This includes, for example, 2 open enforcement investigations into pressure selling, multiple published guides to businesses about what is and is not acceptable conduct, and of course, our Online Rip-Off Tip-Off campaign to raise consumer awareness and encourage people to tell the CMA about potentially misleading online sales practices. Given the potential for AI to turbocharge these very same harms, it is all the more important to tackle them early on by both enforcing, and strengthening, our consumer protection laws, which I will return to later.
Let me now turn to actions we are taking to tackle the root cause of some of these harms by ensuring dynamic competition in digital markets. Because it’s much easier for perpetrators to engage in harmful activity where there are few if any alternatives for a consumer or business customer to choose from, when, for example, they are looking to search or design or buy or sell or socialise online – all markets where AI will play a major role in the future. In situations of limited choice, the often very large, very powerful companies responsible for the harmful design choices in the first place become more dominant and more entrenched, and their behaviour becomes the standard. New entrants and challengers who might offer more transparent, open, or competitive alternatives are squeezed out or never get off the ground. And without competition from challengers, there’s little motivation for players with entrenched market power to do the right thing for consumers and clean all of this up.
That’s why we have been and will continue to be vigilant in protecting competition in digital markets. Recent and ongoing examples of the CMA’s actions include a series of competition law probes into online platforms and digital advertising, which highlighted the limits on competition in search, social media and digital advertising. We, the European Commission and the US Department of Justice, all have cases underway to investigate these issues (footnote 4). We’re also investigating the terms and conditions that govern developers’ access to the Apple App Store (footnote 5), and whether Amazon uses data collected from the sellers on its marketplace to unfairly benefit its retail arm. And we’re also working with Google to ensure changes to its privacy policy don’t unfairly advantage its advertising business.
In addition, because of the rapid speed at which these digital and AI-powered markets develop, we take a forward-looking approach to preventing competition concerns from developing in the first place, through effective merger control. Our recent decision on Microsoft’s acquisition of Activision Blizzard, for example, resulted in Microsoft divesting the online cloud streaming rights to Activision’s games, for the next 15 years, to a rival, to ensure this fast-growing cloud-gaming market remains open to competition and consumer choice (footnote 6).
All our work is independent, deeply researched and objective. All our actions are similarly independent, proportionate and evidence-based. And it’s all geared to getting the best outcomes for people, businesses and the UK economy.
Now, to make us even more effective in doing our job in the future, Parliament has been considering new legislation, the Digital Markets, Competition and Consumers Bill. In a nutshell, this will strengthen the CMA’s ability to champion consumers and protect them from commercial harms across markets, in particular through direct enforcement and higher penalties where necessary. The Bill also addresses challenges that have dogged traditional competition law approaches to keeping digital markets competitive and open to future innovators, in particular where one or few powerful firms have been able to establish substantial and entrenched market power. And it does this while avoiding overly burdensome, rigid regulations which could impede innovation and growth. So the regime is designed to be highly tailored, flexible, proportionate, and characterised by ongoing, constructive dialogue with those companies holding substantial and entrenched market power, who will be subject to codes of conduct that can be enforced if needed. This means it is well suited to help us guide markets to positive outcomes from transformational technologies such as AI, while mitigating the potential harms.
Both elements of the Bill would really be quite a step-change and very good news for both consumer protection and competition – including in a world where AI underpins more and more of what we do, as people, as consumers, as businesses.
So, what about the future potential developments of AI in this context? Artificial intelligence is the latest and arguably the greatest wave of technological revolution we have yet encountered. As with previous technology breakthroughs, it holds promise to deliver transformational benefits for the way we learn, create, interact, work and do business. And just as it can bring many positives, it’s entirely possible that AI could supercharge the harms to consumers and to competition in digital markets that I’ve just laid out for you.
Let me give you 4 examples:
First, search algorithms that rank or give us answers can be tremendously useful. But as we’ve seen in the past – in hotel booking sites for example – those can be distorted by factors that don’t help us get the best products, such as the level of commission the site is getting. A personalised AI ranking could get even cleverer at seeming to find us the best deal, or at looking personalised to us, while actually being driven by which product is most profitable, not most useful to us.
Second, reviews. One of the benefits of the internet is that we can buy from and sell things to people or companies on the other side of the country or the world with ease, relying on reviews to make choices about who to trust.
Unfortunately, fake reviews are endemic online; to the extent that there are many companies that openly sell fake reviews as a service. That means individual consumers might be misled into buying the wrong washing machine or holiday, for example. But honest businesses also suffer, because fake reviews undermine the magic of competitive markets, where the genuinely best products win out.
Now, at the CMA we are already using AI tools to detect fake reviews; and wouldn’t it be great if companies used such tools to take down fake reviews before they have a chance to cause all these problems? Well, without guardrails, it may be just as likely that AI will be used to generate more fake reviews, more quickly, which are more convincing than ever.
Third, advertising is an important part of doing business both online and off. And targeted advertising has been a key part of the business of the internet.
When it works well, businesses with compelling products find people who really want to buy them. And AI could help with that targeting, perhaps even in ways that actively protect privacy online.
But for example, while we have been able to use AI to detect hidden advertising in endorsements on social media, AI could also be used by others for more pernicious purposes; perhaps helping spot when our judgement is less good – when we’re tired or vulnerable and more inclined to make an impulse purchase that might not be in our best interests. Again, it can supercharge the consumer harms I spoke about before.
Fourth, inaccurate information. AI foundation models can get things wrong. And no one has yet worked out how to reduce the error rate to zero. But the chatbots they power sound so convincing.
Perhaps we will learn to know what to expect from an AI chatbot, and heed the warnings that tell us we shouldn’t expect them to be 100% accurate. But without competitive pressure, what’s the incentive for companies to drive down that error rate, and to be clear to us about the limitations of these models?
Let me finish by talking about how we are further adapting and strengthening our approach to the markets for AI models and their uses, and what more is to come.
Earlier this year, the government issued a white paper articulating the UK’s proposed approach to regulating AI, including a set of cross-sectoral principles for regulators to implement through existing enforcement regimes. At the CMA we’ve considered how these apply in our remit of protecting competition and consumers.
For the immediate term, we’ve made it clear that we’ll continue to be vigilant for digital, including AI-related breaches of existing laws, to protect consumers and business customers against unfair commercial practices and terms, and to foster competition. Where we find breaches, we won’t hesitate to act.
But that alone will not be enough. One of the clearest lessons from past experience in digital markets is that the most effective path to open markets, healthy competition and protecting consumers, is to prevent entrenched market power and patterns of consumer detriment from developing in the first place. Therefore, given the speed of developments in generative AI foundation models, and in their use by businesses and consumers, the CMA acted promptly with an initial review to understand both the opportunities and risks of AI for competition and consumer protection; and to propose a set of principles that can guide the development of these markets in a positive way going forward. This was published last month.
In brief, we explored a broad spectrum of possible scenarios for the development of AI foundation model markets and their uses by businesses downstream – some with positive outcomes from a competition and consumer protection standpoint, others with outcomes that would cause serious concern.
We can’t predict the future; but we can anticipate what conditions will push us to the positive rather than the negative outcomes. This has led to the crux of the recommendations so far: defining the conditions that need to be in place for this pioneering market to deliver the positive outcomes for competition and consumer protection, that we all wish to see for people, businesses and the economy – and translating these into 7 proposed principles that can guide the market towards those positive outcomes.
Implementing these principles has to be a collective global effort, so we’re in the process of engaging broadly across the industry and our international peers to refine them, and to build a consensus on how to put them into practice.
We would like to see a positive scenario emerge in which effective, sustained competition between large AI developers drives them to build high quality models and to strive for improvements and breakthroughs; a thriving ecosystem of multiple, independent competitors, with different business models, able to enter and compete in the market on an ongoing basis. So, we have 2 proposed principles here:
Firstly: access, on fair terms and without undue restrictions, to the key inputs needed to build AI models – computing power and data, for example – and fair access to markets for innovators and new entrants snapping at the heels of incumbents, to help prevent disproportionate positions of market power from becoming entrenched.
Secondly: Diversity, meaning sustained variety of business models, including, possibly both open and closed source models.
Our third, fourth and fifth principles are all designed to keep competitive tension up so as to deliver a positive outcome in which business customers and consumers have a range of options for selecting, using, and switching between AI models and the services they support, without friction or lock-in. So our principles are: sufficient ‘choice’ for customers using foundation models, be that developing models in-house, or accessing them via partnerships, APIs, or plug-ins. Genuine ‘flexibility’ to switch, or use multiple models or environments according to need, which might require interoperability and ease of porting data. And ‘fair dealing’, meaning that customers should have confidence that the best products and services will win out in a market free from anti-competitive behaviour including self-preferencing, tying, or bundling to entrench or leverage market power.
Our final 2 principles – ‘transparency’ and ‘accountability’ – bring us back to direct consumer protection across the piece. We know from extensive research that trust in AI will be key to us all realising its full benefits. We also know that vigorous competition is likely to be a powerful counterweight to laxity in responsible development and deployment, because it naturally creates a race to the top for consumer trust. And we know that consumer confidence depends on people receiving reliable, accurate information and fair treatment from AI model-based services, as well as having a clear idea of who’s accountable and responsible if something does go wrong.
So how do we create a positive outcome where consumer awareness and understanding is high and developers have the incentive to compete not just on technical innovation, but on responsible development? We believe it is by requiring ‘transparency’, for consumers and businesses, around AI model uses, risks, and limitations – meaning we can all make informed choices. Also, transparency between AI model developers and the businesses deploying their models, so that they can properly understand and manage the risks. And, finally, through ‘accountability’ – the accountability of developers and deployers for the outputs they provide to customers. This includes establishing appropriate safeguards, as well as making sure responsibility is clearly allocated and redress mechanisms are in place for when things go wrong.
As I hope you’ve gathered from everything I’ve said today, I’m personally passionate about both the tremendous opportunities of digital and AI, and about tackling the potential harms before they become entrenched. This is a watershed moment for all of us, as these technologies forge ahead. Early, pro-consumer and pro-competition action and regulation can help us realise the tremendous benefits while mitigating the harms.
It’s a privilege to be part of an organisation of deeply committed people who are fully invested in helping to deliver that future.
(1) Consumer protection study 2022
(2) Pushed to Purchase: Counting the cost of deceptive digital design in e-commerce – Citizens Advice (July 2023)
(3) Pushed to Purchase: Counting the cost of deceptive digital design in e-commerce – Citizens Advice (July 2023)
(4) Investigation into Meta’s (formerly Facebook) use of data
(5) Investigation into Apple AppStore
(6) Microsoft / Activision Blizzard (ex-cloud streaming rights) merger inquiry