When Bots Set Prices: CMA Highlights Real World Risks of Algorithmic Pricing

Background
In a recent blog post, the UK Competition and Markets Authority (CMA) highlighted the increased antitrust compliance risk attached to the use of AI‑driven algorithmic pricing tools, which are becoming far more powerful and widespread. While these tools can deliver significant operational efficiencies and commercial benefits, they may increase the risk of collusion between competitors. The CMA’s enforcement crosshairs are therefore actively focused on the ways AI may lead to coordinated pricing outcomes in breach of UK competition law.
The European Commission (Commission) has also singled out algorithmic pricing as an enforcement priority. The competition law risks attached to algorithmic pricing are specifically addressed in its Horizontal Cooperation Guidelines, which were published in 2023. Recent statements by Commission officials also point to a ramping up of investigations involving algorithmic pricing.
These developments should be viewed in conjunction with the regulatory scrutiny of potentially unfair or misleading technology-driven pricing practices such as ‘drip’ and ‘dynamic’ pricing. This comes against a backdrop of UK and EU legislation designed to adapt consumer protection frameworks to the digital age, such as the UK Digital Markets, Competition and Consumers Act, the EU Digital Services Act and the proposed Digital Fairness Act. Notably, the CMA closed its investigation of Ticketmaster in September 2025 over misleading sales practices during the Oasis Live ’25 tour after it secured binding commitments to address its concerns. In the EU, Ticketmaster was the subject of a complaint brought by consumer rights organisations in July 2025.
Rise of Algorithmic Pricing: Why is the CMA Paying (More) Attention?
The blog post underlines that algorithmic pricing is not a new phenomenon and has been deployed for decades by businesses in several sectors such as air travel, hospitality and retail. This is reflected in earlier research published by the CMA: a 2018 study of pricing algorithms and a 2021 in-depth study of the potential adverse impact of algorithms on competition and consumers.
What is new is the marked increase in the sophistication and ubiquity of pricing algorithms. Modern algorithms can process granular, large-scale datasets in real-time, and are increasingly powered by cutting-edge large language models (LLMs). Businesses now have greater access than ever to powerful, low-cost predictive technology which can inform or automate commercial decisions including pricing.
Separately, the CMA published a paper on agentic AI which identifies the competition risk associated with autonomous agents which are utilised by companies to optimise pricing or commercial strategies. The paper notes that interactions between agents used by competing businesses may reduce competitive pressure (agentic collusion).
Algorithmic Collusion: Heightened Compliance Risks for Businesses
The CMA outlines several ways the incorporation of AI into pricing algorithms can lead to coordinated, anti-competitive pricing outcomes (algorithmic collusion):
- Implementation of ‘classic’ collusion. Competitors may have an explicit agreement to coordinate their commercial conduct and use algorithms to implement, monitor and enforce their agreement.
- Hub-and-spoke collusion. Competitors may use the same algorithm or data hub to facilitate the indirect exchange of competitively sensitive information. The use of the hub could also extend to the delegation of pricing decisions or the generation of pricing recommendations (based on co-mingled data).
- ‘Predictable agent’ behaviour. Algorithms that react predictably to market events increase the risk of tacit coordination, which potentially softens competition. This includes algorithms which track and respond to competitor pricing. Tacit coordination is not caught by the UK prohibition on anti-competitive agreements, since that prohibition requires businesses to enter into an agreement or reach a common understanding (a meeting of (human) minds). However, the CMA can rely on its powerful market investigation tool to remedy competition issues rooted in tacit coordination: no evidence of wrongdoing by businesses is required. Its extensive remedial powers include the ability to order divestments.
- Autonomous AI coordination. Advanced AI systems may learn to reach coordinated outcomes, e.g., if their objective is to maximise profits.
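The ‘predictable agent’ dynamic above can be illustrated with a deliberately simplified sketch (ours, not the CMA’s, and not any real repricing product): two sellers each run a mechanical rule that matches the rival’s price but refuses to go below a self-imposed floor. Even with no communication between the sellers, the predictable rules let prices settle well above the competitive level.

```python
# Toy simulation of 'predictable agent' pricing (illustrative assumption only).
# Two sellers each run a simple rule: undercut the rival slightly, but never
# price below a self-imposed floor well above cost. Neither seller communicates
# with the other, yet prices converge to the floor rather than to cost.

COST = 10.0           # marginal cost; vigorous competition would push prices near this
START = (30.0, 28.0)  # arbitrary initial prices

def reprice(my_price, rival_price):
    # Predictable rule: undercut the rival by 1%, but never go below a
    # floor of 1.5x cost -- the "softening" choice that sustains high prices.
    floor = COST * 1.5
    return max(floor, rival_price * 0.99)

a, b = START
for _ in range(50):
    a = reprice(a, b)
    b = reprice(b, a)

print(a, b)  # both settle at the self-imposed floor (15.0), 50% above cost
```

The point of the sketch is that the coordinated-looking outcome emerges from each rule being individually predictable, which is precisely why such conduct may fall outside the agreements prohibition yet still attract scrutiny via market investigation tools.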
The CMA’s concerns are not unfounded: a growing body of academic research and empirical studies lends support to the increased risk of algorithmic collusion. Further, recent enforcement action by the CMA as well as European and U.S. regulators underscores the real-world compliance risk posed by algorithms (see below).
Recent Enforcement Action
The deployment of algorithms and automated software has been a feature of conduct attracting enforcement scrutiny in a wide range of sectors. Across these cases, a common thread emerges: the use of algorithms is not a defence against antitrust violations. Businesses have consistently been held liable for the actions of their automated systems, even where they claimed to be unaware of how the algorithms worked.
UK
Online retail of posters and frames. In August 2016, the CMA found that two online sellers, Trod and GB eye (trading as GB Posters), infringed competition law by agreeing not to undercut each other’s prices for posters and frames sold on Amazon’s UK website. The agreement was implemented using automated repricing software.
Hotels. On 26 February 2026, the CMA launched an investigation of leading hotel chains suspected of sharing competitively sensitive information via a hotel data services provider.
Italy
Air travel. The Italian competition authority, the Autorità Garante della Concorrenza e del Mercato (AGCM), closed its market investigation into airline pricing and the role of pricing algorithms on routes to and from Sicily and Sardinia in July 2025. The investigation did not lead to the opening of enforcement proceedings, but the AGCM indicated that it is exploring initiatives with the Commission to improve the price transparency and comparability of airline tickets on those routes.
EU
Online travel. In January 2016, the European Court of Justice ruled that Lithuanian travel agents could be held liable for anti-competitive collusion if they were aware of an algorithm used on a shared online travel booking platform (Eturas). The algorithm facilitated a ‘hub-and-spoke’ arrangement by applying a 3% cap to customer discounts. The judgment confirmed that an explicit agreement was not required to establish an infringement.
Consumer electronics. Asus, Denon & Marantz, Philips and Pioneer were the subject of separate infringement decisions in July 2018. All four manufacturers were found to have engaged in resale price maintenance (RPM) by imposing fixed or minimum prices on their online retailers. The Commission investigation revealed the use of price monitoring software to enforce the RPM arrangements.
Ongoing cartel investigations. Linsey McCallum, the then Deputy Director General of the Commission’s competition arm (DG COMP), indicated in July 2025 that there are multiple ongoing cartel investigations which involve algorithmic pricing.
Poland
Banking / Pharmaceuticals. In September 2025, the president of the Polish competition authority, Urząd Ochrony Konkurencji i Konsumentów (UOKiK), confirmed there were ongoing investigations of collusion via algorithmic pricing tools in the banking and pharmaceutical sectors. The banks under investigation are suspected of using algorithms fed by data from Poland’s largest credit risk database and their own non-public internal information to coordinate the pricing of consumer loans and mortgages. In the pharmaceuticals sector, UOKiK suspects three major wholesalers (with a combined market share of 80 percent) may be using IT systems to exchange commercially sensitive information on drug prices, margins, and volumes sold in affiliated pharmacies.
US
Meat processing. In September 2023, the DOJ filed a civil antitrust lawsuit against Agri Stats, an agricultural data company; a trial date has been set for 4 May 2026. The DOJ’s allegations centre on Agri Stats organising and managing the exchange of competitively sensitive information between broiler chicken, pork and turkey processors. The information exchange was facilitated by its proprietary information-gathering software and data-sharing platforms.
In parallel, on 10 March 2026, a federal judge approved Agri Stats’ agreement to settle two long-running federal antitrust class action lawsuits which alleged it conspired with U.S. poultry processors to suppress the wages of plant workers. A separate agreement to settle a class action lawsuit with similar wage suppression allegations involving red meat processors was filed in January 2026 and is currently awaiting federal court approval. While the settlements involve no monetary payment, Agri Stats must modify its reporting practices by redacting certain plant-level data in its reports.
Property. The DOJ filed a proposed settlement in November 2025 to resolve its lawsuit against RealPage, a leading provider of property management software. RealPage’s pricing algorithms, which underpinned its revenue management software, are alleged to have facilitated algorithmic collusion among competing landlords to artificially inflate rents and maintain high occupancy. The pricing algorithms were trained on competitively sensitive information gathered from landlords; and the software included features designed to limit rental price decreases and align pricing among competing landlords. The terms of the settlement, which is pending court approval, require RealPage to cease the problematic conduct and cooperate in DOJ lawsuits against property managers which used its software.
Since 2022, RealPage and various property managers have also faced more than 30 private class action lawsuits as well as actions brought by multiple State Attorneys General. The enforcement scrutiny has also led to the adoption of bans or restrictions on the use of algorithmic rent-setting tools. Notably, New York and California enacted legislation in late 2025 banning the use of rent-setting software that relies on non-public competitor data; and the Preventing the Algorithmic Facilitation of Rental Housing Cartels Act was introduced in early 2024.
Practical Takeaways for Businesses: How to Mitigate Compliance Risk
Although the use of algorithms or AI is not inherently problematic, businesses need to be mindful of the potential compliance risks. The message from enforcers is clear: businesses are expected to put in place appropriate safeguards to prevent breaches of competition rules. This requires businesses which use algorithmic pricing tools to undertake risk assessments and tailor compliance measures to their own activities, market context and organisational needs. Ignorance is no defence.
Proactive steps to manage risk may include the following:
- Due diligence. New algorithmic pricing and AI-driven tools should be vetted as part of procurement processes to ensure the business understands how the software/models work, the sources of the underlying training data, and whether outputs draw on competitors’ non-public proprietary data or promote aligned or elevated pricing. Alongside the intended task or use case, assess the broader range of tasks the AI tool could perform or may evolve to perform (e.g., with autonomous or agentic AI). Consider seeking legal advice before adopting algorithmic pricing tools which are also used by businesses that could be deemed direct competitors.
- Governance and oversight. Maintain clear policies which govern the use of algorithmic pricing tools. This includes keeping records of how the system is configured, the data inputs, and any manual overrides. Consider implementing a review process for major algorithm changes. Caution should be exercised before providing non-public information which may be competitively sensitive. Designing rigorous internal processes to assess the sensitivity of non-public information, and the safeguards which could be put in place (e.g., data aggregation, time lags), may prevent the unnecessary and potentially harmful disclosure of competitively sensitive information.
- Audits. Regularly audit and stress-test AI tools. This includes the data inputs, use cases, configurations, manual overrides as well as employees with access to the AI tools, underlying algorithms and review processes. Consider linguistic stress testing if the AI tools include LLMs and, where possible, build in explicit constraints.
- Compliance training. Educate employees on the potential legal risks associated with pricing algorithms and exchanging competitively sensitive information, including through intermediaries.

