AI and its effects on competition

An increasing number of competition and regulatory authorities are speaking out about the use of AI in the market, including the authorities of Germany, the UK, and France. We can therefore expect companies that use AI to face much closer scrutiny in the coming years.

But what competitive harms can AI trigger? Our team of experts on AI and digital markets has distilled the broad field of anti-competitive effects of AI into a simple overview of three possible market environments.

Artificial intelligence is disrupting traditional competition policy

Over the past decade, a lively academic debate on the effects of artificial intelligence (AI) on competition has taken place, driven by companies’ widespread use of computer algorithms. For example, more than 50% of retailers in the US now use pricing algorithms, and 67% of EU firms that track their competitors on a daily basis use algorithms to do so.[1] But the more economic decisions algorithms make, the more likely they are to impair competition, for example by coordinating their actions or illegally hindering competitors.

Competition and regulatory authorities have now begun to react to this new challenge. A recent market study on algorithms by the UK Competition & Markets Authority (CMA), a position paper on the supervision of algorithms by the Dutch competition authority, and a joint study on algorithms by the German Bundeskartellamt and the French Autorité de la concurrence (among many others) all indicate that authorities are very aware of the competitive threats posed by AI. Particularly high on their list of concerns are price collusion, self-preferencing by vertically integrated providers, and price discrimination or predatory pricing. With regard to possible collusion by algorithms, EU Competition Commissioner Margrethe Vestager said as early as 2017:

“I think we need to make it very clear that companies can’t escape responsibility for collusion by hiding behind a computer algorithm.”

The CMA has since released a summary of responses to its market study, which indicates that most respondents agree on the seriousness of these competitive harms. We can therefore expect increasing antitrust intervention and regulatory action in the coming years, as clarity about the intended use and scale of AI systems, together with a deeper technical understanding of their actions, will increase authorities’ willingness to investigate businesses deploying AI systems. The first competition cases in which algorithms played an important role have already occurred in recent years.[2]

The increased use of AI had already sparked a broad debate on the potential for misconduct, abuse, or discrimination by algorithms, giving rise to new development processes and methods such as Responsible AI and Explainable AI. But most of the attention in practice has so far been on the behaviour of individual algorithms – for example, ensuring that an algorithm complies with regulation and reporting requirements, banning sensitive inputs, or removing biases from training data. Competition policy, however, is concerned with the implications of algorithms’ market interactions. Machine learning and compliance practitioners do not yet sufficiently recognise the potential for algorithmic misconduct via market interactions, leaving companies vulnerable to investigations by competition authorities.

The challenges raised by interacting AI

The competitive harm caused by algorithms can take a variety of forms, depending on the market environment in which they operate. The challenges of interacting AI correspond to three types of market structure: horizontal interaction, third-party interaction, and vertical interaction.

In each case, different competitive harms can arise. In the horizontal case, algorithms may lead to collusion or (in financial markets) herding. In the third-party case, hub-and-spoke agreements can arise, and third parties may manipulate the choice architecture of downstream firms. In the vertical case, algorithms may engage in foreclosure, self-preferencing, and related “Gatekeeper” behaviour. In the following, we briefly discuss each of these three cases.

Horizontal

Concerns about anti-competitive behaviour between algorithms arise from potentially coordinated behaviour. In particular, competing AI may lead to “algorithmic collusion”. Collusion (i.e. cooperation between competitors to limit competition between them) can arise whenever competitors active in the same market interact repeatedly. The repeated interaction makes it possible for collusive strategies to become a game-theoretic equilibrium. These strategies generally consist of a carrot (long-term high profits) to reward competitors for continuing with the agreed collusive arrangement and a stick (“price wars” or generally tough competition) to punish competitors if they deviate from the implied agreement. A series of academic research papers has demonstrated that AI can, in principle, learn to “agree” to refrain from competing in the market.[3]
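
To make the mechanism concrete, the following is a minimal, self-contained sketch of the kind of simulation run in this literature: two independent Q-learning agents repeatedly set prices in a toy market. The price grid, demand rule, and learning parameters are all illustrative assumptions, not the setup of any specific paper.

```python
import itertools
import random

PRICES = [1, 2, 3, 4, 5]          # discrete price grid: 1 ~ competitive, 5 ~ monopoly
ALPHA, GAMMA = 0.1, 0.95          # learning rate and discount factor
EPISODES = 200_000

def profits(p1, p2):
    """Toy Bertrand market: the cheaper firm serves the whole market."""
    if p1 < p2:
        return float(p1), 0.0
    if p2 < p1:
        return 0.0, float(p2)
    return p1 / 2, p2 / 2         # equal prices split the market

STATES = list(itertools.product(PRICES, PRICES))   # state = last period's price pair
Q = [{s: {a: 0.0 for a in PRICES} for s in STATES} for _ in range(2)]

def choose(i, state, eps):
    """Epsilon-greedy price choice for agent i."""
    if random.random() < eps:
        return random.choice(PRICES)
    return max(Q[i][state], key=Q[i][state].get)

state = (random.choice(PRICES), random.choice(PRICES))
for t in range(EPISODES):
    eps = max(0.01, 1.0 - t / (0.9 * EPISODES))    # decaying exploration
    actions = (choose(0, state, eps), choose(1, state, eps))
    rewards = profits(*actions)
    for i in range(2):                             # standard Q-learning update
        target = rewards[i] + GAMMA * max(Q[i][actions].values())
        Q[i][state][actions[i]] += ALPHA * (target - Q[i][state][actions[i]])
    state = actions

# In experiments of this kind, the learned prices frequently settle above the
# one-shot competitive level, without any communication between the agents.
print("long-run price pair:", state)
```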

The worry is that AI may autonomously learn to coordinate on a collusive equilibrium without any need for human communication between the competing companies. This would be a significant challenge to the current legal framework – strictly speaking, only communication and agreements to collude are illegal, not the resulting behaviour in the market. The use of AI thus blurs the line between explicit collusion, which involves information exchange between competitors, and so-called tacit collusion, where no explicit information exchange takes place. In practice, it will be crucial to determine the extent to which the machine learning engineer or company could have known that the algorithm would lead to collusion, or indeed facilitated this behaviour.

Moreover, AI may make collusion more stable. At any given point in time, each member of the cartel has an incentive to undercut the remaining members. But the faster the remaining members can punish this deviation, the less profitable it becomes. With potentially near-instant reactions and constant surveillance of competitor prices (or similar data), algorithms could therefore lead to collusion in markets in which collusion by humans was previously considered to be practically infeasible.
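
The underlying logic can be made concrete with the standard textbook condition for collusion under a “grim trigger” strategy: colluding is worthwhile whenever the discounted stream of collusive profits exceeds the one-shot gain from deviating plus the discounted punishment profits. The sketch below uses purely illustrative profit numbers; the point is that a higher effective discount factor, which near-instant algorithmic repricing implies, tips the balance towards collusion.

```python
# pi_c: per-period collusive profit, pi_d: one-shot deviation profit,
# pi_n: competitive (punishment) profit. All numbers are illustrative.

def collusion_sustainable(pi_c, pi_d, pi_n, delta):
    """Colluding beats deviating iff pi_c/(1-delta) >= pi_d + delta*pi_n/(1-delta)."""
    return pi_c / (1 - delta) >= pi_d + delta * pi_n / (1 - delta)

def critical_delta(pi_c, pi_d, pi_n):
    """Smallest discount factor sustaining collusion: (pi_d - pi_c)/(pi_d - pi_n)."""
    return (pi_d - pi_c) / (pi_d - pi_n)

# A firm that can only react slowly discounts future punishment heavily (low
# delta); an algorithm repricing near-instantly has an effective delta near 1.
print(critical_delta(pi_c=10, pi_d=18, pi_n=2))       # 0.5
print(collusion_sustainable(10, 18, 2, delta=0.40))   # False: punishment too slow
print(collusion_sustainable(10, 18, 2, delta=0.95))   # True: near-instant punishment
```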

A related concern in financial markets is that algorithms lead to herding behaviour; flash crashes have occurred repeatedly in the past. Competing algorithms that train on the same data or are built in a similar (or identical) way may produce the same behaviour and choices. Here, the coordinated behaviour does not have the explicit purpose of achieving higher profits but instead has a damaging side effect.
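
A stylised sketch of this mechanism, with purely made-up numbers: if many nominally independent trading algorithms are fitted to the same data with the same method, they learn the same decision rule, and a single shock can trigger simultaneous selling.

```python
import numpy as np

rng = np.random.default_rng(0)
history = rng.normal(0.0, 1.0, size=1_000)   # training data shared by all firms

def fit_threshold(data):
    """Each firm 'learns' a stop-loss rule: sell if the return falls below
    the 5th percentile of the data it was trained on."""
    return np.percentile(data, 5)

# Ten competing firms, identical data and identical method -> identical rules.
thresholds = [fit_threshold(history) for _ in range(10)]
shock = thresholds[0] - 0.1                  # a return just below the learned rule
sell_signals = [shock < t for t in thresholds]
print(sum(sell_signals), "of 10 algorithms sell at the same instant")   # 10 of 10
```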

Third-party

These concerns can be exacerbated if a third party supplies the information or algorithm to competing companies downstream. In particular, a third party enables so-called hub-and-spoke agreements in which competitors source the same algorithm from the same supplier. This can make coordinated market behaviour more likely and can amplify the collusion and herding concerns discussed above. Here, the supplier acts as the “hub” and the downstream companies as the “spokes”.

But the presence of a third-party supplier may also give rise to new forms of competitive harm. In particular, a third party can manipulate the choice architecture of downstream firms. The supplier selects and curates not only the information itself but also the way it is displayed and communicated. By strategically selecting the information provided, the supplier can manipulate the behaviour of downstream users. This is especially concerning if the third party is a large digital platform providing services to many competing users, or if the third party is itself active in the downstream market (see the section on vertical integration below).

However, for this manipulation to be concerning, the company must occupy a sufficiently strong position in the market. On the one hand, there is evidence that the use of artificial intelligence can entrench a firm’s dominant position: developing AI generally requires significant upfront investment, but comparatively little additional cost arises when new data becomes available or when the algorithm is deployed in additional markets. AI can therefore create substantial economies of scale and scope as well as strong network effects – all important factors that lead to dominant positions – and can thus represent a significant barrier to entry. On the other hand, machine learning frameworks are very accessible today, making entry more feasible, and highly innovative start-up scenes have developed in multiple industries (e.g., FinTech) that emphasise innovation in the use of machine learning technologies. Assessing dominance and the concerns that arise from it therefore requires a case-by-case analysis.

Vertical

The role of a third party as an intermediary becomes particularly relevant if the intermediary is vertically integrated and thus competes downstream with its users/buyers. In this case, the intermediary can have conflicting incentives. Specifically, if a vertically integrated intermediary holds a dominant position in the market, it may have an incentive to foreclose[4] competitors and to engage in similar “Gatekeeper” behaviour. The economic analysis for firms employing AI in a vertical and intermediary context is closely related to the economic analysis of digital markets in general.

One way to foreclose competitors in digital markets is to engage in “self-preferencing”, whereby a firm favours its own products or services over those of its competitors. This is the behaviour the European Commission found Google guilty of engaging in.[5] Amazon is being investigated on similar grounds.[6] The economic logic of these cases applies equally to companies employing a learning algorithm. Simply put, companies must ensure that any algorithm in use does not choose to engage in self-preferencing behaviour.
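
What such a check might look like is sketched below, under the assumption that one can log the ranks (1 = top slot) the algorithm assigns to the platform’s own products and to comparable third-party products. The data here is synthetic, and the permutation test is just one simple way to quantify whether an observed rank gap is suspicious.

```python
import random

random.seed(0)
# Synthetic example: ranks assigned to 200 own and 200 comparable rival products.
own_ranks   = [random.randint(1, 6) for _ in range(200)]    # suspiciously good slots
rival_ranks = [random.randint(1, 10) for _ in range(200)]

n = len(own_ranks)
observed_gap = sum(rival_ranks) / n - sum(own_ranks) / n    # > 0: rivals ranked worse

# Permutation test: if the own/rival label were irrelevant to rank, a gap this
# large should rarely appear once the labels are shuffled at random.
pooled = own_ranks + rival_ranks
extreme, TRIALS = 0, 10_000
for _ in range(TRIALS):
    random.shuffle(pooled)
    gap = sum(pooled[n:]) / n - sum(pooled[:n]) / n
    if gap >= observed_gap:
        extreme += 1
print(f"mean rank gap: {observed_gap:.2f}, permutation p-value: {extreme / TRIALS:.4f}")
```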

Lastly, foreclosure can also arise in a horizontal rather than a vertical context. AI can potentially target offers perfectly to specific buyers or buyer segments. This can result in buyers who are more likely to switch to a competitor’s product or service being aggressively targeted with more attractive offers. If the company engaging in this behaviour is found to be dominant, this could constitute illegal horizontal foreclosure. Taken to its extreme, such aggressive targeting can amount to so-called predatory pricing, whereby the dominant firm incurs a short-term loss in order to eliminate a competitor.

Summary

The increasingly widespread use of artificial intelligence raises numerous competition policy concerns, and authorities have begun to examine and investigate them. But machine learning practitioners remain largely unaware of the potential for anti-competitive conduct by their algorithms and the resulting regulatory risks.

This blog post offers a first look at the very broad subject of AI’s effects on competition. We have outlined the different levels of market interaction through which AI can affect competition, along with the associated policy concerns. In our view, the behaviour of AI in the market when interacting with other players will need to become part of the Responsible AI frameworks that companies employ. However, as the different forms of anti-competitive conduct arising under different market structures show, a one-size-fits-all approach is not possible. Instead, a case-by-case assessment will depend on the structure of the specific algorithm, its output and behaviour, and the competitive landscape of the industry. Additional considerations arise when the algorithm is sourced from a supplier.

Guiding practitioners requires detailed knowledge of the industry and of how its algorithms are used. On that basis, a method for compliance assessment can be developed. Finally, the behaviour of algorithms can be tested explicitly, in line with technical considerations such as the company’s tech stack and implementation.
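
As a hypothetical illustration of such an explicit behavioural test, the sketch below scripts a competitor “deviation” in a sandbox and checks the pricing algorithm under test (represented here by the made-up stand-in price_for, a naive price matcher) for a punish-and-revert pattern, one possible signature of learned collusion.

```python
def price_for(competitor_price_history):
    """Stand-in hook for the algorithm under test; here a naive price matcher."""
    return competitor_price_history[-1]

def deviation_test(baseline=10.0, deviation=6.0, horizon=20):
    """Script a one-period competitor price cut and record the responses."""
    competitor = [baseline] * 5 + [deviation] + [baseline] * (horizon - 6)
    responses = [price_for(competitor[: t + 1]) for t in range(horizon)]
    punished = min(responses[5:9]) < baseline        # price drop after the deviation?
    reverted = abs(responses[-1] - baseline) < 0.5   # back to the elevated price?
    return punished, reverted

# (True, True) is consistent with a punish-and-revert (collusive) pattern.
print(deviation_test())
```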

 

References

[1]      Professor Michal Gal, Director of the Center for Law and Technology at the Faculty of Law, University of Haifa, Israel. Remarks at the First Annual Conference on Computational Antitrust of the Stanford Center for Legal Informatics, 2021.

[2]      For example, the “Eturas” case in Lithuania in 2017 or the “Idealista” case in Spain in 2021.

[3]      See for example Eschenbaum et al. (2021).

[4]      Foreclosure refers to an anti-competitive practice in which an upstream company obstructs or refuses to supply its downstream competitors.

[5]      The European Commission’s “Google Shopping” decision, concluded in 2017, imposed a then-record fine of 2.42 billion euros. The General Court of the European Union upheld the decision in 2021.

[6]      In 2020, the European Commission opened an investigation into the preferential treatment of Amazon’s own offers.

 

Do you have feedback or questions? Please contact us directly by email.