ARTIFICIAL INTELLIGENCE, ALGORITHMS AND ANTITRUST
Professor Suzanne Rab*
Preventing artificial intelligence from creating commercial monopolies.
The debate around artificial intelligence, or “AI”, has attracted antitrust interest among academics, practitioners and regulators. In their book Virtual Competition, Professors Ariel Ezrachi and Maurice Stucke postulate the “end of competition as we know it” and call for heightened regulatory intervention against algorithmic systems.
The AI antitrust literature reflects three broad themes, or potential areas of antitrust concern. First, it is said that AI can widen the set of circumstances in which known forms of anti-competitive conduct, particularly conscious parallelism or tacit collusion, can occur. Second, it is said that the use of algorithms will bring newer forms of anti-competitive conduct that challenge traditional antitrust orthodoxy with new (non-price) elements such as price discrimination, data extraction and data capture. Third, it is said that exploitation and deception are features of algorithmic markets, nudging consumers into unfair transactions that raise questions of ethics and fairness with which conventional antitrust regimes are not well equipped to deal.
This article will focus on the claimed facilitating role of algorithms and whether they may contribute to, or lead to, anti-competitive outcomes. It considers (1) whether AI leads to anti-competitive outcomes or other concerns, (2) whether there might be another (not anti-competitive) explanation for the outcomes observed, and (3) views from the regulators on the attribution of liability for AI decisions.
The main concern in the context of antitrust or competition law is that a specific type of AI – pricing algorithms used by firms to monitor, recommend or set prices – can lead to collusive outcomes in the market in two main ways. First, pricing algorithms may help facilitate explicit coordination agreements among firms, because their use may make market conditions more conducive to coordination; for example, monitoring rivals’ prices becomes easier when algorithms are deployed. Second, under certain conditions, the use of pricing algorithms can lead to tacit collusion even without any agreement to coordinate. This concern rests on the principle that when many or all firms in the market use similar and simple algorithms to set prices, each firm can anticipate the others’ strategies, making it easier to reach coordinated outcomes.
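The mechanism can be illustrated with a deliberately minimal sketch, not drawn from the article or any real case: two hypothetical firms each run the same simple pricing rule, which sustains a supra-competitive “focal” price but reverts permanently to the competitive price once any undercut is observed. All names, prices and the rule itself are invented for illustration.

```python
# Illustrative sketch only: two hypothetical firms using identical simple
# pricing rules. The figures and the rule are invented for illustration.

COMPETITIVE_PRICE = 1.0   # price that would prevail under full competition
COLLUSIVE_PRICE = 2.0     # supra-competitive focal price

def next_price(my_last, rival_last):
    """Grim-trigger style rule: sustain the focal price, but once either
    firm has priced below it, revert to the competitive price for good."""
    if my_last < COLLUSIVE_PRICE or rival_last < COLLUSIVE_PRICE:
        return COMPETITIVE_PRICE   # punishment phase is absorbing
    return COLLUSIVE_PRICE

def simulate(p_a, p_b, periods=10, deviate_at=None):
    """Run the two rules against each other; firm A may undercut once."""
    history = [(p_a, p_b)]
    for t in range(1, periods):
        new_a = next_price(p_a, p_b)
        if deviate_at == t:        # A undercuts for a one-period gain
            new_a = COMPETITIVE_PRICE
        new_b = next_price(p_b, p_a)
        p_a, p_b = new_a, new_b
        history.append((p_a, p_b))
    return history

# Both start at the focal price: the rules sustain it indefinitely.
print(simulate(COLLUSIVE_PRICE, COLLUSIVE_PRICE)[-1])               # (2.0, 2.0)
# A single deviation triggers immediate and lasting retaliation.
print(simulate(COLLUSIVE_PRICE, COLLUSIVE_PRICE, deviate_at=3)[-1]) # (1.0, 1.0)
```

Because each firm can perfectly anticipate the other’s rule, the high price is stable and any deviation is detected and punished within one period – a toy version of the detection-and-retaliation dynamic the literature describes.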
Mehra has focused on the facilitating role of algorithms, stating that: “…to the extent that the effects of oligopoly fall through cracks of antitrust law, the advent of the robo-seller may widen those cracks into chasms. For several reasons, the robo-seller should increase the power of oligopolists to charge supracompetitive prices: the increased accuracy in detecting changes in price, greater speed in pricing response, and reduced irrationality in discount rates all should make the robo-seller a more skilful oligopolist than its human counterpart in competitive intelligence and sales…the robo-seller should also enhance the ability of oligopolists to create durable cartels” (Mehra, S. K. (2016) ‘Antitrust and the Robo-Seller: Competition in the Time of Algorithms’, 100 Minnesota Law Review, 1323-75).
This suggests that algorithms can be a ‘plus factor’ which renders tacit collusion more likely, stable, durable and versatile by facilitating detection and retaliation at lower levels of concentration. However, this claim is not straightforward. Firms would still need to choose whether to use and stick to the same algorithms. The incentive to coordinate is not automatic just because of the existence of algorithms. Firms could still choose to undercut rivals for short term gain. Indeed, smart algorithms might try to cheat without being caught.
Contrary to claims that AI is likely to lead to anti-competitive outcomes, AI in general generates a wide range of efficiencies. For example, AI can be used to predict demand from past data and so help firms improve inventory management. In some areas, AI may be effective in replacing human labour for simple and repetitive tasks. Because of these efficiencies, the use of AI may affect the demand for labour: more computer scientists may be required to improve the performance of algorithms, while the number of manufacturing jobs may decrease as more tasks can be performed by machines. This illustrates an increase in demand for goods and services complementary to the use of AI (e.g. servers and computing hardware), and a decrease in demand for goods and services that AI can substitute (e.g. travel agents).
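The demand-prediction efficiency mentioned above can be sketched in its simplest possible form: a moving-average forecast of next period’s sales feeding a reorder decision. The function names, sales figures and safety-stock parameter are all hypothetical, chosen purely to make the idea concrete.

```python
# Hypothetical sketch of demand forecasting for inventory management.
# All names and figures are invented for illustration.

def forecast_demand(past_sales, window=3):
    """Forecast next period's demand as the average of recent observations."""
    recent = past_sales[-window:]
    return sum(recent) / len(recent)

def reorder_quantity(past_sales, stock_on_hand, safety_stock=5):
    """Order enough units to cover the forecast plus a safety buffer."""
    needed = forecast_demand(past_sales) + safety_stock
    return max(0, round(needed - stock_on_hand))

sales = [100, 120, 110, 130, 125]          # invented weekly sales history
print(forecast_demand(sales))              # average of last 3 weeks
print(reorder_quantity(sales, stock_on_hand=40))
```

Real systems would of course use far richer models, but even this trivial rule shows how automating the forecast-and-reorder loop substitutes for routine human judgement – the labour-demand effect described above.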
On the question of whether EU competition law is fit for purpose in an AI environment, there is no consensus among regulators. European Competition Commissioner Vestager has stated that: “…businesses also need to know that when they decide to use an automated system, they will be held responsible for what it does. So, they had better know how that system works” (Bundeskartellamt 18th Conference on Competition, Berlin, 16 March 2017). In terms of attribution of liability, the European Commission treats an AI decision-maker in the same way as a human, and a business cannot escape liability by attributing conduct to a machine. It appears that the European Commission expects businesses to anticipate the possibility of an errant AI decision-maker and to take steps to limit its freedom by design.
In contrast, David Currie, former Chairman of the UK’s Competition and Markets Authority (CMA), has expressed a less definitive view. He has questioned whether the legal tools currently available to the CMA are capable of tackling all the challenges presented by the rise of the algorithmic economy, such as self-learning algorithms. This may suggest that the question of attribution of liability (under the UK competition regime at least) is ripe for reassessment should developments in AI advance to such a state.
On the specific issue of whether algorithms may facilitate anti-competitive outcomes, the CMA has adopted a nuanced view. It recently published an economic research paper on the role of pricing algorithms in online markets (Pricing algorithms: Economic working paper on the use of algorithms to facilitate collusion and personalised pricing, 8 October 2018 (CMA94)). The paper finds that algorithms can be used to help implement illegal price fixing and, under certain circumstances, could encourage the formation of cartels. However, it considers the risk of algorithms colluding without human involvement to be currently less clear.
The AI antitrust scholarship makes a bold claim: that AI is an enabler of tacit collusion and could increase the scope for anti-competitive outcomes at even lower levels of concentration than traditionally associated with antitrust theory. However, even the brief examination of these claims in this article has revealed alternative hypotheses that need to be fully tested before the theory can be incorporated into policy and legal frameworks without running the risk of being counterproductive.
*Professor Suzanne Rab is Professor of Commercial Law and Practice Chair at Brunel University London and a full-time practising barrister at Serle Court Chambers in London where she specialises in competition, EU and regulatory law. Suzanne has addressed the legal, policy and academic community on the implications of new technology for competition and innovation including at Brunel’s 13 September 2019 conference on Disruptive Innovation and Law. Suzanne leads the Brunel Comparative Competition Law Summer School. The 2020 presentation will take place between 22 June and 4 July 2020. For further information or to book a place please contact Nikki Elliot (Nikki.firstname.lastname@example.org).