Shepard Goldfein and James A. Keyte
Over the past decade, computers have become increasingly powerful, able to handle ever larger amounts of data at higher speeds. As Moore’s Law observes, computing capacity roughly doubles every two years. This continuing advancement has allowed sophisticated algorithms to analyze the world and predict how people will behave. An algorithm is a set of rules executed in a specific order, designed to solve a problem or carry out a task. As computer systems have grown more sophisticated, companies increasingly use algorithms to automate complex and repetitive tasks that were previously far more costly when performed by humans.
As noted by the Organisation for Economic Co-operation and Development (OECD) in the background paper to its June 2017 panel on algorithms, computational algorithms can be used by businesses for predictive analysis—where the algorithms measure the likelihood of future outcomes based on analysis of historical data—and also can optimize business processes, reduce transaction and production costs, segment consumers or set optimal prices in response to market circumstances. Indeed, the growing usefulness of algorithms has led many companies around the world to employ them in order to improve business decisions and automate processes.
Benefits, Risks of Algorithms
In the competitive arena, pricing algorithms present businesses with a number of unique opportunities to improve their processes and efficiency. Algorithms make business processes faster and easier to run: sellers can automatically adjust their conduct to the demands of the market, increasing efficiency and saving resources and money. Moreover, algorithms allow businesses to partially or fully automate some of their key systems, further reducing costs and the time, money and personnel that must be devoted to those processes. This automation can lead to more and better competition in markets, as businesses respond quickly to each other and to customers, increasing efficiency and profitability at the same time.
However, these capabilities are not necessarily a universal boon. Critics worry that because algorithms have become so advanced, they may enable new forms of anticompetitive coordination that were not possible before. For example, in situations where collusion previously could be implemented only through explicit communication, algorithms, in theory, could create new mechanisms that allow businesses to implement a common policy and to observe the behavior of other firms without any human interaction at all. As the OECD postulates, “algorithms may enable firms to replace explicit collusion with tacit co-ordination.”1 The notion is that algorithms, properly programmed, can automatically detect when a cartel member is “cheating,” and thus can more effectively punish the deviant cartel member.
To understand this proposition, some suggest considering cartel behavior as a prisoner’s dilemma. Cartel members know that their agreement to fix prices is illegal, and thus unenforceable in the courts. Because of this, there is no legal enforcement mechanism in place to prevent a cartel member from lowering its prices and increasing profits at the expense of the other cartel members. However, if all members lower their prices, then all will be worse off than if all had kept selling at the agreed-upon price. In this hypothetical, algorithms may help non-cheating cartel members get around these problems by quickly detecting and punishing any deviation by other cartel members.
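The prisoner’s dilemma described above can be made concrete with a small sketch. All of the payoff numbers below are purely illustrative assumptions, not drawn from any real market; they simply exhibit the structure the argument relies on: undercutting is each firm’s dominant strategy, yet mutual undercutting leaves everyone worse off than the cartel price.

```python
# Hypothetical payoffs for a two-firm pricing "prisoner's dilemma".
# Moves: "fix" = hold the agreed cartel price, "cut" = secretly undercut it.
# profit[(my_move, rival_move)] -> my profit (illustrative units only)
PROFIT = {
    ("fix", "fix"): 10,  # both hold the agreed price: high shared profit
    ("fix", "cut"): 2,   # I hold, rival undercuts: rival takes my customers
    ("cut", "fix"): 15,  # I undercut a loyal rival: short-term windfall
    ("cut", "cut"): 5,   # price war: both earn less than under the cartel
}

def best_response(rival_move: str) -> str:
    """Return the move that maximizes my profit given the rival's move."""
    return max(("fix", "cut"), key=lambda m: PROFIT[(m, rival_move)])

# Cutting is the dominant strategy whatever the rival does...
assert best_response("fix") == "cut"
assert best_response("cut") == "cut"
# ...yet mutual cutting is worse for both than mutual price-fixing.
assert PROFIT[("cut", "cut")] < PROFIT[("fix", "fix")]
```

This is why, absent an enforcement mechanism, cartels tend to unravel, and why a tool that detects and punishes deviation quickly changes the calculus.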
According to those concerned about algorithms, the most basic way for algorithms to facilitate tacit collusion is by monitoring the actions of a firm’s competitors in order to enforce an agreement. Without the power of computers and algorithms, there simply is too much data for a human to analyze quickly enough for it to be useful. Using an automatic collection method, or “aggregator,” however, a company can collect data on its competitors’ behavior, including price data, and combine it with a pricing algorithm that automatically detects deviations or potential deviations from an agreed-upon price. Consistent with traditional cartel theory, these algorithms will generally be faster and more efficient than humans at detecting and punishing deviations, removing much of the incentive to cheat in the first place, because a cheating firm is unlikely to profit for long. A second theoretical example of a potentially anticompetitive algorithm is one that sets prices automatically, adjusting them in response to market conditions and the actions of consumers and competitors.
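The monitoring step described above is mechanically simple. The sketch below is purely illustrative, not any real system: the function name, the notion of an “agreed price” and the tolerance parameter are all hypothetical, chosen only to show how aggregated price data could be scanned for deviations from a reference price.

```python
# Illustrative sketch only: flag firms whose observed price undercuts a
# hypothetical reference ("agreed") price by more than a small tolerance.
def detect_deviations(prices: dict[str, float],
                      agreed_price: float,
                      tolerance: float = 0.01) -> list[str]:
    """Return firms whose price falls below agreed_price by more than
    `tolerance` (expressed as a fraction of the reference price)."""
    floor = agreed_price * (1 - tolerance)
    return [firm for firm, p in sorted(prices.items()) if p < floor]

# Aggregated competitor prices (hypothetical data):
observed = {"firm_a": 100.0, "firm_b": 99.5, "firm_c": 94.0}
cheaters = detect_deviations(observed, agreed_price=100.0)
# cheaters == ["firm_c"]  (firm_b's 0.5% discount is within tolerance)
```

The point of the sketch is the asymmetry the commentators worry about: a loop like this can run continuously over every competitor’s posted price, doing in milliseconds what no human monitor could.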
On their own, of course, these algorithms pose no antitrust problems. After all, as described earlier, there are numerous procompetitive and unilateral reasons for using algorithms, which would apply even in oligopolistic market structures. Yet the temptation may exist for competitors to share the same algorithm, and then program it to set prices above the competitive level, potentially raising the conduct to the level of explicit collusion between the competitors.
While such explicit or tacit collusion is thought to be a sustainable problem only in oligopolistic markets with few sellers, the use of algorithms adds a new wrinkle to the classic arguments about oligopoly. According to some commenters, algorithms make tacit collusion possible in larger, non-oligopolistic markets.2 In theory, algorithms can quickly detect many small changes to the market, which may make it considerably easier for firms in more fragmented markets to understand how their competitors are acting and react accordingly. While, again, there are many procompetitive reasons for firms to use algorithms in this way unilaterally, this behavior potentially allows firms to collude tacitly by using algorithms even outside an oligopolistic market.
‘Twombly’ Standard a Challenge
Critically, however, these new behaviors will still be subject to the same standards of pleading and proof, which suggests that the use of algorithms is unlikely to change the way the antitrust laws are enforced. Thus, while algorithms may present a new opportunity to push the envelope on these laws, plaintiffs will still need to show evidence of an agreement to fix prices in order to survive a motion to dismiss. As the U.S. Supreme Court stated in Twombly, “while a showing of parallel ‘business behavior is admissible circumstantial evidence from which the fact finder may infer agreement,’ it falls short of ‘conclusively establish[ing] agreement or … itself constitut[ing] a Sherman Act offense.’”3 Parallel conduct, on its own, is consistent with both conspiracy and independent rational business behavior, and courts will continue to be cautious not to draw “false inferences from identical behavior.”4
To be sure, where an algorithm is shared among competitors for an anticompetitive purpose, or is commonly programmed to set prices at a supracompetitive level, “an agreement” may not be difficult to find. But the trickier, and more likely, situation is one in which there is no obvious agreement, and algorithms are programmed to glean market information and then potentially act in coordination with other players to raise prices or limit output.5 Such a scenario presents a unique challenge in proving an actual agreement by sellers to fix prices: A pricing algorithm has no “mind” to meet with another actor, and no subjective intent to collude or agree with another actor or algorithm.
The interesting question, then, is how the use of algorithms will mesh with the courts’ use of so-called “plus factors” to infer a preceding agreement under Twombly. Plus factors are affirmative actions that can form the basis for inferring such an agreement among competitors. Some commenters argue that the existence of self-teaching algorithms could itself be a plus factor suggesting agreement among their operators,6 while others argue that such algorithms fall into a gap in the antitrust laws that would make it difficult to punish those who use them.7
Something to Keep an Eye On
Ultimately, algorithms are going to be subject to the same antitrust requirements as any other conduct. However, the courts and enforcement agencies will need to hone their analyses, whether by learning more about the technologies themselves or by relying on experts and specialists to describe the parameters of the type of conduct that may be reviewed as other than “unilateral.” The legal framework will in all likelihood remain the same, and these algorithms will need to find their place within it. What all of this recent chatter really confirms is that antitrust is ever changing, as innovation presents new settings in which to apply bedrock principles.
1. Organisation for Economic Co-operation and Development, “Algorithms and Collusion—Background Note by the Secretariat” (May 2017).
2. Michal S. Gal, “Algorithmic-Facilitated Coordination: Market and Legal Solutions,” CPI ANTITRUST CHRONICLE 6 (May 2017).
3. Bell Atlantic v. Twombly, 550 U.S. 544, 553 (2007); see also In re Commodity Exch. Silver Futures & Options Trading Litig., 560 F. App’x 84, 87 (2d Cir. 2014); Cosmetic Gallery v. Schoeneman, 495 F.3d 46, 51-52 (3d Cir. 2007).
4. Twombly, 550 U.S. at 554; see also Almanza v. United Airlines, 851 F.3d 1060, 1068 (11th Cir. 2017); Superior Offshore Int’l v. Bristow Grp., 490 F. App’x 492, 499 (3d Cir. 2012).
5. See “Algorithmic-Facilitated Coordination,” supra note 2, at 6.
7. Salil K. Mehra, “Antitrust and the Robo-Seller: Competition in the Time of Algorithms,” 100 MINN. L. REV. 1323, 1328 (2015).