Game Theory Reveals How Algorithms Can Inflate Prices

The original version of this article appeared in Quanta Magazine.
Picture a town with two widget sellers. Because customers favor lower prices, the merchants strive to undercut each other. Frustrated by their meager profits, they meet one evening in a dimly lit tavern to hatch a covert plan: By raising prices together instead of competing, they could both earn more. But such deliberate price-fixing, known as collusion, has long been illegal. The widget sellers decide not to risk it, and everyone else continues to enjoy affordable widgets.
For more than a century, US law has followed this basic principle: Ban those clandestine agreements to keep prices fair. Today, the situation is far more complicated. In many industries, sellers increasingly rely on computer programs called learning algorithms, which repeatedly adjust prices in response to new market data. These algorithms are often simpler than the “deep learning” systems behind modern AI, but they can still behave in unexpected ways.
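To make “learning algorithm” concrete, here is a minimal sketch in Python. The price menu, the demand curve, and the exploration rate are all illustrative assumptions, not any real vendor’s system: a single seller tries prices from a menu, observes noisy sales, and gravitates toward whichever price has earned the most so far.

```python
import random

# Hypothetical price menu and unit cost (illustrative, not a real system).
PRICES = [8.0, 9.0, 10.0, 11.0, 12.0]
COST = 5.0

def units_sold(price):
    """Stand-in for real market feedback: noisy linear demand."""
    return max(0.0, 20.0 - 1.5 * price + random.gauss(0.0, 1.0))

avg_profit = [0.0] * len(PRICES)   # running average profit per price
counts = [0] * len(PRICES)         # how often each price has been tried

for day in range(10_000):
    if random.random() < 0.1:      # occasionally explore a random price
        i = random.randrange(len(PRICES))
    else:                          # otherwise exploit the best so far
        i = max(range(len(PRICES)), key=lambda j: avg_profit[j])
    profit = (PRICES[i] - COST) * units_sold(PRICES[i])
    counts[i] += 1
    avg_profit[i] += (profit - avg_profit[i]) / counts[i]  # update mean

best = max(range(len(PRICES)), key=lambda j: avg_profit[j])
print(f"learned price: {PRICES[best]}")
```

Nothing here is nefarious: the program simply repeats whatever has worked. The surprises begin when two such programs face each other.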
So how can regulators ensure that algorithms set fair prices? Their traditional tools fall short, since they mostly depend on catching explicit collusion. “The algorithms certainly aren’t sharing drinks,” said Aaron Roth, a computer scientist at the University of Pennsylvania.
But a notable 2019 study showed that algorithms can learn to collude implicitly, even when they’re not explicitly programmed to do so. A research team pitted two copies of a simple learning algorithm against each other in a simulated marketplace, letting them explore different strategies for increasing their profits. Over time, each algorithm learned through trial and error to retaliate whenever the other lowered prices, drastically cutting its own price in response. The end result was inflated prices, propped up by the mutual threat of a price war.
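Here is a sketch of the kind of experiment the study describes, not the paper’s actual setup: the price grid, cost, learning parameters, and the winner-take-all demand rule below are all hypothetical. Two Q-learning agents repeatedly pick prices, and each conditions its choice on the pair of prices chosen in the previous round, which is what allows a punishment strategy to emerge.

```python
import random

PRICES = [1.0, 1.5, 2.0, 2.5, 3.0]   # hypothetical price grid
COST = 0.5                            # hypothetical unit cost
ALPHA, GAMMA, ROUNDS = 0.1, 0.95, 100_000
N = len(PRICES)

def profits(i, j):
    """Winner-take-all demand: the cheaper seller sells one unit; ties split it."""
    p1, p2 = PRICES[i], PRICES[j]
    if p1 < p2:
        return p1 - COST, 0.0
    if p2 < p1:
        return 0.0, p2 - COST
    return (p1 - COST) / 2, (p2 - COST) / 2

# State = the pair of prices chosen last round, so each agent can
# condition on (and learn to punish) its rival's most recent move.
q1 = {((s1, s2), a): 0.0 for s1 in range(N) for s2 in range(N) for a in range(N)}
q2 = dict(q1)

def choose(q, state, eps):
    """Epsilon-greedy: explore a random price, else pick the best-valued one."""
    if random.random() < eps:
        return random.randrange(N)
    return max(range(N), key=lambda a: q[(state, a)])

state = (0, 0)
for t in range(ROUNDS):
    eps = max(0.02, 1.0 - t / (0.8 * ROUNDS))   # decaying exploration
    a1 = choose(q1, state, eps)
    a2 = choose(q2, state, eps)
    r1, r2 = profits(a1, a2)
    nxt = (a1, a2)
    for q, a, r in ((q1, a1, r1), (q2, a2, r2)):
        best_next = max(q[(nxt, b)] for b in range(N))
        # Standard Q-learning update toward reward plus discounted future value.
        q[(state, a)] += ALPHA * (r + GAMMA * best_next - q[(state, a)])
    state = nxt

print("prices in the last round:", PRICES[state[0]], PRICES[state[1]])
```

Because each agent’s state includes its rival’s last price, “if you cut your price, I’ll cut mine” is a strategy the learners can stumble into on their own; whether they actually settle at supra-competitive prices depends on the parameters and the number of rounds.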
Implicit threats like these also underlie many cases of human collusion. So if the goal is to guarantee fair prices, why not require sellers to use algorithms that are incapable of expressing threats?
In a recent study, Roth and four fellow computer scientists showed why that might not be enough. They proved that even seemingly innocuous algorithms, each designed simply to maximize its own profits, can lead to bad outcomes for consumers. “High prices can still emerge in ways that appear reasonable from an external perspective,” said Natalie Collina, a graduate student who works with Roth and co-authored the new study.
Researchers don’t all agree on what this discovery implies; much depends on how “reasonable” is defined. Still, it highlights just how knotty questions about algorithmic pricing can be, and how hard it will be to regulate effectively.

