Competition authorities from the European Union, the United Kingdom, and the United States have issued a joint statement addressing the critical need for fair competition in the emerging field of generative AI. The declaration, signed by Margrethe Vestager (EC), Sarah Cardell (UK), Jonathan Kanter (DOJ), and Lina M. Khan (FTC), underscores their commitment to safeguarding open and competitive markets as society reaps the benefits of these rapidly advancing technologies.
The joint statement emphasizes the importance of fair and competitive markets in promoting innovation, economic growth, and consumer welfare. While the legal frameworks and enforcement powers differ across these jurisdictions, the authorities highlight their shared understanding of the potential benefits and risks associated with AI technologies. They acknowledge that generative AI, including foundation models, represents a technological inflection point that could transform economies and societies.
Despite differences in legal systems and enforcement capabilities, the authorities affirm their commitment to sovereign decision-making while recognizing that the risks posed by AI can transcend international borders. Consequently, they commit to sharing insights and cooperating to address these challenges, leveraging their respective powers where necessary.
The joint statement outlines several competition risks that could arise in the generative AI landscape:
Concentrated Control of Key Inputs: The development of AI technologies relies on critical inputs such as specialized chips, substantial computing power, data, and technical expertise. Control of these inputs by a small number of companies could stifle competition and innovation.
Entrenching or Extending Market Power: Established digital firms with significant market power may leverage their positions to dominate AI-related markets, potentially using their control over distribution channels to suppress competition.
Amplifying Risks Through Partnerships: Collaborations and investments between firms in the AI space may sometimes undermine competition, allowing dominant players to shape market outcomes in their favor.
Principles for Protecting Competition
To mitigate these risks and foster a competitive AI ecosystem, the authorities advocate these principles:
Fair Dealing: Encouraging firms with market power to engage in fair dealing and avoid exclusionary tactics that can hinder innovation.
Interoperability: Promoting the interoperability of AI products and services to enhance competition and innovation, while carefully scrutinizing claims that interoperability may compromise privacy and security.
Choice: Ensuring that businesses and consumers have access to diverse products and business models, scrutinizing potential lock-in mechanisms and partnerships that could limit competition.
Beyond competition concerns, the statement also highlights the importance of protecting consumers from potential abuses in the AI ecosystem. This includes guarding against deceptive practices and unfair uses of data, as well as ensuring transparency in how AI is employed in products and services.
The joint statement serves as a call to action for vigilance and proactive measures to ensure that the transformative potential of AI benefits all market participants. By working together, the competition authorities aim to create an environment where innovation thrives, consumers are protected, and fair competition is maintained.
Editor’s Note: AI is incorporated into controversial algorithmic price-setting platforms. Read Jonathan Rubin’s recent article: Algorithmic Price-Setting by Multiple Competitors is a U.S. Antitrust Enforcement Priority.