Leading AI Companies Reverse Course, Push for Lighter Regulation Under New U.S. Administration
Tech giants walk back their earlier endorsement of AI regulation, seeking expanded data access and relief from copyright restrictions amid rising global competition.
In May 2023, leaders from major artificial intelligence firms, including OpenAI, Google's DeepMind, and Anthropic, urged U.S. lawmakers to implement federal regulations for the swiftly evolving AI industry.
Their push for oversight aimed to address the existential risks posed by advanced AI systems, and they proposed measures such as algorithmic audits, content labeling, and collaborative sharing of risk data.
At that time, the U.S. government worked alongside AI developers to establish voluntary commitments designed to ensure the safety and fairness of AI technologies.
In October 2023, a presidential executive order codified these principles, directing federal agencies to assess the potential ramifications of AI systems for privacy, workers' rights, and civil liberties.
With a new administration in place, U.S. AI policy shifted markedly.
Within the first week of the new presidential term, an executive order rescinded the prior administration's directives and promoted initiatives to bolster American AI capabilities.
The new order called for a national strategy, to be delivered within 180 days, for removing regulatory obstacles.
In the weeks that followed, AI firms submitted documents and proposals intended to shape the forthcoming framework.
A fifteen-page report submitted by OpenAI urged the federal government to prevent individual U.S. states from enacting their own AI regulations.
It also pointed to the Chinese AI company DeepSeek, which developed a competitive model with a fraction of the resources American firms require, and advocated broader access to federal data for model training.
OpenAI, Google, and Meta have also lobbied for expanded permissions to utilize copyrighted materials—including books, films, and artworks—for training their AI models.
All three companies are currently facing ongoing legal challenges related to copyright infringement.
They have sought executive clarification or legislative measures to confirm that using publicly available information for model training falls under fair use.
A prominent U.S. venture capital firm has also put forth a policy paper urging the avoidance of any new AI-specific regulations, arguing that existing consumer safety and civil rights laws suffice.
The firm called for penalizing actors who cause actual harm but opposed mandates that would impose regulatory burdens based on hypothetical risks.
This policy shift aligns with rising concerns among AI developers regarding increasing global competition.
During the previous administration, leading U.S. companies operated under the belief that their substantial investments and computational capabilities provided a lasting edge, particularly as restrictions were placed on exporting advanced AI chips to countries like China.
Recent developments, such as the introduction of advanced models by smaller foreign competitors, have challenged this viewpoint.
Some U.S. AI firms have reevaluated the extent of their technological lead and are now pursuing faster access to resources and reduced regulatory barriers.
This reassessment has visibly reshaped industry lobbying: leading AI companies now emphasize competitive positioning over their earlier calls for cautious, collaborative regulation.