
AI developer Anthropic revealed this week that it is allocating $20 million to a political group advocating for increased regulation of the technology—yet its chief rival, OpenAI, has informed its staff that it will refrain from making comparable donations.
In a memo circulated to employees on Thursday, OpenAI’s Chief of Global Affairs, Chris Lehane, stated that while OpenAI permits its personnel to “express their ideological beliefs regarding causes they support,” the corporation itself will not be taking such direct action in the near future.
OpenAI is not currently engaging with Political Action Committees (PACs) or 501(c)(4) social welfare nonprofits because the company wants to maintain autonomy over its political spending, Lehane explained in an interview with CNN.
“We truly believe it is vital that this issue transcends partisan politics,” Lehane commented.
The stakes are particularly high this year. Both Anthropic and OpenAI are reportedly exploring substantial Initial Public Offerings (IPOs), even as Congress works to establish regulatory frameworks that could govern the industry for the next decade or beyond. Furthermore, as the midterm elections approach, voters are increasingly concerned about the implications of AI advances, from energy costs to privacy and job displacement.
Although OpenAI is not directly funding PACs, its executives and major investors have made considerable contributions. President and co-founder Greg Brockman and his wife, Anna, donated $25 million to a Super PAC supporting President Donald Trump.
Brockman, alongside several of OpenAI’s largest investors, jointly contributed over $100 million to a bipartisan Super PAC named Leading the Future. This group campaigns against state-level AI regulation, favoring a unified national regulatory structure, a point Lehane acknowledged in his staff memo. The organization has already funded advertisements opposing New York State Assembly member Alex Bores, who is running in the state’s 12th congressional district as an outspoken proponent of AI limits.
Lehane asserted that OpenAI champions “a national federal framework and has already endorsed legislation both at the state and federal levels on a number of issues this year.”
Anthropic was founded with a core focus on AI safety and frequently emphasizes the necessity of regulation in AI development. CEO Dario Amodei regularly publishes lengthy essays and gives interviews detailing the risks inherent in artificial intelligence.
This week, the company announced it is funding the Public First Action Super PAC, a bipartisan entity supporting AI regulation, stating it does “not want to sit on the sidelines” while AI governance is being shaped.
“(We) need good policy: flexible regulation that allows us to reap the benefits of AI, keep risks in check, and keep America ahead in the AI race,” Anthropic declared in its statement. “This means preventing critical AI technology from falling into the hands of America’s adversaries, supporting meaningful guardrails, fostering job growth, protecting children, and demanding real transparency from the companies building the most powerful AI models.”
However, Anthropic’s stance has drawn scrutiny from the Trump administration. David Sacks, the head of AI in the White House, last year accused Anthropic of “government regulatory capture that damages the ecosystem.” “Anthropic is implementing a complicated scare-mongering strategy to ensnare regulators,” he posted on X.
Last year, Trump signed an executive order to prevent states from enacting their own AI regulation laws, promoting a singular national policy that remains undefined.
The divergence between Anthropic and OpenAI on AI oversight reflects their long-standing rivalry, which publicly flared up last week when Anthropic launched a Super Bowl ad for its ad-free chatbot, Claude, just days before OpenAI began showing select users advertisements within their ChatGPT conversations this week.