
Leading specialists have resigned from OpenAI and Anthropic, citing management’s neglect of risks and the accelerated deployment of AI.
Several senior researchers working on artificial intelligence systems at major US technology companies have resigned. According to Axios, as relayed by TASS, the departures involve staff at OpenAI, Anthropic, and other research hubs at the forefront of AI innovation in San Francisco.
The reported motivation for the resignations is an internal conflict between researchers and executive leadership. The researchers are alarmed that powerful AI systems are being developed too rapidly, with insufficient attention paid to the associated dangers. Some have explicitly warned that AI poses an “existential danger to humanity.”
Anthropic, which is led by former OpenAI personnel, recently released a report stating that generative AI could be used in the “creation of chemical weaponry, the planning of violent acts, and other hazardous activities entirely unchecked by human oversight.”
The document also highlighted another concern: AI operating autonomously, without user involvement, with outcomes that could prove completely unpredictable.
Against this backdrop, The New York Times reports, investors have grown cautious: shares of several IT firms specializing in AI-driven software have fallen notably on the New York Stock Exchange. The concern stems from fears that neural networks might soon supplant licensed software, displacing conventional programs and established usage patterns.
As of now, neither OpenAI nor Anthropic has issued a formal statement regarding the wave of resignations or the accusations leveled against them. Apprehension is nevertheless mounting within the professional community, with experts calling for stronger oversight of AI and for development efforts to fully account for the global risks such technologies are already beginning to introduce.