A wave of resignations from leading artificial intelligence safety research teams is raising concerns about the prioritization of profit over caution within the industry. Most recently, Mrinank Sharma, a researcher at Anthropic, resigned, citing difficulties in aligning the company's actions with its stated values. His departure follows similar exits from OpenAI, including that of Ryan Beiermeister, who reportedly opposed the rollout of adult content, and adds to a growing narrative of commercial pressures shaping the direction of AI development.
Sharma’s resignation letter, while vague, warned of a “world in peril” and of the difficulty of upholding ethical principles. It echoes concerns voiced by other departing researchers, suggesting a systemic issue within the field. OpenAI, founded as a non-profit organization, shifted to a commercial model in 2019, a move that spurred the creation of Anthropic as a purportedly safer alternative. Sharma’s exit suggests that even companies founded on principles of restraint are struggling to resist the financial incentives driving the industry.
The shift towards prioritizing revenue is evident in several recent decisions. OpenAI’s recruitment of Fidji Simo, a former Facebook advertising executive, has drawn scrutiny, particularly given the company’s introduction of advertisements into its ChatGPT chatbot. While OpenAI maintains that ads do not influence ChatGPT’s responses, researchers like Zoë Hitzig have warned of the potential for manipulation inherent in integrating advertising into a conversational interface. This mirrors concerns about the psychological targeting techniques employed by social media platforms.
The handling of Elon Musk’s AI chatbot, Grok, further illustrates the tension between innovation and safety. Initially released with few restrictions and quickly misused, Grok was moved behind a paid subscription before being halted following investigations in the UK and EU. This sequence of events raises the question of whether harmful outputs were, in effect, monetized before they were curtailed.
The risks extend beyond consumer-facing applications. Specialized AI systems being developed for sectors such as education and government are equally susceptible to bias and the influence of profit motives: the pursuit of profit tends to introduce bias into any human system, and AI is unlikely to be an exception.
These developments come as international efforts to establish AI safety regulations face resistance. The International AI Safety Report 2026, endorsed by 60 countries, offered a framework for regulation, addressing risks ranging from automation errors to misinformation. Both the United States and the United Kingdom, however, declined to sign the report, signaling a potential preference for shielding industry interests over implementing binding safeguards.
In May 2025, MIT professor Max Tegmark called on AI firms to calculate the existential threat posed by superintelligence, likening the exercise to the safety calculations undertaken before the first nuclear test. Tegmark himself estimated a 90% probability that a highly advanced AI would pose an existential threat. The Future of Life Institute’s Winter 2025 AI Safety Index found that no leading AI company has adequate safeguards in place to prevent catastrophic misuse or loss of control. As of February 2026, neither the US nor the UK government has issued a formal response to the report.