An AI safety researcher has resigned from US firm Anthropic, warning that the “world is in peril”. In a resignation letter shared on X, Mrinank Sharma said he was leaving due to concerns about artificial intelligence, bioweapons and the wider state of the world.

Sharma said he plans to step away from the tech industry to focus on writing and studying poetry, and will move back to the UK to “become invisible” for a while. His departure comes in the same week that an OpenAI researcher also announced her resignation, citing concerns about that firm’s decision to introduce advertising into its chatbot.

Anthropic, best known for its Claude chatbot, recently released a series of adverts aimed at OpenAI, criticising the rival firm’s move to show ads to some users. Founded in 2021 by former OpenAI employees, Anthropic has positioned itself as a more safety-focused AI company.

Sharma led a team researching AI safeguards at the firm. In his resignation letter, he highlighted his work investigating why generative AI systems flatter users, tackling AI-assisted bioterrorism risks, and exploring how AI assistants could make people “less human”. Despite enjoying his years at the company, he said it had become clear that it was time to move on.

“The world is in peril,” Sharma wrote, adding that the threats extend beyond AI and bioweapons to a series of interconnected crises unfolding now. He said he had repeatedly seen how difficult it is for organisations to allow their values to guide their actions, including at Anthropic, which he said constantly faces pressure to set aside what matters most.

He later added that he would return to the UK to study poetry and writing, saying: “I’ll be moving back and letting myself become invisible for a period of time.”

Anthropic describes itself as a public benefit corporation focused on securing the benefits of AI while reducing its risks. The company has concentrated on the dangers posed by advanced AI systems, including misalignment with human values and misuse in conflict. It has also published safety reports, including one acknowledging that its technology had been used by hackers in sophisticated cyber attacks.

However, the company has faced criticism. In 2025, Anthropic agreed to pay $1.5bn to settle a class-action lawsuit from authors who accused it of using their work without permission to train its AI models.

Like OpenAI, Anthropic continues to expand its commercial offerings, including its Claude chatbot. OpenAI chief executive Sam Altman had previously said he disliked advertising and would use it only as a last resort, and he later responded angrily to Anthropic’s adverts.

Writing in the New York Times, former OpenAI researcher Zoe Hitzig said she had serious concerns about OpenAI’s strategy. She warned that advertising built on deeply personal conversations could create new ways to manipulate users, and said she feared an erosion of the company’s principles as it seeks to maximise engagement.


