Parents who use Instagram’s supervision tools will soon be alerted if their teenage children repeatedly search for suicide or self-harm related terms on the platform, as part of a new safety measure announced by Meta.

It marks the first time Instagram will proactively notify parents about potentially harmful search behaviour by their children, rather than simply blocking searches or redirecting users to external support services.

From next week, the alerts will be rolled out to parents and teens enrolled in Instagram’s Teen Accounts experience in the UK, the US, Australia and Canada, with other countries set to follow later.

However, the announcement has drawn strong criticism from the Molly Rose Foundation, a suicide prevention charity, which warned the new system could have unintended and harmful consequences.

“This clumsy announcement is fraught with risk and we are concerned that forced disclosures could do more harm than good,” said the foundation’s chief executive, Andy Burrows.

The charity was established by the family of Molly Russell, who died by suicide in 2017 at the age of 14 after viewing self-harm and suicide-related content on platforms including Instagram.

Burrows said that while parents naturally want to know if their child is struggling, the alerts risk creating panic without offering sufficient guidance.

“Every parent would want to know if their child is struggling,” he said, “but these flimsy notifications will leave parents panicked and ill-prepared to have the sensitive and difficult conversations that will follow.”

Meta said that any alert sent to parents would be accompanied by expert advice and resources designed to help families navigate those conversations.

Sameer Hinduja, co-director of the Cyberbullying Research Center, said receiving such an alert would be deeply unsettling for any parent. But he stressed that the effectiveness of the system depends on what happens after the notification arrives.

“What matters is not just the alert itself but the quality and usefulness of the resources parents immediately receive to guide them through what to do next,” he said. “You can’t drop a notification on a parent and leave them on their own, and it seems like Meta understands that.”

Burrows also pointed to previous research published by the Molly Rose Foundation, which found that Instagram continues to “actively” recommend harmful content relating to depression, suicide and self-harm to vulnerable young users.

“The onus should be on addressing these risks rather than making yet another cynically timed announcement that passes the buck to parents,” he said.

Meta disputed those findings when they were published last September, saying the research “misrepresents our efforts to empower parents and protect teens”.

Growing scrutiny of teen safety

According to Meta, the new Teen Account alerts are designed to flag sudden changes in a young person’s behaviour and search activity on Instagram. The company said in a blog post that the initiative builds on existing safety features, which already include hiding suicide and self-harm related posts and blocking searches for dangerous material.

Parents may receive alerts by email, text message, WhatsApp or directly within Instagram, depending on what contact details Meta has on file.

The company acknowledged that its analysis of search patterns could sometimes trigger alerts when there is no immediate cause for concern, saying the system is designed to “err on the side of caution”.

Meta also said it is exploring the introduction of similar alerts in the coming months if teens discuss self-harm or suicide with AI chatbots on Instagram, noting that young people are increasingly turning to artificial intelligence tools for emotional support.

The announcement comes as social media companies face growing pressure from governments and regulators to better protect children online. Australia has already banned social media use for under-16s, while Spain, France and the UK are considering comparable measures.

Meanwhile, regulators are continuing to scrutinise the practices of major technology firms in relation to young users. Meta chief executive Mark Zuckerberg and Instagram head Adam Mosseri recently appeared in a US court to defend the company against allegations that it deliberately targeted younger audiences.


Hi, I'm Sidney Schevchenko and I'm a business writer with a knack for finding compelling stories in the world of commerce. Whether it's the latest merger or a small business success story, I have a keen eye for detail and a passion for telling stories that matter.

© 2026 All Rights Reserved by Biznob.