Instagram will alert parents about their teens’ suicide-related searches
Instagram said it will notify parents if their teen repeatedly searches for suicide or suicide-related terms within a short period, as pressure grows on social media platforms to comply with Australia’s ban on social media use for people under 16.
Meta Platforms Inc.-owned Instagram said Thursday it will start alerting parents who have signed up for its optional supervision setting if their children try to access content suggesting suicide or self-harm. The alerts will begin next week for parents signed up in Canada, the United States, Britain and Australia.
“These alerts build on our existing work to help protect teens from potentially harmful content on Instagram,” the platform said in a statement. “We have strict policies against content that promotes or glorifies suicide or self-harm.”
Instagram said its current policy is to block such searches and redirect people to support resources.
Governments are trying to protect children from harm online, especially after concerns over the AI chatbot Grok, which generated non-consensual sexual images.
Britain said in January it was considering restrictions to protect children online, following Australia’s move in December. Spain, Greece and Slovenia have said in recent weeks they are also considering limiting access.
Instagram is rolling out new teen accounts with advanced parental controls and privacy features, but some parents say Meta still needs to do more to make the platform safe for young users.
In the UK, measures designed to prevent children’s access to pornography sites have had an impact on the privacy of adults, and have led to tensions with the US over limits on free speech and regulatory access.
Instagram’s “teen accounts” for users under 16 require parental permission to change settings, and parents can add an additional layer of monitoring with their teen’s consent. The accounts also prevent teen users from viewing “sensitive content”, including sexually suggestive material or depictions of violence.