In an effort to protect teenagers and thwart potential scammers, Meta, the parent company of Instagram, announced on Thursday that it will test features that blur images containing nudity sent in direct messages. The move comes as the tech giant faces increasing scrutiny in the United States and Europe over allegations that its apps are addictive and contribute to mental health problems among young people.
The protective feature for Instagram’s direct messages will use on-device machine learning to determine whether an image sent through the service contains nudity. It will be turned on by default for users under 18, and adults will be encouraged to enable it.
“Since the images are analysed on the device itself, nudity protection will also function in end-to-end encrypted chats, where Meta won’t have access to these images – unless someone opts to report them to us,” the company explained.
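The description amounts to a simple on-device pipeline: each incoming image is classified locally on the phone, and blurred before it is rendered if the model flags it, which is why the check can work even in end-to-end encrypted chats. The Python sketch below is only an illustration of that flow under stated assumptions; the classifier stub, threshold and function names are hypothetical and are not Meta’s actual implementation.

```python
from dataclasses import dataclass
from PIL import Image, ImageFilter

# Assumed confidence cutoff for illustration only.
NUDITY_THRESHOLD = 0.8


@dataclass
class ScreenedImage:
    image: Image.Image
    flagged: bool


def nudity_score(image: Image.Image) -> float:
    """Placeholder for an on-device ML classifier bundled with the app.
    A real implementation would run a small vision model locally and
    return the probability that the image contains nudity."""
    return 0.0  # stub value so the sketch runs end to end


def screen_incoming_image(image: Image.Image,
                          recipient_is_minor: bool,
                          protection_enabled: bool) -> ScreenedImage:
    """Check the image locally, before it is shown in the chat.
    Because the analysis never leaves the device, it can also apply
    to end-to-end encrypted messages."""
    # Protection is on by default for under-18 accounts; adults opt in.
    if not (recipient_is_minor or protection_enabled):
        return ScreenedImage(image, flagged=False)

    if nudity_score(image) >= NUDITY_THRESHOLD:
        # Show a blurred preview; the recipient can choose to reveal it.
        blurred = image.filter(ImageFilter.GaussianBlur(radius=30))
        return ScreenedImage(blurred, flagged=True)

    return ScreenedImage(image, flagged=False)
```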
Unlike Meta’s Messenger and WhatsApp apps, Instagram’s direct messages are not currently encrypted. However, the company has stated its intention to introduce encryption for the service.
In addition, Meta is developing technology to identify accounts that may be involved in sextortion scams, and is testing new pop-up warnings for users who may have interacted with such accounts.
In January, the social media behemoth announced that it would hide more content from teenagers on Facebook and Instagram, making it harder for them to come across sensitive topics such as suicide, self-harm and eating disorders.
This announcement follows a lawsuit filed in October by attorneys general from 33 US states, including California and New York, accusing the company of repeatedly misleading the public about the dangers of its platforms.
Meanwhile, in Europe, the European Commission is seeking information on how Meta safeguards children from illegal and harmful content.