Teenagers are not safe on Instagram as 30 safety tools fail, study finds
A study has claimed that the safety tools Instagram, owned by Facebook's parent company Meta, released to protect teenagers are largely ineffective.
The study, jointly conducted by Cybersecurity for Democracy and several child protection organizations, found that content encouraging suicide or self-harm still easily reaches teenagers' accounts.
Instagram launched 'Teen Accounts' in 2024, aiming to make teenagers safer through parental supervision. Yet of the 47 safety tools the researchers tested, 30 were found to be ineffective or no longer available.
The study also accused Instagram's algorithm of encouraging children under the age of 13 to engage in risky and sexualized behavior in pursuit of 'likes' and 'views'.
The Molly Rose Foundation, a child protection charity, has strongly criticized Meta, calling the failures the result of a corporate culture that prioritizes engagement and profit over user safety. The foundation was established in memory of British teenager Molly Russell, who took her own life in 2017 after exposure to harmful online content.
Meta, however, has rejected the study, saying its findings misrepresent the company's tools. According to international media, a company spokesperson claimed the features show teenagers less sensitive content and give parents more effective controls.