Social media companies are playing with user safety to attract attention and make money
More than a dozen whistleblowers and former employees have told the BBC that the world's best-known social media companies deliberately promoted harmful content to capture user attention and generate revenue. They allege that Meta (Facebook/Instagram) and TikTok prioritized business over safety.
According to a Meta engineer, management instructed staff to place 'borderline' harmful content in the feed in order to compete with TikTok's rapid growth and shore up the company's falling share price. This content includes misogyny, conspiracy theories and hate speech, which, while not technically illegal, are considered extremely harmful to users.
Matt Mottil, a senior researcher at Meta, said that Instagram's 'Reels' feature was launched in 2020 without adequate safety measures. According to an internal investigation, the rate of abuse, harassment and hate speech on Reels is much higher than in other Instagram feeds.
A TikTok employee showed the BBC evidence that the company's internal system prioritizes general complaints about politicians over serious complaints about child sexual abuse and blackmail. The company allegedly maintains this policy to avoid government sanctions and to strengthen political ties.
A member of the child protection complaints team advised parents to "keep your children away from TikTok as much as possible, delete it."
In an interview with the BBC, Callum, 19, said he was radicalised by social media algorithms when he was 14. He said the violent and hateful content in his feed gradually led him towards racial and gender-based hatred. UK counter-terrorism experts have also expressed concern that racist and violent posts have become commonplace on social media in recent months.
Meta has denied these serious allegations, saying the claim that it intentionally spreads harmful content for profit is false. Meta has said that it has recently introduced new features such as 'teen accounts' to protect children.
Similarly, TikTok has called the whistleblowers' claims fabricated and says it has invested heavily in technology to block harmful content. TikTok has stated that it has more than 50 safety features that are automatically activated to protect children.
However, according to former employees, the algorithm is a 'black box' over which even the company's engineers lack full control, and one that serves engagement and revenue more than users' mental health.