Why do we see more ‘content’ that interests us on social media?
Overall, social media is a part of our lives, but it is wise not to let its algorithms control us.
Social media algorithms select content based on users’ interests and behavior; the resulting effect, called a filter bubble, limits the scope of information.
The echo chamber effect causes people to connect only with like-minded groups, which increases political polarization and reduces social tolerance.
Some apps are alleged to show ads after listening through the microphone; large companies deny this, but experts say security vulnerabilities make it possible.
Let’s say you are a foodie: food videos start dominating your Facebook or TikTok feed. If fashion is your interest, clothing content fills the platforms you use.
If you are passionate about sports, sports content takes over; if you enjoy traveling, travel content does. You might start to think the whole world revolves around your interests.
If someone believes in ghosts, they may come to feel the world is full of ghosts, because that is the only content appearing in their feeds on Facebook, TikTok, and elsewhere. Why is this so? The average user is rarely aware of it.
When scrolling through social media platforms such as Facebook, Instagram, TikTok, YouTube, and X, you often see only the content that you like, search for, or have previously viewed. This is not a coincidence, but the result of the platforms’ algorithm design and commercial strategy. It narrows the range of information you see and keeps diverse ideas out. These effects are known as the ‘filter bubble’ and the ‘echo chamber’.
First of all, let’s talk about the filter bubble. The term was introduced in 2011 by internet activist Eli Pariser in his book ‘The Filter Bubble: What the Internet Is Hiding from You’. According to Pariser, internet companies use personalization algorithms to select content based on our previous behavior: likes, shares, search history, viewing time, clicks, and even location.
For example, if someone views a lot of health-related content, the algorithm will increasingly show only that kind of content. This traps us in a ‘personal information bubble’ that different perspectives rarely enter. The algorithm’s main purpose is to keep users on the platform longer, because more time means more ad exposure and more revenue for the company.
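To make this mechanism concrete, here is a minimal Python sketch of interest-based ranking. The topic weights, the post format, and the scoring formula are all assumptions for illustration; this is not any platform’s actual ranking code.

```python
from collections import Counter

# Hypothetical behavioral profile: topic weights built up from likes,
# searches, watch time, and clicks. A Counter returns 0 for unseen topics.
user_profile = Counter({"health": 0.7, "fitness": 0.2, "travel": 0.1})

posts = [
    {"id": 1, "topics": {"health": 1.0}},
    {"id": 2, "topics": {"politics": 1.0}},
    {"id": 3, "topics": {"health": 0.6, "fitness": 0.4}},
    {"id": 4, "topics": {"travel": 0.3, "politics": 0.7}},
]

def relevance(post):
    # Dot product between the user's topic weights and the post's topics.
    return sum(user_profile[t] * w for t, w in post["topics"].items())

# Showing only the best matches is what narrows the feed over time.
feed = sorted(posts, key=relevance, reverse=True)[:2]
print([p["id"] for p in feed])  # -> [1, 3]: health-heavy posts dominate
```

The politics post never surfaces at all, even though nothing blocks it explicitly; it simply never scores high enough, which is the bubble in miniature.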
According to one study, Facebook’s algorithm was already strengthening the filter bubble effect in 2015, and was found to narrow users’ content consumption by up to 50 percent. Other studies dispute the effect: a 2021 study, for example, concluded that filter bubbles have only a minimal effect, though it did not deny that they generally contribute to social polarization.
Second, the echo chamber. This is a bit different from the filter bubble, because here we connect only with like-minded people. The people we follow, the groups we join, and the people whose posts we comment on are often like us. The constant repetition reinforces our existing beliefs, a tendency known as confirmation bias.
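How strongly this kind of selective contact can split a population is easy to see in a toy simulation. The sketch below uses a simple bounded-confidence model, in the spirit of the Deffuant model; every number in it is invented for illustration.

```python
import random

random.seed(0)
opinions = [random.random() for _ in range(100)]  # 100 people, opinions in [0, 1]
TOLERANCE = 0.2  # only listen to people whose opinion is within this distance

for _ in range(20000):
    i, j = random.sample(range(len(opinions)), 2)
    if abs(opinions[i] - opinions[j]) < TOLERANCE:
        # Like-minded pairs meet in the middle; everyone else is ignored.
        midpoint = (opinions[i] + opinions[j]) / 2
        opinions[i] = opinions[j] = midpoint

# The population collapses into a few tight, mutually unreachable clusters.
print(sorted({round(o, 1) for o in opinions}))
```

People end up in a handful of internally unanimous groups that no longer hear each other, which is exactly the echo-chamber pattern the studies describe.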
According to a 2021 study published in the journal PNAS, echo chambers on social media platforms like Facebook, Twitter, and Instagram shape how information spreads, which increases political polarization. Similarly, a 2024 study by the Rensselaer Institute found that users prioritize popular opinions and shy away from opposing views, which reinforces the echo chamber on social media.
According to studies, this effect is more visible on platforms like Facebook and X, because their algorithms prioritize content that drives engagement: likes, comments, and shares. Emotional or controversial content gets more engagement, so such posts are shown more often. A 2025 study by Michigan State University showed that echo chambers reduce social tolerance and network diversity among users in rural areas.
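A rough sketch of what such engagement-weighted ranking could look like follows; the field names, the numbers, and the boost formula are assumptions, not any platform’s real formula.

```python
import math

posts = [
    {"id": "calm-explainer", "relevance": 0.9, "engagement": 40},
    {"id": "angry-hot-take", "relevance": 0.5, "engagement": 5000},
    {"id": "neutral-news", "relevance": 0.7, "engagement": 120},
]

def score(post, boost=0.15):
    # log1p damps huge engagement counts, but a controversial post can
    # still outrank calmer posts that are actually more relevant.
    return post["relevance"] + boost * math.log1p(post["engagement"])

for p in sorted(posts, key=score, reverse=True):
    print(p["id"], round(score(p), 2))
# angry-hot-take 1.78, calm-explainer 1.46, neutral-news 1.42
```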
Does the device track what you say?
You are chatting with your friends while your phone or laptop sits open nearby. A while later, when you open social media, you see an advertisement or content about the very thing you were discussing. Seeing this, you might think the device is listening to your voice. But is this really the case?
Big technology companies like Google, Facebook, and Apple deny such claims, repeatedly stating that they do not target ads by listening to ordinary conversations through the microphone. Still, we use software and apps developed by these companies every day, and experts say that some third-party apps can listen to your voice and show ads accordingly.
According to cyber security expert Vijay Limbu Senihang, head of Bhairav Technology, once you accept cookies on a website and give consent, the algorithm tracks your browser or device activity and shows relevant ads, which is a normal process. But mobile apps, especially on Android devices, are less safe: a vulnerability there can silently tap the microphone.
Although some platforms deny this, researchers have documented evidence of it. “Apps record voice, track mainly English words, and match them against an ad library; if they match, relevant ads are shown,” says Limbu.
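For illustration only, the hypothetical sketch below shows what such a keyword-to-ad-library match could look like. Every name and entry here is invented; this is not code from any real app, and the claim itself remains disputed.

```python
# Hypothetical ad keyword library; real systems, if they exist as
# alleged, would be far larger.
AD_LIBRARY = {
    "pizza": "Local pizza delivery ad",
    "sneaker": "Sports shoe sale ad",
    "flight": "Cheap airfare ad",
}

def match_ads(transcribed_words):
    # Compare (supposedly) transcribed speech against the ad keywords.
    words = {w.lower().strip(".,!?") for w in transcribed_words}
    return [ad for keyword, ad in AD_LIBRARY.items() if keyword in words]

# A hypothetical overheard snippet:
print(match_ads("let's order a pizza before the flight".split()))
# -> ['Local pizza delivery ad', 'Cheap airfare ad']
```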
For example, in 2024 a marketing company called Cox Media Group claimed it could show ads based on the user’s voice picked up by the device, using what it called ‘active listening’ technology, which takes data from the microphone and serves content in the feed accordingly. The company’s pitch deck claimed it would target ads by listening to conversations on smartphones, which raised privacy concerns.
According to Limbu, such content may also simply be the work of algorithms. Social media companies collect large amounts of tracking data, such as location, search history, browsing habits, age, and gender, to show us content of interest. These days, algorithms are accurate enough to make correct inferences about us from these signals, which can look like coincidence to us. To limit this, you can turn off microphone access or personalized ads.
What happens when you combine the two?
The combination of the two narrows the scope of our thinking. It can seem like everyone in the world thinks the same way as you. For example, studies have shown that filter bubbles and echo chambers helped spread fake news and misinformation in the 2016 US election.
According to a study by the National Bureau of Economic Research, fake news benefited Donald Trump in the election: 115 pro-Trump fake stories were found in circulation, and 27.4 percent of Americans visited fake news sites.
The same is seen in political debates in Nepal: supporters of one side never see or hear the views of the other. However, not all studies consider this completely harmful. According to some research, algorithms can also increase diversity, because different content is sometimes surfaced by accident. But most users, especially young people, fall into such bubbles. This also affects mental health: seeing only negative or one-sided content can increase anxiety and depression.
According to a 2024 report by Psychology Today, information bubbles increase polarization, which deepens social fragmentation and isolation. Another study has shown that the isolation and lack of diverse perspectives inside an echo chamber increase self-esteem problems and anxiety.
How to solve this problem?
The first step is simply to be aware of the problem. Get your information from a variety of sources, such as news websites, books, and offline discussions. Follow accounts with different views on social media. Cyber security expert Limbu also suggests using incognito mode in your browser or deleting your search history.
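In feed terms, ‘seeking diversity’ amounts to deliberately reserving space for unfamiliar topics. Here is a minimal sketch, assuming a made-up feed and an arbitrary 30 percent quota.

```python
# A personalized feed (made-up items) and topics the user rarely sees.
ranked_feed = ["health-1", "health-2", "health-3", "health-4", "health-5"]
unseen_topics = ["politics-1", "science-1", "arts-1"]

def diversified(feed, fresh, quota=0.3):
    # Reserve a fixed share of slots for content outside the bubble.
    slots = max(1, int(len(feed) * quota))
    return feed[: len(feed) - slots] + fresh[:slots]

print(diversified(ranked_feed, unseen_topics))
# -> ['health-1', 'health-2', 'health-3', 'health-4', 'politics-1']
```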
Overall, social media is a part of our lives, but it is wise not to let its algorithms control us. Only by seeking diversity, Limbu says, can we get true information and a balanced perspective.
According to Limbu, some platforms have even been fined for this in Europe. The problem is more common on Android, which puts less focus on privacy, while iPhones have stronger protections and features like ‘Ask App Not to Track’. Where privacy is weak, as on many Android phones, the algorithms have more data to work with, which can create echo chambers even around political opinions. The behavior also varies from country to country.
In European countries with strict privacy laws, ‘conversation listening’ does not happen, he says, but in countries like Nepal, where the rules are not as strict, it does. As evidence, he notes that apps ask for consent when accessed via a VPN with a UK IP address, but not when running from Nepal; the systems are designed around the country’s IP and its rules.
Limbu says he has personal experience with this. ‘After finding out that Instagram was listening to conversations, I removed social networks from my mobile and use them only from my laptop or desktop,’ he says.
