It’s no secret that Facebook and other social media platforms are filtering what their users can see.
There are two reasons for this. The first is that they are working with an enormous amount of data, so narrowing the feed to certain sources makes it feel relevant to the user. For example, if you are connected to a given person in any way whatsoever, their posts appear in your feed. (Try it out yourself: “friend” someone on Facebook and notice how most of their recent posts appear on your wall.)
The second reason is all about money: the Facebook users who are willing to pay for exposure are getting their messages seen more frequently. Facebook is not running a charity: the users of social media pay either with their money or their data. In both cases, money is the driving force.
The algorithms that run in these systems try hard to recognize the patterns in your online behavior. It’s the same with Netflix or Amazon: people who are like you liked this thing, and they ordered that other product. Offering you examples of what these companies, or their advertisers, sell is efficient, and it usually hits the nail on the head: even if you don’t find something that you thoroughly enjoy, you are seldom disappointed.
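The “people like you liked this” idea can be sketched in a few lines. This is a hedged, minimal illustration, not any platform’s actual algorithm: it compares users by the overlap (Jaccard similarity) of their liked-item sets, then ranks items favored by the most similar users. All user and item names here are made up.

```python
# Minimal user-based collaborative filtering sketch.
# Users and items are hypothetical; real systems use far richer signals.

def jaccard(a: set, b: set) -> float:
    """Overlap between two users' liked-item sets, from 0.0 to 1.0."""
    if not a and not b:
        return 0.0
    return len(a & b) / len(a | b)

def recommend(target: str, likes: dict[str, set]) -> list[str]:
    """Rank items liked by users most similar to `target`."""
    target_likes = likes[target]
    scores: dict[str, float] = {}
    for user, their_likes in likes.items():
        if user == target:
            continue
        sim = jaccard(target_likes, their_likes)
        # Credit items the similar user liked that the target hasn't seen,
        # weighted by how similar that user is.
        for item in their_likes - target_likes:
            scores[item] = scores.get(item, 0.0) + sim
    return sorted(scores, key=scores.get, reverse=True)

likes = {
    "alice": {"news_a", "news_b", "gadget_x"},
    "bob":   {"news_a", "news_b", "news_c"},
    "carol": {"gardening", "news_c"},
}
print(recommend("alice", likes))  # "news_c" ranks first: Bob is most like Alice
```

Note that nothing in this scoring asks whether an item is reliable or balanced, only whether similar users liked it; that property matters for what follows.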
However, the filtering does not necessarily restrict itself to entertainment and quirky anecdotes submitted by your Facebook friends. In 2016 a Finnish reporter working for Yleisradio (the national news network) did a story on how easily the feed can be manipulated simply by liking certain things. (The original story is in Finnish.)
Here’s how the reporter did it: he created a blank Facebook profile with no friends. He set up the profile by liking a number of different news sites, and then started clicking on the links that the feed began to show. Every time a link suggested that he go further, he followed the system’s advice. It didn’t take long before the feed’s recommendation algorithms provided him with a link to a controversial “news” source, which he then liked as per the system’s suggestion.
Two days later, after clicking like on all of Facebook’s suggestions, he noticed that his previously blank feed was now a black cloud of hate in which indescribably offensive language was used. The issue was not so much that hateful points of view appeared in the reporter’s feed, but that they constituted almost all of its content; alternative points of view were hard to find. If his Facebook news feed were all that the reporter paid attention to, he would have received only information confirming his pre-existing beliefs, with no contradictory information in sight.
If we analyze the situation a bit, it’s no wonder that this happened: collaborative filtering works exactly this pragmatically, without taking into account the voice behind a feed item, its reliability, whether it represents a non-mainstream point of view, or whether it dissents from what your feed already contains. If several users click like on similar things, things tend to escalate quickly, producing a feed that differs dramatically from those of other users.
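The escalation dynamic can be sketched as a simple feedback loop, under the assumption (mine, not the source’s) that each accepted suggestion multiplies a topic’s weight in the feed. The topic names and numbers below are invented purely for illustration.

```python
# Hedged sketch of a like-driven feedback loop: each click boosts a topic,
# making it more likely to be shown and clicked again.

def next_suggestion(weights: dict[str, float]) -> str:
    """Pick the topic with the highest current weight (deterministic for clarity)."""
    return max(weights, key=weights.get)

def simulate_clicks(weights: dict[str, float], clicks: int,
                    boost: float = 2.0) -> dict[str, float]:
    """Each accepted suggestion multiplies that topic's weight by `boost`."""
    for _ in range(clicks):
        topic = next_suggestion(weights)
        weights[topic] *= boost
    return weights

# One topic starts only marginally ahead of the others.
feed = {"topic_a": 1.1, "topic_b": 1.0, "topic_c": 1.05}
result = simulate_clicks(feed, clicks=5)
share = result["topic_a"] / sum(result.values())
print(f"topic_a now holds {share:.0%} of the feed weight")
```

After just five clicks the marginal initial advantage has compounded into near-total dominance, which is the “escalation” the reporter observed in miniature.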
It is unknown what Facebook (or other providers) are doing to resolve the issue. The topic was also discussed by The Guardian in its May 22 edition of this year.
And therein lies the problem: for a significant number of users who look at their social media feed and believe it mirrors reality, chances are that the reflection is distorted, confirms their existing biases, and is unlikely to provide alternative or corrective points of view. They are “in the bubble”, and may not even be aware of it.
If we value an informed citizenry, one aware at least of the existence of opposing points of view (and in some cases of more factual information), then this problem should be a concern to us all. What can, and will, be done about it remains an open issue, but we need to be aware that we are not yet at a point where we can be sure that all of us are getting a true picture of the state of the world around us.