Is It Time To Burst The Sentiment Bubble?


We are living in a data-driven age. Big Data promises insights that deliver business advantage, medical cures, improvements in public safety; the list goes on. Yet the US presidential election and the Brexit vote have raised questions about whether Big Data's predictive capabilities are really living up to the hype. I suspect the truth is that polling companies are relying on old-fashioned surveys rather than, for instance, mass web scrapes with Big Data analytics applied.

These two elections have also raised questions over the impact of "fake news" in our social media feeds. In recent days Mark Zuckerberg has had to enter this discussion and consider what his company will do about the fake news problem. However, I contend that fake news is only a small contributor to the real issue: "the sentiment bubble".

Big Data-driven sentiment analysis may be skewing how social media reinforces sentiment and hardens viewpoints. This flies in the face of our perception of social media as a place of diverse opinion and open debate.

Many people now consume news through social media. This in itself is worrying, as the quality and validity of "news" on social media is questionable at best: differentiating between legitimate, substantiated journalism and fake news stories is difficult. In a social media feed, the rant of an unhinged blogger can appear as legitimate as an article from The Times. So whilst the issue of fake news on Facebook has hit the headlines, we need to ask whether that is the real problem here.

Big Data-driven sentiment analysis is perhaps most familiar from sites like Amazon, where you are told that people who bought "x" also purchased "y" and "z". This is the starting point for sentiment analysis, but it goes much further. On Facebook, for example, algorithms analyse the content that you consume, and those same algorithms essentially become your own personal censor, deciding what content will be displayed in your timeline. If you were anti-Trump in the US election and picked up on the more negative news and content surrounding Trump, your feed became increasingly peppered with anti-Trump content. Sentiment analysis wraps you in a bubble of anti-Trump content. More extreme views get normalised, positions harden, your own view is not challenged, and the feeling that everyone shares your view increases.
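To make the mechanism concrete, here is a deliberately minimal sketch of the kind of ranking step described above. The function name and the single-number sentiment score are my own illustrative assumptions, not Facebook's actual algorithm: each post carries a sentiment score in [-1, 1], and candidate posts closest to the user's average engaged sentiment are ranked first.

```python
def rank_feed(candidate_posts, engagement_history):
    """Toy sentiment-based feed ranking (hypothetical, simplified).

    Posts whose sentiment score sits closest to the user's average
    engaged sentiment rank highest -- the step that builds the bubble.
    """
    if not engagement_history:
        # No history yet: nothing to personalise on.
        return list(candidate_posts)
    avg = sum(engagement_history) / len(engagement_history)
    # Smaller distance from the user's average sentiment ranks higher.
    return sorted(candidate_posts, key=lambda s: abs(s - avg))

# A user who has engaged mostly with strongly negative (anti) content:
history = [-0.8, -0.6, -0.9]
feed = rank_feed([0.7, -0.7, 0.1, -0.9], history)
# The most negative candidate posts surface at the top of the feed.
```

Even this crude rule reproduces the behaviour described above: the contrary-sentiment post (0.7) is pushed to the bottom, so the user's view is never challenged.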

Consider the same effect on someone who is starting to feel marginalised in society. If that person starts to read more extreme content, it is not a stretch to argue that sentiment analysis will populate their social feed with a greater volume of such articles, which in turn are likely to become more extreme in their content.
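The feedback loop in that paragraph can be sketched as a toy simulation. The amplification factor and "pull" rate below are invented assumptions for illustration only; the point is the direction of drift, not the numbers.

```python
def simulate_drift(user_sentiment, steps, pull=0.3):
    """Toy feedback loop: each round the feed surfaces content slightly
    more extreme than the user's current position (amplified by 1.2x,
    clamped to the [-1, 1] scale), and the user's position shifts part
    of the way toward what they were shown."""
    trajectory = [user_sentiment]
    for _ in range(steps):
        shown = user_sentiment * 1.2            # feed amplifies
        shown = max(-1.0, min(1.0, shown))      # clamp to the scale
        user_sentiment += pull * (shown - user_sentiment)
        trajectory.append(user_sentiment)
    return trajectory

path = simulate_drift(-0.3, steps=10)
# Starting mildly negative, the position drifts steadily toward the
# -1.0 extreme without any single step looking dramatic.
```

The uncomfortable property of the loop is that no individual step is alarming; the hardening only shows up in the trajectory as a whole.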

Sentiment analysis is a great technology, but it raises the question of whether its power should be left in the hands of those who create and develop it. In the same way we accept censorship of television and films, perhaps some kind of regulation is needed for algorithms that control what we see in our social feeds.

This is a tough ask, not just because of the ethical questions around censorship, but also in terms of how it would be done. Perhaps it requires ethics committees to help data scientists build guidelines into sentiment algorithms, ensuring that on important issues social feeds remain rounded and more impartial.

I can sense that "freedom of speech" and "freedom of the internet" proponents may instinctively recoil against this suggestion. But here is the really strange thing: it is sentiment analysis itself that is becoming people's own personalised censor. Adding regulation to sentiment analysis algorithms could actually be an exercise in censorship reduction, allowing people to see a fuller, less controlled picture of the news they follow.

This article was originally published on and can be viewed in full here