I am a self-confessed information junkie.
I guess it was inevitable, growing up in the early information era and now working in information technology. Nowadays we have access to a massive amount of information, and in the search to help us work through it we have enlisted help. We all have! We probably keep coming back to the same Internet search engine, or we read our favourite newspapers and sources. We trust our news sources either based on past experience or simply because they 'think' the same way we do, but ultimately there is a journalist who has done the research and is publishing the results. So far so good.
With the Internet, news sources have sprawled in ever greater numbers. I can now read articles from a local news agency in a little Australian outback town from my office in the United Kingdom, 7,000 miles away, to give one example. But with so much information and news out there, how do we deal with it? How do we filter all that news?
We know from the Cambridge Analytica saga that Facebook serves you information and news that is 'carefully' filtered based on your interaction with Facebook and the profile it has built of you. It has now even vetted the information sources. Does that not put you, the reader, in a pigeonhole and feed you similar information over and over?
The same goes for Google, where I get served news articles (in Chrome on my mobile) that I might be interested in. The striking thing is that I get served very similar (often the same) articles as my wife. This may all be logical, with Google's AI linking previous behaviours and so on, but as an information junkie I like to have a broader view of things too. How did Google's AI decide to share one piece of information with me but not another? Does it really know all my preferences, or am I too complicated a case?
It is important to be open and honest about the information provided and to explain how and why it appeared in my news feed. And this is where the problems start. The algorithms are a closely guarded secret. Neither Google nor Facebook is open about its algorithms, for fear that doing so would give away its business model and erode its competitive edge. How do we test this? How do we trust it? How aggressively does the algorithm filter out the noise? Is there no room for my own interpretation?
When gathering information, we need to be open about how it reached us; that is how trust is built up. The same applies in our work environment: when we gather project information from all corners of a project, we need to be open about how we present it to our audience, about the steps of analysis, and about how machine learning channels the information. It all comes down to knowing how information is gathered, analysed and presented. Only then can we act on it with the fullest conviction.
I am a self-confessed information junkie.