It is no secret that Facebook, Google, Twitter and others are constantly under fire for ‘fake news’. Recent high-profile examples include the Russian election campaigns, which saw fake news articles targeted at volatile swing voters. The most recent criticism is over the fake news stories that quickly popped up in the wake of the Las Vegas shooting. Some of the stories were designed to push a specific agenda or worldview; some were just designed to create trouble. Either way, these stories appear on platforms such as Facebook and Google, and unsuspecting readers are fooled.

Alex Stamos, Facebook’s Chief Security Officer, responded to the criticism with a recent Twitter rant. In it, he explains the difficulty of trying to use machine learning to essentially work out which worldview is more acceptable.

Now don’t get me wrong, fake news is a HUGE issue and needs to be dealt with. But like anything, the best way to deal with it is to bring it into the light, expose it, and then teach people how to deal with it. Relying on machine learning to filter what news we should and should not see is akin to asking a robot to tell us what we should think. The thought of it is awesome (I mean, come on, it’s robots), but in practice, it would mean we all think the same, which is very dangerous.
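To make that concrete, here is a toy sketch of what ML-based news filtering boils down to. Everything in it is hypothetical (the headlines, the labels, and the tiny scikit-learn pipeline are all made up for illustration), but it shows the core mechanic Alex is pointing at: a classifier never learns what is true, only what its human labellers decided to call ‘fake’.

```python
# Toy sketch: a "fake news" classifier only reproduces its training labels.
# Hypothetical data and labels throughout -- nothing here is a real system.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Headlines someone has already judged. The model never sees "truth",
# only these human decisions -- i.e. the labeller's worldview, encoded.
headlines = [
    "Official report confirms election results",
    "Scientists publish peer-reviewed climate study",
    "Shocking secret THEY don't want you to know",
    "Miracle cure banned by the government",
]
labels = ["real", "real", "fake", "fake"]

# TF-IDF turns text into word-weight vectors; logistic regression then
# separates them based purely on which words the labelled examples used.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(headlines, labels)

# New headlines are scored by word similarity to the labelled ones,
# not by any notion of factual accuracy.
print(model.predict(["Secret miracle cure the government banned"]))    # expected: ['fake']
print(model.predict(["Peer-reviewed report confirms climate study"]))  # expected: ['real']
```

Scale that up to billions of posts and the question stops being technical and becomes exactly the one Alex raises: whose labels, and therefore whose worldview, does the filter encode?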


It is hard to disagree with Alex: we need a wider discussion to take place. We need the media to highlight how to deal with fake news instead of sharpening their pitchforks, ready to march down to Facebook and co.

Essentially, you have to be careful what you wish for.