Earlier this week, leading advertisers in the United States did the unthinkable in the face of mounting criticism of Facebook’s role in spreading fake news and hate speech: they pulled their advertising from the platform. The list included Unilever, Levi Strauss, Honda, Coca-Cola and many more.
The advertisers have said they will stay off the platform until the end of the year. Much of this seems to be fuelled by Facebook’s refusal to be, in the words of Mark Zuckerberg, an “arbiter of truth”. Facebook shares fell by 8.3% on the news, wiping almost $56 billion off the company’s value. Facebook has since announced that it will label, but not ban, posts by politicians that break its rules.
For almost a decade, social media giants have been under attack for allowing their platforms to be used for hate and misinformation. Last year, Facebook and YouTube acted against several members of the American far-Right, with YouTube expunging hundreds of thousands of videos that promoted white supremacist ideologies, and Facebook barring the accounts of many. However, this barely scratched the surface. Every day, people are subjected to a barrage of hate, threats, and fake news items that can seriously endanger lives.
In India, the issues of fake news and hate speech are often interlinked. Most often, vested interests put out fake news in the hope of fuelling hate. There are dedicated websites that peddle conspiracy theories, demonise minorities, and publish innuendo and gossip about political rivals. While these sites look legitimate, their agenda most likely is not.
The question is how far a social media giant should go in monitoring content, and where its responsibilities end and those of the user begin. Whom do you monitor? Some of the answers are easy – you cannot give a terrorist organisation like ISIS a platform, which, incidentally, the social media giants did. You cannot give paedophiles a platform to congregate and put children in danger. These are extremes on which people across the political divide would agree. But what else? Can a platform stick a “check the facts” note on an advertisement that claims to make you healthier? Or can it label a politician’s assertions on the GDP as fake?
In late May, US President Donald Trump took to Twitter, as he often does, to rail against something. This time it was about how mail-in votes could be compromised. “There is NO WAY (ZERO!) that Mail-In Ballots will be anything less than substantially fraudulent. Mailboxes will be robbed, ballots will be forged & even illegally printed out & fraudulently signed (sic).”
He went on to vent at the Governor of California, who had proposed sending out ballots so that citizens could vote by post. This time, however, Twitter did something unprecedented: it slapped a fact-check label on the tweet, flagging it as potentially misleading.
Trump escalated further, calling it an attack on freedoms, and signed an executive order that would strip internet intermediaries such as Facebook and Twitter of some of their legal protections over the content they carry. This was despite Facebook declining to flag the same post as needing verification. Zuckerberg wrote a long post on why Facebook should not get involved in policing content.
Social media giants have gone beyond being mere technology products that help us connect and stay in touch. They have become organic media hubs that allow all of us to share our views, opinions, products, and services. People are drawn to the platforms not because of superlative technology, but because of the people on them with whom they want to interact. The “network effect”, as it is called, is not about technology; it is about people and the content they post. Each of us on a social media platform is a broadcaster in our own right, putting out content and views. And just as broadcasters are governed by a set of rules on what is acceptable and what is not, so too must those who broadcast using social media be. It is up to the platforms to ensure that their rules on hate speech and misinformation are followed.
In a recent paper published in PNAS, “A digital media literacy intervention increases discernment between mainstream and false news in the United States and India”, the authors describe how small steps by social media platforms to educate their audiences to spot fakes have gone a long way in helping people recognise misinformation and hate. They claim that Facebook’s campaign on recognising fakes improved users’ ability to discern between genuine sites and those that carry fake news.
In a world overwhelmed with content, it is becoming ever more difficult to tell fake news from the truth. As the fake has begun jostling with the truth, and in many cases edging it out, it becomes imperative that social media platforms do not allow their networks to be used to spread misinformation and hate. They need to formulate clear-cut guidelines on what is hate and what is fake. And they need to apply these rules in a clear and transparent manner.
The writer works at the intersection of digital content, technology, and audiences. She is a writer, columnist, visiting faculty, and filmmaker. Views are personal