How Facebook and Twitter Plan to Combat Misinformation

With the 2020 U.S. elections nearing and political discourse online increasingly fragmented, attention turns to how Facebook and Twitter plan to combat misinformation.

With misinformation currently overwhelming social media platforms, both around the Black Lives Matter protests following George Floyd's murder and the coronavirus pandemic still gripping countries worldwide, major platforms are taking stock of the situation and developing ways to combat it. Two platforms of note, Twitter and Facebook, are taking different approaches to tackle these rampant misinformation campaigns. This has become even more critical with the 2020 U.S. elections fast approaching.


As early as February, Twitter, which plays a central role in the daily punditry among candidates and the electorate, began testing new ways to fight misinformation. The company is exploring numerous ways to address and contextualize tweets, with a Twitter spokesperson saying, "misinformation is a critical issue and we will be testing many different ways to address it."


One iteration of their solution included a community-based point system in which Twitter users could earn "points" and a "community badge" if they "contribute in good faith and act like a good neighbor" and "provide critical context to help people understand information they see." Such a democratic point system would shift the arbiter of truth toward the platform's users, similar to the community moderation practiced by the anonymous editors of Wikipedia.
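Twitter never published how the scoring would actually work, but a reputation system along these lines might tally peer-rated contributions and grant a badge past some threshold. The sketch below is purely illustrative; the class, field names, point weighting, and badge cutoff are all assumptions, not Twitter's design.

```python
from dataclasses import dataclass, field
from typing import List

BADGE_THRESHOLD = 100  # hypothetical cutoff for earning a "community badge"

@dataclass
class Contributor:
    """Tracks a user's good-faith contributions (all names and weights are illustrative)."""
    handle: str
    points: int = 0
    badges: List[str] = field(default_factory=list)

    def record_contribution(self, helpful_votes: int) -> None:
        # Award points in proportion to how helpful other users rated the
        # added context; the 10-point weighting is an arbitrary assumption.
        self.points += 10 * helpful_votes
        if self.points >= BADGE_THRESHOLD and "community" not in self.badges:
            self.badges.append("community")

user = Contributor("example_handle")
user.record_contribution(helpful_votes=12)
print(user.points, user.badges)  # 120 ['community']
```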


In its most recent effort to discourage the spread of misinformation and foster more thoughtful communication on its network, Twitter began prompting some users (on the Android mobile operating system, for now) to open links to other websites before retweeting them. It will also ask users to reconsider before sending vulgar tweets. But anyone who has ever used "Screen Time" on an iPhone knows this kind of friendly nudge has varying impact.
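The underlying check is conceptually simple. Here is a minimal sketch, assuming the prompt fires only when a tweet contains a link the user never opened; Twitter has not published its actual logic, so the function name, parameters, and message text are hypothetical.

```python
from typing import Optional

def retweet_prompt(contains_link: bool, has_opened_link: bool) -> Optional[str]:
    """Return a nudge message when a user tries to retweet an unread article.

    A toy version of the "read before you retweet" prompt; the condition
    below is an assumption, not Twitter's published behavior.
    """
    if contains_link and not has_opened_link:
        return "Headlines don't tell the full story. Want to read the article first?"
    return None  # no prompt needed; the retweet proceeds as usual
```

Crucially, the nudge is advisory: the user can dismiss it and retweet anyway, which is exactly why its impact varies, much like Screen Time reminders.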


Twitter also applied fact-checking labels to tweets whose accuracy is disputed, and warning labels to tweets that are inappropriate or that violate Twitter's rules against promoting or glorifying violence, as seen with Trump's May 26 tweet that (falsely) declared vote-by-mail fraudulent:

There is NO WAY (ZERO!) that Mail-In Ballots will be anything less than substantially fraudulent. Mail boxes will be robbed, ballots will be forged & even illegally printed out & fraudulently signed. The Governor of California is sending Ballots to millions of people, anyone.....

— Donald J. Trump (@realDonaldTrump) May 26, 2020
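This amounts to a two-tier policy: disputed claims stay visible but get contextual labels, while rule violations are hidden behind a warning notice. A toy sketch of that distinction follows; the keyword triggers are stand-ins for Twitter's unpublished classifiers, which in practice combine human review with machine learning.

```python
from typing import Optional

# Hypothetical keyword triggers standing in for Twitter's (unpublished)
# classifiers; real enforcement is far more sophisticated than substring checks.
DISPUTED_CLAIM_TERMS = ("mail-in ballots", "substantially fraudulent")
VIOLENCE_TERMS = ("glorify violence",)

def label_tweet(text: str) -> Optional[str]:
    """Return the label a tweet would carry under this toy two-tier policy."""
    lowered = text.lower()
    if any(term in lowered for term in VIOLENCE_TERMS):
        # Rule violations are hidden behind a warning notice.
        return "warning: this tweet violates rules about glorifying violence"
    if any(term in lowered for term in DISPUTED_CLAIM_TERMS):
        # Disputed claims remain visible but link to fact-checked context.
        return "fact-check: get the facts about mail-in ballots"
    return None  # no label applied
```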


Facebook, on the other hand, has taken a different approach to moderating content and combating misinformation. It created an independent Oversight Board, a first-of-its-kind internet governance body, committing $130 million in funding to provide independent review of its content moderation decisions. Facebook will treat the board's individual content judgments as binding, but responsibility for implementing those decisions rests solely with Facebook. Board members have pledged to balance freedom of expression carefully against other human rights, to operate transparently, and to represent global diversity. As this approach is new and experimental, the co-chairs say they expect to make mistakes.


These arbiters of information, recognizing their responsibility, will have to keep developing innovative ways to battle misinformation without curtailing free speech or the venues for healthy conversation in an ever-changing social media landscape.


