Google logo during the Google Developer Days (GDD) in Shanghai, China, September 2019.
In August 2018, President Donald Trump claimed that social media "totally discriminates against Republican/Conservative voices." The claim was nothing new: conservatives have been accusing technology companies of political bias for years. Just last July, Senator Ted Cruz (R-Texas) asked the FTC to investigate the content-moderation policies of technology companies like Google. A day after a Google vice president insisted that YouTube was apolitical, Cruz claimed that political bias at YouTube was "massive."
But the data doesn't back Cruz up, and it has been available for a while. While the actual policies and procedures for moderating content are often opaque, it is possible to examine the results of that moderation and look for signs of bias. Last year, a team of computer scientists decided to do just that.
Moderation
Prompted by the long-running dispute in Washington, DC, computer scientists at Northeastern University decided to investigate political bias in YouTube's comment moderation. The team analyzed 84,068 comments on 258 YouTube videos. At first glance, comments on right-leaning videos appeared to be moderated more heavily than comments on left-leaning ones. But when the researchers also accounted for factors such as the prevalence of hate speech and misinformation, they found no difference between how comments on right- and left-leaning videos were moderated.
"There is no political censorship," said Christo Wilson, one of the co-authors and associate professor at Northeastern University. "In fact, YouTube only seems to enforce its hate speech policies, which they say they do." Wilson's contributors to the newspaper were graduate students Shan Jiang and Ronald Robertson.
To test whether comment moderation was politically biased, the team needed to know whether each video leaned right or left, whether it contained misinformation or hate speech, and which of its comments had been removed over time.
From the fact-checking sites Snopes and PolitiFact, the researchers assembled a set of YouTube videos that had been labeled true or false. By crawling the comments on those videos twice, six months apart, they could determine which comments had been removed in the interim. They also used natural language processing to identify hate speech in the comments.
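In rough terms, the removal-detection step reduces to diffing two snapshots of each video's comment section. Below is a minimal Python sketch of that idea; the data structures, function name, and toy data are illustrative assumptions, not the study's actual pipeline.

```python
# Hypothetical sketch: infer comment removals by crawling twice and diffing.
# Each crawl maps video_id -> {comment_id: comment_text}.

def find_removed_comments(first_crawl: dict, second_crawl: dict) -> dict:
    """Return comments present in the first crawl but missing in the second.

    Caveat: from the outside, a vanished comment may have been removed
    by the platform or deleted by its author; extra signals are needed
    to tell the two cases apart.
    """
    removed = {}
    for video_id, comments in first_crawl.items():
        later = second_crawl.get(video_id, {})
        gone = {cid: text for cid, text in comments.items() if cid not in later}
        if gone:
            removed[video_id] = gone
    return removed

# Toy example: one comment disappears between the two crawls.
crawl_early = {"vid1": {"c1": "great video", "c2": "this is false"}}
crawl_late = {"vid1": {"c1": "great video"}}
print(find_removed_comments(crawl_early, crawl_late))
# -> {'vid1': {'c2': 'this is false'}}
```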
To classify the YouTube videos as left- or right-leaning, the team used an independent set of voter records. They matched the voters to their Twitter profiles to see which videos were shared by Democrats and which by Republicans, and assigned partisanship accordingly.
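The labeling logic can be sketched in a few lines. The threshold, the function name, and the share counts below are hypothetical; the study's actual classification rule may differ.

```python
# Hypothetical sketch: label a video's lean from who shared it on Twitter.
# shares_dem / shares_rep would come from matching voter records to profiles.

def label_lean(shares_dem: int, shares_rep: int, min_shares: int = 10):
    """Assign 'left', 'right', or None based on the partisan share ratio."""
    total = shares_dem + shares_rep
    if total < min_shares:
        return None  # too few shares to label confidently
    rep_ratio = shares_rep / total
    if rep_ratio > 0.5:
        return "right"
    if rep_ratio < 0.5:
        return "left"
    return None  # evenly shared; no clear lean

print(label_lean(shares_dem=80, shares_rep=20))  # 'left'
print(label_lean(shares_dem=3, shares_rep=40))   # 'right'
```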
Controls matter
The raw numbers "seem to indicate that there is this imbalance in how the moderation takes place," said Wilson. "But if you dig a little deeper and control for other factors, like the presence of hate speech and misinformation, that effect suddenly goes away, and the left and the right are moderated equally."
Kristina Lerman, a computer scientist at the University of Southern California, noted that bias studies are difficult because the same outcome can have several causes, known in statistics as confounding variables. Right-leaning videos might draw stricter comment moderation simply because they have more dislikes, or contain misinformation, or because their comments contain hate speech. Lerman said that Wilson's team accounted for these alternative explanations in their analysis using a statistical method known as propensity score matching, and that their analysis looked "solid."
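Propensity score matching is worth unpacking, since it is what makes the apparent left/right gap disappear. The toy Python sketch below illustrates the general technique on simulated data; every variable name and number is invented and unrelated to the paper's dataset.

```python
# Toy illustration of propensity score matching on simulated data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)

# Simulated comments: confounders (a hate-speech score, a misinformation
# flag) and a "treatment" flag (1 = comment sits under a right-leaning
# video). Here, treatment is deliberately correlated with hate speech.
n = 1000
hate_score = rng.uniform(0, 1, n)
misinfo = rng.integers(0, 2, n)
treated = (rng.uniform(0, 1, n) < 0.3 + 0.3 * hate_score).astype(int)

X = np.column_stack([hate_score, misinfo])

# Step 1: estimate each comment's propensity to be "treated"
# given its confounders.
propensity = LogisticRegression().fit(X, treated).predict_proba(X)[:, 1]

# Step 2: match each treated comment to the control comment with the
# closest propensity score, so the two groups become comparable.
ctrl_idx = np.where(treated == 0)[0]
nn = NearestNeighbors(n_neighbors=1).fit(propensity[ctrl_idx].reshape(-1, 1))
_, match = nn.kneighbors(propensity[treated == 1].reshape(-1, 1))
matched_controls = ctrl_idx[match.ravel()]

# One would then compare moderation rates between the treated comments
# and their matched controls, rather than between the raw groups.
print(f"matched {treated.sum()} treated comments to controls")
```

The point of the matching step is that any remaining difference in moderation rates between the two matched groups can no longer be explained by the measured confounders, which is how the study isolates (and rules out) a purely political effect.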
Kevin Munger, a political scientist at Penn State University, said that while such a study is important, it is only a "snapshot". Munger said it would be "much more useful" if the analysis could be repeated over a longer period of time.
In the paper, the authors acknowledge that their results may not generalize over time, because "platform moderation guidelines are notoriously inconsistent." Wilson added that the results don't necessarily carry over to other platforms, either. "The big limitation here is that we're only looking at YouTube," he said. "It would be great if there were more work on Facebook, Instagram, Snapchat, and all the other platforms that the kids are using these days."
Wilson also said that social media platforms are in a no-win position: any decision to remove or to leave up content is bound to draw criticism from one side of the political spectrum or the other.
"We are so polarized now – maybe nobody will ever be happy," he said with a laugh.