Moderating Online Content

Should platforms have the right to censor the content they host, or be held responsible for it? 

As the internet's role in public discourse has expanded and changed, so too have the questions it poses for free speech rights. A few decades ago, one of the tech world's foremost challenges was establishing an online environment in which freedom of speech operated much as it did under the laws that predated the internet. Social media platforms like Instagram, Facebook, and Twitter have never been impartial blank slates: they have the moderation technology to establish whatever online environment and norms they choose. The question is whether they have the right to use that technology however they want, or whether the government should tell them how to use it. 

Section 230 of the 1996 Communications Decency Act was a landmark policy that defined the role of online platforms in moderating content in two major ways:

  1. Only the person who posts content on a social media platform can be held liable for it, not the platform itself. 
  2. An internet provider can, in good faith, restrict content it deems obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable. 

Essentially, online platforms aren't legally responsible for the content their users post, but they can remove posts if they choose. 

Politicians in the US who favor reforming Section 230 mostly call for one of two changes:

  1. Continuing to offer platforms protection from liability for content in exchange for meeting specific government standards for moderating that content. 
  2. Increasing platforms' accountability for content posted on their sites, so that they can be held liable for slander, hate speech, and the like. 

Either reform would change the free speech standards of the internet as we know it today. 

Some argue Section 230 enables platforms to unfairly censor viewpoints under the guise of policing hate speech. Citizens and representatives of the Republican Party frequently express concern that giant tech companies led by liberal Silicon Valley coders are intolerant of their viewpoints. Recent attempts at reform include proposed legislation to strip Section 230 protections from platforms unless they prove they do not moderate content to disadvantage a political perspective. A more extreme proposal would bar platforms from removing content unless it violates the law, such as libel or hate speech. 

Others are concerned about the effects of too little moderation. Violent extremists use social media to spread propaganda, recruit members, and organize attacks. People can unknowingly spread misinformation with dangerous consequences, like rumors that social distancing and masks were ineffective against Covid-19, leading to otherwise-preventable outbreaks. Worse still, malevolent actors use social media to intentionally spread disinformation, as seen recently in the Russian government's efforts to disrupt American elections by promoting conspiracy theories about Democratic candidates. 

Of the four biggest online players, Facebook, YouTube (owned by Google), Twitter, and Reddit, Facebook has taken the most public heat for its content moderation practices. In response, in February 2020 Facebook released a whitepaper arguing that a governmental body, rather than the platforms themselves, should be responsible for setting content moderation guidelines and deciding what stays up and what comes down. Facebook has also created what has been described as a "Supreme Court" for content moderation, an oversight board with 40 seats filled by topic experts who are independent of Facebook. 

The question going forward will be how the government intervenes. In a dynamic online world, policing speech has required — and will continue to require — new norms of regulation.

Where do you think the US system should sit on this range of censorship expectations? 
