Category: Technology

  • Introduction to Net Neutrality

    Do users have the right to equality of access to online information?

    Network neutrality is the principle that internet service providers (ISPs) like AT&T, Verizon, and Comcast must treat all online content flowing through their cables equally. This means that ISPs, which connect users to information online, cannot charge companies extra for access to certain services, nor can they slow down or stop providing access to certain online content. For example, under net neutrality, a large TV streaming platform like Netflix could not pay Verizon to make its shows stream faster than shows on a small startup platform; shows on both platforms would reach users at the same speed.
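
    To make the "fast lane" idea concrete, the short Python sketch below is a purely hypothetical illustration, not a description of any real ISP's system: the service names (BigStream, SmallStartup, NicheDocs), the 300 Mbps link capacity, and the paid-priority weighting are invented for the example. It contrasts an equal-share allocation (the net neutrality scenario) with a paid-priority allocation.

      # Hypothetical illustration only: a toy model of how an ISP might divide a
      # fixed amount of bandwidth among streaming services. The service names,
      # link capacity, and weights are invented for this example.

      TOTAL_BANDWIDTH_MBPS = 300  # assumed capacity of one shared link

      def neutral_allocation(services):
          """Net neutrality scenario: every service gets an equal share."""
          share = TOTAL_BANDWIDTH_MBPS / len(services)
          return {name: share for name in services}

      def paid_priority_allocation(services, paid_boost):
          """No net neutrality: services that pay receive extra weight."""
          weights = {name: 1 + paid_boost.get(name, 0) for name in services}
          total_weight = sum(weights.values())
          return {name: TOTAL_BANDWIDTH_MBPS * w / total_weight
                  for name, w in weights.items()}

      services = ["BigStream", "SmallStartup", "NicheDocs"]

      print(neutral_allocation(services))
      # {'BigStream': 100.0, 'SmallStartup': 100.0, 'NicheDocs': 100.0}

      print(paid_priority_allocation(services, paid_boost={"BigStream": 3}))
      # {'BigStream': 200.0, 'SmallStartup': 50.0, 'NicheDocs': 50.0}

    Under the neutral rule every service gets 100 Mbps; under paid prioritization the paying service takes 200 Mbps while the others drop to 50 Mbps each, which is the dynamic that net neutrality advocates say would stunt small startups.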

    Net neutrality advocates champion it for two reasons: 

    1. Protecting free expression online. A handful of ISPs dominate access to the internet, and without net neutrality they could suppress particular viewpoints by making them load slowly or not at all, or by charging fees for posting particular types of content on the internet. 
    2. Enabling innovation and keeping markets free. If ISPs can provide faster access to companies which can pay them more, smaller companies will have their growth stunted at the outset. 

    The Federal Communications Commission (FCC) passed the Open Internet Order in 2015, establishing net neutrality as law. Prior to then, the Bush administration had enforced basic net neutrality rules, which were later expanded under the Obama administration. Under the Trump administration, the FCC reversed the 2015 order and removed the legal provisions protecting net neutrality. In their place, ISPs are now only required to disclose their content streaming practices to users. For example, if Verizon decided to stream Hulu faster than Netflix because Hulu paid more, Verizon would have to tell users.

    The FCC’s justification for this change is that small ISPs struggled with the cost of proving they were upholding net neutrality: to show they were streaming all content equally, ISPs had to hire expensive lawyers and accountants. Large ISPs could absorb those costs, while small ISPs were heavily burdened. Under the 2015 Open Internet Order, large ISPs had to follow stricter rules, but customers also had fewer small providers to switch to if they weren’t satisfied. In response to this challenge, the FCC lifted reporting requirements for ISPs serving fewer than 100,000 people.

    Those opposed to net neutrality argue that a market-based system, in which users are free to select an ISP based on its streaming practices, is more efficient than asking the government to monitor those practices. One challenge to that idea is that in much of the US, only one ISP provides internet coverage to a given geographic area. For 83.3 million Americans, switching based on streaming-practice preferences is impossible because they have access to service from only a single provider.

    The net neutrality issue is likely to continue developing through legislative and judicial means:

    • Antitrust issues: As it stands, outright blocking a competitor’s website is technically a violation of antitrust law, but slowing down access to a competitor’s website lingers in a murky legal realm; the line between throttling and blocking content is unclear.
    • States’ Rights: Since the FCC’s repeal of the 2015 net neutrality order, a federal court ruled that states can establish their own net neutrality laws, and the FCC cannot infringe upon these state laws. As a result, several states have passed laws in support of net neutrality within their borders. 

    The crux of the issue is whether users should be able to take streaming practices into account when selecting a network (assuming multiple ISPs cover their area), or whether the government should require networks to give everyone the same access to online content. The latter approach risks eliminating smaller ISPs and empowering the large firms that can afford compliance costs.

    How much involvement do you believe the US government should have in enforcing net neutrality, if any at all? 

  • Moderating Online Content

    Should platforms have the right to censor, or be responsible for, their platform’s content? 

    As the role of the internet in public discourse has expanded and changed, so too have the questions the internet poses for free speech rights. A few decades ago, one of the tech world’s foremost challenges was establishing an online environment in which freedom of speech operated in a similar way to the laws that predated the internet’s existence. Social media platforms like Instagram, Facebook, and Twitter, however, have never served as impartial blank slates. These platforms have the censoring technology to establish whatever online environment and norms they choose. The question is whether they have the right to use that technology however they want, or whether the government should tell them how to use it.

    Section 230 of the 1996 Communications Decency Act was a landmark policy which defined the role of online platforms in moderating content in two major ways:

    1. Only the person who posts on a social media platform can be held liable for the content, not the platform itself. 
    2. An online platform (in the law’s terms, a provider of an “interactive computer service”) can restrict content it deems obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable.

    Essentially, online platforms aren’t responsible for their content, but they can remove posts if they choose. 

    Politicians in the US government mostly call for one of these two reforms of Section 230:

    1. Continuing to offer platforms protection from liability for content in exchange for their meeting specific government standards of moderating that content. 
    2. Increasing platforms’ accountability for content posted on their site, so they can be held liable for slander, hate speech, etc. 

    Either reform would change the internet freedom of speech standards that we know today. 

    Some argue Section 230 enables platforms to unfairly censor viewpoints under the guise of moderating hate speech. Citizens and representatives of the Republican party frequently express concerns that giant tech companies led by liberal Silicon Valley coders are intolerant of their viewpoints. Recent attempts at reform include proposed legislation to remove Section 230 protections from platforms unless they prove they do not moderate content to disadvantage a political perspective. A more extreme proposal is to prevent platforms from censoring content unless it violates the law, such as libel and hate speech.

    Others are concerned about the effects of a lack of moderation. Violent extremists use social media to spread propaganda, recruit members, and organize attacks. People can unknowingly spread disinformation with dangerous consequences, like rumors that social distancing and masks were ineffective against Covid-19, leading to otherwise-preventable outbreaks. Worse still, malevolent actors use social media to intentionally spread misinformation, seen recently in the Russian government’s efforts to disrupt American elections by promoting conspiracy theories about Democratic candidates. 

    Of the four biggest online players, Facebook, YouTube (owned by Google), Twitter, and Reddit, Facebook has taken the most public heat for its content moderation practices. In response, in February 2020, Facebook shared a whitepaper arguing that a governmental body, not the platforms themselves, should be responsible for determining what content Facebook should keep up and take down. More recently, Facebook created what it calls a Supreme Court for content moderation, with 40 seats to be filled by topic experts who are independent of Facebook.

    The question going forward will be how the government intervenes. In a dynamic online world, policing speech has required — and will continue to require — new norms of regulation.

    Where do you think the US system should sit on this spectrum of content moderation expectations?