Tell Congress: attacking Section 230 is dangerous
Why is everyone talking about Section 230?
Free speech is under attack. Trump is suing journalists for unfavorable reporting, investigating companies that have DEI programs, and trying to deport green card holders for protesting against the genocide of Palestinians. Section 230 is a shield against efforts to crush online dissent. But Congress wants to rip it away in the name of fighting Big Tech while refusing to embrace the policies that would actually address the harm caused by these companies. Let’s be clear: Repealing Section 230 would give Trump and his oligarch buddies even more power to censor the Internet and consolidate authoritarian control.
Meaningful policy solutions need to strike at the root causes of Big Tech’s harms: monopoly power, data harvesting and abuse, and algorithmic manipulation. But instead, Section 230 of the Communications Decency Act has become an increasingly popular scapegoat for lawmakers who want to be seen to fight Big Tech. This widely misunderstood but crucially important law essentially allows ordinary people to have a voice on the Internet. Many proposed changes to Section 230 would do enormous harm to vulnerable communities, undermine human rights, and utterly fail to address the legitimate problems with Big Tech companies like Facebook, Amazon, and Google.
This website attempts to clear up some of the most common misconceptions about Section 230. Tell your lawmakers to read it before they change a law they don’t understand. We cannot afford to open the floodgates of government and private censorship while our society is on the verge of fascist takeover.
FAQ
What is Section 230?
“No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”
Section 230 was introduced in response to defamation lawsuits against two early online services, Prodigy and CompuServe, which hosted forums where users could post their own content. Both were sued over third-party content, with one key difference: Prodigy had actively made efforts to moderate content and was held liable, while CompuServe had not moderated and was not. In response to this “Moderator’s Dilemma,” Section 230 created guidelines so that all private, non-government websites could apply a “shield and sword” approach: hosting the content they wish to host and removing the content they don’t, according to their own rules.
Why is that important for the average internet user?
Section 230 is the law that essentially allows all user generated content on the Internet: your posts, your thoughts, your ideas, your photos, jokes, opinions, memes, videos, reviews, how-to guides, recipes, etc. As Wikipedia outlined in this blog post, Section 230 is what gives website operators the freedom to review user edits to, for example, a biographical article without legal risk. Without Section 230, platforms would be unwilling to host content created by ordinary people. The Internet would become more like cable TV, where everything you see and hear is backed by big money and has been cleared by risk-averse corporate lawyers. Any platforms hosting user-generated content that did survive would be unable to engage in even the most basic forms of moderation, like removing spam, for fear of lawsuits.
While Trump has called for a full repeal of Section 230, most lawmakers say they only want to change it. But even minor tweaks to the law can have enormous consequences for online free speech, and Trump clearly intends to use Section 230 reform as an opportunity for censorship. Even without direct government intervention, websites would become unwilling to host user generated content out of constant fear of lawsuits. Only the biggest of Big Tech companies, like Meta and Google, could hope to maintain platforms in a world without Section 230, because only they can afford armies of lawyers and content moderators to preemptively remove content, including legal, protected speech, to avoid liability. Smaller platforms, like Bluesky, would simply be sued out of existence.
Content moderation should be based on the needs of online communities and on human rights principles, not fear of lawsuits. Without Section 230, platforms would have the wrong incentives: refusing to host your opinions becomes the safest way to avoid legal challenges. Research shows that Section 230 has been beneficial for fostering a diversity of platforms and content – exactly what its authors originally intended.
Section 230 doesn’t just apply to “Big Tech,” as some lawmakers seem to think. It protects user generated speech and content on smaller platforms like Discord, Reddit, Patreon, and Etsy, as well as the comments sections of local newspapers, a local library’s message board – literally anywhere that people can express themselves online. Section 230 is especially important for protecting the speech and organizing efforts of marginalized people, who have been frequently targeted by law enforcement and have had their speech suppressed by corporations. If Section 230 were weakened, platforms would be unwilling to host, for example, videos of Israeli war crimes in Gaza or posts critical of Elon Musk and Donald Trump.
Is a website a “platform” or a “publisher”?
This is a trick question: the answer is that it doesn’t matter. What matters is only who created the content in question. If that content is created by a third party, the website hosting it cannot be sued over it. Additionally, there is no “neutrality” requirement in Section 230 – and that is a good thing, because otherwise the government could decide what counts as “neutral,” allowing the state to censor speech at will.
As this explainer from the Electronic Frontier Foundation details further, “rather than enshrine some significance between online “platforms” and “publishers,” Section 230 intentionally nullifies any distinction that might have existed. It explicitly grants immunity to all intermediaries, both the “neutral” and the proudly biased. It treats them exactly the same, and does so on purpose. That’s a feature of Section 230, not a bug,” because it protects ALL user generated content on the internet.
While Section 230 protects platforms that host user content, it is not some kind of magic shield or “Get Out of Jail Free” card – Big Tech companies are still, and always have been, liable for content that they themselves create, and for knowingly hosting or sharing illegal content. The spread and amplification of hate speech and disinformation online is a valid concern. Unchecked harassment of women and people of color, often a direct result of online mis- and disinformation, can be its own form of censorship if people face abuse and are afraid to express themselves online. But messing with Section 230 will not fix that.
Section 230 allows for a range of moderation practices: various online communities can moderate in the best interest of their users. As private, non-government actors they are free to make their own decisions on what they want to leave up and what they wish to take down. Section 230 is what makes it possible for platforms to remove terrible (but not illegal) content, and even more mundane content like spam, without getting sued for it. So if you are concerned there is too much harmful content online, gutting Section 230 will make it harder for platforms to remove it.
Opponents of Section 230 argue that it gives platforms too much power to filter content, and this has led to bias against conservatives and silencing of their viewpoints. This claim simply isn’t true, and is not backed up by data. Yes, big tech companies have too much power over what we can see and hear, and that includes suppressing controversial viewpoints on both the left and the right, but changing or killing Section 230 will make this problem worse. It will lead to faster, more draconian, and more haphazard moderation, and make risk-averse lawyers at these companies the ones deciding what content is allowed, thus centralizing decision-making in the hands of a few corporate leaders.
Concerns about online censorship and speech are really problems of monopoly over our attention and our speech, made worse because these platforms try to enforce community standards that are impossible to apply equally at scale. If Facebook and YouTube didn’t have so much power, we wouldn’t care so much about their moderation decisions: if you didn’t like them, you could simply choose another platform.
If lawmakers want to tinker with Section 230, it is crucial to look at what happened the last time somebody poked a hole in it. The SESTA/FOSTA legislation, intended to end online sex trafficking, instead led Craigslist to shut down its “Personals” section and Tumblr to crack down on a wide range of content, including adult artwork and sex education information. This disproportionately affected sex workers and LGBTQ+ creators by shutting down spaces that made people safer and pushing them back onto the street. Sex workers lost income, became homeless overnight, were forced to take on dangerous or unstable clients, and could no longer negotiate with or vet clients in advance. Two sex workers took their own lives within a month of SESTA-FOSTA’s passage. And as journalist Cathy Reisenwitz chronicles in this article, “SESTA-FOSTA hasn’t [even] reduced the rate of sex trafficking, according to sex workers, advocates, sex trafficking survivors, and even the Department of Justice. It actually [made] it more difficult for law enforcement to find and rescue victims.”
Senator Elizabeth Warren (D-MA) and Congressman Ro Khanna (D-CA) have introduced legislation to study the effects and impact of SESTA/FOSTA, and the need for this kind of research underscores one of the biggest concerns surrounding Section 230: proposed amendments to it can have massive unintended consequences and should be approached very cautiously. But Trump’s intention to censor dissent could not be more clear, making even the most cautious Section 230 reforms unacceptably risky.
There are a number of things lawmakers can and should do right now to hold Big Tech companies accountable, address their harms, and rein in their monopoly power without touching Section 230. Given the partisan gridlock around Section 230, and the many ways that changes to it could backfire and do more harm than good, lawmakers should focus on actual policy solutions to Big Tech’s abuses, rather than rushing through some legislation so they can say they did. Here are a few things lawmakers should do right now:
- Pass strong federal data privacy legislation. Many of social media’s worst harms flow from data harvesting and a lack of transparency and accountability: how their algorithms operate and manipulate, how they amplify the worst speech on the internet, and how they profit by promoting that speech to the people most susceptible to it through micro-targeted advertising. Cutting off this business model is the best way to pressure the companies to do better.
- Enforce and expand existing civil rights legislation.
- Update antitrust law for the digital age. The monopoly power and unchecked acquisitions of these tech companies have stifled innovation; existing antitrust laws need a major “update” to address the realities of consumer harm in the digital age.
The suggestions above strike at the root of the harms we see from Big Tech, and we need REAL policy solutions that actually address the issues rather than just turning Section 230 into a scapegoat – changing it could do enormous harm and not even fix the problems we have with the internet.