Meanwhile, Senator Josh Hawley, a Missouri Republican, has introduced legislation that would encourage individuals to sue platforms for making content decisions in “bad faith,” an unsubtle invitation to conservatives who feel they’ve been the targets of politically motivated slights. In fact, there’s scant evidence of systematic anti-right bias by social-media platforms, according to two analyses by The Economist and a third by a researcher at the conservative American Enterprise Institute.
Other skeptics say Section 230 allows platforms to profit from hosting misinformation and hate speech. This is Biden’s position: that by providing a shield against litigation, the law creates a disincentive for companies to remove harmful content. In a December 2019 conversation with the New York Times editorial board, Biden responded to questions about Section 230 with pique at Facebook for failing to fact-check inaccurate Trump campaign ads about him. The law “should be revoked because [Facebook] is not merely an internet company,” he said. “It is propagating falsehoods they know to be false.”
Biden’s mistake, though, is urging revocation of Section 230 to punish Facebook, when what he really seems to want is for the company to police political advertising. He has said nothing publicly in the intervening months indicating that he has altered this position.
Several more nuanced, bipartisan reform proposals do contain ingredients worth considering. A bill cosponsored by Senators John Thune, a South Dakota Republican, and Brian Schatz, a Hawaii Democrat, would require internet companies to explain their content moderation policies to users and provide detailed quarterly statistics on which items were removed, down-ranked, or demonetized. The bill would amend Section 230 to give larger platforms just 24 hours to remove content determined by a court to be unlawful. Platforms would also have to create complaint systems that notify users within 14 days of taking down their content and provide for appeals.
More smart ideas come from experts outside government. A 2019 report published by scholars gathered by the University of Chicago’s Booth School of Business suggests transforming Section 230 into a “quid pro quo benefit.” Platforms would have a choice: adopt additional duties related to content moderation or forgo some or all of the protections afforded by Section 230.
Quid pro quo
In my view, lawmakers should adopt the quid pro quo approach for Section 230. It provides a workable organizing principle to which any number of platform obligations could be attached. The Booth report provides examples of quids that larger platforms could offer to receive the quo of continued immunity. One would “require platform companies to ensure that their algorithms do not skew towards extreme and unreliable material to boost user engagement.” Under a second, platforms would disclose data on content moderation methods, advertising practices, and which content is being promoted and to whom.
Retooling Section 230 isn’t the only way to improve the conduct of social-media platforms. It would also be worth creating a specialized federal agency devoted to the goal. The new Digital Regulatory Agency would focus on making platforms more transparent and accountable, not on debating particular pieces of content.
For example, under a revised Section 230, the agency might audit platforms that claim their algorithms do not promote sensational material to heighten user engagement. Another potential responsibility for this new government body might be to oversee the prevalence of harmful content on various platforms, a proposal that Facebook put forward earlier this year in a white paper.
Facebook defines “prevalence” as the frequency with which detrimental material is actually viewed by a platform’s users. The US government would establish prevalence standards for comparable platforms. If a company’s prevalence metric rose above a preset threshold, Facebook suggests, that company “might be subject to greater oversight, specific improvement plans, or, in the case of repeated systematic failures, fines.”
Facebook, which is already estimating prevalence levels for certain categories of harmful content on its site, concedes that the measurement could be gamed. That’s why it would be important for the new agency to have a technically sophisticated staff and meaningful access to company data.
Reforming Section 230 and establishing a new digital regulator may turn, like so much else, on the outcome of the November election. But regardless of who wins, these and other ideas are available, and could prove useful in pushing platforms to take more responsibility for what’s posted and shared online.
Paul M. Barrett is the deputy director of the NYU Stern Center for Business and Human Rights.