The Florida bill attempts to weaken defendants’ protections in defamation lawsuits, including by making it easier to sue when the plaintiff has been accused of discriminating on the basis of race, sex, sexual orientation, or gender identity. The bill would also help plaintiffs more easily establish actual malice.

I question whether parts of the Florida bill, if passed, would withstand a constitutional challenge, because the actual malice requirement is rooted in the First Amendment and cannot be overridden by a state legislature. But if Justices Thomas and Gorsuch have their way, the court could reconsider the constitutional protections in defamation cases, leaving the door open for Florida and other states to make it far easier to sue not only news organizations but individual critics on social media. Although the debate about Sullivan often focuses on large news organizations like The New York Times and Fox News, it protects all speakers and is essential to open online discourse.

Also looming over the Supreme Court are requests to consider the constitutionality of Texas and Florida laws that restrict the ability of social media companies to moderate user content. Last May, the Eleventh Circuit blocked a Florida law that limits platforms’ ability to moderate content from political candidates and stories from news organizations. “Put simply, with minor exceptions, the government can’t tell a private person or entity what to say or how to say it,” Judge Kevin Newsom wrote. But in September, the Fifth Circuit upheld a Texas law that prohibits social media platforms from “censoring” user content based on viewpoint. “Today we reject the idea that corporations have a freewheeling First Amendment right to censor what people say,” Judge Andrew Oldham wrote. Although the court has not yet agreed to hear the cases, it probably will do so in the next year.

A Supreme Court ruling on those laws has the potential to overhaul how online platforms have operated since the dawn of the internet. If the court agrees that platforms do not have a First Amendment right to moderate as they see fit, the platforms could soon face a state-by-state patchwork of restrictions and edicts to carry user content even if it violates the platforms’ internal policies. Platforms have made some bad content-moderation decisions, but even this imperfect system is better than allowing courts and legislators to decide when platforms can block content.

And states are not only passing social media laws that require platforms to carry content, but also attempting to limit harmful but constitutionally protected speech. For instance, after last year’s Buffalo supermarket shooting, New York enacted a law that requires platforms to provide “a clear and easily accessible mechanism for individual users to report incidents of hateful conduct,” and to maintain policies explaining how they will respond to complaints about hateful conduct. This month, a New York federal district judge struck down the law, concluding that it “both compels social media networks to speak about the contours of hate speech and chills the constitutionally protected speech of social media users.” And last month, a California federal district judge blocked a California law that prohibited physicians and surgeons from disseminating “misinformation or disinformation” about Covid-19 to patients. The New York and California judges reached the correct decisions under current Supreme Court First Amendment precedent, but these rulings are unlikely to be the last time that states try to limit constitutionally protected online speech. Eventually those cases may well end up in the Supreme Court, giving it another chance to reevaluate the scope of its free speech protections.