“I agree in principle that there should be liability, but I don’t think we’ve found the right set of terms to describe the processes we’re concerned about,” said Jonathan Stray, a visiting scholar at the Berkeley Center for Human-Compatible AI who studies recommendation algorithms. “What’s amplification, what’s enhancement, what’s personalization, what’s recommendation?”

New Jersey Democrat Frank Pallone’s Justice Against Malicious Algorithms Act, for example, would withdraw immunity when a platform “knew or should have known” that it was making a “personalized recommendation” to a user. But what counts as personalized? According to the bill, it’s using “information specific to an individual” to enhance the prominence of certain material over other material. That’s not a bad definition. But, on its face, it would seem to say that any platform that doesn’t show everyone the exact same thing would lose Section 230 protections. Even showing someone posts by people they follow arguably relies on information specific to that person.
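To make the problem concrete, here is a minimal sketch (hypothetical code, not anything drawn from the bill) of a feed that does nothing but show posts from followed accounts, newest first. Even this consumes "information specific to an individual": the user's follow list.

```python
from datetime import datetime

# Hypothetical sample data: (author, timestamp, text) tuples.
posts = [
    ("alice", datetime(2021, 11, 1, 9, 0), "hello"),
    ("bob",   datetime(2021, 11, 1, 9, 5), "a news link"),
    ("carol", datetime(2021, 11, 1, 9, 10), "a photo"),
]

def following_feed(posts, follows):
    """No engagement prediction, no ranking model: just posts from
    followed accounts, newest first. But `follows` is information
    specific to an individual, and filtering on it makes some
    material more prominent than other material for that user."""
    return sorted(
        (p for p in posts if p[0] in follows),
        key=lambda p: p[1],
        reverse=True,
    )

print(following_feed(posts, {"alice", "carol"}))  # bob's post never appears
```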

Malinowski’s bill, the Protecting Americans From Dangerous Algorithms Act, would take away Section 230 immunity for claims invoking certain civil rights and terrorism-related statutes if a platform “used an algorithm, model, or other computational process to rank, order, promote, recommend, amplify, or similarly alter the delivery or display of information.” It contains exceptions, however, for algorithms that are “obvious, understandable, and transparent to a reasonable user,” and it lists some examples that would qualify, including reverse chronological feeds and ranking by popularity or user reviews.
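The examples the bill names really are simple. A hypothetical sketch of both, using toy data:

```python
# Hypothetical posts: (text, hours_ago, upvotes) tuples.
posts = [
    ("old but popular", 9, 500),
    ("brand new",       1, 2),
    ("middling",        5, 40),
]

# Reverse chronological: newest first; no user data consulted.
reverse_chron = sorted(posts, key=lambda p: p[1])

# Popularity: most up-votes first; identical for every user.
by_popularity = sorted(posts, key=lambda p: p[2], reverse=True)
```

The catch, as the next paragraph shows, is that the naive popularity sort is exactly the version nobody actually ships.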

There’s a great deal of sense to that. One problem with engagement-based algorithms is their opacity: Users have little insight into how their personal data is being used to target them with content a platform predicts they’ll interact with. But Stray pointed out that distinguishing between good and bad algorithms isn’t so easy. Ranking by user reviews or up-voting/down-voting, for example, is crappy on its own. You wouldn’t want a post with a single up-vote or five-star review to shoot to the top of the list. A standard way to fix that, Stray explained, is to calculate the statistical margin of error for a given piece of content and rank it by the bottom of that confidence interval. Is that technique—which took Stray several minutes to explain to me—obvious and transparent? What about something as basic as a spam filter?
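Stray didn’t spell out a formula, but one standard version of the technique he describes is the Wilson score lower bound: rank each post by the pessimistic end of a confidence interval around its up-vote ratio, so thinly voted posts can’t leapfrog well-established ones. A minimal sketch in Python, with illustrative numbers:

```python
import math

def wilson_lower_bound(upvotes: int, total: int, z: float = 1.96) -> float:
    """Lower bound of the Wilson score interval (95% by default) for
    the true up-vote ratio. A post with few votes gets a wide interval,
    so its pessimistic score stays low until votes accumulate."""
    if total == 0:
        return 0.0
    p = upvotes / total                      # observed up-vote ratio
    denom = 1 + z ** 2 / total
    center = p + z ** 2 / (2 * total)
    margin = z * math.sqrt((p * (1 - p) + z ** 2 / (4 * total)) / total)
    return (center - margin) / denom

# One perfect vote no longer beats an established track record:
print(wilson_lower_bound(1, 1))      # ~0.21
print(wilson_lower_bound(90, 100))   # ~0.83
```

Correct, widely used, and nothing a “reasonable user” would reconstruct by looking at the feed, which is Stray’s point.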

“It’s not clear to me whether the intent of excluding systems that are ‘simple’ enough would in fact exclude any system that is actually practical,” Stray said. “My suspicion is, probably not.”

In other words, a bill that took away Section 230 immunity with respect to algorithmic recommendation might end up looking the same as a straight-up repeal, at least as far as social media platforms are concerned. Jeff Kosseff, the author of the definitive book about Section 230, The Twenty-Six Words That Created the Internet, pointed out that internet companies have many legal defenses to fall back on, including the First Amendment, even without the law’s protection. If the statute gets riddled with enough exceptions, and exceptions to exceptions, those companies might decide there are easier ways to defend themselves in court.