
Meta and Google, the companies that dominate the US digital advertising landscape, have recently become defendants in a wave of similar lawsuits. These legal challenges all aim to circumvent Section 230 of the Communications Decency Act, enacted at the internet's dawn in 1996, which shields websites from liability for user-posted content by treating them as mere moderators rather than publishers. Firms such as TikTok and Snap have also found themselves in difficult positions.
Last week, a court found Meta liable in a case concerning child safety, while a Los Angeles jury held Meta and YouTube accountable for negligence in a personal injury claim. That verdict marks the first time social networks have been held responsible for fostering addiction among minors.
Plaintiffs contended that the combination of features such as autoplay, recommendation algorithms, notifications, and certain filters operated like "digital slot machines," contributing to serious mental health problems. Both companies have announced plans to appeal the rulings, though their prospects in court appear far from promising.
A few days later, victims of the notorious sex offender Jeffrey Epstein filed a class action suit against Google and the US government, alleging the improper disclosure of personal data. The plaintiffs argue that summaries and links generated by Google’s AI “are not a neutral search index.”
“For so long, tech companies have used Section 230 as an excuse to avoid taking meaningful action to protect users—especially children—from blatant harm, harassment, abuse, fraud, and scams,” stated Senator Brian Schatz. “It’s not that they don’t know what’s happening or even why it’s happening. It’s that taking any action on it would damage their bottom line. And as long as federal law provides cover, why bother?”
Politicians from both sides of the aisle have proposed various reforms to Section 230 over the years, and company executives have faced public grilling during Congressional hearings regarding the alleged harms propagated by their platforms. However, while the issue remains stalled in Washington, plaintiffs’ attorneys are forging alternative pathways to hold major tech firms accountable.
The class action lawsuit against Google, filed last week by a plaintiff using the pseudonym Jane Doe, claims that the company's AI system generated its own summaries and links, exposing the personally identifiable information of Epstein's victims, including names, phone numbers, and email addresses. The plaintiffs assert that "Google is intentionally providing this personal information in a manner that is intended to, or at least is highly likely to, incite stalking and fear."
Matthew Bergman, one of the attorneys representing the plaintiffs in the Los Angeles case, commented that the tech industry relies on overly broad interpretations of Section 230 to “avoid any conceivable legal liability simply because third-party content is somewhere in the causal chain of their wrongdoing.”
The stakes are immense as the technology sector moves beyond the era of traditional online search and social networking into a world dominated by neural networks, which generate content ranging from the controversial to the potentially illegal. So far, the financial penalties have been modest, under $400 million in damages across the two verdicts, but these cases set a troubling precedent for tech giants betting heavily on AI.
“Plaintiffs’ lawyers are winning the war against Section 230 through systematic, relentless litigation that results in nicks and cracks in its shield,” suggests Santa Clara University Law Professor Eric Goldman.