In cases against social media companies for addicting adolescents to their sites, often leading to depression and even suicide, the MDL Court, in considering motions to dismiss based on Section 230 of the Communications Decency Act (as well as the First Amendment and other grounds), concluded that the following alleged design defects are not equivalent to ‘speaking’ or ‘publishing’, and can be fixed by the defendants without altering the publishing of third-party content:

  • Not providing effective parental controls, including notification to parents that children are using the platforms;
  • Not providing options to users to self-restrict time used on a platform;
  • Making it challenging for users to choose to delete their account;
  • Not using robust age verification;
  • Making it challenging for users to report predator accounts and content to the platform;
  • Offering appearance-altering filters;
  • Not labeling filtered content;
  • Timing and clustering notifications of defendants’ own content to increase addictive use;
  • Not implementing reporting protocols to allow users or visitors of defendants’ platforms to report CSAM and adult predator accounts specifically without the need to create or log in to the products prior to reporting.

By contrast, the MDL Court found that the following alleged design defects directly target defendants’ roles as ‘publishers’ of third-party content, and are therefore barred by Section 230 of the Communications Decency Act:

  • Failing to place default protective limits on the length and frequency of sessions;
  • Failing to institute blocks to use during certain times of day (such as during school hours or late at night);
  • Not providing a beginning and end to a user’s Feed;
  • Publishing geolocating information for minors;
  • Recommending minor accounts to adult strangers;
  • Limiting content to short-form and ephemeral content, and allowing private content;
  • Timing and clustering of notifications of third-party content in a way that promotes addiction;
  • Use of algorithms to promote addictive engagement.

First, addressing some of these defects “would necessarily require defendants to publish less third-party content. Unlike the opt-in restrictions described above, which allow users to choose to view or receive less content, but do not limit defendants’ ability to post such content on their platforms, these alleged defects would inherently limit what defendants are able to publish. Similarly, limiting publication of geolocation data provided by users to be published by the site inherently targets the publishing of third-party content and would require defendants to refrain from publishing such content.”

Second, Section 230 also immunizes defendants from allegations that they recommend adult accounts to adolescents. “The publishing conduct covered by Section 230 immunity includes recommending content to users. Plaintiffs do not dispute that user accounts or profiles are third-party content published by the platform. Thus, recommending one user’s profile to another is publishing of third-party content, which is entitled to Section 230 immunity. In essence, the recommendation function challenged is indistinguishable from publishing; it is the means through which defendants publish third-party content to users. Plaintiffs do not explain how the alleged defects could be addressed without requiring defendants to change how they publish such content. Indeed, the only solution they suggest for addressing the alleged problems caused by the connection of child and adult profiles is to eliminate product features that recommend accounts between children and adult strangers.”

Third, Section 230 also immunizes defendants where the products are allegedly defective because they provide short-form and ephemeral content. “Editorial decisions such as determining the length of content published and how long to publish content are traditional editorial functions immune under Section 230, where exercised with regard to third-party content.”

Fourth, with respect to private messaging, “plaintiffs cite no authority indicating that posting third-party content is not publishing where it is posted only to one other person. Indeed, when confronted with this exact question in Fields, the court held that private messaging functions do fall within the publishing umbrella.”

Fifth, where notifications are made to alert users to third-party content, Section 230 bars plaintiffs’ product defect claims. “This includes notifications that someone has commented on or liked a user’s post.”

Sixth, to the extent plaintiffs challenge defendants’ “use of algorithms to determine whether, when, and to whom to publish third-party content, Section 230 immunizes defendants. Whether done by an algorithm or an editor, these are traditional editorial functions that are essential to publishing. Further, plaintiffs identify no means by which defendants could fix this alleged defect other than by altering when, and to whom they publish third-party content.”

With respect to the question of whether the defendants’ platforms could be considered “products”, the Court generally rejects the “all-or-nothing” notion that the platforms must be characterized in their entirety as either (i) “services”, (ii) “tangible” property, (iii) analogous to “tangible personal property”, (iv) akin to “ideas, content or free expression”, or (v) “software” products.

However, the Court finds that certain functionalities constitute “products”:

“The first three design defects relate to defendants’ allegedly defective parental controls and age verification systems, namely: a failure to implement robust age verification processes to determine users’ ages; a failure to implement effective parental controls; and a failure to implement effective parental notifications. The Court begins by asking whether these alleged defects are analogous to tangible personal property in the context of their use and distribution. The answer is yes. Myriad tangible products contain parental locks or controls to protect young children. Take, for instance, parental locks on bottles containing prescription medicines. Other examples include parental locks on televisions that enable adults to determine which channels or shows young children should be permitted to watch while unsupervised. The Court also considers whether these defects concern design elements of defendants’ platforms and are content-agnostic, as plaintiffs argue, or are more akin to ideas, content, and free expression upon which products liability claims cannot be based. Again, these identified defects primarily relate to the manner in which young users are able to access defendants’ apps, including whether their age is accurately assessed during the sign-up process and whether, subsequent to signing up, their activity and settings can be accessed and controlled by their parents. These defects are therefore more akin to user interface/experience choices, such as those found to be products….

“The next two design defects pertain to app session duration: a failure to implement opt-in restrictions to the length and frequency of use sessions; and a failure to implement default protective limits to the length and frequency of use sessions. Again, the Court begins with an analogy to tangible personal property. The most obvious analog to these identified defects is physical timers and alarms, which have long been in use. Modern examples are also available. For instance, many of us carry in our pockets smart phones which are tangible products. These phones contain features that enable users to receive auto-notifications should they exceed pre-set screen time limits. These examples are sufficiently analogous to tangible personal property in terms of their use and distribution. Importantly, these alleged defects are also content-agnostic. Plaintiffs’ theory concerns the manner in which users access the apps (i.e., for uninterrupted, long periods of time), not the content they view there. For this reason, these alleged defects are not excluded on the grounds that they pertain to ideas, thoughts, and expressive content under Winter and its progeny….

With respect to Creating Barriers to Account Deactivation and/or Deletion, “Plaintiffs allege that each defendant’s account deactivation/deletion process is needlessly complicated and serves to disincentivize users from leaving their respective social media platforms. Here, defendants’ global arguments casting all of plaintiffs’ allegations as essentially content-related are particularly lacking….”

With respect to the Failure to Label Edited Content, plaintiffs allege defendants fail to label images and videos that have been edited through in-app filters. “This alleged defect concerns the design of defendants’ social media platforms rather than the content made available through such platforms. That said, the Court recognizes that labeling, or failing to label, content is in some way tied to the nature of the content itself. However, that connection relates to the output of the labeling, not the labeling tool itself. On balance, and given the posture of this litigation, the Court is required to accept plaintiffs’ allegations as true when testing the sufficiency of their claims. For this reason, the Court finds that this design defect may proceed as a product to the extent that plaintiffs’ allegations center on the design of the filter. For instance, labeling a photo as ‘edited’ does not alter the underlying photo so much as it guides the user in better understanding how to interpret that photo. The Court finds this distinction meaningful. Accordingly, while a closer question, plaintiffs have plausibly stated the existence of a product relative to this defect….

“The next alleged defect concerns defendants’ filters, which enable users to manipulate content prior to posting it on defendants’ platforms or otherwise sharing it with others. Plaintiffs challenge two main categories of filters. One, they target filters that permit users to blur imperfections and otherwise enhance their appearance in order to create the perfect selfie. Plaintiffs assert the widespread use of such filters promotes unattainable beauty standards and facilitates social comparison, which combine to cause negative mental health outcomes for users, particularly young girls. Two, they target filters like Snapchat’s Speed Filter, which enable users to overlay content on top of existing content. Specifically, the Speed Filter is a functionality that enables users to overlay the speed they are traveling in real life onto a photo or video before sharing that content with others via the Snapchat app. The Court examines these categories of filters separately. With respect to the filters that permit appearance alteration, the Court notes that defendants, admittedly in the First Amendment context, have referred to such filters as tools that allow users to speak to one another, such as by creating or modifying their own expression (including with visual effects that change the look of images). Defendants’ use of the word ‘tools’ here is notable because defendants implicitly concede that a distinction exists between a ‘tool’, or functionality, that permits users to manipulate content and the content itself. Here, the concession inures to plaintiffs’ benefit as it bolsters their contention that this alleged defect is really about design, not content. Given the procedural posture, plaintiffs’ products liability claims may proceed with respect to defendants’ appearance-altering filters….

“Finally, the Court analyses plaintiffs’ allegations that defendants failed to design their platforms to include reporting protocols that allow users or visitors to report CSAM and adult predator accounts specifically without the need to create or log in to the products prior to reporting. The Court determines this allegation specifically concerns the design of defendants’ platforms. Plaintiffs seek to hold defendants accountable for requiring users to have logged into a registered account in order to report certain obscene content or profiles. This is quintessentially a matter of design, user interface, and system architecture rather than content. Accordingly, the Court determines that plaintiffs have adequately alleged that the design of defendants’ CSAM and adult predator account reporting mechanisms are products.”


In re Social Media Adolescent Addiction Litig., No. 22-3047, 2023 WL 7524912, 2023 U.S. Dist. LEXIS 203926 (N.D. Cal. Nov. 14, 2023).