Meta Let Adult Strangers Contact Minors And It Was Aware: Explosive Unsealed Allegations Against Meta

Newly unsealed court filings have cast a harsh light on Meta Platforms Inc., accusing the social media giant of knowingly allowing millions of adult strangers to contact minors on its platforms while downplaying and concealing serious risks to young users, Time reported in an exclusive. The allegations reportedly stem from a major multidistrict lawsuit encompassing more than 1,800 plaintiffs, including children, parents, school districts, and state attorneys general, targeting Meta alongside the other tech companies behind popular social apps such as Instagram, TikTok, Snapchat, and YouTube.

The plaintiffs’ brief, filed in the Northern District of California and first reported by Time, draws on sworn depositions from current and former Meta executives, internal communications, and company research obtained through the lawsuit’s discovery process. It reveals a disturbing internal culture that recklessly prioritised user growth over the safety and mental wellbeing of children and teenagers.

One former Instagram safety chief, Vaishnavi Jayakumar, testified that Meta operated a disturbingly lenient "17 strikes" policy for accounts reported for sex trafficking: an account would not be suspended until it had accumulated at least 16 violations. This tolerance threshold, confirmed by internal documents, far exceeds industry norms and allegedly contributed to the widespread persistence of sex trafficking content on Instagram.

The filings also allege that Meta knowingly misled the public and Congress about the extent of harms on its platforms. Internal research dating back to at least 2017 repeatedly found that Instagram and Facebook use among teenagers correlated with increased anxiety, depression, and loneliness. A 2019 "deactivation study" indicated that users who stopped using Meta's platforms for a week showed improved mental health outcomes. However, Meta suppressed these results, fearing damage to the company's reputation, and responded dishonestly to Congressional inquiries about these issues.

One of the most alarming charges is that for years Instagram allowed adults to connect with and harass teenage users, despite internal recommendations dating back to 2019 that all teen accounts be set to private by default to prevent unwanted adult contact. Meta's growth team, however, rejected the safety measure for fear it would reduce engagement and growth metrics. The brief states that by delaying default privacy settings until 2024, Meta exposed young users to billions of inappropriate interactions with adults, known internally as "IIC" (inappropriate interactions with children).

The suit also outlines Meta’s deliberate focus on capturing and retaining young users, even targeting preteens. Internal documents from 2024 described acquiring new teen users as “mission critical” to Instagram’s success. The company deployed strategies such as sending location-based notifications to students during school hours, encouraging habitual platform use. Despite federal legal requirements that children under 13 not be allowed on social media, Meta was aware that millions of children under 13 used Instagram and other services, yet failed to enforce age restrictions.

Meta’s internal debates on reducing platform toxicity paint an equally bleak picture. The company initially shelved initiatives like “hiding likes” on posts, which research showed could lessen harmful social comparison among teens, because the feature negatively impacted engagement and ad revenue. Similarly, warnings about mental health risks from beauty filters were ignored or reversed when growth concerns prevailed.

Furthermore, Meta allegedly did not automatically remove harmful content, including child sexual abuse material and posts glorifying self-harm, often allowing such material to remain accessible to vulnerable teenage users. Even when its AI tools detected likely violations, content was frequently kept live unless detection confidence crossed a high removal threshold, contributing to ongoing exposure to harmful material.

At the core of the allegations is a business strategy that plaintiffs liken to the historical recklessness of tobacco companies. Lead attorney Previn Warren compared Meta to "pushers" who knowingly market addictive, damaging products to children to drive profit. Meta studied these addiction dynamics under the guise of "problematic use" but routinely suppressed or ignored recommendations for features that could reduce teen addiction to its platforms.

Meta did introduce some updated safety features in 2024, including Instagram Teen Accounts, which set privacy defaults for minors and limit contact with adults, but these came only after years of internal resistance. One former Meta executive, Brian Boland, summed up the company's stance bluntly: "They don't meaningfully care about user safety."

Meta has declined to comment to Time on the newly unsealed allegations.