(LifeSiteNews) – A newly unsealed court filing alleges that Facebook and Instagram parent company Meta knew how widespread sexual and psychological exploitation of children was on its platforms but publicly downplayed the risk and knowingly allowed repeat offenders to remain despite more than a dozen violations.
Time magazine reported that the filing, unsealed November 21, contains testimony from former Instagram head of safety and well-being Vaishnavi Jayakumar that, upon joining the company in 2020, she discovered it had a policy under which one “could incur 16 violations for prostitution and sexual solicitation,” but it would take the 17th before “your account would be suspended.” She claimed to have raised the issue multiple times, only to be rebuffed on the grounds that fixing it would be too difficult.
The 1,800-plus plaintiffs, who include families, school districts, and attorneys general across multiple states and localities, accuse Instagram, TikTok, Snapchat, and YouTube of “relentlessly pursu(ing) a strategy of growth at all costs, recklessly ignoring the impact of their products on children’s mental and physical health,” and further allege that “Meta never told parents, the public, or the Districts that it doesn’t delete accounts that have engaged over 15 times in sex trafficking.”
“Meta has designed social media products and platforms that it is aware are addictive to kids, and they’re aware that those addictions lead to a whole host of serious mental health issues,” argued Previn Warren, one of the plaintiffs’ lead attorneys. “Like tobacco, this is a situation where there are dangerous products that were marketed to kids. They did it anyway, because more usage meant more profits for the company.”
Meta denied the so-called “17x” rule, insisting it removes accounts immediately upon such suspicions and has worked proactively for years to protect minors on its platforms. The plaintiffs counter that they have documentation corroborating Jayakumar’s allegations, and that Meta knew more than it either let on or acted upon regarding evidence of its platforms’ negative impacts on teen mental health, including eating disorders and suicide.
“We know parents are worried about their teens having unsafe or inappropriate experiences online, and that’s why we’ve significantly reimagined the Instagram experience for tens of millions of teens with new teen accounts,” a company spokeswoman previously told Time. “These accounts provide teens with built-in protections to automatically limit who’s contacting them and the content they’re seeing, and teens under 16 need a parent’s permission to change those settings. We also give parents oversight over their teens’ use of Instagram, with ways to see who their teens are chatting with and block them from using the app for more than 15 minutes a day, or for certain periods of time, like during school or at night.”
The brief also alleges that Meta lied to Congress in its 2020 testimony to the Senate Judiciary Committee when it answered “no” to a written question asking whether it was “able to determine whether increased use of its platform among teenage girls has any correlation with increased signs of depression” or “increased signs of anxiety.” The plaintiffs maintain this answer was false because, the year before, Meta had begun a “deactivation study” that found one week without Facebook or Instagram was associated with reduced rates of anxiety, depression, and loneliness among teens. The plaintiffs argue Meta was withholding damaging evidence; Meta maintains the findings were not useful because the improvement appeared only among users who already believed Facebook was bad for them.
Last year, The New York Times and The Wall Street Journal also reported on evidence that Meta was aware its subscription tools were being used to facilitate child sexual exploitation but neglected to solve the issue, specifically 2023 warnings from safety staffers to superiors that paid subscription tools were being used by hundreds of “parent-managed minor accounts” to sell images of their own young daughters in swimsuits and leotards to adult male users.
The photos themselves were not sexual, nude, or otherwise illegal, but many customers made it perfectly clear to the mothers running the accounts that they derived sexual enjoyment from them. “Sometimes parents engaged in sexual banter about their own children or had their daughters interact with subscribers’ sexual messages,” the Journal reported. “Meta’s recommendation systems were actively promoting such underage modeling accounts to users suspected of behaving inappropriately online toward children.”
The Times added that Instagram users who reported sexually explicit images and suspected predators were “typically met with silence or indifference,” and that those who used the block function on “many” such accounts were themselves penalized with limits on their access to certain features.