New Mexico's Meta lawsuit: some police officers testify that Meta's AI is sending a flood of "junk" CSAM reports that are draining resources and slowing cases

Read full article → theguardian.com
New Mexico's lawsuit against Meta has surfaced significant problems with the company's AI-driven reporting of child sexual abuse material (CSAM). Law enforcement officers from the Internet Crimes Against Children (ICAC) taskforce testified that Meta's AI generates a high volume of "junk", low-quality CSAM reports that overwhelm investigators, drain resources, and slow legitimate child-exploitation investigations. The ICAC taskforce, a nationwide network of agencies coordinated by the US Department of Justice, saw the number of "cybertips" it received double between 2024 and 2025, with a substantial share of unviable reports coming from Meta platforms such as Instagram, Facebook, and WhatsApp. The testimony comes as Meta faces allegations that it prioritizes profits over child safety; the company maintains that it cooperates with law enforcement and points to its new safety features. The volume of tips has been further amplified by recent legislation such as the REPORT Act, which broadened reporting obligations for online service providers and may incentivize companies to over-report to avoid penalties.

Key Details

Officers working child-exploitation cases testified that Meta's artificial-intelligence systems are inundating them with "junk" reports of potential child sexual abuse material (CSAM): reports that lack crucial information or describe nothing criminal at all, diverting law enforcement resources and stalling genuine investigations. The testimony emerged during New Mexico's ongoing lawsuit against Meta, in which the state attorney general accuses the company of prioritizing profits over child safety, a claim Meta disputes by pointing to its enhanced safety features and cooperation with authorities.

The sheer volume of these low-quality tips is straining the operational capacity of the Internet Crimes Against Children (ICAC) taskforce. One officer reported that the total number of "cybertips" received doubled from 2024 to 2025, with a substantial proportion coming from Meta platforms such as Instagram, Facebook, and WhatsApp. Because many of these AI-generated reports contain no actionable intelligence, investigators often cannot identify a perpetrator even when they know a crime may have occurred, a source of considerable frustration and wasted effort.

Meta is the largest reporter of CSAM to the National Center for Missing and Exploited Children (NCMEC), generating 13.8 million reports in 2024 alone. The Guardian previously reported that AI-generated tips often require a warrant to access because of Fourth Amendment protections, further slowing investigations; Meta says court rulings have made this burden worse. The company asserts that its AI systems are crucial for detecting CSAM at a scale human review alone could not match, and it has introduced new safety features designed to work even with encrypted chats, though child safety groups have criticized the rollout of encryption in services like Facebook Messenger.
