We’re unprepared for the threat GenAI on Instagram, Facebook, and WhatsApp poses to kids

Waves of Child Sexual Abuse Material (CSAM) are inundating social media platforms as bad actors target these sites for their accessibility and reach.



The National Center for Missing and Exploited Children (NCMEC) received more than 36 million reports of suspected CSAM in 2023, containing over 100 million files. An overwhelming 85% of those reports came from Meta, primarily Facebook, Instagram, and WhatsApp.



As if NCMEC and law enforcement didn’t have their hands full identifying victims and perpetrators, we’re now seeing a new threat turbocharging the already rampant spread of this illicit content: Artificial Intelligence-Generated Child Sexual Abuse Material (AIG-CSAM). Bad actors are using widely available AI tools to create AIG-CSAM, which is still CSAM, and possessing it is still a federal crime.



President Biden recently signed the REPORT Act into law, which mandates that social platforms report all kinds of CSAM, but the proliferation of AIG-CSAM is outpacing our institutions’ capacity to combat it. Offenders often create these harmful and illegal deepfakes both by manipulating benign images of minors found online and by altering existing CSAM, revictimizing the children depicted. In June of last year, the FBI warned the public that AI-generated sextortion schemes are on the rise.



Navigating the complexities of detection



This urgent problem is only becoming more complex, creating serious headwinds for everyone involved. The influx of AIG-CSAM reports makes it harder for law enforcement to identify authentic CSAM endangering real minors. NCMEC has responded by adding a “Generative AI” field to its CyberTipline form to help triage incoming reports, but it notes that many reports lack this metadata. This may be because reporters can’t tell AI-generated content from the real thing, which compounds the flood of low-quality reports NCMEC must sift through.



The good news is that AI is getting better at policing itself, but there are limitations and challenges. OpenAI’s newly released “Deepfake Detector” claims to detect synthetic content from its own image generator, DALL-E, but it is not designed to detect images produced by other popular generators such as Midjourney and Stability AI’s Stable Diffusion. Companies like Meta are also increasingly flagging and labeling AI-generated content on their platforms, but most of it is relatively benign (think: Katy Perry at the Met), making AIG-CSAM detection like finding a needle in a haystack.



To fight AIG-CSAM, developers must dig into design



Much more can be done along the pipeline of responsibility, beginning with the AI developers making these tools inaccessible to those who would exploit them. Developers must embrace a more stringent set of core design practices, including mitigations like removing CSAM from training data, which can otherwise lead AI models to generate or replicate such material and spread further harm. Developers should also invest in stress-testing models to understand how they can be misused and in limiting the child-related prompts users can submit.
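To make that last point concrete, here is a deliberately naive sketch of a prompt-screening gate an image-generation service might place in front of its model. It is illustrative only: the keyword list, the screen_prompt helper, and the demo prompts are assumptions of this sketch, and production systems rely on trained safety classifiers and human review rather than simple pattern matching.

```python
import re

# Illustrative patterns only; real moderation relies on trained classifiers, not keyword lists.
MINOR_TERMS = re.compile(
    r"\b(child|children|kid|kids|minor|minors|teen|teenager|toddler|infant)\b",
    re.IGNORECASE,
)

def screen_prompt(prompt: str) -> bool:
    """Return True if the prompt references minors and should be blocked or escalated."""
    return bool(MINOR_TERMS.search(prompt))

if __name__ == "__main__":
    for prompt in ["a watercolor of a mountain lake", "a photorealistic child at a birthday party"]:
        verdict = "blocked for review" if screen_prompt(prompt) else "allowed"
        print(f"{prompt!r}: {verdict}")
```

Even this toy example illustrates the tension developers face: overly broad filters block legitimate uses, while narrow ones are trivially evaded, which is why stress-testing models with adversarial prompts matters as much as the filter itself.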



Platforms must invest in CSAM detection



From a technological perspective, platform investment in CSAM detection involves a combination of hashing images into digital fingerprints and matching them against databases of known CSAM, machine-learning classifiers for previously unseen CSAM, and models that can detect AI-generated content.
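As a rough illustration of the hash-matching piece, here is a minimal Python sketch using the open-source imagehash library. The KNOWN_HASHES entries and MAX_DISTANCE threshold are placeholders invented for this example; real platforms use purpose-built fingerprints such as Microsoft’s PhotoDNA or Meta’s PDQ and vetted hash lists maintained by organizations like NCMEC.

```python
# Minimal sketch of matching an image's digital fingerprint against known hashes.
# Requires: pip install pillow imagehash
import imagehash
from PIL import Image

# Placeholder hex strings, not real database entries.
KNOWN_HASHES = {
    imagehash.hex_to_hash("d879f8f89b1bbf01"),
    imagehash.hex_to_hash("ffd8ffe000104a46"),
}

# Hamming-distance threshold; tuning it trades recall against false positives.
MAX_DISTANCE = 8

def matches_known_hash(image_path: str) -> bool:
    """Return True if the image's perceptual hash is close to any known fingerprint."""
    candidate = imagehash.phash(Image.open(image_path))
    return any(candidate - known <= MAX_DISTANCE for known in KNOWN_HASHES)
```

Hash matching only catches material that has already been identified and fingerprinted, which is why the machine-learning and AI-detection layers, imperfect as they are, remain necessary.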



But machine learning alone isn’t enough; it is known to generate significant false positives in this area, making it difficult to find the signal in the noise. What’s more, bad actors constantly change their tactics, using seemingly innocuous hashtags and coded language known to their community to find each other and exchange illegal material.



Politicians must translate bipartisan support into funding



From a governmental perspective, child safety is thankfully an area with resounding bipartisan support. Although the REPORT Act represents positive governmental action to uphold platform accountability, the legislation has received criticism for compounding the overreporting problem NCMEC already faces. Platforms are now incentivized to err on the side of caution for fear of being fined. To address this, the government must appropriately fund organizations like NCMEC to tackle the surge of reports spurred by both the legislation and AI.



Parents must understand the latest threats



Finally, parents can play an integral role in protecting their children. They can discuss the very real risk of online predators with their kids. Parents should also set their own social media profiles, which likely contain images of their kids, to private, and ensure privacy settings are in place on their kids’ profiles.



Reverse image searches on Google can help identify photos parents don’t realize are on the open web, and services like DeleteMe can remove private information that has been scraped and shared by shady data brokers.



The future of child safety in the AI era



Child sexual abuse material is not a new challenge, but its exacerbation by generative AI represents a troubling evolution in how such material proliferates. To effectively curb this, a unified effort from all stakeholders—AI developers, digital platforms, governmental bodies, nonprofits, law enforcement, and parents—is essential.



AI developers must prioritize robust, secure systems that are resistant to misuse. Platforms need to diligently identify and report any abusive content, while the government should ensure adequate funding for organizations like NCMEC. Meanwhile, parents must be vigilant and proactive.



The stakes could not be higher; the safety and well-being of our children in this new AI-driven age hang in the balance.