The White House released a statement today outlining efforts by several AI companies to curb the creation and distribution of image-based sexual abuse. Participating companies revealed the steps they are taking to prevent their platforms from being used to create non-consensual intimate images (NCII) of adults and child sexual abuse material (CSAM).
Specifically, Adobe, Anthropic, Cohere, Common Crawl, Microsoft and OpenAI announced new commitments. All of these companies except Common Crawl agreed:
“We will incorporate feedback loops and iterative stress-testing strategies into our development process to prevent our AI models from outputting image-based sexual abuse content.”
And, where appropriate, “remove nudity from our AI training datasets.”
Because this is a voluntary initiative, today’s announcement creates no enforcement mechanism and no consequences for companies that fail to meet their commitments. Still, any good-faith effort to address this serious issue should be applauded. Notable absences from today’s White House announcement include Apple, Amazon, Google and Meta.
Separate from the federal effort, many major technology and AI companies are working to make it easier for victims of deepfakes to stop the spread of their images and videos. StopNCII has partnered with multiple companies for a comprehensive approach to removing this content, and other companies are rolling out their own tools for reporting AI-generated image-based sexual abuse on their platforms.
If you believe you have been a victim of non-consensual intimate image sharing, you can report it to StopNCII. If you are under 18, you can report it to NCMEC.
