Microsoft announced that it has partnered with StopNCII to remove non-consensual intimate images, including deepfakes, from its Bing search engine.
When a victim opens a “case” with StopNCII, the tool creates digital fingerprints (also called “hashes”) of the private images and videos directly on that individual’s device, so the victim never has to upload the files themselves. Only the hashes are then sent to participating industry partners, who can scan their platforms for matching content and remove it if it violates their content policies. The process also applies to AI-generated deepfakes of real people.
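The article doesn’t specify which hashing algorithm StopNCII uses, but matching of this kind is typically done with perceptual hashes, which stay similar even after an image is re-encoded or lightly edited (unlike cryptographic hashes, which change completely). The sketch below is a minimal illustration of that flow using the open-source `imagehash` Python library; the function names and the match threshold are assumptions for illustration, not StopNCII’s actual implementation.

```python
# Illustrative sketch only: StopNCII's real hashing scheme isn't detailed in
# the article. This uses the open-source `imagehash` library
# (pip install imagehash pillow) to show the general idea of hash-based
# matching. Function names and the threshold below are hypothetical.
import imagehash
from PIL import Image

MATCH_THRESHOLD = 8  # max Hamming distance to count as a match (assumed value)


def fingerprint(path: str) -> imagehash.ImageHash:
    """Compute a perceptual hash of an image locally.

    Only this short hash would ever leave the device; the image itself
    is never uploaded.
    """
    return imagehash.phash(Image.open(path))


def is_match(candidate: imagehash.ImageHash,
             case_hashes: list[imagehash.ImageHash]) -> bool:
    """Compare an uploaded image's hash against hashes from victims' cases.

    Perceptual hashes of visually similar images differ in only a few bits,
    so a small Hamming distance (computed here via `-`) signals a likely
    match even after re-encoding or minor edits.
    """
    return any(candidate - h <= MATCH_THRESHOLD for h in case_hashes)


# On the victim's device: hash the private image and submit only the hash.
#   case_hashes = [fingerprint("private_photo.jpg")]
#
# On a partner platform: hash each new upload and check it against the list.
#   if is_match(fingerprint("uploaded_image.jpg"), case_hashes):
#       ...  # flag the upload for review under the platform's content policies
```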
Several other technology companies have also agreed to work with StopNCII to remove intimate images that have been shared without permission. Meta helped build the tool, which is used across its Facebook, Instagram and Threads platforms. Other services collaborating on the effort include TikTok, Bumble, Reddit, Snap, Niantic, OnlyFans, PornHub, Playhouse and Redgifs.
Curiously, Google isn’t on the list. The tech giant has its own set of tools for reporting non-consensual imagery, including AI-generated deepfakes. But its lack of participation in one of the few centralized locations for removing revenge porn and other private images places an additional burden on victims, who will have to take a piecemeal approach to restoring their privacy.
In addition to efforts like StopNCII, the US government has taken steps this year to specifically address the harms of deepfake non-consensual imagery: The US Copyright Office has called for new legislation on the issue, and a group of senators has moved to provide protections for victims with the NO FAKES Act, introduced in July.
If you believe you have been a victim of non-consensual intimate image sharing, you can open a case with StopNCII and file a report with Google. If you are under 18, you can report it to NCMEC (the National Center for Missing & Exploited Children).