YouTube is developing new tools to help prevent artists and creators from having their likenesses used without their permission. The company said Thursday it is building technology to detect AI-generated content that uses people’s faces and voices, with a test program set to begin early next year.
The upcoming facial detection technology will let people across industries “detect and moderate” content that uses AI-generated depictions of their faces. YouTube says the tools will let creators, actors, musicians and athletes spot videos containing deepfake versions of themselves and decide how to respond. The company has not yet announced a release date for the facial detection tool.
Meanwhile, the “synthetic vocal identification” technology will be added to Content ID, YouTube’s automated IP protection system, which the company says will allow partners to find and manage content that uses AI-generated versions of their voices.
“As AI evolves, we believe it should enhance human creativity, not replace it,” Amjad Hanif, YouTube’s vice president of product for creators, wrote in a blog post. “We’re committed to working with our partners to ensure future advancements amplify their voices, and we’ll continue to develop guardrails to address concerns and achieve our shared goals.”