Many people tried to use OpenAI’s DALL-E image generator during the election period, but the company says it prevented the tool from being used to create deepfakes. According to OpenAI’s latest report, ChatGPT rejected more than 250,000 requests to generate images of President Biden, President-elect Trump, Vice President Harris, Vice President-elect Vance, and Governor Walz. The company explained that this is a direct result of safety measures it had previously implemented so that ChatGPT refuses to generate images of real people, including politicians.
OpenAI has been preparing for the US presidential election since the beginning of the year. The company laid out a strategy to prevent its tools from being used to spread misinformation and made sure that people asking ChatGPT questions about voting in the US were directed to CanIVote.org. According to OpenAI, ChatGPT drove people to the website around 1 million times in the month leading up to Election Day. The chatbot also generated 2 million responses on Election Day and the day after, telling people who asked about the results to check the Associated Press, Reuters, and other news sources. OpenAI also confirmed that ChatGPT responses “do not express political preferences or endorse candidates, even when explicitly asked.”
Of course, DALL-E isn’t the only AI image generator out there, and plenty of election-related deepfakes have circulated on social media. One such deepfake is a campaign video featuring Kamala Harris that was altered to make her say things she didn’t actually say, such as “I was chosen because I’m the ultimate diversity hire.”
