OpenAI is working on a tool to identify whether an image was created by DALL-E 3, its AI image generator.
At The Wall Street Journal's Tech Live event on Tuesday, OpenAI CTO Mira Murati said the company is developing technology to detect whether an image was created by DALL-E 3. The tool is "almost 99% reliable," Murati said in the onstage interview, but it is still undergoing internal testing. Murati didn't share details about the tool's release, but the DALL-E 3 page on the OpenAI website says more information about the tool, referred to as a "provenance classifier," will be shared soon.
The advancement of AI-generated imagery has heightened concerns about the spread of disinformation and deepfakes. Tools like DALL-E 3 (available to ChatGPT Plus users), Midjourney, Stable Diffusion, Adobe Firefly, and Shutterstock's AI image generator — even free tools like Bing Image Creator and Google's SGE — are, or will soon be, capable of creating convincing photorealistic images. Companies that offer these tools are safeguarding against harmful use by adding watermarks and by blocking users from creating images of famous people or images that rely on identifiable personal information.
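Watermark checks of this sort often come down to reading provenance metadata embedded in the image file. As a rough illustration only — not any vendor's actual scheme, and the tag name here is hypothetical — a Python sketch using Pillow might look for such a marker in a PNG's text chunks:

```python
from PIL import Image  # pip install Pillow

# Hypothetical tag name used purely for illustration; real generators rely
# on schemes such as C2PA content credentials or invisible watermarks,
# not a plain-text metadata field like this one.
PROVENANCE_KEY = "ai_provenance"

def check_provenance(path: str) -> str | None:
    """Return the provenance string embedded in a PNG, if present."""
    with Image.open(path) as img:
        # For PNGs, tEXt/iTXt metadata chunks surface in the .info dict.
        value = img.info.get(PROVENANCE_KEY)
        return value if isinstance(value, str) else None

if __name__ == "__main__":
    tag = check_provenance("generated.png")
    print(tag if tag else "No provenance metadata found.")
```

In practice, schemes like C2PA content credentials cryptographically sign the embedded metadata, so a real check verifies a signature rather than trusting a plain text field — and metadata of any kind can be stripped, which is why classifier-based detection like OpenAI's is being pursued as a complement.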
Already, deepfakes look so realistic that spreading them can have real consequences. A fake image of the Pentagon on fire caused a momentary dip in the stock market this past May. In early October, actor Tom Hanks posted on Instagram to say he had nothing to do with a deepfake of him promoting a dental plan. What was once considered a hypothetical problem has become a reality, and tools and safeguards are needed to keep deepfakes from becoming a major threat.