
AI Is Marking Some Real Images From the War in Israel As Fake

2023-10-16 04:29

Some artificial intelligence programs are marking images from the war in Israel and Palestine as fake, despite them being real photographs depicting actual events.

A popular free AI image detector labeled a photo of an infant killed in Hamas’ recent attack on Israel as AI-generated, even though the image is likely real, 404 Media reports.

The image was examined by Hany Farid, a professor at UC Berkeley and one of the world’s leading experts on digitally manipulated images. He says the image, which was first tweeted by Israel’s official Twitter account, shows no signs of having been created by AI.

The claim that the image was fake was first raised by Ben Shapiro, a conservative Jewish commentator, who tweeted the accusation Thursday morning. Later in the day, Jackson Hinkle shared the image as well, along with a screenshot from the AI image detection tool “AI or Not,” which said the image was fake.

A context note later added to the tweet points out that AI or Not is unreliable at detecting AI-generated images and gives users different results for the same image.

Farid notes that one of the easiest ways to spot an AI-generated image is to look at its lines: current AI generators struggle to render highly structured shapes and straight lines. Checking that shadows are consistent with the scene’s lighting is another simple test, since AI also often struggles to produce correct shadowing.

The image in question passes both tests: the shadows on the table are consistent with an overhead light source, and the table leg is straight.
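To make the first heuristic concrete, here is a minimal sketch, assuming Python with OpenCV installed, that counts long straight line segments in an image using a probabilistic Hough transform. It is only a rough illustration of the idea Farid describes, not his forensic method and not how AI or Not works; the edge and line thresholds and the photo.jpg path are placeholder assumptions.

```python
# Rough illustration only: count long, straight line segments in an image.
# Real photos of built environments tend to contain many such segments, while
# AI generators often bend or break them. All thresholds are arbitrary
# placeholders, and "photo.jpg" is a hypothetical file path.
import cv2
import numpy as np

def count_straight_segments(path: str) -> int:
    img = cv2.imread(path)
    if img is None:
        raise FileNotFoundError(path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)  # edge map
    # Probabilistic Hough transform: find line segments at least 100 px long
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, 80,
                            minLineLength=100, maxLineGap=5)
    count = 0 if lines is None else len(lines)
    print(f"{path}: {count} long straight segments detected")
    return count

count_straight_segments("photo.jpg")  # hypothetical file name
```

A real forensic review would combine a geometric check like this with lighting, shadow, and noise analysis rather than trusting any single automated score.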

Farid warns that the tools currently available online aren’t a reliable way to determine whether an image is real or AI-generated, and that even if one of those tools labels an image “real” or “fake,” that doesn’t mean the analysis is accurate.

“It’s a second level of disinformation,” Farid told 404 Media. For now, at least, he warns that we should all be skeptical of results from similar tools, because they frequently produce inaccurate results.
