Fake Pentagon explosion photo goes viral: How to spot an AI image

A picture claiming to show an explosion near the Pentagon raises concerns about AI’s ability to produce misinformation.

A fake image appearing to show a large explosion near the Pentagon was shared on social media on Monday, prompting a brief dip in the stock market.

Within minutes, a wave of social media accounts, including some verified ones, shared the fake picture, further amplifying the confusion.

Officials later confirmed that no such incident had occurred.

Social media sleuths, including Nick Waters from Bellingcat, an online news verification group, were quick to point out some notable problems with the image, including:

  • That there were no other firsthand witnesses to corroborate the event, especially in a busy area like the Pentagon. “This is why it’s so difficult (I’d argue effectively impossible) to create a believable fake of such an event,” Waters tweeted.
  • That the building itself looks noticeably different from the Pentagon. This can easily be verified with tools like Google Street View, which let you compare the picture with the building’s real appearance.
  • Other details, including an odd-looking floating lamp post and a black pole protruding from the pavement, were another giveaway that the image was not what it seemed. Artificial intelligence still has a difficult time recreating locations without introducing random artefacts.

How to spot AI-generated and fake images

There are many generative AI tools, such as Midjourney, DALL-E 2 and Stable Diffusion, that can create lifelike images with very little effort.

These tools are trained on large volumes of real images, but they fill in the gaps with their own interpretation when training data is missing. This can result in people with extra limbs and objects that appear morphed into their surroundings.
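As a rough illustration of how little effort image generation takes, the sketch below uses the open-source diffusers library to produce a picture from a single text prompt. It is a minimal sketch under stated assumptions: the model identifier, the prompt and the GPU requirement are illustrative choices, not details from the reporting.

```python
# Minimal sketch of text-to-image generation with an open-source model.
# The model ID, prompt and device are illustrative assumptions.
import torch
from diffusers import StableDiffusionPipeline

# Download a pretrained Stable Diffusion checkpoint (assumed model ID).
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")  # assumes a CUDA-capable GPU is available

# A single short prompt is enough to produce a photorealistic-looking image.
prompt = "a photorealistic city street at dusk, news photo style"
image = pipe(prompt).images[0]
image.save("generated.png")
```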

When you see images online that purport to show a breaking news event, it is worth keeping the following in mind:

  • News doesn’t happen in a vacuum – In the case of a large explosion or big event, expect to see an influx of on-the-ground reports from different people and different angles.
  • Who is uploading the content – Look at the post history of the user account. Do their location and the location of the event add up? Look at who they follow and who follows them. Can you reach out to them or talk to them?
  • Use open-source intelligence tools – Reverse image search tools such as Google Images and TinEye let you upload an image and determine where and when it was first used (a minimal sketch of the underlying idea follows this list). There are other checks you can run too, such as live public traffic camera footage, to verify that an event is actually taking place.
  • Analyse the image and its surroundings – Look for clues in the image like nearby landmarks, road signs and even the weather conditions to help you place where or when the purported event could have taken place.
  • Hands, eyes and posture – When dealing with images of people, pay special attention to their eyes, hands and general posture. AI-generated videos that mimic people, known as deepfakes, tend to have problems with blinking, as most training data sets do not contain faces with their eyes closed. Hands that do not grasp objects correctly or limbs that look unnaturally twisted can also help you spot a fake.
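The matching that underpins reverse image lookups can be approximated with perceptual hashing, which stays stable when a picture is resized or recompressed. The sketch below is a minimal illustration assuming the Python Pillow and imagehash libraries; the file names and the distance threshold are placeholders, not values from any real investigation.

```python
# Minimal sketch of near-duplicate image matching with perceptual hashing.
# File names and the distance threshold are illustrative assumptions.
from PIL import Image
import imagehash

# Perceptual hashes change little under resizing, cropping at the edges or recompression.
suspect_hash = imagehash.phash(Image.open("suspect_post.jpg"))
reference_hash = imagehash.phash(Image.open("known_source.jpg"))

# The hash difference is a Hamming distance; small values suggest the same picture.
distance = suspect_hash - reference_hash
if distance <= 8:  # threshold chosen for illustration only
    print(f"Likely the same image (distance {distance}); check where it first appeared")
else:
    print(f"Images differ (distance {distance})")
```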

For more information on news verification and open-source intelligence investigations, Al Jazeera’s Media Institute has published a handful of guidebooks, available in multiple languages, which can be downloaded below.

  • Finding the truth among the fakes [PDF]
  • News verification – A practical guide [PDF]
  • Open Source Investigations [PDF]

Source: Al Jazeera