On November 30, Chinese foreign ministry spokesman Lijian Zhao pinned an image to his Twitter profile. In it, a soldier stands on an Australian flag and grins maniacally as he holds a bloodied knife to a boy’s throat. The boy, whose face is covered by a semi-transparent veil, carries a lamb. Alongside the image, Zhao tweeted, “Shocked by murder of Afghan civilians & prisoners by Australian soldiers. We strongly condemn such acts, &call [sic] for holding them accountable.”
The tweet references a recent report by the Australian Defence Force, which found “credible information” that 25 Australian soldiers were involved in the murders of 39 Afghan civilians and prisoners between 2009 and 2013. The image purports to show an Australian soldier about to slit the throat of an innocent Afghan child. Explosive stuff.
Except the image is fake. On closer examination, it’s not even very convincing; it could have been put together by a Photoshop novice. It is a so-called cheapfake: a piece of media that has been crudely manipulated, edited, mislabeled, or presented out of context in order to spread disinformation.
The cheapfake is now at the heart of a major international incident. Australia’s prime minister, Scott Morrison, said China should be “utterly ashamed” and demanded an apology for the “repugnant” image. Beijing has refused, instead accusing Australia of “barbarism” and of trying to “deflect public attention” from alleged war crimes by its armed forces in Afghanistan.
There are two important political lessons to draw from this incident. The first is that Beijing sanctioned the use of a cheapfake by one of its top diplomats to actively spread disinformation on Western online platforms. China has traditionally exercised caution in such matters, aiming to present itself as a benign and responsible superpower. This new approach is a significant departure.
More broadly, however, this skirmish also shows the growing importance of visual disinformation as a political tool. Over the last decade, the proliferation of manipulated media has reshaped political realities. (Consider, for instance, the cheapfakes that catalyzed a genocide against the Rohingya Muslims in Myanmar, or helped spread covid-19 disinformation.) Now that global superpowers are openly sharing cheapfakes on social media, what’s to stop them (or any other actor) from deploying more sophisticated visual disinformation as it emerges?
For years, journalists and technologists have warned about the dangers of “deepfakes.” Broadly, deepfakes are a type of “synthetic media” that has been manipulated or created by artificial intelligence. They can also be understood as the “superior” successor to cheapfakes.
Technological advances are simultaneously improving the quality of visual disinformation and making it easier for anyone to generate. As it becomes possible to produce deepfakes through smartphone apps, almost anyone will be able to create sophisticated visual disinformation at next to no cost.
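For readers curious about the mechanics, the classic face-swap technique behind early deepfakes is conceptually simple: a single encoder learns to compress faces of two people into a shared representation, each person gets their own decoder, and swapping decoders at generation time transplants one person’s face onto the other’s pose and expression. Below is a minimal PyTorch sketch of that idea, not the code of any real app; the layer sizes, image dimensions, and random stand-in frame are purely illustrative.

```python
import torch
import torch.nn as nn

# Minimal sketch of the shared-encoder autoencoder idea behind classic
# face-swap deepfakes. All shapes and layer sizes are illustrative.

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1),   # 64x64 -> 32x32
            nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1),  # 32x32 -> 16x16
            nn.ReLU(),
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1),  # 16 -> 32
            nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),   # 32 -> 64
            nn.Sigmoid(),
        )

    def forward(self, z):
        return self.net(z)

# One encoder is trained on face crops of both person A and person B;
# each person gets a dedicated decoder. Training (omitted here) would
# minimize reconstruction loss: decoder_a(encoder(face_a)) ~ face_a,
# and likewise for B.
encoder = Encoder()
decoder_a = Decoder()
decoder_b = Decoder()

# The swap: encode a frame of person A, then decode with B's decoder,
# producing B's face with A's pose and expression.
face_a = torch.rand(1, 3, 64, 64)  # stand-in for a real video frame
swapped = decoder_b(encoder(face_a))
print(swapped.shape)  # torch.Size([1, 3, 64, 64])
```

In practice such models are trained on thousands of aligned face crops per person, but an architecture of roughly this size already fits comfortably on consumer hardware, which is part of why the technique has spread so quickly.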
False alarm
Deepfake warnings reached a fever pitch ahead of the US presidential election this year. For months, politicians, journalists, and academics debated how to counter the perceived threat. In the run-up to the vote, state legislatures in Texas and California even preemptively outlawed the use of deepfakes to sway elections.