
DeepFakes – Of created truths and created realities

Imagine watching a video on YouTube in which Donald Trump, as President of the United States, declares war on China. There is certainly no guarantee that this video is real. Against the background of Trump’s political leitmotif – “there are no taboos, everything is possible” – and the basic trust we place in the medium of video, judging its authenticity is particularly difficult. This video offers a first insight into DeepFakes. [0]

Why are DeepFakes, as we see them in the video presented here, so successful? How are they changing political communication? And do they mean the end of an era in which moving images were considered truth?

What are DeepFakes?

A scenario like the one outlined above is no longer pure fiction, nor is it a purely theoretical concept of political communication. An impressive example of applied DeepFakes – a combination of Deep Learning[1] and fakes – is a video in which Trump – apparently – urges the Belgian public to withdraw from the Paris climate agreement.[2] Behind the term Deep Learning lie algorithms and complex chains of artificial intelligence (AI), so-called neural networks, which can independently derive predictions from large amounts of data. Fake refers to the deliberate manipulation of the original material.[3] From a few photos of Barack Obama, for example, software can create a new, “fake” picture of the former US president. This picture was never actually photographed; it was artificially created – faked – by software. Not even algorithms can reliably tell such imitations apart from real images, let alone we humans.[4]
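How such a “generated” picture comes about can be illustrated with a minimal, purely hypothetical PyTorch sketch: a tiny generator network of the kind used in generative models turns a random noise vector into pixel values – a picture no camera has ever taken. All layer sizes here are arbitrary assumptions; real face generators are far larger and are trained on thousands of photos.

```python
# Minimal sketch (not a real face generator): a tiny neural network that
# maps random noise to an image, illustrating how software can "invent"
# pictures that were never photographed. All layer sizes are arbitrary.
import torch
import torch.nn as nn

class TinyGenerator(nn.Module):
    def __init__(self, noise_dim: int = 64, image_size: int = 32):
        super().__init__()
        self.image_size = image_size
        self.net = nn.Sequential(
            nn.Linear(noise_dim, 256),
            nn.ReLU(),
            nn.Linear(256, 3 * image_size * image_size),
            nn.Tanh(),  # pixel values scaled to [-1, 1]
        )

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        return self.net(z).view(-1, 3, self.image_size, self.image_size)

generator = TinyGenerator()
noise = torch.randn(1, 64)      # random input vector
fake_image = generator(noise)   # an "image" no camera ever recorded
print(fake_image.shape)         # torch.Size([1, 3, 32, 32])
```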

In DeepFaking, existing videos, audio files, or even just individual images[5] of people are used to create new, authentic-looking videos. With the very popular tool “DeepFaceLab”,[6] it is astonishingly simple to create convincing DeepFakes: apart from the selection of the videos to be used, the program carries out all implementation steps on its own. Since the first videos of this kind were made with this program in 2017,[7] the quality of DeepFakes has improved considerably.
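The internals of DeepFaceLab are beyond the scope of this article, but the basic principle that such face-swap tools automate can be sketched: a shared encoder learns a person-independent representation of faces, while a separate decoder per person reconstructs that person’s face. After training, the swap consists of encoding person A’s face and decoding it with person B’s decoder. The following untrained PyTorch sketch, with assumed layer sizes, shows only this structural idea, not the tool’s actual code.

```python
# Sketch of the shared-encoder / two-decoder principle behind face-swap
# tools (untrained; all sizes are illustrative assumptions).
import torch
import torch.nn as nn

IMG = 64 * 64 * 3  # a flattened 64x64 RGB face crop

# One encoder shared by both persons learns a common face representation.
encoder = nn.Sequential(nn.Linear(IMG, 512), nn.ReLU(), nn.Linear(512, 128))

# One decoder per person reconstructs that person's face from the code.
decoder_a = nn.Sequential(nn.Linear(128, 512), nn.ReLU(), nn.Linear(512, IMG), nn.Tanh())
decoder_b = nn.Sequential(nn.Linear(128, 512), nn.ReLU(), nn.Linear(512, IMG), nn.Tanh())

# Training (omitted) would reconstruct A-faces via encoder + decoder_a and
# B-faces via encoder + decoder_b, making the encoder person-agnostic.

face_a = torch.rand(1, IMG)           # stand-in for an aligned face crop of A
swapped = decoder_b(encoder(face_a))  # A's expression rendered as B's face
print(swapped.shape)                  # torch.Size([1, 12288])
```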

The problem

Videos are an important part of the media landscape; YouTube alone has around two billion monthly active users.[8] Beyond their entertainment value, videos are used, for example, to document criminal offenses, to shape public opinion or, as in the case of the Austrian “Ibiza affair”, to bring down a government through investigative journalism. Most people have great confidence[9] in the authenticity of videos, and for good reason: videos have existed since the beginning of the 1890s,[10] but manipulating them only became possible much later – hence the conviction that video recordings are almost inevitably true-to-life images of reality. Until today.

First of all, DeepFakes are simply manipulations of videos – but very cleverly produced, deceptively real-looking manipulations. The behavior of a person is reproduced in such detail that authenticity can no longer be judged by observation alone. Even a person’s way of speaking can be modeled almost perfectly with the help of social engineering.[11] Public figures become puppets of the manipulator: thanks to AI, they do almost anything and say whatever their creator commands. Above all, the manipulator can use these artificial creations to lend his or her own views more authority.

DeepFakes also work in a psychologically highly sophisticated way. For dealing with statements whose truth cannot be directly verified, the human brain has developed a confirmation tendency (the so-called confirmation bias): a dubious situation is reinterpreted in such a way that it matches one’s own pre-existing views.[12]

The end of truth

When excerpts from H. G. Wells’ “War of the Worlds” were first broadcast in the USA at the end of 1938, many listeners panicked, believing that an alien invasion was imminent.[13] Much has changed in the way stories are presented over the past 81 years, and films have created alternative realities like never before. The technology used is so advanced that fictional scenes can be mistaken for images of “reality”. How do we even know that we are not living in the apocalyptic world of “Mad Max”? In short: through our ability to abstract – that is, by classifying what we have seen or heard using additional information. Without this information, however, it is no longer possible to know the truth.

Even though journalism plays a crucial role in classifying facts, we are facing a huge “truth crisis”. A 2019 study by the Pew Research Center[14] shows that trust in news has already fallen massively and that, as a result, less news is consumed. One reason is that checking every single report would take considerable effort, which is why many people prefer not to engage with the news at all. Aviv Ovadya, a fake-news researcher at Stanford University, coined the term “reality apathy”[15] for this: the almost complete disinterest in what happens in reality outside one’s own living environment.

At the same time, reality – and especially facts – fall into the realm of “plausible deniability”: any piece of information could have been invented, which makes it very easy to deny the authenticity of pictures or videos at will. In 2018, for example, Cameroon’s communications minister dismissed a video published by Amnesty International, in which the Cameroonian army executes civilians, as fake news.[16]

The solution?

“Once a political narrative is shifted, it’s almost impossible to bring it back to its original trajectory.”
– Eileen Donahoe

Even if artificial intelligences are getting better and better not only at creating DeepFakes but also at exposing them, revealing the forgeries after the fact is not enough: in a fast-moving media landscape, interest in such retrospective corrections is usually far too small. Another approach is to give original videos a digital watermark online and store it in a blockchain. In a blockchain, information is not stored side by side, as in conventional data systems, but one after the other, so that each piece of information partly consists of the previous information; in addition, several copies of the data set are permanently compared with one another using cryptographic measures, similar to the medieval concept of the tally stick (a minimal sketch of this idea follows below). This procedure is also used for most digital currencies and is almost impossible to manipulate retroactively.

Despite these technical solutions, it is an increasingly important task of political education to ensure that people do not lapse into apathy because of DeepFakes, but instead develop their media literacy and remain – or become – confident users of digital media.
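To make the hash-chaining idea described above more tangible, here is a minimal Python sketch (all names and “fingerprint” strings are illustrative assumptions): each block stores the hash of its predecessor, so retroactively changing one video fingerprint breaks every later link in the chain.

```python
# Minimal hash-chain sketch: each block commits to its predecessor's
# content, so a manipulated entry no longer matches the stored chain.
import hashlib
import json

def block_hash(fingerprint: str, prev_hash: str) -> str:
    payload = json.dumps({"fingerprint": fingerprint, "prev_hash": prev_hash},
                         sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

def make_block(fingerprint: str, prev_hash: str) -> dict:
    return {"fingerprint": fingerprint, "prev_hash": prev_hash,
            "hash": block_hash(fingerprint, prev_hash)}

genesis = make_block("sha256-of-original-video-1", "0" * 64)
second = make_block("sha256-of-original-video-2", genesis["hash"])

# Retroactive tampering is detectable: the second block still points at
# the hash of the original first block.
genesis["fingerprint"] = "sha256-of-manipulated-video"
print(block_hash(genesis["fingerprint"], genesis["prev_hash"])
      == second["prev_hash"])  # False -> the chain is broken
```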

To end this article on a positive note: DeepFakes can also be used for good. “Project Revoice”, for example, has developed a technology with which people suffering from ALS can get their voice back.[17] DeepFake technology has also opened up a new level of work for voice actors: for the first time, the film characters can be adapted to the voice actors’ speech – and not the other way around. However, to be able to distinguish between the negative and positive facets of DeepFakes, a competent handling of this technology is needed first.


[3] Cf. Jones, Nicola: Computer science: The learning machines, in: Nature, vol. 505, 09.01.2014, pp. 146–148.

[12] Cf. Oswald, Margit E. / Grosjean, Stefan: Confirmation Bias, in: Pohl, Rüdiger F. (ed.): Cognitive Illusions – A Handbook on Fallacies and Biases in Thinking, Judgement and Memory, New York 2004, pp. 79–96.

