In an era shaped by technology and constant innovation, the deepfake has emerged as a latent threat worth paying attention to. These synthetic media can manipulate reality as we know it, which makes them a danger we need to be aware of.
Learn how they work, the risks they pose, and how to detect them.
What is a deepfake
The term combines the English words “deep” (from deep learning) and “fake”, referring to false content created artificially with this kind of deep learning technology.
Also called “synthetic media”, deepfakes are images, videos, or audio generated by artificial intelligence that imitate the appearance and voice of real people, making the resulting material seem authentic.
The most common applications of this type of fake content are manipulated videos, audio that recreates a person's voice, and augmented reality filters.
Although deepfakes are usually discussed as a warning about the dangers they entail, there are also potentially positive and even fun uses, for example:
- Face swap applications
- Recreations of historical figures and environments for educational purposes
- The creation of special and visual effects in the film industry.
Types of deepfake
There are two main types of deepfake:
- Deepfaces: images generated from scratch that look real but are entirely fake, created to be used in other videos or animations.
- Deepvoices: audio that impersonates a person's voice, making it sound as if the person really said it.
Both types rely on artificial intelligence and machine learning to create audiovisual material that appears legitimate but is in fact completely fabricated.
Some known examples of deepfake use
Among the most well-known examples of deepfakes are:
- Deepfake video of Obama with the voice of Jordan Peele
In 2018, BuzzFeed published a video starring a fake Barack Obama voiced by the comedian and film director Jordan Peele, in which Obama's image and voice were reproduced to simulate a speech and warn about how easily new technologies can impersonate a person.
- This person does not exist
This is a website that uses generative adversarial networks (GANs), a type of AI algorithm, to generate very convincing faces of people who do not exist. It is the perfect example of a deepface.
- Fake celebrity porn videos
Starting in 2018, a Reddit user popularized the use of deepfakes in adult film scenes by inserting the faces of celebrities such as Emma Watson and Natalie Portman.
The practice spread because images of famous people are easy to obtain through social networks, making it simple to manipulate them and create fake erotic material with the faces of these international celebrities.
- Salvador Dali Museum
In 2019, the Salvador Dalí Museum in the United States used deepfake technology to bring the artist Salvador Dalí back to life at full scale. The recreation could greet visitors, hold conversations, and even take selfies with them.
These are some of the most famous cases, but more and more stories appear that border on illegality and, in some instances, involve large monetary losses. One example is a man in Canada who says he invested $11,000 after watching a deepfake video of Prime Minister Justin Trudeau endorsing an investment platform that turned out to be fraudulent.
How deepfake works
A deepfake is created with artificial intelligence software known as deepfake programs, or with algorithms such as GANs (generative adversarial networks).
Deepfake programs use an algorithm known as an encoder, which works with thousands of shots and angles of the face of the person whose likeness will be used to produce the content.
The encoder detects the similarities between all these shots and works from those common characteristics (facial expressions), compressing the images, learning their patterns, and then reproducing them to create new fake content. A decoder algorithm then takes over, reconstructing faces from the compressed representation.
Face swapping occurs when encoded images (from subject A) are introduced into the opposite decoder (subject B) so that it reconstructs the other person's face. This exchange allows an image of a subject to be created with the facial features and movements of the other individual, obtaining a fake video.
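To make the encoder/decoder idea concrete, here is a minimal, illustrative sketch in Python. It assumes PyTorch, 64x64 face crops, and toy layer sizes; these are simplifying assumptions for illustration, not the workings of any particular deepfake program.

```python
# Sketch of the shared-encoder / dual-decoder idea behind classic face-swap
# deepfakes. All layer sizes are illustrative placeholders.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Compresses a face crop into a low-dimensional code of shared features."""
    def __init__(self, code_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1),   # 64 -> 32
            nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1),  # 32 -> 16
            nn.ReLU(),
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, code_dim),
        )
    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """Reconstructs one specific person's face from the shared code."""
    def __init__(self, code_dim=256):
        super().__init__()
        self.fc = nn.Linear(code_dim, 64 * 16 * 16)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1),  # 16 -> 32
            nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),   # 32 -> 64
            nn.Sigmoid(),
        )
    def forward(self, code):
        x = self.fc(code).view(-1, 64, 16, 16)
        return self.net(x)

encoder = Encoder()
decoder_a = Decoder()  # would be trained only on faces of subject A
decoder_b = Decoder()  # would be trained only on faces of subject B

# Training (not shown) minimizes reconstruction loss for encoder + decoder_a on
# A's photos and encoder + decoder_b on B's photos, so the shared encoder learns
# person-independent features such as pose, expression, and lighting.

# The swap: encode a frame of subject A, but decode it with B's decoder,
# producing B's face with A's expression and head movement.
frame_of_a = torch.rand(1, 3, 64, 64)       # stand-in for a real video frame
swapped = decoder_b(encoder(frame_of_a))    # fake frame: B's identity, A's motion
print(swapped.shape)                        # torch.Size([1, 3, 64, 64])
```

The key design choice is that the encoder is shared while each decoder is person-specific, which is what allows expression and pose to be transferred from one identity to the other.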
Another way to produce deepfakes is through generative adversarial networks (GANs). GANs pit two algorithms against each other: a generator and a discriminator.
The generator creates the synthetic image, while the discriminator tries to tell it apart from the real images; as training continues, the generator learns to produce fakes that blend seamlessly into the stream of genuine images.
These algorithms generally need many repetitions of this process to reach convincing accuracy.
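As a rough illustration of that adversarial loop, the following sketch (again assuming PyTorch; the small fully connected networks, batch sizes, and learning rates are placeholders, not a production GAN) shows how the two algorithms are trained against each other:

```python
# Minimal generator-vs-discriminator training loop; sizes are illustrative only.
import torch
import torch.nn as nn

latent_dim, image_dim = 100, 64 * 64 * 3  # flattened 64x64 RGB face

generator = nn.Sequential(          # turns random noise into a synthetic image
    nn.Linear(latent_dim, 512), nn.ReLU(),
    nn.Linear(512, image_dim), nn.Tanh(),
)
discriminator = nn.Sequential(      # scores images: real (near 1) vs fake (near 0)
    nn.Linear(image_dim, 512), nn.LeakyReLU(0.2),
    nn.Linear(512, 1), nn.Sigmoid(),
)

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
loss_fn = nn.BCELoss()

for step in range(1000):
    # Stand-in for a batch of real face images scaled to [-1, 1].
    real_images = torch.rand(32, image_dim) * 2 - 1
    noise = torch.randn(32, latent_dim)
    fake_images = generator(noise)

    # 1) Train the discriminator to separate real images from generated ones.
    opt_d.zero_grad()
    d_loss = loss_fn(discriminator(real_images), torch.ones(32, 1)) + \
             loss_fn(discriminator(fake_images.detach()), torch.zeros(32, 1))
    d_loss.backward()
    opt_d.step()

    # 2) Train the generator to fool the discriminator into scoring fakes as real.
    opt_g.zero_grad()
    g_loss = loss_fn(discriminator(fake_images), torch.ones(32, 1))
    g_loss.backward()
    opt_g.step()

# After many iterations the generator's fakes become hard to distinguish from
# real images, which is exactly the property deepfakes exploit.
```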
The danger of deepfakes
The main danger of deepfakes is that they have become extremely convincing, which confuses people and can even fool algorithms, making it very difficult to distinguish between what is real and what is fake.
Most deepfakes pose a serious political, social, and economic danger by perpetuating disinformation: they can manipulate information to create fake news, twisting reality to suit an agenda and potentially leading, for example, to the defamation of any person.
This distortion of reality also facilitates online fraud and biometric identity fraud, since deepfakes make it easier to circumvent online verification systems based on facial or voice recognition.
This puts personal safety at risk, exposing anyone to forgery, theft, deception, and scams.
Deepfakes can also be used as a form of revenge, through the creation of adult videos intended for extortion, cyberbullying, or harassment.
They are also a serious threat to the credibility the legal world depends on, since they could easily be used to manipulate or alter evidence, exploiting how difficult such material is to verify.
As technology advances, these dangers are amplified by social networks, given how easy it is to access information and how quickly content goes viral.
How to detect deepfakes
With the evolution of artificial intelligence techniques, it is becoming more and more complex to recognize deepfakes. However, below we share some factors and characteristics that you can take into account to recognize these fake images or videos.
- The face and neck
Deepfakes are usually focused on the face, since it is much easier than doing a full body modification.
Observe carefully if the face matches the person's body. Compare the texture and color of the skin on the neck and face, look at the shadows around the eyes, and whether the facial hair looks realistic or not.
- Blinking
Pay attention to whether there is too little or too much blinking. Deep learning algorithms often fail to reproduce blinking at the natural rate of real people (one way to quantify this is sketched in the code after this list).
- The inside of the mouth
Machine learning algorithms often struggle to accurately imitate the tongue, teeth, and inside of the mouth.
Observe the color of the lips, the movements of the mouth and see if there is blurring inside the mouth when speaking.
- Sound
Synchronization between audio and image often fails. Examine whether the sound matches the lip movements.
- Video source
Researching who shared the file, where and in what context it was published can help you identify if it is a deepfake.
- Video length
Most deepfake videos usually last a few seconds, since producing them takes a lot of time, work and money.
Therefore, if we are faced with a short video with improbable or irrational content, it is most likely a deepfake.
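As a concrete illustration of the blinking cue mentioned above, the following sketch estimates blinks from the eye aspect ratio (EAR) computed over per-frame eye landmarks. The landmark coordinates, threshold, and sample values are hypothetical; any face-landmark detector could supply the inputs.

```python
# Illustrative blink counter based on the eye aspect ratio (EAR).
# EAR drops sharply when the eye closes, so dips in the per-frame EAR
# signal indicate blinks. Values below are made up for demonstration.
import numpy as np

def eye_aspect_ratio(eye):
    """eye: array of six (x, y) landmarks ordered around the eye contour."""
    vertical_1 = np.linalg.norm(eye[1] - eye[5])
    vertical_2 = np.linalg.norm(eye[2] - eye[4])
    horizontal = np.linalg.norm(eye[0] - eye[3])
    return (vertical_1 + vertical_2) / (2.0 * horizontal)

def count_blinks(ear_per_frame, threshold=0.2):
    """Counts how many times the EAR signal dips below the threshold."""
    blinks, closed = 0, False
    for ear in ear_per_frame:
        if ear < threshold and not closed:
            blinks, closed = blinks + 1, True
        elif ear >= threshold:
            closed = False
    return blinks

# Made-up landmarks for one open eye, just to show the EAR calculation.
open_eye = np.array([[0, 2], [2, 3], [4, 3], [6, 2], [4, 1], [2, 1]], dtype=float)
print(round(eye_aspect_ratio(open_eye), 2))    # ~0.33 for this open eye

# Hypothetical per-frame EAR values (as returned by eye_aspect_ratio per frame);
# the dip below 0.2 registers as one blink.
ear_signal = [0.31, 0.30, 0.12, 0.29, 0.30]
print(count_blinks(ear_signal))                # -> 1
```

A real adult blinks roughly 15 to 20 times per minute, so a rate far outside that range in a talking-head video is a signal worth a closer look.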
Risks and impact of deepfake for companies
This synthetic content is essentially a spoofing tool that facilitates all kinds of fraud. In the business environment, users and consumers, employees, and organizations can all be directly affected by this new reality.
The threat is serious because deepfakes are capable of circumventing biometric verification measures, which means they can pass KYC checks. As a result, anyone could become a victim of espionage, blackmail, or fraud at the corporate level.
Against this backdrop, Zenpli becomes an essential safeguard for companies, providing the protection that identity verification requires. It also adds an in-depth inspection and advisory process that helps you determine the impact of these malicious elements, so they do not become an obstacle to achieving your company's goals.
In addition, Zenpli can detect synthetic identity fraud attempts (the creation of a false identity using real personal data, which is much harder to emulate than a document and a selfie) when someone tries to evade biometric validation and the face match against the document.
This reinforces security around the digital footprint and goes further by adding behavior-based indicators, making your digital defenses much harder to defeat.
It is clear that in the face of so many threats, security is a key aspect for users when choosing services linked to technology.
If you want to know how to strengthen your company's security processes, contact us and invest in security to reduce deepfake threats.