Over the last few decades, AI has advanced rapidly. It now serves countless purposes across many industries, improving both the quality and the speed of work.
But not every AI product benefits society. New technologies are sometimes turned to malicious ends, and one such technology is the "deepfake."
AI programs trained for the purpose can replace one person's face with another's in a picture or a video. Well-known figures, celebrities, and politicians are frequent victims. This article explains what the technology is based on and how to detect it.
WHAT IS A DEEPFAKE?
The word "deepfake" combines two everyday words: "deep" and "fake." The "deep" refers to deep learning, a branch of AI. Deepfake technology produces synthetic media, replacing or synthesizing faces, speech, and expressions, to make a person appear to do or say something they never did.
Academic institutions took the first steps toward this technology in the 1990s; later it spread to a much wider audience. Although building deepfake software remains a niche activity, the idea has generated a great deal of media attention. The next section shows how deepfake content is made.
HOW DOES DEEPFAKE WORK?
Deepfake software can be built with a variety of machine learning algorithms: algorithms that generate new content from the data they are given. The system must first be trained to synthesize a new face or to alter part of an existing one.
The program is fed a large amount of data and uses it to learn how to generate new data of its own. Most deepfakes are made with autoencoders, and sometimes with generative adversarial networks (GANs). Let's look at what these methods are and how they work.
Autoencoders are self-supervised neural networks that learn to reproduce their own input. They rely on dimensionality reduction: an autoencoder becomes very good at compressing data similar to what it was trained on. Note, though, that the output is a reconstruction, not an exact copy of the input.
An autoencoder has three parts: an encoder, a code, and a decoder. The encoder compresses the input data into the code; the decoder then looks only at the code to reconstruct what the input looked like. Autoencoders come in many varieties, such as denoising, deep, convolutional, and contractive autoencoders.
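The encoder-code-decoder idea can be sketched in a few lines of NumPy. This is a toy linear autoencoder, not production deepfake code: it compresses 8-dimensional inputs into a 3-dimensional code and trains the decoder to reconstruct the input from that code alone. All sizes and learning rates here are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))                # training data: 200 samples, 8 features

W_enc = rng.normal(scale=0.1, size=(8, 3))   # encoder: 8-dim input -> 3-dim code
W_dec = rng.normal(scale=0.1, size=(3, 8))   # decoder: 3-dim code -> 8-dim output
lr = 0.05

def reconstruction_error(X):
    code = X @ W_enc                         # encoder compresses the input
    X_hat = code @ W_dec                     # decoder reconstructs from the code only
    return float(np.mean((X - X_hat) ** 2))

initial_error = reconstruction_error(X)
for _ in range(1000):                        # plain gradient descent on squared error
    code = X @ W_enc
    X_hat = code @ W_dec
    grad_out = 2 * (X_hat - X) / X.shape[0]  # gradient of the loss w.r.t. the output
    grad_code = grad_out @ W_dec.T           # backpropagate through the decoder
    W_dec -= lr * (code.T @ grad_out)
    W_enc -= lr * (X.T @ grad_code)

final_error = reconstruction_error(X)        # lower than before, but not zero:
                                             # the 3-dim code cannot hold everything
```

Because the code is smaller than the input, the reconstruction can never be perfect, which is exactly the "output is not the same as the input" behavior described above.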
A GAN learns from incoming data in order to produce new data. It trains two different kinds of neural networks against each other: a generator and a discriminator. The generator finds regularities or patterns in the data and learns to reproduce them; its output is then fed to the discriminator alongside real data for analysis. The generator's goal is to fool the discriminator.
The discriminator, in turn, is trained to tell real data from generated data, and the harder fakes become to spot, the better both networks learn. GANs are more difficult to train and require more money and time, so in practice they are used more often for photos than for videos.
How Neural Networks Make Deepfakes Possible
We've mentioned neural networks repeatedly in this article, and for good reason: they are at the heart of deepfake technology.
Neural networks enable machine learning through a "feed-forward" structure of interconnected nodes, which loosely resemble the neurons in a brain. And much like a human brain, such a network can learn to do a job by being trained on examples.
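That feed-forward structure is simple to write down. The sketch below is a minimal illustration, with arbitrary layer sizes and untrained random weights: data flows forward through the layers, and each layer's nodes combine their inputs and apply a nonlinearity, loosely like neurons firing.

```python
import numpy as np

rng = np.random.default_rng(42)

def relu(t):
    # a common nonlinearity: a "node" outputs 0 unless its input is positive
    return np.maximum(t, 0.0)

def forward(x, layers):
    # feed-forward: each layer's output becomes the next layer's input
    for W, bias in layers:
        x = relu(x @ W + bias)
    return x

layers = [
    (rng.normal(size=(4, 8)), np.zeros(8)),   # 4 input features -> 8 nodes
    (rng.normal(size=(8, 8)), np.zeros(8)),   # hidden layer of 8 nodes
    (rng.normal(size=(8, 2)), np.zeros(2)),   # 8 nodes -> 2 outputs
]
out = forward(rng.normal(size=(1, 4)), layers)  # one sample flows through the net
```

Training such a network means adjusting the weight matrices so the outputs match desired targets; the autoencoders and GANs above are exactly this structure, just with task-specific losses.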
In 2016, researchers at MIT reported that a computer model of the human brain's face-recognition ability spontaneously produced "invariant representations" of faces.
Examining the model, they found that "the trained system included an intermediate processing step that represented a face's degree of rotation, like 45 degrees from center, but not the direction, like left or right."
Because this step was not built into the algorithm and resembled how our brains process faces and objects, the researchers called it "an indication that their system and the brain are doing something similar."
HOW DANGEROUS ARE DEEPFAKES?
Deepfakes are considered one of the most dangerous applications of AI. Their real-world uses are mostly aimed at discrediting or deceiving people. The first known case of deepfake fraud occurred in the UK: scammers called the CEO of a UK-based energy company, imitating the voice of his German boss, and instructed him to transfer €220,000 to a third-party bank account.
The end result of a deepfake can be indistinguishable from the real thing. It can damage a person's reputation or put words and actions in their mouth, and scammers can exploit it to do serious harm. Building a system that reliably detects fakes takes time, and such programs also require extensive training.
IS DEEPFAKE LEGAL?
Because deepfakes have only existed for a few years, legislation has not kept up with them. In many places the technology is not regulated at all. China is an exception: the Cyberspace Administration of China has declared that creating fake news with deepfakes is illegal.
In the United States, some states, though not all, have laws against deepfake pornography, and another bill prohibits deepfake content that targets candidates for public office.
HOW ARE DEEPFAKES USED?
Deepfakes first spread through fake pornographic videos. Many female celebrities have been victimized by deepfake porn, among them Daisy Ridley, Jennifer Lawrence, Emma Watson, and Gal Gadot; prominent public figures such as Michelle Obama, Ivanka Trump, and Kate Middleton have been targeted as well.
In 2019, a desktop app called DeepNude was released that could digitally remove clothing from photos of women. It was taken down soon after, but copies of the app can still be found on the Internet.
Politicians are the next group harmed by deepfakes. One video showed President Obama insulting President Trump; another was doctored to make Nancy Pelosi appear drunk; in a third, President Trump appeared to urge Belgium to withdraw from the Paris climate agreement.
Note that the technology also has legitimate uses. Plenty of apps that swap faces in photos and videos are available on the App Store and Google Play, and some believe deepfakes will be the next big thing in content creation. The South Korean TV channel MBN, for instance, used a deepfake to stand in for its news anchor.
How to Combat Deepfakes with Technology?
Many organizations are working to ensure that AI is used for good and that fakes do not hurt people. Among them:
- Google is developing text-to-speech tools that help confirm a speaker is who they claim to be.
- Deeptrace, a company based in Amsterdam, builds AI detection tools that work like a deepfake antivirus.
- The US Defense Advanced Research Projects Agency (DARPA) is funding research on automated deepfake screening through a program called MediFor (Media Forensics).
- Adobe's system lets creators attach a signature to their work specifying who made it and how.
- Twitter and Facebook have officially banned "malicious deepfakes."
- Sensity has built a detection platform that alerts users by e-mail when they are watching deepfake content.
HOW DO WE FIGHT DEEPFAKES TODAY?
Major technology companies are already developing their own defenses against deepfake content. Microsoft and Google have released datasets that developers can use to train systems to recognize deepfakes. Facebook, together with Microsoft, Amazon Web Services, and leading universities, launched the Deepfake Detection Challenge, whose goal is to find ways of telling when a video is fake.
The winner of the competition received a $500,000 prize, and the roughly 3,500 participants made their materials available for others to build on. As part of the MediFor project, DARPA has contracted SRI International to build programs that detect altered photos and videos. Sensity offers its own solution for businesses that want to protect themselves from synthetic-media technologies, and its Minerva program scans popular adult sites for deepfake porn and sends alerts so it can be removed.
CAN BIOMETRICS HELP WITH DEEPFAKES?
Biometrics are the physical traits that make each of us unique, and knowing a person's biometrics helps establish their identity. Biometrics may therefore be a good way to expose fakes; the key may lie in behavioral biometrics and facial recognition, both of which are also based on AI.
Deepfake technology is new and holds a great deal of promise. Humanity is still getting used to it and has not yet found its full place in society. Like many technologies, it has both good and bad sides: it could hurt our world or help it. Learning how to get the most out of it across industries will simply take time.