What exactly are deepfakes, and how can you spot them? AI-generated fake videos are becoming more common (and more convincing). Here is why we should be worried, starting with what a deepfake actually is.
Have you seen President Obama call Donald Trump a “complete dipshit”, or Facebook founder Mark Zuckerberg brag about having “total control of billions of people’s stolen data”? Did you watch Game of Thrones’ Jon Snow deliver a heartfelt apology for the show’s dismal ending? If you answered yes, you have seen a deepfake. Deepfakes are the 21st century’s answer to Photoshopping: they use a form of artificial intelligence called deep learning to fabricate images of events that never happened, hence the name. Want to put new words in a politician’s mouth, star in your favourite movie, or dance like a pro? Then it’s time to make a deepfake.
What are they for?
Many are pornographic. In September 2019 the AI firm Deeptrace found 15,000 deepfake videos online, nearly double the number it found in March 2019. Of those, 94% were pornographic, and 99% of those mapped the faces of celebrities onto porn performers. As new techniques allow unskilled people to make deepfakes from a handful of photos, deepfake revenge porn is likely to spread more widely. “Deepfake technology is being weaponised against women,” says Danielle Citron, a professor of law at Boston University. Beyond porn, though, there is also plenty of spoof, satire and mischief.
Is it just about videos?
No. Deepfake technology can also create convincing but entirely fictional photos from scratch. The made-up Bloomberg journalist “Maisy Kinsley” was probably a deepfake, and the LinkedIn persona “Katie Jones” is believed to be another, developed for a foreign espionage operation.
Audio can be deepfaked too, to create “voice skins” or “voice clones” of public figures. Last March, the head of the UK subsidiary of a German energy firm paid more than £200,000 into a Hungarian bank account after being phoned by a fraudster who mimicked the German CEO’s voice. The company’s insurers believe the voice was a deepfake, but it is uncertain whether they are correct. Similar scams have reportedly used recorded WhatsApp voice messages.
How are they made?
University researchers and special-effects studios have long pushed the boundaries of video and image manipulation. But deepfakes themselves emerged in 2017, when a Reddit user of the same name posted doctored porn clips on the site, swapping the faces of celebrities including Gal Gadot, Taylor Swift and Scarlett Johansson onto porn performers.
It takes a few steps to make a face-swap video. First, you run thousands of face shots of the two people through an AI algorithm called an encoder. The encoder finds and learns the similarities between the two faces, reducing them to their shared common features and compressing the images in the process. A second AI algorithm, called a decoder, is then taught to recover the faces from the compressed images. Because the faces are different, you train two decoders, one for each person. To perform the face swap, you simply feed encoded images into the “wrong” decoder. For example, a compressed image of person A’s face is fed into the decoder trained on person B. The decoder then reconstructs face B with the expressions and orientation of face A. For a convincing video, this has to be done on every frame.
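The shared-encoder, two-decoder scheme described above can be sketched in a few lines. This is a toy, untrained model in plain NumPy, shown only to make the wiring concrete: the layer sizes, class name and single matrix per stage are all illustrative stand-ins for the deep networks a real system would train.

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(n_in, n_out):
    # Random, untrained weights; a real model learns these from
    # thousands of face shots of each person.
    return rng.standard_normal((n_in, n_out)) * 0.1

class FaceSwapNet:
    """Toy shared-encoder / two-decoder autoencoder (untrained sketch)."""
    def __init__(self, n_pixels=64 * 64, n_latent=128):
        self.enc = layer(n_pixels, n_latent)    # one encoder shared by both faces
        self.dec_a = layer(n_latent, n_pixels)  # decoder trained on face A
        self.dec_b = layer(n_latent, n_pixels)  # decoder trained on face B

    def encode(self, img):
        # Compress a frame down to its latent "common features".
        return np.tanh(img @ self.enc)

    def swap_a_to_b(self, img_a):
        # Feed A's encoding into the "wrong" decoder: after training,
        # this yields face B wearing A's expression and pose.
        return self.encode(img_a) @ self.dec_b

net = FaceSwapNet()
frame_a = rng.standard_normal(64 * 64)  # stand-in for one video frame of person A
fake_b = net.swap_a_to_b(frame_a)       # same pixel count, "face B" output
```

Because the encoder is shared, the latent code captures pose and expression in a form both decoders understand, which is what makes the swap possible.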
Deepfakes can also be made with a generative adversarial network, or GAN, which pits two AI algorithms against each other. The first, known as the generator, is fed random noise and turns it into an image. That synthetic image is then passed to the second algorithm, the discriminator, which tries to tell it apart from real images. At first, the synthetic images look nothing like faces, but with repeated cycles both the discriminator and the generator improve. Given enough cycles and data, the generator will begin producing utterly realistic faces of completely nonexistent celebrities.
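The adversarial loop can be illustrated with a deliberately tiny GAN in NumPy, where one-dimensional numbers stand in for images: a linear generator tries to mimic a Gaussian "real data" distribution while a logistic discriminator tries to tell the two apart. The target distribution, learning rate and model forms are all illustrative choices, not part of any real deepfake system, which would use deep convolutional networks.

```python
import numpy as np

rng = np.random.default_rng(1)
sigmoid = lambda s: 1.0 / (1.0 + np.exp(-s))

real = lambda n: rng.normal(3.0, 0.5, n)  # "real images": samples from N(3, 0.5)

a, b = 1.0, 0.0   # generator params: G(z) = a*z + b
w, c = 1.0, 0.0   # discriminator params: D(x) = sigmoid(w*x + c)
lr = 0.01

for step in range(2000):
    z = rng.normal(0, 1, 32)
    x_fake = a * z + b          # generator turns noise into samples
    x_real = real(32)

    # Discriminator ascent: push D(real) toward 1 and D(fake) toward 0.
    d_real = sigmoid(w * x_real + c)
    d_fake = sigmoid(w * x_fake + c)
    w += lr * np.mean((1 - d_real) * x_real - d_fake * x_fake)
    c += lr * np.mean((1 - d_real) - d_fake)

    # Generator ascent: push D(fake) toward 1, i.e. fool the discriminator.
    d_fake = sigmoid(w * x_fake + c)
    a += lr * np.mean((1 - d_fake) * w * z)
    b += lr * np.mean((1 - d_fake) * w)

# After training, generated samples have drifted toward the real distribution.
samples = a * rng.normal(0, 1, 1000) + b
```

The key point mirrors the text: neither network is told what a "real" sample looks like directly; each only gets feedback from the other, and both sharpen over repeated cycles.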
Who is making deepfakes?
Everyone from academic and industrial researchers to amateur enthusiasts. Governments are dabbling in the technology too, as part of strategies to discredit and disrupt extremist groups, or to make contact with targeted individuals.
What technology do you need?
It is hard to make a good deepfake on a standard computer. Most are created on high-end desktops with powerful graphics cards, or better still with computing power in the cloud, which cuts processing time from days and weeks down to hours. It also takes skill to touch up finished videos and remove flicker and other visual defects. That said, plenty of tools now help people make deepfakes. Several companies will produce them for you, doing the processing in the cloud, and the mobile phone app Zao lets users add their faces to a list of TV and movie characters on which the system has been trained.
How do you spot a deepfake?
It gets harder as the technology improves. Researchers in the United States discovered that deepfake faces don’t blink normally. No surprise there: since most images show people with their eyes open, the algorithms never learn about blinking. At first, this looked like a silver bullet for detection. But no sooner had the research been published than deepfakes appeared that blinked. That is the nature of the game: as soon as a weakness is revealed, it is fixed.
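The blink cue described above is commonly checked with an eye-aspect-ratio (EAR) heuristic: the ratio of an eye's vertical landmark distances to its horizontal one collapses when the eye closes. A real detector would extract the six eye landmarks per frame with a face-landmark library; here the per-frame EAR trace is synthetic, and the 0.2 threshold is an illustrative assumption.

```python
import numpy as np

def eye_aspect_ratio(eye):
    """eye: six (x, y) landmarks, ordered corner, upper pair, corner,
    lower pair. EAR = (|p2-p6| + |p3-p5|) / (2 * |p1-p4|)."""
    p1, p2, p3, p4, p5, p6 = eye
    vertical = np.linalg.norm(p2 - p6) + np.linalg.norm(p3 - p5)
    horizontal = np.linalg.norm(p1 - p4)
    return vertical / (2.0 * horizontal)

def count_blinks(ear_series, threshold=0.2):
    """Count dips of the eye-aspect ratio below the threshold."""
    blinks, closed = 0, False
    for ear in ear_series:
        if ear < threshold and not closed:
            blinks, closed = blinks + 1, True
        elif ear >= threshold:
            closed = False
    return blinks

# Synthetic per-frame EAR trace: eyes open (~0.3) with two brief blinks.
trace = [0.30] * 10 + [0.10] * 3 + [0.30] * 10 + [0.08] * 2 + [0.30] * 10
print(count_blinks(trace))  # -> 2
```

A suspiciously low blink count over a long clip is exactly the statistical oddity the researchers flagged, which is also why it was so easy for deepfake makers to patch once published.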
Poor-quality deepfakes are easier to spot. The lip synching might be bad, or the skin tone patchy. There can be flickering around the edges of transposed faces, and fine details such as hair are particularly hard for deepfakes to render well, especially where strands are visible on the fringe. Badly rendered jewellery and teeth can be a giveaway, as can strange lighting effects such as inconsistent illumination and reflections on the iris.
Governments, universities and tech firms are all funding research to detect deepfakes. Last month, the first Deepfake Detection Challenge kicked off, backed by Microsoft, Facebook and Amazon, with research teams around the world competing to build the best deepfake detectors.
In the run-up to the next US presidential election, Facebook banned deepfake videos that are likely to mislead viewers into thinking someone said things they did not actually say. However, the policy covers only misinformation produced using AI, meaning “shallowfakes” (videos made misleading with conventional editing tools or by presenting real footage out of context) are still allowed.
Will deepfakes wreak havoc?
We can expect more deepfakes that harass, intimidate, demean, undermine and destabilise. But will deepfakes spark major international incidents? Here the situation is less clear. A deepfake of a world leader pressing the big red button should not cause armageddon. Nor should faked satellite photos of troops massing on a border cause much trouble: most nations have their own reliable security imaging systems.
There is plenty of scope for mischief-making, though. Last year, Tesla shares slid when Elon Musk smoked a joint on a live web show. In December, genuine footage emerged of other world leaders apparently mocking Donald Trump at a NATO gathering. If real footage can move markets like that, could convincing deepfakes shift share prices, influence voters and provoke religious tension? It seems safe to assume so.
Will they undermine trust?
The more insidious impact of deepfakes is to create a zero-trust society, in which people cannot, or no longer bother to, distinguish truth from falsehood. And when trust is eroded, it becomes easier to raise doubts about specific events.
At the beginning of last year, Cameroon’s minister of communication dismissed as fake news footage that Amnesty International says shows Cameroonian soldiers committing atrocities against civilians.
What should we do?
Ironically, AI may be the answer. Artificial intelligence already helps to spot fake videos, but many existing detection systems have a serious weakness: they are trained on hours of freely available footage of celebrities, so they are prone to mistakes when applied to ordinary people. Tech firms are now working on detection systems that aim to flag fakes whenever they appear. An alternative strategy focuses on the provenance of the media. Digital watermarks are not foolproof, but a blockchain-style online ledger of videos, pictures and audio could hold a tamper-proof record of content, so that its origins and any manipulations can always be checked.