A few days back, an AI-generated video featuring Mamata Banerjee made waves on social media, prompting the killjoy Kolkata Police to send a warning to the creator — which made it even more popular.
Later, the PM, never one to miss an opportunity, retweeted a similar video of himself dancing, with a funny line. Guess who won that round.
Across the oceans, a video message from Arizona Secretary of State Adrian Fontes welcomed election workers to a seminar on deepfakes, without letting on that the video itself was a deepfake.
I'm sure you have seen similar videos of Dhoni belting out Kishore hits, Imran Khan giving a 'victory speech' from jail, or even a fake interview of Ambani Junior that led to a crypto scam link (it looked very authentic, fooled Facebook too, and I think it's still up).
Even the CPI (M), the original anti-computers party, used a deepfake of former Bengal CM Buddhadeb Bhattacharjee in the ongoing Lok Sabha campaign.
Feel like creating some deepfake videos yourself? It involves sophisticated AI technology, but plenty of software can help. If you're interested in learning about this technology purely for educational purposes or for ethical creative projects, here you go:
Data collection: Gather a large dataset of images and/or videos of the person you want to mimic. The quality and variety of this dataset significantly affect the realism of the result: the richer the dataset, the better your deepfake. Ever wonder why Modi deepfakes are better? Because there are more images of the PM online.
Choose the right software: Tools and platforms such as DeepFaceLab, Faceswap and Zao are commonly used for creating deepfakes. Each has its own requirements and learning curve.
Training the model: Use a machine learning algorithm, typically a type of GAN (Generative Adversarial Network), to train your model. This process involves teaching the algorithm to understand and replicate the facial features and expressions of the target from your dataset.
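For the technically curious, here is roughly what that training step looks like in code. This is a minimal, illustrative sketch, assuming PyTorch, with random tensors standing in for a real face dataset; the tiny Generator and Discriminator classes are toy stand-ins I've made up for illustration, not the networks DeepFaceLab or Faceswap actually ship.

```python
# Minimal GAN training sketch (illustrative only).
# Assumes PyTorch; random tensors stand in for a real face dataset.
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Maps a random latent vector to a flattened 64x64 'face'."""
    def __init__(self, latent_dim=100):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim, 256), nn.ReLU(),
            nn.Linear(256, 64 * 64 * 3), nn.Tanh(),
        )
    def forward(self, z):
        return self.net(z)

class Discriminator(nn.Module):
    """Scores how 'real' a flattened image looks (0 = fake, 1 = real)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(64 * 64 * 3, 256), nn.LeakyReLU(0.2),
            nn.Linear(256, 1), nn.Sigmoid(),
        )
    def forward(self, x):
        return self.net(x)

G, D = Generator(), Discriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
loss_fn = nn.BCELoss()
batch = 32

for step in range(100):  # a real run needs many thousands of steps
    # Placeholder 'real' faces in [-1, 1]; in practice, batches from the collected dataset.
    real = torch.rand(batch, 64 * 64 * 3) * 2 - 1
    z = torch.randn(batch, 100)
    fake = G(z)

    # Train the discriminator to tell real faces from generated ones.
    opt_d.zero_grad()
    d_loss = loss_fn(D(real), torch.ones(batch, 1)) + \
             loss_fn(D(fake.detach()), torch.zeros(batch, 1))
    d_loss.backward()
    opt_d.step()

    # Train the generator to fool the discriminator.
    opt_g.zero_grad()
    g_loss = loss_fn(D(fake), torch.ones(batch, 1))
    g_loss.backward()
    opt_g.step()
```

The real tools use far larger convolutional networks (and often autoencoder-style face swapping rather than a plain GAN), but the core training idea is the same: one network learns to generate faces, another learns to catch the fakes, and each round makes the forgery a little more convincing.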
Now these deepfakes of politicians don’t really harm anyone except politicians — temporarily. (Though deepfakes promoting religious hatred are extremely dangerous of course, and I don’t see Meta or X winning the war.)
Deepfakes have expanded into various other areas, leveraging advanced AI technologies. There are reports that China plans to interfere in the ongoing general elections through AI-generated deepfakes.
Here’s a brief take on how deepfake technology is being applied beyond videos:
Medicine and finance are going to be badly hit. Nirav Modi is going to look like a kindergarten kid.
Creating fake medical records: AI can be used to generate realistic medical records that are entirely fictitious. These fake records could be used to make fake insurance claims, obtain prescription medications improperly, or support false disability claims.
Altering existing medical records: AI technologies, especially those that excel in pattern recognition and image manipulation, could be used to alter existing, genuine medical records. For example, medical images like X-rays or MRIs could be digitally modified to fake a condition that isn’t present or even wipe out evidence of an existing condition. This could be done to manipulate outcomes in legal cases, insurance claims, or to defraud patients and insurers.
Identity theft: Deepfakes can be used to impersonate individuals in videos or audio recordings, tricking banks and financial institutions into granting access to sensitive financial information or accounts.
Fake KYCs: When identity verification is compromised by deepfakes, financial institutions might struggle to comply with Know Your Customer (KYC) and Anti-Money Laundering (AML) regulations.
I’ll end with some tips on how to spot and avoid AI-generated deepfake scams. Because, rest assured, tomorrow you or your company or organisation will be the victim of one.
Look for irregularities in facial features:
Eye movement: In many deepfakes, the eyes might not blink naturally or may seem off in some way.
Facial expressions: Observe if facial expressions look mismatched or if they don’t synchronise well with the emotional tone of the speech.
Audio-visual mismatch:
Lip-sync: Pay attention to the lips. If the movement doesn’t perfectly match the spoken words, it might be a deepfake.
Voice quality: Sometimes, the voice might not sound quite right — the tone might be a bit off.
Check the video quality and consistency:
Video resolution: Deepfakes often have varying quality within the same video. For example, the face might be in high resolution while the surroundings or background are of lower quality (a rough code sketch of this check follows these tips).
Contextual clues:
Background details: Sometimes the focus is so much on getting the face right that the background might be neglected.
Behavioural cues: Consider whether the person is acting in a way that is consistent with their known behaviour.
Source verification:
Consider the source: Check where the video came from.
Cross-verification: Look for the same video or similar announcements from official or multiple reliable sources.
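If you like to tinker, here is the video-quality tip turned into a very rough heuristic. It is a hypothetical Python sketch, assuming OpenCV and its bundled Haar face detector; the file name suspect_clip.mp4 is made up. It compares the sharpness of the detected face region against the whole frame, since a face that is consistently much blurrier or much sharper than its surroundings can hint at a pasted-in face. This is a toy illustration, not a real forensic detector.

```python
# Rough heuristic for the 'video quality and consistency' tip above.
# Assumes OpenCV (pip install opencv-python); not a real deepfake detector.
import cv2

def face_frame_sharpness_ratio(video_path, max_frames=100):
    """Average ratio of face-region sharpness to whole-frame sharpness."""
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    cap = cv2.VideoCapture(video_path)
    ratios = []
    frames = 0
    while frames < max_frames:
        ok, frame = cap.read()
        if not ok:
            break
        frames += 1
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        if len(faces) == 0:
            continue
        x, y, w, h = faces[0]
        face = gray[y:y + h, x:x + w]
        # Laplacian variance is a simple proxy for sharpness/detail.
        face_sharp = cv2.Laplacian(face, cv2.CV_64F).var()
        frame_sharp = cv2.Laplacian(gray, cv2.CV_64F).var()
        if frame_sharp > 0:
            ratios.append(face_sharp / frame_sharp)
    cap.release()
    return sum(ratios) / len(ratios) if ratios else None

ratio = face_frame_sharpness_ratio("suspect_clip.mp4")  # hypothetical clip
if ratio is not None:
    print(f"Face vs frame sharpness ratio: {ratio:.2f}")
```

A ratio far from what you see in known-genuine footage of the same person is only a prompt to look closer; none of these checks is conclusive on its own.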
The government has got cracking on the issue of deepfakes or morphed videos on the internet and has instructed social media companies such as Instagram, X and Facebook to remove such content from their platforms within 24 hours of receiving a complaint.
You could of course reach out to the police and report the cyber crime.
Do follow the handle @Cyberdost on X for the latest information on cyber crimes in India.
(Shubho Sengupta is a digital marketer with an analogue ad agency past. He can be found @shubhos on X)