In 2024, a finance worker at Arup transferred HK$200 million to fraudsters after attending a video conference where every participant (including his company’s CFO) was a deepfake. The call looked real. The faces were familiar. The voices matched. The instructions were clear.
This is not a future threat. It is an operational reality that Indian financial institutions, HR teams, and compliance functions need to address now. India is projected to lose ₹70,000 crore to deepfake fraud in 2025, making it one of the fastest-growing financial crime vectors in the country, per ETEdge Insights. For context on the range of deepfake fraud cases already documented globally, see deepfake examples.
This guide is for enterprise and compliance readers. It is not a technical tutorial, but a clear-eyed threat briefing and action plan.
What is deepfake AI?
Deepfake AI is synthetic media technology that generates realistic video, audio, or images of a person saying or doing something they never actually said or did. It is distinct from earlier digital manipulation tools because it uses deep learning, not manual editing, to generate the output.
The “deep learning + fake” definition
The term combines “deep learning” (a subset of AI that uses neural networks trained on large datasets) with “fake” (synthetic, not real). The core architecture is a Generative Adversarial Network (GAN): two neural networks working against each other. A generator creates synthetic media, and a discriminator evaluates how convincing it is. Over thousands of training iterations, the generator learns to produce output that the discriminator cannot distinguish from real recordings.
The result: video, audio, or images of a target person that are indistinguishable from genuine recordings, produced on demand, at scale, without the target’s knowledge or consent.
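To make the adversarial loop concrete, here is a minimal toy sketch in PyTorch. It trains a generator to mimic a one-dimensional Gaussian rather than faces, but the generator-versus-discriminator structure is exactly what deepfake models scale up; every name and hyperparameter here is illustrative, not a production pipeline.

```python
import torch
import torch.nn as nn

# Toy target: the generator must learn to mimic samples from N(4, 1.5).
def real_samples(n: int) -> torch.Tensor:
    return torch.randn(n, 1) * 1.5 + 4.0

G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))                # generator
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())  # discriminator
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(3000):
    real, noise = real_samples(64), torch.randn(64, 8)
    fake = G(noise)

    # Discriminator step: learn to label real as 1 and generated as 0.
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator step: learn to make the discriminator output 1 on fakes.
    g_loss = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()

# After training, generated samples cluster around the real mean of 4.0.
print(G(torch.randn(1000, 8)).mean().item())
```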
How realistic have deepfakes become in 2026?
The gap between deepfake quality and human detection ability is now decisive. Human detection accuracy for high-quality video deepfakes has fallen to 24.5%, far below what reliable screening requires, according to Bright Defense. Voice cloning now requires as little as 20–30 seconds of audio from any public recording: a YouTube interview, a conference presentation, a social media clip.
The more important development: real-time deepfake now works in live video calls. Earlier deepfake tools required rendering time and worked only on pre-recorded video. Today, live AI overlay during an active video call is technically feasible: what the camera transmits is fabricated while the call is in progress.
Recently, our very own Kedar Kulkarni wrote about the growing threat of deepfakes, specifically in KYC.
How deepfakes are made
Understanding the mechanism helps compliance teams assess which defenses are actually effective.
Video face-swap and lip-sync
The target person’s facial features are mapped onto a different person’s body or video using neural network face replacement. Lip movements are synchronized with an injected audio track. Open-source tools have made this accessible, with no machine learning expertise required.
Voice cloning
A text-to-speech model trained on real audio samples (as few as 20–30 seconds of clean audio) generates a synthetic voice that can say anything in the target’s voice. Tools like ElevenLabs and similar platforms have made this accessible to non-technical actors. The output is used in phone calls, audio messages, and as the voice layer in video deepfakes.
Real-time deepfake: The most dangerous evolution
Previous deepfake attacks used pre-recorded video. The attacker staged the deepfake in advance, then played it during a call.
Real-time deepfake is different: AI processes the camera feed and overlays the target person’s face live, during an active video session. What the Video KYC system “sees” is not the attacker’s face but a convincing synthetic rendering of the person being impersonated, updated frame by frame in real time.
This is the attack pattern targeting India’s Video KYC processes today, and it cannot be caught by visual review, because the official reviewing the session sees a normal-looking face.
5 enterprise attack patterns using deepfake AI
Executive impersonation (CEO/CFO fraud)
The Arup case is the clearest documented example: HK$200 million transferred after a convincing deepfake video conference. In a separate incident, a Ferrari executive received a WhatsApp call from someone using a convincing voice clone of the CEO, requesting an urgent wire transfer. The attack was stopped only because the target asked a personal question the attacker could not answer.
The pattern: attacker obtains audio or video of the target executive (public earnings calls, conference presentations, LinkedIn videos), clones their voice or face, places a call requesting an urgent financial action.
Business email compromise (BEC) amplification
Traditional BEC relied on email spoofing. Deepfake BEC adds a voice or video call to the social engineering chain, dramatically increasing believability. The call “confirms” the email instruction with a voice or face the recipient recognizes. Financial crime teams are increasingly treating deepfake-amplified BEC as the standard attack pattern, not an edge case.
KYC and identity verification bypass
A fraudster combines stolen identity documents (Aadhaar, PAN) with a deepfake video to pass the liveness check in a Video KYC session. The resulting account is opened in the victim’s name, usable for money muling, instant loan fraud, and UPI-based fraud.
In Q1 2025, 179 deepfake incidents were reported globally, a 19% increase compared to the entire year of 2024, per Ceartas. Financial services accounted for 42.5% of AI-related fraud attempts in the same period.
Brand impersonation for investment scam advertising
Deepfake videos featuring celebrities, sports figures, and industry leaders are used to promote fraudulent investment schemes, fake crypto platforms, and “guaranteed return” trading products, distributed primarily via social media and messaging apps. India has seen a significant rise in this pattern, with well-known faces used to lend credibility to schemes targeting retail investors.
Remote hiring fraud
State-linked actors have placed fake remote workers inside technology companies using deepfake video interviews; HR teams completed the hiring process believing they were interacting with a real candidate. The hired “employee” then had insider access to credentials, systems, and intellectual property.
Indian financial sector: Specific deepfake AI attack surfaces
Video KYC (V-CIP): The primary Indian attack surface
India’s RBI-mandated V-CIP process involves over 11 lakh (1.1 million) video KYC calls daily. Each of these is a potential target for deepfake injection.
The “Jamtara 2.0” phenomenon (named after the original Jharkhand-based phishing network and documented by Pi-Labs) describes how organized criminal networks are now using deepfake to manipulate Video KYC processes at scale. Unlike card skimming or phishing, deepfake KYC fraud creates legitimate-looking accounts that are far harder to detect post-opening. Nearly 65% of cyber incidents involving deepfakes in India remain unreported.
Deepfake-related cybercrime in India has grown 550% since 2019.
Voice banking and phone authentication bypass
Several Indian banks use voice biometric authentication for IVR and phone banking access. A voice cloning attack requires a short audio sample (available from any public recording) to generate a synthetic voice that passes voice authentication. The additional vector: synthetic voice used in social engineering calls to customer service, to change registered contact details or authorize account modifications.
NBFC and digital lending onboarding risk
Digital lenders and NBFCs rely on mobile-native V-CIP for instant loan onboarding, often with tighter speed requirements and smaller security teams than large banks.
The attack pattern is straightforward: deepfake passes liveness check, instant personal loan is disbursed, account is abandoned. By the time the fraud is detected, the loan has been disbursed to a mule account. The NBFC is left with a non-performing loan attached to a fraudulent identity.
India’s regulatory response to deepfake AI
IT Rules 2026: Takedown and labelling
India’s updated IT Rules introduce a 3-hour takedown mandate for deepfake content after a valid complaint, mandatory labelling of AI-generated content on platforms, and a ban on non-consensual intimate imagery. Platforms face liability if they fail to act on flagged deepfake content.
DPDP Act 2023: Biometric data implications
India’s Digital Personal Data Protection (DPDP) Act has direct implications for deepfake. Biometric data (facial features, voice profiles) collected without consent and used for deepfake generation constitutes a violation. Companies whose data breaches enable deepfake creation face liability under the Act’s Data Fiduciary obligations.
What’s missing: India vs US and EU
The US TAKE IT DOWN Act (2025) criminalizes non-consensual deepfake intimate imagery at the federal level. The EU AI Act (August 2025) mandates deepfake labelling and prohibits certain high-risk AI uses outright. India has neither equivalent statute.
The result: India’s regulatory framework leans on general IT Act provisions and IPC sections that were not designed with synthetic media in mind. Financial institutions operating in India cannot rely on regulation to constrain the deepfake threat. They must implement their own technical defenses.
For a full review of what the law currently permits and prohibits, see are deepfakes illegal in India.
Deepfake AI detection: The arms race
Why human detection fails
Human detection accuracy for advanced deepfakes has fallen to 24.5%, per Bright Defense. AI detection tools perform better but lose 45–50% of their lab accuracy in real-world deployments. Models trained on older deepfake datasets cannot reliably catch current-generation fakes.
For a detailed breakdown of what detection tools can and cannot catch, and how liveness detection specifically applies to KYC, see how to detect deepfakes.
What AI-based detection can and cannot do
Commercial detection systems analyze facial inconsistencies, blinking patterns, and audio-visual synchronization. These approaches work against presentation attacks: someone showing a screen or photograph to a camera.
What they do not catch: injection attacks. In an injection attack, the deepfake video is fed into the video stream at the software level, bypassing the physical camera entirely. The detection system receives a fabricated feed, and analysis of the video signal alone cannot detect this.
Effective defense requires operating at the device and session level: device integrity verification, signal-layer analysis, and ISO 30107-3 Level 2 Presentation Attack Detection with injection detection modules. This is the minimum defensible standard for V-CIP compliance.
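As one narrow illustration of what device integrity verification means in practice, the sketch below flags capture devices whose driver names suggest a software camera, a common entry point for injected feeds. It assumes a Linux host with the video4linux sysfs layout, and the name list is illustrative; production injection detection relies on far more than driver-name heuristics.

```python
from pathlib import Path

# Driver names commonly reported by software (virtual) cameras.
# Illustrative list, not exhaustive.
SUSPECT_NAMES = ("obs", "virtual", "v4l2loopback", "droidcam", "manycam")

def suspect_video_devices() -> list[str]:
    """Flag video4linux devices whose reported name suggests a
    software camera rather than physical hardware (Linux only)."""
    flagged = []
    for name_file in Path("/sys/class/video4linux").glob("*/name"):
        name = name_file.read_text().strip()
        if any(s in name.lower() for s in SUSPECT_NAMES):
            flagged.append(f"{name_file.parent.name}: {name}")
    return flagged

if __name__ == "__main__":
    for device in suspect_video_devices():
        print("Possible virtual camera:", device)
```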
What Indian enterprises should do now
For KYC and compliance teams
- Upgrade V-CIP liveness detection to include injection attack detection, as presentation attack detection alone is insufficient against the current threat.
- Require ISO 30107-3 Level 2 PAD compliance from your V-CIP vendor and ask specifically whether it covers injection attacks, not just presentation attacks.
- Implement out-of-band verification for high-value account openings: a callback to a pre-registered number through a separate channel, independent of the V-CIP session (a minimal sketch follows this list).
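A minimal sketch of the challenge-code step in such an out-of-band flow, assuming the code is delivered over whatever callback channel the institution already operates; only the comparison logic is shown, and all names are illustrative.

```python
import hmac
import secrets

def issue_challenge() -> str:
    """Generate a one-time code to be delivered via callback to the
    customer's pre-registered number, never to an in-session number."""
    return f"{secrets.randbelow(10**6):06d}"

def verify_challenge(issued: str, read_back: str) -> bool:
    # The customer reads the code back during the V-CIP session;
    # compare in constant time against the code sent out-of-band.
    return hmac.compare_digest(issued, read_back)
```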
HyperVerge’s Video KYC solution is built specifically for India’s V-CIP requirements, with injection attack detection included.
For HR teams
- During video interviews: ask the candidate to hold their government-issued photo ID directly to the camera and read one specific detail aloud at your instruction. Deepfake injection tools cannot dynamically generate a matching physical document.
- Use a randomised gesture prompt at the start of every interview: ask for a specific movement you name at that moment. This breaks screen-replay attacks (see the sketch after this list).
- Use two separate video platforms for screening and final interview, as real-time deepfake tools often fail on platform switches.
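The gesture prompt itself takes minutes to build; what matters is that neither the candidate nor any injection tool can know the challenge before it is issued. A minimal sketch, with an illustrative gesture list:

```python
import secrets

# Physical challenges a pre-rendered or injected feed cannot anticipate.
GESTURES = [
    "turn your head slowly to the left, then to the right",
    "cover one eye with your hand for two seconds",
    "hold your open palm next to your face",
    "pick up your photo ID and tilt it toward the camera",
]

def interview_challenge() -> str:
    gesture = secrets.choice(GESTURES)
    digits = " ".join(str(secrets.randbelow(10)) for _ in range(4))
    return f"Please {gesture}, then read these digits aloud: {digits}"

if __name__ == "__main__":
    print(interview_challenge())
```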
For enterprise security teams
- Establish a verbal codeword protocol for any executive call requesting a financial transaction. The codeword should be agreed in advance, through a separate channel.
- Never approve a wire transfer or financial authorization based solely on a video or voice call. Require a second confirmation through an independently verified channel (sketched after this list).
- Log all video calls involving financial requests. Real-time review is not always possible, but a recorded session can be reviewed post-event with detection tools before a transaction is finalized.
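The second rule can be enforced mechanically rather than left to policy memos. The sketch below (illustrative names, not a real approval engine) models a transfer request that can never be approved on the strength of the channel it arrived on:

```python
from dataclasses import dataclass, field

@dataclass
class TransferRequest:
    amount_inr: int
    requested_via: str                      # e.g. "video_call"
    confirmations: set[str] = field(default_factory=set)

    def confirm(self, channel: str) -> None:
        """Record a confirmation: 'callback', 'erp_ticket', 'in_person'..."""
        self.confirmations.add(channel)

    def approvable(self) -> bool:
        # A deepfaked call can only ever satisfy its own channel; require
        # at least one confirmation on an independent channel.
        return bool(self.confirmations - {self.requested_via})

# A request arriving on a video call stays blocked until an independent
# channel (e.g. a callback to a pre-registered number) confirms it.
req = TransferRequest(amount_inr=50_00_000, requested_via="video_call")
req.confirm("video_call")
assert not req.approvable()
req.confirm("callback_preregistered_number")
assert req.approvable()
```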
The combined framework (technical liveness controls at the KYC layer, process controls at the authorization layer, and behavioral protocols at the human layer) is the minimum defensive posture for Indian enterprises today. See how HyperVerge protects you from deepfake injection.
Frequently asked questions
What is deepfake AI and how does it work?
Deepfake AI uses Generative Adversarial Networks (GANs), two neural networks trained against each other, to generate synthetic video, audio, or images of a real person. The generator produces increasingly convincing fakes until the output cannot be distinguished from genuine media. Voice cloning works similarly, using short audio samples to train a text-to-speech model.
How are deepfakes used in cybercrime?
The main attack patterns are executive impersonation for wire transfer fraud, KYC bypass using fabricated liveness checks, brand impersonation for investment scam advertising, and remote hiring fraud using deepfake video interviews. Financial services accounted for 42.5% of AI-related fraud attempts in Q1 2025.
Can deepfake AI be detected?
Yes, but imperfectly. Human detection accuracy for advanced deepfakes is just 24.5%. Commercial AI detection systems achieve 78% video accuracy under controlled conditions. Injection attacks (where the deepfake is fed at the software level, below the camera) require device and session-level defenses, not just video analysis.
What is the risk of deepfake AI for businesses?
Financial fraud (wire transfers, loan fraud), identity theft enabling account takeover, insider access from fake-hired employees, and reputational damage from brand impersonation. India’s projected deepfake fraud loss in 2025 is ₹70,000 crore.
Is deepfake AI illegal in India?
Creating or distributing non-consensual deepfake content is actionable under the IT Act and IPC, and the 2026 IT Rules introduce a 3-hour takedown mandate. However, India lacks a dedicated deepfake criminal statute equivalent to the US TAKE IT DOWN Act. See are deepfakes illegal in India for the full legal picture.
How do I protect my company from deepfake fraud?
For KYC: ISO 30107-3 Level 2 PAD with injection detection. For financial authorizations: verbal codeword protocols and secondary confirmation channels. For hiring: physical ID verification and randomized gesture prompts in interviews.
What are the best deepfake detection tools?
Reality Defender (enterprise, multi-modal), Deepware Scanner (free consumer tier), and Microsoft Video Authenticator are widely used. For KYC-specific use cases, purpose-built liveness detection platforms with injection detection are more appropriate than general deepfake detectors.
What are real examples of deepfake AI fraud?
The Arup HK$200M wire transfer, the Ferrari CEO voice clone attack, and India’s Jamtara 2.0 criminal network using deepfake to bypass Video KYC at scale. For a broader catalogue, see deepfake examples.
