Deepfake Bank Fraud Explained: AI Attacks on Indian Banks (2026 Guide)

In January 2024, an employee at a Hong Kong–based firm transferred US$25 million after receiving instructions from her CFO during a video call that appeared to include several colleagues. However, none of them were actually present. Fraudsters had used deepfake technology to replicate their faces and voices, convincing her the request was legitimate.

Cases like this highlight how quickly AI-driven impersonation is evolving. As generative AI tools become more sophisticated and widely accessible, attackers can now mimic identities with a level of realism that traditional security checks often fail to detect.

Industry data already reflects this shift. The FBI’s Internet Crime Complaint Center reported in 2023 that business email compromise (BEC) and impersonation scams have caused over $50 billion in global losses since 2013, including nearly $3 billion in the United States. Additionally, Deloitte’s Center for Financial Services estimates that generative AI could push fraud losses in the U.S. to $40 billion by 2027, up from $12.3 billion in 2023.

Indian financial institutions face even greater exposure. With 18 billion UPI transactions each month and widespread adoption of video KYC, digital banking has created new opportunities for attackers to attempt deepfake bank fraud.

This raises a pressing question for banks and fintechs: How does deepfake financial fraud work, and how can institutions stop it?

How Deepfake Attacks Target Banks Specifically

Most banking leaders already understand what a deepfake is, but many still underestimate how easily fraudsters adapt these tools to real financial workflows.

Attackers do not rely on a single technique. They combine identity theft, synthetic media generation, and social engineering to exploit specific weaknesses inside banking operations. The most common attacks fall into four categories.

1. Video KYC bypass

Even though remote onboarding is one of the most important digital banking innovations in India, it has also created a new attack surface.

Fraudsters now attempt deepfake video KYC bypass attacks during digital identity verification sessions. These attacks use face-swap technology to replace the attacker’s face with a synthetic one that matches the identity document submitted during onboarding.

A typical attack works like this:

  • The attacker purchases stolen identity documents from data breaches or underground marketplaces
  • AI tools generate a matching synthetic face or perform a real-time face swap
  • The attacker joins the verification call while the software overlays the manipulated face

The most advanced version is called a camera injection attack. Instead of using the webcam, the attacker injects a pre-generated video stream directly into the verification platform.

Financial platforms have already seen these attempts. In 2024, a fraud group tried to open accounts at a U.S. cryptocurrency exchange using deepfake identity videos paired with forged documents. Liveness detection eventually flagged irregular blinking patterns and lighting inconsistencies, and the accounts were blocked before onboarding was completed.

Systems that rely only on basic liveness detection in video KYC will miss this attack: they confirm that a face appears on screen but cannot determine whether that face belongs to a real, present human. As remote onboarding expands across Indian banks and NBFCs, deepfake KYC fraud is becoming one of the fastest-growing identity risks.

2. Account takeover via social engineering

As banks deploy AI to streamline their operations, scammers are using the same technology against bank customers, this time through voice-cloning tools.

Modern voice synthesis can mimic tone, cadence, and accent with just a few seconds of audio. Fraudsters grab these samples from social media, interviews, or leaked call recordings. Once the cloned voice sounds convincing, the attacker calls the bank’s helpline and attempts account recovery or password reset workflows.

Voice impersonation is already hitting multiple industries. In one high-profile case of AI CEO fraud in India, cybercriminals cloned the voice of telecom magnate Sunil Bharti Mittal to trick one of his executives into transferring money. Fortunately, the executive realized Mittal would not request such a large transfer, and the scam was stopped.

Inside banking environments, the risk becomes even higher. Relationship managers and agents may believe they are speaking with the real customer. If the process lacks strong identity verification, the attacker may gain access to account information, credentials, or transaction approvals. This tactic is now a growing part of deepfake financial fraud across digital banking channels.

3. CEO impersonation fraud

If you think voice-cloning fraud is sophisticated, wait till you hear about BEC 2.0. 

Corporate finance teams aren’t just getting fake emails anymore. Attackers now mix emails with deepfake audio and video to bypass trust.

A well‑documented case involved a UK executive who believed he was speaking with the CEO of his company’s German parent. The voice on the phone had the right accent, tone, and cadence, so he transferred about $243,000 to what he believed was an urgent supplier payment. But the voice was fake, generated with AI to mimic his boss. That transfer was never reimbursed.

These attacks exploit urgent financial instructions, such as approving wire transfers, processing vendor payments, or overriding approvals. Banks handling corporate accounts and treasury operations must stay alert, as attackers often target client finance teams, not the banks themselves.

4. Synthetic identity KYC fraud

Some fraudsters go even further. Instead of impersonating real individuals, they create entirely new identities using AI.

Modern generative models can produce realistic human faces that do not belong to any real person. Attackers pair these synthetic faces with fabricated identity documents to create new banking profiles, which they later use for:

  • Money laundering networks
  • Mule account operations
  • Credit fraud schemes
  • Coordinated transaction fraud

Because these identities never belonged to real individuals, traditional fraud databases often fail to detect them. As a result, synthetic identity fraud is now one of the most difficult threats for banks to detect during digital onboarding.

Real Cases: Deepfake Fraud in Indian Financial Services

Financial institutions often understand threats more clearly when they see real incidents. These high-profile deepfake banking fraud cases already show how quickly these attacks evolve.

Case #1: Nirmala Sitharaman scam

In Bengaluru in 2025, a 54‑year‑old woman lost over ₹33 lakh after she trusted a deepfake video of Union Finance Minister Nirmala Sitharaman endorsing a fake trading platform. The video appeared on Facebook and looked authentic enough to convince her to invest. 

She initially put in a small sum, and the scammers used email and WhatsApp to build trust before she wired large sums across 9 transactions. The police have registered the case under cybercrime laws, and the victim is seeking recovery. 

Case #2: Narayana Murthy trap

That same year, a 79‑year‑old Bengaluru resident was duped of nearly ₹35 lakh in a related AI scam. Fraudsters used deepfake videos of NR Narayana Murthy to lure her into a fake AI trading platform that promised huge returns. 

The criminals created a polished website and assigned her a “financial manager,” using the deepfake credibility to trap her in a months‑long scam.

Case #3: BSE CEO deepfake video

In early 2026, a deepfake video of BSE CEO Sundararaman Ramamurthy circulated on social media, urging viewers to join WhatsApp groups for “investment tips.” The AI-generated clip made the CEO appear to promise high returns, tricking casual investors. 

BSE quickly warned the public that the video was fake and removed it where possible. This case shows how deepfakes exploit trust, urgency, and recognizable figures to commit financial fraud.

These examples of deepfake fraud in financial services show how attackers use AI to build credibility, trigger urgent actions, and bypass human skepticism. They combine social engineering with impressive synthetic media to make their requests feel legitimate.

Stopping deepfake fraud in banking takes layered defenses. Multi‑factor verification and out‑of‑band confirmations help prevent fake investment scams. For corporate finance, strict dual approvals and real‑time confirmation with known contacts can block unauthorized transfers. On the consumer side, checking official sources and being skeptical of unsolicited endorsements, especially those using celebrity likenesses, can save money and stress.
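The dual-approval and out-of-band controls described above can be expressed as a simple policy gate. This is a minimal sketch, not a real banking API: the `TransferRequest` type, `may_execute` function, and threshold value are all illustrative assumptions.

```python
# Sketch of a dual-approval gate for high-value transfers. Illustrative only:
# the names and the threshold below are assumptions, not a real banking API.
from dataclasses import dataclass, field

HIGH_VALUE_THRESHOLD = 1_000_000  # assumed threshold (in rupees)

@dataclass
class TransferRequest:
    amount: int
    beneficiary: str
    approvals: set = field(default_factory=set)
    out_of_band_confirmed: bool = False  # e.g. callback to a known phone number

def may_execute(req: TransferRequest) -> bool:
    """High-value transfers need two distinct approvers AND an out-of-band
    confirmation that bypasses the (spoofable) video or voice channel."""
    if req.amount < HIGH_VALUE_THRESHOLD:
        return len(req.approvals) >= 1
    return len(req.approvals) >= 2 and req.out_of_band_confirmed

req = TransferRequest(amount=5_000_000, beneficiary="Vendor X")
req.approvals.add("maker@bank")
print(may_execute(req))   # False: single approver, no out-of-band callback

req.approvals.add("checker@bank")
req.out_of_band_confirmed = True
print(may_execute(req))   # True: dual approval plus independent confirmation
```

The key design point is that the confirmation flag is set through a channel the attacker does not control, so a convincing deepfake call alone can never satisfy the gate.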

The Regulatory Response: What RBI & SEBI Expect

To keep pace with AI‑generated threats like deepfakes and voice cloning, regulators have been working to implement deepfake laws in India.

RBI video KYC rules

The Reserve Bank of India (RBI) first made remote identity checks official with its Video‑based Customer Identification Process (V‑CIP). Under V‑CIP, banks must capture a live video of the customer, verify their identity documents in real time, record the session, and match the customer’s face to official IDs to onboard new accounts securely. 

This setup goes beyond old-style static checks, yet even simple liveness tests within the flow can be fooled by advanced synthetic media such as deepfakes. Meeting these video KYC standards helps prevent fraud at the very first step of customer onboarding, but it is a floor, not a ceiling.

RBI deepfake guidelines 

RBI has also warned the public about deepfake videos of its officials circulating online that promote fraudulent investment schemes or offer bogus financial advice. 

For example, in late 2024, the RBI publicly clarified that these videos were fake and urged people not to trust or act on them. This warning reflects the regulator’s concern that synthetically generated content can mislead investors and harm public trust.

SEBI AI guidelines

In addition to RBI’s guidance, the Securities and Exchange Board of India (SEBI) is developing rules for the responsible use of AI in financial markets. In 2025, SEBI proposed a set of principles to ensure that firms using AI and machine learning for trading, advisory services, or client interactions do so in a transparent, secure, and accountable way. 

Most recently, SEBI used its own AI tool to remove more than 1.2 lakh misleading social media posts by unregistered financial influencers, showing it is taking digital misinformation and AI abuse seriously.

On the legal front, India’s Information Technology Act, 2000, provides the backbone for prosecuting deepfake‑enabled fraud. Section 66D of the IT Act makes it a crime to cheat by impersonation using a computer resource, punishable by up to three years in prison and a fine. 

This applies when someone uses AI‑generated content to impersonate another person and trick a victim into incurring a financial loss. Courts and cybercrime units in India regularly use this provision to pursue cases of online impersonation and fraud.

Meeting RBI’s deepfake guidelines and SEBI’s responsible AI expectations is a starting point for preventing deepfake and AI‑driven fraud in banking. Compliance sets a foundation, but real protection comes from going beyond the minimum to anticipate and stop threats before they can do harm.

How Deepfake Detection Works in a Banking Context

Banks already use identity checks during onboarding. But not every system can handle modern deepfake attacks. 

To understand the gap, it helps to compare basic liveness checks with advanced deepfake detection:

| Detection Capability | Basic Liveness | Advanced Deepfake Detection |
| --- | --- | --- |
| Detects print attacks | Yes | Yes |
| Detects replay attacks | Yes | Yes |
| Detects face swap (GAN) | No | Yes |
| Detects injection attacks | No | Yes |
| RBI Video KYC compliant | Yes | Yes |
| Suitable for BFSI 2026 | Borderline | Yes |

Liveness detection vs deepfake detection

Traditional liveness detection checks whether a person is actually present in front of the camera by looking for simple cues like blinking or movement. It can stop basic “print” or replay attacks, but it often fails against AI‑generated deepfakes that convincingly mimic those movements. 

Deepfake detection, by contrast, looks for subtle inconsistencies left by generative models that plain liveness checks miss. This makes deepfake detection essential for modern KYC systems, where attackers may bypass simple motion checks with synthetic video.

Why passive liveness works better for scale

Passive liveness systems run quietly in the background and analyze frame‑by‑frame signals without asking the user to perform specific actions. 

This approach scales better for high‑volume onboarding than active challenge/response, which can frustrate real users and still be spoofed by AI‑generated responses.

Detecting GAN artifacts in real-time

Deepfake detection identifies subtle visual artifacts created by generative models. These artifacts appear at the frame level in video streams.

Detection systems analyze patterns such as:

  • Inconsistent skin texture
  • Unnatural lighting transitions
  • Irregular eye reflections
  • Frame-to-frame inconsistencies in facial geometry

Machine learning models scan these signals across dozens of frames per second. If the system detects patterns typical of GAN-generated faces, it flags the session for fraud review. 

In fact, recent GAN-based detection models have shown accuracy above 95% when distinguishing real from fake content in controlled environments, making them powerful additions to KYC pipelines.
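One of the signals listed above, frame-to-frame consistency, can be illustrated with a toy heuristic: genuine video changes smoothly between frames, while face-swap pipelines often produce abrupt jumps. This is a simplified stand-in for the ML models the article describes; the flat-list "frame" representation and the threshold are illustrative assumptions.

```python
# Toy frame-consistency check: a crude stand-in for learned GAN-artifact
# detectors. Frames are flat lists of grayscale pixel values (an assumption
# for illustration); real systems operate on full video with trained models.

def frame_diff(a, b):
    """Mean absolute pixel difference between two same-sized frames."""
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def flag_session(frames, jump_threshold=40.0):
    """Flag the session if any consecutive frame pair differs too sharply,
    which can indicate a face-swap glitch or spliced stream."""
    diffs = [frame_diff(frames[i], frames[i + 1]) for i in range(len(frames) - 1)]
    return any(d > jump_threshold for d in diffs)

smooth = [[100 + i] * 16 for i in range(5)]         # gradual change: genuine-looking
glitchy = [[100] * 16, [100] * 16, [200] * 16]      # sudden jump: swap artifact
print(flag_session(smooth))    # False
print(flag_session(glitchy))   # True
```

Production detectors replace this hand-written threshold with models trained on labeled real and synthetic video, but the underlying idea, scoring temporal consistency across frames, is the same.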

Detecting injection attacks at the infrastructure level

Some of the most dangerous deepfake attacks never touch the camera at all. Instead, attackers inject pre-generated video streams directly into the onboarding pipeline. This technique is called an injection attack.

Detecting it requires infrastructure-level controls. The system must verify the integrity of the video stream, confirm that the input originates from the device camera, and detect abnormal data patterns in the media pipeline. 
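One common way to enforce stream integrity is to have the capture SDK sign each frame with a device-bound key, so a stream injected downstream cannot produce valid signatures. The sketch below shows only the verification idea; the key-provisioning step and all names are assumptions, not any vendor's actual protocol.

```python
# Origin-integrity sketch: the capture SDK signs frames with a device-bound
# key (provisioning is assumed); the server rejects frames that fail the check.
import hashlib
import hmac

DEVICE_KEY = b"provisioned-at-sdk-init"   # assumed device-bound secret

def sign_frame(frame: bytes, key: bytes = DEVICE_KEY) -> bytes:
    """HMAC over the raw frame bytes, computed by the capture SDK."""
    return hmac.new(key, frame, hashlib.sha256).digest()

def accept_frame(frame: bytes, signature: bytes, key: bytes = DEVICE_KEY) -> bool:
    """An injected stream lacks the device key, so its signatures fail."""
    return hmac.compare_digest(sign_frame(frame, key), signature)

genuine = b"\x00camera-frame-bytes"
print(accept_frame(genuine, sign_frame(genuine)))             # True
print(accept_frame(genuine, sign_frame(genuine, b"attacker")))  # False: injected
```

This only defends the transport path; it must be combined with the facial analysis above, since an attacker who compromises the device itself could still feed synthetic frames to a legitimately keyed SDK.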

When evaluating solutions, focus on the false acceptance rate (FAR), the false rejection rate (FRR), processing latency, and whether the system provides a robust compliance audit trail for regulators. Without these checks, even strong facial analysis can be bypassed.
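FAR and FRR are simple ratios over labeled verification outcomes, and computing them from a vendor trial is straightforward. The counts below are invented for illustration.

```python
# FAR is the share of attack attempts the system wrongly accepts;
# FRR is the share of genuine users it wrongly rejects.

def far_frr(attacks_accepted, attacks_total, genuine_rejected, genuine_total):
    far = attacks_accepted / attacks_total
    frr = genuine_rejected / genuine_total
    return far, frr

# Illustrative trial: 2 of 1,000 deepfake attempts accepted,
# 30 of 10,000 genuine customers rejected.
far, frr = far_frr(2, 1000, 30, 10000)
print(f"FAR={far:.2%}  FRR={frr:.2%}")  # FAR=0.20%  FRR=0.30%
```

The two rates trade off against each other as the decision threshold moves, so insist on seeing both numbers, measured on the same threshold, rather than a single headline accuracy figure.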

Prevention Checklist for Indian Financial Institutions

Deepfake attacks are evolving quickly, which means banks need layered controls across onboarding, infrastructure, and human processes. If you are evaluating how to prevent deepfake fraud in banking, use this checklist as a baseline:

☐ Implement passive liveness detection with GAN artifact analysis: Go beyond simple presence detection. The system should analyze facial micro-signals and identify artifacts produced by generative models.

☐ Add injection attack detection at the video infrastructure level: Verify that the video feed truly comes from a device camera and not a pre-rendered or injected stream, so attackers cannot feed fake video into your verification flow.

☐ Enable real-time confidence scoring with compliance audit logging: Score every identity check and record an immutable trail. This makes investigations faster and satisfies compliance audits required in Indian banking.

☐ Train relationship managers and service teams on deepfake social engineering indicators: Teach staff to spot unnatural movement, inconsistent lighting, and odd phrasing so they catch fraud attempts before losses occur.

☐ Establish executive wire transfer verification protocols independent of video or audio channels: Never approve high-value transactions based only on video or voice confirmation.

☐ Review Video KYC vendor capability against current attack vectors, not just RBI minimums: Evaluate whether the platform can detect GAN faces, injection attacks, and large-scale synthetic onboarding attempts.

☐ Run quarterly deepfake fraud simulations: Testing attack scenarios helps institutions prepare for evolving AI fraud banking in India.
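The "immutable trail" item in the checklist can be implemented with a hash chain: each audit entry commits to the previous one, so tampering with any record invalidates everything after it. This is a minimal sketch of the chaining idea only; a real deployment would add timestamps, digital signatures, and durable storage.

```python
# Minimal hash-chained audit log for identity-check decisions.
# Each entry's hash covers the previous entry's hash plus its own payload,
# so editing any historical record breaks verification of the whole chain.
import hashlib
import json

def append_entry(log, record):
    prev = log[-1]["hash"] if log else "0" * 64
    payload = json.dumps(record, sort_keys=True)
    entry_hash = hashlib.sha256((prev + payload).encode()).hexdigest()
    log.append({"record": record, "prev": prev, "hash": entry_hash})

def verify_chain(log):
    prev = "0" * 64
    for entry in log:
        payload = json.dumps(entry["record"], sort_keys=True)
        expected = hashlib.sha256((prev + payload).encode()).hexdigest()
        if entry["prev"] != prev or entry["hash"] != expected:
            return False
        prev = entry["hash"]
    return True

log = []
append_entry(log, {"session": "kyc-001", "liveness_score": 0.97, "decision": "pass"})
append_entry(log, {"session": "kyc-002", "liveness_score": 0.12, "decision": "fraud_review"})
print(verify_chain(log))            # True

log[0]["record"]["decision"] = "pass_forced"   # attempted tampering
print(verify_chain(log))            # False: chain broken
```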

How HyperVerge Detects Deepfakes in Video KYC

Deepfake attacks often appear at the exact moment banks verify identity during onboarding. That is why detection must happen within the video KYC flow, not after it. HyperVerge approaches this challenge with a layered verification system that analyzes both the face and the video stream in real time.

The process begins with passive liveness detection, which verifies that the person in front of the camera is real without asking them to blink, smile, or move their head. Instead, the system analyzes subtle biometric signals such as skin texture, light reflections, and facial micro-features. These signals are difficult for synthetic media to reproduce, which makes passive analysis more reliable for detecting deepfake manipulation.

From there, HyperVerge’s models scan frames for anomalies linked to face swaps, GAN-generated faces, replay attacks, and print attacks. At the same time, the platform checks the integrity of the video stream to detect injection attacks, in which attackers attempt to bypass the camera and insert a pre-generated deepfake feed directly into the verification pipeline. This layered approach allows detection to remain both fast and accurate. 

Independent benchmarks show HyperVerge’s deepfake and liveness detection achieves around 98.5% accuracy and detects deception in under three seconds. The system has also been tested extensively in production environments. For example, it has processed more than 850 million liveness checks over the past three years with over 99.9% accuracy. 

Additionally, the platform’s passive liveness technology is also ISO 30107-3 Level 2 certified by iBeta, confirming it can withstand advanced spoofing attempts, including masks and deepfakes. Real banks are already using this technology. Suryoday Small Finance Bank, for instance, deployed HyperVerge’s video KYC solution in 2026 to improve both conversion rates and compliance confidence across its onboarding processes. 

If you want to understand the full verification flow in action, see how HyperVerge detects deepfakes in a real onboarding environment. Book a demo today to learn more!

Frequently Asked Questions

How does deepfake fraud target video KYC?

Fraudsters manipulate video feeds or use synthetic faces during onboarding sessions. These attacks exploit systems that rely only on basic presence checks. Advanced detection systems analyze video artifacts and device signals to detect manipulation.

Is deepfake fraud punishable under Indian law?

Yes. Section 66D of the Information Technology Act criminalizes cheating by personation using computer resources. Authorities can prosecute deepfake impersonation cases under this provision and other cybercrime laws.

What is the difference between liveness detection and deepfake detection?

Liveness detection confirms that a person appears in front of the camera, while deepfake detection analyzes whether the face itself is authentic. Advanced systems such as HyperVerge identify AI-generated faces, injection attacks, and synthetic media artifacts.

Can fraudsters use cloned voices against banks?

Yes, fraudsters use voice cloning tools to impersonate customers or executives during phone calls. Banks that rely only on voice recognition may fail to detect these impersonation attempts.

What should a bank do after a deepfake fraud attempt?

Banks should document the incident, alert fraud teams immediately, and report the attack to regulators. Institutions must also review identity verification controls and strengthen detection systems to prevent similar attempts.

Preeti Kulkarni

Content Marketer

Preeti is a tech enthusiast who enjoys demystifying complex tech concepts majorly in fintech solutions. Infusing her enthusiasm into marketing, she crafts compelling product narratives for HyperVerge's diverse audience.
