RBI Video KYC Deepfake Guidelines: 2026 Compliance Guide

Fraud losses reported to the RBI surged 715% in the first half of FY2024-25, hitting a staggering INR 21,367 crore ($2.56 billion). With financial services remaining the primary target for bad actors, the industry is facing a fundamental shift in the nature of identity theft.

India’s Video KYC (V-CIP) framework was originally designed for a simpler era, one where the primary threat was a fraudster holding a physical photograph in front of a lens.

Today’s sophisticated attackers bypass the camera entirely. By injecting AI-generated video streams directly into sessions or using real-time synthetic overlays, they can easily circumvent legacy liveness detection systems.

In 2026, the critical question is no longer whether your platform has liveness detection, but whether that detection is robust enough to stop the specific deepfake vectors currently targeting Indian banks and fintechs.

This article breaks down the updated RBI guidelines on deepfake prevention, provides a framework to assess your current implementation, and explores what genuinely compliant detection looks like in the age of generative AI.

What does the RBI’s video KYC framework say about deepfake detection and compliance?

India introduced V-CIP under the RBI Master Direction DBR.AML.BC.No.81/14.01.001/2015-16, becoming one of the first countries globally to formally recognise video KYC as equivalent to in-person verification.

The framework has since been amended multiple times, each update tightening the technical requirements as digital onboarding evolved. But the parts that matter most for deepfake detection compliance are buried in the technical requirements of Paragraph 18, which governs V-CIP infrastructure, procedure, and data management.

What does Paragraph 18 actually say about technology?

The Master Direction states explicitly that the V-CIP application must have ‘components with face liveness/spoof detection as well as face matching technology with a high degree of accuracy.’ It then adds that ‘Appropriate Artificial Intelligence (AI) technology can be used to ensure that the V-CIP is robust.’

Source: RBI Master Circular

That phrase ‘appropriate AI technology’ is significant. RBI is not prescribing a specific method. It is prescribing an outcome: robustness. And robustness, in the context of today’s deepfake threats, means something considerably more than a blink-and-turn check.

The circular also mandates that, based on detected or near-miss cases of forged identity, the technology infrastructure must be regularly upgraded. And any detected case of forged identity through V-CIP must be reported as a cyber event under extant regulatory guidelines. 

These two provisions, the upgrade obligation and incident reporting, are where many implementations fall short.

Who must comply?

The Master Direction applies to all Regulated Entities (REs) as defined in Paragraph 3(b)(xiv). This includes every Scheduled Commercial Bank, Regional Rural Bank, Urban Cooperative Bank, NBFC, Payment System Provider, Prepaid Payment Instrument Issuer, and other RBI-regulated financial entity.

If your organisation onboards customers remotely through video KYC, V-CIP compliance is a baseline regulatory requirement.

The infrastructure requirements (not optional)

Before a regulated entity can go live with V-CIP, the circular requires that the infrastructure meet a set of non-negotiable baseline conditions. These include:

  1. End-to-end encryption: The connection between the customer’s device and the V-CIP application must be encrypted to appropriate standards.
  2. IP-origin controls: The application must be capable of preventing connections from IP addresses outside India or from spoofed IP addresses (this is a specific technical requirement, not a best practice).
  3. Geo-tagging and timestamp: Video recordings must include live GPS coordinates and a date and time stamp, with sufficient quality to identify the customer beyond a reasonable doubt.
  4. Security testing: The infrastructure must undergo Vulnerability Assessment, Penetration Testing, and a Security Audit conducted by CERT-In empanelled auditors. These must be repeated periodically, and any critical gap must be resolved before the system goes live or remains live.
  5. Cloud data sovereignty: If cloud infrastructure is used, all data, including video recordings, must be transferred to the RE’s exclusively owned or leased servers immediately after the V-CIP process is completed. No data may be retained by a third-party cloud provider.
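
Requirement 2 above (IP-origin controls) can be approximated at the application layer. The sketch below is a hypothetical Python check, not a prescribed implementation: the country lookup is stubbed out, and a production system would back it with a maintained GeoIP database plus network-level controls.

```python
import ipaddress

def resolve_country(ip: str) -> str:
    """Stub for a GeoIP lookup; a real deployment would query a
    maintained IP-geolocation database here."""
    return "IN"  # placeholder result for illustration

def is_connection_allowed(ip: str) -> bool:
    addr = ipaddress.ip_address(ip)
    # Private, loopback, or reserved source addresses reaching a
    # public V-CIP endpoint are strong spoofing indicators.
    if addr.is_private or addr.is_loopback or addr.is_reserved:
        return False
    # Reject connections that do not geolocate to India.
    return resolve_country(ip) == "IN"
```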

The procedure requirements: What the official must do

The circular is precise on the human side of the process. Under the RBI KYC circular’s liveness check requirements, V-CIP must be operated only by officials of the regulated entity who are specifically trained for the purpose. These officials must be able to:

  • Carry out liveness checks and detect fraudulent manipulation or suspicious conduct independently
  • Vary the sequence and type of questions to establish that the interaction is real-time and not pre-recorded
  • Match the customer’s photograph on the OVD/Aadhaar and PAN against the live video feed
  • Reject sessions if any prompting is observed at the customer’s end
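
The requirement to vary the sequence and type of questions maps naturally to per-session randomisation. A minimal sketch, assuming a hypothetical question bank; the session ID is used as a seed so the audit trail can reproduce exactly what was asked, while the sequence stays unpredictable to a fraudster preparing a recording:

```python
import random

# Hypothetical question bank; a real deployment would maintain a larger,
# regularly refreshed pool of liveness prompts.
QUESTION_BANK = [
    "Hold up three fingers on your left hand.",
    "Read aloud the four digits now shown on screen.",
    "Turn your head slowly to the right.",
    "State today's date.",
    "Touch your right ear.",
]

def session_prompts(session_id: str, k: int = 3) -> list[str]:
    # Seeding per session keeps the selection reproducible for audit
    # purposes, yet different across sessions.
    rng = random.Random(session_id)
    return rng.sample(QUESTION_BANK, k)
```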

Additionally, the Master Direction explicitly encourages REs to adopt AI and machine learning technologies for ongoing due diligence monitoring, stating that such tools can ‘support effective monitoring’ of transaction patterns. 

Here are the key V-CIP requirements at a glance:

| Provision | Paragraph | What it requires | Deepfake relevance |
| --- | --- | --- | --- |
| Liveness + spoof detection | 18(a)(v) | High-accuracy face liveness, spoof detection, and face matching | Mandatory detection of synthetic faces and manipulated video |
| Spoofed IP prevention | 18(a)(iii) | Block connections from spoofed IPs | Injection attack defence at the network layer |
| Forged identity upgrades | 18(a)(vi) | Upgrade infrastructure on near-miss cases; report as a cyber event | Continuous improvement and reporting obligations |
| Varied liveness questions | 18(b)(iii) | Vary the question sequence to confirm real-time interaction | Procedural defence against replay and pre-recorded attacks |
| AI/ML for monitoring | Para 36 | Adopt AI/ML for ongoing due diligence monitoring | Post-onboarding deepfake and fraud monitoring |
| New technology risk assessment | Para 62 | Assess ML/TF risk from new technologies before deployment | Deepfake as an emerging technology risk |
| Passive liveness | 18(b)(i), Aug 2025 | Liveness check must not exclude persons with special needs | Implies passive AI liveness as baseline standard |

Source: RBI Master Direction DBR.AML.BC.No.81/14.01.001/2015-16, updated August 14, 2025

RBI master circular on liveness detection (active and passive)

A basic liveness check satisfied the literal minimum of the 2020 V-CIP framework. In 2026, that same check is routinely defeated by advanced AI spoofing and fraud attacks. The RBI’s liveness detection norms have evolved alongside these threats, and the compliance baseline has moved accordingly.

Now, as stated above, if a regulated entity’s V-CIP system is being targeted and those attacks are succeeding or nearly succeeding, the entity has an explicit regulatory obligation under the RBI’s video KYC requirements to upgrade its infrastructure. Failure to do so is documented non-compliance.

The Master Direction does not use the terms ‘active liveness’ and ‘passive liveness’ explicitly. But what it does specify, under Paragraph 18(b), is substantive.

  1. First, the application must have ‘components with face liveness/spoof detection as well as face matching technology with a high degree of accuracy.’ This is a technology requirement, not just a procedural one.
  2. Second, the authorised official conducting the V-CIP ‘should be capable to carry out liveness check and detect any other fraudulent manipulation or suspicious conduct of the customer and act upon it.’ This means the human element is part of the liveness framework, not a replacement for automated detection. 
  3. Third, and importantly for anti-deepfake purposes, the Direction states that ‘the sequence and/or type of questions, including those indicating the liveness of the interaction, during video interactions shall be varied in order to establish that the interactions are real-time and not pre-recorded.’ This is an active liveness requirement: it requires behavioral unpredictability that defeats replay and pre-recorded video attacks.

The compliance question most teams are not asking:
Most V-CIP audits verify that liveness detection is present and that the infrastructure passed its last VAPT. Very few ask: has the liveness detection been specifically tested against injection attacks and GAN-generated faces, not just print and replay attacks? If your VAPT report does not specifically reference injection attack testing, the answer is probably no.

How deepfakes are actually attacking onboarding flows and video KYC compliance

There are two primary attack vectors that compliance teams need to understand when it comes to the RBI’s deepfake fraud prevention obligations.

  1. The first is face swap attacks, where a bad actor takes a legitimate person’s identity documents and substitutes a deepfake face during the live video session. Early-generation face swaps were detectable, but modern GAN-based (Generative Adversarial Network) tools produce face swaps that defeat simple liveness checks with high reliability.
  2. The second, and more sophisticated, is the video injection attack. Rather than appearing in front of a real camera, the attacker feeds a pre-generated or real-time synthesized video stream directly into the device’s virtual camera layer, bypassing the physical camera entirely. The detection system sees what appears to be a live video, but is receiving a manufactured feed.

Both attack types are specifically relevant to the Master Direction’s requirement that the application ‘prevent connection from IP addresses outside India or from spoofed IP addresses’ and maintain ‘end-to-end encryption of data between customer device and the hosting point of the V-CIP application.’ 

These provisions were designed partly with injection-style attacks in mind.
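
At the endpoint level, one cheap but weak signal against injection is the label of the capture device itself. The heuristic below is purely illustrative (the marker strings are assumptions, and attackers can rename or emulate driver-level devices); genuine injection defence also requires stream-level integrity checks as discussed above.

```python
# Hypothetical pre-session check: flag device labels commonly associated
# with virtual-camera software before the V-CIP stream is allowed to start.
VIRTUAL_CAMERA_MARKERS = ("obs", "virtual", "manycam", "snap camera")

def looks_like_virtual_camera(device_label: str) -> bool:
    """Return True if the camera label matches a known virtual-camera marker.
    This is a weak heuristic, not a substitute for stream-level detection."""
    label = device_label.lower()
    return any(marker in label for marker in VIRTUAL_CAMERA_MARKERS)
```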

The IT Act’s ‘reasonable precautions’ standard

Paragraph 18(b)(xiii) of the Master Direction states: ‘All matters not specified under the paragraph but required under other statutes such as the Information Technology (IT) Act shall be appropriately complied with by the RE.’

Section 43A of the IT Act requires organisations handling sensitive personal data to implement and maintain ‘reasonable security practices and procedures.’ What constitutes ‘reasonable’ evolves with the threat landscape. 

A court or regulator assessing whether an institution took reasonable precautions against a deepfake-enabled identity fraud in 2026 would apply a very different standard than they might have in 2021.

This is why the compliance case for GAN-level deepfake detection does not require an explicit RBI circular to be compelling.

Compliance self-assessment: Is your video KYC deepfake-ready?

Work through the following questions against your current V-CIP infrastructure to map it against the V-CIP deepfake requirements:

Infrastructure

  1. Is your V-CIP technology infrastructure housed in your organisation’s own premises? ☐ Yes ☐ No ☐ Partially
  2. If you use cloud deployment, does ownership of all data, including video recordings, rest exclusively with your organisation, and is data transferred to your server immediately after each V-CIP session? ☐ Yes ☐ No ☐ Not Sure
  3. Does your application block connections from IP addresses outside India and from spoofed IP addresses? ☐ Yes ☐ No ☐ Not Sure
  4. Have you conducted Vulnerability Assessment, Penetration Testing, and Security Audit by a CERT-In empanelled auditor? ☐ Yes ☐ No ☐ In Progress

Liveness and deepfake detection

  1. Does your V-CIP application include automated face liveness detection, not just face presence detection? ☐ Yes ☐ No ☐ Not Sure
  2. Does your liveness detection system specifically address video injection attacks? ☐ Yes ☐ No ☐ Not Sure
  3. Does your system have the capability to detect GAN-based or AI-generated face swaps, distinct from basic spoofing? ☐ Yes ☐ No ☐ Not Sure
  4. Are the questions and behavioral prompts used during V-CIP sessions varied, as required to prevent pre-recorded or replay attacks? ☐ Yes ☐ No ☐ Partially

Process and governance

  1. Do you have a documented process for logging detected, attempted, and near-miss cases of forged identity during V-CIP? ☐ Yes ☐ No 
  2. Does your technology infrastructure get reviewed and upgraded in response to those logs? ☐ Yes ☐ No ☐ Ad Hoc
  3. Are forgery cases reported as cyber events under applicable RBI guidelines? ☐ Yes ☐ No ☐ Not Consistently
  4. Are all V-CIP accounts subjected to concurrent audit before being made operational? ☐ Yes ☐ No ☐ Partially

Scoring guidance: If you answered ‘No’ or ‘Not Sure’ to three or more questions in the ‘Liveness and Deepfake Detection’ section, your current V-CIP implementation may not meet the effective standard for deepfake detection RBI compliance. If you answered ‘No’ or ‘Not Sure’ to two or more questions in the ‘Infrastructure’ section, your implementation may not meet the baseline technical requirements of Paragraph 18(a), regardless of deepfake considerations.
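
The scoring guidance above can be expressed as a small helper. This is a hypothetical sketch: the section names and answer strings are assumptions matching the checklist wording, not part of any RBI requirement.

```python
# Thresholds from the scoring guidance: 3+ non-'Yes' answers in the
# liveness section, or 2+ in infrastructure, indicate a likely gap.
THRESHOLDS = {
    "Infrastructure": 2,
    "Liveness and deepfake detection": 3,
}

def flagged_sections(answers: dict[str, list[str]]) -> list[str]:
    """Return the checklist sections whose count of non-'Yes' answers
    meets or exceeds the section's threshold."""
    flags = []
    for section, threshold in THRESHOLDS.items():
        misses = sum(1 for a in answers.get(section, []) if a != "Yes")
        if misses >= threshold:
            flags.append(section)
    return flags
```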

What does compliant deepfake detection look like in practice?

The Master Direction doesn’t prescribe a specific technical method for RBI video KYC deepfake guidelines. It does, however, mandate high accuracy, regular upgrades, and a documented audit trail.

In practice, that means passive liveness with GAN artefact analysis as the primary layer, with active gesture prompts as a backup. It also means injection attack detection at the stream level, not just IP filtering.

Deepfake detection must happen within the live session itself. Post-session analysis doesn’t satisfy the ‘seamless, live’ requirement under Paragraph 18. Every session needs a logged record: liveness score, face match score, GPS coordinates, timestamp, and the conducting official’s ID.
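
The per-session record described above can be modelled as a simple typed structure. The field names here are assumptions for illustration, not a prescribed RBI schema.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Hypothetical per-session audit record covering the fields Paragraph 18
# expects to be retrievable: scores, location, time, and the official's ID.
@dataclass(frozen=True)
class VCIPSessionRecord:
    session_id: str
    liveness_score: float    # 0.0-1.0, vendor-specific scale
    face_match_score: float  # 0.0-1.0, vendor-specific scale
    latitude: float
    longitude: float
    captured_at: str         # ISO-8601 UTC timestamp
    official_id: str

def make_record(session_id, liveness, match, lat, lon, official_id):
    """Build an immutable audit record stamped at creation time."""
    return VCIPSessionRecord(
        session_id=session_id,
        liveness_score=liveness,
        face_match_score=match,
        latitude=lat,
        longitude=lon,
        captured_at=datetime.now(timezone.utc).isoformat(),
        official_id=official_id,
    )
```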

On model updates, quarterly is the floor. Deepfake tools evolve fast, and a model benchmarked in 2024 could have real blind spots by 2026.

How does HyperVerge support RBI V-CIP compliance? 

Is your compliance team able to keep pace with every RBI revision? Do they find it hard to decode the RBI’s video KYC requirements for your tech stack? HyperVerge can help flip that.

HyperVerge’s Video KYC solution covers the full V-CIP workflow: CKYCR record management, Digilocker integration, KRA verification, and liveness detection built for the current threat environment. 

One SDK, 100+ API integrations, live in under a week. Your team stays audit-ready without having to rebuild from scratch every time the circular updates.

Schedule a demo with HyperVerge’s Video KYC team today.

Frequently Asked Questions

What are the RBI’s liveness detection requirements for video KYC?

Per the Master Direction on KYC, the V-CIP application must have ‘components with face liveness/spoof detection as well as face matching technology with high degree of accuracy.’ Additionally, under Paragraph 18(b)(i), the authorised official conducting the V-CIP must be ‘capable to carry out liveness check and detect any other fraudulent manipulation.’ These form the core of the RBI’s video KYC liveness detection obligations.

Is basic liveness detection enough for RBI compliance?

Basic liveness detection satisfies a narrow reading of the RBI KYC circular’s liveness check requirement. However, the Master Direction also requires infrastructure to be ‘regularly upgraded’ based on emerging fraud patterns, and requires compliance with the IT Act’s reasonable precautions standard.

Can deepfakes bypass video KYC?

Deepfakes can bypass basic compliance processes. But the Master Direction’s combination of spoof detection, AI-powered robustness, varied interaction sequencing, and regular upgrades is designed to make V-CIP resistant to evolving fraud.

Which document governs video KYC in India?

The primary regulatory document is the RBI Master Direction titled ‘Know Your Customer (KYC) Direction, 2016.’ V-CIP is governed under Paragraph 18 of Chapter VI. The Direction has been updated multiple times; the most recent update, as of this writing, is dated August 14, 2025.

Do the RBI guidelines mention deepfakes explicitly?

The RBI’s V-CIP guidelines do not use the word ‘deepfake.’ However, they do require ‘spoof detection,’ a ‘high degree of accuracy’ in face matching and liveness, the use of AI technology for robustness, and regular upgrades based on detected fraud patterns.

What happens if a deepfake gets through V-CIP?

Per the Master Direction, any detected case of forged identity through V-CIP must be ‘reported as a cyber event under extant regulatory guidelines.’ Beyond the reporting obligation, a successful deepfake-enabled identity fraud would likely trigger scrutiny of the institution’s V-CIP implementation, specifically whether it met the upgrade obligation, the spoof detection requirement, and the IT Act’s reasonable precautions standard.

Preeti Kulkarni

Content Marketer

Preeti is a tech enthusiast who enjoys demystifying complex tech concepts, mainly in fintech. Infusing her enthusiasm into marketing, she crafts compelling product narratives for HyperVerge's diverse audience.
