Are Deepfakes Illegal in India? Laws, Penalties & 2026 Updates

Uncover how deepfake laws and regulations worldwide are addressing this ever-evolving tech.

Deepfakes are illegal in India when they violate privacy, impersonate someone, or are used for fraud or harassment. Under provisions of the Information Technology Act (IT Act), 2000, creating or distributing deepfake content without consent can lead to up to three years of imprisonment and fines of up to ₹2 lakh, depending on the applicable section. As of 2026, several countries have introduced dedicated laws or broader AI regulations to govern deepfakes and synthetic media.

What is deepfake technology and why is it a concern?

The term ‘deepfake’, a blend of ‘deep learning’ and ‘fake’, refers to hyper-realistic digital manipulations in which faces or voices are swapped, often without consent. Deepfakes are driven by advanced AI, machine learning, and deep learning techniques, and their potential for misuse has sparked immense concern.


Creating deepfake videos has become alarmingly easy, thanks to the availability of user-friendly software and the vast amount of data accessible online.

Advanced artificial intelligence tools required to create deepfakes are now more accessible than ever. With basic knowledge and resources, almost anyone can create a convincing deepfake. Moreover, social media platforms help deepfakes travel fast around the world: a deepfake video can go viral, reaching many people before anyone even questions whether it is real.

Read more: How to spot a deepfake

Potential harms of deepfakes

Deepfakes represent a significant threat in various ways, particularly in terms of defamation, privacy invasion, and damage to reputations. Let’s delve deeper into these aspects:

Defamation: Deepfake content can be weaponized to create false narratives. For instance, a politician could be depicted as saying something they never did, potentially ruining their career. This kind of digital defamation on online platforms is not just a personal attack but can influence public opinion and disrupt democratic processes.

Privacy Invasion: Consider a case where a person’s image is used without their consent, perhaps in a compromising or embarrassing situation. This not only violates their privacy but can also cause emotional distress and social humiliation.

Read more: How to detect AI-generated selfies

Damage to reputation: For companies, video manipulation could depict a top executive partaking in illegal or unethical actions. This could result in a loss of customer trust, a sharp fall in stock value, and lasting damage to the company’s image.

The existing legal system isn’t fully equipped to deal with the problems created by such fake videos. Efforts are underway worldwide to build regulatory frameworks around deepfakes, and deepfake detection tools are increasingly becoming the need of the hour.
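As a toy illustration of the kind of signal automated detection tools can look for, the sketch below measures how much of an image’s spectral energy sits in high frequencies, a known (if simplistic) fingerprint of some GAN-generated imagery. This is an illustrative heuristic only, not HyperVerge’s method or a production detector; the 0.35 radius cutoff and the 64×64 test arrays are arbitrary assumptions.

```python
import numpy as np

def high_freq_energy_ratio(gray: np.ndarray) -> float:
    """Fraction of an image's spectral energy in the outer (high-frequency)
    band of its 2-D Fourier spectrum. Some synthesis pipelines leave unusual
    high-frequency fingerprints; real detectors use trained models instead."""
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(gray))) ** 2
    h, w = spectrum.shape
    yy, xx = np.ogrid[:h, :w]
    radius = np.hypot(yy - h / 2, xx - w / 2)
    high_band = radius > 0.35 * min(h, w)  # arbitrary cutoff for the sketch
    return float(spectrum[high_band].sum() / spectrum.sum())

# A smooth image concentrates energy at low frequencies; adding noise
# (a stand-in for synthesis artifacts) pushes energy outward.
rng = np.random.default_rng(0)
smooth = np.outer(np.hanning(64), np.hanning(64))
noisy = smooth + 0.5 * rng.standard_normal((64, 64))
print(high_freq_energy_ratio(smooth) < high_freq_energy_ratio(noisy))  # True
```

Production-grade detectors combine many such signals with trained neural networks; a single spectral ratio is far too weak to use on its own.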

Deepfake Laws Around the World

Governments are increasingly regulating deepfakes due to their impact on privacy, financial fraud, and misinformation. While some jurisdictions rely on existing cybercrime laws, others have introduced AI-specific legislation targeting synthetic media.

| Country | Governing Law | Maximum Penalty |
| --- | --- | --- |
| India | Information Technology Act 2000 + IT (Intermediary Guidelines) Rules | Up to 3 years imprisonment and fines up to ₹2 lakh depending on the applicable section |
| United States | DEFIANCE Act (2024) and various state-level deepfake laws | Civil liability and financial damages |
| United Kingdom | Online Safety Act 2023 | Criminal liability and unlimited fines |
| European Union | EU AI Act | Up to 6% of global annual revenue |

Unlike the EU or UK, India currently regulates deepfakes through existing cybercrime, privacy, and platform accountability laws rather than a dedicated deepfake statute.

Real Enforcement Cases Involving Deepfakes and Financial Fraud

Deepfake-enabled fraud is no longer theoretical. Regulators and law enforcement agencies worldwide have already investigated several cases where AI-generated media was used to impersonate individuals and manipulate financial transactions.

Corporate Deepfake Voice Fraud (UK)

In one widely reported case, criminals used AI voice cloning to impersonate a company CEO and instructed a finance executive to transfer funds to a supplier. Believing the voice request was legitimate, the employee transferred over $240,000 before the fraud was discovered.

The incident highlighted how deepfake voice technology can be used to bypass trust-based corporate payment controls.

Deepfake Executive Video Fraud (Hong Kong, 2024)

In 2024, scammers conducted a deepfake video call impersonating multiple company executives during what appeared to be a legitimate internal meeting.

An employee was convinced to authorize several transactions totaling more than $25 million to fraudulent accounts. Authorities later confirmed that the meeting participants were AI-generated video impersonations.

Deepfake Investment Promotion Scams (India)

Indian authorities have also warned about AI-generated videos impersonating financial experts and public figures to promote fraudulent investment schemes.

These scams often circulate on social media ads or messaging groups, directing victims to fake trading platforms where funds are collected and later disappear.

Why this matters for fintech:
These cases demonstrate how deepfakes can exploit identity verification gaps, payment authorization processes, and investor trust, making them a growing concern for banks, fintech platforms, and digital financial services.

Deepfake Laws in India

India does not yet have a standalone law specifically targeting deepfakes. Instead, authorities address malicious deepfakes using provisions from the Information Technology Act, intermediary rules governing platforms, and sector-specific regulations.

These laws focus on privacy protection, impersonation fraud, and harmful digital content, which are the most common ways deepfakes are misused.

1. IT Act Sections Applicable to Deepfakes

Several provisions of the IT Act are commonly applied when deepfakes are used for fraud, impersonation, or privacy violations.

Section 66E — Violation of Privacy

This provision applies when someone publishes or transmits images of a person’s private areas without consent. Non-consensual deepfake content, particularly intimate deepfakes, may fall under this section.

Penalty: Up to 3 years imprisonment or a fine of up to ₹2 lakh, or both.

Section 66D — Cheating by Personation Using a Computer Resource

If deepfakes are used to impersonate someone for financial fraud or deception, such as voice cloning scams or fake executive videos, they may be prosecuted under this section.

Penalty: Up to 3 years imprisonment and/or a fine of up to ₹1 lakh.

In cases involving obscene or sexually explicit deepfake content, authorities may also apply Sections 67 and 67A of the IT Act, which regulate the publication and transmission of explicit digital material.

2. IT (Intermediary Guidelines and Digital Media Ethics Code) Rules

India’s Intermediary Rules govern how online platforms handle harmful or illegal content.

While the rules do not explicitly mention deepfakes, they require social media platforms to remove content that:

  • Impersonates individuals
  • Misleads users or spreads misinformation
  • Violates privacy or dignity

Platforms that fail to act after being notified can lose their safe harbour protection, meaning they may be held legally liable for hosting unlawful content.

In practice, these rules are one of the primary mechanisms used to remove deepfake content in India.

3. SEBI Guidance on Misleading Digital Content

India’s securities regulator, SEBI, has repeatedly warned about misleading or manipulated content used in financial promotions.

Regulators have emphasized that:

  • Investor communications must remain accurate and verifiable
  • Digital content that impersonates public figures or executives can mislead investors
  • Financial firms must monitor online channels for fraudulent communications

This is increasingly relevant as deepfake videos and AI-generated voice messages are being used in investment scams and financial misinformation campaigns.

4. Government Advisory on Deepfakes (MeitY, 2023)

In November 2023, the Ministry of Electronics and Information Technology (MeitY) issued an advisory to online platforms addressing the rise of deepfake content.

The advisory reminded platforms that they must:

  • Remove unlawful or impersonating deepfake content
  • Ensure compliance with the IT Act and intermediary rules
  • Prevent the spread of misleading synthetic media

This marked one of the first instances where the Indian government explicitly addressed deepfakes as a growing digital risk.

5. Emerging Regulations on AI-Generated Content

India is also considering stricter rules around synthetic media and AI-generated content.

Policy discussions and proposed regulatory updates suggest that platforms may soon be required to:

  • Label AI-generated or manipulated media
  • Improve detection systems for synthetic content
  • Increase transparency around algorithmically generated media

If implemented, these measures would bring India closer to global frameworks such as the EU AI Act, which explicitly regulates synthetic media.

Can Deepfakes Be Used in Financial Fraud?

Yes. Deepfakes are increasingly being used in financial fraud and identity impersonation, especially in digital onboarding and remote verification flows.

Fraudsters can use AI-generated video, voice cloning, or face-swap technology to impersonate legitimate users during processes such as:

  • Video KYC onboarding
  • Facial authentication checks
  • Customer support interactions
  • Investment promotions or financial advice

If successful, attackers can create synthetic identities, open fraudulent accounts, or manipulate victims into transferring money.

For banks and fintech platforms, this makes deepfake detection an emerging compliance and fraud-prevention priority.
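To make the fraud-prevention angle concrete, here is a toy decision rule combining the kinds of signals an onboarding flow might collect. The field names and thresholds are invented for illustration and do not reflect any regulator’s or vendor’s actual rules.

```python
from dataclasses import dataclass

@dataclass
class SessionSignals:
    liveness_passed: bool      # did the liveness check succeed?
    face_match_score: float    # 0..1 similarity to the ID document photo
    virtual_camera: bool       # was an injected/virtual camera detected?

def onboarding_decision(s: SessionSignals) -> str:
    """Toy policy: hard-fail on camera injection or failed liveness, send
    borderline face matches to manual review, approve the rest."""
    if s.virtual_camera or not s.liveness_passed:
        return "reject"
    if s.face_match_score < 0.85:  # illustrative threshold
        return "manual_review"
    return "approve"

print(onboarding_decision(SessionSignals(True, 0.95, False)))  # approve
```

Real systems weigh many more signals (device fingerprints, velocity checks, document forensics) and tune thresholds against observed fraud rates rather than hard-coding them.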

What RBI and SEBI Expect Financial Institutions to Do

Indian regulators increasingly expect financial institutions to implement strong safeguards against impersonation and digital identity fraud.

RBI Requirements:

Under the Video Customer Identification Process (V-CIP) framework, institutions must verify that:

  • The interaction happens in real time
  • The customer is physically present
  • The video feed is live and not prerecorded

These requirements are designed to prevent impersonation attempts during digital onboarding.
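The liveness requirements above can be sketched as a challenge-response check: a prerecorded clip cannot react to a prompt chosen at session time, so a correct reaction within a short window is evidence the feed is live. A minimal sketch follows; the prompt list, the 10-second window, and the function names are illustrative assumptions, not values specified by the RBI.

```python
import random
import time

# Illustrative prompts; real V-CIP implementations define their own.
CHALLENGES = ("blink twice", "turn your head left", "read out the code shown")

def issue_challenge(rng: random.Random) -> tuple[str, float]:
    """Pick an unpredictable prompt and record when it was shown."""
    return rng.choice(CHALLENGES), time.monotonic()

def is_live_response(issued_at: float, responded_at: float,
                     max_latency_s: float = 10.0) -> bool:
    """Treat a response as live only if it follows the prompt within a
    short window; prerecorded feeds fail this check by construction."""
    latency = responded_at - issued_at
    return 0.0 < latency <= max_latency_s

print(is_live_response(0.0, 3.0))   # True: prompt answered in 3 seconds
print(is_live_response(0.0, 45.0))  # False: too slow to be the same session
```

In practice this is paired with verifying the response itself (did the person actually blink or read the code), not just its timing.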

SEBI Guidance:

SEBI has warned about misleading digital content used in financial scams, including manipulated media impersonating financial experts or public figures. Financial institutions are expected to monitor such campaigns and ensure investor communications remain accurate and verifiable.

Read more: How To Prevent Deepfake Scams In User Onboarding

What Businesses Must Do to Prevent Deepfake Fraud

As deepfake technology becomes more accessible, organizations must strengthen their identity verification and fraud detection systems.

Key safeguards include:

  • Advanced liveness detection to identify AI-generated faces or replay attacks
  • Secure Video KYC systems that detect virtual camera injections or prerecorded feeds
  • Synthetic identity monitoring during digital onboarding
  • Monitoring impersonation scams targeting customers or investors
  • Continuous fraud monitoring after onboarding

For financial institutions and digital platforms, detecting manipulated images, videos, and biometric spoofing attempts is becoming critical for both fraud prevention and regulatory compliance.
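As one small, concrete example of the virtual-camera safeguard listed above, a first-line check can flag capture devices whose labels match known virtual-camera software. The signature list is illustrative, and because device names are trivially spoofable, real systems combine this with deeper OS-level and signal-level checks.

```python
# Illustrative signatures of common virtual-camera tools; a label check
# is only a first-line heuristic, since device names can be spoofed.
VIRTUAL_CAMERA_SIGNATURES = ("obs virtual", "manycam", "droidcam", "splitcam")

def looks_like_virtual_camera(device_label: str) -> bool:
    """True if the reported camera name matches a known virtual-camera tool."""
    label = device_label.lower()
    return any(sig in label for sig in VIRTUAL_CAMERA_SIGNATURES)

print(looks_like_virtual_camera("OBS Virtual Camera"))  # True
print(looks_like_virtual_camera("Integrated Webcam"))   # False
```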

👉 Learn how HyperVerge’s AI-powered identity verification and deepfake detection technology helps organizations detect deepfakes and prevent identity fraud.

Conclusion

The rise of deepfakes presents a complex challenge that spans legal, ethical, and technological realms. The potential for misuse, including defamation, privacy invasion, and the spread of misinformation, highlights the urgent need for robust laws and regulations. These laws must balance the prevention of harm with the protection of free speech and innovation, a task that requires global cooperation and nuanced understanding.

As we navigate this evolving landscape, the role of technology in detecting and combating deepfakes becomes increasingly vital. This is where solutions like HyperVerge’s deepfake detection play a crucial role. Want to see it in action? Sign up and get a customized demo.

Nupura Ughade

Content Marketing Lead

With a strong background in B2B tech marketing, Nupura brings a dynamic blend of creativity and expertise. She enjoys crafting engaging narratives for HyperVerge's global customer onboarding platform.
