Artificial Intelligence is growing rapidly in India. While AI tools are helping businesses, education, and content creators, they are also being misused to create fake videos, voice clones, and misleading digital content. To curb this misuse, the Government of India has introduced a strict rule requiring the removal of unlawful AI-generated and deepfake content within three hours of receiving a valid complaint.
This move is aimed at preventing misinformation, online fraud, political manipulation, and reputational damage caused by synthetic media.
What Is the 3-Hour AI & Deepfake Rule?
The Ministry of Electronics and Information Technology has amended the Information Technology Rules to ensure faster action against harmful AI-generated content.
According to the new directive:
- Social media platforms and digital intermediaries must remove illegal AI or deepfake content within 3 hours of receiving official notice or a valid complaint.
- The rule applies to content that is misleading, impersonating, fraudulent, or harmful.
- Platforms that fail to comply may face legal consequences under Indian cyber law.
This rule strengthens accountability for digital platforms operating in India.
Why Did the Government Introduce This Rule?
Deepfake technology has become more advanced and more widely accessible. Several incidents involving fake political speeches, celebrity impersonation, financial scams using AI voice cloning, and misinformation campaigns have raised serious concerns.
The government observed that harmful content spreads rapidly within minutes. Delayed removal often leads to irreversible damage. Therefore, the 3-hour window ensures rapid response and public protection.
Legal Basis of the Rule
The amendment is made under the Information Technology Act, 2000 and the IT Rules, 2021 framework. The government has expanded the responsibility of intermediaries to ensure:
- Faster grievance redressal
- Stronger due diligence
- Prevention of AI misuse
- Protection of users from digital harm
The updated framework clarifies that platforms cannot ignore complaints regarding synthetic or manipulated media.
What Type of Content Will Be Removed?
The rule mainly targets:
- Deepfake videos of public figures
- AI-generated fake political speeches
- Fraudulent AI voice scams
- Morphed images used for blackmail
- Misleading synthetic media causing public harm
Normal creative AI content that does not violate laws is not targeted under this rule.
What This Means for Social Media Platforms
Major platforms operating in India must now:
- Strengthen monitoring systems
- Improve AI detection tools
- Respond quickly to complaints
- Maintain transparent grievance systems
Failure to comply may result in loss of safe harbour protection under Indian IT law.
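To make the compliance requirement concrete, here is a minimal, hypothetical sketch of how a platform's trust-and-safety system might track the 3-hour takedown window. The function names and timestamps are illustrative assumptions for this article, not part of the government's rule or any real platform's API:

```python
from datetime import datetime, timedelta, timezone

# The 3-hour window specified by the amended IT Rules (per this article).
REMOVAL_WINDOW = timedelta(hours=3)

def removal_deadline(complaint_received_at: datetime) -> datetime:
    """Latest time by which the flagged content must be taken down."""
    return complaint_received_at + REMOVAL_WINDOW

def is_compliant(complaint_received_at: datetime, removed_at: datetime) -> bool:
    """True if the takedown happened within the 3-hour window."""
    return removed_at <= removal_deadline(complaint_received_at)

# Illustrative example: a complaint logged at 10:00 UTC
received = datetime(2025, 1, 15, 10, 0, tzinfo=timezone.utc)
print(is_compliant(received, received + timedelta(hours=2)))  # True: removed in time
print(is_compliant(received, received + timedelta(hours=4)))  # False: deadline missed
```

In practice, platforms would attach such a deadline to each complaint ticket and alert moderation teams well before it expires.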
What This Means for Content Creators
Content creators must be careful while using AI tools. If AI-generated content misleads people or harms someone’s reputation, it may face immediate removal.
However, educational, creative, and ethical AI use remains fully allowed. Responsible AI usage is the key.
Impact on Digital India
This rule marks an important step in regulating artificial intelligence while maintaining innovation. It aims to create a safer digital ecosystem without restricting genuine creativity.
India is positioning itself as a country that supports AI growth but does not tolerate misuse.
Conclusion:
The 3-hour AI and deepfake removal rule reflects the government’s intention to act swiftly against digital misinformation and fraud. As AI technology evolves, stronger digital governance becomes essential.
For users, the message is simple: verify before sharing. For platforms, the responsibility is clear: act fast. For creators, the path is responsible innovation. Digital freedom comes with digital accountability.