
India Mandates 3-Hour Takedown for Deepfakes


The Indian government notified amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules on 10th February, formally bringing AI-generated content within India's intermediary regulation framework for the first time.


The amendments, set to take effect on 20th February, mandate that any synthetically generated audio, visual, or audiovisual content that appears real must be prominently labeled as AI-generated before it is published on any platform.


Platforms must embed persistent metadata or unique identifiers that enable tracing synthetic content back to the originating computer resource, and intermediaries are prohibited from allowing removal or modification of these AI labels. Social media platforms must obtain user declarations at the time of upload confirming whether content has been synthetically generated or altered using AI, and deploy reasonable technical measures, including automated tools, to verify these declarations.
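
The rules describe the outcome (a persistent label and a traceable identifier), not a specific format or technology. The following is a minimal illustrative sketch, assuming Python with Pillow and hypothetical metadata key names (`ai_generated`, `origin_resource`, `content_id`), of how a platform might attach such a label to a PNG upload; it is not a format prescribed by the amendments.

```python
"""Illustrative only: one way a platform might embed an AI-generated label and
a traceable identifier in image metadata. The key names and the use of PNG
text chunks are assumptions for demonstration, not requirements of the rules."""
import hashlib

from PIL import Image
from PIL.PngImagePlugin import PngInfo


def label_synthetic_image(src_path: str, dst_path: str, origin_id: str) -> str:
    """Embed an AI-generated label plus a unique identifier tied to the
    originating resource, and return that identifier."""
    img = Image.open(src_path)

    # Derive a unique identifier from the pixel data and the declared origin.
    digest = hashlib.sha256(img.tobytes() + origin_id.encode("utf-8")).hexdigest()

    meta = PngInfo()
    meta.add_text("ai_generated", "true")        # machine-readable AI label
    meta.add_text("origin_resource", origin_id)  # points back to the originating resource
    meta.add_text("content_id", digest)          # persistent unique identifier

    img.save(dst_path, format="PNG", pnginfo=meta)
    return digest
```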


The amendments shorten takedown timelines to three hours for content deemed illegal by courts or government authorities, and to two hours for sensitive violations, including non-consensual deepfake nudity, which is classified as non-consensual intimate imagery.


Platforms must act quickly once they become aware of violations involving synthetically generated information, whether on their own initiative or on receipt of a complaint: disabling access to the content, suspending the offending user accounts, and reporting to the appropriate authorities where required by law.
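
As a rough sketch only, the two takedown windows above can be thought of as a simple deadline lookup. The category names and the scheduling helper below are assumptions used to map the three-hour and two-hour obligations onto code; the rules themselves do not define such an interface.

```python
"""Illustrative compliance triage: maps the hypothetical violation categories
used here to the 3-hour and 2-hour takedown windows described above."""
from datetime import datetime, timedelta, timezone

# Hypothetical mapping of violation categories to removal deadlines.
TAKEDOWN_WINDOWS = {
    "court_or_government_order": timedelta(hours=3),
    "non_consensual_intimate_imagery": timedelta(hours=2),
}


def takedown_deadline(category: str, received_at: datetime) -> datetime:
    """Return the time by which access to the content must be disabled."""
    return received_at + TAKEDOWN_WINDOWS[category]


if __name__ == "__main__":
    now = datetime.now(timezone.utc)
    print(takedown_deadline("non_consensual_intimate_imagery", now))
```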


