New IT Rules: Social Media Platforms Must Remove Illegal Posts Within 3 Hours

13 Feb 2026

India introduces new IT Rules 2026 mandating 2–3 hour takedown timelines for illegal and deepfake content, with strict AI labelling requirements for social media platforms.

The Centre has introduced sweeping amendments to its digital governance framework, significantly tightening obligations on major social media platforms. The updated Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2026, notified by the Ministry of Electronics and Information Technology (MeitY) on February 10, mandate sharply reduced timelines for removal of unlawful and AI-manipulated content.

Under the revised rules, platforms classified as significant intermediaries—including global players such as YouTube, Meta's platforms and X—must remove unlawful content within three hours of receiving an official court order or government-authorised directive. Earlier, companies were given up to 36 hours.

For highly sensitive categories—such as non-consensual intimate imagery, morphed visuals, sexually explicit deepfakes, impersonation, or other harmful synthetic media—the takedown timeline has been further reduced to two hours. The countdown begins once the platform receives “actual knowledge” through authorised channels.

Failure to comply could result in the loss of “safe harbour” protection under Section 79 of the IT Act, exposing companies to direct legal liability.

The amendments also introduce strict requirements for AI-generated or photorealistic synthetic content. Such material must carry visible labels and traceable metadata to identify its origin. Users uploading AI-created content may be required to disclose it, while platforms must deploy automated tools to detect and block unlawful synthetic media.

The rules will come into force on February 20, 2026.

The move comes amid growing concern over deepfakes, misinformation, and digitally altered content spreading rapidly online. With India's vast internet user base, authorities appear focused on ensuring faster response mechanisms.

However, digital rights advocates caution that ultra-short compliance windows may lead to hurried removals and possible overreach. A balanced implementation will be critical to ensure that efforts to curb misuse do not inadvertently stifle legitimate expression.

As enforcement begins, platforms will need to upgrade moderation systems quickly—while users may see clearer labels on AI-generated posts in the coming weeks.