As the world paces forward breathlessly, AI has become something of a necessary evil. The mushrooming of AI apps has given users a myriad of options with little accountability. Photo-realistic images and videos, including deepfakes, have eroded Internet users' ability to tell the real from the fake.
Thankfully, the government has taken cognisance of the problem and brought in new AI rules through amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021. The new norms mandate that platforms label AI-generated content with effect from February 20, 2026. Alongside this, timelines have been specified for taking down unlawful 'synthetic media'.
The amended IT Rules call for the removal of 'unlawful content' within two to three hours, in contrast to the prevailing timeline of 24-36 hours. Content deemed illegal by a court or an 'appropriate government' will have to be taken down within three hours, while sensitive content, such as non-consensual nudity and deepfakes, must be removed within two hours.
What is the buzz about synthetic media?
The IT (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2026, define synthetically generated content as "audio, visual or audio-visual information which is artificially or algorithmically created, generated, modified or altered using a computer resource, in a manner that such information appears to be real, authentic or true and depicts or portrays any individual or event in a manner that is, or is likely to be perceived as indistinguishable from a natural person or a real-world event."
Failure to comply will result in the loss of "safe harbour", the protection that shields an AI developer or platform from legal liability when a third party misuses the technology. Notably, safe harbour also guards developers against the pressure to over-censor, giving them the freedom to innovate without being weighed down by legal risk.

