The IT Ministry has issued revised guidelines for social media intermediaries such as Facebook, Instagram, and YouTube to curb the spread of AI-based deepfakes online. The guidelines direct platforms to clearly label all AI-generated content and to ensure that such synthetic material carries embedded identifiers. Social media companies must take down AI-generated or deepfake content within three hours of it being flagged by the government or ordered removed by a court.
The official notification also prohibits digital platforms from removing or suppressing AI labels or associated metadata once applied. Under the MeitY order, platforms are required to deploy automated tools to detect and prevent the circulation of illegal, sexually exploitative, or deceptive AI-generated content. An intermediary must also inform users periodically, at least once every three months, of the consequences of violating rules on AI misuse, through its rules and regulations, privacy policy, user agreement, or other means.
If an intermediary identifies any violation involving synthetically generated information, it must take prompt and suitable action. Social media platforms must implement reasonable technical measures, including automated tools, to prevent users from creating, generating, modifying, or disseminating any synthetically generated information that violates existing laws. The draft rules also propose that users declare when they post AI-generated or modified content, and they require platforms to use technology to verify such declarations.
Some social media platforms have already introduced features that allow users to label specific content as generated or modified using artificial intelligence.
