In the wake of the recent controversy surrounding Google's AI platform Gemini making unpleasant remarks about Prime Minister Narendra Modi, the Indian government has issued a new advisory for social media companies and other online platforms. The move aims to increase transparency and accountability regarding Artificial Intelligence (AI) models deployed in the country.
The Ministry of Electronics and Information Technology (MeitY) released the advisory on March 1, warning platforms that failing to comply could result in legal action. The advisory emphasizes preventing the spread of unlawful content and outlines specific measures:
Labeling Under-Trial AI: Platforms must clearly label any Artificial Intelligence models, Large Language Models (LLMs), or generative AI tools still under development. This label should inform users about the "possible and inherent fallibility or unreliability" of the outputs generated by these models.
Government Approval for Unreliable AI: Social media companies and other platforms must seek government approval before deploying AI models deemed "under-testing" or unreliable.
User Consent: Platforms must obtain explicit consent from users before exposing them to under-trial or unreliable AI models. This could involve a "consent popup" mechanism explaining the potential for inaccurate or misleading outputs.
The advisory follows strong reactions from the Indian government after Google's AI platform, Gemini, generated controversial responses to queries about Prime Minister Modi. Minister of State for IT Rajeev Chandrasekhar called the incident a violation of Information Technology (IT) laws and emphasized that "apologizing later" is not an excuse.
"The episode of Google Gemini is very embarrassing," stated Chandrasekhar. He further stressed the importance of "safe and trusted" platforms within the Indian internet ecosystem.
This new advisory builds upon previous efforts by MeitY, which in December 2023 issued an advisory focused on tackling deepfakes and misinformation online.
The move has sparked discussions about the balance between innovation and regulation in the rapidly evolving field of AI. While some see it as a necessary step towards responsible AI development, others raise concerns about potential limitations on technological progress.