The Indian government has announced a new requirement for tech companies to obtain government approval before publicly releasing artificial intelligence (AI) tools that are still in development or deemed “unreliable,” Reuters reported on March 4.
The move is part of India’s effort to manage the deployment of AI technologies and to promote the accuracy and reliability of the tools available to its citizens as the country prepares for elections.
Rules for AI
According to a directive from the Ministry of Electronics and Information Technology, all AI-based applications, especially those involving generative AI, must obtain explicit permission from the government before being introduced to the Indian market.
Additionally, these AI tools must carry warnings about their potential to generate incorrect answers to user queries, reinforcing the government’s position that users need clarity about the limits of AI’s capabilities.
The regulations are in line with a global trend of countries establishing guidelines for the responsible use of AI. India’s move to increase oversight of AI and digital platforms fits its broader regulatory strategy of safeguarding users’ interests in a rapidly advancing digital age.
The government’s advisory also highlights concerns about the influence of AI tools on the integrity of the electoral process. With general elections approaching, in which the ruling party is expected to retain its majority, greater attention is being paid to ensuring that AI technologies do not compromise electoral fairness.
Gemini criticism
The move follows recent criticism of Google’s Gemini AI tool, which drew backlash after generating responses seen as unfavorable towards Indian Prime Minister Narendra Modi.
Google responded to the incident by acknowledging the imperfections of its AI tool, especially on sensitive topics such as current events and politics, and said the tool may still be “unreliable.”
Deputy IT Minister Rajeev Chandrasekhar said that reliability issues do not exempt platforms from legal responsibility and stressed the importance of adhering to legal obligations related to safety and trust.
By introducing these regulations, India is taking steps towards creating a controlled environment for the introduction and use of AI technologies.
The requirement for government approval and the emphasis on transparency about possible inaccuracies are seen as measures to balance technological innovation with social and ethical considerations, aiming to protect democratic processes and the public interest in the digital age.