New Delhi, India: Tech companies operating in India must now seek government approval before releasing potentially “unreliable” or unrefined artificial intelligence (AI) tools to the public, according to a recent advisory issued by the country’s IT ministry.
This move comes amidst growing global concerns around responsible AI development and potential misuse of the technology. India, a rapidly evolving tech hub, has been actively tightening regulations on tech giants, particularly social media companies, operating within its borders.
The advisory specifically targets “unreliable” AI tools, including generative AI systems that can produce creative text formats or translate languages. It emphasizes that such tools, “available to users on the Indian internet,” must have explicit permission from the government before release.
This announcement follows a recent incident in which Google’s AI tool, Gemini, generated a controversial response to a user query about Indian Prime Minister Narendra Modi. The response drew criticism, prompting Google to acknowledge Gemini’s limitations and its potential for unreliable outputs, especially on sensitive topics such as current events and politics.
Responding to Google’s statement, India’s deputy IT minister, Rajeev Chandrasekhar, emphasized platform responsibility, stating on social media platform X, “Safety and trust is platforms’ legal obligation. ‘Sorry Unreliable’ does not exempt from law.”
Furthermore, the advisory urges platforms to ensure their AI tools do not “threaten the integrity of the electoral process,” ahead of India’s general elections scheduled for this summer, in which the ruling party is projected to secure a significant win.