In what amounts to a clampdown on AI, the Centre has asked tech companies to seek its explicit approval before publicly launching “unreliable” or “under-tested” generative AI models or tools.
It has also warned companies that their AI products should not generate responses that “threaten the integrity of the electoral process” as the country gears up for the 2024 Lok Sabha elections.
The government’s efforts to regulate artificial intelligence represent a walk-back from its earlier stance of a hands-off approach when it informed Parliament in April 2023 that it was not eyeing any legislation to regulate AI.
The advisory was issued last week by India’s Ministry of Electronics and Information Technology (MeitY), shortly after Google’s Gemini faced a right-wing backlash over its response to the query: ‘Is Modi a fascist?’
It responded that PM Narendra Modi was “accused of implementing policies some experts have characterised as fascist”, citing his government’s “crackdown on dissent and its use of violence against religious minorities”.
Rajeev Chandrasekhar, Minister of State at the Ministry of Electronics and Information Technology, responded by accusing Google’s Gemini of violating India’s laws. “Sorry, ‘unreliable’ does not exempt from the law,” he added. Chandrasekhar claimed Google had apologised for the response, saying it was the result of an “unreliable” algorithm. The company responded by saying it was addressing the problem and working to improve the system.
In the West, major tech companies have often faced accusations of a liberal bias. Those allegations of bias have trickled down to generative AI products, including OpenAI’s ChatGPT and Microsoft Copilot.
In India, meanwhile, the government’s advisory has raised concerns among AI entrepreneurs that their nascent industry could be suffocated by too much regulation. Others worry that with the national election set to be announced soon, the advisory could reflect an attempt by the Modi government to choose which AI applications to allow, and which to bar, effectively giving it control over online spaces where these tools are influential.
The advisory is not legislation that is automatically binding on companies. However, noncompliance can attract prosecution under India’s Information Technology Act. “This nonbinding advisory seems more political posturing than serious policymaking,” said Mishi Choudhary, founder of India’s Software Freedom Law Center. “We will see much more serious engagement post-elections. This gives us a peek into the thinking of the policymakers.”
Several other leaders in the generative AI industry have also criticised the advisory as an example of regulatory overreach. Martin Casado, general partner at the US-based investment firm Andreessen Horowitz, wrote on the social media platform X that the move was a “travesty”, calling it “anti-innovation” and “anti-public”.
Amid that backlash, Chandrasekhar issued a clarification on X, adding that the government would exempt start-ups from seeking prior permission to deploy generative AI tools on “the Indian internet” and that the advisory applies only to “significant platforms”.
For the Indian government, dealing with AI regulations is a difficult balancing act, said analysts.
Millions of Indians are scheduled to cast their vote in the national polls likely to be held in April and May. With the rise of easily available, and often free, generative AI tools, India has already become a playground for manipulated media, a scenario that has cast a shadow over election integrity. India’s major political parties continue to deploy deepfakes in campaigns.
Earlier, in November and December 2023, the Indian government asked Big Tech firms to take down deepfake content within 24 hours of a complaint, label manipulated media, and make proactive efforts to tackle misinformation — though it did not specify any explicit penalties for failing to adhere to the directive.