India’s AI Governance Under Fire: Experts Warn of Regulatory Loopholes

Lack of Accountability in MeitY's AI Governance Report Raises Industry and Civil Society Concerns
The Indian government’s draft AI governance framework has sparked criticism from industry leaders and civil society groups over its lack of strong accountability mechanisms. Released by the Ministry of Electronics and Information Technology (MeitY) on January 6, 2025, the report focuses on voluntary commitments rather than legally binding regulations, raising concerns about regulatory gaps in AI deployment.
Industry and Civil Society Voice Concerns
Industry Perspective: Need for Clearer Roles and Responsibilities
Tech industry stakeholders, including the Business Software Alliance (BSA) – which represents global tech giants like Microsoft, Adobe, and IBM – support a risk-based approach but argue that broad voluntary commitments may create regulatory ambiguity.
- Lack of clarity on responsibilities: The BSA pointed out that while the draft recognizes the different roles of AI developers and deployers, it does not clearly allocate responsibilities between them.
- Uncertain enforcement mechanisms: Without structured guidelines, tech firms may struggle with compliance, leading to inconsistent AI governance practices.
Civil Society Concerns: Overreliance on Self-Regulation
Organizations like the Internet Freedom Foundation (IFF) and the Software Freedom Law Center (SFLC.in) have criticized the report’s dependence on voluntary compliance, warning that self-regulation could weaken public protections.
- Call for legal accountability: IFF argues that corporate-led ethical frameworks cannot substitute for legally binding regulations.
- Potential risks of AI surveillance: SFLC.in highlighted concerns over AI-driven surveillance tools such as facial recognition and predictive policing, emphasizing the need for independent oversight of government AI deployments.
Key Issues in the AI Governance Report
1. Lack of Enforcement Mechanisms
While the report outlines key AI governance principles—transparency, accountability, safety, privacy, and non-discrimination—it does not specify how these will be enforced. Critics warn that without legal backing, these principles may remain aspirational rather than actionable.
2. Overemphasis on Voluntary Compliance
MeitY’s report relies on self-regulation, which civil society groups argue allows companies to prioritize profits over ethical considerations like fairness and transparency.
3. Absence of Independent Oversight
SFLC.in stresses the need for external regulatory bodies to oversee AI deployment, particularly in sensitive areas like surveillance and law enforcement.
What Experts Suggest
- Stronger Regulatory Framework: Civil society advocates propose legally binding AI accountability provisions rather than a self-regulation model.
- Independent AI Oversight Body: Experts call for an independent agency to monitor AI deployments and ensure compliance with ethical standards.
- Clearly Defined Responsibilities: Industry leaders emphasize the importance of specifying distinct responsibilities for AI developers and deployers to prevent regulatory ambiguity.
The debate over India’s AI governance framework highlights a critical gap between voluntary commitments and enforceable regulations. While MeitY’s principle-based approach aims for flexibility, industry and civil society stakeholders insist that stronger oversight is essential to ensure ethical AI deployment. Without clear accountability, India risks creating an AI ecosystem that prioritizes corporate interests over public welfare.