
Shaping AI by Shaping the System: Inside India’s New Governance Guidelines

  • Writer: Nikita Silaech
  • Nov 16, 2025
  • 5 min read

Updated: Dec 24, 2025

India's new AI Governance Guidelines look, at first glance, like another government document full of principles and pillars. Read carefully, though, and a clear choice runs through them: instead of writing a single hard AI law, India is trying to shape how AI is built and used by changing everything around it, from data rules to sector regulators to new safety institutions (MeitY, 2025).


The guidelines were released by the Ministry of Electronics and Information Technology on 5 November 2025 as part of the broader IndiaAI programme. They are organised around seven "Sutras" meant to apply across sectors and model types: trust, people first, innovation over restraint, fairness and equity, accountability, systems that can be understood, and safety, resilience, and sustainability. In other words, they read less like a list of prohibitions and more like a set of expectations about what "safe and trusted AI" should feel like in India, without tying those expectations to any one technology stack (MeitY, 2025).


Around those principles, the document lays out six governance pillars and a phased action plan. The pillars cover things you would expect, such as infrastructure, capacity building, policy and regulation, risk mitigation, accountability, and institution building (MeitY, 2025). Each pillar is broken into short, medium, and long-term actions, which range from fairly basic steps like drawing up national risk taxonomies and awareness programmes, to more involved tools like certification schemes and public trust labels for AI systems. The sense you get is that the government is not pretending to have the full toolbox ready on day one, but wants to sketch the path along which those tools will be built (Vision IAS, 2025).


The distinctive move is what the guidelines refuse to do. They do not introduce an AI Act that creates a new central regulator or a full risk-classification regime in the way the European Union AI Act does (EU, 2024). Instead, they locate most of the legal weight in existing laws such as the Digital Personal Data Protection Act, the Information Technology Act, and sectoral statutes like the Pre-Conception and Pre-Natal Diagnostic Techniques Act. AI is treated as something that sits on top of, and inside, legal frameworks that India already has, rather than as a reason to build an entirely new one from scratch (Vision IAS, 2025).


That immediately shifts attention to the sector regulators. The guidelines assume that bodies like the Reserve Bank of India, SEBI, IRDAI, health regulators, and telecom authorities will adapt their own rules, supervision methods, and enforcement practices to deal with AI in their domains (Vision IAS, 2025). A bank deploying AI in credit scoring or fraud detection, for instance, would not report to an "AI regulator" but to the same financial regulator, which is now expected to understand and monitor AI systems as part of its existing mandate.


India's sector regulators already have supervisory infrastructure, inspection powers, and some experience with algorithmic and automated systems, so there is at least a base to build on (MeitY, 2025). It is arguably easier to ask an existing regulator to extend its rulebook and risk models to AI than to create a new agency that sits above everyone else and tries to learn every sector at once.


The trade-off is that capacity becomes the central question. Regulators that are already stretched across banks, insurers, hospitals, and telecoms now need enough technical expertise and staff time to keep up with rapidly changing AI models and deployments (Vision IAS, 2025). The guidelines acknowledge this indirectly by listing "capacity building" as a core pillar and calling for training programmes, technical partnerships, and expert committees to support regulators and public officials (MeitY, 2025).


That is where the new institutional layer comes in. The guidelines propose setting up an AI Governance Group as an inter-ministerial body, an AI Safety Institute to handle testing and evaluation, and a technical-policy expert committee to advise both government and regulators. The AI Safety Institute is supposed to become a central reference point for evaluation methods, test protocols, and standard-setting so that every sector regulator does not have to invent its own AI safety lab from scratch (MeitY, 2025). This mirrors, in a lighter form, what the UK and US have been doing with their own AI safety labs, but here it is integrated into a national governance blueprint rather than treated as a separate experiment (Carnegie Endowment, 2024).


The way the guidelines handle risk also reflects this preference for flexibility over hard categories. The EU AI Act famously divides systems into prohibited, high-risk, and other categories and attaches detailed, enforceable requirements to each, from documentation and testing to conformity assessments and sanctions (EU, 2024). India instead talks about developing "AI risk classifications" and "harm taxonomies" but does not, in this document, tie them directly to binding obligations or fines (AIGN, 2025). The emphasis is on creating tools that help identify and mitigate risk - codes of practice, risk registers, model assessments, and regulatory sandboxes - rather than on a fixed legal ladder of risk.


This is where the line "innovation over restraint" shows up in practice. The guidelines make it clear that the government wants to encourage experimentation, deployment, and domestic capability building, and is wary of front-loading too many mandatory checks that could slow those things down (Fortune India, 2025). That is not the same as saying "no rules," but it does mean that early governance will lean heavily on voluntary commitments, sector-specific guidance, and soft law tools before anything resembling a strict AI licensing regime appears (AZB Partners, 2025).


At the same time, the text does not ignore the harder edge of the problem. It raises issues like liability for AI-driven harm, misuse in sensitive areas such as prenatal diagnostics or biometric surveillance, and the risk that foundation models could create systemic vulnerabilities if widely deployed without oversight (Vision IAS, 2025). The solutions are left open, however, with references to future consultations, possible amendments to existing laws, and the need to think about insurance, redress, and accountability models that fit AI systems.


Taken together, the guidelines feel less like a finished regulatory regime and more like a structured opening move. They set out how the Indian state wants to talk about AI, who it expects to do the day-to-day governing, and which institutions will be built to support that work, without forcing every detail into place immediately (AIGN, 2025). For developers and companies, that means there is no single AI law to point to yet, but there is a clearer map of where scrutiny and expectations are likely to come from (Vision IAS, 2025). As for regulators, it is a prompt to start building AI-specific standards, tests, and supervisory practices within their existing powers (AZB Partners, 2025).


Whether this light-touch, sector-led experiment can keep up with frontier AI risks without a dedicated AI statute is something that will only become clear once the framework is put into practice. For now, the guidelines make India's bet legible: govern the uses, the data, and the institutions around AI firmly, keep the core framework relatively open, and leave enough room for both the technology and the rulebook to mature together (MeitY, 2025).

