Introduction
Artificial intelligence is transforming every sector of India's economy - from banking and healthcare to agriculture, education, and governance. India's National Strategy for Artificial Intelligence, articulated through NITI Aayog's reports, envisions AI as a catalyst for inclusive growth and positions India as an 'AI garage' for developing solutions that address the needs of developing economies. However, the rapid deployment of AI systems also raises profound questions about privacy, fairness, accountability, and transparency. As AI systems make decisions that affect people's access to credit, employment, healthcare, and government services, the need for governance frameworks that ensure these systems operate responsibly has become urgent. India is charting its own path on AI governance - one that balances innovation promotion with risk mitigation, and that draws from but does not simply replicate Western regulatory approaches. This guide examines the current state of AI governance in India and provides practical guidance for businesses deploying AI systems.
Current Regulatory Framework for AI in India
India does not yet have a dedicated, comprehensive AI regulation comparable to the EU's AI Act. Instead, AI governance in India operates through a combination of existing laws, sector-specific guidelines, and government advisories. The Information Technology Act, 2000 provides the foundational legal framework for digital activities, including AI systems operating in digital environments. The Digital Personal Data Protection Act, 2023 (DPDPA) directly impacts AI systems that process personal data, imposing consent, purpose limitation, and data principal rights obligations on AI-driven data processing. The Consumer Protection Act, 2019 and its e-commerce rules apply to AI-driven recommendations and automated decision-making in consumer contexts. The Competition Act, 2002 addresses AI-related concerns around algorithmic pricing and data-driven market dominance. Sector-specific regulators such as the Reserve Bank of India (RBI), the Securities and Exchange Board of India (SEBI), and the Insurance Regulatory and Development Authority of India (IRDAI) have begun issuing guidance on AI use within their respective domains. This fragmented landscape means that businesses deploying AI must navigate multiple regulatory touchpoints rather than relying on a single comprehensive framework.
MeitY's Approach to AI Governance
The Ministry of Electronics and Information Technology (MeitY) has taken the lead in shaping India's AI governance direction. MeitY's approach has evolved through several phases and reflects the government's intent to promote AI innovation while establishing guardrails for responsible deployment.
- The 2023 advisory on AI-generated content required platforms to label AI-generated content and prevent the generation of content that threatens India's integrity or violates existing laws
- The March 2024 advisory initially proposed requiring government approval for deploying AI models, which was later clarified to apply only to unreliable or untested AI platforms, not to the broader AI ecosystem
- MeitY has emphasized the concept of 'AI for All' and positioned India's approach as one that promotes responsible innovation rather than restrictive regulation
- The IndiaAI Mission, with significant government funding, focuses on building AI compute infrastructure, developing Indian language models, and establishing AI safety and ethics testing capabilities
- MeitY has engaged with industry stakeholders through consultations and working groups to develop guidelines that are practical and innovation-friendly while addressing safety concerns
DPDPA Implications for AI Systems
The DPDPA has significant implications for AI systems that process personal data, even though it is not explicitly an AI regulation. Every AI system that ingests, processes, or generates personal data must comply with the DPDPA's provisions. This includes AI-powered recommendation engines, automated decision-making systems, chatbots that process personal queries, facial recognition systems, and natural language processing models trained on user data. The consent requirements are particularly challenging for AI - organizations must specify the purpose of data processing at the time of consent collection, but AI systems often derive insights or make decisions that were not explicitly anticipated when data was collected. The purpose limitation principle means organizations cannot simply collect data for one purpose and then feed it into AI models for a different purpose without obtaining fresh consent. Data principal rights create additional complexity - if a customer exercises their right to erasure, the organization must consider whether that person's data was used in training AI models and whether those models need to be retrained. The right to explanation, while not explicitly codified in the DPDPA, is increasingly expected by regulators and consumers alike when AI systems make decisions that affect individuals.
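To make the purpose-limitation and erasure obligations above concrete, consider a minimal sketch of a pre-training data filter. This is a hypothetical illustration: the `PersonalRecord` schema, the purpose labels, and the helper name are assumptions for the example, not part of any DPDPA-mandated API.

```python
from dataclasses import dataclass, field

@dataclass
class PersonalRecord:
    # Hypothetical schema: a data principal's record together with the
    # purposes they consented to at collection time.
    principal_id: str
    consented_purposes: set = field(default_factory=set)

def eligible_for_training(records, purpose, erasure_requests):
    """Return only the records usable for a given AI training run.

    A record qualifies when (a) the stated training purpose was part of
    the consent collected (purpose limitation), and (b) the data
    principal has not exercised their right to erasure.
    """
    return [
        r for r in records
        if purpose in r.consented_purposes
        and r.principal_id not in erasure_requests
    ]

records = [
    PersonalRecord("p1", {"credit_scoring", "model_training"}),
    PersonalRecord("p2", {"credit_scoring"}),   # no consent for training
    PersonalRecord("p3", {"model_training"}),   # consented, but later erased
]
usable = eligible_for_training(records, "model_training",
                               erasure_requests={"p3"})
```

In practice, records that fail the purpose check would trigger a fresh-consent workflow rather than being silently dropped, and an erasure request may also raise the retraining question discussed above.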
Sector-Specific AI Guidelines
Several Indian regulators have begun issuing sector-specific guidance on AI deployment that businesses must track alongside the broader governance framework.
- RBI has issued guidelines on the responsible use of AI and ML in financial services, emphasizing the need for explainability in credit decisions, fairness in lending algorithms, and human oversight of AI-driven processes
- SEBI has explored the regulatory implications of AI in algorithmic trading, market surveillance, and investor advisory services, with a focus on systemic risk and market manipulation
- IRDAI has recognized the potential of AI in insurance underwriting, claims processing, and fraud detection while cautioning against discriminatory pricing models based on AI-driven risk assessments
- The Indian Council of Medical Research (ICMR) has published guidelines on the ethical use of AI in biomedical research and healthcare, emphasizing patient consent, data quality, and clinical validation
- The National Association of Software and Service Companies (NASSCOM) has developed responsible AI guidelines for the IT services industry that serve as voluntary industry standards
- The Bureau of Indian Standards (BIS) has begun developing AI standards aligned with international frameworks like ISO/IEC 42001 for AI management systems
Key Principles for Responsible AI in India
While a comprehensive AI law is still under development, several foundational principles have emerged from government publications, regulatory guidance, and industry frameworks that businesses should incorporate into their AI governance strategies. Transparency requires that AI systems be explainable to the extent feasible, particularly when they make decisions affecting individuals. Accountability demands clear ownership of AI system outcomes, with designated individuals or teams responsible for monitoring and addressing issues. Fairness requires proactive identification and mitigation of bias in training data, model design, and deployment contexts - particularly important in India's diverse social context, where algorithmic bias could reinforce existing inequalities. Safety and security require robust testing, monitoring, and incident response capabilities for AI systems, including the ability to shut down or override AI decisions when necessary - organizations designated as Significant Data Fiduciaries face heightened obligations in this regard. Privacy by design mandates that AI systems be built with data protection principles embedded from the design phase, not bolted on after deployment. These principles should inform every stage of the AI lifecycle, from problem formulation and data collection through model development, deployment, and ongoing monitoring.
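One way to operationalize the safety principle above is to wrap automated decisions so that low-confidence or high-stakes outcomes are routed to a human reviewer rather than auto-applied. The sketch below is illustrative only: the confidence threshold, field names, and routing rule are assumptions for the example, not drawn from any regulator's guidance.

```python
from dataclasses import dataclass

@dataclass
class Decision:
    outcome: str          # e.g. "approve" / "reject"
    confidence: float     # model confidence in [0, 1]
    needs_human_review: bool

def with_human_oversight(model_outcome, confidence,
                         high_stakes=False, threshold=0.9):
    """Escalate a decision to human review when the model is unsure
    or the application is high-stakes (credit, employment, etc.)."""
    escalate = high_stakes or confidence < threshold
    return Decision(model_outcome, confidence, needs_human_review=escalate)

# A high-stakes rejection is held for a qualified reviewer,
# regardless of how confident the model is.
d = with_human_oversight("reject", confidence=0.72, high_stakes=True)
```

The design choice here is that escalation is the default whenever either condition trips; an override path like this also gives operations teams a natural place to attach the shutdown capability mentioned above.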
What Businesses Should Do Now
Even in the absence of a comprehensive AI law, businesses deploying AI systems in India should take proactive steps to establish governance frameworks that will position them well for whatever regulations emerge.
- Establish an AI governance committee that includes representatives from legal, compliance, technology, business, and ethics functions to oversee AI strategy and deployment decisions
- Conduct AI impact assessments for high-risk AI applications - particularly those that make decisions about individuals' access to credit, employment, healthcare, or government services
- Implement bias testing and monitoring for AI models, including regular audits of model outputs across demographic groups to identify and address discriminatory patterns
- Ensure DPDPA compliance for all AI systems that process personal data, including purpose-specific consent, data minimization, and mechanisms for data principal rights
- Document AI system design decisions, training data sources, validation approaches, and performance metrics to demonstrate accountability and support regulatory inquiries
- Build human oversight mechanisms that allow qualified personnel to review, override, and intervene in AI-driven decisions, particularly for high-stakes applications
- Monitor the evolving regulatory landscape and participate in industry consultations and standard-setting processes to stay ahead of emerging requirements
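The bias-testing step in the checklist above can start as simply as comparing outcome rates across demographic groups. Below is a minimal demographic-parity sketch; the group labels are placeholders, and the ratio-based flagging rule is borrowed from the US "four-fifths" practice as an illustrative heuristic, not an Indian regulatory requirement.

```python
from collections import defaultdict

def approval_rates(decisions):
    """decisions: iterable of (group, approved: bool) pairs.
    Returns the approval rate per demographic group."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        approved[group] += ok  # True counts as 1
    return {g: approved[g] / totals[g] for g in totals}

def disparate_impact_ratio(rates):
    """Ratio of the lowest to the highest group approval rate.
    Values well below 1.0 flag the model for closer fairness review."""
    return min(rates.values()) / max(rates.values())

decisions = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]
rates = approval_rates(decisions)        # group_a: 0.75, group_b: 0.25
ratio = disparate_impact_ratio(rates)    # 0.25 / 0.75, i.e. about 0.33
```

A check like this belongs in the regular audit cadence described above, run on live model outputs rather than only at deployment time.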
The Road Ahead for AI Regulation in India
India's approach to AI regulation is still taking shape, but several trends are becoming clear. The government is likely to favor a sector-specific, risk-based approach rather than a single comprehensive AI law - at least in the near term. This means that high-risk AI applications in financial services, healthcare, and criminal justice will face stricter oversight than lower-risk applications in entertainment or productivity. India will likely draw on international frameworks like the EU AI Act and the OECD AI Principles while adapting them to Indian realities - including the scale of AI deployment in government services, the diversity of the Indian population, and the need to support AI innovation as a driver of economic growth. Businesses that build robust AI governance frameworks now will be well-positioned to adapt to whatever regulatory requirements emerge, while those that wait for prescriptive regulation risk scrambling to comply under compressed timelines.
How Kraver.ai Addresses AI Governance
Kraver.ai's platform helps businesses navigate the intersection of AI deployment and data protection compliance. Our AI discovery module identifies all AI systems within your organization that process personal data and maps their data flows, consent dependencies, and compliance obligations under the DPDPA. The platform provides AI impact assessment templates aligned with emerging Indian guidelines and international best practices. Our bias monitoring capabilities help organizations test AI model outputs for fairness across relevant demographic dimensions. For organizations using generative AI, Kraver.ai's data leakage prevention module monitors AI interactions to prevent personal data from being inadvertently shared with third-party AI services. By integrating AI governance with broader data protection compliance, Kraver.ai ensures that your AI strategy and your compliance strategy work together rather than in tension. Contact us to learn more about our AI governance solutions.