The Artificial Intelligence (Ethics and Accountability) Bill, 2025: A Bold Question but Not the Final Answer

Introduction

Smt. Bharti Pardhi, Member of Parliament, introduced the Artificial Intelligence (Ethics and Accountability) Bill, 2025 in the Lok Sabha in December 2025, marking a significant moment in India’s long journey toward governing artificial intelligence (AI). For years, India’s AI policy terrain has been defined by advisory guidelines, voluntary frameworks, corporate policies, and sectoral norms rather than binding legal rules. In this context, the Bill’s very existence reflects a shift from ethical aspiration toward accountability mandates for AI systems whose decisions increasingly shape lives and opportunities.

The Bill’s stated aim is to create a statutory framework that ensures AI technologies are fair, transparent, and accountable, especially where they intersect with surveillance, high-stakes decision-making, and human rights. Its framers argue that algorithmic systems can no longer be left to self-regulation or ad hoc compliance with existing laws; instead, there must be clear institutional oversight. However, a closer look at the provisions, and at the responses from legal experts, civil society, and industry, shows that the optimism surrounding the Bill quickly gives way to the reality of a regulatory intent that stops short of substantive safeguards.

What the Bill Proposes: An Overview

At the centre of the Bill is the creation of a central Ethics Committee for Artificial Intelligence, appointed by the Central Government and composed of experts from academia, civil society, technology, law, and human rights fields. This Committee is empowered to formulate ethical guidelines, monitor compliance, review complaints of misuse or bias, and recommend enforcement actions.

The Bill singles out two domains for special scrutiny: AI-enabled surveillance and AI systems used in critical decision-making such as law enforcement, financial credit scoring, and employment decisions. In these high-risk contexts, deployments would require prior approval from the Committee and be subject to stringent ethical review.

Developers are obliged to maintain transparency by documenting the intended purpose and limitations of their systems, and to keep records on compliance with ethical norms. Affected individuals and groups can lodge complaints with the Committee, which can investigate and recommend remedies. For non-compliance, the Bill envisages financial penalties of up to ₹5 crore, possible revocation or suspension of licences, and even criminal liability in serious cases.
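
The Bill does not prescribe any format for these records. As a purely illustrative sketch, assuming the "model card" style of documentation common in industry practice, a developer's transparency record might look something like the following Python structure; every field name here is a hypothetical assumption, not statutory language.

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

# A hypothetical transparency record for an AI system, loosely modelled
# on industry "model cards". The Bill prescribes no schema; all field
# names below are illustrative assumptions, not terms from the statute.
@dataclass
class AISystemRecord:
    system_name: str
    deployer: str
    intended_purpose: str                 # what the system is meant to do
    known_limitations: list[str]          # documented failure modes
    high_risk_domain: bool                # e.g. policing, credit, hiring
    committee_approval_ref: Optional[str] # prior-approval reference, if required
    last_reviewed: date

# Example: a (fictional) resume-screening tool used in employment
# decisions, one of the Bill's enumerated high-risk domains.
record = AISystemRecord(
    system_name="resume-screener-v2",
    deployer="ExampleCo HR",
    intended_purpose="Rank job applications for human review",
    known_limitations=[
        "Lower accuracy on non-English resumes",
        "Training data under-represents career gaps",
    ],
    high_risk_domain=True,
    committee_approval_ref=None,  # would require approval under the Bill
    last_reviewed=date(2025, 12, 1),
)
```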

In a country where technology regulation has largely been responsive rather than anticipatory, this legislative pivot from voluntary codes towards enforceable standards is being viewed by stakeholders as a welcome, albeit tentative, move toward responsible AI governance.

A Closer Look: Strengths, Gaps, and Ambiguities

Viewed holistically, the Bill encapsulates several important regulatory instincts: a recognition that AI can exhibit algorithmic bias, impact individual opportunities, and empower unchecked surveillance. It underscores the need for an institutional mechanism to articulate and enforce ethical standards. It implicitly acknowledges that AI is not a value-neutral technology but one that can produce measurable social harms.

Yet, beneath this promising rhetoric lie glaring gaps and structural ambiguities that risk diluting the Bill’s effectiveness. Beyond its solid foundational instincts, the Bill offers little more: it reads less like a fleshed-out regulatory instrument and more like a skeletal first draft rushed to Parliament.

1. Narrow Scope and Limited Scrutiny

On its face, the Bill’s scope appears selective. By tethering scrutiny primarily to surveillance settings and certain enumerated decision-making domains (law enforcement, financial credit, employment), it leaves enormous swathes of AI usage outside its direct ambit. Healthcare diagnostics, educational assessment, immigration and travel authorisation, generative AI, and content moderation on social platforms are all areas where algorithmic decisions materially affect individuals, yet none appears to be covered by the Bill.

This carve-out is not just technical; it creates a duality where regulated AI coexists with effectively unregulated AI. Developers and deployers could, in theory, classify systems in ways that avoid regulatory scrutiny, even if those systems produce life-changing outcomes. By contrast, more comprehensive approaches taken elsewhere, such as the European Union’s AI Act, which adopts risk-based classifications across a broad spectrum of uses, demonstrate a willingness to treat AI governance as a cross-domain concern rather than a selective intervention.

Discrimination does not stem only from straightforward resume-filtering algorithms; it also arises from subtler, more informal sources, such as AI-generated content on the internet and LLMs trained on unrepresentative sample data, which sow lasting prejudice and bias in the minds of hiring managers. Excluding the private, everyday uses of AI from a conversation about ethics and accountability seems like a diversion from the root cause of harm.

2. Licensing Without a License Framework

One of the more curious features of the Bill is the power to suspend or revoke licences under Clause 8, even though the statute nowhere spells out how licences are to be obtained (or, for that matter, whether they must be obtained at all), on what basis, under what criteria, or with what procedural safeguards. This inversion of regulatory logic, providing for revocation without a clear licensing architecture, creates a legal vacuum. It places enormous discretionary power in the hands of the Ethics Committee without defining the front-end obligations that generate liability.

This loophole raises concerns about arbitrariness and legal uncertainty, potentially deterring innovation as developers hesitate to invest in systems that could be penalised under unclear standards. It also invites litigation and interpretive battles over what constitutes authorised deployment in the first place.

3. Accountability Without Liability

Perhaps the most significant omission is the silence on liability for harms caused by AI systems. While the Bill contemplates penalties and regulatory sanctions, it does not articulate standards for civil liability, i.e., who is legally responsible when a system’s errors lead to discrimination, financial loss, reputational harm, or wrongful exclusion.

There is no clear pathway for affected individuals to claim damages against developers or deployers for misinformation, defamatory outputs, or psychologically damaging algorithmic decisions. Nor is there any clarity on who is accountable when an AI system hallucinates medical advice or generates deepfakes.

This gap is consequential because ethical guidelines without enforceable liability norms can become symbolic instruments rather than substantive protections. In real-world disputes, the absence of liability provisions means the Bill does not empower individuals to seek redress through ordinary civil claims. The mismatch between ethical principles and enforceable legal remedies undermines the very accountability the Bill purports to champion.

4. Data Consent and Copyright: Missing Conversations

The Bill’s discussion of transparency stops at disclosures about AI intentions and limitations. It does not engage with whether data used in training such systems was lawfully obtained, whether consent was secured from the data principals whose personal or sensitive data was consumed, or how copyrighted works used for training are to be compensated. Civil society groups have long highlighted that modern generative AI models are often trained on copyrighted material without licences: an ethical and legal issue that the Bill sidesteps entirely.

This omission is not trivial. It implicitly assumes unfettered access to data and creative content, which contradicts emerging norms around data sovereignty, consent, and remuneration for intellectual property. Without a robust regime governing data provenance and use rights, India risks institutionalising a framework where private entities extract value from data and content without accountability to those whose labour or privacy has been exploited.

5. Transparency Obligations: Aspirational and Technically Hard

The Bill’s requirement that developers disclose “limitations of the AI system” and “reasons for any decisions made by AI that impact individuals” embodies an ideal of explainable AI. Yet in practice, explainability remains a technical challenge, particularly for complex machine-learning and black-box models whose internal reasoning is not easily decomposed into human-readable rationales. Moreover, undiscovered limitations may emerge post-deployment.

In these situations, mandating explanation of unknown limitations (which by definition have not yet been discovered) risks reducing transparency obligations to box-ticking exercises rather than providing accurate and meaningful insight to users.
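
To make the technical difficulty concrete, consider the post-hoc explanation techniques a developer might reach for to satisfy such a mandate. The minimal sketch below (a hypothetical illustration using scikit-learn and synthetic data, not anything the Bill prescribes) applies permutation importance to a black-box classifier: it surfaces which inputs influence outputs in aggregate, but offers no per-decision rationale, and it cannot reveal failure modes absent from the test data, which is precisely the "unknown limitations" problem.

```python
# A minimal sketch of post-hoc explanation for a black-box model using
# permutation importance. Assumes scikit-learn and numpy are installed;
# the dataset is synthetic and all names are illustrative.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=8, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature and measure the drop in accuracy: a crude,
# aggregate notion of "importance", not a human-readable rationale for
# any individual decision, and blind to limitations outside the test set.
result = permutation_importance(
    model, X_test, y_test, n_repeats=10, random_state=0
)
for i in np.argsort(result.importances_mean)[::-1]:
    print(f"feature_{i}: importance {result.importances_mean[i]:.3f}")
```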

6. Political Challenges

It is also important to keep the Bill’s political prospects in perspective. The Bill is a private member’s bill, meaning it was introduced by a Member of Parliament who is not part of the government. In India’s parliamentary history, private member’s bills rarely become law: since Independence, only 14 such bills have been passed by both Houses of Parliament, the most recent in 1970, and hundreds languish without debate or progress. Even if this AI Bill sparks important debate, its journey from proposal to statute is far from assured, and its chances of becoming law remain slim in the absence of government backing or momentum.

Would This Bill Work in Practice?

The Bill delegates a remarkable amount of regulatory power to a central Ethics Committee, designed to be staffed by experts across sectors. In theory, this centralised body could provide consistent oversight, develop standards, and serve as a repository of expertise on AI harms and benefits. But relying on a single national committee to review all high-risk systems in a country the size of India, with its vast and diffused AI ecosystem, raises questions about bureaucratic capacity, bottlenecks in approvals, and systemic delays that could slow innovation rather than shape it responsibly.

Industry commentators have already warned that unclear definitions of “lawful purposes” in surveillance and overlapping obligations with existing statutes such as the Digital Personal Data Protection Act, Information Technology Act, and Reserve Bank of India Act may generate compliance burdens that are costly, complicated, and opaque for developers and adopting firms.

Without integration and coordination with sectoral regulators like the RBI, SEBI, and telecom authorities, the Committee’s guidance could remain advisory rather than operationally enforceable, particularly when statutory conflicts arise with existing privacy, data protection, and cybercrime laws.

What This Would Mean on the Ground

If enacted in its present form, the Bill might curb a narrow band of egregious practices, such as unregulated AI surveillance and blatant algorithmic bias in hiring, but it probably would not transform the broader governance landscape. AI systems influencing everyday decisions in health, education, personalized pricing, or platform governance could continue unexamined under this statute. The penalty architecture, anchored at ₹5 crore for violations, may be significant for startups and SMEs but would barely register for large multinational platforms whose revenues dwarf such fines.

Furthermore, by failing to sustain a conversation on data consent, privacy, content ownership, and civil liability, the Bill leaves many individuals with little recourse when algorithms go haywire.

Towards a More Comprehensive Framework

To evolve into a regime that genuinely balances innovation with accountability, the Bill would benefit from several recalibrations. Leaving crucial foundational provisions to future delegated legislation is no way to tackle a pervasive technology that is embedding itself in every sphere of public and private life.

A broader risk-based framework that includes not only surveillance and traditional high-stakes decisions but also AI applications in content moderation, healthcare, and consumer interfaces could prevent regulatory arbitrage. Clear licensing processes, coupled with procedural safeguards and avenues for appeal, would add predictability.

One way of aligning AI governance with existing constitutional and intellectual property frameworks is to legislate civil liability standards, for example along the lines of the EU’s AI Act, where obligations scale with the risk posed by the AI model. Intellectual property and data consent norms could be integrated by mandating consent frameworks such as opt-out registries, alongside fair-use exceptions paired with remuneration funds for IP owners.

Finally, including meaningful transparency standards that are technically informed, perhaps through collaboration with independent auditors and standards bodies, would help bridge the chasm between ethical ideals and operational realities.

All in all, the Bill would benefit from deeper dialogue, clear guidance, and probably some pilots before it can be gainfully implemented across industries.

Conclusion

The Artificial Intelligence (Ethics and Accountability) Bill, 2025 is noteworthy not because it is perfect, but because it marks a decisive break from India’s previous reticence to legislate AI governance at all. It gestures toward accountability, institutional oversight, and ethical clarity. At the same time, it stops short of addressing the core legal and normative questions that make AI governance truly robust: Who is responsible? Whose data is being used and how? What rights do individuals have when algorithms err?

In its current form, the Bill may regulate the fringes of AI systems that affect public safety and fundamental rights, but it leaves much of the territory of AI unchecked. Its ultimate success will depend less on the lofty language of ethics and more on whether Parliament, civil society, industry, and technologists are prepared to engage deeply with its gaps, and to evolve it into a framework that is as inclusive, enforceable, and rights-respecting as the technology it seeks to govern.