securitylinkindia

AGENTIC AI HAS ARRIVED – THE LIABILITY DOCTRINE HAS NOT

Dr. Pavan Duggal
Advocate, Supreme Court of India
Architect, Global AI Accountability

On the eve of the International AI Accountability Forum 2026, India is positioned to declare the world’s first multi-actor liability framework for autonomous artificial agents. The window will not remain open.

On 14 May 2026, when the International AI Accountability Forum convenes in New Delhi, the international community will be forced to confront the question every legal system on the planet has so far chosen to defer – who is liable when an autonomous artificial agent acts upon the world and causes harm?

The question is no longer hypothetical. It is the operating reality of every major economy in 2026. The agentic turn in artificial intelligence is complete. Earlier generations of AI advised. Contemporary agentic systems act. An autonomous agent today receives a goal, decomposes it into sub-tasks, plans across multiple tools and environments, executes against the real world, observes outcomes, and adapts. Such agents are concluding contracts, executing financial trades, and generating and deploying code. They are carrying out consequential real-world tasks across borders and across legal regimes. And yet, in every major jurisdiction on the planet, the law of liability remains designed for a world in which the consequential decision was reserved to a human actor. That world no longer exists.

The consequence is doctrinal strain on a scale the international legal order has not previously seen. The law of agency, drafted for human agents and human principals, strains when an artificial agent transacts. The law of vicarious liability strains when an agent causes harm through emergent behaviour that no developer expressly programmed and no deployer expressly authorised. The law of mens rea strains when an artificial agent commits an act that, performed by a human, would constitute fraud, harassment, or defamation.
Jurisdictional rules strain when developer, deployer, and victim are domiciled in three different sovereign jurisdictions.

Where the law strains, accountability fails. Where accountability fails, the victim bears the cost of innovation that benefited others. That is not a regulatory inconvenience. It is a moral failure.

A framework adequate to the technology

The Duggal Global Agentic AI Liability Framework, advanced under the doctrinal authority of the New Delhi Accord on AI and Emerging Tech Law of 24 July 2025 and proposed for adoption at the International AI Accountability Forum, is designed to close this gap. It rests upon five operative pillars that, together, supply the first comprehensive multi-actor liability architecture engineered specifically for autonomous AI agents.

First, tiered multi-actor liability. Responsibility for agentic harm must attach across the entire supply chain – to the model developer for foundational design choices, training-data composition, and disclosure of known limitations; to the orchestration-layer operator for the design of planning and tool-use scaffolding; to the deployment platform for the integration of safeguards and post-deployment monitoring; and to the end-user enterprise for the appropriateness of deployment, the design of oversight, and the quality of consent and disclosure to affected persons. Liability is joint and several where causal contributions overlap. Complexity cannot be the alibi of irresponsibility.

Second, autonomous contract formation. Where an agentic AI concludes a contract on behalf of a deployer, the contract binds the deployer to the extent the agent acted within an objectively communicated scope of authority, with appropriate defences preserved for fraudulent inducement and unconscionability. Disclosure that one is contracting with an artificial agent is a substantive requirement, not a courtesy. The counterparty is entitled to know.

Third, vicarious liability for learned behaviour.
The Framework rejects the proposition that a deployer escapes liability merely because the agent caused harm through emergent or unanticipated behaviour. Where a deployer placed the agent in operation, foreseeably benefited from its operation, and possessed the capacity to design oversight, monitoring, and override, the deployer bears responsibility within the agent’s operational footprint. The doctrine is calibrated, not strict. It is not, however, absent.

Fourth, override and kill-switch obligations. An agentic AI deployed in any consequential context must be designed with capacities for real-time human interruption and authorised termination. The absence of such capacities is, in itself, a basis of liability where harm ensues.

Fifth, insurance, compensation, and victim redress. Consequential agentic deployments must be backed by mandatory financial-responsibility arrangements calibrated to risk, and a no-fault compensation pool, funded by levies on such deployments, must supply redress where individual apportionment is inefficient. No victim of agentic AI harm should remain uncompensated by reason of doctrinal complexity alone.

The Indian window

The European Union’s Artificial Intelligence Act, however ambitious within its regional reach, is structured around product-safety logic that maps imperfectly onto agentic systems and is thin on civil liability. The United States operates without comprehensive federal AI legislation. The OECD AI Principles and the UNESCO Recommendation on the Ethics of Artificial Intelligence are instruments of soft law. The Council of Europe Framework Convention establishes principles but operates primarily as an inter-state instrument with limited reach against private deployers. Across the entire international architecture, there exists no harmonised cross-border liability regime for the agentic systems already in deployment. This is the absence that the International AI Accountability Forum 2026 is convened to fill.
It is the absence that India is positioned – by constitutional tradition, by demographic weight, by convening capacity, and by the doctrinal momentum generated through the Global Summit in Artificial Intelligence Emerging Tech Law and Governance 2025 (GSAIET 2025) and the New Delhi Declaration on Responsible Artificial Intelligence endorsed by eighty-six countries at Bharat Mandapam in February 2026 – to address on behalf of the international community.

Agentic AI without legal accountability is civilisational recklessness. The law of artificial intelligence is being written now, in this calendar year, on this continent. Those who participate in its writing will determine its content. Those who do not will inherit it.

Dr Pavan Duggal is Advocate, Supreme Court of India; Founder and Chairman of the Global Artificial Intelligence Law and Governance Institute; Chief Executive of the Artificial Intelligence Law Hub; and Founder and Honorary Chancellor of Cyberlaw University. He is the architect of the Duggal…
