Algorithmic Criminal Liability in Greenwashing: Comparing India, United States, and European Union
- URL: http://arxiv.org/abs/2512.12837v1
- Date: Sun, 14 Dec 2025 20:49:41 GMT
- Title: Algorithmic Criminal Liability in Greenwashing: Comparing India, United States, and European Union
- Authors: Sahibpreet Singh, Manjit Singh
- Abstract summary: This study conducts a comparative legal analysis of criminal liability for AI-mediated greenwashing across India, the US, and the EU. Existing statutes exhibit anthropocentric biases by predicating liability on demonstrable human intent, rendering them ill-equipped to address algorithmic deception.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: AI-powered greenwashing has emerged as an insidious challenge within corporate sustainability governance, exacerbating the opacity of environmental disclosures and subverting regulatory oversight. This study conducts a comparative legal analysis of criminal liability for AI-mediated greenwashing across India, the US, and the EU, exposing doctrinal lacunae in attributing culpability when deceptive claims originate from algorithmic systems. Existing statutes exhibit anthropocentric biases by predicating liability on demonstrable human intent, rendering them ill-equipped to address algorithmic deception. The research identifies a critical gap in jurisprudential adaptation, as prevailing fraud statutes remain antiquated vis-à-vis AI-generated misrepresentation. Utilising a doctrinal legal methodology, this study systematically dissects judicial precedents and statutory instruments, yielding results regarding the potential expansion of corporate criminal liability. Findings underscore the viability of strict liability models, recalibrated governance frameworks for AI accountability, and algorithmic due diligence mandates under ESG regimes. Comparative insights reveal jurisdictional disparities, with the EU Corporate Sustainability Due Diligence Directive (CSDDD) offering a potential transnational model. This study contributes to AI ethics and environmental jurisprudence by advocating for a hybrid liability framework integrating algorithmic risk assessment with legal personhood constructs, ensuring algorithmic opacity does not preclude liability enforcement.
Related papers
- Mirror: A Multi-Agent System for AI-Assisted Ethics Review [104.3684024153469]
Mirror is an agentic framework for AI-assisted ethical review. It integrates ethical reasoning, structured rule interpretation, and multi-agent deliberation within a unified architecture.
arXiv Detail & Related papers (2026-02-09T03:38:55Z) - Comparative Algorithmic Governance of Public Health Instruments across India, EU, US and LMICs [0.0]
The study investigates the juridico-technological architecture of international public health instruments. It focuses on their implementation across India, the European Union, the United States, and low- and middle-income countries (LMICs). The principal objective is to assess how artificial intelligence augments implementation of instruments grounded in the IHR 2005 and the WHO FCTC.
arXiv Detail & Related papers (2026-01-25T15:14:18Z) - The unsuitability of existing regulations to reach sustainable AI [0.0]
We argue that, despite incremental progress, current approaches remain ill-suited to correcting the market failures underpinning AI-related energy use, water consumption, and material demand. The analysis situates these regulatory gaps within a wider ecosystem of academic research, civil society advocacy, standard-setting, and industry initiatives.
arXiv Detail & Related papers (2026-01-08T14:02:51Z) - Managing Ambiguity: A Proof of Concept of Human-AI Symbiotic Sense-making based on Quantum-Inspired Cognitive Mechanism of Rogue Variable Detection [39.146761527401424]
The study contributes to management theory by reframing ambiguity as a first-class construct. It demonstrates the practical value of human-AI symbiosis for organizational resilience in VUCA environments.
arXiv Detail & Related papers (2025-12-17T11:23:18Z) - Cybercrime and Computer Forensics in Epoch of Artificial Intelligence in India [0.0]
This study scrutinizes the AI "dual-use" dilemma, functioning as both a cyber-threat vector and forensic automation mechanism. While Machine Learning offers high accuracy in pattern recognition, it introduces vulnerabilities regarding data poisoning and algorithmic bias. Findings highlight a critical tension between the Act's data minimization principles and forensic data retention requirements.
arXiv Detail & Related papers (2025-12-16T19:39:22Z) - Argumentation-Based Explainability for Legal AI: Comparative and Regulatory Perspectives [0.9668407688201359]
Artificial Intelligence (AI) systems are increasingly deployed in legal contexts. The so-called "black box problem" undermines the legitimacy of automated decision-making. Explainable AI (XAI) research has proposed a variety of methods to enhance transparency.
arXiv Detail & Related papers (2025-10-13T07:19:15Z) - Never Compromise to Vulnerabilities: A Comprehensive Survey on AI Governance [211.5823259429128]
We propose a comprehensive framework integrating technical and societal dimensions, structured around three interconnected pillars: Intrinsic Security, Derivative Security, and Social Ethics. We identify three core challenges: (1) the generalization gap, where defenses fail against evolving threats; (2) inadequate evaluation protocols that overlook real-world risks; and (3) fragmented regulations leading to inconsistent oversight. Our framework offers actionable guidance for researchers, engineers, and policymakers to develop AI systems that are not only robust and secure but also ethically aligned and publicly trustworthy.
arXiv Detail & Related papers (2025-08-12T09:42:56Z) - Unequal Uncertainty: Rethinking Algorithmic Interventions for Mitigating Discrimination from AI [38.122893275090206]
Uncertainty in artificial intelligence predictions poses urgent legal and ethical challenges for AI-assisted decision-making. We examine two algorithmic interventions that act as guardrails for human-AI collaboration: selective abstention and selective friction. We argue that although both interventions pose risks of unlawful discrimination under UK law, selective frictions offer a promising pathway toward fairer and more accountable AI-assisted decision-making.
arXiv Detail & Related papers (2025-08-11T11:43:34Z) - Artificial Intelligence in Government: Why People Feel They Lose Control [44.99833362998488]
The use of Artificial Intelligence in public administration is expanding rapidly. While AI promises greater efficiency and responsiveness, its integration into government functions raises concerns about fairness, transparency, and accountability. This article applies principal-agent theory to AI adoption as a special case of delegation.
arXiv Detail & Related papers (2025-05-02T07:46:41Z) - On Algorithmic Fairness and the EU Regulations [0.2538209532048867]
The paper focuses on algorithmic fairness, specifically non-discrimination, in the European Union (EU). It demonstrates that correcting discriminatory biases in AI systems can be done lawfully under EU regulations. The paper contributes to algorithmic fairness research with a set of legal insights, enlarging and strengthening the growing research domain of compliance in AI engineering.
arXiv Detail & Related papers (2024-11-13T06:23:54Z) - Peer-induced Fairness: A Causal Approach for Algorithmic Fairness Auditing [0.0]
The European Union's Artificial Intelligence Act takes effect on 1 August 2024.
High-risk AI applications must adhere to stringent transparency and fairness standards.
We propose a novel framework, which combines the strengths of counterfactual fairness and peer comparison strategy.
arXiv Detail & Related papers (2024-08-05T15:35:34Z) - Implications of the AI Act for Non-Discrimination Law and Algorithmic Fairness [1.5029560229270191]
The topic of fairness in AI has sparked meaningful discussions in the past years.
From a legal perspective, many open questions remain.
The AI Act might present a tremendous step towards bridging these two approaches.
arXiv Detail & Related papers (2024-03-29T09:54:09Z) - Towards Responsible AI in Banking: Addressing Bias for Fair Decision-Making [69.44075077934914]
"Responsible AI" emphasizes the critical nature of addressing biases within the development of a corporate culture.
This thesis is structured around three fundamental pillars: understanding bias, mitigating bias, and accounting for bias.
In line with open-source principles, we have released Bias On Demand and FairView as accessible Python packages.
arXiv Detail & Related papers (2024-01-13T14:07:09Z) - The risks of risk-based AI regulation: taking liability seriously [46.90451304069951]
The development and regulation of AI seems to have reached a critical stage.
Some experts are calling for a moratorium on the training of AI systems more powerful than GPT-4.
This paper analyses the most advanced legal proposal, the European Union's AI Act.
arXiv Detail & Related papers (2023-11-03T12:51:37Z) - Adversarial Scrutiny of Evidentiary Statistical Software [32.962815960406196]
The U.S. criminal legal system increasingly relies on software output to convict and incarcerate people.
We propose robust adversarial testing as an audit framework to examine the validity of evidentiary statistical software.
arXiv Detail & Related papers (2022-06-19T02:08:42Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.