Comparative Algorithmic Governance of Public Health Instruments across India, EU, US and LMICs
- URL: http://arxiv.org/abs/2601.17877v1
- Date: Sun, 25 Jan 2026 15:14:18 GMT
- Title: Comparative Algorithmic Governance of Public Health Instruments across India, EU, US and LMICs
- Authors: Sahibpreet Singh
- Abstract summary: The study investigates the juridico-technological architecture of international public health instruments. It focuses on their implementation across India, the European Union, the United States and low- and middle-income countries (LMICs). The principal objective is to assess how artificial intelligence augments implementation of instruments grounded in IHR 2005 and the WHO FCTC.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: The study investigates the juridico-technological architecture of international public health instruments, focusing on their implementation across India, the European Union, the United States and low- and middle-income countries (LMICs), particularly in Sub-Saharan Africa. It addresses a research lacuna: the insufficient harmonisation between normative health law and algorithmic public health infrastructures in resource-constrained jurisdictions. The principal objective is to assess how artificial intelligence augments implementation of instruments grounded in IHR 2005 and the WHO FCTC while identifying doctrinal and infrastructural bottlenecks. Using comparative doctrinal analysis and legal-normative mapping, the study triangulates legislative instruments, WHO monitoring frameworks, AI systems including BlueDot, Aarogya Setu and EIOS, and compliance metrics. Preliminary results show that AI has improved early detection, surveillance precision and responsiveness in high-capacity jurisdictions, whereas LMICs face infrastructural deficits, data privacy gaps and fragmented legal scaffolding. The findings highlight the relevance of the EU Artificial Intelligence Act and GDPR as regulatory prototypes for health-oriented algorithmic governance and contrast them with embryonic AI integration and limited internet penetration in many LMICs. The study argues for embedding AI within a rights-compliant, supranationally coordinated regulatory framework to secure equitable health outcomes and stronger compliance. It proposes a model for algorithmic treaty-making inspired by FCTC architecture and calls for WHO-led compliance mechanisms modelled on the WTO Dispute Settlement Body to enhance pandemic preparedness, surveillance equity and transnational governance resilience.
Related papers
- Algorithmic Criminal Liability in Greenwashing: Comparing India, United States, and European Union [0.0]
This study conducts a comparative legal analysis of criminal liability for AI-mediated greenwashing across India, the US, and the EU. Existing statutes exhibit anthropocentric biases by predicating liability on demonstrable human intent, rendering them ill-equipped to address algorithmic deception.
arXiv Detail & Related papers (2025-12-14T20:49:41Z)
- An AI Implementation Science Study to Improve Trustworthy Data in a Large Healthcare System [1.6881002551798014]
This study presents an AI implementation case study within Shriners Children's (SC), a large multisite pediatric system. We introduce a Python-based data quality assessment tool compatible with SC's infrastructure, extending OHDSI's R/Java-based Data Quality Dashboard. We also compare systematic and case-specific AI implementation strategies for Craniofacial Microsomia (CFM) using the FHIR standard.
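To make the data-quality idea concrete, here is a minimal sketch of the kind of column-completeness check a Data Quality Dashboard-style tool runs. The column names, threshold, and pass/fail rule are illustrative assumptions, not the actual specification used by SC or OHDSI.

```python
import pandas as pd

def completeness_report(df: pd.DataFrame, threshold: float = 0.95) -> pd.DataFrame:
    """Flag columns whose share of non-null values falls below a threshold."""
    share = df.notna().mean()  # per-column fraction of non-null entries
    return pd.DataFrame({
        "non_null_share": share,
        "passes": share >= threshold,
    })

# Hypothetical patient records with deliberately incomplete columns.
records = pd.DataFrame({
    "patient_id": [1, 2, 3, 4],
    "diagnosis_code": ["Q87.0", None, "Q87.0", None],       # 50% complete
    "visit_date": ["2024-01-02", "2024-02-10", None, "2024-03-15"],
})
report = completeness_report(records)
print(report)
```

In this sketch, `patient_id` passes while `diagnosis_code` and `visit_date` are flagged; a real tool would add conformance and plausibility checks on top of completeness.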
arXiv Detail & Related papers (2025-12-01T14:21:16Z)
- AI Regulation in Telecommunications: A Cross-Jurisdictional Legal Study [0.6117371161379207]
This paper conducts a comparative legal study of policy instruments across ten countries. It examines how telecom, cybersecurity, data protection, and AI laws approach AI-related risks in infrastructure.
arXiv Detail & Related papers (2025-11-27T08:30:12Z)
- In-House Evaluation Is Not Enough: Towards Robust Third-Party Flaw Disclosure for General-Purpose AI [93.33036653316591]
We call for three interventions to advance system safety. First, we propose using standardized AI flaw reports and rules of engagement for researchers. Second, we propose that GPAI system providers adopt broadly scoped flaw disclosure programs. Third, we advocate for the development of improved infrastructure to coordinate distribution of flaw reports.
arXiv Detail & Related papers (2025-03-21T05:09:46Z)
- Media and responsible AI governance: a game-theoretic and LLM analysis [61.132523071109354]
This paper investigates the interplay between AI developers, regulators, users, and the media in fostering trustworthy AI systems. Using evolutionary game theory and large language models (LLMs), we model the strategic interactions among these actors under different regulatory regimes.
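The evolutionary game-theoretic setup can be illustrated with a toy replicator-dynamics model. This is a generic sketch of the technique, not the paper's actual model: the strategies ("safe" vs. "unsafe" development), payoff numbers, fine, and detection probability are all invented for illustration.

```python
import numpy as np

def replicator_step(x, payoff, dt=0.01):
    """One Euler step of dx/dt = x * (f_safe - f_avg), where x is the
    population share of 'safe' developers and payoff is a 2x2 matrix."""
    pop = np.array([x, 1 - x])
    fitness = payoff @ pop          # expected payoff of each strategy
    f_avg = pop @ fitness           # population-average payoff
    return x + dt * x * (fitness[0] - f_avg)

def simulate(p_detect, fine=4.0, steps=20000, x0=0.1):
    # Rows: [safe, unsafe]; columns: opponent plays [safe, unsafe].
    # Unsafe development yields a higher raw payoff (3 vs 2) but incurs
    # an expected fine when the regulator detects it.
    payoff = np.array([[2.0, 2.0],
                       [3.0 - p_detect * fine, 3.0 - p_detect * fine]])
    x = x0
    for _ in range(steps):
        x = replicator_step(x, payoff)
    return x

# Weak enforcement lets unsafe development dominate (safe share ~0);
# strong enforcement drives the population toward safe development (~1).
print(round(simulate(p_detect=0.1), 3))
print(round(simulate(p_detect=0.5), 3))
```

The point of such models is exactly the comparative statics above: regime parameters (here, detection probability and fine size) determine which behaviour is evolutionarily stable.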
arXiv Detail & Related papers (2025-03-12T21:39:38Z)
- Mapping the Regulatory Learning Space for the EU AI Act [0.8987776881291145]
The EU AI Act represents the world's first transnational AI regulation with concrete enforcement measures. It builds on existing EU mechanisms for regulating the health and safety of products but extends them to protect fundamental rights. We argue that this will lead to multiple uncertainties in the enforcement of the AI Act.
arXiv Detail & Related papers (2025-02-27T12:46:30Z)
- Between Innovation and Oversight: A Cross-Regional Study of AI Risk Management Frameworks in the EU, U.S., UK, and China [0.0]
This paper conducts a comparative analysis of AI risk management strategies across the European Union, United States, United Kingdom (UK), and China. The findings show that the EU implements a structured, risk-based framework that prioritizes transparency and conformity assessments. The U.S. relies on decentralized, sector-specific regulations that promote innovation but may lead to fragmented enforcement.
arXiv Detail & Related papers (2025-02-25T18:52:17Z)
- Using AI Alignment Theory to understand the potential pitfalls of regulatory frameworks [55.2480439325792]
This paper critically examines the European Union's Artificial Intelligence Act (EU AI Act) using insights from Alignment Theory (AT) research, which focuses on the potential pitfalls of technical alignment in Artificial Intelligence. Applying these concepts to the EU AI Act uncovers potential vulnerabilities and areas for improvement in the regulation.
arXiv Detail & Related papers (2024-10-10T17:38:38Z)
- COMPL-AI Framework: A Technical Interpretation and LLM Benchmarking Suite for the EU Artificial Intelligence Act [40.233017376716305]
The EU's Artificial Intelligence Act (AI Act) is a significant step towards responsible AI development, but it lacks a clear technical interpretation, making it difficult to assess models' compliance. This work presents COMPL-AI, a comprehensive framework consisting of the first technical interpretation of the Act.
arXiv Detail & Related papers (2024-10-10T14:23:51Z) - Aligning XAI with EU Regulations for Smart Biomedical Devices: A Methodology for Compliance Analysis [0.7373617024876725]
This study fills a crucial gap in aligning XAI applications in bioelectronics with the stringent provisions of EU regulations. It provides a practical framework for developers and researchers, ensuring their AI innovations adhere to legal and ethical standards.
arXiv Detail & Related papers (2024-08-27T14:59:27Z)
- The risks of risk-based AI regulation: taking liability seriously [46.90451304069951]
The development and regulation of AI seems to have reached a critical stage.
Some experts are calling for a moratorium on the training of AI systems more powerful than GPT-4.
This paper analyses the most advanced legal proposal to date, the European Union's AI Act.
arXiv Detail & Related papers (2023-11-03T12:51:37Z)
- FUTURE-AI: International consensus guideline for trustworthy and deployable artificial intelligence in healthcare [73.78776682247187]
Concerns have been raised about the technical, clinical, ethical and legal risks associated with medical AI.
This work describes the FUTURE-AI guideline as the first international consensus framework for guiding the development and deployment of trustworthy AI tools in healthcare.
arXiv Detail & Related papers (2023-08-11T10:49:05Z)
- Organizational Governance of Emerging Technologies: AI Adoption in Healthcare [43.02293389682218]
The Health AI Partnership aims to better define the requirements for adequate organizational governance of AI systems in healthcare settings.
This is one of the most detailed qualitative analyses to date of the current governance structures and processes involved in AI adoption by health systems in the United States.
We hope these findings can inform future efforts to build capabilities to promote the safe, effective, and responsible adoption of emerging technologies in healthcare.
arXiv Detail & Related papers (2023-04-25T18:30:47Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.