Artificial Intelligence in Environmental Protection: The Importance of Organizational Context from a Field Study in Wisconsin
- URL: http://arxiv.org/abs/2501.04902v1
- Date: Thu, 09 Jan 2025 01:27:36 GMT
- Title: Artificial Intelligence in Environmental Protection: The Importance of Organizational Context from a Field Study in Wisconsin
- Authors: Nicolas Rothbacher, Kit T. Rodolfa, Mihir Bhaskar, Erin Maneri, Christine Tsang, Daniel E. Ho
- Abstract summary: We report results from a unique case study of a satellite imagery-based AI tool to detect dumping of agricultural waste.
The tool was used in field investigations during February-March 2023, when dumping was presumptively illegal.
While AI tools promise to prioritize the allocation of environmental protection resources, they may expose important gaps in existing law.
- Score: 3.529400240074136
- License:
- Abstract: Advances in Artificial Intelligence (AI) have generated widespread enthusiasm for the potential of AI to support our understanding and protection of the environment. As such tools move from basic research to more consequential settings, such as regulatory enforcement, the human context of how AI is utilized, interpreted, and deployed becomes increasingly critical. Yet little work has systematically examined the role of such organizational goals and incentives in deploying AI systems. We report results from a unique case study of a satellite imagery-based AI tool to detect dumping of agricultural waste, with concurrent field trials with the Wisconsin Department of Natural Resources (WDNR) and a non-governmental environmental interest group, in which the tool was utilized for field investigations when dumping was presumptively illegal in February-March 2023. Our results are threefold: First, both organizations confirmed a similar level of ground-truth accuracy for the model's detections. Second, they differed, however, in their overall assessment of its usefulness, as WDNR was interested in clear violations of existing law, while the interest group sought to document environmental risk beyond the scope of existing regulation. Dumping by an unpermitted entity, or dumping just before February 1, for instance, was deemed irrelevant by WDNR. Third, while AI tools promise to prioritize the allocation of environmental protection resources, they may expose important gaps in existing law.
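The abstract's core finding, that the same detections count as relevant for one organization but not the other, can be expressed as two filters over identical model output. Below is a minimal, hypothetical Python sketch of that logic; the Detection record, its field names, the permit flag, and the example data are illustrative assumptions, with only the February 1 cutoff and the permitted/unpermitted distinction taken from the abstract.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Detection:
    """Hypothetical record for one AI dumping detection (names are illustrative)."""
    location: str
    detected_on: date
    permitted_entity: bool  # assumed flag: whether the source is a permitted operation

# Per the abstract, dumping just before February 1 fell outside WDNR's scope.
BAN_START = date(2023, 2, 1)

def relevant_to_wdnr(d: Detection) -> bool:
    # WDNR sought clear violations of existing law: dumping by a permitted
    # entity on or after the February 1 cutoff.
    return d.permitted_entity and d.detected_on >= BAN_START

def relevant_to_interest_group(d: Detection) -> bool:
    # The interest group documented environmental risk beyond existing
    # regulation, so every detection is of interest.
    return True

detections = [
    Detection("Field A", date(2023, 1, 30), permitted_entity=True),   # pre-cutoff: irrelevant to WDNR
    Detection("Field B", date(2023, 2, 15), permitted_entity=False),  # unpermitted: irrelevant to WDNR
    Detection("Field C", date(2023, 2, 20), permitted_entity=True),   # clear violation
]

wdnr_queue = [d for d in detections if relevant_to_wdnr(d)]
ngo_queue = [d for d in detections if relevant_to_interest_group(d)]
print(len(wdnr_queue), len(ngo_queue))  # 1 vs. 3: same detections, different mandates
```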
Related papers
- Developing an Ontology for AI Act Fundamental Rights Impact Assessments [0.0]
The recently published EU Artificial Intelligence Act (AI Act) regulates the use of AI technologies.
One of its novel requirements is the obligation to conduct a Fundamental Rights Impact Assessment (FRIA).
We present our novel representation of the FRIA as an ontology based on semantic web standards.
arXiv Detail & Related papers (2024-12-20T00:37:33Z)
- Towards an Environmental Ethics of Artificial Intelligence [0.0]
This paper explores the ethical implications of the environmental impact of Artificial Intelligence (AI) for designing AI systems.
We draw on the environmental justice literature, which distinguishes three categories of justice, each referring to an element that can be unjust.
Based on these tenets of justice, we outline criteria for developing environmentally just AI systems.
arXiv Detail & Related papers (2024-12-19T17:48:54Z)
- The Fundamental Rights Impact Assessment (FRIA) in the AI Act: Roots, legal obligations and key elements for a model template [55.2480439325792]
This article aims to fill existing gaps in the theoretical and methodological elaboration of the Fundamental Rights Impact Assessment (FRIA).
This article outlines the main building blocks of a model template for the FRIA.
It can serve as a blueprint for other national and international regulatory initiatives to ensure that AI is fully consistent with human rights.
arXiv Detail & Related papers (2024-11-07T11:55:55Z)
- Using AI Alignment Theory to understand the potential pitfalls of regulatory frameworks [55.2480439325792]
This paper critically examines the European Union's Artificial Intelligence Act (EU AI Act).
It uses insights from Alignment Theory (AT) research, which focuses on the potential pitfalls of technical alignment in Artificial Intelligence.
As we apply these concepts to the EU AI Act, we uncover potential vulnerabilities and areas for improvement in the regulation.
arXiv Detail & Related papers (2024-10-10T17:38:38Z)
- Do Responsible AI Artifacts Advance Stakeholder Goals? Four Key Barriers Perceived by Legal and Civil Stakeholders [59.17981603969404]
The responsible AI (RAI) community has introduced numerous processes and artifacts to facilitate transparency and support the governance of AI systems.
We conduct semi-structured interviews with 19 government, legal, and civil society stakeholders who inform policy and advocacy around responsible AI efforts.
We organize these beliefs into four barriers that help explain how RAI artifacts may (inadvertently) reconfigure power relations across civil society, government, and industry.
arXiv Detail & Related papers (2024-08-22T00:14:37Z)
- Responsible AI for Earth Observation [10.380878519901998]
We systematically define the intersection of AI and EO, with a central focus on responsible AI practices.
We identify several critical components guiding this exploration from both academia and industry perspectives.
The paper explores potential opportunities and emerging trends, providing valuable insights for future research endeavors.
arXiv Detail & Related papers (2024-05-31T14:47:27Z)
- A Safe Harbor for AI Evaluation and Red Teaming [124.89885800509505]
Some researchers fear that conducting such research or releasing their findings will result in account suspensions or legal reprisal.
We propose that major AI developers commit to providing a legal and technical safe harbor.
We believe these commitments are a necessary step towards more inclusive and unimpeded community efforts to tackle the risks of generative AI.
arXiv Detail & Related papers (2024-03-07T20:55:08Z)
- Towards Responsible AI in Banking: Addressing Bias for Fair Decision-Making [69.44075077934914]
"Responsible AI" emphasizes the critical nature of addressing biases within the development of a corporate culture.
This thesis is structured around three fundamental pillars: understanding bias, mitigating bias, and accounting for bias.
In line with open-source principles, we have released Bias On Demand and FairView as accessible Python packages.
arXiv Detail & Related papers (2024-01-13T14:07:09Z)
- Missing Value Chain in Generative AI Governance: China as an example [0.0]
China's Provisional Administrative Measures of Generative Artificial Intelligence Services came into effect in August 2023.
The Measure presents unclear distinctions regarding the different roles in the value chain of Generative AI.
Lack of distinction and clear legal status between different players in the AI value chain can have profound consequences.
arXiv Detail & Related papers (2024-01-05T13:28:25Z)
- Sustainable AI Regulation [3.0821115746307663]
The ICT sector contributes up to 3.9 percent of global greenhouse gas emissions.
The carbon footprint and water consumption of AI, especially of large-scale generative models like GPT-4, raise significant sustainability concerns.
The paper suggests a multi-faceted approach to achieve sustainable AI regulation.
arXiv Detail & Related papers (2023-06-01T02:20:48Z)
- Fairness in Agreement With European Values: An Interdisciplinary Perspective on AI Regulation [61.77881142275982]
This interdisciplinary position paper considers various concerns surrounding fairness and discrimination in AI, and discusses how AI regulations address them.
We first look at AI and fairness through the lenses of law, (AI) industry, sociotechnology, and (moral) philosophy, and present various perspectives.
We identify and propose the roles AI regulation should play in making the AI Act a success with respect to AI fairness concerns.
arXiv Detail & Related papers (2022-06-08T12:32:08Z)