FAIR for AI: An interdisciplinary and international community building perspective
- URL: http://arxiv.org/abs/2210.08973v2
- Date: Tue, 1 Aug 2023 15:40:07 GMT
- Title: FAIR for AI: An interdisciplinary and international community building perspective
- Authors: E.A. Huerta, Ben Blaiszik, L. Catherine Brinson, Kristofer E.
Bouchard, Daniel Diaz, Caterina Doglioni, Javier M. Duarte, Murali Emani, Ian
Foster, Geoffrey Fox, Philip Harris, Lukas Heinrich, Shantenu Jha, Daniel S.
Katz, Volodymyr Kindratenko, Christine R. Kirkpatrick, Kati Lassila-Perini,
Ravi K. Madduri, Mark S. Neubauer, Fotis E. Psomopoulos, Avik Roy, Oliver
Rübel, Zhizhen Zhao and Ruike Zhu
- Abstract summary: FAIR principles were proposed in 2016 as prerequisites for proper data management and stewardship.
The FAIR principles have been re-interpreted or extended to include the software, tools, algorithms, and workflows that produce data.
This report builds on the FAIR for AI Workshop held at Argonne National Laboratory on June 7, 2022.
- Score: 19.2239109259925
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A foundational set of findable, accessible, interoperable, and reusable
(FAIR) principles were proposed in 2016 as prerequisites for proper data
management and stewardship, with the goal of enabling the reusability of
scholarly data. The principles were also meant to apply to other digital
assets, at a high level, and over time, the FAIR guiding principles have been
re-interpreted or extended to include the software, tools, algorithms, and
workflows that produce data. FAIR principles are now being adapted in the
context of AI models and datasets. Here, we present the perspectives, vision,
and experiences of researchers from different countries, disciplines, and
backgrounds who are leading the definition and adoption of FAIR principles in
their communities of practice, and discuss outcomes that may result from
pursuing and incentivizing FAIR AI research. The material for this report
builds on the FAIR for AI Workshop held at Argonne National Laboratory on June
7, 2022.
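
To make the abstract's notion of FAIR metadata for AI models a bit more concrete, the following is a minimal sketch of a machine-readable record for a hypothetical model release, covering findability (a persistent identifier), accessibility (a resolvable download location), interoperability (a standard serialization format and documented inputs), and reusability (a license and provenance). All identifiers, URLs, and field names are illustrative assumptions and are not a schema prescribed by the paper.

# Minimal sketch of FAIR-style metadata for a hypothetical AI model release.
# Identifiers, URLs, and field names below are illustrative placeholders,
# not conventions defined in the FAIR for AI report.
import json

model_record = {
    # Findable: a persistent identifier plus descriptive metadata.
    "identifier": "doi:10.XXXX/example-model",  # hypothetical DOI
    "name": "example-classifier",
    "keywords": ["FAIR", "AI model", "classification"],
    # Accessible: the artifact is retrievable over a standard protocol.
    "distribution": {
        "contentUrl": "https://example.org/models/example-classifier-v1.onnx",
        "protocol": "https",
    },
    # Interoperable: a community serialization format and documented inputs.
    "format": "ONNX",
    "input_schema": {"dtype": "float32", "shape": [1, 3, 224, 224]},
    # Reusable: license plus provenance linking back to data and code.
    "license": "CC-BY-4.0",
    "provenance": {
        "training_dataset": "doi:10.XXXX/example-dataset",  # hypothetical
        "training_code": "https://example.org/repo/train.py",
        "created": "2022-06-07",
    },
}

if __name__ == "__main__":
    # Emit the record as JSON so a registry or catalog could index it.
    print(json.dumps(model_record, indent=2))

A real deployment would follow an agreed community schema rather than the ad hoc field names used here; the point is only that each FAIR principle maps to concrete, checkable metadata.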
Related papers
- A Survey on Large Language Models for Critical Societal Domains: Finance, Healthcare, and Law [65.87885628115946]
Large language models (LLMs) are revolutionizing the landscapes of finance, healthcare, and law.
We highlight the instrumental role of LLMs in enhancing diagnostic and treatment methodologies in healthcare, innovating financial analytics, and refining legal interpretation and compliance strategies.
We critically examine the ethics of LLM applications in these fields, pointing out the existing ethical concerns and the need for transparent, fair, and robust AI systems.
arXiv Detail & Related papers (2024-05-02T22:43:02Z)
- FAIR Enough: How Can We Develop and Assess a FAIR-Compliant Dataset for Large Language Models' Training? [3.0406004578714008]
The rapid evolution of Large Language Models highlights the necessity for ethical considerations and data integrity in AI development.
While FAIR principles are crucial for ethical data stewardship, their specific application in the context of LLM training data remains an under-explored area.
We propose a novel framework designed to integrate FAIR principles into the LLM development lifecycle.
arXiv Detail & Related papers (2024-01-19T21:21:02Z)
- Towards Responsible AI in Banking: Addressing Bias for Fair Decision-Making [69.44075077934914]
"Responsible AI" emphasizes the critical nature of addressing biases within the development of a corporate culture.
This thesis is structured around three fundamental pillars: understanding bias, mitigating bias, and accounting for bias.
In line with open-source principles, we have released Bias On Demand and FairView as accessible Python packages.
arXiv Detail & Related papers (2024-01-13T14:07:09Z)
- Report of the 1st Workshop on Generative AI and Law [78.62063815165968]
This report presents the takeaways of the inaugural Workshop on Generative AI and Law (GenLaw).
A cross-disciplinary group of practitioners and scholars from computer science and law convened to discuss the technical, doctrinal, and policy challenges presented by law for Generative AI.
arXiv Detail & Related papers (2023-11-11T04:13:37Z)
- Information Retrieval Meets Large Language Models: A Strategic Report from Chinese IR Community [180.28262433004113]
Large Language Models (LLMs) have demonstrated exceptional capabilities in text understanding, generation, and knowledge inference.
LLMs and humans form a new technical paradigm that is more powerful for information seeking.
To thoroughly discuss the transformative impact of LLMs on IR research, the Chinese IR community conducted a strategic workshop in April 2023.
arXiv Detail & Related papers (2023-07-19T05:23:43Z)
- FAIR Begins at home: Implementing FAIR via the Community Data Driven Insights [1.5766133856827325]
We report on the experiences of the Community of Data Driven Insights (CDDI).
These experiences show the complex dimensions of FAIR implementation to researchers across disciplines in a single university.
arXiv Detail & Related papers (2023-03-13T19:12:16Z)
- Worldwide AI Ethics: a review of 200 guidelines and recommendations for AI governance [0.0]
This paper conducts a meta-analysis of 200 governance policies and ethical guidelines for AI usage published by public bodies, academic institutions, private companies, and civil society organizations worldwide.
We identify at least 17 resonating principles prevalent in the policies and guidelines of our dataset, released as an open-source database and tool.
We discuss the limitations of performing a global-scale analysis alongside a critical examination of our findings, and identify areas of consensus that should be incorporated into future regulatory efforts.
arXiv Detail & Related papers (2022-06-23T18:03:04Z)
- The Different Faces of AI Ethics Across the World: A Principle-Implementation Gap Analysis [12.031113181911627]
Artificial Intelligence (AI) is transforming our daily life with several applications in healthcare, space exploration, banking and finance.
This rapid progress in AI has brought increasing attention to the potential impacts of AI technologies on society.
Several ethical principles have been released by governments, national and international organisations.
These principles outline high-level precepts to guide the ethical development, deployment, and governance of AI.
arXiv Detail & Related papers (2022-05-12T22:41:08Z)
- AI Ethics Principles in Practice: Perspectives of Designers and Developers [19.16435145144916]
We examine the practices and experiences of researchers and engineers from Australia's national scientific research agency (CSIRO).
Interviews were used to examine how the practices of the participants relate to and align with a set of high-level AI ethics principles proposed by the Australian Government.
arXiv Detail & Related papers (2021-12-14T15:28:45Z)
- Artificial Intelligence for IT Operations (AIOPS) Workshop White Paper [50.25428141435537]
Artificial Intelligence for IT Operations (AIOps) is an emerging interdisciplinary field arising at the intersection of machine learning, big data, streaming analytics, and the management of IT operations.
The main aim of the AIOps workshop is to bring together researchers from both academia and industry to present their experiences, results, and work in progress in this field.
arXiv Detail & Related papers (2021-01-15T10:43:10Z)
- On the Morality of Artificial Intelligence [154.69452301122175]
We propose conceptual and practical principles and guidelines for Machine Learning research and deployment.
We insist on concrete actions that can be taken by practitioners to pursue a more ethical and moral practice of ML aimed at using AI for social good.
arXiv Detail & Related papers (2019-12-26T23:06:54Z)
This list is automatically generated from the titles and abstracts of the papers on this site.