Data and AI governance: Promoting equity, ethics, and fairness in large language models
- URL: http://arxiv.org/abs/2508.03970v1
- Date: Tue, 05 Aug 2025 23:15:31 GMT
- Title: Data and AI governance: Promoting equity, ethics, and fairness in large language models
- Authors: Alok Abhishek, Lisa Erickson, Tushar Bandopadhyay
- Abstract summary: We cover approaches to govern, assess, and quantify bias across the complete life cycle of machine learning models. The data and AI governance approach is suitable for practical, real-world applications.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this paper, we cover approaches to systematically govern, assess, and quantify bias across the complete life cycle of machine learning models, from initial development and validation to ongoing production monitoring and guardrail implementation. Building upon our foundational work on the Bias Evaluation and Assessment Test Suite (BEATS) for Large Language Models, we share prevalent bias- and fairness-related gaps in Large Language Models (LLMs) and discuss a data and AI governance framework to address Bias, Ethics, Fairness, and Factuality within LLMs. The data and AI governance approach discussed in this paper is suitable for practical, real-world applications, enabling rigorous benchmarking of LLMs prior to production deployment, facilitating continuous real-time evaluation, and proactively governing LLM-generated responses. By implementing data and AI governance across the life cycle of AI development, organizations can significantly enhance the safety and responsibility of their GenAI systems, effectively mitigating risks of discrimination and protecting against potential reputational or brand-related harm. Ultimately, through this article, we aim to contribute to the advancement of the creation and deployment of socially responsible and ethically aligned applications powered by generative artificial intelligence.
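The abstract describes a governance pattern of benchmarking a model before deployment and then governing its responses at runtime. The sketch below illustrates only the runtime-guardrail half of that pattern in a minimal form; the metric names, thresholds, and keyword-based scorer are invented for illustration and are not the BEATS metrics or any method from the paper (a real system would call trained bias and factuality classifiers).

```python
# Hypothetical sketch of a runtime guardrail over LLM-generated responses.
# All metric names and thresholds are illustrative assumptions, not BEATS.

from dataclasses import dataclass


@dataclass
class GuardrailResult:
    passed: bool
    flagged_metrics: list


# Illustrative per-metric thresholds (assumption, not from the paper).
THRESHOLDS = {"demographic_bias": 0.2, "factuality_risk": 0.3}


def score_response(text: str) -> dict:
    """Stand-in scorer: a real system would invoke bias/factuality models."""
    scores = {"demographic_bias": 0.0, "factuality_risk": 0.0}
    # Crude overgeneralization cue, purely for demonstration.
    if "always" in text or "never" in text:
        scores["demographic_bias"] = 0.5
    return scores


def guardrail(text: str) -> GuardrailResult:
    """Flag a response if any metric exceeds its governance threshold."""
    scores = score_response(text)
    flagged = [m for m, v in scores.items() if v > THRESHOLDS[m]]
    return GuardrailResult(passed=not flagged, flagged_metrics=flagged)
```

The same scoring function could be run offline over a benchmark dataset before deployment and online over each generated response, which is the life-cycle symmetry the abstract emphasizes.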
Related papers
- Addressing Bias in LLMs: Strategies and Application to Fair AI-based Recruitment [49.81946749379338]
This work seeks to analyze the capacity of Transformers-based systems to learn demographic biases present in the data. We propose a privacy-enhancing framework to reduce gender information from the learning pipeline as a way to mitigate biased behaviors in the final tools.
arXiv Detail & Related papers (2025-06-13T15:29:43Z) - The AI Imperative: Scaling High-Quality Peer Review in Machine Learning [49.87236114682497]
We argue that AI-assisted peer review must become an urgent research and infrastructure priority. We propose specific roles for AI in enhancing factual verification, guiding reviewer performance, assisting authors in quality improvement, and supporting ACs in decision-making.
arXiv Detail & Related papers (2025-06-09T18:37:14Z) - LLM Ethics Benchmark: A Three-Dimensional Assessment System for Evaluating Moral Reasoning in Large Language Models [8.018569128518187]
This study establishes a novel framework for systematically evaluating the moral reasoning capabilities of large language models (LLMs). Our framework addresses this challenge by quantifying alignment with human ethical standards through three dimensions. This approach enables precise identification of ethical strengths and weaknesses in LLMs, facilitating targeted improvements and stronger alignment with societal values.
arXiv Detail & Related papers (2025-05-01T20:36:19Z) - BEATS: Bias Evaluation and Assessment Test Suite for Large Language Models [0.0]
We introduce BEATS, a novel framework for evaluating Bias, Ethics, Fairness, and Factuality in Large Language Models (LLMs). We present a bias benchmark for LLMs that measures performance across 29 distinct metrics. These metrics span a broad range of characteristics, including demographic, cognitive, and social biases, as well as measures of ethical reasoning, group fairness, and factuality-related misinformation risk.
arXiv Detail & Related papers (2025-03-31T16:56:52Z) - An Overview of Large Language Models for Statisticians [109.38601458831545]
Large Language Models (LLMs) have emerged as transformative tools in artificial intelligence (AI). This paper explores potential areas where statisticians can make important contributions to the development of LLMs. We focus on issues such as uncertainty quantification, interpretability, fairness, privacy, watermarking, and model adaptation.
arXiv Detail & Related papers (2025-02-25T03:40:36Z) - Powering LLM Regulation through Data: Bridging the Gap from Compute Thresholds to Customer Experiences [0.0]
This paper argues that current regulatory approaches, which focus on compute-level thresholds and generalized model evaluations, are insufficient to ensure the safety and effectiveness of specific LLM-based user experiences. We propose a shift towards a certification process centered on actual user-facing experiences and the curation of high-quality datasets for evaluation.
arXiv Detail & Related papers (2025-01-12T16:20:40Z) - The Synergy of LLMs & RL Unlocks Offline Learning of Generalizable Language-Conditioned Policies with Low-fidelity Data [50.544186914115045]
TEDUO is a novel training pipeline for offline language-conditioned policy learning in symbolic environments. Our approach harnesses large language models (LLMs) in a dual capacity: first, as automation tools augmenting offline datasets with richer annotations, and second, as generalizable instruction-following agents.
arXiv Detail & Related papers (2024-12-09T18:43:56Z) - Persuasion with Large Language Models: a Survey [49.86930318312291]
Large Language Models (LLMs) have created new disruptive possibilities for persuasive communication.
In areas such as politics, marketing, public health, e-commerce, and charitable giving, such LLM systems have already achieved human-level or even super-human persuasiveness.
Our survey suggests that the current and future potential of LLM-based persuasion poses profound ethical and societal risks.
arXiv Detail & Related papers (2024-11-11T10:05:52Z) - The Impossibility of Fair LLMs [17.812295963158714]
We analyze a variety of technical fairness frameworks and find inherent challenges in each that make the development of a fair language model intractable. We show that each framework either does not extend to the general-purpose AI context or is infeasible in practice. These inherent challenges would persist for general-purpose AI, including LLMs, even if empirical challenges, such as limited participatory input and limited measurement methods, were overcome.
arXiv Detail & Related papers (2024-05-28T04:36:15Z) - Unveiling the Misuse Potential of Base Large Language Models via In-Context Learning [61.2224355547598]
Open-sourcing of large language models (LLMs) accelerates application development, innovation, and scientific progress.
Our investigation exposes a critical oversight in this belief.
By deploying carefully designed demonstrations, our research demonstrates that base LLMs could effectively interpret and execute malicious instructions.
arXiv Detail & Related papers (2024-04-16T13:22:54Z) - Inadequacies of Large Language Model Benchmarks in the Era of Generative Artificial Intelligence [5.147767778946168]
We critically assess 23 state-of-the-art Large Language Models (LLMs) benchmarks.
Our research uncovered significant limitations, including biases, difficulties in measuring genuine reasoning, adaptability, implementation inconsistencies, prompt engineering complexity, diversity, and the overlooking of cultural and ideological norms.
arXiv Detail & Related papers (2024-02-15T11:08:10Z) - FAIR Enough: How Can We Develop and Assess a FAIR-Compliant Dataset for Large Language Models' Training? [3.0406004578714008]
The rapid evolution of Large Language Models highlights the necessity for ethical considerations and data integrity in AI development.
While FAIR principles are crucial for ethical data stewardship, their specific application in the context of LLM training data remains an under-explored area.
We propose a novel framework designed to integrate FAIR principles into the LLM development lifecycle.
arXiv Detail & Related papers (2024-01-19T21:21:02Z) - Towards a Responsible AI Metrics Catalogue: A Collection of Metrics for AI Accountability [28.67753149592534]
This study bridges the accountability gap by introducing our effort towards a comprehensive metrics catalogue.
Our catalogue delineates process metrics that underpin procedural integrity, resource metrics that provide necessary tools and frameworks, and product metrics that reflect the outputs of AI systems.
arXiv Detail & Related papers (2023-11-22T04:43:16Z) - Ethical Considerations and Policy Implications for Large Language Models: Guiding Responsible Development and Deployment [48.72819550642584]
This paper examines the ethical considerations and implications of large language models (LLMs) in generating content.
It highlights the potential for both positive and negative uses of generative AI programs and explores the challenges in assigning responsibility for their outputs.
arXiv Detail & Related papers (2023-08-01T07:21:25Z)
This list is automatically generated from the titles and abstracts of the papers on this site. The site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.