FairSense-AI: Responsible AI Meets Sustainability
- URL: http://arxiv.org/abs/2503.02865v2
- Date: Wed, 05 Mar 2025 16:24:43 GMT
- Title: FairSense-AI: Responsible AI Meets Sustainability
- Authors: Shaina Raza, Mukund Sayeeganesh Chettiar, Matin Yousefabadi, Tahniat Khan, Marcelo Lotif,
- Abstract summary: We introduce FairSense-AI: a framework designed to detect and mitigate bias in both text and images. By leveraging Large Language Models (LLMs) and Vision-Language Models (VLMs), FairSense-AI uncovers subtle forms of prejudice or stereotyping. FairSense-AI integrates an AI risk assessment component that aligns with frameworks like the MIT AI Risk Repository and NIST AI Risk Management Framework.
- Score: 1.980639720136382
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this paper, we introduce FairSense-AI: a multimodal framework designed to detect and mitigate bias in both text and images. By leveraging Large Language Models (LLMs) and Vision-Language Models (VLMs), FairSense-AI uncovers subtle forms of prejudice or stereotyping that can appear in content, providing users with bias scores, explanatory highlights, and automated recommendations for fairness enhancements. In addition, FairSense-AI integrates an AI risk assessment component that aligns with frameworks like the MIT AI Risk Repository and NIST AI Risk Management Framework, enabling structured identification of ethical and safety concerns. The platform is optimized for energy efficiency via techniques such as model pruning and mixed-precision computation, thereby reducing its environmental footprint. Through a series of case studies and applications, we demonstrate how FairSense-AI promotes responsible AI use by addressing both the social dimension of fairness and the pressing need for sustainability in large-scale AI deployments. https://vectorinstitute.github.io/FairSense-AI, https://pypi.org/project/fair-sense-ai/ (Sustainability, Responsible AI, Large Language Models, Vision Language Models, Ethical AI, Green AI)
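The abstract names model pruning and mixed-precision computation as the levers for energy efficiency. The sketch below is illustrative only and is not the fair-sense-ai package API: the toy scoring head, layer sizes, and pruning ratio are assumptions, and it simply shows what those two techniques look like in plain PyTorch.

```python
# Illustrative sketch only (assumed toy model, not the FairSense-AI code):
# model pruning + mixed-precision inference in PyTorch.
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

# Hypothetical stand-in for a small bias-scoring head over text embeddings.
model = nn.Sequential(nn.Linear(768, 256), nn.ReLU(), nn.Linear(256, 1))

# Model pruning: zero the 30% smallest-magnitude weights of each Linear layer.
for module in model.modules():
    if isinstance(module, nn.Linear):
        prune.l1_unstructured(module, name="weight", amount=0.3)
        prune.remove(module, "weight")  # bake the sparsity into the weights

# Mixed-precision computation: run inference in bfloat16.
embeddings = torch.randn(4, 768)  # a batch of 4 dummy text embeddings
with torch.no_grad(), torch.autocast(device_type="cpu", dtype=torch.bfloat16):
    bias_scores = torch.sigmoid(model(embeddings))  # scores in [0, 1]

print(bias_scores.squeeze(-1).tolist())
```

Both steps cut compute and memory at inference time, which is the environmental-footprint lever the abstract refers to; the actual FairSense-AI pipeline applies analogous optimizations to its LLM and VLM components.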
Related papers
- A Multi-Layered Research Framework for Human-Centered AI: Defining the Path to Explainability and Trust [2.4578723416255754]
Human-Centered AI (HCAI) emphasizes alignment with human values, while Explainable AI (XAI) enhances transparency by making AI decisions more understandable.
This paper presents a novel three-layered framework that bridges HCAI and XAI to establish a structured explainability paradigm.
Our findings advance Human-Centered Explainable AI (HCXAI), fostering AI systems that are transparent, adaptable, and ethically aligned.
arXiv Detail & Related papers (2025-04-14T01:29:30Z)
- The Right to AI [3.2132738637761027]
This paper proposes a Right to AI, which asserts that individuals and communities should meaningfully participate in the development and governance of the AI systems that shape their lives.
We critically evaluate how generative agents, large-scale data extraction, and diverse cultural values bring new complexities to AI oversight.
arXiv Detail & Related papers (2025-01-29T04:32:41Z)
- Engineering Trustworthy AI: A Developer Guide for Empirical Risk Minimization [53.80919781981027]
Key requirements for trustworthy AI can be translated into design choices for the components of empirical risk minimization.
We hope to provide actionable guidance for building AI systems that meet emerging standards for trustworthiness of AI.
arXiv Detail & Related papers (2024-10-25T07:53:32Z)
- Converging Paradigms: The Synergy of Symbolic and Connectionist AI in LLM-Empowered Autonomous Agents [55.63497537202751]
This article explores the convergence of connectionist and symbolic artificial intelligence (AI).
Traditionally, connectionist AI focuses on neural networks, while symbolic AI emphasizes symbolic representation and logic.
Recent advancements in large language models (LLMs) highlight the potential of connectionist architectures in handling human language as a form of symbols.
arXiv Detail & Related papers (2024-07-11T14:00:53Z)
- Particip-AI: A Democratic Surveying Framework for Anticipating Future AI Use Cases, Harms and Benefits [54.648819983899614]
General-purpose AI seems to have lowered the barriers for the public to use AI and harness its power.
We introduce PARTICIP-AI, a framework for laypeople to speculate and assess AI use cases and their impacts.
arXiv Detail & Related papers (2024-03-21T19:12:37Z)
- The Journey to Trustworthy AI: Pursuit of Pragmatic Frameworks [0.0]
This paper reviews Trustworthy Artificial Intelligence (TAI) and its various definitions.
We argue against using terms such as Responsible or Ethical AI as substitutes for TAI.
Instead, we advocate for approaches centered on addressing key attributes and properties such as fairness, bias, risk, security, explainability, and reliability.
arXiv Detail & Related papers (2024-03-19T08:27:04Z)
- POLARIS: A framework to guide the development of Trustworthy AI systems [3.02243271391691]
There is a significant gap between high-level AI ethics principles and low-level concrete practices for AI professionals.
We develop a novel holistic framework for Trustworthy AI - designed to bridge the gap between theory and practice.
Our goal is to empower AI professionals to confidently navigate the ethical dimensions of Trustworthy AI.
arXiv Detail & Related papers (2024-02-08T01:05:16Z)
- On the meaning of uncertainty for ethical AI: philosophy and practice [10.591284030838146]
We argue that engaging seriously with uncertainty is a significant way to bring ethical considerations into mathematical reasoning.
We demonstrate these ideas within the context of competing models used to advise the UK government on the spread of the Omicron variant of COVID-19 during December 2021.
arXiv Detail & Related papers (2023-09-11T15:13:36Z)
- Guiding AI-Generated Digital Content with Wireless Perception [69.51950037942518]
We introduce an integration of wireless perception with AI-generated content (AIGC) to improve the quality of digital content production.
The framework employs a novel multi-scale perception technology to read the user's posture, which is difficult to describe accurately in words, and transmits it to the AIGC model as skeleton images.
Since the production process imposes the user's posture as a constraint on the AIGC model, it makes the generated content more aligned with the user's requirements.
arXiv Detail & Related papers (2023-03-26T04:39:03Z)
- FATE in AI: Towards Algorithmic Inclusivity and Accessibility [0.0]
To prevent algorithmic disparities, fairness, accountability, transparency, and ethics (FATE) in AI are being implemented.
This study examines FATE-related desiderata, particularly transparency and ethics, in areas of the global South that are underserved by AI.
To promote inclusivity, a community-led strategy is proposed to collect and curate representative data for responsible AI design.
arXiv Detail & Related papers (2023-01-03T15:08:10Z)
- Seamful XAI: Operationalizing Seamful Design in Explainable AI [59.89011292395202]
Mistakes in AI systems are inevitable, arising from both technical limitations and sociotechnical gaps.
We propose that seamful design can foster AI explainability by revealing sociotechnical and infrastructural mismatches.
We explore this process with 43 AI practitioners and real end-users.
arXiv Detail & Related papers (2022-11-12T21:54:05Z)
- Building Bridges: Generative Artworks to Explore AI Ethics [56.058588908294446]
In recent years, there has been an increased emphasis on understanding and mitigating adverse impacts of artificial intelligence (AI) technologies on society.
A significant challenge in the design of ethical AI systems is that there are multiple stakeholders in the AI pipeline, each with their own set of constraints and interests.
This position paper outlines some potential ways in which generative artworks can help address this challenge by serving as accessible and powerful educational tools.
arXiv Detail & Related papers (2021-06-25T22:31:55Z)
- Getting Fairness Right: Towards a Toolbox for Practitioners [2.4364387374267427]
The potential risk of AI systems unintentionally embedding and reproducing bias has attracted the attention of machine learning practitioners and society at large.
This paper proposes a toolbox that helps practitioners ensure fair AI practices.
arXiv Detail & Related papers (2020-03-15T20:53:50Z)
This list is automatically generated from the titles and abstracts of the papers on this site.