Recommender Systems for Sustainability: Overview and Research Issues
- URL: http://arxiv.org/abs/2412.03620v1
- Date: Wed, 04 Dec 2024 15:03:47 GMT
- Title: Recommender Systems for Sustainability: Overview and Research Issues
- Authors: Alexander Felfernig, Manfred Wundara, Thi Ngoc Trang Tran, Seda Polat-Erdeniz, Sebastian Lubos, Merfat El-Mansi, Damian Garber, Viet-Man Le,
- Abstract summary: The article summarizes the state of the art in applying recommender systems to support the achievement of sustainability development goals.
Specifically, recommender systems can provide support for organizations and individuals to achieve the defined goals.
- Score: 39.08078205630303
- License:
- Abstract: Sustainability development goals (SDGs) are regarded as a universal call to action with the overall objectives of planet protection, ending of poverty, and ensuring peace and prosperity for all people. In order to achieve these objectives, different AI technologies play a major role. Specifically, recommender systems can provide support for organizations and individuals to achieve the defined goals. Recommender systems integrate AI technologies such as machine learning, explainable AI (XAI), case-based reasoning, and constraint solving in order to find and explain user-relevant alternatives from a potentially large set of options. In this article, we summarize the state of the art in applying recommender systems to support the achievement of sustainability development goals. In this context, we discuss open issues for future research.
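To make the abstract's mention of constraint solving and ranking of user-relevant alternatives more concrete, here is a minimal, hypothetical Python sketch (not taken from the paper): it filters a small catalog of options by hard user constraints and ranks the feasible candidates with a simple additive utility. All item names, attributes, constraint names, and weights are illustrative assumptions, not data or methods from the surveyed work.

```python
# Minimal, illustrative sketch of a constraint-based recommender.
# All items, attributes, and weights are hypothetical examples,
# not taken from the surveyed paper.

from dataclasses import dataclass


@dataclass
class Option:
    name: str
    co2_kg: float       # assumed attribute: estimated CO2 footprint
    price_eur: float    # assumed attribute: purchase price


CATALOG = [
    Option("heat_pump", co2_kg=300, price_eur=9000),
    Option("gas_boiler", co2_kg=2200, price_eur=4000),
    Option("pellet_stove", co2_kg=600, price_eur=6000),
]


def satisfies(option: Option, constraints: dict) -> bool:
    """Hard constraint check (the 'constraint solving' step)."""
    return (option.co2_kg <= constraints["max_co2_kg"]
            and option.price_eur <= constraints["max_price_eur"])


def utility(option: Option, weights: dict) -> float:
    """Simple additive utility used to rank feasible options."""
    return (weights["co2"] * (1.0 / (1.0 + option.co2_kg))
            + weights["price"] * (1.0 / (1.0 + option.price_eur)))


def recommend(constraints: dict, weights: dict, top_k: int = 2) -> list:
    feasible = [o for o in CATALOG if satisfies(o, constraints)]
    return sorted(feasible, key=lambda o: utility(o, weights), reverse=True)[:top_k]


if __name__ == "__main__":
    user_constraints = {"max_co2_kg": 1000, "max_price_eur": 10000}
    user_weights = {"co2": 0.7, "price": 0.3}
    for option in recommend(user_constraints, user_weights):
        print(option.name)
```

In such a sketch, the violated constraints of rejected options could also serve as a starting point for explanations, which is one way the XAI aspect mentioned in the abstract might be realized.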
Related papers
- The Road to Artificial SuperIntelligence: A Comprehensive Survey of Superalignment [33.27140396561271]
The emergence of large language models (LLMs) has sparked discussion about the possibility of Artificial Superintelligence (ASI).
Superalignment aims to address two primary goals -- scalability in supervision to provide high-quality guidance signals and robust governance to ensure alignment with human values.
Specifically, we explore the concept of ASI, the challenges it poses, and the limitations of current alignment paradigms in addressing the superalignment problem.
arXiv Detail & Related papers (2024-12-21T03:51:04Z) - Combining AI Control Systems and Human Decision Support via Robustness and Criticality [53.10194953873209]
We extend a methodology for adversarial explanations (AE) to state-of-the-art reinforcement learning frameworks.
We show that the learned AI control system demonstrates robustness against adversarial tampering.
In a training / learning framework, this technology can improve both the AI's decisions and explanations through human interaction.
arXiv Detail & Related papers (2024-07-03T15:38:57Z) - Towards Bidirectional Human-AI Alignment: A Systematic Review for Clarifications, Framework, and Future Directions [101.67121669727354]
Recent advancements in AI have highlighted the importance of guiding AI systems towards the intended goals, ethical principles, and values of individuals and groups, a concept broadly recognized as alignment.
The lack of clarified definitions and scopes of human-AI alignment poses a significant obstacle, hampering collaborative efforts across research domains to achieve this alignment.
We introduce a systematic review of over 400 papers published between 2019 and January 2024, spanning multiple domains such as Human-Computer Interaction (HCI), Natural Language Processing (NLP), and Machine Learning (ML).
arXiv Detail & Related papers (2024-06-13T16:03:25Z) - Towards a Privacy and Security-Aware Framework for Ethical AI: Guiding the Development and Assessment of AI Systems [0.0]
This study conducts a systematic literature review (SLR) spanning the years 2020 to 2023.
Through the synthesis of knowledge extracted from the SLR, this study presents a conceptual framework tailored for privacy- and security-aware AI systems.
arXiv Detail & Related papers (2024-03-13T15:39:57Z) - Social Environment Design [39.324202132624215]
Social Environment Design is a general framework for the use of AI for automated policy-making.
The framework seeks to capture general economic environments, includes voting on policy objectives, and gives a direction for the systematic analysis of government and economic policy through AI simulation.
arXiv Detail & Related papers (2024-02-21T19:29:14Z) - A Vision for Operationalising Diversity and Inclusion in AI [5.4897262701261225]
This study seeks to envision the operationalization of the ethical imperatives of diversity and inclusion (D&I) within AI ecosystems.
A significant challenge in AI development is the effective operationalization of D&I principles.
This paper proposes a vision of a framework for developing a tool utilizing persona-based simulation by Generative AI (GenAI).
arXiv Detail & Related papers (2023-12-11T02:44:39Z) - Predictable Artificial Intelligence [77.1127726638209]
This paper introduces the ideas and challenges of Predictable AI.
It explores the ways in which we can anticipate key validity indicators of present and future AI ecosystems.
We argue that achieving predictability is crucial for fostering trust, liability, control, alignment and safety of AI ecosystems.
arXiv Detail & Related papers (2023-10-09T21:36:21Z) - End-User Development for Artificial Intelligence: A Systematic Literature Review [2.347942013388615]
End-User Development (EUD) can allow people to create, customize, or adapt AI-based systems to their own needs.
This paper presents a literature review that aims to shed light on the current landscape of EUD for AI systems.
arXiv Detail & Related papers (2023-04-14T09:57:36Z) - An interdisciplinary conceptual study of Artificial Intelligence (AI) for helping benefit-risk assessment practices: Towards a comprehensive qualification matrix of AI programs and devices (pre-print 2020) [55.41644538483948]
This paper proposes a comprehensive analysis of existing concepts coming from different disciplines tackling the notion of intelligence.
The aim is to identify shared notions or discrepancies to consider for qualifying AI systems.
arXiv Detail & Related papers (2021-05-07T12:01:31Z) - Toward Trustworthy AI Development: Mechanisms for Supporting Verifiable Claims [59.64274607533249]
AI developers need to make verifiable claims to which they can be held accountable.
This report suggests various steps that different stakeholders can take to improve the verifiability of claims made about AI systems.
We analyze ten mechanisms for this purpose--spanning institutions, software, and hardware--and make recommendations aimed at implementing, exploring, or improving those mechanisms.
arXiv Detail & Related papers (2020-04-15T17:15:35Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.