Socio-Economic Consequences of Generative AI: A Review of Methodological Approaches
- URL: http://arxiv.org/abs/2411.09313v1
- Date: Thu, 14 Nov 2024 09:40:25 GMT
- Title: Socio-Economic Consequences of Generative AI: A Review of Methodological Approaches
- Authors: Carlos J. Costa, Joao Tiago Aparicio, Manuela Aparicio
- Abstract summary: We identify the primary methodologies that may be used to help predict the economic and social impacts of generative AI adoption.
Through a comprehensive literature review, we uncover a range of methodologies poised to assess the multifaceted impacts of this technological revolution.
- Abstract: The widespread adoption of generative artificial intelligence (AI) has fundamentally transformed technological landscapes and societal structures in recent years. Our objective is to identify the primary methodologies that may be used to help predict the economic and social impacts of generative AI adoption. Through a comprehensive literature review, we uncover a range of methodologies poised to assess the multifaceted impacts of this technological revolution. We explore Agent-Based Simulation (ABS), Econometric Models, Input-Output Analysis, Reinforcement Learning (RL) for Decision-Making Agents, Surveys and Interviews, Scenario Analysis, Policy Analysis, and the Delphi Method. Our findings have allowed us to identify these approaches' main strengths and weaknesses and their adequacy in coping with uncertainty, robustness, and resource requirements.
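Of the methodologies listed above, Agent-Based Simulation (ABS) is the easiest to illustrate concretely. The sketch below is a minimal, hypothetical example, not taken from the paper: firms on a random contact network adopt generative AI once a sufficient share of their neighbours have adopted, and the simulation reports the aggregate adoption rate over time. The network model, the threshold rule, and all parameters are illustrative assumptions.

```python
# Minimal agent-based simulation (ABS) sketch: firms on a random contact
# network adopt generative AI once a sufficient share of their neighbours
# have adopted. All parameters below are illustrative assumptions, not
# values taken from the paper.
import numpy as np

rng = np.random.default_rng(42)
N_FIRMS, STEPS = 500, 30
P_LINK = 0.02                                  # link probability (assumed)
thresholds = rng.uniform(0.05, 0.4, N_FIRMS)   # heterogeneous adoption thresholds

# Symmetric Erdos-Renyi-style contact network between firms.
adj = rng.random((N_FIRMS, N_FIRMS)) < P_LINK
adj = np.triu(adj, 1)
adj = (adj | adj.T).astype(float)

adopted = rng.random(N_FIRMS) < 0.02           # ~2% early adopters
degree = adj.sum(axis=1).clip(min=1)
for t in range(STEPS):
    peer_share = (adj @ adopted.astype(float)) / degree
    adopted = adopted | (peer_share >= thresholds)
    print(f"step {t:2d}: adoption rate = {adopted.mean():.2%}")
```

The simulated adoption trajectory is the kind of aggregate output that could then feed the econometric or input-output analyses the paper also surveys; a threshold-diffusion rule is only one of many possible agent designs.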
Related papers
- Large Language Model for Qualitative Research -- A Systematic Mapping Study [3.302912592091359]
Large Language Models (LLMs), powered by advanced generative AI, have emerged as transformative tools.
This study systematically maps the literature on the use of LLMs for qualitative research.
Findings reveal that LLMs are utilized across diverse fields, demonstrating the potential to automate processes.
arXiv Detail & Related papers (2024-11-18T21:28:00Z)
- Data Analysis in the Era of Generative AI [56.44807642944589]
This paper explores the potential of AI-powered tools to reshape data analysis, focusing on design considerations and challenges.
We explore how the emergence of large language and multimodal models offers new opportunities to enhance various stages of data analysis workflow.
We then examine human-centered design principles that facilitate intuitive interactions, build user trust, and streamline the AI-assisted analysis workflow across multiple apps.
arXiv Detail & Related papers (2024-09-27T06:31:03Z)
- An evidence-based methodology for human rights impact assessment (HRIA) in the development of AI data-intensive systems [49.1574468325115]
We show that human rights already underpin decisions in the field of data use.
This work presents a methodology and a model for a Human Rights Impact Assessment (HRIA).
The proposed methodology is tested on concrete case studies to demonstrate its feasibility and effectiveness.
arXiv Detail & Related papers (2024-07-30T16:27:52Z)
- AI in Supply Chain Risk Assessment: A Systematic Literature Review and Bibliometric Analysis [0.0]
Supply chain risk assessment (SCRA) has witnessed a profound evolution through the integration of artificial intelligence (AI) and machine learning (ML) techniques.
Previous reviews have outlined established methodologies but have overlooked emerging AI/ML techniques.
This paper conducts a systematic literature review combined with a comprehensive bibliometric analysis.
arXiv Detail & Related papers (2023-12-12T17:47:51Z)
- Predictable Artificial Intelligence [77.1127726638209]
This paper introduces the ideas and challenges of Predictable AI.
It explores the ways in which we can anticipate key validity indicators of present and future AI ecosystems.
We argue that achieving predictability is crucial for fostering trust, liability, control, alignment and safety of AI ecosystems.
arXiv Detail & Related papers (2023-10-09T21:36:21Z)
- Navigating the Complexity of Generative AI Adoption in Software Engineering [6.190511747986327]
The adoption patterns of Generative Artificial Intelligence (AI) tools within software engineering are investigated.
Influencing factors at the individual, technological, and societal levels are analyzed.
arXiv Detail & Related papers (2023-07-12T11:05:19Z)
- Multimodal Explainable Artificial Intelligence: A Comprehensive Review of Methodological Advances and Future Research Directions [2.35574869517894]
This study focuses on analyzing recent advances in the area of Multimodal XAI (MXAI).
MXAI comprises methods that involve multiple modalities in the primary prediction and explanation tasks.
arXiv Detail & Related papers (2023-06-09T07:51:50Z)
- Reinforcement Learning with Heterogeneous Data: Estimation and Inference [84.72174994749305]
We introduce the K-Heterogeneous Markov Decision Process (K-Hetero MDP) to address sequential decision problems with population heterogeneity.
We propose the Auto-Clustered Policy Evaluation (ACPE) for estimating the value of a given policy, and the Auto-Clustered Policy Iteration (ACPI) for estimating the optimal policy in a given policy class.
We present simulations to support our theoretical findings, and we conduct an empirical study on the standard MIMIC-III dataset.
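The cluster-then-evaluate idea behind such heterogeneous-population methods can be illustrated with a small hypothetical sketch. The code below is not the ACPE/ACPI estimators from the paper: it generates synthetic logged trajectories from two latent sub-groups, clusters them on a crude dynamics feature, and estimates the value of the logged behaviour policy per cluster by plain discounted-return averaging; off-policy corrections are omitted.

```python
# Sketch of the cluster-then-evaluate idea for heterogeneous populations.
# NOT the ACPE/ACPI estimators from the paper: the data, the clustering
# feature, and the per-cluster value estimate (plain discounted-return
# averaging of the logged policy) are simplifying assumptions.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
GAMMA, N_TRAJ, HORIZON = 0.95, 200, 20

# Synthetic logged trajectories from two latent sub-populations whose
# state dynamics drift in opposite directions.
latent = rng.integers(0, 2, size=N_TRAJ)
drift = np.where(latent == 0, 0.1, -0.1)
states = np.cumsum(rng.normal(drift[:, None], 0.2, size=(N_TRAJ, HORIZON)), axis=1)
rewards = states                                # reward == state, for illustration

# One crude feature per trajectory: mean one-step state change.
features = np.diff(states, axis=1).mean(axis=1, keepdims=True)
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(features)

# Per-cluster value of the logged policy: mean discounted return.
returns = rewards @ (GAMMA ** np.arange(HORIZON))
for k in range(2):
    print(f"cluster {k}: estimated value = {returns[clusters == k].mean():.3f}")
```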
arXiv Detail & Related papers (2022-01-31T20:58:47Z)
- Achieving a Data-driven Risk Assessment Methodology for Ethical AI [3.523208537466128]
We show that a multidisciplinary research approach is the foundation of a pragmatic definition of ethical and societal risks faced by organizations using AI.
We propose a novel data-driven risk assessment methodology, entitled DRESS-eAI.
arXiv Detail & Related papers (2021-11-29T12:55:33Z)
- Counterfactual Explanations as Interventions in Latent Space [62.997667081978825]
Counterfactual explanations aim to provide end users with a set of features that need to be changed in order to achieve a desired outcome.
Current approaches rarely take into account the feasibility of actions needed to achieve the proposed explanations.
We present Counterfactual Explanations as Interventions in Latent Space (CEILS), a methodology to generate counterfactual explanations.
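As a generic illustration of the counterfactual idea only (not the CEILS latent-space method, and without the actionability constraints the paper emphasizes), the sketch below computes, for an assumed linear classifier, the smallest L2 change to an instance that flips its predicted class.

```python
# Generic counterfactual-explanation sketch for a linear classifier:
# the smallest L2 perturbation that flips the predicted class is the
# projection onto the decision hyperplane, plus a small margin.
# Weights, bias, and the instance are hypothetical; this is not CEILS.
import numpy as np

w = np.array([1.5, -2.0, 0.5])   # assumed classifier weights
b = -0.25                        # assumed bias
x = np.array([0.2, 0.4, 1.0])    # instance, currently classified as negative

score = w @ x + b                # decision function; predicted class = sign(score)
margin = 1e-3                    # step slightly past the boundary so the label flips
x_cf = x - (score + np.sign(score) * margin) * w / (w @ w)

print("original score:      ", score)
print("counterfactual point:", x_cf)
print("new score:           ", w @ x_cf + b)
print("required change:     ", x_cf - x)
```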
arXiv Detail & Related papers (2021-06-14T20:48:48Z)
- An interdisciplinary conceptual study of Artificial Intelligence (AI) for helping benefit-risk assessment practices: Towards a comprehensive qualification matrix of AI programs and devices (pre-print 2020) [55.41644538483948]
This paper presents a comprehensive analysis of existing concepts of intelligence drawn from different disciplines.
The aim is to identify shared notions or discrepancies to consider for qualifying AI systems.
arXiv Detail & Related papers (2021-05-07T12:01:31Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.