AI exposure predicts unemployment risk
- URL: http://arxiv.org/abs/2308.02624v1
- Date: Fri, 4 Aug 2023 15:21:07 GMT
- Title: AI exposure predicts unemployment risk
- Authors: Morgan Frank, Yong-Yeol Ahn, Esteban Moro
- Abstract summary: We assess which models of AI exposure predict job separations and unemployment risk.
We find that individual AI exposure models are not predictive of unemployment rates, unemployment risk, or job separation rates.
Our results also call for dynamic, context-aware, and validated methods for assessing AI exposure.
- Score: 1.5101132008238312
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Is artificial intelligence (AI) disrupting jobs and creating unemployment?
Despite many attempts to quantify occupations' exposure to AI, inconsistent
validation obfuscates the relative benefits of each approach. A lack of
disaggregated labor outcome data, including unemployment data, further
exacerbates the issue. Here, we assess which models of AI exposure predict job
separations and unemployment risk using new occupation-level unemployment data
from each US state's unemployment insurance office spanning 2010
through 2020. Although these AI exposure scores have been used by governments
and industry, we find that individual AI exposure models are not predictive of
unemployment rates, unemployment risk, or job separation rates. However, an
ensemble of those models exhibits substantial predictive power, suggesting that
competing models may capture different aspects of AI exposure that collectively
account for AI's variable impact across occupations, regions, and time. Our
results also call for dynamic, context-aware, and validated methods for
assessing AI exposure. Interactive visualizations for this study are available
at https://sites.pitt.edu/~mrfrank/uiRiskDemo/.
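The ensemble finding above invites a concrete illustration. Below is a minimal sketch, on synthetic data, of how several exposure scores that are individually weak predictors can jointly predict occupation-level unemployment outcomes. The data, the random-forest ensemble, and all names are illustrative assumptions, not the authors' actual pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

# Synthetic stand-in: one row per occupation, three competing AI exposure scores.
rng = np.random.default_rng(0)
n = 500
X = rng.normal(size=(n, 3))
# The outcome depends only on interactions between scores, so no single score
# carries linear signal -- mirroring, in spirit, the paper's finding.
y = X[:, 0] * X[:, 1] + X[:, 1] * X[:, 2] + rng.normal(0.0, 0.5, n)

# Each score alone is a weak predictor ...
for j in range(X.shape[1]):
    r2 = cross_val_score(LinearRegression(), X[:, [j]], y, cv=5, scoring="r2").mean()
    print(f"score {j} alone: cross-validated R^2 = {r2:.3f}")

# ... but an ensemble over all scores recovers substantially more signal.
forest = RandomForestRegressor(n_estimators=200, random_state=0)
print(f"ensemble of all scores: cross-validated R^2 = "
      f"{cross_val_score(forest, X, y, cv=5, scoring='r2').mean():.3f}")
```

On real data, the features would be the competing exposure scores for each occupation and the target the observed unemployment or separation rate; the point is only that an ensemble can pool complementary signal that no single score carries.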
Related papers
- Raising the Stakes: Performance Pressure Improves AI-Assisted Decision Making [57.53469908423318]
We show the effects of performance pressure on AI advice reliance when laypeople complete a common AI-assisted task.
We find that when the stakes are high, people use AI advice more appropriately than when stakes are lower, regardless of the presence of an AI explanation.
arXiv Detail & Related papers (2024-10-21T22:39:52Z)
- Towards the Terminator Economy: Assessing Job Exposure to AI through LLMs [10.844598404826355]
One-third of U.S. employment is highly exposed to AI, primarily in high-skill jobs.
This exposure correlates positively with employment and wage growth from 2019 to 2023.
arXiv Detail & Related papers (2024-07-27T08:14:18Z)
- Work-in-Progress: Crash Course: Can (Under Attack) Autonomous Driving Beat Human Drivers? [60.51287814584477]
This paper evaluates the inherent risks in autonomous driving by examining the current landscape of AVs.
We develop specific claims highlighting the delicate balance between the advantages of AVs and potential security challenges in real-world scenarios.
arXiv Detail & Related papers (2024-05-14T09:42:21Z)
- Reconciling AI Performance and Data Reconstruction Resilience for Medical Imaging [52.578054703818125]
Artificial Intelligence (AI) models are vulnerable to information leakage of their training data, which can be highly sensitive.
Differential Privacy (DP) aims to circumvent these susceptibilities by setting a quantifiable privacy budget.
We show that using very large privacy budgets can render reconstruction attacks impossible, while drops in performance are negligible.
arXiv Detail & Related papers (2023-12-05T12:21:30Z)
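To make the privacy-budget point above concrete, here is a minimal sketch using the textbook Laplace mechanism. This is not the paper's method (attributing any particular DP training scheme to it would be an assumption); it only shows why a large budget epsilon implies little injected noise and hence little performance loss.

```python
import numpy as np

rng = np.random.default_rng(0)

def laplace_mechanism(true_value: float, sensitivity: float, epsilon: float) -> float:
    """Release true_value with epsilon-DP by adding Laplace noise of scale sensitivity/epsilon."""
    return true_value + rng.laplace(loc=0.0, scale=sensitivity / epsilon)

# A larger privacy budget means proportionally less noise, hence smaller utility loss.
for eps in (0.1, 1.0, 100.0):
    draws = np.array([laplace_mechanism(0.85, sensitivity=1.0, epsilon=eps)
                      for _ in range(10_000)])
    print(f"epsilon={eps:>6}: mean absolute perturbation = {np.abs(draws - 0.85).mean():.4f}")
```

Whether such large budgets still block reconstruction attacks in practice is exactly the empirical question the paper studies.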
- Towards more Practical Threat Models in Artificial Intelligence Security [66.67624011455423]
Recent works have identified a gap between research and practice in artificial intelligence security.
We revisit the threat models of the six most studied attacks in AI security research and match them to AI usage in practice.
arXiv Detail & Related papers (2023-11-16T16:09:44Z)
- Brief for the Canada House of Commons Study on the Implications of Artificial Intelligence Technologies for the Canadian Labor Force: Generative Artificial Intelligence Shatters Models of AI and Labor [1.0878040851638]
As with past technologies, generative AI may not lead to mass unemployment.
However, unlike past technologies, generative AI is creative, cognitive, and potentially ubiquitous.
As AI's full set of capabilities and applications emerge, policy makers should promote workers' career adaptability.
arXiv Detail & Related papers (2023-11-06T22:58:24Z)
- Unmasking Biases and Navigating Pitfalls in the Ophthalmic Artificial Intelligence Lifecycle: A Review [3.1929071422400446]
This review article breaks down the AI lifecycle into seven steps: data collection; defining the model task; data pre-processing and labeling; model development; model evaluation and validation; deployment; and post-deployment evaluation, monitoring, and system recalibration.
It delves into the risks for harm at each step and strategies for mitigating them.
arXiv Detail & Related papers (2023-10-08T03:49:42Z)
- Fairness in AI and Its Long-Term Implications on Society [68.8204255655161]
We take a closer look at AI fairness and analyze how a lack of fairness can deepen biases over time.
We discuss how biased models can lead to more negative real-world outcomes for certain groups.
If these issues persist, they could be reinforced by interactions with other risks and have severe implications for society in the form of social unrest.
arXiv Detail & Related papers (2023-04-16T11:22:59Z)
- Being Automated or Not? Risk Identification of Occupations with Graph Neural Networks [13.092145058320316]
Rapid advances in automation technologies, such as artificial intelligence (AI) and robotics, pose an increasing risk of automation for occupations.
Recent socio-economic studies suggest that nearly 50% of occupations are at high risk of being automated in the next decade.
We propose a graph-based semi-automated classification method named Automated Occupation Classification to identify the automation risk for occupations.
arXiv Detail & Related papers (2022-09-06T02:19:50Z)
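For readers unfamiliar with the graph-based idea in the entry above, here is a minimal sketch of semi-supervised label propagation on a toy occupation-similarity graph. The graph, seed labels, and hyperparameters are invented for illustration, and this generic algorithm is a stand-in, not the paper's actual method.

```python
import numpy as np

# Toy occupation graph: nodes are occupations, edges encode assumed skill overlap.
A = np.array([
    [0, 1, 1, 0, 0],
    [1, 0, 1, 0, 0],
    [1, 1, 0, 1, 0],
    [0, 0, 1, 0, 1],
    [0, 0, 0, 1, 0],
], dtype=float)

# Known automation-risk labels for two seed occupations: columns = (low risk, high risk).
Y = np.zeros((5, 2))
Y[0, 1] = 1.0  # occupation 0 labeled high risk
Y[4, 0] = 1.0  # occupation 4 labeled low risk

# Symmetrically normalized adjacency: S = D^{-1/2} A D^{-1/2}.
d = A.sum(axis=1)
S = A / np.sqrt(np.outer(d, d))

# Label propagation: F <- alpha * S @ F + (1 - alpha) * Y, iterated to convergence.
alpha, F = 0.8, Y.copy()
for _ in range(100):
    F = alpha * S @ F + (1 - alpha) * Y

print("high-risk probability per occupation:", (F[:, 1] / F.sum(axis=1)).round(2))
```

The intuition this illustrates: occupations connected to known high-risk occupations inherit elevated risk estimates through the graph, so a few labeled examples can score the whole occupation network.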
- Effect of Confidence and Explanation on Accuracy and Trust Calibration in AI-Assisted Decision Making [53.62514158534574]
We study whether features that reveal case-specific model information can calibrate trust and improve the joint performance of the human and AI.
We show that confidence score can help calibrate people's trust in an AI model, but trust calibration alone is not sufficient to improve AI-assisted decision making.
arXiv Detail & Related papers (2020-01-07T15:33:48Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the accuracy of this information and is not responsible for any consequences of its use.