Navigating Automated Hiring: Perceptions, Strategy Use, and Outcomes Among Young Job Seekers
- URL: http://arxiv.org/abs/2502.05099v1
- Date: Fri, 07 Feb 2025 17:18:07 GMT
- Title: Navigating Automated Hiring: Perceptions, Strategy Use, and Outcomes Among Young Job Seekers
- Authors: Lena Armstrong, Danaé Metaxa
- Abstract summary: We conducted a survey with 448 computer science students about perceptions of the procedural fairness of automated employment decision tools (AEDTs).
We find that young job seekers' perceptions of and willingness to be evaluated by AEDTs varied with the level of automation involved and the technical nature of the task being evaluated.
Our work speaks to young job seekers' distrust of automation in hiring contexts, as well as the continued role of social and socioeconomic privilege in job seeking.
- Abstract: As the use of automated employment decision tools (AEDTs) has rapidly increased in hiring contexts, especially for computing jobs, there is still limited work on applicants' perceptions of these emerging tools and their experiences navigating them. To investigate, we conducted a survey with 448 computer science students (young, current technology job-seekers) about perceptions of the procedural fairness of AEDTs, their willingness to be evaluated by different AEDTs, the strategies they use relating to automation in the hiring process, and their job seeking success. We find that young job seekers' procedural fairness perceptions of and willingness to be evaluated by AEDTs varied with the level of automation involved in the AEDT, the technical nature of the task being evaluated, and their own use of strategies, such as job referrals. Examining the relationship of their strategies with job outcomes, notably, we find that referrals and family household income have significant and positive impacts on hiring success, while more egalitarian strategies (using free online coding assessment practice or adding keywords to resumes) did not. Overall, our work speaks to young job seekers' distrust of automation in hiring contexts, as well as the continued role of social and socioeconomic privilege in job seeking, despite the use of AEDTs that promise to make hiring "unbiased."
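The abstract does not specify the statistical model behind these effects, so the snippet below is only a minimal, hypothetical sketch: it fits a logistic regression to synthetic survey-style data to show how one could test whether referrals and household income predict receiving an offer while other strategies do not. The variable names (used_referral, household_income, free_practice, resume_keywords, received_offer), the data, and the use of statsmodels are all assumptions for illustration, not the authors' dataset or method.
```python
# Illustrative sketch only: synthetic data, hypothetical variable names.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 448  # matches the survey's respondent count

df = pd.DataFrame({
    "used_referral": rng.integers(0, 2, n),      # 1 = obtained a job referral
    "household_income": rng.normal(0, 1, n),     # standardized family income
    "free_practice": rng.integers(0, 2, n),      # used free coding-assessment practice
    "resume_keywords": rng.integers(0, 2, n),    # added keywords to resume
})

# Simulate an outcome in which only referrals and income matter (illustration only).
linear_pred = 1.2 * df["used_referral"] + 0.8 * df["household_income"] - 0.5
df["received_offer"] = rng.binomial(1, 1 / (1 + np.exp(-linear_pred)))

# Logistic regression: which strategies are associated with getting an offer?
X = sm.add_constant(df[["used_referral", "household_income",
                        "free_practice", "resume_keywords"]])
model = sm.Logit(df["received_offer"], X).fit(disp=0)
print(model.summary())  # coefficients and p-values per predictor
```
A logistic model is a natural fit here only because the outcome (offer vs. no offer) is binary; on real survey data, a pattern consistent with the abstract would be significant positive coefficients for referrals and income and negligible ones for the free-practice and keyword strategies.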
Related papers
- Interactive Agents to Overcome Ambiguity in Software Engineering [61.40183840499932]
AI agents are increasingly being deployed to automate tasks, often based on ambiguous and underspecified user instructions.
Making unwarranted assumptions and failing to ask clarifying questions can lead to suboptimal outcomes.
We study the ability of LLM agents to handle ambiguous instructions in interactive code generation settings by evaluating the performance of proprietary and open-weight models.
arXiv Detail & Related papers (2025-02-18T17:12:26Z)
- Exploring the Implementation of AI in Early Onset Interviews to Help Mitigate Bias [0.0]
This paper investigates the application of artificial intelligence (AI) in early-stage recruitment interviews.
Results indicate that AI effectively reduces sentiment-driven biases by 41.2%.
arXiv Detail & Related papers (2025-01-17T00:40:35Z)
- TheAgentCompany: Benchmarking LLM Agents on Consequential Real World Tasks [52.46737975742287]
We build a self-contained environment with data that mimics a small software company environment.
We find that with the most competitive agent, 24% of the tasks can be completed autonomously.
This paints a nuanced picture of task automation with LM agents.
arXiv Detail & Related papers (2024-12-18T18:55:40Z)
- Follow the money: a startup-based measure of AI exposure across occupations, industries and regions [0.0]
Existing measures of AI occupational exposure focus on AI's theoretical potential to substitute or complement human labour on the basis of technical feasibility.
We introduce the AI Startup Exposure (AISE) index-a novel metric based on occupational descriptions from O*NET and AI applications developed by startups.
Our findings suggest that AI adoption will be gradual and shaped by social factors as much as by the technical feasibility of AI applications.
arXiv Detail & Related papers (2024-12-06T10:25:05Z)
- Assessing the Performance of Human-Capable LLMs -- Are LLMs Coming for Your Job? [0.0]
SelfScore is a benchmark designed to assess the performance of automated Large Language Model (LLM) agents on help desk and professional consultation tasks.
The benchmark evaluates agents on problem complexity and response helpfulness, ensuring transparency and simplicity in its scoring system.
The study raises concerns about the potential displacement of human workers, especially in areas where AI technologies excel.
arXiv Detail & Related papers (2024-10-05T14:37:35Z)
- WorkArena++: Towards Compositional Planning and Reasoning-based Common Knowledge Work Tasks [85.95607119635102]
Large language models (LLMs) can mimic human-like intelligence.
WorkArena++ is designed to evaluate the planning, problem-solving, logical/arithmetic reasoning, retrieval, and contextual understanding abilities of web agents.
arXiv Detail & Related papers (2024-07-07T07:15:49Z)
- Fairness in AI-Driven Recruitment: Challenges, Metrics, Methods, and Future Directions [0.0]
Big data and machine learning have led to a rapid transformation of the traditional recruitment process.
Given the prevalence of AI-based recruitment, there is growing concern that human biases may carry over to decisions made by these systems.
This paper provides a comprehensive overview of this emerging field by discussing the types of biases encountered in AI-driven recruitment.
arXiv Detail & Related papers (2024-05-30T05:25:14Z)
- WESE: Weak Exploration to Strong Exploitation for LLM Agents [95.6720931773781]
This paper proposes a novel approach, Weak Exploration to Strong Exploitation (WESE) to enhance LLM agents in solving open-world interactive tasks.
WESE involves decoupling the exploration and exploitation process, employing a cost-effective weak agent to perform exploration tasks for global knowledge.
A knowledge graph-based strategy is then introduced to store the acquired knowledge and extract task-relevant knowledge, enhancing the stronger agent in success rate and efficiency for the exploitation task.
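As a rough illustration of the decoupling described above (not the authors' implementation; the agent logic, observation format, and graph schema here are hypothetical), a cheap "weak" pass can populate a small triple store that a stronger agent later queries for only task-relevant facts:
```python
# Hypothetical sketch of a weak-explore / strong-exploit loop with a tiny
# knowledge-graph store; WESE's actual prompts, models, and schema differ.
from dataclasses import dataclass, field

@dataclass
class KnowledgeGraph:
    # (subject, relation, object) triples gathered during exploration
    triples: set = field(default_factory=set)

    def add(self, subj: str, rel: str, obj: str) -> None:
        self.triples.add((subj, rel, obj))

    def query(self, keyword: str) -> list:
        # Return only triples relevant to the current task keyword
        return [t for t in self.triples if keyword in " ".join(t)]

def weak_explore(observations: list, kg: KnowledgeGraph) -> None:
    # A cheap agent scans the environment and records what it sees.
    for obs in observations:
        subj, rel, obj = (part.strip() for part in obs.split(","))
        kg.add(subj, rel, obj)

def strong_exploit(task_keyword: str, kg: KnowledgeGraph) -> list:
    # A stronger agent plans using only the task-relevant slice of knowledge,
    # keeping its context small and focused.
    return kg.query(task_keyword)

kg = KnowledgeGraph()
weak_explore(["kitchen, contains, apple", "desk, contains, laptop"], kg)
print(strong_exploit("apple", kg))  # -> [('kitchen', 'contains', 'apple')]
```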
arXiv Detail & Related papers (2024-04-11T03:31:54Z)
- The Foundations of Computational Management: A Systematic Approach to Task Automation for the Integration of Artificial Intelligence into Existing Workflows [55.2480439325792]
This article introduces Computational Management, a systematic approach to task automation.
The article offers three easy step-by-step procedures to begin the process of implementing AI within a workflow.
arXiv Detail & Related papers (2024-02-07T01:45:14Z)
- "Generate" the Future of Work through AI: Empirical Evidence from Online Labor Markets [4.955822723273599]
Large Language Model (LLM)-based generative AI, such as ChatGPT, is considered the first generation of Artificial General Intelligence (AGI).
Our paper offers crucial insights into AI's influence on labor markets and individuals' reactions.
arXiv Detail & Related papers (2023-08-09T19:45:00Z)
- Watch-And-Help: A Challenge for Social Perception and Human-AI Collaboration [116.28433607265573]
We introduce Watch-And-Help (WAH), a challenge for testing social intelligence in AI agents.
In WAH, an AI agent needs to help a human-like agent perform a complex household task efficiently.
We build VirtualHome-Social, a multi-agent household environment, and provide a benchmark including both planning and learning based baselines.
arXiv Detail & Related papers (2020-10-19T21:48:31Z)