Reputation Agent: Prompting Fair Reviews in Gig Markets
- URL: http://arxiv.org/abs/2005.06022v1
- Date: Fri, 8 May 2020 01:56:10 GMT
- Title: Reputation Agent: Prompting Fair Reviews in Gig Markets
- Authors: Carlos Toxtli, Angela Richmond-Fuller, Saiph Savage
- Abstract summary: Reputation Agent promotes fairer reviews from requesters (employers or customers) on gig markets.
Unfair reviews, created when requesters consider factors outside of a worker's control, are known to plague gig workers.
- Score: 3.100029131772499
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: Our study presents a new tool, Reputation Agent, to promote fairer reviews
from requesters (employers or customers) on gig markets. Unfair reviews,
created when requesters consider factors outside of a worker's control, are
known to plague gig workers and can result in lost job opportunities and even
termination from the marketplace. Our tool leverages machine learning to
implement an intelligent interface that: (1) uses deep learning to
automatically detect when an individual has included unfair factors in her
review (factors outside the worker's control per the policies of the market);
and (2) prompts the individual to reconsider her review if she has incorporated
unfair factors. To study the effectiveness of Reputation Agent, we conducted a
controlled experiment over different gig markets. Our experiment illustrates
that across markets, Reputation Agent, in contrast with traditional approaches,
motivates requesters to review gig workers' performance more fairly. We discuss
how tools that bring more transparency to employers about the policies of a gig
market can help build empathy, thus resulting in reasoned discussions around
potential injustices towards workers generated by these interfaces. Our vision
is that with tools that promote truth and transparency we can bring fairer
treatment to gig workers.
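The abstract describes a two-step mechanism: detect mentions of factors outside the worker's control, then prompt the requester to reconsider. As a rough illustration only (not the authors' implementation, which uses deep learning trained against market policies), the sketch below substitutes a TF-IDF plus logistic-regression classifier; all training sentences, labels, and the threshold are hypothetical.

```python
# Minimal sketch of a Reputation Agent-style review checker.
# NOT the paper's implementation: the authors use deep learning; here a
# TF-IDF + logistic-regression stand-in flags sentences that mention
# factors outside a worker's control (e.g., traffic, weather, pricing).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labeled data: 1 = mentions an out-of-control (unfair) factor.
sentences = [
    "The driver was stuck in heavy traffic the whole time",   # unfair
    "Delivery fees on this app are way too high",             # unfair
    "It rained and the food arrived cold",                    # unfair
    "The worker was rude and ignored my instructions",        # fair
    "The task was completed carefully and on time",           # fair
    "The courier forgot one of the items I ordered",          # fair
]
labels = [1, 1, 1, 0, 0, 0]

detector = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
detector.fit(sentences, labels)

def check_review(review: str, threshold: float = 0.5) -> None:
    """Prompt the requester to reconsider any sentence flagged as unfair."""
    for sentence in review.split("."):
        sentence = sentence.strip()
        if not sentence:
            continue
        p_unfair = detector.predict_proba([sentence])[0][1]
        if p_unfair >= threshold:
            print(f"Flagged: '{sentence}'")
            print("  This may describe a factor outside the worker's "
                  "control per market policy. Reconsider before posting?")

check_review("The courier was polite. Traffic made the order very late.")
```

The point of the design is that the classifier never blocks the review; it only surfaces the relevant market policy and asks the requester to reconsider, matching the transparency-over-enforcement framing in the abstract.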
Related papers
- When AI Meets Finance (StockAgent): Large Language Model-based Stock Trading in Simulated Real-world Environments [55.19252983108372]
We have developed a multi-agent AI system called StockAgent, driven by LLMs.
The StockAgent allows users to evaluate the impact of different external factors on investor trading.
It avoids the test set leakage issue present in existing trading simulation systems based on AI Agents.
arXiv Detail & Related papers (2024-07-15T06:49:30Z)
- Multi-Agent Imitation Learning: Value is Easy, Regret is Hard [52.31989962031179]
We study a multi-agent imitation learning (MAIL) problem where we take the perspective of a learner attempting to coordinate a group of agents.
Most prior work in MAIL essentially reduces the problem to matching the behavior of the expert within the support of the demonstrations.
While doing so is sufficient to drive the value gap between the learner and the expert to zero under the assumption that agents are non-strategic, it does not guarantee robustness to deviations by strategic agents.
arXiv Detail & Related papers (2024-06-06T16:18:20Z)
- Decentralized Peer Review in Open Science: A Mechanism Proposal [0.0]
We propose a community-owned and -governed system for peer review.
The system aims to increase quality and speed of peer review while lowering the chance and impact of erroneous judgements.
arXiv Detail & Related papers (2024-04-28T11:42:54Z)
- Language Models Can Reduce Asymmetry in Information Markets [100.38786498942702]
We introduce an open-source simulated digital marketplace where intelligent agents, powered by language models, buy and sell information on behalf of external participants.
The central mechanism enabling this marketplace is the agents' dual capabilities: they not only have the capacity to assess the quality of privileged information but also come equipped with the ability to forget.
To perform well, agents must make rational decisions, strategically explore the marketplace through generated sub-queries, and synthesize answers from purchased information.
arXiv Detail & Related papers (2024-03-21T14:48:37Z)
- No more Reviewer #2: Subverting Automatic Paper-Reviewer Assignment using Adversarial Learning [25.70062566419791]
We show that automated paper-reviewer assignment can be manipulated using adversarial learning.
We propose an attack that adapts a given paper so that it misleads the assignment and selects its own reviewers.
arXiv Detail & Related papers (2023-03-25T11:34:27Z)
- A Dataset on Malicious Paper Bidding in Peer Review [84.68308372858755]
Malicious reviewers strategically bid in order to unethically manipulate the paper assignment.
A critical impediment towards creating and evaluating methods to mitigate this issue is the lack of publicly-available data on malicious paper bidding.
We release a novel dataset, collected from a mock conference activity where participants were instructed to bid either honestly or maliciously.
arXiv Detail & Related papers (2022-06-24T20:23:33Z)
- Explore and Control with Adversarial Surprise [78.41972292110967]
Reinforcement learning (RL) provides a framework for learning goal-directed policies given user-specified rewards.
We propose a new unsupervised RL technique based on an adversarial game which pits two policies against each other to compete over the amount of surprise an RL agent experiences.
We show that our method leads to the emergence of complex skills by exhibiting clear phase transitions.
arXiv Detail & Related papers (2021-07-12T17:58:40Z)
- Self-Supervised Discovering of Interpretable Features for Reinforcement Learning [40.52278913726904]
We propose a self-supervised interpretable framework for deep reinforcement learning.
A self-supervised interpretable network (SSINet) is employed to produce fine-grained attention masks for highlighting task-relevant information.
We verify and evaluate our method on several Atari 2600 games as well as Duckietown, which is a challenging self-driving car simulator environment.
arXiv Detail & Related papers (2020-03-16T08:26:17Z)
- Studying the Effects of Cognitive Biases in Evaluation of Conversational Agents [10.248512149493443]
We conduct a study with 77 crowdsourced workers to understand the role of cognitive biases, specifically anchoring bias, when humans are asked to evaluate the output of conversational agents.
We find that increased consistency in ratings across two experimental conditions may be a result of anchoring bias.
arXiv Detail & Related papers (2020-02-18T23:52:39Z)
- Combating False Negatives in Adversarial Imitation Learning [67.99941805086154]
In adversarial imitation learning, a discriminator is trained to differentiate agent episodes from expert demonstrations representing the desired behavior.
As the trained policy learns to be more successful, the negative examples become increasingly similar to expert ones.
We propose a method to alleviate the impact of false negatives and test it on the BabyAI environment.
arXiv Detail & Related papers (2020-02-02T14:56:39Z)
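The preceding entry describes false negatives in adversarial imitation learning: as the agent improves, its successful episodes still get labeled as non-expert, so the discriminator is trained on mislabeled data. One straightforward remedy consistent with that summary is to relabel agent episodes known to be successful so they are no longer pushed toward the negative class; the sketch below is an assumption-laden illustration of that idea, not the paper's exact method, and assumes a per-episode success signal plus 8-dimensional stand-in observations.

```python
# Minimal sketch of combating false negatives in adversarial imitation
# learning, assuming a task-success signal is available for agent episodes.
# Simplified illustration, not the paper's exact method: successful agent
# samples are relabeled as positives before the discriminator update.
import torch
import torch.nn as nn

discriminator = nn.Sequential(nn.Linear(8, 64), nn.ReLU(), nn.Linear(64, 1))
bce = nn.BCEWithLogitsLoss()
opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)

def discriminator_step(expert_obs, agent_obs, agent_success):
    """One discriminator update; agent_success flags samples from
    episodes that actually solved the task."""
    # Expert demonstrations are always positives (label 1).
    expert_labels = torch.ones(len(expert_obs), 1)
    # Naive adversarial imitation labels every agent sample 0, so
    # successful episodes become false negatives; relabel them 1 instead.
    agent_labels = agent_success.float().unsqueeze(1)
    logits = discriminator(torch.cat([expert_obs, agent_obs]))
    labels = torch.cat([expert_labels, agent_labels])
    loss = bce(logits, labels)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

# Toy usage with random stand-in data (hypothetical shapes and flags).
expert = torch.randn(32, 8)
agent = torch.randn(32, 8)
success = torch.randint(0, 2, (32,))  # hypothetical per-sample success flags
print(discriminator_step(expert, agent, success))
```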
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.