Earning Sans Learning: Noisy Decision-Making and Labor Supply on Gig
Economy Platforms
- URL: http://arxiv.org/abs/2111.00002v1
- Date: Thu, 28 Oct 2021 04:13:53 GMT
- Title: Earning Sans Learning: Noisy Decision-Making and Labor Supply on Gig
Economy Platforms
- Authors: Daniel Freund and Chamsi Hssaine
- Abstract summary: We study a gig economy platform's problem of finding optimal compensation schemes when faced with workers who base their participation decisions on limited information with respect to their earnings.
We uncover phenomena that may arise when earnings are volatile and hard to predict, as both the empirical literature and our own data-driven observations suggest may be prevalent on gig economy platforms.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We study a gig economy platform's problem of finding optimal compensation
schemes when faced with workers who myopically base their participation
decisions on limited information with respect to their earnings. The stylized
model we consider captures two key, related features absent from prior work on
the operations of on-demand service platforms: (i) workers' lack of information
regarding the distribution from which their earnings are drawn and (ii) worker
decisions that are sensitive to variability in earnings. Despite its stylized
nature, our model induces a complex stochastic optimization problem whose
natural fluid relaxation is also a priori intractable. Nevertheless, we uncover
a surprising structural property of the relaxation that allows us to design a
tractable, fast-converging heuristic policy that is asymptotically optimal
amongst the space of all policies that fulfill a fairness property. In doing
so, via both theory and extensive simulations, we uncover phenomena that may
arise when earnings are volatile and hard to predict, as both the empirical
literature and our own data-driven observations suggest may be prevalent on gig
economy platforms.
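To make the setting concrete, here is a minimal, hypothetical simulation (not the authors' model or data) of workers who decide whether to participate based only on a short memory of their own realized earnings. Holding the mean wage fixed, higher earnings volatility alone depresses participation, which is the kind of phenomenon the abstract alludes to.

```python
# Minimal hypothetical sketch, not the paper's model: each worker decides
# whether to work using only the average of their few most recent earnings.
# A worker who sits out learns nothing new, so a streak of bad draws can
# permanently discourage them; higher variance then lowers participation
# even when the mean wage is unchanged.
import random

def average_participation(mean_wage, wage_std, reservation_wage,
                          n_workers=2000, n_periods=50, memory=3, seed=0):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_workers):
        history = [mean_wage]            # optimistic prior: the advertised wage
        worked = 0
        for _ in range(n_periods):
            recent = history[-memory:]
            belief = sum(recent) / len(recent)
            if belief >= reservation_wage:
                history.append(rng.gauss(mean_wage, wage_std))
                worked += 1
        total += worked / n_periods
    return total / n_workers

# Same mean wage, increasing volatility: participation falls.
for std in (1.0, 5.0, 10.0):
    print(f"std={std}: participation={average_participation(20.0, std, 18.0):.2f}")
```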
Related papers
- Pareto Optimal Algorithmic Recourse in Multi-cost Function [0.44938884406455726]
Algorithmic recourse aims to identify minimal-cost actions that alter an individual's features and thereby obtain a desired outcome.
Most current recourse mechanisms use gradient-based methods that assume cost functions are differentiable, an assumption that often fails in real-world scenarios.
This work proposes an algorithmic recourse framework that handles nondifferentiable and discrete multi-cost functions.
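As a rough illustration of recourse with discrete, non-differentiable costs, the sketch below enumerates candidate feature changes and keeps those on the Pareto front of two cost functions; the classifier, cost functions, and feature grid are hypothetical placeholders, not the framework proposed in the paper.

```python
# Hedged sketch: enumerate discrete candidate actions that flip a placeholder
# classifier's decision, then keep the Pareto-optimal ones under two
# non-differentiable costs.
from itertools import product

def classifier(x):                        # placeholder decision rule
    return x["income"] + 5 * x["savings"] >= 60

def effort_cost(x, y):                    # discrete cost #1
    return abs(y["income"] - x["income"]) // 10

def time_cost(x, y):                      # discrete cost #2
    return abs(y["savings"] - x["savings"])

def pareto_recourse(x, incomes, savings):
    candidates = []
    for inc, sav in product(incomes, savings):
        y = {"income": inc, "savings": sav}
        if classifier(y):                 # only actions that achieve the outcome
            candidates.append((effort_cost(x, y), time_cost(x, y), y))
    # Keep candidates not dominated in both costs by any other candidate.
    return [c for c in candidates
            if not any(o[0] <= c[0] and o[1] <= c[1] and o[:2] != c[:2]
                       for o in candidates)]

x0 = {"income": 30, "savings": 2}
for e, t, y in pareto_recourse(x0, incomes=range(30, 71, 10), savings=range(0, 7)):
    print(e, t, y)
```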
arXiv Detail & Related papers (2025-02-11T03:16:08Z) - Differentiation of Multi-objective Data-driven Decision Pipeline [34.577809430781144]
Real-world scenarios frequently involve multi-objective data-driven optimization problems.
Traditional two-stage methods apply a machine learning model to estimate problem coefficients, followed by invoking a solver to tackle the predicted optimization problem.
Recent efforts have focused on end-to-end training of predictive models that use decision loss derived from the downstream optimization problem.
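A toy version of the two-stage pipeline described above might look like the following, with synthetic data, a least-squares predictor, and a trivial top-k "solver"; end-to-end approaches would instead train the predictor directly on the resulting decision loss.

```python
# Illustrative two-stage predict-then-optimize pipeline (hypothetical data):
# fit coefficients, let a "solver" pick the best k items under the predictions,
# then score the decision against the true values.
import numpy as np

rng = np.random.default_rng(0)
n_items, k = 8, 3
features = rng.normal(size=(n_items, 4))
true_w = np.array([1.0, -2.0, 0.5, 0.0])
true_values = features @ true_w + rng.normal(scale=0.1, size=n_items)

# Stage 1: estimate coefficients from noisy historical observations.
noisy_obs = true_values + rng.normal(scale=1.0, size=n_items)
w_hat, *_ = np.linalg.lstsq(features, noisy_obs, rcond=None)

# Stage 2: "solve" the downstream problem (here: choose the top-k items).
pred_values = features @ w_hat
chosen = np.argsort(pred_values)[-k:]
best = np.argsort(true_values)[-k:]

decision_loss = true_values[best].sum() - true_values[chosen].sum()
print("decision loss (regret):", round(float(decision_loss), 3))
```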
arXiv Detail & Related papers (2024-06-02T15:42:03Z) - Optimal Baseline Corrections for Off-Policy Contextual Bandits [61.740094604552475]
We aim to learn decision policies that optimize an unbiased offline estimate of an online reward metric.
We propose a single framework built on the equivalence of such baseline corrections in learning scenarios.
Our framework enables us to characterize the variance-optimal unbiased estimator and provide a closed-form solution for it.
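One standard baseline correction, framed as a control variate, is sketched below. It is not necessarily the exact variance-optimal estimator derived in the paper, but it illustrates the mechanism: the importance weight has expectation 1 under the logging policy, so subtracting beta*(w - 1) leaves the estimate unbiased while a covariance-based choice of beta reduces its variance.

```python
# Hedged sketch of a baseline/control-variate correction for IPS (one standard
# construction, not necessarily the paper's estimator).  For a fixed beta,
# w*r - beta*(w - 1) is unbiased because E[w] = 1; beta = Cov(w*r, w)/Var(w)
# minimizes the variance.  Estimating beta from the same data adds a small
# finite-sample bias.
import numpy as np

def corrected_ips(rewards, logging_probs, target_probs):
    w = target_probs / logging_probs                      # importance weights
    wr = w * rewards
    beta = np.cov(wr, w, bias=True)[0, 1] / np.var(w)     # plug-in coefficient
    return wr.mean() - beta * (w.mean() - 1.0)

# Tiny synthetic two-action bandit: true value of the target policy is 0.42.
rng = np.random.default_rng(1)
n, p_log, p_tgt = 10_000, 0.7, 0.4
actions = rng.binomial(1, p_log, size=n)
rewards = rng.binomial(1, np.where(actions == 1, 0.6, 0.3)).astype(float)
logging_probs = np.where(actions == 1, p_log, 1 - p_log)
target_probs = np.where(actions == 1, p_tgt, 1 - p_tgt)

print("plain IPS:", round(float((target_probs / logging_probs * rewards).mean()), 4))
print("baseline-corrected IPS:", round(float(corrected_ips(rewards, logging_probs, target_probs)), 4))
```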
arXiv Detail & Related papers (2024-05-09T12:52:22Z) - Resilient Constrained Reinforcement Learning [87.4374430686956]
We study a class of constrained reinforcement learning (RL) problems in which the constraint specifications are not fully identified before training.
Identifying appropriate constraint specifications is challenging because of the ill-defined trade-off between the reward training objective and constraint satisfaction.
We propose a new constrained RL approach that searches for policy and constraint specifications together.
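A scalar toy sketch of searching over the constraint specification as well as the policy is given below; this is a generic primal-dual relaxation, not the paper's algorithm. The constraint budget can be loosened by an amount u, but loosening is penalized, and a dual variable balances reward, violation, and relaxation.

```python
# Toy sketch with a scalar "policy" p in [0, 1]: maximize r(p) = p subject to
# c(p) = p <= budget + u, where the relaxation u >= 0 is itself optimized but
# penalized by (penalty/2)*u**2.  Generic projected primal-dual updates;
# averaged primal iterates are reported, as is common for saddle-point methods.
def resilient_primal_dual(budget=0.3, penalty=5.0, steps=5000, lr=0.01):
    p, u, lam = 0.0, 0.0, 0.0
    p_avg = u_avg = 0.0
    for _ in range(steps):
        p = min(max(p + lr * (1.0 - lam), 0.0), 1.0)    # d/dp [p - lam*p]
        u = max(u + lr * (lam - penalty * u), 0.0)      # d/du [lam*u - pen/2*u^2]
        lam = max(lam + lr * (p - (budget + u)), 0.0)   # ascent on the violation
        p_avg += p / steps
        u_avg += u / steps
    return p_avg, u_avg, lam

p, u, lam = resilient_primal_dual()
print(f"policy={p:.2f}  relaxation={u:.2f}  multiplier={lam:.2f}")
# With these numbers the constraint is loosened a little (u > 0) rather than
# either enforced exactly or ignored.
```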
arXiv Detail & Related papers (2023-12-28T18:28:23Z) - Learning Fair Policies for Multi-stage Selection Problems from
Observational Data [4.282745020665833]
We consider the problem of learning fair policies for multi-stage selection problems from observational data.
This problem arises in several high-stakes domains, such as company hiring, loan approval, or bail decisions, where outcomes are only observed for those selected.
We propose a multi-stage framework that can be augmented with various fairness constraints, such as demographic parity or equal opportunity.
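The sketch below shows a two-stage selection pipeline with a demographic-parity style rule (equal selection rates per group at each stage). The scores, groups, and stage capacities are hypothetical; the paper's framework learns such policies from observational data rather than selecting on known scores.

```python
# Hedged sketch: select the same fraction of each group, by score, at every
# stage of a selection pipeline (a demographic-parity style constraint).
from collections import defaultdict

def parity_select(candidates, rate):
    """Select fraction `rate` of each group, highest scores first."""
    by_group = defaultdict(list)
    for cand in candidates:
        by_group[cand["group"]].append(cand)
    selected = []
    for members in by_group.values():
        members.sort(key=lambda c: c["score"], reverse=True)
        selected.extend(members[:round(rate * len(members))])
    return selected

candidates = [{"id": i, "group": "A" if i % 3 else "B", "score": (7 * i) % 13}
              for i in range(30)]
stage1 = parity_select(candidates, rate=0.5)       # e.g. resume screen
stage2 = parity_select(stage1, rate=0.4)           # e.g. interview
for name, pool in (("stage 1", stage1), ("stage 2", stage2)):
    counts_by_group = {g: sum(c["group"] == g for c in pool) for g in ("A", "B")}
    print(name, "selected per group:", counts_by_group)
```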
arXiv Detail & Related papers (2023-12-20T16:33:15Z) - Likelihood Ratio Confidence Sets for Sequential Decision Making [51.66638486226482]
We revisit the likelihood-based inference principle and propose to use likelihood ratios to construct valid confidence sequences.
Our method is especially suitable for problems with well-specified likelihoods.
We show how to provably choose the best sequence of estimators and shed light on connections to online convex optimization.
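One standard likelihood-ratio confidence sequence of this flavor, for a Bernoulli mean, is sketched below (a "predictable plug-in" construction; the paper treats the general principle, not this particular recipe). For each candidate p we accumulate the log-ratio of the plug-in likelihood, computed from past data only, to the likelihood under p; at the true parameter this ratio is a nonnegative martingale with mean 1, so by Ville's inequality the set {p : R_t(p) < 1/alpha} is valid at every time simultaneously.

```python
# Hedged sketch of a likelihood-ratio confidence sequence for a Bernoulli mean.
import math, random

def lr_confidence_sequence(xs, alpha=0.05, grid_size=200):
    grid = [(i + 0.5) / grid_size for i in range(grid_size)]
    log_r = [0.0] * grid_size
    ones, n = 1.0, 2.0                       # Laplace-smoothed running estimate
    intervals = []
    for x in xs:
        q = ones / n                         # predictable: depends on past only
        for i, p in enumerate(grid):
            num = q if x == 1 else 1.0 - q
            den = p if x == 1 else 1.0 - p
            log_r[i] += math.log(num) - math.log(den)
        ones, n = ones + x, n + 1.0
        kept = [p for p, lr in zip(grid, log_r) if lr < math.log(1.0 / alpha)]
        intervals.append((min(kept), max(kept)))
    return intervals

rng = random.Random(0)
data = [1 if rng.random() < 0.35 else 0 for _ in range(500)]
intervals = lr_confidence_sequence(data)
for t in (10, 100, 500):                     # sets shrink but stay valid anytime
    lo, hi = intervals[t - 1]
    print(f"t={t}: [{lo:.2f}, {hi:.2f}]")
```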
arXiv Detail & Related papers (2023-11-08T00:10:21Z) - Adaptive Decision-Making with Constraints and Dependent Losses:
Performance Guarantees and Applications to Online and Nonlinear
Identification [5.787117733071415]
We consider adaptive decision-making problems in which an agent optimizes a cumulative performance objective by repeatedly choosing among a finite set of options.
Our algorithm and analysis are instance-dependent; that is, suboptimal choices of the environment are exploited and reflected in our regret bounds.
The performance of the resulting algorithms is highlighted in two numerical examples.
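For intuition, a minimal UCB-style sketch for repeatedly choosing among finitely many options is shown below; the paper additionally handles constraints and dependent losses, while this only illustrates the instance-dependent flavor, namely that clearly suboptimal options are tried rarely.

```python
# Minimal UCB sketch for a finite set of options with Bernoulli rewards.
import math, random

def ucb(means, horizon=5000, seed=0):
    rng = random.Random(seed)
    k = len(means)
    counts, sums, reward = [0] * k, [0.0] * k, 0.0
    for t in range(1, horizon + 1):
        if t <= k:
            arm = t - 1                          # play each option once
        else:
            arm = max(range(k), key=lambda i: sums[i] / counts[i]
                      + math.sqrt(2 * math.log(t) / counts[i]))
        r = 1.0 if rng.random() < means[arm] else 0.0
        counts[arm] += 1; sums[arm] += r; reward += r
    return counts, reward

counts, reward = ucb([0.2, 0.5, 0.55])
print("pulls per option:", counts)       # the clearly worse option is pulled least
print("regret vs. best option:", round(0.55 * 5000 - reward, 1))
```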
arXiv Detail & Related papers (2023-04-06T18:32:26Z) - Off-Policy Evaluation with Policy-Dependent Optimization Response [90.28758112893054]
We develop a new framework for off-policy evaluation with a policy-dependent linear optimization response.
We construct unbiased estimators for the policy-dependent estimand by a perturbation method.
We provide a general algorithm for optimizing causal interventions.
arXiv Detail & Related papers (2022-02-25T20:25:37Z) - On the Fundamental Trade-offs in Learning Invariant Representations [7.868449549351487]
We identify and determine two fundamental trade-offs between utility and semantic dependence induced by the statistical dependencies between the data and its corresponding target and semantic attributes.
We numerically quantify the trade-offs on representative problems and compare to the solutions achieved by baseline representation learning algorithms.
arXiv Detail & Related papers (2021-09-08T01:26:46Z) - Algorithmic Challenges in Ensuring Fairness at the Time of Decision [6.228560624452748]
Algorithmic decision-making in societal contexts can be framed as optimization under bandit feedback.
Recent lawsuits accuse companies that deploy algorithmic pricing practices of price gouging.
We introduce the well-studied fairness notion of envy-freeness within the context of convex optimization.
arXiv Detail & Related papers (2021-03-16T19:06:28Z) - End-to-End Learning and Intervention in Games [60.41921763076017]
We provide a unified framework for learning and intervention in games.
We propose two approaches, respectively based on explicit and implicit differentiation.
The analytical results are validated using several real-world problems.
arXiv Detail & Related papers (2020-10-26T18:39:32Z) - Efficient Empowerment Estimation for Unsupervised Stabilization [75.32013242448151]
The empowerment principle enables unsupervised stabilization of dynamical systems at upright positions.
We propose an alternative solution based on a trainable representation of a dynamical system as a Gaussian channel.
We show that our method has a lower sample complexity, is more stable in training, possesses the essential properties of the empowerment function, and allows estimation of empowerment from images.
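For intuition about why the Gaussian-channel view is convenient: empowerment is the channel capacity from actions to future states, and a scalar linear-Gaussian channel has a closed-form capacity. The dynamics below are made up for illustration, not the paper's learned representation.

```python
# Illustration: for a scalar channel s' = gain*u + noise with power-limited
# action u, the capacity (and hence empowerment under this model) is
# 0.5*log(1 + SNR) nats per step.
import math

def gaussian_empowerment(gain, action_power, noise_var):
    snr = gain ** 2 * action_power / noise_var
    return 0.5 * math.log(1.0 + snr)

# Where actions influence the state strongly, empowerment is large, which is
# the usual intuition behind empowerment-based stabilization.
for gain in (0.1, 1.0, 5.0):
    print(f"gain={gain}: empowerment ~ {gaussian_empowerment(gain, 1.0, 0.5):.3f} nats")
```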
arXiv Detail & Related papers (2020-07-14T21:10:16Z)