Heuristic-Based Weak Learning for Automated Decision-Making
- URL: http://arxiv.org/abs/2005.02342v3
- Date: Wed, 2 Dec 2020 22:55:01 GMT
- Title: Heuristic-Based Weak Learning for Automated Decision-Making
- Authors: Ryan Steed, Benjamin Williams
- Abstract summary: Machine learning systems impact many stakeholders and groups of users, often disparately.
Prior studies have reconciled conflicting user preferences by aggregating a high volume of manually labeled pairwise comparisons.
We show that our weak learning approach, which requires little to no manual labeling, agrees with participants' pairwise choices nearly as often as fully supervised approaches.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Machine learning systems impact many stakeholders and groups of users, often
disparately. Prior studies have reconciled conflicting user preferences by
aggregating a high volume of manually labeled pairwise comparisons, but this
technique may be costly or impractical. How can we lower the barrier to
participation in algorithm design? Instead of creating a simplified labeling
task for a crowd, we suggest collecting ranked decision-making heuristics from
a focused sample of affected users. With empirical data from two use cases, we
show that our weak learning approach, which requires little to no manual
labeling, agrees with participants' pairwise choices nearly as often as fully
supervised approaches.
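For a concrete sense of the setup, here is a minimal, hypothetical sketch of the kind of pipeline the abstract describes: heuristics elicited from affected users, ordered by importance, act as weak labelers for pairwise comparisons in place of manual annotation. The heuristic names, the toy features, and the rank-based vote are illustrative assumptions, not the authors' implementation.
```python
# Hypothetical sketch of heuristic-based weak labeling for pairwise comparisons.
# Each participant-supplied heuristic scores an item; heuristics are weighted by
# their stated rank and combined into a noisy pairwise label with no manual annotation.
from dataclasses import dataclass
from typing import Callable, Dict, List

Item = Dict[str, float]  # an item described by numeric features


@dataclass
class RankedHeuristic:
    name: str
    score: Callable[[Item], float]  # higher score = more preferred by this heuristic
    rank: int                       # 1 = most important to the participant


def pairwise_weak_label(a: Item, b: Item, heuristics: List[RankedHeuristic]) -> int:
    """Return +1 if the rank-weighted heuristics prefer `a` over `b`, else -1."""
    vote = 0.0
    for h in heuristics:
        weight = 1.0 / h.rank              # simple rank-based weighting (an assumption)
        diff = h.score(a) - h.score(b)
        if diff != 0:
            vote += weight * (1 if diff > 0 else -1)
    return 1 if vote >= 0 else -1


# Toy usage with two illustrative heuristics for an allocation-style decision.
heuristics = [
    RankedHeuristic("younger_first", lambda x: -x["age"], rank=1),
    RankedHeuristic("longer_wait_first", lambda x: x["wait_years"], rank=2),
]
candidate_a = {"age": 34.0, "wait_years": 2.0}
candidate_b = {"age": 61.0, "wait_years": 3.5}
print(pairwise_weak_label(candidate_a, candidate_b, heuristics))  # -> 1 (prefers candidate_a)
```
Labels produced this way could then train an ordinary preference or ranking model; the paper's actual aggregation of heuristics may differ from this simple weighted vote.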
Related papers
- Retrieval Augmentation via User Interest Clustering [57.63883506013693]
Industrial recommender systems are sensitive to the patterns of user-item engagement.
We propose a novel approach that efficiently constructs user interest and facilitates low computational cost inference.
Our approach has been deployed in multiple products at Meta, facilitating short-form video-related recommendation.
arXiv Detail & Related papers (2024-08-07T16:35:10Z)
- Virtual Category Learning: A Semi-Supervised Learning Method for Dense Prediction with Extremely Limited Labels [63.16824565919966]
This paper proposes to use confusing samples proactively without label correction.
A Virtual Category (VC) is assigned to each confusing sample in such a way that it can safely contribute to the model optimisation.
Our intriguing findings highlight the usage of VC learning in dense vision tasks.
arXiv Detail & Related papers (2023-12-02T16:23:52Z)
- Combining Self-labeling with Selective Sampling [2.0305676256390934]
This work combines self-labeling techniques with active learning in a selective sampling scenario.
We show that naive application of self-labeling can harm performance by introducing bias towards selected classes.
The proposed method matches current selective sampling methods or achieves better results.
arXiv Detail & Related papers (2023-01-11T11:58:45Z)
- Towards Fewer Labels: Support Pair Active Learning for Person Re-identification [5.076419064097734]
Supervised-learning-based person re-identification (re-id) requires a large amount of manually labeled data.
We propose a Support Pair Active Learning framework to lower the manual labeling cost for large-scale person re-identification.
arXiv Detail & Related papers (2022-04-21T10:10:18Z)
- Contrastive Learning for Fair Representations [50.95604482330149]
Trained classification models can unintentionally lead to biased representations and predictions.
Existing debiasing methods for classification models, such as adversarial training, are often expensive to train and difficult to optimise.
We propose a method for mitigating bias by incorporating contrastive learning, in which instances sharing the same class label are encouraged to have similar representations (see the illustrative sketch after this list).
arXiv Detail & Related papers (2021-09-22T10:47:51Z)
- Just Label What You Need: Fine-Grained Active Selection for Perception and Prediction through Partially Labeled Scenes [78.23907801786827]
We introduce generalizations that ensure that our approach is both cost-aware and allows for fine-grained selection of examples through partially labeled scenes.
Our experiments on a real-world, large-scale self-driving dataset suggest that fine-grained selection can improve the performance across perception, prediction, and downstream planning tasks.
arXiv Detail & Related papers (2021-04-08T17:57:41Z)
- Fair Meta-Learning For Few-Shot Classification [7.672769260569742]
A machine learning algorithm trained on biased data tends to make unfair predictions.
We propose a novel fair fast-adapted few-shot meta-learning approach that efficiently mitigates biases during meta-training.
We empirically demonstrate that our proposed approach efficiently mitigates biases in the model output and generalizes to unseen tasks in terms of both accuracy and fairness.
arXiv Detail & Related papers (2020-09-23T22:33:47Z)
- Partial Bandit and Semi-Bandit: Making the Most Out of Scarce Users' Feedback [62.997667081978825]
We present a novel approach for considering user feedback and evaluate it using three distinct strategies.
Despite the limited amount of feedback returned by users (as low as 20% of the total), our approach obtains results similar to those of state-of-the-art approaches.
arXiv Detail & Related papers (2020-09-16T07:32:51Z)
- How fair can we go in machine learning? Assessing the boundaries of fairness in decision trees [0.12891210250935145]
We present the first methodology that allows exploring the statistical limits of bias mitigation interventions.
We focus our study on decision tree classifiers since they are widely accepted in machine learning.
We conclude experimentally that our method can optimize decision tree models, making them fairer at a small cost in classification error.
arXiv Detail & Related papers (2020-06-22T16:28:26Z)
- Empowering Active Learning to Jointly Optimize System and User Demands [70.66168547821019]
We propose a new active learning approach that jointly optimizes the active learning system (training efficiently) and the user (receiving useful instances).
We study our approach in an educational application, which particularly benefits from this technique as the system needs to rapidly learn to predict the appropriateness of an exercise to a particular user.
We evaluate multiple learning strategies and user types with data from real users and find that our joint approach better satisfies both objectives when alternative methods lead to many unsuitable exercises for end users.
arXiv Detail & Related papers (2020-05-09T16:02:52Z)
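As referenced above, the "Contrastive Learning for Fair Representations" entry describes pulling instances that share a class label toward similar representations. Below is a minimal sketch of a supervised contrastive term in that spirit; the batch construction, temperature, and averaging are assumptions for illustration, not that paper's implementation.
```python
# Illustrative supervised contrastive loss: embeddings with the same label are
# encouraged to be similar, relative to all other embeddings in the batch.
import torch
import torch.nn.functional as F


def supervised_contrastive_loss(z: torch.Tensor, labels: torch.Tensor,
                                temperature: float = 0.1) -> torch.Tensor:
    """z: (N, d) embeddings; labels: (N,) integer class labels."""
    z = F.normalize(z, dim=1)                      # cosine similarity via dot products
    sim = z @ z.t() / temperature                  # (N, N) similarity logits
    eye = torch.eye(z.size(0), dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(eye, float("-inf"))      # exclude self-pairs
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    pos_mask = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~eye
    pos_counts = pos_mask.sum(dim=1).clamp(min=1)
    # Mean negative log-probability of same-class pairs, per anchor with positives.
    loss = -log_prob.masked_fill(~pos_mask, 0.0).sum(dim=1) / pos_counts
    return loss[pos_mask.any(dim=1)].mean()


# Toy usage: random embeddings for a batch of 8 items from 3 classes.
z = torch.randn(8, 16, requires_grad=True)
labels = torch.tensor([0, 0, 1, 1, 2, 2, 0, 1])
supervised_contrastive_loss(z, labels).backward()
```
A full debiasing objective would add further terms; only the same-class attraction described in the summary is shown here.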
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.