HumanAL: Calibrating Human Matching Beyond a Single Task
- URL: http://arxiv.org/abs/2205.03209v1
- Date: Fri, 6 May 2022 13:38:46 GMT
- Title: HumanAL: Calibrating Human Matching Beyond a Single Task
- Authors: Roee Shraga
- Abstract summary: We build a behavioral profile for human annotators which is used as a feature representation of the provided input.
We show that by utilizing black-box machine learning, we can take into account human behavior and calibrate their input to improve the labeling quality.
- Score: 6.599344783327053
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This work offers a novel view on the use of human input as labels,
acknowledging that humans may err. We build a behavioral profile for human
annotators which is used as a feature representation of the provided input. We
show that by utilizing black-box machine learning, we can take into account
human behavior and calibrate their input to improve the labeling quality. To
support our claims and provide a proof-of-concept, we experiment with three
different matching tasks, namely, schema matching, entity matching and text
matching. Our empirical evaluation suggests that the method can improve the
quality of gathered labels in multiple settings including cross-domain (across
different matching tasks).
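The pipeline the abstract describes (a behavioral profile per annotator used as a feature representation, with a black-box model calibrating the raw labels) might be sketched roughly as follows. The profile features, the accuracy-weighted vote, and all names here are illustrative assumptions standing in for the paper's learned calibration, not its actual method.

```python
# Illustrative sketch: calibrating noisy human labels using per-annotator
# behavioral profiles. The profile features and the simple reliability
# model below are placeholders for demonstration only.

def annotator_profile(history):
    """Build a behavioral profile from an annotator's past (label, truth) pairs."""
    if not history:
        return {"accuracy": 0.5, "n": 0}  # uninformed prior for a new annotator
    correct = sum(1 for label, truth in history if label == truth)
    return {"accuracy": correct / len(history), "n": len(history)}

def calibrate(labels, profiles):
    """Aggregate raw annotator labels, weighting each vote by the
    annotator's historical accuracy (a stand-in for a learned
    black-box calibration model)."""
    scores = {}
    for label, profile in zip(labels, profiles):
        scores[label] = scores.get(label, 0.0) + profile["accuracy"]
    return max(scores, key=scores.get)

# Three annotators label the same matching candidate; the reliable
# annotators outweigh the unreliable one.
profiles = [
    annotator_profile([(1, 1), (0, 0), (1, 1)]),  # perfectly accurate so far
    annotator_profile([(1, 0), (0, 1), (1, 0)]),  # always wrong so far
    annotator_profile([(1, 1), (0, 0), (0, 1)]),  # accuracy ~0.67
]
labels = [1, 0, 1]
print(calibrate(labels, profiles))  # → 1
```

The key design point, per the abstract, is that the annotator's behavior (not only the label itself) enters the model as features, so the same raw label can be trusted more or less depending on who produced it.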
Related papers
- Whose Preferences? Differences in Fairness Preferences and Their Impact on the Fairness of AI Utilizing Human Feedback [8.04095222893591]
We find significant gaps in fairness preferences depending on the race, age, political stance, educational level, and LGBTQ+ identity of annotators.
We also demonstrate that demographics mentioned in text have a strong influence on how users perceive individual fairness in moderation.
arXiv Detail & Related papers (2024-06-09T19:42:25Z)
- Disentangled Interaction Representation for One-Stage Human-Object Interaction Detection [70.96299509159981]
Human-Object Interaction (HOI) detection is a core task for human-centric image understanding.
Recent one-stage methods adopt a transformer decoder to collect image-wide cues that are useful for interaction prediction.
Traditional two-stage methods benefit significantly from their ability to compose interaction features in a disentangled and explainable manner.
arXiv Detail & Related papers (2023-12-04T08:02:59Z)
- AlignDiff: Aligning Diverse Human Preferences via Behavior-Customisable Diffusion Model [69.12623428463573]
AlignDiff is a novel framework that quantifies human preferences, covering their abstractness, and uses them to guide diffusion planning.
It can accurately match user-customized behaviors and efficiently switch from one to another.
We demonstrate its superior performance on preference matching, switching, and covering compared to other baselines.
arXiv Detail & Related papers (2023-10-03T13:53:08Z)
- Blind Image Quality Assessment via Vision-Language Correspondence: A Multitask Learning Perspective [93.56647950778357]
Blind image quality assessment (BIQA) predicts the human perception of image quality without any reference information.
We develop a general and automated multitask learning scheme for BIQA to exploit auxiliary knowledge from other tasks.
arXiv Detail & Related papers (2023-03-27T07:58:09Z)
- Human-Guided Fair Classification for Natural Language Processing [9.652938946631735]
We show how to leverage unsupervised style transfer and GPT-3's zero-shot capabilities to generate semantically similar sentences that differ along sensitive attributes.
We validate the generated pairs via an extensive crowdsourcing study, which confirms that many of these pairs align with human intuition about fairness in the context of toxicity classification.
arXiv Detail & Related papers (2022-12-20T10:46:40Z)
- PART: Pre-trained Authorship Representation Transformer [64.78260098263489]
Authors writing documents imprint identifying information within their texts: vocabulary, register, punctuation, misspellings, or even emoji usage.
Previous works use hand-crafted features or classification tasks to train their authorship models, leading to poor performance on out-of-domain authors.
We propose a contrastively trained model that learns authorship embeddings instead of semantics.
arXiv Detail & Related papers (2022-09-30T11:08:39Z)
- Human-Object Interaction Detection via Disentangled Transformer [63.46358684341105]
We present Disentangled Transformer, where both encoder and decoder are disentangled to facilitate learning of two sub-tasks.
Our method outperforms prior work on two public HOI benchmarks by a sizeable margin.
arXiv Detail & Related papers (2022-04-20T08:15:04Z)
- PoWareMatch: a Quality-aware Deep Learning Approach to Improve Human Schema Matching [20.110234122423172]
We examine a novel angle on the behavior of humans as matchers, studying match creation as a process.
We design PoWareMatch that makes use of a deep learning mechanism to calibrate and filter human matching decisions.
PoWareMatch predicts well the benefit of extending the match with an additional correspondence and generates high quality matches.
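The described idea of predicting the benefit of extending a match with an additional correspondence can be sketched as a greedy filter over a sequence of human decisions. The scoring function, thresholds, and example schemas below are hypothetical placeholders, not PoWareMatch's learned model.

```python
# Illustrative sketch of quality-aware filtering of human matching
# decisions: a hypothetical predicted-benefit score decides whether each
# human-proposed correspondence should extend the match.

def predicted_benefit(correspondence, match):
    """Placeholder for a learned model estimating the quality gain of
    adding this correspondence; here: annotator confidence minus a flat
    cost, with a penalty for reusing an already-matched target attribute."""
    src, tgt, confidence = correspondence
    reuse_penalty = 0.3 if any(t == tgt for _, t in match) else 0.0
    return confidence - 0.5 - reuse_penalty

def filter_match(human_decisions):
    """Process human decisions as a sequence (match creation as a process),
    keeping only those whose predicted benefit is positive."""
    match = []
    for src, tgt, conf in human_decisions:
        if predicted_benefit((src, tgt, conf), match) > 0:
            match.append((src, tgt))
    return match

decisions = [
    ("addr", "address", 0.9),   # kept: high confidence
    ("zip", "address", 0.7),    # dropped: target already matched
    ("name", "fullname", 0.4),  # dropped: low confidence
]
print(filter_match(decisions))  # → [('addr', 'address')]
```

Treating match creation as a sequential process, as the summary states, is what lets each decision be evaluated in the context of the match built so far rather than in isolation.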
arXiv Detail & Related papers (2021-09-15T14:24:56Z)
- Learning to Characterize Matching Experts [19.246576904646172]
We characterize human matching experts, those humans whose proposed correspondences can mostly be trusted to be valid.
We show that our approach can improve matching results by filtering out inexpert matchers.
arXiv Detail & Related papers (2020-12-02T14:16:38Z)
- ConsNet: Learning Consistency Graph for Zero-Shot Human-Object Interaction Detection [101.56529337489417]
We consider the problem of Human-Object Interaction (HOI) Detection, which aims to locate and recognize HOI instances in the form of <human, action, object> triplets in images.
We argue that multi-level consistencies among objects, actions and interactions are strong cues for generating semantic representations of rare or previously unseen HOIs.
Our model takes visual features of candidate human-object pairs and word embeddings of HOI labels as inputs, maps them into visual-semantic joint embedding space and obtains detection results by measuring their similarities.
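The scoring step described here (mapping visual features and label embeddings into a joint space and measuring similarity) can be sketched with a toy cosine-similarity example. The vectors, labels, and the omission of any learned projection are simplifying assumptions, not ConsNet's architecture.

```python
# Illustrative sketch of similarity scoring in a visual-semantic joint
# embedding space: features of a candidate human-object pair are compared
# against HOI label embeddings, and the best-scoring label wins.
import math

def cosine_similarity(u, v):
    """Cosine of the angle between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def detect(visual_feature, label_embeddings):
    """Score each HOI label by its similarity to the visual feature and
    return the best-scoring label (a stand-in for the learned mapping
    into the joint embedding space)."""
    return max(label_embeddings,
               key=lambda lbl: cosine_similarity(visual_feature,
                                                 label_embeddings[lbl]))

# Toy label embeddings; in the paper these come from word embeddings
# of the HOI labels.
label_embeddings = {
    "ride bicycle": [0.9, 0.1, 0.0],
    "hold cup":     [0.1, 0.8, 0.2],
}
visual = [0.8, 0.2, 0.1]  # toy feature for a candidate human-object pair
print(detect(visual, label_embeddings))  # → 'ride bicycle'
```

Because scoring reduces to similarity against label embeddings, labels never seen at training time can still be ranked, which is what enables the zero-shot setting in the title.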
arXiv Detail & Related papers (2020-08-14T09:11:18Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.