Rudolf Christoph Eucken at SemEval-2023 Task 4: An Ensemble Approach for
Identifying Human Values from Arguments
- URL: http://arxiv.org/abs/2305.05335v1
- Date: Tue, 9 May 2023 10:54:34 GMT
- Authors: Sougata Saha, Rohini Srihari
- Abstract summary: We present an ensemble approach for detecting human values from argument text.
Our ensemble comprises three models: (i) an entailment-based model that determines the human values based on their descriptions, (ii) a RoBERTa-based classifier that predicts the set of human values from an argument, and (iii) a RoBERTa-based classifier that predicts a reduced set of human values.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The subtle human values we acquire through life experiences govern our
thoughts and get reflected in our speech. They play an integral part in
capturing the essence of our individuality, making it imperative to identify
such values in computational systems that mimic human actions. Computational
argumentation is a field that deals with the argumentation capabilities of
humans and can benefit from identifying such values. Motivated by this, we
present an ensemble approach for detecting human values from argument text. Our
ensemble comprises three models: (i) an entailment-based model that determines
the human values based on their descriptions, (ii) a RoBERTa-based classifier
that predicts the set of human values from an argument, and (iii) a
RoBERTa-based classifier that predicts a reduced set of human values from an
argument. We experiment with different ways of combining the models and report
our results. Our best combination achieves an overall F1 score of 0.48 on the
main test set.
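The abstract states that three models' predictions are combined but does not specify the combination rule. A minimal sketch of one plausible strategy, a majority vote over the three models' multi-label outputs, is shown below; the value names, function names, and the voting threshold are illustrative assumptions, not the paper's actual method.

```python
# Hypothetical sketch of combining three value classifiers by majority vote.
# The set of value labels and the threshold are assumptions for illustration;
# the paper reports trying several combination strategies without detailing them here.

VALUES = ["Self-direction", "Stimulation", "Hedonism", "Achievement", "Security"]

def combine_majority(pred_entailment, pred_full, pred_reduced, threshold=2):
    """Label a value as present if at least `threshold` of the three
    models' predicted label sets contain it (a simple majority vote)."""
    combined = set()
    for value in VALUES:
        votes = sum([
            value in pred_entailment,
            value in pred_full,
            value in pred_reduced,
        ])
        if votes >= threshold:
            combined.add(value)
    return combined

# Example: two of the three models agree on "Security", so only it survives the vote.
result = combine_majority({"Security", "Hedonism"}, {"Security"}, {"Achievement"})
print(result)  # -> {'Security'}
```

Lowering `threshold` to 1 turns the vote into a union of all predictions (higher recall), while raising it to 3 requires unanimous agreement (higher precision), which is the usual trade-off such combination experiments explore.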
Related papers
- It HAS to be Subjective: Human Annotator Simulation via Zero-shot
Density Estimation [15.8765167340819]
Human annotator simulation (HAS) serves as a cost-effective substitute for human evaluation such as data annotation and system assessment.
Human perception and behaviour during human evaluation exhibit inherent variability due to diverse cognitive processes and subjective interpretations.
This paper introduces a novel meta-learning framework that treats HAS as a zero-shot density estimation problem.
arXiv Detail & Related papers (2023-09-30T20:54:59Z)
- Value Kaleidoscope: Engaging AI with Pluralistic Human Values, Rights, and Duties [68.66719970507273]
Value pluralism is the view that multiple correct values may be held in tension with one another.
As statistical learners, AI systems fit to averages by default, washing out potentially irreducible value conflicts.
We introduce ValuePrism, a large-scale dataset of 218k values, rights, and duties connected to 31k human-written situations.
arXiv Detail & Related papers (2023-09-02T01:24:59Z)
- Towards Objective Evaluation of Socially-Situated Conversational Robots:
Assessing Human-Likeness through Multimodal User Behaviors [26.003947740875482]
This paper focuses on assessing the human-likeness of the robot as the primary evaluation metric.
Our approach aims to indirectly evaluate the robot's human-likeness based on observable user behaviors, thus enhancing objectivity.
arXiv Detail & Related papers (2023-08-21T20:21:07Z)
- Epicurus at SemEval-2023 Task 4: Improving Prediction of Human Values
behind Arguments by Leveraging Their Definitions [5.343406649012618]
We describe our experiments for SemEval-2023 Task 4 on the identification of human values behind arguments.
Because human values are subjective concepts which require precise definitions, we hypothesize that incorporating the definitions of human values during model training can yield better prediction performance.
arXiv Detail & Related papers (2023-02-27T16:23:11Z)
- Revisiting the Gold Standard: Grounding Summarization Evaluation with
Robust Human Evaluation [136.16507050034755]
Existing human evaluation studies for summarization either exhibit a low inter-annotator agreement or have insufficient scale.
We propose a modified summarization salience protocol, Atomic Content Units (ACUs), which is based on fine-grained semantic units.
We curate the Robust Summarization Evaluation (RoSE) benchmark, a large human evaluation dataset consisting of 22,000 summary-level annotations over 28 top-performing systems.
arXiv Detail & Related papers (2022-12-15T17:26:05Z)
- Enabling Classifiers to Make Judgements Explicitly Aligned with Human
Values [73.82043713141142]
Many NLP classification tasks, such as sexism/racism detection or toxicity detection, are based on human values.
We introduce a framework for value-aligned classification that performs prediction based on explicitly written human values in the command.
arXiv Detail & Related papers (2022-10-14T09:10:49Z)
- Training Language Models with Natural Language Feedback [51.36137482891037]
We learn from language feedback on model outputs using a three-step learning algorithm.
In synthetic experiments, we first evaluate whether language models accurately incorporate feedback to produce refinements.
Using only 100 samples of human-written feedback, our learning algorithm finetunes a GPT-3 model to roughly human-level summarization.
arXiv Detail & Related papers (2022-04-29T15:06:58Z)
- Dynamic Human Evaluation for Relative Model Comparisons [8.843915018287476]
We present a dynamic approach to measure the required number of human annotations when evaluating generated outputs in relative comparison settings.
We propose an agent-based framework of human evaluation to assess multiple labelling strategies and methods to decide the better model in a simulation and a crowdsourcing case study.
arXiv Detail & Related papers (2021-12-15T11:32:13Z)
- Hierarchical Human Parsing with Typed Part-Relation Reasoning [179.64978033077222]
How to model human structures is the central theme in this task.
We seek to simultaneously exploit the representational capacity of deep graph networks and the hierarchical human structures.
arXiv Detail & Related papers (2020-03-10T16:45:41Z)
- Learning Compositional Neural Information Fusion for Human Parsing [181.48380078517525]
We formulate the approach as a neural information fusion framework.
Our model assembles the information from three inference processes over the hierarchy.
The whole model is end-to-end differentiable, explicitly modeling information flows and structures.
arXiv Detail & Related papers (2020-01-19T10:35:14Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.