Effective Approach to Develop a Sentiment Annotator For Legal Domain in
a Low Resource Setting
- URL: http://arxiv.org/abs/2011.00318v1
- Date: Sat, 31 Oct 2020 17:12:32 GMT
- Title: Effective Approach to Develop a Sentiment Annotator For Legal Domain in
a Low Resource Setting
- Authors: Gathika Ratnayaka, Nisansa de Silva, Amal Shehan Perera, Ramesh
Pathirana
- Abstract summary: Analyzing the sentiments of legal opinions available in Legal Opinion Texts can facilitate several use cases such as legal judgement prediction, contradictory statements identification and party-based sentiment analysis.
The task of developing a legal domain-specific sentiment annotator is challenging due to resource constraints such as the lack of domain-specific labelled data and domain expertise.
In this study, we propose novel techniques that can be used to develop a sentiment annotator for the legal domain while minimizing the need for manual annotations of data.
- Score: 0.41783829807634776
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Analyzing the sentiments of legal opinions available in Legal Opinion Texts
can facilitate several use cases such as legal judgement prediction,
contradictory statements identification and party-based sentiment analysis.
However, the task of developing a legal domain-specific sentiment annotator is
challenging due to resource constraints such as the lack of domain-specific
labelled data and domain expertise. In this study, we propose novel techniques
that can be used to develop a sentiment annotator for the legal domain while
minimizing the need for manual annotations of data.
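The abstract does not detail the proposed techniques, so the following is only a minimal, hypothetical sketch of one common low-resource strategy: fitting a general-purpose sentiment classifier and up-weighting a handful of manually annotated legal sentences. The example sentences, labels, and weights are invented for illustration and are not from the paper.
```python
# Hypothetical low-resource adaptation sketch: a general-domain sentiment
# classifier is re-fit with a small, up-weighted set of legal-domain sentences.
# All data below is invented for illustration; it is NOT from the paper.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

general_texts = [
    "I loved this movie, it was wonderful",
    "Terrible service, I want a refund",
    "What a fantastic experience",
    "This product is awful and broke immediately",
]
general_labels = [1, 0, 1, 0]          # 1 = positive, 0 = negative

# A handful of manually annotated legal sentences (the "low resource" part).
legal_texts = [
    "The court finds the defendant's conduct egregious and in bad faith",
    "The appellant's claim is well founded and supported by the record",
]
legal_labels = [0, 1]

texts = general_texts + legal_texts
labels = np.array(general_labels + legal_labels)
# Up-weight the scarce in-domain examples so they shape the decision boundary.
weights = np.array([1.0] * len(general_texts) + [5.0] * len(legal_texts))

vectorizer = TfidfVectorizer(ngram_range=(1, 2), min_df=1)
X = vectorizer.fit_transform(texts)
clf = LogisticRegression(max_iter=1000)
clf.fit(X, labels, sample_weight=weights)

test = ["The tribunal rejects the motion as frivolous"]
print(clf.predict(vectorizer.transform(test)))
```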
Related papers
- DELTA: Pre-train a Discriminative Encoder for Legal Case Retrieval via Structural Word Alignment [55.91429725404988]
We introduce DELTA, a discriminative model designed for legal case retrieval.
We leverage shallow decoders to create information bottlenecks, aiming to enhance the representation ability.
Our approach can outperform existing state-of-the-art methods in legal case retrieval.
arXiv Detail & Related papers (2024-03-27T10:40:14Z)
- Enhancing Pre-Trained Language Models with Sentence Position Embeddings for Rhetorical Roles Recognition in Legal Opinions [0.16385815610837165]
The size of legal opinions continues to grow, making it increasingly challenging to develop a model that can accurately predict their rhetorical roles.
We propose a novel model architecture for automatically predicting rhetorical roles using pre-trained language models (PLMs) enhanced with knowledge of sentence position information.
Based on an annotated corpus from the LegalEval@SemEval2023 competition, we demonstrate that our approach requires fewer parameters, resulting in lower computational costs.
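As a loose illustration of the idea (not the paper's exact architecture), the sketch below fuses a sentence vector, assumed to come from a frozen pre-trained language model, with a learned embedding of the sentence's position in the opinion before classifying its rhetorical role. All dimensions and the number of roles are placeholders.
```python
# Hypothetical sketch: combine a PLM sentence vector with a learned
# sentence-position embedding before predicting a rhetorical role.
# Sizes below are illustrative placeholders, not the paper's settings.
import torch
import torch.nn as nn

class RoleClassifierWithPosition(nn.Module):
    def __init__(self, sent_dim=768, max_sentences=512, pos_dim=64, num_roles=7):
        super().__init__()
        self.pos_emb = nn.Embedding(max_sentences, pos_dim)
        self.classifier = nn.Sequential(
            nn.Linear(sent_dim + pos_dim, 256),
            nn.ReLU(),
            nn.Linear(256, num_roles),
        )

    def forward(self, sent_vecs, positions):
        # sent_vecs: (batch, sent_dim) from a frozen PLM encoder
        # positions: (batch,) index of each sentence within its document
        fused = torch.cat([sent_vecs, self.pos_emb(positions)], dim=-1)
        return self.classifier(fused)

model = RoleClassifierWithPosition()
logits = model(torch.randn(4, 768), torch.tensor([0, 1, 2, 3]))
print(logits.shape)  # torch.Size([4, 7])
```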
arXiv Detail & Related papers (2023-10-08T20:33:55Z)
- Datasets for Portuguese Legal Semantic Textual Similarity: Comparing weak supervision and an annotation process approaches [1.9244230111838758]
The Brazilian National Council of Justice has established, in Resolution 469/2022, formal guidance for document and process digitalization.
This article contributes four datasets from the legal domain: two with documents and metadata but no labels, and two labeled datasets aimed at textual similarity tasks.
The analysis of ground truth labels highlights that semantic analysis of domain text can be challenging even for domain experts.
arXiv Detail & Related papers (2023-05-29T18:27:10Z)
- Unlocking Practical Applications in Legal Domain: Evaluation of GPT for Zero-Shot Semantic Annotation of Legal Texts [0.0]
We evaluate the capability of a state-of-the-art generative pre-trained transformer (GPT) model to perform semantic annotation of short text snippets.
We found that the GPT model performs surprisingly well in zero-shot settings on diverse types of documents.
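The paper's prompts and evaluation protocol are not reproduced here; the sketch below only illustrates the general shape of zero-shot prompt-based annotation. The label set, prompt wording, and the call_llm callable are hypothetical stand-ins for any GPT-style client, not the paper's setup.
```python
# Hypothetical zero-shot annotation sketch. `call_llm` stands in for any
# GPT-style completion client; labels and prompt wording are illustrative.
from typing import Callable, List

def build_prompt(snippet: str, labels: List[str]) -> str:
    return (
        "Classify the following legal text snippet into exactly one of these "
        f"categories: {', '.join(labels)}.\n"
        f"Snippet: \"{snippet}\"\n"
        "Answer with the category name only."
    )

def annotate_zero_shot(snippet: str, labels: List[str],
                       call_llm: Callable[[str], str]) -> str:
    raw = call_llm(build_prompt(snippet, labels)).strip().lower()
    # Fall back to the first label if the model's reply is not in the label set.
    for label in labels:
        if label.lower() in raw:
            return label
    return labels[0]

# Usage with a dummy model that always answers "Finding":
labels = ["Fact", "Argument", "Finding", "Statute"]
print(annotate_zero_shot("The court held that the contract was void.",
                         labels, lambda prompt: "Finding"))
```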
arXiv Detail & Related papers (2023-05-08T01:55:53Z)
- SAILER: Structure-aware Pre-trained Language Model for Legal Case Retrieval [75.05173891207214]
Legal case retrieval plays a core role in the intelligent legal system.
Most existing language models have difficulty understanding the long-distance dependencies between different structures.
We propose a new Structure-Aware pre-traIned language model for LEgal case Retrieval.
arXiv Detail & Related papers (2023-04-22T10:47:01Z) - Agent-Specific Deontic Modality Detection in Legal Language [19.94131001761646]
LEXDEMOD is a corpus of English contracts annotated with deontic modality expressed with respect to a contracting party or agent.
We benchmark this dataset on two tasks: (i) agent-specific multi-label deontic modality classification, and (ii) agent-specific deontic modality and trigger span detection.
Experiments show that the linguistic diversity of modal expressions in LEXDEMOD generalizes reasonably from lease to employment and rental agreements.
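As a rough, hypothetical sketch of task (i), the snippet below pairs each clause with the agent it is read against and trains a multi-label classifier over deontic labels. The clauses, agents, and label set are invented and are not drawn from LEXDEMOD.
```python
# Hypothetical multi-label sketch: predict deontic modalities (obligation,
# permission, prohibition) for a clause with respect to a given agent.
# All example data is invented for illustration; output depends on the toy data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier
from sklearn.preprocessing import MultiLabelBinarizer

# Each input pairs a clause with the agent it is read against.
samples = [
    ("Tenant shall pay rent on the first of each month", "tenant"),
    ("Landlord may enter the premises with 24 hours notice", "landlord"),
    ("Tenant shall not sublet the apartment", "tenant"),
]
labels = [["obligation"], ["permission"], ["prohibition"]]

texts = [f"[AGENT={agent}] {clause}" for clause, agent in samples]
mlb = MultiLabelBinarizer()
Y = mlb.fit_transform(labels)

vec = TfidfVectorizer(ngram_range=(1, 2))
X = vec.fit_transform(texts)
clf = OneVsRestClassifier(LogisticRegression(max_iter=1000)).fit(X, Y)

test = vec.transform(["[AGENT=tenant] Tenant shall maintain the property"])
print(mlb.inverse_transform(clf.predict(test)))
```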
arXiv Detail & Related papers (2022-11-23T07:32:23Z)
- Domain-Agnostic Prior for Transfer Semantic Segmentation [197.9378107222422]
Unsupervised domain adaptation (UDA) is an important topic in the computer vision community.
We present a mechanism that regularizes cross-domain representation learning with a domain-agnostic prior (DAP).
Our research reveals that UDA benefits much from better proxies, possibly from other data modalities.
arXiv Detail & Related papers (2022-04-06T09:13:25Z)
- Unsupervised Domain Generalization for Person Re-identification: A Domain-specific Adaptive Framework [50.88463458896428]
Domain generalization (DG) has attracted much attention in person re-identification (ReID) recently.
Existing methods usually need the source domains to be labeled, which could be a significant burden for practical ReID tasks.
We propose a simple and efficient domain-specific adaptive framework, and realize it with an adaptive normalization module.
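The framework itself is not described in detail here; as a generic illustration of an adaptive normalization module, the sketch below keeps separate normalization statistics per domain and selects them by a domain index. This is a common pattern, not the paper's exact module.
```python
# Hypothetical per-domain normalization sketch: one BatchNorm per domain,
# selected at forward time. A generic pattern, not the paper's exact design.
import torch
import torch.nn as nn

class DomainAdaptiveNorm(nn.Module):
    def __init__(self, num_features, num_domains):
        super().__init__()
        self.norms = nn.ModuleList(
            nn.BatchNorm1d(num_features) for _ in range(num_domains)
        )

    def forward(self, x, domain_id: int):
        # Each domain keeps its own running mean/variance and affine parameters.
        return self.norms[domain_id](x)

layer = DomainAdaptiveNorm(num_features=128, num_domains=3)
feats = torch.randn(16, 128)
print(layer(feats, domain_id=1).shape)  # torch.Size([16, 128])
```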
arXiv Detail & Related papers (2021-11-30T02:35:51Z)
- Important Sentence Identification in Legal Cases Using Multi-Class Classification [0.1499944454332829]
This research explores the usage of sentence embeddings for multi-class classification to identify important sentences in a legal case.
A task-specific loss function is defined to improve the accuracy that is otherwise limited by the straightforward use of categorical cross-entropy loss.
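The summary does not specify the loss, so the sketch below only shows the generic pattern such a design follows: replacing plain categorical cross entropy with a class-weighted variant that emphasizes the rarer, more important sentence classes. The weights and class count are placeholders, not the paper's loss.
```python
# Hypothetical sketch: weighted cross entropy as a stand-in for a task-specific
# loss that counteracts class imbalance. Weights and sizes are placeholders.
import torch
import torch.nn as nn

num_classes = 3                       # e.g. not-important / important / critical
# Rarer, more important classes get larger weights than plain cross entropy gives.
class_weights = torch.tensor([0.5, 2.0, 4.0])
criterion = nn.CrossEntropyLoss(weight=class_weights)

logits = torch.randn(8, num_classes, requires_grad=True)  # classifier output over sentence embeddings
targets = torch.randint(0, num_classes, (8,))
loss = criterion(logits, targets)
loss.backward()
print(float(loss))
```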
arXiv Detail & Related papers (2021-11-10T14:58:29Z)
- Domain Adaptation for Semantic Segmentation via Patch-Wise Contrastive Learning [62.7588467386166]
We leverage contrastive learning to bridge the domain gap by aligning the features of structurally similar label patches across domains.
Our approach consistently outperforms state-of-the-art unsupervised and semi-supervised methods on two challenging domain adaptive segmentation tasks.
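As a loose sketch of the patch-wise contrastive idea, the snippet below computes an InfoNCE-style loss that pulls each source patch feature toward its matched target patch and pushes it away from the others. The feature shapes, and the assumption that row i of each tensor is a structural match, are placeholders rather than the paper's matching procedure.
```python
# Hypothetical patch-wise contrastive sketch: an InfoNCE loss where each source
# patch feature is pulled toward its matched target patch (same row index here
# stands in for "structurally similar") and pushed away from the rest.
import torch
import torch.nn.functional as F

def patch_contrastive_loss(src_feats, tgt_feats, temperature=0.1):
    # src_feats, tgt_feats: (num_patches, dim); row i of each is an assumed match.
    src = F.normalize(src_feats, dim=1)
    tgt = F.normalize(tgt_feats, dim=1)
    logits = src @ tgt.t() / temperature          # (num_patches, num_patches)
    targets = torch.arange(src.size(0))           # positives lie on the diagonal
    return F.cross_entropy(logits, targets)

loss = patch_contrastive_loss(torch.randn(32, 256), torch.randn(32, 256))
print(float(loss))
```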
arXiv Detail & Related papers (2021-04-22T13:39:12Z)
- Interventional Domain Adaptation [81.0692660794765]
Domain adaptation (DA) aims to transfer discriminative features learned from source domain to target domain.
Standard domain-invariance learning suffers from spurious correlations and incorrectly transfers the source-specifics.
We create counterfactual features that distinguish the domain-specifics from domain-sharable part.
arXiv Detail & Related papers (2020-11-07T09:53:13Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.