SEDA: A Self-Adapted Entity-Centric Data Augmentation for Boosting Grid-based Discontinuous NER Models
- URL: http://arxiv.org/abs/2511.20143v1
- Date: Tue, 25 Nov 2025 10:06:50 GMT
- Title: SEDA: A Self-Adapted Entity-Centric Data Augmentation for Boosting Grid-based Discontinuous NER Models
- Authors: Wen-Fang Su, Hsiao-Wei Chou, Wen-Yang Lin
- Abstract summary: We aim to address the segmentation and omission issues associated with discontinuous entities. Recent studies have shown that grid-tagging methods are effective for information extraction. We integrate image data augmentation techniques, such as cropping, scaling, and padding, into grid-based models to enhance their ability to recognize discontinuous entities.
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Named Entity Recognition (NER) is a critical task in natural language processing, yet it remains particularly challenging for discontinuous entities. The primary difficulty lies in text segmentation, as traditional methods often missegment or entirely miss cross-sentence discontinuous entities, significantly affecting recognition accuracy. Therefore, we aim to address the segmentation and omission issues associated with such entities. Recent studies have shown that grid-tagging methods are effective for information extraction due to their flexible tagging schemes and robust architectures. Building on this, we integrate image data augmentation techniques, such as cropping, scaling, and padding, into grid-based models to enhance their ability to recognize discontinuous entities and handle segmentation challenges. Experimental results demonstrate that traditional segmentation methods often fail to capture cross-sentence discontinuous entities, leading to decreased performance. In contrast, our augmented grid models achieve notable improvements. Evaluations on the CADEC, ShARe13, and ShARe14 datasets show F1 score gains of 1-2.5% overall and 3.7-8.4% for discontinuous entities, confirming the effectiveness of our approach.
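The augmentation idea from the abstract can be illustrated with a minimal NumPy sketch on a word-pair tag grid, the matrix used by grid-tagging NER models in which cell (i, j) labels the relation between tokens i and j. The function names and toy grid below are illustrative assumptions, not the paper's implementation; they merely show how cropping, scaling, and padding carry over from images to tag grids:

```python
import numpy as np

def crop_grid(grid, start, end):
    """Crop a square word-pair tag grid to the token span [start, end)."""
    return grid[start:end, start:end]

def pad_grid(grid, target_len, pad_label=0):
    """Pad a cropped tag grid back to target_len tokens with a null label."""
    n = grid.shape[0]
    out = np.full((target_len, target_len), pad_label, dtype=grid.dtype)
    out[:n, :n] = grid
    return out

def scale_grid(grid, factor):
    """Nearest-neighbour rescaling of the tag grid (a token-level 'zoom')."""
    n = grid.shape[0]
    m = max(1, int(round(n * factor)))
    idx = np.minimum((np.arange(m) / factor).astype(int), n - 1)
    return grid[np.ix_(idx, idx)]

# Toy 4-token sentence: cell (i, j) holds the tag linking tokens i and j.
grid = np.arange(16).reshape(4, 4)
aug = pad_grid(crop_grid(grid, 1, 3), 4)  # crop a 2-token window, pad back
```

Cropping and padding operate on both axes at once because the grid is indexed by token pairs, so a single token-span crop removes all tag cells touching the dropped tokens.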
Related papers
- AI-Based Culvert-Sewer Inspection [0.0]
Culverts and sewer pipes are critical components of drainage systems, and their failure can lead to serious risks to public safety and the environment. This thesis proposes three methods to significantly enhance defect segmentation and handle data scarcity. ForTRESS is a novel architecture that combines depthwise separable convolutions, adaptive Kolmogorov-Arnold Networks (KAN), and multi-scale attention mechanisms.
arXiv Detail & Related papers (2026-01-21T16:33:33Z) - Explainable Human-in-the-Loop Segmentation via Critic Feedback Signals [0.20999222360659608]
We propose a human-in-the-loop interactive framework that enables interventional learning through targeted human corrections of segmentation outputs. We demonstrate that our framework improves segmentation accuracy by up to 9 mIoU points on challenging cubemap data. This work provides a practical framework for researchers and practitioners seeking to build segmentation systems that are accurate, robust to dataset biases, data-efficient, and adaptable to real-world domains such as urban climate monitoring and autonomous driving.
arXiv Detail & Related papers (2025-10-11T01:16:41Z) - Exploring Generalized Gait Recognition: Reducing Redundancy and Noise within Indoor and Outdoor Datasets [24.242460774158463]
Generalized gait recognition aims to achieve robust performance across diverse domains. Mixed-dataset training is widely used to enhance generalization. We propose a unified framework that systematically improves cross-domain gait recognition.
arXiv Detail & Related papers (2025-05-21T06:46:09Z) - When Segmentation Meets Hyperspectral Image: New Paradigm for Hyperspectral Image Classification [4.179738334055251]
Hyperspectral image (HSI) classification is a cornerstone of remote sensing, enabling precise material and land-cover identification through rich spectral information. While deep learning has driven significant progress in this task, small patch-based classifiers, which account for over 90% of the progress, face limitations. We propose a novel paradigm and baseline, HSIseg, for HSI classification that leverages segmentation techniques combined with a novel Dynamic Shifted Regional Transformer (DSRT) to overcome these challenges.
arXiv Detail & Related papers (2025-02-18T05:04:29Z) - Optimizing against Infeasible Inclusions from Data for Semantic Segmentation through Morphology [58.17907376475596]
State-of-the-art semantic segmentation models are typically optimized in a data-driven fashion. InSeIn extracts explicit inclusion constraints that govern spatial class relations from the semantic segmentation training set at hand. It then enforces a morphological yet differentiable loss that penalizes violations of these constraints during training to promote prediction feasibility.
arXiv Detail & Related papers (2024-08-26T22:39:08Z) - Learning to Generate Training Datasets for Robust Semantic Segmentation [37.9308918593436]
We propose a novel approach to improve the robustness of semantic segmentation techniques.
We design Robusta, a novel conditional generative adversarial network to generate realistic and plausible perturbed images.
Our results suggest that this approach could be valuable in safety-critical applications.
arXiv Detail & Related papers (2023-08-01T10:02:26Z) - Cluster-level pseudo-labelling for source-free cross-domain facial expression recognition [94.56304526014875]
We propose the first Source-Free Unsupervised Domain Adaptation (SFUDA) method for Facial Expression Recognition (FER).
Our method exploits self-supervised pretraining to learn good feature representations from the target data.
We validate the effectiveness of our method in four adaptation setups, proving that it consistently outperforms existing SFUDA methods when applied to FER.
arXiv Detail & Related papers (2022-10-11T08:24:50Z) - Adversarial Dual-Student with Differentiable Spatial Warping for Semi-Supervised Semantic Segmentation [70.2166826794421]
We propose a differentiable geometric warping to conduct unsupervised data augmentation.
We also propose a novel adversarial dual-student framework to improve the Mean-Teacher.
Our solution significantly improves the performance and state-of-the-art results are achieved on both datasets.
arXiv Detail & Related papers (2022-03-05T17:36:17Z) - A Simple Baseline for Semi-supervised Semantic Segmentation with Strong Data Augmentation [74.8791451327354]
We propose a simple yet effective semi-supervised learning framework for semantic segmentation.
A set of simple design and training techniques can collectively improve the performance of semi-supervised semantic segmentation significantly.
Our method achieves state-of-the-art results in the semi-supervised settings on the Cityscapes and Pascal VOC datasets.
arXiv Detail & Related papers (2021-04-15T06:01:39Z) - Adversarial Feature Augmentation and Normalization for Visual Recognition [109.6834687220478]
Recent advances in computer vision take advantage of adversarial data augmentation to ameliorate the generalization ability of classification models.
Here, we present an effective and efficient alternative that advocates adversarial augmentation on intermediate feature embeddings.
We validate the proposed approach across diverse visual recognition tasks with representative backbone networks.
arXiv Detail & Related papers (2021-03-22T20:36:34Z) - Context Decoupling Augmentation for Weakly Supervised Semantic Segmentation [53.49821324597837]
Weakly supervised semantic segmentation is a challenging problem that has been deeply studied in recent years.
We present a Context Decoupling Augmentation (CDA) method to change the inherent context in which the objects appear.
To validate the effectiveness of the proposed method, extensive experiments on PASCAL VOC 2012 dataset with several alternative network architectures demonstrate that CDA can boost various popular WSSS methods to the new state-of-the-art by a large margin.
arXiv Detail & Related papers (2021-03-02T15:05:09Z) - Generative Partial Visual-Tactile Fused Object Clustering [81.17645983141773]
We propose a Generative Partial Visual-Tactile Fused (i.e., GPVTF) framework for object clustering.
A conditional cross-modal clustering generative adversarial network is then developed to synthesize one modality conditioning on the other modality.
To this end, two pseudo-label based KL-divergence losses are employed to update the corresponding modality-specific encoders.
arXiv Detail & Related papers (2020-12-28T02:37:03Z)
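Among the entries above, InSeIn describes enforcing spatial inclusion constraints through a morphological yet differentiable loss. A minimal NumPy sketch of that idea follows; the function names, the soft-max toy input, and the Python-loop dilation are illustrative assumptions rather than the paper's implementation (a real version would use framework max-pooling so the penalty stays differentiable):

```python
import numpy as np

def dilate(mask, k=3):
    """Morphological dilation of a soft [0, 1] mask via a sliding-window max."""
    pad = k // 2
    p = np.pad(mask, pad, mode="edge")
    h, w = mask.shape
    out = np.empty_like(mask)
    for i in range(h):
        for j in range(w):
            out[i, j] = p[i:i + k, j:j + k].max()
    return out

def inclusion_penalty(probs, inner, outer, k=3):
    """Mean probability mass of class `inner` that falls outside the
    dilated support of class `outer` (a spatial inclusion constraint)."""
    support = dilate(probs[outer], k)
    return float((probs[inner] * (1.0 - support)).mean())

# Toy per-pixel class probabilities for 3 classes on a 16x16 map.
rng = np.random.default_rng(0)
logits = rng.standard_normal((3, 16, 16))
probs = np.exp(logits) / np.exp(logits).sum(axis=0, keepdims=True)
loss = inclusion_penalty(probs, inner=0, outer=1)
```

The penalty is zero exactly when every pixel assigned to the inner class lies within the dilated footprint of the outer class, which is one way to encode a learned "A only occurs near B" constraint.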
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this content (including all information) and is not responsible for any consequences of its use.