AILS-NTUA at SemEval-2026 Task 3: Efficient Dimensional Aspect-Based Sentiment Analysis
- URL: http://arxiv.org/abs/2603.04933v1
- Date: Thu, 05 Mar 2026 08:30:59 GMT
- Title: AILS-NTUA at SemEval-2026 Task 3: Efficient Dimensional Aspect-Based Sentiment Analysis
- Authors: Stavros Gazetas, Giorgos Filandrianos, Maria Lymperaiou, Paraskevi Tzouveli, Athanasios Voulodimos, Giorgos Stamou
- Abstract summary: We present the AILS-NTUA system for Track-A of SemEval-2026 Task 3 on Dimensional Aspect-Based Sentiment Analysis (DimABSA). Our methodology combines fine-tuning of language-appropriate backbones for continuous aspect-level sentiment prediction with language-specific instruction tuning of large language models using LoRA. This unified yet task-adaptive design emphasizes parameter-efficient specialization across languages and domains, enabling reduced training and inference requirements while maintaining strong effectiveness.
- Score: 13.528308058170479
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: In this paper, we present the AILS-NTUA system for Track-A of SemEval-2026 Task 3 on Dimensional Aspect-Based Sentiment Analysis (DimABSA), which encompasses three complementary problems: Dimensional Aspect Sentiment Regression (DimASR), Dimensional Aspect Sentiment Triplet Extraction (DimASTE), and Dimensional Aspect Sentiment Quadruplet Prediction (DimASQP) within a multilingual and multi-domain framework. Our methodology combines fine-tuning of language-appropriate encoder backbones for continuous aspect-level sentiment prediction with language-specific instruction tuning of large language models using LoRA for structured triplet and quadruplet extraction. This unified yet task-adaptive design emphasizes parameter-efficient specialization across languages and domains, enabling reduced training and inference requirements while maintaining strong effectiveness. Empirical results demonstrate that the proposed models achieve competitive performance and consistently surpass the provided baselines across most evaluation settings.
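The parameter-efficient specialization the abstract describes rests on LoRA, which freezes the pretrained weights and trains only a low-rank additive update. A minimal sketch of that update, in plain Python with no deep-learning framework; all class names, dimensions, and initial values here are illustrative, not taken from the paper:

```python
def matmul(a, b):
    """Multiply two matrices represented as lists of rows."""
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*b)]
            for row in a]

class LoRALinear:
    """A frozen weight matrix W plus a trainable low-rank update (alpha/r) * B @ A."""

    def __init__(self, weight, rank, alpha):
        self.weight = weight                       # frozen pretrained weights, out x in
        out_dim, in_dim = len(weight), len(weight[0])
        self.scale = alpha / rank
        # In practice A starts small/random and B starts at zero, so the
        # adapted layer is initially identical to the pretrained one.
        self.A = [[0.01] * in_dim for _ in range(rank)]   # r x in, trainable
        self.B = [[0.0] * rank for _ in range(out_dim)]   # out x r, trainable

    def effective_weight(self):
        delta = matmul(self.B, self.A)             # low-rank update, out x in
        return [[w + self.scale * d for w, d in zip(wr, dr)]
                for wr, dr in zip(self.weight, delta)]

    def forward(self, x):
        """Apply the adapted layer to an input vector x."""
        return [sum(w * xi for w, xi in zip(row, x))
                for row in self.effective_weight()]
```

Only A and B (rank x in + out x rank values) are trained per language, which is why per-language adapters stay cheap relative to full fine-tuning of the backbone.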
Related papers
- PromotionGo at SemEval-2025 Task 11: A Feature-Centric Framework for Cross-Lingual Multi-Emotion Detection in Short Texts [1.210852962855694]
This paper presents our system for SemEval 2025 Task 11: Bridging the Gap in Text-Based Emotion Detection. We propose a feature-centric framework that dynamically adapts document representations and learning algorithms to optimize language-specific performance.
arXiv Detail & Related papers (2025-07-11T11:21:18Z)
- Enhancing Spatial Reasoning in Multimodal Large Language Models through Reasoning-based Segmentation [50.81551581148339]
We introduce Relevant Reasoning (R$^2$S), a reasoning-based segmentation framework. We also introduce 3D ReasonSeg, a reasoning-based segmentation dataset. Experiments demonstrate that R$^2$S and 3D ReasonSeg effectively endow 3D point cloud perception with stronger spatial reasoning capabilities.
arXiv Detail & Related papers (2025-06-29T06:58:08Z)
- Dependency Structure Augmented Contextual Scoping Framework for Multimodal Aspect-Based Sentiment Analysis [9.240806100782718]
DASCO is a fine-grained scope-oriented framework for sentiment analysis. It enhances aspect-level sentiment reasoning by leveraging dependency parsing trees. Experiments on two benchmark datasets demonstrate that DASCO achieves state-of-the-art performance in MABSA.
arXiv Detail & Related papers (2025-04-15T16:05:09Z)
- Deep Content Understanding Toward Entity and Aspect Target Sentiment Analysis on Foundation Models [0.8602553195689513]
Entity-Aspect Sentiment Triplet Extraction (EASTE) is a novel Aspect-Based Sentiment Analysis task.
Our research aims to achieve high performance on the EASTE task and investigates the impact of model size, type, and adaptation techniques on task performance.
Ultimately, we provide detailed insights and achieve state-of-the-art results in complex sentiment analysis.
arXiv Detail & Related papers (2024-07-04T16:48:14Z)
- Unified Language-driven Zero-shot Domain Adaptation [55.64088594551629]
Unified Language-driven Zero-shot Domain Adaptation (ULDA) is a novel task setting.
It enables a single model to adapt to diverse target domains without explicit domain-ID knowledge.
arXiv Detail & Related papers (2024-04-10T16:44:11Z)
- Learning to Extract Structured Entities Using Language Models [52.281701191329]
Recent advances in machine learning have significantly impacted the field of information extraction.
We reformulate the task to be entity-centric, enabling the use of diverse metrics.
We contribute to the field by introducing Structured Entity Extraction and proposing the Approximate Entity Set OverlaP metric.
arXiv Detail & Related papers (2024-02-06T22:15:09Z)
- CONTRASTE: Supervised Contrastive Pre-training With Aspect-based Prompts For Aspect Sentiment Triplet Extraction [13.077459544929598]
We present a novel pre-training strategy using CONTRastive learning to enhance the ASTE performance.
We also demonstrate the advantage of our proposed technique on other ABSA tasks such as ACOS, TASD, and AESC.
arXiv Detail & Related papers (2023-10-24T07:40:09Z)
- Exploring Dimensionality Reduction Techniques in Multilingual Transformers [64.78260098263489]
This paper gives a comprehensive account of the impact of dimensional reduction techniques on the performance of state-of-the-art multilingual Siamese Transformers.
It shows that it is possible to achieve an average reduction in the number of dimensions of 91.58% ± 2.59% and 54.65% ± 32.20%, respectively.
arXiv Detail & Related papers (2022-04-18T17:20:55Z)
- Incorporating Dynamic Semantics into Pre-Trained Language Model for Aspect-based Sentiment Analysis [67.41078214475341]
We propose Dynamic Re-weighting BERT (DR-BERT) to learn dynamic aspect-oriented semantics for ABSA.
Specifically, we first take the Stack-BERT layers as a primary encoder to grasp the overall semantic of the sentence.
We then fine-tune it by incorporating a lightweight Dynamic Re-weighting Adapter (DRA).
arXiv Detail & Related papers (2022-03-30T14:48:46Z)
- X2Parser: Cross-Lingual and Cross-Domain Framework for Task-Oriented Compositional Semantic Parsing [51.81533991497547]
Task-oriented compositional semantic parsing (TCSP) handles complex nested user queries.
We present X2Parser, a transferable Cross-lingual and Cross-domain parser for TCSP.
We propose to predict flattened intents and slots representations separately and cast both prediction tasks into sequence labeling problems.
arXiv Detail & Related papers (2021-06-07T16:40:05Z)
- A Dependency Syntactic Knowledge Augmented Interactive Architecture for End-to-End Aspect-based Sentiment Analysis [73.74885246830611]
We propose a novel dependency syntactic knowledge augmented interactive architecture with multi-task learning for end-to-end ABSA.
This model is capable of fully exploiting the syntactic knowledge (dependency relations and types) by leveraging a well-designed Dependency Relation Embedded Graph Convolutional Network (DreGcn).
Extensive experimental results on three benchmark datasets demonstrate the effectiveness of our approach.
arXiv Detail & Related papers (2020-04-04T14:59:32Z)
This list is automatically generated from the titles and abstracts of the papers in this site.