Structured Tuning for Semantic Role Labeling
- URL: http://arxiv.org/abs/2005.00496v2
- Date: Tue, 5 May 2020 07:39:17 GMT
- Title: Structured Tuning for Semantic Role Labeling
- Authors: Tao Li, Parth Anand Jawale, Martha Palmer, Vivek Srikumar
- Abstract summary: Recent neural network-driven semantic role labeling systems have shown impressive improvements in F1 scores.
We present a structured tuning framework to improve models using softened constraints only at training time.
- Score: 38.66432166217337
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent neural network-driven semantic role labeling (SRL) systems have shown
impressive improvements in F1 scores. These improvements are due to expressive
input representations, which, at least at the surface, are orthogonal to
knowledge-rich constrained decoding mechanisms that helped linear SRL models.
Introducing the benefits of structure to inform neural models presents a
methodological challenge. In this paper, we present a structured tuning
framework to improve models using softened constraints only at training time.
Our framework leverages the expressiveness of neural networks and provides
supervision with structured loss components. We start with a strong baseline
(RoBERTa) to validate the impact of our approach, and show that our framework
outperforms the baseline by learning to comply with declarative constraints.
Additionally, our experiments with smaller training sizes show that we can
achieve consistent improvements under low-resource scenarios.
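To make the idea of "softened constraints as structured loss components" concrete, the sketch below shows one way a declarative SRL rule (here, a "each core role is assigned to at most one span per predicate" constraint) could be relaxed into a differentiable penalty and added to the usual cross-entropy loss at training time. This is a minimal illustration, not the paper's exact formulation: the choice of constraint, the product-style relaxation, the weight `rho`, and all function names are assumptions made for the example.

```python
import torch
import torch.nn.functional as F

def unique_core_role_penalty(log_probs, core_role_ids):
    """Soft penalty discouraging a core role (e.g. ARG0) from being
    assigned to more than one span for the same predicate.

    log_probs: tensor [num_spans, num_labels] of per-span
               log-probabilities from the SRL classifier (one predicate).
    core_role_ids: label indices treated as core roles.

    The hard rule "not (label(i) = r and label(j) = r)" for i != j is
    relaxed: the joint truth value is approximated by p_i(r) * p_j(r),
    and the penalty -log(1 - p_i(r) * p_j(r)) is near zero when the rule
    is clearly satisfied and grows as both probabilities approach 1.
    """
    probs = log_probs.exp()                         # [num_spans, num_labels]
    penalty = log_probs.new_zeros(())
    for r in core_role_ids:
        p_r = probs[:, r]                           # [num_spans]
        pair = p_r.unsqueeze(0) * p_r.unsqueeze(1)  # all pairwise products
        mask = torch.triu(torch.ones_like(pair), diagonal=1).bool()
        violation = pair[mask].clamp(max=1.0 - 1e-6)
        penalty = penalty - torch.log1p(-violation).sum()
    return penalty

def training_loss(log_probs, gold_labels, core_role_ids, rho=0.1):
    """Total loss: standard cross-entropy plus the weighted constraint
    penalty. `rho` is an illustrative hyperparameter."""
    ce = F.nll_loss(log_probs, gold_labels)
    return ce + rho * unique_core_role_penalty(log_probs, core_role_ids)
```

Because the penalty is applied only during training, decoding remains unchanged; the network is simply tuned to prefer label distributions that comply with the constraint.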
Related papers
- DINO-R1: Incentivizing Reasoning Capability in Vision Foundation Models [18.06361678575107]
We propose DINO-R1, the first attempt to incentivize visual in-context reasoning capabilities of vision foundation models. DINO-R1 introduces Group Relative Query Optimization (GRQO), a novel reinforcement-style training strategy. Experiments on COCO, LVIS, and ODinW demonstrate that DINO-R1 significantly outperforms supervised fine-tuning baselines.
arXiv Detail & Related papers (2025-05-29T21:58:06Z) - Model Hemorrhage and the Robustness Limits of Large Language Models [119.46442117681147]
Large language models (LLMs) demonstrate strong performance across natural language processing tasks, yet undergo significant performance degradation when modified for deployment.
We define this phenomenon as model hemorrhage - performance decline caused by parameter alterations and architectural changes.
arXiv Detail & Related papers (2025-03-31T10:16:03Z) - Lattice-Based Pruning in Recurrent Neural Networks via Poset Modeling [0.0]
Recurrent neural networks (RNNs) are central to sequence modeling tasks, yet their high computational complexity poses challenges for scalability and real-time deployment.
We introduce a novel framework that models RNNs as partially ordered sets (posets) and constructs corresponding dependency lattices.
By identifying meet irreducible neurons, our lattice-based pruning algorithm selectively retains critical connections while eliminating redundant ones.
arXiv Detail & Related papers (2025-02-23T10:11:38Z) - Meta-Learning for Physically-Constrained Neural System Identification [9.417562391585076]
We present a gradient-based meta-learning framework for rapid adaptation of neural state-space models (NSSMs) for black-box system identification.
We show that the meta-learned models result in improved downstream performance in model-based state estimation in indoor localization and energy systems.
arXiv Detail & Related papers (2025-01-10T18:46:28Z) - Self-Organizing Recurrent Stochastic Configuration Networks for Nonstationary Data Modelling [3.8719670789415925]
Recurrent stochastic configuration networks (RSCNs) are a class of randomized models that have shown promise in modelling nonlinear dynamics.
This paper aims at developing a self-organizing version of RSCNs, termed as SORSCNs, to enhance the continuous learning ability of the network for modelling nonstationary data.
arXiv Detail & Related papers (2024-10-14T01:28:25Z) - Advancing Neural Network Performance through Emergence-Promoting Initialization Scheme [0.0]
Emergence in machine learning refers to the spontaneous appearance of capabilities that arise from the scale and structure of training data.
We introduce a novel yet straightforward neural network initialization scheme that aims at achieving greater potential for emergence.
We demonstrate substantial improvements in both model accuracy and training speed, with and without batch normalization.
arXiv Detail & Related papers (2024-07-26T18:56:47Z) - Expressive and Generalizable Low-rank Adaptation for Large Models via Slow Cascaded Learning [55.5715496559514]
LoRA Slow Cascade Learning (LoRASC) is an innovative technique designed to enhance LoRA's expressiveness and generalization capabilities.
Our approach augments expressiveness through a cascaded learning strategy that enables a mixture-of-low-rank adaptation, thereby increasing the model's ability to capture complex patterns.
arXiv Detail & Related papers (2024-07-01T17:28:59Z) - Visual Prompting Upgrades Neural Network Sparsification: A Data-Model Perspective [64.04617968947697]
We introduce a novel data-model co-design perspective to promote superior weight sparsity.
Specifically, customized visual prompts are mounted to upgrade neural network sparsification in our proposed VPNs framework.
arXiv Detail & Related papers (2023-12-03T13:50:24Z) - A Neuromorphic Architecture for Reinforcement Learning from Real-Valued Observations [0.34410212782758043]
Reinforcement Learning (RL) provides a powerful framework for decision-making in complex environments.
This paper presents a novel Spiking Neural Network (SNN) architecture for solving RL problems with real-valued observations.
arXiv Detail & Related papers (2023-07-06T12:33:34Z) - ConCerNet: A Contrastive Learning Based Framework for Automated Conservation Law Discovery and Trustworthy Dynamical System Prediction [82.81767856234956]
This paper proposes a new learning framework named ConCerNet to improve the trustworthiness of DNN-based dynamics modeling.
We show that our method consistently outperforms the baseline neural networks in both coordinate error and conservation metrics.
arXiv Detail & Related papers (2023-02-11T21:07:30Z) - Robust Graph Representation Learning via Predictive Coding [46.22695915912123]
Predictive coding is a message-passing framework initially developed to model information processing in the brain.
In this work, we build models that rely on the message-passing rule of predictive coding.
We show that the proposed models are comparable to standard ones in terms of performance in both inductive and transductive tasks.
arXiv Detail & Related papers (2022-12-09T03:58:22Z) - Rotation-equivariant Graph Neural Networks for Learning Glassy Liquids Representations [0.5249805590164901]
We build a Graph Neural Network that learns a robust representation of the glass's static structure.
We show that this constraint significantly improves the predictive power at a comparable or reduced number of parameters.
While remaining a Deep network, our model has improved interpretability compared to other GNNs.
arXiv Detail & Related papers (2022-11-06T22:05:27Z) - Extended Unconstrained Features Model for Exploring Deep Neural Collapse [59.59039125375527]
Recently, a phenomenon termed "neural collapse" (NC) has been empirically observed in deep neural networks.
Recent papers have shown that minimizers with this structure emerge when optimizing a simplified "unconstrained features model" (UFM).
In this paper, we study the UFM for the regularized MSE loss, and show that the minimizers' features can be more structured than in the cross-entropy case.
arXiv Detail & Related papers (2022-02-16T14:17:37Z) - The Self-Simplifying Machine: Exploiting the Structure of Piecewise Linear Neural Networks to Create Interpretable Models [0.0]
We introduce novel methodology toward simplification and increased interpretability of Piecewise Linear Neural Networks for classification tasks.
Our methods include the use of a trained, deep network to produce a well-performing, single-hidden-layer network without further training.
On these methods, we conduct preliminary studies of model performance, as well as a case study on Wells Fargo's Home Lending dataset.
arXiv Detail & Related papers (2020-12-02T16:02:14Z) - An Ode to an ODE [78.97367880223254]
We present a new paradigm for Neural ODE algorithms, called ODEtoODE, where time-dependent parameters of the main flow evolve according to a matrix flow on the orthogonal group O(d).
This nested system of two flows provides stability and effectiveness of training and provably solves the gradient vanishing-explosion problem.
arXiv Detail & Related papers (2020-06-19T22:05:19Z)
This list is automatically generated from the titles and abstracts of the papers in this site.