Partial sequence labeling with structured Gaussian Processes
- URL: http://arxiv.org/abs/2209.09397v1
- Date: Tue, 20 Sep 2022 00:56:49 GMT
- Title: Partial sequence labeling with structured Gaussian Processes
- Authors: Xiaolei Lu, Tommy W.S. Chow
- Abstract summary: We propose structured Gaussian Processes for partial sequence labeling.
It encodes uncertainty in the prediction and does not need extra effort for model selection and hyperparameter learning.
It is evaluated on several sequence labeling tasks and the experimental results show the effectiveness of the proposed model.
- Score: 8.239028141030621
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Existing partial sequence labeling models mainly focus on the max-margin
framework, which fails to provide an uncertainty estimate for the prediction.
Further, the unique ground truth disambiguation strategy employed by these
models may introduce wrong label information into parameter learning. In this
paper, we propose structured Gaussian Processes for partial sequence labeling
(SGPPSL), which encodes uncertainty in the prediction and does not need extra
effort for model selection and hyperparameter learning. The model employs a
factor-as-piece approximation that divides the linear-chain graph structure
into a set of pieces, which preserves the basic Markov Random Field structure
and effectively avoids handling the large number of candidate output sequences
generated by partially annotated data. A confidence measure is then introduced
into the model to account for the different contributions of candidate labels,
which enables the ground-truth label information to be utilized in parameter
learning. Based on the derived lower bound of the variational lower bound of
the proposed model, variational parameters and confidence measures are
estimated in the framework of alternating optimization. Moreover, a weighted
Viterbi algorithm is proposed to incorporate the confidence measure into
sequence prediction, which accounts for the label ambiguity arising from
multiple annotations in the training data and thus helps improve performance.
SGPPSL is evaluated on several sequence labeling tasks and the experimental
results show the effectiveness of the proposed model.
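To make the decoding step more concrete, the sketch below shows one way a confidence-weighted Viterbi pass over a linear chain could look. It is only an illustration under assumed interfaces: the score parameterization, the function and variable names, and the way the per-label confidence enters the recursion are placeholders, and the structured GP model, the piece-based likelihood, and the variational training loop from the paper are not reproduced here.

```python
# Illustrative sketch only: Viterbi decoding over a linear chain where a
# per-position, per-label confidence re-weights the local scores. Not the
# paper's implementation; all names and the scoring scheme are assumptions.
import numpy as np

def weighted_viterbi(emissions, transitions, confidence):
    """Decode the best label sequence for one sentence.

    emissions:   (T, L) local label scores per position.
    transitions: (L, L) score of moving from label i to label j.
    confidence:  (T, L) weights in (0, 1]; labels in a position's candidate
                 set get higher weight than labels outside it.
    """
    T, L = emissions.shape
    # Fold the confidence into the local scores as a log-prior.
    local = emissions + np.log(confidence + 1e-12)

    delta = np.zeros((T, L))              # best score of a path ending at (t, label)
    backptr = np.zeros((T, L), dtype=int)
    delta[0] = local[0]
    for t in range(1, T):
        # scores[i, j]: best path ending in label i at t-1, then i -> j, emit j at t.
        scores = delta[t - 1][:, None] + transitions + local[t][None, :]
        backptr[t] = scores.argmax(axis=0)
        delta[t] = scores.max(axis=0)

    # Backtrack the highest-scoring path.
    path = [int(delta[-1].argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(backptr[t, path[-1]]))
    return path[::-1]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    T, L = 5, 3
    emissions = rng.normal(size=(T, L))
    transitions = rng.normal(size=(L, L))
    # Partial annotation: a candidate label set per position.
    candidates = [{0, 1}, {1}, {0, 2}, {2}, {0, 1, 2}]
    confidence = np.full((T, L), 0.05)
    for t, cand in enumerate(candidates):
        confidence[t, list(cand)] = 1.0 / len(cand)   # uniform over candidates
    print(weighted_viterbi(emissions, transitions, confidence))
```

In this toy setup, labels outside a position's candidate set receive a small confidence, so the decoder prefers paths consistent with the partial annotation while transition scores still resolve ambiguity among candidates.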
Related papers
- Bayesian Estimation and Tuning-Free Rank Detection for Probability Mass Function Tensors [17.640500920466984]
This paper presents a novel framework for estimating the joint PMF and automatically inferring its rank from observed data.
We derive a deterministic solution based on variational inference (VI) to approximate the posterior distributions of various model parameters. Additionally, we develop a scalable version of the VI-based approach by leveraging stochastic variational inference (SVI).
Experiments involving both synthetic data and real movie recommendation data illustrate the advantages of our VI and SVI-based methods in terms of estimation accuracy, automatic rank detection, and computational efficiency.
arXiv Detail & Related papers (2024-10-08T20:07:49Z)
- Dual-Decoupling Learning and Metric-Adaptive Thresholding for Semi-Supervised Multi-Label Learning [81.83013974171364]
Semi-supervised multi-label learning (SSMLL) is a powerful framework for leveraging unlabeled data to reduce the expensive cost of collecting precise multi-label annotations.
Unlike in semi-supervised learning, one cannot simply select the most probable label as the pseudo-label in SSMLL, since an instance may contain multiple semantics.
We propose a dual-perspective method to generate high-quality pseudo-labels.
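As a toy illustration of why the usual argmax pseudo-label fails in the multi-label setting, the snippet below contrasts it with per-label thresholding; the class names, probabilities, and the 0.5 threshold are invented for the example, and the paper's dual-perspective method is not reproduced.

```python
# Toy illustration: argmax pseudo-labeling keeps only one class, while a
# per-label threshold lets an instance receive several pseudo-labels at once.
import numpy as np

classes = ["person", "dog", "outdoor", "car"]
# Hypothetical predicted marginal probability of each label for one unlabeled image.
probs = np.array([0.92, 0.81, 0.64, 0.05])

# Semi-supervised single-label recipe: keep only the most probable class.
argmax_pseudo = [classes[int(np.argmax(probs))]]          # ['person']

# Multi-label recipe: threshold every class independently.
threshold = 0.5
threshold_pseudo = [c for c, p in zip(classes, probs) if p >= threshold]
# ['person', 'dog', 'outdoor']

print(argmax_pseudo, threshold_pseudo)
```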
arXiv Detail & Related papers (2024-07-26T09:33:53Z)
- Towards Better Certified Segmentation via Diffusion Models [62.21617614504225]
Segmentation models can be vulnerable to adversarial perturbations, which hinders their use in critical decision systems like healthcare or autonomous driving.
Recently, randomized smoothing has been proposed to certify segmentation predictions by adding Gaussian noise to the input to obtain theoretical guarantees.
In this paper, we address the problem of certifying segmentation prediction using a combination of randomized smoothing and diffusion models.
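For context, a minimal sketch of the underlying randomized-smoothing idea, a pixel-wise majority vote over Gaussian-noised copies of the input, is given below; the base segmenter, noise level, and sample count are placeholders, and the certification radius computation and the diffusion-model denoising step from the paper are omitted.

```python
# Minimal sketch of randomized smoothing applied per pixel: classify many
# Gaussian-noised copies of the image and take a pixel-wise majority vote.
import numpy as np

def smoothed_segmentation(segment_fn, image, sigma=0.25, n_samples=100, n_classes=21):
    """segment_fn maps an (H, W, C) image to an (H, W) array of class ids."""
    h, w = image.shape[:2]
    votes = np.zeros((h, w, n_classes), dtype=np.int64)
    for _ in range(n_samples):
        noisy = image + np.random.normal(scale=sigma, size=image.shape)
        pred = segment_fn(noisy)                      # (H, W) hard labels
        votes[np.arange(h)[:, None], np.arange(w)[None, :], pred] += 1
    return votes.argmax(axis=-1)                      # pixel-wise majority vote

if __name__ == "__main__":
    # Dummy base segmenter for demonstration: thresholds the mean channel value.
    dummy = lambda img: (img.mean(axis=-1) > 0.5).astype(int)
    img = np.random.rand(8, 8, 3)
    print(smoothed_segmentation(dummy, img, n_classes=2))
```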
arXiv Detail & Related papers (2023-06-16T16:30:39Z)
- Leveraging Instance Features for Label Aggregation in Programmatic Weak Supervision [75.1860418333995]
Programmatic Weak Supervision (PWS) has emerged as a widespread paradigm to synthesize training labels efficiently.
The core component of PWS is the label model, which infers true labels by aggregating the outputs of multiple noisy supervision sources, abstracted as labeling functions (LFs).
Existing statistical label models typically rely only on the outputs of the LFs, ignoring the instance features when modeling the underlying generative process.
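As a point of reference, the simplest label model is a plain majority vote over LF outputs, sketched below with hypothetical votes; statistical label models additionally estimate LF accuracies, and none of this uses instance features, which is precisely the gap the paper addresses.

```python
# Toy example of the simplest possible label model: a majority vote over
# labeling-function (LF) outputs, where -1 denotes an abstention.
import numpy as np

def majority_vote(lf_outputs, n_classes):
    """lf_outputs: (n_instances, n_lfs) votes in {-1, 0, ..., n_classes-1}."""
    n = lf_outputs.shape[0]
    labels = np.full(n, -1)
    for i in range(n):
        votes = lf_outputs[i][lf_outputs[i] >= 0]        # drop abstentions
        if votes.size:
            labels[i] = np.bincount(votes, minlength=n_classes).argmax()
    return labels

# Three LFs voting on four instances (two classes); -1 = abstain.
L = np.array([[0, 0, -1],
              [1, -1, 1],
              [-1, -1, -1],
              [0, 1, 1]])
print(majority_vote(L, n_classes=2))   # [0, 1, -1, 1]
```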
arXiv Detail & Related papers (2022-10-06T07:28:53Z)
- Active Learning for Regression with Aggregated Outputs [28.40183946090337]
We propose an active learning method that sequentially selects sets to be labeled to improve the predictive performance with fewer labeled sets.
With the experiments using various datasets, we demonstrate that the proposed method achieves better predictive performance with fewer labeled sets than existing methods.
arXiv Detail & Related papers (2022-10-04T02:45:14Z)
- Efficient Ensemble Model Generation for Uncertainty Estimation with Bayesian Approximation in Segmentation [74.06904875527556]
We propose a generic and efficient segmentation framework to construct ensemble segmentation models.
In the proposed method, ensemble models can be generated efficiently by using a layer selection method.
We also devise a new pixel-wise uncertainty loss, which improves the predictive performance.
arXiv Detail & Related papers (2020-05-21T16:08:38Z)
- Meta-Learned Confidence for Few-shot Learning [60.6086305523402]
A popular transductive inference technique for few-shot metric-based approaches is to update the prototype of each class with the mean of the most confident query examples.
We propose to meta-learn the confidence for each query sample, to assign optimal weights to unlabeled queries.
We validate our few-shot learning model with meta-learned confidence on four benchmark datasets.
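A rough sketch of that transductive prototype refinement is shown below; here the confidence is just a softmax over negative squared distances, whereas the paper meta-learns it, and the array shapes, step count, and function names are illustrative assumptions.

```python
# Sketch: class prototypes from the support set are refined with
# confidence-weighted query embeddings (confidence = softmax over -distance).
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def refine_prototypes(support, support_y, query, n_classes, n_steps=3):
    """support: (Ns, D), support_y: (Ns,), query: (Nq, D)."""
    protos = np.stack([support[support_y == c].mean(axis=0) for c in range(n_classes)])
    for _ in range(n_steps):
        # Confidence of each query for each class from squared distances.
        d2 = ((query[:, None, :] - protos[None, :, :]) ** 2).sum(axis=-1)   # (Nq, C)
        conf = softmax(-d2, axis=-1)
        # Confidence-weighted mean of support + query embeddings per class.
        for c in range(n_classes):
            w = conf[:, c]
            num = support[support_y == c].sum(axis=0) + (w[:, None] * query).sum(axis=0)
            den = (support_y == c).sum() + w.sum()
            protos[c] = num / den
    return protos

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    support = rng.normal(size=(10, 4))
    support_y = np.repeat(np.arange(2), 5)
    query = rng.normal(size=(15, 4))
    print(refine_prototypes(support, support_y, query, n_classes=2).shape)  # (2, 4)
```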
arXiv Detail & Related papers (2020-02-27T10:22:17Z)
- Progressive Identification of True Labels for Partial-Label Learning [112.94467491335611]
Partial-label learning (PLL) is a typical weakly supervised learning problem, where each training instance is equipped with a set of candidate labels among which only one is the true label.
Most existing methods are elaborately designed as constrained optimizations that must be solved in specific manners, making their computational complexity a bottleneck for scaling up to big data.
This paper proposes a novel framework that is flexible in the choice of model and optimization algorithm.
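The progressive identification idea can be illustrated with a small re-weighting step over candidate labels, sketched below; the probabilities and candidate sets are made up, and the paper's specific risk estimator and training procedure are not reproduced.

```python
# Generic sketch of progressive label identification for PLL: each instance
# keeps a weight over its candidate labels, re-normalized from the model's
# current predictions so the most likely candidate gradually dominates.
import numpy as np

def update_candidate_weights(probs, candidate_mask):
    """probs: (N, C) model predictions; candidate_mask: (N, C) binary candidate-set mask."""
    masked = probs * candidate_mask
    return masked / masked.sum(axis=1, keepdims=True)    # renormalize over candidates only

# Two instances, four classes; candidate sets {0, 2} and {1, 2, 3}.
probs = np.array([[0.5, 0.1, 0.3, 0.1],
                  [0.2, 0.4, 0.3, 0.1]])
mask = np.array([[1, 0, 1, 0],
                 [0, 1, 1, 1]], dtype=float)
print(update_candidate_weights(probs, mask))
# Weights shift toward labels 0 and 1 respectively as training proceeds.
```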
arXiv Detail & Related papers (2020-02-19T08:35:15Z)