Supervised PCA: A Multiobjective Approach
- URL: http://arxiv.org/abs/2011.05309v4
- Date: Tue, 16 Aug 2022 19:53:08 GMT
- Title: Supervised PCA: A Multiobjective Approach
- Authors: Alexander Ritchie, Laura Balzano, Daniel Kessler, Chandra S. Sripada,
Clayton Scott
- Abstract summary: Methods for supervised principal component analysis (SPCA) incorporate label information into PCA so that extracted features are more useful for prediction.
We propose a new method for SPCA that addresses both of these objectives jointly.
Our approach accommodates arbitrary supervised learning losses and, through a statistical reformulation, provides a novel low-rank extension of generalized linear models.
- Score: 70.99924195791532
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Methods for supervised principal component analysis (SPCA) aim to incorporate
label information into principal component analysis (PCA), so that the
extracted features are more useful for a prediction task of interest. Prior
work on SPCA has focused primarily on optimizing prediction error, and has
neglected the value of maximizing variance explained by the extracted features.
We propose a new method for SPCA that addresses both of these objectives
jointly, and demonstrate empirically that our approach dominates existing
approaches, i.e., outperforms them with respect to both prediction error and
variation explained. Our approach accommodates arbitrary supervised learning
losses and, through a statistical reformulation, provides a novel low-rank
extension of generalized linear models.
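The abstract describes a joint objective: maximize variance explained by the extracted features while minimizing a supervised prediction loss. The sketch below illustrates one way such a trade-off could be optimized; it is a hypothetical illustration of the multiobjective idea, not the authors' algorithm, and the trade-off weight `lam`, step size `lr`, and iteration count are assumed hyperparameters.

```python
import numpy as np

def supervised_pca(X, y, k=2, lam=0.5, lr=0.01, n_iter=500, seed=0):
    """Illustrative SPCA-style objective: trade off variance explained
    (lam * tr(L^T S L)) against a squared prediction loss on the
    projected features. Hypothetical sketch, not the paper's method."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    Xc = X - X.mean(axis=0)                 # center features
    S = Xc.T @ Xc / n                       # sample covariance
    L = np.linalg.qr(rng.standard_normal((d, k)))[0]  # orthonormal init
    for _ in range(n_iter):
        Z = Xc @ L                                    # projected features
        w, *_ = np.linalg.lstsq(Z, y, rcond=None)     # best linear fit on scores
        resid = Z @ w - y
        # gradient of: lam * tr(L^T S L) - (1 - lam) * mean squared error
        grad = 2 * lam * (S @ L) - (1 - lam) * (2 / n) * Xc.T @ np.outer(resid, w)
        L, _ = np.linalg.qr(L + lr * grad)            # ascent step + retraction to orthonormal columns
    return L
```

The QR retraction keeps the projection orthonormal after each gradient step, so the returned `L` spans a valid k-dimensional subspace regardless of `lam`.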
Related papers
- Spectral Representation for Causal Estimation with Hidden Confounders [33.148766692274215]
We address the problem of causal effect estimation where hidden confounders are present.
Our approach uses a singular value decomposition of a conditional expectation operator, followed by solving a saddle-point optimization problem.
arXiv Detail & Related papers (2024-07-15T05:39:56Z)
- Embedded feature selection in LSTM networks with multi-objective evolutionary ensemble learning for time series forecasting [49.1574468325115]
We present a novel feature selection method embedded in Long Short-Term Memory networks.
Our approach optimizes the weights and biases of the LSTM in a partitioned manner.
Experimental evaluations on air quality time series data from Italy and southeast Spain demonstrate that our method substantially improves the generalization ability of conventional LSTMs.
arXiv Detail & Related papers (2023-12-29T08:42:10Z)
- Consensus-Adaptive RANSAC [104.87576373187426]
We propose a new RANSAC framework that learns to explore the parameter space by considering the residuals seen so far via a novel attention layer.
The attention mechanism operates on a batch of point-to-model residuals, and updates a per-point estimation state to take into account the consensus found through a lightweight one-step transformer.
arXiv Detail & Related papers (2023-07-26T08:25:46Z)
- Prediction-Oriented Bayesian Active Learning [51.426960808684655]
Expected predictive information gain (EPIG) is an acquisition function that measures information gain in the space of predictions rather than parameters.
EPIG leads to stronger predictive performance compared with BALD across a range of datasets and models.
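EPIG scores a candidate input by the mutual information between its prediction and predictions at target inputs drawn from the deployment distribution. A minimal Monte Carlo sketch of such an estimator for classification is below; it assumes posterior predictive samples are already available, and is an illustration of the quantity rather than the authors' implementation.

```python
import numpy as np

def epig(probs_cand, probs_targ, eps=1e-12):
    """Monte Carlo sketch of expected predictive information gain (EPIG)
    for classification. probs_cand: (M, C) predictive probabilities for one
    candidate input under M posterior samples; probs_targ: (M, K, C) for K
    target inputs. Hypothetical illustration, not the authors' code."""
    M, _ = probs_cand.shape
    # joint predictive over (y, y*) per target, averaged over posterior samples
    joint = np.einsum('mc,mkd->kcd', probs_cand, probs_targ) / M   # (K, C, C)
    p_y = probs_cand.mean(axis=0)                                  # (C,)
    p_ystar = probs_targ.mean(axis=0)                              # (K, C)
    indep = p_y[None, :, None] * p_ystar[:, None, :]               # (K, C, C)
    # mutual information I(y; y* | x, x*_k), averaged over the K targets
    mi = (joint * (np.log(joint + eps) - np.log(indep + eps))).sum(axis=(1, 2))
    return mi.mean()
```

When the posterior samples all agree (no parameter uncertainty), the joint factorizes and the score is zero; disagreement that is correlated between candidate and targets drives the score up.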
arXiv Detail & Related papers (2023-04-17T10:59:57Z)
- Explain, Adapt and Retrain: How to improve the accuracy of a PPM classifier through different explanation styles [4.6281736192809575]
Recent papers have introduced a novel approach to explain why a Predictive Process Monitoring model for outcome-oriented predictions provides wrong predictions.
We show how to exploit the explanations to identify the most common features that induce a predictor to make mistakes in a semi-automated way.
arXiv Detail & Related papers (2023-03-27T06:37:55Z)
- Query-Adaptive Predictive Inference with Partial Labels [0.0]
We propose a new methodology to construct predictive sets using only partially labeled data on top of black-box predictive models.
Our experiments highlight the validity of our predictive set construction as well as the attractiveness of a more flexible user-dependent loss framework.
arXiv Detail & Related papers (2022-06-15T01:48:42Z)
- Domain-Adjusted Regression or: ERM May Already Learn Features Sufficient for Out-of-Distribution Generalization [52.7137956951533]
We argue that devising simpler methods for learning predictors on existing features is a promising direction for future research.
We introduce Domain-Adjusted Regression (DARE), a convex objective for learning a linear predictor that is provably robust under a new model of distribution shift.
Under a natural model, we prove that the DARE solution is the minimax-optimal predictor for a constrained set of test distributions.
arXiv Detail & Related papers (2022-02-14T16:42:16Z)
- Selective Classification via One-Sided Prediction [54.05407231648068]
One-sided prediction (OSP) based relaxation yields a selective classification (SC) scheme that attains near-optimal coverage in the practically relevant high target accuracy regime.
We theoretically derive generalization bounds for SC and OSP, and empirically show that our scheme strongly outperforms state-of-the-art methods in coverage at small error levels.
arXiv Detail & Related papers (2020-10-15T16:14:27Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.