Locally Adaptive and Differentiable Regression
- URL: http://arxiv.org/abs/2308.07418v2
- Date: Fri, 13 Oct 2023 02:27:42 GMT
- Title: Locally Adaptive and Differentiable Regression
- Authors: Mingxuan Han, Varun Shankar, Jeff M Phillips, Chenglong Ye
- Abstract summary: We propose a general framework to construct a global continuous and differentiable model based on a weighted average of locally learned models in corresponding local regions.
We demonstrate that when we mix kernel ridge and polynomial regression terms in the local models, and stitch them together continuously, we achieve faster statistical convergence in theory and improved performance in various practical settings.
- Score: 10.194448186897906
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Over-parameterized models like deep nets and random forests have become very
popular in machine learning. However, the natural goals of continuity and
differentiability, common in regression models, are now often ignored in modern
over-parameterized, locally adaptive models. We propose a general framework to
construct a global continuous and differentiable model based on a weighted
average of locally learned models in corresponding local regions. This model is
competitive in dealing with data with different densities or scales of function
values in different local regions. We demonstrate that when we mix kernel ridge
and polynomial regression terms in the local models, and stitch them together
continuously, we achieve faster statistical convergence in theory and improved
performance in various practical settings.
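A minimal sketch of the recipe the abstract describes, assuming simple one-dimensional data: fit a kernel ridge model per local region, then blend the local fits with smooth, normalized (partition-of-unity) weights so the global predictor stays continuous and differentiable. The region layout, bandwidths, and the `smooth_weights` helper are illustrative choices, not the paper's exact construction.

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge

def smooth_weights(x, centers, bandwidth=1.0):
    """Gaussian bump per region, normalized to sum to 1 (a partition of unity)."""
    d2 = (x[:, None] - centers[None, :]) ** 2
    w = np.exp(-d2 / (2 * bandwidth ** 2))
    return w / w.sum(axis=1, keepdims=True)

rng = np.random.default_rng(0)
x = np.sort(rng.uniform(-3, 3, 300))
y = np.sin(2 * x) + 0.1 * rng.standard_normal(300)

centers = np.linspace(-3, 3, 6)                # one local region per center
local_models = []
for c in centers:
    mask = np.abs(x - c) < 1.0                 # points assigned to this region
    model = KernelRidge(kernel="rbf", alpha=1e-2, gamma=2.0)
    model.fit(x[mask, None], y[mask])
    local_models.append(model)

def predict(xq):
    """Global prediction as a smooth weighted average of the local fits."""
    W = smooth_weights(xq, centers)
    preds = np.column_stack([m.predict(xq[:, None]) for m in local_models])
    return (W * preds).sum(axis=1)

print(predict(np.array([-1.0, 0.0, 1.5])))
```

Because the Gaussian weights are smooth and sum to one everywhere, the blended prediction is differentiable even where regions overlap; for brevity the local models here are kernel ridge only, while the paper also mixes in polynomial terms.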
Related papers
- Dynamic Post-Hoc Neural Ensemblers [55.15643209328513]
In this study, we explore employing neural networks as ensemble methods.
Motivated by the risk of learning low-diversity ensembles, we propose regularizing the model by randomly dropping base model predictions.
We demonstrate that this approach lower-bounds the diversity within the ensemble, reducing overfitting and improving generalization; a sketch of the dropping mechanism follows.
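A hedged sketch of the regularizer the summary describes, assuming a simple linear ensembler over stacked base-model predictions; `DropEnsembler` and all hyperparameters are illustrative, not the paper's architecture.

```python
import torch
import torch.nn as nn

class DropEnsembler(nn.Module):
    """Linear ensembler that randomly drops whole base-model predictions
    during training, so no single base model can dominate the ensemble."""
    def __init__(self, n_base, n_classes, p_drop=0.3):
        super().__init__()
        self.p_drop = p_drop
        self.combine = nn.Linear(n_base * n_classes, n_classes)

    def forward(self, base_preds):             # (batch, n_base, n_classes)
        if self.training:
            keep = (torch.rand(base_preds.shape[:2], device=base_preds.device)
                    > self.p_drop).float().unsqueeze(-1)
            base_preds = base_preds * keep     # zero out dropped base models
        return self.combine(base_preds.flatten(1))

ens = DropEnsembler(n_base=5, n_classes=10)
logits = ens(torch.randn(8, 5, 10))            # -> (8, 10) ensembled logits
```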
arXiv Detail & Related papers (2024-10-06T15:25:39Z) - A Practitioner's Guide to Continual Multimodal Pretraining [83.63894495064855]
Multimodal foundation models serve numerous applications at the intersection of vision and language.
To keep models updated, research into continual pretraining mainly explores scenarios with either infrequent, indiscriminate updates on large-scale new data, or frequent, sample-level updates.
We introduce FoMo-in-Flux, a continual multimodal pretraining benchmark with realistic compute constraints and practical deployment requirements.
arXiv Detail & Related papers (2024-08-26T17:59:01Z) - Local and Global Trend Bayesian Exponential Smoothing Models [42.23414385928431]
This paper describes a family of seasonal and non-seasonal time series models that can be viewed as generalisations of additive and multiplicative exponential smoothing models.
Our models have a global trend that can smoothly change from additive to multiplicative, and is combined with a linear local trend.
We leverage state-of-the-art Bayesian fitting techniques to accurately fit these models that are more complex and flexible than standard exponential smoothing models.
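One way to read the additive-to-multiplicative global trend is a power-law term in the level, roughly gamma * level**rho with rho in [0, 1]; the sketch below is an assumption about that functional form, not the paper's full Bayesian model.

```python
def one_step_forecast(level, local_trend, gamma=0.5, rho=0.5, lam=0.3):
    """Illustrative mean forecast: rho=0 gives an additive (constant) global
    increment, rho=1 scales the increment with the level (multiplicative)."""
    global_trend = gamma * level ** rho
    return level + global_trend + lam * local_trend

print(one_step_forecast(100.0, 2.0, rho=0.0))  # additive-like trend
print(one_step_forecast(100.0, 2.0, rho=1.0))  # multiplicative-like trend
```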
arXiv Detail & Related papers (2023-09-25T08:31:50Z) - FedSoup: Improving Generalization and Personalization in Federated Learning via Selective Model Interpolation [32.36334319329364]
Cross-silo federated learning (FL) enables the development of machine learning models on datasets distributed across data centers.
Recent research has found that current FL algorithms face a trade-off between local and global performance when confronted with distribution shifts.
We propose a novel federated model soup method to optimize the trade-off between local and global performance.
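The "model soup" family the summary refers to operates in weight space; a minimal sketch of interpolating a personalized local model with the shared global model follows, where the interpolation weight alpha and the choice of checkpoints are illustrative and the values are assumed to be floating-point parameters.

```python
import torch
import torch.nn as nn

def interpolate_state_dicts(local_sd, global_sd, alpha=0.5):
    """alpha * local + (1 - alpha) * global, parameter by parameter
    (assumes floating-point parameters with matching shapes)."""
    return {k: alpha * local_sd[k] + (1 - alpha) * global_sd[k] for k in local_sd}

local, global_model = nn.Linear(4, 2), nn.Linear(4, 2)
souped = nn.Linear(4, 2)
souped.load_state_dict(interpolate_state_dicts(local.state_dict(),
                                               global_model.state_dict()))
```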
arXiv Detail & Related papers (2023-07-20T00:07:29Z) - Federated Learning of Models Pre-Trained on Different Features with Consensus Graphs [19.130197923214123]
Learning an effective global model on private and decentralized datasets has become an increasingly important challenge of machine learning.
We propose a feature fusion approach that extracts local representations from local models and incorporates them into a global representation that improves prediction performance (sketched below).
This paper presents solutions to these problems and demonstrates them in real-world applications on time series data such as power grids and traffic networks.
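A hedged sketch of the feature-fusion idea above, assuming frozen pre-trained local encoders whose representations are concatenated under a trainable global head; `FusedGlobalModel` and its dimensions are illustrative, and the paper's consensus-graph machinery is not modeled here.

```python
import torch
import torch.nn as nn

class FusedGlobalModel(nn.Module):
    """Freeze each party's pre-trained encoder and train a global head on the
    concatenation of their local representations."""
    def __init__(self, local_encoders, local_dim, n_out):
        super().__init__()
        self.encoders = nn.ModuleList(local_encoders)
        for p in self.encoders.parameters():
            p.requires_grad_(False)            # local models stay fixed
        self.head = nn.Linear(local_dim * len(local_encoders), n_out)

    def forward(self, xs):                     # one input view per party
        feats = [enc(x) for enc, x in zip(self.encoders, xs)]
        return self.head(torch.cat(feats, dim=-1))

model = FusedGlobalModel([nn.Linear(8, 16), nn.Linear(8, 16)],
                         local_dim=16, n_out=3)
out = model([torch.randn(4, 8), torch.randn(4, 8)])   # -> (4, 3)
```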
arXiv Detail & Related papers (2023-06-02T02:24:27Z) - Improving Heterogeneous Model Reuse by Density Estimation [105.97036205113258]
This paper studies multiparty learning, aiming to learn a model using the private data of different participants.
Model reuse is a promising solution for multiparty learning, assuming that a local model has been trained for each party.
arXiv Detail & Related papers (2023-05-23T09:46:54Z) - Is Aggregation the Only Choice? Federated Learning via Layer-wise Model Recombination [33.12164201146458]
We propose a novel FL paradigm named FedMR (Federated Model Recombination).
The goal of FedMR is to guide the recombined models to be trained towards a flat area of the loss landscape.
Compared with state-of-the-art FL methods, FedMR can significantly improve inference accuracy without exposing the privacy of each client.
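A minimal sketch of recombination, assuming all clients share one architecture; drawing each parameter tensor from a randomly chosen client stands in for FedMR's layer-level scheme, and `recombine` is an illustrative name.

```python
import random
import torch.nn as nn

def recombine(client_state_dicts):
    """Draw each parameter tensor from a random client (per-tensor here for
    brevity; grouping weight and bias by layer is a small extension)."""
    return {k: random.choice(client_state_dicts)[k].clone()
            for k in client_state_dicts[0]}

clients = [nn.Linear(4, 2).state_dict() for _ in range(3)]
mixed = nn.Linear(4, 2)
mixed.load_state_dict(recombine(clients))
```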
arXiv Detail & Related papers (2023-05-18T05:58:24Z) - Global-to-Local Modeling for Video-based 3D Human Pose and Shape Estimation [53.04781510348416]
Video-based 3D human pose and shape estimation is evaluated by intra-frame accuracy and inter-frame smoothness.
We propose to structurally decouple the modeling of long-term and short-term correlations in an end-to-end framework, the Global-to-Local Transformer (GLoT).
Our GLoT surpasses previous state-of-the-art methods with the lowest model parameters on popular benchmarks, i.e., 3DPW, MPI-INF-3DHP, and Human3.6M.
arXiv Detail & Related papers (2023-03-26T14:57:49Z) - Federated and Generalized Person Re-identification through Domain and Feature Hallucinating [88.77196261300699]
We study the problem of federated domain generalization (FedDG) for person re-identification (re-ID).
We propose a novel method, called "Domain and Feature Hallucinating (DFH)", to produce diverse features for learning generalized local and global models.
Our method achieves the state-of-the-art performance for FedDG on four large-scale re-ID benchmarks.
arXiv Detail & Related papers (2022-03-05T09:15:13Z) - Improving the Reconstruction of Disentangled Representation Learners via Multi-Stage Modeling [54.94763543386523]
Current autoencoder-based disentangled representation learning methods achieve disentanglement by penalizing the (aggregate) posterior to encourage statistical independence of the latent factors.
We present a novel multi-stage modeling approach where the disentangled factors are first learned using a penalty-based disentangled representation learning method.
Then, the low-quality reconstruction is improved with another deep generative model that is trained to model the missing correlated latent variables.
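An illustrative stage-1 objective in the penalty-based family the summary mentions, assuming a beta-VAE-style loss; the paper penalizes the aggregate posterior, which this per-sample KL only approximates, and the second-stage generative model is not shown.

```python
import torch
import torch.nn.functional as F

def stage1_loss(x, x_hat, mu, logvar, beta=4.0):
    """Reconstruction plus a beta-weighted KL to a standard normal prior;
    larger beta trades reconstruction quality for disentanglement."""
    recon = F.mse_loss(x_hat, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + beta * kl

x = torch.randn(8, 32)
loss = stage1_loss(x, x + 0.1, torch.zeros(8, 4), torch.zeros(8, 4))
```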
arXiv Detail & Related papers (2020-10-25T18:51:15Z) - Variational Filtering with Copula Models for SLAM [5.242618356321224]
We show how it is possible to perform simultaneous localization and mapping (SLAM) with a larger class of distributions.
We integrate the distribution model with copulas into a Sequential Monte Carlo estimator and show how unknown model parameters can be learned through gradient-based optimization.
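A hedged sketch of the core copula trick: couple arbitrary marginals through a Gaussian copula by sampling correlated normals, mapping them to uniforms with the normal CDF, and pushing them through each marginal's inverse CDF. The marginals and correlation here are illustrative, not the paper's SLAM distributions.

```python
import numpy as np
from scipy.stats import norm, expon

rng = np.random.default_rng(0)
R = np.array([[1.0, 0.7], [0.7, 1.0]])        # copula correlation
z = rng.multivariate_normal(np.zeros(2), R, size=1000)
u = norm.cdf(z)                                # uniform marginals, dependence kept
x = np.column_stack([norm.ppf(u[:, 0], loc=2.0, scale=0.5),   # normal marginal
                     expon.ppf(u[:, 1], scale=3.0)])          # exponential marginal
print(np.corrcoef(x.T))                        # dependence induced by the copula
```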
arXiv Detail & Related papers (2020-08-02T15:38:23Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information and is not responsible for any consequences of its use.