Learning Hybrid Representation by Robust Dictionary Learning in
Factorized Compressed Space
- URL: http://arxiv.org/abs/1912.11785v1
- Date: Thu, 26 Dec 2019 06:52:34 GMT
- Title: Learning Hybrid Representation by Robust Dictionary Learning in
Factorized Compressed Space
- Authors: Jiahuan Ren, Zhao Zhang, Sheng Li, Yang Wang, Guangcan Liu, Shuicheng
Yan, Meng Wang
- Abstract summary: We investigate robust dictionary learning (DL) to discover a hybrid salient low-rank and sparse representation in a factorized compressed space.
A Joint Robust Factorization and Projective Dictionary Learning (J-RFDL) model is presented.
- Score: 84.37923242430999
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we investigate robust dictionary learning (DL) to discover
the hybrid salient low-rank and sparse representation in a factorized
compressed space. A Joint Robust Factorization and Projective Dictionary
Learning (J-RFDL) model is presented. J-RFDL aims to improve data
representations by enhancing robustness to outliers and noise in the data,
encoding the reconstruction error more accurately, and obtaining hybrid
salient coefficients that reconstruct the data accurately. Specifically, J-RFDL
performs the robust representation by DL in a factorized compressed space to
eliminate the negative effects of noise and outliers on the results, which can
also make the DL process efficient. To make the encoding process robust to
noise in the data, J-RFDL explicitly uses the sparse L2,1-norm, which can
jointly minimize the factorization and reconstruction errors by forcing
entire rows of the reconstruction error to be zero. To deliver salient coefficients with
good structures to reconstruct given data well, J-RFDL imposes the joint
low-rank and sparse constraints on the embedded coefficients with a synthesis
dictionary. Based on the hybrid salient coefficients, we also extend J-RFDL for
the joint classification and propose a discriminative J-RFDL model, which can
improve the discriminating abilities of the learned coefficients by minimizing the
classification error jointly. Extensive experiments on public datasets
demonstrate that our formulations can deliver superior performance over other
state-of-the-art methods.
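The abstract names two concrete mechanisms without showing them: the L2,1-norm, whose minimization drives entire rows of the reconstruction error to zero, and the joint low-rank and sparse ("hybrid salient") constraints on the embedded coefficients. The sketch below is a minimal illustration of both building blocks, not the paper's J-RFDL algorithm; all function names and threshold values are assumptions.

```python
import numpy as np

def l21_norm(E):
    """L2,1-norm: sum of the Euclidean norms of the rows of E."""
    return np.sum(np.linalg.norm(E, axis=1))

def prox_l21(E, tau):
    """Row-wise soft-thresholding, the proximal operator of tau*||.||_{2,1}.
    Rows with small norm are set exactly to zero, which is how an L2,1
    penalty forces whole rows of an error matrix to vanish."""
    row_norms = np.linalg.norm(E, axis=1, keepdims=True)
    scale = np.maximum(0.0, 1.0 - tau / np.maximum(row_norms, 1e-12))
    return scale * E

def svt(A, tau):
    """Singular-value thresholding, the proximal operator of the nuclear
    norm; a standard surrogate for a low-rank constraint."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def hybrid_shrink(C, tau_rank, tau_sparse):
    """One illustrative 'hybrid salient' step: encourage C to be both
    low-rank (SVT) and sparse (elementwise soft-thresholding)."""
    C = svt(C, tau_rank)
    return np.sign(C) * np.maximum(np.abs(C) - tau_sparse, 0.0)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    E = rng.normal(size=(8, 5))
    E[2] *= 0.01                      # one nearly-zero row
    print(l21_norm(E))
    print(prox_l21(E, tau=0.5)[2])    # that row is thresholded to exactly zero
    print(np.linalg.matrix_rank(hybrid_shrink(rng.normal(size=(8, 5)), 2.0, 0.1)))
```

Row-wise soft-thresholding is the exact proximal operator of the L2,1 penalty, which is why optimizing it removes entire error rows (e.g., corrupted samples) rather than individual entries.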
Related papers
- DiffATR: Diffusion-based Generative Modeling for Audio-Text Retrieval [49.076590578101985]
We present a diffusion-based ATR framework (DiffATR) that generates the joint audio-text distribution from noise.
Experiments on the AudioCaps and Clotho datasets verify the effectiveness of our approach with superior performance.
arXiv Detail & Related papers (2024-09-16T06:33:26Z)
- R-SFLLM: Jamming Resilient Framework for Split Federated Learning with Large Language Models [83.77114091471822]
Split federated learning (SFL) is a compute-efficient paradigm in distributed machine learning (ML).
A challenge in SFL, particularly when deployed over wireless channels, is the susceptibility of transmitted model parameters to adversarial jamming.
This is particularly pronounced for word embedding parameters in large language models (LLMs), which are crucial for language understanding.
A physical layer framework is developed for resilient SFL with LLMs (R-SFLLM) over wireless networks.
arXiv Detail & Related papers (2024-07-16T12:21:29Z)
- Unveiling the Flaws: Exploring Imperfections in Synthetic Data and Mitigation Strategies for Large Language Models [89.88010750772413]
Synthetic data has been proposed as a solution to address the issue of high-quality data scarcity in the training of large language models (LLMs).
Our work delves into these specific flaws associated with question-answer (Q-A) pairs, a prevalent type of synthetic data, and presents a method based on unlearning techniques to mitigate these flaws.
Our work has yielded key insights into the effective use of synthetic data, aiming to promote more robust and efficient LLM training.
arXiv Detail & Related papers (2024-06-18T08:38:59Z)
- Machine Learning Techniques for Data Reduction of CFD Applications [10.881548113461493]
We present an approach called guaranteed block autoencoder that leverages correlations to reduce scientific data.
It uses multidimensional blocks of computational fluid dynamics (CFD) tensors for both input and output; a sketch of the guaranteed-error pattern follows this entry.
arXiv Detail & Related papers (2024-04-28T04:01:09Z)
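The summary gives few details of the guaranteed block autoencoder itself, so the sketch below only shows the generic pattern such guaranteed-error compressors follow: compress each block with a lossy stage (truncated SVD standing in for the learned autoencoder) and store a quantized residual so a pointwise error bound holds on decompression. All names, the SVD stand-in, and the quantization scheme are assumptions.

```python
import numpy as np

def compress_block(block, rank, abs_err_bound):
    """Lossy stage (truncated SVD as a stand-in for the autoencoder), plus a
    quantized residual that restores a guaranteed pointwise error bound."""
    U, s, Vt = np.linalg.svd(block, full_matrices=False)
    U, s, Vt = U[:, :rank], s[:rank], Vt[:rank]
    approx = U @ np.diag(s) @ Vt
    # Quantize the residual with step 2*bound: after dequantization, each
    # entry is within abs_err_bound of the true residual.
    q = np.rint((block - approx) / (2.0 * abs_err_bound)).astype(np.int32)
    return (U, s, Vt, q)

def decompress_block(payload, abs_err_bound):
    U, s, Vt, q = payload
    return U @ np.diag(s) @ Vt + q * (2.0 * abs_err_bound)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    block = rng.normal(size=(32, 32))
    bound = 1e-2
    recon = decompress_block(compress_block(block, rank=4, abs_err_bound=bound), bound)
    assert np.max(np.abs(block - recon)) <= bound + 1e-12
    print("max error:", np.max(np.abs(block - recon)))
```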
- Boosting Differentiable Causal Discovery via Adaptive Sample Reweighting [62.23057729112182]
Differentiable score-based causal discovery methods learn a directed acyclic graph from observational data.
We propose ReScore, a model-agnostic framework that boosts causal discovery performance by dynamically learning adaptive sample weights for a reweighted score function; a toy reweighting sketch follows this entry.
arXiv Detail & Related papers (2023-03-06T14:49:59Z)
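ReScore's actual bilevel formulation is not given in the summary; as a rough illustration of adaptive sample reweighting inside a fitting score, the toy sketch below alternates between (a) weighted least squares under the current weights and (b) a softmax update that up-weights poorly-fit samples. The linear model, softmax weighting, and temperature are assumptions for illustration only.

```python
import numpy as np

def reweighted_fit(X, Y, steps=5, temp=1.0):
    """Toy adaptive sample reweighting. X: (n, k) features, Y: (n, m) targets."""
    n = X.shape[0]
    w = np.full(n, 1.0 / n)                        # start from uniform weights
    for _ in range(steps):
        Xw = X * w[:, None]                        # (a) solve X^T W X B = X^T W Y
        B = np.linalg.solve(Xw.T @ X + 1e-8 * np.eye(X.shape[1]), Xw.T @ Y)
        resid = np.linalg.norm(Y - X @ B, axis=1)  # (b) per-sample residual
        e = np.exp((resid - resid.max()) / temp)
        w = e / e.sum()                            # larger residual -> larger weight
    return B, w

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    X = rng.normal(size=(200, 3))
    Y = X @ rng.normal(size=(3, 2)) + 0.1 * rng.normal(size=(200, 2))
    B, w = reweighted_fit(X, Y)
    print(B.shape, w.min(), w.max())
```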
- Federated Latent Class Regression for Hierarchical Data [5.110894308882439]
Federated Learning (FL) allows a number of agents to participate in training a global machine learning model without disclosing locally stored data.
We propose a novel probabilistic model, Hierarchical Latent Class Regression (HLCR), and its extension to Federated Learning, FEDHLCR.
Our inference algorithm, derived from Bayesian theory, provides strong convergence guarantees and good robustness to overfitting. Experimental results show that FEDHLCR offers fast convergence even on non-IID datasets.
arXiv Detail & Related papers (2022-06-22T00:33:04Z)
- Accurate Discharge Coefficient Prediction of Streamlined Weirs by Coupling Linear Regression and Deep Convolutional Gated Recurrent Unit [2.4475596711637433]
The present study proposes data-driven modeling techniques, as an alternative to CFD simulation, to predict the discharge coefficient based on an experimental dataset.
It is found that the proposed three-layer hierarchical DL algorithm, consisting of a convolutional layer coupled with two subsequent GRU levels and hybridized with the LR method, leads to lower error metrics; a schematic sketch of this architecture follows.
arXiv Detail & Related papers (2022-04-12T01:59:36Z)
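The exact layer sizes and the way the LR component is blended are not given in the summary; the PyTorch sketch below shows one plausible reading of "convolutional layer coupled with two subsequent GRU levels, hybridized with LR": a Conv1d layer feeding a two-layer GRU, whose prediction is averaged with a linear-regression baseline. All dimensions and the 50/50 blend are assumptions.

```python
import torch
import torch.nn as nn

class ConvGRURegressor(nn.Module):
    """Conv1d -> two stacked GRU layers -> scalar head, per the summary."""
    def __init__(self, in_features, hidden=32):
        super().__init__()
        self.conv = nn.Conv1d(in_features, hidden, kernel_size=3, padding=1)
        self.gru = nn.GRU(hidden, hidden, num_layers=2, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                  # x: (batch, seq_len, in_features)
        h = torch.relu(self.conv(x.transpose(1, 2))).transpose(1, 2)
        out, _ = self.gru(h)
        return self.head(out[:, -1])       # predict from the last time step

def hybrid_predict(model, lr_weights, lr_bias, x):
    """Blend the DL prediction with a linear-regression baseline on the
    last time step's features (50/50 blend assumed for illustration)."""
    dl = model(x)
    lr = x[:, -1, :] @ lr_weights + lr_bias
    return 0.5 * dl + 0.5 * lr

if __name__ == "__main__":
    model = ConvGRURegressor(in_features=4)
    x = torch.randn(8, 20, 4)
    w, b = torch.randn(4, 1), torch.zeros(1)
    print(hybrid_predict(model, w, b, x).shape)   # torch.Size([8, 1])
```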
- A Sparsity-promoting Dictionary Model for Variational Autoencoders [16.61511959679188]
Structuring the latent space in deep generative models is important to yield more expressive models and interpretable representations.
We propose a simple yet effective methodology to structure the latent space via a sparsity-promoting dictionary model; a minimal coding sketch follows this entry.
arXiv Detail & Related papers (2022-03-29T17:13:11Z)
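The summary does not spell out how the dictionary enters the VAE. One common construction, sketched below, represents each latent z as approximately a @ Dᵀ with a sparse code a obtained by a few ISTA (soft-thresholding) steps; the code penalty is then added to the VAE training loss. The dictionary size, step size, and penalty weight are assumptions, not the paper's values.

```python
import torch

def soft_threshold(x, lam):
    return torch.sign(x) * torch.clamp(torch.abs(x) - lam, min=0.0)

def sparse_code(z, D, lam=0.1, step=0.1, n_iter=20):
    """A few ISTA steps for  min_a 0.5*||z - a D^T||^2 + lam*||a||_1.
    z: (batch, d) latents, D: (d, k) dictionary, returns a: (batch, k)."""
    a = torch.zeros(z.shape[0], D.shape[1])
    for _ in range(n_iter):
        grad = (a @ D.T - z) @ D           # gradient of the quadratic term
        a = soft_threshold(a - step * grad, step * lam)
    return a

# Hypothetical usage inside a VAE loss: reconstruct the latent through the
# dictionary and penalize the deviation, e.g.
#   a = sparse_code(z, D)
#   loss = recon + kl + beta * (z - a @ D.T).pow(2).mean()
```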
- Distributionally Robust Multi-Output Regression Ranking [3.9318191265352196]
We introduce a new listwise learning-to-rank model called Distributionally Robust Multi-output Regression Ranking (DRMRR).
DRMRR uses a Distributionally Robust Optimization framework to minimize a multi-output loss function under the most adverse distributions in the neighborhood of the empirical data distribution.
Our experiments were conducted on two real-world applications: medical document retrieval and drug response prediction. A sketch of the underlying distributionally robust objective follows this entry.
arXiv Detail & Related papers (2021-09-27T05:19:27Z)
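DRMRR's precise neighborhood and multi-output loss are not in the summary, so the sketch below only shows the generic DRO computation for a KL-divergence ball: by duality, the worst-case expected loss over { q : KL(q || p̂) ≤ ρ } equals min over τ > 0 of τ·log E_p̂[exp(ℓ/τ)] + τρ, which a grid search over τ can evaluate. The per-sample losses and the radius ρ are placeholders.

```python
import numpy as np

def kl_dro_loss(losses, rho, taus=np.logspace(-3, 3, 200)):
    """Worst-case expected loss over a KL ball around the empirical
    distribution, via the dual:
      min_{tau>0}  tau * log mean(exp(losses / tau)) + tau * rho."""
    losses = np.asarray(losses, dtype=float)
    best = np.inf
    for tau in taus:
        m = losses.max()                   # stabilized log-mean-exp
        val = m + tau * np.log(np.mean(np.exp((losses - m) / tau))) + tau * rho
        best = min(best, val)
    return best

if __name__ == "__main__":
    per_sample = np.array([0.1, 0.2, 0.15, 2.0])   # one adverse sample
    print("empirical mean:", per_sample.mean())
    print("DRO (rho=0.1): ", kl_dro_loss(per_sample, rho=0.1))  # >= mean
```

As ρ shrinks to zero the robust loss recovers the empirical mean; as ρ grows it approaches the maximum per-sample loss, reflecting increasingly adverse distributions.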
- Autoregressive Score Matching [113.4502004812927]
We propose autoregressive conditional score models (AR-CSM), where we parameterize the joint distribution in terms of the derivatives of univariate log-conditionals (scores).
For AR-CSM models, the divergence between the data and model distributions can be computed and optimized efficiently, requiring no expensive sampling or adversarial training.
We show with extensive experimental results that it can be applied to density estimation on synthetic data, image generation, image denoising, and training latent variable models with implicit encoders; a toy score-matching sketch follows this entry.
arXiv Detail & Related papers (2020-10-24T07:01:24Z)
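AR-CSM's composite score-matching divergence is not reproduced here. As a toy illustration of the autoregressive parameterization, the sketch below models each univariate conditional score s(x_d | x_{<d}) with a small MLP and trains it with standard denoising score matching as a stand-in objective; the network sizes and noise level are assumptions.

```python
import torch
import torch.nn as nn

class ARScore(nn.Module):
    """One small MLP per dimension d, estimating the univariate conditional
    score s(x_d | x_{<d}); together these parameterize the joint density."""
    def __init__(self, dim, hidden=64):
        super().__init__()
        self.nets = nn.ModuleList(
            nn.Sequential(nn.Linear(d + 1, hidden), nn.ReLU(), nn.Linear(hidden, 1))
            for d in range(dim)
        )

    def forward(self, x):                   # x: (batch, dim)
        outs = [net(x[:, : d + 1]) for d, net in enumerate(self.nets)]
        return torch.cat(outs, dim=1)       # (batch, dim) conditional scores

def dsm_loss(model, x, sigma=0.1):
    """Denoising score matching (a stand-in for AR-CSM's divergence): the
    score of perturbed data should point back toward the clean data."""
    noise = torch.randn_like(x)
    target = -noise / sigma
    return ((model(x + sigma * noise) - target) ** 2).mean()

if __name__ == "__main__":
    model = ARScore(dim=3)
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss = dsm_loss(model, torch.randn(128, 3))
    loss.backward()
    opt.step()
    print(float(loss))
```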
This list is automatically generated from the titles and abstracts of the papers on this site.