ANIMC: A Soft Framework for Auto-weighted Noisy and Incomplete
Multi-view Clustering
- URL: http://arxiv.org/abs/2011.10331v3
- Date: Tue, 28 Sep 2021 04:02:45 GMT
- Title: ANIMC: A Soft Framework for Auto-weighted Noisy and Incomplete
Multi-view Clustering
- Authors: Xiang Fang, Yuchong Hu, Pan Zhou, and Dapeng Oliver Wu
- Abstract summary: We propose a novel Auto-weighted Noisy and Incomplete Multi-view Clustering framework (ANIMC) via a soft auto-weighted strategy and a doubly soft regularized regression model.
ANIMC has three unique advantages: 1) it is a soft algorithm that adjusts our framework to different scenarios, thereby improving its generalization ability; 2) it automatically learns a proper weight for each view, thereby reducing the influence of noise; and 3) it aligns the same instances in different views, thereby decreasing the impact of missing instances.
- Score: 59.77141155608009
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Multi-view clustering has wide applications in many image processing
scenarios. In these scenarios, the original image data often contain missing
instances and noise, which most multi-view clustering methods ignore. Missing
instances may make these methods difficult to apply directly, and noise can
lead to unreliable clustering results. In this paper, we propose a novel
Auto-weighted Noisy and Incomplete Multi-view Clustering framework (ANIMC) via
a soft auto-weighted strategy and a doubly soft regularized regression
model. Firstly, by designing an adaptive semi-regularized nonnegative matrix
factorization (adaptive semi-RNMF), the soft auto-weighted strategy assigns a
proper weight to each view and adds a soft boundary to balance the influence of
noise and incompleteness. Secondly, by proposing the θ-norm, the doubly soft
regularized regression model adjusts the sparsity of our model by choosing
different values of θ. Compared with existing methods, ANIMC has three unique
advantages: 1) it is a soft algorithm that adjusts our framework to different
scenarios, thereby improving its generalization ability; 2) it automatically
learns a proper weight for each view, thereby reducing the influence of noise;
3) it performs doubly soft regularized regression that aligns the same
instances in different views, thereby decreasing the impact of missing
instances. Extensive experimental results demonstrate its advantages over
other state-of-the-art methods.
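The abstract does not give the exact weight-update rule, but the soft auto-weighted idea (noisy views get small weights automatically) can be sketched with a common auto-weighting heuristic from the multi-view literature: weight each view inversely to its reconstruction error. All function and variable names below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def auto_view_weights(views, U, V_list, eps=1e-8):
    """Assign each view a weight inversely related to its NMF
    reconstruction error (a common auto-weighting heuristic;
    ANIMC's actual update additionally involves a soft boundary)."""
    errors = []
    for X, V in zip(views, V_list):
        errors.append(np.linalg.norm(X - U @ V, "fro") ** 2)
    # w_v proportional to 1 / (2 * sqrt(error_v)): views with large
    # reconstruction errors (noisy views) receive small weights.
    w = 1.0 / (2.0 * np.sqrt(np.array(errors)) + eps)
    return w / w.sum()  # normalize so the weights sum to 1

rng = np.random.default_rng(0)
U = np.abs(rng.normal(size=(20, 4)))
V1 = np.abs(rng.normal(size=(4, 30)))
V2 = np.abs(rng.normal(size=(4, 30)))
clean = U @ V1                                     # well-explained view
noisy = U @ V2 + 5.0 * rng.normal(size=(20, 30))   # heavily corrupted view
w = auto_view_weights([clean, noisy], U, [V1, V2])
print(w)  # the clean view receives the larger weight
```

The point of the rule is that no weight hyperparameter has to be tuned per view; the weights fall out of the current factorization residuals at each iteration.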
Related papers
- Multimodal Unlearnable Examples: Protecting Data against Multimodal Contrastive Learning [53.766434746801366]
Multimodal contrastive learning (MCL) has shown remarkable advances in zero-shot classification by learning from millions of image-caption pairs crawled from the Internet.
Hackers may exploit image-text data for model training without authorization, potentially including personal and privacy-sensitive information.
Recent works propose generating unlearnable examples by adding imperceptible perturbations to training images to build shortcuts for protection.
We propose Multi-step Error Minimization (MEM), a novel optimization process for generating multimodal unlearnable examples.
arXiv Detail & Related papers (2024-07-23T09:00:52Z)
- An Adaptive Cost-Sensitive Learning and Recursive Denoising Framework for Imbalanced SVM Classification [12.986535715303331]
Category imbalance is one of the most common and important issues in the domain of classification.
Emotion classification models trained on imbalanced datasets easily lead to unreliable predictions.
arXiv Detail & Related papers (2024-03-13T09:43:14Z)
- Dynamic Weighted Combiner for Mixed-Modal Image Retrieval [8.683144453481328]
Mixed-Modal Image Retrieval (MMIR) as a flexible search paradigm has attracted wide attention.
Previous approaches achieve only limited performance due to two critical factors.
We propose a Dynamic Weighted Combiner (DWC) to tackle the above challenges.
arXiv Detail & Related papers (2023-12-11T07:36:45Z)
- DealMVC: Dual Contrastive Calibration for Multi-view Clustering [78.54355167448614]
We propose a novel Dual contrastive calibration network for Multi-View Clustering (DealMVC).
We first design a fusion mechanism to obtain a global cross-view feature. Then, a global contrastive calibration loss is proposed by aligning the view feature similarity graph and the high-confidence pseudo-label graph.
During the training procedure, the interacted cross-view feature is jointly optimized at both local and global levels.
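The global contrastive calibration loss is described only as aligning the view-feature similarity graph with a high-confidence pseudo-label graph; a minimal stand-in for that alignment term (all names and the mean-squared penalty are assumptions, not DealMVC's actual loss) could look like:

```python
import numpy as np

def calibration_loss(features, pseudo_labels):
    """Align the cosine-similarity graph of fused features with the
    pseudo-label agreement graph (1 where labels match, else 0).
    A simplified stand-in for DealMVC's contrastive calibration."""
    Z = features / np.linalg.norm(features, axis=1, keepdims=True)
    sim_graph = Z @ Z.T                        # view-feature similarity graph
    label_graph = (pseudo_labels[:, None] == pseudo_labels[None, :]).astype(float)
    return float(np.mean((sim_graph - label_graph) ** 2))  # alignment penalty

rng = np.random.default_rng(1)
feats = rng.normal(size=(6, 8))
labels = np.array([0, 0, 1, 1, 2, 2])
loss = calibration_loss(feats, labels)
print(loss)  # positive for random features; 0 at perfect alignment
```

The loss vanishes exactly when same-cluster samples have identical (unit-normalized) features and different-cluster samples are orthogonal, which is the alignment the calibration is pushing toward.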
arXiv Detail & Related papers (2023-08-17T14:14:28Z)
- Adaptive Fine-Grained Sketch-Based Image Retrieval [100.90633284767205]
Recent focus on Fine-Grained Sketch-Based Image Retrieval has shifted towards generalising a model to new categories.
In real-world applications, a trained FG-SBIR model is often applied to both new categories and different human sketchers.
We introduce a novel model-agnostic meta-learning (MAML) based framework with several key modifications.
arXiv Detail & Related papers (2022-07-04T21:07:20Z)
- A Lagrangian Duality Approach to Active Learning [119.36233726867992]
We consider the batch active learning problem, where only a subset of the training data is labeled.
We formulate the learning problem using constrained optimization, where each constraint bounds the performance of the model on labeled samples.
We show, via numerical experiments, that our proposed approach performs similarly to or better than state-of-the-art active learning methods.
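The constrained formulation (each constraint bounding model performance on labeled samples) can be illustrated on the smallest possible instance: a quadratic objective with one linear constraint, solved through its Lagrangian. This is a toy problem to show the mechanics of the duality approach, not the paper's actual formulation.

```python
import numpy as np

# Toy Lagrangian-duality instance: minimize 0.5 * ||theta||^2 subject to
# a @ theta = b (a stand-in for a single performance constraint on
# labeled data; the paper uses one constraint per labeled sample).
a = np.array([3.0, 4.0])
b = 10.0

# Stationarity of L(theta, lam) = 0.5*||theta||^2 + lam * (a @ theta - b)
# gives theta = -lam * a; substituting into the constraint fixes lam.
lam = -b / (a @ a)                # lam = -10 / 25 = -0.4
theta = -lam * a                  # theta = [1.2, 1.6]

print(theta, a @ theta)           # constraint a @ theta = b holds exactly
```

Solving for the multiplier first and recovering the primal variable from stationarity is exactly the dual viewpoint the paper scales up to batch active learning with many per-sample constraints.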
arXiv Detail & Related papers (2022-02-08T19:18:49Z)
- Non-Linear Fusion for Self-Paced Multi-View Clustering [9.21606544185194]
Multi-view clustering (MVC) deals with assigning weights to each view and then combining them linearly.
In this paper, we propose Non-linear Fusion for Self-Paced Multi-View Clustering (NSMVC), which is totally different from the conventional linear MVC.
Experimental results on various real-world data sets demonstrate the effectiveness of the proposed method.
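The conventional linear fusion that NSMVC departs from is simply a weighted sum of per-view similarity graphs, S = Σ_v w_v S_v. A toy illustration of that baseline (names and example graphs assumed):

```python
import numpy as np

def linear_fusion(graphs, weights):
    """Conventional linear MVC fusion: a weighted sum of per-view
    similarity graphs, S = sum_v w_v * S_v. NSMVC replaces this
    linear combination with a non-linear one."""
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()            # convex combination
    return sum(w * S for w, S in zip(weights, graphs))

S1 = np.array([[1.0, 0.8], [0.8, 1.0]])          # view 1 similarity graph
S2 = np.array([[1.0, 0.2], [0.2, 1.0]])          # view 2 similarity graph
fused = linear_fusion([S1, S2], [3.0, 1.0])      # weights 0.75 and 0.25
print(fused)                                     # off-diagonal: 0.65
```

Because the combination is linear, a single badly scaled or noisy view shifts every fused entry proportionally, which is the limitation that motivates a non-linear fusion.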
arXiv Detail & Related papers (2021-04-19T12:53:23Z)
- Auto-weighted Multi-view Feature Selection with Graph Optimization [90.26124046530319]
We propose a novel unsupervised multi-view feature selection model based on graph learning.
The contributions are threefold: (1) during the feature selection procedure, the consensus similarity graph shared by different views is learned.
Experiments on various datasets demonstrate the superiority of the proposed method compared with the state-of-the-art methods.
arXiv Detail & Related papers (2021-04-11T03:25:25Z)
- Multi-frame Super-resolution from Noisy Data [6.414055487487486]
We show the usefulness of two adaptive regularisers based on anisotropic diffusion ideas.
We also introduce a novel non-local one with one-sided differences and superior performance.
Surprisingly, the evaluation in a practically relevant noisy scenario produces a different ranking than the one in the noise-free setting.
arXiv Detail & Related papers (2021-03-25T12:07:08Z)
- Double Self-weighted Multi-view Clustering via Adaptive View Fusion [6.061606963894415]
We propose a novel multi-view clustering framework, Double Self-weighted Multi-view Clustering (DSMC).
DSMC performs double self-weighted operations to remove redundant features and noises from each graph, thereby obtaining robust graphs.
Experiments on six real-world datasets demonstrate its advantages over other state-of-the-art multi-view clustering methods.
arXiv Detail & Related papers (2020-11-20T13:23:01Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.