ShuffleMix: Improving Representations via Channel-Wise Shuffle of
Interpolated Hidden States
- URL: http://arxiv.org/abs/2305.18684v1
- Date: Tue, 30 May 2023 01:53:34 GMT
- Title: ShuffleMix: Improving Representations via Channel-Wise Shuffle of
Interpolated Hidden States
- Authors: Kangjun Liu, Ke Chen, Lihua Guo, Yaowei Wang, Kui Jia
- Abstract summary: This paper introduces a novel concept of ShuffleMix -- Shuffle of Mixed hidden features.
Our ShuffleMix method favors a simple linear shuffle of randomly selected feature channels for feature mixup in-between training samples, leveraging interpolated supervision signals.
Compared to its direct competitor, Manifold Mixup, the proposed ShuffleMix gains superior generalization.
- Score: 41.628854241259226
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Mixup-style data augmentation algorithms have been widely adopted in various
tasks as implicit network regularization on representation learning to improve
model generalization, achieved by a linear interpolation of labeled samples in
input or feature space as well as in target space. Inspired by the good
robustness of alternative dropout strategies against over-fitting on limited
patterns of training samples, this paper introduces the novel concept of
ShuffleMix -- Shuffle of Mixed hidden features, which can be interpreted as a
kind of dropout operation in feature space. Specifically, our ShuffleMix method
favors a simple linear shuffle of randomly selected feature channels for
feature mixup in-between training samples to leverage semantically interpolated
supervision signals, and can be extended to a generalized shuffle operation by
additionally combining linear interpolations of intra-channel features.
Compared to its direct competitor in feature augmentation, Manifold Mixup, the
proposed ShuffleMix gains superior generalization, owing to imposing more
flexible and smooth constraints on generated samples and achieving the
regularization effect of channel-wise feature dropout. Experimental results on
several public benchmark datasets for single-label and multi-label visual
classification confirm that our method consistently improves representations
over state-of-the-art mixup augmentation.
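As a reading aid, below is a minimal PyTorch sketch of the channel-wise shuffle described in the abstract: two paired samples exchange a randomly selected subset of hidden-feature channels, and the supervision signal is interpolated according to the fraction of channels kept. This is not the authors' released implementation; the batch-internal pairing, the Beta-sampled mixing ratio, and the choice of hidden layer are assumptions.

```python
import torch

def shufflemix(h, y, alpha=0.5):
    """Sketch of a ShuffleMix-style channel-wise shuffle of hidden states.

    h: hidden features of shape (batch, channels, ...) from an intermediate layer.
    y: one-hot (or soft) labels of shape (batch, num_classes).
    Each sample is paired with another one in the batch via a random permutation;
    a randomly selected subset of channels is swapped between the two, and the
    labels are mixed in proportion to the fraction of channels kept.
    """
    b, c = h.size(0), h.size(1)
    perm = torch.randperm(b, device=h.device)                 # pairing partner per sample
    lam = float(torch.distributions.Beta(alpha, alpha).sample())  # assumed Beta-sampled ratio
    keep = int(round(lam * c))                                 # number of channels kept
    idx = torch.randperm(c, device=h.device)                   # random channel selection
    mask = torch.zeros(c, device=h.device)
    mask[idx[:keep]] = 1.0                                     # 1 = keep own channel, 0 = take partner's
    mask = mask.view(1, c, *([1] * (h.dim() - 2)))
    h_mix = mask * h + (1.0 - mask) * h[perm]                  # channel-wise shuffle of hidden states
    lam_eff = keep / c
    y_mix = lam_eff * y + (1.0 - lam_eff) * y[perm]            # interpolated supervision signal
    return h_mix, y_mix
```

Note that replacing the binary mask with a constant value lam for every channel would recover a Manifold-Mixup-style linear interpolation of hidden states; the generalized variant mentioned in the abstract combines the channel shuffle with such intra-channel linear interpolation.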
Related papers
- Infinite Class Mixup [26.48101652432502]
Mixup is a strategy for training deep networks in which additional samples are generated by interpolating the inputs and labels of training pairs (see the sketch after this list).
This paper instead mixes the classifiers directly, rather than mixing the labels of each mixed pair.
We show that Infinite Class Mixup outperforms standard Mixup and variants such as RegMixup and Remix on balanced, long-tailed, and data-constrained benchmarks.
arXiv Detail & Related papers (2023-05-17T15:27:35Z)
- MixupE: Understanding and Improving Mixup from Directional Derivative Perspective [86.06981860668424]
We propose an improved version of Mixup, theoretically justified to deliver better generalization performance than the vanilla Mixup.
Our results show that the proposed method improves Mixup across multiple datasets using a variety of architectures.
arXiv Detail & Related papers (2022-12-27T07:03:52Z)
- Harnessing Hard Mixed Samples with Decoupled Regularizer [69.98746081734441]
Mixup is an efficient data augmentation approach that improves the generalization of neural networks by smoothing the decision boundary with mixed data.
In this paper, we propose an efficient mixup objective function with a decoupled regularizer, named Decoupled Mixup (DM).
DM can adaptively utilize hard mixed samples to mine discriminative features without losing the original smoothness of mixup.
arXiv Detail & Related papers (2022-03-21T07:12:18Z)
- SMILE: Self-Distilled MIxup for Efficient Transfer LEarning [42.59451803498095]
In this work, we propose SMILE - Self-Distilled Mixup for EffIcient Transfer LEarning.
With mixed images as inputs, SMILE regularizes the outputs of CNN feature extractors to learn from the mixed feature vectors of inputs.
The triple regularizer balances the mixup effects in both feature and label spaces while bounding the linearity in-between samples for pre-training tasks.
arXiv Detail & Related papers (2021-03-25T16:02:21Z)
- Co-Mixup: Saliency Guided Joint Mixup with Supermodular Diversity [15.780905917870427]
We propose a new perspective on batch mixup and formulate the optimal construction of a batch of mixup data.
We also propose an efficient iterative submodular optimization algorithm based on modular approximation to compute the mixup for each minibatch.
Our experiments show the proposed method achieves state-of-the-art generalization, calibration, and weakly supervised localization results.
arXiv Detail & Related papers (2021-02-05T09:12:02Z)
- Spatially Adaptive Inference with Stochastic Feature Sampling and Interpolation [72.40827239394565]
We propose to compute features only at sparsely sampled locations.
We then densely reconstruct the feature map with an efficient procedure.
The presented network is experimentally shown to save substantial computation while maintaining accuracy over a variety of computer vision tasks.
arXiv Detail & Related papers (2020-03-19T15:36:31Z)
- Semi-Supervised Learning with Normalizing Flows [54.376602201489995]
FlowGMM is an end-to-end approach to generative semi-supervised learning with normalizing flows.
We show promising results on a wide range of applications, including AG-News and Yahoo Answers text data.
arXiv Detail & Related papers (2019-12-30T17:36:33Z)
- Patch-level Neighborhood Interpolation: A General and Effective Graph-based Regularization Strategy [77.34280933613226]
We propose a general regularizer called Patch-level Neighborhood Interpolation (Pani) that introduces non-local representations into the computation of networks.
Our proposal explicitly constructs patch-level graphs in different layers and then linearly interpolates neighborhood patch features, serving as a general and effective regularization strategy.
arXiv Detail & Related papers (2019-11-21T06:31:59Z)
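For reference, the vanilla input-space Mixup that several of the entries above build on (as summarized in the Infinite Class Mixup item) interpolates the inputs and labels of a training pair. A minimal sketch, with the Beta-distributed coefficient and one-hot labels as assumptions, is:

```python
import torch

def input_mixup(x1, y1, x2, y2, alpha=0.2):
    """Minimal sketch of vanilla input-space Mixup.

    The inputs and (one-hot) labels of a training pair are linearly interpolated
    with a Beta-distributed coefficient; alpha=0.2 is a common but assumed default.
    """
    lam = float(torch.distributions.Beta(alpha, alpha).sample())
    x_mix = lam * x1 + (1.0 - lam) * x2   # interpolate inputs
    y_mix = lam * y1 + (1.0 - lam) * y2   # interpolate labels
    return x_mix, y_mix
```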
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.