Aggregated Learning: A Vector-Quantization Approach to Learning Neural Network Classifiers
- URL: http://arxiv.org/abs/2001.03955v3
- Date: Tue, 1 Jun 2021 16:42:00 GMT
- Title: Aggregated Learning: A Vector-Quantization Approach to Learning Neural Network Classifiers
- Authors: Masoumeh Soflaei, Hongyu Guo, Ali Al-Bashabsheh, Yongyi Mao, Richong Zhang
- Abstract summary: We show that IB learning is, in fact, equivalent to a special class of the quantization problem.
We propose a novel learning framework, "Aggregated Learning", for classification with neural network models.
The effectiveness of this framework is verified through extensive experiments on standard image recognition and text classification tasks.
- Score: 48.11796810425477
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We consider the problem of learning a neural network classifier. Under the
information bottleneck (IB) principle, we associate with this classification
problem a representation learning problem, which we call "IB learning". We show
that IB learning is, in fact, equivalent to a special class of the quantization
problem. The classical results in rate-distortion theory then suggest that IB
learning can benefit from a "vector quantization" approach, namely,
simultaneously learning the representations of multiple input objects. Such an
approach, assisted by variational techniques, results in a novel learning
framework, "Aggregated Learning", for classification with neural network
models. In this framework, several objects are jointly classified by a single
neural network. The effectiveness of this framework is verified through
extensive experiments on standard image recognition and text classification
tasks.
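
As a concrete illustration, here is a minimal PyTorch-style sketch of the aggregated-classification idea: n input objects are encoded, their representations are combined, and a single network jointly emits all n label predictions. The class name, layer sizes, and group size are illustrative assumptions, not the authors' architecture.

```python
# Minimal sketch of Aggregated Learning: n inputs are encoded, their
# representations are concatenated, and one joint head predicts all n
# labels at once. Sizes here are illustrative, not from the paper.
import torch
import torch.nn as nn

class AggregatedClassifier(nn.Module):
    def __init__(self, in_dim, num_classes, n_agg=4, hidden=256):
        super().__init__()
        self.n_agg = n_agg
        self.num_classes = num_classes
        self.encoder = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        # One joint head maps the aggregated representation to n_agg label sets.
        self.head = nn.Sequential(
            nn.Linear(n_agg * hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, n_agg * num_classes),
        )

    def forward(self, x):                  # x: (batch, n_agg, in_dim)
        z = self.encoder(x)                # (batch, n_agg, hidden)
        logits = self.head(z.flatten(1))   # (batch, n_agg * num_classes)
        return logits.view(-1, self.n_agg, self.num_classes)

model = AggregatedClassifier(in_dim=784, num_classes=10)
x = torch.randn(8, 4, 784)                # 8 groups of 4 objects each
y = torch.randint(0, 10, (8, 4))          # one label per object
loss = nn.CrossEntropyLoss()(model(x).flatten(0, 1), y.flatten())
loss.backward()
```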
Related papers
- Linking in Style: Understanding learned features in deep learning models [0.0]
Convolutional neural networks (CNNs) learn abstract features to perform object classification.
We propose an automatic method to visualize and systematically analyze learned features in CNNs.
arXiv Detail & Related papers (2024-09-25T12:28:48Z)
- Towards Scalable and Versatile Weight Space Learning [51.78426981947659]
This paper introduces the SANE approach to weight-space learning.
Our method extends the idea of hyper-representations towards sequential processing of subsets of neural network weights.
arXiv Detail & Related papers (2024-06-14T13:12:07Z)
- Graph Neural Networks for Learning Equivariant Representations of Neural Networks [55.04145324152541]
We propose to represent neural networks as computational graphs of parameters.
Our approach enables a single model to encode neural computational graphs with diverse architectures.
We showcase the effectiveness of our method on a wide range of tasks, including classification and editing of implicit neural representations.
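One concrete reading of "computational graphs of parameters" is a graph with one node per neuron (bias as node feature) and one edge per weight (weight as edge feature). The sketch below builds such a graph from an MLP's weight matrices; the function name and construction are our illustration, not the paper's exact encoding.

```python
# Hedged sketch: encode an MLP as a parameter graph with a node per
# neuron (bias as node feature) and an edge per weight (weight as edge
# feature). An illustration of the stated idea, not the paper's method.
import numpy as np

def mlp_to_graph(weights, biases):
    """weights: list of (out, in) arrays; biases: list of (out,) arrays."""
    sizes = [weights[0].shape[1]] + [b.shape[0] for b in biases]
    offsets = np.cumsum([0] + sizes)          # node index ranges per layer
    node_feat = np.concatenate([np.zeros(sizes[0])] + list(biases))
    edges, edge_feat = [], []
    for l, W in enumerate(weights):
        for j in range(W.shape[0]):           # target neuron
            for i in range(W.shape[1]):       # source neuron
                edges.append((offsets[l] + i, offsets[l + 1] + j))
                edge_feat.append(W[j, i])
    return node_feat, np.array(edges), np.array(edge_feat)

rng = np.random.default_rng(0)
W = [rng.normal(size=(5, 3)), rng.normal(size=(2, 5))]
b = [rng.normal(size=5), rng.normal(size=2)]
nodes, edges, efeat = mlp_to_graph(W, b)      # 10 nodes, 25 edges
```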
arXiv Detail & Related papers (2024-03-18T18:01:01Z)
- A didactic approach to quantum machine learning with a single qubit [68.8204255655161]
We focus on the case of learning with a single qubit, using data re-uploading techniques.
We implement the different proposed formulations on toy and real-world datasets using the qiskit quantum computing SDK.
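For intuition, here is a minimal numpy simulation of single-qubit data re-uploading (the paper itself uses qiskit): each layer re-encodes the input with a data rotation followed by a trainable rotation, and the class score is the Z expectation. The angles and layer count are illustrative assumptions.

```python
# Hedged numpy sketch of data re-uploading on one qubit: alternate a
# data-encoding RY(x) with a trainable RY(theta), then read out <Z>.
import numpy as np

def ry(t):
    return np.array([[np.cos(t / 2), -np.sin(t / 2)],
                     [np.sin(t / 2),  np.cos(t / 2)]])

def reupload_score(x, thetas):
    """x: scalar feature; thetas: per-layer trainable angles."""
    state = np.array([1.0, 0.0])              # |0>
    for theta in thetas:
        state = ry(theta) @ ry(x) @ state     # re-encode x, then learn
    p0 = abs(state[0]) ** 2
    return 2 * p0 - 1                          # <Z> in [-1, 1]

print(reupload_score(0.7, thetas=[0.1, -0.4, 0.9]))
```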
arXiv Detail & Related papers (2022-11-23T18:25:32Z)
- Deep Image Clustering with Contrastive Learning and Multi-scale Graph Convolutional Networks [58.868899595936476]
This paper presents a new deep clustering approach termed Image Clustering with Contrastive Learning and Multi-scale Graph Convolutional Networks (IcicleGCN).
Experiments on multiple image datasets demonstrate the superior clustering performance of IcicleGCN over the state-of-the-art.
arXiv Detail & Related papers (2022-07-14T19:16:56Z)
- On the Role of Neural Collapse in Transfer Learning [29.972063833424215]
Recent results show that representations learned by a single classifier over many classes are competitive on few-shot learning problems.
We show that neural collapse generalizes to new samples from the training classes, and -- more importantly -- to new classes as well.
arXiv Detail & Related papers (2021-12-30T16:36:26Z)
- Incremental Deep Neural Network Learning using Classification Confidence Thresholding [4.061135251278187]
Most modern neural networks for classification fail to take into account the concept of the unknown.
This paper proposes the Classification Confidence Threshold approach to prime neural networks for incremental learning.
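A minimal sketch of the thresholding idea, assuming a plain softmax confidence score: predictions below the threshold are flagged as unknown so they can be routed to incremental learning. The threshold value and function name are illustrative, not from the paper.

```python
# Hedged sketch: reject low-confidence softmax predictions as "unknown".
import numpy as np

def classify_with_threshold(logits, threshold=0.9, unknown=-1):
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    probs = e / e.sum(axis=1, keepdims=True)
    conf = probs.max(axis=1)
    preds = probs.argmax(axis=1)
    return np.where(conf >= threshold, preds, unknown)

logits = np.array([[4.0, 0.1, 0.2],    # confident -> class 0
                   [1.0, 0.9, 1.1]])   # uncertain -> unknown (-1)
print(classify_with_threshold(logits))
```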
arXiv Detail & Related papers (2021-06-21T22:46:28Z)
- Gone Fishing: Neural Active Learning with Fisher Embeddings [55.08537975896764]
There is an increasing need for active learning algorithms that are compatible with deep neural networks.
This article introduces BAIT, a practical, tractable, and high-performing active learning algorithm for neural networks.
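As a rough, simplified illustration of Fisher-based selection (not BAIT's actual batch objective, which optimizes over the Fisher information of the whole acquired batch): score each unlabeled point by the trace of its last-layer Fisher information, which for a softmax head reduces to (1 - ||p||^2) * ||x||^2, and label the top-k.

```python
# Hedged, pointwise simplification of Fisher-based active learning.
# For a linear softmax head, trace of the per-point Fisher information
# equals (1 - ||p||^2) * ||x||^2, with p the softmax output and x the
# penultimate embedding. Not BAIT's batch-level selection rule.
import numpy as np

def fisher_trace_scores(features, probs):
    """features: (n, d) embeddings; probs: (n, c) softmax outputs."""
    return (1.0 - (probs ** 2).sum(axis=1)) * (features ** 2).sum(axis=1)

def select_batch(features, probs, k):
    return np.argsort(-fisher_trace_scores(features, probs))[:k]

rng = np.random.default_rng(0)
feats = rng.normal(size=(100, 16))
logits = rng.normal(size=(100, 5))
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
print(select_batch(feats, probs, k=10))
```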
arXiv Detail & Related papers (2021-06-17T17:26:31Z)
- Learning Local Complex Features using Randomized Neural Networks for Texture Analysis [0.1474723404975345]
We present a new approach that combines a learning technique and the Complex Network (CN) theory for texture analysis.
This method takes advantage of the representation capacity of CN to model a texture image as a directed network.
This neural network has a single hidden layer and uses a fast learning algorithm, which is able to learn local CN patterns for texture characterization.
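A minimal sketch of this fast single-hidden-layer scheme in the style of randomized neural networks: the hidden weights stay random and fixed, and only the output layer is solved in closed form by ridge regression. The sizes and regularizer below are illustrative assumptions.

```python
# Hedged sketch of a randomized neural network: random fixed hidden
# layer, output weights fit by regularized least squares in one step.
import numpy as np

def train_randomized_nn(X, Y, hidden=64, reg=1e-3, seed=0):
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], hidden))   # random, never trained
    H = np.tanh(X @ W)                          # hidden activations
    # Closed-form ridge solution for the output weights.
    beta = np.linalg.solve(H.T @ H + reg * np.eye(hidden), H.T @ Y)
    return W, beta

def predict(X, W, beta):
    return np.tanh(X @ W) @ beta

rng = np.random.default_rng(1)
X, Y = rng.normal(size=(200, 10)), rng.normal(size=(200, 3))
W, beta = train_randomized_nn(X, Y)
print(predict(X, W, beta).shape)                # (200, 3)
```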
arXiv Detail & Related papers (2020-07-10T23:18:01Z)
- A Deep Neural Network for Audio Classification with a Classifier Attention Mechanism [2.3204178451683264]
We introduce a new attention-based neural network architecture, the Classifier-Attention-Based Convolutional Neural Network (CAB-CNN).
The algorithm uses a newly designed architecture consisting of a list of simple classifiers and an attention mechanism as a selector.
Compared to state-of-the-art algorithms, our algorithm achieves improvements of more than 10% on all selected test scores.
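A hedged sketch of the "list of simple classifiers plus attention selector" design: each sub-classifier emits logits, and an attention network produces weights that mix them. The module below is our illustration of that pattern, not the CAB-CNN architecture; all sizes are assumptions.

```python
# Hedged sketch: an attention network acts as a selector over a list of
# simple classifiers, mixing their logits with softmax weights.
import torch
import torch.nn as nn

class AttentionOverClassifiers(nn.Module):
    def __init__(self, in_dim, num_classes, n_classifiers=5):
        super().__init__()
        self.classifiers = nn.ModuleList(
            nn.Linear(in_dim, num_classes) for _ in range(n_classifiers))
        self.attend = nn.Linear(in_dim, n_classifiers)   # selector

    def forward(self, x):
        logits = torch.stack([c(x) for c in self.classifiers], dim=1)
        weights = torch.softmax(self.attend(x), dim=1)   # (b, n_classifiers)
        return (weights.unsqueeze(-1) * logits).sum(dim=1)

model = AttentionOverClassifiers(in_dim=128, num_classes=10)
out = model(torch.randn(4, 128))                          # (4, 10)
```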
arXiv Detail & Related papers (2020-06-14T21:29:44Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.