Towards Learning Convolutions from Scratch
- URL: http://arxiv.org/abs/2007.13657v1
- Date: Mon, 27 Jul 2020 16:13:13 GMT
- Title: Towards Learning Convolutions from Scratch
- Authors: Behnam Neyshabur
- Abstract summary: Convolution is one of the most essential components of architectures used in computer vision.
Current state-of-the-art architecture search algorithms use convolution as one of the existing modules rather than learning it from data.
We propose $\beta$-LASSO, a simple variant of the LASSO algorithm that learns architectures with local connections.
- Score: 34.71001535076825
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Convolution is one of the most essential components of architectures used in
computer vision. As machine learning moves towards reducing the expert bias and
learning it from data, a natural next step seems to be learning
convolution-like structures from scratch. This, however, has proven elusive.
For example, current state-of-the-art architecture search algorithms use
convolution as one of the existing modules rather than learning it from data.
In an attempt to understand the inductive bias that gives rise to convolutions,
we investigate minimum description length as a guiding principle and show that
in some settings, it can indeed be indicative of the performance of
architectures. To find architectures with small description length, we propose
$\beta$-LASSO, a simple variant of the LASSO algorithm that, when applied to
fully-connected networks for image classification tasks, learns architectures
with local connections and achieves state-of-the-art accuracies for training
fully-connected nets on CIFAR-10 (85.19%), CIFAR-100 (59.56%), and SVHN
(94.07%), bridging the gap between fully-connected and convolutional nets.
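The abstract does not spell out the $\beta$-LASSO update, but the name suggests an L1-regularized training rule with an extra thresholding parameter $\beta$. The sketch below is a minimal PyTorch illustration under that assumption: an SGD step with an L1 (LASSO) subgradient, followed by zeroing weights whose magnitude falls below $\beta \cdot \lambda$. The network, data, hyperparameter values, and the hard-threshold rule itself are illustrative assumptions, not the paper's exact recipe.

```python
# A minimal sketch of a beta-LASSO-style update in PyTorch. Assumptions
# (not taken from the abstract above): the update is plain SGD plus an L1
# subgradient, followed by hard-thresholding at beta * lam; the network,
# data, and hyperparameter values are placeholders.
import torch
import torch.nn as nn

def beta_lasso_step(model, loss, lr=0.01, lam=1e-5, beta=50.0):
    """One optimization step: SGD with an L1 penalty, then prune small weights."""
    model.zero_grad()
    loss.backward()
    with torch.no_grad():
        for p in model.parameters():
            # Gradient step with an L1 (LASSO) subgradient term.
            p -= lr * (p.grad + lam * p.sign())
            # Hard-threshold: zero out weights with magnitude below beta * lam.
            p *= (p.abs() >= beta * lam).float()

# Toy usage: a fully-connected net on flattened 32x32x3 inputs (CIFAR-sized).
model = nn.Sequential(nn.Flatten(), nn.Linear(3072, 512), nn.ReLU(), nn.Linear(512, 10))
x, y = torch.randn(8, 3, 32, 32), torch.randint(0, 10, (8,))
loss = nn.functional.cross_entropy(model(x), y)
beta_lasso_step(model, loss)
zeros = sum((p == 0).sum().item() for p in model.parameters())
total = sum(p.numel() for p in model.parameters())
print(f"fraction of zero weights after one step: {zeros / total:.3f}")
```

Sparsifying a fully-connected layer in this way is what would let locality emerge: each hidden unit ends up connected to only a small subset of input pixels.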
Related papers
- Pushing the Efficiency Limit Using Structured Sparse Convolutions [82.31130122200578]
We propose Structured Sparse Convolution (SSC), which leverages the inherent structure in images to reduce the parameters in the convolutional filter.
We show that SSC is a generalization of commonly used layers (depthwise, groupwise, and pointwise convolution) in efficient architectures; a parameter-count sketch of these special cases appears after this list.
Architectures based on SSC achieve state-of-the-art performance compared to baselines on CIFAR-10, CIFAR-100, Tiny-ImageNet, and ImageNet classification benchmarks.
arXiv Detail & Related papers (2022-10-23T18:37:22Z)
- FlowNAS: Neural Architecture Search for Optical Flow Estimation [65.44079917247369]
We propose a neural architecture search method named FlowNAS to automatically find the better encoder architecture for flow estimation task.
Experimental results show that the discovered architecture with the weights inherited from the super-network achieves 4.67% F1-all error on KITTI.
arXiv Detail & Related papers (2022-07-04T09:05:25Z)
- Towards Disentangling Information Paths with Coded ResNeXt [11.884259630414515]
We take a novel approach to enhance the transparency of the function of the whole network.
We propose a neural network architecture for classification, in which the information that is relevant to each class flows through specific paths.
arXiv Detail & Related papers (2022-02-10T21:45:49Z)
- DAFormer: Improving Network Architectures and Training Strategies for Domain-Adaptive Semantic Segmentation [99.88539409432916]
We study the unsupervised domain adaptation (UDA) process.
We propose a novel UDA method, DAFormer, based on the benchmark results.
DAFormer significantly improves the state-of-the-art performance by 10.8 mIoU for GTA->Cityscapes and 5.4 mIoU for Synthia->Cityscapes.
arXiv Detail & Related papers (2021-11-29T19:00:46Z)
- The Nonlinearity Coefficient -- A Practical Guide to Neural Architecture Design [3.04585143845864]
We develop methods that can predict, without any training, whether an architecture will achieve a relatively high test or training error on a task after training.
We then go on to explain the error in terms of the architecture definition itself and develop tools for modifying the architecture.
Our first major contribution is to show that the 'degree of nonlinearity' of a neural architecture is a key causal driver behind its performance.
arXiv Detail & Related papers (2021-05-25T20:47:43Z)
- Multi-objective Neural Architecture Search with Almost No Training [9.93048700248444]
We propose an effective alternative, dubbed Random-Weight Evaluation (RWE), to rapidly estimate the performance of network architectures.
RWE reduces the computational cost of evaluating an architecture from hours to seconds.
When integrated within an evolutionary multi-objective algorithm, RWE obtains a set of efficient architectures with state-of-the-art performance on CIFAR-10 in less than two hours of search on a single GPU.
arXiv Detail & Related papers (2020-11-27T07:39:17Z)
- Fast Few-Shot Classification by Few-Iteration Meta-Learning [173.32497326674775]
We introduce a fast optimization-based meta-learning method for few-shot classification.
Our strategy enables important aspects of the base learner objective to be learned during meta-training.
We perform a comprehensive experimental analysis, demonstrating the speed and effectiveness of our approach.
arXiv Detail & Related papers (2020-10-01T15:59:31Z)
- A Semi-Supervised Assessor of Neural Architectures [157.76189339451565]
We employ an auto-encoder to discover meaningful representations of neural architectures.
A graph convolutional neural network is introduced to predict the performance of architectures.
arXiv Detail & Related papers (2020-05-14T09:02:33Z)
- Image Matching across Wide Baselines: From Paper to Practice [80.9424750998559]
We introduce a comprehensive benchmark for local features and robust estimation algorithms.
Our pipeline's modular structure allows easy integration, configuration, and combination of different methods.
We show that with proper settings, classical solutions may still outperform the perceived state of the art.
arXiv Detail & Related papers (2020-03-03T15:20:57Z)
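As a companion to the Structured Sparse Convolution entry above (the sketch referenced there): SSC itself is not reproduced here, but the depthwise and pointwise layers it is said to generalize are standard PyTorch modules, and the comparison below shows the parameter savings they offer over a dense convolution. The channel and kernel sizes are arbitrary illustrative choices, not taken from the SSC paper.

```python
# Parameter counts for a dense 3x3 convolution versus the depthwise +
# pointwise (depthwise-separable) pair that SSC generalizes. The channel
# sizes here are illustrative assumptions, not values from the SSC paper.
import torch.nn as nn

c_in, c_out, k = 64, 128, 3

dense = nn.Conv2d(c_in, c_out, k, padding=1)                  # full 3x3 convolution
depthwise = nn.Conv2d(c_in, c_in, k, padding=1, groups=c_in)  # one 3x3 filter per channel
pointwise = nn.Conv2d(c_in, c_out, 1)                         # 1x1 channel mixing

count = lambda m: sum(p.numel() for p in m.parameters())
print(f"dense 3x3:             {count(dense):,} params")      # 64*128*9 + 128 = 73,856
print(f"depthwise + pointwise: {count(depthwise) + count(pointwise):,} params")  # 8,960
```

The groupwise case sits between these two extremes: `groups=g` splits the channels into `g` independent convolutions, reducing the weight count by a factor of `g`.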
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.