Automatic Cross-Domain Transfer Learning for Linear Regression
- URL: http://arxiv.org/abs/2005.04088v1
- Date: Fri, 8 May 2020 15:05:37 GMT
- Title: Automatic Cross-Domain Transfer Learning for Linear Regression
- Authors: Liu Xinshun, He Xin, Mao Hui, Liu Jing, Lai Weizhong, Ye Qingwen
- Abstract summary: This paper extends the capability of transfer learning for linear regression problems.
For normal datasets, we assume that some latent domain information is available for transfer learning.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Transfer learning research attempts to make model induction transferable
across different domains. Conventional methods assume that the domain to which
each instance belongs is known. This paper extends the capability of transfer
learning for linear regression problems to
situations where the domain information is uncertain or unknown; in fact, the
framework can be extended to classification problems. For normal datasets, we
assume that some latent domain information is available for transfer learning.
The instances in each domain are characterized by different parameters. We obtain
this domain information from the distribution of the regression coefficients
corresponding to the explanatory variable $x$ as well as the response variable
$y$ based on a Dirichlet process, which is more reasonable than assuming known
domain labels. As a result, we
transfer not only variable $x$ as usual but also variable $y$, which is
challenging since the testing data have no response value. Previous work mainly
overcomes the problem via pseudo-labelling based on transductive learning,
which introduces serious bias. We provide a novel framework for analysing the
problem and considering this general situation: the joint distribution of
variable $x$ and variable $y$. Furthermore, our method controls the bias well
compared with previous work. We perform linear regression on the new feature
space that consists of the different latent domains and the target domain, which is
derived from the testing data. The experimental results show that the proposed model
performs well on real datasets.
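The abstract only sketches the approach, so the following is a minimal, hedged illustration of the general idea rather than the authors' implementation: it substitutes a truncated variational Dirichlet-process Gaussian mixture (scikit-learn's BayesianGaussianMixture) for the paper's Dirichlet process over regression coefficients, fits it on the joint (x, y) of the training data, assigns test instances to latent domains using only the x-marginal of each component (since test responses are unobserved), and fits a linear model on responsibility-weighted domain features. All function and variable names here are illustrative assumptions.
```python
# Hedged sketch of latent-domain transfer for linear regression.
# NOT the authors' code: the DP over regression coefficients is replaced by a
# truncated variational DP Gaussian mixture over the joint (x, y).
import numpy as np
from scipy.stats import multivariate_normal
from sklearn.linear_model import Ridge
from sklearn.mixture import BayesianGaussianMixture


def fit_latent_domains(X_train, y_train, n_domains_max=10, seed=0):
    """Fit a truncated DP Gaussian mixture on the joint (x, y) distribution."""
    Z = np.column_stack([X_train, y_train])
    return BayesianGaussianMixture(
        n_components=n_domains_max,
        weight_concentration_prior_type="dirichlet_process",
        covariance_type="full",
        random_state=seed,
    ).fit(Z)


def x_marginal_responsibilities(dp, X):
    """Latent-domain responsibilities for instances whose response is unknown:
    evaluate each Gaussian component on its x-marginal only."""
    d = X.shape[1]
    log_p = np.empty((X.shape[0], dp.n_components))
    for k in range(dp.n_components):
        log_p[:, k] = np.log(dp.weights_[k] + 1e-12) + multivariate_normal.logpdf(
            X, mean=dp.means_[k][:d], cov=dp.covariances_[k][:d, :d],
            allow_singular=True)
    log_p -= log_p.max(axis=1, keepdims=True)
    resp = np.exp(log_p)
    return resp / resp.sum(axis=1, keepdims=True)


def make_domain_features(X, resp):
    """New feature space: responsibility-weighted copies of x per latent domain."""
    return np.hstack([resp[:, [k]] * X for k in range(resp.shape[1])] + [X])


# Usage on synthetic data with two latent domains and distinct coefficients.
rng = np.random.default_rng(0)
X1, X2 = rng.normal(0.0, 1.0, (200, 3)), rng.normal(2.0, 1.0, (200, 3))
y1 = X1 @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=200)
y2 = X2 @ np.array([-1.0, 1.5, 2.0]) + 0.1 * rng.normal(size=200)
X_train, y_train = np.vstack([X1, X2]), np.concatenate([y1, y2])
X_test = rng.normal(2.0, 1.0, (50, 3))  # test responses are unobserved

dp = fit_latent_domains(X_train, y_train)
model = Ridge(alpha=1.0).fit(
    make_domain_features(X_train, x_marginal_responsibilities(dp, X_train)), y_train)
y_pred = model.predict(
    make_domain_features(X_test, x_marginal_responsibilities(dp, X_test)))
print(y_pred[:5])
```
On such two-domain data the augmented regression can effectively use different coefficients per latent domain, which is the effect the paper aims for; the paper's actual treatment of the target domain and its bias control are more involved than this sketch.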
Related papers
- Graph Neural Networks for Learning Equivariant Representations of Neural Networks [55.04145324152541]
We propose to represent neural networks as computational graphs of parameters.
Our approach enables a single model to encode neural computational graphs with diverse architectures.
We showcase the effectiveness of our method on a wide range of tasks, including classification and editing of implicit neural representations.
arXiv Detail & Related papers (2024-03-18T18:01:01Z) - Transforming to Yoked Neural Networks to Improve ANN Structure [0.0]
Most existing artificial neural networks (ANNs) are designed as a tree structure to imitate biological neural networks.
We propose a model YNN to efficiently eliminate such structural bias.
In our model, nodes also carry out aggregation and transformation of features, and edges determine the flow of information.
arXiv Detail & Related papers (2023-06-03T16:56:18Z) - Biologically inspired structure learning with reverse knowledge
distillation for spiking neural networks [19.33517163587031]
Spiking neural networks (SNNs) have superb characteristics in sensory information recognition tasks due to their biological plausibility.
The performance of some current spiking-based models is limited by their structures: either fully connected or overly deep structures introduce too much redundancy.
This paper proposes an evolution-based structure construction method for building more reasonable SNNs.
arXiv Detail & Related papers (2023-04-19T08:41:17Z) - Permutation Equivariant Neural Functionals [92.0667671999604]
This work studies the design of neural networks that can process the weights or gradients of other neural networks.
We focus on the permutation symmetries that arise in the weights of deep feedforward networks because hidden layer neurons have no inherent order (a minimal numerical illustration of this symmetry appears after this list).
In our experiments, we find that permutation equivariant neural functionals are effective on a diverse set of tasks.
arXiv Detail & Related papers (2023-02-27T18:52:38Z) - Extrapolation and Spectral Bias of Neural Nets with Hadamard Product: a
Polynomial Net Study [55.12108376616355]
The study of the neural tangent kernel (NTK) has been devoted to typical neural network architectures, but is incomplete for neural networks with Hadamard products (NNs-Hp).
In this work, we derive the finite-width NTK formulation for a special class of NNs-Hp, i.e., polynomial neural networks.
We prove their equivalence to the kernel regression predictor with the associated NTK, which expands the application scope of NTK.
arXiv Detail & Related papers (2022-09-16T06:36:06Z) - Random Graph-Based Neuromorphic Learning with a Layer-Weaken Structure [4.477401614534202]
We transform random graph theory into an NN model with practical meaning, based on clarifying the input-output relationship of each neuron.
With this low-operation-cost approach, neurons are assigned to several groups whose connection relationships can be regarded as uniform representations of the random graphs they belong to.
We develop a joint classification mechanism involving information interaction between multiple RGNNs and realize significant performance improvements in supervised learning for three benchmark tasks.
arXiv Detail & Related papers (2021-11-17T03:37:06Z) - Exploiting Heterogeneity in Operational Neural Networks by Synaptic
Plasticity [87.32169414230822]
The recently proposed network model, Operational Neural Networks (ONNs), can generalize conventional Convolutional Neural Networks (CNNs).
In this study, the focus is on searching for the best-possible operator set(s) for the hidden neurons of the network, based on the Synaptic Plasticity paradigm that constitutes the essential learning theory in biological neurons.
Experimental results on highly challenging problems demonstrate that elite ONNs, even with few neurons and layers, can achieve superior learning performance compared with GIS-based ONNs.
arXiv Detail & Related papers (2020-08-21T19:03:23Z) - Locality Guided Neural Networks for Explainable Artificial Intelligence [12.435539489388708]
We propose a novel algorithm for back propagation, called Locality Guided Neural Network (LGNN).
LGNN preserves locality between neighbouring neurons within each layer of a deep network.
In our experiments, we train various VGG and Wide ResNet (WRN) networks for image classification on CIFAR100.
arXiv Detail & Related papers (2020-07-12T23:45:51Z) - Modeling from Features: a Mean-field Framework for Over-parameterized
Deep Neural Networks [54.27962244835622]
This paper proposes a new mean-field framework for over-parameterized deep neural networks (DNNs).
In this framework, a DNN is represented by probability measures and functions over its features in the continuous limit.
We illustrate the framework via the standard DNN and the Residual Network (Res-Net) architectures.
arXiv Detail & Related papers (2020-07-03T01:37:16Z) - Progressive Tandem Learning for Pattern Recognition with Deep Spiking
Neural Networks [80.15411508088522]
Spiking neural networks (SNNs) have shown advantages over traditional artificial neural networks (ANNs) for low latency and high computational efficiency.
We propose a novel ANN-to-SNN conversion and layer-wise learning framework for rapid and efficient pattern recognition.
arXiv Detail & Related papers (2020-07-02T15:38:44Z) - Neural Rule Ensembles: Encoding Sparse Feature Interactions into Neural
Networks [3.7277730514654555]
We use decision trees to capture relevant features and their interactions and define a mapping to encode extracted relationships into a neural network.
At the same time, through feature selection, it enables learning of compact representations compared with state-of-the-art tree-based approaches.
arXiv Detail & Related papers (2020-02-11T11:22:20Z)
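As noted in the Permutation Equivariant Neural Functionals entry above, here is a minimal, self-contained sketch of the weight-space symmetry that work builds on. It is an assumed illustration, not code from any of the listed papers: permuting the hidden units of a feedforward layer, together with the matching rows and columns of the adjacent weight matrices, leaves the network function unchanged.
```python
# Minimal illustration (assumed example, not from the cited paper): hidden
# neurons of an MLP have no inherent order, so permuting them consistently
# does not change the computed function.
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(16, 4)), rng.normal(size=16)  # input -> hidden
W2, b2 = rng.normal(size=(1, 16)), rng.normal(size=1)   # hidden -> output

def mlp(x, W1, b1, W2, b2):
    return W2 @ np.tanh(W1 @ x + b1) + b2

perm = rng.permutation(16)      # reorder the hidden neurons
W1p, b1p = W1[perm], b1[perm]   # permute rows of W1 and entries of b1
W2p = W2[:, perm]               # permute the matching columns of W2

x = rng.normal(size=4)
assert np.allclose(mlp(x, W1, b1, W2, b2), mlp(x, W1p, b1p, W2p, b2))
```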
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.