Precoder Learning by Leveraging Unitary Equivariance Property
- URL: http://arxiv.org/abs/2503.09398v1
- Date: Wed, 12 Mar 2025 13:48:34 GMT
- Title: Precoder Learning by Leveraging Unitary Equivariance Property
- Authors: Yilun Ge, Shuyao Liao, Shengqian Han, Chenyang Yang
- Abstract summary: We study a stronger property than permutation equivariance, namely unitary equivariance, for precoder learning. We develop a novel non-linear weighting process satisfying unitary equivariance and then construct a joint unitary and permutation equivariant DNN.
- Score: 11.165211531939997
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Incorporating mathematical properties of a wireless policy to be learned into the design of deep neural networks (DNNs) is effective for enhancing learning efficiency. Multi-user precoding policy in multi-antenna system, which is the mapping from channel matrix to precoding matrix, possesses a permutation equivariance property, which has been harnessed to design the parameter sharing structure of the weight matrix of DNNs. In this paper, we study a stronger property than permutation equivariance, namely unitary equivariance, for precoder learning. We first show that a DNN with unitary equivariance designed by further introducing parameter sharing into a permutation equivariant DNN is unable to learn the optimal precoder. We proceed to develop a novel non-linear weighting process satisfying unitary equivariance and then construct a joint unitary and permutation equivariant DNN. Simulation results demonstrate that the proposed DNN not only outperforms existing learning methods in learning performance and generalizability but also reduces training complexity.
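To make the unitary equivariance property concrete, here is a minimal numerical sketch. It assumes a simple stand-in precoder (maximum-ratio transmission) rather than the DNN proposed in the paper, and it assumes the convention that the channel matrix is N x K (antennas x users) with the unitary acting on the antenna side:

```python
import numpy as np

# Minimal sketch of unitary equivariance of a precoding policy, using the
# maximum-ratio transmission (MRT) precoder as an illustrative stand-in.
# Assumed convention: H is N x K (antennas x users), unitary U acts on antennas.

def mrt_precoder(H):
    """Column-normalized matched-filter (MRT) precoder: one beam per user."""
    return H / np.linalg.norm(H, axis=0, keepdims=True)

rng = np.random.default_rng(0)
N, K = 8, 4                                   # antennas, users
H = rng.standard_normal((N, K)) + 1j * rng.standard_normal((N, K))

# Random unitary matrix from the QR decomposition of a complex Gaussian matrix.
U, _ = np.linalg.qr(rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N)))

lhs = mrt_precoder(U @ H)                     # precode the rotated channel
rhs = U @ mrt_precoder(H)                     # rotate the precoder of the original channel
print(np.allclose(lhs, rhs))                  # True: V(UH) = U V(H)
```

The permutation equivariance exploited in earlier work is the analogous identity for user permutations, V(HΠ) = V(H)Π; the abstract's point is that enforcing the stronger unitary property by simply adding more parameter sharing is too restrictive, which motivates the paper's non-linear weighting process.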
Related papers
- Precoder Learning for Weighted Sum Rate Maximization [5.305346885414619]
We propose a novel deep neural network (DNN) to learn the precoder for weighted sum rate maximization (WSRM). Compared to existing DNNs, the proposed DNN leverages the joint unitary and permutation equivariances inherent in the optimal precoding policy. Simulation results demonstrate that the proposed method significantly outperforms existing learning methods in terms of both learning and generalization performance.
arXiv Detail & Related papers (2025-03-06T14:45:38Z) - SIDDA: SInkhorn Dynamic Domain Adaptation for Image Classification with Equivariant Neural Networks [37.69303106863453]
SIDDA is an out-of-the-box DA training algorithm built upon the Sinkhorn divergence. We find that SIDDA enhances the generalization capabilities of NNs. We also study the efficacy of SIDDA on ENNs with respect to the varying group orders of the dihedral group $D_N$.
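As background for the Sinkhorn divergence that SIDDA builds on, a rough numpy sketch of the entropy-regularized optimal-transport computation is shown below on toy point clouds; the function names, the squared-distance cost, and the hyperparameters are illustrative assumptions and are not taken from the SIDDA implementation:

```python
import numpy as np

# Rough sketch of the entropic optimal-transport cost behind the Sinkhorn
# divergence; SIDDA's actual training loop and loss details are not shown here.

def sinkhorn_cost(x, y, eps=1.0, n_iters=200):
    """Entropy-regularized OT cost between two point clouds with uniform weights."""
    C = np.sum((x[:, None, :] - y[None, :, :]) ** 2, axis=-1)   # squared-distance cost
    a = np.full(len(x), 1.0 / len(x))
    b = np.full(len(y), 1.0 / len(y))
    K = np.exp(-C / eps)
    u, v = np.ones_like(a), np.ones_like(b)
    for _ in range(n_iters):                                     # Sinkhorn fixed-point updates
        u = a / (K @ v)
        v = b / (K.T @ u)
    P = u[:, None] * K * v[None, :]                              # transport plan
    return np.sum(P * C)

def sinkhorn_divergence(x, y, eps=1.0):
    """Debiased form: OT(x, y) - (OT(x, x) + OT(y, y)) / 2."""
    return sinkhorn_cost(x, y, eps) - 0.5 * (sinkhorn_cost(x, x, eps) + sinkhorn_cost(y, y, eps))

rng = np.random.default_rng(0)
src = rng.standard_normal((64, 2))            # e.g. source-domain features
tgt = rng.standard_normal((64, 2)) + 1.0      # e.g. shifted target-domain features
print(sinkhorn_divergence(src, tgt))          # larger when the domains are misaligned
```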
arXiv Detail & Related papers (2025-01-23T19:29:34Z) - NeuralMatrix: Compute the Entire Neural Networks with Linear Matrix Operations for Efficient Inference [20.404864470321897]
We introduce NeuralMatrix, which elastically transforms the computations of entire deep neural network (DNN) models into linear matrix operations.
Experiments with both CNN and transformer-based models demonstrate the potential of NeuralMatrix to accurately and efficiently execute a wide range of DNN models.
This level of efficiency is usually only attainable with an accelerator designed for a specific neural network.
arXiv Detail & Related papers (2023-05-23T12:03:51Z) - SO(2) and O(2) Equivariance in Image Recognition with Bessel-Convolutional Neural Networks [63.24965775030674]
This work presents the development of Bessel-convolutional neural networks (B-CNNs).
B-CNNs exploit a particular decomposition based on Bessel functions to modify the key operation between images and filters.
A study is carried out to assess the performance of B-CNNs compared to other methods.
arXiv Detail & Related papers (2023-04-18T18:06:35Z) - Deep Neural Networks with Efficient Guaranteed Invariances [77.99182201815763]
We address the problem of improving the performance and in particular the sample complexity of deep neural networks.
Group-equivariant convolutions are a popular approach to obtain equivariant representations.
We propose a multi-stream architecture, where each stream is invariant to a different transformation.
arXiv Detail & Related papers (2023-03-02T20:44:45Z) - Permutation Equivariant Neural Functionals [92.0667671999604]
This work studies the design of neural networks that can process the weights or gradients of other neural networks.
We focus on the permutation symmetries that arise in the weights of deep feedforward networks because hidden layer neurons have no inherent order.
In our experiments, we find that permutation equivariant neural functionals are effective on a diverse set of tasks.
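For intuition, a minimal sketch of a permutation-equivariant linear layer in the DeepSets style is given below; it illustrates the symmetry being exploited, but the paper's actual layers for weight-space inputs are more structured than this toy example:

```python
import numpy as np

# Minimal DeepSets-style permutation-equivariant linear layer: rows of X are set
# elements; permuting the rows of the input permutes the rows of the output.

def equivariant_layer(X, W_self, W_pool):
    """f(X) = X @ W_self + broadcast(mean(X) @ W_pool)."""
    pooled = X.mean(axis=0, keepdims=True)           # permutation-invariant summary
    return X @ W_self + np.ones((X.shape[0], 1)) @ (pooled @ W_pool)

rng = np.random.default_rng(0)
n, d_in, d_out = 5, 3, 4
X = rng.standard_normal((n, d_in))
W_self = rng.standard_normal((d_in, d_out))
W_pool = rng.standard_normal((d_in, d_out))

perm = rng.permutation(n)                             # permute the set elements (rows)
lhs = equivariant_layer(X[perm], W_self, W_pool)
rhs = equivariant_layer(X, W_self, W_pool)[perm]
print(np.allclose(lhs, rhs))                          # True: f(PX) = P f(X)
```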
arXiv Detail & Related papers (2023-02-27T18:52:38Z) - Orthogonal Stochastic Configuration Networks with Adaptive Construction Parameter for Data Analytics [6.940097162264939]
Randomness makes SCNs more likely to generate approximately linearly correlated nodes that are redundant and of low quality. In light of a fundamental principle in machine learning, namely that a model with fewer parameters tends to generalize better, this paper proposes orthogonal SCNs, termed OSCN, to filter out low-quality hidden nodes and reduce the network structure.
arXiv Detail & Related papers (2022-05-26T07:07:26Z) - Relaxing Equivariance Constraints with Non-stationary Continuous Filters [20.74154804898478]
The proposed parameterization can be thought of as a building block to allow adjustable symmetry structure in neural networks.
Compared to non-equivariant or strictly equivariant baselines, we experimentally verify that soft equivariance leads to improved performance in terms of test accuracy on the CIFAR-10 and CIFAR-100 image classification tasks.
arXiv Detail & Related papers (2022-04-14T18:08:36Z) - Improving the Sample-Complexity of Deep Classification Networks with Invariant Integration [77.99182201815763]
Leveraging prior knowledge on intraclass variance due to transformations is a powerful method to improve the sample complexity of deep neural networks.
We propose a novel monomial selection algorithm based on pruning methods to allow an application to more complex problems.
We demonstrate the improved sample complexity on the Rotated-MNIST, SVHN and CIFAR-10 datasets.
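For context, invariant integration symmetrizes a function by averaging it over the group; for a finite group $G$ a schematic form is

$$A[f](x) = \frac{1}{|G|} \sum_{g \in G} f(g \cdot x),$$

which is invariant by construction, since replacing $x$ by $g' \cdot x$ only reorders the summands. The paper applies this idea with monomials as the functions $f$, and the selection algorithm mentioned above decides which monomials to keep; the exact operator used there may differ in detail from this sketch.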
arXiv Detail & Related papers (2022-02-08T16:16:11Z) - Frame Averaging for Invariant and Equivariant Network Design [50.87023773850824]
We introduce Frame Averaging (FA), a framework for adapting known (backbone) architectures to become invariant or equivariant to new symmetry types.
We show that FA-based models have maximal expressive power in a broad setting.
We propose a new class of universal Graph Neural Networks (GNNs), universal Euclidean motion invariant point cloud networks, and Euclidean motion invariant Message Passing (MP) GNNs.
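As a toy illustration of the symmetrization idea, the sketch below averages an arbitrary backbone over the four 90-degree rotations; this is group averaging, the special case of FA in which the frame is the whole group, whereas FA's contribution is constructing much smaller input-dependent frames (not shown here):

```python
import numpy as np

# Group averaging over C4 (90-degree image rotations) makes an arbitrary backbone
# rotation-invariant; FA generalizes this by averaging over small frames instead.

rng = np.random.default_rng(0)
W = rng.standard_normal((10, 8 * 8))              # arbitrary (non-invariant) backbone weights

def backbone(img):
    return W @ img.ravel()

def frame_averaged(img):
    """Average the backbone over the frame {rot90^k : k = 0..3} -> invariant output."""
    outs = [backbone(np.rot90(img, k)) for k in range(4)]
    return np.mean(outs, axis=0)

img = rng.standard_normal((8, 8))
print(np.allclose(frame_averaged(img), frame_averaged(np.rot90(img))))   # True
```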
arXiv Detail & Related papers (2021-10-07T11:05:23Z) - Learning Invariances in Neural Networks [51.20867785006147]
We show how to parameterize a distribution over augmentations and optimize the training loss simultaneously with respect to the network parameters and augmentation parameters.
We can recover the correct set and extent of invariances on image classification, regression, segmentation, and molecular property prediction from a large space of augmentations.
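Schematically, and hedging on the exact form used in the paper, the idea can be written as averaging the network over augmentations drawn from a learnable distribution and regularizing toward broader invariance:

$$\bar f_{w,\theta}(x) = \mathbb{E}_{g \sim p_\theta}\big[f_w(g\,x)\big], \qquad \min_{w,\theta}\ \mathbb{E}_{(x,y)}\big[\ell\big(\bar f_{w,\theta}(x), y\big)\big] - \lambda\, R(\theta),$$

where $p_\theta$ is the parameterized augmentation distribution and $R(\theta)$ (a schematic regularizer) rewards wider augmentation ranges, so only the invariances actually supported by the data are retained.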
arXiv Detail & Related papers (2020-10-22T17:18:48Z)