Co-domain Symmetry for Complex-Valued Deep Learning
- URL: http://arxiv.org/abs/2112.01525v2
- Date: Tue, 22 Apr 2025 18:09:36 GMT
- Title: Co-domain Symmetry for Complex-Valued Deep Learning
- Authors: Utkarsh Singhal, Yifei Xing, Stella X. Yu
- Abstract summary: We study complex-valued scaling as a type of symmetry natural and unique to complex-valued measurements and representations. We analyze complex-valued scaling as a co-domain transformation and design novel equivariant and invariant neural network layer functions for this special transformation. We also propose novel complex-valued representations of RGB images, where complex-valued scaling indicates hue shift or correlated changes across color channels.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We study complex-valued scaling as a type of symmetry natural and unique to complex-valued measurements and representations. Deep Complex Networks (DCN) extends real-valued algebra to the complex domain without addressing complex-valued scaling. SurReal takes a restrictive manifold view of complex numbers, adopting a distance metric to achieve complex-scaling invariance while losing rich complex-valued information. We analyze complex-valued scaling as a co-domain transformation and design novel equivariant and invariant neural network layer functions for this special transformation. We also propose novel complex-valued representations of RGB images, where complex-valued scaling indicates hue shift or correlated changes across color channels. Benchmarked on MSTAR, CIFAR10, CIFAR100, and SVHN, our co-domain symmetric (CDS) classifiers deliver higher accuracy, better generalization, robustness to co-domain transformations, and lower model bias and variance than DCN and SurReal with far fewer parameters.
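The abstract's two key ideas can be illustrated with a small sketch: encoding an RGB color as a single complex number so that multiplication by a complex scalar acts as a hue rotation plus gain, and building a quantity that is invariant to any common complex scale. This is a minimal illustration only, not the paper's CDS layers; the color-plane projection used here is one plausible choice of the "complex-valued representation of RGB" the abstract mentions.

```python
import numpy as np

def rgb_to_complex(rgb):
    """Project an RGB color onto the plane orthogonal to the gray axis
    (1,1,1), giving one complex number whose phase encodes hue.
    Multiplying it by e^{i*theta} then acts as a hue shift."""
    r, g, b = rgb
    u = (2 * r - g - b) / np.sqrt(6.0)
    v = (g - b) / np.sqrt(2.0)
    return u + 1j * v

def complex_scale(z, s):
    # Co-domain transformation: every feature is multiplied by one complex s
    return s * z

def scale_invariant(z, w):
    # The ratio of two features cancels a shared complex scale:
    # (s*z)/(s*w) == z/w, so this quantity is complex-scale invariant.
    return z / w

z = rgb_to_complex((0.8, 0.2, 0.1))
w = rgb_to_complex((0.1, 0.6, 0.9))
s = 1.7 * np.exp(1j * 0.9)  # arbitrary complex scale: hue shift plus gain
assert np.allclose(scale_invariant(complex_scale(z, s), complex_scale(w, s)),
                   scale_invariant(z, w))
```

The assertion checks the invariance property the abstract relies on: any layer built from such ratios gives identical outputs before and after an arbitrary complex scaling of its inputs.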
Related papers
- Rotation Equivariant Arbitrary-scale Image Super-Resolution
Arbitrary-scale image super-resolution (ASISR) aims to achieve arbitrary-scale high-resolution recoveries from a low-resolution input image. We construct a rotation-equivariant ASISR method in this study.
arXiv Detail & Related papers (2025-08-07T08:51:03Z)
- ComplexFormer: Disruptively Advancing Transformer Inference Ability via Head-Specific Complex Vector Attention
This paper introduces ComplexFormer, featuring Complex Multi-Head Attention (CMHA). CMHA empowers each head to independently model semantic and positional differences unified within the complex plane. Tests show ComplexFormer achieves superior performance, significantly lower generation perplexity, and improved long-context coherence.
arXiv Detail & Related papers (2025-05-15T12:30:33Z)
- Hybrid Real- and Complex-valued Neural Network Architecture
We propose a hybrid real- and complex-valued neural network (HNN) architecture, designed to combine the computational efficiency of real-valued processing with the ability to handle complex-valued data. Experiments with the AudioMNIST dataset demonstrate that the HNN reduces cross-entropy loss and uses fewer parameters than an RVNN for all considered cases.
arXiv Detail & Related papers (2025-04-04T14:52:44Z)
- Variable-size Symmetry-based Graph Fourier Transforms for image compression
We propose a new family of Symmetry-based Graph Fourier Transforms (SBGFTs) of variable sizes within a coding framework.
Our proposed algorithm generates symmetric graphs on the grid by adding specific symmetrical connections between nodes.
Experiments show that SBGFTs outperform the primary transforms integrated in the explicit Multiple Transform Selection.
arXiv Detail & Related papers (2023-08-17T11:57:49Z)
- Point-aware Interaction and CNN-induced Refinement Network for RGB-D Salient Object Detection
We introduce a CNN-assisted Transformer architecture and propose a novel RGB-D SOD network with Point-aware Interaction and CNN-induced Refinement. To alleviate the block-effect and detail-destruction problems naturally introduced by the Transformer, we design a CNN-induced refinement (CNNR) unit for content refinement and supplementation.
arXiv Detail & Related papers (2023-06-16T13:11:15Z)
- Building Blocks for a Complex-Valued Transformer Architecture
We aim to make deep learning applicable to complex-valued signals without using projections into $\mathbb{R}^2$.
We present multiple versions of a complex-valued Scaled Dot-Product Attention mechanism as well as a complex-valued layer normalization.
We test on a classification and a sequence generation task on the MusicNet dataset and show improved robustness to overfitting while maintaining on-par performance when compared to the real-valued transformer architecture.
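The paper above presents several versions of complex-valued attention; one plausible variant, sketched here under that assumption, derives real-valued attention logits from the real part of the Hermitian inner product Q K^H and then takes a complex-weighted sum of the values. This is an illustrative sketch, not the authors' exact mechanism.

```python
import numpy as np

def complex_attention(Q, K, V):
    """One plausible complex-valued scaled dot-product attention:
    real logits from Re(Q K^H), a standard softmax over those logits,
    then a complex-weighted combination of the values V."""
    d = Q.shape[-1]
    scores = np.real(Q @ K.conj().T) / np.sqrt(d)  # real-valued logits
    scores -= scores.max(axis=-1, keepdims=True)   # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True) # rows sum to 1
    return weights @ V                             # complex-valued output

rng = np.random.default_rng(0)
Q = rng.normal(size=(4, 8)) + 1j * rng.normal(size=(4, 8))
K = rng.normal(size=(4, 8)) + 1j * rng.normal(size=(4, 8))
V = rng.normal(size=(4, 8)) + 1j * rng.normal(size=(4, 8))
out = complex_attention(Q, K, V)
assert out.shape == (4, 8) and np.iscomplexobj(out)
```

Using Re(Q K^H) keeps the softmax well-defined (it needs real, ordered inputs) while still letting phase differences between queries and keys influence the scores.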
arXiv Detail & Related papers (2023-03-02T20:44:45Z)
- Deep Neural Networks with Efficient Guaranteed Invariances
We address the problem of improving the performance and in particular the sample complexity of deep neural networks.
Group-equivariant convolutions are a popular approach to obtain equivariant representations.
We propose a multi-stream architecture, where each stream is invariant to a different transformation.
arXiv Detail & Related papers (2023-01-26T15:08:11Z)
- Convolutional Learning on Simplicial Complexes
We propose a simplicial complex convolutional neural network (SCCNN) to learn data representations on simplicial complexes.
It performs convolutions based on the multi-hop simplicial adjacencies via common faces and cofaces independently.
arXiv Detail & Related papers (2022-02-08T16:16:11Z)
- Improving the Sample-Complexity of Deep Classification Networks with Invariant Integration
Leveraging prior knowledge on intraclass variance due to transformations is a powerful method to improve the sample complexity of deep neural networks.
We propose a novel monomial selection algorithm based on pruning methods to allow an application to more complex problems.
We demonstrate the improved sample complexity on the Rotated-MNIST, SVHN and CIFAR-10 datasets.
arXiv Detail & Related papers (2021-12-31T04:37:11Z)
- CSformer: Bridging Convolution and Transformer for Compressive Sensing
This paper proposes a hybrid framework that combines the detailed spatial information captured by CNNs with the global context provided by the Transformer for enhanced representation learning.
The proposed approach is an end-to-end compressive image sensing method, composed of adaptive sampling and recovery.
The experimental results demonstrate the effectiveness of the dedicated transformer-based architecture for compressive sensing.
arXiv Detail & Related papers (2021-10-28T14:59:41Z)
- Dist2Cycle: A Simplicial Neural Network for Homology Localization
Simplicial complexes can be viewed as high dimensional generalizations of graphs that explicitly encode multi-way ordered relations.
We propose a graph convolutional model for learning functions parametrized by the $k$-homological features of simplicial complexes.
arXiv Detail & Related papers (2020-04-03T19:00:23Z)
- Analysis of Deep Complex-Valued Convolutional Neural Networks for MRI Reconstruction
We investigate end-to-end complex-valued convolutional neural networks for image reconstruction in lieu of two-channel real-valued networks.
We find that complex-valued CNNs with complex-valued convolutions provide superior reconstructions compared to real-valued convolutions with the same number of trainable parameters.
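The distinction this paper draws between a true complex-valued convolution and a two-channel real-valued one can be made concrete: a complex convolution couples the real and imaginary channels through the cross terms of (a + ib)(c + id), which independent per-channel real convolutions omit. A minimal 1-D sketch, not tied to any particular MRI architecture:

```python
import numpy as np

def complex_conv1d(x, w):
    """Complex convolution built from four real convolutions, following
    (a + ib)(c + id) = (ac - bd) + i(ad + bc).
    A two-channel real-valued network that convolved the real and
    imaginary channels independently would drop the -bd and +ad cross
    terms, losing the phase coupling."""
    rr = np.convolve(x.real, w.real, mode="valid")
    ii = np.convolve(x.imag, w.imag, mode="valid")
    ri = np.convolve(x.real, w.imag, mode="valid")
    ir = np.convolve(x.imag, w.real, mode="valid")
    return (rr - ii) + 1j * (ri + ir)

x = np.array([1 + 2j, 3 - 1j, 0.5 + 0.5j, -1 + 1j])
w = np.array([1 - 1j, 2 + 0j])
# Matches NumPy's native complex convolution
assert np.allclose(complex_conv1d(x, w), np.convolve(x, w, mode="valid"))
```

The assertion confirms the four-real-convolution decomposition reproduces the native complex result exactly, which is also how complex layers are commonly implemented on top of real-valued convolution kernels.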
arXiv Detail & Related papers (2020-04-03T19:00:23Z)
- Co-VeGAN: Complex-Valued Generative Adversarial Network for Compressive Sensing MR Image Reconstruction
We propose a novel framework based on a complex-valued generative adversarial network (Co-VeGAN). Processing complex-valued input directly enables the model to perform high-quality reconstruction of CS-MR images.
arXiv Detail & Related papers (2020-02-24T20:28:49Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.