RotaTouille: Rotation Equivariant Deep Learning for Contours
- URL: http://arxiv.org/abs/2508.16359v2
- Date: Mon, 27 Oct 2025 14:23:31 GMT
- Title: RotaTouille: Rotation Equivariant Deep Learning for Contours
- Authors: Odin Hoff Gardaa, Nello Blaser
- Abstract summary: We present RotaTouille, a framework for learning from contour data. It achieves both rotation and cyclic shift equivariance through complex-valued circular convolution. We also introduce and characterize equivariant non-linearities, coarsening layers, and global pooling layers.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Contours or closed planar curves are common in many domains. For example, they appear as object boundaries in computer vision, isolines in meteorology, and the orbits of rotating machinery. In many cases when learning from contour data, planar rotations of the input will result in correspondingly rotated outputs. It is therefore desirable that deep learning models be rotationally equivariant. In addition, contours are typically represented as an ordered sequence of edge points, where the choice of starting point is arbitrary. It is therefore also desirable for deep learning methods to be equivariant under cyclic shifts. We present RotaTouille, a deep learning framework for learning from contour data that achieves both rotation and cyclic shift equivariance through complex-valued circular convolution. We further introduce and characterize equivariant non-linearities, coarsening layers, and global pooling layers to obtain invariant representations for downstream tasks. Finally, we demonstrate the effectiveness of RotaTouille through experiments in shape classification, reconstruction, and contour regression.
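The abstract's central claim, that complex-valued circular convolution commutes with both planar rotation (multiplication by a unit complex number) and cyclic shift of the contour points, can be checked numerically. The sketch below is illustrative only and is not the paper's actual layer implementation; the contour, kernel, and FFT-based convolution are assumptions for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 8
z = rng.normal(size=N) + 1j * rng.normal(size=N)  # contour as complex edge points
w = rng.normal(size=N) + 1j * rng.normal(size=N)  # complex-valued kernel

def circconv(z, w):
    # circular convolution computed via the FFT convolution theorem
    return np.fft.ifft(np.fft.fft(z) * np.fft.fft(w))

theta = 0.7  # rotation angle

# rotation equivariance: conv(e^{i*theta} * z) == e^{i*theta} * conv(z)
lhs = circconv(np.exp(1j * theta) * z, w)
rhs = np.exp(1j * theta) * circconv(z, w)
assert np.allclose(lhs, rhs)

# cyclic shift equivariance: conv(roll(z, s)) == roll(conv(z), s)
s = 3
assert np.allclose(circconv(np.roll(z, s), w), np.roll(circconv(z, w), s))
```

Both properties follow directly from linearity of convolution over the complex scalars and from circular convolution commuting with cyclic shifts, which is what makes the choice of starting point on the contour immaterial.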
Related papers
- Rethinking Rotation-Invariant Recognition of Fine-grained Shapes from the Perspective of Contour Points [0.0]
We propose an anti-noise rotation-invariant convolution module based on contour geometric awareness for fine-grained shape recognition. The results show that our method exhibits excellent performance in rotation-invariant recognition of fine-grained shapes.
arXiv Detail & Related papers (2025-03-14T01:34:20Z)
- ESCAPE: Equivariant Shape Completion via Anchor Point Encoding [79.59829525431238]
We introduce ESCAPE, a framework designed to achieve rotation-equivariant shape completion. ESCAPE employs a distinctive encoding strategy: it selects anchor points from a shape and represents every point as its distances to all anchor points. ESCAPE achieves robust, high-quality reconstructions across arbitrary rotations and translations.
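The anchor-point idea can be sketched in a few lines: since rigid motions preserve pairwise distances, encoding each point by its distances to anchors drawn from the shape itself yields a representation unchanged by rotation and translation. This is a minimal illustration under assumed shapes and anchor choices, not ESCAPE's actual encoder.

```python
import numpy as np

rng = np.random.default_rng(1)
pts = rng.normal(size=(100, 3))   # a point cloud standing in for a shape
anchors = pts[:4]                 # anchor points selected from the shape

def dist_encoding(points, anchors):
    # encode each point as its vector of distances to all anchor points
    return np.linalg.norm(points[:, None, :] - anchors[None, :, :], axis=-1)

# apply a random rigid motion; anchors move together with the shape
Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))  # random orthogonal matrix
t = rng.normal(size=3)                        # random translation
moved = pts @ Q.T + t

assert np.allclose(dist_encoding(pts, anchors),
                   dist_encoding(moved, moved[:4]))
```

The invariance holds because the anchors are part of the shape and are transformed with it, so all relative distances are preserved.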
arXiv Detail & Related papers (2024-12-01T20:05:14Z)
- Rotation-Invariant Transformer for Point Cloud Matching [42.5714375149213]
We introduce RoITr, a Rotation-Invariant Transformer to cope with the pose variations in the point cloud matching task.
We propose a global transformer with rotation-invariant cross-frame spatial awareness learned by the self-attention mechanism.
RoITr surpasses existing methods by at least 13 and 5 percentage points in Inlier Ratio and Registration Recall, respectively.
arXiv Detail & Related papers (2023-03-14T20:55:27Z)
- CRIN: Rotation-Invariant Point Cloud Analysis and Rotation Estimation via Centrifugal Reference Frame [60.24797081117877]
We propose CRIN, the Centrifugal Rotation-Invariant Network.
CRIN directly takes the coordinates of points as input and transforms local points into rotation-invariant representations.
A continuous distribution for 3D rotations based on points is introduced.
arXiv Detail & Related papers (2023-03-06T13:14:10Z)
- Rethinking Rotation Invariance with Point Cloud Registration [18.829454172955202]
We propose an effective framework for rotation invariance learning via three sequential stages, namely rotation-invariant shape encoding, aligned feature integration, and deep feature registration.
Experimental results on 3D shape classification, part segmentation, and retrieval tasks prove the feasibility of our work.
arXiv Detail & Related papers (2022-12-31T08:17:09Z)
- Rotation invariant CNN using scattering transform for image classification [0.0]
We propose a convolutional predictor that is invariant to rotations in the input.
The architecture is capable of predicting the angular orientation without angle-annotated data.
We validate the results by training with upright and randomly rotated samples.
arXiv Detail & Related papers (2021-05-21T07:36:34Z)
- PRIN/SPRIN: On Extracting Point-wise Rotation Invariant Features [91.2054994193218]
We propose a point-set learning framework PRIN, focusing on rotation invariant feature extraction in point clouds analysis.
In addition, we extend PRIN to a sparse version called SPRIN, which directly operates on sparse point clouds.
Results show that, on the dataset with randomly rotated point clouds, SPRIN demonstrates better performance than state-of-the-art methods without any data augmentation.
arXiv Detail & Related papers (2021-02-24T06:44:09Z)
- Deep Positional and Relational Feature Learning for Rotation-Invariant Point Cloud Analysis [107.9979381402172]
We propose a rotation-invariant deep network for point clouds analysis.
The network is hierarchical and relies on two modules: a positional feature embedding block and a relational feature embedding block.
Experiments show state-of-the-art classification and segmentation performances on benchmark datasets.
arXiv Detail & Related papers (2020-11-18T04:16:51Z)
- Rotated Ring, Radial and Depth Wise Separable Radial Convolutions [13.481518628796692]
In this work, we address trainable rotation-invariant convolutions and the construction of networks from them.
We show that our approach is rotationally invariant across different models and public data sets. However, the rotationally adaptive convolution models presented are more computationally intensive than standard convolution models.
arXiv Detail & Related papers (2020-10-02T09:01:51Z)
- A Rotation-Invariant Framework for Deep Point Cloud Analysis [132.91915346157018]
We introduce a new low-level purely rotation-invariant representation to replace common 3D Cartesian coordinates as the network inputs.
Also, we present a network architecture to embed these representations into features, encoding local relations between points and their neighbors, and the global shape structure.
We evaluate our method on multiple point cloud analysis tasks, including shape classification, part segmentation, and shape retrieval.
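The idea of replacing raw Cartesian coordinates with low-level rotation-invariant quantities can be illustrated with simple geometric features. The features below (distance to the centroid, distance to the nearest neighbor) are an assumed stand-in for demonstration, not the representation proposed in the paper.

```python
import numpy as np

rng = np.random.default_rng(2)
pts = rng.normal(size=(50, 3))  # a point cloud standing in for a shape

def invariant_features(points):
    # distance of each point to the shape centroid and to its nearest
    # neighbor: both depend only on relative geometry, not orientation
    c = points.mean(axis=0)
    d_centroid = np.linalg.norm(points - c, axis=1)
    D = np.linalg.norm(points[:, None] - points[None, :], axis=-1)
    np.fill_diagonal(D, np.inf)  # exclude each point's zero self-distance
    d_nn = D.min(axis=1)
    return np.stack([d_centroid, d_nn], axis=1)

# the features are unchanged under any random orthogonal transform
Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
assert np.allclose(invariant_features(pts), invariant_features(pts @ Q.T))
```

Feeding such features to a network makes its output invariant by construction, at the cost of discarding absolute pose information.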
arXiv Detail & Related papers (2020-03-16T14:04:45Z)
- Quaternion Equivariant Capsule Networks for 3D Point Clouds [58.566467950463306]
We present a 3D capsule module for processing point clouds that is equivariant to 3D rotations and translations.
We connect dynamic routing between capsules to the well-known Weiszfeld algorithm.
Based on our operator, we build a capsule network that disentangles geometry from pose.
arXiv Detail & Related papers (2019-12-27T13:51:17Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.