AttWalk: Attentive Cross-Walks for Deep Mesh Analysis
- URL: http://arxiv.org/abs/2104.11571v1
- Date: Fri, 23 Apr 2021 13:02:39 GMT
- Title: AttWalk: Attentive Cross-Walks for Deep Mesh Analysis
- Authors: Ran Ben Izhak, Alon Lahav and Ayellet Tal
- Abstract summary: Mesh representation by random walks has been shown to benefit deep learning.
We propose a novel walk-attention mechanism that leverages the fact that multiple walks are used.
- Score: 19.12196187222047
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Mesh representation by random walks has been shown to benefit deep learning.
Randomness is indeed a powerful concept. However, it comes with a price: some
walks might wander around non-characteristic regions of the mesh, which might
be harmful to shape analysis, especially when only a few walks are utilized. We
propose a novel walk-attention mechanism that leverages the fact that multiple
walks are used. The key idea is that the walks may provide each other with
information regarding the meaningful (attentive) features of the mesh. We
utilize this mutual information to extract a single descriptor of the mesh.
This differs from common attention mechanisms that use attention to improve the
representation of each individual descriptor. Our approach achieves SOTA
results for two basic 3D shape analysis tasks: classification and retrieval.
Even a handful of walks along a mesh suffice for learning.
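As a rough illustration of the abstract's key idea, the sketch below pools a handful of per-walk feature vectors into a single mesh descriptor using a learned attention weight per walk. The module name, layer sizes, and the use of PyTorch are illustrative assumptions, not the authors' implementation.

```python
# A minimal sketch of attention pooling across multiple walk descriptors.
# Names (WalkAttentionPool, walk_feats) and layer sizes are illustrative.
import torch
import torch.nn as nn

class WalkAttentionPool(nn.Module):
    """Fuse per-walk features into a single mesh descriptor via attention."""
    def __init__(self, feat_dim: int, n_classes: int):
        super().__init__()
        self.score = nn.Sequential(          # scores each walk's usefulness
            nn.Linear(feat_dim, feat_dim // 2),
            nn.Tanh(),
            nn.Linear(feat_dim // 2, 1),
        )
        self.classifier = nn.Linear(feat_dim, n_classes)

    def forward(self, walk_feats: torch.Tensor) -> torch.Tensor:
        # walk_feats: (batch, n_walks, feat_dim), one vector per random walk
        weights = torch.softmax(self.score(walk_feats), dim=1)  # (B, W, 1)
        mesh_descriptor = (weights * walk_feats).sum(dim=1)     # (B, feat_dim)
        return self.classifier(mesh_descriptor)

# Example: 4 walks per mesh, 256-d walk features, 30 shape classes.
logits = WalkAttentionPool(256, 30)(torch.randn(8, 4, 256))
print(logits.shape)  # torch.Size([8, 30])
```

Softmax over the walk axis lets informative walks dominate the pooled descriptor while uninformative ones are down-weighted, which matches the abstract's intuition that walks can tell each other which features of the mesh matter.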
Related papers
- Occlusion Resilient 3D Human Pose Estimation [52.49366182230432]
Occlusions remain one of the key challenges in 3D body pose estimation from single-camera video sequences.
We demonstrate the effectiveness of this approach compared to state-of-the-art techniques that infer poses from single-camera sequences.
arXiv Detail & Related papers (2024-02-16T19:29:43Z)
- ALSO: Automotive Lidar Self-supervision by Occupancy estimation [70.70557577874155]
We propose a new self-supervised method for pre-training the backbone of deep perception models operating on point clouds.
The core idea is to train the model on a pretext task which is the reconstruction of the surface on which the 3D points are sampled.
The intuition is that if the network is able to reconstruct the scene surface, given only sparse input points, then it probably also captures some fragments of semantic information.
arXiv Detail & Related papers (2022-12-12T13:10:19Z)
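A hypothetical sketch of such an occupancy-style pretext task is given below; the stand-in encoder, query handling, and loss are assumptions, and the actual ALSO formulation (how query points and labels are derived from lidar rays) is described in the paper itself.

```python
# A rough sketch of pretraining a point-cloud backbone by predicting
# occupancy of 3D query points; names like OccupancyPretext are illustrative.
import torch
import torch.nn as nn

class OccupancyPretext(nn.Module):
    def __init__(self, backbone: nn.Module, latent_dim: int):
        super().__init__()
        self.backbone = backbone                  # point-cloud encoder to pretrain
        self.decoder = nn.Sequential(             # predicts occupancy of a query point
            nn.Linear(latent_dim + 3, 128), nn.ReLU(), nn.Linear(128, 1)
        )

    def forward(self, points, queries):
        # points: (B, N, 3) sparse lidar points; queries: (B, Q, 3) query locations
        latent = self.backbone(points)                             # (B, latent_dim)
        lat = latent.unsqueeze(1).expand(-1, queries.shape[1], -1)
        return self.decoder(torch.cat([lat, queries], dim=-1)).squeeze(-1)  # (B, Q) logits

# Pretraining step: binary cross-entropy against occupancy pseudo-labels
# derived from the scene surface (label construction omitted here).
backbone = nn.Sequential(nn.Flatten(1), nn.Linear(1024 * 3, 256))  # toy stand-in encoder
model = OccupancyPretext(backbone, 256)
pts, qry = torch.randn(2, 1024, 3), torch.randn(2, 64, 3)
labels = torch.randint(0, 2, (2, 64)).float()
loss = nn.functional.binary_cross_entropy_with_logits(model(pts, qry), labels)
```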
- Collaborative Propagation on Multiple Instance Graphs for 3D Instance Segmentation with Single-point Supervision [63.429704654271475]
We propose a novel weakly supervised method RWSeg that only requires labeling one object with one point.
With these sparse weak labels, we introduce a unified framework with two branches to propagate semantic and instance information.
Specifically, we propose a Cross-graph Competing Random Walks (CRW) algorithm that encourages competition among different instance graphs.
arXiv Detail & Related papers (2022-08-10T02:14:39Z)
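The toy sketch below illustrates label propagation by random walks in which instance seeds compete for every point; the matrix form and normalization scheme are simplifications, not the RWSeg CRW algorithm itself.

```python
# Simplified random-walk label propagation with competition among instance
# seeds (a stand-in for Cross-graph Competing Random Walks; details assumed).
import numpy as np

def competing_random_walks(affinity, seeds, steps=50):
    # affinity: (N, N) symmetric point/supervoxel affinities
    # seeds: (N, K) one-hot matrix, one labeled point per instance k
    P = affinity / affinity.sum(axis=1, keepdims=True)     # row-stochastic transitions
    scores = seeds.astype(float)
    for _ in range(steps):
        scores = P @ scores                                 # each instance diffuses over the graph
        scores[seeds.astype(bool)] = 1.0                    # clamp the labeled seed points
        scores /= scores.sum(axis=1, keepdims=True) + 1e-8  # instances compete for each point
    return scores.argmax(axis=1)                            # hard instance assignment

# Toy usage: 6 points on a chain, 2 instances seeded at points 0 and 5.
A = np.eye(6) + np.diag(np.ones(5), 1) + np.diag(np.ones(5), -1)
S = np.zeros((6, 2)); S[0, 0] = 1; S[5, 1] = 1
print(competing_random_walks(A, S))
```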
- N-Cloth: Predicting 3D Cloth Deformation with Mesh-Based Networks [69.94313958962165]
We present a novel mesh-based learning approach (N-Cloth) for plausible 3D cloth deformation prediction.
We use graph convolution to transform the cloth and object meshes into a latent space to reduce the non-linearity in the mesh space.
Our approach can handle complex cloth meshes with up to 100K triangles and scenes with various objects corresponding to SMPL humans, Non-SMPL humans, or rigid bodies.
arXiv Detail & Related papers (2021-12-13T03:13:11Z)
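Below is a bare-bones example of encoding mesh vertices into a latent code with graph convolutions; the layer sizes and message-passing rule are placeholders rather than the N-Cloth architecture.

```python
# A minimal graph-convolution encoder over mesh vertices, illustrating the
# "mesh -> latent space" step at a high level (all design choices assumed).
import torch
import torch.nn as nn

class MeshGraphConv(nn.Module):
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, x, adj):
        # x: (V, in_dim) vertex features, adj: (V, V) normalized mesh adjacency
        return torch.relu(self.lin(adj @ x))    # aggregate 1-ring neighbors, then transform

class MeshEncoder(nn.Module):
    def __init__(self, latent_dim=128):
        super().__init__()
        self.conv1 = MeshGraphConv(3, 64)       # input: vertex positions
        self.conv2 = MeshGraphConv(64, latent_dim)

    def forward(self, verts, adj):
        h = self.conv2(self.conv1(verts, adj), adj)
        return h.mean(dim=0)                    # pooled latent code for the whole mesh

# Toy usage: 100 vertices with a placeholder (identity) adjacency matrix.
V = 100
latent = MeshEncoder()(torch.randn(V, 3), torch.eye(V))
print(latent.shape)                             # torch.Size([128])
```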
- CloudWalker: Random walks for 3D point cloud shape analysis [20.11028799145883]
We propose CloudWalker, a novel method for learning 3D shapes using random walks.
Our approach achieves state-of-the-art results for two 3D shape analysis tasks: classification and retrieval.
arXiv Detail & Related papers (2021-12-02T08:24:01Z)
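A minimal sketch of generating random walks over a point cloud's k-nearest-neighbor graph follows; the walk rules and the network that consumes the walks in CloudWalker are more involved, so treat the helper names and parameters as assumptions.

```python
# Generate random walks over a point cloud's k-NN graph (illustrative only).
import numpy as np

def knn_graph(points, k=8):
    # points: (N, 3); returns the indices of each point's k nearest neighbors
    d = np.linalg.norm(points[:, None] - points[None, :], axis=-1)
    return np.argsort(d, axis=1)[:, 1:k + 1]

def random_walk(points, neighbors, length=32, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    walk = [rng.integers(len(points))]                 # random starting point
    for _ in range(length - 1):
        walk.append(rng.choice(neighbors[walk[-1]]))   # hop to a random neighbor
    return points[walk]                                # (length, 3) coordinate sequence

pts = np.random.rand(500, 3)
nbrs = knn_graph(pts)
walks = np.stack([random_walk(pts, nbrs) for _ in range(4)])  # a handful of walks
print(walks.shape)  # (4, 32, 3)
```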
- Grasp-Oriented Fine-grained Cloth Segmentation without Real Supervision [66.56535902642085]
This paper tackles the problem of fine-grained region detection in deformed clothes using only a depth image.
We define up to 6 semantic regions of varying extent, including edges on the neckline, sleeve cuffs, and hem, plus top and bottom grasping points.
We introduce a U-net based network to segment and label these parts.
We show that training our network solely with synthetic data and the proposed DA yields results competitive with models trained on real data.
arXiv Detail & Related papers (2021-10-06T16:31:20Z)
- 3D Convolution Neural Network based Person Identification using Gait cycles [0.0]
In this work, gait features are used to identify an individual. The steps involve object detection, background subtraction, silhouette extraction, skeletonization, and training a 3D Convolutional Neural Network on these gait features.
The proposed method focuses on the lower body to extract features such as the angle between the knee and thigh, the hip angle, the angle of contact, and several others.
arXiv Detail & Related papers (2021-06-06T14:27:06Z)
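The sketch below shows only the final stage of such a pipeline, a small 3D CNN classifying a silhouette volume; the channel sizes and input resolution are arbitrary choices, and the detection, background-subtraction, and skeletonization steps are omitted.

```python
# A small 3D-CNN classifier over a sequence of gait silhouettes (illustrative).
import torch
import torch.nn as nn

class Gait3DCNN(nn.Module):
    def __init__(self, n_subjects: int):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.AdaptiveAvgPool3d(1),
        )
        self.classifier = nn.Linear(32, n_subjects)

    def forward(self, clips):
        # clips: (batch, 1, frames, height, width) binary silhouette volumes
        return self.classifier(self.features(clips).flatten(1))

# One gait cycle of 16 frames at 64x64, classified into 20 enrolled subjects.
logits = Gait3DCNN(20)(torch.rand(2, 1, 16, 64, 64))
print(logits.shape)  # torch.Size([2, 20])
```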
- Hidden Footprints: Learning Contextual Walkability from 3D Human Trails [70.01257397390361]
Current datasets only tell you where people are, not where they could be.
We first augment the set of valid, labeled walkable regions by propagating person observations between images, utilizing 3D information to create what we call hidden footprints.
We devise a training strategy designed for such sparse labels, combining a class-balanced classification loss with a contextual adversarial loss.
arXiv Detail & Related papers (2020-08-19T23:19:08Z)
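A hedged sketch of combining a class-balanced classification loss with an adversarial term on sparse labels appears below; the masking, weighting, and toy discriminator are assumptions, not the paper's exact losses.

```python
# Combine a class-balanced BCE on sparsely labeled pixels with an adversarial
# term from a discriminator over predicted walkability maps (illustrative).
import torch
import torch.nn as nn
import torch.nn.functional as F

def walkability_loss(pred_logits, sparse_labels, label_mask, discriminator, adv_weight=0.1):
    # pred_logits, sparse_labels, label_mask: (B, 1, H, W); labels count only where mask == 1
    # Class-balanced BCE on the few labeled pixels, up-weighting the rare positives.
    bce = F.binary_cross_entropy_with_logits(
        pred_logits, sparse_labels, weight=label_mask,
        pos_weight=torch.tensor(10.0))
    # Contextual adversarial term: predictions should look like plausible footprint
    # maps to a discriminator (its own training loop is omitted here).
    d_out = discriminator(torch.sigmoid(pred_logits))
    adv = F.binary_cross_entropy_with_logits(d_out, torch.ones_like(d_out))
    return bce + adv_weight * adv

# Toy usage with a trivial convolutional discriminator.
disc = nn.Sequential(nn.Conv2d(1, 1, 3, padding=1), nn.AdaptiveAvgPool2d(1), nn.Flatten())
pred = torch.randn(2, 1, 32, 32)
labels = torch.randint(0, 2, (2, 1, 32, 32)).float()
mask = torch.randint(0, 2, (2, 1, 32, 32)).float()
loss = walkability_loss(pred, labels, mask, disc)
```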
- MeshWalker: Deep Mesh Understanding by Random Walks [19.594977587417247]
We look at the most popular representation of 3D shapes in computer graphics - a triangular mesh - and ask how it can be utilized within deep learning.
This paper proposes a very different approach, termed MeshWalker, to learn the shape directly from a given mesh.
We show that our approach achieves state-of-the-art results for two fundamental shape analysis tasks.
arXiv Detail & Related papers (2020-06-09T15:35:41Z)
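For illustration, the sketch below walks randomly along mesh edges and classifies the resulting step sequence with a GRU; the walk features, network size, and helper names are assumptions rather than the MeshWalker model.

```python
# Walk randomly along mesh edges, then classify the step sequence with an RNN.
import numpy as np
import torch
import torch.nn as nn

def mesh_random_walk(vertices, vertex_neighbors, length=64, rng=None):
    # vertices: (V, 3); vertex_neighbors: per-vertex neighbor lists from mesh edges
    rng = np.random.default_rng() if rng is None else rng
    walk = [int(rng.integers(len(vertices)))]
    for _ in range(length - 1):
        walk.append(int(rng.choice(vertex_neighbors[walk[-1]])))
    return np.diff(vertices[walk], axis=0)     # per-step displacements as walk features

class WalkRNN(nn.Module):
    def __init__(self, n_classes, hidden=128):
        super().__init__()
        self.rnn = nn.GRU(input_size=3, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, walks):                  # walks: (batch, steps, 3)
        _, h = self.rnn(walks)
        return self.head(h[-1])                # classify from the final hidden state

# Toy mesh: 4 vertices on a ring, 30 shape classes.
verts = np.array([[0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0]], dtype=np.float32)
nbrs = [[1, 3], [0, 2], [1, 3], [0, 2]]
walk = torch.from_numpy(mesh_random_walk(verts, nbrs)[None])  # (1, 63, 3)
print(WalkRNN(30)(walk).shape)  # torch.Size([1, 30])
```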
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.