ReFiNe: Recursive Field Networks for Cross-modal Multi-scene Representation
- URL: http://arxiv.org/abs/2406.04309v1
- Date: Thu, 6 Jun 2024 17:55:34 GMT
- Authors: Sergey Zakharov, Katherine Liu, Adrien Gaidon, Rares Ambrus
- Abstract summary: We show how to encode multiple shapes represented as continuous neural fields with a higher degree of precision than previously possible.
We demonstrate state-of-the-art multi-scene reconstruction and compression results with a single network per dataset.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The common trade-offs of state-of-the-art methods for multi-shape representation (a single model "packing" multiple objects) involve trading modeling accuracy against memory and storage. We show how to encode multiple shapes represented as continuous neural fields with a higher degree of precision than previously possible and with low memory usage. Key to our approach is a recursive hierarchical formulation that exploits object self-similarity, leading to a highly compressed and efficient shape latent space. Thanks to the recursive formulation, our method supports spatial and global-to-local latent feature fusion without needing to initialize and maintain auxiliary data structures, while still allowing for continuous field queries to enable applications such as raytracing. In experiments on a set of diverse datasets, we provide compelling qualitative results and demonstrate state-of-the-art multi-scene reconstruction and compression results with a single network per dataset.
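The following is a minimal sketch of the recursive idea as described in the abstract, not the authors' implementation: a single root latent is expanded level by level into per-octant child latents by a shared subdivision network, global-to-local features are fused along the descent, and a small MLP head answers continuous SDF queries. All module names, dimensions, and the fixed recursion depth are illustrative assumptions.

```python
import torch
import torch.nn as nn


class RecursiveField(nn.Module):
    """Toy recursive field: one root latent, octree-style latent subdivision."""

    def __init__(self, latent_dim: int = 64, depth: int = 3):
        super().__init__()
        self.depth = depth
        self.root = nn.Parameter(torch.randn(latent_dim))  # per-shape latent
        # Shared subdivision network: parent latent -> 8 child (octant) latents.
        self.subdivide = nn.Sequential(
            nn.Linear(latent_dim, 256), nn.ReLU(),
            nn.Linear(256, 8 * latent_dim),
        )
        # Field head: fused latent + query point -> signed distance.
        self.head = nn.Sequential(
            nn.Linear(latent_dim + 3, 256), nn.ReLU(),
            nn.Linear(256, 1),
        )

    def forward(self, xyz: torch.Tensor) -> torch.Tensor:
        """xyz: (N, 3) query points in [-1, 1]^3 -> (N, 1) SDF values."""
        n, d = xyz.shape[0], self.root.shape[0]
        latent = self.root.expand(n, d)
        fused = latent                                   # global feature
        lo, hi = -torch.ones_like(xyz), torch.ones_like(xyz)
        for _ in range(self.depth):
            mid = 0.5 * (lo + hi)
            octant = (xyz > mid).long()                  # (N, 3) in {0, 1}
            idx = octant[:, 0] * 4 + octant[:, 1] * 2 + octant[:, 2]
            children = self.subdivide(latent).view(n, 8, d)
            latent = children[torch.arange(n), idx]      # descend one level
            fused = fused + latent                       # global-to-local fusion
            lo = torch.where(octant.bool(), mid, lo)     # shrink the cell
            hi = torch.where(octant.bool(), hi, mid)
        return self.head(torch.cat([fused, xyz], dim=-1))


field = RecursiveField()
sdf = field(torch.rand(1024, 3) * 2 - 1)  # continuous queries, e.g. for raytracing
```

Because the descent is purely functional (the shared subdivision network regenerates child latents on the fly), no auxiliary tree data structure needs to be initialized or maintained, which is the property the abstract highlights.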
Related papers
- SpaceMesh: A Continuous Representation for Learning Manifold Surface Meshes (2024-09-30)
We present a scheme to directly generate manifold, polygonal meshes of complex connectivity as the output of a neural network.
Our key innovation is to define a continuous latent connectivity space at each mesh vertex, which implies the discrete mesh.
In applications, this approach not only yields high-quality outputs from generative models, but also enables directly learning challenging geometry processing tasks such as mesh repair.
arXiv Detail & Related papers (2024-09-30T17:59:03Z) - Unity by Diversity: Improved Representation Learning in Multimodal VAEs [24.691068754720106]
We show that a better latent representation can be obtained by replacing hard constraints with a soft constraint.
We show improved learned latent representations and imputation of missing data modalities compared to existing methods.
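A minimal sketch of what such a soft constraint could look like (my illustration, not the paper's code): each modality keeps its own Gaussian posterior, which is softly pulled toward the uniform mixture of all modalities' posteriors via a one-sample Monte Carlo KL term. Shapes and the single-sample estimate are assumptions.

```python
import torch
import torch.distributions as D


def soft_alignment_kl(mus: torch.Tensor, logvars: torch.Tensor) -> torch.Tensor:
    """mus, logvars: (M, B, Z) per-modality posterior params -> scalar loss."""
    m, b, _ = mus.shape
    stds = (0.5 * logvars).exp()
    # Uniform mixture of all modalities' posteriors, batched over B.
    comp = D.Independent(
        D.Normal(mus.permute(1, 0, 2), stds.permute(1, 0, 2)), 1)
    mix = D.MixtureSameFamily(D.Categorical(logits=torch.zeros(b, m)), comp)
    loss = 0.0
    for i in range(m):
        q_i = D.Independent(D.Normal(mus[i], stds[i]), 1)  # q_i(z | x_i)
        z = q_i.rsample()                                  # (B, Z), reparameterized
        # One-sample Monte Carlo estimate of KL(q_i || mixture).
        loss = loss + (q_i.log_prob(z) - mix.log_prob(z)).mean()
    return loss / m


mus, logvars = torch.randn(3, 16, 8), torch.zeros(3, 16, 8)
print(soft_alignment_kl(mus, logvars))  # soft alignment, no hard shared latent
```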
arXiv Detail & Related papers (2024-03-08T13:29:46Z) - One for All: Toward Unified Foundation Models for Earth Vision [24.358013737755822]
Current remote sensing foundation models specialize in a single modality or a specific spatial resolution range.
We introduce OFA-Net, which employs a single, shared Transformer backbone for multiple data modalities with different spatial resolutions.
The proposed method is evaluated on 12 distinct downstream tasks and demonstrates promising performance.
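A minimal sketch of the shared-backbone idea (illustrative, not the OFA-Net release): modality-specific patch embedders feed one shared Transformer encoder. Channel counts, patch size, and the modality names are assumptions.

```python
import torch
import torch.nn as nn


class SharedBackbone(nn.Module):
    def __init__(self, dim: int = 256, channels: dict = None):
        super().__init__()
        channels = channels or {"optical": 3, "sar": 2, "hyperspectral": 64}
        # One patch embedder per modality; everything after it is shared.
        self.embed = nn.ModuleDict({
            name: nn.Conv2d(c, dim, kernel_size=16, stride=16)
            for name, c in channels.items()
        })
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=8, batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, num_layers=4)

    def forward(self, x: torch.Tensor, modality: str) -> torch.Tensor:
        tokens = self.embed[modality](x).flatten(2).transpose(1, 2)  # (B, N, dim)
        return self.backbone(tokens)


model = SharedBackbone()
feats = model(torch.randn(2, 2, 128, 128), modality="sar")  # (2, 64, 256)
```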
arXiv Detail & Related papers (2024-01-15T08:12:51Z) - Preserving Modality Structure Improves Multi-Modal Learning [64.10085674834252]
Self-supervised learning on large-scale multi-modal datasets allows learning semantically meaningful embeddings without relying on human annotations.
These methods often struggle to generalize well on out-of-domain data as they ignore the semantic structure present in modality-specific embeddings.
We propose a novel Semantic-Structure-Preserving Consistency approach to improve generalizability by preserving the modality-specific relationships in the joint embedding space.
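A minimal sketch of one way to implement such a structure-preserving consistency term (my reading, not the authors' code): match the pairwise similarity structure of the modality-specific embeddings before and after projection into the joint space. The MSE penalty and batch-level cosine similarities are assumptions.

```python
import torch
import torch.nn.functional as F


def structure_consistency(spec: torch.Tensor, joint: torch.Tensor) -> torch.Tensor:
    """spec: (B, Ds) modality-specific embeddings; joint: (B, Dj) joint ones."""
    sim_spec = F.normalize(spec, dim=-1) @ F.normalize(spec, dim=-1).T
    sim_joint = F.normalize(joint, dim=-1) @ F.normalize(joint, dim=-1).T
    return F.mse_loss(sim_joint, sim_spec)  # preserve relative neighborhoods


loss = structure_consistency(torch.randn(32, 512), torch.randn(32, 256))
```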
arXiv Detail & Related papers (2023-08-24T20:46:48Z) - T1: Scaling Diffusion Probabilistic Fields to High-Resolution on Unified
Visual Modalities [69.16656086708291]
Diffusion Probabilistic Field (DPF) models the distribution of continuous functions defined over metric spaces.
We propose a new model comprising a view-wise sampling algorithm that focuses on local structure learning.
The model can be scaled to generate high-resolution data while unifying multiple modalities.
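A minimal sketch of what view-wise sampling could mean for an image treated as a field (an assumption about the method, not the released code): draw a contiguous local view of (coordinate, value) pairs instead of scattered random ones, so the denoiser sees coherent local structure.

```python
import torch


def sample_view(img: torch.Tensor, view: int = 32):
    """img: (C, H, W) -> coords (view*view, 2) in [-1, 1], values (view*view, C)."""
    c, h, w = img.shape
    top = torch.randint(0, h - view + 1, (1,)).item()
    left = torch.randint(0, w - view + 1, (1,)).item()
    gy, gx = torch.meshgrid(torch.arange(top, top + view),
                            torch.arange(left, left + view), indexing="ij")
    coords = torch.stack([gy, gx], dim=-1).reshape(-1, 2).float()
    coords = coords / torch.tensor([h - 1, w - 1]) * 2 - 1   # normalize to [-1, 1]
    values = img[:, gy, gx].permute(1, 2, 0).reshape(-1, c)
    return coords, values


coords, values = sample_view(torch.rand(3, 256, 256))  # one local view per draw
```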
arXiv Detail & Related papers (2023-05-24T03:32:03Z) - Learning Sequential Latent Variable Models from Multimodal Time Series
Data [6.107812768939553]
We present a self-supervised generative modelling framework to jointly learn a probabilistic latent state representation of multimodal data.
We demonstrate that our approach leads to significant improvements in prediction and representation quality.
arXiv Detail & Related papers (2022-04-21T21:59:24Z) - Multi-Domain Adversarial Feature Generalization for Person
Re-Identification [52.835955258959785]
We propose a multi-dataset feature generalization network (MMFA-AAE) capable of learning a universal domain-invariant feature representation from multiple labeled datasets and generalizing it to unseen camera systems.
It also surpasses many state-of-the-art supervised methods and unsupervised domain adaptation methods by a large margin.
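A minimal sketch of a standard adversarial-alignment component of the kind such methods build on (generic, not the MMFA-AAE release): a domain classifier trained through a gradient-reversal layer pushes the backbone toward features that do not reveal which dataset an image came from.

```python
import torch
import torch.nn as nn


class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad):
        return -grad  # flipped sign: the backbone learns to fool the classifier


class DomainAdversary(nn.Module):
    def __init__(self, feat_dim: int = 512, num_domains: int = 4):
        super().__init__()
        self.clf = nn.Sequential(
            nn.Linear(feat_dim, 256), nn.ReLU(), nn.Linear(256, num_domains))

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        return self.clf(GradReverse.apply(feats))


adversary = DomainAdversary()
logits = adversary(torch.randn(8, 512))  # add CE vs. domain labels to the loss
```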
arXiv Detail & Related papers (2020-11-25T08:03:15Z) - CellSegmenter: unsupervised representation learning and instance
segmentation of modular images [0.0]
We introduce a structured deep generative model and an amortized inference framework for unsupervised representation learning and instance segmentation tasks.
The proposed inference algorithm is convolutional and parallelized, without any recurrent mechanisms.
We show segmentation results obtained for a cell nuclei imaging dataset, demonstrating the ability of our method to provide high-quality segmentations.
arXiv Detail & Related papers (2020-11-25T02:10:58Z) - Learning Enriched Features for Real Image Restoration and Enhancement [166.17296369600774]
convolutional neural networks (CNNs) have achieved dramatic improvements over conventional approaches for image restoration task.
We present a novel architecture with the goal of maintaining spatially precise, high-resolution representations through the entire network.
Our approach learns an enriched set of features that combines contextual information from multiple scales, while simultaneously preserving the high-resolution spatial details.
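A minimal sketch of the two-stream idea (a toy block, not the MIRNet implementation): a full-resolution, spatially precise path exchanges information with a downsampled, larger-context path before residual fusion. Channel counts are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class TwoStreamBlock(nn.Module):
    def __init__(self, ch: int = 64):
        super().__init__()
        self.full = nn.Conv2d(ch, ch, 3, padding=1)            # spatially precise
        self.down = nn.Conv2d(ch, ch, 3, stride=2, padding=1)  # contextual path
        self.fuse = nn.Conv2d(2 * ch, ch, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        hi = F.relu(self.full(x))
        lo = F.relu(self.down(x))
        lo_up = F.interpolate(lo, size=hi.shape[-2:], mode="bilinear",
                              align_corners=False)
        return self.fuse(torch.cat([hi, lo_up], dim=1)) + x    # residual fusion


block = TwoStreamBlock()
out = block(torch.randn(1, 64, 128, 128))  # same spatial size in and out
```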
arXiv Detail & Related papers (2020-03-15T11:04:30Z)