Efficient Graphics Representation with Differentiable Indirection
- URL: http://arxiv.org/abs/2309.08387v2
- Date: Fri, 17 Nov 2023 21:12:56 GMT
- Title: Efficient Graphics Representation with Differentiable Indirection
- Authors: Sayantan Datta, Carl Marshall, Derek Nowrouzezahrai, Zhao Dong,
Zhengqin Li
- Abstract summary: We introduce differentiable indirection -- a novel learned primitive that employs differentiable multi-scale lookup tables.
In all cases, differentiable indirection seamlessly integrates into existing architectures, trains rapidly, and yields both versatile and efficient results.
- Score: 17.025494260380476
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We introduce differentiable indirection -- a novel learned primitive that
employs differentiable multi-scale lookup tables as an effective substitute for
traditional compute and data operations across the graphics pipeline. We
demonstrate its flexibility on a number of graphics tasks, i.e., geometric and
image representation, texture mapping, shading, and radiance field
representation. In all cases, differentiable indirection seamlessly integrates
into existing architectures, trains rapidly, and yields both versatile and
efficient results.
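To make the lookup-table primitive concrete, below is a minimal, hypothetical PyTorch sketch of a one-dimensional differentiable indirection (not the authors' released code): a trainable primary table maps a normalized input coordinate to a pointer, which is then dereferenced, with linear interpolation, into a trainable secondary table. The table sizes, the helper `lerp_lookup`, and the class name `DifferentiableIndirection1D` are illustrative assumptions; the paper's arrays are multi-dimensional and multi-scale.

```python
import torch
import torch.nn as nn

def lerp_lookup(table: torch.Tensor, u: torch.Tensor) -> torch.Tensor:
    """Differentiable 1D table lookup.

    table: (N, C) trainable entries; u: (B,) normalized coords in [0, 1].
    Linear interpolation keeps gradients flowing to both the table entries
    and the query coordinates.
    """
    n = table.shape[0]
    x = u.clamp(0.0, 1.0) * (n - 1)                  # continuous index
    i0 = x.detach().floor().long().clamp(0, n - 2)   # left neighbor
    w = (x - i0.to(x.dtype)).unsqueeze(-1)           # blend weight (keeps grad wrt u)
    return (1.0 - w) * table[i0] + w * table[i0 + 1]

class DifferentiableIndirection1D(nn.Module):
    """Hypothetical minimal example: a primary table stores pointers
    (coordinates) into a secondary table; both are optimized jointly."""

    def __init__(self, primary_size: int = 64, secondary_size: int = 256,
                 out_channels: int = 3):
        super().__init__()
        self.primary = nn.Parameter(torch.rand(primary_size, 1))                 # learned pointers
        self.secondary = nn.Parameter(torch.rand(secondary_size, out_channels))  # learned values

    def forward(self, u: torch.Tensor) -> torch.Tensor:
        pointer = lerp_lookup(self.primary, u).squeeze(-1)  # indirection: coord -> pointer
        return lerp_lookup(self.secondary, pointer)         # dereference: pointer -> value

# Example use (target is a placeholder for whatever signal is being fit):
# di = DifferentiableIndirection1D()
# u = torch.rand(1024)
# loss = ((di(u) - target(u)) ** 2).mean(); loss.backward()
```

The same chained-lookup idea would stand in for texture fetches, shading, or radiance-field queries in the tasks listed above, with bilinear or trilinear interpolation replacing the 1D case.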
Related papers
- From Primes to Paths: Enabling Fast Multi-Relational Graph Analysis [5.008498268411793]
Multi-relational networks capture intricate relationships in data and have diverse applications across fields such as biomedical, financial, and social sciences.
This work extends the Prime Adjacency Matrices framework, which employs prime numbers to represent distinct relations within a network uniquely.
arXiv Detail & Related papers (2024-11-17T18:43:01Z)
- Material Transforms from Disentangled NeRF Representations [23.688782106067166]
We propose a novel method for transferring material transformations across different scenes.
We learn to map Bidirectional Reflectance Distribution Functions (BRDF) from pairs of scenes observed in varying conditions.
The learned transformations can then be applied to unseen scenes with similar materials, effectively rendering the learned transformation at an arbitrary level of intensity.
arXiv Detail & Related papers (2024-11-12T18:59:59Z)
- Graph-Dictionary Signal Model for Sparse Representations of Multivariate Data [49.77103348208835]
We define a novel Graph-Dictionary signal model, where a finite set of graphs characterizes relationships in data distribution through a weighted sum of their Laplacians.
We propose a framework to infer the graph dictionary representation from observed data, along with a bilinear generalization of the primal-dual splitting algorithm to solve the learning problem.
We exploit graph-dictionary representations in a motor imagery decoding task on brain activity data, where we classify imagined motion better than standard methods.
arXiv Detail & Related papers (2024-11-08T17:40:43Z)
- LEGO: Learnable Expansion of Graph Operators for Multi-Modal Feature Fusion [32.09145985103859]
In computer vision tasks, features often come from diverse representations, domains, and modalities, such as text, images, and videos.
In this paper, we shift from high-dimensional feature space to a lower-dimensional, interpretable graph space by constructing similarity graphs.
Our approach is relationship-centric, operates in a homogeneous space, and is mathematically principled.
arXiv Detail & Related papers (2024-10-02T12:58:55Z)
- Flow Factorized Representation Learning [109.51947536586677]
We introduce a generative model which specifies a distinct set of latent probability paths that define different input transformations.
We show that our model achieves higher likelihoods on standard representation learning benchmarks while simultaneously being closer to approximately equivariant models.
arXiv Detail & Related papers (2023-09-22T20:15:37Z)
- A System for Morphology-Task Generalization via Unified Representation and Behavior Distillation [28.041319351752485]
In this work, we explore a method for learning a single policy that manipulates various forms of agents to solve various tasks by distilling a large amount of proficient behavioral data.
We introduce the morphology-task graph, which treats observations, actions, and goals/tasks in a unified graph representation.
We also develop MxT-Bench for fast large-scale behavior generation, which supports procedural generation of diverse morphology-task combinations.
arXiv Detail & Related papers (2022-11-25T18:52:48Z)
- DyTed: Disentangled Representation Learning for Discrete-time Dynamic Graph [59.583555454424]
We propose a novel disenTangled representation learning framework for discrete-time Dynamic graphs, namely DyTed.
We specially design a temporal-clips contrastive learning task together with a structure contrastive learning to effectively identify the time-invariant and time-varying representations respectively.
arXiv Detail & Related papers (2022-10-19T14:34:12Z)
- Analogous to Evolutionary Algorithm: Designing a Unified Sequence Model [58.17021225930069]
We explain the rationale of the Vision Transformer by analogy with the proven and practical Evolutionary Algorithm (EA).
We propose a more efficient EAT model, and design task-related heads to deal with different tasks more flexibly.
Our approach achieves state-of-the-art results on the ImageNet classification task compared with recent vision transformer works.
arXiv Detail & Related papers (2021-05-31T16:20:03Z)
- Efficient and Differentiable Shadow Computation for Inverse Problems [64.70468076488419]
Differentiable geometric computation has received increasing interest for image-based inverse problems.
We propose an efficient yet effective approach for differentiable visibility and soft shadow computation.
As our formulation is differentiable, it can be used to solve inverse problems such as texture, illumination, rigid pose, and deformation recovery from images.
arXiv Detail & Related papers (2021-04-01T09:29:05Z)
- Unsupervised Discovery of Disentangled Manifolds in GANs [74.24771216154105]
An interpretable generation process is beneficial to various image editing applications.
We propose a framework to discover interpretable directions in the latent space given arbitrary pre-trained generative adversarial networks.
arXiv Detail & Related papers (2020-11-24T02:18:08Z)
- The Immersion of Directed Multi-graphs in Embedding Fields. Generalisations [0.0]
This paper outlines a generalised model for representing hybrid-categorical, symbolic, perceptual-sensory and perceptual-latent data.
This variety of representation is currently used by various machine-learning models in computer vision and NLP/NLU. It is achieved by endowing a directed, relational-typed multi-graph with at least some edge attributes that represent embeddings from various latent spaces.
arXiv Detail & Related papers (2020-04-28T09:28:08Z)