Towards Croppable Implicit Neural Representations
- URL: http://arxiv.org/abs/2409.19472v2
- Date: Wed, 23 Oct 2024 14:02:12 GMT
- Title: Towards Croppable Implicit Neural Representations
- Authors: Maor Ashkenazi, Eran Treister
- Abstract summary: Implicit Neural Representations (INRs) have piqued interest in recent years due to their ability to encode natural signals using neural networks.
In this paper we explore the idea of editable INRs, and specifically focus on the widely used cropping operation.
We present Local-Global SIRENs -- a novel INR architecture that supports cropping by design.
- Score: 9.372436024276828
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Implicit Neural Representations (INRs) have piqued interest in recent years due to their ability to encode natural signals using neural networks. While INRs allow for useful applications such as interpolating new coordinates and signal compression, their black-box nature makes it difficult to modify them post-training. In this paper we explore the idea of editable INRs, and specifically focus on the widely used cropping operation. To this end, we present Local-Global SIRENs -- a novel INR architecture that supports cropping by design. Local-Global SIRENs are based on combining local and global feature extraction for signal encoding. What makes their design unique is the ability to effortlessly remove specific portions of an encoded signal, with a proportional weight decrease. This is achieved by eliminating the corresponding weights from the network, without the need for retraining. We further show how this architecture can be used to support the straightforward extension of previously encoded signals. Beyond signal editing, we examine how the Local-Global approach can accelerate training, enhance encoding of various signals, improve downstream performance, and be applied to modern INRs such as INCODE, highlighting its potential and flexibility. Code is available at https://github.com/maorash/Local-Global-INRs.
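To make the cropping mechanism concrete, here is a minimal PyTorch sketch of the local-plus-global idea as described in the abstract: one small SIREN per signal partition plus a shared global SIREN, where cropping simply deletes the local branches of discarded partitions. The partitioning scheme, layer widths, and the additive combination of local and global outputs are illustrative assumptions, not the authors' implementation; see the linked repository for the real architecture.

```python
import torch
import torch.nn as nn


class SineLayer(nn.Module):
    def __init__(self, in_f, out_f, w0=30.0):
        super().__init__()
        self.linear = nn.Linear(in_f, out_f)
        self.w0 = w0

    def forward(self, x):
        return torch.sin(self.w0 * self.linear(x))


def siren(in_f, hidden, out_f, depth=3):
    layers = [SineLayer(in_f, hidden)]
    layers += [SineLayer(hidden, hidden) for _ in range(depth - 2)]
    layers.append(nn.Linear(hidden, out_f))
    return nn.Sequential(*layers)


class LocalGlobalINR(nn.Module):
    """One small SIREN per signal partition plus one shared global SIREN."""

    def __init__(self, num_parts, in_f=2, hidden=64, out_f=1):
        super().__init__()
        self.local = nn.ModuleList(siren(in_f, hidden, out_f) for _ in range(num_parts))
        self.global_net = siren(in_f, hidden, out_f)

    def forward(self, coords, part_idx):
        # coords: (N, in_f) coordinates that all lie in partition `part_idx`.
        return self.local[part_idx](coords) + self.global_net(coords)

    def crop(self, keep_parts):
        # Cropping: drop the weights of discarded partitions outright; no retraining.
        self.local = nn.ModuleList(self.local[i] for i in keep_parts)
```

Deleting a branch removes its parameters outright, so the stored network shrinks in proportion to the cropped region, matching the "proportional weight decrease" the abstract describes.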
Related papers
- Predicting the Encoding Error of SIRENs [4.673285689826945]
Implicit Neural Representations (INRs) encode signals such as images, videos, and 3D shapes in the weights of neural networks.
We present a method which predicts the encoding error that a popular INR network (SIREN) will reach.
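For context, the "encoding error" is the reconstruction error a SIREN reaches after being fit to a signal. A minimal sketch of how such an error is typically measured, assuming PSNR as the metric (the paper's error predictor itself is not reproduced here):

```python
import torch


def encoding_error_psnr(inr, coords, signal):
    """coords: (N, d) inputs; signal: (N, c) ground truth scaled to [0, 1]."""
    with torch.no_grad():
        mse = torch.mean((inr(coords) - signal) ** 2)
    return 10.0 * torch.log10(1.0 / mse)
```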
arXiv Detail & Related papers (2024-10-29T01:19:22Z)
- Learning Transferable Features for Implicit Neural Representations [37.12083836826336]
Implicit neural representations (INRs) have demonstrated success in a variety of applications, including inverse problems and neural rendering.
We introduce a new INR training framework, STRAINER, that learns transferable features for fitting INRs to new signals.
We evaluate STRAINER on multiple in-domain and out-of-domain signal fitting tasks and inverse problems.
arXiv Detail & Related papers (2024-09-15T00:53:44Z)
- Encoding Semantic Priors into the Weights of Implicit Neural Representation [6.057991864861226]
Implicit neural representation (INR) has recently emerged as a promising paradigm for signal representations.
This paper proposes a reparameterization method, termed SPW, which encodes semantic priors into the weights of an INR.
Experimental results show that SPW significantly improves the performance of various INR models across tasks.
arXiv Detail & Related papers (2024-06-06T15:35:41Z)
- Locality-Aware Generalizable Implicit Neural Representation [54.93702310461174]
Generalizable implicit neural representation (INR) enables a single continuous function to represent multiple data instances.
We propose a novel framework for generalizable INR that combines a transformer encoder with a locality-aware INR decoder.
Our framework significantly outperforms previous generalizable INRs and validates the usefulness of the locality-aware latents for downstream tasks.
arXiv Detail & Related papers (2023-10-09T11:26:58Z)
- DINER: Disorder-Invariant Implicit Neural Representation [33.10256713209207]
Implicit neural representation (INR) characterizes the attributes of a signal as a function of corresponding coordinates.
We propose the disorder-invariant implicit neural representation (DINER) by augmenting a traditional INR backbone with a hash-table.
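A hedged sketch of the hash-table idea as summarized above: a learnable table, indexed by each discrete input coordinate, replaces the raw coordinate fed to a standard INR backbone, making the mapping invariant to how the signal's samples are ordered. Table dimensions and the backbone are illustrative assumptions, not DINER's exact design.

```python
import torch
import torch.nn as nn


class HashINR(nn.Module):
    def __init__(self, num_coords, table_dim, backbone):
        super().__init__()
        # One learnable entry per grid point of the discrete signal.
        self.table = nn.Parameter(torch.empty(num_coords, table_dim).uniform_(-1, 1))
        self.backbone = backbone  # e.g. a small MLP mapping table_dim -> channels

    def forward(self, idx):
        # idx: (N,) long tensor of flattened grid indices.
        return self.backbone(self.table[idx])
```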
arXiv Detail & Related papers (2022-11-15T03:34:24Z)
- Signal Processing for Implicit Neural Representations [80.38097216996164]
Implicit Neural Representations (INRs) encode continuous multi-media data via multi-layer perceptrons.
Existing works manipulate such continuous representations by processing their discretized instances.
We propose an implicit neural signal processing network, dubbed INSP-Net, built from differential operators applied directly to INRs.
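Because an INR is a differentiable function of its coordinates, signal-processing operators can be assembled from its exact derivatives via automatic differentiation. The sketch below computes the spatial gradient of a scalar-valued INR, a generic building block of this kind; INSP-Net's full operator network is not reproduced.

```python
import torch


def inr_gradient(inr, coords):
    """coords: (N, d); returns the INR's gradient d(out)/d(coords), shape (N, d)."""
    coords = coords.detach().requires_grad_(True)
    out = inr(coords).sum()  # assumes a scalar-valued INR output per coordinate
    (grad,) = torch.autograd.grad(out, coords, create_graph=True)
    return grad
```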
arXiv Detail & Related papers (2022-10-17T06:29:07Z)
- Neural Implicit Dictionary via Mixture-of-Expert Training [111.08941206369508]
We present a generic INR framework that achieves both data and training efficiency by learning a Neural Implicit Dictionary (NID).
Our NID assembles a group of coordinate-based implicit networks which are tuned to span the desired function space.
Our experiments show that NID can reconstruct 2D images and 3D scenes two orders of magnitude faster while using up to 98% less input data.
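A hedged sketch of what a neural implicit dictionary could look like: a shared bank of small coordinate networks ("atoms") whose outputs are mixed by learned per-signal coefficients. The atom architecture, the softmax mixing, and the absence of explicit sparsity handling are all illustrative assumptions.

```python
import torch
import torch.nn as nn


class NeuralDictionary(nn.Module):
    def __init__(self, atoms, num_signals):
        super().__init__()
        self.atoms = nn.ModuleList(atoms)  # shared coordinate-based networks
        self.codes = nn.Parameter(torch.randn(num_signals, len(self.atoms)))

    def forward(self, coords, signal_id):
        # Stack atom outputs into (num_atoms, N, c), then mix per signal.
        basis = torch.stack([a(coords) for a in self.atoms])
        weights = torch.softmax(self.codes[signal_id], dim=0)
        return torch.einsum("k,knc->nc", weights, basis)
```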
arXiv Detail & Related papers (2022-07-08T05:07:19Z)
- SAR Despeckling Using Overcomplete Convolutional Networks [53.99620005035804]
Despeckling is an important problem in remote sensing, as speckle degrades SAR images.
Recent studies show that convolutional neural networks (CNNs) outperform classical despeckling methods.
This study employs an overcomplete CNN architecture to focus on learning low-level features by restricting the receptive field.
We show that the proposed network outperforms recent despeckling methods on both synthetic and real SAR images.
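The "overcomplete" trick summarized above can be sketched in a few lines: the encoder upsamples instead of downsampling, so each subsequent convolution covers a smaller region of the original image, restricting the receptive field and biasing the network toward low-level features. Channel counts and depth are illustrative assumptions, not the paper's exact architecture.

```python
import torch.nn as nn

# Encoder that upsamples instead of downsampling: each later 3x3 convolution
# then covers a smaller region of the original image, keeping the receptive
# field small.
overcomplete_encoder = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1),
    nn.Upsample(scale_factor=2),
    nn.ReLU(),
    nn.Conv2d(16, 32, kernel_size=3, padding=1),
    nn.Upsample(scale_factor=2),
    nn.ReLU(),
)
```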
arXiv Detail & Related papers (2022-05-31T15:55:37Z)
- Sign Language Recognition via Skeleton-Aware Multi-Model Ensemble [71.97020373520922]
Sign language is commonly used by deaf or mute people to communicate.
We propose a novel Multi-modal Framework with a Global Ensemble Model (GEM) for isolated Sign Language Recognition (SLR).
Our proposed SAM-SLR-v2 framework is exceedingly effective and achieves state-of-the-art performance by significant margins.
arXiv Detail & Related papers (2021-10-12T16:57:18Z)
- Over-and-Under Complete Convolutional RNN for MRI Reconstruction [57.95363471940937]
Recent deep learning-based methods for MR image reconstruction usually leverage a generic auto-encoder architecture.
We propose an Over-and-Under Complete Convolutional Recurrent Neural Network (OUCR), which consists of an overcomplete and an undercomplete Convolutional Recurrent Neural Network (CRNN).
The proposed method achieves significant improvements over compressed sensing and popular deep learning-based methods with fewer trainable parameters.
arXiv Detail & Related papers (2021-06-16T15:56:34Z)