I-INR: Iterative Implicit Neural Representations
- URL: http://arxiv.org/abs/2504.17364v1
- Date: Thu, 24 Apr 2025 08:27:22 GMT
- Title: I-INR: Iterative Implicit Neural Representations
- Authors: Ali Haider, Muhammad Salman Ali, Maryam Qamar, Tahir Khalil, Soo Ye Kim, Jihyong Oh, Enzo Tartaglione, Sung-Ho Bae
- Abstract summary: Implicit Neural Representations (INRs) have revolutionized signal processing and computer vision by modeling signals as continuous, differentiable functions parameterized by neural networks. We propose Iterative Implicit Neural Representations (I-INRs), a novel plug-and-play framework that enhances signal reconstruction through an iterative refinement process.
- Score: 21.060226382403506
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Implicit Neural Representations (INRs) have revolutionized signal processing and computer vision by modeling signals as continuous, differentiable functions parameterized by neural networks. However, their inherent formulation as a regression problem makes them prone to regression to the mean, limiting their ability to capture fine details, retain high-frequency information, and handle noise effectively. To address these challenges, we propose Iterative Implicit Neural Representations (I-INRs), a novel plug-and-play framework that enhances signal reconstruction through an iterative refinement process. I-INRs effectively recover high-frequency details, improve robustness to noise, and achieve superior reconstruction quality. Our framework seamlessly integrates with existing INR architectures, delivering substantial performance gains across various tasks. Extensive experiments show that I-INRs outperform baseline methods, including WIRE, SIREN, and Gauss, in diverse computer vision applications such as image restoration, image denoising, and object occupancy prediction.
Related papers
- SR-NeRV: Improving Embedding Efficiency of Neural Video Representation via Super-Resolution [0.0]
Implicit Neural Representations (INRs) have garnered significant attention for their ability to model complex signals across a variety of domains.
We propose an INR-based video representation method that integrates a general-purpose super-resolution (SR) network.
arXiv Detail & Related papers (2025-04-30T03:31:40Z) - SING: Semantic Image Communications using Null-Space and INN-Guided Diffusion Models [52.40011613324083]
Joint source-channel coding systems (DeepJSCC) have recently demonstrated remarkable performance in wireless image transmission. Existing methods focus on minimizing distortion between the transmitted image and the reconstructed version at the receiver, often overlooking perceptual quality. We propose SING, a novel framework that formulates the recovery of high-quality images from corrupted reconstructions as an inverse problem.
arXiv Detail & Related papers (2025-03-16T12:32:11Z) - Dynamic-Aware Spatio-temporal Representation Learning for Dynamic MRI Reconstruction [7.704793488616996]
We propose Dynamic-Aware INR (DA-INR), an INR-based model for dynamic MRI reconstruction.
It captures the spatial and temporal continuity of dynamic MRI data in the image domain and explicitly incorporates the temporal redundancy of the data into the model structure.
As a result, DA-INR outperforms other models in reconstruction quality even at extreme undersampling ratios.
arXiv Detail & Related papers (2025-01-15T12:11:33Z) - SL$^{2}$A-INR: Single-Layer Learnable Activation for Implicit Neural Representation [6.572456394600755]
Implicit Neural Representation (INR), leveraging a neural network to transform coordinate input into corresponding attributes, has driven significant advances in vision-related domains. We show that these challenges can be alleviated by introducing a novel approach in INR architecture. Specifically, we propose SL$^{2}$A-INR, a hybrid network that combines a single-layer learnable activation function with an MLP that uses traditional ReLU activations.
arXiv Detail & Related papers (2024-09-17T02:02:15Z) - NeRF-VPT: Learning Novel View Representations with Neural Radiance Fields via View Prompt Tuning [63.39461847093663]
We propose NeRF-VPT, an innovative method for novel view synthesis to address these challenges.
Our proposed NeRF-VPT employs a cascading view prompt tuning paradigm, wherein RGB information gained from preceding rendering outcomes serves as instructive visual prompts for subsequent rendering stages.
NeRF-VPT only requires sampling RGB data from previous stage renderings as priors at each training stage, without relying on extra guidance or complex techniques.
arXiv Detail & Related papers (2024-03-02T22:08:10Z) - INCODE: Implicit Neural Conditioning with Prior Knowledge Embeddings [4.639495398851869]
Implicit Neural Representations (INRs) have revolutionized signal representation by leveraging neural networks to provide continuous and smooth representations of complex data.
We introduce INCODE, a novel approach that enhances the control of the sinusoidal-based activation function in INRs using deep prior knowledge.
Our approach not only excels in representation, but also extends its prowess to tackle complex tasks such as audio, image, and 3D shape reconstructions.
arXiv Detail & Related papers (2023-10-28T23:16:49Z) - Regularization by Neural Style Transfer for MRI Field-Transfer Reconstruction with Limited Data [2.308563547164654]
Regularization by Neural Style Transfer (RNST) is a novel framework that integrates a neural style transfer engine with a denoiser to enable magnetic field-transfer reconstruction. Our experimental results demonstrate RNST's ability to reconstruct high-quality images across diverse anatomical planes.
arXiv Detail & Related papers (2023-08-21T18:26:35Z) - RBSR: Efficient and Flexible Recurrent Network for Burst Super-Resolution [57.98314517861539]
Burst super-resolution (BurstSR) aims at reconstructing a high-resolution (HR) image from a sequence of low-resolution (LR) and noisy images.
In this paper, we suggest fusing cues frame-by-frame with an efficient and flexible recurrent network.
arXiv Detail & Related papers (2023-06-30T12:14:13Z) - Modality-Agnostic Variational Compression of Implicit Neural Representations [96.35492043867104]
We introduce a modality-agnostic neural compression algorithm based on a functional view of data and parameterised as an Implicit Neural Representation (INR).
Bridging the gap between latent coding and sparsity, we obtain compact latent representations non-linearly mapped to a soft gating mechanism.
After obtaining a dataset of such latent representations, we directly optimise the rate/distortion trade-off in a modality-agnostic space using neural compression.
arXiv Detail & Related papers (2023-01-23T15:22:42Z) - WIRE: Wavelet Implicit Neural Representations [42.147899723673596]
Implicit neural representations (INRs) have recently advanced numerous vision-related areas.
Current INRs designed to have high accuracy also suffer from poor robustness.
We develop a new, highly accurate and robust INR that does not exhibit this tradeoff.
arXiv Detail & Related papers (2023-01-05T20:24:56Z) - Spatiotemporal implicit neural representation for unsupervised dynamic MRI reconstruction [11.661657147506519]
Implicit Neural Representation (INR) has emerged as a powerful DL-based tool for solving the inverse problem.
In this work, we proposed an INR-based method to improve dynamic MRI reconstruction from highly undersampled k-space data.
The proposed INR represents the dynamic MRI images as an implicit function and encodes them into neural networks.
arXiv Detail & Related papers (2022-12-31T05:43:21Z) - Over-and-Under Complete Convolutional RNN for MRI Reconstruction [57.95363471940937]
Recent deep learning-based methods for MR image reconstruction usually leverage a generic auto-encoder architecture.
We propose an Over-and-Under Complete Convolutional Recurrent Neural Network (OUCR), which consists of an overcomplete and an undercomplete Convolutional Recurrent Neural Network (CRNN).
The proposed method achieves significant improvements over compressed sensing and popular deep learning-based methods with fewer trainable parameters.
arXiv Detail & Related papers (2021-06-16T15:56:34Z) - Lightweight image super-resolution with enhanced CNN [82.36883027158308]
Deep convolutional neural networks (CNNs) with strong expressive ability have achieved impressive performance on single image super-resolution (SISR).
We propose a lightweight enhanced SR CNN (LESRCNN) with three successive sub-blocks: an information extraction and enhancement block (IEEB), a reconstruction block (RB), and an information refinement block (IRB).
IEEB extracts hierarchical low-resolution (LR) features and aggregates them step-by-step to preserve shallow-layer information in deeper layers for SISR.
RB converts low-frequency features into high-frequency features by fusing global
arXiv Detail & Related papers (2020-07-08T18:03:40Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences arising from its use.