Latent Neural Cellular Automata for Resource-Efficient Image Restoration
- URL: http://arxiv.org/abs/2403.15525v1
- Date: Fri, 22 Mar 2024 14:15:28 GMT
- Title: Latent Neural Cellular Automata for Resource-Efficient Image Restoration
- Authors: Andrea Menta, Alberto Archetti, Matteo Matteucci
- Abstract summary: We introduce the Latent Neural Cellular Automata (LNCA) model, a novel architecture designed to address the resource limitations of neural cellular automata.
Our approach shifts the computation from the conventional input space to a specially designed latent space, relying on a pre-trained autoencoder.
This modification not only reduces the model's resource consumption but also maintains a flexible framework suitable for various applications.
- Score: 4.470499157873342
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Neural cellular automata represent an evolution of the traditional cellular automata model, enhanced by the integration of a deep learning-based transition function. This shift from a manual to a data-driven approach significantly increases the adaptability of these models, enabling their application in diverse domains, including content generation and artificial life. However, their widespread application has been hampered by significant computational requirements. In this work, we introduce the Latent Neural Cellular Automata (LNCA) model, a novel architecture designed to address the resource limitations of neural cellular automata. Our approach shifts the computation from the conventional input space to a specially designed latent space, relying on a pre-trained autoencoder. We apply our model in the context of image restoration, which aims to reconstruct high-quality images from their degraded versions. This modification not only reduces the model's resource consumption but also maintains a flexible framework suitable for various applications. Our model achieves a significant reduction in computational requirements while maintaining high reconstruction fidelity. This increase in efficiency allows for inputs up to 16 times larger than current state-of-the-art neural cellular automata models, using the same resources.
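The pipeline the abstract describes (encode the degraded image once with a pre-trained autoencoder, iterate a learned NCA transition rule on the smaller latent grid, decode once) can be illustrated with a minimal PyTorch sketch. Everything below is an assumption for illustration, not the authors' exact LNCA architecture: the module names (`LatentNCA`, `restore`), the depthwise perception convolution, the residual update, the channel counts, and the step count are all hypothetical.
```python
import torch
import torch.nn as nn

class LatentNCA(nn.Module):
    """A learned NCA transition rule applied on a latent grid (illustrative)."""

    def __init__(self, channels: int = 64, hidden: int = 128):
        super().__init__()
        # Depthwise 3x3 conv: each latent "cell" perceives its local neighborhood.
        self.perceive = nn.Conv2d(channels, channels * 3, kernel_size=3,
                                  padding=1, groups=channels)
        # Per-cell 1x1 convs: the data-driven transition function.
        self.update = nn.Sequential(
            nn.Conv2d(channels * 3, hidden, kernel_size=1),
            nn.ReLU(),
            nn.Conv2d(hidden, channels, kernel_size=1),
        )

    def forward(self, z: torch.Tensor, steps: int = 8) -> torch.Tensor:
        for _ in range(steps):
            z = z + self.update(self.perceive(z))  # residual cell update
        return z

@torch.no_grad()
def restore(x_degraded: torch.Tensor, encoder: nn.Module,
            nca: LatentNCA, decoder: nn.Module, steps: int = 8) -> torch.Tensor:
    """Inference-time restoration: encode once, iterate in latent space, decode once."""
    z = encoder(x_degraded)   # e.g. (B, 64, H/4, W/4): a much smaller grid than the input
    z = nca(z, steps=steps)   # all iterative computation happens on the small latent grid
    return decoder(z)         # map the restored latent back to image space
```
The design choice this sketch highlights is where the savings come from: with a spatial downscaling factor f in the autoencoder, every NCA step operates on activation maps that are f^2 times smaller, so, for example, f = 4 is consistent with the roughly 16-times-larger inputs the abstract reports for the same resource budget.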
Related papers
- Research on Personalized Compression Algorithm for Pre-trained Models Based on Homomorphic Entropy Increase [2.6513322539118582]
We explore the challenges and evolution of two key technologies in the current field of AI: the Vision Transformer model and the Large Language Model (LLM).
The Vision Transformer captures global information by splitting images into small patches, but its high parameter count and compute overhead limit deployment on mobile devices.
LLM has revolutionized natural language processing, but it also faces huge deployment challenges.
arXiv Detail & Related papers (2024-08-16T11:56:49Z)
- Self-STORM: Deep Unrolled Self-Supervised Learning for Super-Resolution Microscopy [55.2480439325792]
We introduce deep unrolled self-supervised learning, which alleviates the need for supervised training data by training a sequence-specific, model-based autoencoder.
Our proposed method exceeds the performance of its supervised counterparts.
arXiv Detail & Related papers (2024-03-25T17:40:32Z)
- Symplectic Autoencoders for Model Reduction of Hamiltonian Systems [0.0]
It is crucial to preserve the symplectic structure associated with the system in order to ensure long-term numerical stability.
We propose a new neural network architecture in the spirit of autoencoders, which are established tools for dimension reduction.
In order to train the network, a non-standard gradient descent approach is applied.
arXiv Detail & Related papers (2023-12-15T18:20:25Z)
- Composable Function-preserving Expansions for Transformer Architectures [2.579908688646812]
Training state-of-the-art neural networks requires a high cost in terms of compute and time.
We propose six composable transformations to incrementally increase the size of transformer-based neural networks.
arXiv Detail & Related papers (2023-08-11T12:27:22Z)
- Locally adaptive cellular automata for goal-oriented self-organization [14.059479351946386]
We propose a new model class of adaptive cellular automata that allows for the generation of scalable and expressive models.
We show how to implement adaptation by coupling the update rule of the cellular automaton with itself and the system state in a localized way.
arXiv Detail & Related papers (2023-06-12T12:32:23Z)
- An Adversarial Active Sampling-based Data Augmentation Framework for Manufacturable Chip Design [55.62660894625669]
Lithography modeling is a crucial problem in chip design to ensure a chip design mask is manufacturable.
Recent developments in machine learning have provided alternative solutions in replacing the time-consuming lithography simulations with deep neural networks.
We propose a litho-aware data augmentation framework to resolve the dilemma of limited data and improve the machine learning model performance.
arXiv Detail & Related papers (2022-10-27T20:53:39Z)
- Restormer: Efficient Transformer for High-Resolution Image Restoration [118.9617735769827]
Convolutional neural networks (CNNs) perform well at learning generalizable image priors from large-scale data.
Transformers have shown significant performance gains on natural language and high-level vision tasks.
Our model, named Restoration Transformer (Restormer), achieves state-of-the-art results on several image restoration tasks.
arXiv Detail & Related papers (2021-11-18T18:59:10Z)
- Generative Adversarial Neural Cellular Automata [13.850929935840659]
We introduce the concept of using different initial environments as input to a single Neural Cellular Automaton in order to produce several distinct outputs.
We also introduce GANCA, a novel algorithm that combines Neural Cellular Automata with Generative Adversarial Networks.
arXiv Detail & Related papers (2021-07-19T06:23:11Z)
- Towards self-organized control: Using neural cellular automata to robustly control a cart-pole agent [62.997667081978825]
We use neural cellular automata to control a cart-pole agent.
We train the model using deep Q-learning, where the states of the output cells are used as the Q-value estimates to be optimized.
arXiv Detail & Related papers (2021-06-29T10:49:42Z)
- Anytime Sampling for Autoregressive Models via Ordered Autoencoding [88.01906682843618]
Autoregressive models are widely used for tasks such as image and audio generation.
The sampling process of these models does not allow interruptions and cannot adapt to real-time computational resources.
We propose a new family of autoregressive models that enables anytime sampling.
arXiv Detail & Related papers (2021-02-23T05:13:16Z)
- Neural Cellular Automata Manifold [84.08170531451006]
We show that the neural network architecture of the Neural Cellular Automata can be encapsulated in a larger NN.
This allows us to propose a new model that encodes a manifold of NCA, each of them capable of generating a distinct image.
In biological terms, our approach would play the role of the transcription factors, modulating the mapping of genes into specific proteins that drive cellular differentiation.
arXiv Detail & Related papers (2020-06-22T11:41:57Z)