Hitless memory-reconfigurable photonic reservoir computing architecture
- URL: http://arxiv.org/abs/2207.06245v2
- Date: Wed, 17 May 2023 05:39:17 GMT
- Title: Hitless memory-reconfigurable photonic reservoir computing architecture
- Authors: Mohab Abdalla, Clément Zrounba, Raphael Cardoso, Paul Jimenez,
Guanghui Ren, Andreas Boes, Arnan Mitchell, Alberto Bosio, Ian O'Connor,
Fabio Pavanello
- Abstract summary: Reservoir computing is an analog bio-inspired computation model for efficiently processing time-dependent signals.
We propose a novel TDRC architecture based on an asymmetric Mach-Zehnder interferometer integrated in a resonant cavity.
We demonstrate this approach on the temporal bitwise XOR task and conclude that this way of memory capacity reconfiguration allows optimal performance to be achieved.
- Score: 1.4479776639062198
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Reservoir computing is an analog bio-inspired computation model for
efficiently processing time-dependent signals, the photonic implementations of
which promise a combination of massively parallel information processing, low
power consumption, and high-speed operation. However, most implementations,
especially for the case of time-delay reservoir computing (TDRC), require
signal attenuation in the reservoir to achieve the desired system dynamics for
a specific task, often resulting in large amounts of power being coupled
outside of the system. We propose a novel TDRC architecture based on an
asymmetric Mach-Zehnder interferometer (MZI) integrated in a resonant cavity
which allows the memory capacity of the system to be tuned without the need for
an optical attenuator block. Furthermore, this can be leveraged to find the
optimal value for the specific components of the total memory capacity metric.
We demonstrate this approach on the temporal bitwise XOR task and conclude that
this way of memory capacity reconfiguration allows optimal performance to be
achieved for memory-specific tasks.
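For orientation, the following Python sketch illustrates the generic time-delay reservoir computing workflow the abstract refers to: a masked input drives a delay line of virtual nodes, a single feedback-strength parameter controls the fading memory, and a linear readout is trained on the temporal bitwise XOR task y(n) = u(n) XOR u(n-1). This is not the paper's photonic model; the scalar `beta` merely stands in for the memory tuning provided by the asymmetric MZI in the proposed cavity, and the node count, sin^2 nonlinearity, input mask, and parameter values are illustrative assumptions.

```python
# Minimal numerical sketch of time-delay reservoir computing (TDRC) on the
# temporal bitwise XOR task y(n) = u(n) XOR u(n-1). NOT the paper's photonic
# model: `beta` only stands in for the MZI-based memory tuning, and all model
# choices below (nodes, nonlinearity, mask, parameters) are assumptions.
import numpy as np

rng = np.random.default_rng(0)

N_NODES = 50                 # number of virtual nodes along the delay line (assumed)
N_TRAIN, N_TEST = 2000, 500  # train/test split of the binary input stream
ETA, PHI0 = 0.5, 0.2         # input scaling and static bias phase (assumed)

def run_reservoir(u, beta):
    """Drive a masked delay-line reservoir and return its virtual-node states."""
    mask = rng.uniform(-1.0, 1.0, N_NODES)       # fixed random input mask
    x = np.zeros(N_NODES)
    states = np.empty((len(u), N_NODES))
    for n, un in enumerate(u):
        # each node mixes its delayed state with the masked input sample
        x = np.sin(beta * x + ETA * mask * un + PHI0) ** 2
        states[n] = x
    return states

def xor_bit_error_rate(beta):
    """Train a ridge-regression readout on u(n) XOR u(n-1) and return the test error."""
    u = rng.integers(0, 2, N_TRAIN + N_TEST).astype(float)
    y = u.astype(int) ^ np.roll(u, 1).astype(int)        # temporal bitwise XOR target
    X = run_reservoir(u, beta)
    Xtr, ytr = X[1:N_TRAIN], y[1:N_TRAIN]                # skip the wrap-around sample
    w = np.linalg.solve(Xtr.T @ Xtr + 1e-6 * np.eye(N_NODES), Xtr.T @ ytr)
    pred = (X[N_TRAIN:] @ w > 0.5).astype(int)
    return np.mean(pred != y[N_TRAIN:])

# sweeping beta mimics reconfiguring the reservoir's memory, as the MZI bias does
for beta in (0.2, 0.6, 1.0, 1.4):
    print(f"beta={beta:.1f}  XOR bit-error rate = {xor_bit_error_rate(beta):.3f}")
```

In the actual device the equivalent knob is the bias of the MZI inside the cavity; the only point of the sketch is that a single scalar feedback parameter shifts the balance between nonlinearity and memory, which is why a memory-specific task such as temporal XOR is sensitive to how that parameter is set.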
Related papers
- Multi-task Photonic Reservoir Computing: Wavelength Division Multiplexing for Parallel Computing with a Silicon Microring Resonator [0.0]
We numerically show the use of time and wavelength division multiplexing (WDM) to solve four independent tasks at the same time in a single photonic chip.
The footprint of the system is reduced by using time-division multiplexing of the nodes that act as the neurons of the studied neural network scheme.
arXiv Detail & Related papers (2024-07-30T20:54:07Z)
- CHIME: Energy-Efficient STT-RAM-based Concurrent Hierarchical In-Memory Processing [1.5566524830295307]
This paper introduces a novel PiC/PiM architecture, Concurrent Hierarchical In-Memory Processing (CHIME).
CHIME strategically incorporates heterogeneous compute units across multiple levels of the memory hierarchy.
Experiments reveal that, compared to state-of-the-art bit-line computing approaches, CHIME achieves a significant speedup of 57.95% and energy savings of 78.23%.
arXiv Detail & Related papers (2024-07-29T01:17:54Z)
- Sparser is Faster and Less is More: Efficient Sparse Attention for Long-Range Transformers [58.5711048151424]
We introduce SPARSEK Attention, a novel sparse attention mechanism designed to overcome computational and memory obstacles.
Our approach integrates a scoring network and a differentiable top-k mask operator, SPARSEK, to select a constant number of KV pairs for each query.
Experimental results reveal that SPARSEK Attention outperforms previous sparse attention methods.
arXiv Detail & Related papers (2024-06-24T15:55:59Z)
- Efficient and accurate neural field reconstruction using resistive memory [52.68088466453264]
Traditional signal reconstruction methods on digital computers face both software and hardware challenges.
We propose a systematic approach with software-hardware co-optimizations for signal reconstruction from sparse inputs.
This work advances AI-driven signal restoration technology and paves the way for future efficient and robust medical AI and 3D vision applications.
arXiv Detail & Related papers (2024-04-15T09:33:09Z)
- Resistive Memory-based Neural Differential Equation Solver for Score-based Diffusion Model [55.116403765330084]
Current AIGC methods, such as score-based diffusion, are still deficient in terms of speed and efficiency.
We propose a time-continuous and analog in-memory neural differential equation solver for score-based diffusion.
We experimentally validate our solution with 180 nm resistive memory in-memory computing macros.
arXiv Detail & Related papers (2024-04-08T16:34:35Z)
- Multi-Task Wavelength-Multiplexed Reservoir Computing Using a Silicon Microring Resonator [0.0]
We numerically demonstrate the simultaneous use of time and frequency (equivalently wavelength) multiplexing to solve three independent tasks at the same time on the same photonic circuit.
This work provides insight into the potential of WDM-based schemes for improving the computing capabilities of reservoir computing systems.
arXiv Detail & Related papers (2023-10-25T12:24:56Z)
- Energy-efficient Task Adaptation for NLP Edge Inference Leveraging Heterogeneous Memory Architectures [68.91874045918112]
adapter-ALBERT is an efficient model optimization for maximal data reuse across different tasks.
We demonstrate the advantage of mapping the model to a heterogeneous on-chip memory architecture by performing simulations on a validated NLP edge accelerator.
arXiv Detail & Related papers (2023-03-25T14:40:59Z)
- Master memory function for delay-based reservoir computers with single-variable dynamics [0.0]
We show that many delay-based reservoir computers can be characterized by a universal master memory function (MMF).
Once computed for two independent parameters, this function provides the linear memory capacity for any delay-based single-variable reservoir with small inputs (a minimal memory-capacity sketch follows this list).
arXiv Detail & Related papers (2021-08-28T13:17:24Z)
- Efficient Micro-Structured Weight Unification and Pruning for Neural Network Compression [56.83861738731913]
Deep Neural Network (DNN) models are essential for practical applications, especially for resource-limited devices.
Previous unstructured or structured weight pruning methods can hardly achieve real inference acceleration.
We propose a generalized weight unification framework at a hardware compatible micro-structured level to achieve high amount of compression and acceleration.
arXiv Detail & Related papers (2021-06-15T17:22:59Z)
- Energy-Efficient Model Compression and Splitting for Collaborative Inference Over Time-Varying Channels [52.60092598312894]
We propose a technique to reduce the total energy bill at the edge device by utilizing model compression and a time-varying model split between the edge and remote nodes.
Our proposed solution results in minimal energy consumption and CO2 emission compared to the considered baselines.
arXiv Detail & Related papers (2021-06-02T07:36:27Z)
- The Computational Capacity of LRC, Memristive and Hybrid Reservoirs [1.657441317977376]
Reservoir computing is a machine learning paradigm that uses a high-dimensional dynamical system, or reservoir, to approximate and predict time series data.
We analyze the feasibility and optimal design of electronic reservoirs that include both linear elements (resistors, inductors, and capacitors) and nonlinear memory elements called memristors.
Our electronic reservoirs can match or exceed the performance of conventional "echo state network" reservoirs in a form that may be directly implemented in hardware.
arXiv Detail & Related papers (2020-08-31T21:24:45Z)
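The abstract above optimizes specific components of the total memory capacity metric, and the master-memory-function entry in the list above predicts the linear memory capacity of delay-based reservoirs. Purely as an illustrative reference, the sketch below estimates linear memory capacity for the same toy delay-line reservoir used earlier, by training one ridge-regression readout per input delay and summing the squared correlations between each delayed input and its reconstruction. The reservoir model, parameters, and input statistics are assumptions, not taken from either paper.

```python
# Hedged sketch: estimate linear memory capacity MC = sum_k r^2(u(n-k), u_hat(n-k)),
# with one ridge-regression readout trained per delay k. The toy delay-line
# reservoir is assumed for illustration only, not the photonic cavity above.
import numpy as np

rng = np.random.default_rng(1)
N_NODES, ETA, PHI0 = 50, 0.1, 0.2           # assumed toy-model parameters
mask = rng.uniform(-1.0, 1.0, N_NODES)      # fixed random input mask

def run_reservoir(u, beta):
    """Masked delay-line reservoir with feedback strength beta."""
    x = np.zeros(N_NODES)
    states = np.empty((len(u), N_NODES))
    for n, un in enumerate(u):
        x = np.sin(beta * x + ETA * mask * un + PHI0) ** 2
        states[n] = x
    return states

def linear_memory_capacity(states, u, max_delay=30, washout=100, reg=1e-6):
    """Sum over delays k of the squared correlation between u(n-k) and its readout."""
    X = states[washout:]
    mc = 0.0
    for k in range(1, max_delay + 1):
        target = u[washout - k:len(u) - k]            # u(n-k), aligned with the rows of X
        w = np.linalg.solve(X.T @ X + reg * np.eye(X.shape[1]), X.T @ target)
        r = np.corrcoef(X @ w, target)[0, 1]
        mc += r ** 2
    return mc

u = rng.uniform(-0.5, 0.5, 3000)                      # small zero-mean random input
for beta in (0.2, 0.6, 1.0):
    mc = linear_memory_capacity(run_reservoir(u, beta), u)
    print(f"beta={beta:.1f}  linear memory capacity ~ {mc:.2f}")
```

Sweeping the feedback parameter shows how a single memory-reconfiguration knob trades linear memory capacity against other dynamics, which is the kind of trade-off the proposed MZI-based architecture is designed to explore without an optical attenuator.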