CrossLight: A Cross-Layer Optimized Silicon Photonic Neural Network Accelerator
- URL: http://arxiv.org/abs/2102.06960v1
- Date: Sat, 13 Feb 2021 17:08:06 GMT
- Title: CrossLight: A Cross-Layer Optimized Silicon Photonic Neural Network Accelerator
- Authors: Febin Sunny, Asif Mirza, Mahdi Nikdast, and Sudeep Pasricha
- Abstract summary: Domain-specific neural network accelerators have seen growing interest in recent years.
We propose a novel cross-layer optimized neural network accelerator called CrossLight.
On average, CrossLight offers 9.5x lower energy-per-bit and 15.9x higher performance-per-watt at 16-bit resolution than state-of-the-art photonic deep learning accelerators.
- Score: 3.49112071745966
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Domain-specific neural network accelerators have seen growing interest in
recent years due to their improved energy efficiency and inference performance
compared to CPUs and GPUs. In this paper, we propose a novel cross-layer
optimized neural network accelerator called CrossLight that leverages silicon
photonics. CrossLight includes device-level engineering for resilience to
process variations and thermal crosstalk, circuit-level tuning enhancements for
inference latency reduction, and architecture-level optimization to enable
higher resolution, better energy-efficiency, and improved throughput. On
average, CrossLight offers 9.5x lower energy-per-bit and 15.9x higher
performance-per-watt at 16-bit resolution than state-of-the-art photonic deep
learning accelerators.
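To make the headline numbers concrete, below is a minimal Python sketch of how the two metrics are typically defined. The baseline power and throughput figures are hypothetical placeholders; only the 9.5x and 15.9x improvement factors come from the abstract.

```python
# Minimal sketch: definitions of the two headline metrics. Baseline
# numbers are hypothetical; only the 9.5x / 15.9x factors are from
# the abstract.

def energy_per_bit(power_watts, throughput_bits_per_s):
    """Energy per bit (J/bit) = power / bit throughput."""
    return power_watts / throughput_bits_per_s

def performance_per_watt(ops_per_s, power_watts):
    """Performance per watt (OPS/W) = operation throughput / power."""
    return ops_per_s / power_watts

# Hypothetical baseline photonic accelerator at 16-bit resolution.
baseline_power = 10.0        # W (assumed)
baseline_bits = 1e12         # bits/s (assumed)
baseline_ops = 5e12          # OPS (assumed)

epb = energy_per_bit(baseline_power, baseline_bits)
ppw = performance_per_watt(baseline_ops, baseline_power)

# Applying the abstract's average improvement factors:
print(f"baseline:   {epb:.2e} J/bit, {ppw:.2e} OPS/W")
print(f"CrossLight: {epb / 9.5:.2e} J/bit, {ppw * 15.9:.2e} OPS/W")
```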
Related papers
- Optical training of large-scale Transformers and deep neural networks with direct feedback alignment [48.90869997343841]
We experimentally implement a versatile and scalable training algorithm, called direct feedback alignment, on a hybrid electronic-photonic platform.
An optical processing unit performs large-scale random matrix multiplications, which is the central operation of this algorithm, at speeds up to 1500 TeraOps.
We study the compute scaling of our hybrid optical approach, and demonstrate a potential advantage for ultra-deep and wide neural networks.
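As a rough illustration of the algorithm named above, the following NumPy sketch trains a tiny two-layer network with direct feedback alignment: the output error is projected back through a fixed random matrix instead of the transposed weights. In the paper that random projection is performed by the optical processing unit; here it is an ordinary matrix, and all sizes, seeds, and learning rates are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny 2-layer MLP trained with direct feedback alignment (DFA).
# On the hybrid platform above, the fixed random projection B would be
# computed optically; here it is just a NumPy matrix.
n_in, n_hid, n_out = 8, 16, 4
W1 = rng.normal(0, 0.1, (n_hid, n_in))
W2 = rng.normal(0, 0.1, (n_out, n_hid))
B  = rng.normal(0, 0.1, (n_hid, n_out))  # fixed random feedback matrix

def tanh_deriv(a):
    return 1.0 - np.tanh(a) ** 2

x = rng.normal(size=n_in)
y = np.zeros(n_out); y[1] = 1.0          # dummy one-hot target

for step in range(200):
    a1 = W1 @ x
    h = np.tanh(a1)                      # forward pass
    e = W2 @ h - y                       # output error
    dh = (B @ e) * tanh_deriv(a1)        # DFA: project e through B, not W2.T
    W2 -= 0.1 * np.outer(e, h)
    W1 -= 0.1 * np.outer(dh, x)

print("final squared error:", float(e @ e))
```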
arXiv Detail & Related papers (2024-09-01T12:48:47Z)
- SCATTER: Algorithm-Circuit Co-Sparse Photonic Accelerator with Thermal-Tolerant, Power-Efficient In-situ Light Redistribution [7.378742476019604]
Photonic computing has emerged as a promising solution for accelerating computation-intensive artificial intelligence (AI) workloads.
However, limited reconfigurability, high electrical-optical conversion cost, and thermal sensitivity hinder the deployment of current optical analog computing engines for power-restricted, performance-sensitive AI workloads at scale.
We propose SCATTER, a novel algorithm-circuit co-sparse photonic accelerator featuring dynamically reconfigurable signal path via thermal-tolerant, power-efficient in-situ light redistribution and power gating.
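A loose sketch of the co-sparsity idea follows (this is not SCATTER's actual circuit model): when the algorithm prunes whole rows and columns of a weight matrix, the circuit can power-gate the photonic elements that implement them instead of actively tuning them. The masks and sizes below are arbitrary placeholders.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustration only: structured "co-sparsity" prunes entire rows and
# columns, so the hardware elements realizing them can be power-gated.
W = rng.normal(size=(16, 16))
row_keep = rng.random(16) > 0.5          # hypothetical row sparsity mask
col_keep = rng.random(16) > 0.5          # hypothetical column sparsity mask
W_sparse = W * row_keep[:, None] * col_keep[None, :]

# Elements whose row OR column is pruned draw no tuning power.
active = np.outer(row_keep, col_keep)
print(f"active elements: {active.sum()} / {active.size}")
print(f"tuning power gated off: {100 * (1 - active.mean()):.1f}%")
```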
arXiv Detail & Related papers (2024-07-07T22:57:44Z)
- Ultra-High-Definition Low-Light Image Enhancement: A Benchmark and Transformer-Based Method [51.30748775681917]
We consider the task of low-light image enhancement (LLIE) and introduce a large-scale database consisting of images at 4K and 8K resolution.
We conduct systematic benchmarking studies and provide a comparison of current LLIE algorithms.
As a second contribution, we introduce LLFormer, a transformer-based low-light enhancement method.
arXiv Detail & Related papers (2022-12-22T09:05:07Z)
- Adaptable Butterfly Accelerator for Attention-based NNs via Hardware and Algorithm Co-design [66.39546326221176]
Attention-based neural networks have become pervasive in many AI tasks.
The use of the attention mechanism and feed-forward network (FFN) demands excessive computational and memory resources.
This paper proposes a hardware-friendly variant that adopts a unified butterfly sparsity pattern to approximate both the attention mechanism and the FFNs.
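For intuition, the sketch below implements a generic butterfly factorization in NumPy: an N x N dense matrix is replaced by log2(N) sparse stages, each with exactly two nonzeros per row, cutting parameters from O(N^2) to O(N log N). The block structure and sizes are illustrative, not the paper's exact design.

```python
import numpy as np

rng = np.random.default_rng(2)
N = 64  # feature size; must be a power of two

def butterfly_apply(x, factors):
    """Apply log2(N) butterfly stages; each stage mixes entry pairs
    (i, i + stride) with its own 2x2 block, so every sparse factor
    has exactly two nonzeros per row."""
    for stride, blocks in factors:
        g = x.reshape(-1, 2, stride)                   # (groups, 2, stride)
        x = np.einsum('gsij,gjs->gis', blocks, g).reshape(N)
    return x

# Random 2x2 mixing blocks per stage (these would be learned).
factors, stride = [], N // 2
while stride >= 1:
    groups = N // (2 * stride)
    factors.append((stride, rng.normal(size=(groups, stride, 2, 2))))
    stride //= 2

y = butterfly_apply(rng.normal(size=N), factors)
n_params = sum(b.size for _, b in factors)
print(f"butterfly params: {n_params} vs dense NxN: {N * N}")  # O(N log N) vs O(N^2)
```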
arXiv Detail & Related papers (2022-09-20T09:28:26Z)
- Silicon photonic subspace neural chip for hardware-efficient deep learning [11.374005508708995]
The optical neural network (ONN) is a promising candidate for next-generation neurocomputing.
We devise a hardware-efficient photonic subspace neural network (PSNN) architecture.
We experimentally demonstrate our PSNN on a butterfly-style programmable silicon photonic integrated circuit.
arXiv Detail & Related papers (2021-11-11T06:34:05Z)
- SONIC: A Sparse Neural Network Inference Accelerator with Silicon Photonics for Energy-Efficient Deep Learning [4.286327408435937]
We propose a novel silicon photonics-based sparse neural network inference accelerator called SONIC.
SONIC can achieve up to 5.8x better performance-per-watt and 8.4x lower energy-per-bit than state-of-the-art sparse electronic neural network accelerators.
arXiv Detail & Related papers (2021-09-09T17:57:09Z)
- Universal and Flexible Optical Aberration Correction Using Deep-Prior Based Deconvolution [51.274657266928315]
We propose a PSF-aware plug-and-play deep network that takes the aberrant image and PSF map as input and produces the latent high-quality version by incorporating lens-specific deep priors.
Specifically, we pre-train a base model from a set of diverse lenses and then adapt it to a given lens by quickly refining the parameters.
arXiv Detail & Related papers (2021-04-07T12:00:38Z)
- Asymmetric CNN for image super-resolution [102.96131810686231]
Deep convolutional neural networks (CNNs) have been widely applied for low-level vision over the past five years.
We propose an asymmetric CNN (ACNet) comprising an asymmetric block (AB), a memory enhancement block (MEB), and a high-frequency feature enhancement block (HFFEB) for image super-resolution.
Our ACNet can effectively address single image super-resolution (SISR), blind SISR, and blind SISR with unknown noise, as sketched below.
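The following SciPy sketch shows the asymmetric-convolution idea that such blocks build on, pairing a square 3x3 kernel with cheap 1x3 and 3x1 kernels and summing the branches; the paper's actual AB/MEB/HFFEB blocks are more elaborate, and all kernels here are random placeholders.

```python
import numpy as np
from scipy.signal import convolve2d

rng = np.random.default_rng(5)

# Illustration: a square 3x3 kernel complemented by horizontal (1x3)
# and vertical (3x1) asymmetric kernels, with branch outputs summed.
img = rng.normal(size=(32, 32))
k33 = rng.normal(size=(3, 3))
k13 = rng.normal(size=(1, 3))   # horizontal asymmetric kernel
k31 = rng.normal(size=(3, 1))   # vertical asymmetric kernel

out = (convolve2d(img, k33, mode='same')
       + convolve2d(img, k13, mode='same')
       + convolve2d(img, k31, mode='same'))
print(out.shape)  # (32, 32): same spatial size, richer feature response
```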
arXiv Detail & Related papers (2021-03-25T07:10:46Z)
- Efficient On-Chip Learning for Optical Neural Networks Through Power-Aware Sparse Zeroth-Order Optimization [12.052076188811052]
Optical neural networks (ONNs) have demonstrated record-breaking potential in neuromorphic computing.
We propose a novel on-chip learning framework to unlock the full potential of ONNs for power-efficient in situ training.
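In the spirit of the title, here is a minimal sketch of sparse zeroth-order optimization: only loss evaluations are used (no analytical gradients, matching a setting where the physical chip is a black box), and only a small random subset of weights is perturbed per step. The paper's power-aware sampling strategy is more sophisticated, and the quadratic loss below is a stand-in.

```python
import numpy as np

rng = np.random.default_rng(3)

# Sparse zeroth-order (gradient-free) training sketch: estimate a
# directional derivative from two loss measurements along a random
# probe that touches only k coordinates per step.
def loss(w):                      # stand-in for an on-chip measured loss
    return float(np.sum((w - 1.0) ** 2))

w = np.zeros(32)
lr, eps, k = 0.05, 1e-3, 8        # k: weights perturbed per step

for step in range(500):
    idx = rng.choice(w.size, size=k, replace=False)   # sparse coordinate set
    delta = np.zeros_like(w)
    delta[idx] = rng.choice([-1.0, 1.0], size=k)      # random +/-1 probe
    # Two-sided finite-difference estimate along the probe direction.
    g = (loss(w + eps * delta) - loss(w - eps * delta)) / (2 * eps)
    w -= lr * g * delta

print("final loss:", loss(w))
```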
arXiv Detail & Related papers (2020-12-21T07:00:39Z)
- Rapid characterisation of linear-optical networks via PhaseLift [51.03305009278831]
Integrated photonics offers excellent phase stability and can rely on the large-scale manufacturability of the semiconductor industry.
New devices based on such optical circuits hold the promise of faster and more energy-efficient computation in machine learning applications.
We present a novel technique to reconstruct the transfer matrix of linear optical networks.
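The key lifting identity behind PhaseLift can be checked in a few lines of NumPy: an intensity measurement, which is quadratic in an unknown row t of the transfer matrix, becomes linear in the lifted rank-one matrix T = t t^H. The full method then recovers T via a semidefinite program, which is omitted in this sketch; all vectors are random placeholders.

```python
import numpy as np

rng = np.random.default_rng(4)

# PhaseLift's lifting trick: |a^H t|^2 = Tr(A T) with A = a a^H and
# T = t t^H, turning a quadratic measurement into a linear one in T.
n = 6
t = rng.normal(size=n) + 1j * rng.normal(size=n)   # unknown network row
a = rng.normal(size=n) + 1j * rng.normal(size=n)   # known probe state

intensity = abs(np.vdot(a, t)) ** 2                # measured: |a^H t|^2
T = np.outer(t, t.conj())                          # lifted rank-one unknown
A = np.outer(a, a.conj())                          # lifted measurement matrix
linear = np.trace(A @ T).real                      # Tr(A T): linear in T

print(f"|a^H t|^2 = {intensity:.6f},  Tr(A T) = {linear:.6f}")
```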
arXiv Detail & Related papers (2020-10-01T16:04:22Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.