PhotoGAN: Generative Adversarial Neural Network Acceleration with Silicon Photonics
- URL: http://arxiv.org/abs/2501.13828v1
- Date: Thu, 23 Jan 2025 16:53:31 GMT
- Title: PhotoGAN: Generative Adversarial Neural Network Acceleration with Silicon Photonics
- Authors: Tharini Suresh, Salma Afifi, Sudeep Pasricha
- Abstract summary: PhotoGAN is the first silicon-photonic accelerator designed to handle the specialized operations of GAN models.
PhotoGAN achieves at least 4.4x higher GOPS and 2.18x lower energy-per-bit (EPB) compared to state-of-the-art accelerators.
- Score: 2.9699290794642366
- Abstract: Generative Adversarial Networks (GANs) are at the forefront of AI innovation, driving advancements in areas such as image synthesis, medical imaging, and data augmentation. However, the unique computational operations within GANs, such as transposed convolutions and instance normalization, introduce significant inefficiencies when executed on traditional electronic accelerators, resulting in high energy consumption and suboptimal performance. To address these challenges, we introduce PhotoGAN, the first silicon-photonic accelerator designed to handle the specialized operations of GAN models. By leveraging the inherent high throughput and energy efficiency of silicon photonics, PhotoGAN offers an innovative, reconfigurable architecture capable of accelerating transposed convolutions and other GAN-specific layers. The accelerator also incorporates a sparse computation optimization technique to reduce redundant operations, improving computational efficiency. Our experimental results demonstrate that PhotoGAN achieves at least 4.4x higher GOPS and 2.18x lower energy-per-bit (EPB) compared to state-of-the-art accelerators, including GPUs and TPUs. These findings showcase PhotoGAN as a promising solution for the next generation of GAN acceleration, providing substantial gains in both performance and energy efficiency.
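To make these operations concrete, the sketch below shows a toy PyTorch generator block built from a transposed convolution and instance normalization, the two GAN-specific operations the abstract highlights. It is an illustrative example only; the layer sizes are arbitrary placeholders and it is not PhotoGAN's architecture or code.

```python
# Illustrative toy generator block: transposed convolution (upsampling) followed
# by instance normalization. Shapes are arbitrary, not taken from the paper.
import torch
import torch.nn as nn

class TinyGenerator(nn.Module):
    def __init__(self, latent_channels: int = 128):
        super().__init__()
        # Transposed convolution: the 2x upsampling operation GAN generators rely on.
        self.upsample = nn.ConvTranspose2d(latent_channels, 64,
                                           kernel_size=4, stride=2, padding=1)
        # Instance normalization: per-sample, per-channel feature normalization.
        self.norm = nn.InstanceNorm2d(64)
        self.act = nn.ReLU()
        self.to_rgb = nn.Conv2d(64, 3, kernel_size=3, padding=1)

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        return torch.tanh(self.to_rgb(self.act(self.norm(self.upsample(z)))))

if __name__ == "__main__":
    z = torch.randn(1, 128, 8, 8)        # arbitrary latent feature map
    print(TinyGenerator()(z).shape)      # torch.Size([1, 3, 16, 16])
```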
Related papers
- Rethinking the Atmospheric Scattering-driven Attention via Channel and Gamma Correction Priors for Low-Light Image Enhancement [0.0]
We introduce CPGA-Net+, an extended version of the Channel-Prior and Gamma-Estimation Network (CPGA-Net).
CPGA-Net+ incorporates an attention mechanism driven by a reformulated Atmospheric Scattering Model.
It effectively addresses both global and local image processing through Plug-in Attention with gamma correction.
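For reference, the classical atmospheric scattering model that the entry above refers to is $I(x) = J(x)\,t(x) + A\,(1 - t(x))$, and gamma correction applies $I_{\mathrm{out}}(x) = I_{\mathrm{in}}(x)^{\gamma}$, where $I$ is the observed image, $J$ the scene radiance, $t(x)$ the transmission map, $A$ the atmospheric light, and $\gamma$ the correction exponent. These are background equations only; the paper's reformulation is not reproduced here.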
arXiv Detail & Related papers (2024-09-09T01:50:01Z)
- TeMPO: Efficient Time-Multiplexed Dynamic Photonic Tensor Core for Edge AI with Compact Slow-Light Electro-Optic Modulator [44.74560543672329]
We present a time-multiplexed dynamic photonic tensor accelerator, dubbed TeMPO, with cross-layer device/circuit/architecture customization.
We achieve a 368.6 TOPS peak performance, 22.3 TOPS/W energy efficiency, and 1.2 TOPS/mm$^2$ compute density.
This work signifies the power of cross-layer co-design and domain-specific customization, paving the way for future electronic-photonic accelerators.
arXiv Detail & Related papers (2024-02-12T03:40:32Z)
- E$^{2}$GAN: Efficient Training of Efficient GANs for Image-to-Image Translation [69.72194342962615]
We introduce and address a novel research direction: can the process of distilling GANs from diffusion models be made significantly more efficient?
First, we construct a base GAN model with generalized features, adaptable to different concepts through fine-tuning, eliminating the need for training from scratch.
Second, we identify crucial layers within the base GAN model and employ Low-Rank Adaptation (LoRA) with a simple yet effective rank search process, rather than fine-tuning the entire base model.
Third, we investigate the minimal amount of data necessary for fine-tuning, further reducing the overall training time.
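As background for the LoRA step described above, the sketch below shows a generic low-rank adapter wrapped around a frozen linear layer; it is a minimal illustration with placeholder rank and scaling, not E$^{2}$GAN's actual implementation.

```python
# Generic LoRA sketch: freeze a pretrained linear layer and train only a
# low-rank update (B @ A). Rank and scaling are arbitrary placeholders.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, rank: int = 4, alpha: float = 1.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():   # freeze the pretrained weights
            p.requires_grad = False
        self.lora_a = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.lora_b = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scale = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # y = W x + scale * B A x ; only A and B receive gradients.
        return self.base(x) + self.scale * (x @ self.lora_a.T @ self.lora_b.T)

if __name__ == "__main__":
    layer = LoRALinear(nn.Linear(256, 256), rank=4)
    print(layer(torch.randn(8, 256)).shape)   # torch.Size([8, 256])
```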
arXiv Detail & Related papers (2024-01-11T18:59:14Z)
- Photonic Accelerators for Image Segmentation in Autonomous Driving and Defect Detection [34.864059478265055]
Photonic computing promises faster and more energy-efficient deep neural network (DNN) inference than traditional digital hardware.
We show that certain segmentation models exhibit negligible loss in accuracy (compared to digital float32 models) when executed on photonic accelerators.
We discuss the challenges and potential optimizations that can help improve the application of photonic accelerators to such computer vision tasks.
arXiv Detail & Related papers (2023-09-28T18:22:41Z)
- A Simple and Effective Baseline for Attentional Generative Adversarial Networks [8.63558211869045]
Generating high-quality images from text descriptions by guiding a generative model is an innovative and challenging task.
In recent years, AttnGAN, which uses an attention mechanism to guide GAN training, has been proposed, along with SD-GAN and Stack-GAN++.
We apply the popular, simple, and effective idea of removing redundant structure and improving the backbone network of AttnGAN.
Our improvements significantly reduce the model size and improve training efficiency while keeping the model's performance unchanged.
arXiv Detail & Related papers (2023-06-26T13:55:57Z)
- PhotoFourier: A Photonic Joint Transform Correlator-Based Neural Network Accelerator [2.1372541869293555]
Integrated photonics has the potential to dramatically accelerate neural networks because of its low-latency nature.
The PhotoFourier accelerator achieves more than 28X better energy-delay product compared to state-of-the-art photonic neural network accelerators.
arXiv Detail & Related papers (2022-11-10T00:48:36Z)
- All-optical graph representation learning using integrated diffractive photonic computing units [51.15389025760809]
Photonic neural networks perform brain-inspired computations using photons instead of electrons.
We propose an all-optical graph representation learning architecture, termed the diffractive graph neural network (DGNN).
We demonstrate the use of DGNN-extracted features for node- and graph-level classification tasks with benchmark databases and achieve superior performance.
arXiv Detail & Related papers (2022-04-23T02:29:48Z)
- Multi-Channel Convolutional Analysis Operator Learning for Dual-Energy CT Reconstruction [108.06731611196291]
We develop a multi-channel convolutional analysis operator learning (MCAOL) method to exploit common spatial features within attenuation images at different energies.
We propose an optimization method which jointly reconstructs the attenuation images at low and high energies with a mixed norm regularization on the sparse features.
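Schematically, such a joint reconstruction can be written as $\min_{x_L, x_H} \sum_{e \in \{L,H\}} \tfrac{1}{2}\|A_e x_e - y_e\|_2^2 + \lambda \sum_k \|(\omega_k * x_L,\ \omega_k * x_H)\|_{2,1}$, where $x_L, x_H$ are the low- and high-energy attenuation images, $A_e$ the forward operators, $y_e$ the measurements, and $\omega_k$ the learned convolutional analysis filters; the mixed $\ell_{2,1}$ norm couples the sparse features across the two energies. This is a generic form for illustration, not necessarily the paper's exact objective.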
arXiv Detail & Related papers (2022-03-10T14:22:54Z)
- Improved Transformer for High-Resolution GANs [69.42469272015481]
We introduce two key ingredients to the Transformer architecture to address the challenge of high-resolution image synthesis.
We show in the experiments that the proposed HiT achieves state-of-the-art FID scores of 31.87 and 2.95 on unconditional ImageNet $128 \times 128$ and FFHQ $256 \times 256$, respectively.
arXiv Detail & Related papers (2021-06-14T17:39:49Z)
- Improving the Speed and Quality of GAN by Adversarial Training [87.70013107142142]
We develop FastGAN to improve the speed and quality of GAN training based on the adversarial training technique.
Our training algorithm brings ImageNet training to the broader public by requiring only 2-4 GPUs.
arXiv Detail & Related papers (2020-08-07T20:21:31Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences arising from its use.