Photonic Accelerators for Image Segmentation in Autonomous Driving and
Defect Detection
- URL: http://arxiv.org/abs/2309.16783v2
- Date: Tue, 3 Oct 2023 16:34:13 GMT
- Title: Photonic Accelerators for Image Segmentation in Autonomous Driving and
Defect Detection
- Authors: Lakshmi Nair, David Widemann, Brad Turcott, Nick Moore, Alexandra
Wleklinski, Darius Bunandar, Ioannis Papavasileiou, Shihu Wang, Eric Logan
- Abstract summary: Photonic computing promises faster and more energy-efficient deep neural network (DNN) inference than traditional digital hardware.
We show that certain segmentation models exhibit negligible loss in accuracy (compared to digital float32 models) when executed on photonic accelerators.
We discuss the challenges and potential optimizations that can help improve the application of photonic accelerators to such computer vision tasks.
- Score: 34.864059478265055
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Photonic computing promises faster and more energy-efficient deep neural
network (DNN) inference than traditional digital hardware. Advances in photonic
computing can have profound impacts on applications such as autonomous driving
and defect detection that depend on fast, accurate and energy efficient
execution of image segmentation models. In this paper, we investigate image
segmentation on photonic accelerators to explore: a) the types of image
segmentation DNN architectures that are best suited for photonic accelerators,
and b) the throughput and energy efficiency of executing the different image
segmentation models on photonic accelerators, along with the trade-offs
involved therein. Specifically, we demonstrate that certain segmentation models
exhibit negligible loss in accuracy (compared to digital float32 models) when
executed on photonic accelerators, and explore the empirical reasoning for
their robustness. We also discuss techniques for recovering accuracy in the
case of models that do not perform well. Further, we compare throughput
(inferences-per-second) and energy consumption estimates for different image
segmentation workloads on photonic accelerators. We discuss the challenges and
potential optimizations that can help improve the application of photonic
accelerators to such computer vision tasks.
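As a rough, assumption-based illustration of the comparison described in the abstract, the sketch below emulates reduced-precision photonic matrix multiplies by fake-quantizing the weights of a segmentation network and then measures mean IoU against the unmodified float32 model. The bit width, the per-tensor quantization scheme, and the names (fake_quantize, emulate_photonic_precision, mean_iou, val_loader, the model itself) are hypothetical; this is not the paper's hardware model or evaluation code.

```python
# Minimal sketch: emulate a low-precision analog accelerator by fake-quantizing
# conv/linear weights, then compare segmentation accuracy (mean IoU) with the
# float32 baseline. Assumes `model(images)` returns per-pixel class logits of
# shape [N, C, H, W]; all names and the 8-bit setting are illustrative only.
import copy
import torch
import torch.nn as nn

def fake_quantize(t: torch.Tensor, bits: int = 8) -> torch.Tensor:
    """Symmetric per-tensor fake quantization to 2**bits levels."""
    qmax = 2 ** (bits - 1) - 1
    scale = t.abs().max() / qmax
    if scale == 0:
        return t
    return torch.round(t / scale).clamp(-qmax - 1, qmax) * scale

def emulate_photonic_precision(model: nn.Module, bits: int = 8) -> nn.Module:
    """Copy of `model` with every conv/linear weight fake-quantized."""
    q_model = copy.deepcopy(model).eval()
    with torch.no_grad():
        for m in q_model.modules():
            if isinstance(m, (nn.Conv2d, nn.Linear)):
                m.weight.copy_(fake_quantize(m.weight, bits))
    return q_model

@torch.no_grad()
def mean_iou(model: nn.Module, loader, num_classes: int) -> float:
    """Plain mean IoU over a labelled segmentation dataset."""
    inter = torch.zeros(num_classes)
    union = torch.zeros(num_classes)
    for images, targets in loader:
        preds = model(images).argmax(dim=1)
        for c in range(num_classes):
            p, t = preds == c, targets == c
            inter[c] += (p & t).sum()
            union[c] += (p | t).sum()
    return (inter / union.clamp(min=1)).mean().item()

# Hypothetical usage:
# fp32 = mean_iou(model, val_loader, num_classes=19)
# phot = mean_iou(emulate_photonic_precision(model, bits=8), val_loader, num_classes=19)
# print(f"float32 mIoU={fp32:.3f}  emulated low-precision mIoU={phot:.3f}")
```

A gap between the two numbers is the kind of accuracy loss the paper quantifies; the recovery techniques it discusses would then aim to close that gap.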
Related papers
- Scalable Back-Propagation-Free Training of Optical Physics-Informed Neural Networks [12.726911225088443]
Physics-informed neural networks (PINNs) have shown promise in solving partial differential equations (PDEs).
Photonic computing offers a potential route to accelerate PINN training because of its ultra-high operation speed.
This paper proposes a completely back-propagation-free (BP-free) and highly scalable framework for training real-size PINNs on silicon photonic platforms.
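One generic way to update parameters without back-propagation is zeroth-order optimization such as SPSA, which needs only forward evaluations of the loss, the kind of primitive an analog photonic platform naturally provides. The sketch below illustrates that general idea only; it is not the specific framework proposed in the paper above, and the loss function and step sizes are placeholders.

```python
# Illustrative SPSA-style zeroth-order update: estimate the gradient from two
# forward evaluations of the loss under a random +/-1 perturbation. Generic
# sketch of back-propagation-free training, not the cited paper's method.
import numpy as np

def spsa_step(params, loss_fn, lr=1e-3, eps=1e-2):
    """One SPSA update using only two forward passes."""
    delta = np.random.choice([-1.0, 1.0], size=params.shape)  # random perturbation
    g = (loss_fn(params + eps * delta) - loss_fn(params - eps * delta)) / (2 * eps)
    return params - lr * g * delta                             # rank-one gradient estimate

# Toy usage with a stand-in quadratic loss (a real PINN would use a PDE residual):
theta = np.random.default_rng(0).normal(size=32)
loss = lambda p: float(np.sum(p ** 2))
for _ in range(500):
    theta = spsa_step(theta, loss)
print("final loss:", loss(theta))
```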
arXiv Detail & Related papers (2025-02-17T23:45:23Z)
- PhotoGAN: Generative Adversarial Neural Network Acceleration with Silicon Photonics [2.9699290794642366]
PhotoGAN is the first silicon-photonic accelerator designed to handle the specialized operations of GAN models.
PhotoGAN achieves at least 4.4x higher GOPS and 2.18x lower energy-per-bit (EPB) compared to state-of-the-art accelerators.
arXiv Detail & Related papers (2025-01-23T16:53:31Z) - Numerical Pruning for Efficient Autoregressive Models [87.56342118369123]
This paper focuses on compressing decoder-only transformer-based autoregressive models through structural weight pruning.
Specifically, we propose a training-free pruning method that calculates a numerical score with Newton's method for the Attention and MLP modules, respectively.
To verify the effectiveness of our method, we provide both theoretical support and extensive experiments.
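For context on what a score-based, training-free structural pruning step can look like, the sketch below ranks weight groups (for example attention heads) by a second-order saliency of the form 0.5 * h * w^2, the loss increase a quadratic, Newton-style expansion predicts when a group is zeroed, and drops the lowest-scoring groups. The exact numerical score used in the paper above may differ; the formula, grouping, and array names here are assumptions made for illustration.

```python
# Generic second-order (Newton/Taylor-style) saliency pruning sketch: each row
# of W is one prunable group (e.g. an attention head); H holds diagonal
# Hessian estimates of the same shape. Illustrative only.
import numpy as np

def second_order_scores(W: np.ndarray, H: np.ndarray) -> np.ndarray:
    """Predicted loss increase per group if that group is zeroed."""
    return 0.5 * np.sum(H * W ** 2, axis=1)

def prune_lowest(W: np.ndarray, H: np.ndarray, ratio: float = 0.25) -> np.ndarray:
    """Zero out the `ratio` fraction of groups with the smallest scores."""
    scores = second_order_scores(W, H)
    k = int(ratio * len(scores))
    out = W.copy()
    out[np.argsort(scores)[:k]] = 0.0   # remove least important groups
    return out

# Toy usage: 12 "heads" of 64 weights each, random curvature stand-ins.
rng = np.random.default_rng(0)
W = rng.normal(size=(12, 64))
H = np.abs(rng.normal(size=(12, 64)))
print("heads kept:", int(prune_lowest(W, H).any(axis=1).sum()))
```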
arXiv Detail & Related papers (2024-12-17T01:09:23Z) - Understanding and Improving Training-Free AI-Generated Image Detections with Vision Foundation Models [68.90917438865078]
Deepfake techniques for facial synthesis and editing, enabled by modern generative models, pose serious risks.
In this paper, we investigate how detection performance varies across model backbones, types, and datasets.
We introduce Contrastive Blur, which enhances performance on facial images, and MINDER, which addresses noise type bias, balancing performance across domains.
arXiv Detail & Related papers (2024-11-28T13:04:45Z) - Efficient Visual State Space Model for Image Deblurring [83.57239834238035]
Convolutional neural networks (CNNs) and Vision Transformers (ViTs) have achieved excellent performance in image restoration.
We propose a simple yet effective visual state space model (EVSSM) for image deblurring.
arXiv Detail & Related papers (2024-05-23T09:13:36Z) - TeMPO: Efficient Time-Multiplexed Dynamic Photonic Tensor Core for Edge
AI with Compact Slow-Light Electro-Optic Modulator [44.74560543672329]
We present a time-multiplexed dynamic photonic tensor accelerator, dubbed TeMPO, with cross-layer device/circuit/architecture customization.
We achieve a 368.6 TOPS peak performance, 22.3 TOPS/W energy efficiency, and 1.2 TOPS/mm$^2$ compute density.
This work signifies the power of cross-layer co-design and domain-specific customization, paving the way for future electronic-photonic accelerators.
arXiv Detail & Related papers (2024-02-12T03:40:32Z) - DeltaNN: Assessing the Impact of Computational Environment Parameters on the Performance of Image Recognition Models [2.379078565066793]
Failure in real-time image recognition tasks can occur due to sub-optimal mapping on hardware accelerators.
We present a differential testing framework, DeltaNN, that allows us to assess the impact of different computational environment parameters on the performance of image recognition models.
arXiv Detail & Related papers (2023-06-05T23:07:01Z) - GDIP: Gated Differentiable Image Processing for Object-Detection in
Adverse Conditions [15.327704761260131]
We present a Gated Differentiable Image Processing (GDIP) block, a domain-agnostic network architecture.
Our proposed GDIP block learns to enhance images directly through the downstream object detection loss.
We demonstrate significant improvement in detection performance over several state-of-the-art methods.
arXiv Detail & Related papers (2022-09-29T16:43:13Z) - All-optical graph representation learning using integrated diffractive
photonic computing units [51.15389025760809]
Photonic neural networks perform brain-inspired computations using photons instead of electrons.
We propose an all-optical graph representation learning architecture, termed diffractive graph neural network (DGNN).
We demonstrate the use of DGNN extracted features for node and graph-level classification tasks with benchmark databases and achieve superior performance.
arXiv Detail & Related papers (2022-04-23T02:29:48Z) - Learning Deformable Image Registration from Optimization: Perspective,
Modules, Bilevel Training and Beyond [62.730497582218284]
We develop a new deep learning based framework to optimize a diffeomorphic model via multi-scale propagation.
We conduct two groups of image registration experiments on 3D volume datasets including image-to-atlas registration on brain MRI data and image-to-image registration on liver CT data.
arXiv Detail & Related papers (2020-04-30T03:23:45Z)