DeltaNN: Assessing the Impact of Computational Environment Parameters on the Performance of Image Recognition Models
- URL: http://arxiv.org/abs/2306.06208v5
- Date: Mon, 25 Mar 2024 21:08:25 GMT
- Title: DeltaNN: Assessing the Impact of Computational Environment Parameters on the Performance of Image Recognition Models
- Authors: Nikolaos Louloudakis, Perry Gibson, José Cano, Ajitha Rajan
- Abstract summary: Failure in real-time image recognition tasks can occur due to sub-optimal mapping on hardware accelerators.
We present a differential testing framework, DeltaNN, that allows us to assess the impact of different computational environment parameters on the performance of image recognition models.
- Score: 2.379078565066793
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Image recognition tasks typically use deep learning and require enormous processing power, thus relying on hardware accelerators like GPUs and TPUs for fast, timely processing. Failure in real-time image recognition tasks can occur due to sub-optimal mapping on hardware accelerators during model deployment, which may lead to timing uncertainty and erroneous behavior. Mapping on hardware accelerators is done using multiple software components like deep learning frameworks, compilers, and device libraries, that we refer to as the computational environment. Owing to the increased use of image recognition tasks in safety-critical applications like autonomous driving and medical imaging, it is imperative to assess their robustness to changes in the computational environment, as the impact of parameters like deep learning frameworks, compiler optimizations, and hardware devices on model performance and correctness is not yet well understood. In this paper we present a differential testing framework, DeltaNN, that allows us to assess the impact of different computational environment parameters on the performance of image recognition models during deployment, post training. DeltaNN generates different implementations of a given image recognition model for variations in environment parameters, namely, deep learning frameworks, compiler optimizations and hardware devices and analyzes differences in model performance as a result. Using DeltaNN, we conduct an empirical study of robustness analysis of three popular image recognition models using the ImageNet dataset. We report the impact in terms of misclassifications and inference time differences across different settings. In total, we observed up to 100% output label differences across deep learning frameworks, and up to 81% unexpected performance degradation in terms of inference time, when applying compiler optimizations.
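The abstract describes DeltaNN's core differential-testing loop: run the same trained model under two computational environments (e.g. different deep learning frameworks or compiler-optimization levels) and compare output labels and inference times. A minimal, self-contained sketch of that comparison is below; the two backends are hypothetical stand-in stubs, not DeltaNN's actual implementation, and all names are illustrative.

```python
# Hedged sketch: differential testing of one model across two
# "computational environments". Real usage would replace the stub
# backends with, e.g., a TensorFlow build vs. a TVM-compiled build.
import time
from typing import Callable, List, Tuple

def compare_environments(
    images: List[List[float]],
    infer_a: Callable[[List[float]], int],
    infer_b: Callable[[List[float]], int],
) -> Tuple[float, float, float]:
    """Return (% output-label differences, mean latency A, mean latency B)."""
    mismatches = 0
    t_a = t_b = 0.0
    for img in images:
        start = time.perf_counter()
        label_a = infer_a(img)          # top-1 label from environment A
        t_a += time.perf_counter() - start
        start = time.perf_counter()
        label_b = infer_b(img)          # top-1 label from environment B
        t_b += time.perf_counter() - start
        mismatches += label_a != label_b
    n = len(images)
    return 100.0 * mismatches / n, t_a / n, t_b / n

# Stub "backends" whose decision boundaries differ slightly, mimicking
# numerical divergence introduced by a different framework or optimization.
backend_a = lambda img: int(sum(img) > 1.0)
backend_b = lambda img: int(sum(img) > 1.5)

diff_pct, lat_a, lat_b = compare_environments(
    [[0.4, 0.8], [0.2, 0.1], [1.0, 1.0]], backend_a, backend_b
)
print(f"label differences: {diff_pct:.1f}%")
```

The same loop, pointed at real model builds, yields exactly the two metrics the paper reports: the misclassification (label-difference) rate and the inference-time delta between environments.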
Related papers
- Task-Oriented Real-time Visual Inference for IoVT Systems: A Co-design Framework of Neural Networks and Edge Deployment [61.20689382879937]
Task-oriented edge computing addresses these demands by shifting data analysis to the edge.
Existing methods struggle to balance high model performance with low resource consumption.
We propose a novel co-design framework to optimize neural network architecture.
arXiv Detail & Related papers (2024-10-29T19:02:54Z) - Sensitivity-Informed Augmentation for Robust Segmentation [21.609070498399863]
Internal noises such as variations in camera quality or lens distortion can affect the performance of segmentation models.
We present an efficient, adaptable, and gradient-free method to enhance the robustness of learning-based segmentation models during training.
arXiv Detail & Related papers (2024-06-03T15:25:45Z) - Efficient Visual State Space Model for Image Deblurring [83.57239834238035]
Convolutional neural networks (CNNs) and Vision Transformers (ViTs) have achieved excellent performance in image restoration.
We propose a simple yet effective visual state space model (EVSSM) for image deblurring.
arXiv Detail & Related papers (2024-05-23T09:13:36Z) - DGNet: Dynamic Gradient-Guided Network for Water-Related Optics Image Enhancement [77.0360085530701]
Underwater image enhancement (UIE) is a challenging task due to the complex degradation caused by underwater environments.
Previous methods often idealize the degradation process, and neglect the impact of medium noise and object motion on the distribution of image features.
Our approach utilizes predicted images to dynamically update pseudo-labels, adding a dynamic gradient to optimize the network's gradient space.
arXiv Detail & Related papers (2023-12-12T06:07:21Z) - Distance Weighted Trans Network for Image Completion [52.318730994423106]
We propose a new architecture that relies on Distance-based Weighted Transformer (DWT) to better understand the relationships between an image's components.
CNNs are used to augment the local texture information of coarse priors.
DWT blocks are used to recover certain coarse textures and coherent visual structures.
arXiv Detail & Related papers (2023-10-11T12:46:11Z) - Photonic Accelerators for Image Segmentation in Autonomous Driving and Defect Detection [34.864059478265055]
Photonic computing promises faster and more energy-efficient deep neural network (DNN) inference than traditional digital hardware.
We show that certain segmentation models exhibit negligible loss in accuracy (compared to digital float32 models) when executed on photonic accelerators.
We discuss the challenges and potential optimizations that can help improve the application of photonic accelerators to such computer vision tasks.
arXiv Detail & Related papers (2023-09-28T18:22:41Z) - An Ensemble Model for Distorted Images in Real Scenarios [0.0]
In this paper, we apply the object detector YOLOv7 to detect distorted images from the CDCOCO dataset.
Through carefully designed optimizations, our model achieves excellent performance on the CDCOCO test set.
Our denoising detection model can denoise and repair distorted images, making the model useful in a variety of real-world scenarios and environments.
arXiv Detail & Related papers (2023-09-26T15:12:55Z) - Domain-Aware Few-Shot Learning for Optical Coherence Tomography Noise Reduction [0.0]
We propose a few-shot supervised learning framework for optical coherence tomography (OCT) noise reduction.
This framework offers a dramatic increase in training speed and requires only a single image, or part of an image, and a corresponding speckle suppressed ground truth.
Our results demonstrate significant potential for improving sample complexity, generalization, and time efficiency.
arXiv Detail & Related papers (2023-06-13T19:46:40Z) - Exploring Effects of Computational Parameter Changes to Image Recognition Systems [0.802904964931021]
Failure in real-time image recognition tasks can occur due to incorrect mapping on hardware accelerators.
It is imperative to assess their robustness to changes in the computational environment.
arXiv Detail & Related papers (2022-11-01T14:00:01Z) - Learning to Learn Parameterized Classification Networks for Scalable Input Images [76.44375136492827]
Convolutional Neural Networks (CNNs) do not behave predictably under changes in input resolution.
We employ meta learners to generate convolutional weights of main networks for various input scales.
We further utilize knowledge distillation on the fly over model predictions based on different input resolutions.
arXiv Detail & Related papers (2020-07-13T04:27:25Z) - Two-shot Spatially-varying BRDF and Shape Estimation [89.29020624201708]
We propose a novel deep learning architecture with a stage-wise estimation of shape and SVBRDF.
We create a large-scale synthetic training dataset with domain-randomized geometry and realistic materials.
Experiments on both synthetic and real-world datasets show that our network trained on a synthetic dataset can generalize well to real-world images.
arXiv Detail & Related papers (2020-04-01T12:56:13Z)
This list is automatically generated from the titles and abstracts of the papers in this site.