SYENet: A Simple Yet Effective Network for Multiple Low-Level Vision
Tasks with Real-time Performance on Mobile Device
- URL: http://arxiv.org/abs/2308.08137v1
- Date: Wed, 16 Aug 2023 04:03:59 GMT
- Authors: Weiran Gou, Ziyao Yi, Yan Xiang, Shaoqing Li, Zibin Liu, Dehui Kong
and Ke Xu
- Abstract summary: We propose a novel network, SYENet, to handle multiple low-level vision tasks on mobile devices in a real-time manner.
The proposed method achieves superior performance, with the best PSNR among competing networks in real-time applications.
- Score: 6.548475407783714
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: With the rapid development of AI hardware accelerators, applying deep
learning-based algorithms to solve various low-level vision tasks on mobile
devices has gradually become possible. However, two main problems still need to
be solved: task-specific algorithms make it difficult to integrate them into a
single neural network architecture, and large numbers of parameters make
real-time inference difficult to achieve. To tackle these problems, we propose
a novel network, SYENet, with only ~6K parameters, to handle multiple
low-level vision tasks on mobile devices in real time. SYENet consists of two
asymmetrical branches built from simple blocks. To effectively fuse the outputs
of the asymmetrical branches, a Quadratic Connection Unit (QCU) is proposed.
Furthermore, to improve performance, a new Outlier-Aware Loss is proposed for
processing the image. The proposed method achieves superior performance, with
the best PSNR among competing networks, in real-time applications such as
Image Signal Processing (ISP), Low-Light Enhancement (LLE), and
Super-Resolution (SR), with 2K@60FPS throughput on the Qualcomm Snapdragon 8
Gen 1 mobile SoC (System-on-Chip). In particular, for the ISP task, SYENet
achieved the highest score in the MAI 2022 Learned Smartphone ISP challenge.
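The abstract does not spell out the QCU's exact formula, but the quadratic-connection idea, fusing two branch outputs with a multiplicative term rather than a plain sum, can be sketched as follows. The two branch functions and the fusion rule (sum plus product) are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

# Hedged sketch of a quadratic connection: two asymmetric branches process
# the same input, and their outputs are fused with a product term in
# addition to the sum, so the fusion can model multiplicative interactions
# at essentially zero extra parameter cost.

def branch_a(x):
    # stand-in for the deeper branch (hypothetical pointwise nonlinearity)
    return np.tanh(0.5 * x + 0.1)

def branch_b(x):
    # stand-in for the shallower branch (hypothetical linear map)
    return 0.8 * x

def quadratic_connect(x):
    a, b = branch_a(x), branch_b(x)
    return a * b + a + b  # quadratic fusion: sum plus product term

x = np.linspace(-1.0, 1.0, 5)
y = quadratic_connect(x)
print(y.shape)  # (5,)
```

Compared with a plain additive skip connection, the `a * b` term lets the fusion output depend on the agreement between the two branches, which is one plausible reading of why the paper calls the unit "quadratic".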
Related papers
- Task-Oriented Real-time Visual Inference for IoVT Systems: A Co-design Framework of Neural Networks and Edge Deployment [61.20689382879937]
Task-oriented edge computing shifts data analysis to the edge, but existing methods struggle to balance high model performance with low resource consumption.
We propose a novel co-design framework to optimize neural network architecture.
arXiv Detail & Related papers (2024-10-29T19:02:54Z)
- Ev-Edge: Efficient Execution of Event-based Vision Algorithms on Commodity Edge Platforms [10.104371980353973]
Ev-Edge is a framework that contains three key optimizations to boost the performance of event-based vision systems on edge platforms.
On several state-of-the-art networks for a range of autonomous navigation tasks, Ev-Edge achieves 1.28x-2.05x improvements in latency and 1.23x-2.15x in energy.
arXiv Detail & Related papers (2024-03-23T04:44:55Z)
- Fast GraspNeXt: A Fast Self-Attention Neural Network Architecture for Multi-task Learning in Computer Vision Tasks for Robotic Grasping on the Edge [80.88063189896718]
High architectural and computational complexity can result in poor suitability for deployment on embedded devices.
Fast GraspNeXt is a fast self-attention neural network architecture tailored for embedded multi-task learning in computer vision tasks for robotic grasping.
arXiv Detail & Related papers (2023-04-21T18:07:14Z)
- FPGA-optimized Hardware acceleration for Spiking Neural Networks [69.49429223251178]
This work presents the development of a hardware accelerator for an SNN, with off-line training, applied to an image recognition task.
The design targets a Xilinx Artix-7 FPGA, using around 40% of the available hardware resources in total.
It reduces classification time by three orders of magnitude, with only a small 4.5% impact on accuracy, compared with its full-precision software counterpart.
arXiv Detail & Related papers (2022-01-18T13:59:22Z)
- Achieving on-Mobile Real-Time Super-Resolution with Neural Architecture and Pruning Search [64.80878113422824]
We propose an automatic search framework that derives sparse super-resolution (SR) models with high image quality while satisfying the real-time inference requirement.
With the proposed framework, we are the first to achieve real-time SR inference (only tens of milliseconds per frame) at 720p resolution with competitive image quality.
arXiv Detail & Related papers (2021-08-18T06:47:31Z)
- Multi-Exit Semantic Segmentation Networks [78.44441236864057]
We propose a framework for converting state-of-the-art segmentation models to MESS networks: specially trained CNNs that employ parametrised early exits along their depth to save computation during inference on easier samples.
We co-optimise the number, placement and architecture of the attached segmentation heads, along with the exit policy, to adapt to the device capabilities and application-specific requirements.
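The exit-policy idea described in this entry can be sketched as a simple confidence-threshold loop. The stage and head functions and the threshold value below are hypothetical stand-ins for illustration, not the MESS authors' architecture or training procedure:

```python
import numpy as np

# Hedged sketch of early-exit inference: each "exit head" produces a
# prediction with a confidence score, and the exit policy stops at the
# first head whose confidence clears a threshold, so easy inputs skip
# the deeper (more expensive) stages.

def run_with_early_exit(x, stages, heads, threshold=0.9):
    """stages: feature transforms; heads: functions returning (pred, conf)."""
    feats = x
    last = len(stages) - 1
    for depth, (stage, head) in enumerate(zip(stages, heads)):
        feats = stage(feats)
        pred, conf = head(feats)
        if conf >= threshold or depth == last:
            return pred, depth  # exit as soon as confidence suffices
    return None, -1  # only reached if stages is empty

# toy example: two stages; the deeper head is assumed more confident
stages = [lambda f: f * 2.0, lambda f: f + 1.0]
heads = [
    lambda f: (np.sign(f), 0.5),   # shallow head: low confidence
    lambda f: (np.sign(f), 0.95),  # deep head: high confidence
]
pred, depth = run_with_early_exit(np.array([1.0]), stages, heads)
print(depth)  # 1: the shallow head is unsure, so inference runs to depth 1
```

Co-optimising the number and placement of heads, as the entry describes, amounts to choosing where in this loop the `heads` are attached and what `threshold` the policy uses for a given device budget.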
arXiv Detail & Related papers (2021-06-07T11:37:03Z)
- Multi-Task Network Pruning and Embedded Optimization for Real-time Deployment in ADAS [0.0]
Camera-based deep learning algorithms are increasingly needed for perception in automated driving systems.
However, constraints from the automotive industry challenge the deployment of CNNs by imposing embedded systems with limited computational resources.
We propose an approach to embed a multi-task CNN network under such conditions on a commercial prototype platform.
arXiv Detail & Related papers (2021-01-19T19:29:38Z)
- AttendNets: Tiny Deep Image Recognition Neural Networks for the Edge via Visual Attention Condensers [81.17461895644003]
We introduce AttendNets, low-precision, highly compact deep neural networks tailored for on-device image recognition.
AttendNets possess deep self-attention architectures based on visual attention condensers.
Results show AttendNets have significantly lower architectural and computational complexity when compared to several deep neural networks.
arXiv Detail & Related papers (2020-09-30T01:53:17Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.