A Real Time 1280x720 Object Detection Chip With 585MB/s Memory Traffic
- URL: http://arxiv.org/abs/2205.01571v1
- Date: Mon, 2 May 2022 09:58:39 GMT
- Title: A Real Time 1280x720 Object Detection Chip With 585MB/s Memory Traffic
- Authors: Kuo-Wei Chang, Hsu-Tung Shih, Tian-Sheuan Chang, Shang-Hong Tsai,
Chih-Chyau Yang, Chien-Ming Wu, Chun-Ming Huang
- Abstract summary: This paper proposes a low memory traffic DLA chip with joint hardware and software optimization.
To maximize hardware utilization under memory bandwidth, we morph and fuse the object detection model into a group fusion-ready model.
This reduces the YOLOv2's feature memory traffic from 2.9 GB/s to 0.15 GB/s.
- Score: 1.553339756999288
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Memory bandwidth has become the real-time bottleneck of current deep learning
accelerators (DLA), particularly for high definition (HD) object detection.
Under resource constraints, this paper proposes a low memory traffic DLA chip
with joint hardware and software optimization. To maximize hardware utilization
under memory bandwidth, we morph and fuse the object detection model into a
group fusion-ready model to reduce intermediate data access. This reduces the
YOLOv2's feature memory traffic from 2.9 GB/s to 0.15 GB/s. To support group
fusion, our previous DLA-based hardware employs a unified buffer with
write-masking for simple layer-by-layer processing in a fusion group. When
compared to our previous DLA with the same PE numbers, the chip implemented in
a TSMC 40nm process supports 1280x720@30FPS object detection and consumes 7.9X
less external DRAM access energy, from 2607 mJ to 327.6 mJ.
Related papers
- Endor: Hardware-Friendly Sparse Format for Offloaded LLM Inference [47.043257902725294]
We propose a novel sparse format that compresses unstructured sparse pattern of pruned LLM weights to non-zero values with high compression ratio and low decompression overhead.
Compared to offloaded inference using the popular Huggingface Accelerate, applying Endor accelerates OPT-66B by 1.70x and Llama2-70B by 1.78x.
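The idea of compressing an unstructured sparse pattern into packed non-zero values can be sketched with a simple bitmap scheme: store one bit per weight plus the non-zero values in order, so decompression is a single linear scan. This is an illustrative format only, not Endor's actual layout.

```python
# Hypothetical sketch of a bitmap-style sparse format: unstructured pruned
# weights become a dense bitmask plus the packed non-zero values, so
# decompression is a cheap linear scan. Illustrative, not Endor's format.

def compress(weights):
    """Split a pruned weight list into (bitmask, packed non-zeros)."""
    mask = [1 if w != 0.0 else 0 for w in weights]
    values = [w for w in weights if w != 0.0]
    return mask, values

def decompress(mask, values):
    """Rebuild the dense weight list from the mask and packed values."""
    out, it = [], iter(values)
    for bit in mask:
        out.append(next(it) if bit else 0.0)
    return out

weights = [0.0, 1.5, 0.0, 0.0, -2.0, 0.25, 0.0, 0.0]
mask, values = compress(weights)
assert decompress(mask, values) == weights
# storage: one mask bit per weight + one value per non-zero
# (8 bits + 3 values here, versus 8 dense values)
```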
arXiv Detail & Related papers (2024-06-17T15:55:08Z)
- Efficient Video Object Segmentation via Modulated Cross-Attention Memory [123.12273176475863]
We propose a transformer-based approach, named MAVOS, to model temporal smoothness without requiring frequent memory expansion.
Our MAVOS achieves a J&F score of 63.3% while operating at 37 frames per second (FPS) on a single V100 GPU.
arXiv Detail & Related papers (2024-03-26T17:59:58Z)
- DISTFLASHATTN: Distributed Memory-efficient Attention for Long-context LLMs Training [82.06732962485754]
FlashAttention effectively reduces the quadratic peak memory usage to linear in training transformer-based large language models (LLMs) on a single GPU.
We introduce DISTFLASHATTN, a memory-efficient attention mechanism optimized for long-context LLMs training.
It achieves 1.67x and 1.26 - 1.88x speedup compared to recent Ring Attention and DeepSpeed-Ulysses.
arXiv Detail & Related papers (2023-10-05T03:47:57Z)
- Region Aware Video Object Segmentation with Deep Motion Modeling [56.95836951559529]
Region Aware Video Object (RAVOS) is a method that predicts regions of interest for efficient object segmentation and memory storage.
For efficient segmentation, object features are extracted according to the ROIs, and an object decoder is designed for object-level segmentation.
For efficient memory storage, we propose motion path memory to filter out redundant context by memorizing the features within the motion path of objects between two frames.
arXiv Detail & Related papers (2022-07-21T01:44:40Z)
- ETAD: A Unified Framework for Efficient Temporal Action Detection [70.21104995731085]
Untrimmed video understanding such as temporal action detection (TAD) often suffers from the pain of huge demand for computing resources.
We build a unified framework for efficient end-to-end temporal action detection (ETAD).
ETAD achieves state-of-the-art performance on both THUMOS-14 and ActivityNet-1.3.
arXiv Detail & Related papers (2022-05-14T21:16:21Z)
- A Real Time Super Resolution Accelerator with Tilted Layer Fusion [0.10547353841674209]
This paper proposes a real-time hardware accelerator with a tilted layer fusion method that reduces external DRAM bandwidth by 92% and requires only 102KB of on-chip memory.
The design implemented with a 40nm CMOS process achieves 1920x1080@60fps throughput with 544.3K gate count when running at 600MHz; it has higher throughput and lower area cost than previous designs.
arXiv Detail & Related papers (2022-05-09T01:47:02Z)
- A TinyML Platform for On-Device Continual Learning with Quantized Latent Replays [66.62377866022221]
Latent Replay-based Continual Learning (CL) techniques enable online, serverless adaptation in principle.
We introduce a HW/SW platform for end-to-end CL based on a 10-core FP32-enabled parallel ultra-low-power processor.
Our results show that by combining these techniques, continual learning can be achieved in practice using less than 64MB of memory.
arXiv Detail & Related papers (2021-10-20T11:01:23Z)
- MAFAT: Memory-Aware Fusing and Tiling of Neural Networks for Accelerated Edge Inference [1.7894377200944507]
Machine learning networks can easily exceed available memory, increasing latency due to excessive OS swapping.
We propose a memory usage predictor coupled with a search algorithm to provide optimized fusing and tiling configurations.
Results show that our approach can run in less than half the memory, and with a speedup of up to 2.78 under severe memory constraints.
arXiv Detail & Related papers (2021-07-14T19:45:49Z)
- Training Large Neural Networks with Constant Memory using a New Execution Algorithm [0.5424799109837065]
We introduce a new relay-style execution technique called L2L (layer-to-layer).
L2L is able to fit models up to 50 Billion parameters on a machine with a single 16GB V100 and 512GB CPU memory.
arXiv Detail & Related papers (2020-02-13T17:29:47Z)
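The relay-style execution idea above can be sketched in a few lines: only the layer currently being computed is resident in "device" memory, while the full model stays in host memory and layers are staged in one at a time. All names and the toy layer function below are illustrative assumptions, not L2L's actual API.

```python
# Hypothetical sketch of relay-style (layer-to-layer) execution: at most one
# layer's weights live in device memory at any moment; the full model stays
# in host memory. Names and the toy layer are illustrative only.

HOST_MODEL = {f"layer{i}": [0.1 * i] * 4 for i in range(6)}  # weights on CPU

def run_layer(weights, x):
    # stand-in for a real layer: a simple weighted sum
    return sum(w * x for w in weights)

def relay_forward(x):
    device = {}  # at most one layer's weights resident here at a time
    peak = 0
    for name in HOST_MODEL:
        device.clear()                    # evict the previous layer
        device[name] = HOST_MODEL[name]   # stage only the current layer
        peak = max(peak, len(device))
        x = run_layer(device[name], x)
    return x, peak

y, peak_layers = relay_forward(1.0)
assert peak_layers == 1  # device never holds more than one layer
```

Because device residency is bounded by the largest single layer rather than the whole model, this is how a 50-billion-parameter model can be trained through a single 16GB GPU backed by 512GB of CPU memory.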
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.