Split Computing for Complex Object Detectors: Challenges and Preliminary
Results
- URL: http://arxiv.org/abs/2007.13312v2
- Date: Thu, 30 Jul 2020 05:14:59 GMT
- Title: Split Computing for Complex Object Detectors: Challenges and Preliminary
Results
- Authors: Yoshitomo Matsubara, Marco Levorato
- Abstract summary: We discuss the challenges in developing split computing methods for powerful R-CNN object detectors trained on a large dataset, COCO 2017.
We show that naive split computing methods would not reduce inference time.
This is the first study to inject small bottlenecks into such object detectors and unveil the potential of a split computing approach.
- Score: 8.291242737118482
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Following the trends of mobile and edge computing for DNN models, an
intermediate option, split computing, has been attracting attention from the
research community. Previous studies empirically showed that while mobile and
edge computing are often the best options in terms of total inference
time, there are some scenarios where split computing methods can achieve
shorter inference time. All the proposed split computing approaches, however,
focus on image classification tasks, and most are assessed with small datasets
that are far from practical scenarios. In this paper, we discuss the
challenges in developing split computing methods for powerful R-CNN object
detectors trained on a large dataset, COCO 2017. We extensively analyze the
object detectors in terms of layer-wise tensor size and model size, and show
that naive split computing methods would not reduce inference time. To the best
of our knowledge, this is the first study to inject small bottlenecks into such
object detectors and unveil the potential of a split computing approach. The
source code and trained models' weights used in this study are available at
https://github.com/yoshitomo-matsubara/hnd-ghnd-object-detectors .
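The layer-wise tensor-size analysis described in the abstract can be reproduced in spirit with a few lines of PyTorch. Below is a minimal sketch, not taken from the linked repository: it hooks the stages of a torchvision ResNet-50 (the backbone family used by the R-CNN detectors studied here) and compares each candidate split point's output size with the input size. The model choice, input resolution, and hooked layer names are illustrative assumptions.
```python
# Minimal sketch (illustrative, not from the authors' repository): measure the
# output tensor size at each coarse stage of a ResNet-50 backbone and compare
# it with the input image tensor, to see whether a naive split point would
# send more data to the edge server than just transmitting the input.
import torch
from torchvision.models import resnet50

model = resnet50().eval()
sizes = {}

def make_hook(name):
    def hook(module, inputs, output):
        sizes[name] = output.numel() * output.element_size()  # bytes
    return hook

# Candidate split points: the stem and the four residual stages.
for name in ["conv1", "layer1", "layer2", "layer3", "layer4"]:
    getattr(model, name).register_forward_hook(make_hook(name))

x = torch.randn(1, 3, 800, 800)  # COCO-like input resolution (assumption)
with torch.no_grad():
    model(x)

input_bytes = x.numel() * x.element_size()
print(f"input : {input_bytes / 1e6:6.2f} MB")
for name, nbytes in sizes.items():
    print(f"{name:6s}: {nbytes / 1e6:6.2f} MB ({nbytes / input_bytes:.2f}x input)")
```
In this sketch the early-stage outputs come out several times larger than the input image itself, which illustrates why a naive split cannot reduce the amount of data transferred and why the paper instead injects small bottlenecks into the detector.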
Related papers
- Debiased Novel Category Discovering and Localization [40.02326438622898]
We focus on the challenging problem of Novel Class Discovery and Localization (NCDL)
We propose a Debiased Region Mining (DRM) approach that combines a class-agnostic Region Proposal Network (RPN) and a class-aware RPN.
We conduct extensive experiments on the NCDL benchmark, and the results demonstrate that the proposed DRM approach significantly outperforms previous methods.
arXiv Detail & Related papers (2024-02-29T03:09:16Z)
- Learning-Augmented K-Means Clustering Using Dimensional Reduction [1.7243216387069678]
We propose a solution to reduce the dimensionality of the dataset using Principal Component Analysis (PCA)
PCA is well-established in the literature and has become one of the most useful tools for data modeling, compression, and visualization.
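As a hedged illustration of the recipe this entry describes (PCA for dimensionality reduction before k-means), here is a minimal scikit-learn sketch on synthetic data; the dataset, component count, and cluster count are arbitrary assumptions, not the paper's setup.
```python
# Minimal sketch (assumptions throughout): project the data onto a few
# principal components with PCA, then run k-means in the reduced space.
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.decomposition import PCA

X, _ = make_blobs(n_samples=2000, n_features=50, centers=5, random_state=0)

X_reduced = PCA(n_components=10, random_state=0).fit_transform(X)
labels = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(X_reduced)
print(labels[:20])
```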
arXiv Detail & Related papers (2024-01-06T12:02:33Z)
- A Weighted K-Center Algorithm for Data Subset Selection [70.49696246526199]
Subset selection is a fundamental problem that can play a key role in identifying smaller portions of the training data.
We develop a novel factor 3-approximation algorithm to compute subsets based on the weighted sum of both k-center and uncertainty sampling objective functions.
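For context on the k-center part of that objective, the following is a hedged sketch of the classic greedy farthest-point heuristic for plain k-center subset selection; it is not the paper's weighted factor-3 approximation and ignores the uncertainty-sampling term.
```python
# Greedy farthest-point heuristic for the plain k-center objective
# (illustrative only; the paper combines k-center with uncertainty sampling).
import numpy as np

def greedy_k_center(X, k, seed=0):
    rng = np.random.default_rng(seed)
    centers = [int(rng.integers(len(X)))]              # random first center
    dists = np.linalg.norm(X - X[centers[0]], axis=1)  # distance to nearest center
    for _ in range(k - 1):
        nxt = int(np.argmax(dists))                    # farthest remaining point
        centers.append(nxt)
        dists = np.minimum(dists, np.linalg.norm(X - X[nxt], axis=1))
    return centers

X = np.random.default_rng(1).normal(size=(1000, 16))
print(greedy_k_center(X, k=10))
```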
arXiv Detail & Related papers (2023-12-17T04:41:07Z)
- Large-Margin Representation Learning for Texture Classification [67.94823375350433]
This paper presents a novel approach combining convolutional layers (CLs) and large-margin metric learning for training supervised models on small datasets for texture classification.
The experimental results on texture and histopathologic image datasets have shown that the proposed approach achieves competitive accuracy with lower computational cost and faster convergence when compared to equivalent CNNs.
arXiv Detail & Related papers (2022-06-17T04:07:45Z)
- A Brain-Inspired Low-Dimensional Computing Classifier for Inference on Tiny Devices [17.976792694929063]
We propose a low-dimensional computing (LDC) alternative to hyperdimensional computing (HDC)
We map our LDC classifier into a neural equivalent network and optimize our model using a principled training approach.
Our LDC classifier offers an overwhelming advantage over the existing brain-inspired HDC models and is particularly suitable for inference on tiny devices.
arXiv Detail & Related papers (2022-03-09T17:20:12Z)
- DANCE: DAta-Network Co-optimization for Efficient Segmentation Model Training and Inference [85.02494022662505]
DANCE is an automated simultaneous data-network co-optimization for efficient segmentation model training and inference.
It integrates automated data slimming which adaptively downsamples/drops input images and controls their corresponding contribution to the training loss guided by the images' spatial complexity.
Experiments and ablation studies demonstrate that DANCE can achieve "all-win" towards efficient segmentation.
arXiv Detail & Related papers (2021-07-16T04:58:58Z)
- Revisiting Contrastive Methods for Unsupervised Learning of Visual Representations [78.12377360145078]
Contrastive self-supervised learning has outperformed supervised pretraining on many downstream tasks like segmentation and object detection.
In this paper, we first study how biases in the dataset affect existing methods.
We show that current contrastive approaches work surprisingly well across: (i) object- versus scene-centric, (ii) uniform versus long-tailed and (iii) general versus domain-specific datasets.
arXiv Detail & Related papers (2021-06-10T17:59:13Z)
- Hyperdimensional Computing for Efficient Distributed Classification with Randomized Neural Networks [5.942847925681103]
We study distributed classification, which can be employed in situations where data cannot be stored at a central location nor shared.
We propose a more efficient solution for distributed classification by making use of a lossy compression approach applied when sharing the local classifiers with other agents.
arXiv Detail & Related papers (2021-06-02T01:33:56Z)
- Adversarial Examples for $k$-Nearest Neighbor Classifiers Based on Higher-Order Voronoi Diagrams [69.4411417775822]
Adversarial examples are a widely studied phenomenon in machine learning models.
We propose an algorithm for evaluating the adversarial robustness of $k$-nearest neighbor classification.
arXiv Detail & Related papers (2020-11-19T08:49:10Z)
- One-Shot Object Detection without Fine-Tuning [62.39210447209698]
We introduce a two-stage model consisting of a first stage Matching-FCOS network and a second stage Structure-Aware Relation Module.
We also propose novel training strategies that effectively improve detection performance.
Our method exceeds the state-of-the-art one-shot performance consistently on multiple datasets.
arXiv Detail & Related papers (2020-05-08T01:59:23Z)
- Selective Convolutional Network: An Efficient Object Detector with Ignoring Background [28.591619763438054]
We introduce an efficient object detector called Selective Convolutional Network (SCN), which selectively calculates only on the locations that contain meaningful and conducive information.
To identify these locations in advance, we design an elaborate structure with negligible overheads to guide the network where to look next.
arXiv Detail & Related papers (2020-02-04T10:07:01Z)