ODIN: Automated Drift Detection and Recovery in Video Analytics
- URL: http://arxiv.org/abs/2009.05440v1
- Date: Wed, 9 Sep 2020 12:13:40 GMT
- Title: ODIN: Automated Drift Detection and Recovery in Video Analytics
- Authors: Abhijit Suprem, Joy Arulraj, Calton Pu, Joao Ferreira
- Abstract summary: ODIN is a visual data analytics system that automatically detects and recovers from drift.
We present an unsupervised algorithm for detecting drift by comparing the distribution of the given data against that of previously seen data.
Specialized models outperform their non-specialized counterparts in accuracy, performance, and memory footprint.
- Score: 7.292916882993351
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent advances in computer vision have led to a resurgence of interest in
visual data analytics. Researchers are developing systems for effectively and
efficiently analyzing visual data at scale. A significant challenge that these
systems encounter lies in the drift in real-world visual data. For instance, a
model for self-driving vehicles that is not trained on images containing snow
does not work well when it encounters them in practice. This drift phenomenon
limits the accuracy of models employed for visual data analytics. In this
paper, we present a visual data analytics system, called ODIN, that
automatically detects and recovers from drift. ODIN uses adversarial
autoencoders to learn the distribution of high-dimensional images. We present
an unsupervised algorithm for detecting drift by comparing the distribution of
the given data against that of previously seen data. When ODIN detects drift,
it invokes a drift recovery algorithm to deploy specialized models tailored
towards the novel data points. These specialized models outperform their
non-specialized counterparts in accuracy, performance, and memory footprint.
Lastly, we present a model selection algorithm for picking an ensemble of
best-fit specialized models to process a given input. We evaluate the efficacy
and efficiency of ODIN on high-resolution dashboard camera videos captured
under diverse environments from the Berkeley DeepDrive dataset. We demonstrate
that ODIN's models deliver 6x higher throughput, 2x higher accuracy, and 6x
smaller memory footprint compared to a baseline system without automated drift
detection and recovery.
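To make the drift-detection idea concrete, here is a minimal sketch, not the authors' implementation: a fixed random projection stands in for ODIN's adversarial-autoencoder encoder, and a batch of incoming frames is flagged as drifted when its latent statistics move away from a reference batch. The `LatentDriftDetector` class, the scaled mean-shift statistic, and the 0.5 threshold are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

class LatentDriftDetector:
    """Illustrative stand-in for ODIN's drift detector: a fixed random projection
    plays the role of the adversarial autoencoder's encoder, and drift is declared
    when the latent statistics of an incoming batch depart from a reference batch."""

    def __init__(self, input_dim: int, latent_dim: int = 32, threshold: float = 0.5):
        # Fixed random projection standing in for a trained encoder (assumption).
        self.proj = rng.standard_normal((input_dim, latent_dim)) / np.sqrt(input_dim)
        self.threshold = threshold
        self.reference = None

    def encode(self, frames: np.ndarray) -> np.ndarray:
        return frames.reshape(len(frames), -1) @ self.proj

    def fit_reference(self, frames: np.ndarray) -> None:
        # Latents of previously seen data define the reference distribution.
        self.reference = self.encode(frames)

    def score(self, frames: np.ndarray) -> float:
        # Mean per-dimension shift between batch means, scaled by the pooled std.
        z = self.encode(frames)
        ref = self.reference
        pooled_std = np.sqrt(0.5 * (ref.var(axis=0) + z.var(axis=0))) + 1e-8
        return float(np.mean(np.abs(ref.mean(axis=0) - z.mean(axis=0)) / pooled_std))

    def is_drifted(self, frames: np.ndarray) -> bool:
        return self.score(frames) > self.threshold

# Toy usage: darker "daytime" frames as reference, brighter "snowy" frames as drift.
day = rng.normal(0.3, 0.1, size=(64, 8, 8, 3))
snow = rng.normal(0.8, 0.1, size=(64, 8, 8, 3))
detector = LatentDriftDetector(input_dim=8 * 8 * 3)
detector.fit_reference(day)
print(detector.is_drifted(day), detector.is_drifted(snow))  # expected: False True
```

Once drift is detected, ODIN deploys specialized models for the novel data and selects an ensemble of best-fit specialized models per input; that part is omitted here.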
Related papers
- Comparing Optical Flow and Deep Learning to Enable Computationally Efficient Traffic Event Detection with Space-Filling Curves [0.6322312717516407]
We compare Optical Flow (OF) and Deep Learning (DL) to feed computationally efficient event detection via space-filling curves on video data from a forward-facing, in-vehicle camera.
Our results show that the OF approach excels in specificity and reduces false positives, while the DL approach demonstrates superior sensitivity.
arXiv Detail & Related papers (2024-07-15T13:44:52Z)
- AIDE: An Automatic Data Engine for Object Detection in Autonomous Driving [68.73885845181242]
We propose an Automatic Data Engine (AIDE) that automatically identifies issues, efficiently curates data, improves the model through auto-labeling, and verifies the model through generation of diverse scenarios.
We further establish a benchmark for open-world detection on AV datasets to comprehensively evaluate various learning paradigms, demonstrating our method's superior performance at a reduced cost.
arXiv Detail & Related papers (2024-03-26T04:27:56Z)
- Unsupervised Domain Adaptation for Self-Driving from Past Traversal Features [69.47588461101925]
We propose a method to adapt 3D object detectors to new driving environments.
Our approach enhances LiDAR-based detection models using spatially quantized historical features.
Experiments on real-world datasets demonstrate significant improvements.
arXiv Detail & Related papers (2023-09-21T15:00:31Z)
- DiffusionEngine: Diffusion Model is Scalable Data Engine for Object Detection [41.436817746749384]
Diffusion Model is a scalable data engine for object detection.
DiffusionEngine (DE) provides high-quality detection-oriented training pairs in a single stage.
arXiv Detail & Related papers (2023-09-07T17:55:01Z)
- Uncovering Drift in Textual Data: An Unsupervised Method for Detecting and Mitigating Drift in Machine Learning Models [9.035254826664273]
Drift in machine learning refers to the phenomenon where the statistical properties of the data, or of the context in which the model operates, change over time, leading to a decrease in its performance.
Our proposed unsupervised drift detection method follows a two-step process. The first step encodes a sample of production data as the target distribution and the model training data as the reference distribution.
Our method also identifies the subset of production data that is the root cause of the drift.
The models retrained using these identified high drift samples show improved performance on online customer experience quality metrics.
arXiv Detail & Related papers (2023-09-07T16:45:42Z)
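A rough sketch of the two-step idea described in the entry above, an assumption about the workflow rather than the paper's code: embeddings of the training data form the reference distribution, embeddings of a production sample form the target, and production points in the reference distribution's tail are reported as the high-drift subset that would be used for retraining. The function name `drift_report`, the diagonal-Gaussian summary, and the 0.95 quantile cutoff are illustrative choices.

```python
import numpy as np

def drift_report(reference_emb: np.ndarray, production_emb: np.ndarray,
                 quantile: float = 0.95):
    """Compare a production sample against the training (reference) embeddings."""
    # Step 1: summarize the reference distribution (diagonal Gaussian assumption).
    mu = reference_emb.mean(axis=0)
    var = reference_emb.var(axis=0) + 1e-8

    # Normalized distance of every point to the reference distribution.
    ref_dist = np.sqrt((((reference_emb - mu) ** 2) / var).sum(axis=1))
    prod_dist = np.sqrt((((production_emb - mu) ** 2) / var).sum(axis=1))

    # Step 2: a single drift statistic plus the root-cause subset of production
    # samples that lie beyond the reference distribution's tail.
    drift_stat = float(prod_dist.mean() / ref_dist.mean())
    cutoff = np.quantile(ref_dist, quantile)
    high_drift_idx = np.where(prod_dist > cutoff)[0]
    return drift_stat, high_drift_idx

# Toy usage with random embeddings standing in for text-encoder outputs.
rng = np.random.default_rng(1)
train_emb = rng.normal(0.0, 1.0, size=(500, 64))
prod_emb = np.vstack([rng.normal(0.0, 1.0, size=(80, 64)),
                      rng.normal(2.0, 1.0, size=(20, 64))])  # 20 drifted samples
stat, idx = drift_report(train_emb, prod_emb)
print(round(stat, 2), len(idx))  # drifted samples mostly land in idx
```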
- Data Models for Dataset Drift Controls in Machine Learning With Optical Images [8.818468649062932]
A primary failure mode are performance drops due to differences between the training and deployment data.
Existing approaches do not account for explicit models of the primary object of interest: the data.
We demonstrate how such data models can be constructed for image data and used to control downstream machine learning model performance related to dataset drift.
arXiv Detail & Related papers (2022-11-04T16:50:10Z)
- Benchmarking the Robustness of LiDAR-Camera Fusion for 3D Object Detection [58.81316192862618]
Two critical sensors for 3D perception in autonomous driving are the camera and the LiDAR.
Fusing these two modalities can significantly boost the performance of 3D perception models.
We benchmark the state-of-the-art fusion methods for the first time.
arXiv Detail & Related papers (2022-05-30T09:35:37Z)
- Improving Variational Autoencoder based Out-of-Distribution Detection for Embedded Real-time Applications [2.9327503320877457]
Out-of-distribution (OoD) detection is an emerging approach for identifying, in real time, inputs that fall outside the training distribution.
In this paper, we show how we can robustly detect hazardous motion around autonomous driving agents.
Our methods significantly improve the detection of OoD factors in unique driving scenarios, performing 42% better than state-of-the-art approaches.
Our model also generalizes near-perfectly, performing 97% better than the state of the art across the real-world and simulated driving datasets evaluated.
arXiv Detail & Related papers (2021-07-25T07:52:53Z)
- One Million Scenes for Autonomous Driving: ONCE Dataset [91.94189514073354]
We introduce the ONCE dataset for 3D object detection in the autonomous driving scenario.
The data is selected from 144 driving hours, which is 20x longer than the largest 3D autonomous driving dataset available.
We reproduce and evaluate a variety of self-supervised and semi-supervised methods on the ONCE dataset.
arXiv Detail & Related papers (2021-06-21T12:28:08Z)
- PerMO: Perceiving More at Once from a Single Image for Autonomous Driving [76.35684439949094]
We present a novel approach to detect, segment, and reconstruct complete textured 3D models of vehicles from a single image.
Our approach combines the strengths of deep learning and the elegance of traditional techniques.
We have integrated these algorithms with an autonomous driving system.
arXiv Detail & Related papers (2020-07-16T05:02:45Z)
- Generalized ODIN: Detecting Out-of-distribution Image without Learning from Out-of-distribution Data [87.61504710345528]
We propose two strategies for freeing a neural network from tuning with OoD data, while improving its OoD detection performance.
We specifically propose a decomposed confidence scoring method as well as a modified input pre-processing method.
Our further analysis on a larger-scale image dataset shows that the two types of distribution shift, specifically semantic shift and non-semantic shift, present a significant difference.
arXiv Detail & Related papers (2020-02-26T04:18:25Z)
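A hedged sketch of the decomposed confidence idea from the entry above, covering only the scoring decomposition (not the input pre-processing): the logit for class i is factored as h_i(x) / g(x), the head is trained with ordinary cross-entropy on in-distribution data, and max_i h_i(x) (or g(x)) serves as the OoD score, so no OoD data is needed for tuning. The specific layers below, a linear numerator and a sigmoid-plus-batch-norm denominator, are assumptions for illustration.

```python
import torch
import torch.nn as nn

class DecomposedConfidenceHead(nn.Module):
    """Classifier head with logits h_i(x) / g(x); low h (or g) hints at OoD input."""

    def __init__(self, feature_dim: int, num_classes: int):
        super().__init__()
        self.h = nn.Linear(feature_dim, num_classes)   # class-specific numerators
        self.g = nn.Sequential(                         # shared denominator in (0, 1)
            nn.Linear(feature_dim, 1),
            nn.BatchNorm1d(1),
            nn.Sigmoid(),
        )

    def forward(self, features: torch.Tensor):
        h = self.h(features)
        g = self.g(features)
        logits = h / g   # trained with standard cross-entropy on in-distribution data
        return logits, h, g

# Test-time OoD scoring on dummy backbone features; no OoD data used for tuning.
head = DecomposedConfidenceHead(feature_dim=128, num_classes=10).eval()
with torch.no_grad():
    feats = torch.randn(4, 128)
    logits, h, g = head(feats)
    ood_score = h.max(dim=1).values  # lower scores suggest out-of-distribution inputs
```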
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.