SceneChecker: Boosting Scenario Verification using Symmetry Abstractions
- URL: http://arxiv.org/abs/2011.10713v2
- Date: Wed, 3 Mar 2021 01:39:28 GMT
- Title: SceneChecker: Boosting Scenario Verification using Symmetry Abstractions
- Authors: Hussein Sibai and Yangge Li and Sayan Mitra
- Abstract summary: SceneChecker is a tool for verifying scenarios involving vehicles executing complex plans in large cluttered workspaces.
Compared to the leading tools DryVR and Flow*, SceneChecker shows a 20x speedup in verification time, even while using those very tools as reachability subroutines.
- Score: 3.8995911009078816
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present SceneChecker, a tool for verifying scenarios involving vehicles
executing complex plans in large cluttered workspaces. SceneChecker converts
the scenario verification problem to a standard hybrid system verification
problem, and solves it effectively by exploiting structural properties in the
plan and the vehicle dynamics. SceneChecker uses symmetry abstractions, a novel
refinement algorithm, and importantly, is built to boost the performance of any
existing reachability analysis tool as a plug-in subroutine. We evaluated
SceneChecker on several scenarios involving ground and aerial vehicles with
nonlinear dynamics and neural network controllers, employing different kinds of
symmetries, using different reachability subroutines, and following plans with
hundreds of way-points in complex workspaces. Compared to two leading tools,
DryVR and Flow*, SceneChecker shows 20x speedup in verification time, even
while using those very tools as reachability subroutines.
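To make the symmetry-abstraction idea concrete, the sketch below shows how a reachset computed for one plan segment can be reused, via a translation symmetry, for every other segment with the same relative displacement, so the underlying reachability subroutine (e.g., a DryVR or Flow* wrapper) is invoked once per abstract mode rather than once per waypoint. The names `SymmetryCache` and `segment_key` and the subroutine interface are assumptions for illustration, not SceneChecker's actual API; a real implementation would also handle rotational symmetries and the refinement step mentioned above.

```python
import numpy as np

# Minimal sketch of a translation-symmetry abstraction (hypothetical names,
# not SceneChecker's actual API). Plan segments that are translations of one
# another share a single cached reachset, so the expensive reachability
# subroutine runs once per abstract mode instead of once per segment.

def segment_key(wp_from, wp_to):
    """Abstract mode: segments with the same relative displacement share a key."""
    delta = np.asarray(wp_to, dtype=float) - np.asarray(wp_from, dtype=float)
    return tuple(np.round(delta, 3))

class SymmetryCache:
    def __init__(self, reach_subroutine):
        # Assumed interface: reach_subroutine(init_box, goal) -> list of boxes.
        self.reach = reach_subroutine
        self.cache = {}  # abstract mode -> reachset in the segment-local frame

    def reachset(self, wp_from, wp_to, init_box):
        key = segment_key(wp_from, wp_to)
        origin = np.asarray(wp_from, dtype=float)
        if key not in self.cache:
            # Compute once, in a frame anchored at the segment's start waypoint.
            self.cache[key] = self.reach(np.asarray(init_box) - origin, np.asarray(key))
        # Translate the cached reachset back into the concrete segment's frame.
        return [np.asarray(box) + origin for box in self.cache[key]]
```

In plans with hundreds of way-points, many segments share the same displacement, so this kind of caching calls the reachability subroutine far fewer times than the number of segments, which is the reuse the reported speedups rely on.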
Related papers
- DeTra: A Unified Model for Object Detection and Trajectory Forecasting [68.85128937305697]
Our approach formulates the union of the two tasks as a trajectory refinement problem.
To tackle this unified task, we design a refinement transformer that infers the presence, pose, and multi-modal future behaviors of objects.
In our experiments, we observe that our model outperforms the state-of-the-art on the Argoverse 2 Sensor and Open datasets.
arXiv Detail & Related papers (2024-06-06T18:12:04Z) - Track Anything Rapter (TAR) [0.0]
Track Anything Rapter (TAR) is designed to detect, segment, and track objects of interest based on user-provided multimodal queries.
TAR utilizes cutting-edge pre-trained models like DINO, CLIP, and SAM to estimate the relative pose of the queried object.
We showcase how the integration of these foundational models with a custom high-level control algorithm results in a highly stable and precise tracking system.
arXiv Detail & Related papers (2024-05-19T19:51:41Z) - Identification of Fine-grained Systematic Errors via Controlled Scene Generation [41.398080398462994]
We propose a pipeline for generating realistic synthetic scenes with fine-grained control.
Our approach, BEV2EGO, allows for a realistic generation of the complete scene with road-contingent control.
In addition, we propose a benchmark for controlled scene generation to select the most appropriate generative outpainting model for BEV2EGO.
arXiv Detail & Related papers (2024-04-10T14:35:22Z) - Graph Convolutional Networks for Complex Traffic Scenario Classification [0.7919810878571297]
A scenario-based testing approach can reduce the time required to obtain statistically significant evidence of the safety of Automated Driving Systems.
Most methods on scenario classification do not work for complex scenarios with diverse environments.
We propose a method for complex traffic scenario classification that is able to model the interaction of a vehicle with the environment.
arXiv Detail & Related papers (2023-10-26T20:51:24Z) - MOST: Multiple Object localization with Self-supervised Transformers for
object discovery [97.47075050779085]
We present Multiple Object localization with Self-supervised Transformers (MOST).
MOST uses features of transformers trained using self-supervised learning to localize multiple objects in real world images.
We show MOST can be used for self-supervised pre-training of object detectors, and yields consistent improvements on fully, semi-supervised object detection and unsupervised region proposal generation.
arXiv Detail & Related papers (2023-04-11T17:57:27Z) - SPTS v2: Single-Point Scene Text Spotting [146.98118405786445]
The new framework, SPTS v2, allows high-performing text-spotting models to be trained using a single-point annotation.
Tests show SPTS v2 can outperform previous state-of-the-art single-point text spotters with fewer parameters.
Experiments suggest a potential preference for single-point representation in scene text spotting.
arXiv Detail & Related papers (2023-01-04T14:20:14Z) - An Application of Scenario Exploration to Find New Scenarios for the
Development and Testing of Automated Driving Systems in Urban Scenarios [2.480533141352916]
This work aims to find relevant, interesting, or critical parameter sets within logical scenarios by utilizing Bayesian optimization and Gaussian processes.
A list of ideas arising from this work that should be investigated further is presented.
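As a hedged illustration of the approach above (not the paper's implementation), the following sketch searches a logical scenario's parameter space for critical parameter sets using a Gaussian-process surrogate and an upper-confidence-bound acquisition rule; the `simulate_criticality` metric and all parameter choices are assumptions for illustration.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def explore(simulate_criticality, bounds, n_init=5, n_iter=20, seed=0):
    """Search scenario parameters for critical sets via GP-based optimization.

    simulate_criticality: callable mapping a parameter vector to a scalar
        criticality score (e.g., negated time-to-collision; assumed metric).
    bounds: list of (low, high) ranges, one per scenario parameter.
    """
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, dtype=float).T
    # Initial random scenario parameter samples and their simulated criticality.
    X = rng.uniform(lo, hi, size=(n_init, len(bounds)))
    y = np.array([simulate_criticality(x) for x in X])
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
    for _ in range(n_iter):
        gp.fit(X, y)
        # Score a random candidate pool with an upper-confidence-bound rule.
        cand = rng.uniform(lo, hi, size=(1000, len(bounds)))
        mu, sigma = gp.predict(cand, return_std=True)
        x_next = cand[np.argmax(mu + 2.0 * sigma)]
        y_next = simulate_criticality(x_next)
        X = np.vstack([X, x_next])
        y = np.append(y, y_next)
    return X, y
```

Each iteration corresponds to one concrete scenario simulation, and the parameter sets with the highest observed criticality are the candidates for further development and testing.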
arXiv Detail & Related papers (2022-05-17T09:47:32Z) - Robust Object Detection via Instance-Level Temporal Cycle Confusion [89.1027433760578]
We study the effectiveness of auxiliary self-supervised tasks to improve the out-of-distribution generalization of object detectors.
Inspired by the principle of maximum entropy, we introduce a novel self-supervised task, instance-level temporal cycle confusion (CycConf).
For each object, the task is to find the most different object proposals in the adjacent frame in a video and then cycle back to itself for self-supervision.
arXiv Detail & Related papers (2021-04-16T21:35:08Z) - RGB-D Railway Platform Monitoring and Scene Understanding for Enhanced
Passenger Safety [3.4298729855744026]
This paper proposes a flexible analysis scheme to detect and track humans on a ground plane.
We consider multiple combinations within a set of RGB- and depth-based detection and tracking modalities.
Results indicate that the combined use of depth-based spatial information and learned representations yields substantially enhanced detection and tracking accuracies.
arXiv Detail & Related papers (2021-02-23T14:44:34Z) - Weakly Supervised Learning of Rigid 3D Scene Flow [81.37165332656612]
We propose a data-driven scene flow estimation algorithm exploiting the observation that many 3D scenes can be explained by a collection of agents moving as rigid bodies.
We showcase the effectiveness and generalization capacity of our method on four different autonomous driving datasets.
arXiv Detail & Related papers (2021-02-17T18:58:02Z) - Self-supervised Human Detection and Segmentation via Multi-view
Consensus [116.92405645348185]
We propose a multi-camera framework in which geometric constraints are embedded in the form of multi-view consistency during training.
We show that our approach outperforms state-of-the-art self-supervised person detection and segmentation techniques on images that visually depart from those of standard benchmarks.
arXiv Detail & Related papers (2020-12-09T15:47:21Z)