ReViVD: Exploration and Filtering of Trajectories in an Immersive
Environment using 3D Shapes
- URL: http://arxiv.org/abs/2202.10545v1
- Date: Mon, 21 Feb 2022 21:58:41 GMT
- Title: ReViVD: Exploration and Filtering of Trajectories in an Immersive
Environment using 3D Shapes
- Authors: François Homps, Yohan Beugin, Romain Vuillemot
- Abstract summary: We present ReViVD, a tool for exploring and filtering large trajectory-based datasets using virtual reality.
ReViVD's novelty lies in using simple 3D shapes as queries for users to select and filter groups of trajectories.
We demonstrate the use of ReViVD in different application domains, from GPS position tracking to simulated data.
- Score: 3.308743964406687
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: We present ReViVD, a tool for exploring and filtering large trajectory-based
datasets using virtual reality. ReViVD's novelty lies in using simple 3D shapes
-- such as cuboids, spheres and cylinders -- as queries for users to select and
filter groups of trajectories. Building on this simple paradigm, more complex
queries can be created by combining previously made selection groups through a
system of user-created Boolean operations. We demonstrate the use of ReViVD in
different application domains, from GPS position tracking to simulated data
(e.g., turbulent particle flows and traffic simulation). Our results show the
ease of use and expressiveness of the 3D geometric shapes in a broad range of
exploratory tasks. ReViVD was found to be particularly useful for progressively
refining selections to isolate outlying behaviors. It also acts as a powerful
communication tool for conveying the structure of normally abstract datasets to
an audience.
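To make the paradigm concrete, here is a minimal Python sketch of the selection mechanism the abstract describes: simple 3D volumes act as queries over trajectories, and the resulting selection groups are combined through Boolean set operations. This is an illustration only; the names (SphereQuery, CuboidQuery, select) are hypothetical and do not correspond to ReViVD's actual code or API.

```python
import numpy as np

# Hypothetical illustration of shape-based trajectory selection.
# These names are not ReViVD's actual API.

class SphereQuery:
    """Selection volume: a sphere given by its center and radius."""
    def __init__(self, center, radius):
        self.center = np.asarray(center, dtype=float)
        self.radius = float(radius)

    def contains(self, points):
        """Boolean mask over an (n, 3) point array: inside the sphere?"""
        return np.linalg.norm(points - self.center, axis=1) <= self.radius

class CuboidQuery:
    """Selection volume: an axis-aligned cuboid given by min/max corners."""
    def __init__(self, lo, hi):
        self.lo = np.asarray(lo, dtype=float)
        self.hi = np.asarray(hi, dtype=float)

    def contains(self, points):
        return np.all((points >= self.lo) & (points <= self.hi), axis=1)

def select(trajectories, shape):
    """Select every trajectory with at least one sample point inside the shape."""
    return {tid for tid, pts in trajectories.items() if shape.contains(pts).any()}

# Toy data: 100 trajectories of 50 points each in 3D space.
rng = np.random.default_rng(0)
trajectories = {i: rng.uniform(-10, 10, size=(50, 3)) for i in range(100)}

group_a = select(trajectories, SphereQuery(center=(0, 0, 0), radius=3))
group_b = select(trajectories, CuboidQuery(lo=(-10, -10, -10), hi=(0, 0, 0)))

# Boolean combination of previously made selection groups:
both    = group_a & group_b   # trajectories passing through both volumes
either  = group_a | group_b   # trajectories passing through either volume
a_not_b = group_a - group_b   # progressive refinement, e.g. to isolate outliers
```

In the tool itself the volumes are placed and resized directly in VR; the set algebra on selection groups is what enables the progressive refinement of selections that the abstract mentions.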
Related papers
- Volumetric Environment Representation for Vision-Language Navigation [66.04379819772764]
Vision-language navigation (VLN) requires an agent to navigate through a 3D environment based on visual observations and natural language instructions.
We introduce a Volumetric Environment Representation (VER), which voxelizes the physical world into structured 3D cells.
VER predicts 3D occupancy, 3D room layout, and 3D bounding boxes jointly (a generic occupancy-grid sketch follows this list).
arXiv Detail & Related papers (2024-03-21T06:14:46Z)
- Collection Space Navigator: An Interactive Visualization Interface for Multidimensional Datasets [0.0]
Collection Space Navigator (CSN) is a browser-based visualization tool to explore, research, and curate large collections of visual digital artifacts.
CSN provides a customizable interface that combines two-dimensional projections with a set of multidimensional filters.
Users can reconfigure the interface to fit their own data and research needs, including projections and filter controls.
arXiv Detail & Related papers (2023-05-11T14:03:26Z)
- Exploring Point-BEV Fusion for 3D Point Cloud Object Tracking with Transformer [62.68401838976208]
3D object tracking aims to predict the location and orientation of an object in consecutive frames given an object template.
Motivated by the success of transformers, we propose Point Tracking TRansformer (PTTR), which efficiently predicts high-quality 3D tracking results.
arXiv Detail & Related papers (2022-08-10T08:36:46Z)
- SRCN3D: Sparse R-CNN 3D for Compact Convolutional Multi-View 3D Object Detection and Tracking [12.285423418301683]
This paper proposes Sparse R-CNN 3D (SRCN3D), a novel two-stage fully-sparse detector that incorporates sparse queries, sparse attention with box-wise sampling, and sparse prediction.
Experiments on nuScenes dataset demonstrate that SRCN3D achieves competitive performance in both 3D object detection and multi-object tracking tasks.
arXiv Detail & Related papers (2022-06-29T07:58:39Z)
- RBGNet: Ray-based Grouping for 3D Object Detection [104.98776095895641]
We propose the RBGNet framework, a voting-based 3D detector for accurate 3D object detection from point clouds.
We propose a ray-based feature grouping module, which aggregates the point-wise features on object surfaces using a group of determined rays.
Our model achieves state-of-the-art 3D detection performance on ScanNet V2 and SUN RGB-D with remarkable performance gains.
arXiv Detail & Related papers (2022-04-05T14:42:57Z)
- The Devil is in the Task: Exploiting Reciprocal Appearance-Localization Features for Monocular 3D Object Detection [62.1185839286255]
Low-cost monocular 3D object detection plays a fundamental role in autonomous driving.
We introduce a Dynamic Feature Reflecting Network, named DFR-Net.
We rank 1st among all the monocular 3D object detectors in the KITTI test set.
arXiv Detail & Related papers (2021-12-28T07:31:18Z)
- Cycle and Semantic Consistent Adversarial Domain Adaptation for Reducing Simulation-to-Real Domain Shift in LiDAR Bird's Eye View [110.83289076967895]
We present a BEV domain adaptation method based on CycleGAN that uses prior semantic classification in order to preserve the information of small objects of interest during the domain adaptation process.
The quality of the generated BEVs has been evaluated using a state-of-the-art 3D object detection framework on the KITTI 3D Object Detection Benchmark.
arXiv Detail & Related papers (2021-04-22T12:47:37Z)
- PREPRINT: Comparison of deep learning and hand crafted features for mining simulation data [7.214140640112874]
This paper addresses the task of extracting meaningful results in an automated manner from high dimensional data sets.
We propose deep learning methods which are capable of processing such data and which can be trained to solve relevant tasks on simulation data.
We compile a large dataset of 2D simulations of the flow field around airfoils, containing 16,000 flow fields, on which we test and compare the approaches.
arXiv Detail & Related papers (2021-03-11T09:28:00Z)
- It's All Around You: Range-Guided Cylindrical Network for 3D Object Detection [4.518012967046983]
This work presents a novel approach for analyzing 3D data produced by 360-degree depth scanners.
We introduce a novel notion of range-guided convolutions, adapting the receptive field by distance from the ego vehicle and the object's scale.
Our network demonstrates powerful results on the nuScenes challenge, comparable to current state-of-the-art architectures.
arXiv Detail & Related papers (2020-12-05T21:02:18Z)
- Transferable Active Grasping and Real Embodied Dataset [48.887567134129306]
We show how to search for feasible viewpoints for grasping using hand-mounted RGB-D cameras.
A practical three-stage transferable active grasping pipeline is developed that is adaptive to unseen cluttered scenes.
In our pipeline, we propose a novel mask-guided reward to overcome the sparse reward issue in grasping and ensure category-irrelevant behavior.
arXiv Detail & Related papers (2020-04-28T08:15:35Z)
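One recurring structure in the list above, explicitly in the Volumetric Environment Representation entry, is the voxel occupancy grid: a dense 3D array marking which cells of space are occupied. The sketch below shows the generic construction from a point set. It is an assumption-laden illustration (the function voxelize and its parameters are hypothetical); VER itself predicts occupancy from visual observations rather than computing it from given points.

```python
import numpy as np

def voxelize(points, origin, voxel_size, grid_shape):
    """Generic sketch, not the VER model: cell = 1 if any point falls in it."""
    occ = np.zeros(grid_shape, dtype=np.uint8)
    # Map each point to integer cell indices relative to the grid origin.
    idx = np.floor((points - origin) / voxel_size).astype(int)
    # Discard points that land outside the grid bounds.
    valid = np.all((idx >= 0) & (idx < np.array(grid_shape)), axis=1)
    ix, iy, iz = idx[valid].T
    occ[ix, iy, iz] = 1
    return occ

# Example: a 32^3 grid with 0.5 m cells covering a 16 m cube.
pts = np.random.default_rng(1).uniform(0.0, 16.0, size=(1000, 3))
grid = voxelize(pts, origin=np.zeros(3), voxel_size=0.5, grid_shape=(32, 32, 32))
print(int(grid.sum()), "of", grid.size, "cells occupied")
```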
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.