GO-Finder: A Registration-Free Wearable System for Assisting Users in
Finding Lost Objects via Hand-Held Object Discovery
- URL: http://arxiv.org/abs/2101.07314v2
- Date: Fri, 12 Feb 2021 11:16:44 GMT
- Title: GO-Finder: A Registration-Free Wearable System for Assisting Users in
Finding Lost Objects via Hand-Held Object Discovery
- Authors: Takuma Yagi, Takumi Nishiyasu, Kunimasa Kawasaki, Moe Matsuki, Yoichi
Sato
- Abstract summary: GO-Finder is a registration-free, wearable-camera-based system for assisting people in finding objects.
GO-Finder automatically detects and groups hand-held objects to form a visual timeline of the objects.
- Score: 23.33413589457104
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: People spend an enormous amount of time and effort looking for lost objects.
To help remind people of the location of lost objects, various computational
systems that provide information on their locations have been developed.
However, prior systems for assisting people in finding objects require users to
register the target objects in advance. This requirement imposes a cumbersome
burden on the users, and the system cannot help remind them of unexpectedly
lost objects. We propose GO-Finder ("Generic Object Finder"), a
registration-free, wearable-camera-based system for assisting people in finding
an arbitrary number of objects based on two key features: automatic discovery
of hand-held objects and image-based candidate selection. Given a video taken
from a wearable camera, GO-Finder automatically detects and groups hand-held
objects to form a visual timeline of the objects. Users can retrieve the last
appearance of the object by browsing the timeline through a smartphone app. We
conducted a user study to investigate how users benefit from GO-Finder and
confirmed that providing clear visual cues on object locations improved accuracy
and reduced mental load in the object search task.
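To make the described pipeline concrete, the following is a minimal sketch of the detect-group-timeline flow, assuming per-frame hand-held-object detections and appearance embeddings are already available; all class and function names here are hypothetical illustrations, not the authors' implementation.

```python
from dataclasses import dataclass, field
from typing import List

import numpy as np


@dataclass
class Appearance:
    frame_idx: int          # frame in which the hand-held object was observed
    embedding: np.ndarray   # appearance feature used for grouping (e.g., a crop feature)


@dataclass
class ObjectGroup:
    appearances: List[Appearance] = field(default_factory=list)

    def last_seen(self) -> Appearance:
        # The most recent appearance is the visual cue the user browses to find the object.
        return max(self.appearances, key=lambda a: a.frame_idx)


def group_appearances(appearances: List[Appearance],
                      threshold: float = 0.7) -> List[ObjectGroup]:
    """Greedy appearance-based grouping: assign each detection to the first group
    whose representative embedding is cosine-similar enough, else start a new group."""
    groups: List[ObjectGroup] = []
    for app in appearances:
        for group in groups:
            rep = group.appearances[0].embedding
            sim = float(np.dot(rep, app.embedding) /
                        (np.linalg.norm(rep) * np.linalg.norm(app.embedding) + 1e-8))
            if sim >= threshold:
                group.appearances.append(app)
                break
        else:
            groups.append(ObjectGroup(appearances=[app]))
    return groups


def build_timeline(groups: List[ObjectGroup]) -> List[Appearance]:
    """Visual timeline: one entry per discovered object, ordered from most to least
    recently held, so the user can scan backwards for the last appearance."""
    return sorted((g.last_seen() for g in groups),
                  key=lambda a: a.frame_idx, reverse=True)
```

In the actual system, the per-frame detections would come from a hand-held object detector running on the wearable-camera video, and the timeline entries would be shown as thumbnails in the smartphone app for image-based candidate selection.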
Related papers
- Towards Flexible 3D Perception: Object-Centric Occupancy Completion Augments 3D Object Detection [54.78470057491049]
Occupancy has emerged as a promising alternative for 3D scene perception.
We introduce object-centric occupancy as a supplement to object bounding boxes.
We show that our occupancy features significantly enhance the detection results of state-of-the-art 3D object detectors.
arXiv Detail & Related papers (2024-12-06T16:12:38Z)
- ObjectFinder: Open-Vocabulary Assistive System for Interactive Object Search by Blind People [39.57767207961938]
We created ObjectFinder, an open-vocabulary interactive object-search prototype.
It combines object detection with scene description and navigation.
We conducted need-finding interviews to better understand challenges in object search.
arXiv Detail & Related papers (2024-12-04T08:38:45Z)
- Unsupervised Object Localization in the Era of Self-Supervised ViTs: A Survey [33.692534984177364]
Recent works show that it is possible to perform class-agnostic unsupervised object localization by exploiting self-supervised pre-trained features.
We propose here a survey of unsupervised object localization methods that discover objects in images without requiring any manual annotation.
arXiv Detail & Related papers (2023-10-19T16:57:49Z)
- Object-Centric Multiple Object Tracking [124.30650395969126]
This paper proposes a video object-centric model for multiple-object tracking pipelines.
It consists of an index-merge module that adapts the object-centric slots into detection outputs and an object memory module.
Benefiting from object-centric learning, the approach requires only sparse detection labels for object localization and feature binding.
arXiv Detail & Related papers (2023-09-01T03:34:12Z)
- SalienDet: A Saliency-based Feature Enhancement Algorithm for Object Detection for Autonomous Driving [160.57870373052577]
We propose a saliency-based OD algorithm (SalienDet) to detect unknown objects.
Our SalienDet utilizes a saliency-based algorithm to enhance image features for object proposal generation.
We design a dataset relabeling approach that differentiates unknown objects from all objects in the training sample set to achieve Open-World Detection.
arXiv Detail & Related papers (2023-05-11T16:19:44Z)
- TactoFind: A Tactile Only System for Object Retrieval [14.732140705441992]
We study the problem of object retrieval in scenarios where visual sensing is absent.
Unlike vision, where cameras can observe the entire scene, touch sensors are local and only observe parts of the scene that are in contact with the manipulator.
We present a system capable of using sparse tactile feedback from fingertip touch sensors on a dexterous hand to localize, identify and grasp novel objects without any visual feedback.
arXiv Detail & Related papers (2023-03-23T17:50:09Z)
- Towards Open-Set Object Detection and Discovery [38.81806249664884]
We present a new task, namely Open-Set Object Detection and Discovery (OSODD).
We propose a two-stage method that first uses an open-set object detector to predict both known and unknown objects.
Then, we study the representation of predicted objects in an unsupervised manner and discover new categories from the set of unknown objects.
arXiv Detail & Related papers (2022-04-12T08:07:01Z)
- Object Manipulation via Visual Target Localization [64.05939029132394]
Training agents to manipulate objects poses many challenges.
We propose an approach that explores the environment in search for target objects, computes their 3D coordinates once they are located, and then continues to estimate their 3D locations even when the objects are not visible.
Our evaluations show a massive 3x improvement in success rate over a model that has access to the same sensory suite.
arXiv Detail & Related papers (2022-03-15T17:59:01Z)
- Learning to Track Object Position through Occlusion [32.458623495840904]
Occlusion is one of the most significant challenges encountered by object detectors and trackers.
We propose a tracking-by-detection approach that builds upon the success of region based video object detectors.
Our approach achieves superior results on a dataset of furniture assembly videos collected from the internet.
arXiv Detail & Related papers (2021-06-20T22:29:46Z)
- Detecting Invisible People [58.49425715635312]
We re-purpose tracking benchmarks and propose new metrics for the task of detecting invisible objects.
We demonstrate that current detection and tracking systems perform dramatically worse on this task.
We then build dynamic models that explicitly reason in 3D, making use of observations produced by state-of-the-art monocular depth estimation networks.
arXiv Detail & Related papers (2020-12-15T16:54:45Z)
- Reactive Human-to-Robot Handovers of Arbitrary Objects [57.845894608577495]
We present a vision-based system that enables human-to-robot handovers of unknown objects.
Our approach combines closed-loop motion planning with real-time, temporally-consistent grasp generation.
We demonstrate the generalizability, usability, and robustness of our approach on a novel benchmark set of 26 diverse household objects.
arXiv Detail & Related papers (2020-11-17T21:52:22Z)