Machine Learning Based Object Tracking
- URL: http://arxiv.org/abs/2401.07929v1
- Date: Mon, 15 Jan 2024 19:46:05 GMT
- Title: Machine Learning Based Object Tracking
- Authors: Md Rakibul Karim Akanda, Joshua Reynolds, Treylin Jackson, and Milijah Gray
- Abstract summary: The authors set a region of interest around an object using Open Computer Vision (OpenCV).
A tracking algorithm then maintains a lock on the object while two servo motors are operated simultaneously to keep the object centered in the frame.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper performs machine learning based object detection and
tracking. The authors set a region of interest (ROI) around an object using
Open Computer Vision, better known as OpenCV. A tracking algorithm is then
used to maintain tracking on the object while simultaneously operating two
servo motors to keep the object centered in the frame. A detailed procedure
and code are included in the paper.
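The procedure described above (track an object and drive two servo motors to keep it centered) reduces to a simple per-frame control step: compare the tracked bounding box's center to the frame center and nudge the pan and tilt servos proportionally. The following is a minimal sketch of that step only; the frame size, gain, and sign conventions are assumptions for illustration, not the paper's actual code, and the OpenCV tracker call that would supply `bbox` each frame is omitted.

```python
def servo_deltas(bbox, frame_w=640, frame_h=480, gain=0.05):
    """Map a bounding box's offset from the frame center to pan/tilt
    adjustments in degrees.

    bbox is (x, y, w, h), the format OpenCV trackers return. Positive pan
    turns the camera right, positive tilt turns it up (assumed conventions).
    """
    x, y, w, h = bbox
    cx = x + w / 2.0              # object center, x
    cy = y + h / 2.0              # object center, y
    err_x = cx - frame_w / 2.0    # pixels right of frame center
    err_y = cy - frame_h / 2.0    # pixels below frame center
    # Proportional control: nudge each servo toward the object.
    return gain * err_x, -gain * err_y

# Object 100 px right of center: pan right, no tilt change.
pan, tilt = servo_deltas((400, 220, 40, 40))
print(pan, tilt)  # → 5.0 -0.0
```

In a real loop these deltas would be added to the current servo angles (clamped to the servos' range) once per frame, after the tracker updates `bbox`.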
Related papers
- VOVTrack: Exploring the Potentiality in Videos for Open-Vocabulary Object Tracking [61.56592503861093]
The open-vocabulary multi-object tracking (OVMOT) task amalgamates the complexities of open-vocabulary object detection (OVD) and multi-object tracking (MOT).
Existing approaches to OVMOT often merge OVD and MOT methodologies as separate modules, predominantly focusing on the problem through an image-centric lens.
We propose VOVTrack, a novel method that integrates object states relevant to MOT and video-centric training to address this challenge from a video object tracking standpoint.
arXiv Detail & Related papers (2024-10-11T05:01:49Z)
- Leveraging Object Priors for Point Tracking [25.030407197192]
Point tracking is a fundamental problem in computer vision with numerous applications in AR and robotics.
We propose a novel objectness regularization approach that guides points to be aware of object priors.
Our approach achieves state-of-the-art performance on three point tracking benchmarks.
arXiv Detail & Related papers (2024-09-09T16:48:42Z)
- End-to-End 3D Object Detection using LiDAR Point Cloud [0.0]
We present an approach wherein, using a novel encoding of the LiDAR point cloud we infer the location of different classes near the autonomous vehicles.
The output is predictions about the location and orientation of objects in the scene in form of 3D bounding boxes and labels of scene objects.
arXiv Detail & Related papers (2023-12-24T00:52:14Z)
- Follow Anything: Open-set detection, tracking, and following in real-time [89.83421771766682]
We present a robotic system to detect, track, and follow any object in real-time.
Our approach, dubbed "follow anything" (FAn), is an open-vocabulary and multimodal model.
FAn can be deployed on a laptop with a lightweight (6-8 GB) graphics card, achieving a throughput of 6-20 frames per second.
arXiv Detail & Related papers (2023-08-10T17:57:06Z)
- SalienDet: A Saliency-based Feature Enhancement Algorithm for Object Detection for Autonomous Driving [160.57870373052577]
We propose a saliency-based OD algorithm (SalienDet) to detect unknown objects.
Our SalienDet utilizes a saliency-based algorithm to enhance image features for object proposal generation.
We design a dataset relabeling approach that differentiates unknown objects from all objects in the training sample set to achieve Open-World Detection.
arXiv Detail & Related papers (2023-05-11T16:19:44Z)
- Once Detected, Never Lost: Surpassing Human Performance in Offline LiDAR based 3D Object Detection [50.959453059206446]
This paper aims for high-performance offline LiDAR-based 3D object detection.
We first observe that experienced human annotators annotate objects from a track-centric perspective.
We propose a high-performance offline detector that adopts a track-centric perspective instead of the conventional object-centric perspective.
arXiv Detail & Related papers (2023-04-24T17:59:05Z)
- TripletTrack: 3D Object Tracking using Triplet Embeddings and LSTM [0.0]
3D object tracking is a critical task in autonomous driving systems.
In this paper we investigate the use of triplet embeddings in combination with motion representations for 3D object tracking.
arXiv Detail & Related papers (2022-10-28T15:23:50Z)
- Semi-automatic 3D Object Keypoint Annotation and Detection for the Masses [42.34064154798376]
We present a semi-automatic way of collecting and labeling datasets using a wrist mounted camera on a standard robotic arm.
We are able to obtain a working 3D object keypoint detector and go through the whole process of data collection, annotation and learning in just a couple hours of active time.
arXiv Detail & Related papers (2022-01-19T15:41:54Z)
- Multiple Object Trackers in OpenCV: A Benchmark [0.0]
In this paper, we evaluate 7 trackers implemented in OpenCV against the MOT20 dataset.
The results are shown based on Multiple Object Tracking Accuracy (MOTA) and Multiple Object Tracking Precision (MOTP) metrics.
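MOTA and MOTP are the standard CLEAR-MOT metrics. As a reference for how the headline MOTA number in such a benchmark is computed, here is a minimal sketch of its definition; the error counts in the example are hypothetical, not figures from the paper.

```python
def mota(false_negatives, false_positives, id_switches, ground_truth_objects):
    """Multiple Object Tracking Accuracy (CLEAR-MOT definition):
    MOTA = 1 - (FN + FP + IDSW) / GT, with all counts summed over
    every frame of the sequence."""
    errors = false_negatives + false_positives + id_switches
    return 1.0 - errors / ground_truth_objects

# Hypothetical sequence totals: 300 misses, 150 false alarms,
# 50 identity switches, 2000 ground-truth object instances.
print(mota(300, 150, 50, 2000))  # → 0.75
```

Note that MOTA can go negative when a tracker produces more errors than there are ground-truth objects; MOTP, by contrast, measures only the localization accuracy of the matched detections.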
arXiv Detail & Related papers (2021-10-11T09:12:02Z)
- End-to-end Deep Object Tracking with Circular Loss Function for Rotated Bounding Box [68.8204255655161]
We introduce a novel end-to-end deep learning method based on the Transformer Multi-Head Attention architecture.
We also present a new type of loss function, which takes into account the bounding box overlap and orientation.
arXiv Detail & Related papers (2020-12-17T17:29:29Z)
- MVLidarNet: Real-Time Multi-Class Scene Understanding for Autonomous Driving Using Multiple Views [60.538802124885414]
We present Multi-View LidarNet (MVLidarNet), a two-stage deep neural network for multi-class object detection and drivable space segmentation.
MVLidarNet is able to detect and classify objects while simultaneously determining the drivable space using a single LiDAR scan as input.
We show results on both KITTI and a much larger internal dataset, thus demonstrating the method's ability to scale by an order of magnitude.
arXiv Detail & Related papers (2020-06-09T21:28:17Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information listed here and is not responsible for any consequences of its use.