Orientation-Constrained System for Lamp Detection in Buildings Based on
Computer Vision
- URL: http://arxiv.org/abs/2312.11380v1
- Date: Mon, 18 Dec 2023 17:43:55 GMT
- Title: Orientation-Constrained System for Lamp Detection in Buildings Based on
Computer Vision
- Authors: Francisco Troncoso-Pastoriza, Pablo Eguía-Oller, Rebeca P.
Díaz-Redondo, Enrique Granada-Álvarez, Aitor Erkoreka
- Abstract summary: We introduce two new modifications to enhance the system.
Results show improvements in the number of detections, the percentage of correct model and state identifications, and the distance between detections and reference positions.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Computer vision is used in this work to detect lighting elements in buildings,
with the goal of improving the accuracy of previous methods to provide a
precise inventory of the location and state of lamps. Using the framework
developed in our previous works, we introduce two new modifications to enhance
the system: first, a constraint on the orientation of the detected poses in the
optimization methods for both the initial and the refined estimates, based on
the geometric information of the building information modelling (BIM) model;
second, an additional reprojection-error filtering step to discard the
erroneous poses introduced by the orientation restrictions, keeping the
identification and localization errors low while greatly increasing the number
of detections. These enhancements are tested in five different case studies
with more than 30,000 images, with results showing improvements in the number
of detections, the percentage of correct model and state identifications, and
the distance between detections and reference positions.
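A reprojection-error filtering step of the kind described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's actual implementation: the function names, the candidate-pose structure, and the 5-pixel threshold are all assumptions.

```python
import numpy as np

def reprojection_error(K, R, t, point_3d, observed_px):
    """Project a 3D point with intrinsics K and pose (R, t),
    then return the pixel distance to the observed detection."""
    p_cam = R @ point_3d + t          # world -> camera coordinates
    p_img = K @ p_cam                 # camera -> homogeneous image coordinates
    projected = p_img[:2] / p_img[2]  # perspective divide
    return np.linalg.norm(projected - observed_px)

def filter_poses(candidates, threshold_px=5.0):
    """Discard candidate poses whose reprojection error exceeds the threshold."""
    return [c for c in candidates
            if reprojection_error(c["K"], c["R"], c["t"],
                                  c["X"], c["uv"]) < threshold_px]
```

The idea is that an orientation constraint may admit poses that fit the BIM geometry but project far from the image-space detection; checking the reprojected position against the observed pixel location rejects those outliers.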
Related papers
- Uncertainty Estimation for 3D Object Detection via Evidential Learning [63.61283174146648]
We introduce a framework for quantifying uncertainty in 3D object detection by leveraging an evidential learning loss on Bird's Eye View representations in the 3D detector.
We demonstrate both the efficacy and importance of these uncertainty estimates on identifying out-of-distribution scenes, poorly localized objects, and missing (false negative) detections.
arXiv Detail & Related papers (2024-10-31T13:13:32Z)
- Towards Unified 3D Object Detection via Algorithm and Data Unification [70.27631528933482]
We build the first unified multi-modal 3D object detection benchmark MM-Omni3D and extend the aforementioned monocular detector to its multi-modal version.
We name the designed monocular and multi-modal detectors as UniMODE and MM-UniMODE, respectively.
arXiv Detail & Related papers (2024-02-28T18:59:31Z)
- Real-Time Object Detection in Occluded Environment with Background Cluttering Effects Using Deep Learning [0.8192907805418583]
We concentrate on deep learning models for real-time detection of cars and tanks in an occluded environment with a cluttered background.
The developed method builds a custom dataset and employs a preprocessing technique to clean the noisy data.
The accuracy and frames per second of the SSD-MobileNet v2 model are higher than those of YOLO v3 and YOLO v4.
arXiv Detail & Related papers (2024-01-02T01:30:03Z)
- Use of BIM Data as Input and Output for Improved Detection of Lighting Elements in Buildings [0.0]
This paper introduces a complete method for the automatic detection, identification and localization of lighting elements in buildings.
The detection system is heavily improved from our previous work, with the following two main contributions.
arXiv Detail & Related papers (2023-12-18T17:38:49Z)
- Joint object detection and re-identification for 3D obstacle multi-camera systems [47.87501281561605]
This research paper introduces a novel modification to an object detection network that uses camera and lidar information.
It incorporates an additional branch designed for the task of re-identifying objects across adjacent cameras within the same vehicle.
The results underscore the superiority of this method over traditional Non-Maximum Suppression (NMS) techniques.
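For context, the traditional greedy NMS baseline that this method is compared against works roughly as in the sketch below; the `(x1, y1, x2, y2)` box format and the 0.5 IoU threshold are illustrative assumptions.

```python
def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def nms(detections, iou_threshold=0.5):
    """Greedy NMS: keep the highest-scoring box, drop boxes that overlap it."""
    kept = []
    for box, score in sorted(detections, key=lambda d: -d[1]):
        if all(iou(box, k[0]) < iou_threshold for k in kept):
            kept.append((box, score))
    return kept
```

Because plain NMS suppresses purely on spatial overlap and score, it cannot exploit identity cues across adjacent cameras, which is the gap the re-identification branch addresses.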
arXiv Detail & Related papers (2023-10-09T15:16:35Z)
- Predict to Detect: Prediction-guided 3D Object Detection using Sequential Images [15.51093009875854]
We propose a novel 3D object detection model, P2D (Predict to Detect), that integrates a prediction scheme into a detection framework.
P2D predicts object information in the current frame using solely past frames to learn temporal motion features.
We then introduce a novel temporal feature aggregation method that attentively exploits Bird's-Eye-View (BEV) features based on predicted object information.
arXiv Detail & Related papers (2023-06-14T14:22:56Z)
- Self-Calibrating Anomaly and Change Detection for Autonomous Inspection Robots [0.07366405857677225]
A visual anomaly or change detection algorithm identifies regions of an image that differ from a reference image or dataset.
We propose a comprehensive deep learning framework for detecting anomalies and changes in a priori unknown environments.
arXiv Detail & Related papers (2022-08-26T09:52:12Z)
- Active Gaze Control for Foveal Scene Exploration [124.11737060344052]
We propose a methodology to emulate how humans and robots with foveal cameras would explore a scene.
The proposed method achieves an increase in detection F1-score of 2-3 percentage points for the same number of gaze shifts.
arXiv Detail & Related papers (2022-08-24T14:59:28Z)
- Deep few-shot learning for bi-temporal building change detection [0.0]
A new deep few-shot learning method is proposed for building change detection using Monte Carlo dropout and remote sensing observations.
The setup is based on a small dataset, including bitemporal optical images labeled for building change detection.
arXiv Detail & Related papers (2021-08-25T14:38:21Z)
- City-scale Scene Change Detection using Point Clouds [71.73273007900717]
We propose a method for detecting structural changes in a city using images captured from mounted cameras over two different times.
A direct comparison of the two point clouds for change detection is not ideal due to inaccurate geo-location information.
To circumvent this problem, we propose a deep learning-based non-rigid registration on the point clouds.
Experiments show that our method is able to detect scene changes effectively, even in the presence of viewpoint and illumination differences.
arXiv Detail & Related papers (2021-03-26T08:04:13Z)
- Automatic Extrinsic Calibration Method for LiDAR and Camera Sensor Setups [68.8204255655161]
We present a method to calibrate the parameters of any pair of sensors involving LiDARs, monocular or stereo cameras.
The proposed approach can handle devices with very different resolutions and poses, as usually found in vehicle setups.
arXiv Detail & Related papers (2021-01-12T12:02:26Z)
This list is automatically generated from the titles and abstracts of the papers in this site.