Deep Learning for Vision-Based Fall Detection System: Enhanced Optical
Dynamic Flow
- URL: http://arxiv.org/abs/2104.05744v1
- Date: Thu, 18 Mar 2021 08:14:25 GMT
- Title: Deep Learning for Vision-Based Fall Detection System: Enhanced Optical
Dynamic Flow
- Authors: Sagar Chhetri, Abeer Alsadoon, Thair Al-Dala'in, P. W. C. Prasad,
Tarik A. Rashid, Angelika Maag
- Abstract summary: Deep learning has changed the landscape of vision-based systems such as action recognition.
However, deep learning techniques have not yet been successfully implemented in vision-based fall detection systems.
This research aims to propose a vision-based fall detection system that improves the accuracy of fall detection.
- Score: 27.791093798619503
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Accurate fall detection for the assistance of older people is crucial to
reduce incidents of death or injury due to falls. Vision-based fall detection
systems have shown significant results in detecting falls, but numerous challenges
still need to be resolved. Deep learning has changed the landscape of vision-based
systems such as action recognition, yet it has not been successfully applied to
vision-based fall detection because of the large amounts of computation power and
sample training data it requires. This research proposes a vision-based fall
detection system that improves the accuracy of fall detection in complex
environments, such as rooms with changing lighting conditions, and increases the
performance of the pre-processing of video images. The proposed system consists of
an Enhanced Dynamic Optical Flow technique that encodes the temporal data of
optical flow videos by rank pooling, which reduces the processing time of fall
detection and improves classification accuracy under dynamic lighting conditions.
The experimental results showed that the classification accuracy of fall detection
improved by around 3% and the processing time was reduced by 40 to 50 ms. The
proposed system concentrates on decreasing the processing time of fall detection
and improving classification accuracy. In addition, it provides a mechanism for
summarizing a video into a single image using a dynamic optical flow technique,
which helps to increase the performance of the image pre-processing steps.
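The summarization step described in the abstract, rank pooling over optical flow to collapse a clip into a single image, can be sketched roughly as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: it assumes OpenCV's Farneback dense optical flow and a linear-ramp approximation to rank pooling applied to running means of the flow fields; the function names and the input file `fall_clip.mp4` are hypothetical.

```python
# Minimal sketch (not the paper's code): summarize a clip's optical flow into
# one "dynamic flow" image via approximate rank pooling.
# Dependencies assumed: opencv-python, numpy.
import cv2
import numpy as np


def dense_flow_frames(video_path: str) -> list[np.ndarray]:
    """Compute dense optical flow (H, W, 2) between consecutive frames."""
    cap = cv2.VideoCapture(video_path)
    ok, prev = cap.read()
    if not ok:
        raise IOError(f"cannot read {video_path}")
    prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
    flows = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        flow = cv2.calcOpticalFlowFarneback(
            prev_gray, gray, None,
            pyr_scale=0.5, levels=3, winsize=15,
            iterations=3, poly_n=5, poly_sigma=1.2, flags=0)
        flows.append(flow)  # horizontal/vertical displacement per pixel
        prev_gray = gray
    cap.release()
    return flows


def rank_pooled_image(flows: list[np.ndarray]) -> np.ndarray:
    """Collapse a flow sequence into one image with approximate rank pooling:
    d = sum_t (2t - T - 1) * V_t, where V_t is the running mean of the flow
    fields up to time t (a common linear-ramp approximation)."""
    T = len(flows)
    stack = np.stack(flows).astype(np.float64)                  # (T, H, W, 2)
    counts = np.arange(1, T + 1, dtype=np.float64)
    running_mean = np.cumsum(stack, axis=0) / counts[:, None, None, None]
    alphas = 2.0 * counts - T - 1                               # linear ramp weights
    dyn = np.tensordot(alphas, running_mean, axes=(0, 0))       # (H, W, 2)
    # Normalize each channel to [0, 255] so the summary can feed a 2D CNN.
    dyn -= dyn.min(axis=(0, 1), keepdims=True)
    dyn /= dyn.max(axis=(0, 1), keepdims=True) + 1e-8
    return (255 * dyn).astype(np.uint8)


if __name__ == "__main__":
    flows = dense_flow_frames("fall_clip.mp4")        # hypothetical input clip
    summary = rank_pooled_image(flows)
    cv2.imwrite("dynamic_flow_u.png", summary[..., 0])  # horizontal component
    cv2.imwrite("dynamic_flow_v.png", summary[..., 1])  # vertical component
```

Because the whole clip is reduced to a single two-channel image, the downstream fall classifier can be an ordinary 2D network rather than one operating on stacked flow frames, which is consistent with the paper's stated aim of cutting pre-processing and detection time.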
Related papers
- Self-supervised denoising of visual field data improves detection of glaucoma progression [10.406307305469356] (2024-11-19)
We demonstrate the utility of self-supervised deep learning in denoising visual field data from over 4000 patients.
Masked autoencoders led to cleaner denoised data than previous methods.
- Perceptual Piercing: Human Visual Cue-based Object Detection in Low Visibility Conditions [2.0409124291940826] (2024-10-02)
This study proposes a novel deep learning framework inspired by atmospheric scattering and human visual cortex mechanisms to enhance object detection under poor visibility scenarios such as fog, smoke, and haze.
The objective is to enhance the precision and reliability of detection systems under adverse environmental conditions.
- Visual Context-Aware Person Fall Detection [52.49277799455569] (2024-04-11)
We present a segmentation pipeline to semi-automatically separate individuals and objects in images.
Background objects such as beds, chairs, or wheelchairs can challenge fall detection systems, leading to false positive alarms.
We demonstrate that object-specific contextual transformations during training effectively mitigate this challenge.
- Domain-Aware Few-Shot Learning for Optical Coherence Tomography Noise Reduction [0.0] (2023-06-13)
We propose a few-shot supervised learning framework for optical coherence tomography (OCT) noise reduction.
This framework offers a dramatic increase in training speed and requires only a single image, or part of an image, and a corresponding speckle-suppressed ground truth.
Our results demonstrate significant potential for improving sample complexity, generalization, and time efficiency.
- MonoTDP: Twin Depth Perception for Monocular 3D Object Detection in Adverse Scenes [49.21187418886508] (2023-05-18)
This paper proposes a monocular 3D detection model designed to perceive twin depth in adverse scenes, termed MonoTDP.
We first introduce an adaptive learning strategy to aid the model in handling uncontrollable weather conditions, significantly resisting degradation caused by various degrading factors.
Then, to address the depth/content loss in adverse regions, we propose a novel twin depth perception module that simultaneously estimates scene and object depth.
- Active Gaze Control for Foveal Scene Exploration [124.11737060344052] (2022-08-24)
We propose a methodology to emulate how humans and robots with foveal cameras would explore a scene.
The proposed method achieves an increase in detection F1-score of 2-3 percentage points for the same number of gaze shifts.
- Elderly Fall Detection Using CCTV Cameras under Partial Occlusion of the Subjects Body [0.0] (2022-08-15)
Occlusion is one of the biggest challenges of vision-based fall detection systems.
We synthesize specifically-designed occluded videos for training fall detection systems.
We introduce a framework for weighted training of fall detection models using occluded and un-occluded videos.
- Efficient Human Vision Inspired Action Recognition using Adaptive Spatiotemporal Sampling [13.427887784558168] (2022-07-12)
We introduce a novel adaptive vision system for efficient action recognition processing.
Our system pre-scans the global context at low resolution and decides to skip or request high-resolution features at salient regions for further processing.
We validate the system on the EPIC-KITCHENS and UCF-101 datasets for action recognition, and show that our proposed approach can greatly speed up inference with a tolerable loss of accuracy compared with state-of-the-art baselines.
- On the Sins of Image Synthesis Loss for Self-supervised Depth Estimation [60.780823530087446] (2021-09-13)
We show that improvements in image synthesis do not necessitate improvement in depth estimation.
We attribute this diverging phenomenon to aleatoric uncertainties, which originate from data.
This observed divergence has not been previously reported or studied in depth.
- Geometry Uncertainty Projection Network for Monocular 3D Object Detection [138.24798140338095] (2021-07-29)
We propose a Geometry Uncertainty Projection Network (GUP Net) to tackle the error amplification problem at both inference and training stages.
Specifically, a GUP module is proposed to obtain the geometry-guided uncertainty of the inferred depth.
At the training stage, we propose a Hierarchical Task Learning strategy to reduce the instability caused by error amplification.
- Learning Monocular Dense Depth from Events [53.078665310545745] (2020-10-16)
Event cameras produce brightness changes in the form of a stream of asynchronous events instead of intensity frames.
Recent learning-based approaches have been applied to event-based data, such as monocular depth prediction.
We propose a recurrent architecture to solve this task and show significant improvement over standard feed-forward methods.
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.