IndGIC: Supervised Action Recognition under Low Illumination
- URL: http://arxiv.org/abs/2308.15345v1
- Date: Tue, 29 Aug 2023 14:41:10 GMT
- Title: IndGIC: Supervised Action Recognition under Low Illumination
- Authors: Jingbo Zeng
- Abstract summary: We propose an action recognition method using a deep multi-input network.
Ind-GIC is proposed to enhance poorly illuminated video, generating one gamma per frame to improve enhancement performance.
Experimental results show that our model achieves high accuracy on the ARID dataset.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Technologies for human action recognition in the dark are gaining
increasing attention, driven by strong demand in surveillance, motion control,
and human-computer interaction. However, existing methods face problems caused
by the limitations of image enhancement methods and of low-light video
datasets, e.g. high labeling cost. Some video-based approaches are effective
and efficient on specific datasets but do not generalize to most cases, while
other methods that use multiple sensors rely heavily on prior knowledge to
cope with the noisy nature of video streams. In this paper, we propose an
action recognition method using a deep multi-input network. Furthermore, we
propose Independent Gamma Intensity Correction (Ind-GIC) to enhance
poorly illuminated video, generating one gamma per frame to improve
enhancement performance. To demonstrate that our method is effective, we
evaluate it and compare it against existing methods. Experimental results show
that our model achieves high accuracy on the ARID dataset.
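The abstract states only that Ind-GIC derives one gamma value per frame; the exact estimator is not given. The sketch below is a minimal Python illustration of per-frame gamma intensity correction under that assumption, not the paper's actual implementation: the function names (`gamma_correct_frame`, `estimate_gamma`, `enhance_video`) and the mean-brightness heuristic for choosing each gamma are hypothetical stand-ins.

```python
import numpy as np

def gamma_correct_frame(frame: np.ndarray, gamma: float) -> np.ndarray:
    """Apply gamma intensity correction to one 8-bit frame (H, W, C) in [0, 255]."""
    normalized = frame.astype(np.float32) / 255.0
    corrected = np.power(normalized, 1.0 / gamma)  # gamma > 1 brightens
    return (corrected * 255.0).clip(0, 255).astype(np.uint8)

def estimate_gamma(frame: np.ndarray) -> float:
    """Hypothetical per-frame gamma estimate (not the paper's estimator).

    Heuristic: pick gamma so the frame's mean intensity maps toward 0.5,
    i.e. darker frames receive stronger brightening.
    """
    mean_intensity = frame.astype(np.float32).mean() / 255.0
    return max(1.0, float(np.log(mean_intensity + 1e-6) / np.log(0.5)))

def enhance_video(frames):
    """Enhance a low-light clip frame by frame, one gamma per frame."""
    return [gamma_correct_frame(f, estimate_gamma(f)) for f in frames]

if __name__ == "__main__":
    # Toy example: a dark 64x64 RGB clip of 8 frames.
    clip = [np.random.randint(0, 40, (64, 64, 3), dtype=np.uint8) for _ in range(8)]
    enhanced = enhance_video(clip)
    print("mean before:", clip[0].mean(), "mean after:", enhanced[0].mean())
```

Because each frame gets its own gamma, abrupt illumination changes within a clip are handled independently rather than with a single clip-level correction, which is the motivation the abstract gives for the per-frame design.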
Related papers
- Multi-View People Detection in Large Scenes via Supervised View-Wise Contribution Weighting [44.48514301889318]
This paper focuses on improving multi-view people detection by developing a supervised view-wise contribution weighting approach.
A large synthetic dataset is adopted to enhance the model's generalization ability.
Experimental results demonstrate the effectiveness of our approach in achieving promising cross-scene multi-view people detection performance.
arXiv Detail & Related papers (2024-05-30T11:03:27Z) - LCPR: A Multi-Scale Attention-Based LiDAR-Camera Fusion Network for Place Recognition [11.206532393178385]
We present a novel neural network named LCPR for robust multimodal place recognition.
Our method can effectively utilize multi-view camera and LiDAR data to improve the place recognition performance.
arXiv Detail & Related papers (2023-11-06T15:39:48Z) - Egocentric RGB+Depth Action Recognition in Industry-Like Settings [50.38638300332429]
Our work focuses on recognizing actions from egocentric RGB and Depth modalities in an industry-like environment.
Our framework is based on the 3D Video SWIN Transformer to encode both RGB and Depth modalities effectively.
Our method also secured first place at the multimodal action recognition challenge at ICIAP 2023.
arXiv Detail & Related papers (2023-09-25T08:56:22Z) - Bright Channel Prior Attention for Multispectral Pedestrian Detection [1.441471691695475]
We propose a new method, bright channel prior attention, for enhancing pedestrian detection in low-light conditions.
The proposed method integrates image enhancement and detection within a unified framework.
arXiv Detail & Related papers (2023-05-22T09:10:22Z) - DOAD: Decoupled One Stage Action Detection Network [77.14883592642782]
Localizing people and recognizing their actions from videos is a challenging task towards high-level video understanding.
Existing methods are mostly two-stage based, with one stage for person bounding box generation and the other stage for action recognition.
We present a decoupled one-stage network, dubbed DOAD, to improve the efficiency of spatio-temporal action detection.
arXiv Detail & Related papers (2023-04-01T08:06:43Z) - Video Segmentation Learning Using Cascade Residual Convolutional Neural Network [0.0]
We propose a novel deep learning video segmentation approach that incorporates residual information into the foreground detection learning process.
Experiments conducted on the Change Detection 2014 and on the private dataset PetrobrasROUTES from Petrobras support the effectiveness of the proposed approach.
arXiv Detail & Related papers (2022-12-20T16:56:54Z) - Combining Contrastive and Supervised Learning for Video Super-Resolution Detection [0.0]
We propose a new upscaled-resolution-detection method based on learning of visual representations using contrastive and cross-entropy losses.
Our method effectively detects upscaling even in compressed videos and outperforms the state-of-the-art alternatives.
arXiv Detail & Related papers (2022-05-20T18:58:13Z) - Robust Unsupervised Video Anomaly Detection by Multi-Path Frame Prediction [61.17654438176999]
We propose a novel and robust unsupervised video anomaly detection method by frame prediction with proper design.
Our proposed method obtains the frame-level AUROC score of 88.3% on the CUHK Avenue dataset.
arXiv Detail & Related papers (2020-11-05T11:34:12Z) - Depth Guided Adaptive Meta-Fusion Network for Few-shot Video Recognition [86.31412529187243]
Few-shot video recognition aims at learning new actions with only very few labeled samples.
We propose a depth guided Adaptive Meta-Fusion Network for few-shot video recognition which is termed as AMeFu-Net.
arXiv Detail & Related papers (2020-10-20T03:06:20Z) - AR-Net: Adaptive Frame Resolution for Efficient Action Recognition [70.62587948892633]
Action recognition is an open and challenging problem in computer vision.
We propose a novel approach, called AR-Net, that selects on-the-fly the optimal resolution for each frame conditioned on the input for efficient action recognition.
arXiv Detail & Related papers (2020-07-31T01:36:04Z) - TinyVIRAT: Low-resolution Video Action Recognition [70.37277191524755]
In real-world surveillance environments, the actions in videos are captured at a wide range of resolutions.
We introduce a benchmark dataset, TinyVIRAT, which contains natural low-resolution activities.
We propose a novel method for recognizing tiny actions in videos which utilizes a progressive generative approach.
arXiv Detail & Related papers (2020-07-14T21:09:18Z)
This list is automatically generated from the titles and abstracts of the papers in this site.