Automated Attendee Recognition System for Large-Scale Social Events or Conference Gathering
- URL: http://arxiv.org/abs/2503.03330v1
- Date: Wed, 05 Mar 2025 10:03:21 GMT
- Title: Automated Attendee Recognition System for Large-Scale Social Events or Conference Gathering
- Authors: Dhruv Motwani, Ankush Tyagi, Vipul Dabhi, Harshadkumar Prajapati
- Abstract summary: We propose an automated, cloud-based attendance tracking system that uses cameras mounted at the entrance and exit gates. The mounted cameras continuously capture video and send the video data to cloud services to perform real-time face detection and recognition. This system achieves 100% accuracy for individuals without facial obstructions and successfully recognizes all attendees appearing within the camera's field of view.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Manual attendance tracking at large-scale events, such as marriage functions or conferences, is often inefficient and prone to human error. To address this challenge, we propose an automated, cloud-based attendance tracking system that uses cameras mounted at the entrance and exit gates. The mounted cameras continuously capture video and send the video data to cloud services to perform real-time face detection and recognition. Unlike existing solutions, our system accurately identifies attendees even when they are not looking directly at the camera, allowing natural movements such as looking around or talking while walking. To the best of our knowledge, this is the first system to achieve high recognition rates under such dynamic conditions. Our system demonstrates an overall accuracy of 90%, with each video frame processed in 5 seconds, ensuring real-time operation without frame loss. In addition, notifications are sent to security personnel within the same latency. The system achieves 100% accuracy for individuals without facial obstructions and successfully recognizes all attendees appearing within the camera's field of view, providing a robust solution for attendee recognition in large-scale social events.
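The abstract describes the pipeline only at a high level: per-frame face detection, recognition against an enrolled attendee gallery, and an alert to security personnel within the same latency budget. The paper does not include code, so the following is a minimal sketch of such a cloud-side loop, assuming the open-source face_recognition and OpenCV libraries and hypothetical helper names (build_gallery, notify_security); it is not the authors' implementation.

```python
# Minimal sketch of a cloud-side attendance loop (illustrative assumptions only;
# the library choice, helper names, and 0.6 matching tolerance are not from the paper).
import cv2
import face_recognition


def build_gallery(photo_paths):
    """Build an enrolled gallery: attendee name -> 128-d face encoding."""
    gallery = {}
    for name, path in photo_paths.items():
        image = face_recognition.load_image_file(path)
        encodings = face_recognition.face_encodings(image)
        if encodings:  # skip registration photos with no detectable face
            gallery[name] = encodings[0]
    return gallery


def notify_security(message):
    # Placeholder for the notification channel mentioned in the abstract (push/SMS/etc.).
    print(f"[ALERT] {message}")


def process_stream(video_source, gallery, tolerance=0.6):
    names, known = list(gallery.keys()), list(gallery.values())
    seen = set()
    cap = cv2.VideoCapture(video_source)  # gate camera stream or an uploaded clip
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
        locations = face_recognition.face_locations(rgb)
        for encoding in face_recognition.face_encodings(rgb, locations):
            matches = face_recognition.compare_faces(known, encoding, tolerance)
            if any(matches):
                best = int(face_recognition.face_distance(known, encoding).argmin())
                seen.add(names[best])  # mark attendance once per recognized person
            else:
                notify_security("Unrecognized person at the gate")
    cap.release()
    return seen
```

A real deployment along the lines of the abstract would also persist the attendance log and push alerts through a messaging service; the 5-second per-frame budget reported above leaves room for heavier detectors than the default HOG model used in this sketch.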
Related papers
- Motion-Aware Optical Camera Communication with Event Cameras [28.041269887313042]
This paper unveils a novel system that utilizes event cameras.
We introduce a dynamic visual marker and design event-based tracking algorithms to achieve fast localization and data streaming.
Remarkably, the event camera's unique capabilities mitigate issues related to screen refresh rates and camera motion, enabling a high throughput of up to 114 Kbps in static conditions.
arXiv Detail & Related papers (2024-12-01T14:06:31Z)
- Analysis of Unstructured High-Density Crowded Scenes for Crowd Monitoring [55.2480439325792]
We are interested in developing an automated system for detection of organized movements in human crowds.
Computer vision algorithms can extract information from videos of crowded scenes.
We can estimate the number of participants in an organized cohort.
arXiv Detail & Related papers (2024-08-06T22:09:50Z)
- From Lab to Field: Real-World Evaluation of an AI-Driven Smart Video Solution to Enhance Community Safety [1.7904189757601403]
This article adopts and evaluates an AI-enabled Smart Video Solution (SVS) designed to enhance safety in the real world.
The system integrates with existing infrastructure camera networks, leveraging recent advancements in AI for easy adoption.
The article evaluates the end-to-end latency from the moment an AI algorithm detects anomalous behavior in real-time at the camera level to the time stakeholders receive a notification.
arXiv Detail & Related papers (2023-12-04T17:41:52Z)
- YOLORe-IDNet: An Efficient Multi-Camera System for Person-Tracking [2.5761958263376745]
We propose a person-tracking system that combines correlation filters and Intersection Over Union (IOU) constraints for robust tracking.
The proposed system quickly identifies and tracks suspects in real time across multiple cameras.
It is computationally efficient and achieves a high F1-score of 79% and an IoU of 59%, comparable to existing state-of-the-art algorithms.
arXiv Detail & Related papers (2023-09-23T14:11:13Z)
- Agile gesture recognition for capacitive sensing devices: adapting on-the-job [55.40855017016652]
We demonstrate a hand gesture recognition system that uses signals from capacitive sensors embedded into the etee hand controller.
The controller generates real-time signals from each of the wearer's five fingers.
We use a machine learning technique to analyse the time-series signals and identify three features that can represent the five fingers within 500 ms.
arXiv Detail & Related papers (2023-05-12T17:24:02Z)
- Real-World Community-in-the-Loop Smart Video Surveillance -- A Case Study at a Community College [2.4956060473718407]
This paper presents a case study for designing and deploying smart video surveillance systems based on a real-world testbed at a community college.
We focus on a smart camera-based system that can identify suspicious/abnormal activities and alert the stakeholders and residents immediately.
The system can run eight cameras simultaneously at 32.41 frames per second (FPS).
arXiv Detail & Related papers (2023-03-22T22:16:17Z)
- Scalable and Real-time Multi-Camera Vehicle Detection, Re-Identification, and Tracking [58.95210121654722]
We propose a real-time city-scale multi-camera vehicle tracking system that handles real-world, low-resolution CCTV instead of idealized and curated video streams.
Our method is ranked among the top five performers on the public leaderboard.
arXiv Detail & Related papers (2022-04-15T12:47:01Z)
- Smart Director: An Event-Driven Directing System for Live Broadcasting [110.30675947733167]
Smart Director aims at mimicking the typical human-in-the-loop broadcasting process to automatically create near-professional broadcasting programs in real-time.
Our system is the first end-to-end automated directing system for multi-camera sports broadcasting.
arXiv Detail & Related papers (2022-01-11T16:14:41Z)
- TUM-VIE: The TUM Stereo Visual-Inertial Event Dataset [50.8779574716494]
Event cameras are bio-inspired vision sensors that measure per-pixel brightness changes.
They offer numerous benefits over traditional, frame-based cameras, including low latency, high dynamic range, high temporal resolution and low power consumption.
To foster the development of 3D perception and navigation algorithms with event cameras, we present the TUM-VIE dataset.
arXiv Detail & Related papers (2021-08-16T19:53:56Z)
- Unveiling personnel movement in a larger indoor area with a non-overlapping multi-camera system [23.195588088063577]
The paper expands the scope of indoor movement perception using multiple non-overlapping cameras.
It improves the accuracy of pedestrian re-identification without introducing additional types of sensors.
arXiv Detail & Related papers (2021-04-10T01:44:26Z)
- Training-free Monocular 3D Event Detection System for Traffic Surveillance [93.65240041833319]
Existing event detection systems are mostly learning-based and have achieved convincing performance when a large amount of training data is available.
In real-world scenarios, collecting sufficient labeled training data is expensive and sometimes impossible.
We propose a training-free monocular 3D event detection system for traffic surveillance.
arXiv Detail & Related papers (2020-02-01T04:42:57Z)