Monitoring Browsing Behavior of Customers in Retail Stores via RFID Imaging
- URL: http://arxiv.org/abs/2007.03600v1
- Date: Tue, 7 Jul 2020 16:36:24 GMT
- Title: Monitoring Browsing Behavior of Customers in Retail Stores via RFID Imaging
- Authors: Kamran Ali, Alex X. Liu, Eugene Chai, Karthik Sundaresan
- Abstract summary: We propose TagSee, a multi-person imaging system based on monostatic RFID imaging.
We implement TagSee using an Impinj Speedway R420 reader and SMARTRAC DogBone RFID tags.
TagSee can achieve a TPR of more than 90% and a FPR of less than 10% in multi-person scenarios using training data from just 3-4 users.
- Score: 24.007822566345943
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: In this paper, we propose to use commercial off-the-shelf (COTS)
monostatic RFID devices (i.e., devices that use a single antenna at a time
both to transmit RFID signals to the tags and to receive them back) to
monitor the browsing activity of customers in front of display items in
places such as retail stores. To this end, we propose TagSee, a multi-person
imaging system based on monostatic RFID imaging. TagSee is based on the
insight that when customers browse the items on a shelf, they stand between
the reader and the tags deployed along the boundaries of the shelf. This
changes the multipaths that the RFID signals travel along, and therefore both
the RSS and phase values of the RFID signals that the reader receives. Based
on these variations observed by the reader, TagSee constructs a
coarse-grained image of the customers. Afterwards, TagSee identifies the
items being browsed by the customers by analyzing the constructed images. The
key novelty of this paper lies in monitoring the browsing behavior of
multiple customers in front of display items by constructing coarse-grained
images via robust, analytical-model-driven, deep-learning-based RFID imaging.
To achieve this, we first mathematically formulate the problem of imaging
humans using monostatic RFID devices and derive an approximate analytical
imaging model that relates human obstructions to the variations they cause in
the RFID signals. Based on this model, we then develop a deep learning
framework to robustly image customers with high accuracy. We implement the
TagSee scheme using an Impinj Speedway R420 reader and SMARTRAC DogBone RFID
tags. TagSee achieves a TPR of more than ~90% and an FPR of less than ~10% in
multi-person scenarios using training data from just 3-4 users.
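As a rough illustration of the imaging idea in the abstract, the following is a minimal, hypothetical sketch (not the authors' implementation): it maps per-tag RSS and phase deviations from a person-free baseline onto the tag grid to form a coarse-grained image, and replaces the paper's analytical-model-driven deep learning stage with a simple threshold for brevity. Array shapes, weights, and thresholds are illustrative assumptions.

    # Hypothetical illustration (not the authors' code): build a coarse "blockage"
    # image from per-tag RSS and phase deviations measured by a monostatic reader.
    import numpy as np

    def coarse_image(rss_ref, phase_ref, rss_now, phase_now, grid_shape):
        """Map per-tag deviations from a person-free baseline onto the tag grid.

        rss_*: per-tag RSS values (dBm); phase_*: per-tag phase values (radians);
        grid_shape: (rows, cols) layout of the tags along the shelf boundary.
        """
        d_rss = np.abs(rss_now - rss_ref)
        d_phase = np.abs(np.angle(np.exp(1j * (phase_now - phase_ref))))  # wrapped
        d_rss = d_rss / (d_rss.max() + 1e-9)        # normalize to [0, 1]
        d_phase = d_phase / (d_phase.max() + 1e-9)
        score = 0.5 * d_rss + 0.5 * d_phase         # equal weights are an assumption
        return score.reshape(grid_shape)            # coarse-grained image

    def browsed_columns(image, threshold=0.5):
        """Flag tag columns (shelf sections) whose peak deviation exceeds a threshold."""
        return np.where(image.max(axis=0) > threshold)[0]

    # Toy usage for a 4 x 8 tag grid: a shopper attenuates and phase-shifts a few tags.
    rng = np.random.default_rng(0)
    ref_rss = rng.normal(-60.0, 1.0, 32)
    ref_phase = rng.uniform(0.0, 2.0 * np.pi, 32)
    now_rss, now_phase = ref_rss.copy(), ref_phase.copy()
    now_rss[10:14] -= 8.0
    now_phase[10:14] += 1.2
    img = coarse_image(ref_rss, ref_phase, now_rss, now_phase, (4, 8))
    print(browsed_columns(img))   # -> columns covered by the simulated shopper

In the paper itself, the coarse-grained images are produced and analyzed by the derived analytical model and a deep learning framework rather than by this fixed thresholding step.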
Related papers
- ViFi-ReID: A Two-Stream Vision-WiFi Multimodal Approach for Person Re-identification [3.3743041904085125]
Person re-identification (ReID) plays a vital role in safety inspections, personnel counting, and more.
Most current ReID approaches primarily extract features from images, which are easily affected by environmental conditions.
We leverage widely available routers as sensing devices by capturing gait information from pedestrians through the Channel State Information (CSI) in WiFi signals.
arXiv Detail & Related papers (2024-10-13T15:34:11Z)
- Reviewing FID and SID Metrics on Generative Adversarial Networks [0.0]
The growth of generative adversarial network (GAN) models has expanded the capabilities of image processing.
Previous research has shown the Fréchet Inception Distance (FID) to be an effective metric when testing these image-to-image GANs in real-world applications.
This paper uses public datasets consisting of facades, cityscapes, and maps within Pix2Pix and CycleGAN models.
After training, these models are evaluated on both distance metrics, which measure the generative performance of the trained models. (A minimal sketch of the standard FID computation appears after this list.)
arXiv Detail & Related papers (2024-02-06T03:02:39Z)
- Reverse Engineering and Security Evaluation of Commercial Tags for RFID-Based IoT Applications [0.9999629695552193]
First, this paper presents a review of the most common flaws found in RFID-based IoT systems.
Second, a novel methodology that eases the detection and mitigation of such flaws is presented.
Third, the latest RFID security tools are analyzed and the methodology proposed is applied through one of them (Proxmark 3) to validate it.
arXiv Detail & Related papers (2024-02-05T23:55:46Z)
- Follow Anything: Open-set detection, tracking, and following in real-time [89.83421771766682]
We present a robotic system to detect, track, and follow any object in real-time.
Our approach, dubbed "follow anything" (FAn), is an open-vocabulary and multimodal model.
FAn can be deployed on a laptop with a lightweight (6-8 GB) graphics card, achieving a throughput of 6-20 frames per second.
arXiv Detail & Related papers (2023-08-10T17:57:06Z)
- RF-Annotate: Automatic RF-Supervised Image Annotation of Common Objects in Context [0.25019493958767397]
Wireless tags are increasingly used to track and identify common items of interest such as retail goods, food, medicine, clothing, books, documents, keys, equipment, and more.
We present RF-Annotate, a pipeline for autonomous pixel-wise image annotation which enables robots to collect labelled visual data of objects of interest as they encounter them within their environment.
arXiv Detail & Related papers (2022-11-16T11:25:38Z)
- Robust Semi-supervised Federated Learning for Images Automatic Recognition in Internet of Drones [57.468730437381076]
We present a Semi-supervised Federated Learning (SSFL) framework for privacy-preserving UAV image recognition.
There are significant differences in the number, features, and distribution of local data collected by UAVs using different camera modules.
We propose an aggregation rule based on the frequency of each client's participation in training, namely the FedFreq aggregation rule. (A toy frequency-weighted aggregation sketch appears after this list.)
arXiv Detail & Related papers (2022-01-03T16:49:33Z)
- Unsupervised Person Re-Identification with Wireless Positioning under Weak Scene Labeling [131.18390399368997]
We propose to explore unsupervised person re-identification with both visual data and wireless positioning trajectories under weak scene labeling.
Specifically, we propose a novel unsupervised multimodal training framework (UMTF), which models the complementarity of visual data and wireless information.
Our UMTF contains a multimodal data association strategy (MMDA) and a multimodal graph neural network (MMGN).
arXiv Detail & Related papers (2021-10-29T08:25:44Z)
- An Effective and Robust Detector for Logo Detection [58.448716977297565]
Some attackers fool well-trained logo detection models to commit infringement.
A novel logo detector based on the mechanism of looking and thinking twice is proposed in this paper.
We extend the DetectoRS algorithm to a cascade schema with an equalization loss function, multi-scale transformations, and adversarial data augmentation.
arXiv Detail & Related papers (2021-08-01T10:17:53Z)
- The Tags Are Alright: Robust Large-Scale RFID Clone Detection Through Federated Data-Augmented Radio Fingerprinting [11.03108444237374]
We propose a novel training framework based on federated machine learning (FML) and data augmentation (DA) to boost the accuracy of RFID clone detection.
To the best of our knowledge, this is the first paper experimentally demonstrating the efficacy of FML and DA on a large device population.
arXiv Detail & Related papers (2021-05-08T10:48:02Z)
- Self-Supervised Person Detection in 2D Range Data using a Calibrated Camera [83.31666463259849]
We propose a method to automatically generate training labels (called pseudo-labels) for 2D LiDAR-based person detectors.
We show that self-supervised detectors, trained or fine-tuned with pseudo-labels, outperform detectors trained using manual annotations.
Our method is an effective way to improve person detectors during deployment without any additional labeling effort.
arXiv Detail & Related papers (2020-12-16T12:10:04Z)
- Backpropagating through Fréchet Inception Distance [79.81807680370677]
FastFID can efficiently train generative models with FID as a loss function.
Using FID as an additional loss for Generative Adversarial Networks improves their FID.
arXiv Detail & Related papers (2020-09-29T15:04:40Z)
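For context on the FID metric referenced in the "Reviewing FID and SID Metrics on Generative Adversarial Networks" and "Backpropagating through Fréchet Inception Distance" entries above: FID compares the Gaussian statistics (mean and covariance) of Inception features extracted from real and generated images. Below is a minimal sketch of the standard computation, assuming the feature matrices have already been extracted; it is for illustration only and is not code from either paper.

    # Standard Frechet Inception Distance between two sets of precomputed features
    # (rows = images, columns = feature dimensions; Inception-v3 features in practice).
    import numpy as np
    from scipy.linalg import sqrtm

    def fid(features_real, features_gen):
        mu_r, mu_g = features_real.mean(axis=0), features_gen.mean(axis=0)
        cov_r = np.cov(features_real, rowvar=False)
        cov_g = np.cov(features_gen, rowvar=False)
        covmean = sqrtm(cov_r @ cov_g)        # matrix square root of the covariance product
        if np.iscomplexobj(covmean):          # drop tiny imaginary parts from numerical error
            covmean = covmean.real
        diff = mu_r - mu_g
        return float(diff @ diff + np.trace(cov_r + cov_g - 2.0 * covmean))

    # Toy usage with random 16-dimensional "features" standing in for Inception features.
    rng = np.random.default_rng(0)
    print(fid(rng.normal(0.0, 1.0, (64, 16)), rng.normal(0.2, 1.0, (64, 16))))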
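The FedFreq rule mentioned in the "Robust Semi-supervised Federated Learning for Images Automatic Recognition in Internet of Drones" entry is described above only as an aggregation rule based on how often each client participates in training. The toy sketch below shows one plausible frequency-weighted aggregation of that kind; the weighting scheme and function names are illustrative assumptions, not the paper's exact rule.

    # Hypothetical frequency-weighted aggregation: clients that have joined more
    # training rounds receive proportionally larger weight in the averaged model.
    import numpy as np

    def freq_weighted_aggregate(client_updates, participation_counts):
        """client_updates: list of parameter vectors; participation_counts: rounds joined so far."""
        counts = np.asarray(participation_counts, dtype=float)
        weights = counts / counts.sum()
        return sum(w * u for w, u in zip(weights, client_updates))

    # Toy usage: three clients submitting 4-parameter "models".
    updates = [np.array([1.0, 0.0, 0.0, 0.0]),
               np.array([0.0, 1.0, 0.0, 0.0]),
               np.array([0.0, 0.0, 1.0, 0.0])]
    print(freq_weighted_aggregate(updates, participation_counts=[5, 1, 2]))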