On Correlated Knowledge Distillation for Monitoring Human Pose with
Radios
- URL: http://arxiv.org/abs/2305.14829v2
- Date: Tue, 30 May 2023 13:14:05 GMT
- Title: On Correlated Knowledge Distillation for Monitoring Human Pose with
Radios
- Authors: Shiva Raj Pokhrel, Jonathan Kua, Deol Satish, Phil Williams, Arkady
Zaslavsky, Seng W. Loke, Jinho Choi
- Abstract summary: We propose and develop a simple experimental testbed to study the feasibility of a novel idea by coupling radio frequency (RF) sensing technology with Correlated Knowledge Distillation (CKD) theory.
The proposed CKD framework transfers and fuses pose knowledge from a robust "Teacher" model to a parameterized "Student" model, which can be a promising technique for obtaining accurate yet lightweight pose estimates.
- Score: 41.74439665339141
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: In this work, we propose and develop a simple experimental testbed to study
the feasibility of a novel idea by coupling radio frequency (RF) sensing
technology with Correlated Knowledge Distillation (CKD) theory towards
designing lightweight, near-real-time, and precise human pose monitoring
systems. The proposed CKD framework transfers and fuses pose knowledge from a
robust "Teacher" model to a parameterized "Student" model, which can be a
promising technique for obtaining accurate yet lightweight pose estimates. To
assess its efficacy, we implemented CKD for distilling logits in our integrated
Software Defined Radio (SDR)-based experimental setup and investigated the
RF-visual signal correlation. Our CKD-RF sensing technique is characterized by
two modes -- a camera-fed Teacher Class Network (e.g., images, videos) with an
SDR-fed Student Class Network (e.g., RF signals). Specifically, our CKD model
trains a dual multi-branch teacher and student network by distilling and fusing
knowledge bases. The resulting CKD models are then used to identify the
multimodal correlation and teach the student branch in reverse. Instead of
simply aggregating their learnings, CKD training comprises multiple parallel
transformations across the two domains, i.e., visual images and RF
signals. Once trained, our CKD model can efficiently preserve privacy and
utilize the multimodal correlated logits from the two different neural networks
for estimating poses without using visual signals/video frames (by using only
the RF signals).
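As a rough illustration of the logit-distillation step described in the abstract, the sketch below computes the standard temperature-softened KL distillation loss between a camera-fed teacher's logits and an RF-fed student's logits. This is a hypothetical, minimal sketch of generic logit distillation, not the authors' CKD code; the class count, temperature, and logit values are illustrative.

```python
import math

def softmax(logits, temperature=1.0):
    # Temperature-scaled softmax over a list of logits.
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=4.0):
    # KL(teacher || student) on temperature-softened logits, scaled by
    # T^2 as in standard logit distillation.
    p = softmax(teacher_logits, temperature)  # soft teacher targets
    q = softmax(student_logits, temperature)  # student predictions
    kl = sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)
    return temperature ** 2 * kl

# Illustrative pose-class logits: an RF-fed student whose logits match
# the camera-fed teacher incurs zero distillation loss.
teacher = [2.0, 0.5, -1.0]
student = [2.0, 0.5, -1.0]
print(distillation_loss(teacher, student))  # → 0.0
```

Minimizing this loss during training is what lets the student later estimate poses from RF signals alone, without video frames at inference time.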
Related papers
- Dual-frequency Selected Knowledge Distillation with Statistical-based Sample Rectification for PolSAR Image Classification [11.844199868924505]
The effect of regional consistency on classification learning and the rational use of dual-frequency data are two main difficulties in dual-frequency collaborative classification.
A knowledge distillation network with statistical-based sample rectification (SKDNet-SSR) is proposed in this article.
arXiv Detail & Related papers (2025-07-04T02:56:28Z)
- Revisiting Cross-Modal Knowledge Distillation: A Disentanglement Approach for RGBD Semantic Segmentation [4.7859023148002215]
We introduce CroDiNo-KD (Cross-Modal Disentanglement: a New Outlook on Knowledge Distillation), a novel cross-modal knowledge distillation framework for RGBD semantic segmentation.
Our approach simultaneously learns single-modality RGB and Depth models by exploiting disentangled representations, contrastive learning, and decoupled data augmentation.
Our findings illustrate the quality of CroDiNo-KD and suggest reconsidering the conventional teacher/student paradigm for distilling information from multi-modal data to single-modality neural networks.
arXiv Detail & Related papers (2025-05-30T08:53:35Z)
- 4D ASR: Joint Beam Search Integrating CTC, Attention, Transducer, and Mask Predict Decoders [53.297697898510194]
We propose a joint modeling scheme where four decoders share the same encoder -- we refer to this as 4D modeling.
To efficiently train the 4D model, we introduce a two-stage training strategy that stabilizes multitask learning.
In addition, we propose three novel one-pass beam search algorithms by combining three decoders.
arXiv Detail & Related papers (2024-06-05T05:18:20Z)
- Dual-Student Knowledge Distillation Networks for Unsupervised Anomaly Detection [2.06682776181122]
Student-teacher networks (S-T) are favored in unsupervised anomaly detection.
However, vanilla S-T networks are not stable.
We propose a novel dual-student knowledge distillation architecture.
arXiv Detail & Related papers (2024-02-01T09:32:39Z)
- CMD: Self-supervised 3D Action Representation Learning with Cross-modal Mutual Distillation [130.08432609780374]
In 3D action recognition, there exists rich complementary information between skeleton modalities.
We propose a new Cross-modal Mutual Distillation (CMD) framework with the following designs.
Our approach outperforms existing self-supervised methods and sets a series of new records.
arXiv Detail & Related papers (2022-08-26T06:06:09Z)
- Multi-task Learning Approach for Modulation and Wireless Signal Classification for 5G and Beyond: Edge Deployment via Model Compression [1.218340575383456]
Future communication networks must address spectrum scarcity to accommodate the growth of heterogeneous wireless devices.
We exploit the potential of deep neural networks based multi-task learning framework to simultaneously learn modulation and signal classification tasks.
We provide a comprehensive heterogeneous wireless signals dataset for public use.
arXiv Detail & Related papers (2022-02-26T14:51:02Z)
- How and When Adversarial Robustness Transfers in Knowledge Distillation? [137.11016173468457]
This paper studies how and when adversarial robustness can be transferred from a teacher model to a student model in knowledge distillation (KD).
We show that standard KD training fails to preserve adversarial robustness, and we propose KD with input gradient alignment (KDIGA) for remedy.
Under certain assumptions, we prove that the student model using our proposed KDIGA can achieve at least the same certified robustness as the teacher model.
arXiv Detail & Related papers (2021-10-22T21:30:53Z)
- Cross-modal Knowledge Distillation for Vision-to-Sensor Action Recognition [12.682984063354748]
This study introduces an end-to-end Vision-to-Sensor Knowledge Distillation (VSKD) framework.
In this VSKD framework, only time-series data, i.e., accelerometer data, is needed from wearable devices during the testing phase.
This framework will not only reduce the computational demands on edge devices, but also produce a learning model that closely matches the performance of the computational expensive multi-modal approach.
arXiv Detail & Related papers (2021-10-08T15:06:38Z)
- Knowledge Distillation By Sparse Representation Matching [107.87219371697063]
We propose Sparse Representation Matching (SRM) to transfer intermediate knowledge from one Convolutional Neural Network (CNN) to another by utilizing sparse representations.
We formulate SRM as a neural processing block, which can be efficiently optimized using gradient descent and integrated into any CNN in a plug-and-play manner.
Our experiments demonstrate that SRM is robust to architectural differences between the teacher and student networks and outperforms other KD techniques across several datasets.
arXiv Detail & Related papers (2021-03-31T11:47:47Z)
- Semantics-aware Adaptive Knowledge Distillation for Sensor-to-Vision Action Recognition [131.6328804788164]
We propose a framework, named Semantics-aware Adaptive Knowledge Distillation Networks (SAKDN), to enhance action recognition in the vision-sensor modality (videos).
The SAKDN uses multiple wearable-sensors as teacher modalities and uses RGB videos as student modality.
arXiv Detail & Related papers (2020-09-01T03:38:31Z)
- Heterogeneous Knowledge Distillation using Information Flow Modeling [82.83891707250926]
We propose a novel KD method that works by modeling the information flow through the various layers of the teacher model.
The proposed method is capable of overcoming the aforementioned limitations by using an appropriate supervision scheme during the different phases of the training process.
arXiv Detail & Related papers (2020-05-02T06:56:56Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.