VFDS: Variational Foresight Dynamic Selection in Bayesian Neural
Networks for Efficient Human Activity Recognition
- URL: http://arxiv.org/abs/2204.00130v1
- Date: Thu, 31 Mar 2022 22:52:43 GMT
- Title: VFDS: Variational Foresight Dynamic Selection in Bayesian Neural
Networks for Efficient Human Activity Recognition
- Authors: Randy Ardywibowo, Shahin Boluki, Zhangyang Wang, Bobak Mortazavi,
Shuai Huang, Xiaoning Qian
- Abstract summary: Variational Foresight Dynamic Selection (VFDS) learns a policy that selects the next feature subset to observe.
We apply VFDS to the Human Activity Recognition (HAR) task, where the performance-cost trade-off is critical in practice.
- Score: 81.29900407096977
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In many machine learning tasks, input features with varying degrees of
predictive capability are acquired at varying costs. In order to optimize the
performance-cost trade-off, one would select features to observe a priori.
However, given the changing context with previous observations, the subset of
predictive features to select may change dynamically. Therefore, we face the
challenging new problem of foresight dynamic selection (FDS): finding a dynamic
and light-weight policy to decide which features to observe next, before
actually observing them, for overall performance-cost trade-offs. To tackle
FDS, this paper proposes a Bayesian learning framework of Variational Foresight
Dynamic Selection (VFDS). VFDS learns a policy that selects the next feature
subset to observe, by optimizing a variational Bayesian objective that
characterizes the trade-off between model performance and feature cost. At its
core is an implicit variational distribution on binary gates that are dependent
on previous observations, which will select the next subset of features to
observe. We apply VFDS to the Human Activity Recognition (HAR) task, where the
performance-cost trade-off is critical in practice. Extensive results
demonstrate that VFDS selects different features under changing contexts,
notably saving sensory costs while maintaining or improving the HAR accuracy.
Moreover, the features that VFDS dynamically selects are shown to be
interpretable and associated with the different activity types. We will release
the code.
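For a concrete picture of the mechanism the abstract describes, below is a minimal sketch of foresight dynamic selection with relaxed binary gates. This is not the authors' released implementation: the module names, the Gumbel-Sigmoid relaxation of the gates, and the hyperparameters (temperature, lambda_cost) are illustrative assumptions; only the overall structure (a gate policy conditioned on previous observations, plus a loss that trades off task accuracy against feature cost) follows the abstract.

```python
# Hedged sketch of VFDS-style foresight dynamic feature selection.
# NOT the authors' code; gate relaxation and all names are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ForesightGatePolicy(nn.Module):
    """Predicts which features to observe next from previous observations."""

    def __init__(self, num_features: int, num_classes: int, hidden: int = 64,
                 temperature: float = 0.5):
        super().__init__()
        self.temperature = temperature
        # Gate network: maps previously observed (masked) features to
        # per-feature Bernoulli logits for the *next* observation step.
        self.gate_net = nn.Sequential(
            nn.Linear(num_features, hidden), nn.ReLU(),
            nn.Linear(hidden, num_features),
        )
        # Task network: classifies from the gated feature subset.
        self.classifier = nn.Sequential(
            nn.Linear(num_features, hidden), nn.ReLU(),
            nn.Linear(hidden, num_classes),
        )

    def forward(self, prev_obs: torch.Tensor, x_full: torch.Tensor):
        logits = self.gate_net(prev_obs)
        # Relaxed Bernoulli ("Gumbel-Sigmoid") sample of binary gates so the
        # selection stays differentiable during training (an assumption here).
        u = torch.rand_like(logits).clamp(1e-6, 1 - 1e-6)
        noise = torch.log(u) - torch.log1p(-u)
        gates = torch.sigmoid((logits + noise) / self.temperature)
        pred = self.classifier(gates * x_full)  # only "observed" features pass
        return pred, gates


def performance_cost_loss(pred, target, gates, feature_costs, lambda_cost=0.1):
    """Task loss plus an expected-feature-cost penalty (the trade-off)."""
    task = F.cross_entropy(pred, target)
    cost = (gates * feature_costs).sum(dim=-1).mean()
    return task + lambda_cost * cost


if __name__ == "__main__":
    B, D, C = 32, 20, 6                       # batch, features (sensors), classes
    model = ForesightGatePolicy(D, C)
    x_full = torch.randn(B, D)                # all sensor readings (training only)
    prev_obs = torch.randn(B, D) * (torch.rand(B, D) > 0.5).float()  # masked past
    y = torch.randint(0, C, (B,))
    costs = torch.rand(D)                     # per-sensor acquisition cost
    pred, gates = model(prev_obs, x_full)
    loss = performance_cost_loss(pred, y, gates, costs)
    loss.backward()
    print(float(loss))
```

At deployment, the relaxed gates would typically be binarized (e.g., thresholded at 0.5) so that only the selected sensors are actually queried at the next step.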
Related papers
- Active Prompt Learning with Vision-Language Model Priors [9.173468790066956]
We introduce a class-guided clustering that leverages the pre-trained image and text encoders of vision-language models.
We propose a budget-saving selective querying based on adaptive class-wise thresholds.
arXiv Detail & Related papers (2024-11-23T02:34:33Z)
- Incorporating Group Prior into Variational Inference for Tail-User Behavior Modeling in CTR Prediction [8.213386595519928]
We propose a novel variational inference approach, namely Group Prior Sampler Variational Inference (GPSVI).
GPSVI introduces group preferences as priors to refine latent user interests for tail users.
Rigorous analysis and extensive experiments demonstrate that GPSVI consistently improves the performance of tail users.
arXiv Detail & Related papers (2024-10-19T13:15:36Z)
- Understanding Before Recommendation: Semantic Aspect-Aware Review Exploitation via Large Language Models [53.337728969143086]
Recommendation systems harness user-item interactions like clicks and reviews to learn their representations.
Previous studies improve recommendation accuracy and interpretability by modeling user preferences across various aspects and intents.
We introduce a chain-based prompting approach to uncover semantic aspect-aware interactions.
arXiv Detail & Related papers (2023-12-26T15:44:09Z)
- AlignDiff: Aligning Diverse Human Preferences via Behavior-Customisable Diffusion Model [69.12623428463573]
AlignDiff is a novel framework to quantify human preferences, covering their abstractness, and to guide diffusion planning.
It can accurately match user-customized behaviors and efficiently switch from one to another.
We demonstrate its superior performance on preference matching, switching, and covering compared to other baselines.
arXiv Detail & Related papers (2023-10-03T13:53:08Z)
- Exploiting Modality-Specific Features For Multi-Modal Manipulation Detection And Grounding [54.49214267905562]
We construct a transformer-based framework for multi-modal manipulation detection and grounding tasks.
Our framework simultaneously explores modality-specific features while preserving the capability for multi-modal alignment.
We propose an implicit manipulation query (IMQ) that adaptively aggregates global contextual cues within each modality.
arXiv Detail & Related papers (2023-09-22T06:55:41Z)
- Estimating Conditional Mutual Information for Dynamic Feature Selection [14.706269510726356]
Dynamic feature selection is a promising paradigm to reduce feature acquisition costs and provide transparency into a model's predictions.
Here, we take an information-theoretic perspective and prioritize features based on their mutual information with the response variable.
Our method provides consistent gains over recent methods across a variety of datasets.
arXiv Detail & Related papers (2023-06-05T23:03:03Z)
- MT-SLVR: Multi-Task Self-Supervised Learning for Transformation In(Variant) Representations [2.94944680995069]
We propose a multi-task self-supervised framework (MT-SLVR) that learns both variant and invariant features in a parameter-efficient manner.
We evaluate our approach on few-shot classification tasks drawn from a variety of audio domains and demonstrate improved classification performance.
arXiv Detail & Related papers (2023-05-29T09:10:50Z)
- Explaining Cross-Domain Recognition with Interpretable Deep Classifier [100.63114424262234]
Interpretable Deep Classifier (IDC) learns the nearest source samples of a target sample as evidence upon which the classifier makes its decision.
Our IDC leads to a more explainable model with almost no accuracy degradation and effectively calibrates classification for optimum reject options.
arXiv Detail & Related papers (2022-11-15T15:58:56Z)
- Adaptive Discrete Communication Bottlenecks with Dynamic Vector Quantization [76.68866368409216]
We propose learning to dynamically select discretization tightness conditioned on inputs.
We show that dynamically varying tightness in communication bottlenecks can improve model performance on visual reasoning and reinforcement learning tasks.
arXiv Detail & Related papers (2022-02-02T23:54:26Z)
- Discriminative and Semantic Feature Selection for Place Recognition towards Dynamic Environments [12.973423183330961]
We propose a discriminative and semantic feature selection network, dubbed DSFeat.
Supervised by both semantic information and an attention mechanism, we can estimate the pixel-wise stability of features.
Notably, our proposal can be readily plugged into any feature-based SLAM system.
arXiv Detail & Related papers (2021-03-18T05:11:46Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.