From the Lab to the Wild: Affect Modeling via Privileged Information
- URL: http://arxiv.org/abs/2305.10919v1
- Date: Thu, 18 May 2023 12:31:33 GMT
- Title: From the Lab to the Wild: Affect Modeling via Privileged Information
- Authors: Konstantinos Makantasis, Kosmas Pinitas, Antonios Liapis, Georgios N. Yannakakis
- Abstract summary: How can we reliably transfer affect models trained in controlled laboratory conditions (in-vitro) to uncontrolled real-world settings (in-vivo)?
Privileged information enables affect models to be trained across multiple modalities available in a lab and to ignore, without significant performance drops, modalities that are unavailable when the models operate in the wild.
- Score: 2.570570340104555
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: How can we reliably transfer affect models trained in controlled laboratory
conditions (in-vitro) to uncontrolled real-world settings (in-vivo)? The
information gap between in-vitro and in-vivo applications defines a core
challenge of affective computing. This gap is caused by limitations of affect
sensing, including intrusiveness, hardware malfunctions, and the limited
availability of sensors. In response to these limitations, we introduce the concept of
privileged information for operating affect models in real-world scenarios (in
the wild). Privileged information enables affect models to be trained across
multiple modalities available in a lab, and ignore, without significant
performance drops, those modalities that are not available when they operate in
the wild. Our approach is tested on two multimodal affect databases, one of
which is designed for testing models of affect in the wild. By training our
affect models on all modalities and then testing them using solely raw footage
frames, we match the performance of models that fuse all available modalities
for both training and testing. The results are robust across both
classification and regression affect modeling tasks, which are the dominant
paradigms in affective computing. Our findings make a decisive step towards
realizing affect interaction in the wild.
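The abstract does not spell out the training procedure. A common way to realize learning using privileged information (LUPI) is generalized distillation: a teacher model sees all lab modalities, and a student that only sees raw-frame features is trained to match the teacher's soft predictions as well as the ground-truth labels. The sketch below illustrates that idea in PyTorch; the modality names, feature dimensions, loss weighting, and the choice of distillation itself are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of LUPI via generalized distillation. All shapes, modality
# names, and hyperparameters below are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Teacher(nn.Module):
    """Sees every lab modality: frame features plus a privileged signal (e.g. physiology)."""
    def __init__(self, frame_dim=512, physio_dim=32, n_classes=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(frame_dim + physio_dim, 128), nn.ReLU(),
            nn.Linear(128, n_classes),
        )

    def forward(self, frames, physio):
        return self.net(torch.cat([frames, physio], dim=-1))

class Student(nn.Module):
    """Sees only raw-frame features, the modality available in the wild."""
    def __init__(self, frame_dim=512, n_classes=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(frame_dim, 128), nn.ReLU(),
            nn.Linear(128, n_classes),
        )

    def forward(self, frames):
        return self.net(frames)

def distillation_step(student, teacher, frames, physio, labels,
                      optimizer, alpha=0.5, temperature=2.0):
    """One training step: hard labels plus soft teacher targets (privileged info)."""
    with torch.no_grad():
        soft_targets = F.softmax(teacher(frames, physio) / temperature, dim=-1)
    logits = student(frames)
    hard_loss = F.cross_entropy(logits, labels)
    soft_loss = F.kl_div(F.log_softmax(logits / temperature, dim=-1),
                         soft_targets, reduction="batchmean")
    loss = alpha * hard_loss + (1 - alpha) * (temperature ** 2) * soft_loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    return loss.item()

# Toy usage with random tensors standing in for a multimodal affect corpus.
teacher, student = Teacher(), Student()
optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)
frames = torch.randn(16, 512)          # raw-footage features (available in the wild)
physio = torch.randn(16, 32)           # privileged lab-only modality
labels = torch.randint(0, 2, (16,))    # e.g. low/high arousal classification
distillation_step(student, teacher, frames, physio, labels, optimizer)
# At test time only student(frames) is needed; the privileged modality is ignored.
```

In practice the teacher would first be fit on the full lab corpus; since the paper reports both classification and regression affect modeling, the cross-entropy term would be replaced by a regression loss in the latter case.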
Related papers
- Small-to-Large Generalization: Data Influences Models Consistently Across Scale [76.87199303408161]
We find that small- and large-scale language model predictions generally correlate highly across choices of training data. We also characterize how proxy scale affects effectiveness in two downstream proxy-model applications: data attribution and dataset selection.
arXiv Detail & Related papers (2025-05-22T05:50:19Z) - Linear Control of Test Awareness Reveals Differential Compliance in Reasoning Models [13.379003220832825]
Reasoning-focused large language models (LLMs) sometimes alter their behavior when they detect that they are being evaluated. We present the first quantitative study of how such "test awareness" impacts model behavior, particularly its safety alignment.
arXiv Detail & Related papers (2025-05-20T17:03:12Z) - Explanatory Model Monitoring to Understand the Effects of Feature Shifts on Performance [61.06245197347139]
We propose a novel approach to explain the behavior of a black-box model under feature shifts.
We refer to our method, which combines concepts from Optimal Transport and Shapley Values, as Explanatory Performance Estimation.
arXiv Detail & Related papers (2024-08-24T18:28:19Z) - Mind the Gap Between Synthetic and Real: Utilizing Transfer Learning to Probe the Boundaries of Stable Diffusion Generated Data [2.6016285265085526]
Student models show a significant drop in accuracy compared to models trained on real data.
By training individual layers using either real or synthetic data, we reveal that the drop mainly stems from the model's final layers.
Our results suggest an improved trade-off between the amount of real training data used and the model's accuracy.
arXiv Detail & Related papers (2024-05-06T07:51:13Z) - Representation Learning for Wearable-Based Applications in the Case of
Missing Data [20.37256375888501]
Modeling multimodal sensor data in real-world environments is still challenging due to low data quality and limited data annotations.
We investigate representation learning for imputing missing wearable data and compare it with state-of-the-art statistical approaches.
Our study provides insights for the design and development of masking-based self-supervised learning tasks.
arXiv Detail & Related papers (2024-01-08T08:21:37Z) - Decoding Susceptibility: Modeling Misbelief to Misinformation Through a Computational Approach [61.04606493712002]
Susceptibility to misinformation describes the degree of belief in unverifiable claims, which is not directly observable.
Existing susceptibility studies heavily rely on self-reported beliefs.
We propose a computational approach to model users' latent susceptibility levels.
arXiv Detail & Related papers (2023-11-16T07:22:56Z) - Fantastic Gains and Where to Find Them: On the Existence and Prospect of
General Knowledge Transfer between Any Pretrained Model [74.62272538148245]
We show that for arbitrary pairings of pretrained models, one model extracts significant data context unavailable in the other.
We investigate if it is possible to transfer such "complementary" knowledge from one model to another without performance degradation.
arXiv Detail & Related papers (2023-10-26T17:59:46Z) - On the Connection between Pre-training Data Diversity and Fine-tuning
Robustness [66.30369048726145]
We find that the primary factor influencing downstream effective robustness is data quantity.
We demonstrate our findings on pre-training distributions drawn from various natural and synthetic data sources.
arXiv Detail & Related papers (2023-07-24T05:36:19Z) - Supervised Contrastive Learning for Affect Modelling [2.570570340104555]
We introduce three different supervised contrastive learning approaches for training representations that consider affect information.
Results demonstrate the representation capacity of contrastive learning and its efficiency in boosting the accuracy of affect models.
arXiv Detail & Related papers (2022-08-25T17:40:19Z) - FairIF: Boosting Fairness in Deep Learning via Influence Functions with
Validation Set Sensitive Attributes [51.02407217197623]
We propose a two-stage training algorithm named FAIRIF.
It minimizes the loss over a reweighted data set, where the sample weights are computed using influence functions and validation-set sensitive attributes.
We show that FAIRIF yields models with better fairness-utility trade-offs against various types of bias.
arXiv Detail & Related papers (2022-01-15T05:14:48Z) - Harnessing Perceptual Adversarial Patches for Crowd Counting [92.79051296850405]
Crowd counting is vulnerable to adversarial examples in the physical world.
This paper proposes the Perceptual Adversarial Patch (PAP) generation framework to learn the shared perceptual features between models.
arXiv Detail & Related papers (2021-09-16T13:51:39Z) - A Weighted Solution to SVM Actionability and Interpretability [0.0]
Actionability is as important as the interpretability or explainability of machine learning models, which are ongoing and important research topics.
This paper finds a solution to the question of actionability on both linear and non-linear SVM models.
arXiv Detail & Related papers (2020-12-06T20:35:25Z) - Reducing Risk of Model Inversion Using Privacy-Guided Training [0.0]
Several recent attacks have been able to infer sensitive information from trained models.
We present a solution for countering model inversion attacks in tree-based models.
arXiv Detail & Related papers (2020-06-29T09:02:16Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.