Neuro-Symbolic Approaches for Context-Aware Human Activity Recognition
- URL: http://arxiv.org/abs/2306.05058v1
- Date: Thu, 8 Jun 2023 09:23:09 GMT
- Title: Neuro-Symbolic Approaches for Context-Aware Human Activity Recognition
- Authors: Luca Arrotta, Gabriele Civitarese, Claudio Bettini
- Abstract summary: We propose a novel approach based on a semantic loss function that infuses knowledge constraints in the Human Activity Recognition model during the training phase.
Our results on scripted and in-the-wild datasets show the impact of different semantic loss functions in outperforming a purely data-driven model.
- Score: 0.7734726150561088
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Deep Learning models are a standard solution for sensor-based Human Activity
Recognition (HAR), but their deployment is often limited by labeled data
scarcity and models' opacity. Neuro-Symbolic AI (NeSy) provides an interesting
research direction to mitigate these issues by infusing knowledge about context
information into HAR deep learning classifiers. However, existing NeSy methods
for context-aware HAR require computationally expensive symbolic reasoners
during classification, making them less suitable for deployment on
resource-constrained devices (e.g., mobile devices). Additionally, NeSy
approaches for context-aware HAR have never been evaluated on in-the-wild
datasets, and their generalization capabilities in real-world scenarios are
questionable. In this work, we propose a novel approach based on a semantic
loss function that infuses knowledge constraints in the HAR model during the
training phase, avoiding symbolic reasoning during classification. Our results
on scripted and in-the-wild datasets show the impact of different semantic loss
functions in outperforming a purely data-driven model. We also compare our
solution with existing NeSy methods and analyze each approach's strengths and
weaknesses. Our semantic loss remains the only NeSy solution that can be
deployed as a single DNN without the need for symbolic reasoning modules,
reaching recognition rates close to (and in some cases better than) existing
approaches.
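As a rough illustration of the idea described in the abstract (not the paper's actual formulation), a semantic loss can be implemented as the negative log of the probability mass the classifier assigns to activities that are consistent with the current context, added to the usual cross-entropy during training. In the sketch below, the activity set, the context rules, the feasibility mask, and the weight `lam` are all hypothetical placeholders.

```python
# Minimal sketch of training-time knowledge infusion via a semantic loss.
# Hypothetical activity set and toy context rules, for illustration only.
import torch
import torch.nn.functional as F

ACTIVITIES = ["walking", "running", "cycling", "sitting"]
INFEASIBLE = {  # context value -> activities the knowledge rules out
    "indoors": {"cycling"},
    "in_vehicle": {"walking", "running", "cycling"},
}

def feasibility_mask(contexts):
    """Compile symbolic context knowledge into a (batch, n_activities) 0/1 mask."""
    mask = torch.ones(len(contexts), len(ACTIVITIES))
    for i, ctx in enumerate(contexts):
        ruled_out = set().union(*(INFEASIBLE.get(c, set()) for c in ctx))
        for j, act in enumerate(ACTIVITIES):
            if act in ruled_out:
                mask[i, j] = 0.0
    return mask

def semantic_loss(logits, mask, eps=1e-8):
    """Negative log-probability that the prediction satisfies the context constraints."""
    probs = F.softmax(logits, dim=-1)
    p_satisfied = (probs * mask).sum(dim=-1)  # mass on context-feasible activities
    return -torch.log(p_satisfied + eps).mean()

def total_loss(logits, targets, mask, lam=0.5):
    """Standard cross-entropy plus the knowledge-infusion term (lam is a toy weight)."""
    return F.cross_entropy(logits, targets) + lam * semantic_loss(logits, mask)

# Usage: two samples whose contexts rule out some activities during training.
logits = torch.randn(2, len(ACTIVITIES))
targets = torch.tensor([0, 3])  # "walking", "sitting"
mask = feasibility_mask([{"indoors"}, {"in_vehicle"}])
print(total_loss(logits, targets, mask))
```
In this sketch the symbolic rules are only used to build the training-time mask; at classification time only the trained network is needed, which is the deployment advantage highlighted in the abstract.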
Related papers
- xAI-Drop: Don't Use What You Cannot Explain [23.33477769275026]
Graph Neural Networks (GNNs) have emerged as the predominant paradigm for learning from graph-structured data.
GNNs face challenges such as oversmoothing, lack of generalization and poor interpretability.
We introduce xAI-Drop, a novel topological-level dropping regularizer that leverages explainability to pinpoint noisy network elements.
arXiv Detail & Related papers (2024-07-29T14:53:45Z)
- ContextGPT: Infusing LLMs Knowledge into Neuro-Symbolic Activity Recognition Models [0.3277163122167433]
We propose ContextGPT: a novel prompt engineering approach to retrieve common-sense knowledge about human activities from LLMs.
An evaluation carried out on two public datasets shows how a NeSy model obtained by infusing common-sense knowledge from ContextGPT is effective in data scarcity scenarios.
arXiv Detail & Related papers (2024-03-11T10:32:23Z)
- The Role of Foundation Models in Neuro-Symbolic Learning and Reasoning [54.56905063752427]
Neuro-Symbolic AI (NeSy) holds promise to ensure the safe deployment of AI systems.
Existing pipelines that train the neural and symbolic components sequentially require extensive labelling.
A new architecture, NeSyGPT, fine-tunes a vision-language foundation model to extract symbolic features from raw data.
arXiv Detail & Related papers (2024-02-02T20:33:14Z)
- A Discrepancy Aware Framework for Robust Anomaly Detection [51.710249807397695]
We present a Discrepancy Aware Framework (DAF) that consistently demonstrates robust performance with simple and inexpensive strategies.
Our method leverages an appearance-agnostic cue to guide the decoder in identifying defects, thereby alleviating its reliance on synthetic appearance.
Under simple synthesis strategies, it outperforms existing methods by a large margin and also achieves state-of-the-art localization performance.
arXiv Detail & Related papers (2023-10-11T15:21:40Z)
- Learning Prompt-Enhanced Context Features for Weakly-Supervised Video Anomaly Detection [37.99031842449251]
Video anomaly detection under weak supervision presents significant challenges.
We present a weakly supervised anomaly detection framework that focuses on efficient context modeling and enhanced semantic discriminability.
Our approach significantly improves the detection accuracy of certain anomaly sub-classes, underscoring its practical value and efficacy.
arXiv Detail & Related papers (2023-06-26T06:45:16Z)
- Enhancing Multiple Reliability Measures via Nuisance-extended Information Bottleneck [77.37409441129995]
In practical scenarios where training data is limited, many predictive signals in the data can come from biases in data acquisition.
We consider an adversarial threat model under a mutual information constraint to cover a wider class of perturbations in training.
We propose an autoencoder-based training to implement the objective, as well as practical encoder designs to facilitate the proposed hybrid discriminative-generative training.
arXiv Detail & Related papers (2023-03-24T16:03:21Z)
- Neuro-Symbolic Learning of Answer Set Programs from Raw Data [54.56905063752427]
Neuro-Symbolic AI aims to combine interpretability of symbolic techniques with the ability of deep learning to learn from raw data.
We introduce Neuro-Symbolic Inductive Learner (NSIL), an approach that trains a general neural network to extract latent concepts from raw data.
NSIL learns expressive knowledge, solves computationally complex problems, and achieves state-of-the-art performance in terms of accuracy and data efficiency.
arXiv Detail & Related papers (2022-05-25T12:41:59Z)
- Leveraging Unlabeled Data for Entity-Relation Extraction through Probabilistic Constraint Satisfaction [54.06292969184476]
We study the problem of entity-relation extraction in the presence of symbolic domain knowledge.
Our approach employs a semantic loss, which captures the precise meaning of a logical sentence.
With a focus on low-data regimes, we show that semantic loss outperforms the baselines by a wide margin.
arXiv Detail & Related papers (2021-03-20T00:16:29Z)
- A case for new neural network smoothness constraints [34.373610792075205]
We show that model smoothness is a useful inductive bias which aids generalization, adversarial robustness, generative modeling and reinforcement learning.
We conclude that new advances in the field hinge on finding ways to incorporate data, tasks, and learning into our definitions of smoothness.
arXiv Detail & Related papers (2020-12-14T22:07:32Z)
- Closed Loop Neural-Symbolic Learning via Integrating Neural Perception, Grammar Parsing, and Symbolic Reasoning [134.77207192945053]
Prior methods learn the neural-symbolic models using reinforcement learning approaches.
We introduce the grammar model as a symbolic prior to bridge neural perception and symbolic reasoning.
We propose a novel back-search algorithm that mimics the top-down human-like learning procedure to propagate the error.
arXiv Detail & Related papers (2020-06-11T17:42:49Z)