Multi-Modal Subjective Context Modelling and Recognition
- URL: http://arxiv.org/abs/2011.09671v1
- Date: Thu, 19 Nov 2020 05:42:03 GMT
- Title: Multi-Modal Subjective Context Modelling and Recognition
- Authors: Qiang Shen and Stefano Teso and Wanyi Zhang and Hao Xu and Fausto Giunchiglia
- Abstract summary: We present a novel ontological context model that captures five dimensions, namely time, location, activity, social relations and object.
An initial context recognition experiment on real-world data hints at the promise of our model.
- Score: 19.80579219657159
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Applications like personal assistants need to be aware of the user's context,
e.g., where they are, what they are doing, and with whom. Context information
is usually inferred from sensor data, like GPS sensors and accelerometers on
the user's smartphone. This prediction task is known as context recognition. A
well-defined context model is fundamental for successful recognition. Existing
models, however, have two major limitations. First, they focus on few aspects,
like location or activity, meaning that recognition methods based on them can
only compute and leverage few inter-aspect correlations. Second, existing
models typically assume that context is objective, whereas in most applications
context is best viewed from the user's perspective. Neglecting these factors
limits the usefulness of the context model and hinders recognition. We present
a novel ontological context model that captures five dimensions, namely time,
location, activity, social relations and object. Moreover, our model defines
three levels of description (objective context, machine context and subjective
context) that naturally support subjective annotations and reasoning. An initial
context recognition experiment on real-world data hints at the promise of our
model.
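The five dimensions and three description levels map naturally onto a small data structure. Below is a minimal sketch assuming a Python encoding; the class names, field names, and example values are illustrative assumptions, not the paper's ontology.

```python
from dataclasses import dataclass
from enum import Enum


class Level(Enum):
    """Three description levels from the context model."""
    OBJECTIVE = "objective"    # context as it is in the world
    MACHINE = "machine"        # context as inferred from sensor data
    SUBJECTIVE = "subjective"  # context as perceived/annotated by the user


@dataclass
class Context:
    """One context assertion over the five dimensions
    (time, location, activity, social relations, object).
    Field names and value types are illustrative only."""
    level: Level
    time: str
    location: str
    activity: str
    social_relations: list[str]
    objects: list[str]


# Example: a machine-level context inferred from smartphone sensors
# versus the user's subjective annotation of the same situation.
machine_ctx = Context(
    level=Level.MACHINE,
    time="2020-11-19T08:30",
    location="building A, room 2.01",
    activity="sitting",
    social_relations=[],
    objects=["smartphone"],
)
subjective_ctx = Context(
    level=Level.SUBJECTIVE,
    time="morning",
    location="my favourite study spot",
    activity="preparing for an exam",
    social_relations=["study group"],
    objects=["lecture notes"],
)
```

Keeping the level explicit on each assertion is what lets the same situation carry both a machine-inferred description and a user-provided subjective one.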
Related papers
- Controllable Context Sensitivity and the Knob Behind It [53.70327066130381]
When making predictions, a language model must trade off how much it relies on its context vs. its prior knowledge.
We search for a knob which controls this sensitivity, determining whether language models answer from the context or their prior knowledge.
arXiv Detail & Related papers (2024-11-11T22:22:21Z)
- Context versus Prior Knowledge in Language Models [49.17879668110546]
Language models often need to integrate prior knowledge learned during pretraining and new information presented in context.
We propose two mutual information-based metrics to measure a model's dependency on a context and on its prior about an entity.
arXiv Detail & Related papers (2024-04-06T13:46:53Z)
- On-device modeling of user's social context and familiar places from smartphone-embedded sensor data [7.310043452300736]
This paper proposes an unsupervised and lightweight approach to model the user's social context and locations directly on the mobile device.
For the social context, the approach utilizes data on physical and cyber social interactions among users and their devices.
The effectiveness of the proposed approach is demonstrated through three sets of experiments, employing five real-world datasets.
arXiv Detail & Related papers (2023-06-27T12:53:14Z)
- Differentiable Outlier Detection Enable Robust Deep Multimodal Analysis [20.316056261749946]
We propose an end-to-end vision and language model incorporating explicit knowledge graphs.
We also introduce an interactive out-of-distribution layer built on an implicit network operator.
In practice, we apply our model to several vision and language downstream tasks, including visual question answering, visual reasoning, and image-text retrieval.
arXiv Detail & Related papers (2023-02-11T05:46:21Z)
- Contextual information integration for stance detection via cross-attention [59.662413798388485]
Stance detection deals with identifying an author's stance towards a target.
Most existing stance detection models are limited because they do not consider relevant contextual information.
We propose an approach to integrate contextual information as text.
arXiv Detail & Related papers (2022-11-03T15:04:29Z)
- On-device modeling of user's social context and familiar places from smartphone-embedded sensor data [7.310043452300736]
We propose a novel, unsupervised and lightweight approach to model the user's social context and her locations.
We exploit data related to both physical and cyber social interactions among users and their devices.
We report the performance of three machine learning algorithms for recognizing daily-life situations.
arXiv Detail & Related papers (2022-05-18T08:32:26Z)
- Context-LGM: Leveraging Object-Context Relation for Context-Aware Object Recognition [48.5398871460388]
We propose a novel Contextual Latent Generative Model (Context-LGM), which considers the object-context relation and models it in a hierarchical manner.
To infer contextual features, we reformulate the objective function of the Variational Auto-Encoder (VAE) so that contextual features are learned as a posterior distribution conditioned on the object (a generic form of this bound is sketched after this list).
The effectiveness of our method is verified by state-of-the-art performance on two context-aware object recognition tasks.
arXiv Detail & Related papers (2021-10-08T11:31:58Z)
- CompGuessWhat?!: A Multi-task Evaluation Framework for Grounded Language Learning [78.3857991931479]
We present GROLLA, an evaluation framework for Grounded Language Learning with Attributes.
We also propose a new dataset CompGuessWhat?! as an instance of this framework for evaluating the quality of learned neural representations.
arXiv Detail & Related papers (2020-06-03T11:21:42Z)
- How Far are We from Effective Context Modeling? An Exploratory Study on Semantic Parsing in Context [59.13515950353125]
We present a grammar-based decoding semantic parser and adapt typical context modeling methods on top of it.
We evaluate 13 context modeling methods on two large cross-domain datasets, and our best model achieves state-of-the-art performances.
arXiv Detail & Related papers (2020-02-03T11:28:10Z)
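For the Context-LGM entry above, the reformulated VAE objective is, in spirit, a conditional variational lower bound. The sketch below shows a generic conditional-VAE ELBO with latent contextual features z, observed context c, and object features o; this is an assumption about the general form only, not the paper's exact factorization.

```latex
% Generic conditional-VAE lower bound (illustrative; the paper's exact
% factorization of object and context variables may differ).
\[
\log p_\theta(c \mid o) \;\ge\;
  \mathbb{E}_{q_\phi(z \mid c, o)}\!\left[\log p_\theta(c \mid z, o)\right]
  \;-\; \mathrm{KL}\!\left(q_\phi(z \mid c, o)\,\|\,p_\theta(z \mid o)\right)
\]
```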
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information above and is not responsible for any consequences of its use.