An Ontology-based Context Model in Intelligent Environments
- URL: http://arxiv.org/abs/2003.05055v1
- Date: Fri, 6 Mar 2020 12:15:20 GMT
- Title: An Ontology-based Context Model in Intelligent Environments
- Authors: Tao Gu, Xiao Hang Wang, Hung Keng Pung, Da Qing Zhang
- Abstract summary: We propose a formal context model based on OWL to address issues including semantic context representation, context reasoning and knowledge sharing.
We also present a Service-Oriented Context-Aware Middleware (SOCAM) architecture for building context-aware services.
- Score: 11.393194142678505
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Computing is becoming increasingly mobile and pervasive; these
changes imply that applications and services must be aware of, and adapt to,
their changing contexts in highly dynamic environments. Building context-aware
systems today is a complex task due to the lack of appropriate infrastructure
support in intelligent environments. A context-aware infrastructure requires an
appropriate context model to represent, manipulate, and access context
information. In this paper, we propose a formal, ontology-based context model
in OWL to address issues including semantic context representation, context
reasoning and knowledge sharing, context classification, context dependency,
and quality of context. The main benefit of this model is the ability to reason
about various contexts. Based on our context model, we also present a
Service-Oriented Context-Aware Middleware (SOCAM) architecture for building
context-aware services.
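The abstract stays at the architectural level; as a rough, minimal sketch of what an OWL-based context model with simple reasoning can look like (assuming Python with the third-party rdflib package; the cont: namespace, the class names, and the bedroom-implies-resting rule are illustrative assumptions, not the paper's actual SOCAM ontology):

```python
# Minimal OWL-style context model: a few classes, one sensed fact, and a
# toy deduction rule standing in for a full ontology reasoner.
from rdflib import Graph, Namespace, RDF
from rdflib.namespace import OWL

CONT = Namespace("http://example.org/context#")  # hypothetical namespace
g = Graph()
g.bind("cont", CONT)

# Upper ontology: basic context classes and properties.
for cls in (CONT.Person, CONT.Location, CONT.Activity):
    g.add((cls, RDF.type, OWL.Class))
for prop in (CONT.locatedIn, CONT.engagedIn):
    g.add((prop, RDF.type, OWL.ObjectProperty))

# Sensed (direct) context: Alice is in the bedroom.
g.add((CONT.Alice, RDF.type, CONT.Person))
g.add((CONT.Bedroom, RDF.type, CONT.Location))
g.add((CONT.Alice, CONT.locatedIn, CONT.Bedroom))

# Deduced (indirect) context: anyone located in the bedroom is resting.
deduced = [
    (person, CONT.engagedIn, CONT.Resting)
    for person, _, place in g.triples((None, CONT.locatedIn, None))
    if place == CONT.Bedroom
]
for triple in deduced:
    g.add(triple)

# Query the enriched context with SPARQL.
q = """
PREFIX cont: <http://example.org/context#>
SELECT ?person ?activity WHERE { ?person cont:engagedIn ?activity . }
"""
for row in g.query(q):
    print(row.person, "is engaged in", row.activity)
```

A real SOCAM deployment would replace the inline rule with an OWL/rule reasoner and feed sensed triples from context providers.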
Related papers
- Towards Intelligent Augmented Reality (iAR): A Taxonomy of Context, an Architecture for iAR, and an Empirical Study [46.21335713342863]
We propose a framework for context-aware inference and adaptation in iAR.
We present an empirical AR experiment to observe user behavior and record user performance, context, and user-specified adaptations to the AR interfaces.
arXiv Detail & Related papers (2024-11-04T23:52:43Z)
- Improving Large Language Model (LLM) fidelity through context-aware grounding: A systematic approach to reliability and veracity [0.0]
Large Language Models (LLMs) are increasingly sophisticated and ubiquitous in natural language processing (NLP) applications.
This paper presents a novel framework for contextual grounding in textual models, with a particular emphasis on the Context Representation stage.
Our findings have significant implications for the deployment of LLMs in sensitive domains such as healthcare, legal systems, and social services.
arXiv Detail & Related papers (2024-08-07T18:12:02Z)
- Towards More Unified In-context Visual Understanding [74.55332581979292]
We present a new ICL framework for visual understanding with multi-modal output enabled.
First, we quantize and embed both text and visual prompts into a unified representational space.
Then a decoder-only sparse transformer architecture is employed to perform generative modeling on them.
arXiv Detail & Related papers (2023-12-05T06:02:21Z)
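The summary's first step (quantizing and embedding both modalities into one space) can be illustrated with a toy sketch; the vocabulary and codebook sizes and the offset trick below are assumptions for illustration, not the paper's actual design:

```python
import numpy as np

TEXT_VOCAB, VQ_CODES, DIM = 1000, 512, 64   # illustrative sizes
rng = np.random.default_rng(0)
embed_table = rng.normal(scale=0.02, size=(TEXT_VOCAB + VQ_CODES, DIM))

def unified_sequence(text_ids, vq_ids):
    """Map text tokens and quantized visual tokens into one embedding space.

    Visual codebook indices are offset past the text vocabulary so both
    modalities share a single table, producing one sequence that a
    decoder-only transformer could model autoregressively.
    """
    ids = list(text_ids) + [TEXT_VOCAB + i for i in vq_ids]
    return embed_table[ids]                  # (len(text) + len(image), DIM)

print(unified_sequence(text_ids=[5, 42, 7], vq_ids=[3, 101]).shape)  # (5, 64)
```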
- Dynamics Generalisation in Reinforcement Learning via Adaptive Context-Aware Policies [13.410372954752496]
We present an investigation into how context should be incorporated into behaviour learning to improve generalisation.
We introduce a neural network architecture, the Decision Adapter, which generates the weights of an adapter module and conditions the behaviour of an agent on the context information.
We show that the Decision Adapter is a useful generalisation of a previously proposed architecture and empirically demonstrate that it results in superior generalisation performance.
arXiv Detail & Related papers (2023-10-25T14:50:05Z)
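A hypernetwork-style adapter of this kind is easy to sketch; the dimensions, the single-layer adapter, and the tanh nonlinearity below are illustrative assumptions, not the paper's exact Decision Adapter:

```python
import numpy as np

STATE_DIM, CTX_DIM = 8, 3                    # illustrative sizes
rng = np.random.default_rng(0)

# Hypernetwork: maps the context vector to the weights and bias of a
# single linear adapter layer (STATE_DIM -> STATE_DIM).
W_hyper = rng.normal(scale=0.1, size=(CTX_DIM, STATE_DIM * STATE_DIM + STATE_DIM))

def decision_adapter(state, context):
    """Condition the agent's features on context via generated weights."""
    params = context @ W_hyper
    W = params[: STATE_DIM * STATE_DIM].reshape(STATE_DIM, STATE_DIM)
    b = params[STATE_DIM * STATE_DIM :]
    return np.tanh(state @ W + b)            # context-adapted features

state = rng.normal(size=STATE_DIM)           # e.g. observation features
context = np.array([1.5, 0.2, -0.7])         # e.g. environment dynamics params
print(decision_adapter(state, context))
```

Because the adapter's weights are a function of the context, changing the environment dynamics changes the agent's effective behaviour without retraining the base network.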
- Is attention required for ICL? Exploring the Relationship Between Model Architecture and In-Context Learning Ability [39.42414275888214]
We evaluate 13 models capable of causal language modeling across a suite of synthetic in-context learning tasks.
All the considered architectures can perform in-context learning under a wider range of conditions than previously documented.
Several attention alternatives are sometimes competitive with, or better than, transformers as in-context learners.
arXiv Detail & Related papers (2023-10-12T05:43:06Z)
- Decoupled Context Processing for Context Augmented Language Modeling [33.89636308731306]
Language models can be augmented with a context retriever to incorporate knowledge from large external databases.
By leveraging retrieved context, the neural network does not have to memorize the massive amount of world knowledge within its internal parameters, leading to better efficiency, interpretability and modularity.
arXiv Detail & Related papers (2022-10-11T20:05:09Z)
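The retrieval side of such a context-augmented LM can be sketched as nearest-neighbour search over document embeddings; the random vectors below stand in for a trained encoder, and the function name is hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)
docs = ["OWL is a web ontology language.",
        "SOCAM is a context-aware middleware.",
        "Transformers use attention."]
# Stand-in embeddings; a real retriever encodes documents with a trained model.
doc_vecs = rng.normal(size=(len(docs), 32))
doc_vecs /= np.linalg.norm(doc_vecs, axis=1, keepdims=True)

def retrieve(query_vec, k=2):
    """Return the k nearest documents by cosine similarity."""
    q = query_vec / np.linalg.norm(query_vec)
    top = np.argsort(-(doc_vecs @ q))[:k]
    return [docs[i] for i in top]

# The LM would then encode the retrieved context separately and attend to
# it (the "decoupled" processing the paper studies), rather than storing
# all world knowledge in its parameters.
print(retrieve(rng.normal(size=32)))
```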
- Context-LGM: Leveraging Object-Context Relation for Context-Aware Object Recognition [48.5398871460388]
We propose a novel Contextual Latent Generative Model (Context-LGM), which considers the object-context relation and models it in a hierarchical manner.
To infer contextual features, we reformulate the objective function of the Variational Auto-Encoder (VAE), where contextual features are learned as a posterior distribution conditioned on the object.
The effectiveness of our method is verified by state-of-the-art performance on two context-aware object recognition tasks.
arXiv Detail & Related papers (2021-10-08T11:31:58Z)
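The object-conditioned posterior can be sketched as a small Gaussian inference step; the weights, dimensions, and standard-normal prior below are illustrative assumptions rather than the paper's trained Context-LGM:

```python
import numpy as np

OBJ_DIM, CTX_DIM, LATENT = 16, 16, 4         # illustrative sizes
rng = np.random.default_rng(0)
# Stand-in encoder weights; Context-LGM learns these jointly with a decoder.
W_mu = rng.normal(scale=0.1, size=(OBJ_DIM + CTX_DIM, LATENT))
W_logvar = rng.normal(scale=0.1, size=(OBJ_DIM + CTX_DIM, LATENT))

def contextual_posterior(obj_feat, ctx_feat):
    """Sample q(z | object, context), a posterior conditioned on the object."""
    h = np.concatenate([obj_feat, ctx_feat])
    mu, logvar = h @ W_mu, h @ W_logvar
    z = mu + np.exp(0.5 * logvar) * rng.normal(size=LATENT)   # reparameterize
    # KL(q || N(0, I)): the regularization term of the VAE objective.
    kl = 0.5 * np.sum(np.exp(logvar) + mu**2 - 1.0 - logvar)
    return z, kl

z, kl = contextual_posterior(rng.normal(size=OBJ_DIM), rng.normal(size=CTX_DIM))
print(z.shape, float(kl))
```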
- Out of Context: A New Clue for Context Modeling of Aspect-based Sentiment Analysis [54.735400754548635]
Aspect-based sentiment analysis (ABSA) aims to predict the sentiment expressed in a review with respect to a given aspect.
We argue that the given aspect should be treated as a new clue from outside the context during context modeling.
We design several aspect-aware context encoders based on different backbones.
arXiv Detail & Related papers (2021-06-21T02:26:03Z)
- Measuring and Increasing Context Usage in Context-Aware Machine Translation [64.5726087590283]
We introduce a new metric, conditional cross-mutual information, to quantify the usage of context by machine translation models.
We then introduce a new, simple training method, context-aware word dropout, to increase the usage of context by context-aware models.
arXiv Detail & Related papers (2021-05-07T19:55:35Z)
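Both contributions are simple enough to sketch; the per-token probabilities below are hypothetical model outputs, and the estimator is a per-token approximation of the paper's metric:

```python
import math
import random

def cxmi(p_with_ctx, p_no_ctx):
    """Average log-probability gain of reference tokens when context is given.

    A positive value means the translation model assigns higher probability
    to the reference once it sees the context, i.e. it actually uses it.
    """
    gains = [math.log(pc / p) for pc, p in zip(p_with_ctx, p_no_ctx)]
    return sum(gains) / len(gains)

def context_word_dropout(src_tokens, p_drop=0.2, rng=random.Random(0)):
    """Drop current-sentence source tokens during training so the model is
    pushed to rely on the surrounding context instead."""
    return [t for t in src_tokens if rng.random() >= p_drop]

print(cxmi([0.9, 0.8], [0.6, 0.5]))                      # > 0: context helps
print(context_word_dropout("the cat sat on the mat".split()))
```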
- How Far are We from Effective Context Modeling? An Exploratory Study on Semantic Parsing in Context [59.13515950353125]
We present a grammar-based decoding semantic parser and adapt typical context modeling methods on top of it.
We evaluate 13 context modeling methods on two large cross-domain datasets, and our best model achieves state-of-the-art performances.
arXiv Detail & Related papers (2020-02-03T11:28:10Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.