A Generalizable Knowledge Framework for Semantic Indoor Mapping Based on
Markov Logic Networks and Data Driven MCMC
- URL: http://arxiv.org/abs/2002.08402v1
- Date: Wed, 19 Feb 2020 19:30:10 GMT
- Title: A Generalizable Knowledge Framework for Semantic Indoor Mapping Based on
Markov Logic Networks and Data Driven MCMC
- Authors: Ziyuan Liu, Georg von Wichert
- Abstract summary: We propose a generalizable knowledge framework for data abstraction.
Based on these abstract terms, intelligent autonomous systems should be able to make inferences according to a specific knowledge base.
We show in detail how to adapt this framework to a certain task, in particular, semantic robot mapping.
- Score: 2.4214518935746185
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we propose a generalizable knowledge framework for
data abstraction, i.e. finding a compact abstract model of the input data
using predefined abstract terms. Based on these abstract terms, intelligent
autonomous systems, such as robots, should be able to make inferences
according to a specific knowledge base, so that they can better handle the
complexity and uncertainty of the real world. We propose to realize this
framework by combining Markov logic networks (MLNs) and data-driven MCMC
sampling, because the former provide a powerful tool for modelling uncertain
knowledge and the latter provides an efficient way to draw samples from
unknown, complex distributions. Furthermore, we show in detail how to adapt
this framework to a specific task, in particular, semantic robot mapping.
Based on MLNs, we formulate task-specific context knowledge as descriptive
soft rules. Experiments on real-world and simulated data confirm the
usefulness of our framework.
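To make the combination of MLN-style soft rules and data-driven MCMC concrete, the following is a minimal Python sketch of the idea, not the authors' implementation: map regions receive semantic labels, weighted soft rules (e.g. "a doorway should be adjacent to a room") score a labelling together with a data term, and Metropolis-Hastings sampling with a data-driven proposal searches the space of labellings. All labels, rule weights, and observation scores below are hypothetical.

```python
import math
import random

LABELS = ["room", "corridor", "doorway"]

# Toy per-region evidence (e.g. derived from size/shape features); these
# data-driven scores steer both the proposal and the data term.
observations = [
    {"room": 0.7, "corridor": 0.2, "doorway": 0.1},
    {"room": 0.1, "corridor": 0.8, "doorway": 0.1},
    {"room": 0.2, "corridor": 0.2, "doorway": 0.6},
]

def score(labels):
    """Unnormalized log-probability: weighted soft rules plus a data term."""
    s = 0.0
    # Soft rule (weight 1.5): a doorway should be adjacent to a room.
    for i, lab in enumerate(labels):
        if lab == "doorway":
            neighbours = [labels[j] for j in (i - 1, i + 1) if 0 <= j < len(labels)]
            if "room" in neighbours:
                s += 1.5
    # Soft rule (weight 0.8): two corridors should not be adjacent.
    for i in range(len(labels) - 1):
        if not (labels[i] == "corridor" and labels[i + 1] == "corridor"):
            s += 0.8
    # Data term: log-evidence of each region's label.
    for i, lab in enumerate(labels):
        s += math.log(observations[i][lab])
    return s

def propose(labels):
    """Data-driven proposal: resample one region's label from its evidence."""
    i = random.randrange(len(labels))
    new = list(labels)
    weights = [observations[i][l] for l in LABELS]
    new[i] = random.choices(LABELS, weights=weights)[0]
    return new, i

def sample_map(steps=5000, seed=0):
    random.seed(seed)
    state = [random.choice(LABELS) for _ in observations]
    best, best_score = list(state), score(state)
    for _ in range(steps):
        cand, i = propose(state)
        # Metropolis-Hastings acceptance with the asymmetric, data-driven proposal.
        log_alpha = (score(cand) - score(state)
                     + math.log(observations[i][state[i]])   # q(old | new)
                     - math.log(observations[i][cand[i]]))   # q(new | old)
        if random.random() < math.exp(min(0.0, log_alpha)):
            state = cand
            if score(state) > best_score:
                best, best_score = list(state), score(state)
    return best, best_score

if __name__ == "__main__":
    labelling, s = sample_map()
    print("Best labelling found:", labelling, "log-score:", round(s, 2))
```

In the paper's setting the state space is richer (parametric abstract models of the perceived environment rather than a fixed label list) and the soft rules and weights come from the task-specific MLN, but the accept/reject loop with a data-driven proposal above illustrates the general sampling pattern the abstract describes.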
Related papers
- Web-Scale Visual Entity Recognition: An LLM-Driven Data Approach [56.55633052479446]
Web-scale visual entity recognition presents significant challenges due to the lack of clean, large-scale training data.
We propose a novel methodology to curate such a dataset, leveraging a multimodal large language model (LLM) for label verification, metadata generation, and rationale explanation.
Experiments demonstrate that models trained on this automatically curated data achieve state-of-the-art performance on web-scale visual entity recognition tasks.
arXiv Detail & Related papers (2024-10-31T06:55:24Z)
- Flex: End-to-End Text-Instructed Visual Navigation with Foundation Models [59.892436892964376]
We investigate the minimal data requirements and architectural adaptations necessary to achieve robust closed-loop performance with vision-based control policies.
Our findings are synthesized in Flex (Fly-lexically), a framework that uses pre-trained Vision Language Models (VLMs) as frozen patch-wise feature extractors.
We demonstrate the effectiveness of this approach on quadrotor fly-to-target tasks, where agents trained via behavior cloning successfully generalize to real-world scenes.
arXiv Detail & Related papers (2024-10-16T19:59:31Z)
- Knowledge-Aware Reasoning over Multimodal Semi-structured Tables [85.24395216111462]
This study investigates whether current AI models can perform knowledge-aware reasoning on multimodal structured data.
We introduce MMTabQA, a new dataset designed for this purpose.
Our experiments highlight substantial challenges for current AI models in effectively integrating and interpreting multiple text and image inputs.
arXiv Detail & Related papers (2024-08-25T15:17:43Z)
- DiscoveryBench: Towards Data-Driven Discovery with Large Language Models [50.36636396660163]
We present DiscoveryBench, the first comprehensive benchmark that formalizes the multi-step process of data-driven discovery.
Our benchmark contains 264 tasks collected across 6 diverse domains, such as sociology and engineering.
Our benchmark, thus, illustrates the challenges in autonomous data-driven discovery and serves as a valuable resource for the community to make progress.
arXiv Detail & Related papers (2024-07-01T18:58:22Z)
- Are You Being Tracked? Discover the Power of Zero-Shot Trajectory Tracing with LLMs! [3.844253028598048]
This study introduces LLMTrack, a model that illustrates how LLMs can be leveraged for Zero-Shot Trajectory Recognition.
We evaluate the model using real-world datasets designed to challenge it with distinct trajectories characterized by indoor and outdoor scenarios.
arXiv Detail & Related papers (2024-03-10T12:50:35Z)
- Images in Discrete Choice Modeling: Addressing Data Isomorphism in Multi-Modality Inputs [77.54052164713394]
This paper explores the intersection of Discrete Choice Modeling (DCM) and machine learning.
We investigate the consequences of embedding high-dimensional image data that shares isomorphic information with traditional tabular inputs within a DCM framework.
arXiv Detail & Related papers (2023-12-22T14:33:54Z)
- On the verification of Embeddings using Hybrid Markov Logic [2.113770213797994]
We propose a framework to verify complex properties of a learned representation.
We present an approach to learn parameters for the properties within this framework.
We illustrate verification in Graph Neural Networks, Deep Knowledge Tracing and Intelligent Tutoring Systems.
arXiv Detail & Related papers (2023-12-13T17:04:09Z)
- Surprisal Driven $k$-NN for Robust and Interpretable Nonparametric Learning [1.4293924404819704]
We shed new light on the traditional nearest neighbors algorithm from the perspective of information theory.
We propose a robust and interpretable framework for tasks such as classification, regression, density estimation, and anomaly detection using a single model.
Our work showcases the architecture's versatility by achieving state-of-the-art results in classification and anomaly detection.
arXiv Detail & Related papers (2023-11-17T00:35:38Z)
- Homological Convolutional Neural Networks [4.615338063719135]
We propose a novel deep learning architecture that exploits the data structural organization through topologically constrained network representations.
We test our model on 18 benchmark datasets against 5 classic machine learning and 3 deep learning models.
arXiv Detail & Related papers (2023-08-26T08:48:51Z)
- Recognizing Unseen Objects via Multimodal Intensive Knowledge Graph Propagation [68.13453771001522]
We propose a multimodal intensive ZSL framework that matches regions of images with corresponding semantic embeddings.
We conduct extensive experiments and evaluate our model on large-scale real-world data.
arXiv Detail & Related papers (2023-06-14T13:07:48Z)
- Applying Rule-Based Context Knowledge to Build Abstract Semantic Maps of Indoor Environments [2.4214518935746185]
We propose a method that combines data-driven MCMC sampling and inference using rule-based context knowledge for data abstraction.
The product of our system is a parametric abstract model of the perceived environment.
Experiments on real world data show promising results and thus confirm the usefulness of our system.
arXiv Detail & Related papers (2020-02-21T20:56:02Z)
This list is automatically generated from the titles and abstracts of the papers on this site.