An Object Model for the Representation of Empirical Knowledge
- URL: http://arxiv.org/abs/2005.07464v1
- Date: Fri, 15 May 2020 10:45:58 GMT
- Title: An Object Model for the Representation of Empirical Knowledge
- Authors: Joël Colloc (IDEES), Danielle Boulanger
- Abstract summary: We are currently designing an object-oriented model which describes static and dynamical knowledge in different domains.
The internal level proposes: the object structure, composed of a hierarchy of sub-objects; structure evolution through dynamical functions; and comparison of same-type objects with evaluation functions.
The external level describes the object environment; it enforces object types and uses external simple inheritance from the type to the sub-types.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We are currently designing an object-oriented model which describes static
and dynamical knowledge in different domains. It provides a twin conceptual
level. The internal level proposes: the object structure, composed of a
hierarchy of sub-objects; structure evolution through dynamical functions; and
comparison of same-type objects with evaluation functions. It uses multiple
upward inheritance from sub-object properties to the Object. The external level
describes the object environment; it enforces object types and uses external
simple inheritance from the type to the sub-types.
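The twin-level model described in the abstract can be sketched in code. This is a minimal illustration, not the authors' formalism: all class, method, and function names (`EmpiricalObject`, `ObjectType`, `evolve`, `compare`) are assumptions chosen to mirror the abstract's wording.

```python
class EmpiricalObject:
    """Internal level: an object built from a hierarchy of sub-objects."""

    def __init__(self, name, sub_objects=None):
        self.name = name
        self.sub_objects = sub_objects or []  # sub-object hierarchy

    def properties(self):
        # Multiple upward inheritance: the object aggregates the
        # properties of all of its sub-objects.
        props = {self.name}
        for sub in self.sub_objects:
            props |= sub.properties()
        return props

    def evolve(self, dynamical_fn):
        # Structure evolution: a dynamical function rewrites the hierarchy.
        self.sub_objects = dynamical_fn(self.sub_objects)

    def compare(self, other, evaluation_fn):
        # Comparison of same-type objects via an evaluation function.
        return evaluation_fn(self, other)


class ObjectType:
    """External level: simple inheritance from a type to its sub-types."""

    def __init__(self, name, parent=None):
        self.name = name
        self.parent = parent  # single (simple) inheritance link

    def ancestors(self):
        chain, t = [], self.parent
        while t is not None:
            chain.append(t.name)
            t = t.parent
        return chain
```

For example, a `heart` object would upward-inherit the properties of its `valve` sub-object, while the `heart` type would downward-inherit from a hypothetical `organ` type.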
Related papers
- Learning Dynamic Attribute-factored World Models for Efficient
Multi-object Reinforcement Learning [6.447052211404121]
In many reinforcement learning tasks, the agent has to learn to interact with many objects of different types and generalize to unseen combinations and numbers of objects.
Recent works have shown the benefits of object-factored representations and hierarchical abstractions for improving sample efficiency.
We introduce the Dynamic Attribute FacTored RL (DAFT-RL) framework to exploit the benefits of factorization in terms of object attributes.
arXiv Detail & Related papers (2023-07-18T12:41:28Z) - Object-Category Aware Reinforcement Learning [18.106722478831113]
We propose a novel framework named Object-Category Aware Reinforcement Learning (OCARL).
OCARL uses the category information of objects to facilitate both perception and reasoning.
Our experiments show that OCARL can improve both the sample efficiency and generalization in the OORL domain.
arXiv Detail & Related papers (2022-10-13T11:31:32Z) - Object-Compositional Neural Implicit Surfaces [45.274466719163925]
The neural implicit representation has shown its effectiveness in novel view synthesis and high-quality 3D reconstruction from multi-view images.
This paper proposes a novel framework, ObjectSDF, to build an object-compositional neural implicit representation with high fidelity in 3D reconstruction and object representation.
arXiv Detail & Related papers (2022-07-20T06:38:04Z) - Complex-Valued Autoencoders for Object Discovery [62.26260974933819]
We propose a distributed approach to object-centric representations: the Complex AutoEncoder.
We show that this simple and efficient approach achieves better reconstruction performance than an equivalent real-valued autoencoder on simple multi-object datasets.
We also show that it achieves competitive unsupervised object discovery performance to a SlotAttention model on two datasets, and manages to disentangle objects in a third dataset where SlotAttention fails - all while being 7-70 times faster to train.
arXiv Detail & Related papers (2022-04-05T09:25:28Z) - Contrastive Object Detection Using Knowledge Graph Embeddings [72.17159795485915]
We compare the error statistics of the class embeddings learned from a one-hot approach with semantically structured embeddings from natural language processing or knowledge graphs.
We propose a knowledge-embedded design for keypoint-based and transformer-based object detection architectures.
arXiv Detail & Related papers (2021-12-21T17:10:21Z) - Object-Region Video Transformers [100.23380634952083]
We present Object-Region Video Transformers (ORViT), an object-centric approach that extends video transformer layers with object representations.
Our ORViT block consists of two object-level streams: appearance and dynamics.
We show strong improvement in performance across all tasks and datasets considered, demonstrating the value of a model that incorporates object representations into a transformer architecture.
arXiv Detail & Related papers (2021-10-13T17:51:46Z) - Hierarchical Relational Inference [80.00374471991246]
We propose a novel approach to physical reasoning that models objects as hierarchies of parts that may locally behave separately, but also act more globally as a single whole.
Unlike prior approaches, our method learns in an unsupervised fashion directly from raw visual images.
It explicitly distinguishes multiple levels of abstraction and improves over a strong baseline at modeling synthetic and real-world videos.
arXiv Detail & Related papers (2020-10-07T20:19:10Z) - Object Files and Schemata: Factorizing Declarative and Procedural
Knowledge in Dynamical Systems [135.10772866688404]
Black-box models with a monolithic hidden state often fail to apply procedural knowledge consistently and uniformly.
We address this issue via an architecture that factorizes declarative and procedural knowledge.
arXiv Detail & Related papers (2020-06-29T17:45:03Z) - Look-into-Object: Self-supervised Structure Modeling for Object
Recognition [71.68524003173219]
We propose to "look into object" (explicitly yet intrinsically model the object structure) through incorporating self-supervisions.
We show the recognition backbone can be substantially enhanced for more robust representation learning.
Our approach achieves large performance gains on a number of benchmarks, including generic object recognition (ImageNet) and fine-grained object recognition tasks (CUB, Cars, Aircraft).
arXiv Detail & Related papers (2020-03-31T12:22:51Z) - Learning Object Permanence from Video [46.34427538905761]
This paper introduces the setup of learning Object Permanence from data.
We explain why this learning problem should be dissected into four components, where objects are (1) visible, (2) occluded, (3) contained by another object, and (4) carried by a containing object.
We then present a unified deep architecture that learns to predict object location under these four scenarios.
arXiv Detail & Related papers (2020-03-23T18:03:01Z)
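The four-way decomposition of object permanence above can be illustrated with a toy classifier. The enum values and the rule-based `classify` function are hypothetical, chosen only to make the four scenarios concrete; the paper itself learns this distinction with a deep architecture.

```python
from enum import Enum, auto

class PermanenceScenario(Enum):
    VISIBLE = auto()    # object directly observable
    OCCLUDED = auto()   # hidden behind another object
    CONTAINED = auto()  # held inside a stationary container
    CARRIED = auto()    # moves along with a containing object

def classify(visible, inside_container, container_moving):
    # Hypothetical rule-based dispatch over the four cases.
    if visible:
        return PermanenceScenario.VISIBLE
    if inside_container and container_moving:
        return PermanenceScenario.CARRIED
    if inside_container:
        return PermanenceScenario.CONTAINED
    return PermanenceScenario.OCCLUDED
```

The point of the decomposition is that location prediction needs different evidence in each case: a visible object can be tracked directly, while a carried object's location must be inferred from its container's motion.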
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences.