Supervised learning on heterogeneous, attributed entities interacting
over time
- URL: http://arxiv.org/abs/2007.11455v1
- Date: Wed, 22 Jul 2020 14:19:11 GMT
- Title: Supervised learning on heterogeneous, attributed entities interacting
over time
- Authors: Amine Laghaout
- Abstract summary: This proposal shows how the current state of graph machine learning remains inadequate for supervised learning on heterogeneous, attributed entities interacting over time, and needs to be augmented with a comprehensive feature engineering paradigm in space and time.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Most physical or social phenomena can be represented by ontologies where the
constituent entities are interacting in various ways with each other and with
their environment. Furthermore, those entities are likely heterogeneous and
attributed with features that evolve dynamically in time as a response to their
successive interactions. In order to apply machine learning on such entities,
e.g., for classification purposes, one therefore needs to integrate the
interactions into the feature engineering in a systematic way. This proposal
shows how, to this end, the current state of graph machine learning remains
inadequate and needs to be augmented with a comprehensive feature
engineering paradigm in space and time.
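The abstract's core requirement, turning an entity's time-stamped interactions into features for a downstream classifier, can be illustrated with a minimal sketch. The entity types, attribute names, and exponential-decay aggregation below are illustrative assumptions, not the paper's actual formalism:

```python
from collections import defaultdict

# Hypothetical toy data: heterogeneous, attributed entities whose
# features should reflect their interactions over time.
entities = {
    "u1": {"type": "user", "risk": 0.2},
    "u2": {"type": "user", "risk": 0.7},
    "d1": {"type": "device", "age": 3.0},
}

# (timestamp, source, target) interaction events.
interactions = [
    (1, "u1", "d1"),
    (2, "u2", "d1"),
    (3, "u1", "u2"),
]

def temporal_features(entity_id, now, half_life=2.0):
    """Count interactions per neighbor type, weighted by exponential
    time decay, so recent interactions contribute more."""
    feats = defaultdict(float)
    for t, src, dst in interactions:
        if t > now or entity_id not in (src, dst):
            continue
        other = dst if entity_id == src else src
        decay = 0.5 ** ((now - t) / half_life)
        feats["n_" + entities[other]["type"]] += decay
    return dict(feats)

print(temporal_features("u1", now=3))  # → {'n_device': 0.5, 'n_user': 1.0}
```

The resulting per-entity feature vectors could then be fed to any standard supervised learner; the point of the sketch is only that the interaction structure, not just the static attributes, enters the feature engineering.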
Related papers
- Visual-Geometric Collaborative Guidance for Affordance Learning [63.038406948791454]
We propose a visual-geometric collaborative guided affordance learning network that incorporates visual and geometric cues.
Our method outperforms the representative models regarding objective metrics and visual quality.
arXiv Detail & Related papers (2024-10-15T07:35:51Z)
- Decomposing heterogeneous dynamical systems with graph neural networks [0.16492989697868887]
We show that graph neural networks can be designed to jointly learn the interaction rules and the structure of the heterogeneous system.
The learned latent structure and dynamics can be used to virtually decompose the complex system.
arXiv Detail & Related papers (2024-07-27T04:03:12Z)
- Multi-modal perception for soft robotic interactions using generative models [2.4100803794273]
Perception is essential for the active interaction of physical agents with the external environment.
The integration of multiple sensory modalities, such as touch and vision, enhances this process.
This paper introduces a perception model that harmonizes data from diverse modalities to build a holistic state representation.
arXiv Detail & Related papers (2024-04-05T17:06:03Z)
- Binding Dynamics in Rotating Features [72.80071820194273]
We propose an alternative "cosine binding" mechanism, which explicitly computes the alignment between features and adjusts weights accordingly.
This allows us to draw direct connections to self-attention and biological neural processes, and to shed light on the fundamental dynamics for object-centric representations to emerge in Rotating Features.
arXiv Detail & Related papers (2024-02-08T12:31:08Z)
- Inferring Relational Potentials in Interacting Systems [56.498417950856904]
We propose Neural Interaction Inference with Potentials (NIIP) as an alternative approach to discover such interactions.
NIIP assigns low energy to the subset of trajectories which respect the relational constraints observed.
It allows trajectory manipulation, such as interchanging interaction types across separately trained models, as well as trajectory forecasting.
arXiv Detail & Related papers (2023-10-23T00:44:17Z)
- Collective Relational Inference for learning heterogeneous interactions [8.215734914005845]
We propose a novel probabilistic method for relational inference, which possesses two distinctive characteristics compared to existing methods.
We evaluate the proposed methodology across several benchmark datasets and demonstrate that it outperforms existing methods in accurately inferring interaction types.
Overall the proposed model is data-efficient and generalizable to large systems when trained on smaller ones.
arXiv Detail & Related papers (2023-04-30T19:45:04Z)
- Co-Located Human-Human Interaction Analysis using Nonverbal Cues: A Survey [71.43956423427397]
We aim to identify the nonverbal cues and computational methodologies resulting in effective performance.
This survey differs from its counterparts by involving the widest spectrum of social phenomena and interaction settings.
Some major observations: the most often used nonverbal cue is speaking activity, the most common computational method is the support vector machine, the typical interaction environment is a meeting of 3-4 persons, and the dominant sensing approach combines microphones and cameras.
arXiv Detail & Related papers (2022-07-20T13:37:57Z)
- Interaction Transformer for Human Reaction Generation [61.22481606720487]
We propose a novel interaction Transformer (InterFormer) consisting of a Transformer network with both temporal and spatial attentions.
Our method is general and can be used to generate more complex and long-term interactions.
arXiv Detail & Related papers (2022-07-04T19:30:41Z)
- SCIM: Simultaneous Clustering, Inference, and Mapping for Open-World Semantic Scene Understanding [34.19666841489646]
We show how a robot can autonomously discover novel semantic classes and improve accuracy on known classes when exploring an unknown environment.
We develop a general framework for mapping and clustering that we then use to generate a self-supervised learning signal to update a semantic segmentation model.
In particular, we show how clustering parameters can be optimized during deployment and that fusion of multiple observation modalities improves novel object discovery compared to prior work.
arXiv Detail & Related papers (2022-06-21T18:41:51Z)
- Learning Asynchronous and Sparse Human-Object Interaction in Videos [56.73059840294019]
Asynchronous-Sparse Interaction Graph Networks (ASSIGN) is able to automatically detect the structure of interaction events associated with entities in a video scene.
ASSIGN is tested on human-object interaction recognition and shows superior performance in segmenting and labeling of human sub-activities and object affordances from raw videos.
arXiv Detail & Related papers (2021-03-03T23:43:55Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.