Improved Structural Discovery and Representation Learning of Multi-Agent Data
- URL: http://arxiv.org/abs/1912.13107v1
- Date: Mon, 30 Dec 2019 22:49:55 GMT
- Title: Improved Structural Discovery and Representation Learning of Multi-Agent Data
- Authors: Jennifer Hobbs, Matthew Holbrook, Nathan Frank, Long Sha, Patrick Lucey
- Abstract summary: We present a dynamic alignment method which provides a robust ordering of structured multi-agent data.
We demonstrate the value of this approach using a large amount of soccer tracking data from a professional league.
- Score: 5.40729975786985
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Central to all machine learning algorithms is data representation. For
multi-agent systems, selecting a representation which adequately captures the
interactions among agents is challenging due to the latent group structure
which tends to vary depending on context. However, in multi-agent systems with
strong group structure, we can simultaneously learn this structure and map a
set of agents to a consistently ordered representation for further learning. In
this paper, we present a dynamic alignment method which provides a robust
ordering of structured multi-agent data enabling representation learning to
occur in a fraction of the time of previous methods. We demonstrate the value
of this approach using a large amount of soccer tracking data from a
professional league.
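The abstract does not spell out the alignment algorithm, but the core idea of mapping a set of agents to a consistently ordered representation can be illustrated with a minimal, hedged sketch: assign each agent to the nearest canonical "role" position via the Hungarian algorithm. The `align_agents` function and the template positions below are assumptions for illustration, not the authors' method.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def align_agents(positions, template):
    """Order agents by assigning each one to a canonical template role.

    positions: (n_agents, 2) array of current (x, y) locations.
    template:  (n_agents, 2) array of canonical role locations.
    Returns agent indices reordered so index i corresponds to role i.
    """
    # Cost of assigning agent j to role i: squared Euclidean distance.
    cost = ((template[:, None, :] - positions[None, :, :]) ** 2).sum(-1)
    _, agent_for_role = linear_sum_assignment(cost)
    return agent_for_role

# Toy example: two agents appear in swapped order relative to the template.
template = np.array([[0.0, 0.0], [10.0, 0.0]])
positions = np.array([[9.5, 0.2], [0.3, -0.1]])  # agent 0 is near role 1
print(align_agents(positions, template))  # -> [1 0]
```

Applying such an assignment per frame yields a consistent ordering of agents regardless of how the raw tracking data happens to index them.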
Related papers
- Learning Collective Dynamics of Multi-Agent Systems using Event-based Vision [15.26086907502649]
This paper proposes a novel problem: vision-based perception to learn and predict the collective dynamics of multi-agent systems.
We focus on deep learning models to directly predict collective dynamics from visual data, captured as frames or events.
We empirically demonstrate the effectiveness of event-based representation over traditional frame-based methods in predicting these collective behaviors.
arXiv Detail & Related papers (2024-11-11T14:45:47Z)
- Structure-CLIP: Towards Scene Graph Knowledge to Enhance Multi-modal Structured Representations [70.41385310930846]
We present an end-to-end framework Structure-CLIP to enhance multi-modal structured representations.
We use scene graphs to guide the construction of semantic negative examples, which results in an increased emphasis on learning structured representations.
A Knowledge-Enhanced Encoder (KEE) is proposed to leverage scene graph knowledge (SGK) as input to further enhance structured representations.
arXiv Detail & Related papers (2023-05-06T03:57:05Z)
- SEA: A Spatially Explicit Architecture for Multi-Agent Reinforcement Learning [14.935456456463731]
We propose a spatial information extraction structure for multi-agent reinforcement learning.
Agents can effectively share neighborhood and global information through a spatial encoder-decoder structure.
arXiv Detail & Related papers (2023-04-25T03:00:09Z)
- Self-Supervised Representation Learning from Temporal Ordering of Automated Driving Sequences [49.91741677556553]
We propose TempO, a temporal ordering pretext task for pre-training region-level feature representations for perception tasks.
We embed each frame by an unordered set of proposal feature vectors, a representation that is natural for object detection or tracking systems.
Extensive evaluations on the BDD100K, nuImages, and MOT17 datasets show that our TempO pre-training approach outperforms single-frame self-supervised learning methods.
arXiv Detail & Related papers (2023-02-17T18:18:27Z)
- Multi-Agent Imitation Learning with Copulas [102.27052968901894]
Multi-agent imitation learning aims to train multiple agents to perform tasks from demonstrations by learning a mapping between observations and actions.
In this paper, we propose to use copula, a powerful statistical tool for capturing dependence among random variables, to explicitly model the correlation and coordination in multi-agent systems.
Our proposed model is able to separately learn marginals that capture the local behavioral patterns of each individual agent, as well as a copula function that solely and fully captures the dependence structure among agents.
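The marginal/copula split described above can be sketched with a Gaussian copula, a standard construction (not necessarily the one used in the paper): dependence is modeled by correlated Gaussians pushed through the normal CDF, and any per-agent marginal can then be plugged in. All variable names and distributions here are illustrative assumptions.

```python
import numpy as np
from scipy import stats

def sample_gaussian_copula(rho, n, rng):
    """Draw n pairs of Uniform(0, 1) variables whose dependence is a
    Gaussian copula with correlation rho; marginals stay uniform."""
    cov = np.array([[1.0, rho], [rho, 1.0]])
    z = rng.multivariate_normal(np.zeros(2), cov, size=n)
    return stats.norm.cdf(z)  # probability-integral transform to uniforms

rng = np.random.default_rng(0)
u = sample_gaussian_copula(rho=0.8, n=5000, rng=rng)

# Plug in each agent's own (illustrative) marginal afterwards:
actions_a = stats.expon(scale=2.0).ppf(u[:, 0])      # agent A: exponential
actions_b = stats.norm(loc=0.0, scale=1.0).ppf(u[:, 1])  # agent B: Gaussian
```

The key property mirrored here is the separation the abstract describes: the copula alone carries the coordination between agents, while each marginal carries one agent's local behavior.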
arXiv Detail & Related papers (2021-07-10T03:49:41Z)
- Multimodal Clustering Networks for Self-supervised Learning from Unlabeled Videos [69.61522804742427]
This paper proposes a self-supervised training framework that learns a common multimodal embedding space.
We extend the concept of instance-level contrastive learning with a multimodal clustering step to capture semantic similarities across modalities.
The resulting embedding space enables retrieval of samples across all modalities, even from unseen datasets and different domains.
arXiv Detail & Related papers (2021-04-26T15:55:01Z)
- Set Representation Learning with Generalized Sliced-Wasserstein Embeddings [22.845403993200932]
We propose a geometrically-interpretable framework for learning representations from set-structured data.
In particular, we treat the elements of a set as samples from a probability measure and propose an exact Euclidean embedding for the Generalized Sliced-Wasserstein distance.
We evaluate our proposed framework on multiple supervised and unsupervised set learning tasks and demonstrate its superiority over state-of-the-art set representation learning approaches.
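The plain (linear-slice) version of this idea is easy to sketch: project each set onto shared random directions, sort the projections, and concatenate; the Euclidean distance between two such vectors then estimates the sliced 2-Wasserstein distance between equal-sized sets. This is a hedged illustration of the sliced-Wasserstein embedding in general, not the paper's generalized (nonlinear-slice) construction.

```python
import numpy as np

def sw_embed(points, directions):
    """Embed a point set as sorted projections onto shared directions.

    points: (n, d) set elements; directions: (L, d) unit vectors shared
    across all sets. Returns an (L * n,) vector; with the 1/sqrt(L*n)
    normalisation, the Euclidean distance between two embeddings of
    equal-sized sets is a Monte Carlo estimate of their sliced
    2-Wasserstein distance. Sorting makes it permutation-invariant.
    """
    proj = points @ directions.T            # (n, L) projections
    return np.sort(proj, axis=0).T.ravel() / np.sqrt(proj.size)

rng = np.random.default_rng(1)
dirs = rng.standard_normal((50, 2))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)

set_a = rng.standard_normal((32, 2))
set_b = set_a + np.array([5.0, 0.0])        # same set, translated in x

d = np.linalg.norm(sw_embed(set_a, dirs) - sw_embed(set_b, dirs))
```

Because the embedding is a fixed-size vector, set-structured inputs can be fed to ordinary downstream models after this step.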
arXiv Detail & Related papers (2021-03-05T19:00:34Z)
- Relation-Guided Representation Learning [53.60351496449232]
We propose a new representation learning method that explicitly models and leverages sample relations.
Our framework well preserves the relations between samples.
By seeking to embed samples into a subspace, we show that our method can address the large-scale and out-of-sample problems.
arXiv Detail & Related papers (2020-07-11T10:57:45Z)
- Structure by Architecture: Structured Representations without Regularization [31.75200752252397]
We study the problem of self-supervised structured representation learning using autoencoders for downstream tasks such as generative modeling.
We design a novel autoencoder architecture capable of learning a structured representation without the need for aggressive regularization.
We demonstrate how these models learn a representation that improves results in a variety of downstream tasks including generation, disentanglement, and extrapolation.
arXiv Detail & Related papers (2020-06-14T04:37:08Z)
- Randomized Entity-wise Factorization for Multi-Agent Reinforcement Learning [59.62721526353915]
Multi-agent settings in the real world often involve tasks with varying types and quantities of agents and non-agent entities.
Our method aims to leverage these commonalities by asking the question: "What is the expected utility of each agent when only considering a randomly selected sub-group of its observed entities?"
arXiv Detail & Related papers (2020-06-07T18:28:41Z)
This list is automatically generated from the titles and abstracts of the papers in this site.