Summarising and Comparing Agent Dynamics with Contrastive Spatiotemporal Abstraction
- URL: http://arxiv.org/abs/2201.07749v1
- Date: Mon, 17 Jan 2022 11:34:59 GMT
- Title: Summarising and Comparing Agent Dynamics with Contrastive Spatiotemporal Abstraction
- Authors: Tom Bewley, Jonathan Lawry, Arthur Richards
- Abstract summary: We introduce a data-driven, model-agnostic technique for generating a human-interpretable summary of the salient points of contrast within an evolving dynamical system.
A practical algorithm is outlined for continuous state spaces, and deployed to summarise the learning histories of deep reinforcement learning agents.
- Score: 12.858982225307809
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We introduce a data-driven, model-agnostic technique for generating a
human-interpretable summary of the salient points of contrast within an
evolving dynamical system, such as the learning process of a control agent. It
involves the aggregation of transition data along both spatial and temporal
dimensions according to an information-theoretic divergence measure. A
practical algorithm is outlined for continuous state spaces, and deployed to
summarise the learning histories of deep reinforcement learning agents with the
aid of graphical and textual communication methods. We expect our method to be
complementary to existing techniques in the realm of agent interpretability.
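The abstract describes the core mechanism only at a high level: transition data are aggregated along spatial and temporal dimensions according to an information-theoretic divergence measure. The sketch below is a rough, simplified illustration of that idea rather than the authors' algorithm: it selects a single spatial split of a one-dimensional state variable that maximises the Jensen-Shannon divergence between the state-occupancy distributions of two training snapshots. The function names, the toy data, and the choice of Jensen-Shannon divergence are all assumptions made for illustration.
```python
import numpy as np

def js_divergence(p, q, eps=1e-12):
    """Jensen-Shannon divergence between two discrete distributions."""
    p, q = np.asarray(p, float) + eps, np.asarray(q, float) + eps
    p, q = p / p.sum(), q / q.sum()
    m = 0.5 * (p + q)
    kl = lambda a, b: float(np.sum(a * np.log(a / b)))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def best_spatial_split(states_a, states_b, thresholds):
    """Pick the threshold whose two-region abstraction most separates the two
    batches of visited states, scored by Jensen-Shannon divergence."""
    best_t, best_d = None, -np.inf
    for t in thresholds:
        # Occupancy of the two abstract regions (below / above the threshold).
        p = np.array([(states_a < t).mean(), (states_a >= t).mean()])
        q = np.array([(states_b < t).mean(), (states_b >= t).mean()])
        d = js_divergence(p, q)
        if d > best_d:
            best_t, best_d = t, d
    return best_t, best_d

# Toy usage: two snapshots of an agent's visited states, early vs. late in training.
rng = np.random.default_rng(0)
early = rng.normal(0.0, 1.0, size=5000)   # hypothetical early-training occupancy
late = rng.normal(2.0, 1.0, size=5000)    # hypothetical late-training occupancy
threshold, divergence = best_spatial_split(early, late, np.linspace(-3.0, 5.0, 81))
print(f"Most contrastive split at x = {threshold:.2f} (JSD = {divergence:.3f})")
```
A full implementation in the paper's setting would extend this single split to distributions over transitions rather than visited states, and would aggregate along temporal windows as well as spatial regions, but the essential operation of scoring candidate aggregations by a divergence between the systems being compared is the same.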
Related papers
- Deep ContourFlow: Advancing Active Contours with Deep Learning [3.9948520633731026]
We present a framework for both unsupervised and one-shot approaches to image segmentation.
It is capable of capturing complex object boundaries without the need for extensive labeled training data.
This capability is particularly valuable in histology, a field facing a significant shortage of annotations.
arXiv Detail & Related papers (2024-07-15T13:12:34Z)
- Active Learning of Dynamics Using Prior Domain Knowledge in the Sampling Process [18.406992961818368]
We present an active learning algorithm for learning dynamics that leverages side information by explicitly incorporating prior domain knowledge into the sampling process.
Our proposed algorithm guides the exploration toward regions that demonstrate high empirical discrepancy between the observed data and an imperfect prior model of the dynamics derived from side information.
We rigorously prove that our active learning algorithm yields a consistent estimate of the underlying dynamics by providing an explicit rate of convergence for the maximum predictive variance.
arXiv Detail & Related papers (2024-03-25T22:20:45Z)
- Learning Collective Behaviors from Observation [13.278752237440022]
We present a comprehensive examination of learning methodologies employed for the structural identification of dynamical systems.
Our approach not only ensures theoretical convergence guarantees but also exhibits computational efficiency when handling high-dimensional observational data.
arXiv Detail & Related papers (2023-11-01T22:02:08Z)
- SemanticBoost: Elevating Motion Generation with Augmented Textual Cues [73.83255805408126]
Our framework comprises a Semantic Enhancement module and a Context-Attuned Motion Denoiser (CAMD).
The CAMD approach provides an all-encompassing solution for generating high-quality, semantically consistent motion sequences.
Our experimental results demonstrate that SemanticBoost, as a diffusion-based method, outperforms auto-regressive-based techniques.
arXiv Detail & Related papers (2023-10-31T09:58:11Z)
- Spatio-Temporal Branching for Motion Prediction using Motion Increments [55.68088298632865]
Human motion prediction (HMP) has emerged as a popular research topic due to its diverse applications.
Traditional methods rely on hand-crafted features and machine learning techniques.
We propose a novel spatio-temporal branching network using incremental information for HMP.
arXiv Detail & Related papers (2023-08-02T12:04:28Z)
- Continual Vision-Language Representation Learning with Off-Diagonal Information [112.39419069447902]
Multi-modal contrastive learning frameworks like CLIP typically require a large amount of image-text samples for training.
This paper discusses the feasibility of continual CLIP training using streaming data.
arXiv Detail & Related papers (2023-05-11T08:04:46Z)
- Consistency and Diversity induced Human Motion Segmentation [231.36289425663702]
We propose a novel Consistency and Diversity induced human Motion (CDMS) algorithm.
Our model factorizes the source and target data into distinct multi-layer feature spaces.
A multi-mutual learning strategy is then applied to reduce the domain gap between the source and target data.
arXiv Detail & Related papers (2022-02-10T06:23:56Z)
- Measuring disentangled generative spatio-temporal representation [9.264758623908813]
We adopt two state-of-the-art disentangled representation learning methods and apply them to three large-scale public spatio-temporal datasets.
We find that our methods can be used to discover real-world semantics to describe the variables in the learned representation.
arXiv Detail & Related papers (2022-02-10T03:57:06Z)
- Self-Attention Neural Bag-of-Features [103.70855797025689]
We build on the recently introduced 2D-Attention and reformulate the attention learning methodology.
We propose a joint feature-temporal attention mechanism that learns a joint 2D attention mask highlighting relevant information.
arXiv Detail & Related papers (2022-01-26T17:54:14Z)
- Temporal Embeddings and Transformer Models for Narrative Text Understanding [72.88083067388155]
We present two approaches to narrative text understanding for character relationship modelling.
The temporal evolution of these relations is described by dynamic word embeddings, that are designed to learn semantic changes over time.
A supervised learning approach based on the state-of-the-art transformer model BERT is used instead to detect static relations between characters.
arXiv Detail & Related papers (2020-03-19T14:23:12Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.