Cross-Skeleton Interaction Graph Aggregation Network for Representation
Learning of Mouse Social Behaviour
- URL: http://arxiv.org/abs/2208.03819v1
- Date: Sun, 7 Aug 2022 21:06:42 GMT
- Title: Cross-Skeleton Interaction Graph Aggregation Network for Representation
Learning of Mouse Social Behaviour
- Authors: Feixiang Zhou, Xinyu Yang, Fang Chen, Long Chen, Zheheng Jiang, Hui
Zhu, Reiko Heckel, Haikuan Wang, Minrui Fei and Huiyu Zhou
- Abstract summary: Social behaviour analysis of mice has become an increasingly popular research area in behavioural neuroscience.
It is challenging to model complex social interactions between mice due to highly deformable body shapes and ambiguous movement patterns.
We propose a Cross-Skeleton Interaction Graph Aggregation Network (CS-IGANet) to learn abundant dynamics of freely interacting mice.
- Score: 24.716092330419123
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: Automated social behaviour analysis of mice has become an increasingly
popular research area in behavioural neuroscience. Recently, pose information
(i.e., locations of keypoints or skeleton) has been used to interpret social
behaviours of mice. Nevertheless, effective encoding and decoding of social
interaction information underlying the keypoints of mice has rarely been
investigated in existing methods. In particular, it is challenging to model
complex social interactions between mice due to highly deformable body shapes
and ambiguous movement patterns. To deal with the interaction modelling
problem, we here propose a Cross-Skeleton Interaction Graph Aggregation Network
(CS-IGANet) to learn abundant dynamics of freely interacting mice, where a
Cross-Skeleton Node-level Interaction module (CS-NLI) is used to model
multi-level interactions (i.e., intra-, inter- and cross-skeleton
interactions). Furthermore, we design a novel Interaction-Aware Transformer
(IAT) to dynamically learn the graph-level representation of social behaviours
and update the node-level representation, guided by our proposed
interaction-aware self-attention mechanism. Finally, to enhance the
representation ability of our model, an auxiliary self-supervised learning task
is proposed for measuring the similarity between cross-skeleton nodes.
Experimental results on the standard CRIM13-Skeleton and our PDMB-Skeleton
datasets show that our proposed model outperforms several other
state-of-the-art approaches.
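The abstract names the building blocks of CS-IGANet but gives no equations, so a small illustration may help. The sketch below shows one way an "interaction-aware" self-attention step could bias ordinary self-attention over the stacked keypoints of two mice with a learned pairwise interaction score; the class name, the bilinear scorer and all shapes are our assumptions, not the authors' implementation.

```python
# Hypothetical sketch of interaction-aware self-attention over the
# stacked keypoints of two mice; NOT the CS-IGANet reference code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class InteractionAwareSelfAttention(nn.Module):
    """Self-attention whose weights are biased by a learned
    pairwise interaction score between skeleton nodes."""

    def __init__(self, dim: int):
        super().__init__()
        self.q = nn.Linear(dim, dim)
        self.k = nn.Linear(dim, dim)
        self.v = nn.Linear(dim, dim)
        # Assumed interaction scorer: one scalar bias per node pair.
        self.inter = nn.Bilinear(dim, dim, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (N, dim); N stacks the keypoints of both animals.
        n, d = x.shape
        content = self.q(x) @ self.k(x).t() / d ** 0.5         # (N, N)
        left = x.unsqueeze(1).expand(n, n, d).reshape(-1, d)
        right = x.unsqueeze(0).expand(n, n, d).reshape(-1, d)
        bias = self.inter(left, right).view(n, n)              # pairwise scores
        weights = F.softmax(content + bias, dim=-1)
        return weights @ self.v(x)      # updated node-level representation

# Toy usage: 2 mice x 7 keypoints = 14 nodes with 64-dim features.
nodes = torch.randn(14, 64)
updated = InteractionAwareSelfAttention(64)(nodes)
graph_repr = updated.mean(dim=0)        # crude graph-level readout
```

A mean-pooled readout stands in for a graph-level representation here; the paper's IAT presumably learns this jointly with the node-level updates.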
Related papers
- Visual-Geometric Collaborative Guidance for Affordance Learning [63.038406948791454]
We propose a visual-geometric collaborative guided affordance learning network that incorporates visual and geometric cues.
Our method outperforms representative models in both objective metrics and visual quality.
arXiv Detail & Related papers (2024-10-15T07:35:51Z)
- Learning Mutual Excitation for Hand-to-Hand and Human-to-Human Interaction Recognition [22.538114033191313]
We propose a mutual excitation graph convolutional network (me-GCN) by stacking mutual excitation graph convolution (me-GC) layers.
Each me-GC layer learns mutual information at every stage of the graph convolution operations.
Our proposed me-GC outperforms state-of-the-art GCN-based and Transformer-based methods.
arXiv Detail & Related papers (2024-02-04T10:00:00Z)
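As a rough illustration of the mutual-excitation idea summarized above, the toy layer below lets each entity's features be re-weighted by a gate computed from the other entity's pooled context; the gating design is our assumption, not the paper's me-GC layer.

```python
# Toy mutual excitation between two skeleton feature streams
# (e.g., two hands or two people); illustrative only.
import torch
import torch.nn as nn

class MutualExcitation(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        self.gate_a = nn.Linear(dim, dim)   # context of B excites A
        self.gate_b = nn.Linear(dim, dim)   # context of A excites B

    def forward(self, a: torch.Tensor, b: torch.Tensor):
        # a, b: (num_joints, dim) features of the two entities.
        ctx_a = torch.sigmoid(self.gate_a(b.mean(0, keepdim=True)))
        ctx_b = torch.sigmoid(self.gate_b(a.mean(0, keepdim=True)))
        return a * ctx_a, b * ctx_b         # cross-excited features

layer = MutualExcitation(32)
left, right = torch.randn(21, 32), torch.randn(21, 32)  # 21 hand joints
left_e, right_e = layer(left, right)
```

Stacking such layers, as the summary describes, would let mutual information flow at every depth of the network.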
- InterControl: Zero-shot Human Interaction Generation by Controlling Every Joint [67.6297384588837]
We introduce a novel controllable motion generation method, InterControl, which encourages synthesized motions to maintain desired distances between joint pairs.
We demonstrate that the distance between joint pairs for human-wise interactions can be generated using an off-the-shelf Large Language Model.
arXiv Detail & Related papers (2023-11-27T14:32:33Z)
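The controllable-generation idea above reduces to a differentiable constraint on joint pairs. Below is a hedged sketch of such a loss; the function name, shapes and the target distances (which the summary says an LLM can propose) are illustrative assumptions, not InterControl's actual objective.

```python
# Sketch of a joint-pair distance loss for interaction control.
import torch

def joint_distance_loss(motion: torch.Tensor,
                        pairs: torch.Tensor,
                        targets: torch.Tensor) -> torch.Tensor:
    """motion: (T, J, 3) joint positions over T frames; pairs: (P, 2)
    joint indices; targets: (P,) desired distances per pair."""
    a = motion[:, pairs[:, 0]]              # (T, P, 3)
    b = motion[:, pairs[:, 1]]              # (T, P, 3)
    dist = (a - b).norm(dim=-1)             # (T, P) per-frame distances
    return ((dist - targets) ** 2).mean()   # pull pairs toward targets

# Two 22-joint skeletons concatenated into J = 44; pair (9, 31) is a
# hypothetical wrist-to-wrist constraint held at 0.1 m.
motion = torch.randn(60, 44, 3, requires_grad=True)
loss = joint_distance_loss(motion, torch.tensor([[9, 31]]),
                           torch.tensor([0.1]))
loss.backward()   # gradients can steer a generator or diffusion sampler
```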
- Towards a Unified Transformer-based Framework for Scene Graph Generation and Human-object Interaction Detection [116.21529970404653]
We introduce SG2HOI+, a unified one-step model based on the Transformer architecture.
Our approach employs two interactive hierarchical Transformers to seamlessly unify the tasks of SGG and HOI detection.
Our approach achieves competitive performance when compared to state-of-the-art HOI methods.
arXiv Detail & Related papers (2023-11-03T07:25:57Z)
- IGFormer: Interaction Graph Transformer for Skeleton-based Human Interaction Recognition [26.05948629634753]
We propose a novel Interaction Graph Transformer (IGFormer) network for skeleton-based interaction recognition.
IGFormer constructs interaction graphs according to the semantic and distance correlations between the interactive body parts.
We also propose a Semantic Partition Module to transform each human skeleton sequence into a Body-Part-Time sequence.
arXiv Detail & Related papers (2022-07-25T12:11:15Z)
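Since the summary above says interaction graphs are built from semantic and distance correlations, here is a minimal distance-based variant: a Gaussian kernel over body-part centroids yields a cross-person adjacency. The kernel and normalisation are our assumptions, not IGFormer's exact construction.

```python
# Build a cross-person interaction graph from body-part distances;
# illustrative, not IGFormer's rule.
import torch

def interaction_graph(parts_a: torch.Tensor,
                      parts_b: torch.Tensor,
                      sigma: float = 0.5) -> torch.Tensor:
    """parts_a, parts_b: (P, 3) body-part centroids of two people.
    Returns a (P, P) adjacency; nearer part pairs interact more."""
    d = torch.cdist(parts_a, parts_b)              # pairwise distances
    adj = torch.exp(-(d ** 2) / (2 * sigma ** 2))  # Gaussian affinity
    return adj / adj.sum(dim=-1, keepdim=True)     # row-normalised

a, b = torch.rand(5, 3), torch.rand(5, 3)          # 5 body parts each
print(interaction_graph(a, b))                     # (5, 5) adjacency
```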
- A Skeleton-aware Graph Convolutional Network for Human-Object Interaction Detection [14.900704382194013]
We propose a skeleton-aware graph convolutional network for human-object interaction detection, named SGCN4HOI.
Our network exploits the spatial connections between human keypoints and object keypoints to capture their fine-grained structural interactions via graph convolutions.
It fuses such geometric features with visual features and spatial configuration features obtained from human-object pairs.
arXiv Detail & Related papers (2022-07-11T15:20:18Z)
- Dynamic Modeling of Hand-Object Interactions via Tactile Sensing [133.52375730875696]
In this work, we employ a high-resolution tactile glove to perform four different interactive activities on a diversified set of objects.
We build our model on a cross-modal learning framework and generate the labels using a visual processing pipeline to supervise the tactile model.
This work takes a step toward modeling the dynamics of hand-object interactions from dense tactile sensing.
arXiv Detail & Related papers (2021-09-09T16:04:14Z)
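The cross-modal setup above (labels from a visual pipeline supervising a tactile model) can be caricatured in a few lines; the network, label source and shapes below are placeholders, not the paper's pipeline.

```python
# Placeholder cross-modal training loop: visually derived labels
# supervise a tactile encoder.
import torch
import torch.nn as nn

# Tactile encoder: 32x32 pressure map -> 3-DoF estimate (illustrative).
tactile_net = nn.Sequential(nn.Flatten(), nn.Linear(32 * 32, 64),
                            nn.ReLU(), nn.Linear(64, 3))
opt = torch.optim.Adam(tactile_net.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for step in range(100):
    pressure = torch.rand(8, 1, 32, 32)  # batch of glove pressure maps
    labels = torch.rand(8, 3)            # stand-in for visual-pipeline labels
    loss = loss_fn(tactile_net(pressure), labels)
    opt.zero_grad()
    loss.backward()
    opt.step()
```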
- Convolutions for Spatial Interaction Modeling [9.408751013132624]
We consider the problem of spatial interaction modeling in the context of predicting the motion of actors around autonomous vehicles.
We revisit convolutions and show that they can demonstrate comparable performance to graph networks in modeling spatial interactions with lower latency.
arXiv Detail & Related papers (2021-04-15T00:41:30Z)
- Multi-view Mouse Social Behaviour Recognition with Deep Graphical Model [124.26611454540813]
Social behaviour analysis of mice is an invaluable tool for assessing the efficacy of therapies for neurodegenerative diseases.
Because of its potential to create rich descriptions of mouse social behaviours, the use of multi-view video recordings for rodent observation is receiving increasing attention.
We propose a novel multiview latent-attention and dynamic discriminative model that jointly learns view-specific and view-shared sub-structures.
arXiv Detail & Related papers (2020-11-04T18:09:58Z)
- A Graph-based Interactive Reasoning for Human-Object Interaction Detection [71.50535113279551]
We present a novel graph-based interactive reasoning model called Interactive Graph (abbr. in-Graph) to infer HOIs.
We construct a new framework to assemble in-Graph models for detecting HOIs, namely in-GraphNet.
Our framework is end-to-end trainable and free from costly annotations like human pose.
arXiv Detail & Related papers (2020-07-14T09:29:03Z)