Learning Keypoints for Multi-Agent Behavior Analysis using Self-Supervision
- URL: http://arxiv.org/abs/2409.09455v1
- Date: Sat, 14 Sep 2024 14:46:44 GMT
- Title: Learning Keypoints for Multi-Agent Behavior Analysis using Self-Supervision
- Authors: Daniel Khalil, Christina Liu, Pietro Perona, Jennifer J. Sun, Markus Marks
- Abstract summary: B-KinD-multi is a novel approach that leverages pre-trained video segmentation models to guide keypoint discovery in multi-agent scenarios.
Extensive evaluations demonstrate improved keypoint regression and downstream behavioral classification in videos of flies, mice, and rats.
Our method generalizes well to other species, including ants, bees, and humans.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The study of social interactions and collective behaviors through multi-agent video analysis is crucial in biology. While self-supervised keypoint discovery has emerged as a promising solution to reduce the need for manual keypoint annotations, existing methods often struggle with videos containing multiple interacting agents, especially those of the same species and color. To address this, we introduce B-KinD-multi, a novel approach that leverages pre-trained video segmentation models to guide keypoint discovery in multi-agent scenarios. This eliminates the need for time-consuming manual annotations on new experimental settings and organisms. Extensive evaluations demonstrate improved keypoint regression and downstream behavioral classification in videos of flies, mice, and rats. Furthermore, our method generalizes well to other species, including ants, bees, and humans, highlighting its potential for broad applications in automated keypoint annotation for multi-agent behavior analysis. Code available under: https://danielpkhalil.github.io/B-KinD-Multi
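The core idea of the abstract, using segmentation masks to restrict keypoint discovery to one agent at a time, can be illustrated in miniature. The sketch below is a hypothetical stand-in, not the paper's method: it places a single "keypoint" per agent at the motion-weighted centroid (a soft-argmax) inside that agent's mask.

```python
import numpy as np

def masked_keypoints(motion, masks):
    """Toy stand-in for segmentation-guided keypoint discovery:
    one keypoint per agent, taken as the motion-weighted centroid
    restricted to that agent's segmentation mask."""
    h, w = motion.shape
    ys, xs = np.mgrid[0:h, 0:w]
    keypoints = []
    for mask in masks:
        weights = motion * mask          # motion evidence inside this agent only
        total = weights.sum()
        if total == 0:                   # agent did not move in this frame pair
            keypoints.append(None)
            continue
        keypoints.append((float((ys * weights).sum() / total),
                          float((xs * weights).sum() / total)))
    return keypoints

# Two non-overlapping "agents" on a 10x10 motion map.
motion = np.zeros((10, 10))
motion[2, 3] = 1.0                       # agent A moves here
motion[7, 8] = 1.0                       # agent B moves here
mask_a = np.zeros((10, 10)); mask_a[:5, :5] = 1
mask_b = np.zeros((10, 10)); mask_b[5:, 5:] = 1
print(masked_keypoints(motion, [mask_a, mask_b]))  # [(2.0, 3.0), (7.0, 8.0)]
```

Restricting the centroid to each mask is what keeps two same-colored agents from collapsing onto one keypoint, which is the failure mode the abstract describes for earlier methods.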
Related papers
- Appearance-Based Refinement for Object-Centric Motion Segmentation [85.2426540999329]
We introduce an appearance-based refinement method that leverages temporal consistency in video streams to correct inaccurate flow-based proposals.
Our approach involves a sequence-level selection mechanism that identifies accurate flow-predicted masks as exemplars.
Its performance is evaluated on multiple video segmentation benchmarks, including DAVIS, YouTube, SegTrackv2, and FBMS-59.
arXiv Detail & Related papers (2023-12-18T18:59:51Z) - Automated Behavioral Analysis Using Instance Segmentation [2.043437148047176]
Animal behavior analysis plays a crucial role in various fields, such as life science and biomedical research.
The scarcity of available data and the high cost associated with obtaining a large number of labeled datasets pose significant challenges.
We propose a novel approach that leverages instance segmentation-based transfer learning to address these issues.
arXiv Detail & Related papers (2023-12-12T20:36:36Z) - Mitigating Shortcut Learning with Diffusion Counterfactuals and Diverse Ensembles [95.49699178874683]
We propose DiffDiv, an ensemble diversification framework exploiting Diffusion Probabilistic Models (DPMs).
We show that DPMs can generate images with novel feature combinations, even when trained on samples displaying correlated input features.
We show that DPM-guided diversification is sufficient to remove dependence on shortcut cues, without a need for additional supervised signals.
arXiv Detail & Related papers (2023-11-23T15:47:33Z) - Open-Vocabulary Animal Keypoint Detection with Semantic-feature Matching [74.75284453828017]
The Open-Vocabulary Keypoint Detection (OVKD) task is designed to use text prompts to identify arbitrary keypoints across any species.
We have developed a novel framework named Open-Vocabulary Keypoint Detection with Semantic-feature Matching (KDSM).
This framework combines vision and language models, creating an interplay between language features and local keypoint visual features.
arXiv Detail & Related papers (2023-10-08T07:42:41Z) - SuperAnimal pretrained pose estimation models for behavioral analysis [42.206265576708255]
Quantification of behavior is critical in applications ranging from neuroscience to veterinary medicine and animal conservation.
We present a series of technical innovations that enable a new method, collectively called SuperAnimal, to develop unified foundation models.
arXiv Detail & Related papers (2022-03-14T18:46:57Z) - Self-Supervised Keypoint Discovery in Behavioral Videos [37.367739727481016]
We propose a method for learning the posture and structure of agents from unlabelled behavioral videos.
Our method uses an encoder-decoder architecture with a geometric bottleneck to reconstruct the difference between video frames.
By focusing only on regions of movement, our approach works directly on input videos without requiring manual annotations.
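The geometric bottleneck described above forces the network to pass only keypoint coordinates from encoder to decoder. A common way to realize this (though the sketch below is an illustrative NumPy version, not the authors' code) is a spatial softmax that collapses a heatmap to a coordinate, paired with a Gaussian renderer on the decoder side.

```python
import numpy as np

def spatial_softmax(heatmap, temperature=1.0):
    """Geometric bottleneck: collapse a dense heatmap to a single (y, x)
    coordinate via a differentiable soft-argmax (spatial softmax)."""
    h, w = heatmap.shape
    probs = np.exp(heatmap / temperature)
    probs /= probs.sum()
    ys, xs = np.mgrid[0:h, 0:w]
    return float((ys * probs).sum()), float((xs * probs).sum())

def gaussian_map(coord, shape, sigma=1.0):
    """Decoder side: re-render the coordinate as a Gaussian blob, so
    reconstruction can only exploit keypoint locations, not appearance."""
    ys, xs = np.mgrid[0:shape[0], 0:shape[1]]
    return np.exp(-((ys - coord[0]) ** 2 + (xs - coord[1]) ** 2)
                  / (2 * sigma ** 2))

heat = np.zeros((8, 8))
heat[5, 2] = 10.0                        # strong activation at (5, 2)
coord = spatial_softmax(heat)
print(coord)                             # close to (5.0, 2.0)
blob = gaussian_map(coord, (8, 8))       # input to the reconstruction decoder
```

Because the soft-argmax is differentiable, the reconstruction loss on frame differences can be backpropagated through the coordinates, which is what lets keypoints emerge without manual annotation.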
arXiv Detail & Related papers (2021-12-09T18:55:53Z) - Overcoming the Domain Gap in Neural Action Representations [60.47807856873544]
3D pose data can now be reliably extracted from multi-view video sequences without manual intervention.
We propose to use it to guide the encoding of neural action representations together with a set of neural and behavioral augmentations.
To reduce the domain gap, during training, we swap neural and behavioral data across animals that seem to be performing similar actions.
arXiv Detail & Related papers (2021-12-02T12:45:46Z) - JOKR: Joint Keypoint Representation for Unsupervised Cross-Domain Motion Retargeting [53.28477676794658]
Unsupervised motion retargeting in videos has seen substantial advancements through the use of deep neural networks.
We introduce JOKR - a JOint Keypoint Representation that handles both the source and target videos, without requiring any object prior or data collection.
We evaluate our method both qualitatively and quantitatively, and demonstrate that our method handles various cross-domain scenarios, such as different animals, different flowers, and humans.
arXiv Detail & Related papers (2021-06-17T17:32:32Z) - Time-series Imputation of Temporally-occluded Multiagent Trajectories [18.862173210927658]
We study the problem of multiagent time-series imputation, where available past and future observations of subsets of agents are used to estimate missing observations for other agents.
Our approach, called the Graph Imputer, uses forward- and backward-information in combination with graph networks and variational autoencoders.
We evaluate our approach on a dataset of football matches, using a projective camera module to train and evaluate our model for the off-screen player state estimation setting.
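The forward-and-backward idea behind the Graph Imputer can be conveyed with a far simpler stand-in than its graph-network variational model: fill each gap by blending a forward estimate (from the last observation before the gap) with a backward estimate (from the first observation after it), which for a distance-based blend reduces to linear interpolation.

```python
def impute_bidirectional(traj):
    """Fill None gaps in a 1-D trajectory by blending forward and backward
    estimates, weighted by distance to the nearest observation on each side.
    A drastically simplified, hypothetical stand-in for the Graph Imputer's
    forward/backward variational model."""
    n = len(traj)
    out = list(traj)
    i = 0
    while i < n:
        if out[i] is None:
            j = i
            while j < n and out[j] is None:   # find end of the gap
                j += 1
            left = out[i - 1] if i > 0 else None
            right = out[j] if j < n else None
            for k in range(i, j):
                if left is None:              # gap at sequence start
                    out[k] = right
                elif right is None:           # gap at sequence end
                    out[k] = left
                else:                         # blend forward/backward estimates
                    t = (k - (i - 1)) / (j - (i - 1))
                    out[k] = left + t * (right - left)
            i = j
        else:
            i += 1
    return out

print(impute_bidirectional([0.0, None, None, 3.0]))  # [0.0, 1.0, 2.0, 3.0]
```

The paper's model replaces this linear blend with learned dynamics over all agents jointly, so imputed positions respect inter-agent interactions rather than each trajectory in isolation.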
arXiv Detail & Related papers (2021-06-08T09:58:43Z) - Muti-view Mouse Social Behaviour Recognition with Deep Graphical Model [124.26611454540813]
Social behaviour analysis of mice is an invaluable tool for assessing the efficacy of therapies for neurodegenerative diseases.
Because of their potential to create rich descriptions of mouse social behaviour, multi-view video recordings for rodent observation are receiving increasing attention.
We propose a novel multiview latent-attention and dynamic discriminative model that jointly learns view-specific and view-shared sub-structures.
arXiv Detail & Related papers (2020-11-04T18:09:58Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.