LISBET: a self-supervised Transformer model for the automatic
segmentation of social behavior motifs
- URL: http://arxiv.org/abs/2311.04069v1
- Date: Tue, 7 Nov 2023 15:35:17 GMT
- Title: LISBET: a self-supervised Transformer model for the automatic
segmentation of social behavior motifs
- Authors: Giuseppe Chindemi, Benoit Girard, Camilla Bellone
- Abstract summary: We introduce LISBET (seLf-supervIsed Social BEhavioral Transformer), a model designed to detect and segment social interactions.
Our model eliminates the need for feature selection and extensive human annotation by using self-supervised learning.
LISBET can be used in hypothesis-driven mode to automate behavior classification using supervised finetuning, and in discovery-driven mode to segment social behavior motifs using unsupervised learning.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Social behavior, defined as the process by which individuals act and react in
response to others, is crucial for the function of societies and holds profound
implications for mental health. To fully grasp the intricacies of social
behavior and identify potential therapeutic targets for addressing social
deficits, it is essential to understand its core principles. Although machine
learning algorithms have made it easier to study specific aspects of complex
behavior, current methodologies tend to focus primarily on single-animal
behavior. In this study, we introduce LISBET (seLf-supervIsed Social BEhavioral
Transformer), a model designed to detect and segment social interactions. Our
model eliminates the need for feature selection and extensive human annotation
by using self-supervised learning to detect and quantify social behaviors from
dynamic body parts tracking data. LISBET can be used in hypothesis-driven mode
to automate behavior classification using supervised finetuning, and in
discovery-driven mode to segment social behavior motifs using unsupervised
learning. We found that motifs recognized using the discovery-driven approach
not only closely match the human annotations but also correlate with the
electrophysiological activity of dopaminergic neurons in the Ventral Tegmental
Area (VTA). We hope LISBET will help the community improve our understanding of
social behaviors and their neural underpinnings.
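The abstract describes two usage modes over embeddings of body-part tracking data: supervised classification of labeled behaviors, and unsupervised segmentation into motifs. The following is a minimal, hypothetical sketch of that two-mode pattern, not the authors' code: the `embed` function is a placeholder for LISBET's self-supervised Transformer encoder, nearest-prototype matching stands in for supervised finetuning, and a plain k-means loop stands in for motif discovery.

```python
# Hypothetical sketch of LISBET's two usage modes; NOT the authors' implementation.
# Assumes pose-tracking windows are lists of frames, each frame a list of
# (x, y) body-part coordinates. All function names here are illustrative.
import math
import random

def embed(window):
    """Placeholder embedding: mean and std of flattened coordinates.
    In LISBET this role is played by a self-supervised Transformer encoder."""
    flat = [c for frame in window for part in frame for c in part]
    mean = sum(flat) / len(flat)
    var = sum((c - mean) ** 2 for c in flat) / len(flat)
    return (mean, math.sqrt(var))

def dist(a, b):
    return math.dist(a, b)

def classify(window, prototypes):
    """Hypothesis-driven mode (sketch): assign a window to the labeled
    prototype embedding it is closest to (stand-in for supervised finetuning)."""
    z = embed(window)
    return min(prototypes, key=lambda label: dist(z, prototypes[label]))

def segment(windows, k=2, iters=20, seed=0):
    """Discovery-driven mode (sketch): unsupervised k-means over window
    embeddings, returning one motif label per window."""
    rng = random.Random(seed)
    zs = [embed(w) for w in windows]
    centers = rng.sample(zs, k)
    for _ in range(iters):
        labels = [min(range(k), key=lambda j: dist(z, centers[j])) for z in zs]
        for j in range(k):
            members = [z for z, l in zip(zs, labels) if l == j]
            if members:
                centers[j] = tuple(sum(c) / len(members) for c in zip(*members))
    return labels
```

In this sketch, both modes share the same embedding front end; only the head differs (labeled prototypes vs. clustering), which mirrors the abstract's claim that a single self-supervised backbone supports both hypothesis-driven and discovery-driven analysis.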
Related papers
- Self-supervised pretraining of vision transformers for animal behavioral analysis and neural encoding [12.25140375320834]
BEAST (BEhavioral Analysis via Self-supervised pretraining of Transformers) is a novel framework that pretrains experiment-specific vision transformers for diverse neuro-behavior analyses. Our method establishes a powerful and versatile backbone model that accelerates behavioral analysis in scenarios where labeled data remains scarce.
arXiv Detail & Related papers (2025-07-13T06:43:05Z) - Self-Supervised Learning-Based Multimodal Prediction on Prosocial Behavior Intentions [6.782784535456252]
There are no large, labeled datasets available for prosocial behavior. Small-scale datasets make it difficult to train deep-learning models effectively. We propose a self-supervised learning approach that harnesses multi-modal data.
arXiv Detail & Related papers (2025-07-11T00:49:46Z) - Emotion-Oriented Behavior Model Using Deep Learning [0.9176056742068812]
The accuracy of emotion-based behavior predictions is statistically validated using the two-tailed Pearson correlation.
This study is a stepping stone toward multi-faceted artificial agent interaction based on emotion-oriented behaviors.
arXiv Detail & Related papers (2023-10-28T17:27:59Z) - Dataset Bias in Human Activity Recognition [57.91018542715725]
This contribution statistically curates the training data to assess to what degree the physical characteristics of humans influence HAR performance.
We evaluate the performance of a state-of-the-art convolutional neural network on two time-series HAR datasets that vary in their sensors, activities, and recording conditions.
arXiv Detail & Related papers (2023-01-19T12:33:50Z) - CNN-Based Action Recognition and Pose Estimation for Classifying Animal
Behavior from Videos: A Survey [0.0]
Action recognition, classifying activities performed by one or more subjects in a trimmed video, forms the basis of many techniques.
Deep learning models for human action recognition have progressed over the last decade.
Interest in research that applies deep learning-based action recognition to behavior classification has increased in recent years.
arXiv Detail & Related papers (2023-01-15T20:54:44Z) - Bodily Behaviors in Social Interaction: Novel Annotations and
State-of-the-Art Evaluation [0.0]
We present BBSI, the first set of annotations of complex Bodily Behaviors embedded in continuous Social Interactions.
Based on previous work in psychology, we manually annotated 26 hours of spontaneous human behavior.
We adapt the Pyramid Dilated Attention Network (PDAN), a state-of-the-art approach for human action detection.
arXiv Detail & Related papers (2022-07-26T11:24:00Z) - Incorporating Heterogeneous User Behaviors and Social Influences for
Predictive Analysis [32.31161268928372]
We aim to incorporate heterogeneous user behaviors and social influences for behavior predictions.
This paper proposes a variant of Long Short-Term Memory (LSTM) which can consider context while modeling a behavior sequence.
A residual learning-based decoder is designed to automatically construct multiple high-order cross features based on social behavior representation.
arXiv Detail & Related papers (2022-07-24T17:05:37Z) - The world seems different in a social context: a neural network analysis
of human experimental data [57.729312306803955]
We show that it is possible to replicate human behavioral data in both individual and social task settings by modifying the precision of prior and sensory signals.
An analysis of the neural activation traces of the trained networks provides evidence that information is coded in fundamentally different ways in the network in the individual and in the social conditions.
arXiv Detail & Related papers (2022-03-03T17:19:12Z) - Overcoming the Domain Gap in Neural Action Representations [60.47807856873544]
3D pose data can now be reliably extracted from multi-view video sequences without manual intervention.
We propose to use it to guide the encoding of neural action representations together with a set of neural and behavioral augmentations.
To reduce the domain gap, during training, we swap neural and behavioral data across animals that seem to be performing similar actions.
arXiv Detail & Related papers (2021-12-02T12:45:46Z) - Overcoming the Domain Gap in Contrastive Learning of Neural Action
Representations [60.47807856873544]
A fundamental goal in neuroscience is to understand the relationship between neural activity and behavior.
We generated a new multimodal dataset consisting of the spontaneous behaviors generated by fruit flies.
This dataset and our new set of augmentations promise to accelerate the application of self-supervised learning methods in neuroscience.
arXiv Detail & Related papers (2021-11-29T15:27:51Z) - Beyond Tracking: Using Deep Learning to Discover Novel Interactions in
Biological Swarms [3.441021278275805]
We propose training deep network models to predict system-level states directly from generic graphical features from the entire view.
Because the resulting predictive models are not based on human-understood predictors, we use explanatory modules.
This represents an example of augmented intelligence in behavioral ecology -- knowledge co-creation in a human-AI team.
arXiv Detail & Related papers (2021-08-20T22:50:41Z) - Muti-view Mouse Social Behaviour Recognition with Deep Graphical Model [124.26611454540813]
Social behaviour analysis of mice is an invaluable tool to assess the efficacy of therapies for neurodegenerative diseases.
Because of the potential to create rich descriptions of mouse social behaviors, the use of multi-view video recordings for rodent observations is receiving increasing attention.
We propose a novel multiview latent-attention and dynamic discriminative model that jointly learns view-specific and view-shared sub-structures.
arXiv Detail & Related papers (2020-11-04T18:09:58Z) - Learning Human-Object Interaction Detection using Interaction Points [140.0200950601552]
We propose a novel fully-convolutional approach that directly detects the interactions between human-object pairs.
Our network predicts interaction points, which directly localize and classify the interaction.
Experiments are performed on two popular benchmarks: V-COCO and HICO-DET.
arXiv Detail & Related papers (2020-03-31T08:42:06Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.