ROBUST: Rodent Observable Behavior Understanding and Study Toolkit

Overview GIF

A toolkit for unsupervised behavior analysis of rats, built on self-supervised Variational Autoencoder (VAE) representation learning and Hidden Markov Model (HMM) clustering.


The GUI Tool for Unsupervised Behavior Analysis integrates machine learning models with an intuitive user interface to give researchers comprehensive behavioral insights from rat videos.

Background: With the exponential growth in video data, manual analysis of animal behavior is time-consuming and subject to human bias. Automated tools, especially those built on machine learning, offer faster, more consistent, and deeper insight into animal behavior.

Core Functionality:

  1. 2D Pose Estimation: Using DeepLabCut 2.3, the tool estimates 2D poses of rats from video, yielding per-frame keypoint coordinates that describe position and movement.
  2. Motion Dynamics Learning: A Recurrent Neural Network Variational Autoencoder (RNN-VAE) learns latent representations (𝑍) of the motion dynamics across the video sequences.
  3. Behavioral Segmentation and Clustering: The learned latent representations are clustered without supervision (e.g., with a Hidden Markov Model) to separate the behaviors exhibited in the video sequences. A UMAP projection of the latent space visualizes the resulting clusters.
  4. Video Action Segmentation: Using the clusters, the tool splits each video into segments labeled by the detected behavior, giving a timeline-based view of behavioral patterns.
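To make steps 3 and 4 concrete, here is a minimal sketch of how per-frame latent vectors can be clustered into behavior motifs and then run-length encoded into a labeled timeline. It uses plain k-means as a simplified stand-in for the HMM clustering the toolkit relies on, and the function names (`kmeans`, `segments_from_labels`) are illustrative, not part of the actual API.

```python
import numpy as np

def kmeans(Z, k, iters=50):
    """Plain k-means on per-frame latent vectors Z of shape (T, d).

    A simplified stand-in for the toolkit's HMM-based clustering;
    centers are seeded with frames spread evenly over the video.
    """
    centers = Z[np.linspace(0, len(Z) - 1, k).astype(int)].copy()
    for _ in range(iters):
        # squared distance of every frame to every center -> nearest label
        d = ((Z[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = Z[labels == j].mean(0)
    return labels

def segments_from_labels(labels, fps=30.0):
    """Run-length encode per-frame motif labels into a timeline of
    (start_seconds, end_seconds, motif) segments."""
    segs, start = [], 0
    for t in range(1, len(labels) + 1):
        if t == len(labels) or labels[t] != labels[start]:
            segs.append((start / fps, t / fps, int(labels[start])))
            start = t
    return segs
```

A video clustered into motifs `[0, 0, 1, 1, 1, 0]` at 1 fps would yield the segments `(0.0, 2.0, 0)`, `(2.0, 5.0, 1)`, `(5.0, 6.0, 0)`, i.e. the timeline view described in step 4.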

Technical Specifications:

  • Dependencies: DeepLabCut 2.3 for pose estimation and VAME for behavioral segmentation.
  • Model Architecture: A combination of RNN and VAE for motion dynamics learning.
  • Clustering and Visualization: Utilizes UMAP for reducing dimensions and visualizing high-dimensional latent spaces, offering clear clusters of behavior.
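As an illustration of the visualization step, the sketch below projects high-dimensional latent vectors to 2D. It uses a linear PCA projection via NumPy as a stand-in for UMAP's nonlinear embedding (so the real tool's plots will differ); `project_2d` is a hypothetical helper, not part of the toolkit's API.

```python
import numpy as np

def project_2d(Z):
    """Project latent vectors Z of shape (T, d) to 2-D via PCA.

    A linear stand-in for UMAP: center the data, take the top two
    right-singular vectors (principal axes), and project onto them.
    """
    Zc = Z - Z.mean(0)
    _, _, Vt = np.linalg.svd(Zc, full_matrices=False)
    return Zc @ Vt[:2].T
```

The first output column captures the direction of greatest variance, the second the next greatest, so well-separated behavior clusters in the latent space remain visible in the 2D scatter plot.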

Limitations & Future Directions: While the tool automates much of behavior analysis, it currently operates on 2D pose estimation; future iterations may integrate 3D pose estimation for richer data and insights. Newer VAE architectures could also be explored to improve latent-space learning and representation. Subsequent versions could add post-hoc analysis for advanced visualizations, generation of synthetic neural data from motion patterns, and other behavioral metrics.

This work stands at the intersection of behavioral science and AI, offering researchers a powerful and automated solution to unlock complex behavioral patterns in lab animal videos.