An interactive software tool for tracking and visualizing animal motion patterns using computer vision
Table of Contents
- Introduction
- Section 1: Video demonstration of AnimalMotionViz
- Section 2: Setting up the environment for running AnimalMotionViz locally
- Section 3 (optional): Creating a custom mask for tracking motion patterns in a specific region
- Section 4: How AnimalMotionViz works and its application
- 4.1: Overview of AnimalMotionViz
- 4.2: Guidelines on setting up the video processing parameters
- 4.2.1: Uploading the input video
- 4.2.2: Uploading the mask image if available
- 4.2.3: Selecting a background subtraction algorithm
- 4.2.4: Specifying the frame processing interval
- 4.2.5: Selecting a kernel size for morphological operation
- 4.2.6: Selecting a threshold value for filtering detected motions
- 4.2.7: Choosing the overlay parameters
- 4.2.8: Selecting a colormap
- 4.2.9: Starting the video processing
- Section 5: Results
- Contact information and help
- References
- License
To provide novel insights into the movement and space use of dairy cattle, we developed AnimalMotionViz, an open-source software tool that processes video data to monitor animal movement patterns using computer vision. The software generates a space-use distribution map image, a space-use distribution map video, a core and full-range image, and motion metrics, including the total and within-quadrant percentages of area used, as well as the top three peak intensity locations. This software tool aims to support the broader adoption of computer vision systems, thereby further enabling precision livestock farming.
We provide an online video demonstrating how to use the AnimalMotionViz software.
1.1 AnimalMotionViz online demonstration video [Video demo]
2.1 Installing Conda on your local computer
```shell
# clone the "AnimalMotionViz/" folder from GitHub
git clone https://github.com/uf-aiaos/AnimalMotionViz.git

# change to the cloned/downloaded folder
cd AnimalMotionViz/

# create the conda environment ("environment.yml" is available under "AnimalMotionViz/")
conda env create -f environment.yml

# activate the created environment
conda activate animalmotionviz

# change to the source code directory
cd AnimalMotionViz_sourcecode/

# run the app
python app.py
```
After that, open the following link http://127.0.0.1:8050/ in your web browser, and you can now use AnimalMotionViz locally!
For users interested in tracking animal motion patterns within a specific region, a mask image created using annotation tools can be uploaded to specify a region of interest in the image. While this step is optional, it is recommended as it helps define specific areas to be considered during video processing, increasing the focus and relevance of the analysis. Below, we have provided a tutorial on creating a mask image using the open-source graphical annotation tool LabelMe.
3.1.1 Creating a Conda environment with Python installed
```shell
# create a conda env named `labelme`
conda create --name labelme

# activate the created `labelme` env
conda activate labelme

# install LabelMe
pip install labelme

# check the installed version
labelme --version

# open the app
labelme
```
Extract a frame from your video and use it to create a mask. Make sure the frame size matches your video’s resolution. If you are using VLC or QuickTime Player, you can use a built-in function to capture a frame from the video.
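Before uploading, it can help to confirm that the mask and the video frame share the same resolution. A minimal sketch of such a check (the helper function and the shapes below are illustrative, not part of AnimalMotionViz):

```python
def mask_matches_frame(mask_shape, frame_shape):
    """Return True when the mask's height/width equal the frame's.

    Channel counts are ignored: a grayscale mask (H, W) is compared
    against a color frame (H, W, 3) on the first two dimensions only.
    """
    return tuple(mask_shape[:2]) == tuple(frame_shape[:2])

# e.g., a 640x480 grayscale mask against a 640x480 color frame
mask_matches_frame((480, 640), (480, 640, 3))   # True
mask_matches_frame((480, 640), (720, 1280, 3))  # False
```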
```shell
# go to the directory where you saved the .json file
cd "Pictures\New Folder"

# convert the .json file
labelme_json_to_dataset annotation.json -o annotation_json
```
Note that only the mask image, `label.png`, will need to be uploaded in step 4.2.2 to specify a region of interest.
After setting up the necessary dependencies, we are ready to use the AnimalMotionViz software. This section provides detailed instructions on how to run and operate the AnimalMotionViz app. The source code of AnimalMotionViz is available at AnimalMotionViz_sourcecode.
AnimalMotionViz facilitates the uploading of video files via the dash-uploader component, with the option to include a mask defining the region of interest (ROI). The software offers several background subtraction algorithms, including MOG2, KNN, GMG, CNT, GSOC, and LSBP, implemented using OpenCV. Users can then specify the interval for frame processing (e.g., every nth frame). Users can also select a kernel size for the morphological operation, which mitigates small noise (birds, leaves, etc.), as well as the weights (alpha and beta) of the original frame and the motion map overlay, respectively. Additionally, the core and full range are computed by applying Kernel Density Estimation (KDE) to the centroids of the detected contours representing motion. The core range identifies areas with the highest motion density (50% isopleth), while the full range captures almost all detected motion (95% isopleth). Convex hulls are also computed to outline the external boundaries of the detected motion regions, providing a comprehensive view of the movements. Several colormaps, such as Bone, Ocean, Pink, and Hot, are available for enhancing the visualization of the motion map. The space-use distribution map is generated by applying a colormap to the accumulated image obtained from background subtraction and filtering, which is then overlaid on the original frame.
This section provides clear and concise guidelines for configuring parameters in AnimalMotionViz, including uploading video and mask files, selecting background subtraction algorithms, adjusting overlay settings, and applying colormaps. By following these instructions, users can fully utilize AnimalMotionViz to track and visualize animal movement patterns.
Upload the video file for tracking animal motion patterns. The software supports various video formats, including mp4, avi, mov, wmv, etc. We have also provided an example video to demonstrate the usage of the software.
You may upload a mask (e.g., the label.png file created above using LabelMe) to define the region or area of interest, which can be applied during video processing.
Choose a background subtraction algorithm from MOG2, KNN, GMG, CNT, GSOC, and LSBP, implemented using OpenCV.
In this step, the frame processing interval allows users to select a subset of frames from the video for analysis by choosing to process every nth frame (e.g., every 5th frame), which reduces computational load. This approach is particularly useful for long videos or when changes between consecutive frames are trivial, allowing users to capture key movements efficiently.
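The frame-selection logic amounts to keeping every nth frame index, which can be sketched as (a hypothetical helper, not the app's own function):

```python
def frames_to_process(total_frames, interval):
    """Indices of the frames kept when processing every `interval`-th frame."""
    return list(range(0, total_frames, interval))

# For a 20-frame clip processed every 5th frame:
frames_to_process(20, 5)  # [0, 5, 10, 15]
```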
Here, users can specify the kernel size for the morphological operation. A smaller kernel focuses on removing minor noise while maintaining the integrity of key structures in the image. On the other hand, a larger kernel is more effective at eliminating substantial noise but may also risk removing important details or features, depending on the complexity of the image.
A threshold value can be set to filter the detected movements, focusing on more significant motions when calculating the core and full range. Increasing the threshold will exclude smaller movements and noise, allowing the analysis to focus on substantial motion patterns while reducing the effect of small noise.
Customize the weight (alpha and beta) of the original frame and the motion map overlay to achieve the desired visual effect. Alpha is the weight of the first array elements (frame image), while Beta is the weight of the second array elements (overlay).
Select from various colormaps such as Bone, Ocean, Pink, and Hot to enhance the visualization of the returned motion space-use distribution map.
Run the video analysis to track and visualize animal motion patterns based on the provided parameters.
- Peak Intensity Location: This metric refers to the specific location within the region where the most movement is detected. The table shows three peak intensity locations, each with an X and Y coordinate.
- Overall Percentage of Used Region: This metric represents the portion of the entire region that is used for movement.
- Quadrant X Percentage of Used Region: This metric breaks down how much space is used for movement in each quadrant of the region. There are four quadrants, labelled 1 to 4. For each quadrant, the metric shows the percentage of the quadrant’s space that is being used.
- Core Range: This metric measures the area of the 50% isopleth in pixels, representing the most frequently used space within the frame.
- Full Range: This metric measures the area of the 95% isopleth in pixels, capturing the majority of the detected movement within the frame.
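The per-quadrant usage metric amounts to splitting the frame into four equal parts and reporting the fraction of used pixels in each. A minimal sketch (the helper and the quadrant ordering below are illustrative; AnimalMotionViz's own quadrant labelling may differ):

```python
import numpy as np

def quadrant_usage(used_mask):
    """Percent of each quadrant's pixels flagged as used, for a boolean mask.

    Quadrants are ordered top-left, top-right, bottom-left, bottom-right.
    """
    h, w = used_mask.shape
    quads = [used_mask[:h // 2, :w // 2], used_mask[:h // 2, w // 2:],
             used_mask[h // 2:, :w // 2], used_mask[h // 2:, w // 2:]]
    return [100.0 * q.mean() for q in quads]

# Toy mask where only the top-left quadrant is fully used
mask = np.zeros((4, 4), dtype=bool)
mask[:2, :2] = True
quadrant_usage(mask)  # [100.0, 0.0, 0.0, 0.0]
```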
- Angelo De Castro (decastro.a@ufl.edu)
- Haipeng Yu (haipengyu@ufl.edu)
If you use the materials provided in this repository, we kindly ask you to cite our papers:
- The software paper: De Castro, A. L., Wang, J., Bonnie-King, J. G., Morota, G., Miller-Cushon, E. K., & Yu, H. (2024). AnimalMotionViz: An interactive software tool for tracking and visualizing animal motion patterns using computer vision. bioRxiv. https://doi.org/10.1101/2024.10.22.619671
- The application paper: Marin, M. U., Gingerich, K. N., Wang, J., Yu, H., & Miller-Cushon, E. K. (2024). Effects of space allowance on patterns of activity in group-housed dairy calves. JDS Communications.
This project is primarily licensed under the GNU General Public License version 3 (GPLv3).