# zamba
zamba means "forest" in Lingala, a Bantu language spoken throughout the Democratic Republic of the Congo and the Republic of the Congo.
zamba is a tool built in Python that uses machine learning and computer vision to automatically detect and classify animals in camera trap videos. You can use zamba to:
- Identify which species appear in each video
- Filter out blank videos
The models in zamba can identify blank videos (where no animal is present) along with 32 species common to Africa and 11 species common to Europe. Users can also fine-tune models using their own labeled videos to then make predictions for new species and/or new ecologies.
zamba can be used both as a command-line tool and as a Python package. It is also available as a user-friendly website application, Zamba Cloud.
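As a quick illustration of the Python package, the sketch below configures and runs prediction from a script. The module paths and names (`PredictConfig`, `predict_model`) are assumptions based on recent zamba releases and may differ in your version; see the Quickstart page for the exact API.

```python
# Minimal sketch of using zamba from Python rather than the CLI.
# NOTE: PredictConfig and predict_model are assumed names; confirm against the
# Quickstart page for the version of zamba you have installed.
from zamba.models.config import PredictConfig
from zamba.models.model_manager import predict_model

# Point the configuration at a directory of camera trap videos.
predict_config = PredictConfig(data_dir="path/to/videos")

# Run inference; by default results are written to zamba_predictions.csv.
predict_model(predict_config=predict_config)
```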
Check out the Wiki for community-submitted models.
## Installing zamba
First, make sure you have the prerequisites installed:
- Python 3.7 or 3.8
- FFmpeg
Then run:
```
pip install zamba
```
See the Installation page of the documentation for details.
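To confirm the install worked, you can check the version. The `--version` flag appears in the CLI help shown later on this page; the `zamba.__version__` attribute is an assumption and may not exist in every release.

```python
# Optional sanity check after installation.
# ASSUMPTION: the installed package exposes a __version__ attribute; if it does
# not, run `zamba --version` from the shell instead.
import zamba

print(zamba.__version__)
```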
## Getting started
Once you have zamba installed, some good starting points are:
- The Quickstart page for basic examples of usage
- The user tutorial for either classifying videos or training a model, depending on what you want to do with zamba
## Example usage
Once zamba is installed, you can see the basic command options with:
```
$ zamba --help
Usage: zamba [OPTIONS] COMMAND [ARGS]...

  Zamba is a tool built in Python to automatically identify the species seen
  in camera trap videos from sites in Africa and Europe. Visit
  https://zamba.drivendata.org/docs for more in-depth documentation.

Options:
  --version             Show zamba version and exit.
  --install-completion  Install completion for the current shell.
  --show-completion     Show completion for the current shell, to copy it or
                        customize the installation.
  --help                Show this message and exit.

Commands:
  densepose  Run densepose algorithm on videos.
  predict    Identify species in a video.
  train      Train a model on your labeled data.
```
zamba can be used "out of the box" to generate predictions or train a model using your own videos. zamba supports the same video formats as FFmpeg, which are listed here. Any videos that fail a set of FFmpeg checks will be skipped during inference or training.
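If you want to spot-check files before handing a directory to zamba, a hedged sketch like the one below uses `ffprobe` (shipped with FFmpeg) to confirm a video is readable. This is a convenience on top of zamba, not part of its own checks, and the example file path is hypothetical.

```python
# Pre-flight check: confirm ffprobe can read a video stream from a file before
# running zamba on its directory. This mirrors, but is separate from, the FFmpeg
# checks zamba applies itself.
import subprocess

def is_readable_video(path: str) -> bool:
    """Return True if ffprobe finds a decodable video stream in the file."""
    result = subprocess.run(
        [
            "ffprobe", "-v", "error",
            "-select_streams", "v:0",
            "-show_entries", "stream=codec_name",
            "-of", "csv=p=0",
            path,
        ],
        capture_output=True,
        text=True,
    )
    return result.returncode == 0 and bool(result.stdout.strip())

# Hypothetical example file.
print(is_readable_video("path/to/videos/clip_0001.mp4"))
```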
### Classifying unlabeled videos
```
$ zamba predict --data-dir path/to/videos
```
By default, predictions will be saved to `zamba_predictions.csv`. Run `zamba predict --help` to list all possible options to pass to `predict`.
See the Quickstart page or the user tutorial on classifying videos for more details.
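Once predictions are written, you can inspect them with pandas. The sketch below assumes the output CSV has a `filepath` column plus one score column per class; the exact column layout may vary by zamba version, so check the output file itself.

```python
# Hedged sketch: read zamba's prediction output and pull the top class per video.
# ASSUMPTION: a `filepath` column plus one score column per class.
import pandas as pd

preds = pd.read_csv("zamba_predictions.csv")

# Highest-scoring class for each video, given the assumed layout.
top_class = preds.set_index("filepath").idxmax(axis=1)
print(top_class.head())
```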
### Training a model
```
$ zamba train --data-dir path/to/videos --labels path_to_labels.csv --save_dir my_trained_model
```
The newly trained model will be saved to the specified save directory. The folder will contain a model checkpoint as well as training configuration, model hyperparameters, and validation and test metrics. Run `zamba train --help` to list all possible options to pass to `train`.
See the Quickstart page or the user tutorial on training a model for more details.
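For reference, the labels file passed via `--labels` can be built with pandas. The sketch below assumes one row per video with `filepath` and `label` columns; confirm the required columns and accepted label values in the training tutorial, since the paths and species names used here are placeholders.

```python
# Hedged sketch of building a labels CSV for `zamba train`.
# ASSUMPTION: the file needs `filepath` and `label` columns; the paths and
# species below are placeholders.
import pandas as pd

labels = pd.DataFrame(
    {
        "filepath": ["path/to/videos/vid_001.mp4", "path/to/videos/vid_002.mp4"],
        "label": ["elephant", "blank"],
    }
)
labels.to_csv("path_to_labels.csv", index=False)
```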
## Running the zamba test suite
The included `Makefile` contains code that uses pytest to run all tests in `zamba/tests`.
The command is (from the project root):
```
$ make tests
```
See the docs page on contributing to zamba for details.