SpeciesNet
AI models trained to classify species in images from motion-triggered wildlife cameras.
https://github.com/google/cameratrapai
Category: Biosphere
Sub Category: Terrestrial Wildlife
Keywords from Contributors
camera-traps conservation megadetector wildlife cameratraps ecology pytorch-wildlife
Repository metadata
AI models trained by Google to classify species in images from motion-triggered wildlife cameras.
- Host: GitHub
- URL: https://github.com/google/cameratrapai
- Owner: google
- License: apache-2.0
- Created: 2024-09-06T17:43:58.000Z (over 1 year ago)
- Default Branch: main
- Last Pushed: 2026-04-25T21:41:32.000Z (17 days ago)
- Last Synced: 2026-05-09T11:07:50.839Z (3 days ago)
- Language: Python
- Homepage:
- Size: 12.5 MB
- Stars: 510
- Watchers: 11
- Forks: 56
- Open Issues: 5
- Releases: 0
Metadata Files:
- Readme: README.md
- Contributing: CONTRIBUTING.md
- License: LICENSE
- Citation: citation.cff
README.md
SpeciesNet
An ensemble of AI models for classifying wildlife in camera trap images.
Table of Contents
- Overview
- Running SpeciesNet
- Downloading SpeciesNet model weights directly
- Contacting us
- Citing SpeciesNet
- Supported models
- Output format
- Visualizing SpeciesNet output
- Ensemble decision-making
- Advanced topics
- Animal picture
Overview
Effective wildlife monitoring relies heavily on motion-triggered wildlife cameras, or “camera traps”, which generate vast quantities of image data. Manual processing of these images is a significant bottleneck. AI can accelerate that processing, helping conservation practitioners spend more time on conservation, and less time reviewing images.
This repository hosts code for running an ensemble of two AI models: (1) an object detector that finds objects of interest in wildlife camera images, and (2) an image classifier that classifies those objects to the species level. This ensemble is used for species recognition in the Wildlife Insights platform.
The object detector used in this ensemble is MegaDetector, which finds animals, humans, and vehicles in camera trap images, but does not classify animals to species level.
The species classifier (SpeciesNet) was trained at Google using a large dataset of camera trap images and an EfficientNet V2 M architecture. It is designed to classify images into one of more than 2000 labels, covering diverse animal species, higher-level taxa (like "mammalia" or "felidae"), and non-animal classes ("blank", "vehicle"). SpeciesNet has been trained on a geographically diverse dataset of over 65M images, including curated images from the Wildlife Insights user community, as well as images from publicly-available repositories.
The SpeciesNet ensemble combines these two models using a set of heuristics and, optionally, geographic information to assign each image to a single category. See the "ensemble decision-making" section for more information about how the ensemble combines information for each image to make a single prediction.
The full details of the models and the ensemble process are discussed in this research paper:
Gadot T, Istrate Ș, Kim H, Morris D, Beery S, Birch T, Ahumada J. To crop or not to crop: Comparing whole-image and cropped classification on a large dataset of camera trap images. IET Computer Vision. 2024 Dec;18(8):1193-208.
Running SpeciesNet
Do I have to do all this command line stuff?
No, you don't have to run anything at the command line to use SpeciesNet: there are a number of tools that help you run SpeciesNet on your computer or on cloud-based systems. Details are beyond the scope of this README, but cloud-based systems that support SpeciesNet include Wildlife Insights and Animl. AddaxAI is a popular graphical tool for running SpeciesNet on your computer.
This README, though, is about running SpeciesNet at the command line, so, on to instructions...
Setting up your Python environment
The instructions on this page will assume that you have a Python virtual environment set up. If you have not installed Python, or you are not familiar with Python virtual environments, start with our installing Python page. If your command prompt shows the name of an activated virtual environment, you're all set to proceed to the next step.
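For example, with a virtual environment named "speciesnet" (the environment name here is just an illustration), an activated prompt on Windows typically looks something like this:

(speciesnet) C:\Users\you>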
Installing the SpeciesNet Python package
You can install the SpeciesNet Python package via:
pip install speciesnet
If you are on a Mac, and you receive an error during this step, add the "--use-pep517" option, like this:
pip install speciesnet --use-pep517
To confirm that the package has been installed, you can run:
python -m speciesnet.scripts.run_model --help
You should see help text related to the main script you'll use to run SpeciesNet.
Running SpeciesNet
The easiest way to run SpeciesNet is via the "run_model" script, like this:
python -m speciesnet.scripts.run_model --folders "c:\your\image\folder" --predictions_json "c:\your\output\file.json"
Change c:\your\image\folder to the root folder where your images live, and change c:\your\output\file.json to the location where you want to put the output file containing the SpeciesNet results.
This will automatically download and run the detector and the classifier. The command periodically writes results to the output file, so if it doesn't finish (e.g. you have to cancel or reboot), you can just run the same command again, and it will pick up where it left off.
These commands produce an output file in .json format; for details about this format, and information about converting it to other formats, see the "output format" section below.
You can also run the three steps (detector, classifier, ensemble) separately; see the "running each component separately" section for more information.
In the above example, we didn't tell the ensemble what part of the world your images came from, so it may, for example, predict a kangaroo for an image from England. If you want to let our ensemble filter predictions geographically, add, for example:
--country GBR
You can use any ISO 3166-1 alpha-3 three-letter country code.
If your images are from the USA, you can also specify a state name using the two-letter state abbreviation, by adding, for example:
--admin1_region CA
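Putting those options together, a run restricted to California (USA) might look like the following (the paths are placeholders, as above):

python -m speciesnet.scripts.run_model --folders "c:\your\image\folder" --predictions_json "c:\your\output\file.json" --country USA --admin1_region CA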
Running SpeciesNet on multiple detections per image (or on videos)
The run_model script described above uses MegaDetector to find animals in each image, then runs the SpeciesNet classifier on just the highest-confidence detection in each image. The goal of this script is to propose the single species that is most likely to be present in each image, and in most cases, processing every object detected in the image through the classifier would be slower, without changing the proposed species.
This is a problem, however, when you frequently have multi-species images, or images with both humans and domestic animals. If this is a concern for your scenario, instead of using run_model, we recommend using run_md_and_speciesnet, from the MegaDetector Python package. This looks like the following:
pip install megadetector
pip install speciesnet
python -m megadetector.detection.run_md_and_speciesnet
For example:
python -m megadetector.detection.run_md_and_speciesnet "c:\your\image\folder" "c:\your\output\file.json" --country USA --state CA
Output from this script will be in the MegaDetector output format. This format is supported by other tools for reviewing camera trap images, like Timelapse.
This script also supports video (run_model supports only still images).
We know it's a little confusing that there are two separate scripts right now; we will merge them soon.
Using GPUs
If you don't have an NVIDIA GPU, you can ignore this section.
If you have an NVIDIA GPU, SpeciesNet should use it automatically. When you start run_model, check the startup output for the compute device being used: a line mentioning "CUDA" is good news, that means "GPU". If the startup output refers only to "CPU", SpeciesNet is not using your GPU.
You can also directly check whether SpeciesNet can see your GPU by running:
python -m speciesnet.scripts.gpu_test
99% of the time, after you install SpeciesNet on Linux, it will correctly see your GPU right away. On Windows, you will likely need to take one more step: install the GPU version of PyTorch, by activating your speciesnet Python environment (e.g. by running "conda activate speciesnet"), then running:

pip install torch torchvision --upgrade --force-reinstall --index-url https://download.pytorch.org/whl/cu118

If the GPU doesn't work immediately after that step, update your GPU driver, then reboot. Really, don't skip the reboot: most problems related to GPU access can be fixed by upgrading your driver and rebooting.
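Independent of SpeciesNet, you can also check whether PyTorch itself can see your GPU; this minimal snippet uses only standard PyTorch calls:

import torch

# True means PyTorch can see at least one CUDA-capable GPU
print(torch.cuda.is_available())

# If a GPU is visible, print its name
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))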
Downloading SpeciesNet model weights directly
Both scripts described above (run_model and run_md_and_speciesnet) will download model weights automatically. If you want to use the SpeciesNet model weights outside of our script, or if you plan to be offline when you first run the script, you can download model weights directly from Kaggle. Running our ensemble also requires MegaDetector, so in this list of links, we also include a direct link to the MegaDetector model weights.
- SpeciesNet page on Kaggle
- Direct link to version 4.0.2a weights (the crop classifier)
- Direct link to version 4.0.2b weights (the whole-image classifier)
- Direct link to MegaDetector weights
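If you prefer to script the download, the kagglehub package (a dependency of speciesnet) can fetch model files programmatically. Here is a minimal sketch; the model handle is illustrative, so check the SpeciesNet page on Kaggle for the exact handle:

import kagglehub

# Download the model files to the local kagglehub cache and print the
# resulting path; the handle below is a hypothetical example
path = kagglehub.model_download("google/speciesnet/pytorch/v4.0.2a")
print(path)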
Contacting us
If you have issues or questions, either file an issue or email us at cameratraps@google.com.
We love hearing from users, so please reach out if you try SpeciesNet, whether you find it to be amazing or a total catastrophe.
Citing SpeciesNet
If you use this model, please cite:
@article{gadot2024crop,
title={To crop or not to crop: Comparing whole-image and cropped classification on a large dataset of camera trap images},
author={Gadot, Tomer and Istrate, Ștefan and Kim, Hyungwon and Morris, Dan and Beery, Sara and Birch, Tanya and Ahumada, Jorge},
journal={IET Computer Vision},
year={2024},
publisher={Wiley Online Library}
}
Output format from run_model
run_model.py produces output in .json format, containing an array called "predictions", with one element per image. We provide a script (speciesnet_to_md.py) to convert this format to the format used by MegaDetector, which can be imported into Timelapse.
Each element always contains a field called "filepath"; the exact content of the other fields will vary depending on which elements of the ensemble you ran. If you didn't go out of your way to do something unusual, you ran the entire ensemble (i.e., both the detector and the classifier), so the "full ensemble" output format applies. Output formats for other scenarios are described in the advanced topics documentation.
Full ensemble output format
In the full ensemble output, the "classifications" field contains raw classifier output, before geofencing is applied. So even if you specify a country code, you may see taxa in the "classifications" field that are not found in the country you specified. The "prediction" field is the result of integrating the classification, detection, and geofencing information; if you specify a country code, the "prediction" field should only contain taxa that are found in the country you specified.
{
"predictions": [
{
"filepath": str => Image filepath.
"failures": list[str] (optional) => List of internal components that failed during prediction (e.g. "CLASSIFIER", "DETECTOR", "GEOLOCATION"). If absent, the prediction was successful.
"country": str (optional) => 3-letter country code (ISO 3166-1 Alpha-3) for the location where the image was taken. It can be overwritten if the country from the request doesn't match the country of (latitude, longitude).
"admin1_region": str (optional) => First-level administrative division (in ISO 3166-2 format) within the country above. If not provided in the request, it can be computed from (latitude, longitude) when those coordinates are specified. Included in the response only for some countries that are used in geofencing (e.g. "USA").
"latitude": float (optional) => Latitude where the image was taken, included only if (latitude, longitude) were present in the request.
"longitude": float (optional) => Longitude where the image was taken, included only if (latitude, longitude) were present in the request.
"classifications": { => dict (optional) => Top-5 classifications. Included only if "CLASSIFIER" if not part of the "failures" field.
"classes": list[str] => List of top-5 classes predicted by the classifier, matching the decreasing order of their scores below.
"scores": list[float] => List of scores corresponding to top-5 classes predicted by the classifier, in decreasing order.
"target_classes": list[str] (optional) => List of target classes, only present if target classes are passed as arguments.
"target_logits": list[float] (optional) => Raw confidence scores (logits) of the target classes, only present if target classes are passed as arguments.
},
"detections": [ => list (optional) => List of detections with confidence scores > 0.01, in decreasing order of their scores. Included only if "DETECTOR" if not part of the "failures" field.
{
"category": str => Detection class "1" (= animal), "2" (= human) or "3" (= vehicle) from MegaDetector's raw output.
"label": str => Detection class "animal", "human" or "vehicle", matching the "category" field above. Added for readability purposes.
"conf": float => Confidence score of the current detection.
"bbox": list[float] => Bounding box coordinates, in (xmin, ymin, width, height) format, of the current detection. Coordinates are normalized to the [0.0, 1.0] range, relative to the image dimensions.
},
... => A prediction can contain zero or multiple detections.
],
"prediction": str (optional) => Final prediction of the SpeciesNet ensemble. Included only if "CLASSIFIER" and "DETECTOR" are not part of the "failures" field.
"prediction_score": float (optional) => Final prediction score of the SpeciesNet ensemble. Included only if the "prediction" field above is included.
"prediction_source": str (optional) => Internal component that produced the final prediction. Used to collect information about which parts of the SpeciesNet ensemble fired. Included only if the "prediction" field above is included.
"model_version": str => A string representing the version of the model that produced the current prediction.
},
... => A response will contain one prediction for each instance in the request.
]
}
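Because the output is plain JSON, it is easy to post-process yourself. As a minimal sketch (the file path is a placeholder, and the field names follow the format documented above), the following counts final ensemble predictions per class and converts one detection's normalized bounding box to pixel coordinates:

import json
from collections import Counter

with open(r"c:\your\output\file.json") as f:
    results = json.load(f)

# Count the final ensemble prediction for each image; failed images
# have no "prediction" field
counts = Counter(p.get("prediction", "(failed)") for p in results["predictions"])
print(counts.most_common(10))

# Convert a normalized (xmin, ymin, width, height) bbox to pixels,
# assuming (hypothetically) a 1920x1080 image with at least one detection
xmin, ymin, w, h = results["predictions"][0]["detections"][0]["bbox"]
print(xmin * 1920, ymin * 1080, w * 1920, h * 1080)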
Visualizing SpeciesNet output
As described above, many users will work with SpeciesNet results in open-source tools like Timelapse, which support the file format used by MegaDetector (the format is described here). If you used run_md_and_speciesnet to run SpeciesNet, you already have output in this format. If you used run_model, we provide a speciesnet_to_md script to convert to this format. Tools like Timelapse are a good way to visualize and interact with your SpeciesNet results.
If you want to use the command line or Python code to visualize SpeciesNet results, we recommend using the visualization tools provided in the megadetector-utils Python package. For example, if you just ran either of these commands:
python -m speciesnet.scripts.run_model --folders "c:\your\image\folder" --predictions_json "c:\your\output\file.json"
python -m megadetector.detection.run_md_and_speciesnet "c:\your\image\folder" "c:\your\output\file.json"
You can use the visualize_detector_output script from the megadetector-utils package, like this:
pip install megadetector-utils
python -m megadetector.visualization.visualize_detector_output "c:\your\output\file.json" "c:\folder\where\you\want\visualized\output"
That will produce a folder of images with SpeciesNet results visualized on each image. A typical use of this script would also use the --sample argument (to render a random subset of images, if what you want is to quickly grok how SpeciesNet did on a large dataset), and often the --html_output_file argument, to wrap the results in an HTML page that makes it quick to scroll through them. Putting those together will give you pages like these:
- Fun preview page for Caltech Camera Traps
- Fun preview page for Idaho Camera Traps
- Fun preview page for Orinoquía Camera Traps
To see all the options, run:
python -m megadetector.visualization.visualize_detector_output --help
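For example, a quick HTML preview of a random sample of images might look like the following (the --sample and --html_output_file values are illustrative; confirm the exact semantics with --help):

python -m megadetector.visualization.visualize_detector_output "c:\your\output\file.json" "c:\folder\where\you\want\visualized\output" --sample 100 --html_output_file "c:\folder\where\you\want\visualized\output\index.html"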
Another relevant script is postprocess_batch_results, which also renders sample images, but instead of just putting them in a flat folder, the purpose of this script is to allow you to quickly see samples of detections/non-detections, and to quickly see samples broken out by species. So, for example, you can do:
python -m megadetector.postprocessing.postprocess_batch_results "c:\your\output\file.json" "c:\folder\where\you\want\preview\output"
...to get pages like these:
- Fancy postprocessing page for Caltech Camera Traps
- Fancy postprocessing page for Idaho Camera Traps
- Fancy postprocessing page for Orinoquía Camera Traps
To see all the options, run:
python -m megadetector.postprocessing.postprocess_batch_results --help
Both of these modules can also be called from Python code instead of from the command line.
Ensemble decision-making
As discussed above, run_model uses multiple steps to predict a single category for each image, combining the strengths of the detector and the classifier. The ensembling strategy (i.e., the strategy used to combine the information from the detector and classifier) was primarily optimized for minimizing the human effort required to review collections of images.
The guiding principles of the ensembling strategy are:
- Help users to quickly filter out unwanted images (e.g., blanks): identify as many blank images as possible while minimizing missed animals, which can be more costly than misclassifying a non-blank image as one of the possible animal classes.
- Provide high-confidence predictions for frequent classes (e.g., deer).
- Make predictions at the lowest taxonomic level possible, while balancing precision: if the ensemble is not confident enough all the way down to the species level, we would rather return a prediction we are confident about at a higher taxonomic level (e.g., family, or sometimes even "animal"), instead of risking an incorrect prediction at the species level.
Here is a breakdown of the steps:
- Input processing: Raw images are preprocessed and passed to both the object detector (MegaDetector) and the image classifier. The type of preprocessing depends on the selected model. For "always crop" models, images are first processed by the object detector, then cropped based on the detection bounding box before being fed to the classifier. For "full image" models, images are preprocessed independently for each model.
- Object detection: The detector identifies potential objects (animals, humans, or vehicles) in the image, providing their bounding box coordinates and confidence scores.
- Species classification: The species classifier analyzes the (potentially cropped) image to identify the most likely species present, providing a list of top-5 classifications, each with a confidence score. The species classifier is a fully supervised model that classifies images into a fixed set of animal species, higher taxa, and non-animal labels.
- Detection-based human/vehicle decisions: If the detector is highly confident about the presence of a human or vehicle, that label is returned as the final prediction regardless of what the classifier predicts. If the detection is less confident and the classifier also returns human or vehicle as a top-5 prediction with a reasonable score, that top prediction is returned. This step prevents high-confidence detector predictions from being overridden by lower-confidence classifier predictions.
- Blank decisions: If the classifier predicts "blank" with a high confidence score, and the detector has very low confidence about the presence of an animal (or no detection at all), the "blank" label is returned as the final prediction. Similarly, if the classifier predicts "blank" with extra-high confidence (above 0.99), that label is returned as the final prediction regardless of the detector's output. This allows the ensemble to confidently filter out blank images.
- Geofencing: If the most likely species is an animal and a location (country and optional admin1 region) is provided for the image, a geofencing rule is applied. If that species is explicitly disallowed for that region based on the available geofencing rules, the prediction is rolled up (as explained below) to a higher taxonomic level that is on the allow list.
- Label rollup: If none of the previous steps yields a final prediction, a "rollup" is applied when there is a good classification score for an animal. "Rollup" is the process of propagating the classification predictions to the first matching ancestor in the taxonomy, provided there is a good score at that level. This means the model may assign classifications at the genus, family, order, class, or kingdom level, if those scores are higher than the score at the species level. This is a common strategy for handling the long-tail distributions typical of wildlife datasets.
- Detection-based animal decisions: If the detector has a reasonably confident "animal" prediction, "animal" is returned along with the detector confidence.
- Unknown: If no other rule applies, the "unknown" class is returned as the final prediction, to avoid making low-confidence predictions.
- Prediction source: At each step of the prediction workflow, a "prediction_source" is stored. This is included in the final results to help diagnose which parts of the overall SpeciesNet ensemble fired.
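To make the cascade above concrete, here is a highly simplified Python sketch of the decision flow. All thresholds are illustrative, and allowed_in_region/roll_up are hypothetical stand-ins for SpeciesNet's real geofencing and taxonomy logic, not its actual code:

# Hypothetical stand-ins for SpeciesNet's geofencing and taxonomy logic
def allowed_in_region(label, country):
    return True  # a real implementation consults per-region allow lists

def roll_up(classifications, country):
    # A real implementation walks up the taxonomy to the first confident,
    # allowed ancestor; here we just return the top class unchanged
    return classifications["classes"][0], classifications["scores"][0]

def ensemble_predict(detections, classifications, country=None):
    """Simplified sketch of the ensemble cascade described above."""
    top_det = detections[0] if detections else None
    top_cls = classifications["classes"][0]
    top_score = classifications["scores"][0]

    # Detection-based human/vehicle decisions
    if top_det and top_det["label"] in ("human", "vehicle") and top_det["conf"] > 0.9:
        return top_det["label"], top_det["conf"], "detector"

    # Blank decisions: extra-high classifier confidence wins outright;
    # otherwise "blank" also requires the detector to be nearly silent
    if top_cls == "blank":
        if top_score > 0.99:
            return "blank", top_score, "classifier"
        if top_score > 0.5 and (top_det is None or top_det["conf"] < 0.2):
            return "blank", top_score, "classifier"

    # Geofencing: roll up if the top species is disallowed for the region
    if country and not allowed_in_region(top_cls, country):
        top_cls, top_score = roll_up(classifications, country)

    # Label rollup / confident animal classification
    if top_score > 0.65:
        return top_cls, top_score, "classifier"

    # Detection-based animal decisions
    if top_det and top_det["label"] == "animal" and top_det["conf"] > 0.5:
        return "animal", top_det["conf"], "detector"

    # Unknown: avoid making a low-confidence prediction
    return "unknown", None, "default"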
The "geofencing" and "label rollup" steps are also used when running run_md_and_speciesnet; the other steps don't apply in this scenario, since the goal of run_md_and_speciesnet is to classify each detection, rather than to classify the whole image.
Advanced topics
For information about any of the following topics, see the advanced topics documentation:
- Using run_model to run individual components of the ensemble
- Alternative installation variants of the Python package
- Alternative variants of the SpeciesNet model weights (in particular, the whole-image classifier that does not use a detection stage)
- Alternative input formats for run_model
- Development conventions/contributing code
Animal picture
It would be unfortunate if this whole README about camera trap images didn't show you a single camera trap image, so...

Image credit University of Minnesota, from the Orinoquía Camera Traps dataset.
Citation (citation.cff)
cff-version: 1.2.0
message: "If you use this software, please cite it using the metadata in this file."
authors:
  - family-names: Gadot
    given-names: Tomer
  - family-names: Istrate
    given-names: Ștefan
  - family-names: Kim
    given-names: Hyungwon
  - family-names: Morris
    given-names: Dan
  - family-names: Beery
    given-names: Sara
  - family-names: Birch
    given-names: Tanya
  - family-names: Ahumada
    given-names: Jorge
title: "To crop or not to crop: Comparing whole-image and cropped classification on a large dataset of camera trap images"
version: "1.0.0"
date-released: "2024-11-24"
publisher: "Wiley Online Library"
journal: "IET Computer Vision"
volume: "18"
issue: "8"
pages: "1193--1208"
year: "2024"
doi: "10.1049/cvi2.12318"
type: software
keywords:
- Camera traps
- Conservation
- Computer vision
Owner metadata
- Name: Google
- Login: google
- Email: opensource@google.com
- Kind: organization
- Description: Google ❤️ Open Source
- Website: https://opensource.google/
- Location: United States of America
- Twitter: GoogleOSS
- Company:
- Icon url: https://avatars.githubusercontent.com/u/1342004?v=4
- Repositories: 2773
- Last synced at: 2025-08-12T15:55:14.931Z
- Profile URL: https://github.com/google
GitHub Events
Total
- Delete event: 15
- Member event: 2
- Pull request event: 35
- Fork event: 33
- Issues event: 44
- Watch event: 324
- Issue comment event: 107
- Public event: 1
- Push event: 100
- Pull request review comment event: 7
- Pull request review event: 9
- Create event: 20
Last Year
- Delete event: 4
- Member event: 2
- Pull request event: 17
- Fork event: 13
- Issues event: 11
- Watch event: 86
- Issue comment event: 27
- Push event: 24
- Pull request review comment event: 7
- Pull request review event: 8
- Create event: 7
Committers metadata
Last synced: 2 days ago
Total Commits: 175
Total Committers: 9
Avg Commits per committer: 19.444
Development Distribution Score (DDS): 0.446
Commits in past year: 58
Committers in past year: 3
Avg Commits per committer in past year: 19.333
Development Distribution Score (DDS) in past year: 0.034
| Name | Email | Commits |
|---|---|---|
| Dan Morris | a****s@g****m | 97 |
| Ștefan Istrate | s****e@g****m | 63 |
| Tomer Gadot | t****g@g****m | 4 |
| Tanya Birch | 4****h | 4 |
| Timm Haucke | h****e@m****u | 2 |
| Val. Lucet | V****t | 2 |
| oksachi | 6****i | 1 |
| Viktor Domazetoski | 1****i | 1 |
| CharlesCNorton | 1****n | 1 |
Committer domains:
- mit.edu: 1
- google.com: 1
Issue and Pull Request metadata
Last synced: 20 days ago
Total issues: 25
Total pull requests: 16
Average time to close issues: 3 days
Average time to close pull requests: about 21 hours
Total issue authors: 16
Total pull request authors: 8
Average comments per issue: 3.12
Average comments per pull request: 0.69
Merged pull requests: 9
Bot issues: 0
Bot pull requests: 0
Past year issues: 8
Past year pull requests: 9
Past year average time to close issues: about 8 hours
Past year average time to close pull requests: 1 day
Past year issue authors: 7
Past year pull request authors: 4
Past year average comments per issue: 2.5
Past year average comments per pull request: 0.44
Past year merged pull requests: 5
Past year bot issues: 0
Past year bot pull requests: 0
Top Issue Authors
- VLucet (6)
- HugoMarkoff (4)
- rsmiller74 (2)
- cheperboy (1)
- robinsandfort (1)
- PetervanLunteren (1)
- aman5319 (1)
- eric-catman (1)
- tinytosa (1)
- ioRekz (1)
- GuangyiLu (1)
- FreekDB (1)
- ismaelvbrack (1)
- verrassendhollands (1)
- sergewich (1)
Top Pull Request Authors
- agentmorris (7)
- timmh (2)
- VLucet (2)
- stefanistrate (1)
- ViktorDomazetoski (1)
- CharlesCNorton (1)
- YoussefBayouli (1)
- oksachi (1)
Top Issue Labels
- duplicate (1)
Top Pull Request Labels
Package metadata
- Total packages: 1
- Total downloads:
  - pypi: 1,822 last month
- Total dependent packages: 0
- Total dependent repositories: 0
- Total versions: 9
- Total maintainers: 4
pypi.org: speciesnet
Tools for classifying species in images from motion-triggered wildlife cameras.
- Homepage: https://github.com/google/cameratrapai
- Documentation: https://speciesnet.readthedocs.io/
- Licenses: Apache-2.0
- Latest release: 5.0.3 (published 5 months ago)
- Last Synced: 2026-05-09T13:07:18.138Z (3 days ago)
- Versions: 9
- Dependent Packages: 0
- Dependent Repositories: 0
- Downloads: 1,822 Last month
- Rankings:
  - Dependent packages count: 9.761%
  - Average: 32.353%
  - Dependent repos count: 54.946%
- Maintainers (4)
Dependencies
- actions/checkout v4 composite
- actions/setup-python v5 composite
- actions/checkout v4 composite
- actions/setup-python v5 composite
- actions/checkout v4 composite
- actions/setup-python v5 composite
- absl-py *
- cloudpathlib *
- huggingface_hub *
- humanfriendly *
- kagglehub *
- matplotlib *
- numpy *
- pandas *
- pillow *
- requests *
- reverse_geocoder *
- tensorflow >= 2.12, < 2.16 ; sys_platform != 'darwin' or platform_machine != 'arm64'
- tensorflow-macos >= 2.12, < 2.15 ; sys_platform == 'darwin' and platform_machine == 'arm64'
- tensorflow-metal sys_platform == 'darwin' and platform_machine == 'arm64'
- torch >= 2.0
- tqdm *
- yolov5 >= 7.0.8, < 7.0.12
Score: 15.95127453915487