MegaDetector
Deep learning tools that accelerate the review of motion-triggered wildlife camera images.
https://github.com/microsoft/CameraTraps
Category: Biosphere
Sub Category: Terrestrial Wildlife
Keywords
camera-traps computer-vision conservation machine-learning megadetector pytorch pytorch-wildlife wildlife
Keywords from Contributors
ecology cameratraps aiforearth measurement transformers observability web-map distributed sustainable conservation
Repository metadata
PyTorch Wildlife: a Collaborative Deep Learning Framework for Conservation.
- Host: GitHub
- URL: https://github.com/microsoft/CameraTraps
- Owner: microsoft
- License: mit
- Created: 2018-10-11T18:02:42.000Z (over 6 years ago)
- Default Branch: main
- Last Pushed: 2025-04-25T04:37:14.000Z (2 days ago)
- Last Synced: 2025-04-25T05:21:40.109Z (2 days ago)
- Topics: camera-traps, computer-vision, conservation, machine-learning, megadetector, pytorch, pytorch-wildlife, wildlife
- Language: Python
- Homepage: https://cameratraps.readthedocs.io/en/latest/
- Size: 485 MB
- Stars: 871
- Watchers: 48
- Forks: 263
- Open Issues: 16
- Releases: 9
Metadata Files:
- Readme: README.md
- License: LICENSE
- Security: SECURITY.md
README.md
📣 Announcement
🤜🤛 Collaboration with EcoAssist!
We are thrilled to announce our collaboration with EcoAssist, a powerful user-interface application that lets users load models from the PyTorch-Wildlife model zoo directly for image analysis on local computers. With EcoAssist, you can now use MegaDetectorV5 and the classification models AI4GAmazonRainforest and AI4GOpossum for automatic animal detection and identification, alongside a comprehensive suite of pre- and post-processing tools. This partnership aims to enhance the overall user experience with PyTorch-Wildlife models for a general audience, and we will work closely together to bring more features for more efficient and effective wildlife analysis in the future.
🏎️💨💨 SMALLER, BETTER, and FASTER! MegaDetectorV6 public beta testing started!
The public beta testing for MegaDetectorV6 has officially started! In the next generation of MegaDetector, we are focusing on computational efficiency and performance. We have trained multiple new models using the latest YOLO-v9 architecture, and in the public beta testing, we will allow people to test the compact version of MegaDetectorV6 (MDv6-c). We want to make sure these models work as expected on real-world datasets.
This MDv6-c model has only one-sixth (SMALLER) of the parameters of the current MegaDetectorV5 and exhibits 12% higher recall (BETTER) on animal detection in our validation datasets. In other words, MDv6-c has significantly fewer false negatives when detecting animals, making it a more robust animal detection model than MegaDetectorV5. Furthermore, one of our testers reported that the speed of MDv6-c is at least 5 times FASTER than MegaDetectorV5 on their datasets.
Models | Parameters | Precision | Recall |
---|---|---|---|
MegaDetectorV5 | 121M | 0.96 | 0.73 |
MegaDetectorV6-c | 22M | 0.92 | 0.85 |
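To make the precision/recall trade-off in the table concrete, here is a quick back-of-the-envelope calculation. The 10,000-image validation set is a hypothetical number chosen for illustration, not from the MegaDetector benchmarks; only the precision and recall values come from the table above.

```python
def expected_errors(recall, precision, n_positives):
    """Return (false_negatives, false_positives) implied by recall/precision
    on a set of n_positives images that each contain an animal."""
    tp = recall * n_positives          # true positives = recall * positives
    fn = n_positives - tp              # missed animals
    # precision = tp / (tp + fp)  =>  fp = tp * (1 - precision) / precision
    fp = tp * (1 - precision) / precision
    return round(fn), round(fp)

n = 10_000  # hypothetical number of images containing animals
mdv5 = expected_errors(recall=0.73, precision=0.96, n_positives=n)   # MegaDetectorV5
mdv6c = expected_errors(recall=0.85, precision=0.92, n_positives=n)  # MegaDetectorV6-c
print(mdv5, mdv6c)
```

On these assumptions, MDv6-c trades a modest rise in false alarms for a large drop in missed animals, which is the "significantly fewer false negatives" claim above.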
We are also working on an extra-large version of MegaDetectorV6 for optimal performance and a transformer-based model using the RT-DETR architecture to prepare ourselves for the future of transformers. These models will be available in the official release of MegaDetectorV6.
If you want to join the beta testing, please come to our Discord channel and DM the admins there.
🎉 Pytorch-Wildlife ready for citation
In addition, we have recently published a summary paper on Pytorch-Wildlife. The paper has been accepted as an oral presentation at the CV4Animals workshop at CVPR 2024. Please feel free to cite us!
🛠️ Compatibility with CUDA 12.x
The new version of PytorchWildlife uses the latest version of PyTorch (currently 2.3.1), which is compatible with CUDA 12.x.
✅ Feature highlights (Version 1.0.2.15)
- Added a file separation function. You can now automatically separate your files between animals and non-animals into different folders using our `detection_folder_separation` function. Please see the Python demo file and Jupyter demo!
- 🥳 Added Timelapse compatibility! Check the Gradio interface or notebooks.
🔥 Future highlights
- MegaDetectorV6 with multiple model sizes for both optimized performance and low-budget devices like camera systems (Public beta testing has started!!).
- Supervision 0.19+ and Python 3.10+ compatibility.
- A detection model fine-tuning module to fine-tune your own detection model for Pytorch-Wildlife.
- Direct LILA connection for more training/validation data.
- More pretrained detection and classification models to expand the current model zoo.
To check the full version of the roadmap with completed tasks and long-term goals, please click here!
🐾 Introduction
At the core of our mission is the desire to create a harmonious space where conservation scientists from all over the globe can unite to share, grow, and use datasets and deep learning architectures for wildlife conservation.
We've been inspired by the potential and capabilities of Megadetector, and we deeply value its contributions to the community. As we forge ahead with Pytorch-Wildlife, under which Megadetector now resides, please know that we remain committed to supporting, maintaining, and developing Megadetector, ensuring its continued relevance, expansion, and utility.
Pytorch-Wildlife is pip installable:
pip install PytorchWildlife
To use the newest version of MegaDetector with all the existing functionalities, you can use our Hugging Face interface or simply load the model with Pytorch-Wildlife. The weights will be automatically downloaded:
from PytorchWildlife.models import detection as pw_detection
detection_model = pw_detection.MegaDetectorV5()
For those interested in accessing the previous MegaDetector repository, which utilizes the same MegaDetector v5 model weights and was primarily developed by Dan Morris during his time at Microsoft, please visit the archive directory, or you can visit this forked repository that Dan Morris is actively maintaining.
[!TIP]
If you have any questions regarding MegaDetector and Pytorch-Wildlife, please email us or join us in our Discord channel.
👋 Welcome to Pytorch-Wildlife Version 1.0
PyTorch-Wildlife is a platform to create, modify, and share powerful AI conservation models. These models can be used for a variety of applications, including camera trap images, overhead images, underwater images, or bioacoustics. Your engagement with our work is greatly appreciated, and we eagerly await any feedback you may have.
The Pytorch-Wildlife library allows users to directly load the MegaDetector v5 model weights for animal detection. We've fully refactored our codebase, prioritizing ease of use in model deployment and expansion. In addition to MegaDetector v5, Pytorch-Wildlife also accommodates a range of classification weights, such as those derived from the Amazon Rainforest dataset and the Opossum classification dataset. Explore the codebase and functionalities of Pytorch-Wildlife through our interactive HuggingFace web app or local demos and notebooks, designed to showcase the practical applications of our enhancements at PyTorchWildlife. You can find more information in our documentation.
👇 Here is a brief example of how to perform detection and classification on a single image using PyTorch-Wildlife:
import torch
from PytorchWildlife.models import detection as pw_detection
from PytorchWildlife.models import classification as pw_classification

img = torch.randn((3, 1280, 1280))  # stand-in for a real camera trap image

# Detection
detection_model = pw_detection.MegaDetectorV5()  # model weights are downloaded automatically
detection_result = detection_model.single_image_detection(img)

# Classification
classification_model = pw_classification.AI4GAmazonRainforest()  # model weights are downloaded automatically
classification_results = classification_model.single_image_classification(img)
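In a full pipeline, the detection step usually feeds the classification step: each confident detection box is cropped out of the image and the crop is classified. Below is a minimal, framework-free sketch of that hand-off; the `detector` and `classifier` here are stand-in callables, not the PytorchWildlife API, and the box/confidence format is an assumption for illustration.

```python
def detect_then_classify(img, detector, classifier, conf_threshold=0.2):
    """Run detection, crop each confident box, and classify the crop.

    img is a 2-D nested list (rows of pixels); detector(img) returns
    (box, confidence) pairs with box = (x1, y1, x2, y2).
    """
    results = []
    for box, confidence in detector(img):
        if confidence < conf_threshold:
            continue  # drop low-confidence detections
        x1, y1, x2, y2 = box
        crop = [row[x1:x2] for row in img[y1:y2]]  # crop the detected region
        results.append((box, classifier(crop)))
    return results
```

The confidence threshold mirrors the common MegaDetector workflow of filtering detections before handing crops to a species classifier.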
⚙️ Install Pytorch-Wildlife
pip install PytorchWildlife
Please refer to our installation guide for more installation information.
🕵️ Explore Pytorch-Wildlife and MegaDetector with our Demo User Interface
If you want to directly try Pytorch-Wildlife with the available AI models, including MegaDetector v5, you can use our Gradio interface. This interface allows users to directly load the MegaDetector v5 model weights for animal detection. In addition, Pytorch-Wildlife also has two classification models in our initial version: one trained on an Amazon Rainforest camera trap dataset and the other on a Galapagos opossum classification dataset (more details of these datasets will be published soon). To start, please follow the installation instructions on how to run the Gradio interface! We also provide multiple Jupyter notebooks for demonstration.
🛠️ Core Features
What are the core components of Pytorch-Wildlife?
🌐 Unified Framework:
Pytorch-Wildlife integrates four pivotal elements:
▪ Machine Learning Models
▪ Pre-trained Weights
▪ Datasets
▪ Utilities
👷 Our work:
In the provided graph, boxes outlined in red represent elements that will be added and remain fixed, while those in blue will be part of our development.
🚀 Inaugural Model:
We're kickstarting with YOLO as our first available model, complemented by pre-trained weights from MegaDetector v5. This is the same MegaDetector v5 model from the previous repository.
📚 Expandable Repository:
As we move forward, our platform will welcome new models and pre-trained weights for camera traps and bioacoustic analysis. We're excited to host contributions from global researchers through a dedicated submission platform.
📊 Datasets from LILA:
Pytorch-Wildlife will also incorporate the vast datasets hosted on LILA, making it a treasure trove for conservation research.
🧰 Versatile Utilities:
Our set of utilities spans from visualization tools to task-specific utilities, many inherited from Megadetector.
💻 User Interface Flexibility:
While we provide a foundational user interface, our platform is designed to inspire. We encourage researchers to craft and share their unique interfaces, and we'll list both existing and new UIs from other collaborators for the community's benefit.
Let's shape the future of wildlife research, together! 🙌
📈 Progress on core tasks
- Animal detection fine-tuning
- MegaDetectorV5 integration
- MegaDetectorV6 integration
- User submitted weights
- Animal classification fine-tuning
- Amazon Rainforest classification
- Amazon Opossum classification
- User submitted weights
- Visualization tools
- MegaDetector utils
- User submitted utils
- Animal Datasets
- LILA datasets
- Basic user interface for demonstration
- UI Dev tools
- List of available UIs
🖼️ Examples
Image detection using MegaDetector v5. Credits to Universidad de los Andes, Colombia.
Image classification with MegaDetector v5 and AI4GAmazonRainforest. Credits to Universidad de los Andes, Colombia.
Opossum ID with MegaDetector v5 and AI4GOpossum. Credits to the Agency for Regulation and Control of Biosecurity and Quarantine for Galápagos (ABG), Ecuador.
Cite us
@misc{hernandez2024pytorchwildlife,
title={Pytorch-Wildlife: A Collaborative Deep Learning Framework for Conservation},
author={Andres Hernandez and Zhongqi Miao and Luisa Vargas and Rahul Dodhia and Juan Lavista},
year={2024},
eprint={2405.12930},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
🤝 Contributing
This project is open to your ideas and contributions. If you want to submit a pull request, we'll have some guidelines available soon.
We have adopted the Microsoft Open Source Code of Conduct. For more information see the Code of Conduct FAQ or contact us with any additional questions or comments.
License
This repository is licensed with the MIT license.
👥 Existing Collaborators
The extensive collaborative efforts of Megadetector have genuinely inspired us, and we deeply value its significant contributions to the community. As we continue to advance with Pytorch-Wildlife, our commitment to delivering technical support to our existing partners on MegaDetector remains the same.
Here we list a few of the organizations that have used MegaDetector. We're only listing organizations who have given us permission to refer to them here or have posted publicly about their use of MegaDetector.
(Newly Added) TerrOïko (OCAPI platform)
Arizona Department of Environmental Quality
Canadian Parks and Wilderness Society (CPAWS) Northern Alberta Chapter
Czech University of Life Sciences Prague
Idaho Department of Fish and Game
SPEA (Portuguese Society for the Study of Birds)
The Nature Conservancy in Wyoming
Upper Yellowstone Watershed Group
Applied Conservation Macro Ecology Lab, University of Victoria
Banff National Park Resource Conservation, Parks Canada (https://www.pc.gc.ca/en/pn-np/ab/banff/nature/conservation)
Blumstein Lab, UCLA
Borderlands Research Institute, Sul Ross State University
Capitol Reef National Park / Utah Valley University
Center for Biodiversity and Conservation, American Museum of Natural History
Centre for Ecosystem Science, UNSW Sydney
Cross-Cultural Ecology Lab, Macquarie University
DC Cat Count, led by the Humane Rescue Alliance
Department of Fish and Wildlife Sciences, University of Idaho
Department of Wildlife Ecology and Conservation, University of Florida
Ecology and Conservation of Amazonian Vertebrates Research Group, Federal University of Amapá
Gola Forest Programma, Royal Society for the Protection of Birds (RSPB)
Graeme Shannon's Research Group, Bangor University
Hamaarag, The Steinhardt Museum of Natural History, Tel Aviv University
Institut des Sciences de la Forêt Tempérée (ISFORT), Université du Québec en Outaouais
Lab of Dr. Bilal Habib, the Wildlife Institute of India
Mammal Spatial Ecology and Conservation Lab, Washington State University
McLoughlin Lab in Population Ecology, University of Saskatchewan
National Wildlife Refuge System, Southwest Region, U.S. Fish & Wildlife Service
Northern Great Plains Program, Smithsonian
Quantitative Ecology Lab, University of Washington
Santa Monica Mountains Recreation Area, National Park Service
Seattle Urban Carnivore Project, Woodland Park Zoo
Serra dos Órgãos National Park, ICMBio
Snapshot USA, Smithsonian
Wildlife Coexistence Lab, University of British Columbia
Wildlife Research, Oregon Department of Fish and Wildlife
Wildlife Division, Michigan Department of Natural Resources
Department of Ecology, TU Berlin
Ghost Cat Analytics
Protected Areas Unit, Canadian Wildlife Service
School of Natural Sciences, University of Tasmania (story)
Kenai National Wildlife Refuge, U.S. Fish & Wildlife Service (story)
Australian Wildlife Conservancy (blog, blog)
Felidae Conservation Fund (WildePod platform) (blog post)
Alberta Biodiversity Monitoring Institute (ABMI) (WildTrax platform) (blog post)
Shan Shui Conservation Center (blog post) (translated blog post)
Irvine Ranch Conservancy (story)
Wildlife Protection Solutions (story, story)
Road Ecology Center, University of California, Davis (Wildlife Observer Network platform)
The Nature Conservancy in California (Animl platform)
San Diego Zoo Wildlife Alliance (Animl R package)
[!IMPORTANT]
If you would like to be added to this list or have any questions regarding MegaDetector and Pytorch-Wildlife, please email us or join us in our Discord channel.
Citation (https://github.com/microsoft/CameraTraps/blob/main/)
cff-version: 1.2.0
title: Efficient Pipeline for Camera Trap Image Review
message: >-
  If you use this software, please cite it using the metadata from this file.
type: software
authors:
  - given-names: Sara
    family-names: Beery
  - given-names: Dan
    family-names: Morris
    email: [email protected]
  - given-names: Siyu
    family-names: Yang
identifiers:
  - type: url
    value: 'https://arxiv.org/abs/1907.06772'
    description: 'arXiv preprint, 1907.06772, 2019'
repository-code: 'http://github.com/ecologize/CameraTraps'
keywords:
  - Camera traps
  - Conservation
  - Computer vision
license: MIT
Owner metadata
- Name: Microsoft
- Login: microsoft
- Email:
- Kind: organization
- Description: Open source projects and samples from Microsoft
- Website: https://opensource.microsoft.com
- Location: Redmond, WA
- Twitter: OpenAtMicrosoft
- Company:
- Icon url: https://avatars.githubusercontent.com/u/6154722?v=4
- Repositories: 6961
- Last synced at: 2025-04-20T10:12:39.710Z
- Profile URL: https://github.com/microsoft
GitHub Events
Total
- Create event: 12
- Release event: 3
- Issues event: 36
- Watch event: 99
- Delete event: 6
- Member event: 2
- Issue comment event: 46
- Push event: 97
- Pull request review comment event: 4
- Pull request review event: 12
- Pull request event: 54
- Fork event: 25
Last Year
- Create event: 12
- Release event: 3
- Issues event: 36
- Watch event: 99
- Delete event: 6
- Member event: 2
- Issue comment event: 46
- Push event: 97
- Pull request review comment event: 4
- Pull request review event: 12
- Pull request event: 54
- Fork event: 25
Committers metadata
Last synced: 4 days ago
Total Commits: 3,045
Total Committers: 62
Avg Commits per committer: 49.113
Development Distribution Score (DDS): 0.551
Commits in past year: 174
Committers in past year: 12
Avg Commits per committer in past year: 14.5
Development Distribution Score (DDS) in past year: 0.471
Name | Commits | |
---|---|---|
Dan Morris | d****s@c****u | 1368 |
Marcel Simon | a****n@m****m | 340 |
Siyu Yang | y****u@m****m | 282 |
amritagupta | g****0@g****m | 268 |
Christopher Yeh | c****6 | 169 |
zhmiao | z****o@m****m | 148 |
aa-hernandez | 6****z | 76 |
annie.enchakattu | a****u@g****m | 63 |
Vardhan Duvvuri | G****r@G****g | 62 |
Isai Daniel | 8****8 | 38 |
v-andreshern | v****n@m****m | 18 |
Annie Enchakattu | 3****u | 15 |
Daniela Ruiz | d****1@u****o | 13 |
Vardhan Duvvuri | v****i@o****m | 12 |
Ubuntu | l****x@l****t | 12 |
Vardhan duvvuri | v****i@g****g | 12 |
Daniela | v****z@m****m | 10 |
Ubuntu | f****s@n****t | 10 |
arashno | a****h@g****m | 10 |
Patrick Flickinger | p****n@m****m | 10 |
Default User | u****r@u****t | 7 |
Daniela Ruiz | d****1@d****1@u****o | 6 |
Ubuntu | c****e@c****t | 6 |
Ubuntu | m****t@m****t | 6 |
SuhailSaify | s****8@g****m | 6 |
dependabot[bot] | 4****] | 6 |
Sara Beery | s****y@g****m | 6 |
Siyu Yang | y****7@i****m | 5 |
Darío Hereñú | m****a@g****m | 4 |
luvargas2 | l****0@u****o | 4 |
and 32 more... |
Committer domains:
- microsoft.com: 8
- uniandes.edu.co: 2
- cs.stanford.edu: 1
- gramener.com: 1
- glp-098.gramener.org: 1
- lynxvm.vby1cnsztnbubpoopriasnvbxa.jx.internal.cloudapp.net: 1
- gramener.org: 1
- nactidownloadvm.5svuschbwnqu5b5t3fbzyv3y5b.xx.internal.cloudapp.net: 1
- ubuntu.localhost: 1
- da.ruizl1: 1
- coyotevm.vby1cnsztnbubpoopriasnvbxa.jx.internal.cloudapp.net: 1
- meerkatvm.vby1cnsztnbubpoopriasnvbxa.jx.internal.cloudapp.net: 1
- meyerperin.com: 1
- condorvm.vby1cnsztnbubpoopriasnvbxa.jx.internal.cloudapp.net: 1
- mallardvm.vby1cnsztnbubpoopriasnvbxa.jx.internal.cloudapp.net: 1
- onemoremallard.5zrdgxxmlslenhl3jfsvnatlfg.jx.internal.cloudapp.net: 1
- megadetector-float.fywutlkdhe2etlddvntzl12x1a.ex.internal.cloudapp.net: 1
- dmultiplicitas.onmicrosoft.com: 1
- synthetaic.com: 1
- brunel.ac.uk: 1
- bhconsulting.ca: 1
- pa.h4nqyl4svykevn4x5gbo3tfbhh.cx.internal.cloudapp.net: 1
- megadetector-float2.fywutlkdhe2etlddvntzl12x1a.ex.internal.cloudapp.net: 1
- google.com: 1
- amnh.org: 1
Issue and Pull Request metadata
Last synced: 1 day ago
Total issues: 112
Total pull requests: 229
Average time to close issues: about 2 months
Average time to close pull requests: 16 days
Total issue authors: 74
Total pull request authors: 30
Average comments per issue: 3.03
Average comments per pull request: 0.43
Merged pull request: 165
Bot issues: 5
Bot pull requests: 49
Past year issues: 37
Past year pull requests: 59
Past year average time to close issues: about 1 month
Past year average time to close pull requests: 27 days
Past year issue authors: 27
Past year pull request authors: 13
Past year average comments per issue: 1.81
Past year average comments per pull request: 0.39
Past year merged pull request: 39
Past year bot issues: 0
Past year bot pull requests: 13
Top Issue Authors
- microsoft-github-policy-service[bot] (5)
- aweaver1fandm (5)
- VLucet (4)
- JaimyvS (4)
- yodaka0 (3)
- arky (3)
- nathanielrindlaub (3)
- VYRION-Ai (3)
- barlavi1 (3)
- ehallein (2)
- davidwhealey (2)
- rozimurodnorhojaev05 (2)
- dvelasco3 (2)
- AP-DP (2)
- NetZissou (2)
Top Pull Request Authors
- zhmiao (79)
- aa-hernandez (47)
- dependabot[bot] (45)
- agentmorris (12)
- JoejynWan (4)
- yangsiyu007 (4)
- microsoft-github-policy-service[bot] (4)
- chrisyeh96 (3)
- BenCretois (3)
- luvargas2 (2)
- PetervanLunteren (2)
- brianhogg (2)
- jgoodheart (2)
- persts (2)
- VLucet (2)
Top Issue Labels
- bug (24)
- enhancement (12)
- question (11)
- good first issue (4)
- Waiting for more info (2)
- help wanted (1)
- New Feature (1)
- discussion (1)
Top Pull Request Labels
- dependencies (45)
- python (38)
- .NET (2)
Package metadata
- Total packages: 2
- Total downloads:
- pypi: 1,614 last-month
- Total dependent packages: 0 (may contain duplicates)
- Total dependent repositories: 0 (may contain duplicates)
- Total versions: 33
- Total maintainers: 3
pypi.org: animalsearch
A CLI for sorting animal images.
- Homepage:
- Documentation: https://animalsearch.readthedocs.io/
- Licenses: MIT
- Latest release: 0.0.4 (published about 1 year ago)
- Last Synced: 2025-04-25T13:35:54.545Z (1 day ago)
- Versions: 4
- Dependent Packages: 0
- Dependent Repositories: 0
- Downloads: 159 Last month
- Rankings:
- Dependent packages count: 9.779%
- Average: 37.15%
- Dependent repos count: 64.52%
- Maintainers (1)
pypi.org: pytorchwildlife
a PyTorch Collaborative Deep Learning Framework for Conservation.
- Homepage: https://github.com/microsoft/CameraTraps/
- Documentation: https://pytorchwildlife.readthedocs.io/
- Licenses: MIT
- Latest release: 1.2.2 (published 3 days ago)
- Last Synced: 2025-04-25T13:35:54.532Z (1 day ago)
- Versions: 29
- Dependent Packages: 0
- Dependent Repositories: 0
- Downloads: 1,455 Last month
- Rankings:
- Dependent packages count: 9.379%
- Average: 38.75%
- Dependent repos count: 68.121%
- Maintainers (2)
Dependencies
- PytorchWildlife *
- ipywidgets *
- nbsphinx *
- sphinx *
- Pillow ==10.1.0
- gradio ==4.8.0
- numpy *
- supervision ==0.16.0
- torch ==1.10.1
- torchaudio ==0.10.1
- torchvision ==0.11.2
- tqdm ==4.66.1
- ultralytics-yolov5 *
- python 3.8-slim build
- absl-py ==2.1.0
- aiofiles ==23.2.1
- aiohttp ==3.9.3
- aiosignal ==1.3.1
- altair ==5.2.0
- annotated-types ==0.6.0
- anyio ==4.2.0
- asttokens ==2.4.1
- async-timeout ==4.0.3
- attrs ==23.2.0
- backcall ==0.2.0
- cachetools ==5.3.2
- certifi ==2023.11.17
- charset-normalizer ==3.3.2
- click ==8.1.7
- colorama ==0.4.6
- contourpy ==1.1.1
- cycler ==0.12.1
- decorator ==5.1.1
- exceptiongroup ==1.2.0
- executing ==2.0.1
- fastapi ==0.109.0
- ffmpy ==0.3.1
- filelock ==3.13.1
- fire ==0.5.0
- fonttools ==4.47.2
- frozenlist ==1.4.1
- fsspec ==2023.12.2
- google-auth ==2.27.0
- google-auth-oauthlib ==1.0.0
- gradio ==4.8.0
- gradio-client ==0.7.1
- grpcio ==1.60.0
- h11 ==0.14.0
- httpcore ==1.0.2
- httpx ==0.26.0
- huggingface-hub ==0.20.3
- idna ==3.6
- importlib-metadata ==7.0.1
- importlib-resources ==6.1.1
- ipython ==8.12.3
- jedi ==0.19.1
- jinja2 ==3.1.3
- joblib ==1.3.2
- jsonschema ==4.21.1
- jsonschema-specifications ==2023.12.1
- kiwisolver ==1.4.5
- lightning-utilities ==0.10.1
- markdown ==3.5.2
- markdown-it-py ==3.0.0
- markupsafe ==2.1.4
- matplotlib ==3.7.4
- matplotlib-inline ==0.1.6
- mdurl ==0.1.2
- multidict ==6.0.4
- munch ==2.5.0
- numpy ==1.24.4
- oauthlib ==3.2.2
- opencv-python ==4.9.0.80
- opencv-python-headless ==4.9.0.80
- orjson ==3.9.12
- packaging ==23.2
- pandas ==2.0.3
- parso ==0.8.3
- pexpect ==4.9.0
- pickleshare ==0.7.5
- pillow ==10.1.0
- pkgutil-resolve-name ==1.3.10
- prompt-toolkit ==3.0.43
- protobuf ==3.20.1
- psutil ==5.9.8
- ptyprocess ==0.7.0
- pure-eval ==0.2.2
- pyasn1 ==0.5.1
- pyasn1-modules ==0.3.0
- pydantic ==2.6.0
- pydantic-core ==2.16.1
- pydub ==0.25.1
- pygments ==2.17.2
- pyparsing ==3.1.1
- python-dateutil ==2.8.2
- python-multipart ==0.0.6
- pytorch-lightning ==1.9.0
- pytorchwildlife *
- pytz ==2023.4
- pyyaml ==6.0.1
- referencing ==0.33.0
- requests ==2.31.0
- requests-oauthlib ==1.3.1
- rich ==13.7.0
- rpds-py ==0.17.1
- rsa ==4.9
- scikit-learn ==1.2.0
- scipy ==1.10.1
- seaborn ==0.13.2
- semantic-version ==2.10.0
- shellingham ==1.5.4
- six ==1.16.0
- sniffio ==1.3.0
- stack-data ==0.6.3
- starlette ==0.35.1
- supervision ==0.16.0
- tensorboard ==2.14.0
- tensorboard-data-server ==0.7.2
- termcolor ==2.4.0
- thop ==0.1.1
- threadpoolctl ==3.2.0
- tomlkit ==0.12.0
- toolz ==0.12.1
- torch ==1.10.1
- torchaudio ==0.10.1
- torchmetrics ==1.3.0.post0
- torchvision ==0.11.2
- tqdm ==4.66.1
- traitlets ==5.14.1
- typer ==0.9.0
- typing-extensions ==4.9.0
- tzdata ==2023.4
- ultralytics-yolov5 ==0.1.1
- urllib3 ==2.2.0
- uvicorn ==0.27.0.post1
- wcwidth ==0.2.13
- websockets ==11.0.3
- werkzeug ==3.0.1
- yarl ==1.9.4
- zipp ==3.17.0
- Pillow *
- chardet *
- gradio *
- supervision ==0.19.0
- torch *
- torchaudio *
- torchvision *
- tqdm *
- ultralytics-yolov5 *
- PytorchWildlife *
- munch *
- ultralytics *
- wget *
- _libgcc_mutex 0.1
- _openmp_mutex 4.5
- bzip2 1.0.8
- ca-certificates 2023.11.17
- ld_impl_linux-64 2.40
- libffi 3.4.2
- libgcc-ng 13.2.0
- libgomp 13.2.0
- libnsl 2.0.1
- libsqlite 3.44.2
- libuuid 2.38.1
- libxcrypt 4.4.36
- libzlib 1.2.13
- ncurses 6.4
- openssl 3.2.0
- pip 23.3.2
- python 3.8.18
- readline 8.2
- setuptools 69.0.3
- tk 8.6.13
- wheel 0.42.0
- xz 5.2.6
- absl-py ==2.1.0
- aiofiles ==23.2.1
- annotated-types ==0.7.0
- antlr4-python3-runtime ==4.9.3
- anyio ==4.6.0
- appdirs ==1.4.4
- asttokens ==2.4.1
- attrs ==24.2.0
- certifi ==2024.8.30
- cffi ==1.17.1
- chardet ==5.2.0
- charset-normalizer ==3.3.2
- click ==8.1.7
- contourpy ==1.3.0
- crowsetta ==5.1.0
- cycler ==0.12.1
- decorator ==5.1.1
- defusedxml ==0.7.1
- exceptiongroup ==1.2.2
- executing ==2.1.0
- fastapi ==0.115.0
- ffmpy ==0.4.0
- filelock ==3.16.1
- fire ==0.6.0
- fonttools ==4.54.0
- fsspec ==2024.9.0
- gradio ==4.44.0
- gradio-client ==1.3.0
- grpcio ==1.66.1
- h11 ==0.14.0
- httpcore ==1.0.5
- httpx ==0.27.2
- huggingface-hub ==0.25.1
- idna ==3.10
- importlib-resources ==6.4.5
- ipython ==8.27.0
- jedi ==0.19.1
- jinja2 ==3.1.4
- joblib ==1.4.2
- kiwisolver ==1.4.7
- markdown ==3.7
- markdown-it-py ==3.0.0
- markupsafe ==2.1.5
- matplotlib ==3.9.2
- matplotlib-inline ==0.1.7
- mdurl ==0.1.2
- mpmath ==1.3.0
- multimethod ==1.12
- munch ==4.0.0
- mypy-extensions ==1.0.0
- networkx ==3.3
- numpy ==1.26.4
- nvidia-cublas-cu12 ==12.1.3.1
- nvidia-cuda-cupti-cu12 ==12.1.105
- nvidia-cuda-nvrtc-cu12 ==12.1.105
- nvidia-cuda-runtime-cu12 ==12.1.105
- nvidia-cudnn-cu12 ==9.1.0.70
- nvidia-cufft-cu12 ==11.0.2.54
- nvidia-curand-cu12 ==10.3.2.106
- nvidia-cusolver-cu12 ==11.4.5.107
- nvidia-cusparse-cu12 ==12.1.0.106
- nvidia-nccl-cu12 ==2.20.5
- nvidia-nvjitlink-cu12 ==12.6.68
- nvidia-nvtx-cu12 ==12.1.105
- omegaconf ==2.3.0
- opencv-python ==4.10.0.84
- opencv-python-headless ==4.10.0.84
- orjson ==3.10.7
- packaging ==24.1
- pandas ==2.2.3
- pandera ==0.21.0
- parso ==0.8.4
- pexpect ==4.9.0
- pillow ==10.4.0
- prompt-toolkit ==3.0.47
- protobuf ==3.20.1
- psutil ==6.0.0
- ptyprocess ==0.7.0
- pure-eval ==0.2.3
- py-cpuinfo ==9.0.0
- pycparser ==2.22
- pydantic ==2.9.2
- pydantic-core ==2.23.4
- pydub ==0.25.1
- pygments ==2.18.0
- pyparsing ==3.1.4
- python-dateutil ==2.9.0.post0
- python-multipart ==0.0.10
- pytorchwildlife *
- pytz ==2024.2
- pyyaml ==6.0.2
- requests ==2.32.3
- rich ==13.8.1
- ruff ==0.6.7
- scikit-learn ==1.6.0
- scipy ==1.14.1
- seaborn ==0.13.2
- semantic-version ==2.10.0
- setuptools ==75.6.0
- shellingham ==1.5.4
- six ==1.16.0
- sniffio ==1.3.1
- soundfile ==0.12.1
- stack-data ==0.6.3
- starlette ==0.38.6
- supervision ==0.23.0
- sympy ==1.13.3
- tensorboard ==2.17.1
- tensorboard-data-server ==0.7.2
- termcolor ==2.4.0
- thop ==0.1.1
- threadpoolctl ==3.5.0
- tomlkit ==0.12.0
- torch ==2.4.1
- torchaudio ==2.4.1
- torchvision ==0.19.1
- tqdm ==4.66.5
- traitlets ==5.14.3
- triton ==3.0.0
- typeguard ==4.4.1
- typer ==0.12.5
- typing-extensions ==4.12.2
- typing-inspect ==0.9.0
- tzdata ==2024.2
- ultralytics ==8.2.100
- ultralytics-thop ==2.0.8
- ultralytics-yolov5 ==0.1.1
- urllib3 ==2.2.3
- uvicorn ==0.30.6
- wcwidth ==0.2.13
- websockets ==12.0
- werkzeug ==3.0.4
- wget ==3.2
- wrapt ==1.17.0
Score: 18.30330722693178