{"id":350827,"name":"SpeciesNet","description":"AI models trained to classify species in images from motion-triggered wildlife cameras.","url":"https://github.com/google/cameratrapai","last_synced_at":"2026-05-12T12:30:20.528Z","repository":{"id":280011580,"uuid":"853459855","full_name":"google/cameratrapai","owner":"google","description":"AI models trained by Google to classify species in images from motion-triggered wildlife cameras.","archived":false,"fork":false,"pushed_at":"2026-04-25T21:41:32.000Z","size":13128,"stargazers_count":510,"open_issues_count":5,"forks_count":56,"subscribers_count":11,"default_branch":"main","last_synced_at":"2026-05-09T11:07:50.839Z","etag":null,"topics":[],"latest_commit_sha":null,"homepage":"","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"apache-2.0","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/google.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":"CONTRIBUTING.md","funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":"citation.cff","codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null,"zenodo":null,"notice":null,"maintainers":null,"copyright":null,"agents":null,"dco":null,"cla":null}},"created_at":"2024-09-06T17:43:58.000Z","updated_at":"2026-05-08T19:26:26.000Z","dependencies_parsed_at":"2025-03-23T13:29:51.768Z","dependency_job_id":"6efaa294-301e-42cf-b643-10b9e359de0b","html_url":"https://github.com/google/cameratrapai","commit_stats":null,"previous_names":["google/cameratrapai"],"tags_count":1,"template":false,"template_full_name":null,"purl":"pkg:github/google/cameratrapai","repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/google%2Fcameratrapai","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/google%2Fcameratrapai/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/google%2Fcameratrapai/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/google%2Fcameratrapai/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/google","download_url":"https://codeload.github.com/google/cameratrapai/tar.gz/refs/heads/main","sbom_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/google%2Fcameratrapai/sbom","scorecard":null,"host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":286080680,"owners_count":32894003,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2026-05-10T13:40:02.631Z","status":"online","status_checked_at":"2026-05-11T02:00:05.975Z","response_time":120,"last_error":null,"robots_txt_status":"success","robots_txt_updated_at":"2025-07-24T06:49:26.215Z","robots_txt_url":"https://github.com/robots.txt","online":true,"can_crawl_api":true,"host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"owner":{"login":"google","name":"Google","uuid":"1342004","kind":"organization","description":"Google ❤️ Open Source","email":"opensource@google.com","website":"https://opensource.google/","location":"United 
States of America","twitter":"GoogleOSS","company":null,"icon_url":"https://avatars.githubusercontent.com/u/1342004?v=4","repositories_count":2773,"last_synced_at":"2025-08-12T15:55:14.931Z","metadata":{"has_sponsors_listing":false},"html_url":"https://github.com/google","funding_links":[],"total_stars":1967885,"followers":58475,"following":0,"created_at":"2022-11-02T16:20:58.973Z","updated_at":"2025-08-12T15:55:14.931Z","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/google","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/google/repositories"},"packages":[{"id":11265797,"name":"speciesnet","ecosystem":"pypi","description":"Tools for classifying species in images from motion-triggered wildlife cameras.","homepage":"https://github.com/google/cameratrapai","licenses":"Apache-2.0","normalized_licenses":["Apache-2.0"],"repository_url":"https://github.com/google/cameratrapai","keywords_array":["camera traps","conservation","wildlife","ai","species classification","wildlife insights","speciesnet"],"namespace":null,"versions_count":9,"first_release_published_at":"2025-01-15T01:45:18.000Z","latest_release_published_at":"2025-12-25T17:50:45.000Z","latest_release_number":"5.0.3","last_synced_at":"2026-05-09T13:07:18.138Z","created_at":"2025-01-15T02:00:47.522Z","updated_at":"2026-05-09T13:45:33.003Z","registry_url":"https://pypi.org/project/speciesnet/","install_command":"pip install speciesnet --index-url https://pypi.org/simple","documentation_url":"https://speciesnet.readthedocs.io/","metadata":{"funding":null,"documentation":null,"classifiers":["Intended Audience :: Developers","Intended Audience :: Education","Intended Audience :: Science/Research","Operating System :: OS Independent","Programming Language :: Python :: 3","Programming Language :: Python :: 3.10","Programming Language :: Python :: 3.11","Programming Language :: Python :: 3.12","Programming Language :: Python :: 3.9","Topic :: Scientific/Engineering :: Artificial Intelligence","Topic :: Scientific/Engineering :: Image Recognition"],"normalized_name":"speciesnet","project_status":null},"repo_metadata":{"id":280011580,"uuid":"853459855","full_name":"google/cameratrapai","owner":"google","description":"AI models trained by Google to classify species in images from motion-triggered wildlife 
cameras.","archived":false,"fork":false,"pushed_at":"2026-04-25T21:41:32.000Z","size":13128,"stargazers_count":506,"open_issues_count":5,"forks_count":55,"subscribers_count":11,"default_branch":"main","last_synced_at":"2026-05-01T00:03:48.751Z","etag":null,"topics":[],"latest_commit_sha":null,"homepage":"","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"apache-2.0","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/google.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":"CONTRIBUTING.md","funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":"citation.cff","codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null,"zenodo":null,"notice":null,"maintainers":null,"copyright":null,"agents":null,"dco":null,"cla":null}},"created_at":"2024-09-06T17:43:58.000Z","updated_at":"2026-04-30T08:27:54.000Z","dependencies_parsed_at":"2025-03-23T13:29:51.768Z","dependency_job_id":"6efaa294-301e-42cf-b643-10b9e359de0b","html_url":"https://github.com/google/cameratrapai","commit_stats":null,"previous_names":["google/cameratrapai"],"tags_count":1,"template":false,"template_full_name":null,"purl":"pkg:github/google/cameratrapai","repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/google%2Fcameratrapai","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/google%2Fcameratrapai/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/google%2Fcameratrapai/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/google%2Fcameratrapai/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/google","download_url":"https://codeload.github.com/google/cameratrapai/tar.gz/refs/heads/main","sbom_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/google%2Fcameratrapai/sbom","scorecard":null,"host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":286080680,"owners_count":32487739,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2026-04-30T13:12:12.517Z","status":"online","status_checked_at":"2026-05-01T02:00:05.856Z","response_time":64,"last_error":null,"robots_txt_status":"success","robots_txt_updated_at":"2025-07-24T06:49:26.215Z","robots_txt_url":"https://github.com/robots.txt","online":true,"can_crawl_api":true,"host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"},"owner_record":{"login":"google","name":"Google","uuid":"1342004","kind":"organization","description":"Google ❤️ Open Source","email":"opensource@google.com","website":"https://opensource.google/","location":"United States of 
America","twitter":"GoogleOSS","company":null,"icon_url":"https://avatars.githubusercontent.com/u/1342004?v=4","repositories_count":2773,"last_synced_at":"2025-08-12T15:55:14.931Z","metadata":{"has_sponsors_listing":false},"html_url":"https://github.com/google","funding_links":[],"total_stars":1967885,"followers":58475,"following":0,"created_at":"2022-11-02T16:20:58.973Z","updated_at":"2025-08-12T15:55:14.931Z","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/google","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/google/repositories"},"tags":[{"name":"last_tf_commit","sha":"a47bfac3407c58c724e41eb90a16241fbe3dcbf8","kind":"tag","published_at":"2025-04-25T20:07:16.000Z","download_url":"https://codeload.github.com/google/cameratrapai/tar.gz/last_tf_commit","html_url":"https://github.com/google/cameratrapai/releases/tag/last_tf_commit","dependencies_parsed_at":null,"dependency_job_id":null,"purl":"pkg:github/google/cameratrapai@last_tf_commit","tag_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/google%2Fcameratrapai/tags/last_tf_commit","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/google%2Fcameratrapai/tags/last_tf_commit/manifests"}]},"repo_metadata_updated_at":"2026-05-09T13:45:32.979Z","dependent_packages_count":0,"downloads":1822,"downloads_period":"last-month","dependent_repos_count":0,"rankings":{"downloads":null,"dependent_repos_count":54.945816847798866,"dependent_packages_count":9.760949316372013,"stargazers_count":null,"forks_count":null,"docker_downloads_count":null,"average":32.35338308208544},"purl":"pkg:pypi/speciesnet","advisories":[],"docker_usage_url":"https://docker.ecosyste.ms/usage/pypi/speciesnet","docker_dependents_count":null,"docker_downloads_count":null,"usage_url":"https://repos.ecosyste.ms/usage/pypi/speciesnet","dependent_repositories_url":"https://repos.ecosyste.ms/api/v1/usage/pypi/speciesnet/dependencies","status":null,"funding_links":[],"critical":null,"issue_metadata":{"last_synced_at":"2026-04-23T03:02:06.559Z","issues_count":25,"pull_requests_count":16,"avg_time_to_close_issue":272088.5789473684,"avg_time_to_close_pull_request":75659.16666666667,"issues_closed_count":19,"pull_requests_closed_count":12,"pull_request_authors_count":8,"issue_authors_count":16,"avg_comments_per_issue":3.12,"avg_comments_per_pull_request":0.6875,"merged_pull_requests_count":9,"bot_issues_count":0,"bot_pull_requests_count":0,"past_year_issues_count":8,"past_year_pull_requests_count":9,"past_year_avg_time_to_close_issue":27915.833333333332,"past_year_avg_time_to_close_pull_request":119638.33333333333,"past_year_issues_closed_count":6,"past_year_pull_requests_closed_count":6,"past_year_pull_request_authors_count":4,"past_year_issue_authors_count":7,"past_year_avg_comments_per_issue":2.5,"past_year_avg_comments_per_pull_request":0.4444444444444444,"past_year_bot_issues_count":0,"past_year_bot_pull_requests_count":0,"past_year_merged_pull_requests_count":5,"issues_url":"https://issues.ecosyste.ms/api/v1/hosts/GitHub/repositories/google%2Fcameratrapai/issues","maintainers":[{"login":"agentmorris","count":7,"url":"https://issues.ecosyste.ms/api/v1/hosts/GitHub/authors/agentmorris"},{"login":"timmh","count":2,"url":"https://issues.ecosyste.ms/api/v1/hosts/GitHub/authors/timmh"},{"login":"PetervanLunteren","count":1,"url":"https://issues.ecosyste.ms/api/v1/hosts/GitHub/authors/PetervanLunteren"},{"login":"stefanistrate","count":1,"url":"https://issues.ecosyste.ms/api/v1/hosts/GitHub/authors/stefanis
trate"}],"active_maintainers":[{"login":"agentmorris","count":6,"url":"https://issues.ecosyste.ms/api/v1/hosts/GitHub/authors/agentmorris"}]},"versions_url":"https://packages.ecosyste.ms/api/v1/registries/pypi.org/packages/speciesnet/versions","version_numbers_url":"https://packages.ecosyste.ms/api/v1/registries/pypi.org/packages/speciesnet/version_numbers","latest_version_url":"https://packages.ecosyste.ms/api/v1/registries/pypi.org/packages/speciesnet/latest_version","dependent_packages_url":"https://packages.ecosyste.ms/api/v1/registries/pypi.org/packages/speciesnet/dependent_packages","related_packages_url":"https://packages.ecosyste.ms/api/v1/registries/pypi.org/packages/speciesnet/related_packages","codemeta_url":"https://packages.ecosyste.ms/api/v1/registries/pypi.org/packages/speciesnet/codemeta","maintainers":[{"uuid":"google_opensource","login":"google_opensource","name":null,"email":null,"url":null,"packages_count":446,"html_url":"https://pypi.org/user/google_opensource/","role":"Owner","created_at":"2025-03-15T06:42:33.071Z","updated_at":"2025-03-15T06:42:33.071Z","packages_url":"https://packages.ecosyste.ms/api/v1/registries/pypi.org/maintainers/google_opensource/packages"},{"uuid":"agentmorris","login":"agentmorris","name":null,"email":null,"url":null,"packages_count":3,"html_url":"https://pypi.org/user/agentmorris/","role":"Owner","created_at":"2025-01-15T10:13:27.644Z","updated_at":"2025-01-15T10:13:27.644Z","packages_url":"https://packages.ecosyste.ms/api/v1/registries/pypi.org/maintainers/agentmorris/packages"},{"uuid":"stefanistrate","login":"stefanistrate","name":null,"email":null,"url":null,"packages_count":2,"html_url":"https://pypi.org/user/stefanistrate/","role":"Maintainer","created_at":"2025-03-08T02:22:51.191Z","updated_at":"2025-03-08T02:22:51.191Z","packages_url":"https://packages.ecosyste.ms/api/v1/registries/pypi.org/maintainers/stefanistrate/packages"},{"uuid":"speciesnet","login":"speciesnet","name":null,"email":null,"url":null,"packages_count":1,"html_url":"https://pypi.org/user/speciesnet/","role":"Owner","created_at":"2025-03-15T06:42:33.118Z","updated_at":"2025-03-15T06:42:33.118Z","packages_url":"https://packages.ecosyste.ms/api/v1/registries/pypi.org/maintainers/speciesnet/packages"}],"registry":{"name":"pypi.org","url":"https://pypi.org","ecosystem":"pypi","default":true,"packages_count":861190,"maintainers_count":367888,"namespaces_count":0,"keywords_count":271316,"github":"pypi","metadata":{"funded_packages_count":53172},"icon_url":"https://github.com/pypi.png","created_at":"2022-04-04T15:19:23.364Z","updated_at":"2026-04-09T05:08:03.587Z","packages_url":"https://packages.ecosyste.ms/api/v1/registries/pypi.org/packages","maintainers_url":"https://packages.ecosyste.ms/api/v1/registries/pypi.org/maintainers","namespaces_url":"https://packages.ecosyste.ms/api/v1/registries/pypi.org/namespaces"}}],"commits":{"id":10756345,"full_name":"google/cameratrapai","default_branch":"main","total_commits":175,"total_committers":9,"total_bot_commits":0,"total_bot_committers":0,"mean_commits":19.444444444444443,"dds":0.44571428571428573,"past_year_total_commits":58,"past_year_total_committers":3,"past_year_total_bot_commits":0,"past_year_total_bot_committers":0,"past_year_mean_commits":19.333333333333332,"past_year_dds":0.03448275862068961,"last_synced_at":"2026-05-10T19:21:04.973Z","last_synced_commit":"609df1a829595f974b31b31560b8150fb98e40fb","created_at":"2025-08-12T15:16:53.396Z","updated_at":"2026-05-10T19:20:56.592Z","committers":[{"name":"Dan 
Morris","email":"agentmorris@gmail.com","login":"agentmorris","count":97},{"name":"Ștefan Istrate","email":"stefan.istrate@gmail.com","login":"stefanistrate","count":63},{"name":"Tomer Gadot","email":"tomerg@google.com","login":"tomergadot","count":4},{"name":"Tanya Birch","email":"41585183+tanyabirch","login":"tanyabirch","count":4},{"name":"Timm Haucke","email":"haucke@mit.edu","login":"timmh","count":2},{"name":"Val. Lucet","email":"VLucet","login":"VLucet","count":2},{"name":"oksachi","email":"60711465+oksachi","login":"oksachi","count":1},{"name":"Viktor Domazetoski","email":"101590116+ViktorDomazetoski","login":"ViktorDomazetoski","count":1},{"name":"CharlesCNorton","email":"135471798+CharlesCNorton","login":"CharlesCNorton","count":1}],"past_year_committers":[{"name":"Dan Morris","email":"agentmorris@gmail.com","login":"agentmorris","count":56},{"name":"oksachi","email":"60711465+oksachi","login":"oksachi","count":1},{"name":"Viktor Domazetoski","email":"101590116+ViktorDomazetoski","login":"ViktorDomazetoski","count":1}],"commits_url":"https://commits.ecosyste.ms/api/v1/hosts/GitHub/repositories/google%2Fcameratrapai/commits","host":{"name":"GitHub","url":"https://github.com","kind":"github","last_synced_at":"2026-05-11T00:00:23.725Z","repositories_count":6232531,"commits_count":895044206,"contributors_count":34895384,"owners_count":1151502,"icon_url":"https://github.com/github.png","host_url":"https://commits.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://commits.ecosyste.ms/api/v1/hosts/GitHub/repositories"}},"issues_stats":{"full_name":"google/cameratrapai","html_url":"https://github.com/google/cameratrapai","last_synced_at":"2026-04-23T03:02:06.559Z","status":"error","issues_count":25,"pull_requests_count":16,"avg_time_to_close_issue":272088.5789473684,"avg_time_to_close_pull_request":75659.16666666667,"issues_closed_count":19,"pull_requests_closed_count":12,"pull_request_authors_count":8,"issue_authors_count":16,"avg_comments_per_issue":3.12,"avg_comments_per_pull_request":0.6875,"merged_pull_requests_count":9,"bot_issues_count":0,"bot_pull_requests_count":0,"past_year_issues_count":8,"past_year_pull_requests_count":9,"past_year_avg_time_to_close_issue":27915.833333333332,"past_year_avg_time_to_close_pull_request":119638.33333333333,"past_year_issues_closed_count":6,"past_year_pull_requests_closed_count":6,"past_year_pull_request_authors_count":4,"past_year_issue_authors_count":7,"past_year_avg_comments_per_issue":2.5,"past_year_avg_comments_per_pull_request":0.4444444444444444,"past_year_bot_issues_count":0,"past_year_bot_pull_requests_count":0,"past_year_merged_pull_requests_count":5,"created_at":"2025-08-12T15:16:52.791Z","updated_at":"2026-04-23T03:02:06.559Z","repository_url":"https://issues.ecosyste.ms/api/v1/hosts/GitHub/repositories/google%2Fcameratrapai","issues_url":"https://issues.ecosyste.ms/api/v1/hosts/GitHub/repositories/google%2Fcameratrapai/issues","issue_labels_count":{"duplicate":1},"pull_request_labels_count":{},"issue_author_associations_count":{"NONE":19,"CONTRIBUTOR":5,"COLLABORATOR":1},"pull_request_author_associations_count":{"COLLABORATOR":10,"NONE":4,"CONTRIBUTOR":2},"issue_authors":{"VLucet":6,"HugoMarkoff":4,"rsmiller74":2,"cheperboy":1,"robinsandfort":1,"PetervanLunteren":1,"aman5319":1,"eric-catman":1,"tinytosa":1,"ioRekz":1,"GuangyiLu":1,"FreekDB":1,"ismaelvbrack":1,"verrassendhollands":1,"sergewich":1,"hooge104":1},"pull_request_authors":{"agentmorris":7,"timmh":2,"VLucet":2,"stefanistrate":1,"ViktorDomazetoski":1,"CharlesCNorton":
1,"YoussefBayouli":1,"oksachi":1},"host":{"name":"GitHub","url":"https://github.com","kind":"github","last_synced_at":"2026-04-25T00:00:14.967Z","repositories_count":14383872,"issues_count":34382842,"pull_requests_count":112614090,"authors_count":11244668,"icon_url":"https://github.com/github.png","host_url":"https://issues.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://issues.ecosyste.ms/api/v1/hosts/GitHub/repositories","owners_url":"https://issues.ecosyste.ms/api/v1/hosts/GitHub/owners","authors_url":"https://issues.ecosyste.ms/api/v1/hosts/GitHub/authors"},"past_year_issue_labels_count":{},"past_year_pull_request_labels_count":{},"past_year_issue_author_associations_count":{"NONE":8},"past_year_pull_request_author_associations_count":{"COLLABORATOR":6,"NONE":3},"past_year_issue_authors":{"HugoMarkoff":2,"cheperboy":1,"FreekDB":1,"GuangyiLu":1,"ioRekz":1,"ismaelvbrack":1,"tinytosa":1},"past_year_pull_request_authors":{"agentmorris":6,"oksachi":1,"ViktorDomazetoski":1,"YoussefBayouli":1},"maintainers":[{"login":"agentmorris","count":7,"url":"https://issues.ecosyste.ms/api/v1/hosts/GitHub/authors/agentmorris"},{"login":"timmh","count":2,"url":"https://issues.ecosyste.ms/api/v1/hosts/GitHub/authors/timmh"},{"login":"PetervanLunteren","count":1,"url":"https://issues.ecosyste.ms/api/v1/hosts/GitHub/authors/PetervanLunteren"},{"login":"stefanistrate","count":1,"url":"https://issues.ecosyste.ms/api/v1/hosts/GitHub/authors/stefanistrate"}],"active_maintainers":[{"login":"agentmorris","count":6,"url":"https://issues.ecosyste.ms/api/v1/hosts/GitHub/authors/agentmorris"}]},"events":{"total":{"DeleteEvent":15,"MemberEvent":2,"PullRequestEvent":35,"ForkEvent":33,"IssuesEvent":44,"WatchEvent":324,"IssueCommentEvent":107,"PublicEvent":1,"PushEvent":100,"PullRequestReviewCommentEvent":7,"PullRequestReviewEvent":9,"CreateEvent":20},"last_year":{"DeleteEvent":4,"MemberEvent":2,"PullRequestEvent":17,"ForkEvent":13,"IssuesEvent":11,"WatchEvent":86,"IssueCommentEvent":27,"PushEvent":24,"PullRequestReviewCommentEvent":7,"PullRequestReviewEvent":8,"CreateEvent":7}},"keywords":[],"dependencies":[{"ecosystem":"actions","filepath":".github/workflows/markdown_style_checks.yml","sha":null,"kind":"manifest","created_at":"2025-02-28T22:26:32.582Z","updated_at":"2025-02-28T22:26:32.582Z","repository_link":"https://github.com/google/cameratrapai/blob/main/.github/workflows/markdown_style_checks.yml","dependencies":[{"id":22072020902,"package_name":"actions/checkout","ecosystem":"actions","requirements":"v4","direct":true,"kind":"composite","optional":false},{"id":22072020903,"package_name":"actions/setup-python","ecosystem":"actions","requirements":"v5","direct":true,"kind":"composite","optional":false}]},{"ecosystem":"actions","filepath":".github/workflows/python_style_checks.yml","sha":null,"kind":"manifest","created_at":"2025-02-28T22:26:32.927Z","updated_at":"2025-02-28T22:26:32.927Z","repository_link":"https://github.com/google/cameratrapai/blob/main/.github/workflows/python_style_checks.yml","dependencies":[{"id":22072020920,"package_name":"actions/checkout","ecosystem":"actions","requirements":"v4","direct":true,"kind":"composite","optional":false},{"id":22072020921,"package_name":"actions/setup-python","ecosystem":"actions","requirements":"v5","direct":true,"kind":"composite","optional":false}]},{"ecosystem":"actions","filepath":".github/workflows/python_tests.yml","sha":null,"kind":"manifest","created_at":"2025-02-28T22:26:33.072Z","updated_at":"2025-02-28T22:26:33.072Z","repository_link":"https:
//github.com/google/cameratrapai/blob/main/.github/workflows/python_tests.yml","dependencies":[{"id":22072021062,"package_name":"actions/checkout","ecosystem":"actions","requirements":"v4","direct":true,"kind":"composite","optional":false},{"id":22072021063,"package_name":"actions/setup-python","ecosystem":"actions","requirements":"v5","direct":true,"kind":"composite","optional":false}]},{"ecosystem":"pypi","filepath":"pyproject.toml","sha":null,"kind":"manifest","created_at":"2025-02-28T22:26:33.609Z","updated_at":"2025-02-28T22:26:33.609Z","repository_link":"https://github.com/google/cameratrapai/blob/main/pyproject.toml","dependencies":[{"id":22072021318,"package_name":"absl-py","ecosystem":"pypi","requirements":"*","direct":true,"kind":"runtime","optional":false},{"id":22072021319,"package_name":"cloudpathlib","ecosystem":"pypi","requirements":"*","direct":true,"kind":"runtime","optional":false},{"id":22072021320,"package_name":"huggingface_hub","ecosystem":"pypi","requirements":"*","direct":true,"kind":"runtime","optional":false},{"id":22072021321,"package_name":"humanfriendly","ecosystem":"pypi","requirements":"*","direct":true,"kind":"runtime","optional":false},{"id":22072021322,"package_name":"kagglehub","ecosystem":"pypi","requirements":"*","direct":true,"kind":"runtime","optional":false},{"id":22072021323,"package_name":"matplotlib","ecosystem":"pypi","requirements":"*","direct":true,"kind":"runtime","optional":false},{"id":22072021324,"package_name":"numpy","ecosystem":"pypi","requirements":"*","direct":true,"kind":"runtime","optional":false},{"id":22072021325,"package_name":"pandas","ecosystem":"pypi","requirements":"*","direct":true,"kind":"runtime","optional":false},{"id":22072021326,"package_name":"pillow","ecosystem":"pypi","requirements":"*","direct":true,"kind":"runtime","optional":false},{"id":22072021327,"package_name":"requests","ecosystem":"pypi","requirements":"*","direct":true,"kind":"runtime","optional":false},{"id":22072021328,"package_name":"reverse_geocoder","ecosystem":"pypi","requirements":"*","direct":true,"kind":"runtime","optional":false},{"id":22072021638,"package_name":"tensorflow","ecosystem":"pypi","requirements":"\u003e= 2.12, \u003c 2.16 ; sys_platform != 'darwin' or platform_machine != 'arm64'","direct":true,"kind":"runtime","optional":false},{"id":22072021639,"package_name":"tensorflow-macos","ecosystem":"pypi","requirements":"\u003e= 2.12, \u003c 2.15 ; sys_platform == 'darwin' and platform_machine == 'arm64'","direct":true,"kind":"runtime","optional":false},{"id":22072021640,"package_name":"tensorflow-metal","ecosystem":"pypi","requirements":"sys_platform == 'darwin' and platform_machine == 'arm64'","direct":true,"kind":"runtime","optional":false},{"id":22072021641,"package_name":"tqdm","ecosystem":"pypi","requirements":"*","direct":true,"kind":"runtime","optional":false},{"id":22072021642,"package_name":"torch","ecosystem":"pypi","requirements":"\u003e= 2.0","direct":true,"kind":"runtime","optional":false},{"id":22072021643,"package_name":"yolov5","ecosystem":"pypi","requirements":"\u003e= 7.0.8, \u003c 7.0.12","direct":true,"kind":"runtime","optional":false}]}],"score":15.95127453915487,"created_at":"2026-04-06T09:02:11.486Z","updated_at":"2026-05-12T12:30:20.556Z","avatar_url":"https://github.com/google.png","language":"Python","category":"Biosphere","sub_category":"Terrestrial Wildlife","monthly_downloads":1822,"total_dependent_repos":0,"total_dependent_packages":0,"readme":"# SpeciesNet\n\nAn ensemble of AI models for classifying wildlife in 
camera trap images.\n\n## Table of Contents\n\n- [Overview](#overview)\n- [Running SpeciesNet](#running-speciesnet)\n  - [Do I have to do all this command-line stuff?](#do-i-have-to-do-all-this-command-line-stuff)\n  - [Setting up your Python environment](#setting-up-your-python-environment)\n  - [Installing the SpeciesNet Python package](#installing-the-speciesnet-python-package)\n  - [Running SpeciesNet](#running-speciesnet)\n  - [Running SpeciesNet on multiple detections per image (or on videos)](#running-speciesnet-on-multiple-detections-per-image-or-on-videos)\n  - [Using GPUs](#using-gpus)\n- [Downloading SpeciesNet model weights directly](#downloading-speciesnet-model-weights-directly)\n- [Contacting us](#contacting-us)\n- [Citing SpeciesNet](#citing-speciesnet)\n- [Supported models](#supported-models)\n- [Output format](#output-format-from-run_model)\n- [Visualizing SpeciesNet output](#visualizing-speciesnet-output)\n- [Ensemble decision-making](#ensemble-decision-making)\n- [Advanced topics](#advanced-topics)\n- [Animal picture](#animal-picture)\n\n## Overview\n\nEffective wildlife monitoring relies heavily on motion-triggered wildlife cameras, or “camera traps”, which generate vast quantities of image data. Manual processing of these images is a significant bottleneck. AI can accelerate that processing, helping conservation practitioners spend more time on conservation, and less time reviewing images.\n\nThis repository hosts code for running an ensemble of two AI models: (1) an object detector that finds objects of interest in wildlife camera images, and (2) an image classifier that classifies those objects to the species level. This ensemble is used for species recognition in the [Wildlife Insights](https://www.wildlifeinsights.org/) platform.\n\nThe object detector used in this ensemble is [MegaDetector](https://github.com/agentmorris/MegaDetector), which finds animals, humans, and vehicles in camera trap images, but does not classify animals to species level.\n\nThe species classifier ([SpeciesNet](https://www.kaggle.com/models/google/speciesnet)) was trained at Google using a large dataset of camera trap images and an [EfficientNet V2 M](https://arxiv.org/abs/2104.00298) architecture. It is designed to classify images into one of more than 2000 labels, covering diverse animal species, higher-level taxa (like \"mammalia\" or \"felidae\"), and non-animal classes (\"blank\", \"vehicle\"). SpeciesNet has been trained on a geographically diverse dataset of over 65M images, including curated images from the Wildlife Insights user community, as well as images from publicly-available repositories.\n\nThe SpeciesNet ensemble combines these two models using a set of heuristics and, optionally, geographic information to assign each image to a single category.  See the \"[ensemble decision-making](#ensemble-decision-making)\" section for more information about how the ensemble combines information for each image to make a single prediction.\n\nThe full details of the models and the ensemble process are discussed in this research paper:\n\nGadot T, Istrate Ș, Kim H, Morris D, Beery S, Birch T, Ahumada J. [To crop or not to crop: Comparing whole-image and cropped classification on a large dataset of camera trap images](https://doi.org/10.1049/cvi2.12318). IET Computer Vision. 
2024 Dec;18(8):1193-208.\n\n## Running SpeciesNet\n\n### Do I have to do all this command line stuff?\n\nNo, you don't have to run anything at the command line to use SpeciesNet: there are a number of tools that help you run SpeciesNet on your computer or on cloud-based systems.  Details are beyond the scope of this README, but cloud-based systems that support SpeciesNet include [Wildlife Insights](https://www.wildlifeinsights.org/) and [Animl](https://animl.camera/). [AddaxAI](https://addaxdatascience.com/addaxai/) is a popular graphical tool for running SpeciesNet on your computer.\n\nThis README, though, is about running SpeciesNet at the command line, so, on to instructions...\n\n### Setting up your Python environment\n\nThe instructions on this page will assume that you have a Python virtual environment set up.  If you have not installed Python, or you are not familiar with Python virtual environments, start with our [installing Python](installing-python.md) page.  If you see a prompt that looks something like the following, you're all set to proceed to the next step:\n\n![speciesnet conda prompt](https://github.com/google/cameratrapai/raw/main/images/conda-prompt-speciesnet.png)\n\n### Installing the SpeciesNet Python package\n\nYou can install the SpeciesNet Python package via:\n\n`pip install speciesnet`\n\nIf you are on a Mac, and you receive an error during this step, add the \"--use-pep517\" option, like this:\n\n`pip install speciesnet --use-pep517`\n\nTo confirm that the package has been installed, you can run:\n\n`python -m speciesnet.scripts.run_model --help`\n\nYou should see help text related to the main script you'll use to run SpeciesNet.\n\n### Running SpeciesNet\n\nThe easiest way to run SpeciesNet is via the \"run_model\" script, like this:\n\n\u003e ```python -m speciesnet.scripts.run_model --folders \"c:\\your\\image\\folder\" --predictions_json \"c:\\your\\output\\file.json\"```\n\nChange `c:\\your\\image\\folder` to the root folder where your images live, and change `c:\\your\\output\\file.json` to the location where you want to put the output file containing the SpeciesNet results.\n\nThis will automatically download and run the detector and the classifier.  This command periodically writes results to the output file, and if it doesn't finish (e.g. you have to cancel or reboot), you can just run the same command, and it will pick up where it left off.\n\nThis command produces an output file in .json format; for details about this format, and information about converting it to other formats, see the \"[output format](#output-format-from-run_model)\" section below.\n\nYou can also run the three steps (detector, classifier, ensemble) separately; see the [advanced topics documentation](advanced_topics.md) for more information.\n\n
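Because `run_model` writes results to the output file incrementally, you can check on a run (finished or still going) with a few lines of Python at any point.  Here's a minimal sketch; the path is whatever you passed to `--predictions_json`, and the \"predictions\" array is described in the \"[output format](#output-format-from-run_model)\" section below:\n\n```python\nimport json\n\n# The path you passed to --predictions_json (adjust to your own output file)\npredictions_file = r\"c:\\your\\output\\file.json\"\n\nwith open(predictions_file, \"r\", encoding=\"utf-8\") as f:\n    data = json.load(f)\n\n# One element per image; if you catch the file mid-checkpoint, just re-run this\nprint(f\"Found results for {len(data['predictions'])} images\")\n```\n\nIn the `run_model` example above, we didn't tell the ensemble what part of the world your images came from, so it may, for example, predict a kangaroo for an image from England.  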
If you want to let our ensemble filter predictions geographically, add, for example:\n\n`--country GBR`\n\nYou can use any [ISO 3166-1 alpha-3 three-letter country code](https://en.wikipedia.org/wiki/ISO_3166-1_alpha-3).\n\nIf your images are from the USA, you can also specify a state, using the two-letter state abbreviation, by adding, for example:\n\n`--admin1_region CA`\n\n### Running SpeciesNet on multiple detections per image (or on videos)\n\nThe `run_model` script described above uses [MegaDetector](https://github.com/agentmorris/MegaDetector) to find animals in each image, then runs the SpeciesNet classifier on \u003ci\u003ejust the highest-confidence detection in each image\u003c/i\u003e.  The goal of this script is to propose the single species that is most likely to be present in each image, and in most cases, processing every object detected in the image through the classifier would be slower, without changing the proposed species.\n\nThis is a problem, however, when you frequently have multi-species images, or images with both humans and domestic animals.  If this is a concern for your scenario, instead of using `run_model`, we recommend using [run_md_and_speciesnet](https://megadetector.readthedocs.io/en/latest/detection.html#run_md_and_speciesnet---CLI-interface), from the [MegaDetector Python package](https://megadetector.readthedocs.io/).  This looks like the following:\n\n```bash\npip install megadetector\npip install speciesnet\npython -m megadetector.detection.run_md_and_speciesnet\n```\n\nFor example:\n\n```bash\npython -m megadetector.detection.run_md_and_speciesnet \"c:\\your\\image\\folder\" \"c:\\your\\output\\file.json\" --country USA --state CA\n```\n\nOutput from this script will be in the [MegaDetector output format](https://lila.science/megadetector-output-format).  This format is supported by other tools for reviewing camera trap images, like [Timelapse](https://timelapse.ucalgary.ca/).\n\nThis script also supports video (`run_model` supports only still images).\n\nWe know it's a little confusing that there are two separate scripts right now; we will merge them soon.\n\n### Using GPUs\n\nIf you don't have an NVIDIA GPU, you can ignore this section.\n\nIf you have an NVIDIA GPU, SpeciesNet should use it.  If SpeciesNet is using your GPU, when you start `run_model`, you will see something like this in the output:\n\n\u003cpre\u003eLoaded SpeciesNetClassifier in 0.96 seconds on \u003cb\u003eCUDA\u003c/b\u003e.\nLoaded SpeciesNetDetector in 0.7 seconds on \u003cb\u003eCUDA\u003c/b\u003e\u003c/pre\u003e\n\n\"CUDA\" is good news: it means SpeciesNet is using your GPU.\n\nIf SpeciesNet is \u003ci\u003enot\u003c/i\u003e using your GPU, you will see something like this instead:\n\n\u003cpre\u003eLoaded SpeciesNetClassifier in 9.45 seconds on \u003cb\u003eCPU\u003c/b\u003e\nLoaded SpeciesNetDetector in 0.57 seconds on \u003cb\u003eCPU\u003c/b\u003e\u003c/pre\u003e\n\nYou can also directly check whether SpeciesNet can see your GPU by running:\n\n`python -m speciesnet.scripts.gpu_test`\n\n
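SpeciesNet runs on PyTorch under the hood, so if you prefer to check from Python, a standard PyTorch call works too.  Here's a minimal sketch (plain PyTorch, not a SpeciesNet-specific tool):\n\n```python\nimport torch\n\n# True only if PyTorch was installed with CUDA support and can see an NVIDIA GPU\nif torch.cuda.is_available():\n    print(f\"GPU visible to PyTorch: {torch.cuda.get_device_name(0)}\")\nelse:\n    print(\"No GPU visible to PyTorch; SpeciesNet will run on the CPU\")\n```\n\n99% of the time, after you install SpeciesNet on Linux, it will correctly see your GPU right away.  On Windows, you will likely need to take one more step:\n\n1. Install the GPU version of PyTorch, by activating your speciesnet Python environment (e.g. by running \"conda activate speciesnet\"), then running:\n\n   \u003e ```pip install torch torchvision --upgrade --force-reinstall --index-url https://download.pytorch.org/whl/cu118```\n   \n2. 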
If the GPU doesn't work immediately after that step, update your [GPU driver](https://www.nvidia.com/en-us/geforce/drivers/), then reboot.  Really, don't skip the reboot: most problems related to GPU access can be fixed by upgrading your driver and rebooting.\n\n## Downloading SpeciesNet model weights directly\n\nBoth scripts described above (`run_model` and `run_md_and_speciesnet`) will download model weights automatically.  If you want to use the SpeciesNet model weights outside of our script, or if you plan to be offline when you first run the script, you can download model weights directly from Kaggle.  Running our ensemble also requires [MegaDetector](https://github.com/agentmorris/MegaDetector), so in this list of links, we also include a direct link to the MegaDetector model weights.\n\n- [SpeciesNet page on Kaggle](https://www.kaggle.com/models/google/speciesnet)\n- [Direct link to version 4.0.2a weights](https://www.kaggle.com/api/v1/models/google/speciesnet/pyTorch/v4.0.2a/1/download) (the crop classifier)\n- [Direct link to version 4.0.2b weights](https://www.kaggle.com/api/v1/models/google/speciesnet/pyTorch/v4.0.2b/1/download) (the whole-image classifier)\n- [Direct link to MegaDetector weights](https://github.com/agentmorris/MegaDetector/releases/download/v5.0/md_v5a.0.0.pt)\n\n## Contacting us\n\nIf you have issues or questions, either [file an issue](https://github.com/google/cameratrapai/issues) or email us at [cameratraps@google.com](mailto:cameratraps@google.com).\n\nWe love hearing from users, so please reach out if you try SpeciesNet, whether you find it to be amazing or a total catastrophe.\n\n## Citing SpeciesNet\n\nIf you use this model, please cite:\n\n```text\n@article{gadot2024crop,\n  title={To crop or not to crop: Comparing whole-image and cropped classification on a large dataset of camera trap images},\n  author={Gadot, Tomer and Istrate, Ștefan and Kim, Hyungwon and Morris, Dan and Beery, Sara and Birch, Tanya and Ahumada, Jorge},\n  journal={IET Computer Vision},\n  year={2024},\n  publisher={Wiley Online Library}\n}\n```\n\n## Output format from run_model\n\n`run_model.py` produces output in .json format, containing an array called \"predictions\", with one element per image.  We provide a script to convert this format to the format used by [MegaDetector](https://github.com/agentmorris/MegaDetector), which can be imported into [Timelapse](https://timelapse.ucalgary.ca/); see [speciesnet_to_md.py](speciesnet/scripts/speciesnet_to_md.py).\n\nEach element always contains a field called \"filepath\"; the exact content of each element will vary depending on which components of the ensemble you ran.  If you didn't go out of your way to do something unusual, you ran the entire ensemble (i.e., both the detector and the classifier), so the \"full ensemble\" output format applies.  Output formats for other scenarios are described in the [advanced topics documentation](advanced_topics.md).\n\n
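Once you have this file, a few lines of Python are enough to summarize it.  Here's a minimal sketch that tallies the ensemble's final label for each image, using the full-ensemble fields described below (the \"prediction\" field is absent for images whose processing failed):\n\n```python\nimport json\nfrom collections import Counter\n\n# The file produced by run_model via --predictions_json\nwith open(r\"c:\\your\\output\\file.json\", \"r\", encoding=\"utf-8\") as f:\n    predictions = json.load(f)[\"predictions\"]\n\n# Count the ensemble's final prediction per image, skipping failures\ncounts = Counter(p[\"prediction\"] for p in predictions if \"prediction\" in p)\nfor label, n in counts.most_common(10):\n    print(f\"{n:6d}  {label}\")\n```\n\n### Full ensemble output format\n\nIn the full ensemble output, the \"classifications\" field contains raw classifier output, before geofencing is applied.  So even if you specify a country code, you may see taxa in the \"classifications\" field that are not found in the country you specified.  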
The \"prediction\" field is the result of integrating the classification, detection, and geofencing information; if you specify a country code, the \"prediction\" field should only contain taxa that are found in the country you specified.\n\n```text\n{\n    \"predictions\": [\n        {\n            \"filepath\": str  =\u003e Image filepath.\n            \"failures\": list[str] (optional)  =\u003e List of internal components that failed during prediction (e.g. \"CLASSIFIER\", \"DETECTOR\", \"GEOLOCATION\"). If absent, the prediction was successful.\n            \"country\": str (optional)  =\u003e 3-letter country code (ISO 3166-1 Alpha-3) for the location where the image was taken. It can be overwritten if the country from the request doesn't match the country of (latitude, longitude).\n            \"admin1_region\": str (optional)  =\u003e First-level administrative division (in ISO 3166-2 format) within the country above. If not provided in the request, it can be computed from (latitude, longitude) when those coordinates are specified. Included in the response only for some countries that are used in geofencing (e.g. \"USA\").\n            \"latitude\": float (optional)  =\u003e Latitude where the image was taken, included only if (latitude, longitude) were present in the request.\n            \"longitude\": float (optional)  =\u003e Longitude where the image was taken, included only if (latitude, longitude) were present in the request.\n            \"classifications\": {  =\u003e dict (optional)  =\u003e Top-5 classifications. Included only if \"CLASSIFIER\" if not part of the \"failures\" field.\n                \"classes\": list[str]  =\u003e List of top-5 classes predicted by the classifier, matching the decreasing order of their scores below.\n                \"scores\": list[float]  =\u003e List of scores corresponding to top-5 classes predicted by the classifier, in decreasing order.\n                \"target_classes\": list[str] (optional)  =\u003e List of target classes, only present if target classes are passed as arguments.\n                \"target_logits\": list[float] (optional)  =\u003e Raw confidence scores (logits) of the target classes, only present if target classes are passed as arguments.\n            },\n            \"detections\": [  =\u003e list (optional)  =\u003e List of detections with confidence scores \u003e 0.01, in decreasing order of their scores. Included only if \"DETECTOR\" if not part of the \"failures\" field.\n                {\n                    \"category\": str  =\u003e Detection class \"1\" (= animal), \"2\" (= human) or \"3\" (= vehicle) from MegaDetector's raw output.\n                    \"label\": str  =\u003e Detection class \"animal\", \"human\" or \"vehicle\", matching the \"category\" field above. Added for readability purposes.\n                    \"conf\": float  =\u003e Confidence score of the current detection.\n                    \"bbox\": list[float]  =\u003e Bounding box coordinates, in (xmin, ymin, width, height) format, of the current detection. Coordinates are normalized to the [0.0, 1.0] range, relative to the image dimensions.\n                },\n                ...  =\u003e A prediction can contain zero or multiple detections.\n            ],\n            \"prediction\": str (optional)  =\u003e Final prediction of the SpeciesNet ensemble. 
Included only if \"CLASSIFIER\" and \"DETECTOR\" are not part of the \"failures\" field.\n            \"prediction_score\": float (optional)  =\u003e Final prediction score of the SpeciesNet ensemble. Included only if the \"prediction\" field above is included.\n            \"prediction_source\": str (optional)  =\u003e Internal component that produced the final prediction. Used to collect information about which parts of the SpeciesNet ensemble fired. Included only if the \"prediction\" field above is included.\n            \"model_version\": str  =\u003e A string representing the version of the model that produced the current prediction.\n        },\n        ...  =\u003e A response will contain one prediction for each instance in the request.\n    ]\n}\n```\n\n## Visualizing SpeciesNet output\n\nAs per above, many users will work with SpeciesNet results in open-source tools like [Timelapse](https://timelapse.ucalgary.ca/), which support the file format used by [MegaDetector](https://github.com/agentmorris/MegaDetector) (the format is described [here](https://lila.science/megadetector-output-format)).  If you used `run_md_and_speciesnet` to run SpeciesNet, you already have output in this format.  If you used `run_model`, we provide a [speciesnet_to_md](speciesnet/scripts/speciesnet_to_md.py) script to convert to this format.  Tools like Timelapse are a good way to visualize and interact with your SpeciesNet results.\n\nIf you want to use the command line or Python code to visualize SpeciesNet results, we recommend using the visualization tools provided in the [megadetector-utils Python package](https://pypi.org/project/megadetector-utils/).  For example, if you just ran either of these commands:\n\n`python -m speciesnet.scripts.run_model --folders \"c:\\your\\image\\folder\" --predictions_json \"c:\\your\\output\\file.json\"`\n\n`python -m megadetector.detection.run_md_and_speciesnet \"c:\\your\\image\\folder\" \"c:\\your\\output\\file.json\"`\n\nYou can use the [visualize_detector_output](https://megadetector.readthedocs.io/en/latest/visualization.html#visualize_detector_output---CLI-interface) script from the megadetector-utils package, like this:\n\n```bash\npip install megadetector-utils\npython -m megadetector.visualization.visualize_detector_output \"c:\\your\\output\\file.json\" \"c:\\folder\\where\\you\\want\\visualized\\output\"\n```\n\nThat will produce a folder of images with SpeciesNet results visualized on each image.  A typical use of this script would also use the --sample argument (to render a random subset of images, if what you want is to quickly grok how SpeciesNet did on a large dataset), and often the --html_output_file argument, to wrap the results in an HTML page that makes it quick to scroll through them.  
Putting those together will give you pages like these:\n\n* [Fun preview page for Caltech Camera Traps](https://lila.science/public/speciesnet-previews/speciesnet-visualization-examples/caltech-camera-traps/)\n* [Fun preview page for Idaho Camera Traps](https://lila.science/public/speciesnet-previews/speciesnet-visualization-examples/idaho-camera-traps/)\n* [Fun preview page for Orinoquía Camera Traps](https://lila.science/public/speciesnet-previews/speciesnet-visualization-examples/orinoquia-camera-traps/)\n\nTo see all the options, run:\n\n```bash\npython -m megadetector.visualization.visualize_detector_output --help\n```\n\nAnother relevant script is [postprocess_batch_results](https://megadetector.readthedocs.io/en/latest/postprocessing.html#postprocess_batch_results---CLI-interface), which also renders sample images, but instead of just putting them in a flat folder, the purpose of this script is to allow you to quickly see samples of detections/non-detections, and to quickly see samples broken out by species.  So, for example, you can do:\n\n```bash\npython -m megadetector.postprocessing.postprocess_batch_results \"c:\\your\\output\\file.json\" \"c:\\folder\\where\\you\\want\\preview\\output\"\n```\n\n...to get pages like these:\n\n* [Fancy postprocessing page for Caltech Camera Traps](https://lila.science/public/speciesnet-previews/speciesnet-postprocessing-examples/caltech-camera-traps/)\n* [Fancy postprocessing page for Idaho Camera Traps](https://lila.science/public/speciesnet-previews/speciesnet-postprocessing-examples/idaho-camera-traps/)\n* [Fancy postprocessing page for Orinoquía Camera Traps](https://lila.science/public/speciesnet-previews/speciesnet-postprocessing-examples/orinoquia-camera-traps/)\n\nTo see all the options, run:\n\n```bash\npython -m megadetector.postprocessing.postprocess_batch_results --help\n```\n\nBoth of these modules can also be called from Python code instead of from the command line.\n\n## Ensemble decision-making\n\nAs discussed above, `run_model` uses multiple steps to predict a single category for each image, combining the strengths of the detector and the classifier.  The ensembling strategy (i.e., the strategy used to combine the information from the detector and classifier) was primarily optimized for minimizing the human effort required to review collections of images.\n\nThe guiding principles of the ensembling strategy are:\n\n- Help users quickly filter out unwanted images (e.g., blanks): identify as many blank images as possible while minimizing missed animals, which can be more costly than misclassifying a non-blank image as one of the possible animal classes.\n- Provide high-confidence predictions for frequent classes (e.g., deer).\n- Make predictions at the lowest taxonomic level possible, while balancing precision: if the ensemble is not confident enough all the way to the species level, we would rather return a prediction we are confident about at a higher taxonomic level (e.g., family, or sometimes even \"animal\"), instead of risking an incorrect prediction at the species level.\n\nHere is a breakdown of the steps:\n\n1. **Input processing:** Raw images are preprocessed and passed to both the object detector (MegaDetector) and the image classifier. The type of preprocessing will depend on the selected model. For \"always crop\" models, images are first processed by the object detector and then cropped based on the detection bounding box before being fed to the classifier. 
For \"full image\" models, images are preprocessed independently for both models.\n\n2. **Object detection:** The detector identifies potential objects (animals, humans, or vehicles) in the image, providing their bounding box coordinates and confidence scores.\n\n3. **Species classification:** The species classifier analyzes the (potentially cropped) image to identify the most likely species present. It provides a list of top-5 species classifications, each with a confidence score. The species classifier is a fully supervised model that classifies images into a fixed set of animal species, higher taxa, and non-animal labels.\n\n4. **Detection-based human/vehicle decisions:** If the detector is highly confident about the presence of a human or vehicle, that label will be returned as the final prediction regardless of what the classifier predicts. If the detection is less confident and the classifier also returns human or vehicle as a top-5 prediction, with a reasonable score, that top prediction will be returned. This step prevents high-confidence detector predictions from being overridden by lower-confidence classifier predictions.\n\n5. **Blank decisions:** If the classifier predicts \"blank\" with a high confidence score, and the detector has very low confidence about the presence of an animal (or is absent), that \"blank\" label is returned as a final prediction. Similarly, if a classification is \"blank\" with extra-high confidence (above 0.99), that label is returned as a final prediction regardless of the detector's output. This enables the model to filter out images with high confidence in being blank.\n\n6. **Geofencing:** If the most likely species is an animal and a location (country and optional admin1 region) is provided for the image, a geofencing rule is applied. If that species is explicitly disallowed for that region based on the available geofencing rules, the prediction will be rolled up (as explained below) to a higher taxa level on that allow list.\n\n7. **Label rollup:** If all of the previous steps do not yield a final prediction, a \"rollup\" is applied when there is a good classification score for an animal. \"Rollup\" is the process of propagating the classification predictions to the first matching ancestor in the taxonomy, provided there is a good score at that level. This means the model may assign classifications at the genus, family, order, class, or kingdom level, if those scores are higher than the score at the species level. This is a common strategy to handle long-tail distributions, common in wildlife datasets.\n\n8. **Detection-based animal decisions:**  If the detector has a reasonable confidence `animal` prediction, `animal` will be returned along with the detector confidence.\n\n9. **Unknown:** If no other rule applies, the `unknown` class is returned as the final prediction, to avoid making low-confidence predictions.\n\n10. **Prediction source:** At each step of the prediction workflow, a `prediction_source` is stored. 
This will be included in the final results to help diagnose which parts of the overall SpeciesNet ensemble were actually used.\n\nThe \"geofencing\" and \"label rollup\" steps are also used when running `run_md_and_speciesnet`; the other steps don't apply in this scenario, since the goal of `run_md_and_speciesnet` is to classify each detection, rather than to classify the whole image.\n\n## Advanced topics\n\nFor information about any of the following topics, see the [advanced topics documentation](advanced_topics.md):\n\n* Using `run_model` to run individual components of the ensemble\n* Alternative installation variants of the Python package\n* Alternative variants of the SpeciesNet model weights (in particular, the whole-image classifier that does not use a detection stage)\n* Alternative input formats for `run_model`\n* Development conventions/contributing code\n\n## Animal picture\n\nIt would be unfortunate if this whole README about camera trap images didn't show you a single camera trap image, so...\n\n![giant armadillo](https://github.com/google/cameratrapai/raw/main/images/sample_image_oct.jpg)\n\nImage credit University of Minnesota, from the [Orinoquía Camera Traps](https://lila.science/datasets/orinoquia-camera-traps/) dataset.\n\n","funding_links":[],"readme_doi_urls":["https://doi.org/10.1049/cvi2.12318"],"works":{},"citation_counts":{},"total_citations":0,"keywords_from_contributors":["camera-traps","conservation","megadetector","wildlife","cameratraps","ecology","pytorch-wildlife"],"project_url":"https://ost.ecosyste.ms/api/v1/projects/350827","html_url":"https://ost.ecosyste.ms/projects/350827"}