A curated list of open technology projects to sustain a stable climate, energy supply, biodiversity and natural resources.

BioCLIP

A foundation model for the tree of life, leveraging the unique properties of biology captured by TreeOfLife-10M, namely the abundance and variety of images of plants, animals, and fungi, together with the availability of rich structured biological knowledge.
https://github.com/imageomics/bioclip

Category: Biosphere
Sub Category: Biodiversity Data Access and Management

Keywords

clip computer-vision imageomics knowledge-guided-machine-learning taxonomy

Keywords from Contributors

transformers language-model jax bert speech pretrained-models charting benchmarking resnet tensor

Last synced: about 8 hours ago

Repository metadata

This is the repository for the BioCLIP model and the TreeOfLife-10M dataset [CVPR'24 Oral, Best Student Paper].

README.md

This is the repository for the BioCLIP model and the TreeOfLife-10M dataset. It contains the code used for training and the evaluation of BioCLIP (testing and visualizing embeddings). Additionally, we include a collection of scripts for forming, evaluating, and visualizing the data used for TreeOfLife-10M and the Rare Species benchmark we created alongside it. The BioCLIP website is hosted from the gh-pages branch of this repository.

Paper | Model | Data | Demo

BioCLIP is a CLIP model trained on a new 10M-image dataset of biological organisms with fine-grained taxonomic labels.
BioCLIP outperforms general-domain baselines on a wide range of biology-related tasks, including zero-shot and few-shot classification.

Table of Contents

  1. Model
  2. Data
  3. Paper, website, and docs
  4. Citation

Model

The BioCLIP model is a ViT-B/16 pre-trained with the CLIP objective.
Both the ViT and the (small) autoregressive text encoder are available to download on Hugging Face.

The only dependency is the open_clip package.

See the examples/ directory on the Hugging Face model repo for an example implementation.
You can also use the pybioclip package or the BioCLIP demo on Hugging Face.
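
The snippet below is a minimal zero-shot sketch, not code from this repository, assuming the model is published on the Hugging Face Hub under the ID imageomics/bioclip; example.jpg and the candidate labels are placeholders for illustration.

# Minimal zero-shot sketch (assumes the hub ID "hf-hub:imageomics/bioclip";
# "example.jpg" and the candidate labels are placeholders).
import torch
import open_clip
from PIL import Image

model, _, preprocess = open_clip.create_model_and_transforms("hf-hub:imageomics/bioclip")
tokenizer = open_clip.get_tokenizer("hf-hub:imageomics/bioclip")
model.eval()

image = preprocess(Image.open("example.jpg")).unsqueeze(0)       # 1 x 3 x 224 x 224
labels = ["Danaus plexippus", "Apis mellifera", "Quercus alba"]  # candidate taxa
text = tokenizer(labels)

with torch.no_grad():
    image_features = model.encode_image(image)
    text_features = model.encode_text(text)
    image_features /= image_features.norm(dim=-1, keepdim=True)
    text_features /= text_features.norm(dim=-1, keepdim=True)
    probs = (100.0 * image_features @ text_features.T).softmax(dim=-1)

print(dict(zip(labels, probs.squeeze(0).tolist())))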

Data

BioCLIP was trained on TreeOfLife-10M (ToL-10M).
The data is a combination of iNat21, BIOSCAN-1M, and data we collected and cleaned from the Encyclopedia of Life (EOL). It contains images for more than 450K distinct taxa, as measured by 7-rank Linnaean taxonomy (kingdom through species); this taxonomic string is associated with each image, along with its common (or vernacular) name where available.

We cannot re-release the iNat21 or the BIOSCAN-1M datasets; however, we have uploaded our cleaned EOL data to TreeOfLife-10M on Hugging Face.
After downloading iNat21 and BIOSCAN-1M, the three datasets can be combined into TreeOfLife-10M in the webdataset format for model training by following the directions in treeoflife10m.md.
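
Once the shards are built, they can be streamed with the webdataset package. The sketch below is illustrative only; the shard pattern and the jpg/txt keys are assumptions, so follow treeoflife10m.md for the actual layout.

# Illustrative sketch: stream (image, taxonomic label) pairs from webdataset shards.
# The shard pattern and the "jpg"/"txt" keys are assumptions.
import webdataset as wds

dataset = (
    wds.WebDataset("shard-{000000..000009}.tar")
    .decode("pil")              # decode images to PIL
    .to_tuple("jpg", "txt")     # (image, taxonomic label string)
)

for image, taxon in dataset:
    print(image.size, taxon)    # e.g. (500, 375) "Animalia Chordata ..."
    break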

Ten biologically relevant datasets were used for various tests of BioCLIP; they are briefly described and linked below, and a zero-shot evaluation sketch follows the list. For more information about the contents of these datasets, see Table 2 and the associated sections of our paper. Annotations used alongside the datasets for evaluation are provided in subfolders of the data/ directory named for the associated dataset.

Test Sets

  • Meta-Album: Specifically, we used the Plankton, Insects, Insects 2, PlantNet, Fungi, PlantVillage, Medicinal Leaf, and PlantDoc datasets from Set-0 through Set-2 (Set-3 had not yet been released).
  • Birds 525: We evaluated on the 2,625 test images provided with the dataset.
  • Rare Species: A new dataset we curated to test this model and to contribute to the ML for Conservation community. It consists of nearly 12K images representing 400 species labeled Near Threatened through Extinct in the Wild by the IUCN Red List. For more information, see our Rare Species dataset.
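
The sketch below illustrates one way to score zero-shot top-1 accuracy on such a test set; model and tokenizer are assumed to be loaded as in the Model section, while test_loader (batches of preprocessed image tensors and integer class indices) and class_names are placeholders, not names from this repository.

# Illustrative zero-shot top-1 accuracy sketch; `test_loader` and `class_names`
# are placeholders, not names from this repository.
import torch

@torch.no_grad()
def zero_shot_accuracy(model, tokenizer, class_names, test_loader, device="cpu"):
    text_features = model.encode_text(tokenizer(class_names).to(device))
    text_features /= text_features.norm(dim=-1, keepdim=True)

    correct = total = 0
    for images, targets in test_loader:
        image_features = model.encode_image(images.to(device))
        image_features /= image_features.norm(dim=-1, keepdim=True)
        preds = (image_features @ text_features.T).argmax(dim=-1).cpu()
        correct += (preds == targets).sum().item()
        total += targets.numel()
    return correct / total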

Paper, website, and docs

We have a preprint on arXiv and a project website.
We will also link to the CVPR 2024 version when it is publicly available.

The docs/ directory is divided into two subfolders: imageomics/ and open_clip/. The former is documentation relating to the creation of BioCLIP, TreeOfLife-10M, and the Rare Species dataset, while the latter is documentation from the open_clip package (this has not been altered).
We plan on adding more docs on how to use BioCLIP in a variety of settings.
For now, if it is unclear how to integrate BioCLIP into your project, please open an issue with your questions.

Citation

Our paper:

@inproceedings{stevens2024bioclip,
  title = {{B}io{CLIP}: A Vision Foundation Model for the Tree of Life}, 
  author = {Samuel Stevens and Jiaman Wu and Matthew J Thompson and Elizabeth G Campolongo and Chan Hee Song and David Edward Carlyn and Li Dong and Wasila M Dahdul and Charles Stewart and Tanya Berger-Wolf and Wei-Lun Chao and Yu Su},
  booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year = {2024},
  pages = {19412-19424}
}

Our code (this repository):

@software{bioclip2023code,
  author = {Samuel Stevens and Jiaman Wu and Matthew J. Thompson and Elizabeth G. Campolongo and Chan Hee Song and David Edward Carlyn},
  doi = {10.5281/zenodo.10895871},
  title = {BioCLIP},
  version = {v1.0.0},
  year = {2024}
}

Also consider citing OpenCLIP, iNat21 and BIOSCAN-1M:

@software{ilharco_gabriel_2021_5143773,
  author={Ilharco, Gabriel and Wortsman, Mitchell and Wightman, Ross and Gordon, Cade and Carlini, Nicholas and Taori, Rohan and Dave, Achal and Shankar, Vaishaal and Namkoong, Hongseok and Miller, John and Hajishirzi, Hannaneh and Farhadi, Ali and Schmidt, Ludwig},
  title={OpenCLIP},
  year={2021},
  doi={10.5281/zenodo.5143773},
}
@misc{inat2021,
  author={Van Horn, Grant and Mac Aodha, Oisin},
  title={iNat Challenge 2021 - FGVC8},
  publisher={Kaggle},
  year={2021},
  url={https://kaggle.com/competitions/inaturalist-2021}
}
@inproceedings{gharaee2023step,
  author={Gharaee, Z. and Gong, Z. and Pellegrino, N. and Zarubiieva, I. and Haurum, J. B. and Lowe, S. C. and McKeown, J. T. A. and Ho, C. Y. and McLeod, J. and Wei, Y. C. and Agda, J. and Ratnasingham, S. and Steinke, D. and Chang, A. X. and Taylor, G. W. and Fieguth, P.},
  title={A Step Towards Worldwide Biodiversity Assessment: The {BIOSCAN-1M} Insect Dataset},
  booktitle={Advances in Neural Information Processing Systems ({NeurIPS}) Datasets \& Benchmarks Track},
  year={2023},
}

Citation (CITATION.cff)

---
abstract: "Images of the natural world are an abundant source of biological
  information. There are many computational methods and tools, particularly
  computer vision, for extracting information from images. However, existing
  methods consist of bespoke models, not adaptable or extendable from their
  targeted task to new questions, contexts, and datasets. We thus develop the
  first large-scale multimodal model, BioCLIP, for general biology questions on
  images. We leverage the unique properties of biology (abundance and variety of
  images and availability of rich structured biological knowledge) as the
  application domain for computer vision."
authors:
  - family-names: Stevens
    given-names: Samuel
  - family-names: Wu
    given-names: Jiaman
  - family-names: Thompson
    given-names: "Matthew J."
  - family-names: Campolongo
    given-names: "Elizabeth G."
  - family-names: Song
    given-names: "Chan Hee"
  - family-names: Carlyn
    given-names: "David Edward"
cff-version: 1.2.0
date-released: "2024-09-19"
identifiers:
  - doi: "10.5281/zenodo.10895870"
  - description: "The GitHub release URL of tag v1.0.2."
    type: url
    value: "https://github.com/Imageomics/bioclip/releases/tag/v1.0.2"
  - description: "The GitHub URL of the commit tagged with v1.0.2."
    type: url
    value: "https://github.com/Imageomics/bioclip/tree/b750fa8758d16e78ccc3de7ab86c1523d5de6148"
keywords:
  - clip
  - biology
  - CV
  - imageomics
  - animals
  - species
  - images
  - taxonomy
  - "rare species"
  - "endangered species"
  - "evolutionary biology"
  - multimodal
  - "knowledge-guided"
license: MIT
message: "If you use this software, please cite both the article from preferred-citation and the software itself."
repository-code: "https://github.com/Imageomics/bioclip"
title: BioCLIP
version: 1.0.2
type: software
preferred-citation:
  type: conference-paper
  authors:
    - family-names: Stevens
      given-names: Samuel
    - family-names: Wu
      given-names: Jiaman
    - family-names: Thompson
      given-names: "Matthew J."
    - family-names: Campolongo
      given-names: "Elizabeth G."
    - family-names: Song
      given-names: "Chan Hee"
    - family-names: Carlyn
      given-names: "David Edward"
    - family-names: Dong
      given-names: Li
    - family-names: Dahdul
      given-names: "Wasila M"
    - family-names: Stewart
      given-names: Charles
    - family-names: "Berger-Wolf"
      given-names: Tanya
    - family-names: Chao
      given-names: "Wei-Lun"
    - family-names: Su
      given-names: Yu
  title: "BioCLIP: A Vision Foundation Model for the Tree of Life"
  year: 2024
  pages: "19412-19424"
  collection-title: "Proceedings of IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)"
references:
  - authors:
      - family-names: Ilharco
        given-names: Gabriel
      - family-names: Wortsman
        given-names: Mitchell
      - family-names: Wightman
        given-names: Ross
      - family-names: Gordon
        given-names: Cade
      - family-names: Carlini
        given-names: Nicholas
      - family-names: Taori
        given-names: Rohan
      - family-names: Dave
        given-names: Achal
      - family-names: Shankar
        given-names: Vaishaal
      - family-names: Namkoong
        given-names: Hongseok
      - family-names: Miller
        given-names: John
      - family-names: Hajishirzi
        given-names: Hannaneh
      - family-names: Farhadi
        given-names: Ali
      - family-names: Schmidt
        given-names: Ludwig
    title: OpenCLIP
    version: v0.1
    type: software
    doi: "10.5281/zenodo.5143773"
    date-released: "2021-07-28"

Committers metadata

Last synced: 7 days ago

Total Commits: 397
Total Committers: 40
Avg Commits per committer: 9.925
Development Distribution Score (DDS): 0.678

Commits in past year: 34
Committers in past year: 3
Avg Commits per committer in past year: 11.333
Development Distribution Score (DDS) in past year: 0.529

Name Email Commits
Ross Wightman r****n@g****m 128
Romain Beaumont r****1@g****m 63
Mitchell Wortsman m****n@g****m 34
Cade Gordon c****1@h****m 26
iejMac k****6@g****m 25
Gabriel Ilharco Magalhães g****o 23
Elizabeth Campolongo 3****9 18
Matthew Thompson t****9@o****u 16
Giovanni Puccetti g****2@g****m 8
lopho l****o 7
Jitsev, Jenia j****v@f****e 5
Nicholas Carlini n****s@c****m 5
Phil Wang l****s@g****m 4
Ludwig Schmidt l****2@g****m 3
Jiaman Wu 4****s 3
Mehdi Cherti m****i@g****m 3
Sayak Paul s****l@g****m 2
Lysandre l****t@r****r 2
iejmac i****c@i****l 1
Mitchell Wortsman m****w@i****l 1
Mitchell Wortsman m****w@i****l 1
YangXiuyu g****y@g****m 1
Vaishaal Shankar v****l@b****u 1
Taiqi He 8****e 1
Steve 8****e 1
Santiago Castro s****o@u****u 1
Samuel Stevens s****s@g****m 1
Richard Löwenström s****i@g****m 1
Quan Sun 3****n 1
ProGamerGov P****v 1
and 10 more...

Issue and Pull Request metadata

Last synced: 1 day ago

Total issues: 17
Total pull requests: 29
Average time to close issues: about 1 month
Average time to close pull requests: 8 days
Total issue authors: 12
Total pull request authors: 5
Average comments per issue: 3.18
Average comments per pull request: 0.38
Merged pull requests: 24
Bot issues: 0
Bot pull requests: 0

Past year issues: 12
Past year pull requests: 17
Past year average time to close issues: 14 days
Past year average time to close pull requests: 15 days
Past year issue authors: 9
Past year pull request authors: 4
Past year average comments per issue: 3.92
Past year average comments per pull request: 0.53
Past year merged pull requests: 14
Past year bot issues: 0
Past year bot pull requests: 0

More stats: https://issues.ecosyste.ms/repositories/lookup?url=https://github.com/imageomics/bioclip

Top Issue Authors

  • VGrondin (2)
  • XavierHeart (2)
  • cauchy-max (2)
  • penfever (2)
  • egrace479 (2)
  • afaulconbridge (1)
  • Lagunaxx (1)
  • boly38 (1)
  • Yuyan-C (1)
  • Forainest789 (1)
  • qiancunayllen (1)
  • ccc524 (1)

Top Pull Request Authors

  • egrace479 (13)
  • work4cs (7)
  • thompsonmj (6)
  • samuelstevens (2)
  • jackedney (1)

Top Issue Labels

  • bug (2)
  • Hugging Face (1)
  • duplicate (1)

Top Pull Request Labels

  • documentation (11)
  • bug (2)

Dependencies

pyproject.toml pypi
requirements-training.txt pypi
  • Jinja2 ==3.1.3
  • Markdown ==3.6
  • MarkupSafe ==2.1.5
  • PyYAML ==6.0.1
  • Werkzeug ==3.0.1
  • absl-py ==2.1.0
  • braceexpand ==0.1.7
  • cachetools ==5.3.3
  • certifi ==2024.2.2
  • charset-normalizer ==3.3.2
  • filelock ==3.13.1
  • fsspec ==2024.3.1
  • ftfy ==6.2.0
  • google-auth ==2.29.0
  • google-auth-oauthlib ==1.0.0
  • grpcio ==1.62.1
  • huggingface-hub ==0.21.4
  • idna ==3.6
  • mpmath ==1.3.0
  • networkx ==3.2.1
  • numpy ==1.26.4
  • nvidia-cublas-cu12 ==12.1.3.1
  • nvidia-cuda-cupti-cu12 ==12.1.105
  • nvidia-cuda-nvrtc-cu12 ==12.1.105
  • nvidia-cuda-runtime-cu12 ==12.1.105
  • nvidia-cudnn-cu12 ==8.9.2.26
  • nvidia-cufft-cu12 ==11.0.2.54
  • nvidia-curand-cu12 ==10.3.2.106
  • nvidia-cusolver-cu12 ==11.4.5.107
  • nvidia-cusparse-cu12 ==12.1.0.106
  • nvidia-nccl-cu12 ==2.19.3
  • nvidia-nvjitlink-cu12 ==12.4.99
  • nvidia-nvtx-cu12 ==12.1.105
  • oauthlib ==3.2.2
  • packaging ==24.0
  • pandas ==2.2.1
  • pillow ==10.2.0
  • protobuf ==5.26.0
  • pyasn1 ==0.5.1
  • pyasn1-modules ==0.3.0
  • python-dateutil ==2.9.0.post0
  • pytz ==2024.1
  • regex ==2023.12.25
  • requests ==2.31.0
  • requests-oauthlib ==2.0.0
  • rsa ==4.9
  • safetensors ==0.4.2
  • six ==1.16.0
  • sympy ==1.12
  • tensorboard ==2.14.0
  • tensorboard-data-server ==0.7.2
  • timm ==0.9.16
  • tokenizers ==0.15.2
  • torch ==2.2.1
  • torchvision ==0.17.1
  • tqdm ==4.66.2
  • transformers ==4.39.1
  • triton ==2.2.0
  • typing_extensions ==4.10.0
  • tzdata ==2024.1
  • urllib3 ==2.2.1
  • wcwidth ==0.2.13
  • webdataset ==0.2.86
requirements-viz.txt pypi
  • pandas ==2.1.2
  • plotly ==5.18.0
requirements.txt pypi
  • Jinja2 ==3.1.3
  • MarkupSafe ==2.1.5
  • PyYAML ==6.0.1
  • certifi ==2024.2.2
  • charset-normalizer ==3.3.2
  • cmake ==3.28.4
  • filelock ==3.13.1
  • fsspec ==2024.2.0
  • ftfy ==6.1.1
  • huggingface-hub ==0.21.3
  • idna ==3.6
  • lit ==18.1.2
  • mpmath ==1.3.0
  • networkx ==3.2.1
  • numpy ==1.26.4
  • nvidia-cublas-cu11 ==11.10.3.66
  • nvidia-cuda-cupti-cu11 ==11.7.101
  • nvidia-cuda-nvrtc-cu11 ==11.7.99
  • nvidia-cuda-runtime-cu11 ==11.7.99
  • nvidia-cudnn-cu11 ==8.5.0.96
  • nvidia-cufft-cu11 ==10.9.0.58
  • nvidia-curand-cu11 ==10.2.10.91
  • nvidia-cusolver-cu11 ==11.4.0.1
  • nvidia-cusparse-cu11 ==11.7.4.91
  • nvidia-nccl-cu11 ==2.14.3
  • nvidia-nvtx-cu11 ==11.7.91
  • packaging ==23.2
  • pandas ==2.0.2
  • pillow ==10.2.0
  • python-dateutil ==2.9.0.post0
  • pytz ==2024.1
  • regex ==2023.12.25
  • requests ==2.31.0
  • safetensors ==0.4.2
  • scipy ==1.10.1
  • six ==1.16.0
  • sympy ==1.12
  • tokenizers ==0.15.2
  • torch ==2.0.1
  • torchvision ==0.15.2
  • tqdm ==4.66.2
  • transformers ==4.38.2
  • triton ==2.0.0
  • typing_extensions ==4.10.0
  • tzdata ==2024.1
  • urllib3 ==2.2.1
  • wcwidth ==0.2.13
setup.py pypi

Score: 8.992184362173012