Open Sustainable Technology
A curated list of open technology projects to sustain a stable climate, energy supply, biodiversity and natural resources.
vak
A neural network framework for animal acoustic communication and bioacoustics.
https://github.com/vocalpy/vak
animal-communication animal-vocalizations bioacoustic-analysis bioacoustics birdsong python python3 pytorch spectrograms speech-processing torch torchvision vocalizations
Last synced: about 13 hours ago
Repository metadata
A neural network framework for researchers studying acoustic communication
- Host: GitHub
- URL: https://github.com/vocalpy/vak
- Owner: vocalpy
- License: bsd-3-clause
- Created: 2019-03-03T11:34:38.000Z (about 5 years ago)
- Default Branch: main
- Last Pushed: 2024-05-11T01:52:17.000Z (about 21 hours ago)
- Last Synced: 2024-05-11T02:25:22.996Z (about 21 hours ago)
- Topics: animal-communication, animal-vocalizations, bioacoustic-analysis, bioacoustics, birdsong, python, python3, pytorch, spectrograms, speech-processing, torch, torchvision, vocalizations
- Language: Python
- Homepage: https://vak.readthedocs.io
- Size: 197 MB
- Stars: 69
- Watchers: 4
- Forks: 16
- Open Issues: 119
Metadata Files:
- Readme: README.md
- Contributing: .github/CONTRIBUTING.md
- License: LICENSE
- Code of conduct: CODE_OF_CONDUCT.md
- Citation: CITATION.cff
README
## A neural network framework for researchers studying acoustic communication
[![DOI](https://zenodo.org/badge/173566541.svg)](https://zenodo.org/badge/latestdoi/173566541)
[![All Contributors](https://img.shields.io/badge/all_contributors-25-orange.svg?style=flat-square)](#contributors-)
[![PyPI version](https://badge.fury.io/py/vak.svg)](https://badge.fury.io/py/vak)
[![License](https://img.shields.io/badge/License-BSD%203--Clause-blue.svg)](https://opensource.org/licenses/BSD-3-Clause)
[![Build Status](https://github.com/vocalpy/vak/actions/workflows/ci-linux.yml/badge.svg)](https://github.com/vocalpy/vak/actions/workflows/ci-linux.yml)
[![codecov](https://codecov.io/gh/vocalpy/vak/branch/main/graph/badge.svg?token=9Y4XXB2ELA)](https://codecov.io/gh/vocalpy/vak)

🚧 vak version 1.0.0 is in development! 🚧
📣 Test out the alpha release: `pip install vak==1.0.0a3` 📣
For more info, please see this forum post.

`vak` is a Python framework for neural network models,
designed for researchers studying acoustic communication:
how and why animals communicate with sound.
Many people will be familiar with work in this area on
animal vocalizations such as birdsong, bat calls, and even human speech.
Neural network models have provided a powerful new tool for researchers in this area,
as in many other fields.

The library has two main goals:
1. Make it easier for researchers studying acoustic communication to
apply neural network algorithms to their data
2. Provide a common framework that will facilitate benchmarking neural
network algorithms on tasks related to acoustic communication

Currently, the main use is automatic *annotation* of vocalizations and other animal sounds.
By *annotation*, we mean something like the example of annotated birdsong shown below:
You give `vak` training data in the form of audio or spectrogram files with annotations,
and then `vak` helps you train neural network models
and use the trained models to predict annotations for new files.

We developed `vak` to benchmark a neural network model we call [`tweetynet`](https://github.com/yardencsGitHub/tweetynet).
Please see the eLife article here: https://elifesciences.org/articles/63853

To learn more about the goals and design of vak,
please see this talk from the SciPy 2023 conference,
and the associated Proceedings paper
[here](https://conference.scipy.org/proceedings/scipy2023/pdfs/david_nicholson.pdf).

For more background on animal acoustic communication and deep learning,
and how these intersect with related fields like
computational ethology and neuroscience,
please see the ["About"](#About) section below.

### Installation
Short version:

#### with `pip`
```console
$ pip install vak
```

#### with `conda`
```console
$ conda install vak -c pytorch -c conda-forge
$ # ^ notice additional channel!
```

Notice that for `conda` you specify two channels,
and that the `pytorch` channel should come first,
so it takes priority when installing the dependencies `pytorch` and `torchvision`.

For more details, please see:
https://vak.readthedocs.io/en/latest/get_started/installation.html

We test `vak` on Ubuntu and macOS. We have run `vak` on Windows and
know of other users successfully running `vak` on that operating system,
but installation on Windows may require some troubleshooting.
A good place to start is by searching the [issues](https://github.com/vocalpy/vak/issues).

### Usage
#### Tutorial
Currently the easiest way to work with `vak` is through the command line.
![terminal showing vak help command output](https://github.com/vocalpy/vak/blob/main/doc/images/terminalizer/vak-help.gif?raw=True)

You run it with configuration files, using one of a handful of commands.
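As a rough sketch of that workflow: the configuration file is TOML, with one section per step. The section and key names below are illustrative assumptions, not the exact schema, so consult the tutorial for the real options:

```toml
# Hypothetical vak config sketch -- section and key names are illustrative,
# not the exact schema documented by vak.
[PREP]
data_dir = "./data/bird1"      # audio files plus their annotation files
output_dir = "./prepared"
audio_format = "wav"

[TRAIN]
model = "TweetyNet"
root_results_dir = "./results"
num_epochs = 50
```

With a file like this you would run commands such as `vak prep config.toml` and then `vak train config.toml` (and later `vak predict` with a trained model).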
For more details, please see the "autoannotate" tutorial here:
https://vak.readthedocs.io/en/latest/get_started/autoannotate.html

#### How can I use my data with `vak`?
Please see the How-To Guides in the documentation here:
https://vak.readthedocs.io/en/latest/howto/index.html

### Support / Contributing
For help, please begin by checking out the Frequently Asked Questions:
https://vak.readthedocs.io/en/latest/faq.html

To ask a question about vak, discuss its development,
or share how you are using it,
please start a new "Q&A" topic on the VocalPy forum
with the vak tag: https://forum.vocalpy.org/

To report a bug, or to request a feature,
please use the issue tracker on GitHub:
https://github.com/vocalpy/vak/issues

For a guide on how you can contribute to `vak`, please see:
https://vak.readthedocs.io/en/latest/development/index.html

### Citation
If you use vak for a publication, please cite both the Proceedings paper and the software.

#### Proceedings paper (BibTeX)
```
@inproceedings{nicholson2023vak,
title={vak: a neural network framework for researchers studying animal acoustic communication},
author={Nicholson, David and Cohen, Yarden},
booktitle={Python in Science Conference},
pages={59--67},
year={2023}
}
```
#### Software
[![DOI](https://zenodo.org/badge/173566541.svg)](https://zenodo.org/badge/latestdoi/173566541)
### License
[![License](https://img.shields.io/badge/License-BSD%203--Clause-blue.svg)](https://opensource.org/licenses/BSD-3-Clause)
The license is [here](./LICENSE).

### About
Are humans unique among animals?
We speak languages, but is speech somehow like other animal behaviors, such as birdsong?
Questions like these are answered by studying how animals communicate with sound.
This research requires cutting edge computational methods and big team science across a wide range of disciplines,
including ecology, ethology, bioacoustics, psychology, neuroscience, linguistics, and genomics [^1][^2][^3].
As in many other domains, this research is being revolutionized by deep learning algorithms [^1][^2][^3].
Deep neural network models enable answering questions that were previously impossible to address,
in part because these models automate analysis of very large datasets.
Within the study of animal acoustic communication, multiple models have been proposed for similar tasks,
often implemented as research code with different libraries, such as Keras and Pytorch.
This situation has created a real need for a framework that allows researchers to easily benchmark models
and apply trained models to their own data. To address this need, we developed vak.
We originally developed vak to benchmark a neural network model, TweetyNet [^4][^5],
that automates annotation of birdsong by segmenting spectrograms.
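To make concrete what "segmenting spectrograms" means here, the sketch below turns per-timebin labels (the kind of output a frame-classification network produces) into labeled segments with onsets and offsets in seconds. This is an illustrative reimplementation, not vak's actual post-processing code; the function name and the `"unlabeled"` background label are assumptions:

```python
import numpy as np

def frames_to_segments(frame_labels, timebin_dur, background="unlabeled"):
    """Convert per-timebin labels into (onset_s, offset_s, label) segments.

    Illustrative sketch only -- not vak's actual implementation.
    """
    frame_labels = np.asarray(frame_labels)
    # indices where the label changes from one timebin to the next
    change = np.flatnonzero(frame_labels[1:] != frame_labels[:-1]) + 1
    starts = np.concatenate(([0], change))
    stops = np.concatenate((change, [len(frame_labels)]))
    segments = []
    for start, stop in zip(starts, stops):
        label = frame_labels[start]
        if label == background:
            continue  # skip gaps between vocalizations
        # convert timebin indices to seconds via the spectrogram timebin duration
        segments.append((start * timebin_dur, stop * timebin_dur, str(label)))
    return segments

# two syllables, "a" and "b", in a sequence of 2-ms timebins
labels = ["unlabeled", "a", "a", "a", "unlabeled", "b", "b", "unlabeled"]
print(frames_to_segments(labels, timebin_dur=0.002))
```

The segment triples produced this way are what an annotation file for birdsong typically contains: one row per syllable, with onset time, offset time, and a label.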
TweetyNet and vak have been used in both neuroscience [^6][^7][^8] and bioacoustics [^9].
For additional background and papers that have used `vak`,
please see: https://vak.readthedocs.io/en/latest/reference/about.html

[^1]: https://www.frontiersin.org/articles/10.3389/fnbeh.2021.811737/full
[^2]: https://peerj.com/articles/13152/
[^3]: https://www.jneurosci.org/content/42/45/8514
[^4]: https://elifesciences.org/articles/63853
[^5]: https://github.com/yardencsGitHub/tweetynet
[^6]: https://www.nature.com/articles/s41586-020-2397-3
[^7]: https://elifesciences.org/articles/67855
[^8]: https://elifesciences.org/articles/75691
[^9]: https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0278522

#### "Why this name, vak?"
It has only three letters, so it is quick to type,
and it wasn't taken on [pypi](https://pypi.org/) yet.
Also I guess it has [something to do with speech](https://en.wikipedia.org/wiki/V%C4%81c).
"vak" rhymes with "squawk" and "talk".

#### Does your library have any poems?
[Yes.](https://vak.readthedocs.io/en/latest/poems/index.html)

## Contributors ✨
Thanks goes to these wonderful people ([emoji key](https://allcontributors.org/docs/en/emoji-key)):
avanikop, Luke Poeppel, yardencsGitHub, David Nicholson, marichard123, Therese Koch, alyndanoel, adamfishbein, vivinastase, kaiyaprovost, ymk12345, neuronalX, Khoa, sthaar, yangzheng-121, lmpascual, ItamarFruchter, Hjalmar K. Turesson, nhoglen, Ja-sonYun, Jacqueline, Mark Muldoon, zhileiz1992, Maris Basha, Daniel Müller-Komorowska
This project follows the [all-contributors](https://github.com/all-contributors/all-contributors) specification. Contributions of any kind welcome!
Citation (CITATION.cff)
```yaml
# This CITATION.cff file was generated with cffinit.
# Visit https://bit.ly/cffinit to generate yours today!
cff-version: 1.2.0
title: vak
message: >-
  a neural network toolbox for animal vocalizations and bioacoustics
type: software
authors:
  - given-names: David
    family-names: Nicholson
    email: [email protected]
    affiliation: Emory University
    orcid: 'https://orcid.org/0000-0002-4261-4719'
  - given-names: Yarden
    family-names: Cohen
    orcid: 'https://orcid.org/0000-0002-8149-6954'
    affiliation: Weizmann Institute
    email: [email protected]
identifiers:
  - type: doi
    value: 10.5281/zenodo.5828090
repository-code: 'https://github.com/NickleDave/vak'
url: 'https://vak.readthedocs.io'
repository-artifact: 'https://pypi.org/project/vak/'
keywords:
  - python
  - animal vocalizations
  - neural networks
  - bioacoustics
license: BSD-3-Clause
commit: ad802dcad34b524533b765e5dfb3709b308a3152
version: 0.4.2
date-released: '2022-03-29'
```
Owner metadata
- Name: VocalPy
- Login: vocalpy
- Email:
- Kind: organization
- Description:
- Website: https://forum.vocalpy.org/
- Location:
- Twitter:
- Company:
- Icon url: https://avatars.githubusercontent.com/u/99543036?v=4
- Repositories: 8
- Last synced at: 2023-08-21T08:10:23.154Z
- Profile URL: https://github.com/vocalpy
GitHub Events
Total
- Create event: 110
- Commit comment event: 16
- Release event: 11
- Issues event: 246
- Watch event: 49
- Delete event: 92
- Issue comment event: 393
- Push event: 480
- Pull request review event: 25
- Pull request review comment event: 22
- Pull request event: 200
- Fork event: 10
Last Year
- Commit comment event: 16
- Create event: 58
- Delete event: 44
- Fork event: 5
- Issue comment event: 129
- Issues event: 97
- Pull request event: 100
- Pull request review comment event: 2
- Pull request review event: 5
- Push event: 278
- Release event: 5
- Watch event: 29
Committers metadata
Last synced: 2 days ago
Total Commits: 2,551
Total Committers: 10
Avg Commits per committer: 255.1
Development Distribution Score (DDS): 0.209
Commits in past year: 78
Committers in past year: 4
Avg Commits per committer in past year: 19.5
Development Distribution Score (DDS) in past year: 0.474
Name | Email | Commits
---|---|---
David Nicholson | n****e | 2018 |
NickleDave | n****v@g****m | 389 |
allcontributors[bot] | 4****] | 53 |
yardencsGitHub | y****c@b****u | 51 |
David Nicholson | N****e | 35 |
Ja-sonYun | k****7@g****m | 1 |
Luke Poeppel | l****l@g****m | 1 |
Ikko Ashimine | e****r@g****m | 1 |
Khoa | 5****7 | 1 |
kaiyaprovost | 1****t | 1 |
Committer domains:
- bu.edu: 1
Issue and Pull Request metadata
Last synced: 1 day ago
Total issues: 130
Total pull requests: 76
Average time to close issues: 6 months
Average time to close pull requests: 5 days
Total issue authors: 13
Total pull request authors: 7
Average comments per issue: 1.55
Average comments per pull request: 0.87
Merged pull request: 68
Bot issues: 0
Bot pull requests: 12
Past year issues: 57
Past year pull requests: 46
Past year average time to close issues: about 1 month
Past year average time to close pull requests: 4 days
Past year issue authors: 9
Past year pull request authors: 6
Past year average comments per issue: 1.6
Past year average comments per pull request: 0.72
Past year merged pull request: 41
Past year bot issues: 0
Past year bot pull requests: 9
Top Issue Authors
- NickleDave (113)
- athenasyarifa (3)
- yardencsGitHub (3)
- harshidapancholi (2)
- avanikop (1)
- cantonsir (1)
- danielmk (1)
- ItamarFruchter (1)
- kalleknast (1)
- nhoglen (1)
- vivinastase (1)
- wendtalexander (1)
- zhileiz1992 (1)
Top Pull Request Authors
- NickleDave (58)
- allcontributors[bot] (12)
- marisbasha (2)
- nosrednab (1)
- TrellixVulnTeam (1)
- Ja-sonYun (1)
- zhileiz1992 (1)
Top Issue Labels
- ENH: enhancement (40)
- BUG (19)
- DOC: documentation (16)
- CLN: clean / refactor (6)
- DEV: development (4)
- TST: testing (4)
- Models (4)
- api (1)
- dependencies (1)
- Metrics (1)
Top Pull Request Labels
- DOC: documentation (1)
Package metadata
- Total packages: 2
- Total downloads:
- pypi: 381 last-month
- Total dependent packages: 2 (may contain duplicates)
- Total dependent repositories: 1 (may contain duplicates)
- Total versions: 43
- Total maintainers: 1
pypi.org: vak
a neural network toolbox for animal vocalizations and bioacoustics
- Homepage:
- Documentation: https://vak.readthedocs.io
- Licenses: BSD License
- Latest release: 0.8.2 (published 7 months ago)
- Last Synced: 2024-05-10T09:04:43.691Z (1 day ago)
- Versions: 39
- Dependent Packages: 1
- Dependent Repositories: 1
- Downloads: 381 Last month
- Rankings:
- Dependent packages count: 3.271%
- Stargazers count: 8.458%
- Forks count: 9.146%
- Average: 11.971%
- Downloads: 16.746%
- Dependent repos count: 22.233%
- Maintainers (1)
conda-forge.org: vak
- Homepage: https://pypi.org/project/vak/
- Licenses: BSD-3-Clause
- Latest release: 0.6.0 (published almost 2 years ago)
- Last Synced: 2024-05-10T09:04:48.085Z (1 day ago)
- Versions: 4
- Dependent Packages: 1
- Dependent Repositories: 0
- Rankings:
- Dependent packages count: 28.82%
- Dependent repos count: 34.025%
- Average: 36.017%
- Stargazers count: 39.052%
- Forks count: 42.171%
Dependencies
- SoundFile >=0.10.3
- attrs >=19.3.0
- crowsetta >=5.0.1
- dask >=2.10.1
- evfuncs >=0.3.4
- joblib >=0.14.1
- matplotlib >=3.3.3
- numpy >=1.18.1
- pandas >=1.0.1
- pynndescent >=0.5.10
- pytorch-lightning >=2.0.7
- scipy >=1.4.1
- tensorboard >=2.8.0
- toml >=0.10.2
- torch >= 2.0.1
- torchvision >=0.15.2
- tqdm >=4.42.1
- umap-learn >=0.5.3
- actions/checkout v2 composite
- actions/setup-python v2 composite
- codecov/codecov-action v3 composite
- excitedleigh/setup-nox v2.1.0 composite
Score: 13.49227039011178