gbifdb
Provides a relational database interface to parquet-based serializations of GBIF's AWS snapshots of its public data.
https://github.com/ropensci/gbifdb
Category: Biosphere
Sub Category: Biodiversity Data Access and Management
Last synced: about 7 hours ago
Repository metadata
:package: A High Performance Interface to GBIF
- Host: GitHub
- URL: https://github.com/ropensci/gbifdb
- Owner: ropensci
- License: other
- Created: 2021-11-06T19:25:41.000Z (over 3 years ago)
- Default Branch: main
- Last Pushed: 2024-02-05T23:52:40.000Z (about 1 year ago)
- Last Synced: 2025-04-10T06:39:07.633Z (19 days ago)
- Language: R
- Homepage: https://docs.ropensci.org/gbifdb/
- Size: 391 KB
- Stars: 36
- Watchers: 5
- Forks: 4
- Open Issues: 0
- Releases: 0
Metadata Files:
- Readme: README.Rmd
- Contributing: .github/CONTRIBUTING.md
- License: LICENSE
README.Rmd
---
output: github_document
---

```{r, include = FALSE}
knitr::opts_chunk$set(
  collapse = TRUE,
  comment = "#>",
  warning = FALSE,
  message = FALSE,
  fig.path = "man/figures/README-",
  out.width = "100%"
)
Sys.unsetenv("AWS_ACCESS_KEY_ID")
Sys.unsetenv("AWS_SECRET_ACCESS_KEY")
Sys.setenv("GBIF_HOME" = "/home/shared-data/gbif")
```

# gbifdb

[](https://CRAN.R-project.org/package=gbifdb)
[](https://github.com/cboettig/gbifdb/actions/workflows/R-CMD-check.yaml)

The goal of `gbifdb` is to provide a relational database interface to parquet-based serializations of GBIF's AWS snapshots of its public data[^1]. Instead of requiring custom functions for filtering and selecting data from the central GBIF server (as in `rgbif`), `gbifdb` users can take advantage of the full array of `dplyr` and `tidyr` functions, which can be automatically translated to SQL by `dbplyr`. Users already familiar with SQL can construct SQL queries directly with `DBI` instead.

`gbifdb` sends these queries to [`duckdb`](https://duckdb.org), a high-performance, columnar-oriented database engine which runs entirely inside the client (unlike server-client databases such as MySQL or Postgres, no additional setup is needed beyond installing `gbifdb`). `duckdb` can execute these SQL queries directly on-disk against the Parquet data files, side-stepping limitations of available RAM or the need to import the data. Its highly optimized implementation can be faster even than in-memory operations in `dplyr`. `duckdb` supports the full set of SQL instructions, including windowed operations like `group_by` + `summarise` as well as table joins.

[^1]: All CC0 and CC-BY licensed data in GBIF that have coordinates which passed automated quality checks; see the [GBIF docs](https://github.com/gbif/occurrence/blob/master/aws-public-data.md).

`gbifdb` has two mechanisms for providing database connections: one in which the Parquet snapshot of GBIF must first be downloaded locally, and a second in which the GBIF parquet snapshot is accessed directly from an Amazon Public Data Registry S3 bucket without downloading a copy. The latter approach will be faster for one-off operations and is also suitable when using a cloud-based computing provider in the same region.

## Installation

`gbifdb` is available from CRAN; install the development version from [GitHub](https://github.com/) with:

``` r
# install.packages("devtools")
devtools::install_github("ropensci/gbifdb")
```

`gbifdb` has few dependencies: `arrow`, `duckdb`, and `DBI` are required.

## Getting Started

```{r message=FALSE}
library(gbifdb)
library(dplyr)  # optional, for dplyr-based operations
```

### Remote data access

To begin working with GBIF data directly without downloading the data first, simply establish a remote connection using `gbif_remote()`:

```{r remote1}
gbif <- gbif_remote()
```

We can now perform most `dplyr` operations:

```{r remote2}
gbif %>%
  filter(phylum == "Chordata", year > 1990) %>%
  count(class, year) %>%
  collect()
```

By default, this relies on an `arrow` connection, which currently lacks support for some of the more complex windowed operations in `dplyr`. A user can specify the option `to_duckdb = TRUE` in `gbif_remote()` (or simply pass the connection to `arrow::to_duckdb()`) to create a `duckdb` connection, which is slightly slower at this time.
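For illustration, the sketch below uses the `to_duckdb = TRUE` option described above to run a grouped filter, which `dbplyr` translates to a SQL window function; the query itself is only an example and reuses the columns (`phylum`, `class`, `year`) shown elsewhere in this README.

``` r
# Sketch only: a duckdb-backed remote connection used for a windowed
# operation (peak observation year per class), which the default arrow
# backend may not support directly.
gbif_duck <- gbif_remote(to_duckdb = TRUE)

gbif_duck %>%
  filter(phylum == "Chordata", year > 1990) %>%
  count(class, year) %>%
  group_by(class) %>%
  filter(n == max(n, na.rm = TRUE)) %>%
  collect()
```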
Keep in mind that, as with any database connection, to use non-`dplyr` functions the user will generally need to call `dplyr::collect()`, which pulls the data into working memory. Be sure to subset the data appropriately first (e.g. with `filter()`, `summarise()`, etc.), as attempting to `collect()` a large table will probably exceed available RAM and crash your R session!

When using a `gbif_remote()` connection, all I/O operations are conducted over network storage instead of your local disk, without downloading the full dataset first. Consequently, this mechanism performs best on platforms with fast network connections. These operations will be considerably slower than if you download the entire dataset first (see below), unless you are on an AWS cloud instance in the same region as the remote host. However, this approach avoids the download step altogether, which may be necessary if you do not have 100+ GB of free storage space or the time to download the whole dataset (e.g. for one-off queries).

### Local data

For extended analysis of GBIF, users may prefer to download the entire GBIF parquet data first. This requires over 100 GB of free disk space, and the first download is time-consuming. However, once downloaded, future queries will run much faster, particularly if you are network-limited. Users can download the current release of GBIF to local storage like so:

```{r download, eval=FALSE}
gbif_download()
```

By default, this will download to the directory given by `gbif_dir()`. An alternative directory can be provided by setting the environmental variable `GBIF_HOME`, or by providing the path to the directory containing the parquet files directly.

Once you have downloaded the parquet-formatted GBIF data, `gbif_local()` will establish a connection to these local parquet files:

```{r local}
gbif <- gbif_local()
gbif
```

```{r colnames}
colnames(gbif)
```

Now we can use `dplyr` to perform standard queries:

```{r growth}
growth <- gbif %>%
  filter(phylum == "Chordata", year > 1990) %>%
  count(class, year) %>%
  arrange(year)
growth
```

Recall that with database connections in `dplyr`, the data remains in the database (i.e. on disk, not in working RAM). This is fine for any further operations using `dplyr`/`tidyr` functions which can be translated into SQL. Using such functions, we can usually reduce our resulting table to something much smaller, which can then be pulled into memory in R for further analysis using `collect()`:

```{r plot}
library(ggplot2)
library(forcats)

# GBIF: the global bird information facility?
growth %>%
  collect() %>%
  mutate(class = fct_lump_n(class, 6)) %>%
  ggplot(aes(year, n, fill = class)) +
  geom_col() +
  ggtitle("GBIF observations of vertebrates by class")
```

## Visualizing all of GBIF

Database operations such as rounding provide an easy way to "rasterize" the data for spatial visualizations. Here we quickly generate a raster in which color intensity reflects the logarithmic occurrence count in each pixel:

```{r}
library(terra)
library(viridisLite)

df <- gbif |>
  mutate(latitude = round(decimallatitude, 1),
         longitude = round(decimallongitude, 1)) |>
  count(longitude, latitude) |>
  collect() |>
  mutate(n = log(n)) |>
  filter(!is.na(n))

r <- rast(df, crs = "epsg:4326")
plot(r, col = viridis(1e3), legend = FALSE, maxcell = 6e6,
     colNA = "black", axes = FALSE)
```

## Performance notes

Because `parquet` is a columnar-oriented format, performance can be improved by including a `select()` call at the end of a dplyr function chain so that only the columns you actually need are returned. This can be particularly helpful on remote connections using `gbif_remote()`.

```{r include=FALSE}
Sys.unsetenv("GBIF_HOME")
codemeta::write_codemeta()
```
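As a companion to the performance note above, here is an illustrative sketch of restricting columns before collecting; the filter values are arbitrary and the column names follow those used earlier in the README.

``` r
# Sketch only: keep just the needed columns so that only those columns
# are read from the remote parquet files before collect().
gbif <- gbif_remote()

gbif %>%
  filter(class == "Mammalia", year >= 2015) %>%
  select(year, decimallatitude, decimallongitude) %>%
  head(1000) %>%
  collect()
```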
Owner metadata
- Name: rOpenSci
- Login: ropensci
- Email: [email protected]
- Kind: organization
- Description:
- Website: https://ropensci.org/
- Location: Berkeley, CA
- Twitter: rOpenSci
- Company:
- Icon url: https://avatars.githubusercontent.com/u/1200269?v=4
- Repositories: 307
- Last synced at: 2023-03-10T20:30:59.242Z
- Profile URL: https://github.com/ropensci
GitHub Events
Total
- Watch event: 1
Last Year
- Watch event: 1
Committers metadata
Last synced: 9 days ago
Total Commits: 81
Total Committers: 3
Avg Commits per committer: 27.0
Development Distribution Score (DDS): 0.062
Commits in past year: 4
Committers in past year: 1
Avg Commits per committer in past year: 4.0
Development Distribution Score (DDS) in past year: 0.0
Name | Email | Commits
---|---|---
Carl | c****g@g****m | 76
Carl Boettiger | c****g@g****m | 4
Daniel Noesgaard | d****d@g****g | 1
Committer domains:
- gbif.org: 1
Issue and Pull Request metadata
Last synced: 1 day ago
Total issues: 4
Total pull requests: 8
Average time to close issues: 4 days
Average time to close pull requests: 4 days
Total issue authors: 3
Total pull request authors: 3
Average comments per issue: 4.0
Average comments per pull request: 0.5
Merged pull requests: 6
Bot issues: 0
Bot pull requests: 0
Past year issues: 2
Past year pull requests: 0
Past year average time to close issues: 6 days
Past year average time to close pull requests: N/A
Past year issue authors: 1
Past year pull request authors: 0
Past year average comments per issue: 6.0
Past year average comments per pull request: 0
Past year merged pull requests: 0
Past year bot issues: 0
Past year bot pull requests: 0
Top Issue Authors
- jtmiller28 (2)
- Pakillo (1)
- cboettig (1)
Top Pull Request Authors
- cboettig (6)
- dnoesgaard (1)
- nealrichardson (1)
Top Issue Labels
Top Pull Request Labels
Package metadata
- Total packages: 1
Total downloads:
- cran: 243 last-month
- Total dependent packages: 0
- Total dependent repositories: 0
- Total versions: 2
- Total maintainers: 1
cran.r-project.org: gbifdb
High Performance Interface to 'GBIF'
- Homepage: https://docs.ropensci.org/gbifdb/
- Documentation: http://cran.r-project.org/web/packages/gbifdb/gbifdb.pdf
- Licenses: Apache License (≥ 2)
- Latest release: 1.0.0 (published over 1 year ago)
- Last Synced: 2025-04-27T15:01:46.586Z (1 day ago)
- Versions: 2
- Dependent Packages: 0
- Dependent Repositories: 0
- Downloads: 243 Last month
Rankings:
- Stargazers count: 11.325%
- Forks count: 14.856%
- Dependent packages count: 29.797%
- Average: 34.141%
- Dependent repos count: 35.455%
- Downloads: 79.273%
- Maintainers (1)
Score: 10.179299452417421