Marxan Cloud platform
Supports collaboration and decision-making for biodiversity conservation and socio-economic objectives for land, freshwater and ocean systems.
https://github.com/vizzuality/marxan-cloud
Category: Biosphere
Sub Category: Conservation and Restoration
Keywords from Contributors
deforestation forest-monitoring redux climate-change wri-api arcgisjs biodiversity half-earth conservation sustainability
Last synced: 29 minutes ago
Repository metadata
Monorepo for the Marxan Cloud platform
- Host: GitHub
- URL: https://github.com/vizzuality/marxan-cloud
- Owner: Vizzuality
- License: mit
- Created: 2020-11-19T13:22:41.000Z (over 5 years ago)
- Default Branch: main
- Last Pushed: 2026-04-15T06:31:42.000Z (14 days ago)
- Last Synced: 2026-04-23T02:02:22.459Z (6 days ago)
- Language: TypeScript
- Homepage: https://marxanplanning.org
- Size: 104 MB
- Stars: 15
- Watchers: 5
- Forks: 5
- Open Issues: 12
- Releases: 41
Metadata Files:
- Readme: README.md
- License: LICENSE
README.md
Marxan Cloud platform
Welcome to the Marxan Cloud platform. We aim to bring to the planet the finest
workflows for conservation planning.
Quick start
This repository is a monorepo which includes all the microservices of the Marxan
Cloud platform. Each microservice lives in a top-level folder.
Services are packaged as Docker images.
Microservices are set up to be run with or without Docker Compose for local
development - see the sections below for more details.
The recommended setup for new developers is to run all the backend services (api
and geoprocessing services, alongside their PostgreSQL and Redis databases) via
Docker Compose, and the frontend app natively.
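Under that recommended setup, a first run typically looks like the sketch below. The commands are the ones documented later in this README; `yarn install` is shorthand for "install the necessary nodejs packages" and is an assumption, not a documented target.

```shell
# Sketch of the recommended first run: backend in containers, frontend native.
make .env          # generate a development .env if none exists yet
make start-api     # backend services + databases via Docker Compose
cd app
yarn install       # frontend dependencies (assumed; see the app documentation)
yarn dev           # frontend, by default on http://localhost:3000
```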
In CI, testing, staging and production environments, microservices are
orchestrated via Kubernetes (see the relevant
documentation).
Most of the commands listed in this README and referenced elsewhere in the
repository are targeted at a GNU/Linux OS environment such as a recent Ubuntu,
Arch or Debian system, whether running natively or in a VM or under Windows
Subsystem for Linux 2 (WSL 2). They should also work identically on macOS, while
they may need some adaptation to run on Windows systems.
Platform architecture
In a nutshell, the Marxan solution is composed of the following components:
- A frontend application accessible through the browser - the app
- A public, backend API - the api
- A geoprocessing-focused service used by the api - the geoprocessing api/application
- An HTML-to-PDF/HTML-to-PNG service - the webshot service
Besides these four, there are other components that may be used in one-off
situations, like seeding source data (see /data), testing
(/e2e-product-testing) and others.
See ARCHITECTURE_infrastructure.md for
details.
Dependencies
For development environments, use a Sparkpost account separate from the one
used for staging/production. Unless the transactional email components of
the platform are being actively worked on (email verification on signup, email
confirmation for password changes, email flow for resetting forgotten passwords,
etc.), there will be no need to set up email templates within the Sparkpost
account, and only a Sparkpost API key will be needed (see documentation on
environment variables for details on this).
Running Marxan using Docker
Before attempting to use the following steps, be sure to:
- Install Docker (19.03+)
- Install Docker Compose
- Create an .env file at the root of the repository, defining all the required
  environment variables. In most cases, for variables other than secrets, the
  defaults in env.default may just work - your mileage may vary.
For development environments, a .env file can be generated with
default/generated values suitable to run a development instance, via the
following command:
make .env
This will only generate a file if no .env is present in the root of the
repository.
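The "only generate a file if no .env is present" guard can be sketched as follows. This is an illustrative sketch run in a throwaway directory, and the variable name below is a placeholder, not an actual Marxan setting.

```shell
# Minimal sketch of an "only if absent" .env generator
# (placeholder variable name; run in a throwaway directory).
cd "$(mktemp -d)"
if [ ! -f .env ]; then
  echo "EXAMPLE_GENERATED_SECRET=$(head -c 16 /dev/urandom | od -An -tx1 | tr -d ' \n')" > .env
fi
cat .env
```

Running the snippet a second time leaves the existing .env untouched, which is the behaviour described above for `make .env`.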
The PostgreSQL credentials set via environment variables are used to create a
database user when the PostgreSQL container is started for the first time.
PostgreSQL data is persisted via a Docker volume.
Running the Marxan Cloud platform
Run make start-api to start all four services needed to run Marxan, as well
as the required database services, in containers via Docker Compose.
The docker build process may take a few minutes, depending on your hardware,
software and internet connection. Once completed, the applications will start,
and you should be able to access the Marxan site on localhost, on the port
specified as APP_SERVICE_PORT.
Debugging via Node inspector
To enable the Node inspector while running the Marxan Cloud API services in
containers, use make debug-api instead. Example configuration
files for debugger setup in popular editors are provided in the
docs/developers/editors/ documentation folder.
When enabled, the Node inspector will start listening on the default port
9229/tcp both in the API and geoprocessing containers, and by default Docker
will forward this port to port 9230/tcp on the host for the API service, and
to port 9240/tcp for the geoprocessing service, where the inspector can be
reached by clients.
For security reasons (in case the host is, for example, a VM with a public IP
address and without a firewall in front, for whatever reason), the inspector
port will only be open on the loopback interface.
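As a sketch (an assumed fragment, not the project's actual compose file), that loopback-only forwarding is typically expressed in Docker Compose like this:

```yaml
# Hypothetical docker-compose fragment illustrating the mapping described
# above: inspector on 9229 inside each container, exposed only on localhost.
services:
  api:
    ports:
      - "127.0.0.1:9230:9229"   # host 9230 -> container 9229
  geoprocessing:
    ports:
      - "127.0.0.1:9240:9229"   # host 9240 -> container 9229
```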
Running Marxan natively
Make sure you have installed and configured all the
dependencies locally. PostgreSQL (with PostGIS) and Redis need
to be up and running.
Running API and Geoprocessing services
When running the API and Geoprocessing services without relying on Docker
Compose for container orchestration, be sure to review and set the correct
environment variables before executing the application.
The env.default file and the docker-compose configuration files may give
you some example values that work for docker-based executions, and that may
be useful when implementing your native execution configuration.
The included Makefile has some useful build targets (commands) specifically
targeted at native execution (prefixed with native-) that you'll find helpful.
Refer to the Makefile inline documentation for more details.
If you'd like to run the application directly using Yarn, you can find a
package.json inside the /app folder with dependencies and commands for both
applications. After installing the nodejs dependencies, this is how you can
start either application:
# Run the API
yarn start

# Run the geoprocessing service
yarn start geoprocessing
Running the Frontend application
The Frontend application can be found in /app. Be sure to populate the
app/.env file (note: this is an .env file distinct from the top-level .env
file which is used to configure backend microservices and data processing
pipelines) according to the app documentation, as well as
install the necessary nodejs packages.
To start the application, run:
yarn dev
The frontend app will then be available on http://localhost:3000 (or at the URL
shown when the app starts, if a different port has been configured).
Running the webshot service
The webshot service can be found in the /webshot folder. After installing
the necessary nodejs packages, you can start it by running:
yarn start:dev
Due to upstream packaging of the Chrome browser used by the Webshot service, it
may not be possible to run the webshot service in aarch64 environments (such
as macOS on Apple silicon).
Setting up test seed data
make native-seed-api-with-test-data
Running tests
Running the whole test suite requires running 3 commands, each focused on a
specific type of test:
To run the unit tests for both the API and the Geoprocessing app:
yarn run test
To run the E2E tests for the API:
yarn run api:test:e2e
To run the E2E tests for the Geoprocessing app:
yarn run geoprocessing:test:e2e
Note that E2E tests may trigger cross-application requests, so:
- When running E2E tests for the API, you must have the Geoprocessing
  application running in the background.
- When running E2E tests for the Geoprocessing application, you must have the
  API running in the background.
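Given that cross-dependency, a typical API E2E run looks like the transcript below (commands are from this README; backgrounding the geoprocessing service with `&` is illustrative):

```shell
# Start the geoprocessing service in the background, then run the API E2E suite.
yarn start geoprocessing &
yarn run api:test:e2e
```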
Running tests requires previously loading the test seed
data, and may modify data in the database - do not
run tests using a database whose data you don't want to lose.
Seed data
All fresh installations of Marxan (be it locally for development or in a cloud
provider for production) start off with empty databases, which need to be
populated with seed data before the Marxan platform is fully functional. The
seed data you'll want to import will depend on the goal of the installation you
are currently setting up.
Please make sure to wait for all of the backend services (api, geoprocessing and
webshot) to fully start, as database migrations will be run while the services
are started: attempting to import seed data before migrations have run fully
will result in errors.
There are three types of seed data available with the application:
- Geographic data: platform-wide spatial data for admin boundaries (GADM),
  protected areas (WDPA) and conservation features, such as the World
  Terrestrial Ecosystems database. These datasets should be available in every
  Marxan instance.
- User data: user accounts, intended only for development instances and for
  e2e/unit tests; these must not be imported in production-grade environments.
- Test data: intended only for environments where development or e2e/unit test
  execution takes place, and must not be imported in production-grade
  environments.
Please review the following sections carefully to determine which best fits your
needs for each deployment.
User data
User data is necessary for all types of Marxan installations, but different user
data import processes will best fit different use cases.
There are two ways to create user accounts:
Using the nodejs CLI
cd api
yarn run console create:user EMAIL_ADDRESS PASSWORD [-f, --firstname <first name>] [-l, --lastname <last name>] [-d, --displayname <display name>]
Using Make
# For Marxan running on Docker
make seed-api-init-data

# For Marxan running natively
make native-seed-api-init-data
The first option will allow you to create a custom user, and is targeted at
environments where user accounts are meaningful - for example, production. To
execute this on a cloud hosted version of Marxan, you should run the command
above on the VM instance/docker container running the api application.
In contrast, the second approach will batch-create several users with insecure
passwords and generic details, and it's only suited for development, testing
or otherwise ephemeral environments.
Geographic data
Importing the initial geographic data executes a long-running data ETL pipeline
that imports large amounts of data from publicly available datasets onto
Marxan's PostgreSQL server - using both api and geoprocessing databases.
First, either set up a new Marxan instance from scratch, or reset an existing
one to a clean-slate status (make clean-slate && make start-api) - this allows
importing spatial data into a clean database, avoiding any user-uploaded data
ending up in the seed data.
Once a clean Marxan instance is running, the easiest way to execute the spatial
data import process is using the following make task, which runs a dockerized
version of the tool:
make seed-geodb-data
Note this process can complete successfully and exit with code 0, but have
errors in the output logs. This is expected, and said log errors can be ignored.
The actual implementation can be found in the /data folder.
This will populate the metadata DB and will trigger the geoprocessing ETL
pipelines to seed the geoprocessing DB with the full data that is needed for
production-grade instances of Marxan.
Please note that this full DB setup will require at least 16GB of RAM and 40GB
of disk space in order to carry out some of these tasks (GADM and WDPA data
import pipelines). Also, the number of CPU cores will impact the time needed to
seed a new instance with the complete GADM and WDPA datasets, which will be 1h+
on ideal hardware.
To execute this on a cloud hosted version of Marxan, you have a couple of
options:
- Run the import process locally on locally running PostgreSQL servers, then
  export the resulting .sql dump locally and import it remotely.
- Run the import process locally, while having it connect directly to the remote
  api and geoprocessing databases, using kubectl to set up port forwarding.
  You may need to modify the container's network mode to host for this to work.
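As a rough sketch of the second option (the service names and ports below are placeholders, not the project's actual Kubernetes resource names - check your cluster):

```shell
# Hypothetical example: forward the remote api and geoprocessing PostgreSQL
# services to local ports via kubectl, so the local ETL tooling can reach them.
kubectl port-forward service/api-postgres 5432:5432 &
kubectl port-forward service/geoprocessing-postgres 5433:5432 &
```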
While geographic data is technically necessary on all Marxan environments, there
is a faster alternative to import equivalent data on development/test
environments, which is discussed in the next section.
Test data
Test data includes both user data and (a small subset of) the geographical data
described above, as well as extra data necessary to run certain types of
automated tests. This data is meant for development/testing environments only,
and should not be imported in production environments.
# For Marxan running on Docker
make seed-dbs

# For Marxan running natively
make native-seed-api-init-data
These commands will:
- Import generic user data (equivalent to
  seed-api-init-data/native-seed-api-init-data described above)
- Import a precomputed subset of the geographical data
- Create sample/test Marxan resources, like organizations, scenarios, etc.
Maintenance
Resetting data to a clean slate status (docker only)
The main Makefile provides a way to reset db instances from scratch. This can
be useful to do regularly, to avoid keeping obsolete data in the local
development instance.
make clean-slate
Update seed data (GADM, WDPA) from newer upstream releases
The main Makefile provides a set of commands to create new db dumps from
upstream data sources, upload these dumps to an Azure storage bucket, and
populate both dbs from these dumps.
Populating clean dbs this way will be much quicker than triggering the full
ETL pipelines to import geographic data.
When uploading new dumps of seed data to an Azure storage container, or when
downloading pre-prepared data seeds from it, the following environment variables
must be defined in the root .env file:
DATA_SEEDS_AZURE_STORAGE_ACCOUNT_NAME=
DATA_SEEDS_AZURE_STORAGE_CONTAINER_NAME=
This will allow running the az storage blob commands in the relevant Make
recipes with suitable authorization.
Users should have suitable access to the storage container configured.
For data uploads, they will need to be logged into an Azure account that is
allowed to write to the container:
- install the Azure CLI tool (az)
- get an Azure user set up, with suitable permissions to write to the relevant
  Azure storage account and container
- log in to this Azure account via the az CLI tool
  (https://learn.microsoft.com/en-us/cli/azure/authenticate-azure-cli)
For data downloads, the container itself needs to be created with "public blobs"
settings so that individual blobs can be fetched via non-authenticated HTTP
requests.
To dump data from a previously seeded instance (for example, one seeded via the
geoprocessing ETL pipelines, as in the Seed data section above) and upload the
processed data to an Azure bucket:
make generate-content-dumps && make upload-dump-data
Other developers can then benefit from these pre-prepared data seeds when
populating new development instances after their initial setup, by running the
following command on a clean Marxan instance (that is, from empty databases, and
after letting all the migrations run for both api and geoprocessing
services):
make restore-dumps
Running the notebooks
This step is only needed when developing Python notebooks for Marxan.
Run make notebooks to start the jupyterlab service.
Development workflow
We use a lightweight git flow workflow: develop, main, feature/bug fix
branches, and release branches (release/vX.Y.Z-etc).
Please use per component+task feature branches: <feature type>/<component>/NNNN-brief-description. For example:
feature/api/12345-helm-setup.
PRs should be rebased on develop.
Feature types:
- feature
- bugfix (regular bug fix)
- hotfix (urgent bug fixes fast-tracked to main)
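For illustration, creating a branch that follows this convention (using the example name from above, in a throwaway repository):

```shell
# Create a per-component feature branch named per the
# <feature type>/<component>/NNNN-brief-description convention.
cd "$(mktemp -d)"
git init --quiet .
git checkout --quiet -b feature/api/12345-helm-setup
git branch --show-current   # prints feature/api/12345-helm-setup
```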
DevOps
Infrastructure
Infrastructure code and documentation can be found under /infrastructure.
CI/CD
CI/CD is handled with
GitHub Actions. More details can be found
by reviewing the actual content of the .github/workflows folder but, in a nutshell,
GitHub Actions will automatically run tests on code pushed as part of a Pull Request.
For code merged to key branches (currently main and develop), once tests run
successfully, Docker images are built and pushed to a
private Azure Container Registry.
The GitHub Actions workflows currently configured require a few secrets
to be set on GitHub in order to work properly:
- AZURE_CLIENT_ID: obtain from Terraform's azure_client_id output
- AZURE_TENANT_ID: obtain from Terraform's azure_tenant_id output
- AZURE_SUBSCRIPTION_ID: obtain from Terraform's azure_subscription_id output
- REGISTRY_LOGIN_SERVER: obtain from Terraform's azurerm_container_registry_login_server output
- REGISTRY_USERNAME: obtain from Terraform's azure_client_id output
- REGISTRY_PASSWORD: obtain from Terraform's azuread_application_password output
Some of these values are obtained from Terraform output values, which will be documented
in more detail in the Infrastructure docs.
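As an assumed sketch (the real workflows live in .github/workflows), these secrets are typically consumed by the azure/login and azure/docker-login actions that appear in this repository's dependency list:

```yaml
# Hypothetical workflow steps showing where each secret would be used.
- uses: azure/login@v1
  with:
    client-id: ${{ secrets.AZURE_CLIENT_ID }}
    tenant-id: ${{ secrets.AZURE_TENANT_ID }}
    subscription-id: ${{ secrets.AZURE_SUBSCRIPTION_ID }}
- uses: azure/docker-login@v1
  with:
    login-server: ${{ secrets.REGISTRY_LOGIN_SERVER }}
    username: ${{ secrets.REGISTRY_USERNAME }}
    password: ${{ secrets.REGISTRY_PASSWORD }}
```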
Bugs
Please use the Marxan Cloud issue
tracker to report bugs.
License
(C) Copyright 2020-2023 Vizzuality.
This program is free software: you can redistribute it and/or modify it under
the terms of the MIT License as included in this repository.
This program is distributed in the hope that it will be useful, but WITHOUT ANY
WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A
PARTICULAR PURPOSE. See the MIT License for more details.
You should have received a copy of the MIT License along with this program. If
not, see https://spdx.org/licenses/MIT.html.
Owner metadata
- Name: Vizzuality
- Login: Vizzuality
- Email: hello@vizzuality.com
- Kind: organization
- Description:
- Website: http://www.vizzuality.com
- Location: Madrid, Cambridge, Barcelona, Washington DC
- Twitter:
- Company:
- Icon url: https://avatars.githubusercontent.com/u/305994?v=4
- Repositories: 424
- Last synced at: 2026-02-21T06:51:57.226Z
- Profile URL: https://github.com/Vizzuality
GitHub Events
Total
- Delete event: 10
- Pull request event: 23
- Issue comment event: 15
- Push event: 20
- Pull request review event: 1
- Create event: 16
Last Year
- Delete event: 6
- Pull request event: 15
- Issue comment event: 8
- Push event: 11
- Pull request review event: 1
- Create event: 11
Committers metadata
Last synced: 1 day ago
Total Commits: 6,625
Total Committers: 34
Avg Commits per committer: 194.853
Development Distribution Score (DDS): 0.76
Commits in past year: 27
Committers in past year: 3
Avg Commits per committer in past year: 9.0
Development Distribution Score (DDS) in past year: 0.593
| Name | Email | Commits |
|---|---|---|
| andrea rota | a****a@v****m | 1589 |
| anamontiaga | a****a@g****m | 1192 |
| Miguel Barrenechea Sánchez | m****a@v****m | 773 |
| Alicia | a****a@g****m | 416 |
| Ruben Vallejo | r****9@g****m | 377 |
| Kamil Gajowy | k****y@g****m | 315 |
| Andrés González Muñoz | a****z@v****m | 251 |
| Yulia Belyakova | y****a@g****m | 204 |
| Tiago Garcia | t****g@g****m | 178 |
| Ángel Higuera Vaquero | a****a@a****m | 176 |
| Daute Rodríguez Rodríguez | 9****e | 156 |
| Kevin | k****z@g****m | 134 |
| elpamart | e****o@v****m | 115 |
| Dyostiq | 1****q | 111 |
| Alex Larranaga | a****a@v****m | 108 |
| Aaron Perez | a****r@g****m | 86 |
| Pablo Pareja Tobes | p****a@v****m | 84 |
| dependabot[bot] | 4****] | 68 |
| Simao Rodrigues | a****o@g****m | 60 |
| Eric Coffman | e****n@t****g | 37 |
| tamaramegan | t****e@g****m | 37 |
| Henrique Pacheco | h****o@v****m | 34 |
| Pedro Pimenta | p****o@p****o | 33 |
| javiabia | w****i@g****m | 23 |
| Clément Prod'homme | c****t@c****r | 21 |
| David Inga | d****a@v****m | 20 |
| mluena | m****d@g****m | 12 |
| Maciej Sikorski | m****i@v****l | 3 |
| Kamil Gajowy | k****y@s****m | 3 |
| Maciej Sikorski | m****0@g****m | 2 |
| and 4 more... | ||
Committer domains:
- vizzuality.com: 9
- acidtango.com: 2
- scotts.com: 1
- valueadd.pl: 1
- clementprodhomme.fr: 1
- pimenta.co: 1
- tnc.org: 1
Issue and Pull Request metadata
Last synced: 8 days ago
Total issues: 0
Total pull requests: 324
Average time to close issues: N/A
Average time to close pull requests: 24 days
Total issue authors: 0
Total pull request authors: 12
Average comments per issue: 0
Average comments per pull request: 1.29
Merged pull request: 261
Bot issues: 0
Bot pull requests: 32
Past year issues: 0
Past year pull requests: 14
Past year average time to close issues: N/A
Past year average time to close pull requests: 25 days
Past year issue authors: 0
Past year pull request authors: 4
Past year average comments per issue: 0
Past year average comments per pull request: 0.86
Past year merged pull request: 5
Past year bot issues: 0
Past year bot pull requests: 4
Top Issue Authors
Top Pull Request Authors
- andresgnlez (130)
- hotzevzl (60)
- anamontiaga (37)
- dependabot[bot] (32)
- KevSanchez (26)
- yulia-bel (19)
- alexeh (8)
- agnlez (6)
- aagm (3)
- tiagojsag (1)
- SARodrigues (1)
- mbarrenechea (1)
Top Issue Labels
Top Pull Request Labels
- Frontend (148)
- dependencies (33)
- Ready to review (15)
- WIP (6)
- API (5)
- javascript (4)
- Don't merge (2)
- infrastructure (1)
- New feature (1)
- refactor (1)
Dependencies
- 258 dependencies
- actions/checkout v3 composite
- github/codeql-action/analyze v2 composite
- github/codeql-action/autobuild v2 composite
- github/codeql-action/init v2 composite
- actions/checkout v3 composite
- actions/dependency-review-action v2 composite
- actions/checkout v3 composite
- azure/login v1 composite
- fountainhead/action-wait-for-check v1.0.0 composite
- actions/checkout v3 composite
- azure/docker-login v1 composite
- azure/login v1 composite
- fountainhead/action-wait-for-check v1.0.0 composite
- actions/checkout v3 composite
- azure/docker-login v1 composite
- azure/login v1 composite
- actions/checkout v3 composite
- node 14.21.2-alpine3.17 build
- osgeo/gdal ubuntu-full-3.2.1 build
- osgeo/gdal ubuntu-full-3.2.1 build
- jupyter/datascience-notebook python-3.8.6 build
- rediscommander/redis-commander latest
- gcr.io/distroless/base latest build
- redis 6.0.9-alpine3.12 build
- node 16.14.0 build
- @mapbox/vector-tile 1.3.1 development
- @nestjs/cli ^7.5.1 development
- @nestjs/schematics ^7.1.3 development
- @nestjs/testing ^7.5.1 development
- @types/archiver 5.1.0 development
- @types/axios 0.14.0 development
- @types/bcrypt ^3.0.0 development
- @types/byline 4.2.33 development
- @types/config ^0.0.38 development
- @types/cookie-parser ^1.4.2 development
- @types/cors ^2.8.9 development
- @types/cron ^2.0.0 development
- @types/express ^4.17.8 development
- @types/faker ^5.1.5 development
- @types/fs-extra 9.0.11 development
- @types/geojson 7946.0.7 development
- @types/http-errors ^1.8.2 development
- @types/http-proxy ^1.17.5 development
- @types/jest ^26.0.15 development
- @types/lodash ^4.14.167 development
- @types/morgan ^1.9.3 development
- @types/ms 0.7.31 development
- @types/multer ^1.4.5 development
- @types/multistream 2.1.1 development
- @types/node ^14.17.27 development
- @types/passport-jwt ^3.0.3 development
- @types/passport-local ^1.0.33 development
- @types/pbf ^3.0.2 development
- @types/pg-large-object 2.0.4 development
- @types/puppeteer ^5.4.5 development
- @types/slug ^5.0.3 development
- @types/sparkpost ^2.1.5 development
- @types/supertest ^2.0.10 development
- @types/unzipper ^0.10.3 development
- @types/uuid 8.3.0 development
- @typescript-eslint/eslint-plugin ^4.28.2 development
- @typescript-eslint/parser ^4.28.2 development
- axios-mock-adapter 1.19.0 development
- bunyan ^1.8.14 development
- eslint ^7.30.0 development
- eslint-config-prettier ^8.3.0 development
- eslint-plugin-prettier ^3.4.0 development
- fs-extra 10.0.0 development
- geojson2shp 0.4.0 development
- husky ^4.3.0 development
- jest ^26.6.3 development
- lint-staged ^10.5.2 development
- nock 13.1.0 development
- nodemon ^2.0.6 development
- pbf ^3.2.1 development
- prettier ^2.1.2 development
- supertest ^6.0.0 development
- ts-jest ^26.4.3 development
- ts-loader ^8.0.8 development
- ts-node ^9.0.0 development
- tsconfig-paths ^3.9.0 development
- typed-emitter 1.3.1 development
- typescript ^4.2.2 development
- typescript-coverage-report ^0.4.0 development
- utility-types ^3.10.0 development
- wait-for-expect ^3.0.2 development
- @golevelup/nestjs-discovery 2.3.1
- @greenelab/hclust 0.0.1
- @nestjs-architects/typed-cqrs 0.1.1
- @nestjs/common 7.6.18
- @nestjs/config 1.1.5
- @nestjs/core 7.6.18
- @nestjs/cqrs 7.0.1
- @nestjs/jwt ^7.2.0
- @nestjs/passport ^7.1.5
- @nestjs/platform-express 7.6.18
- @nestjs/schedule ^2.0.1
- @nestjs/swagger ^4.8.0
- @nestjs/throttler ^3.0.0
- @nestjs/typeorm ^7.1.5
- @pgtyped/cli 0.11.0
- @pgtyped/query 0.11.0
- @turf/turf 6.5.0
- abort-controller 3.0.0
- archiver 5.3.0
- axios 0.21.2
- bcrypt ^5.0.0
- bullmq 1.46.0
- byline 5.0.0
- class-transformer ^0.3.1
- class-validator ^0.12.2
- commander 7.2.0
- config ^3.3.3
- cookie-parser ^1.4.6
- express ^4.17.3
- faker ^5.1.0
- fast-csv ^4.3.6
- fast-password-entropy ^1.1.1
- fp-ts 2.10.5
- geojson 0.5.0
- helmet ^4.3.1
- jsonapi-serializer ^3.6.7
- lodash ^4.17.21
- mapshaper 0.5.91
- morgan ^1.10.0
- ms 2.1.3
- multer ^1.4.2
- multistream 4.1.0
- nestjs-base-service 0.6.0
- nestjs-console 5.0.1
- passport ^0.6.0
- passport-jwt ^4.0.0
- passport-local ^1.0.0
- pg 8.7.1
- pg-large-object 2.0.0
- pg-query-stream 4.2.1
- pure-geojson-validation 0.3.0
- reflect-metadata ^0.1.13
- rimraf ^3.0.2
- rxjs ^6.6.3
- slug ^5.3.0
- sparkpost 2.1.4
- stronger-typed-streams 0.2.0
- swagger-ui-express ^4.1.6
- temp-dir ^2.0.0
- tiny-types 1.16.1
- typeorm 0.2.30
- unzipper ^0.10.11
- uuid 8.3.2
- 1481 dependencies
- @babel/core ^7.12.9 development
- @storybook/addon-actions ^6.2.9 development
- @storybook/addon-essentials ^6.2.9 development
- @storybook/addon-links ^6.2.9 development
- @types/d3 ^6.3.0 development
- @types/lodash ^4.14.165 development
- @types/next-auth ^3.1.25 development
- @types/node ^14.14.10 development
- @types/react ^17.0.0 development
- @types/react-map-gl ^5.2.9 development
- @typescript-eslint/eslint-plugin ^4.8.2 development
- @typescript-eslint/parser ^4.8.2 development
- autoprefixer ^10.0.2 development
- babel-loader ^8.2.2 development
- cypress ^6.0.0 development
- eslint ^7.14.0 development
- eslint-config-airbnb-typescript ^12.0.0 development
- eslint-plugin-cypress ^2.11.2 development
- eslint-plugin-import ^2.23.4 development
- eslint-plugin-jsx-a11y ^6.4.1 development
- eslint-plugin-react ^7.21.5 development
- eslint-plugin-react-hooks ^4.2.0 development
- husky ^4.3.0 development
- lint-staged ^10.5.2 development
- postcss ^8.1.10 development
- prettier ^2.2.0 development
- svg-sprite-loader ^5.0.0 development
- svgo ^1.3.2 development
- svgo-loader ^2.2.1 development
- typescript ^4.1.2 development
- @artsy/fresnel ^1.9.0
- @dnd-kit/core ^3.0.3
- @dnd-kit/modifiers ^2.0.0
- @dnd-kit/sortable ^3.0.1
- @dnd-kit/utilities ^2.0.0
- @egjs/flicking-plugins ^4.2.2
- @egjs/react-flicking ^4.2.2
- @hapi/iron ^6.0.0
- @math.gl/web-mercator ^3.3.2
- @popperjs/core ^2.6.0
- @react-aria/button ^3.3.0
- @react-aria/dialog ^3.1.2
- @react-aria/focus ^3.2.3
- @react-aria/i18n ^3.2.0
- @react-aria/interactions ^3.3.2
- @react-aria/overlays ^3.6.1
- @react-aria/searchfield ^3.1.1
- @react-aria/slider ^3.0.0
- @react-aria/utils ^3.5.0
- @react-aria/visually-hidden ^3.2.1
- @react-stately/overlays ^3.1.1
- @react-stately/searchfield ^3.1.1
- @react-stately/slider ^3.0.0
- @reduxjs/toolkit ^1.5.0
- @storybook/react ^6.2.9
- @tailwindcss/custom-forms ^0.2.1
- @tanem/react-nprogress ^4.0.8
- @tippyjs/react ^4.2.0
- @turf/area ^6.3.0
- @vizzuality/layer-manager-plugin-mapboxgl ^1.0.0-alpha.3
- @vizzuality/layer-manager-provider-carto ^1.0.0-alpha.3
- @vizzuality/layer-manager-react ^1.0.0-alpha.3
- axios ^0.21.1
- chroma-js ^2.4.2
- classnames ^2.2.6
- cookie ^0.4.1
- d3 ^6.5.0
- d3-ease ^2.0.0
- date-fns ^2.19.0
- deck.gl 7.3.6
- downshift ^6.0.15
- fast-password-entropy ^1.1.1
- final-form ^4.20.1
- framer-motion ^3.9.1
- fuse.js ^6.4.6
- jsona ^1.9.2
- jsonwebtoken ^8.5.1
- lodash ^4.17.20
- luma.gl 7.3.2
- next 10.0.3
- next-auth ^3.13.3
- next-compose-plugins ^2.2.1
- next-connect ^0.9.1
- next-optimized-images ^2.6.2
- next-plausible ^2.0.0
- nprogress ^0.2.0
- passport ^0.4.1
- passport-local ^1.0.0
- popmotion 9.3.1
- react ^17.0.1
- react-aria ^3.3.0
- react-cookie 4.1.1
- react-dom 17.0.1
- react-dropzone ^11.3.1
- react-final-form ^6.5.2
- react-intersection-observer ^8.31.1
- react-joyride ^2.3.0
- react-map-gl ^6.1.13
- react-map-gl-draw 0.21.1
- react-modal ^3.12.1
- react-paginate ^8.1.2
- react-popper ^2.2.4
- react-query ^3.6.0
- react-redux ^7.2.2
- react-resize-detector ^6.7.3
- react-table ^7.7.0
- storybook-addon-next-router ^2.0.4
- tailwindcss ^2.0.2
- use-debounce ^6.0.1
- validate.js ^0.13.1
- 2129 dependencies
- @types/cypress-cucumber-preprocessor 4.0.1 development
- cypress ^12.0 development
- cypress-cucumber-preprocessor ^4.3 development
- typescript ^4.2.4 development
- 583 dependencies
- @types/config ^0.0.38 development
- @types/cors 2.8.12 development
- @types/express 4.17.13 development
- bunyan 1.8.15 development
- nodemon 2.0.15 development
- prettier 2.5.1 development
- rimraf 3.0.2 development
- ts-node 10.5.0 development
- tsconfig-paths 3.12.0 development
- config 3.3.7
- cors 2.8.5
- dotenv 16.0.0
- express 4.17.3
- helmet 5.0.2
- puppeteer 13.3.2
- typescript 4.5.5
- Click >=6.0
- LMIPy *
- appdirs *
- cachetools *
- carto ==1.11.1
- cufflinks *
- datamodel-code-generator *
- descartes *
- earthengine-api *
- fiona *
- gcsfs *
- geocube *
- geopandas *
- google-api-python-client *
- google-cloud-storage *
- graphviz *
- kneed *
- matplotlib *
- papermill *
- pivottablejs *
- plotly *
- pycountry *
- python-dotenv >=0.5.1
- rasterio *
- rasterstats *
- rio-cogeo *
- rio-tiler-mosaic *
- rtree *
- s3fs *
- sa2schema *
- simpledbf *
- simplejson *
- statsmodels *
- termcolor *
- tinycss2 *
- tqdm *
- zarr *
- actions/checkout v3 composite
- actions/setup-node v3 composite
- actions/upload-artifact v3 composite
- actions/checkout v3 composite
- actions/checkout v3 composite
- actions/checkout v3 composite
- docker/build-push-action v4 composite
- docker/login-action v2 composite
- docker/setup-buildx-action v2 composite
- actions/checkout v3 composite
- actions/checkout v3 composite
- @swc/jest ^0.2.20 development
- @jest/create-cache-key-function 27.5.1
- @jest/types 27.5.1
- @swc/jest 0.2.20
- @types/istanbul-lib-coverage 2.0.4
- @types/istanbul-lib-report 3.0.0
- @types/istanbul-reports 3.0.1
- @types/node 17.0.21
- @types/yargs 16.0.4
- @types/yargs-parser 21.0.0
- ansi-styles 4.3.0
- chalk 4.1.2
- color-convert 2.0.1
- color-name 1.1.4
- has-flag 4.0.0
- supports-color 7.2.0
- geopandas ==0.13.2
- numpy ==1.25.1
Score: 6.822197390620491