Inundation Mapping
Flood inundation mapping and evaluation software configured to work with U.S. National Water Model.
https://github.com/NOAA-OWP/inundation-mapping
Category: Climate Change
Sub Category: Natural Hazard and Storm
Keywords
evaluation flood-inundation-maps gis hydrology inundation mapping national-hydrography-dataset national-water-center noaa
Keywords from Contributors
river flood floodplain hec-ras hydraulics rating-curve transforms measur archiving observation
Last synced: about 17 hours ago
Repository metadata
Flood inundation mapping and evaluation software configured to work with U.S. National Water Model.
- Host: GitHub
- URL: https://github.com/NOAA-OWP/inundation-mapping
- Owner: NOAA-OWP
- License: apache-2.0
- Created: 2020-05-14T17:21:09.000Z (almost 5 years ago)
- Default Branch: dev
- Last Pushed: 2025-04-25T22:14:15.000Z (1 day ago)
- Last Synced: 2025-04-25T23:19:50.032Z (1 day ago)
- Topics: evaluation, flood-inundation-maps, gis, hydrology, inundation, mapping, national-hydrography-dataset, national-water-center, noaa
- Language: Python
- Homepage:
- Size: 28.9 MB
- Stars: 108
- Watchers: 9
- Forks: 33
- Open Issues: 258
- Releases: 12
Metadata Files:
- Readme: README.md
- Contributing: CONTRIBUTING.md
- License: LICENSE
- Citation: CITATION.cff
- Security: docs/SECURITY.md
README.md
Inundation Mapping: Flood Inundation Mapping for U.S. National Water Model
This repository includes flood inundation mapping software configured to work with the U.S. National Water Model operated and maintained by the National Oceanic and Atmospheric Administration (NOAA) National Water Center (NWC).
This software uses the Height Above Nearest Drainage (HAND) method to generate Relative Elevation Models (REMs), Synthetic Rating Curves (SRCs), and catchment grids. This repository also includes functionality to generate flood inundation maps (FIMs) and evaluate FIM accuracy.
For more information, see the Inundation Mapping Wiki.
FIM Version 4
Note: While we use the phrase "FIM" regularly, the phrase "HAND" is also used and is generally interchangeable. Most output folders now follow the convention of "hand_4_x_x_x".
Accessing Data through ESIP S3 Bucket
The latest nationally generated HAND data and a subset of the inputs can be found in an Amazon S3 bucket hosted by Earth Science Information Partners (ESIP). These data can be accessed using the AWS CLI tools. Please contact Carson Pruitt ([email protected]) or Fernando Salas ([email protected]) if you experience issues with permissions.
AWS Region: US East (N. Virginia) us-east-1
AWS Resource Name: arn:aws:s3:::noaa-nws-owp-fim
Configuring the AWS CLI
Accessing Data using the AWS CLI
Before attempting to download, you will need ESIP AWS CLI credentials (an Access Key ID and Secret Access Key). You do not need your own AWS account. Please contact Carson Pruitt ([email protected]) or Fernando Salas ([email protected]).
Once you get AWS credentials, open your terminal window and type:
aws configure --profile esip
It will ask you for the Access Key ID, Secret Access Key, region, and default output format (just press Enter to accept the default for that entry).
With the keys in place, you can test your credentials by listing folders prior to downloading, as well as execute other S3 CLI commands:
aws s3 ls s3://noaa-nws-owp-fim --profile esip
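The `aws s3 ls` command prints common prefixes (folders) as lines of the form `PRE <name>/`. As a quick illustration of how that output can be consumed in a script, here is a small hypothetical helper (not part of this repository) that extracts the folder names:

```python
# Hypothetical helper: parse folder names from `aws s3 ls` text output.
# `aws s3 ls` prints common prefixes as lines like "   PRE hand_fim/".
def parse_s3_folders(ls_output: str) -> list[str]:
    """Return the folder (common-prefix) names from `aws s3 ls` output."""
    folders = []
    for line in ls_output.splitlines():
        parts = line.split()
        # Prefix lines have exactly two fields: "PRE" and the folder name.
        if len(parts) == 2 and parts[0] == "PRE":
            folders.append(parts[1].rstrip("/"))
    return folders

sample = """\
                           PRE hand_fim/
                           PRE test_cases/
2024-01-01 12:00:00  1024 readme.txt
"""
print(parse_s3_folders(sample))  # ['hand_fim', 'test_cases']
```

File listing lines (date, size, key) have more than two fields and are skipped.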
Examples
Note: All examples are based on Linux pathing. Also, for each sample below, remove the line breaks [backslashes ("\")] before running the command.
The available inputs, test cases, and versioned FIM outputs can be found by running:
aws s3 ls s3://noaa-nws-owp-fim/hand_fim/ --profile esip
By adjusting the pathing, you can also download entire directories such as the hand_4_5_2_11 folder. An entire output HAND set is approximately 1.7 TB.
Note: There may be newer editions than hand_4_5_11_1, and it is recommended to adjust the command above for the latest version.
Setting up your Environment
Folder Structure
You are welcome to set up your folder structure in any pattern you like. For example purposes, we will use a folder structure shown below.
Starting with a base folder, e.g. /home/projects/, add the following folders:
fim
    code
    data
        inputs
        outputs
        outputs_temp
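The layout above can also be created programmatically. This is a minimal sketch using Python's pathlib; the function name is illustrative, not part of the repository:

```python
# Sketch: create the example fim/code and fim/data/{inputs,outputs,outputs_temp}
# folder layout under a base directory. Paths follow the README's example layout.
from pathlib import Path

def make_fim_layout(base: str) -> Path:
    """Create the example FIM folder structure and return the fim root."""
    fim = Path(base) / "fim"
    (fim / "code").mkdir(parents=True, exist_ok=True)
    for sub in ("inputs", "outputs", "outputs_temp"):
        (fim / "data" / sub).mkdir(parents=True, exist_ok=True)
    return fim
```

Using `exist_ok=True` makes the helper safe to rerun on an existing layout.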
Getting FIM source code
(based on sample pathing described above)
cd /home/projects/fim/code
git clone https://github.com/NOAA-OWP/inundation-mapping.git
Git will automatically create a subfolder named inundation-mapping where the code will be. Your Docker mounts should include this inundation-mapping folder.
Dependencies
Installation
- Install Docker: Docker
- Build Docker image:
docker build -f Dockerfile.dev -t <image_name>:<tag> <path/to/repository>
- Create FIM group on host machine (Linux):
groupadd -g 1370800178 fim
- Change group ownership of the repo (Linux; needs to be redone when new files are added to the repo):
chgrp -R fim <path/to/repository>
Input Data
Input data can be found on the ESIP S3 bucket (see "Accessing Data through ESIP S3 Bucket" section above). The FIM inputs directory can be found at s3://noaa-nws-owp-fim/hand_fim/inputs. It is approximately 400 GB and needs to be in your data folder.
aws s3 sync s3://noaa-nws-owp-fim/hand_fim/inputs /home/projects/fim/data/inputs --profile esip --dryrun
Note: When you include the --dryrun argument in the command, a large list will be returned showing you exactly which files are to be downloaded and where they will be saved. We recommend including this argument the first time you run the command, then quickly aborting it (CTRL-C) so you don't get the full list; it is enough to confirm that your chosen target path on your machine is correct. When you are happy with the pathing, run the aws s3 command again and leave off the --dryrun argument.
The S3 inputs directory has all of the folders and files you need to run FIM. It includes some publicly available and some non-publicly available data.
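If you script this download, one way to keep the --dryrun safety check is to make it the default, so a preview run happens unless you explicitly opt out. The helper below is a hypothetical sketch, not part of the repository:

```python
# Hypothetical helper: assemble the `aws s3 sync` command as an argument list,
# with --dryrun enabled by default so the first run only previews the download.
def build_sync_cmd(src: str, dest: str,
                   profile: str = "esip", dryrun: bool = True) -> list[str]:
    """Build an `aws s3 sync` argv suitable for subprocess.run()."""
    cmd = ["aws", "s3", "sync", src, dest, "--profile", profile]
    if dryrun:
        cmd.append("--dryrun")
    return cmd

print(" ".join(build_sync_cmd(
    "s3://noaa-nws-owp-fim/hand_fim/inputs",
    "/home/projects/fim/data/inputs")))
```

Once the preview looks right, call it again with `dryrun=False` to perform the real sync.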
Running the Code
Configuration
There are two ways, which can be used together, to configure the system and/or data processing. Some configuration is based on input arguments when running fim_pipeline.sh, described below in the "Produce HAND Hydrofabric" section. Another configuration option is a file named params_template.env, found in the config directory. To use this latter technique, copy the params_template.env file before editing and remove the word "template" from the filename. The params_template.env file includes, among other options, a calibrated parameter set of Manning's n values. The new params.env becomes one of the arguments submitted when running fim_pipeline.sh.
Make sure to set the config folder's group to fim recursively (e.g., chgrp -R fim config).
This application has an optional tool, enabled by default, called the calibration points tool. To disable it, you can either:
- provide the -skipcal command line option to fim_pipeline.sh or fim_pre_processing.sh, or
- set src_adjust_spatial="FALSE" in the params_template.env file.
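The copy-and-edit step for params_template.env can be automated. This sketch derives a params.env with the calibration points tool disabled; the file names come from the README, but the simple line-rewrite logic is an assumption about the .env format:

```python
# Sketch: derive params.env from params_template.env, setting
# src_adjust_spatial="FALSE" to disable the calibration points tool.
# Assumes the variable appears as a plain `name=value` line in the file.
from pathlib import Path

def write_params_env(template: Path, out: Path) -> None:
    """Copy a .env template, forcing src_adjust_spatial to FALSE."""
    lines = []
    for line in template.read_text().splitlines():
        if line.startswith("src_adjust_spatial="):
            line = 'src_adjust_spatial="FALSE"'
        lines.append(line)
    out.write_text("\n".join(lines) + "\n")
```

All other settings in the template are passed through unchanged.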
Start/run the Docker Container
Since all of the dependencies are managed by a Docker container, we must issue the docker run command to start a container as the run-time environment. The container is launched from the Docker image built in Installation. The -v <input_path>:/data mount must contain a subdirectory named inputs (similar to s3://noaa-nws-owp-fim/hand_fim). If the pathing is set correctly, we do not need to adjust the params_template.env file and can use the default file paths provided.
docker run --rm -it --name <your_container_name> \
-v <path/to/repository>/:/foss_fim \
-v <desired_output_path>/:/outputs \
-v <desired_outputs_temp_path>/:/fim_temp \
-v <input_path>:/data \
<image_name>:<tag>
For example:
docker run --rm -it --name Robs_container \
-v /home/projects/fim/code/inundation-mapping/:/foss_fim \
-v /home/projects/fim/data/outputs/:/outputs \
-v /home/projects/fim/data/outputs_temp/:/fim_temp \
-v /home/projects/fim/data/:/data \
fim_4:dev_20230620
Subsetting input data
A subset of the data required to run and evaluate FIM can be obtained with the use of ESIP AWS keys. To generate these data:
1. Start a Docker container as in the previous step.
2. Run /foss_fim/data/get_sample_data.py, replacing <aws_access_key_id> and <aws_secret_access_key> with your AWS access keys. To generate data for HUC 03100204, for example:
python /foss_fim/data/get_sample_data.py -u 03100204 -i s3://noaa-nws-owp-fim/hand_fim/ -o /outputs/sample-data -r hand_fim -s3 -ak <aws_access_key_id> -sk <aws_secret_access_key>
3. Exit the Docker container by typing exit. Alternatively, you can leave this container running and run the next command in a new terminal tab or window.
4. Start a new Docker container with the /data volume mount pointing at the local output location (-o) used in get_sample_data.py (step 2). For example:
docker run --rm -it --name Robs_data_container \
-v /home/projects/fim/code/inundation-mapping/:/foss_fim \
-v /home/projects/fim/data/outputs/:/outputs \
-v /home/projects/fim/data/outputs_temp/:/fim_temp \
-v /home/projects/fim/data/outputs/sample-data:/data \
fim_4:dev_20230620
5. Now you can run the following commands with the sample data.
Produce HAND Hydrofabric
fim_pipeline.sh -u <huc8> -n <name_your_run> -o
- There are a number of options and defaulted values; for details run fim_pipeline.sh -h.
- Mandatory arguments:
  - -u can be a single HUC, a series passed in quotes space-delimited, or a line-delimited (.lst) file. To run the entire domain of available data, use one of the /data/inputs/included_huc8.lst files or a HUC list file of your choice. Depending on the performance of your server, especially the number of CPU cores, running the full domain can take multiple days.
  - -n is the name of your run (alphanumeric only). This becomes the name of the folder in your outputs folder.
  - -o is an optional parameter meaning "overwrite". Add this argument if you want to allow the command to overwrite the folder created via the -n (name) argument.
  - While not mandatory, if you override the params_template.env file, you may want to use the -c argument to point to your adjusted file.
- Outputs can be found under /outputs/<name_your_run>.
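The three accepted forms of the -u argument (single HUC, quoted space-delimited series, or .lst file) could be normalized like this. The helper below is a hypothetical illustration of the parsing logic, not code from the repository:

```python
# Hypothetical sketch: normalize the -u argument forms into a list of HUC codes.
# A path ending in .lst is read as a line-delimited HUC list; anything else is
# treated as one or more space-separated HUCs.
from pathlib import Path

def resolve_hucs(arg: str) -> list[str]:
    """Return the list of HUC codes named by a -u style argument."""
    if arg.endswith(".lst") and Path(arg).is_file():
        return [ln.strip() for ln in Path(arg).read_text().splitlines()
                if ln.strip()]
    return arg.split()
```

For example, `resolve_hucs("01010001 01010002")` yields two HUCs, while passing a path to an included_huc8.lst file yields every HUC listed in it.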
Processing of HUCs in FIM4 occurs in three sections. You can run fim_pipeline.sh, which automatically runs all three major sections, OR you can run each of the sections independently if you like. The three sections are:
- fim_pre_processing.sh: This section must be run first as it creates the basic output folder for the run. It also creates a number of key files and folders for the next two sections.
- fim_process_unit_wb.sh: This script processes exactly one HUC8 plus all of its related branches. While each invocation processes only one, you can run this script multiple times, each with a different HUC (or overwriting a HUC). When you run fim_pipeline.sh, it automatically iterates when more than one HUC number has been supplied either by command line arguments or via a HUC list. For each HUC provided, fim_pipeline.sh will run fim_process_unit_wb.sh. Using the fim_process_unit_wb.sh script directly allows for a run/rerun of a HUC, or running other HUCs at different times/days or even in different Docker containers.
- fim_post_processing.sh: This section takes all of the HUCs that have been processed, aggregates key information from each HUC directory, and looks for errors across all HUC folders. It also processes the HUC group in sub-steps such as USGS gages processing, rating curve adjustments, and more. Naturally, running or re-running this script can only be done after running fim_pre_processing.sh and at least one run of fim_process_unit_wb.sh.
Running fim_pipeline.sh is quicker than running all three steps independently, but the independent scripts let you run some sections more than once if you like.
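The control flow that fim_pipeline.sh automates can be sketched as follows. The `run` callable stands in for invoking each shell script, and the argument strings are illustrative, not the scripts' actual signatures:

```python
# Sketch of the three-stage flow fim_pipeline.sh automates: pre-process once,
# process each HUC independently, then post-process the whole group.
# `run` is a stand-in for invoking each shell script (e.g. via subprocess);
# the command strings here are illustrative only.
from typing import Callable

def run_pipeline(hucs: list[str], run_name: str,
                 run: Callable[[str], None]) -> None:
    run(f"fim_pre_processing.sh -u '{' '.join(hucs)}' -n {run_name}")
    for huc in hucs:  # one HUC8 (plus its branches) per call
        run(f"fim_process_unit_wb.sh {run_name} {huc}")
    run(f"fim_post_processing.sh -n {run_name}")
```

Because the per-HUC stage is independent, the middle loop is what you would rerun (or distribute across containers) when reprocessing individual HUCs.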
Evaluating Inundation Map Performance
After fim_pipeline.sh completes (or the equivalent combination of the three major steps described above), you can evaluate the model's skill. The evaluation benchmark datasets are available through ESIP in the test_cases directory.
To evaluate model skill, run the following:
python /foss_fim/tools/synthesize_test_cases.py \
-c DEV \
-v <fim_run_name> \
-m <path/to/output/metrics.csv> \
-jh [num of jobs (cores and/or procs) per huc] \
-jb [num of jobs (cores and/or procs) per branch]
More information can be found by running:
python /foss_fim/tools/synthesize_test_cases.py --help
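Once a metrics CSV exists, it can be summarized with standard tooling. The sketch below uses only the stdlib csv module; the column names ("huc", "CSI") are assumptions for demonstration, not necessarily the actual schema written by synthesize_test_cases.py:

```python
# Illustrative only: average an evaluation metric per HUC from CSV text.
# Column names "huc" and "CSI" are assumed for demonstration purposes.
import csv
import io

def mean_by_huc(csv_text: str, metric: str = "CSI") -> dict[str, float]:
    """Return the mean of `metric` for each HUC in the CSV."""
    values: dict[str, list[float]] = {}
    for row in csv.DictReader(io.StringIO(csv_text)):
        values.setdefault(row["huc"], []).append(float(row[metric]))
    return {huc: sum(v) / len(v) for huc, v in values.items()}

sample = "huc,CSI\n03100204,0.61\n03100204,0.55\n12090301,0.70\n"
print(mean_by_huc(sample))
```

In practice you would read the file written via the -m argument instead of an inline string.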
Managing Dependencies
Dependencies are managed via Pipenv.
When you execute docker build from the Installation section above, all of the dependencies you need are included. This includes dependencies for working in JupyterLab for testing purposes.
While rarely needed, you may want to add more dependencies. Follow these steps:
1. From inside your Docker container, run the following command from the container's root directory:
pipenv install <your package name> --dev
The --dev flag adds a development dependency; omit it to add a production dependency. This automatically updates the Pipfile in the root of your Docker container directory.
2. If the environment looks good after adding dependencies, lock it with:
pipenv lock
This updates the Pipfile.lock.
3. Copy the updated Pipfile and Pipfile.lock into the FIM source directory and include both in your git commits. The Docker image installs the environment from the lock file.
Make sure you test the new environment thoroughly, including building new Docker images, and confirm that it continues to work with the code.
If you are on a machine with a particularly slow internet connection, you may need to increase pipenv's timeout. To do this, simply add PIPENV_INSTALL_TIMEOUT=10000000 in front of any of your pipenv commands.
Citing This Work
Please cite this work in your research and projects according to the CITATION.cff file.
Known Issues & Getting Help
Please see the issue tracker on GitHub and the Inundation Mapping Wiki for known issues and getting help.
Getting Involved
NOAA's National Water Center welcomes anyone to contribute to the Inundation Mapping repository to improve flood inundation mapping capabilities. Please contact Carson Pruitt ([email protected]) or Fernando Salas ([email protected]) to get started.
Open Source Licensing Info
Credits and References
- Office of Water Prediction (OWP)
- Fernando Aristizabal, Fernando Salas, Gregory Petrochenkov, Trevor Grout, Brian Avant, Bradford Bates, Ryan Spies, Nick Chadwick, Zachary Wills, Jasmeet Judge. 2023. "Extending Height Above Nearest Drainage to Model Multiple Fluvial Sources in Flood Inundation Mapping Applications for the U.S. National Water Model." Water Resources Research.
- National Flood Interoperability Experiment (NFIE)
- Garousi‐Nejad, I., Tarboton, D. G., Aboutalebi, M., & Torres‐Rua, A. (2019). Terrain analysis enhancements to the Height Above Nearest Drainage flood inundation mapping method. Water Resources Research, 55, 7983–8009.
- Zheng, X., D.G. Tarboton, D.R. Maidment, Y.Y. Liu, and P. Passalacqua. 2018. “River Channel Geometry and Rating Curve Estimation Using Height above the Nearest Drainage.” Journal of the American Water Resources Association 54 (4): 785–806.
- Liu, Y. Y., D. R. Maidment, D. G. Tarboton, X. Zheng and S. Wang, (2018), "A CyberGIS Integration and Computation Framework for High-Resolution Continental-Scale Flood Inundation Mapping," JAWRA Journal of the American Water Resources Association, 54(4): 770-784.
- Barnes, Richard. 2016. RichDEM: Terrain Analysis Software
- TauDEM
- Federal Emergency Management Agency (FEMA) Base Level Engineering (BLE)
- Verdin, James; Verdin, Kristine; Mathis, Melissa; Magadzire, Tamuka; Kabuchanga, Eric; Woodbury, Mark; and Gadain, Hussein, 2016, A software tool for rapid flood inundation mapping: U.S. Geological Survey Open-File Report 2016–1038, 26
- United States Geological Survey (USGS) National Hydrography Dataset Plus High Resolution (NHDPlusHR)
- Esri Arc Hydro
Citation (CITATION.cff)
cff-version: 1.2.0
message: "If you use this software, please cite it as below. To ensure you have latest and correct version, please look at the changelog.md at 'https://github.com/NOAA-OWP/inundation-mapping/blob/dev/docs/CHANGELOG.md'"
authors:
  - family-names: "NOAA Office of Water Prediction"
title: "Inundation Mapping"
url: "https://github.com/NOAA-OWP/inundation-mapping"
version: 4.5.10.0
date-released: 2024
Owner metadata
- Name: NOAA-OWP
- Login: NOAA-OWP
- Email:
- Kind: organization
- Description:
- Website:
- Location:
- Twitter:
- Company:
- Icon url: https://avatars.githubusercontent.com/u/60660814?v=4
- Repositories: 28
- Last synced at: 2023-03-04T04:01:13.237Z
- Profile URL: https://github.com/NOAA-OWP
GitHub Events
Total
- Issues event: 207
- Watch event: 11
- Delete event: 59
- Issue comment event: 352
- Push event: 572
- Pull request review comment event: 21
- Pull request review event: 159
- Pull request event: 104
- Fork event: 6
- Create event: 66
Last Year
- Issues event: 207
- Watch event: 11
- Delete event: 59
- Issue comment event: 352
- Push event: 572
- Pull request review comment event: 21
- Pull request review event: 159
- Pull request event: 104
- Fork event: 6
- Create event: 66
Committers metadata
Last synced: 4 days ago
Total Commits: 506
Total Committers: 25
Avg Commits per committer: 20.24
Development Distribution Score (DDS): 0.826
Commits in past year: 87
Committers in past year: 13
Avg Commits per committer in past year: 6.692
Development Distribution Score (DDS) in past year: 0.713
Name | Email | Commits
---|---|---
Brad | b****s@n****v | 88 |
MattLuck-NOAA | m****k@n****v | 83 |
Rob Hanna - NOAA | 9****A | 72 |
fernando-aristizabal | f****l@n****v | 55 |
RyanSpies-NOAA | r****s@n****v | 36 |
brian.avant | b****t@n****v | 35 |
Carson Pruitt | 9****A | 21 |
NickChadwick-NOAA | n****k@n****v | 20 |
TrevorGrout-NOAA | 6****A | 19 |
EmilyDeardorff-NOAA | 6****f | 14 |
ZahraGhahremani-NOAA | 1****i | 13 |
Rob Gonzalez-Pita | r****a@n****v | 12 |
dependabot[bot] | 4****] | 10 |
AliForghani-NOAA | 1****A | 6 |
LauraKeys-NOAA | 7****A | 5 |
HeidiSafa-NOAA | s****h@g****m | 4 |
CalebOliven-NOAA | 9****A | 2 |
FernandoSalas-NOAA | f****s@n****v | 2 |
Gregory Petrochenkov | 3****A | 2 |
Riley McDermott | 1****A | 2 |
Amin Torabi | 1****A | 1 |
Emma Rachel Kaufman | 1****n | 1 |
EricMyskowski-NOAA | 1****A | 1 |
Greg Cocks | g****s@n****v | 1 |
JimJam | 8****A | 1 |
Committer domains:
- noaa.gov: 9
Issue and Pull Request metadata
Last synced: 2 days ago
Total issues: 1,039
Total pull requests: 551
Average time to close issues: 3 months
Average time to close pull requests: 15 days
Total issue authors: 34
Total pull request authors: 26
Average comments per issue: 1.68
Average comments per pull request: 1.49
Merged pull request: 430
Bot issues: 0
Bot pull requests: 43
Past year issues: 290
Past year pull requests: 161
Past year average time to close issues: about 1 month
Past year average time to close pull requests: 22 days
Past year issue authors: 14
Past year pull request authors: 13
Past year average comments per issue: 1.87
Past year average comments per pull request: 1.4
Past year merged pull request: 81
Past year bot issues: 0
Past year bot pull requests: 17
Top Issue Authors
- RobHanna-NOAA (263)
- CarsonPruitt-NOAA (166)
- BradfordBates-NOAA (166)
- RyanSpies-NOAA (96)
- TrevorGrout-NOAA (80)
- mluck (67)
- EmilyDeardorff (38)
- BrianAvant (30)
- fernando-aristizabal (24)
- nickchadwick-noaa (17)
- hhs732 (17)
- robgpita-noaa (12)
- AliForghani-NOAA (10)
- frsalas-noaa (8)
- HamedZamanisabzi-NOAA (6)
Top Pull Request Authors
- mluck (105)
- RobHanna-NOAA (86)
- RyanSpies-NOAA (51)
- dependabot[bot] (43)
- BradfordBates-NOAA (42)
- CarsonPruitt-NOAA (34)
- BrianAvant (32)
- nickchadwick-noaa (26)
- EmilyDeardorff (24)
- TrevorGrout-NOAA (20)
- ZahraGhahremani (19)
- robgpita-noaa (13)
- AliForghani-NOAA (12)
- hhs732 (11)
- CalebOliven-NOAA (8)
Top Issue Labels
- enhancement (327)
- bug (324)
- FIM4 (288)
- High Priority (114)
- CatFIM (95)
- testing (81)
- Med Priority (58)
- research (54)
- Rating Curves (53)
- Low Priority (51)
- Epic (35)
- FIM3 (31)
- dependencies (27)
- CI/CD (27)
- good first issue (27)
- refactoring (24)
- documentation (22)
- APG (22)
- AWS (21)
- From Vlab (19)
- wontfix (14)
- API (13)
- FIM2 (12)
- Sys Admin (12)
- DevOps (6)
- duplicate (6)
- help wanted (6)
- ras2fim (5)
- AOP (4)
- Inputs (4)
Top Pull Request Labels
- enhancement (197)
- FIM4 (189)
- bug (180)
- dependencies (75)
- CatFIM (35)
- Rating Curves (33)
- High Priority (33)
- FIM3 (25)
- testing (22)
- refactoring (21)
- Ready_to_Merge (19)
- documentation (17)
- API (7)
- AWS (6)
- Low Priority (5)
- good first issue (5)
- CI/CD (3)
- Hold (3)
- Med Priority (2)
- Inputs (2)
- python (2)
- Epic (1)
- From Vlab (1)
- duplicate (1)
- DevOps (1)
- research (1)
- Depth (1)
Dependencies
- ipython * develop
- Shapely ==1.7.0
- fiona ==1.8.17
- geopandas ==0.8.1
- grass-session ==0.5
- h5py ==3.4.0
- memory-profiler *
- natsort *
- netcdf4 ==1.5.7
- numba ==0.50.1
- pandas ==1.0.5
- pygeos ==0.7.1
- pyproj ==3.1.0
- python-dotenv *
- rasterio ==1.1.5
- rasterstats ==0.15.0
- richdem ==0.3.4
- seaborn ==0.11.0
- tables ==3.6.1
- tqdm ==4.48.0
- xarray ==0.19.0
- backcall ==0.2.0 develop
- decorator ==5.1.1 develop
- ipython ==7.31.1 develop
- jedi ==0.18.1 develop
- matplotlib-inline ==0.1.3 develop
- parso ==0.8.3 develop
- pexpect ==4.8.0 develop
- pickleshare ==0.7.5 develop
- prompt-toolkit ==3.0.28 develop
- ptyprocess ==0.7.0 develop
- pygments ==2.11.2 develop
- setuptools ==60.9.3 develop
- traitlets ==5.1.1 develop
- wcwidth ==0.2.5 develop
- affine ==2.3.0
- attrs ==21.4.0
- certifi ==2021.10.8
- cftime ==1.6.0
- click ==7.1.2
- click-plugins ==1.1.1
- cligj ==0.7.2
- cycler ==0.11.0
- fiona ==1.8.17
- fonttools ==4.30.0
- geopandas ==0.8.1
- grass-session ==0.5
- h5py ==3.4.0
- kiwisolver ==1.3.2
- llvmlite ==0.33.0
- matplotlib ==3.5.1
- memory-profiler ==0.60.0
- munch ==2.5.0
- natsort ==8.0.2
- netcdf4 ==1.5.7
- numba ==0.50.1
- numexpr ==2.8.1
- numpy ==1.22.3
- packaging ==21.3
- pandas ==1.0.5
- pillow ==9.0.1
- psutil ==5.9.0
- pygeos ==0.7.1
- pyparsing ==3.0.7
- pyproj ==3.1.0
- python-dateutil ==2.8.2
- python-dotenv ==0.19.2
- pytz ==2021.3
- rasterio ==1.1.5
- rasterstats ==0.15.0
- richdem ==0.3.4
- scipy ==1.8.0
- seaborn ==0.11.0
- setuptools ==60.9.3
- shapely ==1.7.0
- simplejson ==3.17.6
- six ==1.16.0
- snuggs ==1.4.7
- tables ==3.6.1
- tqdm ==4.48.0
- xarray ==0.19.0
- osgeo/gdal ubuntu-full-3.1.2 build
- ghcr.io/osgeo/gdal ubuntu-small-3.7.0 build
- geopandas *
- openpyxl *
- pandas *
- rasterio *
- affine ==2.4.0
- attrs ==23.1.0
- certifi ==2023.7.22
- click ==8.1.6
- click-plugins ==1.1.1
- cligj ==0.7.2
- et-xmlfile ==1.1.0
- fiona ==1.9.4.post1
- geopandas ==0.13.2
- numpy ==1.25.2
- openpyxl ==3.1.2
- packaging ==23.1
- pandas ==2.0.2
- pyparsing ==3.1.1
- pyproj ==3.6.0
- python-dateutil ==2.8.2
- pytz ==2023.3
- rasterio ==1.3.7
- setuptools ==68.0.0
- shapely ==2.0.1
- six ==1.16.0
- snuggs ==1.4.7
- tzdata ==2023.3
Score: 9.121509158269566