Recent Releases of OasisLMF
OasisLMF - Release 2.4.2
OasisLMF Changelog
- #1665 - add support for geotiff in built-in lookup
- #1527 - Add summary descriptions to output files
- #1637 - Z-order Indexing
- #1610 - Feature/lecpy
- #1608 - pytools kat
- #1648 - in fm add polnumber information to level with no terms
- #1649 - Non-NumBa Error Logging
- #1650 - Fix for Platform V2 repairing run dir
- #1652 - Fix workaround for failing data null string tests
- #1653 - fix for empty geopandas df sjoin not supported in geopandas 1.x
- #1655 - Vulnerability blending feature performance tuning
OasisLMF Notes
(PR #1665)
add support for geotiff in built-in lookup - Adds support for allocating values to lat/long coordinates based on a GeoTIFF.
Supports multi-band assignment and a default value for out-of-range locations.
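As a rough sketch of the idea (not the OasisLMF implementation; the band index, default value, and file path below are illustrative), sampling a GeoTIFF at lat/long points with rasterio could look like:

import rasterio

def sample_geotiff(path, lonlat_pairs, band=1, default_value=-1):
    # Return the raster band value at each (lon, lat) pair, falling back to a
    # default for out-of-range points (rasterio typically yields the nodata
    # fill value for coordinates outside the raster extent).
    with rasterio.open(path) as src:
        values = []
        for sample in src.sample(lonlat_pairs, indexes=[band]):
            value = sample[0]
            values.append(default_value if value == src.nodata else value)
    return values

# e.g. sample_geotiff("hazard.tif", [(-0.5, 52.7), (200.0, 99.0)])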
(PR #1636)
join-summary-info - A pytool to join the summary info file data to any ORD pytools output with a SummaryID column.
(PR #1637)
z-order Indexing - Adds functionality to change how areaperil ids are assigned, so that nearby regions get similar ids.
see: https://en.wikipedia.org/wiki/Z-order_curve
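For intuition, a minimal sketch of the Z-order (Morton) encoding idea — interleaving the bits of two grid coordinates so that spatially close cells map to numerically close ids (an illustration, not the OasisLMF code):

def z_order_index(x: int, y: int, bits: int = 16) -> int:
    # Interleave the bits of x and y: x occupies even bit positions, y odd ones.
    z = 0
    for i in range(bits):
        z |= ((x >> i) & 1) << (2 * i)
        z |= ((y >> i) & 1) << (2 * i + 1)
    return z

# Neighbouring grid cells map to nearby indices:
# z_order_index(3, 5) == 39, z_order_index(3, 6) == 45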
(PR #1610)
lecpy, python replacement for ordleccalc - This PR introduces creating the ordleccalc output files with the pytools replacement, lecpy.
Solves LEC in https://github.com/OasisLMF/OasisLMF/issues/1528
(PR #1646)
katpy - Rewrite of the kat CLI from ktools for pytools.
Does not support parquet; it is only used when eltpy or pltpy are used, which currently also do not support parquet.
(PR #1648)
Fix create_financial_structure issue in fmpy when last fm level has no terms - When the last insurance level has terms on only some of the accounts and there is RI, an error occurred during the creation of the fmpy financial files (create_financial_structure).
We now correctly keep track of the PolNumber used to merge with reinsurance.
The check for whether a term is present is also fixed: we now consider a term present if its value differs from the default.
In particular, a layer participation set to 0 was previously ignored and no policy was applied. This will change insurance loss output where this term is present.
(PR #1650)
Fix for Platform V2 repairing run dir - Fix for platform issue https://github.com/OasisLMF/OasisPlatform/issues/1172 (V2 runs only)
(PR #1652)
Fix workaround for failing data null string tests - Fix for https://github.com/OasisLMF/OasisLMF/issues/1651
(PR #1653)
fix for empty geopandas df sjoin not supported in geopandas 1.x - Fixes an issue that can arise when part of the location dataframe has no lat/long and the lookup uses the built-in rtree.
In geopandas 1.0.1, an empty dataframe is not a supported input to sjoin and raises the error:
File "/home/ubuntu/venv/lib/python3.12/site-packages/shapely/strtree.py", line 271, in query
    indices = self._tree.query(geometry, predicate)
TypeError: Array should be of object dtype
(PR #1654)
Improve loading and memory usage of Aggregate Vulnerability - During the loading of the aggregate weights, all possible combinations of areaperil and vulnerability were loaded.
This change uses the items' areaperil and vulnerability values as a filter, so only the relevant weights are loaded.

OasisLMF - Release 2.3.14
OasisLMF Changelog
- #1655 - Vulnerability blending feature performance tuning
- #1652 - Fix workaround for failing data null string tests
- #1653 - fix for empty geopandas df sjoin not supported in geopandas 1.x
- #1638 - CI Fix, Skip incompatible client checks
OasisLMF Notes
(PR #1654)
Improve loading and memory usage of Aggregate Vulnerability - During the loading of the aggregate weights, all possible combinations of areaperil and vulnerability were loaded.
This change uses the items' areaperil and vulnerability values as a filter, so only the relevant weights are loaded.
(PR #1652)
Fix workaround for failing data null string tests - Fix for https://github.com/OasisLMF/OasisLMF/issues/1651
(PR #1653)
fix for empty geopandas df sjoin not supported in geopandas 1.x - Fixes an issue that can arise when part of the location dataframe has no lat/long and the lookup uses the built-in rtree.
In geopandas 1.0.1, an empty dataframe is not a supported input to sjoin and raises the error:
File "/home/ubuntu/venv/lib/python3.12/site-packages/shapely/strtree.py", line 271, in query
    indices = self._tree.query(geometry, predicate)
TypeError: Array should be of object dtype
(PR #1638)
CI Fix, Skip incompatible client checks - The issue in https://github.com/OasisLMF/OasisLMF/pull/1618 (fixed in CI) can still cause problems locally if responses>=0.25.3.
Check the installed responses version and disable these tests if it is not supported.

OasisLMF - Release 1.28.12
OasisLMF Changelog
- #1655 - Vulnerability blending feature performance tuning
- #1652 - Fix workaround for failing data null string tests
- #1653 - fix for empty geopandas df sjoin not supported in geopandas 1.x
- #1638 - CI Fix, Skip incompatible client checks
OasisLMF Notes
(PR #1654)
Improve loading and memory usage of Aggregate Vulnerability - During the loading of the aggregate weights, all possible combinations of areaperil and vulnerability were loaded.
This change uses the items' areaperil and vulnerability values as a filter, so only the relevant weights are loaded.
(PR #1652)
Fix workaround for failing data null string tests - Fix for https://github.com/OasisLMF/OasisLMF/issues/1651
(PR #1653)
fix for empty geopandas df sjoin not supported in geopandas 1.x - Fixes an issue that can arise when part of the location dataframe has no lat/long and the lookup uses the built-in rtree.
In geopandas 1.0.1, an empty dataframe is not a supported input to sjoin and raises the error:
File "/home/ubuntu/venv/lib/python3.12/site-packages/shapely/strtree.py", line 271, in query
    indices = self._tree.query(geometry, predicate)
TypeError: Array should be of object dtype
(PR #1638)
CI Fix, Skip incompatible client checks - The issue in https://github.com/OasisLMF/OasisLMF/pull/1618 (fixed in CI) can still cause problems locally if responses>=0.25.3.
Check the installed responses version and disable these tests if it is not supported.

OasisLMF - Release 2.4.1
OasisLMF Changelog
- #1629 - Add computation schema to CI and update documentation
- #1604 - exposure run crashes if CondPeril is missing
- #1634 - Update release section in readme
- #1603 - Feature/aalpy
- #1573 - Update or Remove preparation/oed.py
- #1638 - CI Fix, Skip incompatible client checks
- #1528 - Output Calc Python Rewrite
- #1528 - Output Calc Python Rewrite
- #1628 - Update pages workflow
- #1630 - fix issue with loc id
- #1631 - specify parquet folder to improve footprint read time
OasisLMF Notes
(PR #1632)
Added computation schema build to CI -
- Dropped py3.8 from unit testing (Python 3.8 support has ended)
- Added computation schema build to CI
(PR #1633)
fix missing cond column when no term present - Fix for https://github.com/OasisLMF/OasisLMF/issues/1604
(PR #1603)
aalpy, python replacement for aalcalc and aalmeanonlycalc - This PR introduces creating AAL and ALCT files with the python replacement of the original aalcalc and aalmeanonlycalc found in ktools.
Solves AAL in https://github.com/OasisLMF/OasisLMF/issues/1528
(PR #1635)
Refactor preparation/oed.py - The file preparation/oed.py contains a lot of unused code. This PR removes any unneeded code from the file and moves the constants to utils.
(PR #1638)
CI Fix, Skip incompatible client checks - The issue in https://github.com/OasisLMF/OasisLMF/pull/1618 (fixed in CI) can still cause problems locally if responses>=0.25.3.
Check the installed responses version and disable these tests if it is not supported.
(PR #1579)
Eltpy, python replacement for eltcalc - This PR introduces the possibility of creating some summary files (SELT, MELT, QELT) with the python replacement of the original eltcalc and summarycalctocsv found in ktools.
(PR #1590)
Pltpy, python replacement for pltcalc - This PR introduces the possibility of creating some summary files (SPLT, MPLT, QPLT) with the python replacement of the original pltcalc found in ktools.
Solves PLT in https://github.com/OasisLMF/OasisLMF/issues/1528
(PR #1628)
Update pages workflow - CI fix
(PR #1630)
Fix for pre analysis hook loc_id - Adapts the pre-analysis hook to the new exposure preparation function: use prepare_oed_exposure instead of prepare_location_df.
prepare_oed_exposure no longer creates the loc_id column; this was creating an issue because the hook was no longer regenerating the column.
(PR #1631)
Improve parquet footprint performance - Specify the event folder when reading the footprint parquet file.
The parquet S3 interface is quite slow to filter by event, even if the data is partitioned by event.
This change greatly improves performance by targeting the relevant folder when retrieving a footprint event.
This helps to partially solve https://github.com/OasisLMF/OasisLMF/issues/1600
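A minimal sketch of the idea with pyarrow, assuming the footprint parquet is partitioned into per-event folders (the event_id=<n> layout and paths are illustrative, not the exact OasisLMF structure):

import pyarrow.parquet as pq

def read_footprint_event(footprint_dir: str, event_id: int):
    # Reading the event's partition folder directly avoids scanning
    # (and filtering) every other event's files over S3.
    return pq.read_table(f"{footprint_dir}/event_id={event_id}")

# e.g. table = read_footprint_event("s3://model-bucket/static/footprint", 123)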

OasisLMF - Release 2.3.13
OasisLMF Changelog
- #1604 - exposure run crashes if CondPeril is missing
- #1638 - CI Fix, Skip incompatible client checks
OasisLMF Notes
(PR #1633)
fix missing cond column when no term present - Fix for https://github.com/OasisLMF/OasisLMF/issues/1604
(PR #1638)
CI Fix, Skip incompatible client checks - The issue in https://github.com/OasisLMF/OasisLMF/pull/1618 (fixed in CI) can still cause problems locally if responses>=0.25.3.
Check the installed responses version and disable these tests if it is not supported.

OasisLMF - Release 1.28.11
OasisLMF Changelog
- #1604 - exposure run crashes if CondPeril is missing
- #1638 - CI Fix, Skip incompatible client checks
- #1611 - fix loss_out len in account level back allocation
- #1618 - Fix testing failures (API Client)
- #1599 - Fixes for CI testing stable 1.28.x
OasisLMF Notes
(PR #1633)
fix missing cond column when no term present - Fix for https://github.com/OasisLMF/OasisLMF/issues/1604
(PR #1638)
CI Fix, Skip incompatible client checks - The issue in https://github.com/OasisLMF/OasisLMF/pull/1618 (fixed in CI) can still cause problems locally if responses>=0.25.3.
Check the installed responses version and disable these tests if it is not supported.
(PR #1611)
Fix fmpy back allocation for account level term - Trim loss_out to the correct length before account level term back allocation
(PR #1618)
Fix testing failures (API Client) - The object returned from a mocked API post has changed between versions of the responses package.
Pinning to responses<=0.25.3 for now, but the test should be updated later.
(PR #1599)
Fixes for CI testing stable 1.28.x - Removed the numpy max version pin and fixed plat2 testing

OasisLMF - Release 2.4.0
OasisLMF Changelog
- #1394 - Net RI losses do not use -z in summarycalc
- #1607 - Fix/lookup sort keys
- #1591 - Dynamic footprint has incorrect type 1 losses
- #1611 - fix loss_out len in account level back allocation
- #1615 - Non-useful error message for missing PolInceptionDate with RA Basis reinsurance
- #1552 - Make analysis and model setting able to modify any computation step parameters (MDK parameters)
- #1618 - Fix testing failures (API Client)
- #1619 - Fix/ci pre analysis testing
- #1624 - Feature/oed v4
- #1625 - feat: security enhancements
OasisLMF Notes
(PR #1601)
Calculate summarycalc reinsurance without all zeros flag - Running summarycalc with the -z flag for Net RI was running into significant performance issues.
To resolve this, this PR:
- removes the output-all-zeros flag when running summarycalc or summarypy
- adds an additional check on the mean_idx to catch non-zeros that were previously being filtered out
Issue: #1394
(PR #1607)
Fix lookup sort_values of keys - The ordering of pandas df.sort_values was inconsistent during testing. Passing the kind='stable' argument resolves this issue.
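For reference, in pandas this just means passing kind='stable' (the column names below are made up):

import pandas as pd

df = pd.DataFrame({"areaperil_id": [2, 1, 2, 1], "loc_id": [10, 11, 12, 13]})
# 'stable' (mergesort) keeps the relative order of equal keys deterministic,
# unlike the default 'quicksort', whose tie ordering is not guaranteed
df = df.sort_values(by="areaperil_id", kind="stable")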
(PR #1611)
Fix fmpy back allocation for account level term - Trim loss_out to the correct length before account level term back allocation
(PR #1614)
fixes error message for missing PolInceptionDate - For RA-based reinsurance, introduced in 2.3.11 (https://github.com/OasisLMF/OasisLMF/pull/1576).
The solution is to look for both the non-'_x' and '_x' variants of these columns in the row, if they exist, and output them:
acc_info = {
    field: row[f'{field}_x'] if f'{field}_x' in row else row[f'{field}']
    for field in RISK_LEVEL_FIELD_MAP[oed.REINS_RISK_LEVEL_ACCOUNT]
    if f'{field}_x' in row or f'{field}' in row
}
This should output the correct error message, as shown in the example:
Error: PolInceptionDate missing for {'PortNumber': '1', 'AccNumber': 'A11111'}, cannot use AttachmentBasis [RA]. Please check the account file
(PR #1552)
Make analysis and model settings able to modify any computation step parameters (MDK parameters) - To be accessible to the user and modeler via analysis and model settings, MDK parameters used to need specific treatment in the code to allow the overwrite.
This PR makes it automatic by using the ods_tools settings object, which helps merge the different settings and parameters in a coherent way.
The package schema can be created on the CLI using: oasislmf model generate-computation-settings-json-schema
(PR #1618)
Fix testing failures (API Client) - The object returned from a mocked API post has changed between versions of the responses package.
Pinning to responses<=0.25.3 for now, but the test should be updated later.
(PR #1619)
Fix/ci pre analysis testing - Fix CI testing with ODS-tools main (using OEDv4)
(PR #1624)
Integrate new OED v4 coverages and perils in OasisLMF -

OasisLMF - Release 2.3.12
OasisLMF Changelog
- #1394 - Net RI losses do not use -z in summarycalc
- #1618 - Fix testing failures (API Client)
- #1611 - fix loss_out len in account level back allocation
- #1615 - Non-useful error message for missing PolInceptionDate with RA Basis reinsurance
OasisLMF Notes
(PR #1601)
Calculate summarycalc reinsurance without all zeros flag - Running summarycalc with the -z flag for Net RI was running into significant performance issues.
To resolve this, this PR:
- removes the output-all-zeros flag when running summarycalc or summarypy
- adds an additional check on the mean_idx to catch non-zeros that were previously being filtered out
Issue: #1394
(PR #1618)
Fix testing failures (API Client) - The object returned from a mocked API post has changed between versions of the responses package.
Pinning to responses<=0.25.3 for now, but the test should be updated later.
(PR #1611)
Fix fmpy back allocation for account level term - Trim loss_out to the correct length before account level term back allocation
(PR #1614)
fixes error message for missing PolInceptionDate - For RA-based reinsurance, introduced in 2.3.11 (https://github.com/OasisLMF/OasisLMF/pull/1576).
The solution is to look for both the non-'_x' and '_x' variants of these columns in the row, if they exist, and output them:
acc_info = {
    field: row[f'{field}_x'] if f'{field}_x' in row else row[f'{field}']
    for field in RISK_LEVEL_FIELD_MAP[oed.REINS_RISK_LEVEL_ACCOUNT]
    if f'{field}_x' in row or f'{field}' in row
}
This should output the correct error message, as shown in the example:
Error: PolInceptionDate missing for {'PortNumber': '1', 'AccNumber': 'A11111'}, cannot use AttachmentBasis [RA]. Please check the account file

OasisLMF - Release 2.3.11
OasisLMF Changelog
- #1394 - Net RI losses do not use -z in summarycalc
- #1250 - Support Risk Attaching 'RA' basis in reinsurance
- #1581 - oasislmf code uses legacy correlation settings location in model settings
- #1589 - Update platform API client for Cyber models
- #1594 - improve the memory performance of il layer number continuity step
- #1595 - Allow process perils with different resolution grids
OasisLMF Notes
(PR #1601)
Calculate summarycalc reinsurance without all zeros flag - Running summarycalc with the -z flag for Net RI was running into significant performance issues.
To resolve this, this PR:
- removes the output-all-zeros flag when running summarycalc or summarypy
- adds an additional check on the mean_idx to catch non-zeros that were previously being filtered out
Issue: #1394
(PR #1576)
Support Risk Attaching 'RA' basis in reinsurance - Implemented #1250 as shown below. Policies which meet the RA requirement are not set to the PASSTHROUGH_PROFILE_ID and are left as NO_LOSS_PROFILE_ID.
UseReinsDates is currently not considered for checking RA, only AttachmentBasis (LO/RA).
(PR #1587)
Added fix for correlation settings -
- Fixed #1581, where oasislmf code was looking for correlation settings in the legacy location; it now checks both.
- Added testing to catch problems loading correlation from model settings.
(PR #1589)
Update Platform API client for Cyber models - Cyber models can run with only an account file (no OED location input); updated the API client to allow for this.
(PR #1594)
OOM issue for accounts with lots of layers for several levels - Improve the memory performance of level matching between levels of accounts that have lots of layers.
(PR #1596)
Feature - Unique grid resolution for each area_peril - This PR does the following:
- Introduces a new builtin lookup function fixed_size_geo_grid_multi_peril which allows a unique grid resolution per peril.
- Fixes an issue where fixed_size_geo_grid created area_peril_id starting at index 0. It now starts at index 1 as expected.
Related Issue: #1595

OasisLMF - Release 1.28.10
OasisLMF Changelog
- #1506 - acc_idx read as category type from input/accounts.csv
- #1556 - Fix CI build system failures
- #1566 - Fiona package vulnerability issue
- #1599 - Fixes for CI testing stable 1.28.x
OasisLMF Notes
(PR #1556)
Fix CI build system failures -
- Updated GH actions to use artifact v4
- Pinned the Fiona package (the newer package is incompatible with some older versions of Geopandas): update to geopandas 0.14.4 or pin fiona to version 1.9.6
(PR #1565)
Remove pin of fiona package - A package pin of fiona causes CVE issues for published docker images
(PR #1599)
Fixes for CI testing stable 1.28.x - Removed the numpy max version pin and fixed plat2 testing

OasisLMF - Release 2.3.10
OasisLMF Changelog
- #1563 - Intensity Adjustments in gulmc for dynamic footprints
- #1585, #1575 - Fix missing complex keys return without amplification
- #1578 - fix: point to the correct source in error message for vulnerability_id…
- #1580 - Hazard correlation defaults to 100% if missing
- #1581 - oasislmf code uses legacy correlation settings location in model settings
OasisLMF Notes
(PR #1574)
Fix missing complex keys return without amplification -
- Fix for the complex keys return when running without PLA
- Added testing for complex keys return + logging output
(PR #1578)
Fix vulnerability_id missing error message - Fixes the error message so it points to the correct source when some vulnerability_ids are missing from the vulnerability file.
(PR #1584)
Fix: undo 100% correlation if no correlation setting provided in model settings - The hazard_group_ids were not generated if the correlation settings were not provided, causing 100% correlation across all locations.
This fix ensures the hazard_group_ids are generated and used correctly in gulmc.
(PR #1587)
Added fix for correlation settings -
- Fixed #1581, where oasislmf code was looking for correlation settings in the legacy location; it now checks both.
- Added testing to catch problems loading correlation from model settings.

OasisLMF - Release 2.3.9
OasisLMF Changelog
- #1567 - Occurrence file not found when requesting output from ktools component alt_meanonly
- #1563 - Intensity Adjustments in gulmc for dynamic footprints
- #1379 - Performance issue in get_exposure_summary
- #1557 - Release 2.3.8
- #1558 - support for having several pla sets
- #1560 - add check for vulnerability id and intensity bin boundary
- #1566 - Fiona package vulnerability issue
OasisLMF Notes
(PR #1550)
Fix performance issues with get exposure summary - 'No return' keys were included in the keys-errors file, but this ballooned to a huge size when the AA1 peril code was used, since we would see no-returns for all peril codes. This was causing the performance of the generate exposure summary report to degrade massively.
The fix removes the no-returns from the returned errors, but counts them in the exposure summary.
(PR #1558)
support for having several pla sets - Allows the user to specify a different PLA set from the default one by adding loss_factors_set in the model_settings part of analysis_settings.json.
If "loss_factors_set": "2" is added, the loss factors taken into account will be those in lossfactors_2.bin in the static folder, as sketched below.
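A minimal analysis_settings.json fragment illustrating this (all other settings elided); with it, plapy would pick up lossfactors_2.bin from the static folder:

{
    "model_settings": {
        "loss_factors_set": "2"
    }
}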
(PR #1560)
Fix out of bound issue when vulnerability file contains more intensity bins than the footprint - In the model part, we want to load only the intensities that are relevant to the footprint; however, there was an issue in the code where the out-of-bound intensities were overwriting other parts of the vulnerability table being created.
Also adds a check to verify that all needed vulnerability_ids are present in the vulnerability static file.
(PR #1565)
Remove pin of fiona package - A package pin of fiona causes CVE issues for published docker images

OasisLMF - Release 2.3.8
OasisLMF Changelog
- #1536 - Release 2.3.7 (Aug 6)
- #1544 - dynamic footprint - slow performance for large hazard case
- #1546 - Combus changes: 20240807
- #1554 - drop duplicate il term lines before merging
- #1547 - Running full_correlation + gulmc causes an execution to hang.
- #1556 - Fix CI build system failures
- #1559 - Read do_disaggregation from settings files
- #1560 - add check for vulnerability id and intensity bin boundary
- #1561 - Set Ktools 3.12.4
- #1531 - fix effective deductible applied in minded calculation for calcrule 19
OasisLMF Notes
(PR #1554)
Reduce memory usage in il generation step - Drop duplicated lines containing each specific level's terms before merging with the input level, to reduce memory consumption
(PR #1555)
Disable full_correlation when running gulmc - Gulmc provides a more fine-grained correlation feature, which makes the full_correlation flag not useful; it also created an issue where the run would hang.
This change disables full_correlation in that case, fixing the hanging issue.
(PR #1556)
Fix CI build system failures -
- Updated GH actions to use artifact v4
- Pinned the Fiona package (the newer package is incompatible with some older versions of Geopandas): update to geopandas 0.14.4 or pin fiona to version 1.9.6
(PR #1559)
Read do_disaggregation from settings json files - Workaround to address ticket #1497, to allow more time to create a better option for overriding settings: https://github.com/OasisLMF/OasisLMF/pull/1552
(PR #1560)
Fix out of bound issue when vulnerability file contains more intensity bins than the footprint - In the model part, we want to load only the intensities that are relevant to the footprint; however, there was an issue in the code where the out-of-bound intensities were overwriting other parts of the vulnerability table being created.
Also adds a check to verify that all needed vulnerability_ids are present in the vulnerability static file.
(PR #1531)
Fix issue in calcrule 19 when min deductible is triggered - When the %-of-loss deductible is smaller than the minimum deductible, neither the minimum deductible nor the %-of-loss deductible gets applied. This overstates gross losses.
The deductible passed to the min deductible calculation was incorrect for calcrule 19, as we passed the percentage instead of the actual deductible; this fixes the issue by passing the effective deductible for this calcrule.

OasisLMF - Release 1.28.9
OasisLMF Changelog
- #1518 - Fix/pre analysis user dir
- #1554 - drop duplicate il term lines before merging
- #1547 - Running full_correlation + gulmc causes an execution to hang.
- #1556 - Fix CI build system failures
OasisLMF Notes
(PR #1518)
Added user_data_dir to pre and post analysis hooks - Path for custom user assets added to the hook class parameters
(PR #1554)
Reduce memory usage in il generation step - Drop duplicated lines containing each specific level's terms before merging with the input level, to reduce memory consumption
(PR #1555)
Disable full_correlation when running gulmc - Gulmc provides a more fine-grained correlation feature, which makes the full_correlation flag not useful; it also created an issue where the run would hang.
This change disables full_correlation in that case, fixing the hanging issue.
(PR #1556)
Fix CI build system failures -
- Updated GH actions to use artifact v4
- Pinned the Fiona package (the newer package is incompatible with some older versions of Geopandas): update to geopandas 0.14.4 or pin fiona to version 1.9.6

OasisLMF - Release 1.27.10
OasisLMF Changelog
- #1474 - Use billiard package for keys multiprocess if available
- #1514 - manual merge of PR 1509 improve perf of file preparation
- #1547 - Running full_correlation + gulmc causes an execution to hang.
- #1556 - Fix CI build system failures
OasisLMF Notes
(PR #1474)
Use billiard package for keys multiprocess in workers - Needed for https://github.com/OasisLMF/OasisPlatform/pull/994; otherwise, running a keys lookup with multiprocessing will throw an exception:
[2024-03-14 12:19:54,399: ERROR/ForkPoolWorker-1] generate_input[d60b57a2-f8f9-4794-8f0a-831380a44ea0]: daemonic processes are not allowed to have children
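The pattern this implies is an import fallback, sketched below (the pool helper is hypothetical); billiard mirrors the stdlib multiprocessing API but allows child processes inside daemonic Celery workers:

try:
    import billiard as multiprocessing  # Celery-friendly multiprocessing fork
except ImportError:
    import multiprocessing

def run_keys_lookup(lookup_fn, location_chunks, processes=4):
    # Fan the lookup out over chunks of locations; under billiard this also
    # works inside a daemonic worker process, where the stdlib Pool would raise.
    with multiprocessing.Pool(processes=processes) as pool:
        return pool.map(lookup_fn, location_chunks)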
(PR #1514)
improve perf of file preparation -
- improve speed of key server id missing check
- improve gul loss generation speed and memory usage in exposure run
(PR #1555)
Disable full_correlation when running gulmc - Gulmc provides a more fine-grained correlation feature, which makes the full_correlation flag not useful; it also created an issue where the run would hang.
This change disables full_correlation in that case, fixing the hanging issue.
(PR #1556)
Fix CI build system failures -
- Updated GH actions to use artifact v4
- Pinned the Fiona package (the newer package is incompatible with some older versions of Geopandas): update to geopandas 0.14.4 or pin fiona to version 1.9.6

OasisLMF - Release 2.3.7
OasisLMF Changelog
- #1533 - API client names the downloaded output file .tar instead of .tar.gz
- #1530 - Short flags unexpectedly changed in 2.3.6
- #1539 - Allow keys files with both amplification_id and model_data columns
- #1542 - Time and Memory performance issue for RI contract
- #1531 - fix effective deductible applied in minded calculation for calcrule 19
- #1532 - make all compute step run command oasis logged
- #1468 - loss output at intermediate inuring priorities - new features
- #1535 - add interval mapping to built in function
OasisLMF Notes
(PR #1538)
Fix missing short flags from CLI - Fix for https://github.com/OasisLMF/OasisLMF/issues/1530; the new custom hooks from 2.3.6 caused the CLI to lose some flags.
(PR #1541)
Fix Cartesian product issue for RI Fac contract - With the introduction of all the scope OED columns in the RI filter process, the Fac contract had to have part of its logic moved to the filter level. However, contrary to the other contract types, we only use one layer_id for all FAC; this created a Cartesian product during the merge between info and scope.
This PR adds some logic to merge on the risk level columns, in addition to layer_id, when contracts appear to be an exact match, which is the case for FAC.
(PR #1531)
Fix issue in calcrule 19 when min deductible is triggered - When the %-of-loss deductible is smaller than the minimum deductible, neither the minimum deductible nor the %-of-loss deductible gets applied. This overstates gross losses.
The deductible passed to the min deductible calculation was incorrect for calcrule 19, as we passed the percentage instead of the actual deductible; this fixes the issue by passing the effective deductible for this calcrule.
(PR #1532)
add oasis log to all compute step run commands
(PR #1534)
Allow loss output at intermediate RI inuring priorities -
- create new rl_outputs and rl_summaries parameters in analysis settings to drive all reinsurance loss perspective outputs
- relabel the existing reinsurance loss outputs to use the code rl rather than ri, but otherwise keep the same naming convention
- use the existing ri_inuring_priorities parameter in analysis settings to drive new intermediate reinsurance net loss perspectives, with output files labelled ri. This involves some genbash work to 'tee' the fmpy RI net stream to files as well as passing it down the pipe to the next RI calculation
As part of this work, and to prepare for the UI controls, we are considering making changes to the run dir:
- move ri_layers.json into the inputs folder (so that the available net perspectives may be read by the UI)
- move the RI_[X] folders from the root run dir to be nested in the inputs dir, and make the necessary changes to genbash to change the relative folder path of the RI_[X] folders in the run_ktools script.
(PR #1535)
add new built-in lookup build_interval_to_index - Allows the user to map a float to an index by giving each index's interval, as sketched below.
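A rough numpy illustration of the interval-to-index mapping (the boundary values are made up, and this is not the OasisLMF signature):

import numpy as np

# intervals: (-inf, 0.5) -> 0, [0.5, 1.5) -> 1, [1.5, 3.0) -> 2, [3.0, inf) -> 3
boundaries = np.array([0.5, 1.5, 3.0])

def interval_to_index(values):
    # searchsorted returns, for each value, how many boundaries it is >= to
    return np.searchsorted(boundaries, values, side="right")

# interval_to_index(np.array([0.2, 0.5, 2.9, 10.0])) -> array([0, 1, 2, 3])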

OasisLMF - Release 2.3.6
OasisLMF Changelog
- #1516 - Added support for occurrence attachment for Per Risk XL
- #1517 - event ranges
- #1518 - Fix/pre analysis user dir
- #1520, #1519 - get_vulns update
- #1258 - Document oasislmf.json configuration options
- #1488 - Add post input-gen hook and pre exec hook
- #1526 - Oasis Dynamic Model
- #1529 - add user_data_dir to all the hooks
OasisLMF Notes
(PR #1516)
Add support for occurrence attachment for a PerRisk XL (ReinsType=PR) - ReinsType 'PR' now supports both risk and occurrence attachment and limit, meaning that it can be used to model reinsurance with unusual features, like a CatXL with risk terms, for example.
(PR #1517)
Support for ranges of events in analysis settings - Adds support for ranges of events in analysis settings, in addition to the list of event ids which can already be provided.
Event id ranges should be submitted using the new "event_ranges" option in analysis settings.
The string data takes the format "1-4,8,89-94", which is parsed into an event list (see the sketch below).
It can be used in combination with the existing list, and unique values will be established.
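A minimal sketch of parsing such a range string (illustrative, not the OasisLMF parser):

def parse_event_ranges(ranges: str) -> list:
    # Expand a string like "1-4,8,89-94" into a sorted list of unique event ids
    event_ids = set()
    for part in ranges.split(","):
        if "-" in part:
            start, end = part.split("-")
            event_ids.update(range(int(start), int(end) + 1))
        else:
            event_ids.add(int(part))
    return sorted(event_ids)

# parse_event_ranges("1-4,8,89-94") -> [1, 2, 3, 4, 8, 89, 90, 91, 92, 93, 94]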
(PR #1518)
Added user_data_dir to pre and post analysis hooks - Path for custom user assets added to the hook class parameters
(PR #1490)
oasislmf.json options now included in docs generated by sphinx
(PR #1524)
add post file gen and pre loss calculation step - Adds the possibility for model developers to add two specific steps to a model run:
- post file gen: part of generate files; happens after the oasis files are generated (single worker on the platform)
- pre loss: part of generate losses; happens before the losses are calculated (each loss calculation worker on the platform)
(PR #1496)
Oasis Dynamic Footprint Generation - Additional functionality added to generate footprint data on the fly from underlying hazard maps and event definitions, using the presented exposure data as a filter.
New functionality
- New keys functionality to provide section_id as part of the keys, which represents a sub-section of the geography of the model, aligned with the hazard map levels
- model_data can also be included in non-complex model keys to house the data needed for defense adjustments
- new hazard_case and event_definition model files accepted, representing the hazard maps and event definitions
- new functionality to read these files, filtered based on the locations in the exposure, and generate footprints
Outstanding tasks
- overlapping sections - at the moment it is assumed that no sections will overlap, but there is a requirement for this within a single event, with functionality to allow the most severe return period to be chosen
- adjustments for defenses - the framework is there to capture the data and present it at the relevant point, but the calculations do not yet use it
- unit tests to be added to oasislmf. Manual testing has been done with OasisModels, but a small representative model needs to be added to the unit tests
(PR #1529)
Add user_data_dir to all hooks

OasisLMF - Release 2.3.5
OasisLMF Changelog
- #1503 - Add a guard which 'fast fails' runs using more than 9 summary groups + summarycalc
- #1454 - RI Scope file filters
- #1506 - acc_idx read as category type from input/accounts.csv
- #1508 - fix pla reader
- #1509 - improve perf of file preparation
- #1510 - Fix error handling for failed file loading in 'get_dataframe'
- #1498 - When LayerParticipation is not present, gross losses are not produced
- #1491 - Correlation group fields are not generated correctly for disaggregated risks
- #1499 - OASISLMF installation issue
OasisLMF Notes
(PR #1504)
Added guard for summarycalc when output groupings exceed the max of 9 -
- With the addition of summarypy, more than 9 output groups are now supported; however, summarycalc will crash, so a check was added to catch this before going to execution.
- Updated the bash script error trap to only exclude scripts starting with *startup.sh. This is to prevent workers going down, while making sure any subshells */run_ktools.sh are not left up, which could cause an execution to hang.
(PR #1505)
Support ri_scope file filter -
- Support all the filters in ri_scope ('CedantName', 'ProducerName', 'LOB', 'CountryCode', 'ReinsTag')
- revamp of the RI file generation step for speed and simplicity
(PR #1508)
Fix plapy reader creating negative event in output - The plapy reader is a bit different from the other readers using the generic approach, because it returns all events at once, even if not finished.
This was not handled properly for big events. This fix remediates it by keeping the last event and item id in memory and updating the read and write memory views.
(PR #1509)
Improve speed and memory usage of file preparation -
- improve speed of key server id missing check
- improve gul loss generation speed and memory usage in exposure run
- improve speed of disaggregation
- reduce memory usage of calc rule id assignment
(PR #1510)
Fix error handling for failed file loading in 'get_dataframe' - In some cases a keys.csv file is created as an empty file with no header. This causes the error trace in no-keys-return_no-catch.txt.
Added a try/except to return the following message instead:
Keys successful: <path>/input/keys.csv generated with 0 items
Keys errors: <path>/input/keys-errors.csv generated with 60 items
0%| | 0/2 [00:00<?, ?it/s]
Failed to load "<path>/input/keys.csv", No successful lookup results found in the keys file - Check the `keys-errors.csv` file for details.
File path: <path>/input/keys-errors.csv, ValueError: cannot mmap an empty file
(PR #1513)
fix fm files generation for terms with non-0 default value - Some terms, such as LayerParticipation and AccParticipation, have a non-0 default value that was not taken into account during fm file generation.
In this fix we use ods_tools to get the default value for the term and add it if necessary.
(PR #1494)
Extended field list for correlation groupings for disaggregated risks -In order to add functionality to change the way hazard and damage can be correlated for risks which are disaggregated by NumberOfBuildings>1, we have added support for more internal Oasis fields 'risk_id' and 'building_id' which can be used in conjunction with PortNumber, AccNumber, LocNumber in (model_settings) data settings so that subrisk hazard and damage may be correlated or uncorrelated as specified by the model provider.
The usage of these fields to achieve specific correlation behaviour for disaggregated risks will be documented in https://oasislmf.github.io/sections/correlation.html and available in LTS - 2.3 and later.
Default data settings will not be changed to make use of these fields. The fields must be specified in data settings in model settings in order for them to affect correlation behaviour.
Usage example (model settings json)
"data_settings": {
"damage_group_fields": ["PortNumber", "AccNumber", "LocNumber","building_id"],
"hazard_group_fields": ["PortNumber", "AccNumber", "LocNumber", "building_id"]
}
"data_settings": {
"damage_group_fields": ["PortNumber", "AccNumber", "LocNumber","risk_id"],
"hazard_group_fields": ["PortNumber", "AccNumber", "LocNumber", "risk_id"]
}
"data_settings": {
"damage_group_fields": ["PortNumber", "AccNumber", "LocNumber","risk_id"],
"hazard_group_fields": ["PortNumber", "AccNumber", "LocNumber", "building_id"]
}
Fixes #1491
(PR #1500)
Fixed Python 3.12 installation issue -
- Unpinned Numpy
- Added testing for Python 3.12
As outlined in https://github.com/OasisLMF/OasisLMF/issues/1499, the package pin numpy<1.26 causes installation issues when running Python 3.12. Fixed by removing that numpy requirement.

OasisLMF - Release 1.28.8
OasisLMF Changelog
- #1473 - adjust vuln index after vuln_dict has the updated indexes
- #1509 - improve perf of file preparation
- #1478 - Release 1.28.7
- #1498 - When LayerParticipation is not present, gross losses are not produced
- #1491 - Correlation group fields are not generated correctly for disaggregated risks
OasisLMF Notes
(PR #1473)
Fix gulmc vulnerability loading - Adjusts gulmc to take into account when the order in the loaded vulnerability table is adjusted (in particular when loading with parquet).
This fix uses the same solution as in gulpy: when we load the mapping between areaperil and vulnerability, we first use the vulnerability id. Once we have loaded the vulnerability table and know each index, we update this mapping to the index directly (in the preparation phase) to remove this lookup from the main loop.
(PR #1509)
Improve speed and memory usage of file preparation -
- improve speed of key server id missing check
- improve gul loss generation speed and memory usage in exposure run
- improve speed of disaggregation
- reduce memory usage of calc rule id assignment
(PR #1513)
fix fm files generation for terms with non-0 default value - Some terms, such as LayerParticipation and AccParticipation, have a non-0 default value that was not taken into account during fm file generation.
In this fix we use ods_tools to get the default value for the term and add it if necessary.
(PR #1494)
Extended field list for correlation groupings for disaggregated risks -In order to add functionality to change the way hazard and damage can be correlated for risks which are disaggregated by NumberOfBuildings>1, we have added support for more internal Oasis fields 'risk_id' and 'building_id' which can be used in conjunction with PortNumber, AccNumber, LocNumber in (model_settings) data settings so that subrisk hazard and damage may be correlated or uncorrelated as specified by the model provider.
The usage of these fields to achieve specific correlation behaviour for disaggregated risks will be documented in https://oasislmf.github.io/sections/correlation.html and available in LTS - 2.3 and later.
Default data settings will not be changed to make use of these fields. The fields must be specified in data settings in model settings in order for them to affect correlation behaviour.
Usage example (model settings json)
"data_settings": {
"damage_group_fields": ["PortNumber", "AccNumber", "LocNumber","building_id"],
"hazard_group_fields": ["PortNumber", "AccNumber", "LocNumber", "building_id"]
}
"data_settings": {
"damage_group_fields": ["PortNumber", "AccNumber", "LocNumber","risk_id"],
"hazard_group_fields": ["PortNumber", "AccNumber", "LocNumber", "risk_id"]
}
"data_settings": {
"damage_group_fields": ["PortNumber", "AccNumber", "LocNumber","risk_id"],
"hazard_group_fields": ["PortNumber", "AccNumber", "LocNumber", "building_id"]
}
Fixes #1491

OasisLMF - Release 2.3.4
OasisLMF Changelog
- #1480 - Storage Manager fixes for Platform Lot3
- #1487 - Remove or enable changing the limit of output summaries (currently set to 9)
- #1355 - Refactor footprint file format priorities
OasisLMF Notes
(PR #1480)
Storage Manager fixes for Platform Lot3 -
- Fixed the run error Fatal Python error: PyGILState_Release when using remote storage; the issue comes from pyarrow==14.x.x
- Fixed an unhelpful error message when storage manager credentials are invalid. If running with remote storage, the method model_storage.listdir() is called to check the connection before continuing with the execution.
- Fixed a platform error: skip the file copy of model_storage_config_fp if the file already exists and is the same.
(PR #1482)
Add number of affected risks to special idx calculated by summary calc (new python version) -
- Introduce a new pytools module summarypy to replace summarycalc
- add the number of affected risks as sidx -4 in the output of summarypy for gul and il (https://github.com/OasisLMF/ktools/issues/365)
- merge the dtype input definition for gulpy, gulmc, fmpy, plapy and summarypy (https://github.com/OasisLMF/OasisLMF/issues/1155)
- merge the event stream reading logic for gulpy, gulmc, fmpy, plapy and summarypy
- add the possibility to have any number of summary set ids (https://github.com/OasisLMF/OasisLMF/issues/1487)
(PR #1486)
Remove multiple definitions of footprint file format priorities - The new static method getmodel/footprint.py::Footprint::get_footprint_fmt_priorities() can now be called from execution/bin.py::set_footprint_set() to get a list of footprint file format priorities. This removes the duplicate definition in this function. As before, the priority order is defined in getmodel/common.py.

OasisLMF - Release 2.3.2
OasisLMF Changelog
- #1471 - number_of_samples = 0 not working in oasislmf
- #1473 - adjust vuln index after vuln_dict has the updated indexes
- #1474 - Use billiard package for keys multiprocess if available
- #1481 - Set ktools to 3.12.1
- #1464 - Feature/add fm tests
- #732 - Align FM and RI column headers
OasisLMF Notes
(PR #1473)
Fix gulmc vulnerability loading - Adjusts gulmc to take into account when the order in the loaded vulnerability table is adjusted (in particular when loading with parquet).
This fix uses the same solution as in gulpy: when we load the mapping between areaperil and vulnerability, we first use the vulnerability id. Once we have loaded the vulnerability table and know each index, we update this mapping to the index directly (in the preparation phase) to remove this lookup from the main loop.
(PR #1474)
Use billiard package for keys multiprocess in workers - Needed for https://github.com/OasisLMF/OasisPlatform/pull/994; otherwise, running a keys lookup with multiprocessing will throw an exception:
[2024-03-14 12:19:54,399: ERROR/ForkPoolWorker-1] generate_input[d60b57a2-f8f9-4794-8f0a-831380a44ea0]: daemonic processes are not allowed to have children
(PR #1481)
Set ktools to 3.12.1 -See: https://github.com/OasisLMF/ktools/releases/tag/v3.12.1
(PR #1464)
Added fm validation tests - Tests were added to CI to test recent financial module features, including support for account level participation (only) and handling of duplicate locations in OED with one blank CondTag.
(PR #1465)
Bring RI file headers in line with those of FM - In the FM files fm_policytc.csv and fm_profile.csv, the field name policytc_id has been replaced by profile_id. This brings these files in line with those of RI.

OasisLMF - Release 1.28.7
OasisLMF Changelog
- #1471 - number_of_samples = 0 not working in oasislmf
- #1473 - adjust vuln index after vuln_dict has the updated indexes
- #1474 - Use billiard package for keys multiprocess if available
- #1477 - CI - Fix broken symlinks in test_generate_losses (1.28.x)
- #1437 - Release 1.28.6
OasisLMF Notes
(PR #1473)
Fix gulmc vulnerability loading -Adjust gulmc to take into account when the order in the loaded vulnerability table is adjusted (in particular when loading with parquet)
This fix use the same solution as in gulpy, when we load the mapping between areaperil and vulnerability we use first the vulnerability id. Once we have loaded the vulnerability table and we know each index, we update this mapping to the index directly (in the preparation phase) to remove this lookup from the main look
(PR #1474)
Use billiard package for keys multiprocess in workers - Needed for https://github.com/OasisLMF/OasisPlatform/pull/994; otherwise, running a keys lookup with multiprocessing will throw an exception:
[2024-03-14 12:19:54,399: ERROR/ForkPoolWorker-1] generate_input[d60b57a2-f8f9-4794-8f0a-831380a44ea0]: daemonic processes are not allowed to have children
(PR #1477)
CI - Fix broken symlinks in test_generate_losses (1.28.x) - Symlinks using the new pytest package are broken for some tests in test_generate_losses.py.
Fixed by running all of these checks in a temp model_run_dir.

OasisLMF - Release 1.27.9
OasisLMF Changelog
- #1471 - number_of_samples = 0 not working in oasislmf
- #1474 - Use billiard package for keys multiprocess if available
- #1428 - Release 1.27.8
OasisLMF Notes
(PR #1474)
Use billiard package for keys multiprocess in workers -Needed for https://github.com/OasisLMF/OasisPlatform/pull/994 otherwise running a keys lookup with multiprocessing
will throw an exception:
[2024-03-14 12:19:54,399: ERROR/ForkPoolWorker-1] generate_input[d60b57a2-f8f9-4794-8f0a-831380a44ea0]: daemonic processes are not allowed to have children

OasisLMF - Release 2.3.1
OasisLMF Changelog
- #1444 - support for pandas 3
- #1446 - Add missing loc_id check
- #1447 - use correct error_model in back allocation
- #1448 - ensure header is written in keys.csv
- #1449 - Footprint_set option not working with parquet format
- #1455 - fix for vuln parquet read
- #1350 - model settings - correlation settings - allow optional hazard or damage correlation value
- #1445 - Platform API needs to check RUN_MODE to detect workflow
- #146 - Outputs Reinsurance: RI contract level output (by ReinsNumber and or ReinsType)
- #1385 - Missing parquet library dependencies for gulmc in 1.27
- #1460 - Occurrence file not found when requesting output from ktools component aalcalcmeanonly
- #1467 - fix for 1 loc with no account fm terms
OasisLMF Notes
(PR #1444)
Preemptive work to support pandas 3 - Pandas 3 will bring several changes to the way pandas behaves.
Pandas provides two options to set to mimic the future behavior.
This PR makes sure all tests pass when the following options are set:
pd.options.mode.copy_on_write = True
pd.options.future.infer_string = True
(setting these options is not part of the PR; they were just defined when testing locally)
(PR #1446)
Added check for missing loc_ids after lookup returns keys -
- Safeguard to ensure all location rows are processed by the lookup class; a return for each loc_id should exist in either keys.csv or keys_error.csv
- Fixed platform testing after release of 2.3.0
(PR #1447)
fix potential ZeroDivisionError during fmpy back allocation
(PR #1448)
Fix missing header in keys.csv - This fix makes sure a header is written in keys.csv, even if the first block of results sent to the keys writer is all failing.
(PR #1450)
Fix footprint_set and vulnerability_set options with parquet - footprint_set and vulnerability_set now correctly handle data provided in parquet format
(PR #1455)
Parquet vulnerability read fix - Apply the vulnerability filter before retrieving the vulnerability parquet, to reduce memory usage and reading time
(PR #1456)
Hazard and damage correlation values in model settings now optional - The correlation value for either damage or hazard is now optional and defaults to zero if not entered.
(PR #1457)
Update OasisAPI client to poll analysis status using run_mode - With OasisPlatform 2.3.0, the v2 endpoints can support both execution workflows (single server or distributed).
Fixes the OasisAPI client to check for the new run_mode={v1|v2} when waiting for an analysis to complete.
(PR #1458)
Enable reinsurance loss output at intermediate inuring priorities - Support for concurrent net and gross reinsurance output streams has been introduced to fmpy. This change allows the user to request output at intermediate inuring priorities. This is facilitated by branching off gross losses at every requested inuring priority, establishing new streams. Requested reinsurance summaries are extracted from these streams.
(PR #1459)
Fixed package requirements -
- Moved PyArrow to a required package; it is needed for gulmc, which is now the default
- Set maximum pandas version to 2.1.x https://github.com/OasisLMF/OasisLMF/issues/1466
(PR #1463)
Fix missing occurrence file error when aalcalcmeanonly output requested - When only aalcalcmeanonly output is requested and an identifier is used to identify the occurrence file to be used, a symbolic link to that file is created in the run static directory. This fixes an issue where the symbolic link was not created in the aforementioned scenario.
(PR #1467)
fix issue with 1 location and no account terms - Make sure the information in the account file is merged even if no financial terms are present.

OasisLMF - Release 1.28.6
OasisLMF Changelog
- #1460 - Occurrence file not found when requesting output from ktools component aalcalcmeanonly
- #1413 - Release 1.28.5
- #1446 - Add missing loc_id check
- #1447 - use correct error_model in back allocation
- #1448 - ensure header is written in keys.csv
- #1350 - model settings - correlation settings - allow optional hazard or damage correlation value
- #1385 - Missing parquet library dependencies for gulmc in 1.27
- #1430 - FM acceptance tests failing with pandas==2.2.0
- #1467 - fix for 1 loc with no account fm terms
- #1407 - Added tests for condition coverages 1-5 financial terms
OasisLMF Notes
(PR #1463)
Fix missing occurrence file error when aalcalcmeanonly output requested - When only aalcalcmeanonly output is requested and an identifier is used to identify the occurrence file to be used, a symbolic link to that file is created in the run static directory. This fixes an issue where the symbolic link was not created in the aforementioned scenario.
(PR #1446)
Added check for missing loc_ids after lookup returns keys -
- Safeguard to ensure all location rows are processed by the lookup class; a return for each loc_id should exist in either keys.csv or keys_error.csv
- Fixed platform testing after release of 2.3.0
(PR #1447)
fix potential ZeroDivisionError during fmpy back allocation
(PR #1448)
Fix missing header in keys.csv - This fix makes sure a header is written in keys.csv, even if the first block of results sent to the keys writer is all failing.
(PR #1456)
Hazard and damage correlation values in model settings now optional - The correlation value for either damage or hazard is now optional and defaults to zero if not entered.
(PR #1459)
Fixed package requirements -
- Moved PyArrow to a required package; it is needed for gulmc, which is now the default
- Set maximum pandas version to 2.1.x https://github.com/OasisLMF/OasisLMF/issues/1466
(PR #1431)
adapt code to be compatible between pandas 2.2 and previous versions -
- remove some deprecated idioms
- fix issue with acc_idx when loading accounts
(PR #1467)
fix issue with 1 location and no account terms - Make sure the information in the account file is merged even if no financial terms are present.
(PR #1407)
Added tests for condition coverages 1-5 financial terms - Completed all units in validation/insurance_policy_coverages

OasisLMF - Release 1.27.8
- Fix/double condtag #1420
- Add missing loc_id check #1446
- ensure header is written in keys.csv #1448
- use correct error_model in back allocation #1447
- Allow optional hazard or damage correlation value #1456

OasisLMF - Release 2.3.0
OasisLMF Changelog
- #1409 - Fix server-version flag for API client runner
- #1410 - Support for AccParticipation
- #1412 - use category for peril_id in keys.csv, improve write_fm_xref_file
- #1408, #1414 - Replace single vulnerabilities through additional adjustments settings or file
- #1416 - fix useful columns when extra aggregation level is needed
- #1417 - Update CI job triggers - only test on PR or commit to main branches
- #1421 - add test with location with 1 empty and 1 level 2 condtag
- #1423 - add acc participation only
- #1425 - Customise specific vulnerabilities (without providing full replacement data)
- #140 - Implement OED peril fields
- #1429 - franchise deductible
- #1430 - FM acceptance tests failing with pandas==2.2.0
- #1422 - Adjust log levels separately for modules
- #1435 - Fix/update defaults
- #1441 - Feature/lot3 merge
- #1443 - Set package versions for 2.3.0
- #1249 - Discuss documentation strategy
- #1340 - collect_unused_df in il preparation
- #1341 - Bug in latest platform2 release
- #1326 - Update the KeyLookupInterface class to have access to the lookup_complex_config_json
- #140 - Implement OED peril fields
- #1349 - Fix removal of handlers to logger + give logfiles unique names
- #1322 - Step policies: Allow BI ground up loss through to gross losses
- #1293 - Multiple footprint file options
- #1357 - fix permissions for docs deploy
- #1360 - Add docs about gulmc
- #1366 - Update fm supported terms document
- #1347 - Add runtime user supplied secondary factor option to plapy
- #1317 - Add post-analysis hook
- #1372 - Incorrect TIV in the summary info files
- #1377 - Clean up 'runs' dir in repo
- #1378 - Support output of overall average period loss without standard deviation calculation
- #1292 - Parquet format summary info file
- #1382 - Change vulnerability weight data type from 32-bit integer to 32-bit float in gulmc
- #1381 - Converting exposure files to previous OED version before running model
- #1394 - Net RI losses do not use -z in summarycalc
- #1398 - Allow disaggregation to be disabled
- #1399 - Fixed loading booleans from oasislmf.json
- #1088 - Correlation options for the user
- #1405 - Fix/non compulsory condtag
- #1403 - Vulnerability File Option
- #1407 - Added tests for condition coverages 1-5 financial terms
OasisLMF Notes
(PR #1409)
Fixed flag in APIclient to set the server version - Use by setting oasislmf api run --server-version v1 or oasislmf api run --server-version v2
(PR #1411)
Support for AccParticipation - Adds support for AccParticipation at all account levels.
Introduces new calcrules where a share term is positive for all direct calcrules.
These "duplicated" calcrules have an id corresponding to their no-share-term calcrule plus 100 (ex: deductible and limit, id 1 => deductible, limit and share, id 101).
Note that calcrules with the same terms can have different ids depending on whether they are performed in "direct" levels or "direct layer" levels, because in "direct" the share is applied on top of the policy, which may have to keep track of deductible, underlimit and overlimit.
(PR #1412)
Improve memory usage when reading keys.csv - Use category dtype for peril_id when reading keys.csv.
Use the index directly when creating the fm_xref file.
(PR #1415)
Add Support for replacing individual Vulnerabilities through field in Analysis Settings -The analysis settings can contain a reference to a .csv file containing the changes or, directly, the necessary changes. If they do, while the specific ids are loaded, they will be taken from the replacements file (if present there) and not from the vulnerability file.
(PR #1416)
Fix in IL file generation with missing columns when a final level of aggregation is needed -When account level aggregation is performed but there are no terms, some needed columns were not taken from the account file, leading to an error in get_xref_df:
KeyError: "['acc_idx', 'PolNumber'] not in index"
This fixes the issue by using all useful columns when the account file is merged.
(PR #1418)
Set ktools to 3.11.1 -
(PR #1421)
Add test for location with an empty condtag and a priority > 1 condtag -
(PR #1423)
Fix: add Account Participation only calcrule -Add the missing calcrule for when there is only Account Participation in the financial terms at account level
(PR #1426)
Factor adjustments to specific vulnerabilities -By specifying adjustments to specific vulnerabilities in the analysis settings, a factor can be applied to the probabilities of those vulnerability functions.
(PR #1299)
Support OED Peril terms and coverage specific terms for all levels -- support OED Peril terms (adding a filter so that only the losses from the correct perils are part of the policy)
- full revamp of the fm file generation step in order to preserve memory
- support coverage specific terms for conditions
- make the condition logic able to handle graph structures (not just tree structures)
Also, to be able to run our tests using exposure run, perils need to be taken from LocPerilsCovered;
exposure run gains an option to use LocPerilsCovered for the peril id and to use only certain perils.
Previously, during an exposure run, the perils used were determined based on num_subperils and their ids were 1 to num_subperils.
With this change the user can specify the perils covered by the deterministic model via --model-perils-covered;
if nothing is given, all perils in LocPerilsCovered will be attributed a key and will receive a loss from the model.
It is also now possible to specify extra summary columns, so they can be seen in the loss summary at the end of an exposure run, using --extra-summary-cols
example:
oasislmf exposure run -s ~/test/peril_test -r ~/OasisLMF/runs/peril_test --extra-summary-cols peril_id --model-perils-covered WTC
(PR #1429)
Add franchise deductible only policy -Add the possibility to use a franchise deductible without an associated limit
(PR #1431)
Adapt code to be compatible with both pandas 2.2 and previous versions -- remove some deprecated idioms
- fix an issue with acc_idx when loading accounts
(PR #1432)
Visibility of ods_tools logs in verbose mode for oasislmf -Choosing --verbose when running oasislmf will cause ods_tools logs at level DEBUG and above to be seen in the output.
(PR #1435)
Update default options -- Set API client default version to v2
- Set gulmc default to True
- Set modelpy and gulpy defaults to False
- Fixed gulmc execution error when running without correlation options
(PR #1441)
Added the OasisDataManager package to access remote model data (experimental feature) -This adds the option to load model_data files from a remote object store such as S3 or Azure Blob Storage.
File access is configured via a file named model_storage.json:
{
"storage_class": "oasis_data_manager.filestore.backends.aws_storage.AwsObjectStore",
"options": {
"bucket_name": "oasislmf-model-library-oasis-piwind",
"access_key": "<aws-s3-key-name>",
"secret_key": "<aws-s3-key-secret>",
"root_dir": "model_data/"
}
}
Example oasislmf.json
{
"model_storage_json": "model_storage.json",
"analysis_settings_json": "analysis_settings.json",
"lookup_config_json": "keys_data/PiWind/lookup_config.json",
"lookup_data_dir": "keys_data/PiWind"
}
(PR #1320)
Revamp the oasislmf package documentation -This PR fixes #1249 by revamping the oasislmf package documentation.
The complete documentation of the full Python API of oasislmf is automatically generated using sphinx-autoapi. There is no need to manually update the docs pages whenever the oasislmf package is updated: sphinx-autoapi dynamically finds the changes and generates the docs for the latest oasislmf version.
The documentation is built using the build-docs.yml GH action workflow on all PRs targeting main, and is built & deployed to the gh-pages branch for all commits on main.
(PR #1340)
In IL preparation, collect DataFrames that are no longer in use -In order to save a bit of memory, delete and garbage-collect the DataFrames that are not used anymore
(PR #1342)
Redefine key_columns as a local variable -Making changes to the global variable key_columns, which is a list of location file columns used in the lookup process, can lead to errors. As the variable is only used in the method builtin.py::Lookup::process_locations, it can be defined locally in that method instead.
(PR #1345)
Add complex model config into model config if both present -If both a complex model config and a model config are present, add the json dict from the complex config into the model config as below:
config['complex_config_dir'] = complex_config_dir
config['complex_config'] = complex_config
(PR #1346)
Update all fm tests to use AA1 as the peril in all peril columns -Work is in progress to support peril columns such as LocPerilsCovered, LocPeril, etc. in oasislmf. This change switches all perils to AA1, as these are generic tests; more tests specific to perils covered will be added later with the feature.
Also improves the split/combine scripts used to add fm unit tests by adding support for reinsurance files.
(PR #1349)
Fixed the removal of log handlers in logging redirect wrapper -- Log handlers were not correctly removed when exiting from log redirect
- Added log redirect to plapy
- Fixed open file leaks in testing
(PR #1351)
Step policies: Allow BI ground up loss through to gross losses -https://github.com/OasisLMF/OasisLMF/issues/1322
(PR #1352)
Support multiple identifiers for footprint files -To enable the storage of footprints in multiple files rather than a single master file, optional identifiers in the form of footprint file suffixes are now supported. This is executed in a similar way to that currently in place to distinguish multiple events and event occurrences files. The footprint_set
model settings option in the analysis settings file can be set to the desired file suffix for the footprint files to be used. A symbolic link to the desired footprint set is created in the static/
directory within the model run directory. Footprint file priorities are identical to those set by modelpy
and gulmc
, which in order of descending priority are: parquet; zipped binary; binary; and csv.
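For illustration, selecting a footprint set named "v2" might look like this in the analysis settings; the placement under model_settings mirrors the description above and should be treated as a sketch:
{
    "model_settings": {
        "footprint_set": "v2"
    }
}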
(PR #1360)
Add extensive docs about gulmc -This PR adds extensive documentation about gulmc.
(PR #1367)
Financial terms supported document update -The document has been updated to reflect recently added financial fields that are supported, including:
- Currency fields
- Account level terms
In addition, a 'Version introduced' field has been included to identify the version of OasisLMF in which the field was first supported, if later than v1.15 LTS.
(PR #1369)
Add options to enable Post Loss Amplification and provide secondary and uniform factors -The requirement for an amplifications file generated by the MDK as a trigger for the execution of Post Loss Amplification (PLA) has been replaced with the pla flag in the analysis settings file. This allows a user to enable or disable (default) the PLA component plapy.
Additionally, a secondary factor in the range [0, 1] can be specified from the command line with the argument -f when running plapy:
$ plapy -f 0.8 < gul_output.bin > plapy_output.bin
The secondary factor is applied to the deviation of the loss factor from 1. For example:
event_id | factor from model | relative factor from user | applied factor |
---|---|---|---|
1 | 1.10 | 0.8 | 1.08 |
2 | 1.20 | 0.8 | 1.16 |
3 | 1.00 | 0.8 | 1.00 |
4 | 0.90 | 0.8 | 0.92 |
Finally, an absolute, uniform, positive amplification/reduction factor can be specified from the command line with the argument -F
:
$ plapy -F 0.8 < gul_output.bin > plapy_output.bin
This factor is applied to all losses, thus loss factors from the model (those in lossfactors.bin
) are ignored. For example:
event_id | factor from model | uniform factor from user | applied factor |
---|---|---|---|
1 | 1.10 | 0.8 | 0.8 |
2 | 1.20 | 0.8 | 0.8 |
3 | 1.00 | 0.8 | 0.8 |
4 | 0.90 | 0.8 | 0.8 |
The absolute, uniform factor is incompatible with the relative, secondary factor. Therefore, if both are given by the user, a warning is logged and the secondary factor is ignored.
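The applied factors in the tables above follow from simple arithmetic; a minimal Python sketch of the relative (secondary) factor calculation:
def applied_factor(model_factor: float, secondary_factor: float) -> float:
    """Scale the deviation of the model loss factor from 1 by the secondary factor."""
    return 1.0 + (model_factor - 1.0) * secondary_factor

assert round(applied_factor(1.10, 0.8), 2) == 1.08
assert round(applied_factor(0.90, 0.8), 2) == 0.92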
(PR #1371)
Implement post analysis hook -Model vendors can supply a custom Python module that will be run after the analysis has completed. This module will have access to the run directory, model data directory and analysis settings. It could for instance modify the output files, parse logs to produce user-friendly reports or generate plots.
The two new Oasis settings required to use this feature are similar to the ones used for the pre-analysis hook:
- post_analysis_module: Path to the Python module containing the class.
- post_analysis_class_name: Name of the class.
The class must have a constructor that takes kwargs model_data_dir
, model_run_dir
and analysis_settings_json
, plus a run
method with no arguments. For example:
class MyPostAnalysis:
    def __init__(self, model_data_dir=None, model_run_dir=None, analysis_settings_json=None):
        self.model_data_dir = model_data_dir
        self.model_run_dir = model_run_dir
        self.analysis_settings_json = analysis_settings_json

    def run(self):
        # do something
        pass
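Wiring the class up could then look like the following oasislmf.json fragment (the module path is illustrative):
{
    "post_analysis_module": "hooks/post_analysis.py",
    "post_analysis_class_name": "MyPostAnalysis"
}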
(PR #1373)
Fix TIV calculation when NumberOfBuildings is > 1 in the location file -The TIV calculated in the output summaries was incorrect, as the granularity changed after the implementation of stochastic disaggregation (when NumberOfBuildings > 1).
Only 'loc_id' and 'coverage_type_id' were taken into account when detecting duplicates, leading to a lower TIV than expected.
With this change, 'building_id' and 'risk_id' are added to the summary_map, and building_id is included in the key used to detect duplicates when calculating the TIV.
(PR #1378)
Support output of overall average period loss without standard deviation calculation -The new ktools component aalcalcmeanonly
(see PR https://github.com/OasisLMF/ktools/pull/357) calculates the overall average period loss but does not include the standard deviation. As a result, it has a faster execution time and uses less memory than aalcalc
.
Support for executing this component as part of a model run has been introduced through the aalcalc_meanonly
(legacy output) and alt_meanonly
(ORD output) flags in the analysis settings file.
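As a sketch, the flags might be enabled alongside a summary request in the analysis settings; the exact placement of these keys within the settings schema is an assumption to verify against the documentation:
{
    "gul_output": true,
    "gul_summaries": [
        {"id": 1, "aalcalc_meanonly": true, "alt_meanonly": true}
    ]
}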
(PR #1380)
Write summary info files in same format as ORD output reports -Summary info files are now written in the same format as the ORD output reports. Therefore, should a user request ORD output reports in parquet format, the summary info files will also be in parquet format.
(PR #1386)
Change vulnerability weight data type to 32-bit float in gulmc -The data type for vulnerability weights that are read from the binary file weights.bin
by gulmc
has been changed from 32-bit integer to 32-bit float.
Converting exposure files to a previous OED version before running the model -If supported OED versions are reported in the model settings, exposure files are converted to the latest compatible OED version before running the model.
(PR #1397)
Assign output zeros flag to summarycalc for all reinsurance loss computes -The ktools
component summarycalc
does not output zero loss events by default. These zero loss events are required when net loss is called in fmpy
. Currently, net loss is called in all reinsurance instances, so the -z
flag has been assigned to all executions of summarycalc
when computing reinsurance losses.
(PR #1399)
Fixed loading booleans from oasislmf.json -The function str2bool(var) converts "False" (str) to False (bool), but it was not correctly called for values from the oasislmf.json file.
So setting a boolean flag with:
{
"do_disaggregation": "False"
}
evaluates to True because the type is str, not bool:
> (self.do_disaggregation)
'False'
> bool(self.do_disaggregation)
True
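The pitfall is plain Python truthiness: any non-empty string is truthy. A minimal sketch of a str2bool-style parser (illustrative, not oasislmf's exact implementation):
def str2bool(value):
    # Compare the text itself instead of relying on truthiness
    return str(value).strip().lower() in ("true", "1", "yes")

assert bool("False") is True       # the bug: non-empty strings are truthy
assert str2bool("False") is False  # the fix: parse the string's content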
(PR #1401)
Correlations info in analysis settings takes precedence over that in model settings -The default model correlation factors can be overwritten by specific "correlation_settings" added to the analysis settings file.
(PR #1405)
Fix: allow the CondTag column to be optional -Fix issue where CondTag was needed in the location file if it was present in the account file, forcing users to add an empty CondTag column.
(PR #1406)
Vulnerability file options can be selected using the dedicated field in analysis_settings -If "vulnerability_set" contains an identifier, the corresponding vulnerability file will be used.
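For example, assuming the identifier "v2" and placement under model_settings (mirroring footprint_set above), the analysis settings fragment might read:
{
    "model_settings": {
        "vulnerability_set": "v2"
    }
}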
(PR #1407)
Added tests for condition coverages 1-5 financial terms -Completed all units in validation/insurance_policy_coverages
Climate Change - Natural Hazard and Storm
- Python
Published by awsbuild about 1 year ago

OasisLMF - Release 1.28.5
1.28.5
OasisLMF Changelog -- #1376 - Release 1.28.3
- #1409 - Fix server-version flag for API client runner
- #1410 - Support for AccParticipation
- #1412 - use category for peril_id in keys.csv, improve write_fm_xref_file
- #1416 - fix useful columns when extra aggregation level is needed
- #1417 - Update CI job triggers - only test on PR or commit to main branches
- #1418 - Set ktools to 3.11.1
- #1387 - Release 1.28.4
- #1347 - Add runtime user supplied secondary factor option to plapy
- #1405 - Fix/non compulsory condtag
- #1403 - Vulnerability File Option
- #1407 - Added tests for condition coverages 1-5 financial terms
OasisLMF Notes
(PR #1409)
Fixed flag in API client to set the server version -Use by running oasislmf api run --server-version v1 or oasislmf api run --server-version v2
(PR #1411)
Support for AccParticipation -Adds support for AccParticipation at all account levels.
Introduces new calcrules where a share term is positive for each direct calcrule.
These "duplicated" calcrules have an id corresponding to their no-share calcrule plus 100
(e.g. deductible and limit, id 1 => deductible, limit and share, id 101).
Note that calcrules with the same terms can have different ids depending on whether they are performed at "direct" levels or at "direct layer" levels, because at "direct" the share is applied on top of the policy, which may have to keep track of deductible, underlimit and overlimit.
(PR #1412)
Improve memory usage when reading keys.csv -Use the category dtype for peril_id when reading keys.csv.
Use the index directly when creating the fm_xref file.
(PR #1416)
Fix in IL file generation with missing columns when a final level of aggregation is needed -When account level aggregation is performed but there are no terms, some needed columns were not taken from the account file, leading to an error in get_xref_df:
KeyError: "['acc_idx', 'PolNumber'] not in index"
This fixes the issue by using all useful columns when the account file is merged.
(PR #1418)
Set ktools to 3.11.1 -
(PR #1369)
Add options to enable Post Loss Amplification and provide secondary and uniform factors -The requirement for an amplifications file generated by the MDK as a trigger for the execution of Post Loss Amplification (PLA) has been replaced with the pla flag in the analysis settings file. This allows a user to enable or disable (default) the PLA component plapy.
Additionally, a secondary factor in the range [0, 1] can be specified from the command line with the argument -f when running plapy:
$ plapy -f 0.8 < gul_output.bin > plapy_output.bin
The secondary factor is applied to the deviation of the loss factor from 1. For example:
event_id | factor from model | relative factor from user | applied factor |
---|---|---|---|
1 | 1.10 | 0.8 | 1.08 |
2 | 1.20 | 0.8 | 1.16 |
3 | 1.00 | 0.8 | 1.00 |
4 | 0.90 | 0.8 | 0.92 |
Finally, an absolute, uniform, positive amplification/reduction factor can be specified from the command line with the argument -F
:
$ plapy -F 0.8 < gul_output.bin > plapy_output.bin
This factor is applied to all losses, thus loss factors from the model (those in lossfactors.bin
) are ignored. For example:
event_id | factor from model | uniform factor from user | applied factor |
---|---|---|---|
1 | 1.10 | 0.8 | 0.8 |
2 | 1.20 | 0.8 | 0.8 |
3 | 1.00 | 0.8 | 0.8 |
4 | 0.90 | 0.8 | 0.8 |
The absolute, uniform factor is incompatible with the relative, secondary factor. Therefore, if both are given by the user, a warning is logged and the secondary factor is ignored.
(PR #1405)
Fix: allow the CondTag column to be optional -Fix issue where CondTag was needed in the location file if it was present in the account file, forcing users to add an empty CondTag column.
(PR #1406)
Vulnerability file options can be selected using the dedicated field in analysis_settings -If "vulnerability_set" contains an identifier, the corresponding vulnerability file will be used.
(PR #1407)
Added tests for condition coverages 1-5 financial terms -Completed all units in validation/insurance_policy_coverages
Climate Change - Natural Hazard and Storm
- Python
Published by awsbuild over 1 year ago

OasisLMF - Release 1.28.4
1.28.4
OasisLMF Changelog -- #1292 - Parquet format summary info file
- #1382 - Change vulnerability weight data type from 32-bit integer to 32-bit float in gulmc
- #1381 - Converting exposure files to previous OED version before running model
- #140 - Implement OED peril fields
- #1394 - Net RI losses do not use -z in summarycalc
- #1398 - Allow disaggregation to be disabled
- #1399 - Fixed loading booleans from oasislmf.json
- #1347 - Add runtime user supplied secondary factor option to plapy
Climate Change - Natural Hazard and Storm
- Python
Published by awsbuild over 1 year ago

OasisLMF - Release 1.27.7
1.27.7
OasisLMF Changelog -- #1397 - Add output zeros flag to summarycalc for all reinsurance loss computes
- #1219 - Fix flaky checks in TestGetDataframe
- #1390 - Backport - Post analysis hook
- #1335 - Update CI - 1.27
OasisLMF Notes
(PR #1397)
Assign output zeros flag to summarycalc for all reinsurance loss computes -The ktools
component summarycalc
does not output zero loss events by default. These zero loss events are required when net loss is called in fmpy
. Currently, net loss is called in all reinsurance instances, so the -z
flag has been assigned to all executions of summarycalc
when computing reinsurance losses.
(PR #1327)
Flaky test failures -Fixed intermittent testing failures:
- Fixed NaN errors from utils/test_data.py (CI failure 9996230083)
- Removed deadline from test_lookup.py (CI failure 9996208518)
(PR #1390)
Implement post analysis hook - Backport 1.27.x -Model vendors can supply a custom Python module that will be run after the analysis has completed. This module will have access to the run directory, model data directory and analysis settings. It could for instance modify the output files, parse logs to produce user-friendly reports or generate plots.
The two new Oasis settings required to use this feature are similar to the ones used for the pre-analysis hook:
- post_analysis_module: Path to the Python module containing the class.
- post_analysis_class_name: Name of the class.
The class must have a constructor that takes kwargs model_data_dir
, model_run_dir
and analysis_settings_json
, plus a run
method with no arguments. For example:
class MyPostAnalysis:
    def __init__(self, model_data_dir=None, model_run_dir=None, analysis_settings_json=None):
        self.model_data_dir = model_data_dir
        self.model_run_dir = model_run_dir
        self.analysis_settings_json = analysis_settings_json

    def run(self):
        # do something
        pass
Climate Change - Natural Hazard and Storm
- Python
Published by awsbuild over 1 year ago

OasisLMF - Release 1.23.20
1.23.20
OasisLMF Changelog -- #1337 - Update CI - 1.23
- #1397 - Add output zeros flag to summarycalc for all reinsurance loss computes
OasisLMF Notes
(PR #1397)
Assign output zeros flag to summarycalc for all reinsurance loss computes -The ktools
component summarycalc
does not output zero loss events by default. These zero loss events are required when net loss is called in fmpy
. Currently, net loss is called in all reinsurance instances, so the -z
flag has been assigned to all executions of summarycalc
when computing reinsurance losses.
Climate Change - Natural Hazard and Storm
- Python
Published by awsbuild over 1 year ago

OasisLMF - Release 1.26.9
1.26.9
OasisLMF Changelog -- #1338 - Update CI - 1.26
- #1397 - Add output zeros flag to summarycalc for all reinsurance loss computes
OasisLMF Notes
(PR #1397)
Assign output zeros flag to summarycalc for all reinsurance loss computes -The ktools
component summarycalc
does not output zero loss events by default. These zero loss events are required when net loss is called in fmpy
. Currently, net loss is called in all reinsurance instances, so the -z
flag has been assigned to all executions of summarycalc
when computing reinsurance losses.
Climate Change - Natural Hazard and Storm
- Python
Published by awsbuild over 1 year ago

OasisLMF - Release 1.15.30
1.15.30
OasisLMF Changelog -- #1336 - Update CI - 1.15
- #1397 - Add output zeros flag to summarycalc for all reinsurance loss computes
OasisLMF Notes
(PR #1397)
Assign output zeros flag to summarycalc for all reinsurance loss computes -The ktools
component summarycalc
does not output zero loss events by default. These zero loss events are required when net loss is called in fmpy
. Currently, net loss is called in all reinsurance instances, so the -z
flag has been assigned to all executions of summarycalc
when computing reinsurance losses.
Climate Change - Natural Hazard and Storm
- Python
Published by awsbuild over 1 year ago

OasisLMF - Release 1.28.3
1.28.3
OasisLMF Changelog -- #1377 - Clean up 'runs' dir in repo
- #1378 - Support output of overall average period loss without standard deviation calculation
- #1366 - Update fm supported terms document
- #1347 - Add runtime user supplied secondary factor option to plapy
- #1317 - Add post-analysis hook
- #1372 - Incorrect TIV in the summary info files
OasisLMF Notes
(PR #1378)
Support output of overall average period loss without standard deviation calculation -The new ktools component aalcalcmeanonly
(see PR https://github.com/OasisLMF/ktools/pull/357) calculates the overall average period loss but does not include the standard deviation. As a result, it has a faster execution time and uses less memory than aalcalc
.
Support for executing this component as part of a model run has been introduced through the aalcalc_meanonly
(legacy output) and alt_meanonly
(ORD output) flags in the analysis settings file.
(PR #1367)
Financial terms supported document update -The document has been updated to reflect recently added financial fields that are supported, including:
- Currency fields
- Account level terms
In addition, a 'Version introduced' field has been included to identify the version of OasisLMF in which the field was first supported, if later than v1.15 LTS.
(PR #1369)
Add options to enable Post Loss Amplification and provide secondary and uniform factors -The requirement for an amplifications file generated by the MDK as a trigger for the execution of Post Loss Amplification (PLA) has been replaced with the pla flag in the analysis settings file. This allows a user to enable or disable (default) the PLA component plapy.
Additionally, a secondary factor in the range [0, 1] can be specified from the command line with the argument -f when running plapy:
$ plapy -f 0.8 < gul_output.bin > plapy_output.bin
The secondary factor is applied to the deviation of the loss factor from 1. For example:
event_id | factor from model | relative factor from user | applied factor |
---|---|---|---|
1 | 1.10 | 0.8 | 1.08 |
2 | 1.20 | 0.8 | 1.16 |
3 | 1.00 | 0.8 | 1.00 |
4 | 0.90 | 0.8 | 0.92 |
Finally, an absolute, uniform, positive amplification/reduction factor can be specified from the command line with the argument -F
:
$ plapy -F 0.8 < gul_output.bin > plapy_output.bin
This factor is applied to all losses, thus loss factors from the model (those in lossfactors.bin
) are ignored. For example:
event_id | factor from model | uniform factor from user | applied factor |
---|---|---|---|
1 | 1.10 | 0.8 | 0.8 |
2 | 1.20 | 0.8 | 0.8 |
3 | 1.00 | 0.8 | 0.8 |
4 | 0.90 | 0.8 | 0.8 |
The absolute, uniform factor is incompatible with the relative, secondary factor. Therefore, if both are given by the user, a warning is logged and the secondary factor is ignored.
(PR #1371)
Implement post analysis hook -Model vendors can supply a custom Python module that will be run after the analysis has completed. This module will have access to the run directory, model data directory and analysis settings. It could for instance modify the output files, parse logs to produce user-friendly reports or generate plots.
The two new Oasis settings required to use this feature are similar to the ones used for the pre-analysis hook:
- post_analysis_module: Path to the Python module containing the class.
- post_analysis_class_name: Name of the class.
The class must have a constructor that takes kwargs model_data_dir
, model_run_dir
and analysis_settings_json
, plus a run
method with no arguments. For example:
class MyPostAnalysis:
    def __init__(self, model_data_dir=None, model_run_dir=None, analysis_settings_json=None):
        self.model_data_dir = model_data_dir
        self.model_run_dir = model_run_dir
        self.analysis_settings_json = analysis_settings_json

    def run(self):
        # do something
        pass
(PR #1373)
Fix TIV calculation when NumberOfBuildings is > 1 in the location file -The TIV calculated in the output summaries was incorrect, as the granularity changed after the implementation of stochastic disaggregation (when NumberOfBuildings > 1).
Only 'loc_id' and 'coverage_type_id' were taken into account when detecting duplicates, leading to a lower TIV than expected.
With this change, 'building_id' and 'risk_id' are added to the summary_map, and building_id is included in the key used to detect duplicates when calculating the TIV.
Climate Change - Natural Hazard and Storm
- Python
Published by awsbuild over 1 year ago

OasisLMF - Release 1.28.2
1.28.2
OasisLMF Changelog -- #1344 - Release/1.28.1 (staging)
- #1326 - Update the KeyLookupInterface class to have access to the lookup_complex_config_json
- #140 - Implement OED peril fields
- #1322 - Step policies: Allow BI ground up loss through to gross losses
- #1249 - Discuss documentation strategy
- #1293 - Multiple footprint file options
- #1357 - fix permissions for docs deploy
- #1360 - Add docs about gulmc
- #1347 - Add runtime user supplied secondary factor option to plapy
- #1340 - collect_unused_df in il preparation
OasisLMF Notes
(PR #1345)
Add complex model config into model config if both present -If both a complex model config and a model config are present, add the json dict from the complex config into the model config as below:
config['complex_config_dir'] = complex_config_dir
config['complex_config'] = complex_config
(PR #1346)
Update all fm tests to use AA1 as the peril in all peril columns -Work is in progress to support peril columns such as LocPerilsCovered, LocPeril, etc. in oasislmf. This change switches all perils to AA1, as these are generic tests; more tests specific to perils covered will be added later with the feature.
Also improves the split/combine scripts used to add fm unit tests by adding support for reinsurance files.
(PR #1351)
Step policies: Allow BI ground up loss through to gross losses -https://github.com/OasisLMF/OasisLMF/issues/1322
(PR #1320)
Revamp the oasislmf package documentation -This PR fixes #1249 by revamping the oasislmf package documentation.
The complete documentation of the full Python API of oasislmf is automatically generated using sphinx-autoapi. There is no need to manually update the docs pages whenever the oasislmf package is updated: sphinx-autoapi dynamically finds the changes and generates the docs for the latest oasislmf version.
The documentation is built using the build-docs.yml GH action workflow on all PRs targeting main, and is built & deployed to the gh-pages branch for all commits on main.
(PR #1352)
Support multiple identifiers for footprint files -To enable the storage of footprints in multiple files rather than a single master file, optional identifiers in the form of footprint file suffixes are now supported. This is executed in a similar way to that currently in place to distinguish multiple events and event occurrences files. The footprint_set
model settings option in the analysis settings file can be set to the desired file suffix for the footprint files to be used. A symbolic link to the desired footprint set is created in the static/
directory within the model run directory. Footprint file priorities are identical to those set by modelpy
and gulmc
, which in order of descending priority are: parquet; zipped binary; binary; and csv.
(PR #1360)
Add extensive docs about gulmc -This PR adds extensive documentation about gulmc.
(PR #1369)
Add options to enable Post Loss Amplification and provide secondary and uniform factors -The requirement for an amplifications file generated by the MDK as a trigger for the execution of Post Loss Amplification (PLA) has been replaced with the pla flag in the analysis settings file. This allows a user to enable or disable (default) the PLA component plapy.
Additionally, a secondary factor in the range [0, 1] can be specified from the command line with the argument -f when running plapy:
$ plapy -f 0.8 < gul_output.bin > plapy_output.bin
The secondary factor is applied to the deviation of the loss factor from 1. For example:
event_id | factor from model | relative factor from user | applied factor |
---|---|---|---|
1 | 1.10 | 0.8 | 1.08 |
2 | 1.20 | 0.8 | 1.16 |
3 | 1.00 | 0.8 | 1.00 |
4 | 0.90 | 0.8 | 0.92 |
Finally, an absolute, uniform, positive amplification/reduction factor can be specified from the command line with the argument -F
:
$ plapy -F 0.8 < gul_output.bin > plapy_output.bin
This factor is applied to all losses, thus loss factors from the model (those in lossfactors.bin
) are ignored. For example:
event_id | factor from model | uniform factor from user | applied factor |
---|---|---|---|
1 | 1.10 | 0.8 | 0.8 |
2 | 1.20 | 0.8 | 0.8 |
3 | 1.00 | 0.8 | 0.8 |
4 | 0.90 | 0.8 | 0.8 |
The absolute, uniform factor is incompatible with the relative, secondary factor. Therefore, if both are given by the user, a warning is logged and the secondary factor is ignored.
(PR #1340)
In IL preparation, collect DataFrames that are no longer in use -In order to save a bit of memory, delete and garbage-collect the DataFrames that are not used anymore
Climate Change - Natural Hazard and Storm
- Python
Published by awsbuild over 1 year ago

OasisLMF - Release 1.28.2rc1
1.28.2rc1
OasisLMF Changelog -- #1344 - Release/1.28.1 (staging)
- #1326 - Update the KeyLookupInterface class to have access to the lookup_complex_config_json
- #140 - Implement OED peril fields
- #1349 - Fix removal of handlers to logger + give logfiles unique names
- #1322 - Step policies: Allow BI ground up loss through to gross losses
- #1293 - Multiple footprint file options
- #1249 - Discuss documentation strategy
- #1324 - Release/1.28.0
- #1357 - fix permissions for docs deploy
- #1360 - Add docs about gulmc
- #1334 - Update CI - 1.28
- #1347 - Add runtime user supplied secondary factor option to plapy
- #1340 - collect_unused_df in il preparation
- #1341 - Bug in latest platform2 release
OasisLMF Notes
(PR #1345)
Add complex model config into model config if both present -If both a complex model config and a model config are present, add the json dict from the complex config into the model config as below:
config['complex_config_dir'] = complex_config_dir
config['complex_config'] = complex_config
(PR #1346)
Update all fm tests to use AA1 as the peril in all peril columns -Work is in progress to support peril columns such as LocPerilsCovered, LocPeril, etc. in oasislmf. This change switches all perils to AA1, as these are generic tests; more tests specific to perils covered will be added later with the feature.
Also improves the split/combine scripts used to add fm unit tests by adding support for reinsurance files.
(PR #1349)
Fixed the removal of log handlers in logging redirect wrapper -- Log handlers were not correctly removed when exiting from log redirect
- Added log redirect to plapy
- Fixed open file leaks in testing
(PR #1351)
Step policies: Allow BI ground up loss through to gross losses -https://github.com/OasisLMF/OasisLMF/issues/1322
(PR #1352)
Support multiple identifiers for footprint files -To enable the storage of footprints in multiple files rather than a single master file, optional identifiers in the form of footprint file suffixes are now supported. This is executed in a similar way to that currently in place to distinguish multiple events and event occurrences files. The footprint_set
model settings option in the analysis settings file can be set to the desired file suffix for the footprint files to be used. A symbolic link to the desired footprint set is created in the static/
directory within the model run directory. Footprint file priorities are identical to those set by modelpy
and gulmc
, which in order of descending priority are: parquet; zipped binary; binary; and csv.
(PR #1320)
Revamp the oasislmf package documentation -This PR fixes #1249 by revamping the oasislmf package documentation.
The complete documentation of the full Python API of oasislmf is automatically generated using sphinx-autoapi. There is no need to manually update the docs pages whenever the oasislmf package is updated: sphinx-autoapi dynamically finds the changes and generates the docs for the latest oasislmf version.
The documentation is built using the build-docs.yml GH action workflow on all PRs targeting main, and is built & deployed to the gh-pages branch for all commits on main.
(PR #1360)
Add extensive docs about gulmc -This PR adds extensive documentation about gulmc.
(PR #1369)
Add options to enable Post Loss Amplification and provide secondary and uniform factors -The requirement for an amplifications file generated by the MDK as a trigger for the execution of Post Loss Amplification (PLA) has been replaced with the pla flag in the analysis settings file. This allows a user to enable or disable (default) the PLA component plapy.
Additionally, a secondary factor in the range [0, 1] can be specified from the command line with the argument -f when running plapy:
$ plapy -f 0.8 < gul_output.bin > plapy_output.bin
The secondary factor is applied to the deviation of the loss factor from 1. For example:
event_id | factor from model | relative factor from user | applied factor |
---|---|---|---|
1 | 1.10 | 0.8 | 1.08 |
2 | 1.20 | 0.8 | 1.16 |
3 | 1.00 | 0.8 | 1.00 |
4 | 0.90 | 0.8 | 0.92 |
Finally, an absolute, uniform, positive amplification/reduction factor can be specified from the command line with the argument -F
:
$ plapy -F 0.8 < gul_output.bin > plapy_output.bin
This factor is applied to all losses, thus loss factors from the model (those in lossfactors.bin
) are ignored. For example:
event_id | factor from model | uniform factor from user | applied factor |
---|---|---|---|
1 | 1.10 | 0.8 | 0.8 |
2 | 1.20 | 0.8 | 0.8 |
3 | 1.00 | 0.8 | 0.8 |
4 | 0.90 | 0.8 | 0.8 |
The absolute, uniform factor is incompatible with the relative, secondary factor. Therefore, if both are given by the user, a warning is logged and the secondary factor is ignored.
(PR #1340)
In IL preparation, collect DataFrames that are no longer in use -In order to save a bit of memory, delete and garbage-collect the DataFrames that are not used anymore
(PR #1342)
Redefine key_columns as a local variable -Making changes to the global variable key_columns, which is a list of location file columns used in the lookup process, can lead to errors. As the variable is only used in the method builtin.py::Lookup::process_locations, it can be defined locally in that method instead.
Climate Change - Natural Hazard and Storm
- Python
Published by awsbuild over 1 year ago

OasisLMF - Release 1.28.1
1.28.1
OasisLMF Changelog -- #1341 - Bug in latest platform2 release
- #1324 - Release/1.28.0
- #1349 - Fix removal of handlers to logger + give logfiles unique names
- #1334 - Update CI - 1.28
OasisLMF Notes
(PR #1342)
Redefine key_columns as a local variable -Making changes to the global variable key_columns, which is a list of location file columns used in the lookup process, can lead to errors. As the variable is only used in the method builtin.py::Lookup::process_locations, it can be defined locally in that method instead.
(PR #1349)
Fixed the removal of log handlers in logging redirect wrapper -- Log handlers were not correctly removed when exiting from log redirect
- Added log redirect to plapy
- Fixed open file leaks in testing
Climate Change - Natural Hazard and Storm
- Python
Published by awsbuild over 1 year ago

OasisLMF - Release 1.28.1rc1
1.28.1rc1
OasisLMF Changelog -- #1341 - Bug in latest platform2 release
- #1349 - Fix removal of handlers to logger + give logfiles unique names
- #1334 - Update CI - 1.28
OasisLMF Notes
(PR #1342)
Redefine key_columns as a local variable -Making changes to the global variable key_columns, which is a list of location file columns used in the lookup process, can lead to errors. As the variable is only used in the method builtin.py::Lookup::process_locations, it can be defined locally in that method instead.
(PR #1349)
Fixed the removal of log handlers in logging redirect wrapper -- Log handlers were not correctly removed when exiting from log redirect
- Added log redirect to plapy
- Fixed open file leaks in testing
Climate Change - Natural Hazard and Storm
- Python
Published by awsbuild over 1 year ago

OasisLMF - Release 1.28.0
1.28.0
OasisLMF Changelog -- #1280 - Drop py3.7 in testing and add py3.11
- #1134 - nan output in dummy model generated vulnerability file when intensity sparseness is low
- #1282 - Adding tarfile member sanitization to extractall()
- #1286 - Improve code coverage accuracy
- #1288 - Support for pandas 2
- #1289 - Fix/fillna on str
- #1295 - Add platform client unit tests
- #1291 - remove obsolete fm compute
- #1294 - Removed obsolete Cookiecutter from oasislmf
- #1296 - Pin ods-tools package for new fillna function
- #1298 - Add testing for computation funcs
- #1214 - Undefined behaviour when no successes are passed to write_oasis_keys_file
- #1306 - lookup: remove extra column after failed combine step
- #1307 - Update release section on Readme
- #1156, #1180, #1151 - [gulmc] implement hazard correlation
- #1310 - log wrapper, check log level before running input log
- #902 - Support for monetary / absolute damage functions
- #1315 - Bug fix for setcwd context manager
- #1300 - Assignment of loc_id and idx is not unique when using an EPA to split locations across more rows
- #1316 - Enhance pre-analysis hook
- #1321 - Cleanup/fix fm validation cr
- #1219 - Fix flaky checks in TestGetDataframe
- #1142 - Post Loss Amplification
- #1329 - Fix/ci skip ods build manual trigger
- #1330 - Set Ktools to version 3.10.0
- #1204 - Release/1.27.0
- #1332 - fix correlation issue
- #1207 - (FM) CondClass 1 as second priority condition doesn't work
- #1211 - OSError: [Errno 24] Too many open files in gulmc test
- #1218 - Fix missing default collumn for RI
- #1220 - Fix mapping of vuln_idx to vuln_i and implement eff_vuln_cdf as a dynamic array to work with large footprints
- #1222 - add option to have custom oed schema
- #1230 - Redirect warnings from pytools
- #1231 - Fix expecteted output with Extra TIV cols from ods-tools
- #1232 - Moved Settings JSON validation to ods-tools
- #1233 - oasislmf exposure run reports missing location file when account missing
- #141 - Implement account level financial structures
- #1236 - Switch changelog builder from "build" repo to "OasisPlatform"
- #1237 - Add *.npy to gitignore and clean files from validation
- #1238 - Set oasislmf to version 1.27.2
- #1245 - Add the possibility to have both policy coverage and policy PD
- #28 - Fix/genbash
- #1244 - Use console entrypoint to define and install the oasislmf binary
- #1251 - Error caused by pandas 2.0.0
- #1247 - OED/oasislmf version compatibility matrix #oasislmf
- #1253 - pandas 2.0.0 error using "oed_fields" in analysis settings
- #1257 - Fix summary levels file
- #1130 - Remove check for IL + RI files from the run model cmd
- #1260 - Use low_memory=False in get_dataframe
- #1123 - Stochastic disaggregation 4 & 6 File preparation for disaggregated locations
- #1259 - Duplicate summary_ids in outputs
- #1221 - Enable simple way to specify hierarchal key lookup in lookup_config.json
- #1267 - Numba 0.57 breaks fmpy
- #1270 - Add generate and run to rest client
- #1272 - Update numpy pin
- #1277 - Fix invalid columns in FM tests
OasisLMF Notes
(PR #1153)
Fix generation of dummy model footprint and vulnerability files in cases where there are no impacted bins -When generating dummy model files, the vulnerability sparseness is the percentage of bins impacted for a vulnerability at an intensity level, and the intensity sparseness is the percentage of bins impacted for an event and areaperil. Setting these to relatively low values can result in cases where an areaperil contains no impacted bins, which can cause division-by-zero warnings during normalisation.
Entries with no impacted intensity bins have been dropped from the generated vulnerability file. Additionally, in cases where there are no vulnerable damage bins, the first damage bin probability is set to 1 as this is always generated as the zero loss bin.
(PR #1282)
Patch CVE-2007-4559 bug in the Python tarfile -- External PR patch to fix CVE-2007-4559; see https://github.com/OasisLMF/OasisLMF/pull/1183 for details
(PR #1286)
Improve code coverage accuracy -- Run the CI unit test workflow by installing the oasislmf package in editable mode, so code coverage will pick up execution checks for pytools (for example test_fm.py and test_fmpy.py)
- Fixed JIT functions not picked up in code coverage
- Removed the utils/forex.py file as the same functionality has been moved over to the ods-tools package.
- Added --gul-rtol and --gul-atol override options as testing flags; this is needed for a larger difference in expected tolerance between running with and without JIT.
(PR #1287)
Fix rtree lookup builtin function so it is compatible with pandas 2 -- builtin rtree compatibility with pandas 2
- add test for builtin lookup
(PR #1289)
Support setting a default value for all kinds of blank values -Replace fillna with the ods_tools fill_empty in order to set defaults for all kinds of blank values, including empty strings.
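The motivation, sketched with plain pandas: fillna only targets NaN/None, so empty strings slip through, which is why a fill_empty-style helper is needed. Illustrative only:
import pandas as pd

df = pd.DataFrame({"LocNumber": ["1", None, ""]})
# fillna replaces only the None entry; the empty string survives untouched
print(df["LocNumber"].fillna("default").tolist())  # ['1', 'default', '']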
(PR #1290)
Add platform client testing -- Added checks for the Platform API session manager
- Added checks for the Platform API client
- Added fixes and tidy up to API client and session manager code
(PR #1294)
Removed Cookiecutter from oasislmf -The template Cookiecutter repos have been removed, so the code in oasislmf is now redundant.
Removed the dead code
(PR #1296)
Pin ods-tools package for new fillna function -- The next ods-tools release is moving to 3.1.x for backporting; pin the package to that version and up on develop
Warning: NOT for backporting
(PR #1298)
Add testing for computation funcs -- Added unit tests for main interface classes, important to check since platform2 calls these directly
- Added a check to make sure the lookup_complex_config_json input is validated correctly using the analysis settings
- Minor fixes and clean up to computation functions.
(PR #1305)
Builtin KeyServer not failing when there are no key successes in a batch -The KeyServer will no longer raise an error when a batch of locations contains no successful keys.
(PR #1306)
lookup: remove extra column after failed combine step -In the builtin lookup, for locations where a combine step is not valid, some extra columns could be added on top of the id one.
With this change only the original columns are passed to the next combine step
(PR #1181)
Introducing support for hazard correlation -This PR introduces the capability to handle hazard correlation in gulmc. Hazard correlation parameters are defined analogously to damage correlation parameters.
Before entering into the details, these are the breaking changes vs the past:
- group ids are now always hashed. This ensures results are fully reproducible; therefore the hashed_group_id argument has been dropped from the relevant functions.
- from this version, oasislmf model run will fail if an older model settings JSON file using group_fields is used vs the new schema that uses damage_group_fields and hazard_group_fields as defined in the data_settings key. See more details below.
- the command line interface argument --group_id_cols for oasislmf model run has been renamed --damage_group_id_cols. A new argument --hazard_group_id_cols has been introduced to specify the columns to use for defining group ids for the hazard sampling (see the illustrative command after this list). They respectively default to:
DAMAGE_GROUP_ID_COLS = ["PortNumber", "AccNumber", "LocNumber"]
HAZARD_GROUP_ID_COLS = ["PortNumber", "AccNumber", "LocNumber"]
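An illustrative command line using the renamed arguments; the exact syntax for passing multiple columns should be checked against oasislmf --help, so treat this as a sketch:
oasislmf model run --damage_group_id_cols PortNumber AccNumber LocNumber --hazard_group_id_cols PortNumber AccNumber LocNumber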
Update to the model settings JSON schema
The oasislmf model settings JSON schema is updated to support the new feature with a breaking change. Previous correlation_settings
and data_settings
entries in the model settings such as:
"correlation_settings": [
{"peril_correlation_group": 1, "correlation_value": "0.7"},
],
"data_settings": {
"group_fields": ["PortNumber", "AccNumber", "LocNumber"],
},
are not supported anymore. The correlation_settings
must contain a new key hazard_correlation_value
and the correlation_value
key is renamed to damage_correlation_value
:
"correlation_settings": [
{"peril_correlation_group": 1, "damage_correlation_value": "0.7", "hazard_correlation_value": "0.0"},
{"peril_correlation_group": 2, "damage_correlation_value": "0.5", "hazard_correlation_value": "0.0"}
],
Likewise, the data_settings entries are renamed from group_fields to damage_group_fields, and hazard_group_fields is now supported as an optional key:
"data_settings": {
"damage_group_fields": ["PortNumber", "AccNumber", "LocNumber"],
"hazard_group_fields": ["PortNumber", "AccNumber", "LocNumber"]
},
Correlations updated schema
With this PR:
- if correlation_settings is not present, damage_correlation_value and hazard_correlation_value are assumed to be zero; peril correlation groups (if defined in supported perils) are ignored and no errors are raised. Example of valid model settings:
"lookup_settings": {
    "supported_perils": [
        {"id": "WSS", "desc": "Single Peril: Storm Surge", "peril_correlation_group": 1},
        {"id": "WTC", "desc": "Single Peril: Tropical Cyclone", "peril_correlation_group": 1},
        {"id": "WW1", "desc": "Group Peril: Windstorm with storm surge"},
        {"id": "WW2", "desc": "Group Peril: Windstorm w/o storm surge"}
    ]
},
- if correlation_settings is present, it needs to contain damage_correlation_value and hazard_correlation_value for each peril_correlation_group entry.
Example of a valid model settings:
"lookup_settings": {
    "supported_perils": [
        {"id": "WSS", "desc": "Single Peril: Storm Surge", "peril_correlation_group": 1},
        {"id": "WTC", "desc": "Single Peril: Tropical Cyclone", "peril_correlation_group": 1},
        {"id": "WW1", "desc": "Group Peril: Windstorm with storm surge"},
        {"id": "WW2", "desc": "Group Peril: Windstorm w/o storm surge"}
    ]
},
"correlation_settings": [
    {"peril_correlation_group": 1, "damage_correlation_value": "0.7", "hazard_correlation_value": "0.3"}
],
Example of an invalid model settings that raises a ValueError:
"lookup_settings": {
    "supported_perils": [
        {"id": "WSS", "desc": "Single Peril: Storm Surge", "peril_correlation_group": 1},
        {"id": "WTC", "desc": "Single Peril: Tropical Cyclone", "peril_correlation_group": 1},
        {"id": "WW1", "desc": "Group Peril: Windstorm with storm surge"},
        {"id": "WW2", "desc": "Group Peril: Windstorm w/o storm surge"}
    ]
},
"correlation_settings": [
    {"peril_correlation_group": 1}
],
Correlations files updated schema csv <-> binary conversion tools
The correlations.csv and .bin files are modified: they now contain two additional columns: hazard_group_id
and hazard_correlation_value
. They also feature a renamed column from correlation_value
to damage_correlation_value
.
This PR introduces conversion tools for the correlations files: correlationtobin
to convert a correlations file from csv to bin:
correlationtobin correlations.csv -o correlations.bin
and correlationtocsv
to convert a correlations.bin
file to csv
. If -o <filename>
is specified, it writes the csv table to file:
correlationtocsv correlations.bin -o correlations.csv
If no -o <filename>
is specified, it prints the csv table to stdout:
correlationtocsv correlations.bin
item_id,peril_correlation_group,damage_correlation_value,hazard_correlation_value
1,1,0.4,0.0
2,1,0.4,0.0
3,1,0.4,0.0
4,1,0.4,0.0
5,1,0.4,0.0
6,1,0.4,0.0
7,1,0.4,0.0
8,1,0.4,0.0
9,1,0.4,0.0
10,2,0.7,0.9
11,2,0.7,0.9
12,2,0.7,0.9
13,2,0.7,0.9
14,2,0.7,0.9
15,2,0.7,0.9
16,2,0.7,0.9
17,2,0.7,0.9
18,2,0.7,0.9
19,2,0.7,0.9
20,2,0.7,0.9
(PR #1310)
Fix wrapper computing debug-level operations in non-debug mode -Preparing the data needed to log a function's input arguments can be very costly in time and memory, and it was performed before the debug call was ignored.
With this change we preemptively check the log level and perform this operation only if relevant.
For big portfolios the old behaviour could lead to a massive increase in preparation step time and memory.
Note: a performance impact can still occur when running with --verbose or DEBUG logging.
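The standard-library pattern this describes, as a minimal sketch:
import logging

logger = logging.getLogger("oasislmf")

def prepare_inputs(args):
    # Guard the expensive argument formatting so it only runs when DEBUG is enabled
    if logger.isEnabledFor(logging.DEBUG):
        logger.debug("input arguments: %r", args)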
(PR #1309)
Add support for absolute damage (vulnerability) function -This PR introduces a new feature that allows gulmc
to support absolute damage (i.e., vulnerability) functions.
In its current implementation, the damage bin dicts file containing the definition of the damage bins for an entire model can contain both relative and absolute damage bins, e.g.:
"bin_index","bin_from","bin_to","interpolation"
1,0.000000,0.000000,0.000000
2,0.000000,0.100000,0.050000
3,0.100000,0.200000,0.150000
4,0.200000,0.300000,0.250000
5,0.300000,0.400000,0.350000
6,0.400000,0.500000,0.450000
7,0.500000,0.600000,0.550000
8,0.600000,0.700000,0.650000
9,0.700000,0.800000,0.750000
10,0.800000,0.900000,0.850000
11,0.900000,1.000000,0.950000
12,1.000000,1.000000,1.000000
13,1.000000,2.000000,1.500000
14,2.000000,3.000000,2.500000
15,3.000000,30.00000,16.50000
where bins 1 to 12 represent a relative damage, and bins 13 to 15 represent an absolute damage.
For random losses falling in absolute damage bins that have a non-zero width (e.g., bins 13, 14, and 15), the loss is interpolated using the same linear or parabolic interpolation algorithm already used for the relative damage bins.
IMPORTANT: vulnerability functions are required to be either entirely absolute or entirely relative. Mixed vulnerability functions defined by a mixture of absolute and relative damage bins are not supported. Currently there is no automatic pre-run check that verifies that all vulnerability functions comply with this requirement; the user must check this manually.
(PR #1315)
Bug fix for setcwd context manager -If an exception happened within the path context manager setcwd, the original path was not restored.
This then broke some of the other tests which rely on relative paths.
(PR #1318)
Regenerate ids after ExposurePreAnalysis -Make sure internal ids in the location file are regenerated after the ExposurePreAnalysis step in case the number of locations has changed,
for example after a disaggregation or a filtering.
(PR #1319)
Enhance pre-analysis hook with added kwargs -Added analysis_settings
, model_data_dir
and input_dir
arguments to pre-analysis hook.
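A hypothetical hook accepting the new kwargs, mirroring the post-analysis hook shape shown earlier (the class and attribute names are illustrative):
class MyExposurePreAnalysis:
    def __init__(self, analysis_settings=None, model_data_dir=None, input_dir=None, **kwargs):
        self.analysis_settings = analysis_settings
        self.model_data_dir = model_data_dir
        self.input_dir = input_dir

    def run(self):
        # inspect or modify the exposure inputs here
        pass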
(PR #1321)
Make validation oed files comply with oed conditional requirements -Add conditionally required columns to oed validation files
(PR #1327)
Flaky test failures -Fixed intermittent testing failures:
- Fixed NaN errors from utils/test_data.py (CI failure 9996230083)
- Removed deadline from test_lookup.py (CI failure 9996208518)
(PR #1328)
Implement Post Loss Amplification -Major events can give rise to inflated costs as a result of the shortage of labour, materials and other factors. Conversely, in some cases the costs may be lower as the main expenses may be shared among the sites that are hit in the same area. To account for this, the ground up losses from gulpy
are multiplied by post loss amplifications by a new component plapy
.
plapy
requires the files static/lossfactors.bin
and input/amplifications.bin
to assign loss factors to event ID-item ID pairs from gulpy
. More details on these files are available in the ktools
PR https://github.com/OasisLMF/ktools/pull/351. Losses are then multiplied by their corresponding factors. Loss factors that are not found in the loss factors file are assumed to be 1. The output is identical to that of gulpy
: event ID, item ID, sample ID (sidx); and loss.
The file static/lossfactors.bin
is supplied by the model provider, and maps event ID-amplification ID pairs to loss factors. The file input/amplifications.bin
is generated from the keys file. A strategy to assign amplification IDs to fields in the source locations file can be supplied by the model provider. If present, amplification IDs are assigned to keys. If there are no amplification IDs, no amplifications file is generated and PLA is not run.
If input/amplifications.bin
is present, the Ground Up Loss (GUL) output is piped through plapy
.
(PR #1329)
Fixed skipping ODS-tools in CI -When the manual actions job is triggered, the ods-tools build was not skipped when ods_branch is blank
(PR #1332)
Fix issue with mismatched correlation groups -In gulmc, during the init phase, only the correlation groups that are present create lines in haz_eps_ij;
however, at run time it is indexed directly by the correlation group id.
If those don't align, it can create mismatches or errors.
(PR #1209)
Fix (FM) CondClass 1 as second priority condition -Have CondClass exclusion work even if it is not the first priority condition
(PR #1212)
Fix footprint open file leakage in gulmc and getmodel -Closure of the footprint.bin and footprint.bin.z files was not handled properly in oasislmf/pytools/getmodel/footprint.py.
This code is used in both gulmc and getmodel and was keeping the file open until the process finished.
(PR #1220)
Bugfix in gulmc to deal with vulnerability tables with missing ids and with large footprint files -This PR fixes a bug in gulmc which was causing it not to work when vulnerability functions are missing, so that their numbering is non-contiguous, e.g., 3, 5, 200.
This PR also implements dynamic array storage for the effective vulnerability cdf, which allows processing large footprint files.
(PR #1222)
Add option oed_schema_info to specify a custom oed schema -ods_tools provides the ability to specify your own oed schema instead of using the default one.
This option can be used, for example, to change default values or to add new columns or codes.
To create your own oed_schema from the original one:
- copy OpenExposureData_Spec.xlsx
- make your modifications
- convert it to an oed json schema using the script extract_spec.py
python path_to/extract_spec.py json --source-excel-path path_to/OpenExposureData_Spec.xlsx --output-json-path path_to/custom_oed_schema.json
then add the path to the config or option when calling oasislmf using the name oed_schema_info.
example in the config for oasislmf model:
"oed_schema_info": "path_to/custom_oed_schema.json",
(PR #1229)
Redirect pytool warning messages -Added a decorator to pytool functions that redirects all non-error logging to a file.
@redirect_logging(exec_name='my_script', log_dir='./logs', log_level=logging.DEBUG)
def pytool_run_function():
    ...
This creates a new file per process id, in the same style as the ktools processes
❯ oasislmf model run --ktools-num-processes 2
..
❯ ls runs/losses-20230301142412/log
aalcalc_119392.log eltcalc_119099.log eve_119119.log gulpy_119120.log modelpy_119118.log summarycalc_119105.log
aalcalc_119394.log eltcalc_119100.log fmpy_119124.log gulpy_119123.log modelpy_119121.log summarycalc_119106.log
aalcalc_119396.log eltcalc_119107.log fmpy_119127.log leccalc_119393.log stderror.err summarycalc_119113.log
eltcalc_119091.log eltcalc_119108.log fmpy_119128.log leccalc_119395.log summarycalc_119097.log summarycalc_119114.log
eltcalc_119092.log eve_119117.log fmpy_119130.log leccalc_119397.log summarycalc_119098.log
Each PID file lists the start and end of the pytool process
❯ cat fmpy_119127.log
2023-03-01 14:24:14,084 - oasislmf - INFO - {'allocation_rule': 2, 'net_loss': False, 'static_path': 'input', 'files_in': None, 'files_out': None, 'low_memory': False, 'sort_output': False, 'storage_method': 'sparse', 'step_policies': False}
2023-03-01 14:24:14,084 - oasislmf - INFO - starting process
2023-03-01 14:24:16,040 - oasislmf - INFO - finishing process
Update the bash error checking to test for missing processes using these files. For example if one fmpy
process was silently killed it will have starting process
but no finishing process
. This will get flagged as an execution error.
❯ ./run_ktools.sh
[OK] eve
[OK] summarycalc
[OK] eltcalc
[OK] aalcalc
[OK] leccalc
[OK] modelpy
[OK] gulpy
[ERROR] fmpy - 1 processes lost
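The check can be approximated in a few lines of Python; a hedged sketch of the idea, not the actual run_ktools.sh logic:
import glob

def lost_processes(log_dir, component):
    # Count per-PID log files (e.g. fmpy_119127.log) that recorded a start
    # but no finish: those processes were lost mid-run.
    lost = 0
    for path in glob.glob(f"{log_dir}/{component}_*.log"):
        with open(path) as f:
            text = f.read()
        if "starting process" in text and "finishing process" not in text:
            lost += 1
    return lost

# print(lost_processes("runs/losses-20230301142412/log", "fmpy"))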
(PR #1231)
Fix Unit tests for ods-tools>=3.0.2 -With this change to ods-tools (https://github.com/OasisLMF/ODS_OpenExposureData/pull/133), missing required columns which allow blanks are added to processed location files. This causes the tests in tests/model_preparation/test_exposure_pre_analysis.py to fail.
Fixed by adding the extra TIV columns to the expected output. See GitHub actions for an example error.
(PR #1232)
Moved Settings JSON validation to ods-tools -Deleted the JSON specification from Oasislmf and replaced the get_model_settings and get_analysis_settings calls with equivalents from https://github.com/OasisLMF/ODS_Tools/pull/1
Expanded error message in oasislmf exposure run for missing/misspelled account file names. Previously the error message suggested the location file was missing, which is misleading
(PR #1235)
Support Account terms -Support the terms AccDed6All, AccLimit6All, AccMinDed6All, AccMaxDed6All, AccDedType6All, AccLimitType6All from the OED AccAll level.
At Account level, policies are applied to the sum of all layers and then back-allocated to the layers and to the items.
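For intuition, a minimal numeric sketch of the back-allocation described above (illustrative only; the real FM engine handles deductibles, min/max types and more):
def back_allocate(layer_losses, acc_limit):
    # Apply an account-level limit to the summed layer losses, then
    # back-allocate the capped total to each layer pro rata.
    total = sum(layer_losses)
    if total == 0:
        return [0.0] * len(layer_losses)
    capped = min(total, acc_limit)
    return [loss * capped / total for loss in layer_losses]

# Two layers losing 60 and 40 under an account limit of 80 -> [48.0, 32.0]
print(back_allocate([60.0, 40.0], 80.0))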
(PR #1236)
Switch changelog builder from "build" repo to "OasisPlatform" -The repository OasisLMF/build is planned for removal, switch the Changelog script to run from OasisPlatform/develop/scripts/update-changelog.py instead of build
(PR #1237)
Add *.npy to gitignore and clean files from validation -Repository cleanup: the validation testing generates intermediate *.npy files which don't need to be stored.
These get accidentally committed from time to time. Removed the existing files and added a git ignore rule.
(PR #1242)
move physical damage policy term from 'policy coverage' to 'policy pd' level -Physical damage policies were embedded into level 7, making it impossible to have policy coverage for both individual coverage and physical damage aggregated.
Physical damage will now be in the "correct" level policy pd
(PR #1252)
Make oasislmf compatible with pandas 2.0.0 -We introduce a minor fix to get oasislmf to work with pandas 2.0.0, which was released in April 2023.
(PR #1255)
Provide version compatibility summary with ods_tools -
(PR #1257)
Fixed summary levels file not detecting OED fields -Fix for issue https://github.com/OasisLMF/OasisLMF/issues/1241
(PR #1130)
Remove redundant IL+RI check -With the addition of https://github.com/OasisLMF/OasisLMF/pull/1112 there are two input file checks: one which can be toggled on/off via the flag --check-missing-inputs and another which is hard coded in the oasislmf run model command.
Removed the check from oasislmf run model, which is a 'compound' command of [generate-files + generate-losses]
(PR #1260)
Fix issue where DataFrame is loaded with mixed types -- Loading a dataframe in 'chunks' can cause a column to have mixed dtypes, causing #1259
(PR #1261)
Stochastic Disaggregation for items and fm files -Use NumberOfBuildings from the location file to generate the expanded items file.
Use the IsAggregate flag value from the location file to generate fm files.
Each disaggregated location has the same areaperil / vulnerability attributes as the parent coverage.
A new field is needed in gul_summary_map and fm_summary_map to link disaggregated locations to the original location (disagg_id).
TIV, deductibles and limits are split equally, as in the sketch below.
The definition of site for the application of site terms depends on the value of IsAggregate:
- where IsAggregate = 1, site is the disaggregated location
- where IsAggregate = 0, site is the non-disaggregated location
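A minimal sketch of the equal-split rule (hypothetical helper, not the oasislmf implementation):
def disaggregate(loc_id, tiv, number_of_buildings):
    # Expand one location into NumberOfBuildings item rows, splitting TIV
    # (and, analogously, deductibles and limits) equally; disagg_id links
    # each expanded row back to the parent location.
    share = tiv / number_of_buildings
    return [
        {"disagg_id": loc_id, "building": i + 1, "tiv": share}
        for i in range(number_of_buildings)
    ]

print(disaggregate(loc_id=1, tiv=300_000.0, number_of_buildings=3))
# -> three rows of 100,000 each, all pointing back to location 1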
(PR #1262)
specify dtype for useful summary column reading -During the loss computation preparation step, we read the summary files using a simple pandas.read_csv.
This change provides the read with the dtype of each needed column, improving read speed and avoiding potential wrong data type issues.
We also remove the case-lowering of the column names, as the columns are expected to have the proper case in the files and need to match properly with the other files.
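In pandas terms the change amounts to passing an explicit dtype mapping; a hedged sketch, with assumed column names:
import pandas as pd

# Column names and dtypes here are illustrative assumptions, not the exact
# mapping used by oasislmf; the point is to skip dtype inference entirely.
summary_dtypes = {
    "loc_id": "int32",
    "item_id": "int32",
    "coverage_id": "int32",
    "PortNumber": "str",
}
df = pd.read_csv("gul_summary_map.csv", dtype=summary_dtypes)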
(PR #1265)
add combine functionality to lookup builtin function -add a function that combines several strategies, each trying to achieve the same purpose by different means, into one.
For example, finding the correct area_peril_id for a location with one method using (latitude, longitude) and one using postcode.
Each strategy is applied sequentially to the locations that still have OASIS_UNKNOWN_ID in their id_columns after the preceding strategy.
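Conceptually the combine step looks like this hedged sketch (names are illustrative, not the built-in lookup API):
OASIS_UNKNOWN_ID = -1  # placeholder value; the real constant lives in oasislmf

def combine(locations, strategies, id_column="area_peril_id"):
    # Apply each lookup strategy in order; a strategy only touches rows whose
    # id is still unknown after the previous strategies ran.
    for strategy in strategies:  # e.g. [latlon_lookup, postcode_lookup]
        for loc in locations:
            if loc[id_column] == OASIS_UNKNOWN_ID:
                loc[id_column] = strategy(loc)
    return locations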
(PR #1269)
fix for numba 0.57 warning message -Make sure all jit functions are explicitly set with nopython=True
(PR #1270)
Added generate and run to OasisAPI client -- Extend API client to work with https://github.com/OasisLMF/OasisPlatform/pull/843
(PR #1272)
Update Numpy pin for oasislmf package requirements -- Numba now supports numpy up to 1.24
Climate Change - Natural Hazard and Storm
- Python
Published by awsbuild almost 2 years ago

OasisLMF - Release 1.27.5
1.27.5
OasisLMF Changelog -- #1214 - Undefined behaviour when no successes are passed to write_oasis_keys_file
- #1306 - lookup: remove extra column after failed combine step
- #1310 - log wrapper, check log level before running input log
OasisLMF Notes
(PR #1305)
builtin KeyServer not failing when no keys succeed in a batch -KeyServer will no longer raise an error when a batch of locations contains no successful keys.
(PR #1306)
lookup: remove extra column after failed combine step -In the builtin lookup, for locations where a combine step is not valid, some extra columns could be added on top of the id one.
With this change only the original columns are passed to the next combine step.
(PR #1310)
fix wrapper computing debug level operation in non debug mode -Preparing the data to log a function's input arguments can be very costly in time and memory, and it was performed before the debug call was discarded.
With this change we preemptively check the log level and perform this operation only if relevant.
For big portfolios the old behaviour could lead to a massive increase in preparation step time and memory.
Note: a performance impact can still occur when running with --verbose or DEBUG logging.
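The guard boils down to logging's standard isEnabledFor check; a minimal sketch of the pattern:
import logging

logger = logging.getLogger("oasislmf")

def expensive_repr(args):
    # stands in for serialising a large portfolio's input arguments
    return repr(args)

def log_inputs(args):
    # The cheap isEnabledFor check runs first, so the costly payload is only
    # built when DEBUG logging is actually switched on.
    if logger.isEnabledFor(logging.DEBUG):
        logger.debug("inputs: %s", expensive_repr(args))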
Climate Change - Natural Hazard and Storm
- Python
Published by awsbuild almost 2 years ago

OasisLMF - Release 1.26.8
1.26.8
OasisLMF Changelog -- #1310 - log wrapper, check log level before running input log
OasisLMF Notes
(PR #1310)
fix wrapper computing debug level operation in non debug mode -Preparing the data to log a function's input arguments can be very costly in time and memory, and it was performed before the debug call was discarded.
With this change we preemptively check the log level and perform this operation only if relevant.
For big portfolios the old behaviour could lead to a massive increase in preparation step time and memory.
Note: a performance impact can still occur when running with --verbose or DEBUG logging.
Climate Change - Natural Hazard and Storm
- Python
Published by awsbuild almost 2 years ago

OasisLMF - Release 1.23.18
1.23.18
OasisLMF Changelog -- #1310 - log wrapper, check log level before running input log
OasisLMF Notes
(PR #1310)
fix wrapper computing debug level operation in non debug mode -Preparing the data to log a function's input arguments can be very costly in time and memory, and it was performed before the debug call was discarded.
With this change we preemptively check the log level and perform this operation only if relevant.
For big portfolios the old behaviour could lead to a massive increase in preparation step time and memory.
Note: a performance impact can still occur when running with --verbose or DEBUG logging.
Climate Change - Natural Hazard and Storm
- Python
Published by awsbuild almost 2 years ago

OasisLMF - Release 1.15.29
1.15.29
OasisLMF Changelog -- #1310 - log wrapper, check log level before running input log
OasisLMF Notes
(PR #1310)
fix wrapper computing debug level operation in non debug mode -Preparing the data to log a function's input arguments can be very costly in time and memory, and it was performed before the debug call was discarded.
With this change we preemptively check the log level and perform this operation only if relevant.
For big portfolios the old behaviour could lead to a massive increase in preparation step time and memory.
Note: a performance impact can still occur when running with --verbose or DEBUG logging.
Climate Change - Natural Hazard and Storm
- Python
Published by awsbuild almost 2 years ago

OasisLMF - Release 1.27.4
1.27.4
OasisLMF Changelog -- #1282 - Adding tarfile member sanitization to extractall()
- #1288 - Support for pandas 2
- #1221 - Enable simple way to specify hierarchical key lookup in lookup_config.json
- #1301 - Pin ods-tools to less than 3.1.x
- #1302 - Set ktools to 3.9.8
OasisLMF Notes
(PR #1282)
Patch CVE-2007-4559 bug in the Python tarfile -- External PR Patch to fix CVE-2007-4559 see https://github.com/OasisLMF/OasisLMF/pull/1183 for details
(PR #1287)
Fix rtree lookup builtin function so it is compatible with pandas 2 -- builtin rtree compatibility with pandas 2
- add test for builtin lookup
(PR #1265)
add combine functionality to lookup builtin function -add a function that combines several strategies, each trying to achieve the same purpose by different means, into one.
For example, finding the correct area_peril_id for a location with one method using (latitude, longitude) and one using postcode.
Each strategy is applied sequentially to the locations that still have OASIS_UNKNOWN_ID in their id_columns after the preceding strategy.
(PR #1301)
ODS-Tools package frozen to backports 3.0.x -Oasis Stable 1.27 is fixed to the ods-tools package built from backports/3.0.x.
This is to allow patching and support while also adding new features to the next Oasis Stable version.
(PR #1302)
Set ktools to 3.9.8 -Update the ktools package for bug fix https://github.com/OasisLMF/ktools/pull/349
Climate Change - Natural Hazard and Storm
- Python
Published by awsbuild almost 2 years ago

OasisLMF - Release 1.15.28
1.15.28
OasisLMF Changelog -
OasisLMF Notes
(PR #997)
Fix for ktools script -- Successful runs were intermittently marked as failed by an issue in the bash exit_handler
(PR #1284)
Fixed incorrect pltcalc flag -From ktools version 3.8.0 and up, the skip header flag has been changed from -s to -H
https://github.com/OasisLMF/ktools/pull/283
When ktools was version bumped in release 1.15.26, this flag update was missing from bash generation; fixed in this PR.
Climate Change - Natural Hazard and Storm
- Python
Published by awsbuild almost 2 years ago

OasisLMF - Release 1.27.3
1.27.3
OasisLMF Changelog -- #1251 - Error caused by pandas 2.0.0
- #1253 - pandas 2.0.0 error using "oed_fields" in analysis settings
- #1257 - Fix summary levels file
- #1260 - Use low_memory=False in get_dataframe
- #1259 - Duplicate summary_ids in outputs
- #1231 - Fix expected output with Extra TIV cols from ods-tools
- #1267 - Numba 0.57 breaks fmpy
- #1238 - Set oasislmf to version 1.27.2
- #1239 - Release/1.27.2
- #1272 - Update numpy pin
- #1245 - Add the possibility to have both policy coverage and policy PD
- #1277 - Fix invalid columns in FM tests
OasisLMF Notes
(PR #1252)
Make oasislmf compatible with pandas 2.0.0 -We introduce a minor fix to get oasislmf to work with pandas 2.0.0, which was released in April 2023.
(PR #1257)
Fixed summary levels file not detecting OED fields -Fix for issue https://github.com/OasisLMF/OasisLMF/issues/1241
(PR #1260)
Fix issue where DataFrame is loaded with mixed types -- Loading a dataframe in 'chunks' can cause a column to have mixed dtypes, causing #1259
(PR #1262)
specify dtype for useful summary column reading -During the loss computation preparation step, we read the summary files using a simple pandas.read_csv.
This change provides the read with the dtype of each needed column, improving read speed and avoiding potential wrong data type issues.
We also remove the case-lowering of the column names, as the columns are expected to have the proper case in the files and need to match properly with the other files.
(PR #1231)
Fix Unit tests for ods-tools>=3.0.2 -With this change to ods-tools (https://github.com/OasisLMF/ODS_OpenExposureData/pull/133), missing required columns which allow blanks are added to processed location files. This causes the tests in tests/model_preparation/test_exposure_pre_analysis.py to fail.
Fixed by adding the extra TIV columns to the expected output. See GitHub actions for an example error.
(PR #1269)
fix for numba 0.57 warning message -Make sure all jit functions are explicitly set with nopython=True
(PR #1272)
Update Numpy pin for oasislmf package requirements -- Numba now supports numpy up to 1.24
(PR #1242)
move physical damage policy term from 'policy coverage' to 'policy pd' level -Physical damage policies were embedded into level 7, making it impossible to have policy coverage for both individual coverage and physical damage aggregated.
Physical damage will now be in the "correct" level policy pd
Climate Change - Natural Hazard and Storm
- Python
Published by awsbuild almost 2 years ago

OasisLMF - Release 1.23.17
1.23.17
OasisLMF Changelog -- #1278 - Fixes for newer versions of pandas and numpy
- #1260 - Use low_memory=False in get_dataframe
- #1267 - Numba 0.57 breaks fmpy
- #1251 - Error caused by pandas 2.0.0
OasisLMF Notes
(PR #1260)
Fix issue where DataFrame is loaded with mixed types -- Loading a dataframe in 'chunks' can cause a column to have mixed dtypes, causing #1259
(PR #1269)
fix for numba 0.57 warning message -Make sure all jit functions are explicitly set with nopython=True
Climate Change - Natural Hazard and Storm
- Python
Published by awsbuild almost 2 years ago

OasisLMF - Release 1.26.7
1.26.7
OasisLMF Changelog -- #1272 - Update numpy pin
- #1260 - Use low_memory=False in get_dataframe
- #1267 - Numba 0.57 breaks fmpy
- #1251 - Error caused by pandas 2.0.0
OasisLMF Notes
(PR #1272)
Update Numpy pin for oasislmf package requirements -- Numba now supports numpy up to 1.24
(PR #1260)
Fix issue where DataFrame is loaded with mixed types -- Loading a dataframe in 'chunks' can cause a column to have mixed dtypes, causing #1259
(PR #1269)
fix for numba 0.57 warning message -Make sure all jit functions are explicitly set with nopython=True
Climate Change - Natural Hazard and Storm
- Python
Published by awsbuild almost 2 years ago

OasisLMF - Release 1.15.27
1.15.27
OasisLMF Changelog -- #1278 - Fixes for newer versions of pandas and numpy
- #1260 - Use low_memory=False in get_dataframe
- #1267 - Numba 0.57 breaks fmpy
- #1251 - Error caused by pandas 2.0.0
OasisLMF Notes
(PR #1260)
Fix issue where DataFrame is loaded with mixed types -- Loading a dataframe in 'chunks' can cause a column to have mixed dtypes, causing #1259
(PR #1269)
fix for numba 0.57 warning message -Make sure all jit functions are explicitly set with nopython=True
Climate Change - Natural Hazard and Storm
- Python
Published by awsbuild almost 2 years ago

OasisLMF - Release 1.27.2
1.27.2
OasisLMF Changelog -- #1220 - Fix mapping of vuln_idx to vuln_i and implement eff_vuln_cdf as a dynamic array to work with large footprints
- #1130 - Remove check for IL + RI files from the run model cmd
- #1232 - Moved Settings JSON validation to ods-tools
- #1233 - oasislmf exposure run reports missing location file when account missing
- #141 - Implement account level financial structures
- #1236 - Switch changelog builder from "build" repo to "OasisPlatform"
- #1156, #1180, #1151 - [gulmc] implement hazard correlation
OasisLMF Notes
(PR #1220)
Bugfix in gulmc to deal with vulnerability tables with missing ids and with large footprint files -This PR fixes a bug in gulmc which was causing gulmc not to work when vulnerability functions are missing so that their numbering is non-contiguous, e.g., 3, 5, 200. This PR also implements dynamic array storage for the effective vulnerability cdf, which allows processing large footprint files.
(PR #1130)
Remove redundant IL+RI check -With the addition of https://github.com/OasisLMF/OasisLMF/pull/1112 there are two input file checks: one which can be toggled on/off via the flag --check-missing-inputs and another which is hard coded in the oasislmf run model command.
Removed the check from oasislmf run model, which is a 'compound' command of [generate-files + generate-losses]
(PR #1232)
Moved Settings JSON validation to ods-tools -Deleted the JSON specification from Oasislmf and replaced the get_model_settings and get_analysis_settings calls with equivalents from https://github.com/OasisLMF/ODS_Tools/pull/1
Expanded error message in oasislmf exposure run for missing/misspelled account file names. Previously the error message suggested the location file was missing, which is misleading
(PR #1235)
Support Account terms -Support the terms AccDed6All, AccLimit6All, AccMinDed6All, AccMaxDed6All, AccDedType6All, AccLimitType6All from the OED AccAll level.
At Account level, policies are applied to the sum of all layers and then back-allocated to the layers and to the items.
(PR #1236)
Switch changelog builder from "build" repo to "OasisPlatform" -The repository OasisLMF/build is planned for removal, switch the Changelog script to run from OasisPlatform/develop/scripts/update-changelog.py instead of build
(PR #1181)
Introducing support for hazard correlation -This PR introduces the capability to handle hazard correlation in gulmc
. Hazard correlation parameters are defined analogously to damage correlation parameters.
Before entering into details, these are breaking changes vs the past:
- group ids are now always hashed. This ensures results are fully reproducible. Therefore the hashed_group_id argument has been dropped from the relevant functions.
- from this version, oasislmf model run will fail if an older model settings JSON file using group_fields is used vs the new schema that uses damage_group_fields and hazard_group_fields as defined in the data_settings key. See more details below.
- the command line interface argument --group_id_cols for oasislmf model run has been renamed --damage_group_id_cols. A new argument --hazard_group_id_cols has been introduced to specify the columns to use for defining group ids for the hazard sampling. They respectively default to:
DAMAGE_GROUP_ID_COLS = ["PortNumber", "AccNumber", "LocNumber"]
HAZARD_GROUP_ID_COLS = ["PortNumber", "AccNumber", "LocNumber"]
Update to the model settings JSON schema
The oasislmf model settings JSON schema is updated to support the new feature with a breaking change. Previous correlation_settings and data_settings entries in the model settings such as:
"correlation_settings": [
{"peril_correlation_group": 1, "correlation_value": "0.7"},
],
"data_settings": {
"group_fields": ["PortNumber", "AccNumber", "LocNumber"],
},
are not supported anymore. The correlation_settings must contain a new key hazard_correlation_value, and the correlation_value key is renamed to damage_correlation_value:
"correlation_settings": [
{"peril_correlation_group": 1, "damage_correlation_value": "0.7", "hazard_correlation_value": "0.0"},
{"peril_correlation_group": 2, "damage_correlation_value": "0.5", "hazard_correlation_value": "0.0"}
],
Likewise, the data_settings entry is renamed from group_fields to damage_group_fields and now supports hazard_group_fields, which is an optional key:
"data_settings": {
"damage_group_fields": ["PortNumber", "AccNumber", "LocNumber"],
"hazard_group_fields": ["PortNumber", "AccNumber", "LocNumber"]
},
Correlations updated schema
With this PR:
- if correlation_settings is not present, damage_correlation_value and hazard_correlation_value are assumed zero. Peril correlation groups (if defined in supported perils) are ignored. No errors are raised. Example of valid model settings:
"lookup_settings": {
  "supported_perils": [
    {"id": "WSS", "desc": "Single Peril: Storm Surge", "peril_correlation_group": 1},
    {"id": "WTC", "desc": "Single Peril: Tropical Cyclone", "peril_correlation_group": 1},
    {"id": "WW1", "desc": "Group Peril: Windstorm with storm surge"},
    {"id": "WW2", "desc": "Group Peril: Windstorm w/o storm surge"}
  ]
},
- if correlation_settings is present, it needs to contain damage_correlation_value and hazard_correlation_value for each peril_correlation_group entry.
Example of a valid model settings:
"lookup_settings": {
  "supported_perils": [
    {"id": "WSS", "desc": "Single Peril: Storm Surge", "peril_correlation_group": 1},
    {"id": "WTC", "desc": "Single Peril: Tropical Cyclone", "peril_correlation_group": 1},
    {"id": "WW1", "desc": "Group Peril: Windstorm with storm surge"},
    {"id": "WW2", "desc": "Group Peril: Windstorm w/o storm surge"}
  ]
},
"correlation_settings": [
  {"peril_correlation_group": 1, "damage_correlation_value": "0.7", "hazard_correlation_value": "0.3"}
],
Example of an invalid model settings that raises a ValueError:
"lookup_settings": {
  "supported_perils": [
    {"id": "WSS", "desc": "Single Peril: Storm Surge", "peril_correlation_group": 1},
    {"id": "WTC", "desc": "Single Peril: Tropical Cyclone", "peril_correlation_group": 1},
    {"id": "WW1", "desc": "Group Peril: Windstorm with storm surge"},
    {"id": "WW2", "desc": "Group Peril: Windstorm w/o storm surge"}
  ]
},
"correlation_settings": [
  {"peril_correlation_group": 1}
],
Correlations files updated schema, csv <-> binary conversion tools
The correlations.csv and .bin files are modified: they now contain two additional columns, hazard_group_id and hazard_correlation_value. They also feature a renamed column, from correlation_value to damage_correlation_value.
This PR introduces conversion tools for the correlations files: correlationtobin to convert a correlations file from csv to bin:
correlationtobin correlations.csv -o correlations.bin
and correlationtocsv to convert a correlations.bin file to csv. If -o <filename> is specified, it writes the csv table to file:
correlationtocsv correlations.bin -o correlations.csv
If no -o <filename> is specified, it prints the csv table to stdout:
correlationtocsv correlations.bin
item_id,peril_correlation_group,damage_correlation_value,hazard_correlation_value
1,1,0.4,0.0
2,1,0.4,0.0
3,1,0.4,0.0
4,1,0.4,0.0
5,1,0.4,0.0
6,1,0.4,0.0
7,1,0.4,0.0
8,1,0.4,0.0
9,1,0.4,0.0
10,2,0.7,0.9
11,2,0.7,0.9
12,2,0.7,0.9
13,2,0.7,0.9
14,2,0.7,0.9
15,2,0.7,0.9
16,2,0.7,0.9
17,2,0.7,0.9
18,2,0.7,0.9
19,2,0.7,0.9
20,2,0.7,0.9
Climate Change - Natural Hazard and Storm
- Python
Published by awsbuild about 2 years ago

OasisLMF - Release 1.27.1
1.27.1
OasisLMF Changelog -- #1218 - Fix missing default column for RI
- #1220 - Fix mapping of vuln_idx to vuln_i and implement eff_vuln_cdf as a dynamic array to work with large footprints
- #1222 - add option to have custom oed schema
- #1227 - Set ktools to 3.9.7 (oasislmf 1.27)
- #1207 - (FM) CondClass 1 as second priority condition doesn't work
- #1211 - OSError: [Errno 24] Too many open files in gulmc test
OasisLMF Notes
(PR #1220)
Bugfix in gulmc to deal with vulnerability tables with missing ids and with large footprint files -This PR fixes a bug in gulmc which was causing gulmc not to work when vulnerability functions are missing so that their numbering is non-contiguous, e.g., 3, 5, 200. This PR also implements dynamic array storage for the effective vulnerability cdf, which allows processing large footprint files.
(PR #1222)
add option oed_schema_info to specify custom oed schema -ods_tools provides the ability to specify your own OED schema instead of using the default one.
This option can be used, for example, to change default values or add new columns or codes.
To create your own oed_schema from the original one:
- copy OpenExposureData_Spec.xlsx
- make your modifications
- convert it to an OED json schema using the script in extract_spec.py
python path_to/extract_spec.py json --source-excel-path path_to/OpenExposureData_Spec.xlsx --output-json-path path_to/custom_oed_schema.json
then add the path to the config, or pass it as an option when calling oasislmf, under the name oed_schema_info.
example in the config for oasislmf model:
"oed_schema_info": "path_to/custom_oed_schema.json",
(PR #1227)
Set ktools to 3.9.7 -Update ktools for LTS fixes
(PR #1209)
fix (FM) CondClass 1 as second priority condition -make CondClass exclusion work even when it is not the first-priority condition
(PR #1212)
fix Footprint open file leakage in gulmc and getmodel -Closure of the footprint.bin and footprint.bin.z files was not handled properly in oasislmf/pytools/getmodel/footprint.py.
This code is used in both gulmc and getmodel and was keeping the file open until the process finished.
Climate Change - Natural Hazard and Storm
- Python
Published by awsbuild about 2 years ago

OasisLMF - Release 1.26.6
1.26.6
OasisLMF Changelog -- #1091 - Debug complex model execution
- #1207 - (FM) CondClass 1 as second priority condition doesn't work
- #1228 - Set ktools to 3.9.7 (oasislmf 1.26)
- #1205 - Backport 1.26.5
- #1211 - OSError: [Errno 24] Too many open files in gulmc test
- #1213 - Debug complex model execution (#1091) - Backport
OasisLMF Notes
(PR #1091)
Fix run errors in complex models (1.26.2) -- Fixed running complex models in the Azure platform.
- name 'gul_legacy_stream' is not defined
Traceback (most recent call last):
File "/home/worker/.local/lib/python3.8/site-packages/celery/app/trace.py", line 451, in trace_task
R = retval = fun(*args, **kwargs)
File "/home/worker/.local/lib/python3.8/site-packages/celery/app/trace.py", line 734, in __protected_call__
return self.run(*args, **kwargs)
File "/home/worker/.local/lib/python3.8/site-packages/celery/app/autoretry.py", line 54, in run
ret = task.retry(exc=exc, **retry_kwargs)
File "/home/worker/.local/lib/python3.8/site-packages/celery/app/task.py", line 717, in retry
raise_with_context(exc)
File "/home/worker/.local/lib/python3.8/site-packages/celery/app/autoretry.py", line 34, in run
return task._orig_run(*args, **kwargs)
File "/home/worker/src/model_execution_worker/distributed_tasks.py", line 907, in run
return fn(self, params, *args, analysis_id=analysis_id, **kwargs)
File "/home/worker/src/model_execution_worker/distributed_tasks.py", line 989, in generate_losses_chunk
OasisManager().generate_losses_partial(**chunk_params)
File "/home/worker/.local/lib/python3.8/site-packages/oasislmf/utils/log.py", line 111, in wrapper
result = func(*args, **kwargs)
File "/home/worker/.local/lib/python3.8/site-packages/oasislmf/manager.py", line 93, in interface
return computation_cls(**kwargs).run()
File "/home/worker/.local/lib/python3.8/site-packages/oasislmf/computation/generate/losses.py", line 378, in run
return model_runner_module.run_analysis(**bash_params)
File "/home/worker/.local/lib/python3.8/site-packages/oasislmf/utils/log.py", line 111, in wrapper
result = func(*args, **kwargs)
File "/home/worker/.local/lib/python3.8/site-packages/oasislmf/execution/runner.py", line 111, in run_analysis
create_bash_analysis(**params)
File "/home/worker/.local/lib/python3.8/site-packages/oasislmf/execution/bash.py", line 1940, in create_bash_analysis
getmodel_cmd = _get_getmodel_cmd(**getmodel_args)
File "/home/worker/.local/lib/python3.8/site-packages/oasislmf/execution/bash.py", line 1357, in custom_get_getmodel_cmd
if gul_legacy_stream and coverage_output != '':
NameError: name 'gul_legacy_stream' is not defined
- name 'analysis_settings' is not defined
[2022-07-21 13:22:33,192: ERROR/ForkPoolWorker-1] Task generate_losses_chunk[8b4e84f8-25c4-4c60-9f29-d0f730f73402] raised unexpected: NameError("name 'analysis_settings' is not defined")
Traceback (most recent call last):
File "/home/worker/.local/lib/python3.8/site-packages/celery/app/trace.py", line 451, in trace_task
R = retval = fun(*args, **kwargs)
File "/home/worker/.local/lib/python3.8/site-packages/celery/app/trace.py", line 734, in __protected_call__
return self.run(*args, **kwargs)
File "/home/worker/.local/lib/python3.8/site-packages/celery/app/autoretry.py", line 54, in run
ret = task.retry(exc=exc, **retry_kwargs)
File "/home/worker/.local/lib/python3.8/site-packages/celery/app/task.py", line 717, in retry
raise_with_context(exc)
File "/home/worker/.local/lib/python3.8/site-packages/celery/app/autoretry.py", line 34, in run
return task._orig_run(*args, **kwargs)
File "/home/worker/src/model_execution_worker/distributed_tasks.py", line 907, in run
return fn(self, params, *args, analysis_id=analysis_id, **kwargs)
File "/home/worker/src/model_execution_worker/distributed_tasks.py", line 989, in generate_losses_chunk
OasisManager().generate_losses_partial(**chunk_params)
File "/home/worker/.local/lib/python3.8/site-packages/oasislmf/utils/log.py", line 111, in wrapper
result = func(*args, **kwargs)
File "/home/worker/.local/lib/python3.8/site-packages/oasislmf/manager.py", line 93, in interface
return computation_cls(**kwargs).run()
File "/home/worker/.local/lib/python3.8/site-packages/oasislmf/computation/generate/losses.py", line 377, in run
peril_filter = self._get_peril_filter(analysis_settings)
NameError: name 'analysis_settings' is not defined
[2022-07-21 13:22:33,206: INFO/ForkPoolWorker-3] Task error handler
[2022-07-21 13:22:33,207: INFO/ForkPoolWorker-3] Store file: /var/log/oasis/tasks/f74d312a0a60454e9fe6c21a7184cee4_generate-losses-chunk-17.log -> /shared-fs/f7f535bffb9c406097d81bb7e30688ea.log
(PR #1225)
fix (FM) CondClass 1 as second priority condition -make CondClass exclusion work even when it is not the first-priority condition
(PR #1228)
Set ktools to 3.9.7 -Update ktools for LTS fixes
(PR #1205)
Update versions for backport 1.26.x -- Pin ods-tools to versions lower than v3.0.0
- Update ktools to 3.9.6
(PR #1212)
fix Footprint open file leakage in gulmc and getmodel -Closure of the footprint.bin and footprint.bin.z files was not handled properly in oasislmf/pytools/getmodel/footprint.py.
This code is used in both gulmc and getmodel and was keeping the file open until the process finished.
(PR #1213)
Fix run errors in complex models on Platform2 -
Climate Change - Natural Hazard and Storm
- Python
Published by awsbuild about 2 years ago

OasisLMF - Release 1.23.16
1.23.16
OasisLMF Changelog -- #1226 - Set ktools to 3.9.7 (oasislmf 1.23)
- #1207 - (FM) CondClass 1 as second priority condition doesn't work
OasisLMF Notes
(PR #1226)
Set ktools to 3.9.7 -Update ktools for LTS fixes
(PR #1119)
Fix for cond class exclusion when multiple priorities are used -Add a loop breaker in case the account cond hierarchy leads to cycling cond tags infinitely.
Climate Change - Natural Hazard and Storm
- Python
Published by awsbuild about 2 years ago

OasisLMF - Release 1.26.5
1.26.5
OasisLMF Changelog -- #1187 - sklearn is now deprecated in requirements files: use scikit-learn instead
- #1160 - Don't write temp files during test
- #1133 - Fix/package install error
- #1165 - Hotfix/update piwind tests
- #1205 - Backport 1.26.5
- #1174 - Set pip-compile to backtracking and trim unused requirments
- #1150 - Model data in the runs/lossess-xxx/static directory are symbolic links
- #1119 - use correct condpriority to fix cond class exclusion
OasisLMF Notes
(PR #1160)
Oasislmf Testing fixes -- test_reinsurance.py - removed unused debugging files
- test_bash.py - fix for running individual tests
- flake8 - Newer versions of flake8 only recognise codes that match this pattern (single letter, three digits)
(PR #1133)
Fix Package requirements -Add Scipy to required packages
From https://github.com/OasisLMF/OasisLMF/pull/1069 the package scipy is used without being included in requirements-package.in
Update Numpy maximum version
Platform docker builds fail due to the pinned numpy version; this was fixed in version 1.26.3 but that change didn't get merged back into develop due to the branches diverging.
Issue fixed with this PR instead of a merge back from backports/1.26.x.
numpy==1.22.4 and oasislmf[extra]==<develop> because these package versions have conflicting dependencies.
The conflict is caused by:
The user requested numpy==1.22.4
fastparquet 0.8.0 depends on numpy>=1.18
numba 0.55.2 depends on numpy<1.23 and >=1.18
numexpr 2.8.1 depends on numpy>=1.13.3
pandas 1.5.1 depends on numpy>=1.20.3; python_version < "3.10"
pyarrow 8.0.0 depends on numpy>=1.16.6
scikit-learn 1.1.2 depends on numpy>=1.17.3
scipy 1.9.3 depends on numpy<1.26.0 and >=1.18.5
oasislmf[extra] depends on numpy<1.22 and >=1.18
(PR #1165)
Update PiWind testing -- Fixed workflows running from external forks
- Replaced the PiWind testing scripts with https://github.com/OasisLMF/OasisPiWind/pull/109
- Fixed codecov running twice
(PR #1205)
Update versions for backport 1.26.x -- Pin ods-tools to versions lower than v3.0.0
- Update ktools to 3.9.6
(PR #1174)
Fix CI bug with pip compile -- There are several excess packages in requirements.in which are not needed for CI testing; these have been removed.
- Set pip-compile to use backtracking when running with python3.7.
- Tweaked the default in the manual trigger for unittest.yml, it now won't build ktools unless a branch is given as input.
(PR #1149)
Introducing the oasislmf model run --copy_model_data flag -This PR fixes #1150 by introducing an optional flag --copy-model-data to copy the model data to the runs/losses-xxx/static directory instead of creating symbolic links to individual files. By default the flag is False, which reproduces the current default behaviour of creating symbolic links to the model data.
(PR #1119)
Fix for cond class exclusion when multiple priorities are used -Add a loop breaker in case the account cond hierarchy leads to cycling cond tags infinitely.
Climate Change - Natural Hazard and Storm
- Python
Published by awsbuild over 2 years ago

OasisLMF - Release 1.23.15
1.23.15
OasisLMF Changelog -- #1187 - sklearn is now deprecated in requirements files: use scikit-learn instead
- #1160 - Don't write temp files during test
- #1165 - Hotfix/update piwind tests
- #1174 - Set pip-compile to backtracking and trim unused requirments
- #1206 - Backport 1.23.15
- #1150 - Model data in the runs/lossess-xxx/static directory are symbolic links
OasisLMF Notes
(PR #1160)
Oasislmf Testing fixes -- test_reinsurance.py - removed unused debugging files
- test_bash.py - fix for running individual tests
- flake8 - Newer versions of flake8 only recognise codes that match this pattern (single letter, three digits)
(PR #1165)
Update PiWind testing -- Fixed workflows running from external forks
- Replaced the PiWind testing scripts with https://github.com/OasisLMF/OasisPiWind/pull/109
- Fixed codecov running twice
(PR #1174)
Fix CI bug with pip compile -- There are several excess packages in requirements.in which are not needed for CI testing; these have been removed.
- Set pip-compile to use backtracking when running with python3.7.
- Tweaked the default in the manual trigger for unittest.yml, it now won't build ktools unless a branch is given as input.
(PR #1206)
Update versions for backport 1.23.x -- Set ktools to v3.9.6
- Set package version 1.23.15
(PR #1149)
Introducing the oasislmf model run --copy_model_data flag -This PR fixes #1150 by introducing an optional flag --copy-model-data to copy the model data to the runs/losses-xxx/static directory instead of creating symbolic links to individual files. By default the flag is False, which reproduces the current default behaviour of creating symbolic links to the model data.
Climate Change - Natural Hazard and Storm
- Python
Published by awsbuild over 2 years ago

OasisLMF - Release 1.27.0
1.27.0
OasisLMF Changelog -- #135 - Implement OED policy coverage terms in Financial Module
- #1154 - Update .gitignore
- #1157 - Fix fm visualizer tool
- #1160 - Don't write temp files during test
- #1198 - Fix/pep8-code-quality-pr-trigger
- #1165 - Hotfix/update piwind tests
- #1169 - Track code coverage
- #1188, #1126 - Implement weighted vulnerability feature in gulmc
- #1201 - Update/add necessary docstrings in gulmc
- #1169 - Track code coverage
- #908 - ensure that logging is sufficient to capture and report all common errors
- #1174 - Set pip-compile to backtracking and trim unused requirments
- #1175 - Fix drop na in expected data
- #1176 - Feature/ods tools migration test
- #1177 - The run_ktools.sh script does not check if all custom gulcalc processes completed successfully.
- #1182 - Feature/ods tools migration piwind fix
- #1048 - set modelpy and gulpy as default runtime options
- #1057 - Remove sys.exit(1) calls and replace with exceptions
- #1058 - Correlation map
- #1187 - sklearn is now deprecated in requirements files: use scikit-learn instead
- #1059 - Missing CSV headers in summarycalc.csv when running chunked losses
- #1040 - Builtin lookup - missing feedback when all locations are using unsupported LocPerilsCovered
- #1063 - Fix/pre analysis hook
- #1007 - Parquet to csv comparison script
- #1059 - Missing CSV headers in summarycalc.csv when running chunked losses
- #1191 - Github actions to manage new issues and PR's
- #1067 - Fix/platform run
- #1184 - Add gulmc option to the model runner
- #1068 - Implement correlated random number generation in gulpy
- #1192 - Track pid of gulcalc processes and wait on them in run_ktools.sh
- #1071 - Feature/param loading
- #1072 - Update/package requirements
- #1070 - Clean up warning messages
- #1074 - Added lower-case-cols and raise-error flags
- #1075 - setting model_custom_gulcalc disables gulpy
- #1076 - Set ktools to 3.9.2
- #992 - Peril Specific Runs
- #1049 - random number generator can be set to 0 at oasislmf command line
- #1066 - Gulpy failing in distributed runs
- #1080 - add peril_filter to run settings spec
- #1200 - Update CI badges
- #1016 - Update package testing
- #1203 - Update Develop to be inline with master
- #1085 - Disable all deadlines in utils/test_data.py
- #1090 - Request token refresh on HTTP error 403 - Forbidden
- #1091 - Debug complex model execution
- #1035 - No check of parquet output before running model
- #1093 - Fix call to write_summary_levels - missing IL options
- #1094 - Disable GroupID hashing for acceptance tests
- #1096 - Hashing investigation
- #1097 - Fix/pip compile
- #1099 - Implement multiplicative method for total loss computation
- #1100 - OED support for multi-currencies
- #1101 - Always create a correlations.bin, if missing model_settings file is b…
- #1102 - FM documentation update
- #1107 - Fix/678 logger
- #1110 - extent api commands with run-inputs/run-losses options
- #1108 - API client doesn't detect cancelled analysis
- #1105 - Add a 'strict' mode to fail runs if IL/RI is requested but files are missing
- #1113 - Bugfix: out of bounds cdf
- #906 - include "classic" event rates and Metadata in ORD output for oasis outputs
- #1115 - Logs cluttered with warnings from geopandas about GEOS versions
- #1119 - use correct condpriority to fix cond class exclusion
- #1120 - GUL alloc rule range in MDK has not been updated to reflect addition of rule 3
- #1129 - Add contributing guidelines
- #1133 - Fix/package install error
- #1135 - gulpy appears to hang when sample size is large
- #1127 - Stochastic disaggregation 7 Full Monte Carlo
- #1139 - Hotfix/GitHub actions
- #1141 - gulpy produces zero losses for entire items for large number of samples
- #1144 - Release/1.27.0rc3
- #1132 - Make code PEP8 compliant
- #1148 - Remove auto-merge option
- #1150 - Model data in the runs/lossess-xxx/static directory are symbolic links
OasisLMF Notes
(PR #1024)
support OED policy coverage terms -- allow fmpy to handle a multi-tree structure by using negative agg_id to indicate a reference to item level
- back-allocate loss at each level during fm (including deductible, overlimit and underlimit)
- add new level PolCoverage
(PR #1157)
Update the FM visualizer tool -The testing tool for running FM module file generation is broken in Binder. Switch the cloud runner to Google Colab
(PR #1160)
Oasislmf Testing fixes -- test_reinsurance.py - removed unused debugging files
- test_bash.py - fix for running individual tests
- flake8 - Newer versions of flake8 only recognise codes that match this pattern (single letter, three digits)
(PR #1165)
Update PiWind testing -- Fixed workflows running from external forks
- Replaced the PiWind testing scripts with https://github.com/OasisLMF/OasisPiWind/pull/109
- Fixed codecov running twice
(PR #1168)
Introducing support for aggregate vulnerability definitions -With this PR we enable gulmc to support aggregate vulnerability functions, i.e., vulnerability functions that are composed of multiple individual vulnerability functions.
gulmc can now efficiently reconstruct the aggregate vulnerability functions on-the-fly and compute the aggregate (aka blended, aka weighted) vulnerability function. This new functionality works both in the "effective damageability" mode and in the full Monte Carlo mode.
Aggregate vulnerability functions are defined using two new tables, to be stored in the static/ directory of the model data: aggregate_vulnerability.csv (or .bin) and weights.csv (or .bin). Example tables:
- an aggregate_vulnerability table that defines 3 aggregate vulnerability functions, made of 2, 3, and 4 individual vulnerabilities, respectively:
aggregate_vulnerability_id,vulnerability_id
100001,1
100001,2
100002,3
100002,4
100002,5
100003,6
100003,7
100003,8
100003,9
- a weights table that specifies weights for each of the individual vulnerability functions in all areaperil_id:
areaperil_id,vulnerability_id,weight
54,1,138
54,2,224
54,3,194
54,4,264
54,5,390
54,6,107
[...]
154,1,1
154,2,97
154,3,273
154,4,296
[...]
items.csv (using only two aggregate vulnerability ids):
item_id,coverage_id,areaperil_id,vulnerability_id,group_id
1,1,154,8,833720067
2,1,54,2,833720067
3,2,154,8,956003481
4,2,54,100001,956003481
5,4,154,100002,2030714556
[...]
Notes:
- if aggregate_vulnerability.csv or .bin is present, then weights.csv or weights.bin needs to be present too, or gulmc raises an error.
- if aggregate_vulnerability.csv or .bin is not present, then gulmc runs normally, without any definition of aggregate vulnerability.
Caching
In order to speed up the calculation of losses in the full Monte Carlo mode, we implement a simple caching mechanism whereby the most commonly used vulnerability function cdfs are stored in memory for efficient re-use.
The cache size is set as the minimum between the cache size specified by the user with the new --vuln-cache-size argument (default: 200, units: MB) and the amount of memory needed to store all the vulnerability functions to be used in the calculations.
The cache dramatically speeds up the execution when the hazard intensity distribution is narrowly peaked (i.e., when most of the intensity falls in a few intensity bins), which implies a few vulnerability functions are used repeatedly.
The cache only stores individual vulnerability function cdfs, not the aggregate/weighted cdfs, which would be too many to store.
Example: allowing the vulnerability cache size to grow up to 1000 MB:
eve 1 1 | gulmc -S100 -a1 --vuln-cache-size=1000
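The cache behaviour can be pictured as a size-bounded LRU dict; a hedged sketch, not the gulmc internals:
from collections import OrderedDict

class CdfCache:
    # Size-bounded LRU cache of individual vulnerability cdfs (assumed here
    # to be numpy arrays, so each entry exposes .nbytes).
    def __init__(self, max_bytes):
        self.max_bytes, self.used, self.data = max_bytes, 0, OrderedDict()

    def get(self, vuln_id, compute):
        if vuln_id in self.data:
            self.data.move_to_end(vuln_id)  # mark as recently used
            return self.data[vuln_id]
        cdf = compute(vuln_id)
        self.data[vuln_id] = cdf
        self.used += cdf.nbytes
        while self.used > self.max_bytes and self.data:
            _, old = self.data.popitem(last=False)  # evict least recently used
            self.used -= old.nbytes
        return cdf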
Testing
This PR:
- introduces a suite of test model assets, from test_model_2 to test_model_6, each with different properties that capture different ways of using aggregate vulnerabilities.
- adds expected results for test_model_1 for zero samples (-S0 parameter).
- adds 720 further quantitative tests carried out against the test models. The total number of quantitative tests of gulmc that can be carried out with pytest -v tests/pytools/gulmc is now 1200.
(PR #1171)
Use Ods_tools to interact with exposure data -- remove specific dataframe loader for exposure (get_location, get_account)
- use OED Field name instead of lower case to access exposure file columns
- change interface for exposure pre-analysis, pass OedExposure object instead of path to OED file inputs and outputs
- use ods_tools check capability to verify the validity of OED exposure files
(PR #1174)
Fix CI bug with pip compile -- There are several excess packages in
requirements.in
which are not needed for CI testing, these have been removed. - Set pip-compile to use backtracking when running with python3.7.
- Tweaked the default in the manual trigger for
unittest.yml
, it now won't build ktools unless a branch is given as input.
(PR #1175)
Fix expected data in test_data.py -Calling expected = df.dropna(subset=non_na_cols, axis=0) is not equivalent to get_dataframe(non_na_cols=non_na_cols, ..) because get_dataframe overrides which strings pandas views as NaN.
These need to be matched by replacing strings matching patterns in oasislmf.utils.data.PANDAS_DEFAULT_NULL_VALUES with np.nan, as in the sketch below.
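A minimal sketch of that normalisation (PANDAS_DEFAULT_NULL_VALUES is the constant named above; the helper name is hypothetical):
import numpy as np
import pandas as pd
from oasislmf.utils.data import PANDAS_DEFAULT_NULL_VALUES

def normalise_nulls(df: pd.DataFrame) -> pd.DataFrame:
    # Replace the same null-like strings that get_dataframe treats as NaN,
    # so expected and actual frames compare cleanly.
    return df.replace(list(PANDAS_DEFAULT_NULL_VALUES), np.nan)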
(PR #1176)
Update CI so the ODS package is tested from a branch -- Added a new ods_branch input to the unittests.yml file to build on push
- update optional package requirement from sklearn to scikit-learn
- Unittest now invokes pytest instead of tox to avoid overwriting the built package in its env setup
(PR #1179)
Update the run_ktools.sh script to use logs for checking custom gulcalc processes complete -This MR adds two new settings to the file oasislmf.json and also enables these settings to be configured via the CLI. The two new settings are model_custom_gulcalc_log_start and model_custom_gulcalc_log_finish. They allow the user to specify the start and end log message strings used in a custom gulcalc process.
These log messages are then used in the check_complete() function in run_ktools.sh to grep the log file gul_stderror.err. If the number of processes started and finished in the log is not the same, an error is raised.
Two new tests have been added. Each uses one of the two methods for generating a run_ktools.sh script, one using genbash() and the other gen_chunked_bash().
I have tested this change internally, and it works well with farmcat and catches the previous problem case where the Out of Memory killer on Linux kills one of our custom gulcalc processes and this is not reported by Oasis.
Adding new settings to oasislmf_settings.json
The two new settings are called model_custom_gulcalc_log_start and model_custom_gulcalc_log_finish; if you specify these two strings then run_ktools.sh will check to ensure that the same number of custom gulcalc processes finished as started.
(PR #1182)
CI update to build and install ODS tools before piwind checks -Added ods_branch build options to CI piwind testing scripts
(PR #1055)
modelpy and gulpy set as default run options -The two new python replacements for getmodel and gulcalc are now the default run options.
To disable either of these, use the command line flags oasislmf model run --modelpy false --gulpy false or add the following to a configuration json oasislmf.json
{
...
"modelpy": false,
"gulpy": false
}
(PR #1057)
Fix platform client error handling -Removed calls to sys.exit(1) and replaced them with OasisException
(PR #1058)
Adding Correlation mapping to the model -This addition adds the option to have correlation values
(PR #1060)
Fix for eltcalc.csv and summarycalc.csv outputs in distributed platform -The output kat commands need to be executed in create_bash_outputs instead of create_bash_analysis, otherwise these output files will be overwritten by each chunk execution. The reason for the missing CSV header is incomplete output.
(PR #1062)
add unsupported peril in the keys-errors file -In the keys server, keys with a Peril Id not present in the model perils covered were simply discarded. To make it clearer why the key was removed, they will now appear in the keys-errors.csv file with the status "noreturn" and message "unsuported peril_id"
(PR #1063)
Updated the pre-analysis hook function to return edited file paths -When calling OasisManager().exposure_pre_analysis( .. ) from the oasis manager, there should be a way to get the list of file paths for raw and edited exposure files. Changed the function return to include these in a dictionary.
Example
OasisManager().exposure_pre_analysis(**params)
{
'class': <_class_return>,
'modified': {
"oed_location_csv": "/tmp/tmpaqq_k5pr/location.csv",
"oed_accounts_csv": "/tmp/tmpaqq_k5pr/account.csv",
"oed_info_csv": "/tmp/tmpaqq_k5pr/ri_info.csv",
"oed_scope_csv": "/tmp/tmpaqq_k5pr/ri_scope.csv"
},
'original': {
"raw_oed_location_csv": "/tmp/tmpaqq_k5pr/epa_location.csv",
"raw_oed_accounts_csv": "/tmp/tmpaqq_k5pr/epa_account.csv",
"raw_oed_info_csv": "/tmp/tmpaqq_k5pr/epa_ri_info.csv",
"raw_oed_scope_csv": "/tmp/tmpaqq_k5pr/epa_ri_scope.csv"
}
}
(PR #1064)
script to compare parquet and csv file -add script to compare parquet and csv file
(PR #1065)
Fix for kat outputs for distributed execution -- Update to PR https://github.com/OasisLMF/OasisLMF/pull/1060; the output kat process counter didn't match the total number of chunks in a distributed run. Instead it defaulted to the number of cpus on the system.
(PR #1194)
Add Github action workflows for project board automation -When either an issue or PR is opened add these to the projectV2 board https://github.com/orgs/OasisLMF/projects/44
(PR #1067)
Minor fixes for run API command line -- Fix for model-id flag,
oasislmf api run --model-id 1 --portfolio-id 11 --analysis-settings-json <fpath>
(PR #1193)
Enabling gulmc, the full Monte Carlo ground-up loss calculator, in the oasislmf model runner -This PR enables users to optionally employ gulmc, the full Monte Carlo ground-up loss calculator, in the oasislmf model runner. As documented in the release notes of PR #1168, gulmc not only implements the full Monte Carlo sampling of hazard intensity and damage, but can also handle aggregate vulnerability functions.
Note: gulpy, the Python version of the historical gulcalc, is still used as the default engine in oasislmf model run. However, users can now use gulmc instead of gulpy with:
oasislmf model run --gulpy=False --gulmc
Since this release is the first one introducing gulmc, we require users to explicitly turn off gulpy and explicitly enable gulmc. In future releases, when gulmc is set as the default engine, these explicit flags won't be necessary.
oasislmf model run exposes some useful gulmc arguments, e.g.:
- oasislmf model run --gulpy=False --gulmc --gulmc-random-generator=0 allows users to use gulmc with the desired random generator.
- oasislmf model run --gulpy=False --gulmc --gulmc-effective-damageability allows users to use the effective damageability method (i.e., an equivalent algorithm to gulpy) when computing ground-up losses.
- oasislmf model run --gulpy=False --gulmc --gulmc-vuln-cache-size=400 allows users to specify the desired cache size (in MB) for the vulnerability calculation. This option is useful to speed up calculations for portfolios with large numbers of aggregate or non-aggregate vulnerability functions.
(PR #1071)
Check get_config_profile and environment variables when calling the _param functions -Fix needed for https://github.com/OasisLMF/OasisPlatform/issues/630
- When older oasislmf.json files are processed programmatically, using the _params functions in the platform-2.0 branch, the config_compatibility_profile.json file isn't checked for outdated key names, unlike with the CLI. The update edits the OasisManager._params_<funcName> methods so that outdated keys are replaced, making it equivalent to the CLI argument loading.
- If export OASIS_ENV_OVERRIDE=True is set, check if OASIS_<param_name> is defined and load the value from the override environment variable.
Example 1 - Config profile
the source_exposure_file_path key was updated to oed_location_csv; if given to the function for loading default args, the newer name will be used.
In [x]: OasisManager._params_generate_keys(
...: **{"source_exposure_file_path": "my-location-file-path"}
...: )
Deprecated key(s) in MDK config:
'source_exposure_file_path' loaded as 'oed_location_csv'
Out[x]:
{'oed_location_csv': 'my-location-file-path',
'keys_data_csv': None,
'keys_errors_csv': None,
'keys_format': 'oasis',
'lookup_config_json': None,
'lookup_data_dir': None,
'lookup_module_path': None,
'lookup_complex_config_json': None,
'lookup_num_processes': -1,
'lookup_num_chunks': -1,
'model_version_csv': None,
'user_data_dir': None,
'lookup_multiprocessing': True,
'verbose': False}
Example 2 - override environment variable
export OASIS_ENV_OVERRIDE=True
export OASIS_OED_LOCATION_CSV='override_path'
In [x]: OasisManager._params_generate_keys(
...: **{"source_exposure_file_path": "my-location-file-path"}
...: )
Deprecated key(s) in MDK config:
'source_exposure_file_path' loaded as 'oed_location_csv'
Out[x]:
{'oed_location_csv': 'override_path',
'keys_data_csv': None,
'keys_errors_csv': None,
'keys_format': 'oasis',
'lookup_config_json': None,
'lookup_data_dir': None,
'lookup_module_path': None,
'lookup_complex_config_json': None,
'lookup_num_processes': -1,
'lookup_num_chunks': -1,
'model_version_csv': None,
'user_data_dir': 'foo-path',
'lookup_multiprocessing': True,
'verbose': False}
(PR #1072)
Update numba package pin -- Numba requirements set to numba>=0.55.1
(PR #1073)
Fix warning messages in package runs -- Added minor changes to appease the package deprecation warnings, resulting in cleaner logs when running the MDK.
- Updated the oasislmf deprecated module loader to support sub-module remapping, e.g. oasislmf.preparation.old_module -> oasislmf.preparation.new_module
(PR #1074)
Added more flags to compare-parquet script -
- --lower-case-cols: lower case column names of both DataFrames before running compare
- --raise-error: default is now to catch and print the exception message, add this flag to also raise the exception (like before this PR)
(PR #1075)
Setting a complex model gulcalc disables gulpy -- if the model_custom_gulcalc config option is set, gulpy is disabled
(PR #1076)
Ktools 3.9.2 -- Fix for ktools aalcalc https://github.com/OasisLMF/ktools/pull/311
(PR #1077)
Peril specific filter for modelpy and gulpy -add a filter option for modelpy and gulpy
- modelpy --peril-filter WTC < eve.bin > /dev/null
- gulpy -S10 -L0 -a1 --peril-filter WTC > /dev/null < cdf.bin
allow the peril filter to be specified via the MDK
- in the mdk => --peril-filter WSS WTC
- in oasislmf.json => "peril_filter": ["WSS", "WTC"],
if the filter is specified in the MDK, only modelpy filters, as that is sufficient. The option in gulpy can be used for custom models.
(PR #1078)
New gulpy_random_generator flag in the oasislmf model runner -This PR introduces the gulpy_random_generator flag in the oasislmf model runner. This will allow users to set the random number generator to be used in gulpy. By default gulpy uses the Latin Hypercube Sampling algorithm (see #1000). However, it also implements the Mersenne Twister random generator (namely, the generator that was used in gulcalc).
With the introduction of gulpy, the user can already set the random number generator with its --random-generator flag:
--random-generator RANDOM_GENERATOR
random number generator
0: numpy default (MT19937), 1: Latin Hypercube. Default: 1.
Regarding the oasislmf model runner CLI, so far the following command
oasislmf model run --gulpy [...]
implicitly used the Latin Hypercube generator.
This PR introduces the possibility for the user to specify which random number generator to use in gulpy through the --gulpy-random-generator flag. The following commands are equivalent and use the Latin Hypercube Sampling:
oasislmf model run --gulpy [...]
oasislmf model run --gulpy --gulpy-random-generator=1 [...]
To run a model with the Mersenne Twister it is now possible with:
oasislmf model run --gulpy --gulpy-random-generator=0 [...]
(PR #1079)
fix gulpy error when cdf is empty -
(PR #1080)
Add peril filter to analysis settings -- peril filter is also read from analysis_settings.json and overrides options set via oasislmf.json or CLI
analysis_settings.json
{
"analysis_tag": "base_example",
"source_tag": "MDK",
"model_name_id": "PiWind",
"peril_filter": ["WTC"],
...
(PR #1200)
Update Readme build Badges -- Update to fix the links to github actions
(PR #1082)
Move package unit testing to GitHub actions -oasislmf is tested against multiple python versions, with an option to pin a single dependent package:
Example test matrix
strategy:
matrix:
cfg:
- { python-version: '3.9', pkg-version: ""}
- { python-version: '3.10', pkg-version: ""}
- { python-version: '3.10', pkg-version: 'numba==0.55.1' }
- { python-version: '3.10', pkg-version: "pandas>=1.3.0"}
- Removed Sphinx and doctest due to pip-compile error (can add back in later)
- By default Unit-tests are skipped in Jenkins, but the piwind tests still trigger
(PR #1085)
Disable flaky test failures -- Fix for tests/utils/test_data.py, which fails intermittently in concurrent unit test runs
(PR #1090)
Minor fix for Platform client -Attempt to request a refresh token on 403 errors, otherwise runs can fail on token timeouts.
Creating portfolio
File uploaded: ~/ram/location.csv
Settings JSON uploaded: ~/ram/analysis_settings.json
Inputs Generation: Starting (id=61)
Input Generation: Queued (id=61)
Input Generation: Executing (id=61)
Input Generation: 38%|████████████████████████▌ | 10/26 [04:51<07:45, 29.10s/ sub_task]
run_generate: failed
api error: 403, url: https://xxxxxxx.northcentralus.cloudapp.azure.com/api/V1/analyses/61/sub_task_list/, msg: {"detail":"Token verification failed"}
(PR #1091)
Fix run errors in complex models (1.26.2) -- Fixed running complex models in the Azure platform
- name 'gul_legacy_stream' is not defined
Traceback (most recent call last):
File "/home/worker/.local/lib/python3.8/site-packages/celery/app/trace.py", line 451, in trace_task
R = retval = fun(*args, **kwargs)
File "/home/worker/.local/lib/python3.8/site-packages/celery/app/trace.py", line 734, in __protected_call__
return self.run(*args, **kwargs)
File "/home/worker/.local/lib/python3.8/site-packages/celery/app/autoretry.py", line 54, in run
ret = task.retry(exc=exc, **retry_kwargs)
File "/home/worker/.local/lib/python3.8/site-packages/celery/app/task.py", line 717, in retry
raise_with_context(exc)
File "/home/worker/.local/lib/python3.8/site-packages/celery/app/autoretry.py", line 34, in run
return task._orig_run(*args, **kwargs)
File "/home/worker/src/model_execution_worker/distributed_tasks.py", line 907, in run
return fn(self, params, *args, analysis_id=analysis_id, **kwargs)
File "/home/worker/src/model_execution_worker/distributed_tasks.py", line 989, in generate_losses_chunk
OasisManager().generate_losses_partial(**chunk_params)
File "/home/worker/.local/lib/python3.8/site-packages/oasislmf/utils/log.py", line 111, in wrapper
result = func(*args, **kwargs)
File "/home/worker/.local/lib/python3.8/site-packages/oasislmf/manager.py", line 93, in interface
return computation_cls(**kwargs).run()
File "/home/worker/.local/lib/python3.8/site-packages/oasislmf/computation/generate/losses.py", line 378, in run
return model_runner_module.run_analysis(**bash_params)
File "/home/worker/.local/lib/python3.8/site-packages/oasislmf/utils/log.py", line 111, in wrapper
result = func(*args, **kwargs)
File "/home/worker/.local/lib/python3.8/site-packages/oasislmf/execution/runner.py", line 111, in run_analysis
create_bash_analysis(**params)
File "/home/worker/.local/lib/python3.8/site-packages/oasislmf/execution/bash.py", line 1940, in create_bash_analysis
getmodel_cmd = _get_getmodel_cmd(**getmodel_args)
File "/home/worker/.local/lib/python3.8/site-packages/oasislmf/execution/bash.py", line 1357, in custom_get_getmodel_cmd
if gul_legacy_stream and coverage_output != '':
NameError: name 'gul_legacy_stream' is not defined
- name 'analysis_settings' is not defined
[2022-07-21 13:22:33,192: ERROR/ForkPoolWorker-1] Task generate_losses_chunk[8b4e84f8-25c4-4c60-9f29-d0f730f73402] raised unexpected: NameError("name 'analysis_settings' is not defined")
Traceback (most recent call last):
File "/home/worker/.local/lib/python3.8/site-packages/celery/app/trace.py", line 451, in trace_task
R = retval = fun(*args, **kwargs)
File "/home/worker/.local/lib/python3.8/site-packages/celery/app/trace.py", line 734, in __protected_call__
return self.run(*args, **kwargs)
File "/home/worker/.local/lib/python3.8/site-packages/celery/app/autoretry.py", line 54, in run
ret = task.retry(exc=exc, **retry_kwargs)
File "/home/worker/.local/lib/python3.8/site-packages/celery/app/task.py", line 717, in retry
raise_with_context(exc)
File "/home/worker/.local/lib/python3.8/site-packages/celery/app/autoretry.py", line 34, in run
return task._orig_run(*args, **kwargs)
File "/home/worker/src/model_execution_worker/distributed_tasks.py", line 907, in run
return fn(self, params, *args, analysis_id=analysis_id, **kwargs)
File "/home/worker/src/model_execution_worker/distributed_tasks.py", line 989, in generate_losses_chunk
OasisManager().generate_losses_partial(**chunk_params)
File "/home/worker/.local/lib/python3.8/site-packages/oasislmf/utils/log.py", line 111, in wrapper
result = func(*args, **kwargs)
File "/home/worker/.local/lib/python3.8/site-packages/oasislmf/manager.py", line 93, in interface
return computation_cls(**kwargs).run()
File "/home/worker/.local/lib/python3.8/site-packages/oasislmf/computation/generate/losses.py", line 377, in run
peril_filter = self._get_peril_filter(analysis_settings)
NameError: name 'analysis_settings' is not defined
[2022-07-21 13:22:33,206: INFO/ForkPoolWorker-3] Task error handler
[2022-07-21 13:22:33,207: INFO/ForkPoolWorker-3] Store file: /var/log/oasis/tasks/f74d312a0a60454e9fe6c21a7184cee4_generate-losses-chunk-17.log -> /shared-fs/f7f535bffb9c406097d81bb7e30688ea.log
(PR #1092)
Add check for parquet output before running model -Optional third party libraries are required for parquet output of ktools
components (see https://github.com/OasisLMF/ktools/pull/283 for more details). A user can request parquet output by setting the parquet_format
flag to true
in the analysis_settings.json
file. It is possible to compile ktools
binaries without linking to these optional parquet libraries, as is the case with the Mac OS build. In this case, requesting parquet output will result in an error after all loss calculations have been performed.
A check to determine whether the ktools
components have been linked with parquet libraries during compilation has been introduced before input generation, yielding an error message if parquet output has been requested but is not supported by the ktools
build.
(PR #1093)
Fix for exposure_summary_levels.json -- Fixed missing account level summary options in summary_levels.json
(PR #1094)
Disable GroupID hashing for acceptance tests -- Switched the default of
hashed_group_id
to False for the FM acceptance tests
(PR #1097)
Fixed package clash in pip-compile -The pip packages flake8 and virtualenv have a dependency clash on the version of importlib-metadata
Added a workaround by pinning virtualenv<=20.16.2 in the requirements.in file
(PR #1100)
Add multi-currency support for OED files -Check if OED files contain multiple currencies. If they do, then a currency converter is needed.
Provide 3 ways for the user to pass a currency converter (see the illustrative sketch below):
- via a csv or parquet file
- using forex-python (needs to be installed)
- by providing a path to your own module and class
These are set via --currency-settings
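For the module-and-class route, a user-supplied converter might look roughly like this (the class name and get_rate signature are illustrative assumptions; the exact interface expected via --currency-settings is defined in the oasislmf docs):
class MyCurrencyConverter:
    """Illustrative converter returning fixed rates between currencies."""
    rates = {('GBP', 'USD'): 1.25, ('EUR', 'USD'): 1.10}

    def get_rate(self, from_currency, to_currency):
        # Return the multiplier converting from_currency amounts
        # into to_currency amounts.
        if from_currency == to_currency:
            return 1.0
        return self.rates[(from_currency, to_currency)]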
(PR #1102)
Documentation on supported fields for reinsurance types -Additional information in docs/OED_financial_terms_supported.xls about which fields are used for different types of reinsurance contracts
(PR #1107)
Fix set logger configuration for oasislmf only -When setting the logger, oasislmf was actually changing the log configuration of all modules by calling logging.getLogger() instead of logging.getLogger('oasislmf')
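The distinction, in brief:
import logging

# Before: configuring the root logger changes log levels for every
# library in the process, not just oasislmf.
logging.getLogger().setLevel(logging.DEBUG)

# After: scoping to the 'oasislmf' logger leaves other modules alone.
logging.getLogger('oasislmf').setLevel(logging.DEBUG)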
(PR #1110)
Add extra OasisPlatform run commands -Split the platform api run CLI call into partial commands:
oasislmf api generate-oasis-files - Only generate inputs for an analysis in the OasisPlatform
oasislmf api generate-losses - Only generate losses for an analysis in the OasisPlatform
The previous command oasislmf api run still runs a model end-to-end, but it is now a chained run action based on the above two steps.
(PR #1111)
Allow API client to correctly detect cancelled analyses -Previously the API client would not detect cancelled analyses and would hang, waiting for them to complete.
This resolves that issue.
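Conceptually the fix amounts to treating cancellation as a terminal state when polling (status names and the client method below are illustrative, not the client's actual API):
import time

TERMINAL_STATES = {'RUN_COMPLETED', 'RUN_ERROR', 'RUN_CANCELLED'}

def wait_for_analysis(client, analysis_id, poll_interval=5):
    # Poll until the analysis reaches any terminal state, including
    # cancellation, instead of waiting only for success or error.
    while True:
        status = client.get_analysis_status(analysis_id)
        if status in TERMINAL_STATES:
            return status
        time.sleep(poll_interval)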
(PR #1112)
Added option to fail loss generation if input files are missing -If the --check-missing-inputs
option is set, a loss analysis will fail if either IL
or RI
is set in the analysis_settings but the oasis files are missing (the input generation was run without acc / ri OED files).
If not set the MDK will still warn when this happens, but not fail.
Warning message
UserWarning: ['IL', 'RI'] are enabled analysis_settings without the generated input files. The 'generate-oasis-files' step should be rerun with account/reinsurance files.
Exception example
$ oasislmf model run --check-missing-inputs
Stating oasislmf command - RunModel
RUNNING: oasislmf.manager.interface
Processing arguments - Creating Oasis Files
Generating Oasis files (GUL=True, IL=False, RIL=False)
RUNNING: oasislmf.lookup.factory.generate_key_files
COMPLETED: oasislmf.lookup.factory.generate_key_files in 0.1s
...
RUNNING: oasislmf.preparation.summaries.write_mapping_file
Oasis files generated: {
"items": "/home/sam/repos/models/piwind/runs/losses-20220909083601/input/items.csv",
"coverages": "/home/sam/repos/models/piwind/runs/losses-20220909083601/input/coverages.csv"
}
['IL', 'RI'] are enabled analysis_settings without the generated input files. The 'generate-oasis-files' step should be rerun with account/reinsurance files.
$ echo $?
1
(PR #1113)
Fix bug in gulpy causing cdfs being stored incorrectly -This PR fixes a bug in gulpy that was causing the cdfs to be stored/indexed in the wrong way.
The bug did not affect models where all items had cdfs of the same length, but affected all models with items with cdfs of variable length.
Since the bug consisted of accessing an array out of its bounds, it was causing garbage results that changed from run to run.
(PR #1114)
Facilitate class event rates in Moment Event Loss Table (MELT) output -If the event_rates.csv
file exists, it is copied to the input
directory. This file gives event rates for each event ID (see ktools PR https://github.com/OasisLMF/ktools/pull/327). In a similar fashion to events.bin, an event dictionary file can be defined in analysis_settings.json so that multiple event dictionary files can be stored in the same model files directory:
.
"model_settings": {
"event_set": "p",
"event_rates_set": "p",
}
.
where event_rates_set
is the ID of the event rates file (in this case event_rates_p.csv
).
If this file does not exist, ktools component eltcalc
will calculate event rates from the occurrence file, which is the current mode of operation.
(PR #1116)
Hide geopandas compatibility warnings from logs -Fixes #1115.
Hide numerous messages like the following from the logs:
...lib64/python3.8/site-packages/geopandas/_compat.py:112: UserWarning: The Shapely GEOS version (3.8.0-CAPI-1.13.1 ) is incompatible with the GEOS version PyGEOS was compiled with (3.10.3-CAPI-1.16.1). Conversions between both will be slow.
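One way to suppress this class of message (the PR's implementation may differ in detail):
import warnings

warnings.filterwarnings(
    'ignore',
    message='The Shapely GEOS version .* is incompatible',
    category=UserWarning,
    module='geopandas._compat',
)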
(PR #1119)
Fix for cond class exclusion when multiple priorities are used -Add a loop breaker in case the account cond hierarchy leads to cycling cond tags infinitely.
(PR #1121)
Add support for GUL alloc rule 3 -Since ktools v3.9.3, the new GUL alloc rule 3 was introduced to calculate the total peril loss using the multiplicative method (please see ktools issue https://github.com/OasisLMF/ktools/issues/118 for more details). The check for the GUL alloc rule range has been updated to reflect this new alloc rule.
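A sketch of the multiplicative idea (illustrative, not ktools' exact implementation): the combined damage ratio is one minus the product of the undamaged fractions, which, unlike a plain sum, can never exceed 1:
import numpy as np

def combined_damage(damage_ratios):
    # e.g. perils with damage ratios 0.6 and 0.7: a plain sum gives 1.3,
    # while the multiplicative method gives 1 - (0.4 * 0.3) = 0.88.
    return 1.0 - np.prod(1.0 - np.asarray(damage_ratios))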
(PR #1133)
Fix Package requirements -Add Scipy to required packages
From https://github.com/OasisLMF/OasisLMF/pull/1069 the package scipy
is used without being included in requirements-package.in
Update Numpy maximum version
Platform docker builds fail due to the pinned numpy version. This was fixed in version 1.26.3, but that change didn't get merged back into develop because the branches diverged.
Issue fixed with this PR instead of a merge back from backports/1.26.x.
numpy==1.22.4 and oasislmf[extra]==<develop> because these package versions have conflicting dependencies.
The conflict is caused by:
The user requested numpy==1.22.4
fastparquet 0.8.0 depends on numpy>=1.18
numba 0.55.2 depends on numpy<1.23 and >=1.18
numexpr 2.8.1 depends on numpy>=1.13.3
pandas 1.5.1 depends on numpy>=1.20.3; python_version < "3.10"
pyarrow 8.0.0 depends on numpy>=1.16.6
scikit-learn 1.1.2 depends on numpy>=1.17.3
scipy 1.9.3 depends on numpy<1.26.0 and >=1.18.5
oasislmf[extra] depends on numpy<1.22 and >=1.18
(PR #1137)
Introducing gulmc, full Monte Carlo loss calculation engine -This PR introduces gulmc, a new tool that uses a "full Monte Carlo" approach for ground up loss calculation: instead of drawing loss samples from the 'effective damageability' probability distribution (as done by calling eve | modelpy | gulpy), it first draws a sample of the hazard intensity, and then draws a sample of the damage from the vulnerability function corresponding to the hazard intensity sample.
Comparing gulpy and gulmc output
gulmc runs the same algorithm as eve | modelpy | gulpy, i.e., it runs the 'effective damageability' calculation mode, with the same command line arguments. For example, running a model with 1000 samples, alloc rule 1, and streaming the binary output to the output.bin file can be done with:
eve 1 1 | modelpy | gulpy -S1000 -a1 -o output.bin
or
eve 1 1 | gulmc -S1000 -a1 -o output.bin
Hazard uncertainty treatment
If the hazard intensity in the footprint has no uncertainty, i.e.:
event_id,areaperil_id,intensity_bin_id,probability
1,4,1,1
[...]
then gulpy and gulmc produce the same outputs. However, if the hazard intensity has a probability distribution, e.g.:
event_id,areaperil_id,intensity_bin_id,probability
1,4,1,2.0000000298e-01
1,4,2,6.0000002384e-01
1,4,3,2.0000000298e-01
[...]
then, by default, gulmc runs the full Monte Carlo sampling of the hazard intensity, and then of the damage. Reproducing the same results that gulpy produces can be achieved by using the --effective-damageability flag:
eve 1 1 | gulmc -S1000 -a1 -o output.bin --effective-damageability
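Schematically, the two-stage sampling looks like this (a toy illustration with made-up bins and probabilities, not gulmc's internals):
import numpy as np

rng = np.random.default_rng(42)

# Hazard distribution for one (event, areaperil): intensity bins and probabilities
intensity_bins = np.array([1, 2, 3])
intensity_probs = np.array([0.2, 0.6, 0.2])

# Vulnerability: damage-bin probabilities conditional on the intensity bin
damage_probs = {1: [0.9, 0.1], 2: [0.5, 0.5], 3: [0.1, 0.9]}
damage_ratios = np.array([0.1, 0.8])  # representative damage per damage bin

# Full Monte Carlo: sample the intensity first, then damage given intensity
i_bin = rng.choice(intensity_bins, p=intensity_probs)
d_bin = rng.choice(len(damage_ratios), p=damage_probs[i_bin])
loss_sample = damage_ratios[d_bin]  # multiplied by TIV in a real run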
On the usage of modelpy and eve with gulmc
Due to internal refactoring, gulmc now incorporates the functionality performed by modelpy, therefore modelpy should not be used in a pipe with gulmc:
eve 1 1 | modelpy | gulmc -S1000 -a1 -o output.bin # wrong usage, won't work
eve 1 1 | gulmc -S1000 -a1 -o output.bin # correct usage
Note: both gulpy and gulmc can read the events stream from a binary file, i.e., without the need of eve, with: gulmc -i input/events.bin -S1000 -a1 -o output.bin
Printing the random values used for sampling
Since we now sample in two dimensions (hazard intensity and damage), the -d flag is revamped to output both random values used for sampling. While gulpy -d printed the random values used to sample the effective damageability distribution, in gulmc:
gulmc -d1 [...] # prints the random values used for the hazard intensity sampling
gulmc -d2 [...] # prints the random values used for the damage sampling
Note: if the --effective-damageability flag is used, only -d2 is valid since there is no sampling of the hazard intensity, and the random values printed are those used for the effective damageability sampling.
Note: if -d1 or -d2 are passed, the only valid alloc_rule value is 0. This is because, when printing the random values, back-allocation is not meaningful. alloc_rule=0 is the default value, or it can be set with -a0. If a value other than 0 is passed to -a, an error will be thrown.
Testing suite
This PR introduces:
- a minimal toy model in tests/assets/test_model_1/ that can be used to run unit tests on various functionality in the repository. A more detailed description of the content of the model can be found at tests/assets/test_model_1/README.md.
- a suite of 192 quantitative tests for the gulmc output for combinations of input parameters (alloc rule, correlation, etc.). Binary files with the expected outputs are stored at tests/assets/test_model_1/expected/.
- a suite of 48 quantitative tests for the gulpy output for combinations of input parameters.
- tests checking that ValueErrors in gulmc are raised when expected.
(PR #1139)
Replace Jenkins CI with GitHub actions -- Updated readme build badges
- MDK checks are now run from PiWind MDK
- PiWind output checks are now run from PiWind Output
- The Jenkins script has been disabled, but not removed. The package publish job also needs moving to GitHub actions before deleting.
(PR #1140)
gulpy bugfix -This PR solves Issue #1141, where a bug was causing wrong output from gulpy (losses all zero for entire items) for large numbers of samples, without throwing an error.
(PR #1146)
Improving Code Quality -This PR makes all the code PEP8 compliant and introduces automatic CI checks to preserve PEP8 compliance.
This PR fixes some flake8 errors and introduces automatic CI checks to avoid the same errors being introduced into the code in the future.
(PR #1148)
Update Github Actions release workflow -- Removed the auto-merge option from the release script
(PR #1149)
Introducing the oasislmf model run --copy-model-data flag -This PR fixes #1150 by introducing an optional flag --copy-model-data to copy the model data to the runs/losses-xxx/static directory instead of creating symbolic links to individual files. By default the flag is False, which reproduces the current default behaviour of creating symbolic links to the model data.
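The difference between the two modes, sketched (illustrative; the actual staging code in oasislmf is more involved):
import os
import shutil

def stage_model_data(src_dir, static_dir, copy_model_data=False):
    os.makedirs(static_dir, exist_ok=True)
    for name in os.listdir(src_dir):
        src = os.path.abspath(os.path.join(src_dir, name))
        dst = os.path.join(static_dir, name)
        if copy_model_data:
            shutil.copy2(src, dst)   # --copy-model-data: physical copy
        else:
            os.symlink(src, dst)     # default: symbolic link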
Climate Change - Natural Hazard and Storm
- Python
Published by awsbuild over 2 years ago
OasisLMF - Release 1.26.4
1.26.4
OasisLMF Changelog -- #1120 - GUL alloc rule range in MDK has not been updated to reflect addition of rule 3
- #1135 - gulpy appears to hang when sample size is large
- #1139 - Hotfix/GitHub actions
- #1141 - gulpy produces zero losses for entire items for large number of samples
- #1148 - Remove auto-merge option
OasisLMF Notes
(PR #1121)
Add support for GUL alloc rule 3 -Since ktools v3.9.3, the new GUL alloc rule 3 was introduced to calculate the total peril loss using the multiplicative method (please see ktools issue https://github.com/OasisLMF/ktools/issues/118 for more details). The check for the GUL alloc rule range has been updated to reflect this new alloc rule.
(PR #1139)
Replace Jenkins CI with GitHub actions -- Updated readme build badges
- MDK checks are now run from PiWind MDK
- PiWind output checks are now run from PiWind Output
- The Jenkins script has been disabled, but not removed. The package publish job also needs moving to GitHub actions before deleting.
(PR #1140)
gulpy bugfix -This PR solves Issue #1141, where a bug was causing wrong output from gulpy (losses all zero for entire items) for large numbers of samples, without throwing an error.
(PR #1148)
Update Github Actions release workflow -- Removed the auto-merge option from the release script
Climate Change - Natural Hazard and Storm
- Python
Published by awsbuild over 2 years ago
OasisLMF - Release 1.23.14
1.23.14
OasisLMF Changelog -- #1120 - GUL alloc rule range in MDK has not been updated to reflect addition of rule 3
- #1139 - Hotfix/GitHub actions
- #1148 - Remove auto-merge option
- #1005 - in run_ktools use -s flag on kat for ORD elt reports
OasisLMF Notes
(PR #1121)
Add support for GUL alloc rule 3 -Since ktools v3.9.3, the new GUL alloc rule 3 was introduced to calculate the total peril loss using the multiplicative method (please see ktools issue https://github.com/OasisLMF/ktools/issues/118 for more details). The check for the GUL alloc rule range has been updated to reflect this new alloc rule.
(PR #1139)
Replace Jenkins CI with GitHub actions -- Updated readme build badges
- MDK checks are now run from PiWind MDK
- PiWind output checks are now run from PiWind Output
- The Jenkins script has been disabled, but not removed. The package publish job also needs moving to GitHub actions before deleting.
(PR #1148)
Update Github Actions release workflow -- Removed the auto-merge option from the release script
(PR #1030)
Drop kat sort flag -From ktools v3.8.1, kat will attempt to detect the table type and sort by event ID in the case of Event Loss Tables (ELTs), and by period ID and then by event ID in the case of Period Loss Tables (PLTs), prior to concatenation. As this is now the default option, the sort flag -s has been dropped. Sorting correctly requires eve to employ the deterministic method to shuffle event IDs. Therefore, if other shuffling methods have been used by eve, the flag -u, which produces unsorted output, is employed.
Climate Change - Natural Hazard and Storm
- Python
Published by awsbuild over 2 years ago
OasisLMF - Release 1.27.0rc3
1.27.0rc3
OasisLMF Changelog -- #1120 - GUL alloc rule range in MDK has not been updated to reflect addition of rule 3
- #1129 - Add contributing guidelines
- #1133 - Fix/package install error
- #1135 - gulpy appears to hang when sample size is large
- #1139 - Hotfix/GitHub actions
- #1141 - gulpy produces zero losses for entire items for large number of samples
- #1119 - use correct condpriority to fix cond class exclusion
OasisLMF Notes
(PR #1121)
Add support for GUL alloc rule 3 -Since ktools v3.9.3, the new GUL alloc rule 3 was introduced to calculate the total peril loss using the multiplicative method (please see ktools issue https://github.com/OasisLMF/ktools/issues/118 for more details). The check for the GUL alloc rule range has been updated to reflect this new alloc rule.
(PR #1133)
Fix Package requirements -Add Scipy to required packages
From https://github.com/OasisLMF/OasisLMF/pull/1069 the package scipy
is used without being included in requirements-package.in
Update Numpy maximum version
Platform docker builds fail due to the pinned numpy version. This was fixed in version 1.26.3, but that change didn't get merged back into develop because the branches diverged.
Issue fixed with this PR instead of a merge back from backports/1.26.x.
numpy==1.22.4 and oasislmf[extra]==<develop> because these package versions have conflicting dependencies.
The conflict is caused by:
The user requested numpy==1.22.4
fastparquet 0.8.0 depends on numpy>=1.18
numba 0.55.2 depends on numpy<1.23 and >=1.18
numexpr 2.8.1 depends on numpy>=1.13.3
pandas 1.5.1 depends on numpy>=1.20.3; python_version < "3.10"
pyarrow 8.0.0 depends on numpy>=1.16.6
scikit-learn 1.1.2 depends on numpy>=1.17.3
scipy 1.9.3 depends on numpy<1.26.0 and >=1.18.5
oasislmf[extra] depends on numpy<1.22 and >=1.18
(PR #1139)
Replace Jenkins CI with GitHub actions -- Updated readme build badges
- MDK checks are now run from PiWind MDK
- PiWind output checks are now run from PiWind Output
- The Jenkins script has been disabled, but not removed. The package publish job also needs moving to GitHub actions before deleting.
(PR #1140)
gulpy bugfix -This PR solves Issue #1141, where a bug was causing wrong output from gulpy (losses all zero for entire items) for large numbers of samples, without throwing an error.
(PR #1119)
Fix for cond class exclusion when multiple priorities are used -Add a loop breaker in case the account cond hierarchy leads to cycling cond tags infinitely.
Climate Change - Natural Hazard and Storm
- Python
Published by awsbuild over 2 years ago
OasisLMF - 1.27.0rc1
1.27.0rc1
OasisLMF Changelog -- #135 - Implement OED policy coverage terms in Financial Module
- #1057 - Remove sys.exit(1) calls and replace with exceptions
- #1058 - Correlation map
- #1059 - Missing CSV headers in summarycalc.csv when running chunked losses
- #1040 - Builtin lookup - missing feedback when all locations are using unsupported LocPerilsCovered
- #1063 - Fix/pre analysis hook
- #1007 - Parquet to csv comparison script
- #1059 - Missing CSV headers in summarycalc.csv when running chunked losses
- #1067 - Fix/platform run
- #1068 - Implement correlated random number generation in gulpy
- #1071 - Feature/param loading
- #1072 - Update/package requirements
- #1070 - Clean up warning messages
- #1074 - Added lower-case-cols and raise-error flags
- #1075 - setting model_custom_gulcalc disables gulpy
- #1076 - Set ktools to 3.9.2
- #992 - Peril Specific Runs
- #1049 - random number generator can be set to 0 at oasislmf command line
- #1066 - Gulpy failing in distributed runs
- #1080 - add peril_filter to run settings spec
- #1016 - Update package testing
- #1085 - Disable all deadlines in utils/test_data.py
- #1090 - Request token refresh on HTTP error 403 - Forbidden
- #1091 - Debug complex model execution
- #1035 - No check of parquet output before running model
- #1093 - Fix call to write_summary_levels - missing IL options
- #1094 - Disable GroupID hashing for acceptance tests
- #1096 - Hashing investigation
- #1097 - Fix/pip compile
- #1099 - Implement multiplicative method for total loss computation
- #1100 - OED support for multi-currencies
- #1101 - Always create a correlations.bin, if missing model_settings file is b…
- #1102 - FM documentation update
- #1107 - Fix/678 logger
- #1110 - extent api commands with run-inputs/run-losses options
- #1108 - API client doesn't detect cancelled analysis
- #1105 - Add a 'strict' mode to fail runs if IL/RI is requested but files are missing
- #1113 - Bugfix: out of bounds cdf
- #906 - include "classic" event rates and Metadata in ORD output for oasis outputs
- #1116 - Hide geopandas warning
OasisLMF Notes
(PR #1024)
support OED policy coverage terms -- allow fmpy to handle multi-tree structures by using negative agg_id values to indicate a reference to item level
- back-allocate loss at each level during fm (including deductible overlimit and underlimit)
- add new level PolCoverage
(PR #1057)
Fix platform client error handling -Removed calls to sys.exit(1) and replaced them with OasisException
(PR #1058)
Adding Correlation mapping to the model -This addition adds the option to have correlation values
(PR #1060)
Fix for eltcalc.csv and summarycalc.csv outputs in distributed platform -The output kat commands need to be executed in create_bash_outputs instead of create_bash_analysis, otherwise these output files will be overwritten by each chunk execution. The reason for the missing CSV header is incomplete output.
(PR #1062)
add unsupported peril in the keys-errors file -In the keys server, keys with a Peril Id not present in the model's perils covered were simply discarded. To make it clearer why the key was removed, they will now appear in the keys-errors.csv file with the status "noreturn" and message "unsuported peril_id"
(PR #1063)
Updated the pre-analysis hook function to return edited file paths -When calling OasisManager().exposure_pre_analysis( .. )
from the oasis manager, there should be a way to get the list of file paths for raw
and edited
exposure files. Changed the function return to include these in a dictionary.
Example
OasisManager().exposure_pre_analysis(**params)
{
'class': <_class_return>,
'modified': {
"oed_location_csv": "/tmp/tmpaqq_k5pr/location.csv",
"oed_accounts_csv": "/tmp/tmpaqq_k5pr/account.csv",
"oed_info_csv": "/tmp/tmpaqq_k5pr/ri_info.csv",
"oed_scope_csv": "/tmp/tmpaqq_k5pr/ri_scope.csv"
},
'original': {
"raw_oed_location_csv": "/tmp/tmpaqq_k5pr/epa_location.csv",
"raw_oed_accounts_csv": "/tmp/tmpaqq_k5pr/epa_account.csv",
"raw_oed_info_csv": "/tmp/tmpaqq_k5pr/epa_ri_info.csv",
"raw_oed_scope_csv": "/tmp/tmpaqq_k5pr/epa_ri_scope.csv"
}
}
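A consumer of the new return value can then pick up the edited paths directly, e.g. (params as in an MDK config):
result = OasisManager().exposure_pre_analysis(**params)

# Edited files, produced by the hook, to feed into input generation:
modified_location = result['modified']['oed_location_csv']
# Untouched originals, kept for reference:
raw_location = result['original']['raw_oed_location_csv']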
(PR #1064)
script to compare parquet and csv file -add script to compare parquet and csv file
(PR #1065)
Fix for kat outputs for distributed execution -- Update to PR https://github.com/OasisLMF/OasisLMF/pull/1060, the output kats process counter didn't match the total number of chunks in a distributed run. Instead it defaulted to 'number of cpus on system'.
(PR #1067)
Minor fixes for run API command line -- Fix for model-id flag,
oasislmf api run --model-id 1 --portfolio-id 11 --analysis-settings-json <fpath>
(PR #1071)
Check get_config_profile and environment variables when calling the _param functions -Fix needed for https://github.com/OasisLMF/OasisPlatform/issues/630
- When older oasislmf.json files are processed programmatically using the _param functions in the platform-2.0 branch, the config_compatibility_profile.json file isn't checked for outdated key names, unlike with the CLI. The update edits the OasisManager._params_<funcName> methods so that outdated keys are replaced, making it equivalent to the CLI argument loading.
- If export OASIS_ENV_OVERRIDE=True is set, check if OASIS_<param_name> is defined and load the value from the override environment variable.
Example 1 - Config profile
The source_exposure_file_path key was updated to oed_location_csv; if the old key is given to the function for loading default args, the newer name will be used.
In [x]: OasisManager._params_generate_keys(
...: **{"source_exposure_file_path": "my-location-file-path"}
...: )
Deprecated key(s) in MDK config:
'source_exposure_file_path' loaded as 'oed_location_csv'
Out[x]:
{'oed_location_csv': 'my-location-file-path',
'keys_data_csv': None,
'keys_errors_csv': None,
'keys_format': 'oasis',
'lookup_config_json': None,
'lookup_data_dir': None,
'lookup_module_path': None,
'lookup_complex_config_json': None,
'lookup_num_processes': -1,
'lookup_num_chunks': -1,
'model_version_csv': None,
'user_data_dir': None,
'lookup_multiprocessing': True,
'verbose': False}
Example 2 - override environment variable
export OASIS_ENV_OVERRIDE=True
export OASIS_OED_LOCATION_CSV='override_path'
In [x]: OasisManager._params_generate_keys(
...: **{"source_exposure_file_path": "my-location-file-path"}
...: )
Deprecated key(s) in MDK config:
'source_exposure_file_path' loaded as 'oed_location_csv'
Out[x]:
{'oed_location_csv': 'override_path',
'keys_data_csv': None,
'keys_errors_csv': None,
'keys_format': 'oasis',
'lookup_config_json': None,
'lookup_data_dir': None,
'lookup_module_path': None,
'lookup_complex_config_json': None,
'lookup_num_processes': -1,
'lookup_num_chunks': -1,
'model_version_csv': None,
'user_data_dir': 'foo-path',
'lookup_multiprocessing': True,
'verbose': False}
(PR #1072)
Update numba package pin -- Numba requirements set to
numba>=0.55.1
(PR #1073)
Fix warning messages in package runs -- Added minor changes to appease the package deprecation warnings, resulting in cleaner logs when running the MDK.
- Updated the oasislmf deprecated module loader to support sub-module remapping: oasislmf.preparation.old_module -> oasislmf.preparation.new_module
(PR #1074)
Added more flags to compare-parquet script -
--lower-case-cols - lower case the column names of both dataFrames before running the compare
--raise-error - the default is now to catch and print the exception message; add this flag to also raise the exception (as before this PR)
(PR #1075)
Setting a complex model gulcalc disables gulpy -- if the
model_custom_gulcalc
config option is set, gulpy is disabled
(PR #1076)
Ktools 3.9.2 -- Fix for ktools aalcalc https://github.com/OasisLMF/ktools/pull/311
(PR #1077)
Peril specific filter for modelpy and gulpy -add a filter option for modelpy and gulpy
- modelpy --peril-filter WTC < eve.bin > /dev/null
- gulpy -S10 -L0 -a1 --peril-filter WTC > /dev/null < cdf.bin
allow peril filter to be specified via the MDK
- in the mdk => --peril-filter WSS WTC
- in oasislmf.json => "peril_filter": ["WSS", "WTC"],
If the filter is specified via the MDK, only modelpy filters, as this is sufficient. The option in gulpy can be used for custom models.
(PR #1078)
New gulpy_random_generator flag in the oasislmf model runner -This PR introduces the gulpy_random_generator flag in the oasislmf model runner, which allows users to set the random number generator used in gulpy. By default gulpy uses the Latin Hypercube Sampling algorithm (see #1000). However, it also implements the Mersenne Twister random generator (namely, the generator that was used in gulcalc).
With the introduction of gulpy, the user can already set the random number generator with its --random-generator flag:
--random-generator RANDOM_GENERATOR
random number generator
0: numpy default (MT19937), 1: Latin Hypercube. Default: 1.
Regarding the oasislmf model runner CLI, so far the following command
oasislmf model run --gulpy [...]
implicitly used the Latin Hypercube generator.
This PR lets the user specify which random number generator gulpy should use through the --gulpy-random-generator flag. The following commands are equivalent and use Latin Hypercube Sampling:
oasislmf model run --gulpy [...]
oasislmf model run --gulpy --gulpy-random-generator=1 [...]
Running a model with the Mersenne Twister is now possible with:
oasislmf model run --gulpy --gulpy-random-generator=0 [...]
(PR #1079)
fix gulpy error when cdf is empty -(PR #1080)
Add peril filter to analysis settings -- peril filter is also read from analysis_settings.json and overrides options set via oasislmf.json or the CLI
analysis_settings.json
{
"analysis_tag": "base_example",
"source_tag": "MDK",
"model_name_id": "PiWind",
"peril_filter": ["WTC"],
...
(PR #1082)
Move package unit testing to GitHub actions -oasislmf is tested against multiple Python versions, with an option to pin a single dependent package:
Example test matrix
strategy:
matrix:
cfg:
- { python-version: '3.9', pkg-version: ""}
- { python-version: '3.10', pkg-version: ""}
- { python-version: '3.10', pkg-version: 'numba==0.55.1' }
- { python-version: '3.10', pkg-version: "pandas>=1.3.0"}
- Removed Sphinx and doctest due to pip-compile error (can add back in later)
- By default unit tests are skipped in Jenkins, but the PiWind tests still trigger
(PR #1085)
Disable flaky test failures -- Fix for
tests/utils/test_data.py
, which fails intermittently in concurrent unit test runs
(PR #1090)
Minor fix for Platform client -Attempt to request a refresh token on 403 errors, otherwise runs can fail on token timeouts.
Creating portfolio
File uploaded: ~/ram/location.csv
Settings JSON uploaded: ~/ram/analysis_settings.json
Inputs Generation: Starting (id=61)
Input Generation: Queued (id=61)
Input Generation: Executing (id=61)
Input Generation: 38%|████████████████████████▌ | 10/26 [04:51<07:45, 29.10s/ sub_task]
run_generate: failed
api error: 403, url: https://xxxxxxx.northcentralus.cloudapp.azure.com/api/V1/analyses/61/sub_task_list/, msg: {"detail":"Token verification failed"}
(PR #1091)
Fix run errors in complex models (1.26.2) -- Fixed running complex models in the Azure platform
- name 'gul_legacy_stream' is not defined
Traceback (most recent call last):
File "/home/worker/.local/lib/python3.8/site-packages/celery/app/trace.py", line 451, in trace_task
R = retval = fun(*args, **kwargs)
File "/home/worker/.local/lib/python3.8/site-packages/celery/app/trace.py", line 734, in __protected_call__
return self.run(*args, **kwargs)
File "/home/worker/.local/lib/python3.8/site-packages/celery/app/autoretry.py", line 54, in run
ret = task.retry(exc=exc, **retry_kwargs)
File "/home/worker/.local/lib/python3.8/site-packages/celery/app/task.py", line 717, in retry
raise_with_context(exc)
File "/home/worker/.local/lib/python3.8/site-packages/celery/app/autoretry.py", line 34, in run
return task._orig_run(*args, **kwargs)
File "/home/worker/src/model_execution_worker/distributed_tasks.py", line 907, in run
return fn(self, params, *args, analysis_id=analysis_id, **kwargs)
File "/home/worker/src/model_execution_worker/distributed_tasks.py", line 989, in generate_losses_chunk
OasisManager().generate_losses_partial(**chunk_params)
File "/home/worker/.local/lib/python3.8/site-packages/oasislmf/utils/log.py", line 111, in wrapper
result = func(*args, **kwargs)
File "/home/worker/.local/lib/python3.8/site-packages/oasislmf/manager.py", line 93, in interface
return computation_cls(**kwargs).run()
File "/home/worker/.local/lib/python3.8/site-packages/oasislmf/computation/generate/losses.py", line 378, in run
return model_runner_module.run_analysis(**bash_params)
File "/home/worker/.local/lib/python3.8/site-packages/oasislmf/utils/log.py", line 111, in wrapper
result = func(*args, **kwargs)
File "/home/worker/.local/lib/python3.8/site-packages/oasislmf/execution/runner.py", line 111, in run_analysis
create_bash_analysis(**params)
File "/home/worker/.local/lib/python3.8/site-packages/oasislmf/execution/bash.py", line 1940, in create_bash_analysis
getmodel_cmd = _get_getmodel_cmd(**getmodel_args)
File "/home/worker/.local/lib/python3.8/site-packages/oasislmf/execution/bash.py", line 1357, in custom_get_getmodel_cmd
if gul_legacy_stream and coverage_output != '':
NameError: name 'gul_legacy_stream' is not defined
- name 'analysis_settings' is not defined
[2022-07-21 13:22:33,192: ERROR/ForkPoolWorker-1] Task generate_losses_chunk[8b4e84f8-25c4-4c60-9f29-d0f730f73402] raised unexpected: NameError("name 'analysis_settings' is not defined")
Traceback (most recent call last):
File "/home/worker/.local/lib/python3.8/site-packages/celery/app/trace.py", line 451, in trace_task
R = retval = fun(*args, **kwargs)
File "/home/worker/.local/lib/python3.8/site-packages/celery/app/trace.py", line 734, in __protected_call__
return self.run(*args, **kwargs)
File "/home/worker/.local/lib/python3.8/site-packages/celery/app/autoretry.py", line 54, in run
ret = task.retry(exc=exc, **retry_kwargs)
File "/home/worker/.local/lib/python3.8/site-packages/celery/app/task.py", line 717, in retry
raise_with_context(exc)
File "/home/worker/.local/lib/python3.8/site-packages/celery/app/autoretry.py", line 34, in run
return task._orig_run(*args, **kwargs)
File "/home/worker/src/model_execution_worker/distributed_tasks.py", line 907, in run
return fn(self, params, *args, analysis_id=analysis_id, **kwargs)
File "/home/worker/src/model_execution_worker/distributed_tasks.py", line 989, in generate_losses_chunk
OasisManager().generate_losses_partial(**chunk_params)
File "/home/worker/.local/lib/python3.8/site-packages/oasislmf/utils/log.py", line 111, in wrapper
result = func(*args, **kwargs)
File "/home/worker/.local/lib/python3.8/site-packages/oasislmf/manager.py", line 93, in interface
return computation_cls(**kwargs).run()
File "/home/worker/.local/lib/python3.8/site-packages/oasislmf/computation/generate/losses.py", line 377, in run
peril_filter = self._get_peril_filter(analysis_settings)
NameError: name 'analysis_settings' is not defined
[2022-07-21 13:22:33,206: INFO/ForkPoolWorker-3] Task error handler
[2022-07-21 13:22:33,207: INFO/ForkPoolWorker-3] Store file: /var/log/oasis/tasks/f74d312a0a60454e9fe6c21a7184cee4_generate-losses-chunk-17.log -> /shared-fs/f7f535bffb9c406097d81bb7e30688ea.log
(PR #1092)
Add check for parquet output before running model -Optional third party libraries are required for parquet output of ktools
components (see https://github.com/OasisLMF/ktools/pull/283 for more details). A user can request parquet output by setting the parquet_format
flag to true
in the analysis_settings.json
file. It is possible to compile ktools
binaries without linking to these optional parquet libraries, as is the case with the Mac OS build. In this case, requesting parquet output will result in an error after all loss calculations have been performed.
A check to determine whether the ktools
components have been linked with parquet libraries during compilation has been introduced before input generation, yielding an error message if parquet output has been requested but is not supported by the ktools
build.
(PR #1093)
Fix for exposure_summary_levels.json -- Fixed missing account level summary options in summary_levels.json
(PR #1094)
Disable GroupID hashing for acceptance tests -- Switched the default of
hashed_group_id
to False for the FM acceptance tests
(PR #1097)
Fixed package clash in pip-compile -The pip packages flake8 and virtualenv have a dependency clash on the version of importlib-metadata
Added a workaround by pinning virtualenv<=20.16.2 in the requirements.in file
(PR #1100)
Add multi-currency support for OED files -Check if OED files contain multiple currencies. If they do, then a currency converter is needed.
Provide 3 ways for the user to pass a currency converter:
- via a csv or parquet file
- using forex-python (needs to be installed)
- by providing a path to your own module and class
These are set via --currency-settings
(PR #1102)
Documentation on supported fields for reinsurance types -Additional information in docs/OED_financial_terms_supported.xls about which fields are used for different types of reinsurance contracts
(PR #1107)
Fix set logger configuration for oasislmf only -When setting the logger, oasislmf was actually changing the log configuration of all modules by calling logging.getLogger() instead of logging.getLogger('oasislmf')
(PR #1110)
Add extra OasisPlatform run commands -Split the platform api run CLI call into partial commands:
oasislmf api generate-oasis-files - Only generate inputs for an analysis in the OasisPlatform
oasislmf api generate-losses - Only generate losses for an analysis in the OasisPlatform
The previous command oasislmf api run still runs a model end-to-end, but it is now a chained run action based on the above two steps.
(PR #1111)
Allow API client to correctly detect cancelled analyses -Previously the API client would not detect cancelled analyses and would hang, waiting for them to complete.
This resolves that issue.
(PR #1112)
Added option to fail loss generation if input files are missing -If the --check-missing-inputs
option is set, a loss analysis will fail if either IL
or RI
is set in the analysis_settings but the oasis files are missing (the input generation was run without acc / ri OED files).
If not set the MDK will still warn when this happens, but not fail.
Warning message
UserWarning: ['IL', 'RI'] are enabled analysis_settings without the generated input files. The 'generate-oasis-files' step should be rerun with account/reinsurance files.
Exception example
$ oasislmf model run --check-missing-inputs
Stating oasislmf command - RunModel
RUNNING: oasislmf.manager.interface
Processing arguments - Creating Oasis Files
Generating Oasis files (GUL=True, IL=False, RIL=False)
RUNNING: oasislmf.lookup.factory.generate_key_files
COMPLETED: oasislmf.lookup.factory.generate_key_files in 0.1s
...
RUNNING: oasislmf.preparation.summaries.write_mapping_file
Oasis files generated: {
"items": "/home/sam/repos/models/piwind/runs/losses-20220909083601/input/items.csv",
"coverages": "/home/sam/repos/models/piwind/runs/losses-20220909083601/input/coverages.csv"
}
['IL', 'RI'] are enabled analysis_settings without the generated input files. The 'generate-oasis-files' step should be rerun with account/reinsurance files.
$ echo $?
1
(PR #1113)
Fix bug in gulpy causing cdfs being stored incorrectly -This PR fixes a bug in gulpy that was causing the cdfs to be stored/indexed in the wrong way.
The bug did not affect models where all items had cdfs of the same length, but affected all models with items with cdfs of variable length.
Since the bug consisted of accessing an array out of its bounds, it was causing garbage results that changed from run to run.
(PR #1114)
Facilitate class event rates in Moment Event Loss Table (MELT) output -If the event_rates.csv
file exists, it is copied to the input
directory. This file gives event rates for each event ID (see ktools PR https://github.com/OasisLMF/ktools/pull/327). In a similar fashion to events.bin, an event dictionary file can be defined in analysis_settings.json so that multiple event dictionary files can be stored in the same model files directory:
.
"model_settings": {
"event_set": "p",
"event_rates_set": "p",
}
.
where event_rates_set
is the ID of the event rates file (in this case event_rates_p.csv
).
If this file does not exist, ktools component eltcalc
will calculate event rates from the occurrence file, which is the current mode of operation.
(PR #1116)
Hide geopandas compatibility warnings from logs -Fixes #1115.
Hide numerous messages like the following from the logs:
...lib64/python3.8/site-packages/geopandas/_compat.py:112: UserWarning: The Shapely GEOS version (3.8.0-CAPI-1.13.1 ) is incompatible with the GEOS version PyGEOS was compiled with (3.10.3-CAPI-1.16.1). Conversions between both will be slow.
Climate Change - Natural Hazard and Storm
- Python
Published by awsbuild over 2 years ago
OasisLMF - 1.23.13
1.23.13
OasisLMF Changelog -- #1090 - Request token refresh on HTTP error 403 - Forbidden
- #1035 - No check of parquet output before running model
- #1093 - Fix call to write_summary_levels - missing IL options
- #1005 - in run_ktools use -s flag on kat for ORD elt reports
- #1097 - Fix/pip compile
- #1107 - Fix/678 logger
- #1110 - extent api commands with run-inputs/run-losses options
- #1108 - API client doesn't detect cancelled analysis
- #1105 - Add a 'strict' mode to fail runs if IL/RI is requested but files are missing
- #1016 - Update package testing
- #1085 - Disable all deadlines in utils/test_data.py
OasisLMF Notes
(PR #1090)
Minor fix for Platform client -Attempt to request a refresh token on 403 errors, otherwise runs can fail on token timeouts.
Creating portfolio
File uploaded: ~/ram/location.csv
Settings JSON uploaded: ~/ram/analysis_settings.json
Inputs Generation: Starting (id=61)
Input Generation: Queued (id=61)
Input Generation: Executing (id=61)
Input Generation: 38%|████████████████████████▌ | 10/26 [04:51<07:45, 29.10s/ sub_task]
run_generate: failed
api error: 403, url: https://xxxxxxx.northcentralus.cloudapp.azure.com/api/V1/analyses/61/sub_task_list/, msg: {"detail":"Token verification failed"}
(PR #1092)
Add check for parquet output before running model -Optional third party libraries are required for parquet output of ktools
components (see https://github.com/OasisLMF/ktools/pull/283 for more details). A user can request parquet output by setting the parquet_format
flag to true
in the analysis_settings.json
file. It is possible to compile ktools
binaries without linking to these optional parquet libraries, as is the case with the Mac OS build. In this case, requesting parquet output will result in an error after all loss calculations have been performed.
A check to determine whether the ktools
components have been linked with parquet libraries during compilation has been introduced before input generation, yielding an error message if parquet output has been requested but is not supported by the ktools
build.
(PR #1093)
Fix for exposure_summary_levels.json -- Fixed missing account level summary options in summary_levels.json
(PR #1030)
Drop kat sort flag -From ktools v3.8.1, kat will attempt to detect the table type and sort by event ID in the case of Event Loss Tables (ELTs), and by period ID and then by event ID in the case of Period Loss Tables (PLTs), prior to concatenation. As this is now the default option, the sort flag -s has been dropped. Sorting correctly requires eve to employ the deterministic method to shuffle event IDs. Therefore, if other shuffling methods have been used by eve, the flag -u, which produces unsorted output, is employed.
(PR #1097)
Fixed package clash in pip-compile -The pip packages flake8 and virtualenv have a dependency clash on the version of importlib-metadata
Added a workaround by pinning virtualenv<=20.16.2 in the requirements.in file
(PR #1107)
Fix set logger configuration for oasislmf only -When setting the logger, oasislmf was actually changing the log configuration of all modules by calling logging.getLogger() instead of logging.getLogger('oasislmf')
(PR #1110)
Add extra OasisPlatform run commands -Split the platform api run CLI call into partial commands:
oasislmf api generate-oasis-files - Only generate inputs for an analysis in the OasisPlatform
oasislmf api generate-losses - Only generate losses for an analysis in the OasisPlatform
The previous command oasislmf api run still runs a model end-to-end, but it is now a chained run action based on the above two steps.
(PR #1111)
Allow API client to correctly detect cancelled analyses -Previously the API client would not detect cancelled analyses and would hang, waiting for them to complete.
This resolves that issue.
(PR #1112)
Added option to fail loss generation if input files are missing -If the --check-missing-inputs
option is set, a loss analysis will fail if either IL
or RI
is set in the analysis_settings but the oasis files are missing (the input generation was run without acc / ri OED files).
If not set the MDK will still warn when this happens, but not fail.
Warning message
UserWarning: ['IL', 'RI'] are enabled analysis_settings without the generated input files. The 'generate-oasis-files' step should be rerun with account/reinsurance files.
Exception example
$ oasislmf model run --check-missing-inputs
Stating oasislmf command - RunModel
RUNNING: oasislmf.manager.interface
Processing arguments - Creating Oasis Files
Generating Oasis files (GUL=True, IL=False, RIL=False)
RUNNING: oasislmf.lookup.factory.generate_key_files
COMPLETED: oasislmf.lookup.factory.generate_key_files in 0.1s
...
RUNNING: oasislmf.preparation.summaries.write_mapping_file
Oasis files generated: {
"items": "/home/sam/repos/models/piwind/runs/losses-20220909083601/input/items.csv",
"coverages": "/home/sam/repos/models/piwind/runs/losses-20220909083601/input/coverages.csv"
}
['IL', 'RI'] are enabled analysis_settings without the generated input files. The 'generate-oasis-files' step should be rerun with account/reinsurance files.
$ echo $?
1
(PR #1082)
Move package unit testing to GitHub actions -oasislmf is tested against multiple Python versions, with an option to pin a single dependent package:
Example test matrix
strategy:
matrix:
cfg:
- { python-version: '3.9', pkg-version: ""}
- { python-version: '3.10', pkg-version: ""}
- { python-version: '3.10', pkg-version: 'numba==0.55.1' }
- { python-version: '3.10', pkg-version: "pandas>=1.3.0"}
- Removed Sphinx and doctest due to pip-compile error (can add back in later)
- By default unit tests are skipped in Jenkins, but the PiWind tests still trigger
(PR #1085)
Disable flaky test failures -- Fix for
tests/utils/test_data.py
, which fails intermittently in concurrent unit test runs
Climate Change - Natural Hazard and Storm
- Python
Published by awsbuild over 2 years ago
OasisLMF - 1.26.3
1.26.3
OasisLMF Changelog -- #1090 - Request token refresh on HTTP error 403 - Forbidden
- #1035 - No check of parquet output before running model
- #1093 - Fix call to write_summary_levels - missing IL options
- #1097 - Fix/pip compile
- #1107 - Fix/678 logger
- #1108 - API client doesn't detect cancelled analysis
- #1066 - Gulpy failing in distributed runs
- #1110 - extent api commands with run-inputs/run-losses options
- #1113 - Bugfix: out of bounds cdf
- #1016 - Update package testing
- #1105 - Add a 'strict' mode to fail runs if IL/RI is requested but files are missing
- #1085 - Disable all deadlines in utils/test_data.py
OasisLMF Notes
(PR #1090)
Minor fix for Platform client -Attempt to request a refresh token on 403 errors, otherwise runs can fail on token timeouts.
Creating portfolio
File uploaded: ~/ram/location.csv
Settings JSON uploaded: ~/ram/analysis_settings.json
Inputs Generation: Starting (id=61)
Input Generation: Queued (id=61)
Input Generation: Executing (id=61)
Input Generation: 38%|████████████████████████▌ | 10/26 [04:51<07:45, 29.10s/ sub_task]
run_generate: failed
api error: 403, url: https://xxxxxxx.northcentralus.cloudapp.azure.com/api/V1/analyses/61/sub_task_list/, msg: {"detail":"Token verification failed"}
(PR #1092)
Add check for parquet output before running model -Optional third party libraries are required for parquet output of ktools
components (see https://github.com/OasisLMF/ktools/pull/283 for more details). A user can request parquet output by setting the parquet_format
flag to true
in the analysis_settings.json
file. It is possible to compile ktools
binaries without linking to these optional parquet libraries, as is the case with the Mac OS build. In this case, requesting parquet output will result in an error after all loss calculations have been performed.
A check to determine whether the ktools
components have been linked with parquet libraries during compilation has been introduced before input generation, yielding an error message if parquet output has been requested but is not supported by the ktools
build.
(PR #1093)
Fix for exposure_summary_levels.json -- Fixed missing account level summary options in summary_levels.json
(PR #1097)
Fixed package clash in pip-compile -The pip packages flake8 and virtualenv have a dependency clash on the version of importlib-metadata
Added a workaround by pinning virtualenv<=20.16.2 in the requirements.in file
(PR #1107)
Fix set logger configuration for oasislmf only -When setting the logger, oasislmf was actually changing the log configuration of all modules by calling logging.getLogger() instead of logging.getLogger('oasislmf')
(PR #1111)
Allow API client to correctly detect cancelled analyses -Previously the API client would not detect cancelled analyses and would hang, waiting for them to complete.
This resolves that issue.
(PR #1079)
fix gulpy error when cdf is empty -(PR #1110)
Add extra OasisPlatform run commands -Split the platform api run CLI call into partial commands:
oasislmf api generate-oasis-files - Only generate inputs for an analysis in the OasisPlatform
oasislmf api generate-losses - Only generate losses for an analysis in the OasisPlatform
The previous command oasislmf api run still runs a model end-to-end, but it is now a chained run action based on the above two steps.
(PR #1113)
Fix bug in gulpy causing cdfs being stored incorrectly -This PR fixes a bug in gulpy that was causing the cdfs to be stored/indexed in the wrong way.
The bug did not affect models where all items had cdfs of the same length, but affected all models with items with cdfs of variable length.
Since the bug consisted of accessing an array out of its bounds, it was causing garbage results that changed from run to run.
(PR #1082)
Move package unit testing to GitHub actions -oasislmf is tested against multiple Python versions, with an option to pin a single dependent package:
Example test matrix
strategy:
matrix:
cfg:
- { python-version: '3.9', pkg-version: ""}
- { python-version: '3.10', pkg-version: ""}
- { python-version: '3.10', pkg-version: 'numba==0.55.1' }
- { python-version: '3.10', pkg-version: "pandas>=1.3.0"}
- Removed Sphinx and doctest due to pip-compile error (can add back in later)
- By default unit tests are skipped in Jenkins, but the PiWind tests still trigger
(PR #1112)
Added option to fail loss generation if input files are missing -If the --check-missing-inputs
option is set, a loss analysis will fail if either IL
or RI
is set in the analysis_settings but the oasis files are missing (the input generation was run without acc / ri OED files).
If not set the MDK will still warn when this happens, but not fail.
Warning message
UserWarning: ['IL', 'RI'] are enabled analysis_settings without the generated input files. The 'generate-oasis-files' step should be rerun with account/reinsurance files.
Exception example
$ oasislmf model run --check-missing-inputs
Stating oasislmf command - RunModel
RUNNING: oasislmf.manager.interface
Processing arguments - Creating Oasis Files
Generating Oasis files (GUL=True, IL=False, RIL=False)
RUNNING: oasislmf.lookup.factory.generate_key_files
COMPLETED: oasislmf.lookup.factory.generate_key_files in 0.1s
...
RUNNING: oasislmf.preparation.summaries.write_mapping_file
Oasis files generated: {
"items": "/home/sam/repos/models/piwind/runs/losses-20220909083601/input/items.csv",
"coverages": "/home/sam/repos/models/piwind/runs/losses-20220909083601/input/coverages.csv"
}
['IL', 'RI'] are enabled analysis_settings without the generated input files. The 'generate-oasis-files' step should be rerun with account/reinsurance files.
$ echo $?
1
(PR #1085)
Disable flaky test failures -- Fix for
tests/utils/test_data.py
, which fails intermittently in concurrent unit test runs
Climate Change - Natural Hazard and Storm
- Python
Published by awsbuild over 2 years ago
OasisLMF - 1.26.2
1.26.2
OasisLMF Changelog -- #135 - Implement OED policy coverage terms in Financial Module
- #1057 - Remove sys.exit(1) calls and replace with exceptions
- #1059 - Missing CSV headers in summarycalc.csv when running chunked losses
- #1040 - Builtin lookup - missing feedback when all locations are using unsupported LocPerilsCovered
- #1063 - Fix/pre analysis hook
- #1007 - Parquet to csv comparison script
- #1059 - Missing CSV headers in summarycalc.csv when running chunked losses
- #1067 - Fix/platform run
- #1071 - Feature/param loading
- #1072 - Update/package requirements
- #1070 - Clean up warning messages
- #1075 - setting model_custom_gulcalc disables gulpy
- #1076 - Set ktools to 3.9.2
- #1048 - set modelpy and gulpy as default runtime options
OasisLMF Notes
(PR #1024)
support OED policy coverage terms -- allow fmpy to handle multi-tree structures by using negative agg_id values to indicate a reference to item level
- back-allocate loss at each level during fm (including deductible overlimit and underlimit)
- add new level PolCoverage
(PR #1057)
Fix platform client error handling -Removed calls to sys.exit(1) and replaced them with OasisException
(PR #1060)
Fix for eltcalc.csv and summarycalc.csv outputs in distributed platform -The output kat commands need to be executed in create_bash_outputs instead of create_bash_analysis, otherwise these output files will be overwritten by each chunk execution. The reason for the missing CSV header is incomplete output.
(PR #1062)
add unsupported peril in the keys-errors file -In the keys server, keys with a Peril Id not present in the model's perils covered were simply discarded. To make it clearer why the key was removed, they will now appear in the keys-errors.csv file with the status "noreturn" and message "unsuported peril_id"
(PR #1063)
Updated the pre-analysis hook function to return edited file paths -When calling OasisManager().exposure_pre_analysis( .. )
from the oasis manager, there should be a way to get the list of file paths for the raw and edited exposure files. The function return has been changed to include these in a dictionary.
Example
OasisManager().exposure_pre_analysis(**params)
{
'class': <_class_return>,
'modified': {
"oed_location_csv": "/tmp/tmpaqq_k5pr/location.csv",
"oed_accounts_csv": "/tmp/tmpaqq_k5pr/account.csv",
"oed_info_csv": "/tmp/tmpaqq_k5pr/ri_info.csv",
"oed_scope_csv": "/tmp/tmpaqq_k5pr/ri_scope.csv"
},
'original': {
"raw_oed_location_csv": "/tmp/tmpaqq_k5pr/epa_location.csv",
"raw_oed_accounts_csv": "/tmp/tmpaqq_k5pr/epa_account.csv",
"raw_oed_info_csv": "/tmp/tmpaqq_k5pr/epa_ri_info.csv",
"raw_oed_scope_csv": "/tmp/tmpaqq_k5pr/epa_ri_scope.csv"
}
}
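A minimal usage sketch of the new return value (the import path and the parameters shown are assumptions based on the call above):
from oasislmf.manager import OasisManager

# Hypothetical pre-analysis parameters; real runs pass the EPA module path,
# OED file paths, etc. gathered by the MDK.
params = {"exposure_pre_analysis_module": "custom_epa.py", "oed_location_csv": "location.csv"}
result = OasisManager().exposure_pre_analysis(**params)
edited_loc = result['modified']['oed_location_csv']    # path to the edited exposure
raw_loc = result['original']['raw_oed_location_csv']   # path to the raw input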
(PR #1064)
script to compare parquet and csv file -add script to compare parquet and csv file
(PR #1065)
Fix for kat outputs for distributed execution -- Update to PR https://github.com/OasisLMF/OasisLMF/pull/1060: the output kat process counter didn't match the total number of chunks in a distributed run; instead it defaulted to the number of CPUs on the system.
(PR #1067)
Minor fixes for run API command line -- Fix for model-id flag,
oasislmf api run --model-id 1 --portfolio-id 11 --analysis-settings-json <fpath>
(PR #1071)
Check get_config_profile and environment variables when calling the _param functions -Fix needed for https://github.com/OasisLMF/OasisPlatform/issues/630
- When older oasislmf.json files are processed programmatically, using the _params functions in the platform-2.0 branch, the config_compatibility_profile.json file isn't checked for outdated key names, unlike with the CLI. The update edits the OasisManager._params_<funcName> methods so that outdated keys are replaced, making it equivalent to the CLI argument loading.
- If export OASIS_ENV_OVERRIDE=True is set, check whether OASIS_<param_name> is defined and load the value from the override environment variable.
Example 1 - Config profile
The key source_exposure_file_path was renamed to oed_location_csv; if the old name is given to the function for loading default args, the newer name will be used.
In [x]: OasisManager._params_generate_keys(
...: **{"source_exposure_file_path": "my-location-file-path"}
...: )
Deprecated key(s) in MDK config:
'source_exposure_file_path' loaded as 'oed_location_csv'
Out[x]:
{'oed_location_csv': 'my-location-file-path',
'keys_data_csv': None,
'keys_errors_csv': None,
'keys_format': 'oasis',
'lookup_config_json': None,
'lookup_data_dir': None,
'lookup_module_path': None,
'lookup_complex_config_json': None,
'lookup_num_processes': -1,
'lookup_num_chunks': -1,
'model_version_csv': None,
'user_data_dir': None,
'lookup_multiprocessing': True,
'verbose': False}
Example 2 - override environment variable
export OASIS_ENV_OVERRIDE=True
export OASIS_OED_LOCATION_CSV='override_path'
In [x]: OasisManager._params_generate_keys(
...: **{"source_exposure_file_path": "my-location-file-path"}
...: )
Deprecated key(s) in MDK config:
'source_exposure_file_path' loaded as 'oed_location_csv'
Out[x]:
{'oed_location_csv': 'override_path',
'keys_data_csv': None,
'keys_errors_csv': None,
'keys_format': 'oasis',
'lookup_config_json': None,
'lookup_data_dir': None,
'lookup_module_path': None,
'lookup_complex_config_json': None,
'lookup_num_processes': -1,
'lookup_num_chunks': -1,
'model_version_csv': None,
'user_data_dir': 'foo-path',
'lookup_multiprocessing': True,
'verbose': False}
(PR #1072)
Update numba package pin -- Numba requirement set to numba>=0.55.1
(PR #1073)
Fix warning messages in package runs -- Added minor changes to appease the package deprecation warnings, resulting in cleaner logs when running the MDK.
- Updated the oasislmf deprecated module loader to support sub-module remapping, e.g. oasislmf.preparation.old_module -> oasislmf.preparation.new_module
(PR #1075)
Setting a complex model gulcalc disables gulpy -- if the model_custom_gulcalc config option is set, the gulpy option is set to false
(PR #1076)
Ktools 3.9.2 -- Fix for ktools aalcalc https://github.com/OasisLMF/ktools/pull/311
(PR #1055)
modelpy and gulpy set as default run options -The two new python replacements for getmodel and gulcalc are now the default run options.
To disable either of these, use the command line flags oasislmf model run --modelpy false --gulpy false or add the following to the configuration json oasislmf.json:
{
...
"modelpy": false,
"gulpy": false
}
Climate Change - Natural Hazard and Storm
- Python
Published by awsbuild almost 3 years ago

OasisLMF - 1.26.0
1.26.0
OasisLMF Changelog -- #1018 - Convergence task 11. Set the default number of samples for an analysis from model settings
- #1029 - Class not picked up when using key_server_module_path
- #1005 - in run_ktools use -s flag on kat for ORD elt reports
- #1031 - Feature/group id cleanup
- #1041 - Improve memory usage of gulpy
- #1034 - Distributed Platform Fixes
- #1036 - group id cols updated
- #1037 - use valid buff as look breaker
- #1038 - Fix/parallel chunk script error run errors
- #1042 - Fixed RI outputs issue in platform-2
- #1014 - gulpy: raise error if -r or -c are passed
- #1046 - Memory error while running gulpy on PiWind
- #1050 - Infer the correct mime-type when uploading files to the oasis-platform
- #1051 - Enable output of Average Loss Convergence Table through MDK
- #1055 - Set gulpy and modelpy default run options to True
- #989 - adding numba to stitching function
- #986, #994 - Feature/986 ods tools dyptes
- #991 - Refactor group id seed
- #996 - Intermittent bash exit handler failures
- #999 - Port gulcalc to Python
- #1001 - Keys Lookup allow parameter to be passed to read_csv in build_merge
- #1004 - Minor Fix: arrange requirements in alphabetical order
- #1008 - Lookup fix if message column is missing
- #907 - generate outputs in chosen ORD technology choice (e.g. parquet)
- #1010 - Support optionally using gulpy in the oasislmf model run job
- #1012 - Update/readme release section
- #1013 - Feature/gulpy option in cli test
- #1015 - Bugfix: --random-generator and --logging-level should be parsed as int and not as list in gulpy CLI
- #1017 - Add fix for complex model wrapper calls
OasisLMF Notes
(PR #1026)
Load default samples from model_settings -The model run commands now include a --model-settings-json <file-path>
option. If set, or defined in the oasislmf.json config, this is used to load the default number of samples if not defined in the analysis settings.
- Removed required field number_of_samples from the analysis_settings validation.
- If number_of_samples is not set, the value will be taken from model_default_samples in the model_settings file.
- If neither number_of_samples nor model_default_samples is set, then the run will fail with either:
'number_of_samples' not set in analysis_settings and no default value 'model_default_samples' found in model_settings file.
'number_of_samples' not set in analysis_settings and no model_settings.json file provided for a default value.
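For reference, a model_settings.json fragment providing this fallback might look like the following (the sample count is illustrative):
{
...
"model_default_samples": 10,
...
}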
(PR #1028)
Fix issue with key_server class retrieving -when using key_server_module_path, only the module itself was loaded and set as the KeysServer instead of the KeysServer class inside.
(PR #1030)
Drop kat sort flag -From ktools v3.8.1, kat will attempt to detect table type and sort by event ID in the case of Event Loss Tables (ELTs), and by period ID and then by event ID in the case of Period Loss Tables (PLTs), prior to concatenation. As this is now the default option, the sort flag -s has been dropped. Sorting correctly requires eve to employ the deterministic method to shuffle event IDs. Therefore, if other shuffling methods have been used by eve, the flag -u, which produces unsorted output, is employed.
(PR #1033)
Releasing production-ready gulpy -This PR significantly improves memory usage and performance of gulpy, which is now ready for production.
(PR #1034)
Fixes for Distributed Platform -Fixed parallel analysis chunks for the new platform
The updated platform in https://github.com/OasisLMF/OasisPlatform/tree/platform-2.0 handles ktools parallel workloads using concurrent bash scripts. These were causing interference with each other, where error checking and temporary file cleanup commands would kill scripts from other analysis chunks.
- Disabled the script check_complete calls when running in chunked mode - this is used to detect dropped processes (OOM errors) and would incorrectly trigger when analysis chunks run in parallel.
- When running as an analysis chunk, ktools logs are now placed in <run_dir>/log/<chunk_id> or <run_dir>/log/out for report generation; this separates the logging on a per-chunk basis and stops output in log/stderror.err killing all currently running chunks.
- Replaced cleanup calls for fifo dirs, switched from rm -R -f fifo/* to a more selective find fifo/ \( -name '*P<chunk_id>[^0-9]*' -o -name '*P<chunk_id>' \) -exec rm -R -f {} +
- Fixed summaryxref generation, which was incorrectly called before processing each chunk, causing failures when reading the GUL xref file.
- Fixed chunk storage: the work dir for each batch of events has been moved to {chunk_id}.work to prevent interference, so the outputs for script 1.run_analysis.sh are accumulated and stored in 1.work
-rw-r--r-- 1 root root 3436 May 20 07:33 1.run_analysis.sh
drwxr-xr-x 5 root root 4096 May 20 07:33 1.work
-rw-r--r-- 1 root root 3432 May 20 07:33 2.run_analysis.sh
drwxr-xr-x 5 root root 4096 May 20 07:33 2.work
-rw-r--r-- 1 root root 3432 May 20 07:33 3.run_analysis.sh
drwxr-xr-x 5 root root 4096 May 20 07:33 3.work
- Moved the fmpy financial structures generation from bash to the prepare-losses-generation-directory sub-task. This prevents IL and RI loss generation from crashing with:
KTOOLS_STDERR:
Traceback (most recent call last):
File "/root/.local/bin/fmpy", line 8, in <module>
sys.exit(main())
File "/root/.local/lib/python3.8/site-packages/oasislmf/pytools/fmpy.py", line 34, in main
manager.run(**kwargs)
File "/root/.local/lib/python3.8/site-packages/oasislmf/pytools/fm/manager.py", line 24, in run
return run_synchronous(**kwargs)
File "/root/.local/lib/python3.8/site-packages/oasislmf/pytools/fm/manager.py", line 46, in run_synchronous
run_synchronous_sparse(max_sidx_val, allocation_rule, streams_in=streams_in, files_out = files_out, net_loss=net_loss, **kwargs)
File "/root/.local/lib/python3.8/site-packages/oasislmf/pytools/fm/manager.py", line 95, in run_synchronous_sparse
compute_info, nodes_array, node_parents_array, node_profiles_array, output_array, fm_profile = load_financial_structure(
File "/root/.local/lib/python3.8/site-packages/oasislmf/pytools/fm/financial_structure.py", line 680, in load_financial_structure
nodes_array = np.load(os.path.join(static_path, f'nodes_array_{allocation_rule}.npy'), mmap_mode='r')
File "/root/.local/lib/python3.8/site-packages/numpy/lib/npyio.py", line 445, in load
raise ValueError("Cannot load file containing pickled data "
ValueError: Cannot load file containing pickled data when allow_pickle=False
FATAL: summarycalc: Read error on stream
Fixed loading the fm_aggregation_profile
The FM file generation in the new platform fails when using the fm_aggregation_profile. This was due to the fm_aggregation_profile loading the level keys as type str; a fix was added to check and switch these to int:
File "/root/.local/lib/python3.8/site-packages/oasislmf/computation/generate/files.py", line 274, in run
il_inputs_df = get_il_input_items(
File "/root/.local/lib/python3.8/site-packages/oasislmf/utils/log.py", line 111, in wrapper
result = func(*args, **kwargs)
File "/root/.local/lib/python3.8/site-packages/oasislmf/preparation/il_inputs.py", line 1178, in get_il_input_items
prev_agg_key = [v['field'].lower() for v in fm_aggregation_profile[level_id]['FMAggKey'].values()]
KeyError: 1
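A sketch of the kind of key normalisation described (a hypothetical helper, not the actual code): profile level keys loaded from JSON arrive as strings, so they are cast back to int before lookup.
def normalise_fm_aggregation_profile(profile):
    # Cast string level keys (e.g. "1") back to int so that
    # fm_aggregation_profile[level_id] lookups succeed.
    return {int(level): agg_profile for level, agg_profile in profile.items()}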
(PR #1036)
Changing default group ID columns to PortNumber, AccNumber, and LocNumber -The loc_id default for GROUP_ID_COLS was replaced with PortNumber, AccNumber, and LocNumber in order to make results repeatable.
(PR #1042)
Fixed RI output issue in platform-2 -Fix issue where output_reports
task re-runs bin.prepare_run_directory and fails the execution with:
File "/root/.local/lib/python3.8/site-packages/oasislmf/manager.py", line 93, in interface
return computation_cls(**kwargs).run()
File "/root/.local/lib/python3.8/site-packages/oasislmf/computation/generate/losses.py", line 402, in run
analysis_settings = GenerateLossesDir.run(self)
File "/root/.local/lib/python3.8/site-packages/oasislmf/computation/generate/losses.py", line 224, in run
prepare_run_directory(
File "/root/.local/lib/python3.8/site-packages/oasislmf/utils/log.py", line 111, in wrapper
result = func(*args, **kwargs)
File "/root/.local/lib/python3.8/site-packages/oasislmf/execution/bin.py", line 191, in prepare_run_directory
raise OasisException("Error preparing the 'run' directory: {}".format(e))
oasislmf.utils.exceptions.OasisException: Error preparing the 'run' directory: Destination path '/tmp/run/analysis-38_losses-13df4aa5c74d457c8e2f64800acc62d6/run-data/ri_layers.json' already exists
(PR #1043)
Introducing gulpy -gulpy is now production ready.
We added a document called Introducing gulpy which describes the introduction of gulpy for gulcalc users.
(PR #1053)
Facilitate output of Average Loss Convergence Table (ALCT) -The Average Loss Convergence Table (ALCT) can be generated, and confidence levels can be set, in the analysis_settings.json:
"gul_summaries": [
{
"id": 1,
"ord_output": {
"alt_period": true,
"alct_convergence": true,
"alct_confidence": 0.95
}
}
]
where alct_convergence
produces the ALCT when set to true
and alct_confidence
sets the confidence level for the confidence intervals (default 0.95). Average Loss Table (ALT) output is required to produce the ALCT.
(PR #1055)
modelpy and gulpy set as default run options -The two new python replacements for getmodel and gulcalc are now the default run options.
To disable either of these, use the command line flags oasislmf model run --modelpy false --gulpy false or add the following to the configuration json oasislmf.json:
{
...
"modelpy": false,
"gulpy": false
}
(PR #989)
Release notes numba patch -A standard function that zips three numpy arrays into a single one has been converted to a numba function, speeding up the computation
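A minimal sketch of that kind of conversion (illustrative names and dtypes, not the library's actual function): a pure-Python zip over three arrays becomes an @njit-compiled loop writing into a preallocated array.
import numpy as np
from numba import njit

@njit
def stitch(a, b, c):
    # Combine three equal-length 1-D arrays into a single (n, 3) array
    out = np.empty((a.shape[0], 3), dtype=np.float64)
    for i in range(a.shape[0]):
        out[i, 0] = a[i]
        out[i, 1] = b[i]
        out[i, 2] = c[i]
    return out

# e.g. stitch(np.ones(4), np.zeros(4), np.arange(4.0)) returns a (4, 3) array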
(PR #990)
Replace Dtype files with ods-tools package -- Added ods_tools==2.1.2 and replaced dtype files with calls to fetch dtype from OED spec
- Added a workaround for https://github.com/pandas-dev/pandas/issues/30552
(PR #991)
hashing group IDs -adds a hashing function that will replace the group_id with a hashed ID. This can be enabled by setting the hashed_group_id flag to true
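An illustrative sketch of the idea (not the library's exact implementation): derive a deterministic group_id by hashing each row's group columns (shown here with the PortNumber/AccNumber/LocNumber defaults introduced by PR #1036).
import hashlib
import pandas as pd

def hashed_group_id(df, cols=("PortNumber", "AccNumber", "LocNumber")):
    # Join the key columns into one string per row, hash it, and truncate
    # the digest to fit a signed 32-bit id (an assumption made here).
    key = df[list(cols)].astype(str).agg("|".join, axis=1)
    return key.map(lambda k: int(hashlib.md5(k.encode()).hexdigest(), 16) % (2**31 - 1))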
(PR #997)
Fix for ktools script -- Successful runs were intermittently marked as failed by an issue in the bash exit_handler
(PR #1000)
Add gulpy, the Python version of gulcalc -This PR introduces gulpy, a Python version of gulcalc. At the time of merging this PR, gulpy is functionally equivalent to gulcalc, i.e. the following commands are equivalent:
# with gulcalc         # with gulpy
gulcalc -a0 -S10 -i -  gulpy -a0 -S10
gulcalc -a1 -S20 -i -  gulpy -a1 -S20
gulcalc -a2 -S30 -i -  gulpy -a2 -S30
By default, gulpy uses the Latin Hypercube Sampler algorithm to draw random numbers for the positive sidx samples, which is shown to require fewer samples than the Mersenne Twister used by gulcalc when probing a given probability distribution function.
The following gulcalc command-line arguments were ported to gulpy:
- -a to specify the back-allocation rule.
- -d to print the random values instead of gul.
- -S to specify the sample size.
- -L to specify the loss threshold (only losses larger than the loss threshold are printed to the loss stream).
- -h to print the help: usage and options.
- -v to print the version number. NOTE: it has been renamed to -V or --version. It prints the oasislmf Python package version.
The following gulcalc command-line arguments were not ported to gulpy:
- -R [max random numbers] used to allocate the array for random numbers, default 1,000,000.
- -i [output pipe] - item output. NOTE: by default gulpy prints the items output to stdout, so gulpy is equivalent to gulcalc -i -.
- -c [output pipe] - coverage output.
- -s seed for random number generation (used for debugging).
- -A automatically hashed seed driven random number generation (default).
- -l legacy mechanism driven by random numbers generated dynamically per group - will be removed in future.
- -b benchmark (in development).
- -r use random number file [currently: takes a txt file, not binary as in gulcalc].
The following command-line arguments are new in gulpy:
- --random-generator to specify the random number generator. Options are:
  - 0: for the Mersenne Twister, which implements the same algorithm used in gulcalc;
  - 1: for the Latin Hypercube Sampler (the default).
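For example, a hypothetical run reproducing gulcalc's sampling by selecting the Mersenne Twister generator could look like:
eve 1 1 | getmodel | gulpy -a1 -S100 --random-generator 0 > gul.bin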
(PR #1002)
keys lookup pandas read parameter in build_merge -- allow the user to specify any of the read_csv parameters via the lookup config (in particular, column dtypes can be specified instead of relying on inference)
- fix rtree issue when lat or long columns are null
- fix: remove unused index after the spatial join is done, to allow multi-step geolocation
- fix: rtree issue when the location dataframe is empty
(PR #1008)
Fix needed for CoreLogic custom lookup -- If all lookup results are successful then no "message" column is returned, which fails with
KeyError: "['message'] not in index"
(PR #1009)
Add option to generate parquet output files -To write the output csv files in parquet format, the boolean parquet_format
under the key ord_output
should be set to true in the analysis_settings.json
file as follows:
"ord_output": {
...
"parquet_format": true,
...
}
The following tables can be written in parquet format:
- Average Loss Table (ALT)
- Moment Event Loss Table (MELT)
- Quantile Event Loss Table (QELT)
- Exceedance Probability Table (EPT)
- Per Sample Exceedance Probability Table (PSEPT)
- Moment Period Loss Table (MPLT)
- Quantile Period Loss Table (QPLT)
- Sample Period Loss Table (SPLT)
- Sample Event Loss Table (SELT)
(PR #1011)
Introducing gulpy in the model runner -This PR introduces the possibility to use the recently released gulpy tool, namely the Python version of the gulcalc tool, to compute the ground-up losses.
By default, oasislmf model run keeps using gulcalc. Optionally, gulpy can be used instead of gulcalc by passing the --gulpy flag:
oasislmf model run --gulpy
or
oasislmf model run --gulpy true
(PR #1012)
Updated release cycle notes -- Updated the releases-and-maintenance section in the oasislmf readme.
(PR #1017)
Fix for complex model command -Analysis loss chunks are now generated correctly, and a check was added in case the custom_get_getmodel_cmd is set as a kwarg
Example supplier_model_runner.py
from oasislmf.execution.runner import run as oasislmf_run
from oasislmf.execution.runner import run_analysis as oasislmf_run_analysis_chunk
from oasislmf.execution.runner import run_outputs
CUSTOM_GULCALC_CMD = "CustomGulCalcBinary"
def custom_get_getmodel_cmd(**kwargs):
return <generated complex model cmd>
def run(analysis_settings, **params):
params['custom_get_getmodel_cmd'] = custom_get_getmodel_cmd
params['custom_gulcalc_cmd'] = CUSTOM_GULCALC_CMD
oasislmf_run(analysis_settings, **params)
def run_analysis(**params):
params['_get_getmodel_cmd'] = custom_get_getmodel_cmd
params['custom_gulcalc_cmd'] = CUSTOM_GULCALC_CMD
oasislmf_run_analysis_chunk(**params)
Climate Change - Natural Hazard and Storm
- Python
Published by awsbuild almost 3 years ago

OasisLMF - 1.23.12
1.23.12
OasisLMF Changelog -- #907 - generate outputs in chosen ORD technology choice (e.g. parquet)
OasisLMF Notes
(PR #1052)
Add option to generate parquet output files (Backport) -From original PR: https://github.com/OasisLMF/OasisLMF/pull/1009
To write the output csv files in parquet format, the boolean parquet_format
under the key ord_output
should be set to true in the analysis_settings.json
file as follows:
"ord_output": {
...
"parquet_format": true,
...
}
The following tables can be written in parquet format:
- Average Loss Table (ALT)
- Moment Event Loss Table (MELT)
- Quantile Event Loss Table (QELT)
- Exceedance Probability Table (EPT)
- Per Sample Exceedance Probability Table (PSEPT)
- Moment Period Loss Table (MPLT)
- Quantile Period Loss Table (QPLT)
- Sample Period Loss Table (SPLT)
- Sample Event Loss Table (SELT)
Climate Change - Natural Hazard and Storm
- Python
Published by awsbuild almost 3 years ago

OasisLMF - 1.23.11
- #1039 - Fix/282 events not in occurrence file (Backport)
OasisLMF Notes
(PR #1039)
Ignore events that do not exist in occurrence file when creating summary indexes in aalcalc -From: https://github.com/OasisLMF/ktools/pull/284
It is no longer a requirement for all event IDs to be present in the occurrence file in order to create the summary index files in aalcalc. Should an event ID be encountered that does not exist in the occurrence file, only its offset is recorded.
Climate Change - Natural Hazard and Storm
- Python
Published by awsbuild almost 3 years ago

OasisLMF - 1.26.1rc2
1.26.1rc2
OasisLMF Changelog -- #1017 - Add fix for complex model wrapper calls
- #1018 - Convergence task 11. Set the default number of samples for an analysis from model settings
- #1029 - Class not picked up when using key_server_module_path
- #1005 - in run_ktools use -s flag on kat for ORD elt reports
OasisLMF Notes
(PR #1017)
Fix for complex model command -Analysis loss chunks are now generated correctly, and a check was added in case the custom_get_getmodel_cmd is set as a kwarg
Example supplier_model_runner.py
from oasislmf.execution.runner import run as oasislmf_run
from oasislmf.execution.runner import run_analysis as oasislmf_run_analysis_chunk
from oasislmf.execution.runner import run_outputs
CUSTOM_GULCALC_CMD = "CustomGulCalcBinary"
def custom_get_getmodel_cmd(**kwargs):
return <generated complex model cmd>
def run(analysis_settings, **params):
params['custom_get_getmodel_cmd'] = custom_get_getmodel_cmd
params['custom_gulcalc_cmd'] = CUSTOM_GULCALC_CMD
oasislmf_run(analysis_settings, **params)
def run_analysis(**params):
params['_get_getmodel_cmd'] = custom_get_getmodel_cmd
params['custom_gulcalc_cmd'] = CUSTOM_GULCALC_CMD
oasislmf_run_analysis_chunk(**params)
(PR #1026)
Load default samples from model_settings -The model run commands now include a --model-settings-json <file-path>
option. If set, or defined in the oasislmf.json config, this is used to load the default number of samples if not defined in the analysis settings.
- Removed required field number_of_samples from the analysis_settings validation.
- If number_of_samples is not set, the value will be taken from model_default_samples in the model_settings file.
- If neither number_of_samples nor model_default_samples is set, then the run will fail with either:
'number_of_samples' not set in analysis_settings and no default value 'model_default_samples' found in model_settings file.
'number_of_samples' not set in analysis_settings and no model_settings.json file provided for a default value.
(PR #1028)
Fix issue with key_server class retrieving -when using key_server_module_path, only the module itself was loaded and set as the KeysServer instead of the KeysServer class inside.
(PR #1030)
Drop kat sort flag -From ktools v3.8.1, kat will attempt to detect table type and sort by event ID in the case of Event Loss Tables (ELTs), and by period ID and then by event ID in the case of Period Loss Tables (PLTs), prior to concatenation. As this is now the default option, the sort flag -s has been dropped. Sorting correctly requires eve to employ the deterministic method to shuffle event IDs. Therefore, if other shuffling methods have been used by eve, the flag -u, which produces unsorted output, is employed.
Climate Change - Natural Hazard and Storm
- Python
Published by awsbuild almost 3 years ago

OasisLMF - 1.23.10
- #1029 - Class not picked up when using key_server_module_path
OasisLMF Notes
(PR #1028)
Fix issue with key_server class retrieving -when using key_server_module_path, only the module itself was loaded and set as the KeysServer instead of the KeysServer class inside.
Climate Change - Natural Hazard and Storm
- Python
Published by awsbuild almost 3 years ago

OasisLMF - 1.26.1rc1
OasisLMF Notes
Pinned numpy package
Updated package requirements to solve sub-dependency issue with numba https://github.com/OasisLMF/OasisLMF/commit/a1019a4e78f4da3cfd99b35b516c5d0a2c41425f
Climate Change - Natural Hazard and Storm
- Python
Published by awsbuild about 3 years ago

OasisLMF - 1.26.0rc1
1.26.0rc1
OasisLMF Changelog -- #996 - Intermittent bash exit handler failures
- #999 - Port gulcalc to Python
- #1001 - Keys Lookup allow parameter to be passed to read_csv in build_merge
- #1004 - Minor Fix: arrange requirements in alphabetical order
- #1008 - Lookup fix if message column is missing
- #907 - generate outputs in chosen ORD technology choice (e.g. parquet)
- #1010 - Support optionally using gulpy in the oasislmf model run job
- #1012 - Update/readme release section
- #1013 - Feature/gulpy option in cli test
- #1015 - Bugfix: --random-generator and --logging-level should be parsed as int and not as list in gulpy CLI
- #989 - adding numba to stitching function
- #986, #994 - Feature/986 ods tools dyptes
- #991 - Refactor group id seed
OasisLMF Notes
(PR #997)
Fix for ktools script -- Successful runs were intermittently marked as failed by an issue in the bash exit_handler
(PR #1000)
Add gulpy, the Python version of gulcalc -This PR introduces gulpy, a Python version of gulcalc. At the time of merging this PR, gulpy is functionally equivalent to gulcalc, i.e. the following commands are equivalent:
# with gulcalc         # with gulpy
gulcalc -a0 -S10 -i -  gulpy -a0 -S10
gulcalc -a1 -S20 -i -  gulpy -a1 -S20
gulcalc -a2 -S30 -i -  gulpy -a2 -S30
By default, gulpy uses the Latin Hypercube Sampler algorithm to draw random numbers for the positive sidx samples, which is shown to require fewer samples than the Mersenne Twister used by gulcalc when probing a given probability distribution function.
The following gulcalc command-line arguments were ported to gulpy:
- -a to specify the back-allocation rule.
- -d to print the random values instead of gul.
- -S to specify the sample size.
- -L to specify the loss threshold (only losses larger than the loss threshold are printed to the loss stream).
- -h to print the help: usage and options.
- -v to print the version number. NOTE: it has been renamed to -V or --version. It prints the oasislmf Python package version.
The following gulcalc command-line arguments were not ported to gulpy:
- -R [max random numbers] used to allocate the array for random numbers, default 1,000,000.
- -i [output pipe] - item output. NOTE: by default gulpy prints the items output to stdout, so gulpy is equivalent to gulcalc -i -.
- -c [output pipe] - coverage output.
- -s seed for random number generation (used for debugging).
- -A automatically hashed seed driven random number generation (default).
- -l legacy mechanism driven by random numbers generated dynamically per group - will be removed in future.
- -b benchmark (in development).
- -r use random number file [currently: takes a txt file, not binary as in gulcalc].
The following command-line arguments are new in gulpy:
- --random-generator to specify the random number generator. Options are:
  - 0: for the Mersenne Twister, which implements the same algorithm used in gulcalc;
  - 1: for the Latin Hypercube Sampler (the default).
(PR #1002)
keys lookup pandas read parameter in build_merge -- allow the user to specify any of the read_csv parameters via the lookup config (in particular, column dtypes can be specified instead of relying on inference)
- fix rtree issue when lat or long columns are null
- fix: remove unused index after the spatial join is done, to allow multi-step geolocation
- fix: rtree issue when the location dataframe is empty
(PR #1008)
Fix needed for CoreLogic custom lookup -- If all lookup results are successful then no "message" column is returned, which fails with
KeyError: "['message'] not in index"
(PR #1009)
Add option to generate parquet output files -To write the output csv files in parquet format, the boolean parquet_format
under the key ord_output
should be set to true in the analysis_settings.json
file as follows:
"ord_output": {
...
"parquet_format": true,
...
}
The following tables can be written in parquet format:
- Average Loss Table (ALT)
- Moment Event Loss Table (MELT)
- Quantile Event Loss Table (QELT)
- Exceedance Probability Table (EPT)
- Per Sample Exceedance Probability Table (PSEPT)
- Moment Period Loss Table (MPLT)
- Quantile Period Loss Table (QPLT)
- Sample Period Loss Table (SPLT)
- Sample Event Loss Table (SELT)
(PR #1011)
Introducing gulpy in the model runner -This PR introduces the possibility to use the recently released gulpy tool, namely the Python version of the gulcalc tool, to compute the ground-up losses.
By default, oasislmf model run keeps using gulcalc. Optionally, gulpy can be used instead of gulcalc by passing the --gulpy flag:
oasislmf model run --gulpy
or
oasislmf model run --gulpy true
(PR #1012)
Updated release cycle notes -- Updated the releases-and-maintenance section in the oasislmf readme.
(PR #989)
Release notes numba patch -A standard function that zips three numpy arrays into a single one has been converted to a numba function, speeding up the computation
(PR #990)
Replace Dtype files with ods-tools package -- Added ods_tools==2.1.2 and replaced dtype files with calls to fetch dtype from OED spec
- Added a workaround for https://github.com/pandas-dev/pandas/issues/30552
(PR #991)
hashing group IDs -adds a hashing function that will replace the group_id with a hashed ID. This can be enabled by setting the hashed_group_id flag to true
Climate Change - Natural Hazard and Storm
- Python
Published by sambles about 3 years ago

OasisLMF - 1.23.9
- #1001 - Keys Lookup allow parameter to be passed to read_csv in build_merge
OasisLMF Notes
(PR #1002)
keys lookup pandas read parameter in build_merge -- allow the user to specify any of the read_csv parameters via the lookup config (in particular, column dtypes can be specified instead of relying on inference)
- fix rtree issue when lat or long columns are null
- fix: remove unused index after the spatial join is done, to allow multi-step geolocation
- fix: rtree issue when the location dataframe is empty
Climate Change - Natural Hazard and Storm
- Python
Published by awsbuild about 3 years ago

OasisLMF - 1.23.8
1.23.8
OasisLMF Changelog -OasisLMF Notes
(PR #988)
fix size of intermediary dense array -with the introduction of sidx -5, the intermediary dense array was too small and overlapped with sidx (num_sample - 1).
The size is increased by 2 to fix the issue
(PR #973)
Handle Max loss -4 and Chance of Loss -5 -- create a new category of sidx that will be passed through and added to the output stream
- handle sidx -4 (Max Loss) like other computed sidx
Climate Change - Natural Hazard and Storm
- Python
Published by awsbuild about 3 years ago

OasisLMF - 1.15.24
1.15.24
OasisLMF Changelog -- #1006 - fmpy ignore sidx -4, -5
OasisLMF Notes
(PR #1006)
ignore new sidx -4 and -5 from gul_calc -
Climate Change - Natural Hazard and Storm
- Python
Published by awsbuild about 3 years ago

OasisLMF - 1.23.7
1.23.7
OasisLMF Changelog -OasisLMF Notes
(PR #997)
Fix for ktools script -- Successful runs were intermittently marked as failed by an issue in the bash exit_handler
(PR #998)
Fix for memory blowup when grouping categorical data -Added observed=True
to pandas groupby calls in file generation, from PR https://github.com/OasisLMF/OasisLMF/pull/990
Climate Change - Natural Hazard and Storm
- Python
Published by awsbuild about 3 years ago

OasisLMF - 1.25.1
1.25.1
OasisLMF Changelog -- #987 - Add a Python implementation of the
cdftocsv
tool - #983 - Unexpected output change in PiWind testing
OasisLMF Notes
(PR #985)
Added cdftocsvpy tool -A Python implementation of the cdftocsv tool is now available: it is called cdftocsvpy.
Like cdftocsv, cdftocsvpy prints the cdf produced by getmodel to stdout in csv format.
It has the same input/output interface as cdftocsv. It can be used with a stream as an input, e.g.:
eve 1 1 | getmodel | cdftocsv | head
produces:
event_id,areaperil_id,vulnerability_id,bin_index,prob_to,bin_mean
1,1,9,1,0.268378,0.000000
1,1,9,2,0.318824,0.125000
1,1,9,3,0.665534,0.375000
1,1,9,4,0.818048,0.625000
1,1,9,5,0.933643,0.875000
1,1,9,6,1.000000,1.000000
1,1,10,1,0.048263,0.000000
1,1,10,2,0.548144,0.125000
1,1,10,3,0.597388,0.375000
By passing the -s optional argument, the header is not printed, e.g.:
eve 1 1 | getmodel | cdftocsv -s | head
produces:
1,1,9,1,0.268378,0.000000
1,1,9,2,0.318824,0.125000
1,1,9,3,0.665534,0.375000
1,1,9,4,0.818048,0.625000
1,1,9,5,0.933643,0.875000
1,1,9,6,1.000000,1.000000
1,1,10,1,0.048263,0.000000
1,1,10,2,0.548144,0.125000
1,1,10,3,0.597388,0.375000
(PR #988)
fix size of intermediary dense array -with the introduction of sidx -5, the intermediary dense array was too small and overlapped with sidx (num_sample - 1).
The size is increased by 2 to fix the issue
(PR #982)
Update CI package tests to check the PiWind output on each commit -Replaced the MDK run check with a platform output check by running a worker test & build
Climate Change - Natural Hazard and Storm
- Python
Published by awsbuild about 3 years ago

OasisLMF - 1.23.6
1.23.6
OasisLMF Changelog -- #983 - Fixed unexpected output change from PiWind testing
- #983 - Updated CI testing and moved PiWind output checks from platform to oasislmf
OasisLMF Notes
(PR #984)
Reverted (PR 973) and applied pandas 1.4.0 fix in its own commit -Due to issue #983, the commit https://github.com/OasisLMF/OasisLMF/commit/40888a1deb24c8eb0e75bb6c9845b8ee2acb834d has been reverted and replaced with https://github.com/OasisLMF/OasisLMF/commit/e6712a45bdb2369d39cf1ef9ea954e5b50eeb110
(PR #982)
Update CI package tests to check the PiWind output on each commit -Replaced the MDK run check with a platform output check by running a worker test & build
Climate Change - Natural Hazard and Storm
- Python
Published by awsbuild about 3 years ago

OasisLMF - 1.25.0
1.25.0
OasisLMF Changelog -- #961 - Feature/docs
- #973 - manage -4 and pass through -5 sidx
- #963 - Add supported OED versions to model metadata (model_settings.json)
- #978 - Model Schema update - replace
numeric_parameters
withinteger_parameters
- #980 - Feature/976 quantile
- #981 - Footprint server profiling
OasisLMF Notes
(PR #961)
OED v2 documentation -- Updated financial fields supported list
- OED validation guidelines added
(PR #973)
Handle Max loss -4 and Chance of Loss -5 -- create a new category of sidx that will be passed through and added to the output stream
- handle sidx -4 (Max Loss) like other computed sidx
(PR #974)
Added OED versions to model metadata -Example:
{
..
"data_settings":{
"supported_oed_versions": ["1.5", "2.0"],
..
}
}
(PR #979)
Added integer_parameters to model settings schema -New option for integer only model setting parameters
(PR #980)
Create quantile file at run directory preparation -- New option in analysis_settings to set quantile points:
"quantiles": [0.0, 0.5, 1.0],
this overrides files set by a model supplier.
- If no quantile settings are given by the user or supplier, a default file is created based on quantile.csv
(PR #981)
Parquet support for Footprint files -modelpy now supports parquet files without needing a TCP server for footprint data. In order to utilise this, there must be a footprint.parquet directory in the static folder. The footprint.parquet directory hosts multiple parquet files of partitioned data based on event_id. You can convert your footprint.bin file to parquet by running the convertbintoparquet command in the same directory as the footprint.bin file.
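For example (paths illustrative), converting an existing binary footprint inside a model's static directory:
cd <model_data>/static
convertbintoparquet  # reads the footprint.bin here and writes the footprint.parquet/ directory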
Climate Change - Natural Hazard and Storm
- Python
Published by awsbuild about 3 years ago

OasisLMF - 1.23.5
1.23.5
OasisLMF Changelog -- #978 - Model Schema update - replace
numeric_parameters
withinteger_parameters
- #980 - Feature/976 quantile
- #975 - Reinsurance prep won't run with pandas 1.4.0
OasisLMF Notes
(PR #979)
Added integer_parameters to model settings schema -New option for integer only model setting parameters
(PR #980)
Create quantile file at run directory preparation -- New option in analysis_settings to set quantile points:
"quantiles": [0.0, 0.5, 1.0],
this overrides files set by a model supplier.
- If no quantile settings are given by the user or supplier, a default file is created based on quantile.csv
(PR #973)
Handle Max loss -4 and Chance of Loss -5 -- create a new category of sidx that will be passed through and added to the output stream
- handle sidx -4 (Max Loss) like other computed sidx
Climate Change - Natural Hazard and Storm
- Python
Published by awsbuild about 3 years ago

OasisLMF - 1.15.23
1.15.23
OasisLMF Changelog -OasisLMF Notes
(PR #977)
Update Ktools to v3.7.4 -Requested update to fix the issue #259 - Missing header record when no type 1 losses in leccalc
(PR #964)
Event ID list in Analysis Settings -Allows a list of event ids to be passed in the analysis settings file, which will be used as a priority over the default or selected event set file.
Climate Change - Natural Hazard and Storm
- Python
Published by awsbuild about 3 years ago

OasisLMF - 1.23.2
1.23.2
OasisLMF Changelog -- #950 - allow event subset to be passed in analysis settings
- #963 - Add supported OED versions to model metadata (model_settings.json)
OasisLMF Notes
(PR #964)
Event ID list in Analysis Settings -Allows a list of event ids to be passed in the analysis settings file, which will be used as a priority over the default or selected event set file.
(PR #974)
Added OED versions to model metadata -Example:
{
..
"data_settings":{
"supported_oed_versions": ["1.5", "2.0"],
..
}
}
Climate Change - Natural Hazard and Storm
- Python
Published by awsbuild over 3 years ago

OasisLMF - 1.24.0
1.24.0
OasisLMF Changelog -- #962 - Prepare 1.23.0 for LTS
- #950 - allow event subset to be passed in analysis settings
- #965 - Client Fix for update in platform 2.0
- #966 - Footprint server
- #881 - Enable the use of summary index files by ktools component aalcalc
OasisLMF Notes
(PR #962)
Prepare 1.23.0 for LTS -Updated the Jenkins build script with changes from backports/1.15.x
(PR #964)
Event ID list in Analysis Settings -Allows a list of event ids to be passed in the analysis settings file, which will be used as a priority over the default or selected event set file.
(PR #966)
TCP server that serves footprint data between getmodel processes [Experimental] -This feature is a Python TCP server that serves footprint data between getmodel processes. Once the specific data is served to the getmodel process, it is deleted in the server process in order to reduce the amount of memory required by avoiding duplication.
To enable this on command line add the flags --modelpy
and --model-py-server
Or via the configuration file oasislmf.json
{
"modelpy": True,
"model_py_server": True,
...
}
(PR #970)
Summary index files copied to aalcalc work directory by genbash -The ktools component summarycalc
produces summary index files if the -m
argument is issued, allowing for faster look up speeds and a reduction in memory use by downstream components. This has already been accomplished for ktools components leccalc
and ordleccalc
(from MDK v1.17.0, see PR https://github.com/OasisLMF/OasisLMF/pull/845). By copying these summary index files to the aalcalc
work directory, a similar reduction in memory use and decrease in run time is achieved by the ktools component aalcalc
.
Climate Change - Natural Hazard and Storm
- Python
Published by awsbuild over 3 years ago

OasisLMF - 1.23.1
1.23.1
OasisLMF Changelog -OasisLMF Notes
(PR #962)
Prepare 1.23.0 for LTS -Updated the Jenkins build script with changes from backports/1.15.x
(PR #964)
Event ID list in Analysis Settings -Allows a list of event ids to be passed in the analysis settings file, which will be used as a priority over the default or selected event set file.
Climate Change - Natural Hazard and Storm
- Python
Published by sambles over 3 years ago

OasisLMF - 1.15.22
- #950 - allow event subset to be passed in analysis settings
OasisLMF Notes
(PR #964)
Event ID list in Analysis Settings -Allows a list of event ids to be passed in the analysis settings file, which will be used as a priority over the default or selected event set file.
Climate Change - Natural Hazard and Storm
- Python
Published by awsbuild over 3 years ago

OasisLMF - 1.23.0
1.23.0
OasisLMF Changelog -- #762, #915, #838, #839 - Feature/oed2tests
- #901 - fmpy: areaperil_id 8 bytes support
- #830 - Step policies: add new calcrule (calcrule 28 + limit)
- #903 - Generate Quantile Event Loss Table (QELT) and Quantile Period Loss Table (QPLT)
- #942 - Option 'lookup_multiprocessing' not read from config file.
- #944 - Loss is not init to 0 for step policy
- #946 - Pymodel optimize
- #948 - Fix/platform client error
- #916 - stashing
- #953 - removing memory map attribute
- #858 - support parquet for OED
- #955 - Update the model references for consistency
- #959 - Replace refs to getmodelpy with modelpy
OasisLMF Notes
(PR #954)
Upgrade to OED2 -Support for OED v2.0.0 exposure input format, including
- OEDVersion field in all files (optional)
- CondTag field added to OED location and account
- CondNumber field removed from OED location
- RiskLevel field moved from Reinsurance Scope to Reinsurance Info
New policy conditions features supported
- hierarchical nested conditions
- policy restrictions
Validation test suite upgraded to OED 2.0.0 input format
(PR #900)
fmpy: areaperil_id 8 bytes support -allow areaperil_id to be passed as an 8-byte unsigned int, via the environment variable AREAPERIL_TYPE as a numpy dtype string.
The default is 4 bytes:
- export AREAPERIL_TYPE=u8 # for 8 bytes
- export AREAPERIL_TYPE=u4 # for 4 bytes
(PR #933)
Step policy 28 improvement -adds new calcrule 37, equivalent to 28 + limit.
Creates a specific calcrule for conditional coverage loss (was 28 with payout_start = 0)
(PR #903)
Generate Quantile Event Loss Table (QELT) and Quantile Period Loss Table (QPLT) -The following Open Results Data (ORD) tables can be generated:
Table Name | analysis_settings file flag |
---|---|
Quantile Event Loss Table (QELT) | elt_quantile |
Quantile Period Loss Table (QPLT) | plt_quantile |
Example analysis_settings
"gul_summaries": [
{
"id": 1,
"ord_output": {
"plt_quantile": true,
"elt_quantile": true
}
}
]
(PR #943)
Fix for missing options in CLI -Added lookup_multiprocessing, ktools_legacy_stream and model_custom_gulcalc to the MDK command line and config as valid options. These were previously only function parameters.
(PR #945)
Fix to step policies in fmpy -Fixes numerical issues in gross losses for all step policy calcrules. Losses were not being initialised between events
(PR #948)
Fixed platform client, incorrect error message -Logging in with an invalid user incorrectly reported Authentication Error: {"Detail":"invalid refresh token"}
(PR #916)
Release notes feature title -... Release notes description / summary
... Any text between these two tags will be automatically pulled into the platform release notes
(PR #890)
OED parquet format support -Allow parquet format to be used for OED files for model and exposure command
(PR #957)
Updated settings schema -analysis_settings.json
- Renamed model_version_id -> model_name_id
- Renamed module_supplier_id -> model_supplier_id
model_settings.json
- Added model_default_samples
- Added number_of_events to the event_set section
- Added option all to the user_for field
- Added stepsize to float_parameters
Climate Change - Natural Hazard and Storm
- Python
Published by awsbuild over 3 years ago

OasisLMF - 1.15.21
- #948 - Fix/platform client error
OasisLMF Notes
(PR #948)
Fixed platform client, incorrect error message -Logging in with an invalid user incorrectly reported Authentication Error: {"Detail":"invalid refresh token"}
Update Ktools to version v3.7.2
Updated the ktools binaries to fix an apple silicon build issue https://github.com/OasisLMF/ktools/pull/253
Climate Change - Natural Hazard and Storm
- Python
Published by awsbuild over 3 years ago

OasisLMF - 1.15.20
- #944 - Loss is not init to 0 for step policy
OasisLMF Notes
(PR #947)
Fix to step policies in fmpy -Fixes numerical issues in gross losses for all step policy calcrules. Losses were not being initialised between events
Climate Change - Natural Hazard and Storm
- Python
Published by awsbuild over 3 years ago

OasisLMF - 1.22.0
1.22.0
OasisLMF Changelog -- #914 - numba and numpy version incompatibility
- #930 - Fix/update dependencies
- #857 - getmodel revamping
- #901 - fmpy: areaperil_id 8 bytes support
- #931 - Disable memory map for non-utf8 encoding
- #903 - Generate Quantile Event Loss Table (QELT) and Quantile Period Loss Table (QPLT)
- #935 - fmpy:ignore sidx < -3
- #936 - Bash options overridden when running ktools in subprocess
- #913 - Update API platform client
- #858 - support parquet for OED
- #916 - stashing
- #917 - High memory use in generating dummy model
- #829 - Step policies: support files with both step and non-step policies
- #920 - conditions for multi-layer accounts file generation
- #884 - OasisLMF install fails on OSX Catalina because of ktools installation
- #739, #740 - Dummy model occurrence file generation supports repeated events over time and dummy model files are split into static and input directories
- #924 - Non UTF-8 portfolio causes model run to crash
OasisLMF Notes
(PR #929)
Fix requirement clash for numba -Pinned the numpy version to solve install issue with the latest version of numba
numba 0.54.1 requires numpy<1.21,>=1.17, but you'll have numpy 1.21.3 which is incompatible.
(PR #930)
Updated requirements file -- Updated dependencies
- https://github.com/OasisLMF/OasisLMF/pull/925
(PR #899)
Optimise Python based getmodel -Improved the performance of the python implementation of getmodel. Can be invoked on the command line using oasislmf model run --getmodelpy
or set in a configuration file using "getmodelpy": true
.
Note: compressed footprint files are not currently supported
(PR #900)
fmpy: areaperil_id 8 bytes support -allow areaperil_id to be passed as an 8-byte unsigned int, via the environment variable AREAPERIL_TYPE as a numpy dtype string.
The default is 4 bytes:
- export AREAPERIL_TYPE=u8 # for 8 bytes
- export AREAPERIL_TYPE=u4 # for 4 bytes
(PR #931)
Fixed non-utf8 encoding error -Reading OED files with memory_map
enabled causes an encoding error
(PR #903)
Generate Quantile Event Loss Table (QELT) and Quantile Period Loss Table (QPLT) -The following Open Results Data (ORD) tables can be generated:
Table Name | analysis_settings file flag |
---|---|
Quantile Event Loss Table (QELT) | elt_quantile |
Quantile Period Loss Table (QPLT) | plt_quantile |
Example analysis_settings
"gul_summaries": [
{
"id": 1,
"ord_output": {
"plt_quantile": true,
"elt_quantile": true
}
}
]
(PR #935)
fmpy: ignore sidx < -3 -
(PR #937)
Fix error guard and support older bash versions -- Fixed running the ktools script on older bash version, added compatibility checks to disable unsupported features and print warnings.
- Fixed script options getting overridden by python subprocess calls #936
Example - Bash v4.0
$ bash_4 --version
GNU bash, version 4.0.38(1)-release (x86_64-unknown-linux-gnu)
$ bash_4 run_ktools.sh
WARNING: Unable to set inherit_errexit. Possibly unsupported by this shell, Subprocess failures may not be detected.
WARNING: logging disabled, bash version '4.0.38(1)-release' is not supported, minimum requirement is bash v4.4
[OK] eve
[OK] getmodel
[OK] gulcalc
[OK] fmcalc
[OK] summarycalc
[OK] eltcalc
[OK] aalcalc
[OK] leccalc
Run Completed
(PR #913)
Updated the Oasis Platform client -Added/updated the following endpoints missing from the APIClient
class
analyses/{id}/cancel_generate_inputs
analyses/{id}/cancel_analysis_run
analyses/{id}/cancel
analyses/{id}/copy
analyses/{id}/storage_links
portfolio/{id}/storage_links
server_info/
Platform 2.0
models/{id}/chunking_configuration
models/{id}/scaling_configuration
(PR #890)
OED parquet format support -Allow parquet format to be used for OED files for model and exposure command
(PR #916)
Release notes feature title -... Release notes description / summary
... Any text between these two tags will be automatically pulled into the platform release notes
(PR #918)
Reduce memory use when generating dummy model data -When generating very large dummy model files, the generation process is killed with an out-of-memory error. By introducing a for loop in the write_file()
method, the data is effectively written to the file in chunks, reducing the memory use.
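A toy sketch of the chunking idea (a hypothetical helper, not the actual write_file() implementation):
def write_in_chunks(path, rows, chunk_size=100_000):
    # 'rows' is an iterable of already-formatted lines (newline included);
    # writing in fixed-size batches avoids materialising everything at once.
    with open(path, "w") as f:
        buffer = []
        for row in rows:
            buffer.append(row)
            if len(buffer) >= chunk_size:
                f.writelines(buffer)
                buffer.clear()
        if buffer:
            f.writelines(buffer)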
(PR #919)
Step policies: support files with both step and non-step policies -integrate step policy key cov_agg_id completely with the main tree structure in order to support ODS files with step and non step policies
(PR #921)
fix issue in conditions for multi-layer accounts file generation -
(PR #922)
MacOS files ktools build from source -The configure step was missing the flag --enable-osx
when building ktools, the setup.py file has been updated to fix that.
(PR #923)
Dummy model occurrence file generation supports repeated events over time -Events can occur multiple times over multiple periods in the occurrence file. The number of periods per event is modelled by sampling from a truncated normal distribution with mean and standard deviations given by the command line arguments --periods-per-event-mean
and --periods-per-event-stddev
respectively. Default values are mean = 1
and standard deviation = 0.0
. The lower tail of the distribution is truncated at 0.5 and the cumulative distribution function is given by:
F(x) = [Φ(g(x)) - Φ(g(a))] / [Φ(g(b)) - Φ(g(a))]
where
g(y) = (y - μ) / σ
Φ(g(y)) = 1/2 * (1 + erf(g(y) / √2))
lower boundary a = 0.5, upper boundary b = ∞, mean μ and standard deviation σ.
For example, mean = 1
and standard deviation = 0.0
creates the following occurrence.bin
file:
$ occurrencetocsv < occurrence.bin
event_id,period_no,occ_year,occ_month,occ_day
1,7,7,10,11
2,1,1,9,17
3,1,1,8,6
When mean = 5
and standard deviation = 3.0
the following occurrence.bin
file is created:
event_id,period_no,occ_year,occ_month,occ_day
1,3,3,10,1
1,1,1,9,1
1,6,6,9,22
1,7,7,1,6
1,2,2,11,7
1,7,7,6,23
1,5,5,1,24
2,9,9,5,5
2,9,9,1,15
2,3,3,3,17
3,10,10,6,7
3,4,4,5,26
3,1,1,10,5
3,10,10,10,3
3,4,4,3,20
3,9,9,4,16
3,2,2,12,9
3,8,8,1,14
3,2,2,2,17
3,8,8,3,19
3,6,6,3,29
Dummy model files are split into static and input directories
Dummy model files are split into static
and input
directories, as opposed to being written in the parent target directory, when using the commands:
$ oasislmf test model generate-model-files
$ oasislmf test model generate-oasis-files
This should make it easier to run tests directly on these files with the ktools
components.
(PR #926)
Fix loading of non utf-8 csv file with potential FlexiLocZZZ column. -In the case of non utf-8 files that use FlexiLocZZZ fields in their col_dtypes, the encoding is recognized but not used when loading the header. This leads to an encoding error.
Climate Change - Natural Hazard and Storm
- Python
Published by awsbuild over 3 years ago

OasisLMF - 1.15.19
1.15.19
OasisLMF Changelog -OasisLMF Notes
(PR #938)
Fix error guard and support older bash versions -- Fixed running the ktools script on older bash version, added compatibility checks to disable unsupported features and print warnings.
- Fixed script options getting overridden by python subprocess calls #936
Example - Bash v4.0
$ bash_4 --version
GNU bash, version 4.0.38(1)-release (x86_64-unknown-linux-gnu)
$ bash_4 run_ktools.sh
WARNING: Unable to set inherit_errexit. Possibly unsupported by this shell, Subprocess failures may not be detected.
WARNING: logging disabled, bash version '4.0.38(1)-release' is not supported, minimum requirement is bash v4.4
[OK] eve
[OK] getmodel
[OK] gulcalc
[OK] fmcalc
[OK] summarycalc
[OK] eltcalc
[OK] aalcalc
[OK] leccalc
Run Completed
(PR #932)
Backported Fix - non-utf8 encoding error -Reading OED files with memory_map
enabled causes an encoding error
Climate Change - Natural Hazard and Storm
- Python
Published by awsbuild over 3 years ago

OasisLMF - 1.22.0rc2
1.22.0rc2
OasisLMF Changelog -- #914 - numba and numpy version incompatibility
- #930 - Fix/update dependencies
- #857 - getmodel revamping
- #901 - fmpy: areaperil_id 8 bytes support
- #903 - Generate Quantile Event Loss Table (QELT) and Quantile Period Loss Table (QPLT)
- #913 - Update API platform client
- #858 - support parquet for OED
- #916 - stashing
- #917 - High memory use in generating dummy model
- #829 - Step policies: support files with both step and non-step policies
- #920 - conditions for multi-layer accounts file generation
- #884 - OasisLMF install fails on OSX Catalina because of ktools installation
- #739, #740 - Dummy model occurrence file generation supports repeated events over time and dummy model files are split into static and input directories
- #924 - Non UTF-8 portfolio causes model run to crash
OasisLMF Notes
(PR #929)
Fix requirement clash for numba -Pinned the numpy version to solve install issue with the latest version of numba
numba 0.54.1 requires numpy<1.21,>=1.17, but you'll have numpy 1.21.3 which is incompatible.
(PR #930)
Updated requirements file -- Updated dependencies
- https://github.com/OasisLMF/OasisLMF/pull/925
(PR #899)
Optimise Python based getmodel -Improved the performance of the python implementation of getmodel. Can be invoked on the command line using oasislmf model run --getmodelpy
or set in a configuration file using "getmodelpy": true
.
Note: compressed footprint files are not currently supported
(PR #900)
fmpy: areaperil_id 8 bytes support -allow areaperil_id to be passed as an 8-byte unsigned int, via the environment variable AREAPERIL_TYPE as a numpy dtype string.
The default is 4 bytes:
- export AREAPERIL_TYPE=u8 # for 8 bytes
- export AREAPERIL_TYPE=u4 # for 4 bytes
(PR #903)
Generate Quantile Event Loss Table (QELT) and Quantile Period Loss Table (QPLT) -The following Open Results Data (ORD) tables can be generated:
Table Name | analysis_settings file flag |
---|---|
Quantile Event Loss Table (QELT) | elt_quantile |
Quantile Period Loss Table (QPLT) | plt_quantile |
Example analysis_settings
"gul_summaries": [
{
"id": 1,
"ord_output": {
"plt_quantile": true,
"elt_quantile": true
}
}
]
(PR #913)
Updated the Oasis Platform client -Added/updated the following endpoints missing from the APIClient
class
analyses/{id}/cancel_generate_inputs
analyses/{id}/cancel_analysis_run
analyses/{id}/cancel
analyses/{id}/copy
analyses/{id}/storage_links
portfolio/{id}/storage_links
server_info/
Platform 2.0
models/{id}/chunking_configuration
models/{id}/scaling_configuration
(PR #890)
OED parquet format support -Allow parquet format to be used for OED files for model and exposure command
(PR #916)
Release notes feature title -... Release notes description / summary
... Any text between these two tags will be automatically pulled into the platform release notes
(PR #918)
Reduce memory use when generating dummy model data -When generating very large dummy model files, the generation process is killed with an out-of-memory error. By introducing a for loop in the write_file()
method, the data is effectively written to the file in chunks, reducing the memory use.
(PR #919)
Step policies: support files with both step and non-step policies -integrate step policy key cov_agg_id completely with the main tree structure in order to support ODS files with step and non step policies
(PR #921)
fix issue in conditions for multi-layer accounts file generation -
(PR #922)
MacOS files ktools build from source -The configure step was missing the flag --enable-osx
when building ktools, the setup.py file has been updated to fix that.
(PR #923)
Dummy model occurrence file generation supports repeated events over time -Events can occur multiple times over multiple periods in the occurrence file. The number of periods per event is modelled by sampling from a truncated normal distribution with mean and standard deviations given by the command line arguments --periods-per-event-mean
and --periods-per-event-stddev
respectively. Default values are mean = 1
and standard deviation = 0.0
. The lower tail of the distribution is truncated at 0.5 and the cumulative distribution function is given by:
F(x) = [Φ(g(x)) - Φ(g(a))] / [Φ(g(b)) - Φ(g(a))]
where
g(y) = (y - μ) / σ
Φ(g(y)) = 1/2 * (1 + erf(g(y) / √2))
lower boundary a = 0.5, upper boundary b = ∞, mean μ and standard deviation σ.
For example, mean = 1
and standard deviation = 0.0
creates the following occurrence.bin
file:
$ occurrencetocsv < occurrence.bin
event_id,period_no,occ_year,occ_month,occ_day
1,7,7,10,11
2,1,1,9,17
3,1,1,8,6
When mean = 5 and standard deviation = 3.0 the following occurrence.bin file is created:
event_id,period_no,occ_year,occ_month,occ_day
1,3,3,10,1
1,1,1,9,1
1,6,6,9,22
1,7,7,1,6
1,2,2,11,7
1,7,7,6,23
1,5,5,1,24
2,9,9,5,5
2,9,9,1,15
2,3,3,3,17
3,10,10,6,7
3,4,4,5,26
3,1,1,10,5
3,10,10,10,3
3,4,4,3,20
3,9,9,4,16
3,2,2,12,9
3,8,8,1,14
3,2,2,2,17
3,8,8,3,19
3,6,6,3,29
Dummy model files are split into static and input directories
Dummy model files are split into static and input directories, as opposed to being written in the parent target directory, when using the commands:
$ oasislmf test model generate-model-files
$ oasislmf test model generate-oasis-files
This should make it easier to run tests directly on these files with the ktools components.
(PR #926)
Fix loading of non utf-8 csv file with potential FlexiLocZZZ column. -In the case of non-utf-8 files that use FlexiLocZZZ fields in their col_dtypes, the encoding is recognised but not used when loading the header, which leads to an encoding error (see the sketch below).
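A minimal sketch of the failure mode and fix; the file name and detected encoding are illustrative. The point is that the detected encoding has to be applied when reading the header row too, not only the body:

import pandas as pd

encoding = "latin-1"  # e.g. the encoding detected for a non-utf-8 file

# Passing the detected encoding when reading the header avoids the decode
# error on FlexiLocZZZ column names; nrows=0 loads column names only.
header_cols = pd.read_csv("location.csv", nrows=0, encoding=encoding).columns
df = pd.read_csv("location.csv", encoding=encoding)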
Climate Change - Natural Hazard and Storm
- Python
Published by awsbuild over 3 years ago

OasisLMF - 1.15.18
1.15.18
OasisLMF Changelog -OasisLMF Notes
(PR #928)
Fix sparse stream reader empty item condition -(PR #926)
Fix loading of non utf-8 csv file with potential FlexiLocZZZ column. -In the case of non-utf-8 files that use FlexiLocZZZ fields in their col_dtypes, the encoding is recognised but not used when loading the header, which leads to an encoding error.
Climate Change - Natural Hazard and Storm
- Python
Published by awsbuild over 3 years ago

OasisLMF - 1.21.0
1.21.0
OasisLMF Changelog -- #897 - added test cases with account terms
- #803 - Max Ded back allocation
- #901 - fmpy: areaperil_id 8 bytes support
- #903 - Generate Quantile Event Loss Table (QELT) and Quantile Period Loss Table (QPLT)
- #858 - support parquet for OED
OasisLMF Notes
(PR #897)
Validation test cases for insurance account terms -A set of test cases has been added for validation testing of account financial terms.
(PR #898)
max deductible back allocation based on under limit -Changes the back-allocation method used when a max deductible is activated.
In the former implementation, the amount back-allocated due to the activation of a max deductible was proportional to the sub-node's computed loss if it was not null, and to the sub-node's input loss otherwise.
This had multiple issues regarding the consistency of the result. Depending on the financial structure and loss input, the back-allocated loss could:
- be bigger than the limit
- increase the loss of items that had no deductible
- fail to be back-allocated if the sub-nodes' input losses were all null
In the new implementation, all the back-allocated loss above the computed aggregated loss is reallocated based on the under-limit of the sub-nodes (a sketch follows this note).
As the under-limit is the minimum of the deductible and the loss available under the limit, this ensures that:
- the node loss will never exceed the limit
- the loss is back-allocated in proportion to what is relevant to the max deductible (the more deductible a node has, the more is back-allocated to it)
- no loss is lost (if the max deductible is activated, then the under-limit is not null)
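A minimal sketch of the reallocation rule, with hypothetical names rather than fmpy's actual data structures: the excess above the computed aggregate loss is shared out in proportion to each sub-node's under-limit.

import numpy as np

def back_allocate_excess(node_loss, computed_agg_loss, under_limits):
    # Excess that the max deductible pushed above the computed aggregate.
    excess = node_loss - computed_agg_loss
    total = under_limits.sum()
    if total == 0.0:
        return np.zeros_like(under_limits)
    # Proportional share: more under-limit (more deductible) means more
    # loss is back-allocated to that sub-node.
    return excess * under_limits / total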
(PR #900)
fmpy: areaperil_id 8 bytes support -Allow the area peril ID to be passed as an 8-byte unsigned int via the environment variable AREAPERIL_TYPE, given as a numpy dtype string. The default is 4 bytes.
- export AREAPERIL_TYPE=u8 # for 8 bytes
- export AREAPERIL_TYPE=u4 # for 4 bytes
(PR #903)
Generate Quantile Event Loss Table (QELT) and Quantile Period Loss Table (QPLT) -The following Open Results Data (ORD) tables can be generated:
Table Name | analysis_settings file flag |
---|---|
Quantile Event Loss Table (QELT) | elt_quantile |
Quantile Period Loss Table (QPLT) | plt_quantile |
Example analysis_settings
"gul_summaries": [
{
"id": 1,
"ord_output": {
"plt_quantile": true,
"elt_quantile": true
}
}
]
(PR #890)
OED parquet format support -Allow parquet format to be used for OED files in the model and exposure commands
Climate Change - Natural Hazard and Storm
- Python
Published by awsbuild over 3 years ago

OasisLMF - 1.15.16
1.15.16
OasisLMF Changelog -- #901 - fmpy: areaperil_id 8 bytes support
OasisLMF Notes
(PR #900)
fmpy: areaperil_id 8 bytes support -Allow the area peril ID to be passed as an 8-byte unsigned int via the environment variable AREAPERIL_TYPE, given as a numpy dtype string. The default is 4 bytes.
- export AREAPERIL_TYPE=u8 # for 8 bytes
- export AREAPERIL_TYPE=u4 # for 4 bytes
Climate Change - Natural Hazard and Storm
- Python
Published by awsbuild over 3 years ago

OasisLMF - 1.21.0rc1
1.21.0rc1
OasisLMF Changelog -- #897 - added test cases with account terms
- #803 - Max Ded back allocation
- #901 - fmpy: areaperil_id 8 bytes support
- #903 - Generate Quantile Event Loss Table (QELT) and Quantile Period Loss Table (QPLT)
- #858 - support parquet for OED
OasisLMF Notes
(PR #897)
Validation test cases for insurance account terms -A set of test cases has been added for validation testing of account financial terms.
(PR #898)
max deductible back allocation based on under limit -Changes the back-allocation method used when a max deductible is activated.
In the former implementation, the amount back-allocated due to the activation of a max deductible was proportional to the sub-node's computed loss if it was not null, and to the sub-node's input loss otherwise.
This had multiple issues regarding the consistency of the result. Depending on the financial structure and loss input, the back-allocated loss could:
- be bigger than the limit
- increase the loss of items that had no deductible
- fail to be back-allocated if the sub-nodes' input losses were all null
In the new implementation, all the back-allocated loss above the computed aggregated loss is reallocated based on the under-limit of the sub-nodes.
As the under-limit is the minimum of the deductible and the loss available under the limit, this ensures that:
- the node loss will never exceed the limit
- the loss is back-allocated in proportion to what is relevant to the max deductible (the more deductible a node has, the more is back-allocated to it)
- no loss is lost (if the max deductible is activated, then the under-limit is not null)
(PR #900)
fmpy: areaperil_id 8 bytes support -Allow the area peril ID to be passed as an 8-byte unsigned int via the environment variable AREAPERIL_TYPE, given as a numpy dtype string. The default is 4 bytes.
- export AREAPERIL_TYPE=u8 # for 8 bytes
- export AREAPERIL_TYPE=u4 # for 4 bytes
(PR #903)
Generate Quantile Event Loss Table (QELT) and Quantile Period Loss Table (QPLT) -The following Open Results Data (ORD) tables can be generated:
Table Name | analysis_settings file flag |
---|---|
Quantile Event Loss Table (QELT) | elt_quantile |
Quantile Period Loss Table (QPLT) | plt_quantile |
Example analysis_settings
"gul_summaries": [
{
"id": 1,
"ord_output": {
"plt_quantile": true,
"elt_quantile": true
}
}
]
(PR #890)
OED parquet format support -Allow parquet format to be used for OED files in the model and exposure commands
Climate Change - Natural Hazard and Storm
- Python
Published by awsbuild over 3 years ago

OasisLMF - 1.20.1
1.20.1
OasisLMF Changelog -- HOTFIX - Patched issue with new getmodelpy default value
- #873, #885 - Feature/873 issues testing update2
- #857 - getmodel revamping
- #892 - inconsistent assignment of group_id in items file leads to non-repeatable results for the same input
- #894 - Fix/arch 2020 update
- #895 - Generate Moment Event Loss Table (MELT), Sample Event Loss Table (SELT), Moment Period Loss Table (MPLT) and Sample Period Loss Table (SPLT)
OasisLMF Notes
(PR #888)
Past fm issues added to regression test -A group of tests called 'issues' has been added to the fm regression tests so that issues related to the financial module calculations will be regression tested if a set of oed files has been added to the set of test cases in validation/issues.
(PR #889)
New Python getmodel -In this release we are introducing the new getmodel, which is written in Python. The old C++ getmodel is still available through the already established commands. We can incorporate the Python getmodel command into our ktools pipeline with the command below:
eve 1 1 | getmodelpy | cdftocsv > output.csv
as opposed to the C++ getmodel, which takes the form below:
eve 1 1 | getmodel | cdftocsv > output.csv
It has to be noted that, at the time of this release, the Python getmodel is slower than the C++ version. We are currently working on optimising this in future releases. The Python getmodel accepts the following file formats:
- csv (the default format) with the .csv extension
- parquet with the .parquet extension
- binary with the .bin extension
These can be defined with the -f argument, as seen in the example below:
eve 1 1 | getmodelpy -f bin | cdftocsv > output.csv
(PR #893)
Fixed inconsistent assignment of group_id in items file -From issue #892: the problem stems from the group_id_cols order varying between runs, which causes pandas' sort function to order the group_id values differently across runs. Fixed by adding group_id_cols.sort() before assigning group_ids (see the sketch below).
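A minimal sketch of why the sort matters; the column and frame names are illustrative. With a deterministic column order, pandas groups rows in the same order on every run, so the derived group ids are repeatable:

import pandas as pd

df = pd.DataFrame({"peril": ["WS", "WS", "EQ"], "loc_id": [2, 1, 2]})
group_id_cols = ["peril", "loc_id"]  # may arrive in a varying order
group_id_cols.sort()                 # the fix: deterministic ordering
df["group_id"] = df.groupby(group_id_cols, sort=True).ngroup() + 1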
(PR #894)
Minor fixes to support OasisPlatform 2.0 development -- Fixes needed for job chunking
(PR #895)
Generate Moment Event Loss Table (MELT), Sample Event Loss Table (SELT), Moment Period Loss Table (MPLT) and Sample Period Loss Table (SPLT) -The following Open Results Data (ORD) tables can be generated:
Table Name | analysis_settings file flag |
---|---|
Moment Event Loss Table (MELT) | elt_moment |
Sample Event Loss Table (SELT) | elt_sample |
Moment Period Loss Table (MPLT) | plt_moment |
Sample Period Loss Table (SPLT) | plt_sample |
Example analysis_settings
"gul_summaries": [
{
"id": 1,
"ord_output": {
"plt_sample": true,
"plt_moment": true,
"elt_sample": true,
"elt_moment": true
}
}
]
Climate Change - Natural Hazard and Storm
- Python
Published by awsbuild over 3 years ago

OasisLMF - 1.20.0
1.20.0
OasisLMF Changelog -- #873, #885 - Feature/873 issues testing update2
- #857 - getmodel revamping
- #892 - inconsistent assignment of group_id in items file leads to non-repeatable results for the same input
- #894 - Fix/arch 2020 update
- #895 - Generate Moment Event Loss Table (MELT), Sample Event Loss Table (SELT), Moment Period Loss Table (MPLT) and Sample Period Loss Table (SPLT)
OasisLMF Notes
(PR #888)
Past fm issues added to regression test -A group of tests called 'issues' has been added to the fm regression tests so that issues related to the financial module calculations will be regression tested if a set of oed files has been added to the set of test cases in validation/issues.
(PR #889)
New Python getmodel -In this release we are introducing the new getmodel, which is written in Python. The old C++ getmodel is still available through the already established commands. We can incorporate the Python getmodel command into our ktools pipeline with the command below:
eve 1 1 | getmodelpy | cdftocsv > output.csv
as opposed to the C++ getmodel, which takes the form below:
eve 1 1 | getmodel | cdftocsv > output.csv
It has to be noted that, at the time of this release, the Python getmodel is slower than the C++ version. We are currently working on optimising this in future releases. The Python getmodel accepts the following file formats:
- csv (the default format) with the .csv extension
- parquet with the .parquet extension
- binary with the .bin extension
These can be defined with the -f argument, as seen in the example below:
eve 1 1 | getmodelpy -f bin | cdftocsv > output.csv
(PR #893)
Fixed inconsistent assignment of group_id in items file -From issue #892: the problem stems from the group_id_cols order varying between runs, which causes pandas' sort function to order the group_id values differently across runs. Fixed by adding group_id_cols.sort() before assigning group_ids.
(PR #894)
Minor fixes to support OasisPlatform 2.0 development -- Fixes needed for job chunking
(PR #895)
Generate Moment Event Loss Table (MELT), Sample Event Loss Table (SELT), Moment Period Loss Table (MPLT) and Sample Period Loss Table (SPLT) -The following Open Results Data (ORD) tables can be generated:
Table Name | analysis_settings file flag |
---|---|
Moment Event Loss Table (MELT) | elt_moment |
Sample Event Loss Table (SELT) | elt_sample |
Moment Period Loss Table (MPLT) | plt_moment |
Sample Period Loss Table (SPLT) | plt_sample |
Example analysis_settings
"gul_summaries": [
{
"id": 1,
"ord_output": {
"plt_sample": true,
"plt_moment": true,
"elt_sample": true,
"elt_moment": true
}
}
]
Climate Change - Natural Hazard and Storm
- Python
Published by awsbuild over 3 years ago

OasisLMF - 1.15.15
1.15.15
OasisLMF Changelog -- #896 - Set ktools to version v3.6.4
- #892 - inconsistent assignment of group_id in items file leads to non-repeatable results for the same input
OasisLMF Notes
(PR #896)
Update Ktools version for Oasis LTS -- Update ktools from v3.6.0 to v3.6.4
(PR #893)
Fixed inconsistent assignment of group_id in items file -From issue #892: the problem stems from the group_id_cols order varying between runs, which causes pandas' sort function to order the group_id values differently across runs. Fixed by adding group_id_cols.sort() before assigning group_ids.
Climate Change - Natural Hazard and Storm
- Python
Published by awsbuild over 3 years ago

OasisLMF - 1.20.0rc1
1.20.0rc1
OasisLMF Changelog -- #873, #885 - Feature/873 issues testing update2
- #857 - getmodel revamping
- #892 - inconsistent assignment of group_id in items file leads to non-repeatable results for the same input
- #894 - Fix/arch 2020 update
- #895 - Generate Moment Event Loss Table (MELT), Sample Event Loss Table (SELT), Moment Period Loss Table (MPLT) and Sample Period Loss Table (SPLT)
OasisLMF Notes
(PR #888)
Past fm issues added to regression test -A group of tests called 'issues' has been added to the fm regression tests so that issues related to the financial module calculations will be regression tested if a set of oed files has been added to the set of test cases in validation/issues.
(PR #889)
New Python getmodel -In this release we are introducing the new getmodel, which is written in Python. The old C++ getmodel is still available through the already established commands. We can incorporate the Python getmodel command into our ktools pipeline with the command below:
eve 1 1 | getmodelpy | cdftocsv > output.csv
as opposed to the C++ getmodel, which takes the form below:
eve 1 1 | getmodel | cdftocsv > output.csv
It has to be noted that, at the time of this release, the Python getmodel is slower than the C++ version. We are currently working on optimising this in future releases. The Python getmodel accepts the following file formats:
- csv (the default format) with the .csv extension
- parquet with the .parquet extension
- binary with the .bin extension
These can be defined with the -f argument, as seen in the example below:
eve 1 1 | getmodelpy -f bin | cdftocsv > output.csv
(PR #893)
Fixed inconsistent assignment of group_id in items file -From issue #892: the problem stems from the group_id_cols order varying between runs, which causes pandas' sort function to order the group_id values differently across runs. Fixed by adding group_id_cols.sort() before assigning group_ids.
(PR #894)
Minor fixes to support OasisPlatform 2.0 development -- Fixes needed for job chunking
(PR #895)
Generate Moment Event Loss Table (MELT), Sample Event Loss Table (SELT), Moment Period Loss Table (MPLT) and Sample Period Loss Table (SPLT) -The following Open Results Data (ORD) tables can be generated:
Table Name | analysis_settings file flag |
---|---|
Moment Event Loss Table (MELT) | elt_moment |
Sample Event Loss Table (SELT) | elt_sample |
Moment Period Loss Table (MPLT) | plt_moment |
Sample Period Loss Table (SPLT) | plt_sample |
Example analysis_settings
"gul_summaries": [
{
"id": 1,
"ord_output": {
"plt_sample": true,
"plt_moment": true,
"elt_sample": true,
"elt_moment": true
}
}
]
Climate Change - Natural Hazard and Storm
- Python
Published by awsbuild over 3 years ago
