{"id":297074,"name":"Neural-LAM","description":"A repository of graph-based neural weather prediction models for Limited Area Modeling.","url":"https://github.com/mllam/neural-lam","last_synced_at":"2026-04-15T06:04:26.307Z","repository":{"id":197977555,"uuid":"695904711","full_name":"mllam/neural-lam","owner":"mllam","description":"Research Software for Neural Weather Prediction for Limited Area Modeling","archived":false,"fork":false,"pushed_at":"2026-02-28T12:50:43.000Z","size":5970,"stargazers_count":225,"open_issues_count":117,"forks_count":152,"subscribers_count":7,"default_branch":"main","last_synced_at":"2026-02-28T16:34:22.638Z","etag":null,"topics":["gsoc","gsoc-2026","machine-learning","weather"],"latest_commit_sha":null,"homepage":"https://kutt.to/mllam","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"mit","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/mllam.png","metadata":{"files":{"readme":"README.md","changelog":"CHANGELOG.md","contributing":null,"funding":null,"license":"LICENSE.txt","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null,"zenodo":null,"notice":null,"maintainers":null,"copyright":null,"agents":null,"dco":null,"cla":null}},"created_at":"2023-09-24T15:28:27.000Z","updated_at":"2026-02-28T13:01:59.000Z","dependencies_parsed_at":"2023-12-13T13:45:54.865Z","dependency_job_id":"c9665475-c590-4fe7-8aed-80e6f8c1b338","html_url":"https://github.com/mllam/neural-lam","commit_stats":{"total_commits":45,"total_committers":6,"mean_commits":7.5,"dds":"0.37777777777777777","last_synced_commit":"3369515b81979c1f1e442ff8cc8213776afb4132"},"previous_names":["joeloskarsson/neural-lam","mllam/neural-lam"],"tags_count":5,"template":false,"template_full_name":null,"purl":"pkg:github/mllam/ne
ural-lam","repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/mllam%2Fneural-lam","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/mllam%2Fneural-lam/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/mllam%2Fneural-lam/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/mllam%2Fneural-lam/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/mllam","download_url":"https://codeload.github.com/mllam/neural-lam/tar.gz/refs/heads/main","sbom_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/mllam%2Fneural-lam/sbom","scorecard":null,"host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":286080680,"owners_count":29964203,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2026-03-01T06:55:38.174Z","status":"ssl_error","status_checked_at":"2026-03-01T06:53:04.810Z","response_time":124,"last_error":"SSL_connect returned=1 errno=0 peeraddr=140.82.121.6:443 state=error: unexpected eof while reading","robots_txt_status":"success","robots_txt_updated_at":"2025-07-24T06:49:26.215Z","robots_txt_url":"https://github.com/robots.txt","online":false,"can_crawl_api":true,"host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"owner":{"login":"mllam","name":"ML-LAM collaboration","uuid":"149403204","kind":"organization","description":"Collaborative development of data-driven weather forecasts for limited area 
modelling","email":null,"website":null,"location":null,"twitter":null,"company":null,"icon_url":"https://avatars.githubusercontent.com/u/149403204?v=4","repositories_count":1,"last_synced_at":"2024-03-11T20:57:58.660Z","metadata":{"has_sponsors_listing":false},"html_url":"https://github.com/mllam","funding_links":[],"total_stars":1,"followers":null,"following":null,"created_at":"2024-03-11T20:57:58.672Z","updated_at":"2024-03-11T20:57:58.672Z","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/mllam","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/mllam/repositories"},"packages":[{"id":11792190,"name":"neural-lam","ecosystem":"pypi","description":"LAM-based data-driven forecasting","homepage":null,"licenses":"mit","normalized_licenses":["MIT"],"repository_url":"https://github.com/mllam/neural-lam","keywords_array":[],"namespace":null,"versions_count":2,"first_release_published_at":"2025-06-12T07:20:46.000Z","latest_release_published_at":"2026-01-14T10:03:04.000Z","latest_release_number":"0.5.0","last_synced_at":"2026-02-11T18:21:35.968Z","created_at":"2025-06-12T07:31:32.137Z","updated_at":"2026-02-11T18:21:35.968Z","registry_url":"https://pypi.org/project/neural-lam/","install_command":"pip install neural-lam --index-url https://pypi.org/simple","documentation_url":"https://neural-lam.readthedocs.io/","metadata":{"funding":null,"documentation":null,"classifiers":[],"normalized_name":"neural-lam","project_status":null},"repo_metadata":{"id":197977555,"uuid":"695904711","full_name":"mllam/neural-lam","owner":"mllam","description":"Research Software for Neural Weather Prediction for Limited Area 
Modeling","archived":false,"fork":false,"pushed_at":"2025-10-13T14:39:40.000Z","size":5845,"stargazers_count":181,"open_issues_count":43,"forks_count":68,"subscribers_count":6,"default_branch":"main","last_synced_at":"2025-10-23T14:19:44.747Z","etag":null,"topics":[],"latest_commit_sha":null,"homepage":"https://join.slack.com/t/ml-lam/shared_invite/zt-2t112zvm8-Vt6aBvhX7nYa6Kbj_LkCBQ","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"mit","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/mllam.png","metadata":{"files":{"readme":"README.md","changelog":"CHANGELOG.md","contributing":null,"funding":null,"license":"LICENSE.txt","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null,"zenodo":null,"notice":null,"maintainers":null,"copyright":null,"agents":null,"dco":null,"cla":null}},"created_at":"2023-09-24T15:28:27.000Z","updated_at":"2025-10-20T11:53:26.000Z","dependencies_parsed_at":"2023-12-13T13:45:54.865Z","dependency_job_id":"9beddaca-d890-4793-8529-345e8e6b4ee6","html_url":"https://github.com/mllam/neural-lam","commit_stats":{"total_commits":45,"total_committers":6,"mean_commits":7.5,"dds":"0.37777777777777777","last_synced_commit":"3369515b81979c1f1e442ff8cc8213776afb4132"},"previous_names":["joeloskarsson/neural-lam","mllam/neural-lam"],"tags_count":4,"template":false,"template_full_name":null,"purl":"pkg:github/mllam/neural-lam","repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/mllam%2Fneural-lam","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/mllam%2Fneural-lam/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/mllam%2Fneural-lam/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/mllam%
2Fneural-lam/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/mllam","download_url":"https://codeload.github.com/mllam/neural-lam/tar.gz/refs/heads/main","sbom_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/mllam%2Fneural-lam/sbom","scorecard":null,"host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":280832996,"owners_count":26398969,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","status":"online","status_checked_at":"2025-10-24T02:00:06.418Z","response_time":73,"last_error":null,"robots_txt_status":"success","robots_txt_updated_at":"2025-07-24T06:49:26.215Z","robots_txt_url":"https://github.com/robots.txt","online":true,"can_crawl_api":true,"host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"},"owner_record":{"login":"mllam","name":"ML-LAM collaboration","uuid":"149403204","kind":"organization","description":"Collaborative development of data-driven weather forecasts for limited area 
modelling","email":null,"website":null,"location":null,"twitter":null,"company":null,"icon_url":"https://avatars.githubusercontent.com/u/149403204?v=4","repositories_count":1,"last_synced_at":"2024-03-11T20:57:58.660Z","metadata":{"has_sponsors_listing":false},"html_url":"https://github.com/mllam","funding_links":[],"total_stars":1,"followers":null,"following":null,"created_at":"2024-03-11T20:57:58.672Z","updated_at":"2024-03-11T20:57:58.672Z","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/mllam","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/mllam/repositories"},"tags":[{"name":"v0.4.0","sha":"0e9c2cce15c8b3729884b18091690d1a62da9c83","kind":"commit","published_at":"2025-06-12T07:00:35.000Z","download_url":"https://codeload.github.com/mllam/neural-lam/tar.gz/v0.4.0","html_url":"https://github.com/mllam/neural-lam/releases/tag/v0.4.0","dependencies_parsed_at":null,"dependency_job_id":null,"purl":"pkg:github/mllam/neural-lam@v0.4.0","tag_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/mllam%2Fneural-lam/tags/v0.4.0","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/mllam%2Fneural-lam/tags/v0.4.0/manifests"},{"name":"v0.3.0","sha":"cfda1c9bbaa322176c1ef07ca4d7526e92033852","kind":"commit","published_at":"2025-01-21T15:30:32.000Z","download_url":"https://codeload.github.com/mllam/neural-lam/tar.gz/v0.3.0","html_url":"https://github.com/mllam/neural-lam/releases/tag/v0.3.0","dependencies_parsed_at":null,"dependency_job_id":null,"purl":"pkg:github/mllam/neural-lam@v0.3.0","tag_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/mllam%2Fneural-lam/tags/v0.3.0","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/mllam%2Fneural-lam/tags/v0.3.0/manifests"},{"name":"v0.2.0","sha":"7112013f24ad36d8d9d19b4b5f853b11a2bbebf4","kind":"commit","published_at":"2024-10-24T11:43:32.000Z","download_url":"https://codeload.github.com/mllam/neural-lam/tar.gz/v0.2.0"
,"html_url":"https://github.com/mllam/neural-lam/releases/tag/v0.2.0","dependencies_parsed_at":null,"dependency_job_id":null,"purl":"pkg:github/mllam/neural-lam@v0.2.0","tag_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/mllam%2Fneural-lam/tags/v0.2.0","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/mllam%2Fneural-lam/tags/v0.2.0/manifests"},{"name":"v0.1.0","sha":"2378ed7eddf8da5bfec6f57c41cadf310d191dee","kind":"commit","published_at":"2023-12-13T12:50:48.000Z","download_url":"https://codeload.github.com/mllam/neural-lam/tar.gz/v0.1.0","html_url":"https://github.com/mllam/neural-lam/releases/tag/v0.1.0","dependencies_parsed_at":null,"dependency_job_id":null,"purl":"pkg:github/mllam/neural-lam@v0.1.0","tag_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/mllam%2Fneural-lam/tags/v0.1.0","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/mllam%2Fneural-lam/tags/v0.1.0/manifests"}]},"repo_metadata_updated_at":"2025-10-30T20:51:17.467Z","dependent_packages_count":0,"downloads":10,"downloads_period":"last-month","dependent_repos_count":0,"rankings":{"downloads":null,"dependent_repos_count":50.715240110648075,"dependent_packages_count":9.001061701466881,"stargazers_count":null,"forks_count":null,"docker_downloads_count":null,"average":29.858150906057478},"purl":"pkg:pypi/neural-lam","advisories":[],"docker_usage_url":"https://docker.ecosyste.ms/usage/pypi/neural-lam","docker_dependents_count":null,"docker_downloads_count":null,"usage_url":"https://repos.ecosyste.ms/usage/pypi/neural-lam","dependent_repositories_url":"https://repos.ecosyste.ms/api/v1/usage/pypi/neural-lam/dependencies","status":null,"funding_links":[],"critical":null,"issue_metadata":{"last_synced_at":"2025-10-30T16:05:07.030Z","issues_count":58,"pull_requests_count":152,"avg_time_to_close_issue":5347676.695652174,"avg_time_to_close_pull_request":2801782.8165137614,"issues_closed_count":23,"pull_requests_closed_count":1
09,"pull_request_authors_count":16,"issue_authors_count":13,"avg_comments_per_issue":1.896551724137931,"avg_comments_per_pull_request":2.6052631578947367,"merged_pull_requests_count":82,"bot_issues_count":0,"bot_pull_requests_count":0,"past_year_issues_count":27,"past_year_pull_requests_count":81,"past_year_avg_time_to_close_issue":748340.3333333334,"past_year_avg_time_to_close_pull_request":1088688.52,"past_year_issues_closed_count":6,"past_year_pull_requests_closed_count":50,"past_year_pull_request_authors_count":11,"past_year_issue_authors_count":10,"past_year_avg_comments_per_issue":1.222222222222222,"past_year_avg_comments_per_pull_request":2.0,"past_year_bot_issues_count":0,"past_year_bot_pull_requests_count":0,"past_year_merged_pull_requests_count":42,"issues_url":"https://issues.ecosyste.ms/api/v1/hosts/GitHub/repositories/mllam%2Fneural-lam/issues","maintainers":[{"login":"joeloskarsson","count":53,"url":"https://issues.ecosyste.ms/api/v1/hosts/GitHub/authors/joeloskarsson"},{"login":"leifdenby","count":38,"url":"https://issues.ecosyste.ms/api/v1/hosts/GitHub/authors/leifdenby"},{"login":"sadamov","count":33,"url":"https://issues.ecosyste.ms/api/v1/hosts/GitHub/authors/sadamov"},{"login":"bet20ICL","count":2,"url":"https://issues.ecosyste.ms/api/v1/hosts/GitHub/authors/bet20ICL"}],"active_maintainers":[{"login":"joeloskarsson","count":25,"url":"https://issues.ecosyste.ms/api/v1/hosts/GitHub/authors/joeloskarsson"},{"login":"leifdenby","count":18,"url":"https://issues.ecosyste.ms/api/v1/hosts/GitHub/authors/leifdenby"},{"login":"sadamov","count":7,"url":"https://issues.ecosyste.ms/api/v1/hosts/GitHub/authors/sadamov"}]},"versions_url":"https://packages.ecosyste.ms/api/v1/registries/pypi.org/packages/neural-lam/versions","version_numbers_url":"https://packages.ecosyste.ms/api/v1/registries/pypi.org/packages/neural-lam/version_numbers","dependent_packages_url":"https://packages.ecosyste.ms/api/v1/registries/pypi.org/packages/neural-lam/dependent_packages","rel
ated_packages_url":"https://packages.ecosyste.ms/api/v1/registries/pypi.org/packages/neural-lam/related_packages","codemeta_url":"https://packages.ecosyste.ms/api/v1/registries/pypi.org/packages/neural-lam/codemeta","maintainers":[{"uuid":"leifdenby","login":"leifdenby","name":null,"email":null,"url":null,"packages_count":23,"html_url":"https://pypi.org/user/leifdenby/","role":"Owner","created_at":"2025-06-12T08:03:56.892Z","updated_at":"2025-06-12T08:03:56.892Z","packages_url":"https://packages.ecosyste.ms/api/v1/registries/pypi.org/maintainers/leifdenby/packages"}],"registry":{"name":"pypi.org","url":"https://pypi.org","ecosystem":"pypi","default":true,"packages_count":805582,"maintainers_count":339226,"namespaces_count":0,"keywords_count":0,"github":"pypi","metadata":{"funded_packages_count":52398},"icon_url":"https://github.com/pypi.png","created_at":"2022-04-04T15:19:23.364Z","updated_at":"2026-02-28T06:08:48.742Z","packages_url":"https://packages.ecosyste.ms/api/v1/registries/pypi.org/packages","maintainers_url":"https://packages.ecosyste.ms/api/v1/registries/pypi.org/maintainers","namespaces_url":"https://packages.ecosyste.ms/api/v1/registries/pypi.org/namespaces"}}],"commits":{"id":1638670,"full_name":"mllam/neural-lam","default_branch":"main","total_commits":86,"total_committers":11,"total_bot_commits":0,"total_bot_committers":0,"mean_commits":7.818181818181818,"dds":0.5813953488372092,"past_year_total_commits":28,"past_year_total_committers":8,"past_year_total_bot_commits":0,"past_year_total_bot_committers":0,"past_year_mean_commits":3.5,"past_year_dds":0.6428571428571428,"last_synced_at":"2026-02-17T05:45:42.997Z","last_synced_commit":"8d2f32cee9d38afcf8ad87d30aa9d0074a561f61","created_at":"2024-07-24T00:12:21.938Z","updated_at":"2026-02-17T05:45:09.652Z","committers":[{"name":"Joel Oskarsson","email":"joel.oskarsson@liu.se","login":"joeloskarsson","count":36},{"name":"Leif Denby","email":"leif@denby.eu","login":"leifdenby","count":14},{"name":"Hauke 
Schulz","email":"43613877+observingClouds","login":"observingClouds","count":12},{"name":"SimonKamuk","email":"43374850+SimonKamuk","login":"SimonKamuk","count":10},{"name":"sadamov","email":"45732287+sadamov","login":"sadamov","count":8},{"name":"lorenzo30salgado","email":"79310171+lorenzo30salgado","login":"lorenzo30salgado","count":1},{"name":"YUTAIPAN","email":"139090433+YUTAIPAN","login":"YUTAIPAN","count":1},{"name":"K. Hintz","email":"kah@dmi.dk","login":"khintz","count":1},{"name":"Jordan Matelsky","email":"j6k4m8","login":"j6k4m8","count":1},{"name":"Erik Larsson","email":"86654747+ErikLarssonDev","login":"ErikLarssonDev","count":1},{"name":"Daniel Holmberg","email":"daniel.holmberg97@gmail.com","login":"deinal","count":1}],"past_year_committers":[{"name":"Hauke Schulz","email":"43613877+observingClouds","login":"observingClouds","count":10},{"name":"Joel Oskarsson","email":"joel.oskarsson@outlook.com","login":"joeloskarsson","count":7},{"name":"SimonKamuk","email":"43374850+SimonKamuk","login":"SimonKamuk","count":3},{"name":"Leif Denby","email":"leif@denby.eu","login":"leifdenby","count":3},{"name":"sadamov","email":"45732287+sadamov","login":"sadamov","count":2},{"name":"lorenzo30salgado","email":"79310171+lorenzo30salgado","login":"lorenzo30salgado","count":1},{"name":"YUTAIPAN","email":"139090433+YUTAIPAN","login":"YUTAIPAN","count":1},{"name":"Daniel 
Holmberg","email":"daniel.holmberg97@gmail.com","login":"deinal","count":1}],"commits_url":"https://commits.ecosyste.ms/api/v1/hosts/GitHub/repositories/mllam%2Fneural-lam/commits","host":{"name":"GitHub","url":"https://github.com","kind":"github","last_synced_at":"2026-03-01T00:00:12.398Z","repositories_count":6184018,"commits_count":930368198,"contributors_count":36036002,"owners_count":1146194,"icon_url":"https://github.com/github.png","host_url":"https://commits.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://commits.ecosyste.ms/api/v1/hosts/GitHub/repositories"}},"issues_stats":{"full_name":"mllam/neural-lam","html_url":"https://github.com/mllam/neural-lam","last_synced_at":"2025-10-30T16:05:07.030Z","status":null,"issues_count":58,"pull_requests_count":152,"avg_time_to_close_issue":5347676.695652174,"avg_time_to_close_pull_request":2801782.8165137614,"issues_closed_count":23,"pull_requests_closed_count":109,"pull_request_authors_count":16,"issue_authors_count":13,"avg_comments_per_issue":1.896551724137931,"avg_comments_per_pull_request":2.6052631578947367,"merged_pull_requests_count":82,"bot_issues_count":0,"bot_pull_requests_count":0,"past_year_issues_count":27,"past_year_pull_requests_count":81,"past_year_avg_time_to_close_issue":748340.3333333334,"past_year_avg_time_to_close_pull_request":1088688.52,"past_year_issues_closed_count":6,"past_year_pull_requests_closed_count":50,"past_year_pull_request_authors_count":11,"past_year_issue_authors_count":10,"past_year_avg_comments_per_issue":1.222222222222222,"past_year_avg_comments_per_pull_request":2.0,"past_year_bot_issues_count":0,"past_year_bot_pull_requests_count":0,"past_year_merged_pull_requests_count":42,"created_at":"2024-07-24T00:12:30.747Z","updated_at":"2025-10-30T16:05:07.031Z","repository_url":"https://issues.ecosyste.ms/api/v1/hosts/GitHub/repositories/mllam%2Fneural-lam","issues_url":"https://issues.ecosyste.ms/api/v1/hosts/GitHub/repositories/mllam%2Fneural-lam/issues","issue_labels_c
ount":{"enhancement":15,"bug":8,"good first issue":5,"documentation":4,"discussion":4,"cicd":2,"question":1,"help wanted":1},"pull_request_labels_count":{"enhancement":17,"bug":12,"cicd":5,"documentation":3,"help wanted":2},"issue_author_associations_count":{"COLLABORATOR":32,"MEMBER":10,"NONE":9,"CONTRIBUTOR":7},"pull_request_author_associations_count":{"COLLABORATOR":56,"CONTRIBUTOR":39,"NONE":29,"MEMBER":28},"issue_authors":{"joeloskarsson":19,"sadamov":12,"leifdenby":10,"observingClouds":6,"mpvginde":2,"lorenzo30salgado":2,"evilla-eni":1,"khintz":1,"ealerskans":1,"Liraelyn":1,"zhending111":1,"liufeng0612":1,"bet20ICL":1},"pull_request_authors":{"joeloskarsson":34,"leifdenby":28,"sadamov":21,"SimonKamuk":19,"observingClouds":12,"clechartre":9,"matschreiner":9,"khintz":6,"TomasLandelius":2,"oceand1":2,"deinal":2,"Moiz101-ch":2,"ErikLarssonDev":2,"j6k4m8":2,"YUTAIPAN":1,"bet20ICL":1},"host":{"name":"GitHub","url":"https://github.com","kind":"github","last_synced_at":"2025-10-30T00:00:25.546Z","repositories_count":11263014,"issues_count":35009938,"pull_requests_count":113611200,"authors_count":11042159,"icon_url":"https://github.com/github.png","host_url":"https://issues.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://issues.ecosyste.ms/api/v1/hosts/GitHub/repositories","owners_url":"https://issues.ecosyste.ms/api/v1/hosts/GitHub/owners","authors_url":"https://issues.ecosyste.ms/api/v1/hosts/GitHub/authors"},"past_year_issue_labels_count":{"bug":5,"good first 
issue":3,"enhancement":2,"cicd":2},"past_year_pull_request_labels_count":{"bug":9,"cicd":5,"enhancement":4},"past_year_issue_author_associations_count":{"COLLABORATOR":12,"CONTRIBUTOR":7,"NONE":6,"MEMBER":2},"past_year_pull_request_author_associations_count":{"CONTRIBUTOR":29,"COLLABORATOR":20,"MEMBER":16,"NONE":16},"past_year_issue_authors":{"joeloskarsson":8,"observingClouds":6,"sadamov":4,"leifdenby":2,"lorenzo30salgado":2,"ealerskans":1,"evilla-eni":1,"khintz":1,"Liraelyn":1,"zhending111":1},"past_year_pull_request_authors":{"joeloskarsson":17,"leifdenby":16,"SimonKamuk":15,"observingClouds":12,"matschreiner":9,"sadamov":3,"j6k4m8":2,"deinal":2,"Moiz101-ch":2,"oceand1":2,"YUTAIPAN":1},"maintainers":[{"login":"joeloskarsson","count":53,"url":"https://issues.ecosyste.ms/api/v1/hosts/GitHub/authors/joeloskarsson"},{"login":"leifdenby","count":38,"url":"https://issues.ecosyste.ms/api/v1/hosts/GitHub/authors/leifdenby"},{"login":"sadamov","count":33,"url":"https://issues.ecosyste.ms/api/v1/hosts/GitHub/authors/sadamov"},{"login":"bet20ICL","count":2,"url":"https://issues.ecosyste.ms/api/v1/hosts/GitHub/authors/bet20ICL"}],"active_maintainers":[{"login":"joeloskarsson","count":25,"url":"https://issues.ecosyste.ms/api/v1/hosts/GitHub/authors/joeloskarsson"},{"login":"leifdenby","count":18,"url":"https://issues.ecosyste.ms/api/v1/hosts/GitHub/authors/leifdenby"},{"login":"sadamov","count":7,"url":"https://issues.ecosyste.ms/api/v1/hosts/GitHub/authors/sadamov"}]},"events":{"total":{"ForkEvent":18,"CreateEvent":4,"CommitCommentEvent":2,"ReleaseEvent":3,"IssuesEvent":40,"WatchEvent":64,"DeleteEvent":3,"IssueCommentEvent":215,"PushEvent":48,"GollumEvent":10,"PullRequestReviewEvent":263,"PullRequestReviewCommentEvent":231,"PullRequestEvent":73},"last_year":{"ForkEvent":18,"CreateEvent":3,"CommitCommentEvent":2,"ReleaseEvent":2,"IssuesEvent":37,"WatchEvent":62,"DeleteEvent":1,"IssueCommentEvent":201,"PushEvent":45,"PullRequestReviewCommentEvent":207,"GollumEvent":7,"PullRequ
estReviewEvent":233,"PullRequestEvent":68}},"keywords":["gsoc","gsoc-2026","machine-learning","weather"],"dependencies":[{"ecosystem":"pypi","filepath":"requirements.txt","sha":null,"kind":"manifest","created_at":"2023-10-24T08:34:44.997Z","updated_at":"2023-10-24T08:34:44.997Z","repository_link":"https://github.com/mllam/neural-lam/blob/main/requirements.txt","dependencies":[{"id":14274588993,"package_name":"numpy","ecosystem":"pypi","requirements":"\u003e=1.24.2","direct":true,"kind":"runtime","optional":false},{"id":14274588994,"package_name":"wandb","ecosystem":"pypi","requirements":"\u003e=0.13.10","direct":true,"kind":"runtime","optional":false},{"id":14274588995,"package_name":"matplotlib","ecosystem":"pypi","requirements":"\u003e=3.7.0","direct":true,"kind":"runtime","optional":false},{"id":14274588996,"package_name":"scipy","ecosystem":"pypi","requirements":"\u003e=1.10.0","direct":true,"kind":"runtime","optional":false},{"id":14274588997,"package_name":"pytorch-lightning","ecosystem":"pypi","requirements":"\u003e=2.0.3","direct":true,"kind":"runtime","optional":false},{"id":14274588998,"package_name":"shapely","ecosystem":"pypi","requirements":"\u003e=2.0.1","direct":true,"kind":"runtime","optional":false},{"id":14274588999,"package_name":"networkx","ecosystem":"pypi","requirements":"\u003e=3.0","direct":true,"kind":"runtime","optional":false},{"id":14274589000,"package_name":"Cartopy","ecosystem":"pypi","requirements":"\u003e=0.22.0","direct":true,"kind":"runtime","optional":false},{"id":14274589001,"package_name":"pyproj","ecosystem":"pypi","requirements":"\u003e=3.4.1","direct":true,"kind":"runtime","optional":false},{"id":14274589002,"package_name":"tueplots","ecosystem":"pypi","requirements":"\u003e=0.0.8","direct":true,"kind":"runtime","optional":false},{"id":14274589497,"package_name":"plotly","ecosystem":"pypi","requirements":"\u003e=5.15.0","direct":true,"kind":"runtime","optional":false}]}],"score":10.630601282659345,"created_at":"2024-07-24T00:1
2:20.329Z","updated_at":"2026-04-15T06:04:26.322Z","avatar_url":"https://github.com/mllam.png","language":"Python","category":"Atmosphere","sub_category":"Meteorological Observation and Forecast","monthly_downloads":10,"total_dependent_repos":0,"total_dependent_packages":0,"readme":"[![slack](https://img.shields.io/badge/slack-join-brightgreen.svg?logo=slack)](https://join.slack.com/t/ml-lam/shared_invite/zt-2t112zvm8-Vt6aBvhX7nYa6Kbj_LkCBQ)\n[![Linting](https://github.com/mllam/neural-lam/actions/workflows/pre-commit.yml/badge.svg?branch=main)](https://github.com/mllam/neural-lam/actions/workflows/pre-commit.yml)\n[![CPU+GPU testing](https://github.com/mllam/neural-lam/actions/workflows/install-and-test.yml/badge.svg?branch=main)](https://github.com/mllam/neural-lam/actions/workflows/install-and-test.yml)\n\n\u003cp align=\"middle\"\u003e\n    \u003cimg src=\"figures/neural_lam_header.png\" width=\"700\"\u003e\n\u003c/p\u003e\n\nNeural-LAM is a repository of graph-based neural weather prediction models for Limited Area Modeling (LAM).\nGlobal forecasting is also possible, but currently on a [different branch](https://github.com/mllam/neural-lam/tree/prob_model_global) ([planned to be merged with main](https://github.com/mllam/neural-lam/issues/63)).\nThe code uses [PyTorch](https://pytorch.org/) and [PyTorch Lightning](https://lightning.ai/pytorch-lightning).\nGraph Neural Networks are implemented using [PyG](https://pyg.org/) and logging is set up through [Weights \u0026 Biases](https://wandb.ai/).\n\nThe repository contains LAM versions of:\n\n* The graph-based model from [Keisler (2022)](https://arxiv.org/abs/2202.07575).\n* GraphCast, by [Lam et al. (2023)](https://arxiv.org/abs/2212.12794).\n* The hierarchical model from [Oskarsson et al. 
(2023)](https://arxiv.org/abs/2309.17370).\n\n# Publications\nFor a more in-depth scientific introduction to machine learning for LAM weather forecasting, see the publications listed here.\nAs the code in the repository is continuously evolving, the latest version might feature some small differences from what was used for these publications.\nWe retain some paper-specific branches for reproducibility purposes.\n\n\n*If you use Neural-LAM in your work, please cite the relevant paper(s)*.\n\n#### [Graph-based Neural Weather Prediction for Limited Area Modeling](https://arxiv.org/abs/2309.17370)\n```\n@inproceedings{oskarsson2023graphbased,\n    title={Graph-based Neural Weather Prediction for Limited Area Modeling},\n    author={Oskarsson, Joel and Landelius, Tomas and Lindsten, Fredrik},\n    booktitle={NeurIPS 2023 Workshop on Tackling Climate Change with Machine Learning},\n    year={2023}\n}\n```\nSee the branch [`ccai_paper_2023`](https://github.com/joeloskarsson/neural-lam/tree/ccai_paper_2023) for a revision of the code that reproduces this workshop paper.\n\n#### [Probabilistic Weather Forecasting with Hierarchical Graph Neural Networks](https://arxiv.org/abs/2406.04759)\n```\n@inproceedings{oskarsson2024probabilistic,\n  title = {Probabilistic Weather Forecasting with Hierarchical Graph Neural Networks},\n  author = {Oskarsson, Joel and Landelius, Tomas and Deisenroth, Marc Peter and Lindsten, Fredrik},\n  booktitle = {Advances in Neural Information Processing Systems},\n  volume = {37},\n  year = {2024},\n}\n```\nSee the branches [`prob_model_lam`](https://github.com/mllam/neural-lam/tree/prob_model_lam) and [`prob_model_global`](https://github.com/mllam/neural-lam/tree/prob_model_global) for revisions of the code that reproduce this paper.\nThe global and probabilistic models from this paper are not yet fully merged with `main` (see issues [62](https://github.com/mllam/neural-lam/issues/62) and [63](https://github.com/mllam/neural-lam/issues/63)).\n\n# 
Modularity\nThe Neural-LAM code is designed to modularize the different components involved in training and evaluating neural weather prediction models.\nModels, graphs and data are stored separately and it should be possible to swap out individual components.\nStill, some restrictions are inevitable:\n\n* The graph used has to be compatible with what the model expects. E.g. a hierarchical model requires a hierarchical graph.\n* The graph and data are specific to the limited area under consideration. This is of course true for the data, but the graph should also be created with the exact geometry of the area in mind.\n\n\u003cp align=\"middle\"\u003e\n  \u003cimg src=\"figures/neural_lam_setup.png\" width=\"600\"/\u003e\n\u003c/p\u003e\n\n\n# Installing Neural-LAM\n\nWhen installing `neural-lam` you can either install directly with\n`pip` or use the `pdm` package manager.\nWe recommend using `pdm` as it makes it easy to add/remove packages while\nkeeping versions consistent (it automatically updates the `pyproject.toml`\nfile), makes it easy to handle virtual environments and also installs the\ndevelopment toolchain packages.\n\n**Regarding `torch` installation**: because `torch` publishes different package\nvariants for different CUDA versions and for CPU-only support, you will need to install\n`torch` separately if you don't want the most recent GPU variant, which also\nexpects the most recent version of CUDA on your system.\n\nWe cover all the installation options in our [GitHub Actions CI/CD\nsetup](.github/workflows/), which you can use as a reference.\n\n### From pypi.org\n\n```\npython -m pip install neural-lam\n```\n\n### From source\n\n#### Using `pdm`\n\n1. Clone this repository and navigate to the root directory.\n2. 
Install `pdm` if you don't have it installed on your system (either with `pip install pdm` or by [following the install instructions](https://pdm-project.org/latest/#installation)).\n\u003e If you are happy using the latest version of `torch` with GPU support (which expects the latest version of CUDA to be installed on your system) you can skip to step 5.\n3. Create a virtual environment for pdm to use with `pdm venv create --with-pip`.\n4. Install a specific version of `torch` with `pdm run python -m pip install torch --index-url https://download.pytorch.org/whl/cpu` for a CPU-only version or `pdm run python -m pip install torch --index-url https://download.pytorch.org/whl/cu111` for CUDA 11.1 support (you can find the correct URL for the variant you want on the [PyTorch webpage](https://pytorch.org/get-started/locally/)).\n5. Install the dependencies with `pdm install` (by default this installs the base dependencies). If you will be developing `neural-lam` we recommend installing the development dependencies with `pdm install --group dev`. By default `pdm` installs the `neural-lam` package in editable mode, so you can make changes to the code and see the effects immediately.\n\n#### Using `pip`\n\n1. Clone this repository and navigate to the root directory.\n\u003e If you are happy using the latest version of `torch` with GPU support (which expects the latest version of CUDA to be installed on your system) you can skip to step 3.\n2. Install a specific version of `torch` with `python -m pip install torch --index-url https://download.pytorch.org/whl/cpu` for a CPU-only version or `python -m pip install torch --index-url https://download.pytorch.org/whl/cu111` for CUDA 11.1 support (you can find the correct URL for the variant you want on the [PyTorch webpage](https://pytorch.org/get-started/locally/)).\n3. Install the dependencies with `python -m pip install .`. 
If you will be developing `neural-lam` we recommend installing in editable mode together with the development dependencies, using `python -m pip install -e \".[dev]\"`, so you can make changes to the code and see the effects immediately.\n\n\n# Using Neural-LAM\n\nOnce `neural-lam` is installed you will be able to train/evaluate models. For this you will in general need two things:\n\n1. **Data to train/evaluate the model**. To represent this data we use a concept of\n   *datastores* in Neural-LAM (see the [Data](#data-the-datastore-and-weatherdataset-classes) section for more details).\n   In brief, a datastore implements the process of loading data from disk in a\n   specific format (for example zarr or numpy files) by implementing an\n   interface that provides the data in a data-structure that can be used within\n   neural-lam. A datastore is used to create a `pytorch.Dataset`-derived\n   class that samples the data in time to create individual samples for\n   training, validation and testing.\n\n2. **The graph structure** is used to define message-passing GNN layers,\n   which are trained to emulate fluid flow in the atmosphere over time. The\n   graph structure is created for a specific datastore.\n\nAny command you run in neural-lam will include the path to a configuration file\nto be used (usually called `config.yaml`). This configuration file defines the\npath to the datastore configuration you wish to use and allows you to configure\ndifferent aspects of the training and evaluation of the model.\n\nThe path you provide to the neural-lam config (`config.yaml`) also sets the\nroot directory relative to which all other paths are resolved, i.e. the parent\ndirectory of the config becomes the root directory. 
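As a minimal sketch of this rule (the helper function below is illustrative, not part of the neural-lam API), resolving a path referenced from the config could look like:

```python
# Sketch of the path-resolution rule described above: paths referenced in
# config.yaml are resolved relative to the directory containing config.yaml.
# resolve_relative_to_config is an illustrative helper, not neural-lam API.
from pathlib import Path

def resolve_relative_to_config(config_path: str, relative_path: str) -> Path:
    root = Path(config_path).parent  # parent dir of the config is the root
    return root / relative_path

# A datastore config referenced from data/config.yaml resolves to
# data/danra.datastore.yaml:
resolved = resolve_relative_to_config("data/config.yaml", "danra.datastore.yaml")
```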
Both the datastore and\ngraphs you generate are then stored in subdirectories of this root directory.\nExactly how and where a specific datastore expects its source data to be stored\nand where it stores its derived data is up to the implementation of the\ndatastore.\n\nIn general the folder structure assumed in Neural-LAM is as follows (we will\nassume you placed `config.yaml` in a folder called `data`):\n\n```\ndata/\n├── config.yaml           - Configuration file for neural-lam\n├── danra.datastore.yaml  - Configuration file for the datastore, referred to from config.yaml\n└── graphs/               - Directory containing graphs for training\n```\n\nAnd the content of `config.yaml` could in this case look like:\n```yaml\ndatastore:\n  kind: mdp\n  config_path: danra.datastore.yaml\ntraining:\n  state_feature_weighting:\n    __config_class__: ManualStateFeatureWeighting\n    weights:\n      u100m: 1.0\n      v100m: 1.0\n      t2m: 1.0\n      r2m: 1.0\n  output_clamping:\n    lower:\n      t2m: 0.0\n      r2m: 0\n    upper:\n      r2m: 1.0\n```\n\nFor now the neural-lam config only defines a few things:\n\n1. The kind of datastore and the path to its config.\n2. The weighting of different features in\nthe loss function. If you don't define the state feature weighting, it will default to\nweighting all features equally.\n3. The valid numerical range for the output of each feature. The range of each feature defaults to $]-\\infty, \\infty[$.\n\n(This example is taken from the `tests/datastore_examples/mdp` directory.)\n\n\nBelow follow instructions on how to use Neural-LAM to train and evaluate\nmodels, with details first given for each kind of datastore implemented\nand later the graph generation. Once `neural-lam` has been installed the\ngeneral process is:\n\n1. Run any pre-processing scripts to generate the necessary derived data that your chosen datastore requires\n2. Run the graph-creation step\n3. 
Train the model\n\n## Data (the `DataStore` and `WeatherDataset` classes)\n\nTo enable flexibility in what input-data sources can be used with neural-lam,\nthe input-data representation is split into two parts:\n\n1. A \"datastore\" (represented by instances of\n   [neural_lam.datastore.BaseDataStore](neural_lam/datastore/base.py)) which\n   takes care of loading a given category (state, forcing or static) and split\n   (train/val/test) of data from disk and returning it as an `xarray.DataArray`.\n   The returned data-array is expected to have the spatial coordinates\n   flattened into a single `grid_index` dimension and all variables and vertical\n   levels stacked into a feature dimension (named `{category}_feature`). The\n   datastore also provides information about the number, names and units of\n   variables in the data, the boundary mask, normalisation values and grid\n   information.\n\n2. A `pytorch.Dataset`-derived class (called\n   `neural_lam.weather_dataset.WeatherDataset`) which takes care of sampling in\n   time to create individual samples for training, validation and testing. The\n   `WeatherDataset` class is also responsible for normalising the values and\n   returning `torch.Tensor`-objects.\n\nThere are currently two different datastores implemented in the codebase:\n\n1. `neural_lam.datastore.MDPDatastore` which represents loading of\n   *training-ready* datasets in zarr format created with the\n   [mllam-data-prep](https://github.com/mllam/mllam-data-prep) package.\n   Training-ready refers to the fact that this data has been transformed\n   (variables have been stacked, spatial coordinates have been flattened,\n   statistics for normalisation have been calculated, etc) to be ready for\n   training. `mllam-data-prep` can combine any number of datasets that can be\n   read with [xarray](https://github.com/pydata/xarray) and the processing can\n   either be done at run-time or as a pre-processing step before calling\n   neural-lam.\n\n2. 
`neural_lam.datastore.NpyFilesDatastoreMEPS` which reads MEPS data from\n   `.npy`-files in the format introduced in neural-lam `v0.1.0`. Note that this\n   datastore is specific to the format of the MEPS dataset, but can act as an\n   example for how to create similar numpy-based datastores.\n\nIf neither of these options fits your needs you can create your own datastore by\nsubclassing the `neural_lam.datastore.BaseDataStore` class or the\n`neural_lam.datastore.BaseRegularGridDatastore` class (if your data is stored on\na regular grid) and implementing the abstract methods.\n\n\n### MDP (mllam-data-prep) Datastore - `MDPDatastore`\n\nWith `MDPDatastore` (the mllam-data-prep datastore) all the selection,\ntransformation and pre-calculation steps needed to go from,\nfor example, gridded weather data to a format that is optimised for training\nin neural-lam are done in a separate package called\n[mllam-data-prep](https://github.com/mllam/mllam-data-prep) rather than in\nneural-lam itself.\nSpecifically, the `mllam-data-prep` datastore configuration (for example\n[danra.datastore.yaml](tests/datastore_examples/mdp/danra.datastore.yaml))\nspecifies a) what source datasets to read from, b) what variables to select, c)\nwhat transformations of dimensions and variables to make, d) what statistics to\ncalculate (for normalisation) and e) how to split the data into training,\nvalidation and test sets (see full details about the configuration specification\nin the [mllam-data-prep README](https://github.com/mllam/mllam-data-prep)).\n\nFrom a datastore configuration `mllam-data-prep` returns the transformed\ndataset as an `xr.Dataset`, which is then written in zarr-format to disk by\n`neural-lam` when the datastore is first initiated (the path of the dataset is\nderived from the datastore config, so that from a config named `danra.datastore.yaml` the resulting dataset is stored in `danra.datastore.zarr`).\nYou can also run `mllam-data-prep` directly to create the processed 
dataset by providing the path to the datastore configuration file:\n\n```bash\npython -m mllam_data_prep --config data/danra.datastore.yaml\n```\n\nIf you will be working on a large dataset (on the order of 10GB or more) it\ncould be beneficial to produce the processed `.zarr` dataset before using it\nin neural-lam so that you can do the processing across multiple CPU cores in parallel. This is done by including the `--dask-distributed-local-core-fraction` argument when calling mllam-data-prep to set the fraction of your system's CPU cores that should be used for processing (see the\n[mllam-data-prep\nREADME for details](https://github.com/mllam/mllam-data-prep?tab=readme-ov-file#creating-large-datasets-with-daskdistributed)).\n\nFor example:\n\n```bash\npython -m mllam_data_prep --config data/danra.datastore.yaml --dask-distributed-local-core-fraction 0.5\n```\n\n### NpyFiles MEPS Datastore - `NpyFilesDatastoreMEPS`\n\nVersion `v0.1.0` of Neural-LAM was built to train from numpy-files from the\nMEPS weather forecasts dataset.\nTo enable this functionality to live on in later versions of neural-lam we have\nbuilt a datastore called `NpyFilesDatastoreMEPS` which implements functionality\nto read from these exact same numpy-files. 
At this stage this datastore class\nis very much tied to the MEPS dataset, but the code is written in a way where\nit could quite easily be adapted to work with numpy-based weather\nforecast/analysis files in the future.\n\nThe full MEPS dataset can be shared with other researchers on request; contact us if you are interested.\nA tiny subset of the data (named `meps_example`) is available in\n`example_data.zip`, which can be downloaded from\n[here](https://drive.google.com/drive/folders/1N6ZT_mkfbdVloVsNs9T5YOrMtxd3jG-j?usp=sharing).\n\nDownload the file and unzip it in the neural-lam directory.\nGraphs used in the initial paper are also available for download at the same link (but can just as easily be re-generated using `python -m neural_lam.create_graph`).\nNote that this is far too little data to train any useful models, but all pre-processing and training steps can be run with it.\nIt should thus be useful to make sure that your python environment is set up correctly and that all the code can be run without any issues.\n\nThe following datastore configuration works with the MEPS dataset:\n\n```yaml\n# meps.datastore.yaml\ndataset:\n  name: meps_example\n  num_forcing_features: 16\n  var_longnames:\n  - pres_heightAboveGround_0_instant\n  - pres_heightAboveSea_0_instant\n  - nlwrs_heightAboveGround_0_accum\n  - nswrs_heightAboveGround_0_accum\n  - r_heightAboveGround_2_instant\n  - r_hybrid_65_instant\n  - t_heightAboveGround_2_instant\n  - t_hybrid_65_instant\n  - t_isobaricInhPa_500_instant\n  - t_isobaricInhPa_850_instant\n  - u_hybrid_65_instant\n  - u_isobaricInhPa_850_instant\n  - v_hybrid_65_instant\n  - v_isobaricInhPa_850_instant\n  - wvint_entireAtmosphere_0_instant\n  - z_isobaricInhPa_1000_instant\n  - z_isobaricInhPa_500_instant\n  var_names:\n  - pres_0g\n  - pres_0s\n  - nlwrs_0\n  - nswrs_0\n  - r_2\n  - r_65\n  - t_2\n  - t_65\n  - t_500\n  - t_850\n  - u_65\n  - u_850\n  - v_65\n  - v_850\n  - wvint_0\n  - z_1000\n  - z_500\n  var_units:\n  - Pa\n  - Pa\n  - 
W/m\\textsuperscript{2}\n  - W/m\\textsuperscript{2}\n  - \"-\"\n  - \"-\"\n  - K\n  - K\n  - K\n  - K\n  - m/s\n  - m/s\n  - m/s\n  - m/s\n  - kg/m\\textsuperscript{2}\n  - m\\textsuperscript{2}/s\\textsuperscript{2}\n  - m\\textsuperscript{2}/s\\textsuperscript{2}\n  num_timesteps: 65\n  num_ensemble_members: 2\n  step_length: 3\n  remove_state_features_with_index: [15]\ngrid_shape_state:\n- 268\n- 238\nprojection:\n  class_name: LambertConformal\n  kwargs:\n    central_latitude: 63.3\n    central_longitude: 15.0\n    standard_parallels:\n    - 63.3\n    - 63.3\n```\n\nWhich you can then use in a neural-lam configuration file like this:\n\n```yaml\n# config.yaml\ndatastore:\n  kind: npyfilesmeps\n  config_path: meps.datastore.yaml\ntraining:\n  state_feature_weighting:\n    __config_class__: ManualStateFeatureWeighting\n    values:\n      u100m: 1.0\n      v100m: 1.0\n```\n\nFor npy-file based datastores you must separately run the command that creates the variables used for standardization:\n\n```bash\npython -m neural_lam.datastore.npyfilesmeps.compute_standardization_stats \u003cpath-to-datastore-config\u003e\n```\n\n### Graph creation\n\nRun `python -m neural_lam.create_graph` with suitable options to generate the graph you want to use (see `python -m neural_lam.create_graph --help` for a list of options).\nThe graphs used for the different models in the [paper](#graph-based-neural-weather-prediction-for-limited-area-modeling) can be created as:\n\n* **GC-LAM**: `python -m neural_lam.create_graph --config_path \u003cneural-lam-config-path\u003e --name multiscale`\n* **Hi-LAM**: `python -m neural_lam.create_graph --config_path \u003cneural-lam-config-path\u003e --name hierarchical --hierarchical` (also works for Hi-LAM-Parallel)\n* **L1-LAM**: `python -m neural_lam.create_graph --config_path \u003cneural-lam-config-path\u003e --name 1level --levels 1`\n\nThe graph-related files are stored in a directory called `graphs`.\n\n## Logging your experiments\n\n### Weights 
\u0026 Biases Integration\nThe project is fully integrated with [Weights \u0026 Biases](https://www.wandb.ai/) (W\u0026B) for logging and visualization, but can just as easily be used without it.\nWhen W\u0026B is used, training configuration, training/test statistics and plots are sent to the W\u0026B servers and made available in an interactive web interface.\nIf W\u0026B is turned off, logging instead saves everything locally to a directory like `wandb/dryrun...`.\nThe W\u0026B project name is set to `neural-lam`, but this can be changed in the flags of `python -m neural_lam.train_model` (using argparse).\nSee the [W\u0026B documentation](https://docs.wandb.ai/) for details.\n\nIf you would like to login and use W\u0026B, run:\n```\nwandb login\n```\nIf you would like to turn off W\u0026B and just log things locally, run:\n```\nwandb off\n```\n\n### MLFlow Integration\nThe project is also integrated with [MLFlow](https://mlflow.org/) for logging and storing artefacts.\n\nMLFlow is not used by default, but can be switched to by setting `--logger mlflow` in the training command. With MLFlow enabled, training configuration, training/test statistics and plots are logged to the MLFlow server. MLFlow is self-hosted and can be run locally or on a server. See the [MLFlow documentation](https://mlflow.org/docs/latest/index.html) for details.\n\nUse the environment variable `MLFLOW_TRACKING_URI` to set the URI of the MLFlow server. If it is not set, MLFlow logging cannot be used. 
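The tracking URI can equally be set from Python, for example in a launcher script, before training starts. A minimal sketch (the URI below is illustrative):

```python
# Sketch: point MLflow at a tracking server before launching training.
# http://localhost:5000 is an illustrative address; use your own server.
import os

os.environ["MLFLOW_TRACKING_URI"] = "http://localhost:5000"
```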
An example of setting the URI and running a training command is `MLFLOW_TRACKING_URI=http://localhost:5000 python -m neural_lam.train_model --config_path \u003cconfig_path\u003e --logger mlflow`.\n\n## Train Models\nModels can be trained using `python -m neural_lam.train_model --config_path \u003cconfig_path\u003e`.\nRun `python -m neural_lam.train_model --help` for a full list of training options.\nA few of the key ones are outlined below:\n\n* `--config_path`: Path to the configuration for neural-lam (for example `data/myexperiment/config.yaml`)\n* `--model`: Which model to train\n* `--graph`: Which graph to use with the model\n* `--epochs`: Number of epochs to train for\n* `--processor_layers`: Number of GNN layers to use in the processing part of the model\n* `--ar_steps_train`: Number of time steps to unroll for when making predictions and computing the loss\n* `--ar_steps_eval`: Number of time steps to unroll for during validation steps\n\nCheckpoints of trained models are stored in the `saved_models` directory.\nThe implemented models are:\n\n### Graph-LAM\nThis is the basic graph-based LAM model.\nThe encode-process-decode framework is used with a mesh graph in order to make one-step predictions.\nThis model class is used both for the L1-LAM and GC-LAM models from the [paper](#graph-based-neural-weather-prediction-for-limited-area-modeling), only with different graphs.\n\nTo train L1-LAM use\n```\npython -m neural_lam.train_model --model graph_lam --graph 1level ...\n```\n\nTo train GC-LAM use\n```\npython -m neural_lam.train_model --model graph_lam --graph multiscale ...\n```\n\n### Hi-LAM\nA version of Graph-LAM that uses a hierarchical mesh graph and performs sequential message passing through the hierarchy during processing.\n\nTo train Hi-LAM use\n```\npython -m neural_lam.train_model --model hi_lam --graph hierarchical ...\n```\n\n### Hi-LAM-Parallel\nA version of Hi-LAM where all message passing in the hierarchical mesh (up, down, 
inter-level) is run in parallel.\nNot included in the paper as initial experiments showed worse results than Hi-LAM, but could be interesting to try in more settings.\n\nTo train Hi-LAM-Parallel use\n```\npython -m neural_lam.train_model --model hi_lam_parallel --graph hierarchical ...\n```\n\nCheckpoint files for our models trained on the MEPS data are available upon request.\n\n### High Performance Computing\n\nThe training script can be run on a cluster with multiple GPU nodes. Neural-LAM is set up to use PyTorch Lightning's `DDP` backend for distributed training.\nThe code can be used on systems both with and without SLURM. If the cluster has multiple nodes, set the `--num_nodes` argument accordingly.\n\nUsing SLURM, the job can be started with `sbatch slurm_job.sh` with a shell script like the following.\n```\n#!/bin/bash -l\n#SBATCH --job-name=Neural-LAM\n#SBATCH --time=24:00:00\n#SBATCH --nodes=2\n#SBATCH --ntasks-per-node=4\n#SBATCH --gres=gpu:4\n#SBATCH --partition=normal\n#SBATCH --mem=444G\n#SBATCH --no-requeue\n#SBATCH --exclusive\n#SBATCH --output=lightning_logs/neurallam_out_%j.log\n#SBATCH --error=lightning_logs/neurallam_err_%j.log\n\n# Load necessary modules or activate environment, for example:\nconda activate neural-lam\n\nsrun -ul python -m neural_lam.train_model \\\n    --config_path /path/to/config.yaml \\\n    --num_nodes $SLURM_JOB_NUM_NODES\n```\n\nWhen using a system without SLURM, where all GPUs are visible, it is possible to select a subset of GPUs to use for training with the `--devices` CLI argument, e.g. 
`--devices 0 1` to use the first two GPUs.\n\n## Evaluate Models\nEvaluation is also done using `python -m neural_lam.train_model --config_path \u003cconfig-path\u003e`, but using the `--eval` option.\nUse `--eval val` to evaluate the model on the validation set and `--eval test` to evaluate on test data.\nMost of the training options are also relevant for evaluation.\nSome options specifically important for evaluation are:\n\n* `--load`: Path to model checkpoint file (`.ckpt`) to load parameters from\n* `--n_example_pred`: Number of example predictions to plot during evaluation\n* `--ar_steps_eval`: Number of time steps to unroll for during evaluation\n\n**Note:** While it is technically possible to use multiple GPUs for running evaluation, this is strongly discouraged. If using multiple devices the `DistributedSampler` will replicate some samples to make sure all devices have the same batch size, meaning that evaluation metrics will be unreliable.\nA possible workaround is to just use batch size 1 during evaluation.\nThis issue stems from PyTorch Lightning. 
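To see why the metrics become unreliable, consider a toy illustration of the padding that `DistributedSampler` performs (the helper below mirrors its wrap-around behaviour but is not neural-lam or Lightning code):

```python
# Toy illustration (not neural-lam/Lightning code): DistributedSampler pads
# the sample count up to a multiple of the number of devices by repeating
# samples from the start, so averaged metrics count some samples twice.
def padded_sample_indices(n_samples: int, n_devices: int) -> list[int]:
    n_padded = -(-n_samples // n_devices) * n_devices  # round up to a multiple
    return [i % n_samples for i in range(n_padded)]

indices = padded_sample_indices(10, 4)
# 12 indices in total: samples 0 and 1 each enter the metrics twice.
```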
See for example [this PR](https://github.com/Lightning-AI/torchmetrics/pull/1886) for more discussion.\n\n# Repository Structure\nExcept for training and pre-processing scripts all the source code can be found in the `neural_lam` directory.\nModel classes, including abstract base classes, are located in `neural_lam/models`.\nNotebooks for visualization and analysis are located in `docs`.\n\n## Format of graph directory\nThe `graphs` directory contains generated graph structures that can be used by different graph-based models.\nThe structure is shown with examples below:\n```\ngraphs\n├── graph1                                  - Directory with a graph definition\n│   ├── m2m_edge_index.pt                   - Edges in mesh graph (neural_lam.create_mesh)\n│   ├── g2m_edge_index.pt                   - Edges from grid to mesh (neural_lam.create_mesh)\n│   ├── m2g_edge_index.pt                   - Edges from mesh to grid (neural_lam.create_mesh)\n│   ├── m2m_features.pt                     - Static features of mesh edges (neural_lam.create_mesh)\n│   ├── g2m_features.pt                     - Static features of grid to mesh edges (neural_lam.create_mesh)\n│   ├── m2g_features.pt                     - Static features of mesh to grid edges (neural_lam.create_mesh)\n│   └── mesh_features.pt                    - Static features of mesh nodes (neural_lam.create_mesh)\n├── graph2\n├── ...\n└── graphN\n```\n\n### Mesh hierarchy format\nTo keep track of levels in the mesh graph, a list format is used for the files with mesh graph information.\nIn particular, the files\n```\n│   ├── m2m_edge_index.pt                   - Edges in mesh graph (neural_lam.create_mesh)\n│   ├── m2m_features.pt                     - Static features of mesh edges (neural_lam.create_mesh)\n│   ├── mesh_features.pt                    - Static features of mesh nodes (neural_lam.create_mesh)\n```\nall contain lists of length `L`, for a hierarchical mesh graph with `L` layers.\nFor non-hierarchical graphs 
`L == 1` and these are all just single-entry lists.\nEach entry in the list contains the corresponding edge set or features of that level.\nNote that the first level (index 0 in these lists) corresponds to the lowest level in the hierarchy.\n\nIn addition, hierarchical mesh graphs (`L \u003e 1`) feature a few additional files with static data:\n```\n├── graph1\n│   ├── ...\n│   ├── mesh_down_edge_index.pt             - Downward edges in mesh graph (neural_lam.create_mesh)\n│   ├── mesh_up_edge_index.pt               - Upward edges in mesh graph (neural_lam.create_mesh)\n│   ├── mesh_down_features.pt               - Static features of downward mesh edges (neural_lam.create_mesh)\n│   ├── mesh_up_features.pt                 - Static features of upward mesh edges (neural_lam.create_mesh)\n│   ├── ...\n```\nThese files have the same list format as the ones above, but each list has length `L-1` (as these edges describe connections between levels).\nEntry 0 in these lists describes the edges between the two lowest levels, 1 and 2.\n\n# Development and Contributing\nAny push or pull request to the main branch will trigger a selection of pre-commit hooks.\nThese hooks will run a series of checks on the code, like formatting and linting.\nIf any of these checks fail the push or PR will be rejected.\nTo test whether your code passes these checks before pushing, run\n``` bash\npre-commit run --all-files\n```\nfrom the root directory of the repository.\n\nFurthermore, all tests in the `tests` directory will be run by a github action upon pushing changes. 
Failure in any of the tests will also reject the push/PR.\n\n# Contact\nIf you are interested in machine learning models for LAM, have questions about the implementation or ideas for extending it, feel free to get in touch.\nThere is an open [mllam slack channel](https://join.slack.com/t/ml-lam/shared_invite/zt-2t112zvm8-Vt6aBvhX7nYa6Kbj_LkCBQ) that anyone can join (after following the link you have to request to join, this is to avoid spam bots).\nYou can also open a github issue on this page.