Open Energy Benchmark
This repository contains code for benchmarking optimization solvers on problems from the energy planning domain, and an interactive website for analyzing the results.
https://github.com/open-energy-transition/solver-benchmark
Category: Energy Systems
Sub Category: Energy System Modeling Frameworks
Keywords from Contributors
energy-system-model energy-system energy-system-planning investment-optimization operational-optimization power-system-model power-system-planning pypsa-africa pypsa-earth scenario-analysis
Repository metadata
A benchmark of (MI)LP solvers on energy modelling problems
- Host: GitHub
- URL: https://github.com/open-energy-transition/solver-benchmark
- Owner: open-energy-transition
- License: agpl-3.0
- Created: 2024-07-15T08:05:05.000Z (over 1 year ago)
- Default Branch: main
- Last Pushed: 2025-12-20T00:41:40.000Z (23 days ago)
- Last Synced: 2025-12-21T08:54:59.448Z (22 days ago)
- Language: TypeScript
- Homepage: https://openenergybenchmark.org/
- Size: 29.8 MB
- Stars: 52
- Watchers: 3
- Forks: 11
- Open Issues: 42
- Releases: 0
Metadata Files:
- Readme: README.md
- License: LICENSE
README.md
Open Energy Benchmark
This repository contains code for benchmarking optimization solvers on problems from the energy planning domain, and an interactive website for analyzing the results. The live website can be viewed at:
https://openenergybenchmark.org/
Benchmark Problems
All our benchmark problems are open and available as LP/MPS files that can be downloaded in one click from our website's Benchmark Set page. Some problems have been generated by us using open source energy model frameworks, and for these we have configuration files and instructions for reproducing the problems.
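Problem files can also be fetched programmatically. A minimal Python sketch, using the example problem URL that appears later in this README; any LP/MPS URL from the Benchmark Set page works the same way:

import urllib.request

# Download one benchmark problem as an LP file.
url = "https://storage.googleapis.com/solver-benchmarks/genx-3_three_zones_w_co2_capture-no_uc.lp"
urllib.request.urlretrieve(url, "genx-3_three_zones_w_co2_capture-no_uc.lp")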
For more details on how to contribute benchmark problems, see the Benchmarks README.
Solvers
The benchmark runner can run solvers listed in the Solvers README.
We benchmark the last version of each solver released in each calendar year; the 2025 solver versions will be updated at the end of the year.
Project Structure
An overview of the project layout to help you navigate and contribute:
solver-benchmark/
├── runner/ # Benchmark execution scripts
│ ├── benchmark_all.sh # Main entry point for running benchmarks
│ ├── run_benchmarks.py # Python script that orchestrates benchmark runs
│ ├── run_solver.py # Individual solver runner
│ ├── envs/ # Conda environment definitions for each solver year
│ └── benchmarks/ # Downloaded benchmark problem files
├── benchmarks/ # Benchmark problem definitions and metadata
│ ├── pypsa/ # PyPSA-generated energy models
│ ├── jump_highs_platform/ # JuMP/HiGHS benchmark metadata
│ └── *_metadata.yaml # Problem definitions and details
├── website-nextjs/ # Next.js website for viewing results
├── infrastructure/ # GCP VM deployment scripts (for running benchmarks at scale)
├── results/ # Output directory for benchmark results
├── benchmark_results.csv # Main results file
└── metadata.yaml # Merged metadata of all problems on the website
Running Benchmarks
Local Runs
Prerequisites
System Requirements
The benchmark runner currently requires Linux, as it uses systemd-run (not available on macOS or Windows) to enforce memory limits on solvers.
Supported Linux distributions:
- Ubuntu 20.04 LTS or later
- Debian 11 or later
- Other systemd-based Linux distributions
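To illustrate the memory-limit mechanism mentioned above, here is a minimal Python sketch that wraps a solver command in a transient systemd scope with a hard memory cap; the command, its arguments, and the 8G limit are placeholders, not the runner's actual invocation (see runner/run_solver.py for that):

import subprocess

# Placeholder solver command; the real runner's arguments differ.
solver_cmd = ["python", "runner/run_solver.py", "highs", "model.lp"]

# systemd-run creates a transient scope; MemoryMax makes the kernel
# enforce a hard memory cap on everything inside it.
result = subprocess.run(
    ["systemd-run", "--user", "--scope", "-p", "MemoryMax=8G", *solver_cmd],
    capture_output=True,
    text=True,
)
print(result.returncode)  # non-zero if the solver failed or was OOM-killed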
Required Software
Ensure you have the following installed:
- Python 3.12+
- Conda (install Miniconda)
- systemd (usually pre-installed on modern Linux distributions)
Running Supported Solvers on Benchmarks
The benchmark runner script (runner/benchmark_all.sh) is the main entry point for running benchmarks. It takes a list of solvers and a list of years as arguments, and runs the benchmarks for each solver and year. It creates conda environments containing the solvers and other necessary prerequisites, so you do not need to set up a virtual environment just to run it. See the README.
Quickstart:
- Run benchmarks
./runner/benchmark_all.sh -s "highs scip" -y "2025" infrastructure/benchmarks/sample_run/standard-00.yaml
- View logs and results
tail results/benchmark_results.csv # note: benchmark runs overwrite this committed results file
tail runner/logs/*
- View and analyze results by running the website locally
The script saves the measured runtime and memory consumption to a CSV file in results/, which the website then reads and displays. Running the website locally lets you view and analyze results in a user-friendly way; it uses the results from results/benchmark_results.csv.
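If you want to inspect results without the website, the CSV can also be loaded with pandas (already among the project's dependencies). A small sketch; the column names in the commented line are assumptions, so check the file's actual header first:

import pandas as pd

# Load the benchmark results written by the runner.
df = pd.read_csv("results/benchmark_results.csv")
print(df.columns.tolist())  # inspect the actual schema first

# Hypothetical follow-up, assuming columns named "Solver" and "Runtime (s)":
# print(df.groupby("Solver")["Runtime (s)"].describe())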
runner/benchmark_all.sh uses runner/run_benchmarks.py to run the benchmarks by year. If you wish to run benchmarks directly, you can set up the requisite conda env manually. See Documentation.
Cloud Runs
We have cloud orchestration set up for running benchmarks on Google Cloud Platform. See Documentation.
Quickstart:
For cloud infrastructure setup, install OpenTofu and the gcloud CLI, then run:
gcloud auth application-default login
cd infrastructure
tofu init
tofu apply -var-file benchmarks/sample_run/run.tfvars
To set up comprehensive benchmark campaigns, like the one available on the website:
- Use notebooks/allocate-benchmarks-to-vms.ipynb to create the benchmark campaign.
- Run notebooks/run-and-observe-benchmarks.ipynb to observe the benchmark campaign's progress.
Running your own benchmarks
To run your own benchmark problems, either locally or on the cloud, follow the steps in the appropriate section above, but use a benchmarks.yaml file of your own that gives the details (metadata) and the URL/path of your benchmark problems.
Here is a small example:
benchmarks:
  genx-3_three_zones_w_co2_capture-no_uc:
    Sizes:
      - Name: 3-1h
        # Size classification
        Size: M
        # URL of the problem (needed for cloud runs)
        URL: https://storage.googleapis.com/solver-benchmarks/genx-3_three_zones_w_co2_capture-no_uc.lp
        # ALTERNATIVELY, for local runs, you can also give a local path
        Path: tests/sample_benchmarks/sample_lp.lp
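Before a run, it can be useful to sanity-check such a file. A minimal sketch using PyYAML, assuming the structure shown above; this is not the runner's own validation logic:

import yaml

with open("benchmarks.yaml") as f:
    config = yaml.safe_load(f)

for name, spec in config["benchmarks"].items():
    for size in spec["Sizes"]:
        # Each size needs a URL (for cloud runs) or a local Path (for local runs).
        assert "URL" in size or "Path" in size, f"{name}/{size['Name']}: no URL or Path"
        print(name, size["Name"], size.get("Size", "?"))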
You can quickly try running your own problem locally on our supported set of solvers by following these instructions.
Running other solvers
To run either our benchmarks, or your own (see the previous section), on a solver that we do not yet support, you need to install it into the active conda environment and modify run_solver.py appropriately. Please reach out to us (or open an issue) if you would like more details or any help with this.
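As a rough illustration of what such a solver hook has to do, here is a sketch that reads an LP/MPS file and solves it with HiGHS via highspy (one of the project's dependencies); the function name and interface are made up for illustration and do not match run_solver.py:

import highspy

def solve_with_highs(problem_file: str) -> float:
    # Hypothetical hook: solve an LP/MPS file with HiGHS and return the
    # objective value; run_solver.py's real interface differs.
    h = highspy.Highs()
    h.readModel(problem_file)  # accepts .lp and .mps files
    h.run()
    return h.getObjectiveValue()

print(solve_with_highs("tests/sample_benchmarks/sample_lp.lp"))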
Running the Website
The website code is under website-nextjs/. To run the website locally, you need a recent version of Node.js and npm installed. Then, run the following commands:
cd website-nextjs/
npm install
npm run build && npm run dev
Open http://localhost:3000 with your browser to see the website.
To see the results from your runs, navigate to the results page.
Development
We use the ruff code linter and formatter, and GitHub Actions runs various pre-commit checks to ensure code and files are clean.
You can install a git pre-commit hook that ensures your changes are formatted and free of lint issues before new commits are created:
pip install pre-commit
pre-commit install
If you want to skip these pre-commit steps for a particular commit, you can run:
git commit --no-verify
Owner metadata
- Name: open-energy-transition
- Login: open-energy-transition
- Email:
- Kind: organization
- Description:
- Website:
- Location:
- Twitter:
- Company:
- Icon url: https://avatars.githubusercontent.com/u/131007753?v=4
- Repositories: 1
- Last synced at: 2023-05-03T12:28:56.288Z
- Profile URL: https://github.com/open-energy-transition
GitHub Events
Total
- Create event: 139
- Issues event: 119
- Watch event: 28
- Delete event: 119
- Member event: 1
- Issue comment event: 307
- Push event: 657
- Pull request event: 272
- Pull request review event: 321
- Pull request review comment event: 127
- Fork event: 4
Last Year
- Create event: 135
- Issues event: 115
- Watch event: 27
- Delete event: 117
- Member event: 1
- Issue comment event: 295
- Push event: 640
- Pull request event: 266
- Pull request review event: 310
- Pull request review comment event: 119
- Fork event: 4
Committers metadata
Last synced: 28 days ago
Total Commits: 213
Total Committers: 9
Avg Commits per committer: 23.667
Development Distribution Score (DDS): 0.545
Commits in past year: 164
Committers in past year: 9
Avg Commits per committer in past year: 18.222
Development Distribution Score (DDS) in past year: 0.591
| Name | Email | Commits |
|---|---|---|
| jacek-oet | j****g@o****g | 97 |
| Siddharth Krishna | s****a | 40 |
| Kristijan Faust | k****t@o****g | 35 |
| Daniele Lerede | d****e@o****g | 26 |
| pre-commit-ci[bot] | 6****] | 5 |
| dependabot[bot] | 4****] | 4 |
| Madhukar Mishra | m****3@g****m | 3 |
| Goli Vamsi Priya | g****2@g****m | 2 |
| nodet | x****t@g****m | 1 |
Committer domains:
Issue and Pull Request metadata
Last synced: 4 days ago
Total issues: 88
Total pull requests: 311
Average time to close issues: about 1 month
Average time to close pull requests: 8 days
Total issue authors: 6
Total pull request authors: 11
Average comments per issue: 0.7
Average comments per pull request: 1.27
Merged pull request: 228
Bot issues: 0
Bot pull requests: 18
Past year issues: 61
Past year pull requests: 195
Past year average time to close issues: 24 days
Past year average time to close pull requests: 6 days
Past year issue authors: 5
Past year pull request authors: 10
Past year average comments per issue: 0.54
Past year average comments per pull request: 1.51
Past year merged pull request: 141
Past year bot issues: 0
Past year bot pull requests: 16
Top Issue Authors
- siddharth-krishna (78)
- jacek-oet (4)
- danielelerede-oet (3)
- erling-d-andersen (1)
- mattmilten (1)
- jajhall (1)
Top Pull Request Authors
- jacek-oet (138)
- siddharth-krishna (66)
- danielelerede-oet (50)
- KristijanFaust-OET (33)
- pre-commit-ci[bot] (12)
- dependabot[bot] (6)
- drifter089 (2)
- JohannesBehrens (1)
- nour-boulos (1)
- lprieto1409 (1)
- nodet (1)
Top Issue Labels
Top Pull Request Labels
- dependencies (6)
- javascript (4)
- do not merge (2)
Dependencies
- dash ==2.17.1
- glpk ==0.4.7
- highspy ==1.7.2
- ipython ==8.26.0
- linopy ==0.3.14
- memory_profiler ==0.61.0
- pandas ==2.2.2
- pyomo ==6.7.3
- pypsa ==0.29.0
- streamlit ==1.37.0
- streamlit-aggrid ==1.0.5
- streamlit_shadcn_ui ==0.1.18
Score: 6.740519359606223