A curated list of open technology projects to sustain a stable climate, energy supply, biodiversity and natural resources.

xl2times

An open source tool to convert TIMES models specified in Excel to a format ready for processing by GAMS.
https://github.com/etsap-times/xl2times

Category: Energy Systems
Sub Category: Energy System Modeling Frameworks

Keywords

data-processing energy-systems-modelling open-science open-source times-model

Keywords from Contributors

sustainable climate-change 3d-map measurement web-map orchestration

Last synced: about 12 hours ago

Repository metadata

Open source tool to convert TIMES models specified in Excel

README.md

xl2times

xl2times is an open source tool to convert TIMES models specified in Excel to a format ready for processing by GAMS.
Development originally started in a Microsoft repository, with the intention of making it easier for anyone to reproduce research results on TIMES models.

TIMES is an open source energy systems model generator developed by the Energy Technology Systems Analysis Program (ETSAP) of the International Energy Agency (IEA) that is used around the world to inform energy policy.
It is fully explained in the TIMES Model Documentation.

Multiple approaches to using spreadsheets for specifying TIMES models have been developed, e.g. ANSWER-TIMES and VEDA-TIMES.
At present, xl2times implements partial support of the Veda approach described in the TIMES Model Documentation PART IV and Veda Documentation.
Support of other approaches may be added over time.

Installation and Basic Usage

You can install the latest published version of the tool from PyPI using pip (preferably in a virtual environment):

pip install xl2times

You can also install the latest development version by cloning this repository and running the following command in the root directory:

pip install .

After installation, run the following command to see the basic usage and available options:

xl2times --help

Here is an example invocation to convert the Demo 1 model into DD (you need to have the benchmarks set up, see the "Running Benchmarks" section below):

xl2times benchmarks/xlsx/DemoS_001/

Note that by default, the tool puts the produced output DD files into a directory called output/ in the current working directory. This behavior can be changed using the --output_dir /path/to/desired/output/ argument.

Note: If you are running a huge model and it looks like nothing is happening, try adding a -v or --verbose argument to see more detailed logs, including a message when each intermediate transform is completed.

If the tool is installed on Windows, the above commands should be prefixed by python -m.

Documentation

The tool's documentation is at https://xl2times.readthedocs.io/ and its source is in the docs/ directory.

The documentation is generated by Sphinx and hosted on ReadTheDocs. We use the following extensions:

  • myst-parser: to be able to write documentation in markdown
  • sphinx-book-theme: the theme
  • sphinx-copybutton: to add copy buttons to code blocks
  • sphinxcontrib-apidoc: to automatically generate API documentation from the Python package
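For reference, a minimal docs/conf.py enabling these extensions might look like the following sketch (illustrative only; the actual configuration in docs/ may differ, and the apidoc paths here are assumptions):

```python
# Illustrative sketch of docs/conf.py; the real file in docs/ may differ.
project = "xl2times"

extensions = [
    "myst_parser",           # write documentation pages in Markdown
    "sphinx_copybutton",     # add copy buttons to code blocks
    "sphinxcontrib.apidoc",  # auto-generate API docs from the Python package
]

# The theme
html_theme = "sphinx_book_theme"

# Standard sphinxcontrib-apidoc options: where the package source lives
# and where the generated API pages go (both paths are assumptions).
apidoc_module_dir = "../xl2times"
apidoc_output_dir = "api"
```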

Documentation can be generated locally (after setting up your development environment as described below) by:

cd docs
make html

Testing on an existing model

If you have an existing TIMES model in Excel (e.g. developed using Veda) and would like to use the tool with it, we recommend conducting bulk testing first. Bulk testing shows how much of the syntax used in the model is supported by the tool.

Start by generating *.dd files based on the AllScenario scenario group (i.e. in Veda).

Afterwards, execute the following command from the root of the tool (it assumes the case name My_Bulk_Test) to extract all the data from the *.dd files:

python xl2times/dd_to_csv.py "C:\VEDA\GAMS_WrkTIMES\My_Bulk_Test" ground_truth
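To illustrate the kind of extraction dd_to_csv.py performs, here is a hedged sketch that parses a simple GAMS PARAMETER block from DD text into keyed records. The format handled here is a simplified assumption; the real script supports far more of the DD syntax:

```python
import re

def parse_dd_parameter(dd_text: str) -> dict[str, list[tuple[list[str], str]]]:
    """Parse simple `PARAMETER NAME /.../;` blocks from DD text.

    Illustrative only: the real dd_to_csv.py handles much more of the
    GAMS DD syntax than this sketch does.
    """
    tables: dict[str, list[tuple[list[str], str]]] = {}
    # Match: PARAMETER <name> [optional comment] / <records> /;
    pattern = re.compile(
        r"PARAMETER\s+(\w+)[^/]*/\s*(.*?)\s*/\s*;", re.IGNORECASE | re.DOTALL
    )
    for name, body in pattern.findall(dd_text):
        records = []
        for line in body.splitlines():
            line = line.strip()
            if not line:
                continue
            # Each record is dot-separated set keys followed by a value.
            *key_part, value = line.rsplit(None, 1)
            keys = key_part[0].split(".") if key_part else []
            records.append((keys, value))
        tables[name] = records
    return tables

sample = """
PARAMETER NCAP_COST ' ' /
REG1.ELCPLT.2020 1200.5
REG1.ELCPLT.2030 1100.0
/;
"""
print(parse_dd_parameter(sample)["NCAP_COST"])
```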

Finally, execute the tool on the model (e.g. My_TIMES-Model) and compare the results to the previously extracted data (assumes an activated virtual environment):

xl2times "C:\VEDA\VEDA_Models\My_TIMES-Model"  --ground_truth_dir=ground_truth -v

The tool will summarise any differences between the data it generates and the extracted data.
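The comparison itself is implemented inside xl2times, but the idea can be sketched as counting rows that match the ground truth versus extra or missing rows. The function below is hypothetical and not the tool's actual code:

```python
import csv
from pathlib import Path

def compare_csv_dirs(generated: Path, ground_truth: Path) -> dict[str, int]:
    """Count generated CSV rows that match the ground truth vs. extras.

    A simplified sketch of the kind of comparison the tool reports;
    the real implementation lives inside xl2times itself.
    """
    correct = additional = missing = 0
    for truth_file in sorted(ground_truth.glob("*.csv")):
        truth_rows = {
            tuple(r) for r in csv.reader(truth_file.read_text().splitlines())
        }
        gen_file = generated / truth_file.name
        gen_rows = (
            {tuple(r) for r in csv.reader(gen_file.read_text().splitlines())}
            if gen_file.exists()
            else set()
        )
        correct += len(gen_rows & truth_rows)       # rows in both
        additional += len(gen_rows - truth_rows)    # extra generated rows
        missing += len(truth_rows - gen_rows)       # ground-truth rows not produced
    return {"correct": correct, "additional": additional, "missing": missing}
```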

Development

Setup

We recommend installing the tool in editable mode (-e) in a Python virtual environment:

python3 -m venv .venv
source .venv/bin/activate
pip install -U pip
pip install -e .[dev]

On Windows:

python -m venv .venv
".venv/Scripts/python" -m pip install -U pip
".venv/Scripts/activate"
pip install -e .[dev]

We use the black code formatter. The pip command above will install it along with other requirements.

We also use the pyright type checker; our GitHub Actions check will fail if pyright detects any type errors in your code. You can install pyright in your virtual environment and check your code by running these commands in the root of the repository:

pip install pyright==1.1.304
pyright

Additionally, you can install a git pre-commit hook that ensures your changes are formatted and that pyright detects no issues before creating new commits:

pre-commit install

If you want to skip these pre-commit steps for a particular commit (for instance, if pyright reports issues but you still want to commit your changes to your branch), you can run:

git commit --no-verify

Running Benchmarks

We use the TIMES DemoS models and some public TIMES models as benchmarks.
See our GitHub Actions CI workflow .github/workflows/ci.yml and the utility script utils/run_benchmarks.py to see how we benchmark the tool and automatically check PRs for regressions.
If you are a developer, you can use the below instructions to set up and run the benchmarks locally on Linux/WSL:

./setup-benchmarks.sh

Note that this script assumes you have access to all the relevant repositories (some are private and you'll have to request access) - if not, comment out the inaccessible benchmarks from benchmarks.yml before running.

Then to run the benchmarks:

# Run only a single benchmark by name (see benchmarks.yml for the list of names)
python utils/run_benchmarks.py benchmarks.yml --run DemoS_001-all

# To see the full output logs and save them to a file for convenience
python utils/run_benchmarks.py benchmarks.yml --run DemoS_001-all --verbose | tee out.txt

# Run all benchmarks (without GAMS run, just comparing CSV data for regressions)
# Note: if you have multiple remotes, set etsap-times/xl2times as the `origin`, as it is used for speed/correctness comparisons.
python utils/run_benchmarks.py benchmarks.yml

# Run benchmarks with regression tests vs main branch
git checkout -b feature/your_new_changes
# ... make your code changes here ...
git commit -a -m "your commit message" # changes must be committed for the comparison against the `main` branch to run.
python utils/run_benchmarks.py benchmarks.yml

At this point, if you haven't broken anything, you should see something like:

Change in runtime: +2.97s
Change in correct rows: +0
Change in additional rows: +0
No regressions. You're awesome!

If runtime increases substantially, the number of correct rows decreases, or fewer rows are produced, then you've broken something and will need to figure out how to fix it.

Debugging Regressions

If your change is causing regressions on one of the benchmarks, a useful way to debug and find the difference is to run the tool in verbose mode and compare the intermediate tables. For example, if your branch has regressions on Demo 1:

# First, on the `main` branch:
xl2times benchmarks/xlsx/DemoS_001 --output_dir benchmarks/out/DemoS_001-all --ground_truth_dir benchmarks/csv/DemoS_001-all -v -v > before 2>&1
# Then, on your branch:
git checkout my-branch-name
xl2times benchmarks/xlsx/DemoS_001 --output_dir benchmarks/out/DemoS_001-all --ground_truth_dir benchmarks/csv/DemoS_001-all -v -v > after 2>&1
# And then compare the files `before` and `after`
code -d before after

VS Code will highlight the changes in the two files, which should correspond to any differences in the intermediate tables.
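If VS Code is not available, the same comparison can be done with plain diff or Python's standard difflib module. The helper below is hypothetical and not part of the repo:

```python
import difflib
from pathlib import Path

def diff_logs(before_path: str, after_path: str) -> list[str]:
    """Return a unified diff of two verbose-log captures.

    A plain-Python alternative to `code -d before after` for spotting
    where the intermediate tables start to diverge.
    """
    before = Path(before_path).read_text().splitlines(keepends=True)
    after = Path(after_path).read_text().splitlines(keepends=True)
    return list(
        difflib.unified_diff(before, after, fromfile="before", tofile="after")
    )
```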

Publishing the Tool

Follow these steps to release a new version of xl2times and publish it on PyPI:

  • Bump the version number in pyproject.toml and xl2times/__init__.py (use Semantic Versioning)
  • Open a PR with this change titled "Release vX.Y.Z"
  • When the PR is merged, create a new release titled "vX.Y.Z". Select "Create a new tag: on publish" and click "Generate release notes" to generate the notes automatically.
  • Click "Publish release" to publish the release on GitHub. A GitHub Actions workflow will automatically upload the distribution to PyPI.
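The first step can be double-checked mechanically before opening the PR. Here is a hypothetical helper (not part of the repo) that verifies the two version strings agree, assuming version = "X.Y.Z" in pyproject.toml and __version__ = "X.Y.Z" in xl2times/__init__.py:

```python
import re
from pathlib import Path

def check_versions(pyproject: str, init_py: str) -> str:
    """Verify pyproject.toml and xl2times/__init__.py agree on the version.

    Hypothetical pre-release helper, not part of the xl2times repo.
    Assumes `version = "X.Y.Z"` in pyproject.toml and
    `__version__ = "X.Y.Z"` in __init__.py.
    """
    pyproject_ver = re.search(
        r'^version\s*=\s*"([^"]+)"', Path(pyproject).read_text(), re.MULTILINE
    ).group(1)
    init_ver = re.search(
        r'^__version__\s*=\s*"([^"]+)"', Path(init_py).read_text(), re.MULTILINE
    ).group(1)
    if pyproject_ver != init_ver:
        raise SystemExit(f"Version mismatch: {pyproject_ver} != {init_ver}")
    return pyproject_ver
```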

Contributing

This project welcomes contributions and suggestions. See Code of Conduct and Contributing for more details.


Committers metadata

Last synced: 7 days ago

Total Commits: 356
Total Committers: 9
Avg Commits per committer: 39.556
Development Distribution Score (DDS): 0.598

Commits in past year: 69
Committers in past year: 3
Avg Commits per committer in past year: 23.0
Development Distribution Score (DDS) in past year: 0.174

Name Email Commits
Olexandr Balyk ob@f****y 143
Sam Webster 1****r 104
Siddharth Krishna s****a 79
Tom Minka 8****a 13
Sam West s****t@c****u 7
Microsoft Open Source m****e 5
Niloy n****t 3
microsoft-github-operations[bot] 5****] 1
AlexRomeroPrieto 1****o 1


Issue and Pull Request metadata

Last synced: 1 day ago

Total issues: 68
Total pull requests: 121
Average time to close issues: 4 months
Average time to close pull requests: 16 days
Total issue authors: 3
Total pull request authors: 4
Average comments per issue: 3.19
Average comments per pull request: 2.01
Merged pull request: 116
Bot issues: 0
Bot pull requests: 0

Past year issues: 9
Past year pull requests: 23
Past year average time to close issues: 15 days
Past year average time to close pull requests: 1 day
Past year issue authors: 1
Past year pull request authors: 2
Past year average comments per issue: 3.11
Past year average comments per pull request: 2.3
Past year merged pull request: 22
Past year bot issues: 0
Past year bot pull requests: 0

More stats: https://issues.ecosyste.ms/repositories/lookup?url=https://github.com/etsap-times/xl2times

Top Issue Authors

  • olejandro (51)
  • siddharth-krishna (9)
  • samwebster (8)

Top Pull Request Authors

  • olejandro (55)
  • siddharth-krishna (49)
  • samwebster (16)
  • SamRWest (1)

Top Issue Labels

  • bug (2)
  • documentation (2)


Package metadata

pypi.org: xl2times

An open source tool to convert TIMES models specified in Excel to a format ready for processing by GAMS

  • Documentation: https://xl2times.readthedocs.io
  • Licenses: MIT License (Copyright (c) 2022-2023 Microsoft Corporation; Copyright (c) 2023-2024 IEA Energy Technology Systems Analysis Programme)
  • Latest release: 0.2.2 (published 22 days ago)
  • Last Synced: 2025-04-25T19:00:42.046Z (1 day ago)
  • Versions: 5
  • Dependent Packages: 0
  • Dependent Repositories: 0
  • Downloads: 391 Last month
  • Rankings:
    • Dependent packages count: 10.141%
    • Average: 38.596%
    • Dependent repos count: 67.051%
  • Maintainers (2)

Score: 12.160018235769755