Energy-Languages
The complete set of tools for energy consumption analysis of programming languages, using Computer Language Benchmark Game.
https://github.com/greensoftwarelab/Energy-Languages
Category: Consumption
Sub Category: Computation and Communication
Keywords
clbg energy languages programming
Keywords from Contributors
clang compile gcc pending zig
Last synced: about 20 hours ago
JSON representation
Repository metadata
The complete set of tools for energy consumption analysis of programming languages, using Computer Language Benchmark Game
- Host: GitHub
- URL: https://github.com/greensoftwarelab/Energy-Languages
- Owner: greensoftwarelab
- License: mit
- Created: 2017-08-28T16:41:20.000Z (over 7 years ago)
- Default Branch: master
- Last Pushed: 2023-10-12T12:46:46.000Z (over 1 year ago)
- Last Synced: 2025-04-25T13:05:26.669Z (2 days ago)
- Topics: clbg, energy, languages, programming
- Language: C
- Homepage:
- Size: 1.67 MB
- Stars: 700
- Watchers: 31
- Forks: 115
- Open Issues: 12
- Releases: 0
Metadata Files:
- Readme: README.md
- License: LICENSE
README.md
Energy Efficiency in Programming Languages
Checking Energy Consumption in Programming Languages Using the Computer Language Benchmark Game as a case study.
What is this?
This repo contains the source code of 10 distinct benchmarks, implemented in 28 different languages (exactly as taken from the Computer Language Benchmark Game).
It also contains tools which provide support, for each benchmark of each language, to 4 operations: (1) compilation, (2) execution, (3) energy measuring and (4) memory peak detection.
How is it structured and how does it work?
This framework follows a specific folder structure, which guarantees the correct workflow when the goal is to perform an operation for all benchmarks at once.
Moreover, for each benchmark it must be defined how to perform the 4 operations considered.
Next, we explain the folder structure and how to specify, for each language benchmark, the execution of each operation.
The Structure
The main folder contains 32 elements:
- 28 sub-folders (one for each of the considered languages); each folder contains a sub-folder for each considered benchmark.
- A Python script compile_all.py, capable of building, running and measuring the energy and memory usage of every benchmark in all considered languages.
- A RAPL sub-folder, containing the code of the energy measurement framework.
- A Bash script gen-input.sh, used to generate the input files for 3 benchmarks: k-nucleotide, reverse-complement, and regex-redux.
Basically, the directory tree will look something like this:
| ...
| <Language-1>
| <benchmark-1>
| <source>
| Makefile
| [input]
| ...
| <benchmark-i>
| <source>
| Makefile
| [input]
| ...
| <Language-i>
| <benchmark-1>
| ...
| <benchmark-i>
| RAPL
| compile_all.py
| gen-input.sh
Taking the C language as an example, this is how the folders for the binary-trees and k-nucleotide benchmarks would look:
| ...
| C
| binary-trees
| binarytrees.gcc-3.c
| Makefile
| k-nucleotide
| knucleotide.c
| knucleotide-input25000000.txt
| Makefile
| ...
| ...
The Operations
Each benchmark sub-folder, included in a language folder, contains a Makefile.
This is the file that states how to perform the 4 supported operations: (1) compilation, (2) execution, (3) energy measuring and (4) memory peak detection.
Basically, each Makefile must contain 4 rules, one for each operation:
Rule | Description
---|---
compile | This rule specifies how the benchmark should be compiled in the considered language. Interpreted languages don't need it, so it can be left blank in such cases.
run | This rule specifies how the benchmark should be executed. It is used to test whether the benchmark runs with no errors and produces the expected output.
measure | This rule shows how to use the framework included in the RAPL folder to measure the energy of executing the task specified in the run rule.
mem | Similar to measure, this rule executes the task specified in the run rule, but with support for memory peak detection.
To better understand it, here's the Makefile for the binary-trees benchmark in the C language:
compile:
/usr/bin/gcc -pipe -Wall -O3 -fomit-frame-pointer -march=native -fopenmp -D_FILE_OFFSET_BITS=64 -I/usr/include/apr-1.0 binarytrees.gcc-3.c -o binarytrees.gcc-3.gcc_run -lapr-1 -lgomp -lm
measure:
sudo ../../RAPL/main "./binarytrees.gcc-3.gcc_run 21" C binary-trees
run:
./binarytrees.gcc-3.gcc_run 21
mem:
/usr/bin/time -v ./binarytrees.gcc-3.gcc_run 21
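As the table above notes, interpreted languages can leave the compile rule empty. Purely as an illustrative sketch (the Python source file name and interpreter invocation below are assumptions, not taken from the repository), the Makefile for a Python version of binary-trees could look roughly like this:
compile:
run:
python3 binarytrees.py 21
measure:
sudo ../../RAPL/main "python3 binarytrees.py 21" Python binary-trees
mem:
/usr/bin/time -v python3 binarytrees.py 21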
Running an example
First things first: we must load the msr kernel module so RAPL can access the energy registers (this requires sudo)
sudo modprobe msr
and then generate the input files, like this
./gen-input.sh
This will generate the necessary input files, which are valid for every language.
We included a main Python script, compile_all.py, that you can call either from the main folder or from inside a language folder, and it can be executed as follows:
python compile_all.py [rule]
You can provide one of the 4 rules referenced before, and the script will perform it using every Makefile found at the same folder level and below.
The default rule is compile, which means that if you run it with no arguments provided (python compile_all.py) the script will try to compile all benchmarks.
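For example, to run the energy measurements for every benchmark from the repository's main folder, or only for the C benchmarks from inside the C language folder, invocations along these lines should work (the relative path to the script in the second command is an assumption):
python compile_all.py measure
cd C && python ../compile_all.py measure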
The results of the energy measurements will be stored in files named <language>.csv, where <language> is the name of the running language.
You will find this file inside the corresponding language folder.
Each .csv will contain a line with the following:
benchmark-name ; PKG (Joules) ; CPU (J) ; GPU (J) ; DRAM (J) ; Time (ms)
Do note that the availability of GPU/DRAM measurements depends on your machine's architecture. This is a constraint of RAPL itself.
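The semicolon-separated lines are easy to post-process. As a convenience sketch that is not part of the repository's tooling (the file path and column order below are assumptions based on the format shown above), a few lines of Python can average the measurements per benchmark:
import csv
from collections import defaultdict

# Average package energy and execution time per benchmark.
# Assumed line format: benchmark-name ; PKG (J) ; CPU (J) ; GPU (J) ; DRAM (J) ; Time (ms)
totals = defaultdict(lambda: [0.0, 0.0, 0])
with open("C/C.csv", newline="") as f:  # hypothetical path: <language>/<language>.csv
    for row in csv.reader(f, delimiter=";"):
        if len(row) < 6:
            continue  # skip malformed or empty lines
        name = row[0].strip()
        totals[name][0] += float(row[1])  # package energy (Joules)
        totals[name][1] += float(row[5])  # execution time (ms)
        totals[name][2] += 1
for name, (pkg, ms, runs) in totals.items():
    print(f"{name}: {pkg / runs:.2f} J avg, {ms / runs:.0f} ms avg over {runs} run(s)")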
Add your own example!
Wanna know your own code's energy behavior? We can help you!
Follow these steps:
1. Create a folder with the name of your benchmark, such as test-benchmark, inside the folder of the language you implemented it in.
2. Follow the instructions presented in the Operations section, and fill the Makefile.
3. Use the compile_all.py script to compile, run, and/or measure what you want! Or run it yourself using the make command.
Further Reading
Wanna know more? Check this website!
There you can find the results of a successful experimental setup using the contents of this repo, along with the specifications of the machine and compilers used.
You can also find the paper which includes these results and our discussion of them:
"Energy Efficiency across Programming Languages: How does Energy, Time and Memory Relate?",
Rui Pereira, Marco Couto, Francisco Ribeiro, Rui Rua, Jácome Cunha, João Paulo Fernandes, and João Saraiva.
In Proceedings of the 10th International Conference on Software Language Engineering (SLE '17)
IMPORTANT NOTE:
In some cases, the Makefiles specify the path to the language's compiler/runner.
It is most likely that you will not have them in the same path on your machine.
If you would like to properly test every benchmark of every language, please make sure you have all compilers/runners installed, and adapt the Makefiles accordingly.
Contacts and References
Main contributors: @Marco Couto and @Rui Pereira
Owner metadata
- Name: Green Software Lab
- Login: greensoftwarelab
- Email:
- Kind: organization
- Description:
- Website: http://greenlab.di.uminho.pt/
- Location:
- Twitter:
- Company:
- Icon url: https://avatars.githubusercontent.com/u/11410556?v=4
- Repositories: 7
- Last synced at: 2023-02-28T16:10:15.819Z
- Profile URL: https://github.com/greensoftwarelab
GitHub Events
Total
- Issues event: 1
- Watch event: 13
- Issue comment event: 2
- Fork event: 2
Last Year
- Issues event: 1
- Watch event: 13
- Issue comment event: 2
- Fork event: 2
Committers metadata
Last synced: 4 days ago
Total Commits: 22
Total Committers: 8
Avg Commits per committer: 2.75
Development Distribution Score (DDS): 0.5
Commits in past year: 0
Committers in past year: 0
Avg Commits per committer in past year: 0.0
Development Distribution Score (DDS) in past year: 0.0
Name | Email | Commits
---|---|---
MarcoCouto | m****0@g****m | 11
States | r****3@g****m | 4
Green Software Lab | g****b@d****t | 2
Ben Albrecht | b****t | 1
Felipe Móz | 6****z | 1
José Nuno Macedo | j****o@i****t | 1
Logan Kilpatrick | 2****3@g****m | 1
michaljroszak | 4****k | 1
Committer domains:
- inesctec.pt: 1
- di.uminho.pt: 1
Issue and Pull Request metadata
Last synced: 2 days ago
Total issues: 22
Total pull requests: 18
Average time to close issues: 10 months
Average time to close pull requests: 23 days
Total issue authors: 20
Total pull request authors: 12
Average comments per issue: 2.18
Average comments per pull request: 0.44
Merged pull request: 13
Bot issues: 0
Bot pull requests: 0
Past year issues: 1
Past year pull requests: 0
Past year average time to close issues: N/A
Past year average time to close pull requests: N/A
Past year issue authors: 1
Past year pull request authors: 0
Past year average comments per issue: 2.0
Past year average comments per pull request: 0
Past year merged pull request: 0
Past year bot issues: 0
Past year bot pull requests: 0
Top Issue Authors
- igouy (2)
- italomaia (2)
- logankilpatrick (1)
- meehew (1)
- kassane (1)
- ChrisChinchilla (1)
- Alevale (1)
- paul-hammant (1)
- xmnlab (1)
- ben-albrecht (1)
- timocov (1)
- turric4n (1)
- timmattison (1)
- xz328 (1)
- dogac00 (1)
Top Pull Request Authors
- MarcoCouto (7)
- felipemoz (1)
- ctjlewis (1)
- timmattison (1)
- ben-albrecht (1)
- hyukjekwon (1)
- nicovank (1)
- WebReflection (1)
- zenunomacedo (1)
- meehew (1)
- States (1)
- logankilpatrick (1)
Top Issue Labels
Top Pull Request Labels
Dependencies
- 123 dependencies
Score: 8.647519453091812