Prospect Energy

An open source data platform for the energy access sector that allows you to customize data flows coming from ongrid, minigrid and offgrid sources.
https://gitlab.com/prospect-energy/prospect-server

Category: Energy Systems
Sub Category: Energy Data Accessibility and Integration


![Coverage](https://gitlab.com/prospect-energy/prospect-server/badges/main/coverage.svg?job=merge-coverage&key_text=Coverage&key_width=60)
![Core coverage](https://gitlab.com/prospect-energy/prospect-server/badges/main/coverage.svg?job=merge-rspec-coverage&key_text=Core+Coverage&key_width=90)
![Dataworker coverage](https://gitlab.com/prospect-energy/prospect-server/badges/main/coverage.svg?job=dataworker-tests&key_text=Dataworker+Coverage&key_width=130)
![Pipeline Status](https://gitlab.com/prospect-energy/prospect-server/badges/main/pipeline.svg)


# Prospect Server
---
**NOTE:**

Prospect is under heavy development and may change significantly within short time frames. Please treat the current state of development as experimental.

---

Prospect is a server-run application for data gathering and visualization in the field of renewable energy. It consists of a set of open source tools running in Docker containers and a Ruby on Rails application tying these components together. See it up and running: https://app.prospect.energy/

---

Prospect Server is free software: you can redistribute it and/or modify it under the terms of the GNU Affero General Public License version 3 as published by the Free Software Foundation.

---

If you want to contribute to Prospect (e.g. by connecting your own data sources), you are most welcome. Please use our [developer wiki](https://gitlab.com/prospect-energy/prospect-server/-/wikis/home) to get started.


## General Software setup and Architecture

Please see the [wiki for more information](https://gitlab.com/prospect-energy/prospect-server/-/wikis/home).

## 1. Prerequisites

* Install [git](https://github.com/git-guides/install-git) for your OS
  * Windows only: run the following commands in your cmd terminal so you don't mix up [LF / CRLF](https://adaptivepatchwork.com/2012/03/01/mind-the-end-of-your-line/)
     ```
     git config --global core.autocrlf input
     git config --global core.eol lf
     ```
* Install the latest docker and docker compose for your OS: [Get Docker](https://docs.docker.com/get-docker/)
  * Ubuntu only: don't use snap, but [apt repository](https://docs.docker.com/engine/install/ubuntu/#installation-methods)
  * Windows only: docker setup will ask you to install Windows Subsystem for Linux ([WSL](https://learn.microsoft.com/en-us/windows/wsl/install))
* Check out the [repo](https://gitlab.com/prospect-energy/prospect-server) in a folder of your choice by running the command below in your system terminal (macOS: Terminal, Windows: cmd, Linux: Terminal). This folder will be the working directory for all following commands.
```
git clone https://gitlab.com/prospect-energy/prospect-server.git
```
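The two line-ending settings above can be checked in a throwaway repository. A quick sketch, assuming `git` and a POSIX shell are available (the repo and file names below are made up for the demo):

```bash
# Throwaway demo: with core.autocrlf=input, CRLF line endings in the
# working tree are normalized to LF in the blob git stores.
tmp=$(mktemp -d)
cd "$tmp"
git init -q demo
cd demo
git config core.autocrlf input
git config core.eol lf
printf 'line one\r\nline two\r\n' > sample.txt   # Windows-style CRLF endings
git add sample.txt
git show :sample.txt | od -c | head -n 2         # staged content contains \n only, no \r
```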


## 2. Setting up the Dev environment

There are two supported options.
- The simpler one (2.1) runs all parts in Docker, managed by docker compose. This is the ideal option if you just want to test the project.
- If you want to dive deeper and debug via your local IDE, you can run the core/ruby project on your local machine while using docker compose for the rest of the infrastructure (2.2).

### 2.1 Docker compose - all batteries included

#### 2.1.1 Starting the engines

Make sure you're in the root directory of the project, which contains `docker-compose.yml`, the recipe for the full Prospect stack.

Start the whole set of containers with:

```bash
docker compose up     # run in foreground, CTRL-C will stop everything
docker compose up -d  # detach, you will be able to close the terminal
```

After a while, dependencies or the containers might have been updated, so to rebuild the containers run:

```bash
docker compose up --build
```

For the docker compose setup, init scripts already take care of:
* Creating the DB, precompiling assets and starting the Prospect core server
* Creating the connection, users and buckets for MinIO
* Running pending DB migrations

#### 2.1.2 Stopping the engines

To stop everything, just use CTRL-C, or if running in the background, use Docker Desktop or

```bash
docker compose down
```

Next time you want to start using it again, just run `docker compose up` again.
As all data is stored on persistent volumes, you can continue where you left off.

***That's it, you can start testing!***
Check out **Overview about all endpoints** to see everything that is available.
Also check out **Logging** to learn how to use the built-in logging system.

#### 2.1.3 Resetting the environment

Sometimes you might want to start with a clean state, so it is best to just clear out the volumes. This will delete all the data.

```bash
docker compose down -v
```

#### 2.1.4 Code Changes

Even though the code is mounted dynamically into the respective containers, already running processes need to be told to pick up changes.
To do this, just restart the affected containers; this is much faster than taking everything down and up again:

```bash
docker compose restart core       # restart core container, have it pick up changes to things like initializers
docker compose restart dataworker # restart dataworker container
```

Note: The core Rails App will normally automatically pick up all code changes within `core/app` directory on each page reload.

#### 2.1.5 Logs and Troubleshooting

Sometimes it is great to see what is going on within the containers:

```bash
docker compose logs                  # print logs of all containers
docker compose logs -f               # follow logs of all containers
docker compose logs core             # print the logs of the core container and exit
docker compose logs -f -t dataworker # follow the logs of the dataworker container with added timestamps
```

#### 2.1.6 Database Migrations

Normally, when you (re)start the `core` docker container, all pending DB migrations are run.

For devs writing migrations, that behavior might be unwanted, as they might have unfinished migrations.
If you do not want migrations run automatically on core container start, just set the following in your local `.env`:

```
PROSPECT_AUTOMATIC_DB_MIGRATE=false
```

To run migrations manually open a bash inside the core container and run the migrations via the rake task:

```bash
docker compose exec core bash # Open a bash console inside the core service
bundle exec rake db:migrate   # run the database migrations
```

### 2.2 Core app development

Normally the core app runs and is developed with docker compose.

If you instead want the app running locally on your machine, with the rest of the infrastructure managed by Docker, you can
follow the [instructions from the wiki](https://gitlab.com/prospect-energy/prospect-server/-/wikis/Development/Core-App-Local-Development).

Most of the commands below should be run from where the core application is running, so enter the core container with

```bash
docker compose exec core bash
```

The following command will set up the database and create the admin user (normally done via docker compose).

```bash
bundle exec rake db:setup
```

You might also want to compile the CSS dynamically while developing the UI.
This task runs indefinitely and recreates all assets as you develop, so give it an extra terminal to run in.

```bash
bundle exec rails tailwindcss:watch
```

Start the development server (if not running via docker).
You will then be able to log in at http://localhost:3000 with the credentials generated above :confetti_ball:

```bash
bundle exec rails s -p 3000 -b '127.0.0.1'  # locally
```

After code changes, some migrations might need to be run or dependencies updated, so...

```bash
bundle install                # too lazy to rebuild container? just install dependencies
bundle exec rake db:migrate   # run database- (and some few data-) migrations
```

Get a REPL inside the app:

```bash
bundle exec rails c
```

Keep your code clean before committing so you don't get caught by the CI later. Run the linter with

```bash
bundle exec rubocop     # just print violations
bundle exec rubocop -A  # autocorrect, 99% of the time, nothing breaks, but sometimes it does :)
```

Run the tests using the [RSpec](https://rspec.info/) framework:

```bash
bundle exec rspec
```

### 2.3 Enabling prek pre-commit hooks for linting

If you want to contribute: our linting, including rubocop, is handled by pre-commit hooks.

Git has a feature called `pre-commit hooks`: scripts that get called
when you want to commit changes. Those scripts must run without error on the applied
changes for git to proceed with the commit. The scripts used are normally code quality
tools that enforce standards in the committed code.

There is a tool called [prek](https://prek.j178.dev) that handles installing the
tools in isolated environments and registering the scripts in the `.git` directory of the
project so they get run by git. `prek` reads its config from `.pre-commit-config.yaml`.
This config describes which scripts should be installed and how they should run.
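As a rough illustration of the shape of such a config (the repo ships its own `.pre-commit-config.yaml`; the docker-based rubocop hook below is a hypothetical example, not the project's actual setup):

```yaml
# Hypothetical sketch only; the real config lives in the repo root.
repos:
  - repo: local
    hooks:
      - id: rubocop
        name: rubocop (via core container)
        entry: docker compose exec -T core bundle exec rubocop
        language: system
        types: [ruby]
```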

When setting up the project for the first time, install the pre-commit hooks with the
`prek` tool: run `uv tool install prek` (the preferred method, which requires having
[uv installed](https://docs.astral.sh/uv/getting-started/installation/)) or else
`pip install prek`, and then run `prek install`.

The tool will locally install the required versions of Python.
Rubocop runs via the core Docker container, so no host Ruby or ruby-build is needed.
Ensure Docker is running and the core image is built (`docker compose build core`).

After that, the hooks run on changed code before every commit. Running the hooks might
change your code, in which case you also need to add the resulting changes to the
changes you want to commit before retrying. They might also prevent you from committing if
a tool detects a rule violation it cannot auto-correct. In that case you must correct
the error manually, or explicitly ignore it.

## 3. Overview about all endpoints

If you have followed the guide until here, you should have a working development setup
by now. Check your `.env` as it contains the default credentials for all of these
endpoints. The following endpoints should be available:

* Prospect App: [http://localhost:3000](http://localhost:3000)
  ```
  PROSPECT_ADMIN_USER='default@example.com'
  PROSPECT_ADMIN_PASSWORD='Oraech*ai2ve3Me7Cae9'
  ```
* Grafana: [http://localhost:3001](http://localhost:3001)
  ```
  GRAFANA_ADMIN_USER='admin'
  GRAFANA_ADMIN_PASSWORD='oozaimahv9iemaeDa5xu'
  ```
* Postgres: Access the DB through a SQL client
  ```
  POSTGRES_HOST='localhost'
  POSTGRES_PORT=5431
  POSTGRES_USER='postgres'
  POSTGRES_PASSWORD='postgres'
  POSTGRES_DATABASE='prospect_development'
  ```
* Minio: [http://localhost:9090](http://localhost:9090)
  ```
  MINIO_ROOT_USER=admin
  MINIO_ROOT_PASSWORD='Ri7icee1Iechiecu1roe'
  ```
* Faktory: [http://localhost:7420](http://localhost:7420)
  just leave the username blank for the basic auth, password is
  `FAKTORY_PASSWORD='faktorypassword'`

Override these credentials if needed, as described below.

## 4. Additional configurations

### ENV Variables

The app is mostly configured via ENV variables using the [dotenv-rails](https://github.com/bkeepers/dotenv) gem.
The respective files required for development are already mounted via docker compose.
If you feel the need to override those locally, you can just use `core/.env.local` to overwrite all variables specified in `.env`.
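The override behaves like sourcing both files with the local one winning. A rough shell analogy (dotenv-rails handles the actual layering inside the app; the variable name is just an example):

```bash
# Rough analogy for the dotenv layering: values in .env.local win over .env
tmp=$(mktemp -d)
printf "EXAMPLE_SETTING=from_env\n"       > "$tmp/.env"
printf "EXAMPLE_SETTING=from_env_local\n" > "$tmp/.env.local"
set -a                  # auto-export every variable that gets sourced
. "$tmp/.env"
. "$tmp/.env.local"     # applied last, so its values take precedence
set +a
echo "$EXAMPLE_SETTING" # prints "from_env_local"
```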

### Google Auth
The app can also use Google Auth locally. If you need that, follow the steps for Google OAuth setup [here](https://support.google.com/cloud/answer/6158849?hl=en)
and enter the credentials in your `core/.env.local`. Set `GOOGLE_AUTH='true'`. Use `http://localhost:3000/session/callback?provider=google?` as the authorized redirect URI.
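A `core/.env.local` for this could look roughly as follows. Note that `GOOGLE_CLIENT_ID` and `GOOGLE_CLIENT_SECRET` are assumed variable names for illustration; check the shipped `.env` for the names the app actually reads:

```
GOOGLE_AUTH='true'
# Hypothetical variable names, verify against the shipped .env:
GOOGLE_CLIENT_ID='your-client-id.apps.googleusercontent.com'
GOOGLE_CLIENT_SECRET='your-client-secret'
```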

## 5. Connectors

Prospect data connectors allow importing data into the system in various ways.

We allow direct imports into our data schema via manual CSV/Excel uploads, S3 Buckets, Google Sheets and API push. For an overview about our data schema please [check our documentation](https://app.prospect.energy/docs).

We also support an ever growing set of direct API integrations with manufacturer and CRM systems. Find the [full list here](https://gitlab.com/prospect-energy/prospect-server/-/wikis/User-Wiki/EN/Prospect-Integrations).

## 6. Main Organization

Members of the *Main Organization* can see and do more on the platform. This helps hide complexity from normal users and allows for features that need a more centralized approach. Set the `MAIN_ORG_ID` env variable to the main organization's id.

Main organization features:
- set up the Stellar Multi Org Parent connector

## 7. Beta: Importing Verasol product data

Admin users can navigate to Admin/Misc and upload the Verasol product data into our `common_products` db table. This is a beta feature, as the `common_products` table is not yet used.

## 8. Logging

You can use the `docker logs` command to check the logs as well, e.g. the following command follows the logs of MinIO via the docker CLI:

```bash
docker logs --follow prospect-server-minio-1
```

## 9. Production Deployment

Prospect Demo Core is automatically deployed. A successful GitLab pipeline run on the main branch triggers redeployment.
See the [event](https://gitlab.com/a2ei/prospect/event) project for more details.

In a production setup make sure to back up your `Rails.application.secret_key_base`.
It's the key you need to reproduce the pseudonymization step applied to all sensitive data.
Don't store it next to your database backups, because access to it would make it much easier for an attacker to reverse the pseudonymization.
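To illustrate why the key matters, here is a minimal sketch of key-based pseudonymization using HMAC-SHA256. This is not necessarily the scheme Prospect applies; `SECRET_KEY_BASE` merely stands in for `Rails.application.secret_key_base`, and the `openssl` CLI is assumed to be available:

```bash
# Illustrative only: deterministic, key-dependent pseudonymization.
SECRET_KEY_BASE='replace-with-your-real-secret'

pseudonymize() {
  printf '%s' "$1" | openssl dgst -sha256 -hmac "$SECRET_KEY_BASE" | awk '{print $NF}'
}

pseudonymize 'jane@example.com'  # same key + same input => same pseudonym
pseudonymize 'jane@example.com'  # identical output, so references stay joinable
```

Anyone holding both the key and the backups can rebuild the mapping by hashing candidate values, which is why the key must be backed up separately from the database backups.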

**Warning**: When you set up a new production instance, make sure you delete the default user (added by `seed.rb`).
Also use different credentials than the ones provided with the default environment variables, and generate a new JWK key in place of the default one in `docker_shared/jwks_pk.json`,
for example via `docker compose run --rm core rake grafana:generate_jwk`.
If no key is present, the core container will also generate a new one, but Grafana will not start without a key and is a required service for the core container.
This poses a little chicken-and-egg problem, hence the default key.

        
