Deployment

Warning

You need to have a valid license available prior to running the OKA installer, as you will be asked to provide it during the process.

Obtaining the OKA package

The OKA package can be downloaded from the UCit website:

$> wget --user <USERNAME> --password <PASSWORD> https://ucit.fr/product-publish/oka/package/vX.Y/OKA-X.Y.Z.run

Please contact UCit to obtain your <USERNAME> and <PASSWORD> to access the package. Replace X.Y and X.Y.Z with the version of OKA you want to download.

Prior to any installation, you should check the SHA256 checksum of the installer to ensure that it hasn’t been tampered with. To do so, first download the OKA-X.Y.Z.sha256 file, then run the shasum utility:

$> wget --user <USERNAME> --password <PASSWORD> https://ucit.fr/product-publish/oka/package/vX.Y/OKA-X.Y.Z.sha256
$> shasum -c OKA-X.Y.Z.sha256
OKA-X.Y.Z.sha256: OK

Installation

The installer OKA-X.Y.Z.run will guide you through the installation of OKA on your device. The installer must have internet access so that it can download the required Python packages.

Important

To make sure OKA’s requirements are available and that everything is accessible and running, run our diagnostic tool after installing OKA: ${OKA_INSTALL_DIR}/bin/diagnostic.sh

Note

The OKA installer has a set of command line options. To pass options, you first need to separate them from the installer invocation with --

For example, to access the help, simply run: OKA-X.Y.Z.run -- -h.

Warning

If you want to install OKA systemd services, you should run the installer with root privileges.

Batch mode:

OKA can also be installed in a non-interactive way. To do so, you need to provide the required parameters through a configuration file containing all of OKA’s configuration variables. This file has exactly the content of an oka.conf file generated by the installer in ${OKA_INSTALL_DIR}/conf/oka.conf (see below for an example).

To run the installer in batch mode, you need to run the following command line:

./OKA-X.Y.Z.run -- -b -c <PATH_TO oka.conf>

with:

  • -b: activation of the batch installation mode

  • -c <PATH_TO oka.conf>: path to configuration file
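As a sketch, the batch invocation can be wrapped in a small guard that fails early when the configuration file is missing (the wrapper function and paths are illustrative, not part of OKA):

```shell
# Sketch: run the OKA installer in batch mode, but only when the
# referenced oka.conf actually exists.
batch_install() {
    installer="$1"   # e.g. ./OKA-X.Y.Z.run
    conf="$2"        # e.g. /path/to/oka.conf
    if [ ! -f "$conf" ]; then
        echo "configuration file not found: $conf" >&2
        return 1
    fi
    "$installer" -- -b -c "$conf"
}

# Usage (placeholders):
# batch_install ./OKA-X.Y.Z.run /root/oka.conf
```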

OKA administrator

Once OKA is installed, you will need to create your first user: the OKA administrator (super-user). Administrator users are managed using a command line tool available at ${OKA_INSTALL_DIR}/bin/manage-superuser. This command allows you to either create a new administrator or reset an administrator’s password to the desired value.

./manage-superuser [create|passwd] <options>

create <options>:
--noinput, --no-input: suppress all user prompts. If a suppressed prompt cannot be resolved automatically, the command exits with error code 1.
--username USERNAME: superuser login

passwd [username]: reset the password of an existing superuser
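For scripting, the calls can be wrapped so they always resolve against ${OKA_INSTALL_DIR} regardless of the current working directory. The helper below is an illustration, not part of OKA:

```shell
# Sketch: invoke manage-superuser from ${OKA_INSTALL_DIR}/bin, wherever
# the script is run from. Fails loudly if OKA_INSTALL_DIR is unset.
manage_admin() {
    bin="${OKA_INSTALL_DIR:?OKA_INSTALL_DIR must be set}/bin/manage-superuser"
    "$bin" "$@"
}

# Usage (the username is an example):
# manage_admin create --username admin    # prompts for the password
# manage_admin passwd admin               # reset admin's password
```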

OKA services

Important

If you have executed the OKA installer without root privileges, you will need to install the systemd services manually afterwards. The service files will be present in ${OKA_INSTALL_DIR}/conf/services. You will need to copy and activate them:

  • sudo cp ${OKA_INSTALL_DIR}/conf/services/* /etc/systemd/system/

  • sudo systemctl daemon-reload to make systemd pick up the new unit files.

  • sudo systemctl enable oka.service to activate all services at once.
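The steps above can be sketched as a single helper (run as root or via sudo; the function name is illustrative):

```shell
# Sketch: copy the OKA unit files into systemd's directory, reload the
# systemd manager configuration, then enable the main service.
install_oka_services() {
    srcdir="${1:-${OKA_INSTALL_DIR}/conf/services}"
    cp "$srcdir"/* /etc/systemd/system/
    systemctl daemon-reload
    systemctl enable oka.service
}

# Usage:
# install_oka_services
```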

OKA is based on a main service named oka.service that handles multiple sub-services. See the Services section of the management documentation for more details.

After installing OKA you will have to start the main service before moving on to the initialization part:

sudo systemctl start oka.service

Make sure OKA is running properly by checking the status of all sub-services:

sudo systemctl status 'oka*'
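The same health check can be scripted, for instance in a post-install verification step. The helper below is a sketch; pass it the unit names actually installed on your system:

```shell
# Sketch: report each given systemd unit as OK or FAILED, returning a
# non-zero status if any unit is not active.
check_units() {
    rc=0
    for unit in "$@"; do
        if systemctl is-active --quiet "$unit"; then
            echo "OK      $unit"
        else
            echo "FAILED  $unit"
            rc=1
        fi
    done
    return $rc
}

# Usage:
# check_units oka.service
```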

Nginx configuration

OKA is deployed with Gunicorn behind a proxy server. We advise using Nginx as the proxy.

See the Gunicorn and Nginx documentation for more details on how to configure them properly.

After installing OKA, make sure to copy the Nginx configuration for OKA into the right directory to avoid a bad gateway error:

sudo cp ${OKA_INSTALL_DIR}/conf/oka.nginx.conf /etc/nginx/conf.d/
sudo systemctl restart nginx
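A slightly safer variant validates the configuration before restarting, so a typo in the file cannot take the proxy down. This is a sketch (run as root or via sudo; the function name is illustrative):

```shell
# Sketch: copy the OKA Nginx configuration, then restart Nginx only if
# the overall configuration passes `nginx -t`.
deploy_oka_nginx() {
    conf="$1"   # e.g. ${OKA_INSTALL_DIR}/conf/oka.nginx.conf
    cp "$conf" /etc/nginx/conf.d/ || return 1
    if nginx -t; then
        systemctl restart nginx
    else
        echo "nginx configuration test failed - not restarting" >&2
        return 1
    fi
}
```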

Initialization

Important

This is the recommended way to create clusters for OKA as it will automatically configure everything for you.

Once OKA is up and running, you can log in and access the Cluster page, which allows you to create your first cluster; see the dedicated section for more details. This page is where you will be sent automatically the first time you connect to OKA.

Advanced initialization

In case you wish to manually handle the creation of a cluster and the configuration of specific pipelines, follow the instructions below.

We call a pipeline any process handling ‘behind the scenes’ tasks provided by OKA. This includes any type of data retrieval you might configure, as well as, for example, the training of our different prediction models.

Setup cluster and pipelines

Warning

The default configuration of a cluster created through the API might differ from that of a cluster created through the UI.

Clusters and pipelines must be created using the setup API: https://<OKA SERVER>/api/data_manager/setup. The default call, without any parameter, will create a default cluster object named main_cluster with all its related default pipelines, selected based on your current license.

If you need more control over what you wish to set up, you can call this API with different parameters, allowing you to create new clusters or set up / add new pipelines for an existing cluster:

  • ?cluster= (optional) to create a cluster with a specific name. Use it to specify the name of a new cluster or of the one you want to edit. Default is main_cluster.

  • ?pipeline= (optional) to specify the type(s) of pipelines to initialize with this call, taken from the following categories: fetch_js, fetch_energy, meteo, cloud_cost. Default is all.

  • ?js= (optional) to specify the jobscheduler used by your cluster among lsf, oar, openlava, pbs, sge, slurm, swf and torque. Default is slurm.

  • ?country= (optional) to specify the main location of your cluster among France and Japan. Default is France.

  • ?energy_source= (mandatory when using pipeline all or fetch_energy) to create energy and power pipelines (see Energy and power metrics for details). Choices among EAR and SNMP_CUSTOM.

For example, the following call creates the pipelines for a cluster named test using the lsf jobscheduler:

  • https://<OKA SERVER>/api/data_manager/setup?cluster=test&js=lsf
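When scripting several setup calls, the query string can be assembled once. The helper below is illustrative; only the /api/data_manager/setup path and the parameter names come from OKA:

```shell
# Sketch: build a setup API URL from a server name and key=value
# parameters, ready to be passed to curl.
build_setup_url() {
    server="$1"; shift
    url="https://${server}/api/data_manager/setup"
    sep='?'
    for param in "$@"; do
        url="${url}${sep}${param}"
        sep='&'
    done
    echo "$url"
}

# Usage (the server name is a placeholder; add the authentication
# options required by your deployment):
# curl "$(build_setup_url oka.example.com cluster=test js=lsf)"
```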

The pipelines will be generated in accordance with your OKA license.

The setup API returns Generation ended when the generation completed successfully.

Multiple types of pipelines are generated such as:

  • <CLUSTER NAME>_log_js_fetch: pipeline to load the accounting logs

  • <CLUSTER NAME>_log_energy_fetch: pipeline to load the energy logs

  • <CLUSTER NAME>_cost_retrieve_aws_cost: pipeline to retrieve AWS cost reports

  • <CLUSTER NAME>_meteo_cluster_<TARGET>_<PERIOD>_<PRECISION>: pipeline to train a MeteoCluster model to predict TARGET (cluster load, power consumption…) on the next PERIOD (month, week) at a PRECISION (Day, Hour) level

Configuration

Once you are done with the setup (through the UI or the APIs), you can adjust the configuration of what you just initialized. To do so, check the available configurations in the administration panel, accessible at the following URL: https://<OKA SERVER>/admin/.

Example:

  • Check DATA MANAGER > Conf job schedulers to make sure your scheduler is properly set (version, access method - import from file or via SSH? - credentials, etc.).

  • Check DATA MANAGER > Conf files to make sure your input files are properly set if you plan to use them to import logs from a specific folder.

  • Configure your new cluster in DATA MANAGER > Cluster configurations (number of cores, default queue…).

  • Check Conf apis to control for example the access to your Elasticsearch database (see the FAQ for security related example).

Usage

At this point, OKA is ready for use, and you can check the Quickstart to proceed further. However, before moving on, we strongly suggest you browse through the Configuration and Management sections.

A note regarding the different pipelines that were created for each cluster: OKA creates and attaches a task to each of its pipelines so that each action can be scheduled and processed automatically. By default, those tasks are created in a disabled state, meaning no processes will be started automatically. You will have to enable them manually after configuring them to fit your needs (see Periodic Tasks).

Additionally, each pipeline can also be triggered manually, either by requesting the execution of its task (see Periodic Tasks) or through a specific API:

  • The run API https://<OKA SERVER>/api/data_manager/run?pipeline=<PIPELINE NAME>. This API needs at least one parameter ?pipeline=.

  • The pipeline names can be found within your administrator database: https://<OKA SERVER>/admin/data_manager/pipeline/.

Important

However, using the API is not recommended, as it is a less optimized way of executing the pipelines. One of the main reasons is that when using the API, the actual processing is handled by the OKA main process, whereas requesting the execution of a task leads to the pipeline being executed in a dedicated worker (see Celery).