Installing for OpenShift Container Platform with Gluster

Prerequisites

Before installing Kubesafe, you need the following:

  • A working OpenShift Container Platform 3.x cluster with access to the AWS S3, SNS, and SQS services. Make sure you have the configuration details for the S3 bucket, including

    • AWS access key

    • AWS secret access key

    • AWS region

  • A Kubernetes StorageClass capable of provisioning Gluster volumes.

  • The helm command version 3. Helm installs and updates Kubernetes applications, including Kubesafe. See the Helm documentation for details.

  • The oc command. (A quick check of both CLI tools follows this list.)

  • An email address registered with Kubesafe. To register, visit kubesafe.io/signup. This is the email address you’ll use when logging in to the Kubesafe software you install.
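
To confirm that the helm and oc commands are installed and on your PATH, you can run:

helm version
oc version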

Creating an S3 bucket

Kubesafe needs an Amazon S3 bucket to hold its configuration information. Create an S3 bucket in the AWS console, or using the aws CLI:

aws s3api create-bucket --bucket name-you-pick --region us-west-2 \
   --create-bucket-configuration LocationConstraint=us-west-2

Make a note of the bucket name you use, as you will enter that name when you configure Kubesafe. Specify whichever AWS region makes sense for your deployment.
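
To confirm that the bucket was created and that your credentials can reach it, you can run a quick check with the CLI, substituting the bucket name and region you chose:

aws s3api head-bucket --bucket name-you-pick --region us-west-2

The command returns silently when the bucket exists and is accessible, and reports an error otherwise.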

Configuring AWS permissions

When configuring Kubesafe, you will need your AWS account ID. To find this ID, log in to the AWS console and select “My Security Credentials” from the pulldown in the top menu bar. The account ID is displayed there.

Ensure that your user account and the cluster node roles have at least the following AWS permissions (a CLI example follows this list):

  • AmazonSQSFullAccess

  • IAMFullAccess

  • AmazonS3FullAccess

  • AmazonVPCFullAccess

  • AmazonSNSFullAccess
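
You can also retrieve the account ID and attach the managed policies from the CLI. The user name below is only a placeholder; substitute the IAM user your cluster actually uses, and repeat the attach command for each policy in the list above:

aws sts get-caller-identity --query Account --output text
aws iam attach-user-policy --user-name your-iam-user \
   --policy-arn arn:aws:iam::aws:policy/AmazonS3FullAccess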

Downloading Kubesafe software

Download the latest version of Kubesafe software from download.kubesafe.io.

Deploying Kubesafe software

Use helm and oc to install the Kubesafe software in each cluster you’re managing.

Before continuing, ensure that your kubectl context is pointing to the cluster you want to configure with Kubesafe software. Refer to the Kubernetes task documentation for details.
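
For example, you can confirm the active context and switch to a different one if needed:

kubectl config current-context
kubectl config use-context <context-name>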

Create a working directory, copy the downloaded tar file into it, and extract it:

mkdir /tmp/kubesafe
cp kubesafe_v2.0.3.tar.gz /tmp/kubesafe
cd /tmp/kubesafe
tar -xzf kubesafe_v2.0.3.tar.gz

After extracting the downloaded tar file, change into the helm directory:

cd helm

This directory contains the Kubesafe helm chart.
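
Optionally, confirm that the chart extracted cleanly by printing its metadata:

helm show chart kubesafe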

Customizing the deployment (optional)

You might need to customize the kubesafe/values.yaml for your installation:

  1. If you have your own Amazon Cognito authentication service, specify the configuration details for your identity provider.

  2. Kubesafe services normally create their own self-signed SSL certificates for HTTPS. If you have your own SSL certificates, specify the keyFile, certFile, caCert, and pemFile values.

Installing Kubesafe services

For OpenShift Container Platform 3.x, update the kubesafe/values.yaml file:

  1. Replace instances of gp2 with glusterfs-storage (or whatever Kubernetes StorageClass you have defined for Gluster); a one-line way to make this change is shown after this list.

  2. In the apiserver section, change the tag for quay.io/kubesafe.io/api to ocs:

    image:
      repository: quay.io/kubesafe.io/api
      pullPolicy: IfNotPresent
      tag: ocs
    
  3. Set the openshift.enable flag to "true":

    openshift:
      enable: "true"
    
  4. If your OpenShift Container Platform is running on premises, change these settings:

    onprem:
      enabled: "true"
    ui:
      service:
        type: "NodePort"
    
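For step 1, a one-line way to make the replacement, assuming your sed supports in-place editing (the original file is kept with a .bak suffix):

sed -i.bak 's/gp2/glusterfs-storage/g' kubesafe/values.yaml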

After making these changes, create the kubesafe project, render the chart, and apply the generated configuration. Replace <release-name> with a release name of your choice:

oc new-project kubesafe
helm template <release-name> kubesafe > ocs.yaml
oc create -f ocs.yaml

The output shows something like this:

$ oc create -f ocs.yaml
secret/kubesafe-api-secrets created
secret/kubesafe-db created
secret/kubesafe-db-certs created
secret/kubesafe-api-certs created
secret/kubesafe-ui-certs created
persistentvolumeclaim/kubesafe-api-pvc created
persistentvolumeclaim/kubesafe-db-pvc created
clusterrole.rbac.authorization.k8s.io/kubesafe-api-runner created
clusterrolebinding.rbac.authorization.k8s.io/run-kubesafe-api created
service/kubesafe-api created
service/kubesafe-db created
service/kubesafe-ui-cache created
service/kubesafe-ui created
deployment.apps/kubesafe-api created
deployment.apps/kubesafe-db created
deployment.apps/kubesafe-ui created
securitycontextconstraints.security.openshift.io/kubesafe created
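
Before launching the UI, you can confirm that the Kubesafe pods reach the Running state:

oc get pods -n kubesafe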

Launching the Kubesafe UI

You access the Kubesafe UI through the name and port number assigned to the kubesafe-ui service.

$ kubectl get svc kubesafe-ui -n kubesafe
NAME          TYPE           CLUSTER-IP      EXTERNAL-IP                             PORT(S)          AGE
kubesafe-ui   LoadBalancer   100.65.65.216   a24a-19739.us-west-2.elb.amazonaws.com  8080:31321/TCP   3m54s

In this example, Kubesafe is available at https://a24a-19739.us-west-2.elb.amazonaws.com:8080, or at https://100.65.65.216:8080 from within the cluster network.
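
If you set the UI service type to NodePort for an on-premises installation, there is no external load balancer host name. Instead, pair any node's IP address with the node port assigned to the service. One way to look these up (a sketch that assumes the UI port is the first port listed on the service):

oc get svc kubesafe-ui -n kubesafe -o jsonpath='{.spec.ports[0].nodePort}'
oc get nodes -o wide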

Log in using the email address and password you registered with kubesafe.io/signup.

The first time you log in, you will be prompted to upload the storage configuration file for both the local and remote clusters. If you want to save data in an S3 store, you also have to upload the S3 storage configuration. See the sections below for details on how to create the needed configuration files.

Cleaning up a failed installation

If the deployment is unsuccessful, you might need to remove the Kubesafe namespace and its cluster-scoped resources before trying again. If so, use these commands:

kubectl delete namespace kubesafe
oc delete scc kubesafe
oc delete clusterrole kubesafe-api-runner
oc delete clusterrolebindings run-kubesafe-api
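
Namespace deletion can take a few minutes, and reinstalling before it finishes can fail. You can confirm that the namespace is gone before retrying:

kubectl get namespace kubesafe

When the cleanup is complete, the command reports that the namespace is not found.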

Customizing the cluster configuration

To configure Kubesafe software, create a configuration file for each of the clusters in your environment and upload it into Kubesafe. The storage-config-samples directory contains several examples.

The configuration files have the following fields:

ks_cluster_id

Unique identifier for this cluster. This will appear in the UI as the name of this cluster. The identifier must not contain any of these characters: / . " $

ks_ui_url

URL to use for sending API requests within Kubesafe. The format is https://{IP}:8080, where IP is the external IP address or host name displayed in the output of kubectl get svc kubesafe-ui -n kubesafe.

awsElasticBlockStore.name

Use any name to identify the EBS service used for provisioning storage for Kubesafe.

metadata_backup_config.s3.name

Name of the S3 bucket you have created in AWS to store Kubesafe metadata. No more than two Kubesafe installations can share the same S3 metadata bucket.

primarynode

IP address of one of the Gluster pods in your cluster. (See the example commands after this list for one way to find it.)

resturl

URL of the Heketi server RESTful management interface.

restuser

Username for logging in to Heketi.

restuserkey

Password to use when logging in to Heketi.

accesskey

AWS access key ID for your S3 and SNS services.

privatekey

AWS secret access key for your S3 and SNS services.

ownerid

AWS account ID. To find this ID, log in to the AWS console and select “My Security Credentials” from the pulldown in the top menu bar.
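
To find values for primarynode and resturl, you can inspect the Gluster pods and the Heketi service in your cluster. The glusterfs namespace and heketi-storage service name below are taken from the example configuration that follows; adjust them to match your environment:

oc get pods -n glusterfs -o wide
oc get svc -n glusterfs heketi-storage

Any Gluster pod's IP address can serve as primarynode, and the Heketi service host name and port form the resturl.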

Example configuration

As with all Kubernetes YAML specifications, the files are sensitive to the indentation of each line. Change the fields below to reflect the values for your environment.

$ cat clusterone.yaml
version: 1
ks_cluster_id: prod-cluster
ks_api_url: http://127.0.0.1:8000/kubesafe/v1
ks_ui_url: https://a795a20aeaa1b4c789aa8660example-2073642121.us-west-2.elb.amazonaws.com:8080
storage_vendor_config:
  glusterfs:
    - name: gluster-store
      site_info:
        glusterfs:
          primarynode: "10.70.53.216"
          resturl: "http://heketi-storage.glusterfs.svc:8080"
          restuser: "admin"
          restuserkey: "adminkey"
metadata_backup_config:
  s3:
    - name: metadata-example-com
      site_info:
        clusteraz: "us-west-2a"
        clusterregion: "us-west-2"
        secret:
          accesskey: AKIAIOSFODNN7EXAMPLE
          privatekey: wJalrXUtnFEMI/K7MDENG/bPxRfiCYExample
        aws:
          ownerid: 0498EXAMPLE
notify_server_config:
  sns:
    - name: kubesafe-notification
      site_info:
        clusteraz: "us-west-2a"
        clusterregion: "us-west-2"
        secret:
          accesskey: AKIAIOSFODNN7EXAMPLE
          privatekey: wJalrXUtnFEMI/K7MDENG/bPxRfiCYExample
        aws:
          ownerid: 0498EXAMPLE
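
Because the files are indentation-sensitive, it can save time to check the YAML syntax before uploading. For example, if you have yamllint installed:

yamllint clusterone.yaml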

Customizing storage configuration for Amazon S3 backups

The example objectstore.yaml file illustrates how to configure an Amazon S3 bucket as a backup location. Use this if you want to store backups in S3 that can be recovered on another cluster.

The metadata_backup_config section uses the same values as your cluster configuration file; the data_backup_config section specifies the S3 bucket that will hold the backups.

$ cat objectstore.yaml
version: 1
ks_cluster_id: backups
ks_api_url: http://127.0.0.1:8000/kubesafe/v1
ks_ui_url:

data_backup_config:
  s3:
    - name: kubesafe-backups
      site_info:
        clusteraz: "us-west-2a"
        clusterregion: "us-west-2"
        secret:
          accesskey: AKIAIOSFODNN7EXAMPLE
          privatekey: wJalrXUtnFEMI/K7MDENG/bPxRfiCYExample
        aws:
          ownerid: 0498EXAMPLE

metadata_backup_config:
  s3:
    - name: kubesafe-metadata
      site_info:
        clusteraz: "us-west-2a"
        clusterregion: "us-west-2"
        secret:
          accesskey: AKIAIOSFODNN7EXAMPLE
          privatekey: wJalrXUtnFEMI/K7MDENG/bPxRfiCYExample

Uploading Kubesafe configurations

When the configuration files are prepared, upload them into Kubesafe from the Configuration tab.

After adding the local cluster configuration, you can add optional remote clusters and optional object stores by clicking the “+” icons on the Configuration page. There are two options for remote clusters and object stores:

  • Read backups: the local cluster is authorized to clone from backups created on the remote cluster or stored on the remote object store

  • Write backups: the local cluster is authorized to store backups on the remote cluster or object store

Repeat the steps on any other clusters, making sure to load the correct “local” configuration for each one, and any optional remote configurations.