Installing Kubesafe

Prerequisites

Before installing Kubesafe, you need the following:

  • A working Kubernetes or OpenShift Container Platform cluster running in AWS or on premises, with access to S3, SNS, and SQS services. Make sure you have access to configuration details about the cluster and S3 bucket, including

    • AWS access key

    • AWS secret access key

    • AWS region

  • A storage class capable of provisioning persistent volumes that can be attached to Pods in your cluster. For AWS, the default storage class is “gp2”. Clusters using Trident with NetApp ONTAP storage will have a storage class such as “basic”. (See the verification commands after this list.)

  • The helm command, version 3. Helm installs and updates Kubernetes applications, including Kubesafe. See the Helm documentation for details.

  • The kubectl command, if using Kubernetes.

  • The oc command, if using OpenShift Container Platform.

  • An email address registered with Kubesafe. To register, visit kubesafe.io/signup. This is the email address you’ll use when logging in to the Kubesafe software you install.
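
To confirm that the command-line tools are installed and that a suitable storage class exists, you can run checks like the following; “gp2” is just the example class named above:

helm version
kubectl version --client
kubectl get storageclass
kubectl describe storageclass gp2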

Creating an S3 bucket

Kubesafe needs an Amazon S3 bucket to hold its configuration information. Create an S3 bucket in the AWS console, or using the aws CLI:

aws s3api create-bucket --bucket name-you-pick --region us-west-2 \
   --create-bucket-configuration LocationConstraint=us-west-2

Make a note of the bucket name you use, as you will enter that name when you configure Kubesafe. Specify whichever AWS region makes sense for your deployment.
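
To confirm that the bucket was created and that your credentials can reach it, you can run, for example:

aws s3api head-bucket --bucket name-you-pick --region us-west-2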

Configuring AWS permissions

When configuring Kubesafe, you will need your AWS account ID. To find this ID, log in to the AWS console and select “My Security Credentials” from the pulldown in the top menu bar. The account ID is displayed there.
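
If you prefer the command line, the AWS CLI can also report the account ID:

aws sts get-caller-identity --query Account --output text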

Ensure that your user account and the cluster node roles have at least the following AWS permissions (an example of attaching these policies from the CLI follows the list):

  • AmazonEC2FullAccess

  • AmazonSQSFullAccess

  • IAMFullAccess

  • AmazonS3FullAccess

  • AmazonVPCFullAccess

  • AmazonSNSFullAccess

  • AmazonRoute53FullAccess
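
If you manage permissions from the CLI, a sketch like the following attaches these managed policies to an IAM user; the user name kubesafe-admin is only a placeholder, and you would use aws iam attach-role-policy instead for the cluster node roles:

for P in AmazonEC2FullAccess AmazonSQSFullAccess IAMFullAccess AmazonS3FullAccess \
         AmazonVPCFullAccess AmazonSNSFullAccess AmazonRoute53FullAccess
do
  aws iam attach-user-policy --user-name kubesafe-admin \
    --policy-arn arn:aws:iam::aws:policy/$P
done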

Downloading Kubesafe software

Download the latest version of Kubesafe software from download.kubesafe.io.

Deploying Kubesafe software

Use helm and kubectl (or oc for OpenShift clusters) to install the Kubesafe software in each cluster you’re managing.

Before continuing, ensure that your kubectl context is pointing to the cluster you want to configure with Kubesafe software. Refer to the Kubernetes task documentation for details.
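
For example, you can check the active context and switch to the target cluster with commands like these; my-cluster-context is only a placeholder:

kubectl config get-contexts
kubectl config use-context my-cluster-context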

Create a working directory, copy the downloaded tar file into it, and extract it:

mkdir /tmp/kubesafe
cd /tmp/kubesafe
tar -xzf kubesafe_v2.0.3.tar.gz

After extracting the downloaded tar file, change into the helm directory:

cd helm

This directory contains the Kubesafe helm chart.

Customizing the deployment

You will need to customize the kubesafe/values.yaml for your installation:

  1. If you have your own Amazon Cognito authentication service, specify the configuration details for your identity provider.

  2. If you are deploying on OpenShift Container Platform, set the openshift.enable flag to “true”, as shown below. Note: Kubesafe software does not yet support OpenShift Container Platform with NetApp ONTAP storage.

openshift:
  enable: "true"

  3. Kubesafe services normally create their own self-signed SSL certificates for HTTPS. If you have your own SSL certificates, specify the keyFile, certFile, caCert, and pemFile values.

  4. Search for storageClass and update if necessary to specify the storage class the Kubesafe software should use for its internal storage.

  5. If you are deploying with NetApp ONTAP storage, you must import the Trident security certificates so that communication between the Kubesafe software and Trident is secure. Run the commands below, replacing -n trident if you are using a different namespace for the Trident services.

IP=$(kubectl get service -n trident -l app=controller.csi.trident.netapp.io -o jsonpath='{.items[0].spec.clusterIP}')
mkdir -p kubesafe/trident-certs
for S in clientKey clientCert caCert ;
do
  kubectl get secret -n trident trident-csi -o jsonpath=\{.data.$S\} | base64 -d > kubesafe/trident-certs/$S.pem ;
done

  6. If you are not deploying with NetApp ONTAP storage, change the trident setting to false:

apiserver:

  # set trident to true if NetApp trident CSI drivers are used for volume mapping
  trident: "false"

Installing Kubesafe services

The installation command varies depending on whether you are running vanilla Kubernetes or OpenShift Container Platform, and on the OpenShift version. In each case, select a release-name to identify the installation.

  1. For vanilla Kubernetes, enter:

    helm install <release-name> kubesafe -n kubesafe --create-namespace
    
  2. For OpenShift Container Platform 4.x, enter:

    oc new-project kubesafe
    helm install <release-name> kubesafe -n kubesafe --create-namespace
    
  3. For OpenShift Container Platform 3.x, enter:

    oc new-project kubesafe
    helm template <release-name> kubesafe > rendered.yaml
    oc create -f rendered.yaml
    

Note that the Kubesafe software must be deployed in the kubesafe namespace.

The installation process displays output like this for vanilla Kubernetes and OpenShift Container Platform 4.x:

$ helm install ksafe-evaluation kubesafe -n kubesafe --create-namespace
NAME: ksafe-evaluation
LAST DEPLOYED: Mon Jul 13 15:17:24 2020
NAMESPACE: kubesafe
STATUS: deployed
REVISION: 1
TEST SUITE: None

For OpenShift Container Platform 3.x, the output is different:

$ oc create -f rendered.yaml
secret/kubesafe-api-secrets created
secret/kubesafe-db created
secret/kubesafe-db-certs created
secret/kubesafe-api-certs created
secret/kubesafe-ui-certs created
persistentvolumeclaim/kubesafe-api-pvc created
persistentvolumeclaim/kubesafe-db-pvc created
clusterrole.rbac.authorization.k8s.io/kubesafe-api-runner created
clusterrolebinding.rbac.authorization.k8s.io/run-kubesafe-api created
service/kubesafe-api created
service/kubesafe-db created
service/kubesafe-ui-cache created
service/kubesafe-ui created
deployment.apps/kubesafe-api created
deployment.apps/kubesafe-db created
deployment.apps/kubesafe-ui created
securitycontextconstraints.security.openshift.io/kubesafe created
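
On any platform, you can watch the Kubesafe pods until they reach the Running state before moving on:

kubectl get pods -n kubesafe --watch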

Launching the Kubesafe UI

You access the Kubesafe UI through port 8080 at the external address your load balancer assigns to the kubesafe-ui service. This address might take several minutes to become available.

$ kubectl get svc kubesafe-ui -n kubesafe
NAME          TYPE           CLUSTER-IP      EXTERNAL-IP                                                               PORT(S)          AGE
kubesafe-ui   LoadBalancer   100.65.65.216   a24a5990451854a0cbda2b4a41613a0f-1973994899.us-west-2.elb.amazonaws.com   8080:31321/TCP   3m54s

In this example, Kubesafe is available at https://a24a5990451854a0cbda2b4a41613a0f-1973994899.us-west-2.elb.amazonaws.com:8080.
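
To extract just the external address, for example when building the URL in a script, a jsonpath query can help:

kubectl get svc kubesafe-ui -n kubesafe \
  -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'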

Log in using the email address and password you registered with kubesafe.io/signup.

The first time you log in you will be prompted to upload the storage configuration file for both the local and remote clusters. If you want to save data in an S3 store, you also have to upload the S3 storage configuration. See the sections below for details on how to create the needed configuration files.

Cleaning up a failed installation

If the deployment is unsuccessful, you might need to clean up the namespace containing Kubesafe to try again. If so, use this command:

helm uninstall <release-name> -n kubesafe && kubectl delete ns kubesafe

For OpenShift Container Platform 3.x, use instead:

kubectl delete namespace kubesafe
oc delete scc kubesafe
oc delete clusterrole kubesafe-api-runner
oc delete clusterrolebindings run-kubesafe-api
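
In either case, you can confirm the namespace is gone before reinstalling; the command should report that the namespace is not found:

kubectl get namespace kubesafe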

Customizing the cluster configuration

To configure Kubesafe software, create and upload into Kubesafe a configuration file for each of the clusters in your environment. The storage-config-samples directory contains several examples.

The configuration files have the following fields:

ks_cluster_id

Unique identifier for this cluster. This will appear in the UI as the name of this cluster. The identifier must not contain any of these characters: / . " $

ks_ui_url

URL to use for sending API requests within Kubesafe. The format is https://{IP}:8080, where IP is the external IP address displayed in output of kubectl get svc kubesafe-ui -n kubesafe.

awsElasticBlockStore.name

Use any name to identify the EBS service used for provisioning storage for Kubesafe.

metadata_backup_config.s3.name

Name of the S3 bucket you have created in AWS to store Kubesafe metadata. No more than two Kubesafe installations can share the same S3 metadata bucket.

clusteraz

Name of the availability zone for the Amazon EC2 instances hosting this cluster.

clusterregion

Name of the region where this cluster runs.

accesskey

AWS access key ID for your cluster.

privatekey

AWS secret access key for your cluster.

ownerid

AWS account ID. To find this ID, log in to the AWS console and select “My Security Credentials” from the pulldown in the top menu bar.

Deployments using ONTAP storage have additional fields:

ontapVolume.site_info.restorestorageclass

On Kubernetes clusters with ONTAP storage, this value specifies which StorageClass to use for new volumes created when restoring applications from non-ONTAP storage. An example might be basic, referring to a Trident-supplied StorageClass.

ontapVolume.site_info.secret.accesskey

On Kubernetes clusters with ONTAP storage, this is the name of the administrative user on the storage VM hosting storage for this Kubernetes cluster.

ontapVolume.site_info.secret.privatekey

On Kubernetes clusters with ONTAP storage, this is the password of the user named in the accesskey field.

ontapVolume.netapp.ontapbackend

On Kubernetes clusters with ONTAP storage, the name of the Trident backend that provides ONTAP storage for volumes. An example might be trident_ontap_backend.

ontapVolume.netapp.tridentserviceip

On Kubernetes clusters with ONTAP storage, the cluster IP address of the Trident CSI service. Find this value by running kubectl get svc -n trident trident-csi. For example, 100.70.252.193.

ontapVolume.netapp.managementlif

On Kubernetes clusters with ONTAP storage, the IP address of the management interface of the ONTAP cluster hosting the storage. For example, 172.12.10.2.

ontapVolume.netapp.datalif

On Kubernetes clusters with ONTAP storage, the IP address of the NFS data interface of the storage VM hosting the storage. For example, 172.12.10.12.
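
If you need to look up some of these values, commands like the following may help; the trident namespace and the use of tridentctl are assumptions based on a typical Trident installation:

# Cluster IP of the Trident CSI service (tridentserviceip)
kubectl get svc -n trident trident-csi -o jsonpath='{.spec.clusterIP}'

# Backend details, including the backend name and LIF addresses, if tridentctl is installed
tridentctl -n trident get backend -o json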

Example configuration

As with all Kubernetes YAML specifications, the files are sensitive to the indentation of each line. Change the fields below to reflect the values for your environment.

$ cat clusterone.yaml
version: 1
ks_cluster_id: prod-cluster
ks_api_url: http://127.0.0.1:8000/kubesafe/v1
ks_ui_url: https://a795a20aeaa1b4c789aa8660example-2073642121.us-west-2.elb.amazonaws.com:8080
storage_vendor_config:
  awsElasticBlockStore:
    - name: prod-ebs
      site_info:
        clusteraz: "us-west-2a"
        clusterregion: "us-west-2"
        secret:
          accesskey: AKIAIOSFODNN7EXAMPLE
          privatekey: wJalrXUtnFEMI/K7MDENG/bPxRfiCYExample
        aws:
          ownerid: 0498EXAMPLE
metadata_backup_config:
  s3:
    - name: metadata-example-com
      site_info:
        clusteraz: "us-west-2a"
        clusterregion: "us-west-2"
        secret:
          accesskey: AKIAIOSFODNN7EXAMPLE
          privatekey: wJalrXUtnFEMI/K7MDENG/bPxRfiCYExample
        aws:
          ownerid: 0498EXAMPLE
notify_server_config:
  sns:
    - name: kubesafe-notification
      site_info:
        clusteraz: "us-west-2a"
        clusterregion: "us-west-2"
        secret:
          accesskey: AKIAIOSFODNN7EXAMPLE
          privatekey: wJalrXUtnFEMI/K7MDENG/bPxRfiCYExample
        aws:
          ownerid: 0498EXAMPLE
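
Because these files are indentation-sensitive, it can be worth checking that they parse as valid YAML before uploading. One way, assuming Python with the PyYAML module is available:

python3 -c "import yaml; yaml.safe_load(open('clusterone.yaml')); print('valid YAML')"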

Customizing storage configuration for ONTAP

For Kubernetes clusters using ONTAP storage, specify details about Trident and ONTAP configuration in an ontapVolume section of the configuration file.

$ cat ontapconfig.yaml
version: 1
ks_cluster_id: prod-cluster
ks_api_url: http://127.0.0.1:8000/kubesafe/v1
ks_ui_url: https://a795a20aeaa1b4c789aa8660example-2073642121.us-west-2.elb.amazonaws.com:8080
storage_vendor_config:
  ontapVolume:
    - name: "ontap-prod"
      site_info:
        clusteraz: "us-west-2a"
        clusterregion: "us-west-2"
        restorestorageclass: basic
        secret:
          accesskey: vsadmin
          privatekey: mysvmsecret!
        netapp:
          ontapbackend: "trident_ontap_backend"
          tridentserviceip: tridentcsi-ip
          managementlif: managementlif
          datalif: datalif

metadata_backup_config:
  s3:
    - name: metadata-example-com
      site_info:
        clusteraz: "us-west-2a"
        clusterregion: "us-west-2"
        secret:
          accesskey: AKIAIOSFODNN7EXAMPLE
          privatekey: wJalrXUtnFEMI/K7MDENG/bPxRfiCYExample
        aws:
          ownerid: 0498EXAMPLE
notify_server_config:
  sns:
    - name: kubesafe-notification
      site_info:
        clusteraz: "us-west-2a"
        clusterregion: "us-west-2"
        secret:
          accesskey: AKIAIOSFODNN7EXAMPLE
          privatekey: wJalrXUtnFEMI/K7MDENG/bPxRfiCYExample
        aws:
          ownerid: 0498EXAMPLE

Customizing storage configuration for Amazon S3 backups

The example objectstore.yaml file illustrates how to configure an Amazon S3 bucket as a backup location. Use this if you want to store backups in S3 that can be recovered on another cluster.

The metadata_backup_config section uses the same values as your cluster configuration; the data_backup_config section specifies the S3 bucket that will hold backups.

$ cat objectstore.yaml
version: 1
ks_cluster_id: backups
ks_api_url: http://127.0.0.1:8000/kubesafe/v1
ks_ui_url:

data_backup_config:
  s3:
    - name: kubesafe-backups
      site_info:
        clusteraz: "us-west-2a"
        clusterregion: "us-west-2"
        secret:
          accesskey: AKIAIOSFODNN7EXAMPLE
          privatekey: wJalrXUtnFEMI/K7MDENG/bPxRfiCYExample
        aws:
          ownerid: 0498EXAMPLE

metadata_backup_config:
  s3:
    - name: kubesafe-metadata
      site_info:
        clusteraz: "us-west-2a"
        clusterregion: "us-west-2"
        secret:
          accesskey: AKIAIOSFODNN7EXAMPLE
          privatekey: wJalrXUtnFEMI/K7MDENG/bPxRfiCYExample

Uploading Kubesafe configurations

When the configuration files are prepared, upload them into Kubesafe from the Configuration tab.

After adding the local cluster configuration, you can add optional remote clusters and optional object stores by clicking the “+” icons on the Configuration page. There are two options for remote clusters and object stores:

  • Read backups: the local cluster is authorized to clone from backups created on the remote cluster or stored on the remote object store

  • Write backups: the local cluster is authorized to store backups on the remote cluster or object store

Repeat the steps on any other clusters, making sure to load the correct “local” configuration for each one, and any optional remote configurations.