Validated Patterns

Deploying the RamenDR Starter Kit Pattern

Prerequisites
  • An OpenShift cluster

    • To create an OpenShift cluster, go to the Red Hat Hybrid Cloud console.

    • Select OpenShift -> Red Hat OpenShift Container Platform -> Create cluster.

  • A GitHub account with a personal access token that has repository read and write permissions.

  • The Helm binary. For installation instructions, see Installing Helm.

  • Additional installation tool dependencies. For details, see Patterns quick start.

It is desirable to have one cluster for deploying the GitOps management hub assets and one or more separate clusters to act as the managed clusters.
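As a quick sanity check before you begin, you can verify that the client tools are on your PATH. The tool list below is an assumption drawn from the prerequisites above; adjust it to your environment:

```shell
# Pre-flight check: report which of the expected client tools are installed.
# The tool list is illustrative; extend it with anything your workflow needs.
for tool in git helm oc; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "found:   $tool"
  else
    echo "missing: $tool"
  fi
done
```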

Preparing for deployment

Procedure
  1. Fork the ramendr-starter-kit repository on GitHub. You must fork the repository because your fork is updated as part of the GitOps and DevOps processes.

  2. Clone the forked copy of this repository.

    $ git clone git@github.com:your-username/ramendr-starter-kit.git
  3. Ensure you are in the root directory of your cloned repository:

    $ cd /path/to/your/repository
  4. Run the following command to set the upstream repository:

    $ git remote add -f upstream git@github.com:validatedpatterns/ramendr-starter-kit.git
  5. Verify the setup of your remote repositories by running the following command:

    $ git remote -v
    Example output
    origin	git@github.com:your-username/ramendr-starter-kit.git (fetch)
    origin	git@github.com:your-username/ramendr-starter-kit.git (push)
    upstream	git@github.com:validatedpatterns/ramendr-starter-kit.git (fetch)
    upstream	git@github.com:validatedpatterns/ramendr-starter-kit.git (push)
  6. Make a local copy of the secrets template outside of your repository to hold the credentials for the pattern.

    Do not add, commit, or push this file to your repository. Doing so may expose personal credentials to GitHub.

    Run the following commands:

    $ cp values-secret.yaml.template ~/values-secret.yaml
  7. Populate this file with secrets, or credentials, that are needed to deploy the pattern successfully:

    $ vi ~/values-secret.yaml
    1. Edit the vm-ssh section to include the username, private key, and public key. Note that the privatekey and publickey fields use path rather than value, so the key material is read from files instead of being stored inline. For example:

        - name: vm-ssh
          vaultPrefixes:
          - global
          fields:
          - name: username
            value: 'cloud-user'
          - name: privatekey
            path: '/path/to/private-ssh-key'
          - name: publickey
            path: '/path/to/public-ssh-key'

      Paste the path to your locally stored private and public keys. If you do not have a key pair, generate one using ssh-keygen.
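If you need to generate a key pair, a minimal sketch follows; the file name is only an example, so point values-secret.yaml at whatever paths you choose:

```shell
# Generate an ed25519 key pair for the pattern's VMs.
# The file name is an example; substitute your own.
mkdir -p "$HOME/.ssh"
ssh-keygen -q -t ed25519 -N '' -f "$HOME/.ssh/ramendr-vm-key"
ls -l "$HOME/.ssh/ramendr-vm-key" "$HOME/.ssh/ramendr-vm-key.pub"
```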

    2. Edit the cloud-init section to include the userData block to use with cloud-init. For example:

        - name: cloud-init
          vaultPrefixes:
          - global
          fields:
          - name: userData
            value: |-
              #cloud-config
              user: 'cloud-user'
              password: 'cloud-user'
              chpasswd: { expire: False }
    3. Edit the aws section to refer to the file containing your AWS credentials:

        - name: aws
          fields:
            - name: aws_access_key_id
              ini_file: ~/.aws/credentials
              ini_key: aws_access_key_id
            - name: aws_secret_access_key
              ini_file: ~/.aws/credentials
              ini_key: aws_secret_access_key
            - name: baseDomain
              value: aws.example.com
            - name: pullSecret
              path: ~/pull_secret.json
            - name: ssh-privatekey
              path: ~/.ssh/privatekey
            - name: ssh-publickey
              path: ~/.ssh/publickey
    4. Edit the openshiftPullSecret section to refer to the file containing your OpenShift pull secret:

        - name: openshiftPullSecret
          fields:
            - name: .dockerconfigjson
              path: ~/pull_secret.json
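Because several fields above reference files by path, a quick check that those files exist can save a failed deployment later. The paths below mirror the examples in this document; substitute your own:

```shell
# Confirm the files referenced from values-secret.yaml exist before deploying.
# These paths are the examples used above; adjust them to your setup.
for f in "$HOME/pull_secret.json" "$HOME/.aws/credentials" \
         "$HOME/.ssh/privatekey" "$HOME/.ssh/publickey"; do
  [ -e "$f" ] && echo "ok:      $f" || echo "missing: $f"
done
```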
  8. Create and switch to a new branch named my-branch by running the following command:

    $ git checkout -b my-branch
  9. Customize the pattern for your environment; the pattern cannot infer which AWS domains you control. In particular, edit hub/rdr/values.yaml to set baseDomain, and adjust the aws.region setting if needed. If you changed this or any other file tracked by Git, stage the changes with git add, then commit them by running the following command:

    $ git commit -m "any updates"
  10. Push the changes to your forked repository:

    $ git push origin my-branch

The preferred way to install this pattern is by using the ./pattern.sh script.

Deploying the pattern by using the pattern.sh file

To deploy the pattern by using the pattern.sh file, complete the following steps:

  1. Log in to your cluster by following this procedure:

    1. Obtain an API token by visiting https://oauth-openshift.apps.<your-cluster>.<domain>/oauth/token/request.

    2. Log in to the cluster by running the following command:

      $ oc login --token=<retrieved-token> --server=https://api.<your-cluster>.<domain>:6443

      Or log in by running the following command:

      $ export KUBECONFIG=~/<path_to_kubeconfig>
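A quick way to confirm the login took effect before installing; this guard is an illustrative convenience, not part of the pattern's tooling:

```shell
# Verify the oc client can reach a cluster with the current credentials.
if oc whoami >/dev/null 2>&1; then
  echo "logged in to: $(oc whoami --show-server)"
else
  echo "not logged in: run 'oc login' or export KUBECONFIG first"
fi
```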
  2. Deploy the pattern to your cluster. Run the following command:

    $ ./pattern.sh make install
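If you prefer the CLI, you can also list the pattern's Argo CD applications from the hub while the install progresses; this assumes the standard Argo CD Application custom resource:

```shell
# List Argo CD Application resources across all namespaces on the hub.
# The fallback message keeps the command safe to run without a cluster.
oc get applications.argoproj.io -A 2>/dev/null \
  || echo "oc is unavailable or you are not logged in to the hub"
```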
Verification
  1. Verify that the Operators have been installed on the hub cluster. In the OpenShift Container Platform web console on the hub cluster (in the "local-cluster" view), navigate to the Operators → Installed Operators page:

    ramendr-starter-kit-operators
    Figure 1. RamenDR Hub Operators
  2. Verify that the primary and secondary managed clusters have been built; this can take close to an hour on AWS. On the hub cluster, navigate to All Clusters in the OpenShift Container Platform web console:

    ramendr-starter-kit-operators
    Figure 2. RamenDR Clusters
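You can also watch the managed clusters from the CLI on the hub; ManagedCluster is the standard Red Hat Advanced Cluster Management resource:

```shell
# List managed clusters registered with the hub; both managed clusters
# should eventually report JOINED and AVAILABLE as True.
# The fallback message keeps the command safe to run without a cluster.
oc get managedclusters 2>/dev/null \
  || echo "oc is unavailable or you are not logged in to the hub"
```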
  3. Wait for everything to deploy to all the clusters; it can take up to another hour from when the managed clusters finish building. You can track progress through the Hub ArgoCD UI, reachable from the nine-dots application launcher, especially the "opp-policy" and "regional-dr" applications. Most of the critical resources are in the regional-dr application (at present, the opp-policy application may show missing/out-of-sync and the regional-dr application may show OutOfSync even when both are healthy; a fix is in progress):

    ramendr-starter-kit-hub-applications
    Figure 3. RamenDR Starter Kit Applications
  4. Eventually, the Virtual Machines are deployed and the Disaster Recovery Placement Control (DRPC) shows that resources are now protected. Reach this screen via All Clusters → Data Services → Disaster Recovery → Protected Applications on the hub cluster. Kubernetes objects normally synchronize faster than application volumes. When both indicators show Healthy, it is safe to trigger a failover:

    ramendr-starter-kit-running-vms
    Figure 4. RamenDR Starter Kit Applications
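You can also inspect the protection state from the CLI on the hub; drpc is the short name the RamenDR CRD defines for DRPlacementControl:

```shell
# List DRPlacementControl resources across all namespaces on the hub.
# The fallback message keeps the command safe to run without a cluster.
oc get drpc -A 2>/dev/null \
  || echo "oc is unavailable or you are not logged in to the hub"
```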
  5. To see the running VMs themselves, navigate to the Virtualization → VirtualMachines area on the primary cluster. The pattern configures four RHEL 9 VMs by default:

    ramendr-starter-kit-trigger-failover-1
    Figure 5. RamenDR Starter Kit Trigger Failover, part 1
  6. Clicking the "Failover" option brings up a modal dialog that indicates where the failover will move the workload and the time of the workload's last known good state. Click the "Initiate" button to begin the failover:

    ramendr-starter-kit-trigger-failover-2
    Figure 6. RamenDR Starter Kit Trigger Failover, part 2
  7. While the failover is in progress, you can watch it in the activity area. When it completes, it indicates (for a discovered application) that you must clean up application resources to allow replication to start in the other direction. Note that the primary cluster should have changed:

    ramendr-starter-kit-failover-cleanup
    Figure 7. RamenDR Starter Kit Failover Cleanup
  8. The pattern provides a script for this cleanup. With your hub cluster KUBECONFIG set, run ./pattern.sh scripts/cleanup-gitops-vms-non-primary.sh:

    ramendr-starter-kit-failover-cleanup-script
    Figure 8. RamenDR Starter Kit Failover Cleanup
  9. After a few minutes, the resources should show healthy and protected again (the PVCs take a few minutes to synchronize):

    ramendr-starter-kit-reprotected
    Figure 9. RamenDR Starter Kit Reprotected