Validated Patterns

Deploying the Medical Diagnosis pattern

Prerequisites
  • An OpenShift cluster

  • A GitHub account and a personal access token with repository permissions, so that you can read from and write to your forks.

  • S3-capable storage set up in your public or private cloud for the X-ray images

  • The Helm binary. For installation instructions, see Installing Helm. For installation tooling dependencies, see Patterns quick start.

The Medical Diagnosis pattern does not have a dedicated hub or edge cluster.

Setting up an S3 Bucket for the xray-images

An S3 bucket is required for image processing. For information about creating a bucket in AWS S3, see the Utilities section.

For information about creating buckets on other cloud providers, see the documentation for your cloud provider.

Utilities

To use the utilities that are available, export some environment variables for your cloud provider.

The following example is for AWS. Ensure that you replace the values with your own keys:
export AWS_ACCESS_KEY_ID=AKXXXXXXXXXXXXX
export AWS_SECRET_ACCESS_KEY=gkXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX

Create the S3 bucket and copy the data from the Validated Patterns public bucket into the bucket that you created for your demo. You can do this in the cloud provider's console, or you can use the scripts that are provided in the utilities repository.

$ python s3-create.py -b mytest-bucket -r us-west-2 -p
$ python s3-sync-buckets.py -s validated-patterns-md-xray -t mytest-bucket -r us-west-2
Example output

Bucket setup
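
If you have the AWS CLI installed, you can optionally confirm that the objects were copied into your bucket. This is an illustrative check only; mytest-bucket and us-west-2 are the example values used above:

$ aws s3 ls s3://mytest-bucket --region us-west-2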

Note the name and URL of the bucket for further pattern configuration. For example, you must update these values in the values-global.yaml file, which contains a section for s3.
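
A minimal sketch of that s3 section, using the example bucket name and region from the previous step (substitute your own values):

s3:
  bucketSource: "https://s3.us-west-2.amazonaws.com/mytest-bucket"
  bucketBaseName: "xray-source"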

Preparing for deployment

Procedure
  1. Fork the medical-diagnosis repository on GitHub. You must fork the repository because your fork will be updated as part of the GitOps and DevOps processes.

  2. Clone the forked copy of this repository.

    $ git clone git@github.com:<your-username>/medical-diagnosis.git
  3. Create a local copy of the Helm values file that can safely include credentials.

    Do not commit this file. You do not want to push personal credentials to GitHub.

    Run the following commands:

    $ cp values-secret.yaml.template ~/values-secret-medical-diagnosis.yaml
    $ vi ~/values-secret-medical-diagnosis.yaml
    Example values-secret.yaml file
    version "2.0"
    secrets:
      # NEVER COMMIT THESE VALUES TO GIT
    
      # Database login credentials and configuration
      - name: xraylab
        fields:
        - name: database-user
          value: xraylab
        - name: database-host
          value: xraylabdb
        - name: database-db
          value: xraylabdb
        - name: database-master-user
          value: xraylab
        - name: database-password
          onMissingValue: generate
          vaultPolicy: validatedPatternDefaultPolicy
        - name: database-root-password
          onMissingValue: generate
          vaultPolicy: validatedPatternDefaultPolicy
        - name: database-master-password
          onMissingValue: generate
          vaultPolicy: validatedPatternDefaultPolicy
    
      # Grafana Dashboard admin user/password
      - name: grafana
        fields:
          - name: GF_SECURITY_ADMIN_USER
            value: root
          - name: GF_SECURITY_ADMIN_PASSWORD
            onMissingValue: generate
            vaultPolicy: validatedPatternDefaultPolicy

    By default, the Vault password policy generates the passwords for you. However, you can define your own passwords.

    When defining a custom password for the database users, avoid using the $ special character because it is interpreted by the shell and results in a password other than the one you intended.
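
    For example, to set a fixed database password instead of having Vault generate one, you could replace the generated field with a literal value, in the same way other fields in the template set fixed values. The password shown here is purely illustrative:

    - name: database-password
      value: MyFixedPassw0rd   # illustrative only; avoid the $ character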

  4. To customize the deployment for your cluster, update the values-global.yaml file by running the following commands:

    $ git checkout -b my-branch
    $ vi values-global.yaml

    Replace instances of PROVIDE_ with your specific configuration values.

       ...omitted
       datacenter:
         cloudProvider: PROVIDE_CLOUD_PROVIDER #AWS, AZURE, GCP
         storageClassName: PROVIDE_STORAGECLASS_NAME #gp3-csi
         region: PROVIDE_CLOUD_REGION #us-east-2
         clustername: PROVIDE_CLUSTER_NAME #OpenShift clusterName
         domain: PROVIDE_DNS_DOMAIN #example.com
    
       s3:
         # Values for S3 bucket access
         # Replace <region> with AWS region where S3 bucket was created
         # Replace <cluster-name> and <domain> with your OpenShift cluster values
         # bucketSource: "https://s3.<region>.amazonaws.com/<s3_bucket_name>"
         bucketSource: PROVIDE_BUCKET_SOURCE #validated-patterns-md-xray
         # Bucket base name used for xray images
         bucketBaseName: "xray-source"
    $ git add values-global.yaml
    $ git commit values-global.yaml
    $ git push origin my-branch
  5. To deploy the pattern, you can use the Validated Patterns Operator. If you do use the Operator, skip to validating the environment.

  6. To preview the changes that will be applied to the Helm charts, run the following command:

    $ ./pattern.sh make show
  7. Log in to your cluster by running the following command:

    $ oc login

    Optional: Set the KUBECONFIG variable for the kubeconfig file path:

     export KUBECONFIG=~/<path_to_kubeconfig>
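
    Optionally, as a quick sanity check that you are logged in to the intended cluster, you can run the following commands:

    $ oc whoami
    $ oc whoami --show-server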

Check the values files before deployment

To ensure that you have the required variables to deploy the Medical Diagnosis pattern, run the ./pattern.sh make predeploy command. You can review your values and make updates, if required.

You must review the following values files before deploying the Medical Diagnosis pattern:

Values File           Description

values-secret.yaml    Values file that includes the secret parameters required by the pattern

values-global.yaml    File that contains all the global values used by Helm to deploy the pattern

Before you run the ./pattern.sh make install command, ensure that you have the correct values for:

- domain
- clusterName
- cloudProvider
- storageClassName
- region
- bucketSource
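
As a quick sanity check, which is not part of the official procedure, you can search for any placeholder values that you might have missed:

$ grep -n "PROVIDE_" values-global.yaml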

Deploy

  1. To apply the changes to your cluster, run the following command:

    $ ./pattern.sh make install

    If the installation fails, review the instructions and make updates, if required. To continue the installation, run the following command:

    $ ./pattern.sh make update

    This step might take some time, especially for the OpenShift Data Foundation Operator components to install and synchronize. The ./pattern.sh make install command provides some progress updates during the installation process. It can take up to twenty minutes. Compare your ./pattern.sh make install run progress with the following video that shows a successful installation.

    xray deployment
  2. Verify that the Operators have been installed.

    1. To verify, in the OpenShift Container Platform web console, navigate to the Operators → Installed Operators page.

    2. Check that the Operators are installed in the openshift-operators namespace and that their status is Succeeded. Ensure that OpenShift Data Foundation appears in the list of installed Operators.
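
    If you prefer the command line, an equivalent check (shown here as an illustration) is to list the ClusterServiceVersions across all namespaces and confirm that the PHASE column reports Succeeded:

    $ oc get csv -A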

Using OpenShift GitOps to check on Application progress

To check the various applications that are being deployed, you can view their progress in the OpenShift GitOps (ArgoCD) console.

  1. Obtain the ArgoCD URLs and passwords.

    The URLs and login credentials for ArgoCD change depending on the pattern name and the site names they control. Follow these steps to find them, regardless of how you chose to deploy the pattern.

    Display the fully qualified domain names, and matching login credentials, for all ArgoCD instances:

    ARGO_CMD=`oc get secrets -A -o jsonpath='{range .items[*]}{"oc get -n "}{.metadata.namespace}{" routes; oc -n "}{.metadata.namespace}{" extract secrets/"}{.metadata.name}{" --to=-\\n"}{end}' | grep gitops-cluster`
    CMD=`echo $ARGO_CMD | sed 's|- oc|-;oc|g'`
    eval $CMD
    Example output
    NAME                       HOST/PORT                                                                                      PATH   SERVICES                   PORT    TERMINATION            WILDCARD
    hub-gitops-server   hub-gitops-server-medical-diagnosis-hub.apps.wh-medctr.blueprints.rhecoeng.com          hub-gitops-server   https   passthrough/Redirect   None
    # admin.password
    xsyYU6eSWtwniEk1X3jL0c2TGfQgVpDH
    NAME                      HOST/PORT                                                                         PATH   SERVICES                  PORT    TERMINATION            WILDCARD
    cluster                   cluster-openshift-gitops.apps.wh-medctr.blueprints.rhecoeng.com                          cluster                   8080    reencrypt/Allow        None
    kam                       kam-openshift-gitops.apps.wh-medctr.blueprints.rhecoeng.com                              kam                       8443    passthrough/None       None
    openshift-gitops-server   openshift-gitops-server-openshift-gitops.apps.wh-medctr.blueprints.rhecoeng.com          openshift-gitops-server   https   passthrough/Redirect   None
    # admin.password
    FdGgWHsBYkeqOczE3PuRpU1jLn7C2fD6

    Examine the medical-diagnosis-hub ArgoCD instance. You can track all the applications for the pattern in this instance.

  2. Check that all applications are synchronized. There are thirteen different ArgoCD applications that are deployed as part of this pattern.
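
    If you prefer the command line, you can also list the ArgoCD applications with their sync and health status. This is an illustrative check; the -A flag covers whichever namespaces the pattern uses:

    $ oc get applications.argoproj.io -A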

Viewing the Grafana-based dashboard

  1. Accept the SSL certificates in the browser for the dashboard. In the OpenShift Container Platform web console, go to the Routes for the openshift-storage project. Click the URL for the s3-rgw route.

    storage route

    Ensure that you see some XML and not the access denied error message.

    storage rgw route
  2. While still looking at Routes, change the project to xraylab-1. Click the URL for the image-server. Ensure that you do not see an access denied error message. You must see a Hello World message.

    grafana routes
  3. Turn on the image file flow. There are three ways to go about this.

    You can use the command line (make sure that you have KUBECONFIG set, or that you are logged in to the cluster):

    $ oc scale deploymentconfig/image-generator --replicas=1 -n xraylab-1

    Alternatively, you can go to the OpenShift web console, change the view from Administrator to Developer, and select Topology. From there, select the xraylab-1 project.

    dev topology

    Right-click on the image-generator pod icon and select Edit Pod count.

    dev topology menu

    Increase the pod count from 0 to 1 and save.

    dev topology pod count

    Alternatively, you can achieve the same outcome from the Administrator console.

    In the OpenShift web console, go to Workloads → DeploymentConfigs for the xraylab-1 project. Click image-generator and increase the pod count to 1.

    start image flow
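
    Whichever method you use, you can optionally confirm the new replica count from the command line (an illustrative check):

    $ oc get dc image-generator -n xraylab-1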

Making some changes on the dashboard

You can change some of the parameters and watch how the changes affect the dashboard.

  1. You can increase or decrease the number of image generators.

    $ oc scale deploymentconfig/image-generator --replicas=2

    Check the dashboard.

    $ oc scale deploymentconfig/image-generator --replicas=0

    Watch the dashboard stop processing images.

  2. You can also simulate a change of the AI model version, because the model version is only an environment variable in the Serverless (Knative) Service configuration.

    $ oc patch service.serving.knative.dev/risk-assessment --type=json -p '[{"op":"replace","path":"/spec/template/metadata/annotations/revisionTimestamp","value":"'"$(date +%F_%T)"'"},{"op":"replace","path":"/spec/template/spec/containers/0/env/0/value","value":"v2"}]'

    This changes the model version value, and the revisionTimestamp in the annotations, which triggers a redeployment of the service.
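
    To confirm that the patch created a new revision, you can list the Knative revisions for the service. This is an illustrative check, assuming the service runs in the xraylab-1 namespace:

    $ oc get revisions.serving.knative.dev -n xraylab-1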