ClusterGroup configuration in values files
ClusterGroup serves as a centralized control mechanism, enabling the grouping of clusters based on specific criteria such as geographic location, project, or application type. This feature enhances cluster management efficiency by facilitating the coordination of multiple clusters within the system.
In the validated patterns framework, the ClusterGroup plays a pivotal role: it is defined as a single cluster or a collection of clusters that share a configuration class. Its attributes are expressed using Helm charts and standard Kubernetes features.
Typically, a ClusterGroup represents a single cluster and serves as the foundation of each validated pattern. However, it can also encompass managed ClusterGroups tailored for replication or scaling efforts. Each pattern requires at least one ClusterGroup; the primary one can be named arbitrarily. The name is defined in values-global.yaml under the key main.clusterGroupName, with "hub" commonly used as a default, although other names are acceptable. For example, if main.clusterGroupName is "hub", the framework searches for values-hub.yaml in the pattern's root directory. Note that the main ClusterGroup is typically a singleton and may incorporate Red Hat Advanced Cluster Management (RHACM) if it is part of the pattern.
Additionally, the main ClusterGroup can define managedClusterGroups in its values file, specifying characteristics and policies for spoke clusters, which can be singletons or groups.
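For instance, a minimal values-global.yaml could set the primary ClusterGroup name as described above. This is a sketch of the convention only; the surrounding keys in a real pattern's values-global.yaml may differ:

```yaml
# values-global.yaml (excerpt, illustrative)
main:
  clusterGroupName: hub   # the framework then loads values-hub.yaml from the repo root
```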
Basic parameters configuration in a ClusterGroup
You can set foundational parameters within a ClusterGroup, including cluster location, version, and networking settings. By configuring these basic parameters, you can establish the initial framework for managing and provisioning clusters within the group. These parameters include:
Name: Assigns a unique identifier to the ClusterGroup, acting as a reference point for identification within the broader infrastructure.
TargetCluster: Specifies the primary cluster associated with the ClusterGroup, guiding management and control within its scope. The default, "in-cluster", is reasonable for most patterns; change it only when using the "push" model of OpenShift GitOps.
IsHubCluster: Indicates whether the designated cluster serves as a hub cluster, centralizing coordination and management within the ClusterGroup. Each Validated Pattern may have at most one cluster designated as a Hub cluster. This designation affects the use of Red Hat Advanced Cluster Management within the Validated Patterns framework.
SharedValueFiles: Enables specification of shared values files containing configuration settings that apply across multiple clusters within the ClusterGroup, promoting consistency and simplifying configuration management. Each file listed in sharedValueFiles is applied as a values file to every application designated for installation in the clusterGroup.
OperatorgroupExcludes: Allows exclusion of specific operator groups from being applied to clusters within the ClusterGroup, providing flexibility in customizing operator deployment and management.
Projects: Defines OpenShift GitOps projects to facilitate grouping applications. Each application must reference a project.
By configuring these basic parameters in a ClusterGroup, you can effectively establish the foundational elements necessary for organizing, managing, and coordinating clustered resources.
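Taken together, these parameters might appear in a values file as follows. This is a hedged sketch rather than an excerpt from a specific pattern; the shared values file path and project name are illustrative:

```yaml
clusterGroup:
  name: hub                 # unique identifier for this ClusterGroup
  isHubCluster: true        # at most one hub cluster per pattern
  targetCluster: in-cluster # default; change for the "push" GitOps model
  sharedValueFiles:
  - /overrides/values-common.yaml   # hypothetical shared settings file
  projects:
  - hub                     # OpenShift GitOps project referenced by applications
```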
Namespace configuration in a ClusterGroup
Namespace configuration within a ClusterGroup enables effective workload management and organization by logically partitioning and isolating resources and applications. This feature enhances security, facilitates resource allocation, and simplifies administrative tasks by creating distinct namespaces within the ClusterGroup.
The namespaces parameter in the clusterGroup Helm chart specifies the namespaces to create within the ClusterGroup, enabling access control enforcement, resource allocation, and workload segregation according to specific requirements or organizational policies.
Sub-parameters
Name: Specifies the name of the namespace to be created within the ClusterGroup.
Labels: Allows you to assign labels or tags to the namespace for categorization and identification purposes.
Annotations: Provides additional metadata or descriptive information about the namespace for documentation or management purposes.
Namespaces without extra labels and annotations (from the multicloud-gitops pattern):
namespaces:
- open-cluster-management
- vault
- golang-external-secrets
- config-demo
- hello-world
Namespaces with labels and annotations:
namespaces:
  open-cluster-management: {}
  vault: {}
  golang-external-secrets: {}
  config-demo:
    labels:
      app: 'config-demo'
      tier: 'test'
    annotations:
      'test/annotation': true
  hello-world: {}
In this example, we add "app" and "tier" labels to the config-demo namespace, along with the annotation "test/annotation: true". All other namespaces are created with defaults.
Subscription configuration in a ClusterGroup
Configuring subscriptions within a ClusterGroup streamlines access management and resource allocation for users and applications. It also facilitates the creation of a software bill of materials detailing the intended installations for the ClusterGroup. This capability empowers you to establish subscription plans, enforce governance policies, and ensure optimal resource utilization. Through the subscriptions parameter, you can define access levels, allocate resources, and promote efficient management and cost optimization within the ClusterGroup environment.
Subscriptions are the preferred way of including OpenShift operators in validated patterns. They install OLM packages (OLM is the Operator Lifecycle Manager), which function similarly to RPM packages on RHEL, with operators responsible for operating and upgrading the software they manage.
Sub-parameters
Name: The name of the package to install.
Source: Specifies which catalog to use for packages. The default is “redhat-operators,” which should only be changed if you want to install pre-release operators.
SourceNamespace: Specifies which namespace to use for the catalog. The default is “openshift-marketplace,” which should only be changed if you want to install a pre-release operator.
Channel: Indicates the default “channel” for updates, usually corresponding to major versions of the operator. While operators define a default channel for each major version of OpenShift they support, specifying a particular channel is optional unless you wish to use a non-default one. There is no standard naming convention for channels. They can be named "stable", "development", or "2.9", among others.
InstallPlanApproval: Defaults to “automatic,” allowing the operator to periodically check and update itself. Alternatively, it can be set to “manual,” meaning operators will only upgrade when triggered by an administrator.
Config: Primarily consists of an “env:” key that allows environment variables to be passed to the operator.
UseCSV: Specifies whether to use specific “cluster service versions” for installation. Defaults to false; the channel primarily determines the installed version.
StartingCSV: Determines the lowest “cluster service version” to consider installing. If installPlanApproval is “manual,” only this version will be considered for installation.
Disabled: Optional boolean (defaults to false). Prevents the installation of a subscription specified higher in the value hierarchy. For example, if an Operator is incompatible with Azure, the default pattern may include the installation of the subscription. However, an Azure-specific configuration file could specify disabled: true, thereby skipping the installation of the operator on Azure.
Subscriptions for multicloud-gitops (just ACM):
subscriptions:
  acm:
    name: advanced-cluster-management
    namespace: open-cluster-management
    channel: release-2.10
Subscriptions for Industrial Edge (many more):
subscriptions:
  acm:
    name: advanced-cluster-management
    namespace: open-cluster-management
  amqbroker-prod:
    name: amq-broker-rhel8
    namespace: manuela-tst-all
  amqstreams-prod-dev:
    name: amq-streams
    namespaces:
    - manuela-data-lake
    - manuela-tst-all
  camelk-prod-dev:
    name: red-hat-camel-k
    namespaces:
    - manuela-data-lake
    - manuela-tst-all
  seldon-prod-dev:
    name: seldon-operator-certified
    namespaces:
    - openshift-operators
    source: certified-operators
  pipelines:
    name: openshift-pipelines-operator-rh
    source: redhat-operators
  odh:
    name: opendatahub-operator
    source: community-operators
Subscriptions for Ansible Edge GitOps (demonstrating disable and override). The main values file includes:
subscriptions:
  acm:
    name: advanced-cluster-management
    namespace: open-cluster-management
    channel: release-2.10
An overriding values file then disables one subscription and adds another:
subscriptions:
  openshift-data-foundation:
    disabled: true
  portworx:
    name: portworx-certified
    namespace: portworx
    channel: stable
    source: certified-operators
In this example, the values file is intended to override the default settings in the main values-hub.yaml. Consequently, the openshift-data-foundation application is overridden and disabled, while the portworx application is added.
Managed cluster groups configuration in a ClusterGroup
Configuring managed cluster groups within a ClusterGroup enhances the organizational structure and simplifies resource management. Through the managedClusterGroups parameter, you can define and organize clusters based on specific criteria, promoting efficient management and resource allocation within the ClusterGroup. This functionality streamlines management and coordination tasks across the infrastructure. Managed ClusterGroups mirror the configuration of a single cluster, facilitating the deployment of identical applications or minor configuration adjustments across multiple clusters.
This feature implies the existence of a values-{name}.yaml file in the pattern directory root, containing the clusterGroup definition for the managed clusterGroup. It can have its own subscriptions, applications, namespaces, and projects, which may or may not mirror those of the hub clusterGroup.
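A managed clusterGroup's values file follows the same shape as the hub's. A hypothetical values-factory.yaml skeleton (the namespace, project, and chart path are illustrative, not taken from a specific pattern) might look like:

```yaml
# values-factory.yaml (hypothetical skeleton)
clusterGroup:
  name: factory
  isHubCluster: false
  namespaces:
  - example-app            # illustrative namespace
  projects:
  - factory
  applications:
    example-app:
      name: example-app
      namespace: example-app
      project: factory
      path: charts/factory/example-app   # illustrative chart path
```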
Sub-parameters
Name: Provides a descriptive identifier for organizational purposes.
HelmOverrides: Adds values to Helm-based applications in the managed clusterGroup to adjust cluster configuration based on policy. Each override is specified with a name: and a value: parameter.
ClusterSelector: Specifies attributes that the Hub cluster’s Red Hat Advanced Cluster Management instance uses to determine whether or not to assign the clusterGroup policy to a cluster joined to it.
ManagedClusterGroups block within the Industrial Edge pattern:
managedClusterGroups:
  factory:
    name: factory
    helmOverrides:
    # Values must be strings!
    - name: clusterGroup.isHubCluster
      value: "false"
    clusterSelector:
      matchLabels:
        clusterGroup: factory
      matchExpressions:
      - key: vendor
        operator: In
        values:
        - OpenShift
In this example, the helmOverrides section applies Helm overrides to the applications in the declared clusterGroup. The clusterSelector has two matching criteria: it seeks a label "clusterGroup" with the value "factory", and the vendor must be "OpenShift". When these conditions are met, the policy is enacted, and the namespaces, projects, subscriptions, and applications outlined in values-factory.yaml are applied. Multiple clusters may match a single managedClusterGroup definition, providing flexibility in deployment.
ManagedClusterGroups block within the retail pattern:
managedClusterGroups:
  raleigh:
    name: store-raleigh
    helmOverrides:
    # Values must be strings!
    - name: clusterGroup.isHubCluster
      value: "false"
    clusterSelector:
      matchLabels:
        clusterGroup: store-raleigh
      matchExpressions:
      - key: vendor
        operator: In
        values:
        - OpenShift
In this example, we look for an OpenShift cluster labeled with "store-raleigh" as its clusterGroup.
Additional clusterSelectors and overrides are also possible for specific customization needs.
Applications configuration in a ClusterGroup
Configuring applications within a ClusterGroup streamlines the process of defining, configuring, and coordinating software applications across clustered environments. The applications parameter allows you to specify the properties, dependencies, and resources required for each application deployment within the ClusterGroup, ensuring consistent and efficient deployment across multiple clusters.
In this context, applications refer to those defined by the OpenShift GitOps Operator. These do not need to be applications in the traditional sense but are discrete units of deployment. The Validated Patterns framework supports any type of application that OpenShift GitOps can manage, but the most commonly used type is a Helm chart co-located in the repository that defines the pattern.
Sub-parameters
Name: Specifies the name of the application, providing a unique identifier for management purposes.
Namespace (mandatory): The namespace that the application will be deployed into.
Project: The OpenShift GitOps project associated with the application, used for OpenShift GitOps grouping.
Path: The path, relative to the pattern repository, that contains the application. For a Helm chart, this should be the top level of the Helm structure, including Chart.yaml, a templates directory, and a values.yaml to define default values.
Kustomize: A boolean indicating whether the application is a Kustomize artifact. If true, Helm chart processing options are disabled. Kustomize artifacts are fully supported by the framework.
Overrides (optional): Defines value-by-value overrides for a Helm chart. Each override must include a name and a value. The name specifies the variable being overridden, and the value is what will be used in the template. Overrides have the highest priority among Helm variables.
Plugin: Uses a custom-defined GitOps Config Management Plugin. These plugins offer functionality beyond the standard Helm and Kustomize support provided by the OpenShift GitOps Operator. More information on defining config management plugins in the framework can be found in Argo CD config management plugins in Validated Patterns.
IgnoreDifferences: A structure given to OpenShift GitOps to programmatically not consider differences as “out of sync.” Use this when the data structures are expected to be out of sync because of different (and expected) cluster metadata or security configurations.
ExtraValueFiles: A list of additional values files passed to the Helm chart when rendering the templates. These files override the defaults in the chart's values.yaml. For hub clusters, the framework automatically parses "automatic" variables.
odf:
  name: odf
  namespace: openshift-storage
  project: hub
  path: charts/hub/openshift-data-foundations
  extraValueFiles:
  - '/overrides/values-odf-{{ $.Values.global.clusterPlatform }}-{{ $.Values.global.clusterVersion }}.yaml'
When the framework renders this block, it uses the cluster settings for global.clusterPlatform and global.clusterVersion. For instance, if there is a file /overrides/values-odf-AWS-4.11.yaml and the cluster is running OpenShift 4.11 on AWS, those values are used by the chart.
The framework ensures that missing value files do not cause errors. If the pattern is running on a different platform or cluster version, this construction will not cause an error; the values will simply be ignored.
Using variables for extraValueFiles is optional. You can also use constant text and paths. The Industrial Edge pattern does this and employs GitOps workflows to edit the values files in place:
test:
  name: manuela-test
  namespace: manuela-tst-all
  project: datacenter
  path: charts/datacenter/manuela-tst
  extraValueFiles:
  - /overrides/values-test-imagedata.yaml
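Per-application Helm overrides, described in the sub-parameters above, can also be expressed directly in an application block. The following is a hedged sketch; the application name, chart path, and Helm variable are illustrative, not taken from a specific pattern:

```yaml
hello-world:
  name: hello-world
  namespace: hello-world
  project: hub
  path: charts/all/hello-world      # illustrative chart path
  overrides:
  # Each override pairs a Helm variable name with the value to use;
  # overrides have the highest priority among Helm variables.
  - name: global.greeting           # hypothetical Helm variable
    value: "Hello from the hub"
```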
Imperative values configuration in a ClusterGroup
The imperative parameter in a ClusterGroup allows direct specification of essential configurations for managing clustered resources, bypassing default settings. This encompasses tasks requiring cluster-wide access, such as distributing certificates or access tokens. Within this framework, a pod (a group of containers) is defined to execute specific functions on each cluster within the ClusterGroup, rerunning jobs periodically. You can specify any container image and associated commands for sequential execution. While not mandatory, Ansible facilitates writing imperative jobs, with all values passed as Ansible and Helm values.
Sub-parameters
Parameter Name: Specifies the name of the parameter or property to be configured imperatively within the ClusterGroup.
Value: Defines the value for the specified parameter, ensuring it is explicitly configured within the ClusterGroup.
Scope: Specifies the scope or context in which the imperative value applies, such as a specific application or resource group within the ClusterGroup.
jobs:
- name: deploy-kubevirt-worker
  playbook: ansible/deploy_kubevirt_worker.yml
  verbosity: -vvv
- name: configure-aap-controller
  playbook: ansible/imperative_configure_controller.yml
  image: quay.io/hybridcloudpatterns/ansible-edge-gitops-ee:latest
  verbosity: -vvv
  timeout: "900"
clusterRoleYaml:
- apiGroups:
  - "*"
  resources:
  - machinesets
  verbs:
  - "*"
- apiGroups:
  - "*"
  resources:
  - "*"
  verbs:
  - get
  - list
  - watch
In this example, the imperative section defines two jobs, "deploy-kubevirt-worker" and "configure-aap-controller".
The "deploy-kubevirt-worker" job assumes that the cluster runs on AWS. It uses the OpenShift MachineSet API to add a bare-metal node for running virtual machines.
The "configure-aap-controller" job sets up the Ansible Automation Platform (AAP), a crucial component of the Ansible Edge GitOps platform. This job entitles AAP and sets up projects, jobs, and credentials. Unlike the default container image, this example uses a different image.
Additionally, an optional clusterRoleYaml section is defined. By default, the imperative job runs under role-based access control (RBAC) that provides read-only access to all resources within its cluster. If a job requires write access to alter or generate settings, those permissions can be specified in the clusterRoleYaml section. In the Ansible Edge scenario, the "deploy-kubevirt-worker" job needs permissions to manipulate and create machinesets, while the "configure-aap-controller" job requires only read access to Kubernetes objects.