Deploying the Ingress Mesh BGP pattern
The Ingress Mesh BGP pattern demonstrates multi-cluster networking with BGP-based load balancing and service mesh connectivity. Deploying this pattern requires:
Two OpenShift clusters on AWS in the same region
A BGP routing infrastructure (created by the pattern’s Ansible automation)
The Validated Patterns framework
This pattern is designed specifically for AWS and requires the ability to create EC2 instances for the routing infrastructure.
Two OpenShift clusters on AWS:
One cluster designated as "west" (hub)
One cluster designated as "east" (spoke)
To create OpenShift clusters, go to the Red Hat Hybrid Cloud console and select OpenShift → Red Hat OpenShift Container Platform → Create cluster.
See the install-configs folder in the repository for example install-config.yaml files.
Both clusters must have a dynamic StorageClass for PersistentVolumes. Verify by running:
$ oc get storageclass -o custom-columns=NAME:.metadata.name,PROVISIONER:.provisioner,DEFAULT:.metadata.annotations."storageclass\.kubernetes\.io/is-default-class"
AWS credentials configured for Ansible with permissions to:
Create and manage EC2 instances
Create and manage VPCs and subnets
Create and manage security groups
SSH key pair for accessing the EC2 instances (default: ~/.ssh/id_rsa.pub)
Install the tooling dependencies, including:
git
podman or docker
oc CLI
ansible with required collections
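Before proceeding, you can confirm the tooling is available with a quick check like the one below (a minimal sketch; substitute docker for podman if that is what you use):

```shell
# Sanity check: report whether each required CLI is on PATH.
# Swap podman for docker below if you use docker instead.
for tool in git podman oc ansible; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: ok"
  else
    echo "$tool: MISSING"
  fi
done
```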
Fork the ingress-mesh-bgp repository on GitHub.
Clone the forked repository:
$ git clone git@github.com:<your-username>/ingress-mesh-bgp.git
$ cd ingress-mesh-bgp
Set up the upstream remote:
$ git remote add -f upstream git@github.com:validatedpatterns/ingress-mesh-bgp.git
Create a working branch:
$ git checkout -b my-branch main
$ git push -u origin my-branch
Deploy the west (hub) and east (spoke) OpenShift clusters on AWS.
For simplicity, the pattern defaults to a single Availability Zone. See the example install-config files in docs/install-configs/ for reference.
Set the environment variables pointing to your cluster kubeconfig files:
$ export WESTCONFIG=~/west-hub/auth/kubeconfig
$ export EASTCONFIG=~/east-spoke/auth/kubeconfig
Deploy the BGP routing infrastructure:
$ make bgp-routing
This Ansible playbook creates:
Client VM for testing
Core router (FRR) with BGP ASN 64666
West TOR router (FRR) with BGP ASN 64001
East TOR router (FRR) with BGP ASN 64002
VPC peering between all components
The command also generates /tmp/launch_tmux.sh, which starts a tmux session with SSH access to all EC2 instances.
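For orientation, the BGP stanza on the core router might look roughly like the following FRR configuration. This is a hedged sketch assembled from the ASNs and addresses given in this guide, not the exact file the playbook writes; the peer IPs assume the TOR routers sit at .100 on their respective subnets, as the routing table shown later suggests:

```
router bgp 64666
 ! allow ECMP across paths learned from different TOR ASNs (assumption)
 bgp bestpath as-path multipath-relax
 ! west TOR router (ASN 64001) on the 192.168.12.0/24 link
 neighbor 192.168.12.100 remote-as 64001
 ! east TOR router (ASN 64002) on the 192.168.16.0/24 link
 neighbor 192.168.16.100 remote-as 64002
 address-family ipv4 unicast
  ! install multiple equal-cost BGP paths into the kernel
  maximum-paths 8
 exit-address-family
```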
If you used custom install-config.yaml files, update the MetalLB peer addresses (this is usually not needed):
Edit values-west.yaml and update metal.peerAddress to match your west cluster's TOR router IP
Edit values-east.yaml and update metal.peerAddress to match your east cluster's TOR router IP
Commit and push the changes to your branch
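As a hedged illustration, the relevant fragment of values-west.yaml might look like the following; only the metal.peerAddress key is referenced by this guide, and the IP shown is an example matching the default west TOR address:

```
# values-west.yaml (fragment -- a sketch; only metal.peerAddress is
# referenced by this guide)
metal:
  peerAddress: 192.168.12.100   # west TOR router IP (example/default)
```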
Install the pattern on the west (hub) cluster:
$ ./pattern.sh make install
This command:
Installs the Validated Patterns Operator
Deploys ArgoCD and configures GitOps
Installs Red Hat Advanced Cluster Management
Deploys MetalLB with BGP configuration
Deploys Red Hat Service Interconnect (Skupper)
Deploys the hello-world frontend application
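For context, MetalLB's BGP configuration is expressed through resources like the following. This is a sketch, not the pattern's actual manifests: the resource names and the cluster-side ASN are assumptions, while the peer ASN, peer address, and anycast range come from values shown in this guide.

```
apiVersion: metallb.io/v1beta2
kind: BGPPeer
metadata:
  name: tor-west            # name is an assumption
  namespace: metallb-system
spec:
  myASN: 64512              # cluster-side ASN -- assumed, not stated in this guide
  peerASN: 64001            # west TOR router ASN
  peerAddress: 192.168.12.100
---
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: anycast-pool        # name is an assumption
  namespace: metallb-system
spec:
  addresses:
    - 192.168.155.150-192.168.155.151
```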
Wait for all applications to synchronize in the Hub ArgoCD instance (accessible via the nine-box menu). All applications should show "Healthy" and "Synced" status.
Import the east (spoke) cluster into the management hub:
$ make import
This registers the east cluster with Red Hat Advanced Cluster Management, which then deploys the spoke components automatically.
At this point, both clusters are fully configured.
Run the generated tmux launcher to access the EC2 VMs:
$ /tmp/launch_tmux.sh
On the client VM, verify connectivity to the anycast IP for the hello-world application. The domain apps.mcg-hub.aws.validatedpatterns.io in the response shows that the chosen route went through the west TOR switch and on to the hub cluster:
$ curl http://192.168.155.151/hello
<html lang="en">
<head>
<meta charset="utf-8">
<title>Hello World</title>
</head>
<body>
<h1>Hello World!</h1>
<br>
<h2>
Pod is running on Local Cluster Domain 'apps.mcg-hub.aws.validatedpatterns.io'
<br>
<br>
<br>
<br>
Hub Cluster domain is 'apps.mcg-hub.aws.validatedpatterns.io'
<br>
</h2>
</body>
</html>
On the core router, verify that ECMP routing is configured:
$ ip r
Expected output:
default via 192.168.8.1 dev enX0 proto dhcp src 192.168.8.100 metric 100
192.168.8.0/24 dev enX0 proto kernel scope link src 192.168.8.100 metric 100
192.168.12.0/24 dev enX1 proto kernel scope link src 192.168.12.200 metric 101
192.168.16.0/24 dev enX2 proto kernel scope link src 192.168.16.200 metric 101
192.168.155.150 nhid 46 proto bgp metric 20
	nexthop via 192.168.12.100 dev enX1 weight 1
	nexthop via 192.168.16.100 dev enX2 weight 1
192.168.155.151 nhid 46 proto bgp metric 20
	nexthop via 192.168.12.100 dev enX1 weight 1
	nexthop via 192.168.16.100 dev enX2 weight 1
The output shows that routes to the anycast IPs (192.168.155.150 and 192.168.155.151) have multiple next hops, indicating that ECMP is working.
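The multiple next-hop entries mean the kernel spreads traffic across both TOR links. Conceptually, ECMP picks one next hop per flow by hashing the flow tuple, so all packets of a given flow take the same path. The toy sketch below illustrates the idea only; it is not the kernel's actual hash:

```shell
# Toy illustration of per-flow ECMP next-hop selection (not the kernel's
# real algorithm): hash the flow tuple, then index into the next-hop list.
nexthops="192.168.12.100 192.168.16.100"
flow="10.0.0.5:41234->192.168.155.151:80"
hash=$(printf '%s' "$flow" | cksum | cut -d' ' -f1)
set -- $nexthops
idx=$(( hash % $# ))      # same flow always hashes to the same index
shift "$idx"
echo "flow $flow -> next-hop $1"
```

Because the selection is deterministic per flow, a single TCP connection never flip-flops between paths, while many distinct flows spread roughly evenly across both TOR routers.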
Check the BGP peering status on any FRR router:
$ sudo vtysh -c "show bgp summary"
Next steps
After the pattern is deployed and working correctly, you can:
Verify the BGP routing is functioning correctly by checking the routing table on the core router
Test the anycast service by accessing it from the client VM
Explore the Red Hat Service Interconnect (Skupper) console to see the cross-cluster connectivity
