Validated Patterns

Deploying the Ingress Mesh BGP pattern

The Ingress Mesh BGP pattern demonstrates multi-cluster networking with BGP-based load balancing and service mesh connectivity. Deploying this pattern requires:

  • Two OpenShift clusters on AWS in the same region

  • A BGP routing infrastructure (created by the pattern’s Ansible automation)

  • The Validated Patterns framework

This pattern is designed specifically for AWS and requires the ability to create EC2 instances for the routing infrastructure.

Prerequisites
  • Two OpenShift clusters on AWS:

    • One cluster designated as "west" (hub)

    • One cluster designated as "east" (spoke)

    • To create OpenShift clusters, go to the Red Hat Hybrid Cloud Console and select OpenShift → Red Hat OpenShift Container Platform → Create cluster.

    • See the install-configs folder in the repository for example install-config.yaml files.

  • Both clusters must have a default StorageClass that dynamically provisions PersistentVolumes. Verify by running:

    $ oc get storageclass -o custom-columns=NAME:.metadata.name,PROVISIONER:.provisioner,DEFAULT:.metadata.annotations."storageclass\.kubernetes\.io/is-default-class"
  • AWS credentials configured for Ansible with permissions to:

    • Create and manage EC2 instances

    • Create and manage VPCs and subnets

    • Create and manage security groups

  • SSH key pair for accessing the EC2 instances (default: ~/.ssh/id_rsa.pub)

  • The tooling dependencies installed, including:

    • git

    • podman or docker

    • oc CLI

    • ansible with required collections

Procedure
  1. Fork the ingress-mesh-bgp repository on GitHub.

  2. Clone the forked repository:

    $ git clone git@github.com:<your-username>/ingress-mesh-bgp.git
    $ cd ingress-mesh-bgp
  3. Set up the upstream remote:

    $ git remote add -f upstream git@github.com:validatedpatterns/ingress-mesh-bgp.git
  4. Create a working branch:

    $ git checkout -b my-branch main
    $ git push -u origin my-branch
  5. Deploy the west (hub) and east (spoke) OpenShift clusters on AWS.

    For simplicity, the pattern defaults to a single Availability Zone. See the example install-config files in docs/install-configs/ for reference.
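    The files in docs/install-configs/ are authoritative; as a rough sketch, a single-AZ install-config.yaml for the west (hub) cluster could look like the following, where the cluster name and base domain are inferred from the demo output later in this document and the region and zone are illustrative assumptions:

    ```yaml
    # Sketch of a single-AZ install-config.yaml for the west (hub) cluster.
    # Name and baseDomain are inferred from the demo output
    # (apps.mcg-hub.aws.validatedpatterns.io); region and zone are assumptions.
    apiVersion: v1
    baseDomain: aws.validatedpatterns.io
    metadata:
      name: mcg-hub
    compute:
    - name: worker
      replicas: 3
      platform:
        aws:
          zones:
          - us-east-1a          # single Availability Zone for simplicity
    controlPlane:
      name: master
      replicas: 3
      platform:
        aws:
          zones:
          - us-east-1a
    platform:
      aws:
        region: us-east-1       # both clusters must be in the same region
    pullSecret: '...'
    sshKey: '...'
    ```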

  6. Set the environment variables pointing to your cluster kubeconfig files:

    $ export WESTCONFIG=~/west-hub/auth/kubeconfig
    $ export EASTCONFIG=~/east-spoke/auth/kubeconfig
  7. Deploy the BGP routing infrastructure:

    $ make bgp-routing

    This Ansible playbook creates:

    • Client VM for testing

    • Core router (FRR) with BGP ASN 64666

    • West TOR router (FRR) with BGP ASN 64001

    • East TOR router (FRR) with BGP ASN 64002

    • VPC peering between all components

      The command also generates /tmp/launch_tmux.sh, which starts a tmux session with SSH access to all EC2 instances.
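    For orientation, the BGP peering that the playbook configures on the core router can be sketched in FRR terms as follows. The ASNs are the ones listed above; the neighbor addresses, taken from the routing table shown in the verification section, should be treated as illustrative rather than the playbook's literal output:

    ```text
    ! Illustrative sketch of the core router's FRR BGP stanza.
    ! ASNs match the components listed above.
    router bgp 64666
     neighbor 192.168.12.100 remote-as 64001   ! west TOR router
     neighbor 192.168.16.100 remote-as 64002   ! east TOR router
     !
     address-family ipv4 unicast
      ! Keep multiple equal-cost BGP paths so each anycast IP
      ! gets two next-hops (ECMP), one per TOR router
      maximum-paths 8
     exit-address-family
    ```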

  8. If you used custom install-config.yaml files, update the MetalLB peer addresses (with the default configuration this step is usually not needed):

    1. Edit values-west.yaml and update metal.peerAddress to match your west cluster’s TOR router IP

    2. Edit values-east.yaml and update metal.peerAddress to match your east cluster’s TOR router IP

    3. Commit and push the changes to your branch
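    For reference, the relevant fragment of values-west.yaml looks roughly like the sketch below; only the metal.peerAddress key is named by this procedure, and the IP shown is a placeholder from the documentation address range:

    ```yaml
    # Fragment of values-west.yaml (sketch; only metal.peerAddress is
    # referenced by this procedure)
    metal:
      peerAddress: 192.0.2.10   # replace with your west TOR router's IP
    ```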

  9. Install the pattern on the west (hub) cluster:

    $ ./pattern.sh make install

    This command:

    • Installs the Validated Patterns Operator

    • Deploys ArgoCD and configures GitOps

    • Installs Red Hat Advanced Cluster Management

    • Deploys MetalLB with BGP configuration

    • Deploys Red Hat Service Interconnect (Skupper)

    • Deploys the hello-world frontend application

  10. Wait for all applications to synchronize in the Hub ArgoCD instance (accessible from the nine-box application launcher in the OpenShift web console). All applications should show "Healthy" and "Synced" status.

  11. Import the east (spoke) cluster into the management hub:

    $ make import

    This registers the east cluster with Red Hat Advanced Cluster Management, which then deploys the spoke components automatically.
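    You can confirm the registration from the hub cluster using Red Hat Advanced Cluster Management's ManagedCluster resource (exact output columns vary by version):

    ```shell
    # Run against the hub (west) cluster; the east cluster should appear
    # with JOINED and AVAILABLE set to True once registration completes
    export KUBECONFIG=$WESTCONFIG
    oc get managedcluster
    ```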

Verification

At this point, both clusters are fully configured.

  1. Run the generated tmux launcher to access the EC2 VMs:

    $ /tmp/launch_tmux.sh
  2. On the client VM, verify connectivity to the anycast IP for the hello-world application. A response containing "apps.mcg-hub.aws.validatedpatterns.io" indicates that the chosen route went through the west TOR router and on to the hub cluster:

    $ curl http://192.168.155.151/hello
    <html lang="en">
      <head>
        <meta charset="utf-8">
        <title>Hello World</title>
      </head>
      <body>
        <h1>Hello World!</h1>
        <br>
        <h2>
        Pod is running on Local Cluster Domain 'apps.mcg-hub.aws.validatedpatterns.io' <br>
        <br>
        <br>
        <br>
        Hub Cluster domain is 'apps.mcg-hub.aws.validatedpatterns.io' <br>
        </h2>
      </body>
    </html>
  3. On the core router, verify ECMP routing is configured:

    $ ip r
    Expected output
    default via 192.168.8.1 dev enX0 proto dhcp src 192.168.8.100 metric 100
    192.168.8.0/24 dev enX0 proto kernel scope link src 192.168.8.100 metric 100
    192.168.12.0/24 dev enX1 proto kernel scope link src 192.168.12.200 metric 101
    192.168.16.0/24 dev enX2 proto kernel scope link src 192.168.16.200 metric 101
    192.168.155.150 nhid 46 proto bgp metric 20
            nexthop via 192.168.12.100 dev enX1 weight 1
            nexthop via 192.168.16.100 dev enX2 weight 1
    192.168.155.151 nhid 46 proto bgp metric 20
            nexthop via 192.168.12.100 dev enX1 weight 1
            nexthop via 192.168.16.100 dev enX2 weight 1

    The output shows that routes to the anycast IPs (192.168.155.150 and 192.168.155.151) have multiple next-hops, indicating ECMP is working.

  4. Check the BGP peering status on any FRR router:

    $ sudo vtysh -c "show bgp summary"

Cleaning up

Always clean up the BGP routing infrastructure before destroying the OpenShift clusters.

To destroy the routing infrastructure, run:

$ make bgp-routing-cleanup

After the routing infrastructure is removed, you can safely destroy the OpenShift clusters.

Next steps

After the pattern is deployed and working correctly, you can:

  • Verify the BGP routing is functioning correctly by checking the routing table on the core router

  • Test the anycast service by accessing it from the client VM

  • Explore the Red Hat Service Interconnect (Skupper) console to see the cross-cluster connectivity