Ingress Mesh BGP
About the Ingress Mesh BGP pattern
- Use case
- Deploy multi-cluster applications with unified ingress using BGP-based load balancing.
- Enable anycast IP addressing for seamless failover between OpenShift clusters.
- Connect services across clusters using Red Hat Service Interconnect (Skupper) for secure east-west traffic.
- Leverage Kubernetes Gateway API to give application developers self-service control over service routing without depending on network infrastructure teams.
- Demonstrate enterprise-grade BGP routing integration with Kubernetes/OpenShift environments.
This pattern is designed for AWS environments and simulates a datacenter-like BGP network topology using EC2 instances as routing infrastructure.
- Background
Modern distributed applications often span multiple clusters for high availability, geographic distribution, or workload isolation. Traditional ingress solutions provide per-cluster access, but organizations need unified entry points that can intelligently route traffic across clusters.
This pattern addresses these requirements by combining:
- MetalLB with BGP mode - Provides load balancer services that advertise routes via BGP, enabling anycast addressing where the same IP is reachable through multiple clusters.
- Gateway API - Delivers L4 and L7 service routing via next-generation Ingress, Load Balancing, and Service Mesh APIs. Provides GatewayClasses (infrastructure) and Gateways (operations) so application developers can create routes (HTTPRoute, GRPCRoute, etc.) for their services.
- Red Hat Service Interconnect (Skupper) - Creates a secure application-layer virtual network connecting services across clusters without requiring VPN or special network configurations.
- FRRouting (FRR) - Industry-standard routing software running on EC2 instances that acts as the BGP peering infrastructure, simulating top-of-rack (TOR) switches and core routers.
Gateway API plays a central role in this architecture. It is the intermediary layer between BGP and Skupper: traffic arriving at a cluster via BGP-advertised anycast IPs is routed through Gateway API down to the appropriate Skupper site, where Skupper handles inter-cluster routing for sparse deployments or services not locally available. Gateway API also separates concerns across organizational roles — infrastructure teams define GatewayClasses, operations teams create Gateways, and application developers independently manage their own Routes (HTTPRoute, GRPCRoute, etc.) for both simple routing and more advanced mesh-like routing scenarios.
About the solution
This pattern deploys a complete multi-cluster networking demonstration on AWS that includes:
- Two OpenShift clusters, designated as "west" (ACM hub) and "east" (spoke)
- A simulated routing infrastructure with FRR-based routers
- MetalLB configured in BGP mode on both clusters
- Red Hat Service Interconnect linking services between clusters
- A hello-world application deployed across both clusters demonstrating the connectivity
The solution uses ECMP (Equal-Cost Multi-Path) routing to distribute traffic across both clusters when accessing the anycast IP address.
Benefits of the Ingress Mesh BGP pattern:
- Unified service access - A single IP address reaches services on multiple clusters
- Automatic failover - BGP route withdrawal provides fast failover when a cluster becomes unavailable
- Secure cross-cluster communication - Skupper encrypts all inter-cluster traffic using mutual TLS
- No network infrastructure changes - Skupper works over existing networks without VPNs or firewall changes
- GitOps-driven deployment - All components are deployed and managed through ArgoCD
- Application owner autonomy - App owners can define their own routes on approved Gateways and GatewayClasses without relying on network infrastructure teams
About the technology
The following technologies are used in this solution:
- Red Hat OpenShift Platform
An enterprise-ready Kubernetes container platform built for an open hybrid cloud strategy. It provides a consistent application platform to manage hybrid cloud, public cloud, and edge deployments.
- Red Hat Advanced Cluster Management for Kubernetes
Controls clusters and applications from a single console, with built-in security policies. Extends the value of Red Hat OpenShift by deploying apps, managing multiple clusters, and enforcing policies across multiple clusters at scale.
- Red Hat Service Interconnect
Based on the open source Skupper project, Red Hat Service Interconnect enables secure communication between services across different environments without requiring VPN infrastructure or special firewall rules. It creates a virtual application network that works at Layer 7.
- MetalLB
A load-balancer implementation for bare metal Kubernetes clusters, using standard routing protocols. In this pattern, MetalLB operates in BGP mode to advertise service IPs as routes to the upstream network infrastructure.
- FRRouting (FRR)
A free and open source Internet routing protocol suite for Linux and Unix platforms. It implements BGP, OSPF, RIP, and other protocols. In this pattern, FRR runs on EC2 instances to simulate datacenter routing infrastructure.
- Kubernetes Gateway API
The next generation of Kubernetes Ingress, providing a more expressive and extensible API for managing traffic into and within a cluster. Gateway API is the intermediary layer between BGP and Skupper — traffic arriving via BGP passes through Gateway API for routing before reaching Skupper for inter-cluster communication. It provides GatewayClasses for infrastructure providers, Gateways for operations teams, and Routes (HTTPRoute, GRPCRoute, etc.) for application developers, enabling self-service routing without relying on network infrastructure teams.
Architecture
The Ingress Mesh BGP pattern demonstrates a multi-cluster networking architecture that combines BGP-based anycast ingress with service mesh connectivity. The architecture simulates an enterprise network topology on AWS.
Network topology
The pattern creates the following network components on AWS:
Figure 1. Network topology diagram
VPCs and subnets
The pattern provisions several VPCs to simulate separate network segments:
- Client-Core VPC (192.168.8.0/24) - Contains the client VM and core router
- Core-West TOR VPC (192.168.12.0/24) - Connects the core router to the west cluster’s top-of-rack router
- Core-East TOR VPC (192.168.16.0/24) - Connects the core router to the east cluster’s top-of-rack router
- West Workers VPC (10.0.0.0/16) - The west OpenShift cluster’s VPC
- East Workers VPC (10.1.0.0/16) - The east OpenShift cluster’s VPC
Routing infrastructure
The pattern deploys EC2 instances running FRRouting to create a simulated datacenter network:
| Component | ASN | Description |
|---|---|---|
| Core Router | 64666 | Central router that peers with both TOR routers and advertises client network routes |
| West TOR | 64001 | Top-of-rack router for the west cluster; peers with core and west OpenShift workers |
| East TOR | 64002 | Top-of-rack router for the east cluster; peers with core and east OpenShift workers |
| West OpenShift (MetalLB) | 65001 | MetalLB speakers on west cluster workers; peer with west TOR |
| East OpenShift (MetalLB) | 65002 | MetalLB speakers on east cluster workers; peer with east TOR |
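The peering relationships in the table can be sketched as an FRR configuration for the core router. This is a hypothetical /etc/frr/frr.conf fragment, not taken from the pattern: the neighbor addresses are illustrative placeholders within the Core-West and Core-East TOR VPC subnets.

```
! Hypothetical core router configuration (ASN 64666).
! Neighbor addresses are assumptions; actual values depend on the
! provisioned VPC subnets.
router bgp 64666
 neighbor 192.168.12.2 remote-as 64001    ! west TOR over Core-West TOR VPC
 neighbor 192.168.16.2 remote-as 64002    ! east TOR over Core-East TOR VPC
 address-family ipv4 unicast
  maximum-paths 2                         ! enable ECMP across both TOR paths
 exit-address-family
```

With maximum-paths set, the core router installs both TOR-advertised paths to the anycast range, which is what enables the ECMP traffic distribution described below.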
Anycast addressing
Both clusters advertise the same anycast IP range (192.168.155.0/24) via BGP. When a client accesses an anycast IP:
Figure 2. Client path to services via anycast and BGP
1. The core router receives BGP advertisements from both TOR routers for the anycast range
2. ECMP routing distributes traffic across both paths
3. Requests reach either the west or east cluster based on the routing decision
4. If one cluster becomes unavailable, BGP route withdrawal automatically redirects traffic to the remaining cluster
Cluster components
West cluster (Hub)
The west cluster acts as the management hub and includes:
- Red Hat Advanced Cluster Management - Manages the east cluster as a spoke
- HashiCorp Vault - Centralized secrets management
- External Secrets Operator - Synchronizes secrets from Vault to Kubernetes
- MetalLB - Provides BGP-advertised load balancer services (ASN 65001)
- Gateway API - Routes incoming traffic to appropriate services, providing the intermediary layer between BGP ingress and Skupper
- Red Hat Service Interconnect (Skupper) - Hosts the Skupper site with link access enabled
- Hello-world application - Frontend component of the demo application
East cluster (Spoke)
The east cluster is a managed spoke that includes:
- External Secrets Operator - Retrieves secrets from the hub’s Vault
- MetalLB - Provides BGP-advertised load balancer services (ASN 65002)
- Gateway API - Routes incoming traffic to appropriate services, providing the intermediary layer between BGP ingress and Skupper
- Red Hat Service Interconnect (Skupper) - Connects back to the west cluster’s Skupper site
- Hello-world application - Backend component of the demo application
MetalLB
MetalLB provides load balancer services on bare metal and cloud environments where cloud-native load balancers are not available or not suitable:
- Each cluster runs MetalLB in BGP mode with a unique ASN (65001 for west, 65002 for east)
- MetalLB speakers on worker nodes peer with the local TOR router and advertise service IPs via BGP
- Both clusters advertise the same anycast IP range (192.168.155.0/24), enabling ECMP routing from the core
- When a cluster becomes unavailable, its BGP routes are withdrawn and traffic is automatically redirected to the remaining cluster
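The MetalLB setup above can be sketched with three resources: a BGPPeer for the TOR session, an IPAddressPool for the anycast range, and a BGPAdvertisement tying them together. Resource names and the peer address below are illustrative assumptions, not taken from the pattern's charts.

```yaml
# Sketch of the west cluster's MetalLB BGP configuration.
# Names and the TOR peer address are hypothetical.
apiVersion: metallb.io/v1beta2
kind: BGPPeer
metadata:
  name: west-tor
  namespace: metallb-system
spec:
  myASN: 65001                # west cluster's ASN
  peerASN: 64001              # west TOR router's ASN
  peerAddress: 192.168.12.2   # assumed TOR address
---
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: anycast-pool
  namespace: metallb-system
spec:
  addresses:
    - 192.168.155.0/24        # same anycast range on both clusters
---
apiVersion: metallb.io/v1beta1
kind: BGPAdvertisement
metadata:
  name: anycast-adv
  namespace: metallb-system
spec:
  ipAddressPools:
    - anycast-pool            # advertise the anycast pool via BGP
```

The east cluster would carry the same shape with myASN 65002 and its own TOR peer, which is what makes the shared anycast range reachable through both paths.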
Gateway API
Gateway API provides the L4/L7 routing layer between BGP ingress and application services:
- GatewayClass - Defined by infrastructure providers to describe the type of gateway infrastructure available
- Gateway - Created by operations teams to instantiate a gateway from a GatewayClass, defining listeners and allowed routes
- HTTPRoute / GRPCRoute - Created by application developers to describe how traffic should be routed to their services
Gateway API is the intermediary step between BGP and Skupper. Traffic arriving at a cluster via BGP-advertised anycast IPs passes through Gateway API for service routing before reaching Skupper for inter-cluster communication. This separation of concerns allows application developers to define their own routing rules on approved gateways without relying on network infrastructure teams.
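The role separation above can be sketched as a minimal pair of manifests. The GatewayClass name, route name, and service port are assumptions for illustration, not values from the pattern:

```yaml
# Illustrative Gateway (operations team) and HTTPRoute (app developer).
# gatewayClassName and backend names are hypothetical.
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: ingress-gateway
spec:
  gatewayClassName: example-class   # provided by the infrastructure team
  listeners:
    - name: http
      port: 80
      protocol: HTTP
---
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: hello-world
spec:
  parentRefs:
    - name: ingress-gateway         # attach to the approved Gateway
  rules:
    - backendRefs:
        - name: hello-world         # route traffic to the app's Service
          port: 8080
```

An application team owns only the HTTPRoute; changing where their traffic goes never requires touching the Gateway or the underlying BGP configuration.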
Red Hat Service Interconnect
Red Hat Service Interconnect (based on Skupper) creates a virtual application network between the clusters:
- The west cluster hosts a Skupper site with linkAccess: default, allowing other sites to connect
- The east cluster establishes a link to the west cluster using a pre-shared access token
- Services exposed through Skupper listeners become accessible across both clusters
- All traffic between sites is encrypted using mutual TLS
The pattern uses the Skupper v2 API with the following components:
- Site - Defines the Skupper installation in each namespace
- Listener - Exposes a service to the Skupper network
- Connector - Connects a local workload to a Skupper-exposed service
- AccessGrant/AccessToken - Manages secure connection between sites
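A minimal sketch of these Skupper v2 resources, assuming the v2alpha1 API version; names, routing keys, and ports are illustrative, and the Connector shown would live on the cluster that runs the backend workload:

```yaml
# Hypothetical Skupper v2 resources for the demo application.
apiVersion: skupper.io/v2alpha1
kind: Site
metadata:
  name: west
spec:
  linkAccess: default             # allow other sites to link in
---
apiVersion: skupper.io/v2alpha1
kind: Listener
metadata:
  name: hello-world-backend
spec:
  routingKey: hello-world-backend # matched by a Connector on the other site
  host: hello-world-backend       # service name exposed locally
  port: 8080
---
apiVersion: skupper.io/v2alpha1
kind: Connector
metadata:
  name: hello-world-backend
spec:
  routingKey: hello-world-backend
  selector: app=hello-world-backend  # local pods to expose to the network
  port: 8080
```

The routingKey is what joins a Listener on one site to a Connector on another, so the frontend on the west cluster can call the backend as if it were local.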
GitOps structure
The pattern follows the Validated Patterns framework:
```
ingress-mesh-bgp/
├── values-global.yaml          # Global configuration
├── values-west.yaml            # West (hub) cluster configuration
├── values-east.yaml            # East (spoke) cluster configuration
├── charts/
│   ├── all/
│   │   ├── hello-world/        # Demo application
│   │   ├── metallb/            # MetalLB configuration
│   │   └── rhsi/               # Skupper configuration for west
│   └── east-site/
│       └── rhsi-east/          # Skupper configuration for east
└── ansible/
    └── playbooks/              # Infrastructure automation
```

ArgoCD manages the deployment of all components, with Red Hat Advanced Cluster Management distributing configurations to the appropriate clusters.
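The per-cluster values files drive what ArgoCD deploys where. A hypothetical excerpt of values-west.yaml, sketched against the general Validated Patterns clusterGroup schema (the names and paths shown are assumptions, not the pattern's actual contents):

```yaml
# Illustrative values-west.yaml excerpt; keys follow the common
# Validated Patterns clusterGroup layout, values are hypothetical.
clusterGroup:
  name: west
  isHubCluster: true            # west is the ACM hub
  namespaces:
    - metallb-system
    - hello-world
  applications:
    hello-world:
      name: hello-world
      namespace: hello-world
      path: charts/all/hello-world   # chart from the repository tree
```

Each application entry becomes an ArgoCD Application, so adding a component to a cluster is a matter of adding it to the right values file.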
