"The What?" - In this post I intend to add onto previous FlexVPN labs/posts where I will cover deploying a secondary hub in a single cloud for redundancy purposes.
Before we dive in, if you want to know more about FlexVPN redundancy design options see here: FlexVPN Redundancy Tidbit
"The Why?" - In these types of scenarios we have the ability to achieve active/standby failover with deploying two hubs & using something called the IKEv2 "client configuration block" on the spokes. This allows us to achieve high availability which is often sought after for redundancy purposes.
"The How?" - I will now go over what is necessary to deploy FlexVPN redundancy with dual hubs in a single cloud. In this scenario I want to emphasize the importance on the client configuration block. This block I mention provides options to adjust failover timers, tracking objects, multiple peers, & backup group functionalities.
Note: The client config block is NOT assigned to the IPsec profile. It gets assigned to an interface.
Here is the topology I used for this lab:
Our single cloud FlexVPN underlay network is 192.168.254.0/24. Each CSR has a loopback that identifies the router & emulates a LAN for us. The overlay network is 10.0.10.0/24. CSR1 is the primary Flex hub (green), & CSR12 acts as the secondary hub (red). CSR11's loopback will be used as the "remote lan" our Flex clients (CSR13/14) will need to reach via either hub. Note: ignore the connection between CSR1 & CSR12, as that will be used in later configurations.
Note: A majority of the configuration changes needed to support this topology are conducted on the spokes. The hub/spoke config is very similar to what was used in earlier posts relating to other FlexVPN setups. However, there are some important changes that are needed that I will dive into in order to support this dual hub topology. Note that some defaults are in use. Lastly, we will first focus on configuring & verifying redundancy with the dual hubs/single cloud, & then I will implement dynamic routing on top of everything so that both spokes always can access CSR11 (emulating lan behind the hubs as depicted in the topology).
Ok so to start, here is what the spoke configuration looks like to support this dual hub scenario:
The base spoke configs are:
crypto pki certificate map Cifelli-Lab 10
 subject-name co cifelli.lab.net
!
crypto pki trustpoint Cifelli-Lab
 enrollment url http://192.168.254.1:8080
 fqdn csr13.cifelli.lab.net
 subject-name CN=csr13.cifelli.lab.net
 subject-alt-name csr13.cifelli.lab.net
 revocation-check none
 rsakeypair csr13.cifelli.lab.net
!
crypto ikev2 profile IKE_CIFELLI_PROF
 match certificate Cifelli-Lab
 identity local dn
 authentication remote rsa-sig
 authentication local rsa-sig
 pki trustpoint Cifelli-Lab
 dpd 10 2 periodic
 aaa authorization group cert list FLEX_CIF IKE_CIF_AUTHZ
 virtual-template 1
!
crypto ipsec profile CIF_IPSEC_PROF
 set ikev2-profile IKE_CIFELLI_PROF
!
The one major change above is the activation of dead peer detection (DPD) within the IKEv2 profile. The dpd 10 2 periodic configuration tells the spoke to send DPD messages every 10 seconds, with retransmissions every 2 seconds once a peer stops responding, before declaring the peer dead.
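As a quick sanity check (not shown in the original screenshots), IOS-XE can display the profile, including the DPD settings, directly:

show crypto ikev2 profile IKE_CIFELLI_PROF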
The next config section on the spokes modifies the tunnel interface so that the destination is now dynamic, which tells the spoke to retrieve the peer address from the FlexVPN client profile:
interface Tunnel0
 ip address negotiated
 ip mtu 1400
 ip nhrp network-id 10
 ip nhrp shortcut virtual-template 1
 ip tcp adjust-mss 1360
 tunnel source GigabitEthernet1.254
 tunnel mode ipsec ipv4
 tunnel destination dynamic
 tunnel protection ipsec profile CIF_IPSEC_PROF
!
The IKEv2 client feature provides flexibility by allowing us to implement redundancy for high availability. Our spoke client profile config is as follows:
crypto ikev2 client flexvpn CIF_FLEX_CLIENT
 peer 1 192.168.254.1 track 1
 peer 2 192.168.254.12 track 2
 peer reactivate
 client connect Tunnel0
!
Above we declare CSR1 as the primary hub & CSR12 as the secondary hub. The track commands let us use IP SLAs to track the reachability of each hub. The peer reactivate command tells the spoke to reactivate the highest-priority peer (in this case CSR1) once it comes back up after a failover. This determination is made via the SLAs configured below:
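For reference, the state of the client profile, including which peer is currently active, can be checked on the spoke at any time with:

show crypto ikev2 client flexvpn CIF_FLEX_CLIENT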
ip sla 1
 icmp-echo 192.168.254.1
 threshold 2000
 timeout 2000
 frequency 5
ip sla schedule 1 life forever start-time now
ip sla 2
 icmp-echo 192.168.254.12
 threshold 2000
 timeout 2000
 frequency 5
ip sla schedule 2 life forever start-time now
!
track 1 ip sla 1 reachability
!
track 2 ip sla 2 reachability
!
Quick overview of the hub base config used (omitting pki & underlay config):
interface Virtual-Template1 type tunnel
 ip unnumbered Loopback1
 ip mtu 1400
 ip nhrp network-id 10
 ip nhrp redirect
 ip tcp adjust-mss 1360
 tunnel source GigabitEthernet1.254
 tunnel mode ipsec ipv4
 tunnel protection ipsec profile CIF_IPSEC_PROF
!
crypto ikev2 authorization policy IKE_CIF_AUTHZ
 pool CIF_POOL
 netmask 255.255.255.0
 route set interface
 route set access-list CIF_ROUTES
!
crypto ikev2 profile IKE_CIF_PROF
 match certificate Cifelli-Lab
 identity local dn
 authentication remote rsa-sig
 authentication local rsa-sig
 pki trustpoint Cif
 aaa authorization group cert list FLEX_CIF IKE_CIF_AUTHZ
 virtual-template 1
!
aaa authorization network FLEX_CIF local
!
crypto pki certificate map Cifelli-Lab 10
 subject-name co cifelli.lab.net
!
crypto ipsec profile CIF_IPSEC_PROF
 set ikev2-profile IKE_CIF_PROF
Hub Failover & Redundancy Verification:
To see a proposal mismatch error that caused tunnels to the secondary hub on failover to not establish see here: Dual Hub FlexVPN Error Tidbit
Let's start with verifying our SLAs from spoke1 (CSR13):
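The SLA state shown here comes from a standard summary check on the spoke:

show ip sla summary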
We can see that both hubs are up & reachable. This means that our FlexVPN tunnel session will be up via the primary hub based on our config above. Let's verify:
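To confirm which hub the tunnel is established through, the usual session checks apply:

show crypto session
show crypto ikev2 sa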
Now let's force failover by shutting the underlay interface on the primary hub, while running debug crypto ikev2 packet on spoke1:
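For reference, the failover trigger & debug look like this (the hub underlay interface matches the tunnel source used in the configs above):

On CSR1 (hub1):
 configure terminal
 interface GigabitEthernet1.254
  shutdown

On CSR13 (spoke1):
 debug crypto ikev2 packet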
We can see that the track state has changed, & that our tunnel interface goes down:
After forcing the primary hub to appear as down we can verify that the track is down too via SLA summary:
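The track & SLA state used for this verification can be pulled with:

show track
show ip sla summary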
Shortly after, we can see spoke1 successfully failover & establish a tunnel to the secondary hub:
Also, we can verify by checking the crypto session:
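At this point the crypto session should list 192.168.254.12 (the secondary hub) as the remote peer:

show crypto session detail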
Note that issuing no shutdown on the hub1 interface will force our spokes to reactivate & fail back to the primary hub based on the config shared above.
Now that the dual hub, single cloud topology is properly configured & working I threw a dynamic routing protocol on top so I could further test failover/reachability/redundancy. The goal in this setup is for both spokes (CSR13/14) to always have access to CSR11 Loopback interface no matter which hub is used in the topology.
Dynamic Routing Configuration:
To ensure that both spokes keep access to CSR11 behind the hubs after a failover, I enabled a very simple OSPF setup, shown below, only on the hubs & CSR11:
On CSR11:

router ospf 1
 redistribute connected
 network 192.168.111.0 0.0.0.255 area 0
 network 192.168.112.0 0.0.0.255 area 0
Redistributing connected will advertise CSR11's loopback (the fake LAN) that the spokes need access to in this scenario.
On hub1 (CSR1):

router ospf 1
 redistribute static
 network 192.168.111.0 0.0.0.255 area 0
On hub2 (CSR12):

router ospf 1
 redistribute static
 network 192.168.112.0 0.0.0.255 area 0
Redistributing static will advertise our spoke tunnel addresses. Remember that 'route set interface' causes the tunnel addresses to be installed as static routes on the remote side.
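If you want to see those spoke tunnel addresses on a hub before they get redistributed, they show up as static routes:

show ip route static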
Dynamic Routing Failover & Reachability Verification:
First let's verify that CSR11 has received routes for the spokes from the hub. We can see that it is receiving the route to spoke1 (CSR13; 10.0.10.4) from hub1:
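This verification on CSR11 is just a routing-table check, e.g.:

show ip route ospf
show ip route 10.0.10.4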
Now we will verify our crypto session from spoke1 (CSR13) to hub1 (CSR1). We then ensure that we can reach CSR11's loopback via pings & perform a trace to confirm the path it currently takes (hub1->CSR11). Remember that CSR11's loopback prefix gets advertised to the spokes via IKEv2 routing.
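The reachability test itself is a plain ping & trace from the spoke; substitute CSR11's actual loopback address for the placeholder below:

ping <csr11-loopback>
traceroute <csr11-loopback>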
Now to test failover I sent 6,000 pings to CSR11 & forced failover to hub2 by shutting the underlay interface on hub1.
Lastly, a new trace to CSR11's loopback also verified that we were now using hub2:
Alright! So that does it for this post. To recap, we successfully deployed a FlexVPN with dual hub & single cloud for redundancy purposes. We utilized the IKEv2 client configuration block, IP SLAs, & dynamic routing to deploy & verify the solution. Cheers!