
DMVPN Dual Hub using OSPF

Date: 30-9-2016

   Introduction

In today's network environment, redundancy is one of the most important aspects, whether it’s on the LAN side or on the WAN side.

 

A complete DMVPN deployment consists of the following services:

  1. Dynamic Routing.

  2. mGRE Tunnels.

  3. Tunnel Protection – IPSec Encryption that protects the GRE tunnel and data.
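For item 3, the following is a minimal tunnel-protection sketch (the wildcard pre-shared key and the names MY_PSK, TS and DMVPN_PROFILE are placeholders of mine, not taken from the labs in this article, which deliberately run without IPsec):

crypto isakmp policy 10
 encryption aes 256
 authentication pre-share
 group 2
crypto isakmp key MY_PSK address 0.0.0.0 0.0.0.0
!
crypto ipsec transform-set TS esp-aes 256 esp-sha-hmac
 mode transport
!
crypto ipsec profile DMVPN_PROFILE
 set transform-set TS
!
interface Tunnel0
 tunnel protection ipsec profile DMVPN_PROFILE

The same profile would be applied to the mGRE tunnel on every hub and spoke; transport mode avoids adding a second IP header on top of the GRE encapsulation.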

 

The disadvantage of a single hub router is that it’s a single point of failure. Once your hub router fails, the entire DMVPN network is gone.

 

There are two options to configure Dual Hub DMVPN:

 

1. Dual Hub - Single DMVPN network/cloud.

We use a single DMVPN network but add a second hub. The spoke routers use only one multipoint GRE interface, on which the second hub is configured as an additional next hop server (NHS).

- Single mGRE interface on the Spokes & the Hubs.

- Spokes are configured with multiple NHSs (and mappings) on their mGRE.

- Little control over DMVPN Cloud routing.

- The use of Tunnel Protection (IPsec) is recommended.

 

2. Dual Hub - Dual DMVPN Network/cloud.

The dual cloud option also has two hubs but we will use two DMVPN networks, which means that all spoke routers will get a second multipoint GRE interface.

- Each hub controls its own cloud.

- Two tunnel interfaces on the Spokes but one tunnel interface on each Hub.

- Each Spoke interface is connected to a separate DMVPN Network/Hub.

- DMVPN routing can be controlled by using IGP-related techniques such as bandwidth or delay modifications.

- The use of Tunnel Protection (IPsec) is recommended.

 

The second option is generally the better choice. Its major advantage over the first option is the ability to load-balance spokes between the hubs.

Asymmetrical routing occurs unless you tune IGP on the tunnels.

​

NOTE In the real world there will be many spokes and hubs, but if you understand how to implement a basic scenario with two hubs and two spokes, you will be able to break the topology into its different portions and choose the best design for your network.

  OSPF over DMVPN

Many organizations use Open Shortest Path First (OSPF) as their interior routing protocol. It may seem a natural choice to run it over DMVPN as well, but doing so comes with some serious limitations. OSPF is a link state protocol, thus all routers in an area must have the same view of the network. Any change in the area will trigger all routers in the area to run the shortest path first (SPF) algorithm. Depending on the size of the network, this may lead to a lot of SPF runs, which could affect the performance of branch routers with small CPUs.

 

DMVPN requires a single subnet, so all OSPF routers would have to be in the same area. Summarization is only available on area border routers (ABRs) and autonomous system boundary routers  (ASBRs), which means that the hub must be an ABR for it to summarize routes. Misconfiguring the designated router (DR) or backup designated router (BDR) role would also break the connectivity. Any form of traffic engineering is very difficult in a link state protocol such as OSPF.

 

Follow these guidelines if implementing OSPF over DMVPN:

  • Don’t make spokes ABR routers.

  • Put spokes in a totally not-so-stubby area (totally NSSA) if possible (see the sketch after this list).

  • All DMVPN tunnels must be in the same area.

  • All hub routers must be in the same area.
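As a minimal sketch of the stub-area guideline (the area number is a placeholder and the hub is assumed to be the ABR between area 0 and the DMVPN area):

Hub (ABR):
router ospf 1
 area 1 nssa no-summary

Spokes:
router ospf 1
 area 1 nssa

With no-summary on the ABR the area becomes totally NSSA, so the spokes receive only a default route from the hub; area 1 stub / area 1 stub no-summary would do the same job if nothing needs to be redistributed at the spokes.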

 

For small scale DMVPN deployments, running OSPF may be acceptable. Large scale implementations will either run EIGRP or BGP.

Considerations for the DMVPN Design

1) By default, a tunnel interface has its OSPF network type set to point-to-point --> this is not suitable for DMVPN because a point-to-point network cannot contain more than one pair of routers.

 

2) In Phase 1, the OSPF network type is less critical (just do not use point-to-point): the HUB is always the next hop, and we can filter specific routes from the RIB of each spoke (as long as the spokes have only one direct path to the HUB and no other paths connecting them across the DMVPN cloud).

 

3) In Phase 2, the point-to-multipoint network type is not suitable because we must prevent the HUB from changing the next hop to itself. Using the broadcast network type gives us a DR election, and the DR preserves the next hop advertised by the other spokes. To have a fully working Phase 2 with OSPF, do the following:

  • Configure the network type to broadcast on all routers of the cloud.

  • Configure "ip ospf priority 0" on all tunnel interfaces of the Spokes. Set the ospf priority on the HUBs (DR/BDR) to be bigger than the priority on spokes (ip ospf priority 2 for Primary HUB and ip ospf priority 1 for Secondary HUB).

  • Set the tunnel mode to mGRE on each spoke to have spoke-to-spoke dynamic tunnels.

  • Make sure OSPF timers match.

  • Because spokes are generally low-end devices, they may not cope well with the LSA flooding generated within the OSPF domain. Therefore, it’s recommended to make the area stub (type 5 external LSAs are filtered out) or totally stubby (neither type 5 external LSAs nor inter-area type 3 LSAs are accepted).

 

NOTE Make sure the MTU values match between tunnel interfaces (“ip mtu 1400 / ip tcp adjust-mss 1360”).
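For context, this is where those commands sit (a minimal sketch; 1400 leaves room for the GRE/IPsec overhead and the MSS is the MTU minus 40 bytes of IP and TCP headers):

interface Tunnel0
 ip mtu 1400
 ip tcp adjust-mss 1360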

Consider the OSPF scalability limitation (roughly 50 routers per area). OSPF requires much more tweaking for large-scale deployments.

 

When using OSPF on a DMVPN a choice has to be made about where to place area 0. There are three options:

  • Area 0 behind the hub; a non-zero area across the DMVPN and at the sites.

  • Area 0 on the DMVPN; a unique non-zero area at each spoke site.

  • Area 0 everywhere.

The third option has the worst scaling properties and the highest chance of control-plane instability. It’s not recommended.

Dual Hub Single Cloud using OSPF

Topology for GNS3                                                                                                                                                                                             

IOS=c7200-adventerprisek9-mz.124-15.T6.bin

 

In this topology I couldn’t tune the metric to avoid asymmetrical routing; this can be done with the dual-cloud design, as we’ll see later.

 

NOTE Because OSPF is link state, there is no equivalent of an offset list to selectively modify the cost of a few intra-area routes: the link-state database must be identical on all routers that belong to the same area, so any change to the cost between routers would affect every router in the area.

DMVPN Dual HUB-Single Cloud with route redistribution, failover and symmetrical routing

This is a better solution than the previous topology to avoid suboptimal routing.

In the following design, you can isolate the main site (headquarters) from the branches (SOHO, etc.).

Topology for GNS3                                                                                                                                                                                             

IOS= c7200-adventerprisek9-mz.124-15.T6.bin

  • Internal LAN on R1=8.8.8.8/32

  • Internal LAN on SpokeA=1.1.1.0/24

  • Internal LAN on SpokeB=2.2.2.0/24

Loopback interfaces simulate these networks.

​

In this case, regardless of the tunnel interface bandwidth, R1 will have two equal-metric routes, one via each hub. So I’ll use the same bandwidth on the tunnel interface of both hubs. Later we can tune the metric for symmetrical routing.
​

NOTE Remember, I’ll use the command bandwidth 1000 because the guaranteed bandwidth specified by the ISP is 1 Mbps.

To guarantee that HubA is the OSPF DR, I’ll give it the highest priority in this topology (2). It is also important to boot HubA first, then HubB, and finally the other routers.

 

NOTE Priority 1 is the default and priority 0 keeps the router from becoming eligible to be elected as a DR/BDR.

For the sake of simplicity, I won’t use IPsec. This design is a “naked” DMVPN Phase 2.

 

I would like you to implement this topology yourself, but I’ll give you some advice.

 

R1                                                                                                                                                                                                                          

 

Use a loopback interface for the internal LAN (ideally a /24) and advertise it, along with the FastEthernet interface, using EIGRP.
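A minimal sketch of R1, assuming Loopback0 carries 8.8.8.8/32 and R1 sits at 192.168.0.3 on the hub LAN (both taken from the outputs further below), with EIGRP AS 100 as on the hubs:

interface Loopback0
 ip address 8.8.8.8 255.255.255.255
!
interface FastEthernet0/0
 ip address 192.168.0.3 255.255.255.0
 no shutdown
!
router eigrp 100
 network 8.8.8.8 0.0.0.0
 network 192.168.0.0 0.0.0.255
 no auto-summary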

 

HubA                                                                                                                                                                                                                     

 

Use the following commands for the tunnel interface (a fuller sketch follows this list):

​

  • ip ospf network broadcast

  • ip ospf priority 2

  • ip ospf 1 area 0
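Putting these together, a hedged sketch of HubA’s complete Tunnel0 (the tunnel/NBMA addresses 10.0.0.1 and 172.17.0.1 come from the NHRP mappings and outputs below; the NHRP and mGRE lines mirror the dual-cloud configurations later in the article and are my assumption for this lab):

interface Tunnel0
 bandwidth 1000
 ip address 10.0.0.1 255.255.255.0
 ip nhrp map multicast dynamic
 ip nhrp network-id 1
 ip ospf network broadcast
 ip ospf priority 2
 ip ospf 1 area 0
 tunnel source FastEthernet0/0
 tunnel mode gre multipoint
 tunnel key 1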

 

For routing protocols:

 

router eigrp 100

 redistribute ospf 1 metric 100000 1 255 1 1500

 network 192.168.0.0 0.0.0.255

 no auto-summary

!

router ospf 1

 log-adjacency-changes

 redistribute eigrp 100 subnets

 passive-interface default

 no passive-interface Tunnel0

 

HubB                                                                                                                                                                                                                     

 

For the tunnel interface:

​

  • ip ospf priority 1

  • This is the default priority.

 

So that HubB can reach HubA across the NBMA network (and take part in the DR election), add the following NHRP mappings to its tunnel interface, pointing to HubA as a next-hop server:

​

ip nhrp map 10.0.0.1 172.17.0.1

ip nhrp map multicast 172.17.0.1

ip nhrp nhs 10.0.0.1

 

NOTE For the spokes, use a single tunnel interface pointing to both hubs. Use priority 0 to prevent them from becoming the DR (a sketch follows).
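As a hedged sketch for SpokeA (the tunnel address 10.0.0.3 and the hub addresses 10.0.0.1/172.17.0.1 and 10.0.0.2/172.17.0.2 are taken from the outputs below; the NHRP network-id and tunnel key are my assumptions, consistent with the dual-cloud configs later):

interface Tunnel0
 bandwidth 1000
 ip address 10.0.0.3 255.255.255.0
 ip nhrp map 10.0.0.1 172.17.0.1
 ip nhrp map multicast 172.17.0.1
 ip nhrp map 10.0.0.2 172.17.0.2
 ip nhrp map multicast 172.17.0.2
 ip nhrp nhs 10.0.0.1
 ip nhrp nhs 10.0.0.2
 ip nhrp network-id 1
 ip ospf network broadcast
 ip ospf priority 0
 ip ospf 1 area 0
 tunnel source FastEthernet0/0
 tunnel mode gre multipoint
 tunnel key 1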

​

Verifying DMVPN                                                                                                                                                                                               

 

HubA#sh ip ospf neighbor

Neighbor ID     Pri   State           Dead Time   Address         Interface

1.1.1.1           0   FULL/DROTHER    00:00:39    10.0.0.3        Tunnel0

2.2.2.2           0   FULL/DROTHER    00:00:39    10.0.0.4        Tunnel0

192.168.0.2       1   FULL/BDR        00:00:39    10.0.0.2        Tunnel0

 

HubB#sh ip ospf neighbor

Neighbor ID     Pri   State           Dead Time   Address         Interface

1.1.1.1           0   FULL/DROTHER    00:00:30    10.0.0.3        Tunnel0

2.2.2.2           0   FULL/DROTHER    00:00:36    10.0.0.4        Tunnel0

192.168.0.1       2   FULL/DR         00:00:31    10.0.0.1        Tunnel0

 

NOTE Obviously, HubA is the DR and HubB is the BDR. But if HubB boots up before HubA, it will become (and remain) the DR even though it has a lower priority, because the OSPF DR election is not preemptive.

 

HubA#sh dmvpn

Legend: Attrb --> S - Static, D - Dynamic, I - Incomplete

        N - NATed, L - Local, X - No Socket

        # Ent --> Number of NHRP entries with same NBMA peer

Tunnel0, Type:Hub, NHRP Peers:3,

 # Ent  Peer NBMA Addr Peer Tunnel Add State  UpDn Tm Attrb

 ----- --------------- --------------- ----- -------- -----

     1      172.17.0.2        10.0.0.2    UP    never D

     1      172.17.0.3        10.0.0.3    UP    never D

     1      172.17.0.4        10.0.0.4    UP    never D

 

R1#sh ip route | section D

--output omitted--

Gateway of last resort is not set

     1.0.0.0/32 is subnetted, 1 subnets

D EX    1.1.1.1 [170/28416] via 192.168.0.2, 00:17:25, FastEthernet0/0

                [170/28416] via 192.168.0.1, 00:17:25, FastEthernet0/0

     2.0.0.0/32 is subnetted, 1 subnets

D EX    2.2.2.2 [170/28416] via 192.168.0.2, 00:05:15, FastEthernet0/0

                [170/28416] via 192.168.0.1, 00:05:15, FastEthernet0/0

     172.17.0.0/24 is subnetted, 1 subnets

D       172.17.0.0 [90/30720] via 192.168.0.2, 00:18:14, FastEthernet0/0

         10.0.0.0/24 is subnetted, 1 subnets

D EX    10.0.0.0 [170/28416] via 192.168.0.2, 00:18:11, FastEthernet0/0

                 [170/28416] via 192.168.0.1, 00:18:11, FastEthernet0/0

​

So, two equal metric routes to 1.1.1.1 and 2.2.2.2 (asymmetrical routing):

 

R1#traceroute 1.1.1.1

Type escape sequence to abort.

Tracing the route to 1.1.1.1

  1 192.168.0.2 80 msec

    192.168.0.1 48 msec

    192.168.0.2 36 msec

  2 10.0.0.3 80 msec 72 msec *

 

The same situation happens for packets to 8.8.8.8/32 from spokes:

 

SpokeA#sh ip route | section O

--output omitted--

          2.0.0.0/32 is subnetted, 1 subnets

O       2.2.2.2 [110/101] via 10.0.0.4, 00:04:33, Tunnel0

     172.17.0.0/24 is subnetted, 1 subnets

     8.0.0.0/32 is subnetted, 1 subnets

O E2    8.8.8.8 [110/20] via 10.0.0.2, 00:04:33, Tunnel0

                [110/20] via 10.0.0.1, 00:04:43, Tunnel0 

O E2 192.168.0.0/24 [110/20] via 10.0.0.2, 00:04:33, Tunnel0

                    [110/20] via 10.0.0.1, 00:04:43, Tunnel0

 
Tuning the metric for symmetrical routing

 

In this example, the primary path is through HubA (the DR), so I’m going to tune the metric on HubB so that only HubA is used.

 

Tuning the metric on HubB for packets from 8.8.8.8/32 to 1.1.1.0/24 and 2.2.2.0/24

 

ip access-list standard INCR_METRIC

 permit 1.1.1.0 0.0.0.255

 permit 2.2.2.0 0.0.0.255

router eigrp 100
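 ! add 25600 to the composite metric of the matched LANs when they are advertised out Fa0/1 (toward R1)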

 offset-list INCR_METRIC out 25600 FastEthernet0/1

 

R1#sh ip route | section D

--output omitted--

     1.0.0.0/32 is subnetted, 1 subnets

D EX    1.1.1.1 [170/28416] via 192.168.0.1, 00:00:07, FastEthernet0/0

     2.0.0.0/32 is subnetted, 1 subnets

D EX    2.2.2.2 [170/28416] via 192.168.0.1, 00:00:07, FastEthernet0/0

     8.0.0.0/32 is subnetted, 1 subnets

C       8.8.8.8 is directly connected, Loopback0

     10.0.0.0/24 is subnetted, 1 subnets

D EX    10.0.0.0 [170/28416] via 192.168.0.2, 00:00:07, FastEthernet0/0

                 [170/28416] via 192.168.0.1, 00:00:07, FastEthernet0/0

C    192.168.0.0/24 is directly connected, FastEthernet0/0

 

R1#traceroute 1.1.1.1

Type escape sequence to abort.

Tracing the route to 1.1.1.1

  1 192.168.0.1 76 msec 56 msec 40 msec

  2 10.0.0.3 84 msec 80 msec *

 

Tuning the metric on HubB for packets from the spokes’ LANs (the loopback interfaces simulate the internal LANs) to 8.8.8.8/32:

 

router ospf 1
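 ! seed the redistributed routes with an E2 metric of 1000 (the default is 20), so spokes prefer HubA's routes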

 redistribute eigrp 100 metric 1000 subnets

 

SpokeA#sh ip route | section O

--output omitted--

         2.0.0.0/32 is subnetted, 1 subnets

O       2.2.2.2 [110/101] via 10.0.0.4, 01:48:33, Tunnel0

         8.0.0.0/32 is subnetted, 1 subnets

O E2    8.8.8.8 [110/20] via 10.0.0.1, 00:33:33, Tunnel0

     10.0.0.0/24 is subnetted, 1 subnets

O E2 192.168.0.0/24 [110/20] via 10.0.0.1, 02:00:33, Tunnel0

 

SpokeA#traceroute 8.8.8.8

Type escape sequence to abort.

Tracing the route to 8.8.8.8

  1 10.0.0.2 52 msec 80 msec 48 msec

  2 192.168.0.3 28 msec *  92 msec

 
Testing failover                                                                                                                                                                                             

 

HubA#conf t

HubA(config)#int fa0/0

HubA(config-if)#shut

 

NOTE Wait a few seconds.

 

SpokeA#sh ip route | section O

--output omitted--

          2.0.0.0/32 is subnetted, 1 subnets

O       2.2.2.2 [110/101] via 10.0.0.4, 01:54:52, Tunnel0

     172.17.0.0/24 is subnetted, 1 subnets

     8.0.0.0/32 is subnetted, 1 subnets

O E2    8.8.8.8 [110/1000] via 10.0.0.2, 00:00:11, Tunnel0

     10.0.0.0/24 is subnetted, 1 subnets

O E2 192.168.0.0/24 [110/1000] via 10.0.0.2, 00:00:10, Tunnel0

 

R1#sh ip route

--output omitted--

     1.0.0.0/32 is subnetted, 1 subnets

D EX    1.1.1.1 [170/54016] via 192.168.0.2, 00:03:17, FastEthernet0/0

     2.0.0.0/32 is subnetted, 1 subnets

D EX    2.2.2.2 [170/54016] via 192.168.0.2, 00:03:17, FastEthernet0/0

     8.0.0.0/32 is subnetted, 1 subnets

C       8.8.8.8 is directly connected, Loopback0

     10.0.0.0/24 is subnetted, 1 subnets

D EX    10.0.0.0 [170/28416] via 192.168.0.2, 00:03:17, FastEthernet0/0

C    192.168.0.0/24 is directly connected, FastEthernet0/0

 

NOTE There may be scenarios where you need to tune the default timers to speed up network convergence during a hardware failure. By default, the hello/dead timers are 10/40 seconds on broadcast and point-to-point networks, and 30/120 seconds on non-broadcast and point-to-multipoint networks.

 

In this topology, use the following timers on every tunnel interface to speed up network convergence:

ip ospf hello-interval 1

ip ospf dead-interval 4

 

NOTE Even so, when shutting down interfaces in GNS3, adjacencies may remain in the FULL state for several minutes, unless you shut/no shut the tunnel interfaces on the spokes after shutting down/bringing up the physical interfaces on the hubs.

Dual Hub Dual Cloud using OSPF

A few highlights on how this is different from the single cloud.

1. Each Hub connects to only a single cloud. The spokes connect to both clouds instead of just one.

2. Because the spokes now have two tunnels to choose from, one to Hub A and one to Hub B, we can modify the routing metrics on each tunnel to influence which path is taken (before, we only had one choice).

 

As the documentation states, this setup is a little trickier to configure, but it allows more control over where we want our routes to go.

​

Topology for GNS3                                                                                                                                                                                             

IOS= c7200-adventerprisek9-mz.124-15.T6.bin

  • Internal LAN on R1=8.8.8.8/32

  • Internal LAN on SpokeA=1.1.1.0/24

  • Internal LAN on SpokeB=2.2.2.0/24

Loopback interfaces simulate these networks.

​

DMVPN and OSPF Configuration                                                                                                                                                                    
​
To configure this, take into account the following considerations:
  • There are two DMVPN clouds – 10.0.0.0/24 (Primary DMVPN Cloud) and 20.0.0.0/24 (Secondary cloud).

  • Only one tunnel interface for each hub, two tunnel interfaces for each spoke.

  • The NHRP network IDs and tunnel keys on the hubs should be different.

  • Each hub will be the DR for its own cloud.

 

NOTE You can use point-to-point GRE (ppGRE) or mGRE on the spokes. The spoke configurations below use point-to-point GRE (a tunnel destination is set); to get dynamic spoke-to-spoke tunnels, switch them to tunnel mode gre multipoint and add an NHRP multicast mapping to the hub instead.

 

HubA

​

interface Tunnel0

 description PRIMARY CLOUD

 bandwidth 1000

 ip address 10.0.0.1 255.255.255.0

 ip nhrp authentication CISCO

 ip nhrp map multicast dynamic

 ip nhrp network-id 1

 tunnel key 1

 ip ospf network broadcast

 ip ospf hello-interval 1

 ip ospf priority 1

 ip ospf 1 area 1

 tunnel source FastEthernet0/0

 tunnel mode gre multipoint

!

router ospf 1

 log-adjacency-changes

 passive-interface default

 no passive-interface FastEthernet0/1

 no passive-interface Tunnel0

 

HubB

​

interface Tunnel0

 description SECONDARY CLOUD

 bandwidth 1000

 ip address 20.0.0.1 255.255.255.0

 ip nhrp authentication CISCO

 ip nhrp map multicast dynamic

 ip nhrp network-id 2

 tunnel key 2

 ip ospf network broadcast

 ip ospf hello-interval 1

 ip ospf priority 1

 ip ospf 1 area 1

 tunnel source FastEthernet0/0

 tunnel mode gre multipoint

!

router ospf 1

 log-adjacency-changes

 passive-interface default

 no passive-interface FastEthernet0/1

 no passive-interface Tunnel0

 

SpokeA

 

interface Tunnel0

 description PRIMARY CLOUD

 bandwidth 1000

 ip address 10.0.0.2 255.255.255.0

 ip nhrp authentication CISCO

 ip nhrp map 10.0.0.1 172.17.0.1

 ip nhrp network-id 1

 tunnel key 1

 ip nhrp nhs 10.0.0.1

 ip ospf network broadcast

 ip ospf hello-interval 1

 ip ospf priority 0

 ip ospf 1 area 1

 tunnel source FastEthernet0/0

 tunnel destination 172.17.0.1

!

interface Tunnel1

 description SECONDARY CLOUD

 bandwidth 1000

 ip address 20.0.0.2 255.255.255.0

 ip nhrp authentication CISCO

 ip nhrp map 20.0.0.1 172.17.0.2

 ip nhrp network-id 2

 tunnel key 2

 ip nhrp nhs 20.0.0.1

 ip ospf network broadcast

 ip ospf hello-interval 1

 ip ospf priority 0

 ip ospf 1 area 1

 tunnel source FastEthernet0/0

 tunnel destination 172.17.0.2

!

router ospf 1

 log-adjacency-changes

 passive-interface default

 no passive-interface Tunnel0

 no passive-interface Tunnel1

 

SpokeB

​

interface Tunnel0

 description PRIMARY CLOUD

 bandwidth 1000

 ip address 10.0.0.3 255.255.255.0

 ip nhrp authentication CISCO

 ip nhrp map 10.0.0.1 172.17.0.1

 ip nhrp network-id 1

 tunnel key 1

 ip nhrp nhs 10.0.0.1

 ip ospf network broadcast

 ip ospf hello-interval 1

 ip ospf priority 0

 ip ospf 1 area 1

 tunnel source FastEthernet0/0

 tunnel destination 172.17.0.1

!

interface Tunnel1

 description SECONDARY CLOUD

 bandwidth 1000

 ip address 20.0.0.3 255.255.255.0

 ip nhrp authentication CISCO

 ip nhrp map 20.0.0.1 172.17.0.2

 ip nhrp network-id 2

 tunnel key 2

 ip nhrp nhs 20.0.0.1

 ip ospf network broadcast

 ip ospf hello-interval 1

 ip ospf priority 0

 ip ospf 1 area 1

 tunnel source FastEthernet0/0

 tunnel destination 172.17.0.2

!

router ospf 1

 log-adjacency-changes

 passive-interface default

 no passive-interface Tunnel0

 no passive-interface Tunnel1

 
DMVPN Verification                                                                                                                                                                    

 

HubA#sh ip ospf neighbor

Neighbor ID     Pri   State           Dead Time   Address         Interface

8.8.8.8           1   FULL/DROTHER    00:00:03    192.168.0.3     FastEthernet0/1

192.168.0.2       1   FULL/BDR        00:00:03    192.168.0.2     FastEthernet0/1

1.1.1.1           1   FULL/DROTHER    00:00:03    10.0.0.2        Tunnel0

2.2.2.2           1   FULL/BDR        00:00:03    10.0.0.3        Tunnel0

​

HubB#sh ip ospf neighbor

Neighbor ID     Pri   State           Dead Time   Address         Interface
8.8.8.8           1   FULL/DROTHER    00:00:03    192.168.0.3     FastEthernet0/1
192.168.0.1       2   FULL/DR         00:00:03    192.168.0.1     FastEthernet0/1
1.1.1.1           1   FULL/DROTHER    00:00:03    20.0.0.2        Tunnel0
2.2.2.2           1   FULL/BDR        00:00:03    20.0.0.3        Tunnel0
 

SpokeA#sh ip ospf neighbor

Neighbor ID     Pri   State           Dead Time   Address         Interface

192.168.0.2       1   FULL/DR         00:00:06    20.0.0.1        Tunnel1

192.168.0.1       1   FULL/DR         00:00:07    10.0.0.1        Tunnel0

 

SpokeA#sh ip route | section O

       --output omitted--

O       2.2.2.2 [110/101] via 20.0.0.3, 00:39:17, Tunnel1

                [110/101] via 10.0.0.3, 00:11:51, Tunnel0

     20.0.0.0/24 is subnetted, 1 subnets

O IA    8.8.8.8 [110/102] via 20.0.0.1, 00:41:49, Tunnel1

                [110/102] via 10.0.0.1, 00:12:20, Tunnel0

     10.0.0.0/24 is subnetted, 1 subnets

O IA 192.168.0.0/24 [110/101] via 20.0.0.1, 00:41:50, Tunnel1

                    [110/101] via 10.0.0.1, 00:12:20, Tunnel0

 

NOTE Similar output for SpokeB.

​

R1#sh ip route | section O

       --output omitted--

     1.0.0.0/32 is subnetted, 1 subnets

O IA    1.1.1.1 [110/102] via 192.168.0.2, 00:45:31, FastEthernet0/0

                [110/102] via 192.168.0.1, 00:16:02, FastEthernet0/0

     2.0.0.0/32 is subnetted, 1 subnets

O IA    2.2.2.2 [110/102] via 192.168.0.2, 00:43:09, FastEthernet0/0

                [110/102] via 192.168.0.1, 00:15:33, FastEthernet0/0

     20.0.0.0/24 is subnetted, 1 subnets

O IA    20.0.0.0 [110/101] via 192.168.0.2, 00:55:32, FastEthernet0/0

     8.0.0.0/32 is subnetted, 1 subnets

C       8.8.8.8 is directly connected, Loopback0

     10.0.0.0/24 is subnetted, 1 subnets

O IA    10.0.0.0 [110/101] via 192.168.0.1, 00:16:52, FastEthernet0/0

C    192.168.0.0/24 is directly connected, FastEthernet0/0

 

So, as the outputs above show, there are equal-cost routes via both hubs (asymmetrical routing), failover works, and load balancing is in place.

 
Tuning the metric

 

If we want HubA to be the primary connection (symmetrical routing):

 

HubB

​

int tu0

 bandwidth 900

 

Spokes

​

int tu1

 bandwidth 900
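Equivalently (this is my assumption, not part of the original lab), you could set the OSPF cost directly on the secondary-cloud tunnel interfaces instead of lowering the bandwidth; with the default reference bandwidth of 100 Mbps, bandwidth 900 works out to a cost of 111:

HubB:

int tu0
 ip ospf cost 111

Spokes:

int tu1
 ip ospf cost 111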

​

Final note

This article concludes my contribution on this broad technology. Nonetheless, I suggest you implement the following topology (using EIGRP):

Dual Hub Single Cloud with EIGRP, using failover and load balancing

IOS= c3725-adventerprisek9-mz.124-15.T5.bin

​

This is one of my favourite GNS3 topologies. I’ve seen many articles about DMVPN, but none of them mention how to provide redundancy at the headquarters site. I focus not only on the DMVPN configuration but also on configuring redundancy in the internal network using HSRP.

​

Link: http://resources.intenseschool.com/dmvpn-redundancy-dual-hub-dual-isp-links/#article

 

A single DMVPN network is configured for this design. The spoke routers will use only one multipoint GRE interface and the second hub is configured as a next hop server.

  • Single mGRE interface on the Spokes & the Hubs.

  • Spokes are configured with multiple NHSs (and mappings) on their mGRE.

​

This is a real lab, except for the INTERNET portion of the topology.

​

NOTE Normally you would connect the two hub routers and two spokes to different ISPs and you would use different public IP addresses. For the sake of simplicity, I connected all routers to the 192.0.0.0/24 subnet using a simple switch.

 
Prerequisites
  • Two different internal networks 10.1.2.0/24 and 10.1.3.0/24 for each hub.

  • 10.1.2.0/24 clients use HUB1 for Internet access and 10.1.3.0/24 clients use HUB2, but both networks need Internet access in case of a failure at either ISP.

  • There is no single primary Internet link, so we can achieve load balancing with this design.

  • HUB1 and HUB2 coexist in the same site.

  • Only one tunnel interface on each hub and spoke (single DMVPN cloud).

  • Tune the EIGRP routing protocol to avoid asymmetrical routing.

​
For failover, I configured HSRP with interface tracking on both hubs. Also, each hub acts as a DHCP server for both internal networks.
​
HUB1

ip dhcp excluded-address 10.1.2.248 10.1.2.250

ip dhcp excluded-address 10.1.3.248 10.1.3.250

!

ip dhcp pool vlan2

   network 10.1.2.0 255.255.255.0

   default-router 10.1.2.250

!

ip dhcp pool vlan3

   network 10.1.3.0 255.255.255.0

   default-router 10.1.3.250

!

interface FastEthernet0/0

 no ip address

 no shut

 speed 100

 full-duplex

!

interface FastEthernet0/0.2

 encapsulation dot1Q 2

 ip address 10.1.2.248 255.255.255.0

 standby 1 ip 10.1.2.250

 standby 1 priority 105

 standby 1 preempt

 standby 1 track FastEthernet1/0

!

interface FastEthernet0/0.3

 encapsulation dot1Q 3

 ip address 10.1.3.248 255.255.255.0

 standby 1 ip 10.1.3.250

 standby 1 preempt

​

NOTE Use a similar (mirrored) configuration for HUB2; a sketch follows.
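A hedged sketch of HUB2’s LAN side (mirrored from HUB1; the .249 addresses are my assumption, consistent with the excluded-address ranges above, and HUB2 is made active for VLAN 3 instead of VLAN 2; the DHCP pools are the same as on HUB1):

interface FastEthernet0/0.2
 encapsulation dot1Q 2
 ip address 10.1.2.249 255.255.255.0
 standby 1 ip 10.1.2.250
 standby 1 preempt
!
interface FastEthernet0/0.3
 encapsulation dot1Q 3
 ip address 10.1.3.249 255.255.255.0
 standby 1 ip 10.1.3.250
 standby 1 priority 105
 standby 1 preempt
 standby 1 track FastEthernet1/0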

​

In the real world, where high availability is needed, you could use two similar switches with three interfaces forming an EtherChannel between them. Remember, only the “on” mode (no PAgP/LACP negotiation) is available in GNS3.

 

For load balancing I also used HSRP, distributing the access ports as evenly as possible between the two VLANs. I used EIGRP as the routing protocol, so you can tune it to achieve load balancing (offset lists work well for this; a sketch follows).
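As a hedged illustration of the offset-list idea for this last topology (the EIGRP AS number, ACL names and offset value are assumptions of mine): each hub raises the metric of the LAN it is not primary for when advertising it over the tunnel, so the spokes prefer the other hub for that subnet.

HUB1:

ip access-list standard VLAN3_LAN
 permit 10.1.3.0 0.0.0.255
router eigrp 100
 offset-list VLAN3_LAN out 25600 Tunnel0

HUB2:

ip access-list standard VLAN2_LAN
 permit 10.1.2.0 0.0.0.255
router eigrp 100
 offset-list VLAN2_LAN out 25600 Tunnel0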

​

​

Thanks to Adeolu Owokade for his great articles on these topics.

