GRE is a very common protocol that creates a tunnel between two endpoints and encapsulates the packets sent through it.
GRE is without doubt one of the first overlay solutions networks have seen.
GRE relies on IP, meaning that “where we have IP, we can have GRE”.
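As a quick reference, a minimal GRE header (RFC 2784) is only 4 bytes: a zeroed flags/version word followed by the EtherType of the payload. A small Python sketch, purely illustrative and not part of the lab:

```python
import struct

def gre_header(proto: int) -> bytes:
    """Minimal 4-byte GRE header (RFC 2784): a zeroed flags/version
    word followed by the EtherType of the encapsulated payload."""
    return struct.pack("!HH", 0x0000, proto)

print(gre_header(0x0800).hex())  # IPv4 payload -> 00000800
print(gre_header(0x8847).hex())  # MPLS payload -> 00008847
```

The second EtherType (0x8847) is exactly what the PE-PE tunnel will carry once we put MPLS on top of GRE.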
Think of a classic BGP VPN. Normally, between PEs we have an MPLS backbone transporting packets through LSPs (static, RSVP, LDP).
However, it might happen that our backbone cannot provide MPLS connectivity between two PEs. If so, a valid alternative is to replace LSPs with GRE tunnels.
By doing this, we no longer have MPLSoMPLS (outer MPLS transport label plus inner MPLS service label) but move to MPLSoGRE (outer GRE header plus inner MPLS service label).
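In terms of per-packet overhead, the trade is simple: the 4-byte transport label is replaced by a 20-byte outer IPv4 header plus a 4-byte GRE header. A back-of-the-envelope check (byte counts assume IPv4 without options and a minimal GRE header):

```python
# Per-packet transport overhead: MPLSoMPLS vs MPLSoGRE.
MPLS_LABEL = 4   # one MPLS label stack entry
IP_HDR     = 20  # IPv4 header, no options
GRE_HDR    = 4   # minimal GRE header (RFC 2784)

mplsompls = MPLS_LABEL + MPLS_LABEL        # transport + service label = 8
mplsogre  = IP_HDR + GRE_HDR + MPLS_LABEL  # outer IP + GRE + service label = 28

print(mplsogre - mplsompls)  # 20 extra bytes per packet
```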
Let’s consider this lab topology:
Between PE1 and PE2 we have a GRE tunnel. This tunnel acts as an LSP and will be used as the next-hop for BGP-signaled VPN routes.
To make things more complex, I added another tunnel: a GRE tunnel between the PE1 VRF and branch2. This tunnel connects the customer VRF on the SP router (PE1) directly to the customer branch router.
As a result, the backbone is traversed by MPLSoverGREoverGRE packets (outermost first):
- outer GRE tunnel (PE1 to PE2, the backbone tunnel)
- MPLS VPN service label
- inner GRE tunnel (PE1 VRF to branch2)
Let’s start by configuring the backbone GRE tunnel, the one connecting PE1 to PE2.
On PE1, we enable tunnel services:
set chassis fpc 0 pic 0 tunnel-services bandwidth 1g
Next, we define a GRE tunnel towards PE2:
set interfaces gr-0/0/10 unit 4 tunnel source 1.1.1.1
set interfaces gr-0/0/10 unit 4 tunnel destination 4.4.4.4
set interfaces gr-0/0/10 unit 4 family inet
set interfaces gr-0/0/10 unit 4 family mpls
MPLS must be enabled, as PE1 has to push the MPLS service label (the VPN label).
Of course, make sure we have reachability to the tunnel endpoint:
root@pe1# run show route table inet.0 4.4.4.4 active-path
inet.0: 27 destinations, 27 routes (27 active, 0 holddown, 0 hidden)
+ = Active Route, - = Last Active, * = Both
4.4.4.4/32 *[OSPF/10] 01:18:12, metric 2
> to 192.168.13.1 via ge-0/0/2.0
Configuration on PE2 is identical so we omit it.
In order to use that GRE tunnel for VPN routes, we need to add a static route into inet.3:
set routing-options rib inet.3 static route 4.4.4.4/32 next-hop gr-0/0/10.4
Basically, we tell Junos to use the GRE tunnel for VPN routes whose protocol next-hop is 4.4.4.4 (the GRE tunnel endpoint, which is also PE2's loopback).
As a result, a route to 4.4.4.4 is available within inet.3:
root@pe1# run show route 4.4.4.4
inet.0: 27 destinations, 27 routes (27 active, 0 holddown, 0 hidden)
+ = Active Route, - = Last Active, * = Both
4.4.4.4/32 *[OSPF/10] 01:31:23, metric 2
> to 192.168.13.1 via ge-0/0/2.0
inet.3: 5 destinations, 5 routes (5 active, 0 holddown, 0 hidden)
+ = Active Route, - = Last Active, * = Both
4.4.4.4/32 *[Static/5] 05:50:42
> via gr-0/0/10.4
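To picture why that inet.3 entry matters: Junos resolves the BGP protocol next-hop of VPN routes in inet.3, and without an entry there the VPN routes would remain hidden. A toy Python model of the lookup (the tables and helper are hypothetical, just to illustrate the resolution order):

```python
# Hypothetical, very simplified model of Junos next-hop resolution
# for L3VPN routes: the BGP protocol next-hop is looked up in inet.3.
inet0 = {"4.4.4.4/32": "ge-0/0/2.0"}   # IGP route, used for plain IP traffic
inet3 = {"4.4.4.4/32": "gr-0/0/10.4"}  # our static route over the GRE tunnel

def resolve_vpn_nexthop(proto_nh):
    # VPN routes resolve in inet.3; no entry -> the route stays hidden
    return inet3.get(proto_nh + "/32")

print(resolve_vpn_nexthop("4.4.4.4"))  # gr-0/0/10.4
```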
Now, let’s move to the other GRE tunnel whose endpoints are:
- PE1 VRF
- branch 2 router
Endpoint addresses are:
- on PE1 50.50.50.50
- on branch2 100.100.100.100
Let’s start with PE1.
On PE1 we have a VRF for a L3VPN. That VRF “sees” branch 1 on one side (branch1 is a CE) and branch2 on the other side (through the GRE tunnel).
The GRE tunnel source address is configured on a loopback IFL assigned to the VRF:
set interfaces lo0 unit 0 family inet address 1.1.1.1/32
set interfaces lo0 unit 100 family inet address 50.50.50.50/32
set routing-instances l3vpn instance-type vrf
set routing-instances l3vpn interface ge-0/0/0.100
set routing-instances l3vpn interface lo0.100
set routing-instances l3vpn route-distinguisher 1.1.1.1:101
set routing-instances l3vpn vrf-import l3vpn-import
set routing-instances l3vpn vrf-export l3vpn-export
set routing-instances l3vpn vrf-table-label
Interface ge-0/0/0.100 connects PE1 to branch1 (CE). We have eBGP with branch 1:
set interfaces ge-0/0/0 flexible-vlan-tagging
set interfaces ge-0/0/0 encapsulation flexible-ethernet-services
set interfaces ge-0/0/0 unit 100 vlan-id 100
set interfaces ge-0/0/0 unit 100 family inet address 192.168.100.1/31
set routing-instances l3vpn protocols bgp group ce type external
set routing-instances l3vpn protocols bgp group ce peer-as 65001
set routing-instances l3vpn protocols bgp group ce neighbor 192.168.100.0
Branch1 advertises the address of a user connected to its LAN:
root@pe1# run show route receive-protocol bgp 192.168.100.0 table l3vpn.inet
l3vpn.inet.0: 9 destinations, 10 routes (9 active, 0 holddown, 0 hidden)
Prefix Nexthop MED Lclpref AS path
* 10.1.1.1/32 192.168.100.0 65001 I
PE1 VRF – branch1 is done!
Now the GRE. We define a new GRE IFL and assign it to the VRF:
set interfaces gr-0/0/10 unit 100 tunnel source 50.50.50.50
set interfaces gr-0/0/10 unit 100 tunnel destination 100.100.100.100
set interfaces gr-0/0/10 unit 100 tunnel routing-instance destination l3vpn
set interfaces gr-0/0/10 unit 100 family inet address 100.64.1.1/31
set routing-instances l3vpn interface gr-0/0/10.100
Notice that we tell Junos the GRE endpoint (100.100.100.100) is reachable via the VRF itself (tunnel routing-instance destination l3vpn).
root@pe1# run show route table l3vpn.inet.0 100.100.100.100
l3vpn.inet.0: 9 destinations, 10 routes (9 active, 0 holddown, 0 hidden)
+ = Active Route, - = Last Active, * = Both
100.100.100.100/32 *[BGP/170] 05:08:36, localpref 100, from 10.10.10.10
AS path: 65003 I, validation-state: unverified
> via gr-0/0/10.4, Push 299856
Here we see an MPLSoGRE route. This is what we typically find with a BGP VPN relying on a GRE backbone. That route tells us that traffic towards 100.100.100.100 will first be encapsulated into an MPLS packet (label 299856), then into a GRE tunnel (src 1.1.1.1, dst 4.4.4.4).
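Label 299856 from the route output ends up on the wire as a 4-byte MPLS label stack entry (RFC 3032). A sketch of the encoding (the TC and TTL values are just illustrative defaults):

```python
import struct

def mpls_entry(label: int, tc: int = 0, s: int = 1, ttl: int = 64) -> bytes:
    """One 4-byte MPLS label stack entry (RFC 3032):
    20-bit label | 3-bit traffic class | bottom-of-stack bit | 8-bit TTL."""
    word = (label << 12) | (tc << 9) | (s << 8) | ttl
    return struct.pack("!I", word)

# The service label PE1 pushes towards PE2
print(mpls_entry(299856).hex())  # 49350140
```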
You may have noticed that, this time, the GRE IFL was assigned an IP address as well. This is because we are going to configure eBGP between the tunnel endpoints. On PE1, we configure BGP inside the VRF:
set routing-instances l3vpn protocols bgp group pe type external
set routing-instances l3vpn protocols bgp group pe peer-as 65003
set routing-instances l3vpn protocols bgp group pe neighbor 100.64.1.0
This BGP session is used to advertise branch1 LAN address (our end user) to branch2:
root@pe1# run show route advertising-protocol bgp 100.64.1.0
l3vpn.inet.0: 9 destinations, 10 routes (9 active, 0 holddown, 0 hidden)
Prefix Nexthop MED Lclpref AS path
* 10.1.1.1/32 Self 65001 I
Here is the full VRF config:
set routing-instances l3vpn instance-type vrf
set routing-instances l3vpn interface ge-0/0/0.100
set routing-instances l3vpn interface gr-0/0/10.100
set routing-instances l3vpn interface lo0.100
set routing-instances l3vpn route-distinguisher 1.1.1.1:101
set routing-instances l3vpn vrf-import l3vpn-import
set routing-instances l3vpn vrf-export l3vpn-export
set routing-instances l3vpn vrf-table-label
set routing-instances l3vpn protocols bgp group ce type external
set routing-instances l3vpn protocols bgp group ce peer-as 65001
set routing-instances l3vpn protocols bgp group ce neighbor 192.168.100.0
set routing-instances l3vpn protocols bgp group pe type external
set routing-instances l3vpn protocols bgp group pe peer-as 65003
set routing-instances l3vpn protocols bgp group pe neighbor 100.64.1.0
Let’s have a look at policies:
set policy-options policy-statement l3vpn-export term ok from interface lo0.100
set policy-options policy-statement l3vpn-export term ok then community set l3vpn
set policy-options policy-statement l3vpn-export term ok then accept
set policy-options policy-statement l3vpn-export then reject
set policy-options policy-statement l3vpn-import term ok from protocol bgp
set policy-options policy-statement l3vpn-import term ok from community l3vpn
set policy-options policy-statement l3vpn-import term ok then accept
set policy-options policy-statement l3vpn-import then reject
set policy-options community l3vpn members target:100:1
We advertise lo0.100 (the PE1-branch2 GRE endpoint) so that PE2 learns about 50.50.50.50. PE2 needs it because branch2 will send GRE packets destined to 50.50.50.50 towards PE2. PE2 will take each of those GRE packets, push an MPLS service label and encapsulate it into another GRE packet (the PE-PE tunnel).
PE1 should be fine.
It might be worth checking PE2 as well. Its VRF configuration is lighter:
set routing-instances l3vpn-gre instance-type vrf
set routing-instances l3vpn-gre interface ge-0/0/0.100
set routing-instances l3vpn-gre route-distinguisher 4.4.4.4:101
set routing-instances l3vpn-gre vrf-import l3vpn-import
set routing-instances l3vpn-gre vrf-export l3vpn-export
set routing-instances l3vpn-gre protocols bgp group ce type external
set routing-instances l3vpn-gre protocols bgp group ce peer-as 65003
set routing-instances l3vpn-gre protocols bgp group ce neighbor 192.168.100.0
There is a BGP session with branch2:
root@pe4# run show route receive-protocol bgp 192.168.100.0 table l3vpn-gre.inet
l3vpn-gre.inet.0: 6 destinations, 6 routes (4 active, 0 holddown, 2 hidden)
Prefix Nexthop MED Lclpref AS path
* 100.100.100.100/32 192.168.100.0 65003 I
PE2 receives the GRE endpoint address. Now the PE2 VRF is able to reach both 50.50.50.50 (via the MPLSoGRE backbone) and 100.100.100.100.
Like on PE1, the PE-PE GRE tunnel is in inet.0 and referenced as the next-hop of an inet.3 static route:
root@pe4# show interfaces gr-0/0/10 | display set
set interfaces gr-0/0/10 unit 1 tunnel source 4.4.4.4
set interfaces gr-0/0/10 unit 1 tunnel destination 1.1.1.1
set interfaces gr-0/0/10 unit 1 family inet
set interfaces gr-0/0/10 unit 1 family mpls
[edit]
root@pe4# show routing-options | display set
set routing-options rib inet.3 static route 1.1.1.1/32 next-hop gr-0/0/10.1
[edit]
root@pe4# run show route 1.1.1.1
inet.0: 26 destinations, 26 routes (26 active, 0 holddown, 0 hidden)
+ = Active Route, - = Last Active, * = Both
1.1.1.1/32 *[OSPF/10] 01:52:59, metric 2
> to 192.168.36.0 via ge-0/0/1.0
inet.3: 1 destinations, 1 routes (1 active, 0 holddown, 0 hidden)
+ = Active Route, - = Last Active, * = Both
1.1.1.1/32 *[Static/5] 06:11:22
> via gr-0/0/10.1
Let’s move to branch2!
Again, we enable tunnel services:
set chassis fpc 0 pic 0 tunnel-services bandwidth 1g
And we create the GRE tunnel:
set interfaces gr-0/0/10 unit 0 tunnel source 100.100.100.100
set interfaces gr-0/0/10 unit 0 tunnel destination 50.50.50.50
set interfaces gr-0/0/10 unit 0 tunnel routing-instance destination l3vpn-gre
set interfaces gr-0/0/10 unit 0 family inet address 100.64.1.0/31
set interfaces lo0 unit 100 family inet address 100.100.100.100/32
For lab reasons, on branch2, I isolated this use-case (GRE backbone + PE-CE GRE tunnel) into a virtual router:
set routing-instances l3vpn-gre instance-type virtual-router
set routing-instances l3vpn-gre interface ge-0/0/0.100
set routing-instances l3vpn-gre interface gr-0/0/10.0
set routing-instances l3vpn-gre interface lo0.100
set routing-instances l3vpn-gre protocols bgp group pe type external
set routing-instances l3vpn-gre protocols bgp group pe export l3vpn-exp-bgp
set routing-instances l3vpn-gre protocols bgp group pe peer-as 100
set routing-instances l3vpn-gre protocols bgp group pe neighbor 192.168.100.1
set routing-instances l3vpn-gre protocols bgp group gre type external
set routing-instances l3vpn-gre protocols bgp group gre export exp-gre
set routing-instances l3vpn-gre protocols bgp group gre peer-as 100
set routing-instances l3vpn-gre protocols bgp group gre neighbor 100.64.1.1
Within that VR we have both the IFL towards PE2 (ge-0/0/0.100) and the GRE interface.
Then, we have 2 BGP sessions:
- one with PE2, to send 100.100.100.100 and receive 50.50.50.50 (to establish the PE-CE GRE tunnel)
- one with PE1, through the GRE tunnel, to send/receive the branch LAN addresses
LAN addresses are:
- branch1: 10.1.1.1
- branch2: 10.3.3.3
For lab reasons, the branch2 LAN address is configured on the same loopback interface used as the endpoint of the PE-CE tunnel:
root@ce3# show interfaces lo0
unit 100 {
family inet {
address 10.3.3.3/32;
address 100.100.100.100/32;
}
}
Let’s check that everything is in place:
root@ce3# run show route advertising-protocol bgp 192.168.100.1
l3vpn-gre.inet.0: 9 destinations, 9 routes (9 active, 0 holddown, 0 hidden)
Prefix Nexthop MED Lclpref AS path
* 100.100.100.100/32 Self I
[edit]
root@ce3# run show route receive-protocol bgp 192.168.100.1
inet.0: 3 destinations, 3 routes (3 active, 0 holddown, 0 hidden)
l3vpn-gre.inet.0: 9 destinations, 9 routes (9 active, 0 holddown, 0 hidden)
Prefix Nexthop MED Lclpref AS path
* 50.50.50.50/32 192.168.100.1 100 I
[edit]
root@ce3# run show route advertising-protocol bgp 100.64.1.1
l3vpn-gre.inet.0: 9 destinations, 9 routes (9 active, 0 holddown, 0 hidden)
Prefix Nexthop MED Lclpref AS path
* 10.3.3.3/32 Self I
[edit]
root@ce3# run show route receive-protocol bgp 100.64.1.1
inet.0: 3 destinations, 3 routes (3 active, 0 holddown, 0 hidden)
l3vpn-gre.inet.0: 9 destinations, 9 routes (9 active, 0 holddown, 0 hidden)
Prefix Nexthop MED Lclpref AS path
* 10.1.1.1/32 100.64.1.1 100 65001 I
All the routes are there!
Last, we verify end-to-end connectivity:
root@ce3# run ping routing-instance l3vpn-gre source 10.3.3.3 10.1.1.1 size 800 count 13 rapid
PING 10.1.1.1 (10.1.1.1): 800 data bytes
!!!!!!!!!!!!!
--- 10.1.1.1 ping statistics ---
13 packets transmitted, 13 packets received, 0% packet loss
round-trip min/avg/max/stddev = 3.886/6.466/26.326/5.850 ms
It works!
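As a sanity check on all that stacking, we can count the bytes our 800-byte ping carries across the backbone (assuming IPv4 headers without options and minimal GRE headers):

```python
# Size accounting for the 800-byte ping as it crosses the backbone.
ICMP_DATA, ICMP_HDR, IP_HDR, GRE_HDR, MPLS_LABEL = 800, 8, 20, 4, 4

inner_ip   = ICMP_DATA + ICMP_HDR + IP_HDR  # customer packet: 828
pe_ce_gre  = inner_ip + GRE_HDR + IP_HDR    # + PE-CE GRE tunnel: 852
with_label = pe_ce_gre + MPLS_LABEL         # + MPLS VPN service label: 856
pe_pe_gre  = with_label + GRE_HDR + IP_HDR  # + PE-PE GRE tunnel: 880

print(pe_pe_gre)  # size of the outer IP packet on each backbone link
```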
Let’s sum up what we have built:
- end to end IP connectivity
- PE-PE GRE tunnel instead of MPLS LSPs
- MPLS BGP L3VPN relying on that GRE backbone tunnel
- PE-CE GRE tunnel
- the PE-PE tunnel encapsulates the PE-CE GRE tunnel and adds an MPLS VPN service label
not bad 🙂
How can we be sure all the headers were added correctly? For now, trust me…next time, we’ll see how!
Ciao
IoSonoUmberto