Multicast PIM NBMA Mode

Multicast over frame-relay can be tricky when you try to run it over a hub and spoke topology. In a previous lesson I described the issue when you are using auto-rp and your mapping agent is behind a spoke. This time we’ll take a look at PIM NBMA mode.

Let me show you the topology that I will use to explain and demonstrate this to you:

[Topology: multicast PIM NBMA mode example]

Above you see three routers. R1 is the hub router, R2 and R3 are my spokes. We are using point-to-multipoint frame-relay so there is only a single subnet. R1 is also the RP (Rendezvous Point).
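For reference, the hub configuration for this kind of setup could look like the sketch below. The hub IP address matches the RPF neighbor you'll see later in this lesson, but the DLCI numbers and the /24 mask are my assumptions:

```
R1(config)#interface serial 0/0
R1(config-if)#encapsulation frame-relay
R1(config-if)#ip address 192.168.123.1 255.255.255.0
R1(config-if)#frame-relay map ip 192.168.123.2 102 broadcast
R1(config-if)#frame-relay map ip 192.168.123.3 103 broadcast
```

The broadcast keyword on each map statement matters here: without it, the hub cannot replicate PIM hellos and multicast packets over the PVCs at all.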

PIM treats our frame-relay network as a broadcast medium: it expects that all routers can hear each other directly. This is only true when we have a full mesh. In a hub and spoke topology like the network above it doesn't apply, because there is only a PVC between the hub and each spoke router. The spoke routers are unable to reach each other directly; they have to go through the hub router.

This causes a number of issues. First of all, whenever a spoke router sends a multicast packet it is received by the hub router, but the hub router doesn't forward it to the other spoke routers because of the RPF rule (never send a packet out of the interface you received it on). One way of dealing with this problem is to use point-to-point sub-interfaces, as this solves the split horizon problem.
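As a sketch of that alternative (not the solution we'll use in this lesson), the hub could get one point-to-point sub-interface per spoke; the sub-interface numbers, DLCIs, and subnets below are illustrative:

```
R1(config)#interface serial 0/0.2 point-to-point
R1(config-subif)#ip address 192.168.12.1 255.255.255.0
R1(config-subif)#frame-relay interface-dlci 102
R1(config-subif)#exit
R1(config)#interface serial 0/0.3 point-to-point
R1(config-subif)#ip address 192.168.13.1 255.255.255.0
R1(config-subif)#frame-relay interface-dlci 103
```

Since each spoke is now reached through a different interface, traffic received from R2 on Serial0/0.2 can be forwarded out of Serial0/0.3 toward R3 without violating the rule above.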

The other problem is that spoke routers don't hear each other's PIM messages. For example, let's say that R2 and R3 are both receiving a certain multicast stream. After a while there are no users behind R2 that are interested in this stream, so R2 sends a PIM prune message to R1.

If R3 still has active receivers, it would normally send a PIM join override to let R1 know that it still wants to receive the multicast stream. R1, however, assumes that the prune message from R2 was heard by all PIM routers on the segment. This is not the case in our hub and spoke topology: only the hub router received it; R3 never heard the prune. As a result, R1 will prune the multicast stream and R3 will no longer receive anything.

PIM NBMA mode solves the issues I just described. Basically, it tells PIM that the frame-relay network should be treated as a collection of point-to-point links rather than as a multi-access network. Let's configure the topology above so you can see how it works.

OSPF has been configured to advertise the loopback0 interface of R1 so that we can use it as the IP address for the RP. Let’s start by enabling PIM on the interfaces:
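In case you want to reproduce this, the OSPF configuration on R1 could look like this. The process ID, area, and /32 loopback mask are assumptions; only the 1.1.1.1 address itself appears in this lesson:

```
R1(config)#interface loopback 0
R1(config-if)#ip address 1.1.1.1 255.255.255.255
R1(config-if)#exit
R1(config)#router ospf 1
R1(config-router)#network 1.1.1.1 0.0.0.0 area 0
R1(config-router)#network 192.168.123.0 0.0.0.255 area 0
```

R2 and R3 would advertise the 192.168.123.0/24 subnet in the same way.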

R1(config)#interface serial 0/0
R1(config-if)#ip pim sparse-mode
R2(config)#interface serial 0/0
R2(config-if)#ip pim sparse-mode 
R3(config)#interface serial 0/0
R3(config-if)#ip pim sparse-mode

This will activate PIM on all serial interfaces. Let’s verify that we have PIM neighbors:

R1#show ip pim neighbor 
PIM Neighbor Table
Mode: B - Bidir Capable, DR - Designated Router, N - Default DR Priority,
      S - State Refresh Capable
Neighbor          Interface                Uptime/Expires    Ver   DR
Address                                                            Prio/Mode
192.168.123.3     Serial0/0                00:03:51/00:01:21 v2    1 / DR S
192.168.123.2     Serial0/0                00:04:04/00:01:35 v2    1 / S

That’s looking good. Now let’s configure the RP:

R1(config)#ip pim rp-address 1.1.1.1
R2(config)#ip pim rp-address 1.1.1.1
R3(config)#ip pim rp-address 1.1.1.1

I will use a static RP as it saves the hassle of configuring Auto-RP and a mapping agent. Let's configure R3 as a receiver for multicast group 239.1.1.2; I will use R2 as a source by sending pings:

R3(config)#interface serial 0/0
R3(config-if)#ip igmp join-group 239.1.1.2
R2#ping 239.1.1.2 repeat 9999

Type escape sequence to abort.
Sending 9999, 100-byte ICMP Echos to 239.1.1.2, timeout is 2 seconds:
.....

As you can see, no packets are arriving. Let's take a closer look to see what is going on:

R3#show ip mroute 239.1.1.2
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
       L - Local, P - Pruned, R - RP-bit set, F - Register flag,
       T - SPT-bit set, J - Join SPT, M - MSDP created entry,
       X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
       U - URD, I - Received Source Specific Host Report,
       Z - Multicast Tunnel, z - MDT-data group sender,
       Y - Joined MDT-data group, y - Sending to MDT-data group
Outgoing interface flags: H - Hardware switched, A - Assert winner
 Timers: Uptime/Expires
 Interface state: Interface, Next-Hop or VCD, State/Mode

(*, 239.1.1.2), 00:01:40/00:02:27, RP 1.1.1.1, flags: SJPCL
  Incoming interface: Serial0/0, RPF nbr 192.168.123.1
  Outgoing interface list: Null

R3 has sent a join toward the RP but doesn't receive anything.

R1#show ip mroute 239.1.1.2
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
       L - Local, P - Pruned, R - RP-bit set, F - Register flag,
       T - SPT-bit set, J - Join SPT, M - MSDP created entry,
       X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
       U - URD, I - Received Source Specific Host Report,
       Z - Multicast Tunnel, z - MDT-data group sender,
       Y - Joined MDT-data group, y - Sending to MDT-data group
Outgoing interface flags: H - Hardware switched, A - Assert winner
 Timers: Uptime/Expires
 Interface state: Interface, Next-Hop or VCD, State/Mode

(*, 239.1.1.2), 00:02:10/00:03:18, RP 1.1.1.1, flags: SJC
  Incoming interface: Null, RPF nbr 0.0.0.0
  Outgoing interface list:
    Serial0/0, Forward/Sparse, 00:02:10/00:03:18

(192.168.123.2, 239.1.1.2), 00:01:38/00:02:00, flags: PJT
  Incoming interface: Serial0/0, RPF nbr 0.0.0.0
  Outgoing interface list: Null

R1 is receiving traffic from R2 but doesn’t forward it out of the same interface to R3 (Serial0/0).
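Based on the behavior of NBMA mode described earlier, the fix is to enable it on the hub's multipoint interface. Here is a sketch; note that IOS only accepts this command on interfaces running PIM sparse mode:

```
R1(config)#interface serial 0/0
R1(config-if)#ip pim nbma-mode
```

With NBMA mode enabled, R1 tracks joins per PIM neighbor instead of per interface, so Serial0/0 can appear in the outgoing interface list with a specific next-hop address for each spoke and traffic received from R2 can be forwarded back out of the same interface toward R3. A prune from R2 then only removes R2's entry, so R3 keeps receiving the stream.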

Forum Replies

  1. Hi Rene,

Can the mapping-agent-behind-a-spoke issue also be solved with the ip pim nbma-mode command?

    Davis

  2. Hi Davis,

    I’m afraid not. The AutoRP addresses 224.0.1.39 and 224.0.1.40 are using dense mode flooding and this is not supported by NBMA mode. If your mapping agent is behind a spoke router then you’ll have to pick one of the three options to fix this:

    • Get rid of the point-to-multipoint interfaces and use sub-interfaces. This will allow the hub router to forward multicast to all spoke routers since we are using different interfaces to send and receive traffic.
    • Move the mapping agent to a router above the hub router, make sure it's not behind a spoke router.
    ... Continue reading in our forum

  3. Hi Hussein,

    I first thought you would need ip pim nbma-mode on the hub tunnel interface but in reality, we don’t. For example, take this topology:

    Then add these additional commands:

    Hub, 
    ... Continue reading in our forum

  4. Hi Hussein,

I had to think about this for a while and do another lab, and something interesting happened :smile: With the DMVPN topology that I usually use (switch in the middle), multicast traffic went directly from spoke1 to spoke2. Take a look at this Wireshark capture:

    //cdn-forum.networklessons.com/uploads/default/original/1X/45c5f4cfbaf80921dc7da699241007f586549885.jpg

    The ICMP request from spoke1 isn’t encapsulated. The reply from spoke2 is.

    So I labbed this up again, replaced the switch in the middle with a router. When you do this, all multicast traffic from spoke1

    ... Continue reading in our forum

  5. Hi Hussein,

    I just redid this lab on some real hardware and I think this is some Cisco VIRL quirk. When I run it on VIRL, I don’t need to use PIM nbma mode. On real hardware, I do need it.

    Spoke1-VIRL#show version | include 15
    Cisco IOS Software, IOSv Software (VIOS-ADVENTERPRISEK9-M), Version 15.6(3)M2, RELEASE SOFTWARE (fc2)
    Spoke1-REAL#show version | include 15
    Cisco IOS Software, 2800 Software (C2800NM-ADVENTERPRISEK9-M), Version 15.1(4)M7, RELEASE SOFTWARE (fc2)

    Let’s join a group:

    Spoke2(config)#interface gi0/0
    Spoke2(config-if)#ip igmp join-group 239.3.3
    ... Continue reading in our forum
