Forum Replies

  1. Hi Azm,

    Congestion management is about dealing with congestion when it occurs; congestion avoidance is about preventing congestion before it happens. To understand congestion avoidance, you have to think about how the TCP window size and global synchronization work. I explained this in this lesson:

    Congestion avoidance works by dropping certain TCP segments so that the senders reduce their window sizes, slowing down their TCP transmissions and, by doing so, preventing congestion from building up.

    The difference between priority and bandwidth is about the scheduler. Take a look at this picture:

    There are four output queues here. Q1 is attached to the “LLQ” scheduler and Q2, Q3 and Q4 are attached to the “CBWFQ” scheduler.

    Packets in Q2, Q3, and Q4 are emptied in a weighted round-robin fashion. Round robin means that we move between the queues like this: Q2 > Q3 > Q4 and back to Q2. Weighted means that we can serve more packets from one queue if we want to. For example, when it’s Q2’s turn, we send 2 packets; Q3 also gets to send 2 packets, but Q4 only gets to send 1 packet.
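    A rough sketch of this weighted round-robin idea in Python (the queue contents are made up; the 2/2/1 weights are the example values above):

```python
from collections import deque

# Hypothetical queue contents; weights match the example in the text:
# Q2 and Q3 may send 2 packets per turn, Q4 only 1.
queues = {
    "Q2": deque(["a1", "a2", "a3"]),
    "Q3": deque(["b1", "b2", "b3"]),
    "Q4": deque(["c1", "c2", "c3"]),
}
weights = {"Q2": 2, "Q3": 2, "Q4": 1}

def wrr_pass(queues, weights):
    """One weighted round-robin pass: Q2 > Q3 > Q4, each queue
    sending up to its weight in packets."""
    sent = []
    for name in ("Q2", "Q3", "Q4"):
        for _ in range(weights[name]):
            if queues[name]:
                sent.append(queues[name].popleft())
    return sent

print(wrr_pass(queues, weights))  # ['a1', 'a2', 'b1', 'b2', 'c1']
```

    Each pass, Q2 and Q3 get twice the service of Q4, which is exactly the bandwidth ratio the weights express.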

    Q1 is “special” since it’s attached to the LLQ scheduler. When something ends up in Q1, it is transmitted immediately…even if there are packets in Q2, Q3, and Q4. Traffic in Q1 is always prioritized over these queues.

    Q1 is our priority queue, configured with the priority command. Q2, Q3, and Q4 have a bandwidth guarantee because of our scheduler, configured with the bandwidth command.
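    Putting the two schedulers together, a minimal sketch (hypothetical packet names; note that a real LLQ implementation rechecks the priority queue before every single packet, which this simplified loop does not):

```python
from collections import deque

# Hypothetical contents: Q1 is the LLQ (priority) queue,
# Q2-Q4 are the CBWFQ queues with weights 2/2/1.
q1 = deque(["voice1", "voice2"])
cbwfq = {
    "Q2": deque(["a1", "a2"]),
    "Q3": deque(["b1"]),
    "Q4": deque(["c1"]),
}
weights = {"Q2": 2, "Q3": 2, "Q4": 1}

def transmit_all(q1, cbwfq, weights):
    """Drain all queues: Q1 always goes first; only when Q1 is
    empty do the weighted round-robin queues get served."""
    out = []
    while q1 or any(cbwfq.values()):
        while q1:  # priority traffic preempts the other queues
            out.append(q1.popleft())
        for name in ("Q2", "Q3", "Q4"):
            for _ in range(weights[name]):
                if cbwfq[name]:
                    out.append(cbwfq[name].popleft())
    return out

print(transmit_all(q1, cbwfq, weights))
# ['voice1', 'voice2', 'a1', 'a2', 'b1', 'c1']
```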

    I hope this helps!

  2. Hello Rene,
    Thanks a lot for your explanation. However, I still need some more clarification. Let’s say I have an interface Gig 0/1 that is capable of handling 100 packets per second. I have three different kinds of traffic passing through the interface: Traffic A, B, and C.

    And the QoS is configured like this:

    Traffic A: Priority 50 packets
    Traffic B: Bandwidth 25 packets
    Traffic C: Bandwidth 25 packets

    I am just using packets per second instead of MB or KB per second.

    As far as my understanding goes, QoS will work like this:

    Every second, the interface will send out 100 packets in total, but QoS will make sure that out of those 100 packets, the interface sends 50 packets of Traffic A first, then 25 packets of Traffic B and 25 packets of Traffic C. Is that correct?

    If I draw this, the queue will be like this:

    B + C + A =========>>Out
    25 + 25 + 50
    or

    C + B + A ===========>Out
    25 + 25 + 50

    Meaning Traffic A will be delivered first.
    Is it correct?
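    One second of this hypothetical 100 packet/s scheduler could be sketched like this (the numbers are the made-up allocations above; note the sketch deliberately ignores classes borrowing each other’s unused bandwidth, which comes up later in the thread):

```python
# Hypothetical per-second capacity and class allocations from the question.
CAPACITY = 100
PRIORITY = {"A": 50}            # LLQ class: served first, capped at its limit
BANDWIDTH = {"B": 25, "C": 25}  # CBWFQ classes: guaranteed shares of the rest

def schedule_one_second(offered):
    """offered: packets each class wants to send this second.
    Returns packets actually sent per class under full congestion."""
    sent = {}
    budget = CAPACITY
    # Priority traffic goes out first, capped at its allocation.
    for cls, limit in PRIORITY.items():
        sent[cls] = min(offered.get(cls, 0), limit, budget)
        budget -= sent[cls]
    # The remaining budget serves the bandwidth-guaranteed classes.
    for cls, guarantee in BANDWIDTH.items():
        sent[cls] = min(offered.get(cls, 0), guarantee, budget)
        budget -= sent[cls]
    return sent

print(schedule_one_second({"A": 60, "B": 30, "C": 30}))
# {'A': 50, 'B': 25, 'C': 25}
```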

    Thank you so much.

    Azm

  3. Hello Laz,
    That was my question and that is the answer I was looking for. Thank you so much as usual.

    Azm

  4. Hello Laz,
    I have a question and I am going to refer to the below configuration for my question.

    class-map match-any VOICE
     match dscp ef 
     match dscp cs5 
     match dscp cs4 
    
    policy-map NESTED-POLICY
     class VOICE
      priority percent 80
     class class-default
      bandwidth percent 20
      fair-queue
      random-detect dscp-based
    
    policy-map INTERNET-OUT
     class class-default
      shape average 10000000
       service-policy NESTED-POLICY
    
    interface GigabitEthernet1/0/1
     service-policy output INTERNET-OUT
    

    In this configuration, I have two different classes: Voice and other traffic. Here, 80% of the bandwidth is allocated to Voice traffic and 20% is allocated to other traffic.
    That means that during congestion, Voice traffic can use 8M and other traffic can use 2M of bandwidth, since I am also shaping the total to 10M.

    Scenario:
    Now, let’s say I have a scenario where no voice traffic traverses this device, even though 80% (8M) of the bandwidth is reserved for Voice traffic, but I constantly have 5M of other traffic trying to go out of this device, for which only 20% (2M) of the bandwidth is reserved.

    Question:

    What is going to happen in this scenario?

    As far as my understanding goes, QoS comes into play only during congestion. Even during congestion, if one class of traffic does not use its allocated bandwidth, other classes of traffic can use the unused bandwidth. Therefore, this 5M of other traffic should be able to go through the interface, since there is no voice traffic at all and the voice bandwidth is completely unused.
    But I have run into a situation where other traffic is not allowed to use more than 20% (2M) of the bandwidth even though there is no voice traffic at all. It looks like the router is reserving the 80% of bandwidth and not allowing other traffic to use it, even though the voice bandwidth is unused. Would you please explain this to me?

    As usual, I would also like to thank you in advance for your help.

    Azm

  5. Hi Azm,

    On Cisco IOS routers, the priority and bandwidth commands only come into play when there is congestion. Your shaper is set to 10M; that is a hard limit.

    When there is no voice traffic, other traffic should be able to get up to 10M. I loaded your config on a router and ran iperf to demonstrate this:

    $ iperf -c 192.168.2.2
    ------------------------------------------------------------
    Client connecting to 192.168.2.2, TCP port 5001
    TCP window size: 85.0 KByte (default)
    ------------------------------------------------------------
    [  3] local 192.168.1.1 port 56526 connected with 192.168.2.2 port 5001
    [ ID] Interval       Transfer     Bandwidth
    [  3]  0.0-10.3 sec  12.0 MBytes  9.76 Mbits/sec
    

    You can see this traffic is getting shaped at ~9.76 Mbps.
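    Using the rounded figures from the iperf output (iperf counts a MByte as 2^20 bytes), you can check that the measured rate sits just under the 10 Mbps shaper; the tiny difference from iperf’s own 9.76 figure is rounding in the displayed values:

```python
# Rounded values taken from the iperf output above.
transferred = 12.0 * 1024 * 1024  # bytes (iperf MBytes are 2^20 bytes)
duration = 10.3                   # seconds

rate_bps = transferred * 8 / duration
print(f"{rate_bps / 1e6:.2f} Mbit/s")  # ~9.77, just under the 10M shaper
```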

    R1#show policy-map interface fa0/1
     FastEthernet0/1 
    
      Service-policy output: INTERNET-OUT
    
        Class-map: class-default (match-any)
          9145 packets, 13280475 bytes
          5 minute offered rate 81000 bps, drop rate 0 bps
          Match: any 
          Queueing
          queue limit 64 packets
          (queue depth/total drops/no-buffer drops) 0/56/0
          (pkts output/bytes output) 9089/13195691
          shape (average) cir 10000000, bc 40000, be 40000
          target shape rate 10000000
    
          Service-policy : NESTED-POLICY
    
            queue stats for all priority classes:
              
              queue limit 64 packets
              (queue depth/total drops/no-buffer drops) 0/0/0
              (pkts output/bytes output) 0/0
    
            Class-map: VOICE (match-any)
              0 packets, 0 bytes
              5 minute offered rate 0 bps, drop rate 0 bps
              Match:  dscp ef (46)
                0 packets, 0 bytes
                5 minute rate 0 bps
              Match:  dscp cs5 (40)
                0 packets, 0 bytes
                5 minute rate 0 bps
              Match:  dscp cs4 (32)
                0 packets, 0 bytes
                5 minute rate 0 bps
              Priority: 80% (8000 kbps), burst bytes 200000, b/w exceed drops: 0
              
    
            Class-map: class-default (match-any)
              9145 packets, 13280475 bytes
              5 minute offered rate 81000 bps, drop rate 0 bps
              Match: any 
              Queueing
              queue limit 64 packets
              (queue depth/total drops/no-buffer drops/flowdrops) 0/56/0/0
              (pkts output/bytes output) 9089/13195691
              bandwidth 20% (2000 kbps)
              Fair-queue: per-flow queue limit 16
                Exp-weight-constant: 9 (1/512)
                Mean queue depth: 9 packets
                dscp     Transmitted       Random drop      Tail/Flow drop Minimum Maximum Mark
                          pkts/bytes    pkts/bytes       pkts/bytes    thresh  thresh  prob
                
                default     9089/13195691       56/84784          0/0                 20            40  1/10
