Forum Replies

  1. Hello Rene,
    Couple of questions.

    1. Could you give an example showing the difference between congestion management and congestion avoidance? To be honest, I do not understand TCP global synchronization very well, so it would be great if your example related congestion management and avoidance to TCP global synchronization.

    2. This question is about the priority command versus the bandwidth command. As I understand it, the priority command guarantees a maximum amount of bandwidth during congestion, while the bandwidth command guarantees a minimum. For example, priority 10 Kb ensures at most 10 Kb of that traffic during congestion, while bandwidth 10 Kb ensures at least 10 Kb. Why does it work that way, and what is the benefit? Would you please explain it with an example so I can visualize it?

    Thank you so much.

    Azm

  2. Hi Azm,

    Congestion management is about dealing with congestion once it occurs, while congestion avoidance tries to prevent congestion from occurring in the first place. To understand congestion avoidance, you have to think about how the TCP window size and global synchronization work. I explained this in this lesson:

    https://networklessons.com/cisco/ccnp-route/tcp-window-size-scaling/

    Congestion avoidance works by dropping certain TCP segments before the queue fills, so that the senders reduce their window sizes. This slows down their TCP transmissions and, by doing so, prevents congestion from happening.
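    The idea is easier to see in a small sketch. This is an illustrative, simplified model of a RED-style drop curve (the thresholds and probability values are made up for the example, and Cisco's actual WRED implementation differs in detail): once the average queue depth passes a minimum threshold, arriving packets are dropped with a probability that grows linearly until a maximum threshold, above which everything is dropped.

    ```python
    import random

    # Illustrative RED-style congestion avoidance sketch (not Cisco's exact
    # algorithm). Dropping a few TCP segments early makes those senders
    # shrink their windows, easing congestion before the queue ever fills.
    # Because drops are probabilistic, different flows back off at different
    # times, which is what avoids TCP global synchronization.

    MIN_THRESHOLD = 20    # below this average queue depth, never drop
    MAX_THRESHOLD = 40    # at or above this depth, always drop
    MAX_DROP_PROB = 0.1   # drop probability reached just below MAX_THRESHOLD

    def drop_probability(avg_queue_depth: float) -> float:
        """Linear drop probability between the two thresholds."""
        if avg_queue_depth < MIN_THRESHOLD:
            return 0.0
        if avg_queue_depth >= MAX_THRESHOLD:
            return 1.0
        fraction = (avg_queue_depth - MIN_THRESHOLD) / (MAX_THRESHOLD - MIN_THRESHOLD)
        return fraction * MAX_DROP_PROB

    def should_drop(avg_queue_depth: float) -> bool:
        """Decide the fate of one arriving packet."""
        return random.random() < drop_probability(avg_queue_depth)

    for depth in (10, 25, 30, 39, 45):
        print(depth, round(drop_probability(depth), 3))
    ```

    Contrast this with a plain tail-drop queue, which drops every arriving packet the moment it is full: all flows lose segments at once, back off at once, and then ramp up at once, which is exactly the global synchronization pattern.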

    The difference between priority and bandwidth is about the scheduler. Take a look at this picture:

    There are four output queues here. Q1 is attached to the “LLQ” scheduler and Q2, Q3 and Q4 are attached to the “CBWFQ” scheduler.

    Packets in Q2, Q3 and Q4 are emptied in a weighted round robin fashion. Round robin means that we move between the queues like this: Q2 > Q3 > Q4 and back to Q2. Weighted means that we can serve more packets from one queue if we want to. For example, when it’s Q2’s turn we send 2 packets…Q3 also gets to send 2 packets but Q4 only gets to send 1 packet.
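    That weighted round-robin behavior can be sketched in a few lines of Python (the queue names and weights below just mirror the example above; they are not a real scheduler implementation):

    ```python
    from collections import deque

    # Minimal weighted round-robin sketch: on each turn, a queue may send
    # up to its weight in packets before the scheduler moves on.
    # Q2 and Q3 have weight 2, Q4 has weight 1, as in the example above.

    def weighted_round_robin(queues, weights):
        """Drain the queues in WRR order; returns packets in transmit order."""
        sent = []
        while any(queues.values()):
            for name, weight in weights.items():
                for _ in range(weight):
                    if queues[name]:
                        sent.append(queues[name].popleft())
        return sent

    queues = {
        "Q2": deque(["B1", "B2", "B3"]),
        "Q3": deque(["C1", "C2", "C3"]),
        "Q4": deque(["D1", "D2", "D3"]),
    }
    weights = {"Q2": 2, "Q3": 2, "Q4": 1}
    print(weighted_round_robin(queues, weights))
    # ['B1', 'B2', 'C1', 'C2', 'D1', 'B3', 'C3', 'D2', 'D3']
    ```

    Notice how Q4's packets come out more slowly than Q2's and Q3's: over time, the weights translate into a bandwidth share.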

    Q1 is “special” since it’s attached to the LLQ scheduler. When something ends up in Q1, it is transmitted immediately…even if there are packets in Q2, Q3, and Q4. Traffic in Q1 is always prioritized over these queues.

    Q1 is our priority queue, configured with the priority command. Q2, Q3, and Q4 have a bandwidth guarantee because of our scheduler, configured with the bandwidth command.
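    Putting the two schedulers together, the LLQ idea looks like this (again a simplified sketch: a real LLQ also polices the priority queue so it cannot starve the other queues, and packets keep arriving while the scheduler runs, both of which are omitted here):

    ```python
    from collections import deque

    # LLQ on top of WRR: before every transmission the scheduler first
    # checks the priority queue (Q1). Anything there goes out immediately;
    # only when Q1 is empty does WRR pick from Q2-Q4.

    def llq_schedule(priority_q, wrr_queues, weights):
        """Priority queue always wins; otherwise WRR among the rest."""
        sent = []
        # Flatten the weights into a repeating service pattern,
        # e.g. ["Q2", "Q2", "Q3", "Q3", "Q4"].
        pattern = [name for name, w in weights.items() for _ in range(w)]
        i = 0
        while priority_q or any(wrr_queues.values()):
            if priority_q:                      # Q1 transmits first, always
                sent.append(priority_q.popleft())
                continue
            name = pattern[i % len(pattern)]
            i += 1
            if wrr_queues[name]:
                sent.append(wrr_queues[name].popleft())
        return sent

    q1 = deque(["VOICE1"])
    wrr = {"Q2": deque(["B1", "B2"]), "Q3": deque(["C1"]), "Q4": deque(["D1"])}
    print(llq_schedule(q1, wrr, {"Q2": 2, "Q3": 2, "Q4": 1}))
    # ['VOICE1', 'B1', 'B2', 'C1', 'D1']
    ```

    This is why the priority command is used for delay-sensitive traffic like voice: packets in Q1 never wait behind the other queues.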

    I hope this helps!

  3. Hello Rene,
    Thanks a lot for your explanation. However, I still need some more clarification. Let's say I have an interface Gig 0/1 that is capable of handling 100 packets per second, and three different kinds of traffic passing through the interface: Traffic A, B, and C.

    And the QoS is configured like this:

    Traffic A: Priority 50 packets
    Traffic B: Bandwidth 25 packets
    Traffic C: Bandwidth 25 packets

    I am just using packets per second instead of MB or KB per second.

    As far as my understanding goes, QoS will work like this:

    Every second the interface will send out 100 packets in total, but QoS will make sure that out of those 100 packets, the interface sends the 50 packets of Traffic A first, then 25 packets from Traffic B and 25 packets from Traffic C. Is that correct?
    If I draw this, the queue will be like this:

    B + C + A =========>>Out
    25 + 25 + 50
    or

    C + B + A ===========>Out
    25 + 25 + 50

    Meaning Traffic A will be delivered first.
    Is it correct?
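    To make my mental model concrete, here is a small sketch using the hypothetical numbers above. Note that it only models the fully congested case and ignores the borrowing of unused bandwidth, so it may not match exactly what the router does:

    ```python
    # Sketch of the 100-packets-per-second model from above:
    # the priority class is served first (capped at 50 during congestion),
    # then the two bandwidth classes get their 25-packet guarantees.

    LINE_RATE = 100  # packets per second the interface can transmit

    def schedule_one_second(offered):
        """offered: dict of class -> packets waiting; returns packets sent."""
        sent = {}
        remaining = LINE_RATE
        # Priority class A transmits first, up to its 50-packet cap.
        sent["A"] = min(offered.get("A", 0), 50, remaining)
        remaining -= sent["A"]
        # Bandwidth classes B and C each get their 25-packet guarantee.
        for cls in ("B", "C"):
            sent[cls] = min(offered.get(cls, 0), 25, remaining)
            remaining -= sent[cls]
        return sent

    print(schedule_one_second({"A": 80, "B": 40, "C": 40}))
    # {'A': 50, 'B': 25, 'C': 25}
    ```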

    Thank you so much.

    Azm

  4. Hello Laz,
    That was my question and that is the answer I was looking for. Thank you so much as usual.

    Azm

  5. Hello Laz,
    I have a question and I am going to refer to the below configuration for my question.

    class-map match-any VOICE
     match dscp ef 
     match dscp cs5 
     match dscp cs4 
    
    policy-map NESTED-POLICY
     class VOICE
      priority percent 80
     class class-default
      bandwidth percent 20
      fair-queue
      random-detect dscp-based
    
    policy-map INTERNET-OUT
     class class-default
      shape average 10000000
       service-policy NESTED-POLICY
    
    Interface Gig 1/0/1
     service-policy output INTERNET-OUT
    

    In this configuration, I have two different classes: voice and all other traffic. Here, 80% of the bandwidth is allocated to voice traffic and 20% to other traffic.
    That means that during congestion, voice traffic can use 8M and other traffic can use 2M, since I am also shaping the total to 10M.

    **Scenario:**
    Now, let’s say there is no voice traffic traversing this device at all, even though 80% (8M) of the bandwidth is reserved for voice. Meanwhile, 5M of other traffic is constantly trying to go out of this device, even though only 20% (2M) of the bandwidth is reserved for that class.

    **Question:**

    What is going to happen in this scenario?

    As far as my understanding goes, QoS comes into play only during congestion. And even during congestion, if one class of traffic does not use its allocated bandwidth, other classes can use the unused portion. Therefore, this 5M of other traffic should be able to go through the interface, since there is no voice traffic at all and the voice bandwidth is completely unused.
    But I have run into a situation where other traffic is not allowed to use more than 20% (2M) of the bandwidth even though there is no voice traffic at all. It looks like the router is reserving the 80% for voice and not allowing other traffic to use it, even while that bandwidth sits unused. Would you please explain this to me?

    I would also like to thank you in advance, as usual, for your help.

    Azm
