Wimpy Buffers on Cisco 3750/3560 switches

Pretty much anyone who’s worked with Cisco switches is familiar with the 3750 series and its sister series, the 3560.  These switches started out as 100 Mb boxes some 15 years ago, went to Gigabit with the G series, 10 Gb uplinks with the E series, and finally 10 Gb SFP+ uplinks and StackPower with the X series in 2010.  In 2013, the 3560 and 3750 series rather abruptly went end-of-sale in favor of the 3650 and 3850 series, respectively.  Cisco did, however, continue to sell their lower-end cousin, the Layer 2-only 2960 series.

The 3560s and 3750s are deployed most commonly in campus and enterprise wiring closets, but it’s not uncommon to see them as top-of-rack switches in the data center.  The 3750s are especially popular in this role because they’re stackable.  In addition to letting you manage multiple switches via a single IP, a stack can connect to the core/distribution layer via aggregate uplinks, which saves cabling mess and port cost.
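
Those aggregate uplinks are just cross-stack EtherChannels.  For what it’s worth, here’s a minimal sketch of one; the port numbers, channel-group ID, and LACP mode are placeholders rather than anything from a real deployment:

! one uplink per stack member, bundled into a single cross-stack port-channel
interface range GigabitEthernet1/0/49 , GigabitEthernet2/0/49
 switchport trunk encapsulation dot1q
 switchport mode trunk
 channel-group 1 mode active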

Unfortunately, I was reminded recently that the 3750s come with a huge caveat: small buffer sizes.  What’s really shocking is that as Cisco added horsepower in terms of bps and pps with the E and X series, they kept the buffer sizes exactly the same: 2 MB per 24 ports.  In comparison, a WS-X6748-GE-TX blade on a 6500 has 1.3 MB per port.  That’s roughly 15 times as much.  When a 3750 is handling high-bandwidth flows, you’ll almost always see output queue drops:

 

Switch#show mls qos int gi1/0/1 stat
  cos: outgoing 
-------------------------------

  0 -  4 :  3599026173            0            0            0            0  
  5 -  7 :           0            0      2867623  
  output queues enqueued: 
 queue:    threshold1   threshold2   threshold3
-----------------------------------------------
 queue 0:           0           0           0 
 queue 1:  3599026173           0     2867623 
 queue 2:           0           0           0 
 queue 3:           0           0           0 

  output queues dropped: 
 queue:    threshold1   threshold2   threshold3
-----------------------------------------------
 queue 0:           0           0           0 
 queue 1:    29864113           0         171 
 queue 2:           0           0           0 
 queue 3:           0           0           0 
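
If the QoS statistics aren’t handy, the regular interface counters tell the same story.  A quick spot check, assuming the same interface as above (the counter value shown here is illustrative, not taken from a real device):

Switch#show interfaces gi1/0/1 | include Total output drops
  Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 29864284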

There is a partial workaround for this shortcoming: enabling QoS and tinkering with the queue settings.  When QoS is enabled, the buffers are split 90/10 between the two input queues and 25/25/25/25 across the four output queues.  If the majority of traffic is CoS 0 (which is normal for a data center), the buffer and threshold settings for output queue #2 can be pushed way up:

mls qos queue-set output 1 threshold 2 3200 3200 50 3200
mls qos queue-set output 1 buffers 5 80 5 10
mls qos
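
To confirm the new allocations took effect, they can be read back with the mls qos show commands (same interface as the drop example above; exact output formatting varies by IOS release):

Switch#show mls qos queue-set
Switch#show mls qos interface gi1/0/1 buffers
Switch#show mls qos interface gi1/0/1 queueing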

Note here that queue-set 1 is the “default” set applied to all ports.  If you want to do some experimentation first, modify queue-set 2 instead and apply it to a test port with the “queue-set 2” interface command (see the sketch below).  Also note that while the queues are numbered 1-2-3-4 in configuration mode, they show up as 0-1-2-3 in the show commands, so clearly the team writing the configuration parser and the team writing the show output weren’t on the same page.  That’s Cisco for you.
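
Here’s a minimal sketch of that safer approach, mirroring the values above onto queue-set 2 (GigabitEthernet1/0/2 is just a placeholder test port):

! copy the tweaked thresholds and buffers to queue-set 2
mls qos queue-set output 2 threshold 2 3200 3200 50 3200
mls qos queue-set output 2 buffers 5 80 5 10
! bind only the test port to queue-set 2; everything else stays on queue-set 1
interface GigabitEthernet1/0/2
 queue-set 2

Once the drop counters on the test port look better, the same values can be applied to queue-set 1 as shown earlier.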

Bottom line: don’t expect more than 200 Mbps per port when deploying a 3560 or 3750 to a server farm.  I’m able to work with them for now, but I will probably have to look at something beefier long term.  Since we have Nexus 5548s and 5672s at the distribution layer, migrating to Nexus 2248 fabric extenders is the natural path here.  I have worked with the 4948s in the past but was never a big fan, due to the high cost and lack of stacking.  End-of-row 6500s have always been my ideal deployment scenario for a data center, but the reality is that sysadmins love top of rack because they see it as “plug-n-play” and ironically labor under the misconception that having a dedicated switch makes over-subscription less likely.

2 thoughts on “Wimpy Buffers on Cisco 3750/3560 switches”

  1. SA says:

    “migrating to the Nexus 2248 fabric extenders is the natural path here”.
    Nope, the buffers on the 2248 fabric extenders are even smaller and will cause you even more packet loss.

