Compare Physical Interface and Cabling Types CCNA

Physical interface and cabling types is another topic from the current CCNA exam blueprint. Network engineers must know which physical connectivity options exist and understand their speed and bandwidth limitations. Power over Ethernet (PoE) is another related and important topic, as many critical devices now depend on network-delivered power. Network vendors and the IEEE work to identify and standardize new ways to support higher power demands.

The CCNA exam tests knowledge of these topics:

1.3 Compare physical interface and cabling types

1.3.a Single-mode fiber, multimode fiber, copper

1.3.b Connections (Ethernet shared media and point-to-point)

1.3.c Concepts of PoE

This first section is dedicated to the types of physical interfaces available in Cisco LAN switches. The following sections present cabling and PoE details.

Physical Interfaces

Cisco network devices can have either fixed ports or hot-pluggable transceiver slots. Figure 1 shows a Cisco Catalyst 9200 switch with 48x 10/100/1000 PoE-enabled copper ports and an extension module, C9200-NM-4X, which provides 4x SFP/SFP+ slots (on the right).

Cisco Catalyst 9200 switch with 4x 10GE SFP module
Figure 1. Cisco Catalyst 9200 switch with 4x 10GE SFP module
Courtesy of Cisco Systems, Inc. Unauthorized use not permitted

Different types of transceivers can be inserted into an SFP slot. For example, the C9200-NM-4X module shown in Figure 1 can accept 1Gbps SFP modules and 10Gbps SFP+ modules. Figure 2 below shows SFP modules on the left and a direct-attach Twinax copper cable on the right. This cable combines two permanently attached SFPs and is a cost-effective way to connect devices in the same or adjacent racks.

SFPs and Direct-Attach Twinax Cables
Figure 2. SFPs and Direct-Attach Twinax Cables
Courtesy of Cisco Systems, Inc. Unauthorized use not permitted

Modern Catalyst switches, such as the Catalyst 9000 series, have two types of copper interfaces:

  • 10/100/1000Mbps
  • Multigigabit with speed up to 10Gbps

Both types of interfaces support several standards and can negotiate different speeds with the connected device. For example, the 10/100/1000 copper ports of the Catalyst 9200 switch shown in Figure 1 support 10Base-T, 100Base-TX, and 1000Base-T. Multigigabit ports can negotiate 100Mbps, 1Gbps, 2.5Gbps, 5Gbps, and 10Gbps.

802.3 Standards

The IEEE 802.3 family of standards defines physical interface specifications for wired Ethernet. The table below shows some of the 802.3 standards.

Standard | Specification | Physical Media
802.3 | 10Base-T | UTP Cat 3 or higher
802.3u | 100Base-TX | UTP Cat 5 or higher
802.3ab | 1000Base-T | UTP Cat 5 or higher
802.3z* | 1G over fiber | Different types of fiber
802.3bz | Multirate 2.5G/5G | UTP Cat 5e or higher
802.3an | 10GBase-T | UTP Cat 6 (55m) or Cat 6A
802.3ae** | 10G over fiber | Different types of fiber
802.3by | 25 Gbps | Different types of fiber, twinax
802.3ba | 40/100 Gbps | Different types of fiber, twinax

Table 1. 802.3 Standards, Speed and Physical Media

* The 802.3z standard defines Ethernet over fiber optics at 1 Gbit/s and references multiple media-specific standards. Commonly used options are 1000Base-SX (multi-mode fiber) and 1000Base-LX (multi-mode/single-mode fiber). Check this Wikipedia article for the full list.

** References multiple standards depending on the fiber type. The most commonly used options are 10GBase-SR and 10GBase-LR. Check this Wikipedia article for the full list.

Small Form-factor Pluggable Transceivers (SFPs)

SFPs are network interface modules. Their specifications are developed and maintained by a group of industry vendors rather than by the IEEE. While modules manufactured by different companies should be compatible, many vendors, including Cisco, support only their own branded SFPs. QSFP modules are larger, and the picture below shows the difference between SFP and QSFP modules, as well as the switch-side sockets. In this example, it is a Catalyst 9300 48-port SFP+ switch with a C9300-NM-2Q network module (accepting 2x QSFP+ modules).

Catalyst 9300 with SFPs (on the left) and QSFPs (on the right)
Figure 3. Catalyst 9300 with SFPs (on the left) and QSFPs (on the right)
Courtesy of Cisco Systems, Inc. Unauthorized use not permitted

The table below lists different types of SFPs along with the supported speeds. To confirm whether a specific module can be used in a specific Cisco device, use the transceiver compatibility tool available here.

Name | Speed
SFP | 1 Gbps
SFP+ | 10 Gbps
SFP28 | 25 Gbps
QSFP | 40 Gbps
QSFP28 | 40/100 Gbps
QSFP-DD | 100/400 Gbps

Table 2. SFPs and Speed

Unshielded Twisted Pair

Copper connectivity is based on Unshielded Twisted Pair (UTP) cabling of different categories. A higher category number indicates a newer standard and better transmission characteristics. An Ethernet cable consists of 8 wires, which are twisted together in pairs. The maximum distance for copper cabling is 100m. The connector is called 8P8C and is also commonly referred to as RJ45. There are two standards defining how individual wires are terminated within the connector – T568A and T568B. Refer to the Wikipedia article for further information on pin-outs.

End devices have MDI (Medium Dependent Interface) ports and switches have MDI-X ports; the -X means that the receive and transmit pairs are swapped. To connect MDI to MDI-X, a straight-through cable is used. This cable uses the same pin-out scheme on both connectors – either T568A or T568B. To connect MDI to MDI (host to host back-to-back) or MDI-X to MDI-X (switch to switch), a crossover cable is required. A crossover cable has a connector with the T568A pin-out on one side and the T568B pin-out on the other side.

Many modern switches can automatically switch their ports between MDI-X and MDI (auto MDI-X). They can connect to each other with straight-through cables and don’t require a crossover cable.
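The pin-out logic is easy to verify programmatically. The sketch below (plain Python, not Cisco tooling) encodes the two termination standards and shows that terminating the two ends differently swaps the 10/100 Mbps transmit pair (pins 1-2) with the receive pair (pins 3-6), which is exactly what a crossover cable does.

```python
# Plain-Python illustration of T568A/T568B pin-outs. Terminating the two ends
# with different standards swaps pins 1-2 with 3-6, producing a crossover cable.
T568A = {1: "white/green", 2: "green", 3: "white/orange", 4: "blue",
         5: "white/blue", 6: "orange", 7: "white/brown", 8: "brown"}
T568B = {1: "white/orange", 2: "orange", 3: "white/green", 4: "blue",
         5: "white/blue", 6: "green", 7: "white/brown", 8: "brown"}

def pin_mapping(end_a, end_b):
    """For each pin on end A, the pin on end B wired to the same conductor."""
    wire_to_pin = {wire: pin for pin, wire in end_b.items()}
    return {pin: wire_to_pin[wire] for pin, wire in end_a.items()}

print(pin_mapping(T568A, T568A))  # straight-through: {1: 1, 2: 2, ..., 8: 8}
print(pin_mapping(T568A, T568B))  # crossover: pins 1<->3 and 2<->6 swapped
```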

Optical Fiber

Optical fiber cabling is usually more expensive to install; however, it has many benefits compared to copper. In most cases, fiber cables can provide higher bandwidth over greater distances.

Fiber cabling is divided into 2 types:

  • Multi-mode with categories of cables OM1, OM2, OM3, OM4 and OM5
  • Single-mode of two types – OS1 and OS2

A fiber cable has a core and cladding around it. A multi-mode cable’s core is either 50 or 62.5 micrometers in diameter with 125-micrometer cladding. For comparison, a human hair is on the order of tens of micrometers in diameter. A single-mode cable’s core is thinner – between 8 and 10.5 micrometers in diameter, with the same 125-micrometer cladding. Multi-mode transmitters use wavelengths of 850nm or 1300nm; single-mode transmitters use 1310nm or 1550nm. Cisco publishes information for each SFP on the maximum supported distance based on cabling characteristics. These datasheets can be accessed via the Cisco transceiver compatibility tool. A very detailed comparison table of multimode cables is available here.

Single-mode cabling can cover much greater distances than multi-mode cables. See the “Modal dispersion” article on Wikipedia for an explanation of the physics behind this.

As with UTP categories, higher multi-mode OM numbers indicate newer specifications with better speed and distance characteristics. Single-mode OS1 is for indoor use and shorter distances, while OS2 is for outdoor and long-distance use.

Connectors

Cisco fiber SFPs and some QSFPs have a duplex LC connector. Some QSFPs can also have MPO connectors. Check this Wikipedia article for photos and specifications of the different connector types.

Power over Ethernet (PoE)

Cisco Catalyst switches perform the role of Power Sourcing Equipment (PSE). Cisco IP Phones, access points, and other end devices are Powered Devices (PDs). Standards and data sheets usually list two power values:

  • delivered on the switch port (PSE)
  • received at the end device (PD)

The value at the PD is always smaller than at the PSE due to power dissipation in the cabling.

Standards

Cisco introduced its proprietary technology before the IEEE standardized PoE. Cisco inline power can provide up to 10W at the PSE. The switch sends a fast link pulse to detect a power-enabled device, which then sends a link pulse back. The switch and the device negotiate the final power level via a Layer 2 capability exchange protocol – Cisco Discovery Protocol (CDP). The original Cisco inline power switches and end devices reached their end-of-support dates long ago and have been replaced with newer platforms using the PoE standards described below.

In 2003 the IEEE released the first PoE standard – 802.3af. The standard isn’t compatible with Cisco’s proprietary implementation. The PSE can deliver a maximum of 15.40W, with 12.95W available at the PD. The specification defined PD detection and classification mechanisms using electrical signaling. A PD has the option to signal to the switch which class it belongs to, so the switch knows how much power it should deliver. As Table 3 shows, 802.3af defined Classes 1-3 plus Class 0, which means that no classification is supported.

Power (PSE side) | Specification | Class
4W | IEEE 802.3af Type 1 | Class 1
7W | IEEE 802.3af Type 1 | Class 2
10W | Cisco inline power | –
15.4W | IEEE 802.3af Type 1 | Class 3
15.4W | IEEE 802.3af Type 1 | Class 0 (not classified)
30W | IEEE 802.3at Type 2 | Class 4
45W | IEEE 802.3bt Type 3 | Class 5
60W | IEEE 802.3bt Type 3 | Class 6
60W | Cisco UPOE | –
75W | IEEE 802.3bt Type 4 | Class 7
90W | IEEE 802.3bt Type 4 | Class 8
90W | Cisco UPOE+ | –

Table 3. PoE Wattage and Associated Standards
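As a quick illustration of how the classification values from Table 3 can be used, here is a minimal Python sketch (not Cisco software) that maps PD classes to the power a switch must budget per port; the example devices are made up.

```python
# Minimal sketch: PSE power budgeting from PD classes, using Table 3 values.
POE_CLASS_PSE_WATTS = {
    0: 15.4,  # 802.3af, not classified
    1: 4.0,   # 802.3af
    2: 7.0,   # 802.3af
    3: 15.4,  # 802.3af
    4: 30.0,  # 802.3at (PoE+)
    5: 45.0,  # 802.3bt Type 3
    6: 60.0,  # 802.3bt Type 3
    7: 75.0,  # 802.3bt Type 4
    8: 90.0,  # 802.3bt Type 4
}

def pse_budget(pd_classes):
    """Total power the switch must reserve for the advertised PD classes."""
    return sum(POE_CLASS_PSE_WATTS[c] for c in pd_classes)

# Example: one Class 4 access point and two Class 2 IP phones
print(pse_budget([4, 2, 2]))  # 44.0 W
```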

In 2009 the IEEE released the 802.3at standard. Devices supporting it are called Type 2 or PoE+, while PSEs and PDs complying with the earlier 802.3af standard were labeled Type 1 devices. 802.3at provides up to 30W at the PSE and 25.50W at the PD. Power levels of 30W and higher require an additional negotiation stage using either electrical signaling or Layer 2 capability exchange protocols, such as LLDP and CDP.

The standard is backward compatible and supports 802.3af Class 1-3 devices. A new Class 4 is allocated for 30W devices. 802.3at is widely used; the current generation of access switches, such as the Catalyst 9200, and modern access points support it. Both 802.3af and 802.3at use only two pairs of wires in the 4-pair UTP cable to provide power.

New use cases then emerged, for example smart buildings with PoE-enabled lighting and network-powered display screens, and these devices demanded more power. In 2011, Cisco introduced its proprietary Universal Power over Ethernet (UPOE) technology to support up to 60W using all four pairs in the UTP cable. The IEEE released the 802.3bt standard in 2018 with up to 90W of power at the PSE. The standard introduced Type 3 devices (60W) and Type 4 devices (90W) and also makes use of all four pairs.

Cisco UPOE and IEEE 802.3bt Type 3 both deliver 60W but operate differently. Cisco publishes a list of UPOE Catalyst switches and line cards that comply with 802.3bt. Cisco proprietary UPOE+ was released to support 90W, and UPOE+ switch modules can support 802.3bt Type 4 devices.

Some switches and line cards from the Catalyst 9300 and 9400 families support UPOE and UPOE+. Catalyst 9200 switches support only PoE+ (802.3at).

Self-test Questions

What are the two types of copper ports that Catalyst 9000 series switches support?
  • 10/100/1000Mbps
  • Multigigabit – 100Mbps and 1/2.5/5/10Gbps

What are the two types of fiber cabling?
  • Multi-mode (OM1, OM2, OM3, OM4 and OM5)
  • Single-mode (OS1 and OS2)

What are the two roles a device can perform in a PoE setup?
  • Power Sourcing Equipment (PSE) – the switch providing power
  • Powered Device (PD) – the end device consuming power

Cisco ACI Concepts

In this blog post we will explore Cisco ACI fabric components and provide a high-level overview of important Cisco ACI concepts. We will not be looking into configuration workflows, which will be the topic of another post.

ACI (Application Centric Infrastructure) is a multi-tenant data center switching solution built on an intent-based approach.

What is intent-based networking, and how is it different from traditional software-defined networking?

Cisco defines intent-based networking as 3 processes:

  • Translation, or converting business requirements into policies
  • Activation, or transforming a policy into specific configuration applied to a device
  • Assurance, or ensuring that the intent has been realized

Traditional software-defined networking focuses on activation, i.e. orchestration and configuration automation. See the Cisco Viptela SD-WAN post to read about Cisco’s SDN approach for the WAN.

Cisco products implement all three processes. ACI is responsible for translation and activation, while Cisco Tetration and Network Assurance Engine are responsible for the assurance aspect.

What are the benefits of implementing Cisco ACI in the data center?

The ACI fabric is centrally managed via a single web-based management interface. ACI also provides an extensive Application Programming Interface (API), so it can be fully automated.
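As a taste of that API, the hedged sketch below uses Python’s requests library against the APIC REST API’s documented aaaLogin endpoint and then lists tenants (fvTenant class objects); the controller address and credentials are placeholders.

```python
# Hedged sketch: logging in to the APIC REST API and listing tenants with the
# requests library. Hostname and credentials are placeholders.
import requests

APIC = "https://apic.example.com"   # placeholder controller address

session = requests.Session()
login = {"aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}}
resp = session.post(f"{APIC}/api/aaaLogin.json", json=login, verify=False)
resp.raise_for_status()             # session now holds the authentication cookie

tenants = session.get(f"{APIC}/api/class/fvTenant.json", verify=False).json()
for obj in tenants["imdata"]:
    print(obj["fvTenant"]["attributes"]["name"])
```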

ACI has a multi-tenant design out of the box. It ensures that tenants are separated not only on the data plane, but also by providing tenant-specific management capabilities.

Cisco ACI is easy to deploy, as the user doesn’t need to understand or configure fabric protocols such as VXLAN, underlay routing protocols, or multicast routing. Provisioning new leaf switches or replacing existing ones is very simple, from discovery to applying template-based configuration.

There are some new concepts and configuration patterns to master, as ACI is rather different from the way traditional switches are configured and operated. However, ACI brings many benefits with centralized configuration based on templates and policies. For example, consistency across many devices is easily achieved, and known working settings can be re-used when a new device or tenant is introduced.

Cisco ACI Components

The two main components of ACI are:

  • Switching Fabric  
  • Controllers

ACI Switching Fabric

The switching fabric is based on a leaf-and-spine topology. Each leaf connects to every spine, with no direct connections between leaves or between spines. Servers, routers for external connectivity, firewalls, and other network devices connect to leaf switches only.

Cisco ACI Switching Fabric
Figure 1. ACI Switching Fabric

With two layers, there is always a single hop – the spine layer – between any pair of leaf switches. Throughput can be scaled horizontally by introducing additional spine switches. The inter-switch connections are point-to-point Layer 3 links, so all links can be evenly utilized with Equal-Cost Multi-Pathing (ECMP). The switch fabric uses VXLAN encapsulation, or MAC-in-UDP, with Cisco proprietary extensions. Data plane operation is explained in more detail in the next section.
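To make the scaling point concrete, the small sketch below (illustrative arithmetic only, not an ACI tool) computes the number of fabric links and the ECMP fan-out from the leaf and spine counts; the counts used are arbitrary examples.

```python
# Illustrative arithmetic only: every leaf has one uplink to every spine.
def fabric_links(leaves: int, spines: int) -> int:
    """Total number of point-to-point Layer 3 links in the fabric."""
    return leaves * spines

def ecmp_paths(spines: int) -> int:
    """Equal-cost paths between any two leaf switches (one per spine)."""
    return spines

# Example counts (arbitrary): 4 leaves and 2 spines
print(fabric_links(4, 2))   # 8 fabric links
print(ecmp_paths(2))        # leaf-to-leaf traffic is balanced over 2 paths
```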

The Cisco ACI switch portfolio consists of the modular Nexus 9500 and fixed Nexus 9300 families of switches. Not all switches in these families can run in ACI mode: some are NX-OS mode only, and some can run in both modes.

ACI Spine Switches

Important: Always check Cisco website for the latest updates and compatibility information.

Switch model | Description | ACI spine/NX-OS
X9736PQ line card (reached end of sale) | 36 x 40G QSFP+ | ACI spine
X9732C-EX line card | 32 x 100G QSFP28 | Both
X9732C-FX line card (on roadmap) | 32 x 100G QSFP28 | Both
X9736C-FX line card | 36 x 100G QSFP28 | Both
X9336PQ switch (reached end of sale) | 36 x 40G QSFP+ | ACI spine
9332C switch | 32 x 40/100G QSFP28 | Both
9364C switch | 64 x 40/100G QSFP28 | Both
9316D-GX switch | 16 x 400/100G QSFP-DD | Both
93600CD-GX switch (on roadmap) | 28 x 40/100G QSFP28 and 8 x 400/100G QSFP-DD | Both

Table 1. Cisco ACI Spine Switches

The Nexus 9500 family has three chassis models with 4, 8, and 16 slots for line cards. Each model accepts a single supervisor card or a pair of them, a set of fabric modules, and line cards. The fabric modules and line cards are what give the chassis the ability to run in ACI mode. Currently there are three families of line cards:

  • Cisco and merchant ASIC based. Only a single line card in this family, the X9736PQ, supports ACI spine functionality; it is compatible with the C9504-FM, C9508-FM, and C9516-FM fabric modules.
  • R-Series (deep buffer). This family doesn’t provide ACI support; its line card model names start with X96xx.
  • Cloud Scale ASIC based. This more recent family contains the ACI spine-capable X9732C-EX, X9732C-FX (roadmap as of Sep 2019), and X9736C-FX line cards, and the C9504-FM-E, C9508-FM-E, C9508-FM-E2, and C9516-FM-E2 fabric modules.

Fixed Nexus 9300 switches that can act as spine switches are listed below:

  • 9332C
  • 9364C
  • 9316D-GX
  • 93600CD-GX (roadmap as of Sep 2019)

All of the switches in this list are Cloud Scale based.

ACI Leaf Switches

Leaf switches are all part of the Nexus 9300 family and are based on Cloud Scale technology, with the exception of the 93120TX. The table below shows the available options for ACI leaves.

Switch model | Description | ACI leaf/NX-OS
93120TX | 96 x 100M/1/10GBASE-T and 6 x 40G QSFP+ | Both
93108TC-EX | 48 x 10GBASE-T and 6 x 40/100G QSFP28 | Both
93180YC-EX | 48 x 10/25G and 6 x 40/100G QSFP28 | Both
93180LC-EX | Up to 32 x 40/50G QSFP+ or 18 x 100G QSFP28 | Both
9348GC-FXP | 48 x 100M/1GBASE-T, 4 x 10/25G SFP28 and 2 x 40/100G QSFP28 | Both
93108TC-FX | 48 x 100M/1/10GBASE-T and 6 x 40/100G QSFP28 | Both
93180YC-FX | 48 x 1/10/25G fiber ports and 6 x 40/100G QSFP28 | Both
9336C-FX2 | 36 x 40/100G QSFP28 | Both
93216TC-FX2 | 96 x 100M/1/10GBASE-T and 12 x 40/100G QSFP28 | Both
93240YC-FX2 | 48 x 1/10/25G fiber ports and 12 x 40/100G QSFP28 | Both
93360YC-FX2 | 96 x 1/10/25G fiber ports and 12 x 40/100G QSFP28 | Both
9316D-GX (on roadmap) | 16 x 400/100G QSFP-DD | Both
93600CD-GX | 28 x 40/100G QSFP28 and 8 x 400/100G QSFP-DD | Both

Table 2. Cisco ACI Leaf Switches

APIC Controllers

The core of an ACI deployment is the Cisco Application Policy Infrastructure Controller, or APIC. It is the central point for ACI fabric configuration and monitoring.

APIC is a physical appliance based on a Cisco UCS C-Series server. An ACI deployment requires at least three APIC controllers forming an APIC cluster; the maximum number of APIC controllers in a cluster is five.

For fabric management, each APIC is physically connected to two different leaf switches, with one interface active and the second one on standby. In addition to these two links, out-of-band connections for CIMC and the appliance itself are required.

A virtual APIC controller can be launched on a VMware ESXi hypervisor and is a component of the Cisco Mini ACI fabric for small-scale deployments. In a Cisco Mini ACI fabric only a single physical APIC is required, while the second and third can be virtualized.

There are two APIC configurations currently available – medium and large (for more than 1200 edge ports). The appliance must be ordered using the published part number and not as a C-Series server with matching parameters. The configuration details for each option are shown in Table 3.

Configuration | Medium | Large
Part number | APIC-M3 | APIC-L3
CPU | 2 x 1.7 GHz Xeon Scalable 3106/85W 8C/11MB Cache/DDR4 2133MHz | 2 x 2.1 GHz Xeon Scalable 4110/85W 8C/11MB Cache/DDR4 2400MHz
RAM | 6 x 16GB DDR4-2666-MHz RDIMM/PC4-21300 | 12 x 16GB DDR4-2666-MHz RDIMM/PC4-21300
HDD | 2 x 1 TB 12G SAS 7.2K RPM SFF HDD | 2 x 2.4 TB 12G SAS 10K RPM SFF HDD
CNA | Cisco UCS VIC 1455 Quad Port 10/25G SFP28 CNA PCIE | Cisco UCS VIC 1455 Quad Port 10/25G SFP28 CNA PCIE

Table 3. Cisco APIC Controllers

ACI Fabric Operation

ACI Fabric Forwarding Overview

Let’s consider the example topology in the diagram below. The orange links between leaves and spines are Layer 3; therefore, no Layer 2 loops can occur and no Spanning Tree Protocol is required. These links form the underlay network. All data traffic traversing them is VXLAN-encapsulated.

If you capture a packet on any of those links, it will be UDP-encapsulated traffic between loopback interfaces of the leaf switches. Such an IP address is called a TEP, for Tunnel End Point. In some scenarios, the destination IP address can also be a multicast address or a spine switch loopback.

The payload of this UDP traffic is the Layer 2 traffic received on a downstream interface. Let’s start with Server A sending an IP packet to Server B; to simplify the example, let’s assume it already knows the MAC address of Server B. Server A will create a unicast IP packet, pack it into an Ethernet frame, and send it to the switch.

The switch will try to resolve the destination leaf’s TEP IP address. There are several mechanisms available, but let’s assume it knows that Server B is connected to leaf switch #4. It will take the Ethernet frame and pack it into a new VXLAN UDP datagram, with a new IP header whose source IP is leaf switch #2’s TEP and whose destination is leaf switch #4’s TEP. The encapsulated traffic will be load-balanced via the two available spines.

Cisco ACI Forwarding
Figure 2. ACI Forwarding
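For readers who want to see what such an encapsulated packet looks like, the sketch below builds a comparable packet with scapy using plain VXLAN over UDP/4789. ACI’s actual iVXLAN header carries Cisco proprietary extensions, and the TEP addresses, MAC addresses, and VNI shown here are invented for illustration.

```python
# Rough sketch with scapy using plain VXLAN on UDP/4789. ACI's real iVXLAN
# header carries Cisco proprietary extensions; all addresses and the VNI here
# are invented.
from scapy.all import Ether, IP, UDP
from scapy.layers.vxlan import VXLAN

inner = (Ether(src="00:00:0a:0a:0a:0a", dst="00:00:0b:0b:0b:0b") /
         IP(src="10.1.1.10", dst="10.1.1.20"))        # Server A -> Server B frame

outer = (Ether() /
         IP(src="10.0.0.2", dst="10.0.0.4") /          # leaf #2 TEP -> leaf #4 TEP
         UDP(dport=4789) /                             # VXLAN UDP port
         VXLAN(vni=15000) /                            # VNI identifying the Layer 2 segment
         inner)

outer.show()  # prints the outer IP/UDP/VXLAN headers followed by the inner frame
```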

Underlay Protocols

In ACI terminology, the underlay – the set of orange links in the diagram above – is called the infra VRF. The IP addresses in the underlay are isolated and not exposed to tenants. In contrast, data traffic between servers and clients is transferred in overlay networks. This is similar to how VPNs are built over the Internet, or Layer 3 VPNs over an MPLS network.

The orange links in Figure 2 run a link-state routing protocol – IS-IS. Its main purpose is to provide reachability between Tunnel End Points (TEPs). This is similar to how a VXLAN network is built on Nexus switches running NX-OS, which can use OSPF as the underlay routing protocol instead.

Unlike a VXLAN EVPN setup, ACI doesn’t run EVPN with BGP to distribute endpoint reachability information. Instead, COOP (Council of Oracle Protocol) is responsible for endpoint information tracking and resolution. MP-BGP, however, is still used to propagate routing information that is external to the fabric.

Cisco ACI Basic Concepts

Cisco introduced many new terms with ACI. All configuration constructs and their interactions are documented in the ACI policy model. Each construct is represented by a Managed Object (MO), and the MOs form a hierarchical Management Information Tree (MIT).

Figure 3 displays a partial view of the MIT. The Policy Universe on the left is the root. Solid lines represent containment and dotted lines represent association. For example, the Tenant class contains one or more Bridge Domain instances, and a Bridge Domain is associated with a VRF.

As this post is introductory, we will review some of the terms relevant to how the fabric works. There are also important terms around how the fabric is configured; however, these will be covered in another post.

Cisco ACI Management Information Tree
Figure 3. ACI Management Information Tree

Tenants

A tenant is a logical grouping of various policies. It can be a customer or a department within your organization. By creating different tenants, you gain the ability to delegate management of tenant-specific settings.

There are three built-in tenants: Infra, Common, and Management. The Infra tenant is responsible for the fabric underlay, the Common tenant hosts resources that are shared between other tenants, and the Management tenant is for in-band and out-of-band management configuration.

VRFs

A Virtual Routing and Forwarding instance, or VRF, has the same meaning as in a traditional network: it is a Layer 3 routing domain. Isolation is achieved by keeping routing information separate.

For example, two different VRFs can both have the 192.168.0.0/24 network defined, in the same way as if each had its own dedicated, non-connected physical network and routers. By default, VRFs cannot communicate with each other.

You can export, or leak, some of the routes between VRFs, but in this case you need to ensure that the networks don’t have overlapping subnets.
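A quick way to check for such overlaps is Python’s standard ipaddress module; the sketch below uses made-up VRF names and prefixes.

```python
# Standard-library check for overlapping prefixes before leaking routes
# between two VRFs. VRF names and prefixes are made-up examples.
import ipaddress
from itertools import product

vrf_red = ["192.168.0.0/24", "10.10.0.0/16"]
vrf_blue = ["192.168.0.0/24", "172.16.5.0/24"]

def overlaps(prefixes_a, prefixes_b):
    nets_a = [ipaddress.ip_network(p) for p in prefixes_a]
    nets_b = [ipaddress.ip_network(p) for p in prefixes_b]
    return [(a, b) for a, b in product(nets_a, nets_b) if a.overlaps(b)]

print(overlaps(vrf_red, vrf_blue))
# [(IPv4Network('192.168.0.0/24'), IPv4Network('192.168.0.0/24'))] -> leaking is unsafe
```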

A tenant can have a single or multiple VRFs.

Bridge Domains and Subnets

A bridge domain is a Layer 2 flood domain. In a traditional network, a VLAN is a Layer 2 flood domain, so you might be wondering why not keep the same term. One of the reasons is that the fabric uses VXLAN IDs to differentiate Layer 2 networks from each other. VLAN IDs can be re-used and can even overlap between different ports in recent versions of ACI software, so they cannot be used as fabric-wide identifiers for a specific Layer 2 domain.

A bridge domain requires association with a VRF and can contain one or more subnets. It is possible to assign multiple subnets to a single bridge domain (the analogy is a secondary address on an SVI), or a one-to-one relationship between bridge domain and subnet can be established.

Adding a subnet to a bridge domain and enabling unicast routing creates a routed interface, or SVI, in that subnet. In ACI, all leaves use the same SVI IP address as the default gateway for the subnet. This functionality is called pervasive gateway (or anycast gateway) and optimizes Layer 3 processing efficiency, as routing is distributed across all leaves without the need for a central device to perform routing.

Application Profiles and EPGs

Application Profiles are containers for Endpoint Groups (EPGs). An EPG is a logical group of endpoints and one of the main components of the ACI policy model. Endpoints include physical servers, virtual machines, and other network-connected devices.

EPG membership can be statically configured, for example based on a specific port and a VLAN on it, or it can be based on a VM’s NIC port-group membership via dynamic negotiation with a Virtual Machine Manager.

The policies in ACI are applied to EPGs and, by default, each EPG is isolated from other EPGs.

Contracts

If EPG A needs to access services provided by EPG B, then EPG A is called the consumer and EPG B is called the provider. The default behavior in ACI is to block all inter-EPG traffic, so a contract must be defined to facilitate this communication.

A contract consists of subjects, which in turn contain lists of filters. Filters are similar to access lists and contain entries that match the traffic.

Contracts are directional and differentiate between traffic going from consumer to provider and traffic in the reverse direction.
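The relationship between consumers, providers, and contracts can be illustrated with a plain Python data structure. This is not the APIC object model, just a sketch with invented names (web, db, web-to-db) showing that traffic is only permitted when the source EPG consumes a contract that the destination EPG provides.

```python
# Illustrative sketch only: contract -> subject -> filter hierarchy and the
# consumer/provider relationship. Names are invented; this is not the APIC
# object model.
contract = {
    "name": "web-to-db",
    "subjects": [
        {"name": "sql",
         "filters": [{"protocol": "tcp", "dst_port": 1433}]},  # access-list-like entry
    ],
}

epgs = {
    "web": {"consumes": ["web-to-db"], "provides": []},
    "db":  {"consumes": [], "provides": ["web-to-db"]},
}

def contracts_between(src, dst):
    """Contracts that permit traffic initiated from the src EPG towards the dst EPG."""
    return set(epgs[src]["consumes"]) & set(epgs[dst]["provides"])

print(contracts_between("web", "db"))  # {'web-to-db'} -> this contract's filters apply
print(contracts_between("db", "web"))  # set() -> db consumes nothing that web provides
```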

Access Policies

Access policies control the configuration of interfaces connecting to physical servers, routers, and hypervisors. Objects under Access Policies include:

  • Pools of IDs – groupings of VLANs, VXLAN IDs, and multicast addresses
  • Domains, whose types define how devices are connected to leaf switches; for example, a physical domain is used for bare-metal servers and a VMM domain is used for integration with hypervisors
  • Interface Policies, Policy Groups, and Profiles. A policy controls a specific interface setting; policies are grouped together and used in a profile along with an interface selector
  • Switch Policies, Policy Groups, and Profiles. These objects control switch-level configuration; by associating Interface Profiles with Switch Profiles, interface settings can be applied to a specific leaf switch

Fabric Policies

Fabric policies and the objects under them control internal fabric interface and protocol configuration. For example, parameters such as fabric MTU are defined by a global fabric policy, while SNMP, date, and time parameters are specified by Pod Profiles.

Reference Materials

Cisco ACI Policy Model

Cisco ACI Policy Model Guide

Cisco Routers Performance

In this blog post I will summarize available information on Cisco ISR and ASR performance. The following platforms will be covered: ISR G2, ISR 1100, ISR 4000, ASR 1000.


Update: check my new article on SD-WAN routers and platforms here.

ISR G2

Let’s start with ISR G2 performance numbers. ISR G2s are legacy products running Classic IOS; however, they are still around, and it is important to know how they perform in order to properly size newer replacement routers.

Important: These are not real-world numbers. Please read further.

Model | Packets Per Second | Megabits Per Second
Cisco 860 | 25,000 | 197
Cisco 880 | 50,000 | 198
Cisco 890 | 100,000 | 1,400
Cisco 1921 | 290,000 | 2,770
Cisco 1941 | 330,000 | 2,932
Cisco 2901 | 330,000 | 3,114
Cisco 2911 | 352,000 | 3,371
Cisco 2921 | 479,000 | 3,502
Cisco 2951 | 579,000 | 5,136
Cisco 3925 | 833,000 | 6,903
Cisco 3925E | 1,845,000 | 6,703
Cisco 3945 | 982,000 | 8,025
Cisco 3945E | 2,924,000 | 8,675

Table 1. Cisco ISR G2 RFC 2544 Performance

The second column displays the number of packets per second that the platform can forward at maximum CPU utilization, just before it starts to drop packets. For a router’s CPU, it takes the same amount of effort to route a 64-byte packet as a 1500-byte one, so packets per second is usually the more reliable metric, as it removes packet size from the equation.

The third column displays throughput in megabits per second, derived from the packet size in bytes multiplied by the packets per second (and converted to bits). As the results can differ by more than 20x depending on the packet size selected, the specification must state the average packet size used during the test.

What is IMIX? Real traffic doesn’t consist of packets of a single size, so many tests use packets of different sizes, called an Internet Mix (IMIX). For example, in a simple IMIX sample, out of every 12 packets transmitted, 7 are 40 bytes long, 4 are 576 bytes, and 1 is 1500 bytes. The average packet size in this case is about 340 bytes.
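The IMIX average and the pps-to-Mbps conversion used throughout these tables are easy to reproduce; the snippet below is an illustrative helper, not a Cisco tool.

```python
# Average packet size of the simple IMIX sample described above
imix = [(7, 40), (4, 576), (1, 1500)]     # (packet count, packet size in bytes)
packets = sum(count for count, _ in imix)
avg_size = sum(count * size for count, size in imix) / packets
print(round(avg_size))                     # ~340 bytes

def pps_to_mbps(pps, avg_packet_bytes):
    """Throughput in Mbps for a given packet rate and average packet size."""
    return pps * avg_packet_bytes * 8 / 1_000_000

print(round(pps_to_mbps(100_000, avg_size)))  # 100 kpps of IMIX traffic is roughly 272 Mbps
```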

The values provided in Table 1 are based on IP packet routing only, without any additional processing such as QoS, encryption, or NAT, so they represent the maximum performance a platform can deliver. Real-world numbers will be significantly smaller.

Another important thing to note is how a packet is counted; for example, it could be counted twice – once as it enters the ingress interface and once as it exits the egress one. Cisco counts it as a single packet, as seen by the forwarding engine. On the other hand, when selecting a router for a specific WAN interface, the bandwidth utilization in each direction must be added together. For example, in the case of a 10Mbps WAN link with an expected 9Mbps download and 3Mbps upload, the calculation should be based on 12Mbps of load.

For G2 platforms, Cisco’s recommended WAN-link-based sizing is shown in the table below. The values are much smaller than those for plain IP forwarding, the expectation being that the router will not be running at 99% CPU and dropping packets.

Platform | WAN Link (Mbps)
860 | 4
880 | 8
890 | 15
1921 | 15
1941 | 25
2901 | 25
2911 | 35
2921 | 50
2951 | 75
3925 | 100
3945 | 150
3925E | 250
3945E | 350

Table 2. ISR G2 Recommended Sizing Based on WAN Link Speed

ISR 4000

ISR 4000 routers run IOS-XE and introduced performance-based licensing with three tiers:

  • Default
  • Performance (2-3x the default throughput level)
  • Boost (removes shaping completely)

Cisco publishes the following statistics for basic IP routing without services, with IMIX traffic (~330-byte packets).

Model | Default (Mbps) | Performance (Mbps @ CPU %) | Boost (Mbps @ CPU %) | Boost (pps @ CPU %) | Encryption (Mbps, AES 256)
4221 | 35 | 75 @ 8% CPU | 1,400 @ 94% CPU | 530,000 @ 94% CPU | 75
4321 | 50 | 100 @ 8% CPU | 2,000 @ 68% CPU* | 760,000 @ 68% CPU* | 100
4331 | 100 | 300 @ 16% CPU | 2,000 @ 53% CPU* | 760,000 @ 53% CPU* | 500
4351 | 200 | 400 @ 17% CPU | 2,000 @ 45% CPU* | 760,000 @ 45% CPU* | 500
4431 | 500 | 1,000 @ 18% CPU | 4,000 @ 62% CPU* | 1,520,000 @ 62% CPU* | 900
4451 | 1,000 | 2,000 @ 19% CPU | 4,000 @ 35% CPU* | 1,520,000 @ 35% CPU* | 1,600
4461 | 1,500 | 3,000 (CPU not published) | 10,000+ (CPU not published) | 3,790,000+ (CPU not published) | 7,000

Table 3. ISR 4000 Performance (IP forwarding, IMIX 330 byte average packet size)

* – the bottleneck was the physical interface speed, not the forwarding CPU

As the routers are capable of forwarding significantly more traffic than the default and performance licenses allow, the numbers in Table 3 for these license tiers remain close to real-life figures once services are added. It is safe to choose an ISR 4000 at the “factory default” and “performance” levels, and in most cases a lower model with a “performance” license, if you plan to use multiple services.

The recently added boost license removes shaping completely. Table 3 displays PPS values for the ISR 4000; however, most of the routers didn’t reach high CPU utilization, as the bottleneck was the interface clock speed. The calculation is based on an IMIX average packet size of 330 bytes.

The data provided should be used only as an approximation, as there are many variables that can affect actual device performance, and performance also does not scale linearly with CPU load.

ISR 1100

The ISR 1100 is a new branch-office platform running IOS-XE, similar to the Cisco 890 and 1921. Published performance numbers are listed in Table 4. IP forwarding on the ISR 1100 is comparable to the ISR 4221 with a boost license. Note that the ISR 1100 doesn’t support voice features.

Platform | RFC-2544 (Mbps, IMIX) | RFC-2544 (pps, IMIX) | Encryption (Mbps, AES 256, IMIX) | NAT (Mbps, IMIX) | ACL + NAT + HQoS (Mbps, IMIX)
C1100-4P | 1,252 | 475,000 | 230 | 660 | 330
C1100-8P | 1,750 | 660,000 | 335 | 960 | 510

Table 4. ISR 1100 Performance

ASR 1000

In cases where you need more than the 10Gbps of throughput provided by the ISR 4461, the ASR 1000 is the platform of choice. All models in the ASR 1000 range have two dedicated hardware components – the RP (Route Processor) and the ESP (Embedded Services Processor). The RP is responsible for control-plane operations and the ESP for data forwarding.

Lower-end models, such as the ASR1001-X and ASR1002-X, have the RP and ESP integrated into the chassis. The throughput of the system depends on the ESP, which runs Cisco’s proprietary programmable ASIC called the Quantum Flow Processor (QFP).

The performance of three integrated-ESP models is shown in Table 5. For the models presented in Table 5, an incremental throughput license is required.

Model | ESP Bandwidth (Mbps) | Throughput (pps)
ASR1001-X | 20,000 | 19,000,000
ASR1002-X | 30,000 | 36,000,000
ASR1002-HX | 100,000 | 58,000,000

Table 5. ASR 1000 Performance (integrated ESP models)

Related Links

RFC-2544: Provides information on recommended way to perform testing

Portable Product Sheets Routing Performance – ISR G1, Legacy Platforms Performance

ISR 4000 Performance – 3rd Party Testing Report by Miercom

ASR 1000 FAQ 

Cisco Firewalls Performance

Cisco ACI Switch Models