vSphere ESXi Networking Guide – Part 1: Standard Switches

This is part 1 of the vSphere ESXi Networking Guide series, which covers theory, operation, and configuration for the different components of vSphere networking.

Virtual machines need to communicate with each other within an ESXi host and with other nodes reachable over the network. A virtual switch is a software component that enables this communication. vSphere can use Standard Switches (VSS) or Distributed Switches (VDS). vSphere Standard Switch is available with any license tier or edition of vSphere and it is the topic of this article.

ESXi hosts and vSwitches

An ESXi host can contain one or multiple standard virtual switches. By default, a standard virtual switch is created during host installation; it provides the default VM network and management access to the host itself. It is also possible to delete all standard switches from an ESXi host and use distributed switches instead.

Each virtual switch is isolated from other virtual switches on the same host. In the diagram below, a VM on Switch0 will not be able to reach another VM connected to Switch1 directly within the host. It is possible, however, for them to communicate via upstream physical switches if the required VLAN and interface configuration is in place.

Each vSwitch also has one or multiple dedicated uplinks allocated to it. In cases where no external communication is required, a vSwitch can operate without uplinks.

IP-based traffic is often segregated with the use of dedicated physical network adapters which can be connected to a separate set of switches. In such scenarios, storage-related uplinks and corresponding VMKernel ports can be placed into a separate vSwitch.

Other use cases for creating additional vSwitches include:

  • Separation of Dev/Test environments from the production systems
  • Security-related isolation, for example, placing uplinks to a dedicated physical switch for DMZ virtual machines
Figure 1. Standard vSwitches and Hosts

Port groups

A single virtual switch can contain one or multiple port groups, or no port groups at all. Each port group either contains one or multiple VM-facing ports or a single VMKernel port. Refer to the diagram below, which illustrates the relationship between port groups and their members.

Figure 2. VM Port Groups and VM Kernel Port Groups

Port groups can be, and usually are, mapped to different VLANs. However, multiple port groups can be mapped to the same VLAN, in which case they don't provide isolation and their ports can communicate directly within the host. Note the contrast with separate vSwitches, which do prevent direct connectivity within the host.

A port group, as its name suggests, aggregates ports with similar configuration requirements, including VLAN membership, security parameters, traffic shaping, and uplink selection policies.


The VLAN, or Virtual LAN, concept originates from physical Ethernet switching as a way to split a single switch into multiple isolated groups of ports. Each such group is a separate Layer 2, or broadcast, domain: hosts connected to the same group can send each other Layer 2 frames directly, without involving a Layer 3 device such as a router. A VLAN usually represents a single IPv4 subnet.

End-user device-facing ports can be assigned to a single VLAN; in this case, they are called access ports. A port facing an upstream or another switch can also be allocated to a single VLAN; however, this requires an additional port for every VLAN that has members behind the upstream switch.

A more efficient way is to add additional information, called an 802.1q tag, to each frame, so that multiple VLANs can be transmitted over a single interface. An interface that carries traffic for multiple VLANs using tags is called an 802.1q trunk port.

One of the VLANs can be designated as native for an 802.1q trunk. The switch will not add a tag to the native VLAN's frames. As a result, an end device that does not use tagging will still be able to process frames in the native VLAN.

It is also possible to configure a port connected to a server as an 802.1q trunk. The operating system driver can then present the trunk as multiple virtual adapters, each in the corresponding VLAN.
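To make the tagging mechanics concrete, here is a minimal Python sketch (illustrative only, not how a switch or ESXi implements it) of inserting and stripping the 4-byte 802.1q tag in an Ethernet frame:

```python
import struct

TPID = 0x8100  # 802.1q Tag Protocol Identifier

def tag_frame(frame, vlan_id, pcp=0):
    """Insert an 802.1q tag after the destination and source MACs (bytes 0-11)."""
    if not 1 <= vlan_id <= 4094:
        raise ValueError("VLAN ID must be in the range 1-4094")
    tci = (pcp << 13) | vlan_id  # 3-bit priority, DEI=0, 12-bit VLAN ID
    return frame[:12] + struct.pack("!HH", TPID, tci) + frame[12:]

def untag_frame(frame):
    """Strip the 802.1q tag; return (vlan_id, untagged_frame)."""
    tpid, tci = struct.unpack("!HH", frame[12:16])
    if tpid != TPID:
        raise ValueError("frame is not 802.1q tagged")
    return tci & 0x0FFF, frame[:12] + frame[16:]
```

Access-port behaviour corresponds to `untag_frame` on delivery to the end device; trunk behaviour keeps the tag in place.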

How are VLANs mapped to the different virtual switch components?

A virtual switch is designed to operate similarly to its physical equivalent. Uplinks are the network interface cards of an ESXi host. No special configuration is required on uplinks to enable 802.1q tagging: VLANs are defined at the port-group level, and tagging is automatically enabled on the uplinks. The upstream switch must have the relevant configuration to enable tagging towards the virtual switch.

Figure 3. Virtual Standard Switch and VLANs

Consider the sample topology shown in Figure 3. The physical switches have 5 VLANs defined and identical configuration on their downstream ports connected to a single ESXi host. Cisco Catalyst syntax is provided in this example.

Frames on these physical links will be tagged with VLAN IDs 10, 20, 30, and 40. VLAN ID 5 is the exception, as it is the native VLAN on the trunk. Frames of this VLAN will be sent without a tag from the physical switch to the virtual switch.
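Based on the description above, the upstream Catalyst trunk configuration implied by Figure 3 would look similar to this sketch (the interface name is illustrative; `switchport trunk encapsulation dot1q` is only required on platforms that also support ISL):

```
interface GigabitEthernet1/0/1
 description Downlink to ESXi host
 switchport trunk encapsulation dot1q
 switchport trunk native vlan 5
 switchport trunk allowed vlan 5,10,20,30,40
 switchport mode trunk
```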

The virtual standard switch has 4 port groups defined – 3 for VM communication and one VMKernel port group. When Yellow VLAN 10 or Red VLAN 20 (or any VLAN ID in the range 1 to 4094) is allocated to a port group, the virtual switch strips the tag when delivering frames to a VM or VMKernel port in this group. This is similar to how access ports operate on a physical switch.

Green VLAN 0 is a special case. This port group receives all untagged frames arriving on the uplinks. In the example shown in Figure 3, the green port group will effectively be in VLAN 5, as it is designated as native by the upstream switch. VM adapters connected to this port group will also receive untagged frames. To avoid confusion when determining the VLAN ID of a port group, use explicit VLAN tagging for all port groups and designate one of the unused VLANs as native on the physical switch interfaces. This is in line with security recommendations for physical inter-switch links to prevent so-called VLAN-hopping attacks.

A VLAN ID of 4095, when allocated to a port group, means that tags must not be stripped but delivered to the VM's guest OS to process. Ports in this port group are 802.1q trunks; in this example their native VLAN is 5, so frames of that VLAN are sent to the VMs without a tag.

Virtual Machines interfaces and VMKernel ports

A virtual machine uses one or multiple adapters to connect to port groups. VM-facing ports are Layer 2, so no IP addresses need to be specified on the ESXi host. The router for VMs will be outside of the ESXi host, with the exception of virtual software routers or firewalls.

The guest OS requires a NIC driver, as it would with a physical adapter. Currently, there are 2 types of adapters available with ESXi: E1000 and VMXNET 3. E1000 emulates an Intel network card, and the guest OS usually includes a driver for it in its standard set of drivers. VMXNET 3, also known as the para-virtualized adapter, provides better performance and more features. Drivers for VMXNET 3 are available with VMware Tools.

A VMKernel port provides connectivity to the ESXi host itself. Each VMKernel port requires a dedicated port group. A VMKernel port is a Layer 3 port with an IP address assigned; however, multiple VMKernel ports can share the same VLAN.

During installation, a default management VMKernel port is automatically created. Additional ports can be created for the following types of host-sourced traffic:

  • iSCSI or NFS
  • vMotion
  • Provisioning
  • Fault Tolerance logging
  • vSphere Replication
  • vSphere Replication NFC
  • vSAN

Each VMKernel port is also associated with a TCP/IP stack. Each stack has a separate IP routing table and DNS configuration. With multiple stacks, each type of management traffic can use its own default gateway instead of relying on multiple static routes within a single stack.

vSwitch and Port Group policies

There are 3 categories of settings that can be applied to Port Groups. While the settings can also be defined on a vSwitch level, they are ultimately applied to Port Groups.

  • Uplink configuration (NIC teaming)
  • Security
  • Traffic shaping

We will discuss each of these categories in the following section. By default, all port groups inherit global configuration applied on vSwitch, which serves as a template for a default set of settings.  If a specific port group requires different policy settings, an administrator can override vSwitch settings on a port group level.

Uplink configuration settings

NIC teaming settings define how traffic will be distributed across multiple network adapters. Before discussing the uplink configuration further, I will provide a short overview of how multiple uplinks can be active at the same time without creating a layer 2 loop.

How does a Virtual Standard Switch prevent loops? In Figure 3, the two network adapters create a potential loop, as the physical switches are inter-connected directly upstream. You might wonder whether the switches recognize that there is a loop and need to block one of the interfaces using the Spanning Tree Protocol, as would happen in a traditional switched network. Virtual Standard Switches don't run STP. They also do not forward BPDUs received from one upstream switch to another.

To prevent the consequences of Layer 2 loops, such as broadcast storms, a virtual switch uses a simple rule: traffic received on one uplink is never forwarded out of any other uplink. This way, every frame's source or destination must always be either a VM or a VMKernel port. With this rule in place, all physical interfaces can forward traffic at the same time. Upstream switch ports can be configured as edge ports or with features such as PortFast, which moves the port to the forwarding state immediately.
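The rule reduces to a one-line forwarding check. A hedged Python sketch (`vmnic` is the standard ESXi uplink naming; the real forwarding path is, of course, more involved):

```python
def may_forward(ingress_port, egress_port):
    """vSwitch loop-avoidance rule: a frame received on one uplink is
    never forwarded out of any other uplink."""
    def is_uplink(port):
        return port.startswith("vmnic")
    return not (is_uplink(ingress_port) and is_uplink(egress_port))
```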

Note that nothing stops a VM from performing "inside-OS" Layer 2 bridging between its adapters that are pinned to different uplinks. This case doesn't fall under the rule above, as from the hypervisor's perspective the traffic goes to and from a VM. Such a configuration can bring the network down or significantly degrade its performance due to the resulting Layer 2 loop.

Now that we have discussed how multiple adapters can be active at the same time, let's look at the different ways traffic can be load-balanced across these uplinks.

There are several load-balancing mechanisms available under NIC teaming configuration of a port group:

  • Originating port ID
  • Source MAC hash
  • IP hash

Figure 4 shows 2 standard switches, with each port group configured to use one of the algorithms above.

Figure 4. vSwitch Uplink Pinning and Load-Balancing Algorithms

Virtual port- and source MAC-based

Virtual port-based balancing is the default algorithm; it assigns each virtual port to a specific uplink interface. This way, if there are 8 VM ports and 2 uplinks, 4 VM ports will be mapped to one uplink and the other four to the other.

The MAC-based load-balancing algorithm is similar to virtual port-based balancing, with the difference that the input to the algorithm is not the virtual port ID but the VM's source MAC address.

As different VMs can generate different loads but carry the same weight, the downside of these two algorithms is that they cannot evenly utilize the available upstream bandwidth. Their benefit is that they are simple to troubleshoot and work with any upstream switch configuration, as a VM's MAC address is consistently reachable via a single physical interface.
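A simplified model of the two policies (the exact hash functions ESXi uses are internal; modulo arithmetic over the port ID or MAC is enough to capture the pinning behaviour):

```python
def uplink_by_port_id(port_id, num_uplinks):
    """Originating virtual port ID: each port is pinned to one uplink."""
    return port_id % num_uplinks

def uplink_by_src_mac(mac, num_uplinks):
    """Source MAC hash: same idea, but the input is the VM's MAC address."""
    mac_bytes = bytes.fromhex(mac.replace(":", ""))
    return sum(mac_bytes) % num_uplinks
```

With 8 VM ports and 2 uplinks, `uplink_by_port_id` maps 4 ports to each uplink, matching the example above.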

IP hash-based

This algorithm selects an uplink based on a combination of the source and destination IP addresses of a packet. This provides more even uplink utilization, as traffic from even a single VM will be split across multiple uplinks when it communicates with multiple devices. With this load-balancing configuration, the upstream switches must have a specific configuration, for the reason below.
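A simplified sketch of the idea (VMware commonly describes the computation as an XOR of the addresses taken modulo the uplink count; this version XORs the full addresses, which is enough to show the behaviour):

```python
import ipaddress

def uplink_by_ip_hash(src_ip, dst_ip, num_uplinks):
    """IP hash policy (simplified): one VM talking to many peers
    spreads its flows across all uplinks."""
    src = int(ipaddress.ip_address(src_ip))
    dst = int(ipaddress.ip_address(dst_ip))
    return (src ^ dst) % num_uplinks
```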

A physical switch learns source MAC addresses as part of its operation. When traffic from a single source MAC address is seen over more than one interface, the switch has to rapidly invalidate and update its MAC table. In some cases, switches can block offending ports, as MAC flapping is also a sign of a Layer 2 loop.

The solution is to configure the upstream switch ports as a static EtherChannel (or Link Aggregation Group). It has to be a static configuration, as shown in the configuration example of Figure 4, because LACP is not supported on Standard Switches. If there are multiple upstream switches, they must either be physically stacked together or employ some form of virtual stacking, such as Cisco Nexus vPC or Catalyst VSS/StackWise Virtual.
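A hedged Catalyst sketch of the static channel configuration this requires (interface numbers are illustrative; `mode on` is what makes the bundle static rather than LACP-negotiated):

```
interface Port-channel1
 switchport mode trunk
!
interface range GigabitEthernet1/0/1 - 2
 switchport mode trunk
 channel-group 1 mode on
```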

Failover settings

If no load balancing is required for the port group, select "Use Explicit Failover Order", listed as one of the available load-balancing methods in the configuration menu. This setting essentially says: do not load-balance, just fail over to another active adapter if the current one fails.

Failover order is managed by placing adapters in one of three categories:

  • Active adapters. To use the load-balancing mechanisms described above, place at least 2 adapters in this list.
  • Standby adapters. Adapters in this category replace failed active adapters.
  • Unused adapters. Prevents specific port groups from using particular adapters.

To identify whether an adapter went down, the ESXi host will by default check its link status. This will not detect upstream switch failures, and the host might be sending traffic into a black hole over the failed path. An administrator can change the failure detection method to beacon probing. When it is enabled, the host sends broadcast beacons that must be received by the other NICs in the same team. It is recommended to have at least 3 physical adapters connected to different switches.

Security settings

Each port group has 3 parameters controlling layer 2 security:

  • Promiscuous mode
  • MAC address changes
  • Forged transmits

Promiscuous mode

Ports in a port group that has the Accept action for promiscuous mode will receive a copy of all traffic in the vSwitch that the port group belongs to. It is similar to the port mirroring feature available in physical switches. By default, this setting is set to Reject.

This setting should be enabled with care and only when required, for example, to support different appliances that work with a copy of the traffic for security and monitoring analysis. In such cases, create a dedicated port group and override default behavior on it by enabling Promiscuous mode. Enabling this setting on the switch level or on the port group which contains other VM ports is a security risk, as VMs can run Wireshark or similar tools to intercept traffic exchanged by other VMs.

MAC address changes

The hypervisor assigns a virtual MAC address to a VM. The guest OS can try to change this MAC address; however, with this setting set to Reject, no packets will be delivered to the guest VM's new MAC address. This setting applies to traffic in the direction towards the VM. By default, this setting is set to Accept when a switch is created via vCenter and to Reject when created directly via the ESXi host.

Forged transmits

This is the mirrored version of the previous setting: it checks frames as they are received from a VM. If the source MAC address doesn't match the one assigned by the ESXi host, the frame is discarded. This setting has the same defaults as the previous one.

Traffic shaping

The final set of settings we will discuss in this article relates to traffic shaping, a mechanism that limits the throughput of traffic to a specified value. Virtual Standard Switches support only outbound traffic shaping, i.e. a virtual machine's upload. When it is enabled, you have access to 3 parameters:

  • Average Bandwidth (bits per second)
  • Peak Bandwidth (bits per second)
  • Burst Size (bytes)

Average Bandwidth defines the target rate over a period of time. Many applications send traffic in a non-uniform pattern, with quiet periods followed by short bursts. By configuring Burst Size, you allow the algorithm to accumulate bonus credit during quiet periods, which can then be used to send at rates higher than average, up to the Peak Bandwidth.

The settings are applied per port, i.e. the values are not aggregates across all ports.
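The interaction of the three parameters can be modelled as a token bucket. A minimal sketch (my model of the behaviour described above, not VMware's implementation):

```python
class PortShaper:
    """Per-port egress shaper: credit accrues at the average rate,
    is capped at the burst size, and is spent at no more than the peak rate."""

    def __init__(self, avg_bps, peak_bps, burst_bytes):
        self.avg = avg_bps / 8.0    # average rate in bytes/second
        self.peak = peak_bps / 8.0  # peak rate in bytes/second
        self.burst = burst_bytes    # maximum accumulated credit in bytes
        self.credit = 0.0

    def send(self, nbytes, dt):
        """Try to send nbytes over an interval of dt seconds.
        Returns the number of bytes actually allowed through."""
        self.credit = min(self.burst, self.credit + self.avg * dt)
        allowed = min(nbytes, self.credit, self.peak * dt)
        self.credit -= allowed
        return allowed
```

After a quiet period the port can briefly exceed the average rate, up to the peak rate or until the burst credit is spent, then settles back to the average.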

In the next article, I will provide information on how to perform the configuration of concepts described in this post.

Cisco SD-Access Components

I’ve posted overview articles about Cisco’s WAN and data center software-defined technologies earlier – Cisco Viptela SD-WAN (link) and ACI (link). Now it’s time to explore the solution for the LAN. Cisco SD-Access is an evolutionary step in how campus networks are built and operated. In this blog post, we will cover the components of Cisco SD-Access, namely its control plane and data plane elements.

What are the main SD-Access benefits?

The key advantage of a software-defined solution is management centralization. DNA Center with the SD-Access application simplifies campus network operation by providing a single point of management for multiple devices. DNA Center not only automates device configuration but also exposes APIs, so it can be accessed programmatically.

With Cisco SD-Access, administrators can create and apply common policies across the entire campus network. Operational expense savings are one of the main selling points of Cisco SD-Access.

Network flow telemetry gives operators better visibility into what is happening in the network. Cisco ISE and TrustSec provide user and device identification and segmentation within the same virtual network boundary. SD-Access can also support fully isolated virtual networks, for example, between multiple tenants. As a result, better security is achieved with less effort.

Components of Cisco SD-Access

SD-Access consists of 3 categories of components:

  • Network fabric – Switches, routers, wireless LAN controllers and access points. Routed access with VXLAN data plane and LISP control plane
  • Cisco DNA Center with SD-Access – one or multiple appliances
  • Cisco ISE – one or multiple appliances

Check this document for detailed information on supported component combinations and licensing requirements (external link).

This link is an official matrix listing compatibility between versions of different components.

SD-Access Fabric

Switches and Routers

Different roles that switches can perform will be covered in later sections of this article. However, for the purpose of selecting the right platform, 2 main switch roles should be considered – Edge and Border/Control plane nodes.

Edge switches are similar to access switches in that end-user devices connect to them. The platforms currently recommended (Catalyst 9000) and supported (other platforms; check the release notes and licensing documentation for feature support) are listed below:

  • Catalyst 9000-series: 9200, 9300, 9400, 9500
  • Catalyst 3850 and 3650
  • Catalyst 4500E: Sup 8-E, 9-E

Border/Control plane switches perform endpoint ID tracking and are responsible for running Layer 3 routing with networks outside of the fabric. Therefore, these switches have higher memory requirements. If only control plane operation is to be implemented, with no transit traffic routing, the virtual CSR 1000v can be used. When border node functions without control plane operations are required, the Nexus 7700 is a supported option.

Border/Control plane switches and routers to choose from are:

  • Catalyst 9000-series: 9300, 9400, 9500, 9600
  • Catalyst 3850
  • Catalyst 6500/6807-XL: Sup 2T, 6T
  • Catalyst 6840-X, 6880-X
  • Nexus 7700: Sup 2-E, 3-E, M3 line cards only – border functionality only
  • ISR 4300, 4400
  • ASR 1000-X, 1000-HX
  • CSR 1000v

Fabric Wireless Controllers and Access Points

SD-Access supports traditional WLCs and APs without fabric integration; they communicate with each other over the top of the overlay like any other data traffic. Fabric-integrated Wireless Controllers and Access Points, by contrast, participate in the control plane, and the data flow changes compared with traditional WLCs and APs.

This integration provides additional benefits and better efficiency. For example, user traffic from a fabric access point is de-encapsulated on the edge switch without being tunneled all the way up to its WLC. This section lists the supported fabric-integrated wireless components.

Supported WLCs are:

  • Catalyst 9800 Wireless Controller: 9800-40, 9800-80, 9800-CL and Embedded on C9300, C9400 and C9500
  • Cisco 3504, 5520 and 8540 WLC

Fabric mode APs must be directly connected to a fabric edge node. Supported models are:

  • WiFi 6 APs: Catalyst 9115AX, 9117AX and 9120AX
  • Wave 2 APs: Aironet 1800, 2800 and 3800
  • Wave 2 APs, outdoor models: Aironet 1540, 1560
  • Wave 1 APs: Aironet 1700, 2700 and 3700
  • Aironet 4800 APs

DNA Center

DNA Center is responsible for fabric management. The software must be installed on a physical DNA Center Appliance which is based on the Cisco UCS C-series Server. SD-Access is one of the applications of DNA Center.

Check this article dedicated to DNA Center role and functions.

If the DNA Center appliance becomes unavailable, the fabric will continue to function; however, automatic provisioning will be impacted. For redundancy, a highly available cluster of 3 nodes of the same model is recommended.

DNA Center Appliances have 3 options to choose from:

  • Entry-level of up to 1,000 devices: DN2-HW-APL (C220 M5, 44 cores)
  • Mid-size of up to 2,000 devices: DN2-HW-APL-L (C220 M5, 56 cores)
  • Large of up to 5,000 devices: DN2-HW-APL-XL (C480 M5, 112 cores)

Identity Services Engine (ISE)

Cisco Identity Services Engine (ISE) provides identity services for the solution. Access control policies based on user and device identity are also ISE's responsibility. With Cisco TrustSec, the edge device applies Security Group Tags (SGTs) to traffic based on identity. These tags can then be used to perform filtering using SGT-based access lists.

ISE is available as a virtual or a physical appliance. The following models of ISE appliances are available:

  • Small physical:  SNS-3515
  • Large physical: SNS-3595
  • Small virtual: R-ISE-VMS
  • Medium virtual: R-ISE-VMM
  • Large virtual: R-ISE-VML

ISE appliances can also be deployed in a high-availability setup, with load balancing achieved by splitting functions between nodes.

Cisco ISE integrates with DNA Center using REST APIs and pxGrid. DNA Center uses the REST API to automate policy configuration on ISE, while pxGrid is used for endpoint information exchange.

Data Plane

Figure 1 shows a sample network, with the fabric shown in a blue rectangle. Fabric switches in SD-Access are connected to each other using Layer 3 links. These links form the underlay, or transport, network.

The physical topology of the switch fabric can follow the traditional access-distribution-core pattern. There is no requirement to connect switches in a leaf-and-spine topology as in a data center underlay, since campus networks usually don't need to accommodate the intensive east-west communication that data centers do.

Figure 1. SD-Access Fabric

On top of the underlay, virtual networks are created with the use of VXLAN encapsulation. This is similar to the way how modern data center switch fabrics are built, such as Cisco ACI or native Cisco NX-OS VXLAN fabrics.

Packets on inter-switch links are encapsulated in UDP at the transport layer, with the source and destination IP addresses of edge device loopbacks, called routing locators or RLOCs. Edge nodes are responsible for VXLAN encapsulation and decapsulation when sending traffic into and receiving traffic from the fabric.
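A sketch of the base VXLAN header per RFC 7348 (SD-Access actually uses the VXLAN-GPO variant, which repurposes reserved fields to carry group/SGT information; the outer Ethernet/IP/UDP headers with the RLOC addresses are omitted here):

```python
import struct

def vxlan_header(vni):
    """Build the 8-byte VXLAN header: flags with the I bit set (0x08),
    24-bit VNI, reserved bits zero."""
    if not 0 <= vni < 1 << 24:
        raise ValueError("VNI is a 24-bit value")
    return struct.pack("!II", 0x08 << 24, vni << 8)

def vxlan_vni(header):
    """Extract the VNI from a VXLAN header."""
    return struct.unpack("!II", header)[1] >> 8
```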

For broadcast, unknown unicast, and multicast (BUM) traffic, the underlay can either use head-end replication or, in newer versions of SD-Access, multicast in the underlay.

End-user devices connected to downstream ports of edge switches don't see any difference from traditional Ethernet networking. The only exception is fabric access points: they must be attached to fabric edge nodes, and VXLAN encapsulation is extended down to the access points.

To deliver a packet, an edge node sends a query to the control plane node to determine the target edge node's IP address (RLOC) using LISP. If a reply is received, the edge node encapsulates the traffic into a VXLAN datagram and sends it directly to the destination node. If the query cannot be resolved, for example, when the destination is not fabric-attached, the traffic is sent to the default border node, which in turn performs a normal route lookup.

Control Plane

Fabric runs multiple control-plane protocols which can be divided into several categories:

  • Underlay network protocols
  • Endpoint ID tracking protocol
  • External to fabric routing protocols
  • WLC-related protocols

Underlay Protocols

The main task of the underlay is to ensure that edge devices can reach each other via their RLOCs, the IP addresses used in the outer VXLAN IP header. SD-Access supports automated provisioning of IS-IS, which is recommended for greenfield deployments. It can, however, be replaced with OSPF or EIGRP with manual configuration.

The other protocol that can be used in the underlay is a multicast routing protocol, which replaces resource- and bandwidth-intensive head-end replication. PIM-SM is the supported protocol.

All switches in the fabric run the underlay protocols. Intermediate routers are similar to P routers in MPLS in that they work only with the outer IP packet headers. Therefore, they don't need to run or understand any of the other protocols described in the next sections.

Endpoint ID tracking

Endpoint IDs are IP and MAC addresses of devices connected to edge nodes. The SD-Access control plane is based on the Locator ID Separation Protocol (LISP).

Each designated control plane node performs LISP Map-Server (MS) and Map-Resolver (MR) roles.

Edge nodes register endpoints by sending a Map-Register message to a control plane node. The Map-Server stores endpoint ID to edge device mappings in the Host Tracking Database (HTDB).

When an edge node needs to find the edge device behind which a specific endpoint is located, it sends a query to the Map-Resolver. After checking the HTDB, the MR sends back the RLOC for the requested endpoint.
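The register/resolve flow can be sketched as a toy model (class and method names are illustrative, mirroring the LISP message names):

```python
class ControlPlaneNode:
    """Toy LISP Map-Server/Map-Resolver with a Host Tracking Database."""

    def __init__(self):
        self.htdb = {}  # endpoint ID (IP or MAC) -> RLOC of its edge node

    def map_register(self, endpoint_id, edge_rloc):
        """An edge node registers a locally learned endpoint."""
        self.htdb[endpoint_id] = edge_rloc

    def map_request(self, endpoint_id, default_border_rloc):
        """Resolve an endpoint to an edge RLOC; an unresolvable endpoint
        is assumed to be outside the fabric and mapped to the border."""
        return self.htdb.get(endpoint_id, default_border_rloc)
```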

Control plane and border node functionality can coexist on the same device and each should be deployed on at least two devices for redundancy.

Figure 2. SD-Access Endpoint ID Tracking

External to fabric routing protocols

Control plane nodes know all endpoints connected to a fabric through the process described above. If an endpoint is not in the HTDB and cannot be resolved, the edge node assumes that it is outside of the fabric and forwards such traffic to the default fabric border node.

Border nodes connect the fabric to external networks and BGP is the recommended protocol to run on the boundary. Border nodes are also responsible for SGT propagation outside of the fabric.

Figure 3. SD-Access External Connectivity

There are 3 types of border nodes in SD-Access:

  • External. The default exit from the fabric, with no specific route injection
  • Internal. A gateway only for a set of networks, such as shared services prefixes
  • Anywhere. A combination of external and internal functionality

With multiple virtual networks overlaid on top of the SD-Access fabric, isolation on the fabric border is achieved with the use of VRFs.

Access to shared services, such as Cisco DNA Center, WLC controllers, and DNS and DHCP servers, is required from both the underlay and the overlay. Such access can be provided by connecting fusion routers to border nodes with VRF-lite. Fusion routers perform route leaking between VRFs to provide reachability to the shared services from the fabric.

WLC-related protocols

Fabric-integrated WLCs run traditional control plane protocols, such as CAPWAP tunneling from APs to the WLC. However, the CAPWAP tunnels are not used for data traffic, and the WLC doesn't participate in user traffic forwarding.

When a client connects to a fabric-enabled access point, the LISP registration process differs from the one described above for wired clients. With fabric APs, registration is not performed by the access point or the edge switch. Instead, the WLC performs proxy registration of the client with the LISP Map-Server, which records it in the HTDB. If a wireless client roams, the WLC ensures that the LISP mapping is updated.

Cisco ACI Concepts

In this blog post, we will explore Cisco ACI fabric components and provide a high-level overview of important Cisco ACI concepts. We will not be looking into configuration workflows, which will be the topic of another post.

ACI (Application Centric Infrastructure) is a multi-tenant data center switching solution built on an intent-based approach.

What is intent-based networking, and how is it different from traditional software-defined networking?

Cisco defines intent-based networking as 3 processes:

  • Translation, or converting business requirements into policies
  • Activation, or transforming a policy into specific configuration applied to a device
  • Assurance, or ensuring that the intent has been realized

Traditional software-defined networking focuses on activation, i.e. orchestration and configuration automation. See the Cisco Viptela SD-WAN post to read about Cisco's SDN approach for the WAN.

Cisco products implement all 3 processes. ACI is responsible for translation and activation, while Cisco Tetration and the Network Assurance Engine are responsible for the assurance aspect.

What are the benefits of implementing Cisco ACI in the data center?

The ACI fabric is centrally managed via a single web-based management interface. ACI also provides an extensive Application Programming Interface (API), so it can be fully automated.

ACI has a multi-tenant design out of the box. It ensures that tenants are separated not only on the data plane but also by providing tenant-specific management capabilities.

Cisco ACI is easy to deploy, as the user doesn't need to understand or configure fabric protocols such as VXLAN, underlay routing protocols, or multicast routing. Provisioning new leaf switches or replacing existing ones is very simple, from discovery to applying template-based configuration.

There are some new concepts and configuration patterns to master, as ACI is rather different from the way traditional switches are configured and operated. However, ACI brings many benefits through centralized, template- and policy-based configuration. For example, consistency across many devices is easily achieved, and known working settings can be re-used when a new device or tenant is introduced.

Cisco ACI Components

The 2 main components of ACI are:

  • Switching Fabric  
  • Controllers

ACI Switching Fabric

The switching fabric is based on a leaf-and-spine topology. Each leaf connects to every spine, with no direct connections between leaves or between spines. Servers, routers for external connectivity, firewalls, and other network devices connect to leaf switches only.

Figure 1. ACI Switching Fabric

With two layers, there is always a single hop between any pair of leaf switches – the spine layer. Throughput can be scaled horizontally by introducing additional spine switches. The inter-switch connections are point-to-point Layer 3 links, so all links can be evenly utilized with Equal-Cost Multipathing (ECMP). The switch fabric uses VXLAN encapsulation, or MAC-in-UDP, with Cisco proprietary extensions. Data plane operation is explained in more detail in the next section.

The Cisco ACI switch portfolio consists of the modular Nexus 9500 and fixed Nexus 9300 families of switches. Not all switches in these families can run in ACI mode: some are NX-OS mode only, and some can run in both modes.

ACI Spine Switches

Important: Always check Cisco website for the latest updates and compatibility information.

Switch model | Description | ACI spine/NX-OS
X9736PQ line card (reached end of sale) | 36 x 40G QSFP+ | ACI spine
X9732C-EX line card | 32 x 100G QSFP28 | Both
X9732C-FX line card (on roadmap) | 32 x 100G QSFP28 | Both
X9736C-FX line card | 36 x 100G QSFP28 | Both
X9336PQ switch (reached end of sale) | 36 x 40G QSFP+ | ACI spine
9332C switch | 32 x 40/100G QSFP28 | Both
9364C switch | 64 x 40/100G QSFP28 | Both
9316D-GX switch | 16 x 400/100G QSFP-DD | Both
93600CD-GX switch (on roadmap) | 28 x 40/100G QSFP28, 8 x 400/100G QSFP-DD |

Table 1. Cisco ACI Spine Switches

The Nexus 9500 family has 3 models of chassis, with 4, 8, and 16 slots for line cards. Each model accepts one or a pair of supervisor cards, a set of fabric modules, and line cards. The fabric modules and line cards determine whether the chassis can run in ACI mode. Currently there are 3 families of line cards:

  • Cisco and merchant ASIC based. Only a single line card, the X9736PQ, supports ACI spine functionality in this family; it is compatible with the C9504-FM, C9508-FM and C9516-FM fabric modules.
  • R-Series (Deep Buffer). This family doesn’t provide ACI support; its line card model names start with X96xx.
  • Cloud Scale ASIC based. This more recent family contains the ACI spine capable X9732C-EX, X9732C-FX (roadmap as of Sep 2019) and X9736C-FX line cards, and the C9504-FM-E, C9508-FM-E, C9508-FM-E2 and C9516-FM-E2 fabric modules.

Fixed Nexus 9300 switches that can also act as spine switches are listed below:

  • 9332C
  • 9364C
  • 9316D-GX
  • 93600CD-GX (roadmap as of Sep 2019)

All of the switches in this list are Cloud Scale based.

ACI Leaf Switches

Leaf switches are all part of the Nexus 9300 family and are built on Cloud Scale technology, with the exception of the 93120TX. The table below shows the available options for ACI leafs.

Switch model | Description | ACI leaf/NX-OS
93120TX | 96 x 100M/1/10GBASE-T, 6 x 40G QSFP+ |
93108TC-EX | 48 x 10GBASE-T, 6 x 40/100G QSFP28 |
93180YC-EX | 48 x 10/25G, 6 x 40/100G QSFP28 |
93180LC-EX | Up to 32 x 40/50G QSFP+, 18 x 100G QSFP28 |
9348GC-FXP | 48 x 100M/1GBASE-T, 4 x 10/25G SFP28, 2 x 40/100G QSFP28 |
93108TC-FX | 48 x 100M/1/10GBASE-T, 6 x 40/100G QSFP28 |
93180YC-FX | 48 x 1/10/25G fiber ports, 6 x 40/100G QSFP28 |
9336C-FX2 | 36 x 40/100G QSFP28 | Both
93216TC-FX2 | 96 x 100M/1/10GBASE-T, 12 x 40/100G QSFP28 |
93240YC-FX2 | 48 x 1/10/25G fiber ports, 12 x 40/100G QSFP28 |
93360YC-FX2 | 96 x 1/10/25G fiber ports, 12 x 40/100G QSFP28 |
(on roadmap) | 16 x 400/100G QSFP-DD | Both
93600CD-GX | 28 x 40/100G QSFP28, 8 x 400/100G QSFP-DD |

Table 2. Cisco ACI Leaf Switches

APIC Controllers

The core of an ACI deployment is the Cisco Application Policy Infrastructure Controller, or APIC. It is the central point for ACI fabric configuration and monitoring.

APIC is a physical appliance based on a Cisco UCS C-Series server. An ACI deployment requires at least 3 APIC controllers forming an APIC cluster. The maximum number of APIC controllers in a cluster is 5.

For fabric management, each APIC is physically connected to 2 different leaf switches, with one of the interfaces active and the second one standby. In addition to these 2 links, out-of-band connections for CIMC and the appliance itself are required.

A virtual APIC controller can be launched on the VMware ESXi hypervisor and is a component of Cisco Mini ACI fabric for small-scale deployments. In Cisco Mini ACI fabric only a single physical APIC is required, while the second and third can be virtualized.

There are 2 APIC configurations currently available – medium and large (the latter for more than 1200 edge ports). The appliance must be ordered using the published part number and not as a C-Series server with matching parameters. The configuration details for each of the options are shown in Table 3.

Part number | APIC-M3 | APIC-L3
CPU | 2 x 1.7 GHz Xeon Scalable 3106/85W 8C/11MB Cache/DDR4 2133MHz | 2 x 2.1 GHz Xeon Scalable 4110/85W 8C/11MB Cache/DDR4 2400MHz
RAM | 6 x 16GB DDR4-2666-MHz RDIMM/PC4-21300 | 12 x 16GB DDR4-2666-MHz RDIMM/PC4-21300
HDD | 2 x 1 TB 12G SAS 7.2K RPM SFF HDD | 2 x 2.4 TB 12G SAS 10K RPM SFF HDD
CNA | Cisco UCS VIC 1455 Quad Port 10/25G SFP28 CNA PCIE | Cisco UCS VIC 1455 Quad Port 10/25G SFP28 CNA PCIE

Table 3. Cisco APIC Controllers

ACI Fabric Operation

ACI Fabric Forwarding Overview

Let’s consider the example topology in the diagram below. The orange links between leafs and spines are Layer 3, so no Layer 2 loops can occur and no Spanning Tree Protocol is required. These links form the underlay network. All data traffic traversing them is VXLAN-encapsulated.

If you capture a packet on any of those links, you will see UDP-encapsulated traffic between loopback interfaces of leaf switches. Such an IP address is called a TEP, for Tunnel End Point. In some scenarios, the destination IP address can also be a multicast address or a spine switch loopback.

This UDP traffic carries, as its payload, the Layer 2 traffic received on a downstream interface. Let’s start with Server A sending an IP packet to Server B and, to simplify our example, assume it already knows the MAC address of Server B. Server A will create a unicast IP packet, pack it into an Ethernet frame and send it to the switch.

The ingress leaf will try to resolve the destination leaf’s TEP IP address. There are several mechanisms available, but let’s assume it knows that Server B is connected to leaf switch #4. It will take the Ethernet frame and pack it into a new VXLAN UDP datagram, with a new IP header using leaf switch #2’s TEP IP as the source and leaf switch #4’s TEP IP as the destination. The encapsulated traffic will be load-balanced via the 2 available spines.

Figure 2. ACI Forwarding
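The encapsulation step can be sketched in Python. The snippet below builds the standard 8-byte VXLAN header from RFC 7348 (a flags byte with the 'I' bit set, then a 24-bit VNID); the VNID value is made up, and ACI's proprietary header extensions are not modeled:

```python
import struct

VXLAN_UDP_PORT = 4789  # IANA-assigned destination port for VXLAN

def vxlan_header(vni: int) -> bytes:
    """RFC 7348 VXLAN header: 8 flag bits, 24-bit VNI, remaining bits reserved."""
    flags = 0x08 << 24           # 'I' bit set: the VNI field is valid
    return struct.pack("!II", flags, vni << 8)

# Hypothetical VNID for the servers' bridge domain; the original Ethernet
# frame from Server A would follow these 8 bytes inside the outer UDP datagram.
hdr = vxlan_header(0x5F1D)
```

The outer IP header (leaf #2's TEP as source, leaf #4's TEP as destination) and the outer UDP header are prepended in front of this header on the wire.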

Underlay Protocols

In ACI terminology, the underlay – the set of orange links in the diagram above – is called the Infra VRF. The IP addresses in the underlay are isolated and not exposed to tenants. In contrast, the data traffic between servers and clients is transferred in overlay networks. It is similar to how VPNs are built over the Internet, or Layer 3 VPNs over an MPLS network.

The orange links in Figure 2 run a link-state routing protocol – IS-IS. Its main purpose is to provide reachability between Tunnel End Points (TEPs). This is similar to how a VXLAN network is built on Nexus switches running NX-OS, which can use OSPF as the routing protocol instead.

Unlike a VXLAN EVPN setup, ACI doesn’t run EVPN with BGP to distribute endpoint reachability information. Instead, COOP (Council of Oracle Protocol) is responsible for endpoint information tracking and resolution. MP-BGP, however, is still used to propagate routing information that is external to the fabric.

Cisco ACI Basic Concepts

Cisco introduced many new terms with ACI. All configuration constructs and their interactions are documented in the ACI policy model. Each construct is represented by a Managed Object (MO), and together they form a hierarchical Management Information Tree (MIT).

Figure 3 displays a partial view of the MIT. Policy Universe on the left is the root. Solid lines represent containment and dotted lines represent association. For example, the Tenant class contains one or more Bridge Domain instances, and a Bridge Domain is associated with a VRF.

As this post is introductory, we will review some of the terms relevant to how the fabric works. There are also important terms around how the fabric is configured; however, these will be covered in another post.

Figure 3. ACI Management Information Tree


Tenants

A tenant is a logical grouping of various policies. It can be a customer or a department within your organization. By creating different tenants, you gain the ability to delegate management of tenant-specific settings.

There are 3 built-in tenants: Infra, Common and Management. The Infra tenant is responsible for the fabric underlay, the Common tenant hosts resources that are shared between other tenants, and the Management tenant is used for in-band and out-of-band configuration.


VRFs

A Virtual Routing and Forwarding instance, or VRF, has the same meaning as in a traditional network: it is a Layer 3 routing domain. Isolation is achieved by keeping routing information separate.

For example, 2 different VRFs can both define the same network, just as if each had its own dedicated, non-connected physical network and routers. By default, VRFs cannot communicate with each other.

You can export, or leak, some of the routes between VRFs, but in this case you need to ensure that the networks don’t have overlapping subnets.

A tenant can have a single or multiple VRFs.
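The overlap caveat above can be illustrated with Python's ipaddress module; the VRF contents and prefixes here are made-up examples:

```python
import ipaddress

# Two isolated VRFs may define the same prefix without any conflict
vrf_red = [ipaddress.ip_network("10.10.0.0/24")]
vrf_blue = [ipaddress.ip_network("10.10.0.0/24")]

def safe_to_leak(prefix, target_vrf):
    """Leaking a route is only unambiguous if it overlaps nothing in the target VRF."""
    route = ipaddress.ip_network(prefix)
    return not any(route.overlaps(net) for net in target_vrf)

safe_to_leak("192.168.1.0/24", vrf_blue)   # no overlap: safe to leak
safe_to_leak("10.10.0.128/25", vrf_blue)   # collides with an existing subnet
```

The same check is what a network designer performs, usually on paper, before configuring route leaking between tenant VRFs.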

Bridge Domains and Subnets

A bridge domain is a Layer 2 flood domain. A VLAN in a traditional network is also a Layer 2 flood domain, so you might be wondering why not keep the same term. One of the reasons is that the fabric uses VXLAN IDs to differentiate Layer 2 networks from each other. VLAN IDs can be re-used and even overlap between different ports in recent versions of ACI software, so they cannot be used as fabric-wide identifiers for a specific Layer 2 domain.

A bridge domain requires association with a VRF and can contain one or more subnets. It is possible to assign multiple subnets to a single bridge domain (the analogy is a secondary address on an SVI), or a one-to-one relationship between bridge domain and subnet can be established.

Adding a subnet to a bridge domain and enabling unicast routing creates a routed interface, or SVI, in that subnet. In ACI, all leafs use the same SVI IP address as the default gateway for the subnet. This functionality is called pervasive gateway (or anycast gateway) and optimizes Layer 3 processing efficiency, as routing is distributed across all leafs without the need for a central device to perform routing.

Application Profiles and EPGs

Application Profiles are containers for Endpoint Groups. An EPG is a logical group of endpoints and one of the main components of the ACI policy model. Endpoints include physical servers, virtual machines and other network-connected devices.

EPG membership can be statically configured, for example, based on a specific port and a VLAN on it. Or it can be based on a VM NIC’s port group membership via dynamic negotiation with a Virtual Machine Manager.

The policies in ACI are applied to EPGs and, by default, each EPG is isolated from other EPGs.


Contracts

If EPG A needs to access services provided by EPG B, then EPG A is called the consumer and EPG B is called the provider. The default behavior in ACI is to block all inter-EPG traffic, so a contract must be defined to facilitate this communication.

A contract consists of subjects, which in turn contain lists of filters. Filters are similar to access-lists and contain entries that match the traffic.

Contracts are directional and differentiate between traffic going from consumer to provider and traffic in the reverse direction.
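As a toy model only – the EPG names and the dictionary structure below are illustrative, not APIC's actual object schema – the consumer/provider contract logic can be sketched as:

```python
# Hypothetical contract: EPG-App (consumer) may reach EPG-Web (provider) on web ports
contract_web = {
    "consumer": "EPG-App",
    "provider": "EPG-Web",
    "filters": [("tcp", 80), ("tcp", 443)],  # filter entries, like access-list lines
}

def allowed(src_epg, dst_epg, proto, port, contracts):
    """Inter-EPG traffic is implicitly denied unless a contract between the EPGs matches it."""
    return any(
        (src_epg, dst_epg) == (c["consumer"], c["provider"]) and (proto, port) in c["filters"]
        for c in contracts
    )

allowed("EPG-App", "EPG-Web", "tcp", 443, [contract_web])  # permitted by the contract
allowed("EPG-App", "EPG-DB", "tcp", 443, [contract_web])   # no contract: default deny
```

Note the default-deny stance: with no matching contract the function returns False, mirroring ACI's allow-list behavior between EPGs.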

Access Policies

Access policies control the configuration of interfaces connecting to physical servers, routers and hypervisors. Objects living under Access Policies include:

  • Pools of IDs – groupings of VLANs, VXLAN IDs and multicast addresses
  • Domains and their types, which define how devices are connected to leaf switches; for example, a physical domain is used for bare-metal servers and a VMM domain is used for integration with hypervisors
  • Interface Policies, Policy Groups and Profiles. A policy controls a specific setting of an interface; policies are grouped together for use in a profile along with an interface selector
  • Switch Policies, Policy Groups and Profiles. These objects control switch-level configuration; by associating Interface Profiles with Switch Profiles, interface settings can be applied to a specific leaf switch

Fabric Policies

Fabric policies and the objects under them control internal fabric interface and protocol configuration. For example, a parameter such as fabric MTU is defined by the Global Fabric policy, while SNMP, date and time parameters are specified by Pod Profiles.

Reference Materials

Cisco ACI Policy Model

Cisco ACI Policy Model Guide

AWS Networking Introduction – Part 1

In this article, we introduce basic AWS Networking Concepts, such as Subnets, Route Tables, Elastic IPs, and Internet Gateways.

VPCs and CIDR Blocks

A Virtual Private Cloud (VPC) is an isolated network within AWS that spans all Availability Zones (an AZ is a physical data center) in a region. A single AWS account can have several VPCs in the same or different regions.

A VPC is assigned an IPv4 prefix (CIDR block) with a length between /16 and /28. Additional secondary IPv4 prefixes can be allocated if required.
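The prefix-length rule can be expressed with Python's ipaddress module (the CIDR blocks below are arbitrary examples):

```python
import ipaddress

def valid_vpc_cidr(cidr: str) -> bool:
    """AWS accepts a VPC IPv4 CIDR block with a prefix length between /16 and /28."""
    prefixlen = ipaddress.ip_network(cidr).prefixlen
    return 16 <= prefixlen <= 28

valid_vpc_cidr("10.0.0.0/16")  # largest allowed block
valid_vpc_cidr("10.0.0.0/8")   # too large: shorter than /16 is rejected
```

A /16 gives 65,536 addresses to carve into subnets, while a /28 leaves only 16, so most designs start near the /16 end.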

There is no default communication between VPCs, even if they belong to the same account.

Figure 1. AWS Virtual Private Cloud (VPC)

What is an AWS Subnet and a Route Table?

A subnet is a part of the VPC’s IP CIDR block and has instances’ network interfaces attached to it. A subnet always belongs to a single Availability Zone.

Figure 2. AWS Subnets

All subnets within the VPC can communicate with each other directly. Routing is done by the implicit AWS VPC router, which is allocated the first IP address in each subnet’s range. The second IP address in each subnet is reserved for the AWS DNS server, and the third address is reserved for future use.
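The reserved addresses for any given subnet can be computed with Python's ipaddress module; 10.0.1.0/24 is a made-up example subnet, and AWS also reserves the network and broadcast addresses:

```python
import ipaddress

def aws_reserved_addresses(cidr: str) -> dict:
    """The five addresses AWS reserves in every subnet."""
    net = ipaddress.ip_network(cidr)
    first = net.network_address
    return {
        "network": str(first),                     # the network address itself
        "vpc_router": str(first + 1),              # implicit VPC router
        "dns": str(first + 2),                     # AWS DNS server
        "future": str(first + 3),                  # reserved for future use
        "broadcast": str(net.broadcast_address),   # broadcast is reserved but unused
    }

aws_reserved_addresses("10.0.1.0/24")
```

So in a /24 subnet only 251 of the 256 addresses are assignable to instances.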

When you create a VPC, the main route table is created and all subnets are associated with it unless you override this configuration manually.

In the diagram below, you can see that the route table has a single route for the VPC’s CIDR block, which is marked as local. Every route table will have this route automatically installed, and no route table within the VPC can contain this route (or its components) as a propagated or statically configured entry. Therefore, don’t allocate IP prefixes from a VPC CIDR block outside of the VPC.

Local routes use implicit AWS VPC router, which is a virtual router that has an interface in every subnet and cannot be bypassed for subnet-to-subnet communication.

Figure 3. AWS Route Table

How does routing work in AWS?

With the configuration performed so far, there is no connectivity outside of the VPC. There is a single route in the main routing table that allows connectivity between subnets.

Traditional IP routing is based on the ability to direct traffic to the next-hop router using the encapsulation appropriate for the link. In Ethernet, the MAC address of the next-hop router is carried in a frame and is resolved using a broadcast-based protocol, ARP. AWS routing doesn’t work in the same way.

AWS doesn’t support broadcast (or multicast), and forwarding to the first hop (or default gateway) is not based on MAC address forwarding. To explain this concept, let’s launch 2 Windows-based instances (any version will do). On the client VM, we will configure the default gateway to point at the server instance.

Figure 4. AWS Routing From a Workstation (non-working)

By default, the network adapter of an instance will not accept packets not destined to one of its addresses. To disable this behavior, the Source/dest check attribute is set to false in the network interface properties of the server. Connectivity to external destinations outside of the VPC range is then tested using the tracert command. The expected behavior in traditional routing is for the first hop to reply; however, this didn’t happen.

In-instance routing, when there are multiple interfaces, only identifies the correct egress network adapter. Once the packet is transmitted and is on the subnet, the AWS route table is responsible for the forwarding. In our example, the default gateway can be configured to be any address on the subnet.

What can be a next-hop for a route in the AWS route table?

In contrast to traditional routing, an arbitrary IP address cannot be specified as the next hop. The available options are displayed in the screenshot below.

Figure 5. Route Table Next-Hop Options

To fix the issue presented in the previous example, we will select the network interface (ENI) attached to the server instance using its resource ID. Now tracert returns the correct first hop.

The final configuration of route table is shown in the figure below.

Figure 6. Adjusted Main Route Table

After introducing this default route, every virtual instance in every subnet within the VPC will send all its traffic to the server, as per the main routing table configuration. What if this behavior is not desired?

What is the use case for using additional AWS route tables?

A subnet can be associated with a single route table. If a specific subnet requires different forwarding compared to other subnets, a dedicated route table can be created and associated with this subnet. For the next example, we will remove the default route from the main route table, so no Internet access will be available from the subnets in the VPC. The web server’s subnet, however, requires access to and from the Internet.

As this example is about routing, we assume that the Internet Gateway and Elastic IPs are pre-configured; these concepts will be covered later. For now, a new route table is created and the required subnet is explicitly associated with it. Then the default route with a next-hop of the Internet Gateway (IGW) is configured. As a result, only 1 subnet will be able to communicate with the Internet.

Figure 7. AWS VPC Additional Route Table

Internet Gateway and Elastic IPs

The Internet Gateway (IGW) provides access to the Internet from a VPC. It is attached to the VPC and can be accessed from all subnets in it. It is a highly available component, and AWS ensures that it stays up even if one of the AZs fails. As shown in the previous section, to provide access to the Internet from a subnet, a default route is required which uses the IGW as the gateway.

The other requirement is a publicly routed IP address. AWS performs NAT (Network Address Translation), and all hosts within VPCs use private IP addresses. Publicly routed IP addresses are mapped 1-to-1 to an instance’s private IP address, as shown in the figure below. The IGW is responsible for the address translation.

Figure 8. Elastic IPs and NAT

What is an Elastic IP and how is it different from a Public IP?

An AWS Public IP can be assigned to your instance during provisioning. It maps to the primary IP address of the primary interface (eth0) of the instance. It is dynamic: it is assigned from the AWS pool and is returned back to the pool when the instance is powered off.

An Elastic IP, on the other hand, is allocated to your account and remains reserved even if the instance is powered off.

Is it possible to assign multiple publicly routable IP addresses to a single interface?

Yes, this is done by assigning several secondary private IP addresses to the network interface. Each of the Elastic IPs is then associated with a secondary IP address.

vCenter Server 6.7 Installation and Configuration

vCenter Appliance (vCSA) Deployment

vCSA is a virtual machine and can be deployed on ESXi hosts running version 5.5 or later. Depending on the size of the vSphere deployment and whether you plan to install the vCenter appliance into an existing environment or start a new one, you have the option to install vCSA with an embedded or external Platform Services Controller. This article provides information about how these components work together (link).

Let’s start with the simple option of deploying vCenter with an embedded PSC. The vCSA distribution media is an ISO file named in the following format: VMware-VCSA-all-<version>-<build-number>.iso. To start the installation, mount this file on a workstation running Windows, Linux or macOS. The root of the folder contains a readme.txt file explaining the different installation options.

Read More

vSphere 6.7 ESXi Host Installation and Configuration


To install an ESXi host, you will need to verify that the hardware meets the minimum requirements. The server platform also must be supported and listed in the VMware Compatibility Guide (link). You most likely will be able to install ESXi on unsupported hardware; however, this should be done only for non-production environments, as VMware will not provide support for such an installation.

A server running ESXi 6.7 requires at least 2 CPU cores, 4GB of RAM, a Gigabit network adapter and, if a local disk is to be used for boot, at least 5.2 GB of disk space (or 4 GB for USB or SD boot). The NX/XD bit for the CPU and hardware virtualization support must be enabled in BIOS.

Download the ESXi installation file from the VMware website (the filename is in VMware-VMvisor-Installer-<version>-<build>.x86_64.iso format).

Read More

vSphere 6.7 Editions, Licensing, Architecture and Solutions

Check our new post on vSphere 7.0 Editions.

vSphere 6.7 Editions and Licensing

VMware vSphere 6.7 licensing is based on the physical CPU count of the hosts. Every edition requires the purchase of a Support and Subscription contract.

A license key has the edition and quantity information encoded in it. These keys are not tied to specific hardware and can be assigned to multiple hosts, as long as the number of CPUs is within the licensed limit.

vSphere customers with current contracts are entitled to version upgrades. vSphere licenses can also be downgraded to previous versions.

In vSphere 6.7 there are three editions available:

  • Standard
  • Enterprise Plus
  • Platinum
Read More

Configure SNMP on Cisco Devices


SNMP Overview

SNMP (Simple Network Management Protocol) defines communication and message format between network management stations and agents.

Every managed network element, such as a router, switch, or host is running a management agent. Its function is to retrieve and modify operational variables’ values as requested by network management stations.

This article contains information on how to enable SNMP agents on different Cisco devices, including IOS, IOS-XE, and NX-OS-based.

SNMPv1/SNMPv2c Configuration

SNMPv1 and SNMPv2c use the same security mechanism, based on communities transmitted in clear text. They are still used in some networks; however, SNMPv3 should be used in new deployments.

I will start with SNMPv1 and SNMPv2 configuration first. SNMPv3 configuration will be shown in the later sections.

I’m using 3 different types of devices in this demonstration: Classic IOS, IOS-XE, and NX-OS. The community string is the only required configuration, and it is the same for SNMPv1/v2c on all platforms, with slightly different keyword options on NX-OS.

Classic IOS (Cisco 1940)

You can specify whether the community string is for read-only or read-write access, as well as an access list to control which management stations are allowed to query the device. All options except for the community string are optional, with read-only access being the default if none is specified. You can enter more than one community string, as the command doesn’t overwrite the previous community value.

C1940(config)#snmp-server community FastRerouteRO ?
<1-99> Std IP accesslist allowing access with this community
<1300-1999> Expanded IP accesslist allowing access with this
community string
WORD Access-list name
ipv6 Specify IPv6 Named Access-List
ro Read-only access with this community string
rw Read-write access with this community string
view Restrict this community to a named MIB view

C1940(config)#snmp-server community FastRerouteRO ro
C1940(config)#snmp-server community FastRerouteRW rw


IOS-XE (CSR1000V)

IOS-XE has the same options and keywords as Classic IOS:

CSR1000V(config)#snmp-server community FastRerouteRO ?
<1-99> Std IP accesslist allowing access with this community
<1300-1999> Expanded IP accesslist allowing access with this
community string
WORD Access-list name
ipv6 Specify IPv6 Named Access-List
ro Read-only access with this community string
rw Read-write access with this community string
view Restrict this community to a named MIB view

CSR1000V(config)#snmp-server community FastRerouteRO ro
CSR1000V(config)#snmp-server community FastRerouteRW rw

NX-OS (Nexus 9000V)

N9K-1(config)# snmp-server community FastRerouteRO ?

group Group to which the community belongs
ro Read-only access with this community string
rw Read-write access with this community string
use-ipv4acl Specify IPv4 ACL, the ACL name specified
after must be IPv4 ACL.
use-ipv6acl Specify IPv6 ACL, the ACL name specified
after must be IPv6 ACL.

N9K-1(config)#snmp-server community FastRerouteRO ro
N9K-1(config)#snmp-server community FastRerouteRW rw

NMS Configuration

To test the configuration I will be using a great free application called SnmpB (link). For each device, you will require an Agent Profile. Press the Tools button as shown in Figure 1.

Figure 1. SnmpB User Interface
Figure 2. SnmpB Agent Profile Configuration

I’ve created a profile for each of the 3 devices. The settings are shown in Figure 3.

My Cisco 1940 router is configured with the SNMP community FastRerouteRO, as shown in Figure 4.

Figure 3. SnmpB Agent Profile General Settings
Figure 4. Agent Profile SNMPv1/v2c Settings

Once the profiles are configured, let’s test a simple get request for the device uptime. We need to request (using SNMP GET) the value of an object that represents device uptime. Any object in SNMP has a unique identifier (OID), and its format and description are defined in a MIB.

What is MIB and OID?

As per RFC1155 (link) – “Managed objects are accessed via a virtual information store, termed the Management Information Base or MIB… Each type of object (termed an object type) has a name, a syntax, and an encoding. The name is represented uniquely as an OBJECT IDENTIFIER. An OBJECT IDENTIFIER is an administratively assigned name.”

MIB describes a set of objects, including their identifiers, expected reply format, and if values are read-only or can be changed.

For example, MIB-II has the following definition for interface description:

Figure 5. SNMP Interface Description Object

A network device usually supports a standard-based MIB, such as MIB-II (link), as well as vendor-proprietary MIBs. Most NMS have pre-loaded modules for standard MIBs. Import is required to support vendor-specific extensions.

An Object Identifier (OID) is written in dotted notation starting with the top-level node. The object hierarchy has an unlabelled root. Under the root, there are 3 allocated child nodes: ccitt (0), iso (1), and joint-iso-ccitt (2).

ISO has a subtree for other organizations, org (3), with its child node (6) assigned to the US Department of Defense (DoD). DoD in turn allocated a node (1) to the Internet Activities Board (IAB).
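Putting the standard node numbers together yields the OID that is polled for uptime later in this article; a short Python sketch:

```python
# Standard MIB-II assignments down the tree from the unlabelled root to sysUpTime
OID_PATH = [
    ("iso", 1), ("org", 3), ("dod", 6), ("internet", 1),
    ("mgmt", 2), ("mib-2", 1), ("system", 1), ("sysUpTime", 3),
]

sys_uptime_oid = ".".join(str(num) for _, num in OID_PATH)
# An SNMP GET polls instance ".0" of this scalar object: sys_uptime_oid + ".0"
```

This also shows why the internet subtree (1.3.6.1) is the common prefix of practically every OID a network device exposes.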

SNMPv2 Testing

To test, expand the MIB tree and navigate to the sysUpTime object. Note that the Node Info window displays detailed information about the selected object. Right-click on sysUpTime and then select Get.

Figure 6. Get Request for sysUpTime

Figure 7 shows the uptime of the Cisco 1940 router.

Figure 7. Reply for sysUpTime (Cisco 1940)

Figures 8 and 9 show the uptime of the Nexus 9000V and the CSR. To poll different devices, select the corresponding entry in the drop-down box called Remote SNMP Agent.

Figure 8. Reply for sysUpTime (Nexus 9000V)
Figure 9. Reply for sysUpTime (CSR1000)

SNMPv3 Configuration

SNMPv3 defines the User-based Security Model (USM) with the ability to authenticate and encrypt communication between agents and monitoring stations. There are 3 security levels, listed below with the weakest first:
• noAuthNoPriv (no authentication or encryption)
• authNoPriv (authentication only)
• authPriv (authentication and encryption)

A minimal SNMPv3 configuration requires 2 components: a group and a user.

Note: There are some interoperability issues between Cisco IOS and IOS-XE devices and SnmpB when AES192 and AES256 are used, so AES128 is configured instead in all examples. SNMP debugs (debug snmp detail and debug snmp packets) produce the following error with AES192 and AES256:

*Dec 26 02:47:55.691: SNMP: Packet received via UDP from on GigabitEthernet1no such type in ParseType (152) (0x98)
ParseSequence, Unexpected type: FFFFFFFFFFFFFFFF
SrParseV3SnmpMessage: ParseSequence:
SrParseV3SnmpMessage: Failed.
SrDoSnmp: ASN Parse Error
*Dec 26 02:47:58.693: SNMP: Packet received via UDP from on GigabitEthernet1no such type in ParseType (152) (0x98)
ParseSequence, Unexpected type: FFFFFFFFFFFFFFFF
SrParseV3SnmpMessage: ParseSequence:
SrParseV3SnmpMessage: Failed.
SrDoSnmp: ASN Parse Error

Classic IOS (Cisco 1940)

C1940(config)#snmp-server group SNMP-Group v3 ?                                                      
auth group using the authNoPriv Security Level
noauth group using the noAuthNoPriv Security Level
priv group using SNMPv3 authPriv security level

C1940(config)#snmp-server group SNMP-Group v3 priv
C1940(config)#snmp-server user SNMP-Admin SNMP-Group v3
auth sha FastReroute priv aes 128 FastReroute

Note: SNMP users are not stored as part of the running or startup configuration, so the second line will not be visible via “show running-config“.

SnmpB requires the configuration of an SNMPv3 user. To access the configuration settings, click Options > Manage SNMPv3 USM Profile. Once the USM profile window opens, right-click on a blank space in the list of profiles and select “New USM profile”. I’ve configured the username and security parameters to match the ones we configured on the router earlier. See Figures 9 and 10 for details.

Figure 9. SnmpB: SNMP User Configuration
Figure 10. SnmpB: SNMP User Configuration – 2

Go back to our device profiles, as shown in Figure 1. Select SNMPv3 as the supported version and choose the corresponding Security Name and Level as shown in Figures 11 and 12.

Figure 11. SnmpB: Enable SNMPv3
Figure 12. SnmpB: Enable SNMPv3 – 2

Let’s try to poll the Cisco 1940 to confirm that we can still access uptime information, as shown in Figure 13.

Figure 13. SnmpB: Poll Uptime with SNMPv3 Enabled (Cisco 1940)


IOS-XE (CSR1000V)

IOS-XE is configured identically to Classic IOS:

CSR1000V(config)#snmp-server group SNMP-Group v3 priv
CSR1000V(config)#snmp-server user SNMP-Admin SNMP-Group v3
auth sha FastReroute priv aes 128 FastReroute
Figure 14. SnmpB: Poll Uptime with SNMPv3 Enabled (CSR1000)

NX-OS (Nexus 9000V)

The Nexus 9000V minimal configuration consists of a single command, as SNMP groups in NX-OS are replaced by roles for Role-Based Access Control, and by default new users are assigned network-operator permissions. As a side effect, by default SNMP users are able to log in to the switch CLI with access to all show commands.

Note that there is no group option under snmp-server. Use the “role” set of commands instead; roles can then be referenced as groups in SNMP.

N9K-1(config)# snmp-server ?
aaa-user Set duration for which aaa-cached snmp user
community Set community string and access privs
contact Modify sysContact
context SNMP context to be mapped
counter Configure port counter configuration
drop Silently drop unknown v3 user packets
enable Enable SNMP Traps
engineID Configure a local SNMPv3 engineID
globalEnforcePriv Globally enforce privacy for all the users
host Specify hosts to receive SNMP notifications
location Modify sysLocation
mib Mib access parameters
packetsize Largest SNMP packet size
protocol Snmp protocol operations
source-interface Source interface to be used for sending out SNMP
system-shutdown Configure snmp-server for reload(2)
tcp-session Enable one time authentication for snmp over tcp
user Define a user who can access the SNMP engine

You can assign the user to a group by typing the group name straight after the username.

N9K-1(config)# snmp-server user SNMP-Admin ?

WORD Group name (ignored for notif target user) (Max Size
auth Authentication parameters for the user
enforcePriv Enforce privacy for the user
use-ipv4acl Specify IPv4 ACL, the ACL name specified after must be
use-ipv6acl Specify IPv6 ACL, the ACL name specified after must be

N9K-1(config)# snmp-server user SNMP-Admin auth
sha FastReroute priv aes-128 FastReroute

NX-OS also creates a normal user in addition to the SNMP user. Both users are stored in the running configuration.

N9K-1(config)# show run | incl SNMP 

username SNMP-Admin password 5 #password# role network-operator

snmp-server user SNMP-Admin network-operator auth sha #password# priv aes-128 #password# localizedkey
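The localizedkey keyword means the hashes stored in the configuration are not the raw passphrases but keys already localized to this switch's SNMP engineID, as defined in RFC 3414. A minimal sketch of the localization algorithm using only the Python standard library (the engineID value is taken from the CSR1000V example in this article and is illustrative here):

```python
import hashlib

def localize_key(passphrase: bytes, engine_id: bytes) -> bytes:
    """Derive a localized SHA-1 key per RFC 3414, section A.2.2."""
    # Step 1: stretch the passphrase to 1 MiB and hash it (Ku).
    stretched = (passphrase * (1048576 // len(passphrase) + 1))[:1048576]
    ku = hashlib.sha1(stretched).digest()
    # Step 2: bind Ku to the authoritative engineID (Kul).
    return hashlib.sha1(ku + engine_id + ku).digest()

engine_id = bytes.fromhex("800000090300000C29B86282")
print(localize_key(b"FastReroute", engine_id).hex())
```

Because the key is bound to the engineID, the same passphrase produces a different stored hash on every device.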

Let’s test that we can poll N9K using SNMPv3.

Figure 15. SnmpB: Poll Uptime with SNMPv3 Enabled (Nexus 9000V)

SNMP show commands

Classic IOS (Cisco 1940) and IOS-XE (CSR1000V)

Devices keep track of which objects were polled and associated timestamps, as shown in the listings below.

CSR1000V#show snmp stats oid 

time-stamp #of times requested OID
03:27:46 UTC Dec 21 2018 6 sysUpTime
09:54:49 UTC Dec 18 2018 3 system.6
09:54:46 UTC Dec 18 2018 3 system.4
09:53:49 UTC Dec 18 2018 2 system.5
09:53:49 UTC Dec 18 2018 2 system.1
11:27:41 UTC Dec 17 2018 1 sysOREntry.3
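The system.N rows in this output are objects under the MIB-II system group (1.3.6.1.2.1.1, RFC 1213); a small lookup table is enough to decode them:

```python
# MIB-II system group (RFC 1213), OID prefix 1.3.6.1.2.1.1
SYSTEM_GROUP = {
    1: "sysDescr",
    2: "sysObjectID",
    3: "sysUpTime",
    4: "sysContact",
    5: "sysName",
    6: "sysLocation",
    7: "sysServices",
}

def decode(oid: str) -> str:
    """Map a 'system.N' style OID from the stats output to its object name."""
    group, _, index = oid.partition(".")
    if group == "system" and index.isdigit():
        return SYSTEM_GROUP.get(int(index), oid)
    return oid

print(decode("system.6"))  # sysLocation
print(decode("system.4"))  # sysContact
```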

To get the list of SNMP groups use the “show snmp group” command. Note that SNMPv1 and SNMPv2c also have groups; as there is no concept of users in these versions, the groups are named after the community string. SNMP views, which are not covered in this article, restrict access to specific OIDs or subtrees.

CSR1000V#show snmp group
groupname: ILMI security model:v1
contextname: storage-type: permanent
readview : *ilmi writeview: *ilmi
row status: active

groupname: ILMI security model:v2c
contextname: storage-type: permanent
readview : *ilmi writeview: *ilmi
row status: active

groupname: SNMP-Group security model:v3 priv
contextname: storage-type: nonvolatile
readview : v1default writeview:
row status: active

groupname: FastRerouteRO security model:v1
contextname: storage-type: permanent
readview : v1default writeview:
row status: active

groupname: FastRerouteRO security model:v2c
contextname: storage-type: permanent
readview : v1default writeview:
row status: active

groupname: FastRerouteRW security model:v1
contextname: storage-type: permanent
readview : v1default writeview: v1default
row status: active

groupname: FastRerouteRW security model:v2c
contextname: storage-type: permanent
readview : v1default writeview: v1default
row status: active

To get the list of SNMP users use the “show snmp user” command. As SNMPv3 users are not displayed in the running configuration on IOS/IOS-XE, this command is the only way to check them.

CSR1000V#show snmp user
User name: SNMP-Admin
Engine ID: 800000090300000C29B86282
storage-type: nonvolatile active
Authentication Protocol: SHA
Privacy Protocol: AES128
Group-name: SNMP-Group

NX-OS (Nexus 9000V)

N9K-1# show snmp oid-statistics 

SNMP OID Stats -
Object ID    Min (ms)   Max (ms)   Avg (ms)   Max Access TS              Last-polled NMS   Poll Count
iso.         <1         <1         <1         02:33:25:515 Dec 21 2018                     1

In addition to OID statistics, NX-OS also provides a show command that displays statistics per management station (NMS).

N9K-1# show snmp nms-statistics 

- SNMP NMS OID Stats -
NMS IP Address                             GET   GETNEXT   GETBULK   SET   First Poll                 Last Poll
----------------------------------------   1     0         0         0     02:33:25:515 Dec 21 2018   02:33:25:515 Dec 21 2018

To get the list of SNMP groups use the “show snmp group” command. Its output is the same as the “show role” command would produce.

N9K-1(config)# show snmp group 

Role: aaa-db-admin
Description: Predefined AAA DB admin, has no cli permissions. Allows RESTful A

Rule Perm Type Scope Entity

1 permit read-write

#some output omitted

Role: network-admin
Description: Predefined network admin role has access to all commands
on the switch

Rule Perm Type Scope Entity

1 permit read-write

Role: network-operator
Description: Predefined network operator role has access to all read
commands on the switch

Rule Perm Type Scope Entity

1 permit read

#some output omitted

To get the list of SNMP users use the “show snmp user” command. The admin user is automatically enabled as an SNMP user, as NX-OS keeps a single store of users and roles.

N9K-1(config)# show snmp user

User Auth Priv(enforce) Groups acl_filter
_ __ _ ___
admin md5 des(no) network-admin
SNMP-Admin sha aes-128(no) network-operator

NOTIFICATION TARGET USERS (configured for sending V3 Inform)

User Auth Priv
_ ___

SNMP debug commands

Classic IOS (Cisco 1940) and IOS-XE (CSR1000V)

Two commands that show whether there is communication with the NMS are “debug snmp detail” and “debug snmp packets”. Below is the output generated when a simple SNMP Get request is performed.

CSR1000V#debug snmp detail
SNMP Detail Debugs debugging is on
CSR1000V#debug snmp packets
SNMP packet debugging is on
CSR1000V#terminal monitor
*Dec 26 23:41:59.539: SNMP: Packet received via UDP from on GigabitEthernet1
SrParseV3SnmpMessage: Failed..

*Dec 26 23:41:59.539: SNMP: Get request, reqid 1062, errstat 0, erridx 0
sysUpTime.0 = NULL TYPE/VALUE
SrDoSnmp: received get pdu
CheckClassMIBView: all included
CheckMIBView: OID is in MIB view.

*Dec 26 23:41:59.539: SNMP: Response, reqid 1062, errstat 0, erridx 0
sysUpTime.0 = 305892
*Dec 26 23:41:59.540: SNMP: Packet sent via UDP to
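The sysUpTime value in the response is expressed in TimeTicks, i.e. hundredths of a second; converting the raw 305892 above into the readable form an NMS displays is straightforward:

```python
def timeticks(ticks: int) -> str:
    """Render SNMP TimeTicks (1/100 s) as h:mm:ss.cc."""
    centiseconds = ticks % 100
    seconds = ticks // 100
    hours, rem = divmod(seconds, 3600)
    minutes, secs = divmod(rem, 60)
    return f"{hours}:{minutes:02d}:{secs:02d}.{centiseconds:02d}"

print(timeticks(305892))  # 0:50:58.92
```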

NX-OS (Nexus 9000V)

In NX-OS, use “debug snmp pkt-dump”, which is similar to the commands shown above for IOS/IOS-XE. Below is the output generated when a simple SNMP Get request is performed.

N9K-1# debug snmp pkt-dump  
2018 Dec 27 11:45:07.929429 snmpd: 1063.000000:iso. = NULL SNMPPKTEND
2018 Dec 27 11:45:07.929489 snmpd: SNMPPKTSTRT: 3.000000 160 1063.000000 393237.000000 0.000000 0.000000 0 4 3 3 0 0 remote ip,v4: snmp_54789_172.16.17.75 \200 11 0 \200 11 SNMP-Admin 10 0 0 0x11e950d4 90
2018 Dec 27 11:45:07.929560 snmpd: 1063.000000:iso. = Timeticks: (339820) 0:56:38.20 SNMPPKTEND
2018 Dec 27 11:45:07.929577 snmpd: SNMPPKTSTRT: 3.000000 162 1063.000000 393237.000000 0.000000 0.000000 0 4 3 3 0 0 remote ip,v4: snmp_54789_172.16.17.75 \200 11 0 \200 11 SNMP-Admin 10 0 0 0x11e950d4 90