vSphere 7.0 Editions

Editions and Packaging

vSphere 7 has been available since April 2020. It ships in the same editions as vSphere 6.7, except for the vSphere Platinum edition. As with vSphere 6.7, a support and subscription (SnS) contract is required. See our article on 6.7 editions here.

vSphere Standard and vSphere Enterprise Plus are the two main editions. Standard edition offers a base set of features, and Enterprise Plus extends the feature set with advanced capabilities.

VMware announced the end of availability (EOA) of the vSphere Platinum edition. AppDefense SaaS is now available as a standalone offering. See the details here.

vSphere 7 introduces an add-on for Kubernetes, which allows deploying containerized applications. It is available as part of VMware Cloud Foundation.

Licensing is CPU count-based. However, VMware updated the per-CPU pricing model in March 2020. Since this update, each physical CPU requires one license per 32 cores, so CPUs with more than 32 cores require additional licenses. Detailed information is available here.
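To make the per-core cap concrete, here is a quick back-of-the-envelope sketch of the updated model. The helper below is our own illustration, not a VMware tool:

```python
import math

def licenses_required(cpus: int, cores_per_cpu: int) -> int:
    """Licenses needed under the March 2020 per-CPU model:
    one license per CPU for every 32 cores, rounded up."""
    return cpus * math.ceil(cores_per_cpu / 32)

# A dual-socket host with 24-core CPUs still needs 2 licenses,
# while the same host with 48-core CPUs needs 4.
print(licenses_required(2, 24), licenses_required(2, 48))
```

In other words, a dual-socket host stays at two licenses up to 32 cores per CPU, and doubles to four once the CPUs exceed that threshold.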

As in previous releases, several packaging and licensing options are available:

  • vSphere Desktop. It can be used only for Virtual Desktop Infrastructure (VDI) deployments with either VMware Horizon View or third-party connection brokers.
  • vSphere Acceleration Kits. These are bundles of vSphere licenses for 6 or 8 CPUs and a license for vCenter Server. The kits are convenience packages for purchasing. The actual licenses and software contracts can be used and renewed independently.
  • vSphere Essentials Kits. These kits have enforceable limits of 3 hosts with 2 CPUs each and include a license for a single instance of vCenter Server for Essentials. Unlike vSphere Acceleration Kits, Essentials Kits cannot be combined with other editions or decoupled.
  • vSphere Remote Office Branch Office. This packaging targets remote branches and limits the number of VMs available for deployment to 25.
  • vSphere Scale-Out. This package is sold in packs of 8 CPUs for high-performance computing (HPC) and big data.

The following table lists the feature sets available at the time of writing (June 2020). Check the following document for up-to-date information (URL).

Certain features may not be listed for specific packaging in VMware datasheets. To eliminate ambiguity, we’ve marked such features as “Not listed explicitly”.

Feature | Essentials | Essentials Plus | ROBO Standard | ROBO Advanced | ROBO Enterprise | Scale-Out | Standard | Enterprise Plus
vSphere Hypervisor
vMotion
VMware vCenter Hybrid Linked Mode | Not listed explicitly | Not listed explicitly | Not listed explicitly | Not listed explicitly | Not listed explicitly | Not listed explicitly | vCenter Server Standard | vCenter Server Standard
vSphere Virtual Symmetric Multiprocessing (SMP) | Not listed explicitly | Not listed explicitly | Not listed explicitly
vSphere High Availability (HA) | Not listed explicitly
Storage vMotion | Not listed explicitly | Not listed explicitly
Fault Tolerance | Not listed explicitly | Not listed explicitly | 2 vCPU | 4 vCPU | 4 vCPU | Not listed explicitly | 2 vCPU | 8 vCPU
VMware vShield Endpoint
vSphere Replication | Not listed explicitly
Support for 4K native storage | Not listed explicitly | Not listed explicitly | Not listed explicitly | Not listed explicitly | Not listed explicitly | Not listed explicitly
vSphere Quick Boot
vCenter High Availability | vCenter Server Essentials | vCenter Server Essentials | vCenter Server Foundation | vCenter Server Foundation | vCenter Server Foundation | vCenter Server Standard | vCenter Server Standard | vCenter Server Standard
vCenter Backup and Restore | vCenter Server Essentials | vCenter Server Essentials | vCenter Server Foundation | vCenter Server Foundation | vCenter Server Foundation | vCenter Server Standard | vCenter Server Standard | vCenter Server Standard
vCenter Server Appliance Migration Tool | vCenter Server Essentials | vCenter Server Essentials | vCenter Server Foundation | vCenter Server Foundation | vCenter Server Foundation | vCenter Server Standard | vCenter Server Standard | vCenter Server Standard
TPM 2.0 support and virtual TPM | Not listed explicitly | Not listed explicitly | Not listed explicitly | Not listed explicitly | Not listed explicitly | Not listed explicitly
FIPS 140-2 compliance and TLS 1.2 support
Support for Microsoft virtualization-based security (VBS)
Per-VM Enhanced vMotion Compatibility | Not listed explicitly | Not listed explicitly | Not listed explicitly | Not listed explicitly | Not listed explicitly | Not listed explicitly
VMware Instant Clone | Not listed explicitly | Not listed explicitly | Not listed explicitly | Not listed explicitly | Not listed explicitly | Not listed explicitly
Identity federation with Active Directory Federation Services (ADFS) | Not listed explicitly | Not listed explicitly
Content Library | Not listed explicitly | Not listed explicitly
APIs for Storage Awareness | Not listed explicitly | Not listed explicitly
APIs for Array Integration, Multipathing | Not listed explicitly | Not listed explicitly | Not listed explicitly | Not listed explicitly | Not listed explicitly
Virtual Volumes and Storage-Policy Based Management | Not listed explicitly | Not listed explicitly
Next-generation infrastructure image management
Cross-vCenter and Long Distance vMotion | Not listed explicitly | Not listed explicitly | Not listed explicitly | Not listed explicitly | Not listed explicitly | Not listed explicitly
VM encryption | Not listed explicitly | Not listed explicitly | Not listed explicitly
Distributed Switch | Not listed explicitly | Not listed explicitly
Host Profiles and Auto Deploy | Not listed explicitly | Not listed explicitly
Distributed Resource Scheduler (DRS) | Not listed explicitly | Not listed explicitly | • (limited) | Not listed explicitly
Distributed Power Management (DPM) | Not listed explicitly | Not listed explicitly | Not listed explicitly | Not listed explicitly | Not listed explicitly | Not listed explicitly
Storage DRS | Not listed explicitly | Not listed explicitly | Not listed explicitly | Not listed explicitly | Not listed explicitly | Not listed explicitly
I/O Controls (Network and Storage) and SR-IOV | Not listed explicitly | Not listed explicitly | Not listed explicitly | Not listed explicitly | Not listed explicitly
vSphere Trust Authority | Not listed explicitly | Not listed explicitly | Not listed explicitly | Not listed explicitly | Not listed explicitly | Not listed explicitly
vSphere Persistent Memory | Not listed explicitly | Not listed explicitly | Not listed explicitly | Not listed explicitly | Not listed explicitly | Not listed explicitly
NVIDIA GRID vGPU | Not listed explicitly | Not listed explicitly | Not listed explicitly | Not listed explicitly | Not listed explicitly | Not listed explicitly
Proactive HA | Not listed explicitly | Not listed explicitly | Not listed explicitly | Not listed explicitly | Not listed explicitly | Not listed explicitly
Predictive DRS | Not listed explicitly | Not listed explicitly | Not listed explicitly | Not listed explicitly | Not listed explicitly | Not listed explicitly
Accelerated graphics for VMs | Not listed explicitly | Not listed explicitly | Not listed explicitly | Not listed explicitly | Not listed explicitly | Not listed explicitly
Dynamic vSphere DirectPath I/O™ | Not listed explicitly | Not listed explicitly | Not listed explicitly | Not listed explicitly | Not listed explicitly | Not listed explicitly
vCenter Server Profile | Not listed explicitly | Not listed explicitly | Not listed explicitly | Not listed explicitly | Not listed explicitly | Not listed explicitly | vCenter Server Standard
vCenter Server update planner | Not listed explicitly | Not listed explicitly | Not listed explicitly | Not listed explicitly | Not listed explicitly | Not listed explicitly
Bitfusion | Not listed explicitly | Not listed explicitly | Not listed explicitly | Not listed explicitly | Not listed explicitly | Not listed explicitly | 2 GPU

Table 1. vSphere 7.0 Editions Feature Comparison

vCenter Server is available in 3 editions:

  • vCenter Server for Essentials (up to 3 hosts; can manage only Essentials and Essentials Plus)
  • vCenter Server Foundation (up to 4 hosts; can manage Standard, Enterprise Plus and vCloud Suite)
  • vCenter Server Standard (unlimited hosts; can manage Standard, Enterprise Plus and vCloud Suite)

Feature and Scaling Comparison by Version

For the full and latest information, follow this URL.

Feature | vSphere 6.5 | vSphere 6.7 | vSphere 7
Kubernetes, VMware Tanzu Runtime and Hybrid Infrastructure Services | – | – | • (VMware Cloud Foundation)
vMotion (storage, cross-vSwitch, cross-vCenter, long-distance, cross-cloud hot and cold)
Content Library
Per-VM enhanced vMotion compatibility
DRS, Storage DRS
Enhanced DRS
Network-aware DRS
Network and Storage I/O Control
Assignable hardware support for vGPU and DirectPath I/O initial placement
Fault Tolerance | 4 vCPU | 8 vCPU | 8 vCPU
Replication, HA VM component protection, Proactive HA, vCenter HA, Orchestrated HA Restart
Virtual Volumes, Storage Policy-based Management, NFS 4.1 Support, Automated UNMAP
Support for 4K native storage
Multifactor Authentication, vSphere security hardening and compliance, VM encryption, Secure boot, Audit quality logging
TPM 2.0, virtual TPM 2.0, support for Microsoft VBS, FIPS 140-2 Validated
vSphere Trust Authority, Identity federation with ADFS
VMware Instant Clone, NVIDIA vGPU
vSphere Integrated OpenStack, vSphere Integrated Containers
vMotion support for NVIDIA vGPU, vSphere Persistent Memory, support for memory mapping for 1GB page size
Support for RoCE v2, iSCSI extension for RDMA (iSER), Support for RDMA InfiniBand
vCenter Server Appliance, Certificate management, Migration to vCenter Server Appliance
Enhanced Linked Mode, vCenter Server Hybrid Linked Mode, vCenter Server backup and restore
HTML5-based vSphere Client, REST-based API for vCenter Server Management
Single reboot upgrades, vSphere Quick Boot, vCenter Server Appliance backup with built-in scheduler and retention
vSphere Lifecycle Manager, vCenter Server update planner, vCenter Server Profiles

Table 2. vSphere 6.5, 6.7 and 7.0 Feature Comparison

vSphere 7 supports a higher number of hosts per cluster and a higher number of hosts across linked vCenter Servers. Table 3 captures the scaling limits of each version.

Scale Metric | vSphere 6.5 | vSphere 6.7 | vSphere 7
Hosts per cluster | 64 | 64 | 96
VMs per cluster | 8,000 | 8,000 | 8,000
Hosts per VMware vCenter Server | 2,000 | 2,000 | 2,000
Powered-on VMs per vCenter Server | 25,000 | 25,000 | 30,000
Hosts for 15 linked vCenter Servers | 5,000 | 5,000 | 15,000
CPUs per host | 576 | 768 | 768
RAM per host | 12TB | 16TB | 16TB
VMs per host | 1,024 | 1,024 | 1,024
vCPUs per VM | 128 | 128 | 128
vRAM per VM | 6TB | 6TB | 6TB
Non-volatile memory per host | – | 1TB | 1TB
Non-volatile memory per VM | – | 1TB | 1TB

Table 3. vSphere 6.5, 6.7 and 7.0 Scaling Limits Comparison

vSphere ESXi Networking Guide – Part 3: Standard Switches Configuration ESXi 6.7

This is the third part of the vSphere ESXi Networking Guide. In the previous post, we created three virtual switches and assigned uplink ports to them. In this post, we will add port groups and VMKernel ports to the vSwitches. The examples in this article are based on ESXi version 6.7.

The start state for this article is shown in Figure 1.

Figure 1. ESXi Standard vSwitches Lab Topology – Start State

This post’s configuration examples will bring the virtual network to the target state displayed in Figure 2.

Figure 2. ESXi Standard vSwitches Lab Topology – Target State

VM Port Group Tasks

The article’s examples follow the same pattern as the previous post: first the Web GUI configuration on the ESXi host and in vCenter, and then the PowerCLI configuration.

ESXi Host Based Configuration

The first task is the addition of the INFRA-SERVERS port group, which is mapped to VLAN 10, as shown in Figure 2. In a web browser, navigate to the IP address or fully qualified domain name of the ESXi host and log in with local ESXi credentials.

Click the Networking navigation menu, then the Port groups tab, and press the “Add port group” button.

Figure 3. ESXi Host Configuration – Add a Port Group

As shown in Figure 3, enter the port group name and the VLAN ID, and select the virtual switch. Let’s accept the default security settings, which inherit the configuration defined at the vSwitch level.

vCenter Based Configuration

We will use the vCenter Web GUI to add the second port group, called CORP-SERVERS, mapped to VLAN 20. First, click the Hosts and Clusters icon, then select the IP address of the ESXi host. Click the Configure tab, select Virtual switches in the menu on the left, and then click the Add Networking button.

Figure 4. vCenter Configuration – Add a Port Group

Select “Virtual Machine Port Group for a Standard Switch” as the connection type, then choose the standard virtual switch. In the screenshot below, the default option of vSwitch0 is selected. Then specify “CORP-SERVERS” as the port group name and 20 as the VLAN ID. Review the summary and press Finish.

Figure 5. vCenter Configuration – Add a Port Group Wizard

PowerCLI Configuration

In this section, we will create a “LAB-SERVERS” port group on vSwitch1. The port group is assigned a VLAN ID of 30, as shown in the target state diagram.

The first two cmdlets connect to vCenter and store the virtual switch object in a variable called $VariableSwitch01, which we will use in the next commands to specify the parent switch for the “LAB-SERVERS” port group. This is similar to the examples we used in the previous article.

 PS C:\WINDOWS\system32> Connect-VIServer 192.168.99.220
 Name                           Port  User
 ----                           ----  ----
 192.168.99.220                 443   LAB.LOCAL\Administrator
 PS C:\WINDOWS\system32> $VariableSwitch01 = Get-VMhost -Name "192.168.99.202" | Get-VirtualSwitch -Name "vSwitch1"

New cmdlets we will be using in the example below are New-VirtualPortGroup and Get-VirtualPortGroup.

PS C:\WINDOWS\system32> New-VirtualPortGroup -VirtualSwitch $VariableSwitch01 -Name "LAB-SERVERS" -VLanId 30
 Name                      Key                            VLanId PortBinding NumPorts
 ----                      ---                            ------ ----------- --------
 LAB-SERVERS               key-vim.host.PortGroup-LAB-… 30
 PS C:\WINDOWS\system32> Get-VirtualPortGroup -VirtualSwitch $VariableSwitch01
 Name                      Key                            VLanId PortBinding NumPorts
 ----                      ---                            ------ ----------- --------
 LAB-SERVERS               key-vim.host.PortGroup-LAB-… 30
 PS C:\WINDOWS\system32> Get-VirtualPortGroup -VirtualSwitch $VariableSwitch01 | Format-List
 Name              : LAB-SERVERS
 VirtualSwitchId   : key-vim.host.VirtualSwitch-vSwitch1
 VirtualSwitchUid  : /VIServer=lab.local\administrator@192.168.99.220:443/VMHost=HostSystem-host-29/VirtualSwitch=key-vim.host.VirtualSwitch-vSwitch1/
 VirtualSwitch     : vSwitch1
 Key               : key-vim.host.PortGroup-LAB-SERVERS
 Port              :
 VLanId            : 30
 VirtualSwitchName : vSwitch1
 VMHostId          : HostSystem-host-29
 VMHostUid         : /VIServer=lab.local\administrator@192.168.99.220:443/VMHost=HostSystem-host-29/
 Uid               : /VIServer=lab.local\administrator@192.168.99.220:443/VMHost=HostSystem-host-29/VirtualSwitch=key-vim.host.VirtualSwitch-vSwitch1/VirtualPortGroup=key-vim.host.PortGroup-LAB-SERVERS/
 ExtensionData     : VMware.Vim.HostPortGroup

To demonstrate how to delete a port group, we will remove the default port group “VM Network” that was automatically created during the ESXi host installation. The Remove-VirtualPortGroup cmdlet performs the operation.

The first line of the listing below is similar to the one in the example above but uses another variable name: it stores the virtual switch named “vSwitch0” on ESXi host 192.168.99.202 in a variable. The second line stores the port group named “VM Network” in another variable. Then we use the Remove-VirtualPortGroup cmdlet to delete the port group.

PS C:\WINDOWS\system32> $VariableSwitch00 = Get-VMhost -Name "192.168.99.202" | Get-VirtualSwitch -Name "vSwitch0"
 PS C:\WINDOWS\system32> $VariablePortGroup = Get-VirtualPortGroup -VirtualSwitch $VariableSwitch00 -Name "VM Network"
 PS C:\WINDOWS\system32> Remove-VirtualPortGroup $VariablePortGroup
 Perform operation?
 Perform operation 'Remove virtual port group.' on 'VM Network'.
 [Y] Yes  [A] Yes to All  [N] No  [L] No to All  [S] Suspend  [?] Help (default is "Y"): y

VMs Interface Configuration

Now we have the infrastructure required for connecting the VMs’ virtual adapters to port groups. Let’s start with the ESXi host-based configuration.

ESXi Host Based Configuration

Log in to the ESXi host, click the Virtual Machines menu, and then tick the checkbox next to the VM that will be configured.

Click the Actions menu button and select the “Edit settings” option. Now we can move the VM’s network adapter to the required port group.

Figure 6. ESXi Host Configuration – Change Network Adapter Port Group Membership

The next section shows how to perform the same configuration using the vCenter WebGUI interface.

vCenter Based Configuration

Log in to vCenter and click the ESXi hostname or IP address, then click the VMs tab. Right-click the VM’s row and select Edit Settings. Click the drop-down box next to the network adapter and select Browse. In the list of available networks, select one of the port groups.

Figure 7. vCenter Configuration – Change Network Adapter Port Group Membership

PowerCLI Configuration

To perform the configuration using PowerCLI, we first need to locate the correct VM with the Get-VM cmdlet; then, by either saving it in a variable or piping it to the Get-NetworkAdapter cmdlet, we gain access to its network adapter. Set-NetworkAdapter can then be used to connect the network adapter to the correct port group. We will locate the port group via the virtual switch, as demonstrated in the previous example.

Let’s first refer to the diagram explaining how the command components are related, and then check the command listing. Figure 8 shows the variables and cmdlets used in this example. The cmdlet that performs the required configuration is Set-NetworkAdapter. As per the command reference, we need to specify two pieces of information to change the adapter’s port group:

  • Adapter that we want to move
  • Port group that will be hosting the adapter

Figure 8. PowerCLI – Set-NetworkAdapter

The diagram shows how to get access to these objects by running the commands from the bottom to the top. Note that there are two ways to define $VariableSwitch01 – with and without the pipe operator ‘|’.

The full listing of the commands is provided in the sample below:

PS C:\WINDOWS\system32> Connect-VIServer 192.168.99.220
 Name                           Port  User
 ----                           ----  ----
 192.168.99.220                 443   LAB.LOCAL\Administrator
 PS C:\WINDOWS\system32> Get-VM -Location 192.168.99.202
 Name                 PowerState Num CPUs MemoryGB
 ----                 ---------- -------- --------
 VM-4                 PoweredOff 1        2.000
 VM-2                 PoweredOff 1        2.000
 PS C:\WINDOWS\system32> $VariableVM2 = Get-VM -Location 192.168.99.202 -Name VM-2
 PS C:\WINDOWS\system32> $VariableNetworkAdapter = Get-NetworkAdapter -VM $VariableVM2
 PS C:\WINDOWS\system32> $VariableNetworkAdapter | Format-List
 MacAddress       : 00:50:56:91:e3:e5
 WakeOnLanEnabled : True
 NetworkName      : VM Network
 Type             : e1000
 ParentId         : VirtualMachine-vm-85
 Parent           : VM-2
 Uid              : /VIServer=lab.local\administrator@192.168.99.220:443/VirtualMachine=VirtualMachine-vm-85/NetworkAdapter=4000/
 ConnectionState  : NotConnected, GuestControl, StartConnected
 ExtensionData    : VMware.Vim.VirtualE1000
 Id               : VirtualMachine-vm-85/4000
 Name             : Network adapter 1
 PS C:\WINDOWS\system32> $VariableSwitch01 = Get-VMhost -Name "192.168.99.202" | Get-VirtualSwitch -Name "vSwitch1"
 PS C:\WINDOWS\system32> $VariablePortGroup = Get-VirtualPortGroup -VirtualSwitch $VariableSwitch01 -Name "LAB-SERVERS"
 PS C:\WINDOWS\system32> Set-NetworkAdapter -NetworkAdapter $VariableNetworkAdapter -Portgroup $VariablePortGroup
 Confirm
 Are you sure you want to perform this action?
 Performing the operation "Connect to portgroup" on target "Network adapter 1".
 [Y] Yes  [A] Yes to All  [N] No  [L] No to All  [S] Suspend  [?] Help (default is "Y"): y
 Name                 Type       NetworkName  MacAddress         WakeOnLan
                                                                   Enabled
 ----                 ----       -----------  ----------         ---------
 Network adapter 1    e1000      LAB-SERVERS  00:50:56:91:e3:e5       True

VMKernel Port Configuration

The next step in this article is the configuration of the VMKernel ports. These ports provide network connectivity to the ESXi host itself. The first example uses the ESXi host’s Web GUI to create a VMKernel adapter for vMotion.

ESXi Host Based Configuration

Log in to the ESXi host, click the “Networking” option in the side menu, then select VMKernel NICs and press the “Add VMKernel NIC” button. In the pop-up menu, fill in the port group details, the vSwitch and the VLAN ID.

It is recommended to use the “New port group” option, as a VMKernel port requires a dedicated port group that cannot be shared with VM ports. Placing a VMKernel port into an existing port group with VM ports attached will cause those ports to be moved out of the port group, likely causing downtime.

The vMotion stack is available by default, so we will select it from the list of TCP/IP stacks. Note that the TCP/IP stack cannot be changed after the VMKernel adapter is created.

Figure 9. ESXi Host – Add VMKernel NIC

A static IP address of 192.168.100.201/24 is specified. Note that there is no default gateway setting available under the VMKernel NIC. To configure it, change the settings of the vMotion stack, as shown in the next screenshot.

To perform this configuration, click the TCP/IP stacks tab, right-click the vMotion stack, and select the “Edit settings” option in the context menu. Adjust the IPv4 gateway setting. Note that this option can be modified only if there is a VMKernel adapter associated with the stack.

Figure 10. ESXi Host – Configure TCP/IP Stack

Let’s now delete the VMKernel NIC by right-clicking it and selecting the “Remove” option, so we can perform the same procedure using the vCenter interface.

Figure 11. ESXi Host – Delete VMKernel NIC

vCenter Based Configuration

Log in to vCenter and click the ESXi hostname or IP address, then select the Configure tab. Choose the “VMKernel adapters” option in the host’s menu, and then press the “Add Networking” button. This will launch the familiar “Add Networking” configuration wizard. This time, select VMKernel Network Adapter, select vSwitch0, and fill in the port group settings. Note that there is an option to override the TCP/IP stack’s default gateway with “Use static IPv4 settings”. This default gateway will not appear in the TCP/IP stack’s routing table.

Figure 12. vCenter Configuration – Create a VMKernel NIC

PowerCLI Configuration

To create an iSCSI VMKernel adapter with PowerCLI, we will use the New-VMHostNetworkAdapter cmdlet. After logging in to vCenter, let’s save vSwitch2 in a variable.

Then a VMKernel adapter is created in a new port group named iSCSI on vSwitch2. The VMKernel port’s IP address is set to 192.168.101.201. The Get-VMHostNetworkAdapter cmdlet shows the settings of the newly created VMKernel interface. Finally, we will move the port group to the correct VLAN.

PS C:\WINDOWS\system32> Connect-VIServer 192.168.99.220
 Name                           Port  User
 ----                           ----  ----
 192.168.99.220                 443   LAB.LOCAL\Administrator
 PS C:\WINDOWS\system32> $VariableSwitch02 = Get-VMhost -Name "192.168.99.202" | Get-VirtualSwitch -Name "vSwitch2"
 PS C:\WINDOWS\system32> New-VMHostNetworkAdapter -VirtualSwitch $VariableSwitch02 -PortGroup "iSCSI" -IP 192.168.101.201  -SubnetMask 255.255.255.0
 Name       Mac               DhcpEnabled IP              SubnetMask      DeviceName
 ----       ---               ----------- --              ----------      ----------
 vmk2       00:50:56:6e:69:2a False       192.168.101.201 255.255.255.0         vmk2
 PS C:\WINDOWS\system32> Get-VMHostNetworkAdapter -VirtualSwitch $VariableSwitch02 -Name vmk2 | Format-List
 VMotionEnabled               : False
 FaultToleranceLoggingEnabled : False
 ManagementTrafficEnabled     : False
 IPv6                         : {fe80::250:56ff:fe6e:692a/64}
 AutomaticIPv6                : False
 IPv6ThroughDhcp              : False
 IPv6Enabled                  : False
 Mtu                          : 1500
 VsanTrafficEnabled           : False
 PortGroupName                : iSCSI
 Id                           : key-vim.host.VirtualNic-vmk2
 VMHostId                     : HostSystem-host-29
 VMHost                       : 192.168.99.202
 VMHostUid                    : /VIServer=lab.local\administrator@192.168.99.220:443/VMHost=HostSystem-host-29/
 DeviceName                   : vmk2
 Mac                          : 00:50:56:6e:69:2a
 DhcpEnabled                  : False
 IP                           : 192.168.101.201
 SubnetMask                   : 255.255.255.0
 Uid                          : /VIServer=lab.local\administrator@192.168.99.220:443/VMHost=HostSystem-host-29/HostVMKernelVirtualNic=key-vim.host.VirtualNic-vmk2/
 Name                         : vmk2
 ExtensionData                : VMware.Vim.HostVirtualNic
 PS C:\WINDOWS\system32> $VariablePortGroup = Get-VirtualPortGroup -Name "iSCSI"
 PS C:\WINDOWS\system32> Set-VirtualPortGroup -VirtualPortGroup $VariablePortGroup -VLanId 6
 Name                      Key                            VLanId PortBinding NumPorts
 ----                      ---                            ------ ----------- --------
 iSCSI                     key-vim.host.PortGroup-iSCSI   6

Load Balancing and Security Parameters

Let’s consider an example where we want to enable per-packet load balancing for vSwitch1. This will affect the LAB-SERVERS and vMotion port groups, as, by default, port groups inherit the configuration defined at the vSwitch level.

As shown in the first article of this series, per-packet load balancing requires the upstream switch (or switches, in some scenarios) to use static link aggregation. As our switches are not physically or virtually stacked, we will need to move both links to the same switch for this example to work properly. Otherwise, the switches will have to rapidly flush and re-learn the VM’s MAC address as the host sends its frames over different uplinks as a result of load balancing. This will degrade network performance and produce multiple MAC flapping alerts in the logs.
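Conceptually, the “Route based on IP hash” policy selects an uplink from a hash of each frame’s source and destination IP addresses, so a single VM’s conversations with different peers can leave over different uplinks – which is exactly why its MAC address can appear on several physical switch ports. The sketch below is a simplified illustration of the idea, not ESXi’s exact algorithm:

```python
import ipaddress

def uplink_for_flow(src_ip: str, dst_ip: str, uplink_count: int) -> int:
    """Toy model of IP-hash teaming: XOR the numeric forms of the
    source and destination IPs, then take the result modulo the
    number of active uplinks."""
    src = int(ipaddress.ip_address(src_ip))
    dst = int(ipaddress.ip_address(dst_ip))
    return (src ^ dst) % uplink_count

# The same IP pair always maps to the same uplink, while traffic
# to a different destination may leave over a different uplink.
print(uplink_for_flow("10.0.0.5", "10.0.1.7", 2))
print(uplink_for_flow("10.0.0.5", "10.0.1.8", 2))
```

Because the uplink choice is deterministic per IP pair, a link aggregation group on the upstream switch sees each flow consistently on one member link, which is why static link aggregation is required for this policy.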

To perform the configuration using vCenter, select the host, click the Configure tab, and select Virtual switches. Then click vSwitch1 and press the Edit button. In the “Edit Settings” pop-up window, select the “Teaming and failover” menu on the left, and then choose the “Route based on IP hash” option as the load balancing mechanism.

Figure 13. vCenter Configuration – Set per-packet load balancing for a vSwitch

For the next example, assume that we’ve been instructed to send a copy of all traffic received on vSwitch0 to the INFRA-SERVERS port group, so this traffic can be captured with Wireshark for troubleshooting. The setting that we need to enable is Promiscuous mode.

We know that the correct approach for this task is to create a new port group containing only the single server that runs Wireshark; however, to keep this example focused on the task, we will enable the setting on the existing port group. Refer to the first section of this blog post for how to create a new port group in a production environment.

We will use vCenter for this configuration. Navigate to Virtual switches in the Web GUI and select vSwitch0, as it contains the INFRA-SERVERS port group. Select the port group and press the Edit button.

In the configuration pop-up window, click the Security menu, enable the override checkbox next to Promiscuous mode, and select Accept.

Figure 14. vCenter Configuration – Enable Promiscuous mode for a Port Group

Conclusion

This is the final article in the VMware standard switch series (see part 1 and part 2). I hope it is helpful in getting familiar with how standard switches operate and how they are configured.

vSphere ESXi Networking Guide – Part 2: Standard Switches Configuration ESXi 6.7

This is the second part of the vSphere ESXi Networking Guide. In the previous post, we covered the basic concepts and components of vSphere ESXi standard switches. This article shows how to create vSwitches step by step. The examples provided in the following sections are based on ESXi version 6.7.

There are several ways to configure a standard switch:

  • With Direct Console User Interface (DCUI)
  • Using Web-based vSphere Client of ESXi host or vCenter
  • With PowerCLI

Refer to the diagram in Figure 1 for the sample topology that we will build in this and the following articles. The environment consists of a single ESXi host running multiple VMs, grouped by function into infrastructure, corporate and lab servers. The default port group, called Management Network, has a management port attached to it. To provide vMotion and iSCSI capability, 2 extra VMKernel ports and 2 port groups are configured.

Figure 1. Sample Network Diagram

VLAN allocation for each port group is documented in the list below:

  • Management – VLAN 4
  • vMotion – VLAN 5
  • iSCSI – VLAN 6
  • INFRA-SERVERS – VLAN 10
  • CORP-SERVERS – VLAN 20
  • LAB-SERVERS – VLAN 30

The starting topology is a newly installed ESXi host with 6 physical adapters. Let’s assume that we’ve connected the physical cables and enabled only a single port on the upstream switch. Figure 2 shows the switch port configuration. It is set up as an access port in VLAN 4, meaning that no 802.1Q-tagged frames cross this interface. We will change this port to tag traffic later in this article.

Figure 2. Starting topology

At this stage, the ESXi host has a single virtual switch, a single VM port group for virtual machines and a single VMKernel port for management.

The end state that we will achieve as a result of the configuration steps in this article is shown in Figure 3.

Figure 3. Target-state topology

Console Configuration (DCUI)

There is a limited number of things you can do with the network configuration via the DCUI. The console is accessed by connecting a monitor and keyboard to the ESXi host or by using the server’s vendor-specific out-of-band management options, such as HP iLO or Dell DRAC. The main use case for this access method is initial setup or management access troubleshooting.

Press F2 on the initial screen and type in the username and password. Figure 4 shows the options available after login.

Figure 4. ESXi Console Configuration Menu

The next screenshot displays the Configure Management Network menu’s options and dialog windows. The Network Adapters menu allows you to select the physical NICs that will be used as uplinks for the default standard switch containing the management port. The VLAN and IPv4 Configuration settings are applied to the management VMKernel port and its port group. As we don’t tag frames on the switch side, the VLAN is left unspecified.

DNS Configuration includes DNS server IPs, as well as ESXi host’s name.

ESXi Console Management Network Configuration Options
Figure 5. ESXi Console Management Network Configuration Options

After changing any of the settings above, restart the management network using the menu shown in Figure 6 to activate the changes, and optionally perform testing.

ESXi Console Restart and Test Management Network
Figure 6. ESXi Console Restart and Test Management Network

The last network-related menu is Network Restore Options. As shown in the screenshot below, there are 3 available options:

ESXi DCUI Network Restore Options
Figure 7. ESXi DCUI Network Restore Options

Restore Network Settings resets all network settings to their defaults. It removes vSwitches, port groups, and VMKernel adapters that you might have created, and it also impacts virtual machine connectivity, so use this option only when you cannot fix network connectivity any other way.

The next two options deal with management connectivity to the ESXi host when a distributed switch is used. Restore Standard Switch helps you move the management interface to a Standard Switch when the VMKernel port is currently on a Distributed Switch that is not operating as expected. Restore vDS (vSphere Distributed Switch) clones the settings to a new management port while keeping it within the vDS.

Let’s now change the upstream switch configuration for the port so that frames are tagged. This will let us introduce additional VLANs for port groups on this switch in the following sections. The configuration on the switch will be similar to the listing below:

interface TenGigabitEthernet1/0/1
  switchport trunk encapsulation dot1q
  switchport mode trunk
  switchport trunk allowed vlan 4,5,10,20
  switchport trunk native vlan 999

The configuration applied to all other switches follows the same pattern; only the allowed VLAN list changes to reflect the port-group VLANs for a specific switch.

Note that an unused VLAN with ID 999 is specified as native. Once this configuration is applied, connectivity to the host will be lost, because the host still expects VLAN 4 frames to arrive untagged. To fix this, use the DCUI: Configure Management Network > VLAN (optional) and type in a VLAN ID of 4. Refer to Figure 5, which shows the relevant menu screenshots. When prompted, restart the management network, and management connectivity will be restored.
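
As an alternative to the DCUI, if you have SSH or ESXi Shell access, the same VLAN change can be made with esxcli. This sketch assumes the default management port group name "Management Network"; adjust it if your port group is named differently:

```
esxcli network vswitch standard portgroup set --portgroup-name="Management Network" --vlan-id=4
```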

Create vSwitch1 with WebGUI

Standard switches can be configured directly via the host, as their settings are self-contained within a single host. However, it is also possible to perform the configuration using vCenter. This section will show how to create a switch using a direct connection first, and then how to do it via vCenter.

Create Standard Switch using ESXi host WebGUI

Log in directly to the host. Click on Networking and then on the Virtual switches tab. Press the Add standard virtual switch button, type in a switch name, and optionally change any of the default settings.

vSwitch Configuration via Direct ESXi Host Interface
Figure 8. vSwitch Configuration via Direct ESXi Host Interface

As shown in the screenshot, only a single uplink can be selected when creating a new vSwitch. To add the second uplink, click on vSwitch1 and then click on the Add uplink button. Select the correct interface opposite the “Uplink 2” label.

Add additional uplink to vSwitch
Figure 9. Add additional uplink to vSwitch

Let’s now remove the new vSwitch, so we can re-create it with vCenter. Click on Networking > Virtual switches > select the row with vSwitch1 > click on Actions > Remove.

Delete vSwitch using ESXi host WebGUI
Figure 10. Delete vSwitch using ESXi host WebGUI

Create Standard Switch using vCenter host WebGUI

Another available option is to perform the configuration via vCenter. The process is slightly different, but it achieves the same result as the direct configuration via the ESXi host. Log in to vCenter, click on the desired hostname or IP address, then navigate to Configure > Networking > Virtual Switches and press Add Networking.

Virtual Switches via vCenter Management Interface
Figure 11. Virtual Switches via vCenter Management Interface

The next series of screenshots shows the steps involved in creating a new vSwitch. Note that the wizard combines this process with the configuration of a new VMKernel adapter, Virtual Machine port group, or an upstream physical network adapter. As port groups will be covered in the next blog post, we will just choose the uplink adapter option.

Note that you can add multiple uplinks at once by either pressing the “+” button several times on the third mini-screenshot below or by holding the Alt button to select multiple adapters on the fourth screen step.

Add Networking Wizard
Figure 12. Add Networking Wizard

Create vSwitch2 with PowerCLI

PowerCLI is a PowerShell Module provided by VMware. This how-to article provides instructions on how to install it.

As with the WebGUI, it is possible to connect with PowerCLI either to an ESXi host directly or to a vCenter appliance. In the examples in this section, we will connect to vCenter. The commands behave in a similar way, with the exception that we need to specify which host's virtual switch we want to apply PowerShell cmdlets to. We will start by connecting to vCenter and then displaying virtual switches with the Get-VirtualSwitch cmdlet. I am using an example from the command reference for Get-VirtualSwitch on the VMware website to perform pipe-based filtering from the Get-VMHost cmdlet.

Note that we can see the 2 vSwitches we configured in earlier sections. If you have a direct connection to an ESXi host, you can just use Get-VirtualSwitch on its own; as you have access to a single host, it doesn't need to be explicitly specified.

By default, PowerShell formats the output as a table, so we cannot see all available properties. To address this, we can pipe the output with the "|" character to the Format-List cmdlet, which uses list-based formatting.

PowerCLI exposes certain properties that are not visible in the GUI, such as the number of ports a virtual switch has.

Windows PowerShell
 Copyright (C) 2016 Microsoft Corporation. All rights reserved.
 PS C:\WINDOWS\system32> Connect-VIServer 192.168.99.220
 Name                           Port  User
 ----                           ----  ----
 192.168.99.220                 443   LAB.LOCAL\Administrator
 PS C:\WINDOWS\system32> Get-VMHost -Name "192.168.99.202" | Get-VirtualSwitch
 Name                           NumPorts   Mtu   Notes
 ----                           --------   ---   -----
 vSwitch0                       2560       1500
 vSwitch1                       2560       1500
 PS C:\WINDOWS\system32> Get-VMHost -Name "192.168.99.202" | Get-VirtualSwitch | Format-List
 Id                : key-vim.host.VirtualSwitch-vSwitch0
 Key               : key-vim.host.VirtualSwitch-vSwitch0
 Name              : vSwitch0
 NumPorts          : 2560
 NumPortsAvailable : 2547
 Nic               : {vmnic0}
 Mtu               : 1500
 VMHostId          : HostSystem-host-29
 VMHost            : 192.168.99.202
 VMHostUid         : /VIServer=lab.local\administrator@192.168.99.220:443/VMHost=HostSystem-host-29/
 Uid               : /VIServer=lab.local\administrator@192.168.99.220:443/VMHost=HostSystem-host-29/VirtualSwitch=key-vim.host.VirtualSwitch-vSwitch0/
 ExtensionData     : VMware.Vim.HostVirtualSwitch
 Id                : key-vim.host.VirtualSwitch-vSwitch1
 Key               : key-vim.host.VirtualSwitch-vSwitch1
 Name              : vSwitch1
 NumPorts          : 2560
 NumPortsAvailable : 2547
 Nic               : {vmnic2, vmnic3}
 Mtu               : 1500
 VMHostId          : HostSystem-host-29
 VMHost            : 192.168.99.202
 VMHostUid         : /VIServer=lab.local\administrator@192.168.99.220:443/VMHost=HostSystem-host-29/
 Uid               : /VIServer=lab.local\administrator@192.168.99.220:443/VMHost=HostSystem-host-29/VirtualSwitch=key-vim.host.VirtualSwitch-vSwitch1/
 ExtensionData     : VMware.Vim.HostVirtualSwitch

To create a virtual switch with PowerCLI, we need the New-VirtualSwitch cmdlet. We will use the example provided in the command reference to achieve this. The last command in the listing below uses the -Name parameter with Get-VirtualSwitch to filter the output so that only the newly created switch is shown.

PS C:\WINDOWS\system32> Get-VMHost -Name "192.168.99.202" | New-VirtualSwitch -Name "vSwitch2" -Nic vmnic4,vmnic5
 Name                           NumPorts   Mtu   Notes
 ----                           --------   ---   -----
 vSwitch2                       2560       1500
 PS C:\WINDOWS\system32> Get-VMHost -Name "192.168.99.202" | Get-VirtualSwitch
 Name                           NumPorts   Mtu   Notes
 ----                           --------   ---   -----
 vSwitch0                       2560       1500
 vSwitch1                       2560       1500
 vSwitch2                       2560       1500
 PS C:\WINDOWS\system32> Get-VMHost -Name "192.168.99.202" | Get-VirtualSwitch -Name "vSwitch2" | Format-List
 Id                : key-vim.host.VirtualSwitch-vSwitch2
 Key               : key-vim.host.VirtualSwitch-vSwitch2
 Name              : vSwitch2
 NumPorts          : 2560
 NumPortsAvailable : 2544
 Nic               : {vmnic4, vmnic5}
 Mtu               : 1500
 VMHostId          : HostSystem-host-29
 VMHost            : 192.168.99.202
 VMHostUid         : /VIServer=lab.local\administrator@192.168.99.220:443/VMHost=HostSystem-host-29/
 Uid               : /VIServer=lab.local\administrator@192.168.99.220:443/VMHost=HostSystem-host-29/VirtualSwitch=key-vim.host.VirtualSwitch-vSwitch2/
 ExtensionData     : VMware.Vim.HostVirtualSwitch

Now we have almost achieved the desired target topology, with the exception of the second physical adapter that is to be attached to vSwitch0. The cmdlet performing this operation is Add-VirtualSwitchPhysicalNetworkAdapter, and we are using a modified example 2 from the command reference.

Note how variables are used to store objects returned by the Get-* cmdlets. Variable names must start with the dollar sign "$". We can then use these variables as parameters in other cmdlets.

PS C:\WINDOWS\system32> $VariableSwitch01 = Get-VMHost -Name "192.168.99.202" | Get-VirtualSwitch -Name "vSwitch0"
 PS C:\WINDOWS\system32> $VariableAdapter01 = Get-VMHost -Name "192.168.99.202" | Get-VMHostNetworkAdapter -Physical -Name vmnic1
 PS C:\WINDOWS\system32> Add-VirtualSwitchPhysicalNetworkAdapter -VirtualSwitch $VariableSwitch01 -VMHostPhysicalNic $VariableAdapter01
 Confirm
 Are you sure you want to perform this action?
 Performing the operation "Adding physical network adapter(s) 'vmnic1'" on target "vSwitch0".
 [Y] Yes  [A] Yes to All  [N] No  [L] No to All  [S] Suspend  [?] Help (default is "Y"): y
 PS C:\WINDOWS\system32> Get-VMHost -Name "192.168.99.202" | Get-VirtualSwitch -Name "vSwitch0" | Format-List
 Id                : key-vim.host.VirtualSwitch-vSwitch0
 Key               : key-vim.host.VirtualSwitch-vSwitch0
 Name              : vSwitch0
 NumPorts          : 2560
 NumPortsAvailable : 2543
 Nic               : {vmnic0, vmnic1}
 Mtu               : 1500
 VMHostId          : HostSystem-host-29
 VMHost            : 192.168.99.202
 VMHostUid         : /VIServer=lab.local\administrator@192.168.99.220:443/VMHost=HostSystem-host-29/
 Uid               : /VIServer=lab.local\administrator@192.168.99.220:443/VMHost=HostSystem-host-29/VirtualSwitch=key-vim.host.VirtualSwitch-vSwitch0/
 ExtensionData     : VMware.Vim.HostVirtualSwitch

In the next article, we will continue the configuration of our topology.

HowTo: VMware PowerCLI Installation on Windows

This is a how-to article on performing a VMware PowerCLI installation on Windows. PowerCLI is a module for Windows PowerShell; however, it also supports macOS and Ubuntu running PowerShell Core 6.x.

Installation procedure

Step 1. In this example, I’m using Windows 10 Professional. Start typing "powershell", and in the search results right-click on the Windows PowerShell icon and select Run as Administrator.

Start Windows PowerShell

Step 2. Validate that the system doesn’t have the modules installed. In the PowerShell console, type in:

Get-Module -ListAvailable -Name VMware*

The cmdlet above displays currently imported and installed (with -ListAvailable) modules whose names start with VMware.

Step 3. To perform the installation:

Install-Module -Name VMware.PowerCLI

A prompt will appear with a warning about an untrusted repository. To proceed, type Y.

Install-Module -Name VMware.PowerCLI

Wait for the installation to complete as shown in the screenshot below.

VMWare PowerCLI module installation

Re-running the Get-Module cmdlet from step 2 will now display the VMware modules.

Validate that VMware PowerCLI has been installed

Connect to ESXi Host or vCenter

To connect to VMHost use the following command:

Connect-VIServer <vm_host_ip_address_or_fqdn>

In many cases, a self-signed certificate is used with ESXi, and an invalid certificate error will be raised, as shown in the screenshot below.

Connect to ESXi host with PowerCLI

To make the system prompt on invalid certificates and remove the warning about the Customer Experience Improvement Program (CEIP), type in:

Set-PowerCLIConfiguration -Scope User -ParticipateInCeip:$false -InvalidCertificateAction Prompt
Set-PowerCLIConfiguration

To ignore the certificate warning and disable prompts use this variant of command:

Set-PowerCLIConfiguration -Scope User -ParticipateInCeip:$false -InvalidCertificateAction Ignore -Confirm:$false

Now Connect-VIServer will work after the certificate is accepted.

Connect-VIServer

Pre-requisites and useful links

The PowerCLI Official Downloads and Documentation page. This page contains the other links below and has a version switcher that leads to documentation for versions other than 11.5.0.

https://code.vmware.com/web/tool/11.5.0/vmware-powercli

Compatibility matrix (with pre-requisites):

https://code.vmware.com/docs/10257/compatibility-matrix

PowerCLI cmdlet reference:

https://code.vmware.com/docs/10197/cmdlet-reference

PowerCLI user guide:

https://code.vmware.com/docs/10242/powercli-11-5-0-user-s-guide

PowerShell Gallery:

https://www.powershellgallery.com/packages/VMware.PowerCLI/11.5.0.14912921

vSphere ESXi Networking Guide – Part 1: Standard Switches

This is part 1 in the vSphere ESXi Networking Guide series which will cover theory, operation, and configuration for different components of vSphere Networking.

Virtual machines need to communicate with each other within an ESXi host and with other nodes reachable over the network. A virtual switch is a software component that enables this communication. vSphere can use Standard Switches (VSS) or Distributed Switches (VDS). vSphere Standard Switch is available with any license tier or edition of vSphere and it is the topic of this article.

ESXi hosts and vSwitches

An ESXi host can contain one or multiple standard virtual switches. By default, a virtual standard switch is created during the host installation; it provides the default VM network and management access to the host itself. It is also possible to delete all standard switches from an ESXi host and use distributed switches instead.

Each virtual switch is isolated from other virtual switches on the same host. In the diagram below, a VM on Switch0 will not be able to reach another VM connected to Switch1 directly within the host. It is possible, however, for them to communicate via upstream physical switches if the required VLAN and interface configuration is in place.

Each vSwitch also has one or multiple dedicated uplinks allocated to it. In cases where no external communication is required, a vSwitch can operate without uplinks.

IP-based traffic is often segregated with the use of dedicated physical network adapters which can be connected to a separate set of switches. In such scenarios, storage-related uplinks and corresponding VMKernel ports can be placed into a separate vSwitch.

Other use cases for creating additional vSwitches include:

  • Separation of Dev/Test environments from the production systems
  • Security-related isolation, for example, placing uplinks to a dedicated physical switch for DMZ virtual machines
Standard vSwitches and Hosts
Figure 1. Standard vSwitches and Hosts

Port groups

A single virtual switch can contain one or multiple port groups, or no port groups at all. Each port group can contain either one or multiple VM-facing ports, or a single VMKernel port. Refer to the diagram below, which illustrates the relationship between port groups and their members.

VM Port Groups and VM Kernel Port Groups
Figure 2. VM Port Groups and VM Kernel Port Groups

Port groups can be, and usually are, mapped to different VLANs. However, multiple port groups can be mapped to the same VLAN, in which case they don’t provide isolation, and ports can communicate directly within the host. Note the contrast with vSwitch operation, which prevents direct connectivity between separate vSwitches within the host.

A port group, as its name suggests, aggregates ports with similar configuration requirements, including VLAN membership, security parameters, traffic shaping, and uplink selection policies.

VLANs

VLAN, or Virtual LAN, is a concept originating from physical Ethernet switching as a way to split a single switch into multiple isolated groups of ports. Each such group is a separate Layer 2, or broadcast, domain, which means that hosts connected to the same group can exchange Layer 2 frames directly without involving a Layer 3 device, such as a router. A VLAN usually represents a single IPv4 subnet.

End-user device-facing ports can be assigned to a single VLAN; in this case, they are called access ports. A port facing an upstream or another switch can also be allocated to a single VLAN; however, this requires an additional port for every VLAN that has members behind the upstream switch.

A more efficient way is to add additional information, called an 802.1q tag, to each frame, so multiple VLANs can be transmitted over a single interface. An interface that carries traffic for multiple VLANs using tags is called an 802.1q trunk port.

One of the VLANs can be designated as native for an 802.1q trunk. The switch will not add a tag to the native VLAN's frames. As a result, if an end device is not using tagging, it will still be able to process frames in the native VLAN.

It is also possible to configure a port connected to a server as an 802.1q trunk. The operating system driver can then present the trunk as multiple virtual adapters, each in the corresponding VLAN.
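
At the byte level, tagging simply inserts a 4-byte 802.1q header (TPID 0x8100 followed by priority and VLAN ID fields) between the source MAC address and the EtherType. A minimal illustrative sketch in Python (not how any switch is actually implemented):

```python
import struct

def add_dot1q_tag(frame: bytes, vlan_id: int, priority: int = 0) -> bytes:
    """Insert an 802.1q tag after the destination and source MACs (first 12 bytes)."""
    if not 1 <= vlan_id <= 4094:
        raise ValueError("VLAN ID must be between 1 and 4094")
    tci = (priority << 13) | vlan_id          # PCP (3 bits), DEI (1 bit, zero), VID (12 bits)
    tag = struct.pack("!HH", 0x8100, tci)     # TPID 0x8100 marks the frame as tagged
    return frame[:12] + tag + frame[12:]

# Untagged frame: 6-byte dst MAC, 6-byte src MAC, EtherType 0x0800 (IPv4), payload
frame = bytes(6) + bytes(6) + b"\x08\x00" + b"payload"
tagged = add_dot1q_tag(frame, vlan_id=10)
assert len(tagged) == len(frame) + 4          # tagging adds exactly 4 bytes
assert tagged[12:14] == b"\x81\x00"           # TPID directly follows the source MAC
```

A trunk port applies this operation on egress for every VLAN except the native one.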

How are VLANs mapped to virtual switch components?

A virtual switch is designed to operate in a similar way to its physical equivalent. Uplinks are the network interface cards of an ESXi host. No special configuration is required on uplinks to enable 802.1q tagging: VLANs are defined at the port-group level, and tagging is automatically enabled on uplinks. The upstream switch must have the relevant configuration to enable tagging towards the host.

Virtual Standard Switch and VLANs
Figure 3. Virtual Standard Switch and VLANs

Consider sample topology shown in Figure 3. Physical switches have 5 VLANs defined and have the same configuration on their downstream ports connected to a single ESXi host. Cisco Catalyst syntax is provided in this example.

Frames on these physical links will be tagged with VLAN IDs of 10, 20, 30 and 40. VLAN ID 5 is the exception, as it is a native VLAN on the trunk. Frames of this VLAN will be sent without a tag from the physical switch to the virtual switch.

The virtual standard switch has 4 port groups defined: 3 for VM communication and one VMKernel port group. When Yellow VLAN 10 or Red VLAN 20 (or any VLAN ID in the range of 1 to 4094) is allocated to a port group, the virtual switch strips the tag when delivering frames to a VM or VMKernel port in this group. It is similar to how access ports operate on a physical switch.

Green VLAN 0 is a special case. This port group will get all untagged frames received on the uplinks. In the example shown in Figure 3, the green port group will effectively be in VLAN 5, as VLAN 5 is designated as native by the upstream switch. VM adapters connected to this port group will also receive untagged frames. To avoid confusion when determining the VLAN ID of a port group, use explicit VLAN tagging for all port groups and designate one of the not-in-use VLANs as native on the physical switch interfaces. This is in line with security recommendations for physical inter-switch links to prevent so-called VLAN-hopping attacks.

A VLAN ID of 4095, when allocated to a port group, means that tags must not be stripped and are delivered to the VM's guest OS to process. Ports in this port group are 802.1q trunks with native VLAN 5, whose frames will be sent to the VMs without a tag.
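
The three port-group VLAN ID ranges described above can be summarized in a few lines (an illustrative sketch in Python, not ESXi code):

```python
def port_group_vlan_mode(vlan_id: int) -> str:
    """Classify a standard switch port-group VLAN ID by its tagging behavior."""
    if vlan_id == 0:
        return "none"     # port group receives the uplink's untagged (native VLAN) frames
    if 1 <= vlan_id <= 4094:
        return "access"   # vSwitch strips the tag before delivering frames to the port
    if vlan_id == 4095:
        return "trunk"    # tags are preserved and passed to the guest OS to process
    raise ValueError("VLAN ID must be between 0 and 4095")

assert port_group_vlan_mode(0) == "none"
assert port_group_vlan_mode(10) == "access"
assert port_group_vlan_mode(4095) == "trunk"
```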

Virtual Machines interfaces and VMKernel ports

A virtual machine uses one or multiple adapters to connect to port groups. Virtual machine-facing ports are Layer 2, so no IP addresses need to be specified on the ESXi host. The router for the VMs will be outside of the ESXi host, with the exception of virtual software routers or firewalls.

The guest OS requires a NIC driver, as it would with a physical adapter. Currently, there are 2 types of adapters available with ESXi: E1000 and VMXNET 3. E1000 emulates an Intel network card, and the guest OS usually has a driver for it included in its standard set of drivers. VMXNET 3, also known as the para-virtualized adapter, provides better performance and more features. Drivers for VMXNET 3 are available with VMware Tools.

A VMKernel port provides connectivity to the ESXi host itself. Each VMKernel port requires a dedicated port group. A VMKernel port is a Layer 3 port with an IP address assigned; however, multiple VMKernel ports can share the same VLAN.

During installation, a default management VMKernel port is automatically created. Additional ports can be created for the following types of host-sourced traffic:

  • iSCSI or NFS
  • vMotion
  • Provisioning
  • Fault Tolerance logging
  • vSphere Replication
  • vSphere Replication NFC
  • vSAN

A VMKernel port is also associated with a TCP/IP stack. Each stack has a separate IP routing table and DNS configuration. With multiple stacks, each type of management traffic can use its own default gateway instead of adding multiple static routes to a single stack.

vSwitch and Port Group policies

There are 3 categories of settings that can be applied to Port Groups. While the settings can also be defined on a vSwitch level, they are ultimately applied to Port Groups.

  • Uplink configuration (NIC teaming)
  • Security
  • Traffic shaping

We will discuss each of these categories in the following sections. By default, all port groups inherit the global configuration applied on the vSwitch, which serves as a template for a default set of settings. If a specific port group requires different policy settings, an administrator can override the vSwitch settings at the port group level.

Uplink configuration settings

NIC teaming settings define how traffic will be distributed across multiple network adapters. Before discussing the uplink configuration further, I will provide a short overview of how multiple uplinks can be active at the same time without creating a layer 2 loop.

How does a Virtual Standard Switch prevent loops? In Figure 3, two network adapters create a loop, as the physical switches are inter-connected directly upstream. You might be wondering whether the switches recognize that there is a loop and whether they need to block any of the interfaces using the spanning tree protocol, as would be done in a traditional switched network. Virtual Standard Switches don’t run STP. They also do not forward BPDUs received from one upstream switch to another.

To prevent the consequences of Layer 2 loops, such as broadcast storms, a virtual switch uses a simple rule of not forwarding traffic received on one uplink to any other uplink. This way, every frame's source or destination must always be either a VM or a VMKernel port. With this rule in place, all physical interfaces can forward traffic at the same time. Upstream switch ports can be configured as edge ports or with features such as PortFast, which disables STP on the port.
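
The rule above can be expressed as a simple forwarding check (an illustrative model in Python; the port and adapter names are made up for the example):

```python
def may_forward(src_port: str, dst_port: str, port_type: dict) -> bool:
    """A standard vSwitch never forwards frames from one uplink to another uplink."""
    return not (port_type[src_port] == "uplink" and port_type[dst_port] == "uplink")

port_type = {"vmnic0": "uplink", "vmnic1": "uplink", "vm-a": "vm", "vmk0": "vmkernel"}
assert may_forward("vmnic0", "vm-a", port_type)        # uplink -> VM: allowed
assert may_forward("vm-a", "vmnic1", port_type)        # VM -> uplink: allowed
assert not may_forward("vmnic0", "vmnic1", port_type)  # uplink -> uplink: blocked
```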

Note that nothing stops a VM from performing “inside-OS” Layer 2 bridging between its adapters that are pinned to different uplinks. This case doesn’t fall under the rule above, as from the hypervisor's perspective the traffic goes to and from a VM. Such a configuration can bring the network down or significantly degrade its performance due to the Layer 2 loop it creates.

Now that we have discussed how multiple adapters can be active at the same time, let’s look at the different types of load balancing of traffic across these uplinks.

There are several load-balancing mechanisms available under NIC teaming configuration of a port group:

  • Originating port ID
  • Source MAC hash
  • IP hash

Figure 4 shows 2 standard switches with each port group configured to use one of the algorithms above.

vSwitch Uplink Pinning and Load-Balancing Algorithms
Figure 4. vSwitch Uplink Pinning and Load-Balancing Algorithms

Virtual port- and source MAC-based

Virtual port-based balancing is the default algorithm; it assigns a virtual port to a specific uplink interface. This way, if there are 8 VM ports and 2 uplinks, 4 VM ports will be mapped to one uplink and the other four to the other.

The MAC-based load-balancing algorithm is similar to virtual port-based, with the difference that the input for the algorithm is not the virtual port ID but the VM's source MAC address.

The downside is that different VMs can generate different loads but have the same weight, so these two algorithms cannot evenly utilize the available upstream interface bandwidth. The benefit of these load balancing methods is that they are simple to troubleshoot and will work with any upstream switch configuration, as a VM's MAC address will be consistently reachable via a single physical interface.
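
Both pinning algorithms reduce to a deterministic mapping from a per-VM key (the virtual port ID, or a value derived from the source MAC) to an uplink index. A minimal Python model of the idea (ESXi's actual hashing is internal):

```python
def pin_uplink(key: int, uplinks: list) -> str:
    """Deterministically map a virtual port ID (or MAC-derived value) to an uplink."""
    return uplinks[key % len(uplinks)]

uplinks = ["vmnic2", "vmnic3"]
assignments = [pin_uplink(port_id, uplinks) for port_id in range(8)]
# 8 VM ports over 2 uplinks: an even 4/4 split, regardless of per-VM traffic volume
assert assignments.count("vmnic2") == 4
assert assignments.count("vmnic3") == 4
```

All traffic from one VM port stays on one uplink, which is why its MAC is always learned on a single physical switch port.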

IP hash-based

This algorithm selects the uplink based on a combination of the source and destination IP addresses of a packet. It provides more even uplink utilization, as traffic from every single VM will be split across many uplinks if the VM communicates with multiple devices. With this load balancing configuration, the upstream switches must have a specific configuration, for the reason below.

A physical switch learns source MAC addresses as part of its operation. When traffic from a single source MAC address is seen over more than one interface, the switch has to rapidly invalidate and update its MAC table. In some cases, switches can block offending ports, as MAC flapping is also a sign of a Layer 2 loop.

The solution to this is to configure the upstream switch ports as a static EtherChannel (or Link Aggregation Group). It has to be a static configuration, as shown in the configuration example of Figure 4, because LACP is not supported on Standard Switches. If there are multiple upstream switches, they must either be physically stacked together or employ some virtual form of stacking, such as Cisco Nexus vPC or Catalyst VSS/Virtual StackWise.
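
IP hash selection can be modeled the same way, except that the key is derived from the source and destination IP addresses, so a single VM talking to many peers spreads across uplinks. An illustrative Python sketch (not VMware's actual hash function):

```python
import ipaddress

def ip_hash_uplink(src_ip: str, dst_ip: str, uplinks: list) -> str:
    """Pick an uplink from a hash of the source and destination IP addresses."""
    key = int(ipaddress.ip_address(src_ip)) ^ int(ipaddress.ip_address(dst_ip))
    return uplinks[key % len(uplinks)]

uplinks = ["vmnic2", "vmnic3"]
# A single VM (10.0.0.5) talking to many peers spreads its flows over both uplinks
chosen = {ip_hash_uplink("10.0.0.5", f"10.0.1.{i}", uplinks) for i in range(10)}
assert chosen == {"vmnic2", "vmnic3"}
```

This is also why the upstream switch sees the same source MAC on multiple ports and must treat the links as one logical channel.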

Failover settings

If no load balancing is required for the port group, select “Use Explicit Failover Order”, listed as one of the available load balancing methods in the configuration menu. This setting essentially says: do not use load balancing, just fail over to another active adapter if the current one fails.

Failover order is managed by placing adapters in one of three categories:

  • Active adapters. To use the load balancing mechanisms described above, place at least 2 adapters in this list.
  • Standby adapters. An adapter in this category replaces a failed active adapter.
  • Unused adapters. This category prevents specific port groups from using particular adapters.

To identify whether an adapter went down, the ESXi host will, by default, check its link status. This will not detect upstream switch failures, and the host might be sending traffic into a black hole over the failed path. An administrator can change the failure detection method to beacon probing. When it is enabled, the host will send broadcast beacons that must be received by the other NICs in the same team. It is recommended to have at least 3 physical adapters connected to different switches.

Security settings

Each port group has 3 parameters controlling layer 2 security:

  • Promiscuous mode
  • MAC address changes
  • Forged transmits

Promiscuous mode

Ports in a port group that has the Accept action for promiscuous mode will receive a copy of all traffic in the vSwitch that the port group belongs to. It is similar to the port mirroring feature available in physical switches. By default, this setting is set to Reject.

This setting should be enabled with care and only when required, for example, to support appliances that work with a copy of the traffic for security and monitoring analysis. In such cases, create a dedicated port group and override the default behavior on it by enabling promiscuous mode. Enabling this setting at the switch level or on a port group that contains other VM ports is a security risk, as VMs can run Wireshark or similar tools to intercept traffic exchanged by other VMs.

MAC address changes

The hypervisor assigns a virtual MAC address to a VM. The guest OS can try to change this MAC address; however, with this setting set to Reject, no packets will be sent to the guest VM's new MAC address. This setting works on traffic in the direction towards the VM. By default, this setting is set to Accept when a switch is created via vCenter and to Reject when created directly via the ESXi host.

Forged transmits

This is a mirrored version of the previous setting: it checks frames as they are received from the VM. If the source MAC address doesn’t match the one assigned by the ESXi host, the frame is discarded. This setting has the same defaults as the previous one.

Traffic shaping

The final set of settings we will discuss in this article is related to traffic shaping, which is a mechanism to limit the throughput of traffic to a specified value. Standard Virtual Switches support only outbound traffic shaping, i.e. the virtual machine's upload. When it is enabled, you have access to 3 parameters:

  • Average Bandwidth (bits per second)
  • Peak Bandwidth (bits per second)
  • Burst Size (bytes)

Average Bandwidth defines the target rate over a period of time. Many applications send traffic in a non-uniform pattern, with quiet periods followed by short bursts. By configuring Burst Size, you allow the algorithm to accumulate bonus credit during quiet periods, which can then be used to send at rates higher than average, up to the Peak Bandwidth.

The settings are applied to each port individually, i.e. the values are not aggregate numbers for all ports.
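
The interaction of the three parameters can be modeled as a token bucket: Average Bandwidth refills the bucket, Burst Size caps the accumulated credit, and Peak Bandwidth caps the instantaneous rate. A simplified Python sketch of the idea (not the actual ESXi implementation):

```python
class Shaper:
    """Simplified token-bucket model of per-port outbound traffic shaping."""
    def __init__(self, avg_bps: float, peak_bps: float, burst_bytes: float):
        self.avg = avg_bps / 8        # refill rate in bytes per second
        self.peak = peak_bps / 8      # instantaneous rate cap in bytes per second
        self.burst = burst_bytes      # bucket capacity (maximum accumulated credit)
        self.tokens = burst_bytes     # start with a full bucket

    def send(self, nbytes: float, elapsed_s: float) -> bool:
        """Attempt to send nbytes, elapsed_s seconds after the previous attempt."""
        self.tokens = min(self.burst, self.tokens + self.avg * elapsed_s)
        if nbytes > self.peak * elapsed_s:   # would exceed Peak Bandwidth
            return False
        if nbytes > self.tokens:             # not enough accumulated credit
            return False
        self.tokens -= nbytes
        return True

# 1 Mbps average, 2 Mbps peak, 64 KB burst size
s = Shaper(1_000_000, 2_000_000, 64_000)
assert s.send(64_000, 0.3)       # a burst above the average rate, paid for by credit
assert not s.send(64_000, 0.1)   # 64 KB in 0.1 s would exceed the 2 Mbps peak
```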

In the next article, I will provide information on how to perform the configuration of concepts described in this post.

vCenter Server 6.7 Installation and Configuration

vCenter Appliance (vCSA) Deployment

vCSA is a virtual machine and can be deployed on ESXi hosts running version 5.5 or later. Depending on the size of the vSphere deployment and whether you plan to install the vCenter appliance into an existing environment or are starting a new one, you have the option to install vCSA with an embedded or external Platform Services Controller. This article provides information about how these components work together (link).

Let’s start with the simple option of deploying vCenter with an embedded PSC. The vCSA distribution media is an ISO file named in the following format: VMware-VCSA-all-<version>-<build-number>.iso. To start the installation, mount this file on a workstation running Windows, Linux or macOS. The root of the folder contains a readme.txt file explaining the different installation options.

Read More

vSphere 6.7 ESXi Host Installation and Configuration

Installation

To install an ESXi host, you will first need to verify that the hardware meets the minimum requirements. The server platform also must be supported and listed in the VMware Compatibility Guide (link). You will most likely be able to install ESXi on non-supported hardware; however, this should be done only for non-production environments, as VMware will not provide support for such an installation.

A server running ESXi 6.7 requires at least 2 CPU cores, 4 GB of RAM, and a Gigabit network adapter; if a local disk is to be used for boot, at least 5.2 GB of disk space must be available (or 4 GB for USB or SD boot). The NX/XD bit for the CPU and hardware virtualization support must be enabled in the BIOS.

Download the ESXi installation file from the VMware website (the filename is in the VMware-VMvisor-Installer-<version>-<build>.x86_64.iso format).

Read More

vSphere 6.7 Editions, Licensing, Architecture and Solutions

Check our new post on vSphere 7.0 Editions.

vSphere 6.7 Editions and Licensing

VMware vSphere 6.7 licensing is based on the physical CPU count of the hosts. Every edition requires the purchase of a Support and Subscription contract.

A license key has the edition and quantity information encoded in it. These keys are not tied to specific hardware and can be assigned to multiple hosts, as long as the number of CPUs is within the licensed limit.

vSphere customers with current contracts are entitled to version upgrades. vSphere licenses can also be downgraded to previous versions.

In vSphere 6.7 there are three editions available:

  • Standard
  • Enterprise Plus
  • Platinum
Read More