
Windows Server 2012 – configuring NIC teaming


NIC teaming is now built into the Windows operating system. This post briefly looks at what is supported, the possible configurations and some example configurations.

What is supported

Datacenter bridging (DCB), IPsec task offload, large send offload, receive side coalescing (RSC), receive side scaling (RSS), receive and transmit side checksum offloads, and virtual machine queues (VMQ) are all supported.

What is not supported

SR-IOV, RDMA and TCP Chimney are not supported by NIC teaming because these technologies bypass the network stack. SR-IOV can still be combined with NIC teaming by creating the team within the virtual machine. 802.1X and QoS are not supported either.

Mixing network adapters of different speeds is not supported in active / active teams.

NIC teaming configurations

There are two teaming modes: switch independent and switch dependent. Switch independent does not require the switch to be intelligent; all the intelligence is provided in software, and this mode also allows the team to span switches. Switch dependent requires that the switch supports Link Aggregation Control Protocol (LACP / 802.1ax) or static teaming (802.3ad draft v1), and the team is limited to a single switch.

The load balancing algorithms available are address hashing and Hyper-V ports. Address hashing hashes the TCP or UDP port and IP addresses of the outgoing packet; if the TCP or UDP port is hidden, i.e. protected by IPsec, or the traffic is not TCP or UDP, then the IP addresses are hashed instead. Finally, if the traffic is not IP, a hash of the MAC addresses is generated.

Hyper-V port load balancing uses the virtual port identifier to load balance traffic.

The combinations of the teaming modes and algorithms defined above are as follows; a PowerShell sketch of each appears after the list:

  1. Switch independent with address hashing
    1. Uses all team members to send outbound traffic. Inbound traffic can only be received on the primary network adapter of the NIC team. This mode is best suited for applications such as IIS, teaming within a virtual machine, teaming where switch diversity is a concern and situations where active / standby teaming is required.
  2. Switch independent with Hyper-V ports
    1. Each virtual machine will be limited to the bandwidth of one network adapter in the NIC team, and will send and receive data on the same network adapter. This mode is best suited when the number of virtual machines far exceeds the number of team members and the bandwidth limitation is not a concern. It is also the best fit when virtual machine queues (VMQ) are used.
  3. Switch dependent with address hashing
    1. Uses all team members to send outbound traffic, inbound traffic will be distributed by the switch. This mode is best suited for applications which run on Windows natively and require maximum network performance and can also be used where virtual machines have a requirement for more bandwidth than one network adapter can provide.
  4. Switch dependent with Hyper-V ports
    1. More or less the same as switch independent with Hyper-V ports, with the following exceptions: the inbound network traffic is distributed by the switch, which may result in inbound traffic arriving on all team members; a VMQ will therefore be present on all team members for a particular VM, so this mode may not be best suited when VMQ is configured. This mode suits a densely populated Hyper-V host where policy dictates that LACP should be used.
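As a rough guide, each of the four combinations maps onto New-NetLbfoTeam as sketched below. The team and adapter names are placeholders, and TransportPorts is the address hashing variant that includes the TCP/UDP ports; the four commands are alternatives, not a script.

# 1. Switch independent with address hashing
New-NetLbfoTeam -Name "Team1" -TeamMembers "NIC1","NIC2" -TeamingMode SwitchIndependent -LoadBalancingAlgorithm TransportPorts

# 2. Switch independent with Hyper-V ports
New-NetLbfoTeam -Name "Team1" -TeamMembers "NIC1","NIC2" -TeamingMode SwitchIndependent -LoadBalancingAlgorithm HyperVPort

# 3. Switch dependent (LACP) with address hashing
New-NetLbfoTeam -Name "Team1" -TeamMembers "NIC1","NIC2" -TeamingMode LACP -LoadBalancingAlgorithm TransportPorts

# 4. Switch dependent (LACP) with Hyper-V ports
New-NetLbfoTeam -Name "Team1" -TeamMembers "NIC1","NIC2" -TeamingMode LACP -LoadBalancingAlgorithm HyperVPort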

NIC teaming within Hyper-V

NIC teaming within the virtual machine allows you to provide access to two or more virtual switches. Each virtual network adapter must be configured to allow teaming. The team within the virtual machine cannot be anything other than switch independent with address hashing. NIC teaming is also useful where SR-IOV is being used, as SR-IOV allows the virtual machine direct access to the hardware.

The vSwitches must connect to physically different switches, and the virtual machine should be configured with a network adapter assigned to each vSwitch.
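A minimal sketch of that host-side setup in PowerShell, assuming hypothetical adapter, switch and virtual machine names:

# Create a virtual switch on each physical network adapter (names are examples)
New-VMSwitch -Name "vSwitch1" -NetAdapterName "NIC1" -AllowManagementOS $false
New-VMSwitch -Name "vSwitch2" -NetAdapterName "NIC2" -AllowManagementOS $false

# Give the virtual machine one network adapter per virtual switch
Add-VMNetworkAdapter -VMName "VM1" -SwitchName "vSwitch1"
Add-VMNetworkAdapter -VMName "VM1" -SwitchName "vSwitch2"

# Allow the virtual network adapters to participate in a guest team
Get-VMNetworkAdapter -VMName "VM1" | Set-VMNetworkAdapter -AllowTeaming On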

The image below shows three virtual switches, each with one network adapter assigned.

[Image: NICTeamingVMConfig0]

Within the virtual machine, create two network adapters, connecting each one to a different virtual switch.

[Image: NICTeamingVMConfig01]

Within Server Manager you will notice two network adapters, nothing new there.

[Image: NICTeamingVMConfig03]

Click on the NIC Teaming link to configure NIC teaming.

[Image: NicTeamingGUIVM1]

Select Tasks > New Team.

[Image: NicTeamingGUIVM2]

Name the team and select the adapters; you’ll notice that, because the operating system is installed within a virtual machine, you’re restricted to address hashing in switch independent mode.

[Image: NicTeamingGUIVM3]

[Image: NICTeamingVMConfig05]

As an alternative to the GUI, you can do the same in one line of PowerShell using New-NetLbfoTeam.
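For example, assuming the two adapters appear in the guest as "Ethernet" and "Ethernet 2" (the adapter and team names below are placeholders):

# Create the in-guest team; only switch independent with address hashing is allowed in a VM
New-NetLbfoTeam -Name "VMTeam" -TeamMembers "Ethernet","Ethernet 2" -TeamingMode SwitchIndependent -LoadBalancingAlgorithm TransportPorts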

Native NIC teaming

NIC teaming natively allows you to untag multiple VLANs using team interfaces; the most common NIC teaming configuration is switch dependent with address hashing.

NIC teaming configuration example:

[Image: LACP1NicTeam]
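A command along the following lines creates such a team (team and adapter names are placeholders):

# Create a switch dependent (LACP) team with address hashing
New-NetLbfoTeam -Name "LACPTeam" -TeamMembers "NIC1","NIC2" -TeamingMode LACP -LoadBalancingAlgorithm TransportPorts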

Before running the command above I would suggest the network ports are left disconnected to protect against loops and broadcast storms. I have an HP ProCurve switch which supports dynamic LACP, so I only had to configure the switch ports to be LACP active and the switch does the rest; HP's documentation covers the switch-side configuration.

To remove the NIC teaming configuration, use Remove-NetLbfoTeam -Name [Team name].

NIC teaming can also be used where VLAN tagging at the operating system level is required.
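As a sketch, a tagged team interface can be added with Add-NetLbfoTeamNic; the team name and VLAN ID here are examples:

# Add a team interface that handles traffic tagged with VLAN 10
Add-NetLbfoTeamNic -Team "LACPTeam" -VlanID 10 -Name "LACPTeam - VLAN 10"

The default team interface then continues to carry the untagged traffic, while each tagged VLAN gets its own team interface.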
