
Building a Hyper-V 2012 cluster


  • Hardware capable of presenting at least four computers – I’ve gone for a PowerEdge T110 II with VMware ESXi 5.5 installed; check out this article regarding running Hyper-V on top of ESXi.
  • Active Directory domain – I’ve gone for a single domain controller.
  • At least two computers running Windows Server 2012 with the Hyper-V role installed – I’ve gone for two computers running Windows Server 2012 R2 core edition with the Hyper-V role and failover clustering feature installed.
  • Shared storage of some form – I’ve used the built-in iSCSI target functionality of Windows Server 2012 R2 to present storage to the cluster nodes.
  • At least one network interface per function e.g. one for management, one for virtual machines, one for cluster heartbeat and one for storage (if you’re using iSCSI) – I’ve gone for dual storage links, single cluster link, single management link and a single link for virtual machine traffic.

Network configuration

Hyper-V management interfaces.


Hyper-V Cluster interfaces


Storage interfaces


Virtual machine traffic interfaces


The network interface binding order on the Hyper-V hosts should be as follows:

  1. Failover cluster virtual adapter
  2. Management adapters
  3. Cluster heartbeat adapters
  4. Storage adapters
  5. Microsoft virtual switch adapter

If you are using Windows Server 2012 R2 core edition then you’ll need to compare the output of the following WMI query with the following registry key to verify the binding order.

WMI: wmic nicconfig get description, settingid


Registry: HKLM\System\CurrentControlSet\Services\Tcpip\Linkage\Bind


The binding order is top to bottom, so GUID {34CCAEDD…} should match up with the failover cluster virtual adapter listed in the WMI output, GUID {486EB2A7…} should match up with the management adapter (in my case an Intel E1000), the next GUID should match the cluster adapter, and so on.
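To make the comparison easier, the same information can be pulled with a couple of PowerShell one-liners (a sketch; the output on your hosts will obviously differ):

```powershell
# List each adapter's description alongside its GUID (SettingID),
# equivalent to the wmic query above
Get-WmiObject Win32_NetworkAdapterConfiguration |
    Select-Object Description, SettingID

# Read the current binding order (a list of GUIDs, top to bottom)
(Get-ItemProperty 'HKLM:\SYSTEM\CurrentControlSet\Services\Tcpip\Linkage').Bind
```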

If you are not sure which adapter is which, e.g. which one is ‘vmxnet3 Ethernet Adapter #2’, then run ipconfig /all, make a note of the adapter description and MAC address, and cross-reference this with whatever network that MAC address lives on. In VMware that’s really easy, but in a physical environment you would more than likely need to check the MAC address table on the switch.
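On Server 2012 R2 the same details are available from a single PowerShell cmdlet, which saves cross-referencing ipconfig output by hand:

```powershell
# Show name, driver description and MAC address for every adapter
Get-NetAdapter | Select-Object Name, InterfaceDescription, MacAddress
```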

Management adapter configuration

Each adapter is assigned an IP address within the management subnet.

Cluster adapter configuration

Each adapter is assigned an IP address within the cluster subnet.

Storage adapter configuration

I have selected to have two adapters for storage; I have an adapter on each host connected to a different subnet, which in turn connects to a specific adapter on the iSCSI target; see the storage adapters image above. It is worth noting that the storage vendor will usually provide guidance on how your storage adapters should be configured.

I have configured my storage adapters to use jumbo frames. In Windows Server 2012 R2 core edition jumbo frames can be configured via the registry.

First of all, get the network adapter IDs of the storage adapters. The easiest way is to run sconfig.cmd, select option 8 (Network Settings) and make a note of the index numbers associated with the storage adapters.

Open the registry, navigate to HKLM\System\CurrentControlSet\Control\Class\{4d36e972-e325-11ce-bfc1-08002be10318}\00[Index number from above] and edit the JumboPacket value (some drivers name it *JumboPacket).


A value of 9014 enables jumbo frames, whereas 1514 is the default MTU.
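If you’d rather not edit the registry directly, the same setting can usually be applied with PowerShell (a sketch; the adapter name ‘Storage1’ is an assumption, and the *JumboPacket keyword depends on the driver):

```powershell
# Set jumbo frames on a storage adapter via its advanced property
Set-NetAdapterAdvancedProperty -Name 'Storage1' `
    -RegistryKeyword '*JumboPacket' -RegistryValue 9014
```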


I have configured Windows Server 2012 R2 to be my iSCSI target; this computer has two adapters dedicated to iSCSI.


Once the networking is in place the MPIO feature should be installed on the Hyper-V hosts and the iSCSI initiator should be configured.

To install MPIO run – Install-WindowsFeature -Name Multipath-IO

To configure MPIO run mpiocpl.exe from the command prompt, select ‘Add support for iSCSI devices’, click ‘Add’ and then ‘OK’.


Restart when prompted.
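On Server Core the same iSCSI claim can be made from PowerShell, which avoids the GUI altogether:

```powershell
# Claim iSCSI-attached disks for MPIO (equivalent to ticking
# 'Add support for iSCSI devices' in mpiocpl.exe)
Enable-MSDSMAutomaticClaim -BusType iSCSI
```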

Hyper-V virtual switch configuration

An adapter on each host will be assigned to an external switch. The external switch name must be identical across the Hyper-V hosts and it is best practice not to share this network adapter with the Hyper-V host.
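On Server Core the switch can be created with PowerShell (a sketch; the switch name ‘VM Traffic’ and adapter name ‘VM1’ are assumptions, and the switch name must match on every host):

```powershell
# Create an external switch bound to the VM traffic adapter,
# without sharing that adapter with the management OS
New-VMSwitch -Name 'VM Traffic' -NetAdapterName 'VM1' `
    -AllowManagementOS $false
```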


Network adapter naming

The network adapters across all hosts should be consistently named.

In server core network adapters can be renamed using netsh.

Use ‘netsh interface ipv4 show interfaces’ to get a list of interfaces, then use netsh interface set interface name=”[Current adapter name]” newname=”[New name]” to rename an adapter.

iSCSI configuration

The iSCSI target will need to present a quorum disk and one or more disks for the virtual machine storage; I’ve configured a 512MB LUN for the quorum and a 1TB LUN for the virtual machine storage.

The iSCSI target has been configured to only present storage to the initiators of the Hyper-V hosts.
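With the iSCSI Target Server role installed, the LUNs and target can also be created with PowerShell (a sketch; the paths, target name and initiator IQNs below are assumptions and must match your hosts):

```powershell
# Create the quorum and VM storage virtual disks
New-IscsiVirtualDisk -Path 'C:\iSCSIVirtualDisks\Quorum.vhdx' -SizeBytes 512MB
New-IscsiVirtualDisk -Path 'C:\iSCSIVirtualDisks\VMStore.vhdx' -SizeBytes 1TB

# Create a target restricted to the Hyper-V hosts' initiators
New-IscsiServerTarget -TargetName 'HyperVCluster' `
    -InitiatorIds 'IQN:iqn.1991-05.com.microsoft:hv01.contoso.com',
                  'IQN:iqn.1991-05.com.microsoft:hv02.contoso.com'

# Map both disks to the target
Add-IscsiVirtualDiskTargetMapping -TargetName 'HyperVCluster' -Path 'C:\iSCSIVirtualDisks\Quorum.vhdx'
Add-IscsiVirtualDiskTargetMapping -TargetName 'HyperVCluster' -Path 'C:\iSCSIVirtualDisks\VMStore.vhdx'
```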


To configure the iSCSI initiator on the hosts run iscsicpl.exe from the command prompt.

Add the iSCSI targets


Then click ‘Connect’, specify the initiator and target IP addresses and enable multi-path.


Once you have connected to both iSCSI targets using their associated adapter you can configure MPIO.

Click ‘Devices’ then ‘MPIO’; you’ll need to configure MPIO for each device.


Then select the applicable multi-path policy.
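The initiator side can also be scripted (a sketch; the portal addresses are assumptions standing in for the target’s two storage-subnet adapters):

```powershell
# Register the two target portals, one per storage subnet
New-IscsiTargetPortal -TargetPortalAddress '10.0.10.10'
New-IscsiTargetPortal -TargetPortalAddress '10.0.20.10'

# Connect to each discovered target persistently, with multi-path enabled
Get-IscsiTarget | Connect-IscsiTarget -IsMultipathEnabled $true -IsPersistent $true
```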


Failover cluster configuration

I have installed the Failover Clustering management tools on a computer running the full GUI; you could also use a domain-joined Windows client operating system with the Remote Server Administration Tools installed.

Once installed, open the failover cluster tool and create a new cluster. Make sure you run all validation tests; the end result should be green ticks against all tests, and any warnings or errors are usually descriptive enough to suss out the underlying problem.
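Validation and cluster creation can be done with PowerShell too (a sketch; the host names, cluster name and IP address are assumptions):

```powershell
# Run all validation tests against both hosts
Test-Cluster -Node 'HV01','HV02'

# Create the cluster with a floating network name and IP address
New-Cluster -Name 'HVCLUSTER' -Node 'HV01','HV02' -StaticAddress '192.168.1.50'
```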


IP Addressing and network names

The failover clustering wizard will prompt for a cluster network name and cluster IP address which can float between hosts.


Naming and configuring cluster networks

Failover clustering will identify the networks on each host and label them ‘Cluster Network #’; I tend to name them to reflect their function, i.e. ‘Cluster’, ‘Hyper-V Management’ and so on.


In the image above I have configured the storage networks not to pass cluster heartbeats over those interfaces. If you are short on network interfaces and have to share, then you can configure QoS on those interfaces; look up New-NetQosPolicy for more info.
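As a sketch of what that might look like (the bandwidth weight of 40 is an assumption; tune it to your environment):

```powershell
# Give iSCSI traffic a guaranteed minimum bandwidth share
# on interfaces shared with other traffic
New-NetQosPolicy -Name 'iSCSI' -iSCSITransport -MinBandwidthWeightAction 40
```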

Cluster storage

The disks I presented to the hosts earlier are now configured; the 512MB disk is used as the quorum disk as per the image below and the 1TB disk is used to store highly available virtual machines.


The 1TB disk will need to be configured as a cluster shared volume to support highly available virtual machines; simply right-click the disk and select ‘Add to Cluster Shared Volumes’.


Note: The image shows remove because I’ve already enabled cluster shared volumes.


Cluster shared volumes appear as mount points under C:\ClusterStorage\.

Diskpart shows this too.
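The same step in PowerShell (a sketch; the disk resource name is an assumption, check yours with Get-ClusterResource):

```powershell
# Add the 1TB virtual machine storage disk to Cluster Shared Volumes
Add-ClusterSharedVolume -Name 'Cluster Disk 2'
```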


Next step is to deploy some virtual machines…deploy System Center VMM…

MCTS 70-646 Plan high availability

Plan high availability

Service redundancy

More on Microsoft failover clustering here

Microsoft failover clustering no longer supports direct parallel SCSI and requires shared storage that is SCSI Primary Commands (SPC-3) compatible i.e. uses SCSI-3 persistent reservations.

Service availability

More on Microsoft Load balancing and DNS round robin here

Mixed-mode clusters are possible but it is recommended that all cluster nodes run the same operating system.

If IGMP multicast mode is used then IGMP snooping must be enabled on the switch.

Each cluster host sends a heartbeat every second; if five consecutive heartbeats are missed, the cluster converges, removing the failed host.

The MCITP self-paced training kit for Windows Server 2008 administrator seems to suggest that Microsoft testing has found that NLB clusters with more than eight nodes are not efficient. If you need more than eight nodes, consider multiple NLB clusters combined with DNS round robin.

DNS round robin is enabled by default; remember that netmask ordering is enabled by default too. Netmask ordering will return the IP address of a record that sits within the same subnet as the client, and DNS netmask ordering uses a class C mask to determine whether the network is local.