Building a Hyper-V 2012 cluster

Prerequisites

  • Hardware to present at least four computers – I’ve gone for a PowerEdge T110 II with VMware ESXi 5.5 installed; check out this article regarding running Hyper-V on top of ESXi.
  • Active Directory domain – I’ve gone for a single domain controller.
  • At least two computers running Windows Server 2012 with the Hyper-V role installed – I’ve gone for two computers running Windows Server 2012 R2 core edition with the Hyper-V role and failover clustering feature installed.
  • Shared storage of some form – I’ve presented storage to Windows Server 2012 R2 and utilised the built-in iSCSI target functionality.
  • At least one network interface per function e.g. one for management, one for virtual machines, one for cluster heartbeat and one for storage (if you’re using iSCSI) – I’ve gone for dual storage links, single cluster link, single management link and a single link for virtual machine traffic.

Network configuration

Hyper-V management interfaces

NetworkConfiguration1

Hyper-V Cluster interfaces

NetworkConfiguration2

Storage interfaces

NetworkConfiguration3

Virtual machine traffic interfaces

NetworkConfiguration4

The network interface binding order on the Hyper-V hosts should be as follows:

  1. Failover cluster virtual adapter
  2. Management adapters
  3. Cluster heartbeat adapters
  4. Storage adapters
  5. Microsoft virtual switch adapter

If you are using Windows Server 2012 R2 core edition then you’ll need to compare the output of the following WMI query with the following registry key to configure the binding order.

WMI: wmic nicconfig get description, settingid

NetworkBindingOrder1

Registry: HKLM\System\CurrentControlSet\Services\Tcpip\Linkage\Bind

NetworkBindingOrder2

The binding order is top to bottom, so GUID {34CCAEDD…} should match up with the Failover cluster virtual adapter listed in the WMI output, GUID {486EB2A7…} should match up with the management adapter (in my case an Intel E1000), the next GUID should match the cluster adapter, and so on…
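If you prefer to do the comparison in PowerShell rather than eyeballing wmic output against regedit, a minimal sketch along these lines (assuming the default cmdlets on 2012 R2) prints both lists so you can match the GUIDs up:

  # List each adapter's description alongside its GUID (SettingID)
  Get-WmiObject Win32_NetworkAdapterConfiguration |
      Select-Object Description, SettingID

  # Show the current TCP/IP binding order, highest priority first
  (Get-ItemProperty -Path "HKLM:\SYSTEM\CurrentControlSet\Services\Tcpip\Linkage" -Name Bind).Bind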

If you are not sure which adapter is which, e.g. what is ‘vmxnet3 Ethernet Adapter #2’, then run ipconfig /all, make a note of the adapter description and MAC address, and cross-reference this with whatever network that MAC address lives on. In VMware that’s really easy, but in a physical environment you would more than likely need to check out the MAC address table on the switch.
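On 2012 R2 the NetAdapter module gives you the name, driver description and MAC address in one hit, which saves picking through ipconfig /all; a quick sketch:

  # Name, driver description, MAC and link speed for every adapter
  Get-NetAdapter | Sort-Object Name |
      Format-Table Name, InterfaceDescription, MacAddress, LinkSpeed -AutoSize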

Management adapter configuration

Each adapter is assigned an IP address within the management subnet of 192.168.4.0/24.
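From Server Core the addressing can be scripted; a sketch, assuming an adapter renamed ‘Management’ and example host, gateway and DNS addresses which you’d substitute for your own. The cluster and storage adapters are configured the same way on their own subnets:

  # Static IP on the management subnet (host addresses are examples only)
  New-NetIPAddress -InterfaceAlias "Management" -IPAddress 192.168.4.21 `
      -PrefixLength 24 -DefaultGateway 192.168.4.1
  Set-DnsClientServerAddress -InterfaceAlias "Management" -ServerAddresses 192.168.4.10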

Cluster adapter configuration

Each adapter is assigned an IP address within the cluster subnet of 192.168.2.0/24.

Storage adapter configuration

I have chosen to use two adapters for storage; I have an adapter on each host connected to a different subnet, which in turn connects to a specific adapter on the iSCSI target; see the storage adapters image above. It is worth noting that the storage vendor will usually provide guidance on how your storage adapters should be configured.

I have configured my storage adapters to use jumbo frames. In Windows Server 2012 R2 core edition jumbo frames can be configured via the registry.

First of all, get the network adapter index numbers of the storage adapters. The easiest way is to run sconfig.cmd, select option 8 for network settings, and make a note of the index numbers associated with the storage adapters.

Open the registry and navigate to HKLM\System\CurrentControlSet\Control\Class\{4d36e972-e325-11ce-bfc1-08002be10318}\[index number from above, zero-padded to four digits, e.g. 7 becomes 0007] and set the *JumboPacket value.

iSCSIConfig7

A value of 9014 enables jumbo frames, whereas 1514 is the default (a 1500-byte MTU plus the 14-byte Ethernet header).

StorageNetworkConfig2
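Rather than hand-editing the registry, the NetAdapter module on 2012 R2 exposes the same driver keyword; a sketch, assuming a storage adapter that has been renamed ‘Storage1’:

  # Writes the same *JumboPacket value the registry method sets
  Set-NetAdapterAdvancedProperty -Name "Storage1" -RegistryKeyword "*JumboPacket" -RegistryValue 9014

  # Confirm the new value took
  Get-NetAdapterAdvancedProperty -Name "Storage1" -RegistryKeyword "*JumboPacket"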

I have configured Windows Server 2012 R2 to be my iSCSI target; this computer has two adapters dedicated to iSCSI.

StorageNetworkConfig1

Once the networking is in place the MPIO feature should be installed on the Hyper-V hosts and the iSCSI initiator should be configured.

To install MPIO run – Install-WindowsFeature -Name Multipath-IO

To configure MPIO run – mpiocpl.exe from the command prompt, then select ‘Add support for iSCSI devices’, click ‘Add’, then ‘OK’.

iSCSIConfig1

Restart when prompted.
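The ‘Add support for iSCSI devices’ step also has a scripted equivalent in the MPIO module, which is handy on Core; a sketch:

  # Scripted equivalent of ticking 'Add support for iSCSI devices' in mpiocpl.exe
  Enable-MSDSMAutomaticClaim -BusType iSCSI
  # A restart is still required for the claim to take effect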

Hyper-V virtual switch configuration

An adapter on each host will be assigned to an external switch. The external switch name must be identical across the Hyper-V hosts, and it is best practice not to share this network adapter with the Hyper-V host.

HyperVvSwitchConfig1
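On Server Core the switch can be created with New-VMSwitch; a sketch, assuming the virtual machine traffic adapter has been renamed ‘VMTraffic’. -AllowManagementOS $false keeps the adapter away from the host, and the same command (with the same switch name) must be run on every host:

  # External switch bound to the VM traffic adapter; not shared with the host
  New-VMSwitch -Name "External" -NetAdapterName "VMTraffic" -AllowManagementOS $false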

Network adapter naming

The network adapters across all hosts should be consistently named.

In Server Core, network adapters can be renamed using netsh.

Use ‘netsh interface ipv4 show interfaces’ to get a list of interfaces, then use ‘netsh interface set interface name=”[current adapter name]” newname=”[new name]”’ to set the adapter’s new name.
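On 2012 R2 Rename-NetAdapter does the same job from PowerShell; a sketch with example names:

  # Rename an adapter; 'Ethernet 2' and 'Storage1' are examples only
  Rename-NetAdapter -Name "Ethernet 2" -NewName "Storage1"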

iSCSI configuration

The iSCSI target will need to present a quorum disk and one or more disks for the virtual machine storage; I’ve configured a 512MB LUN for the quorum and a 1TB LUN for the virtual machine storage.

The iSCSI target has been configured to only present storage to the initiators of the Hyper-V hosts.

iSCSIConfig2
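The built-in iSCSI target can be driven entirely from PowerShell if you’d rather not use the GUI; a sketch, assuming hypothetical VHDX paths, target name and host IQNs – substitute your own throughout:

  # Create a target restricted to the two Hyper-V host initiators (IQNs are examples)
  New-IscsiServerTarget -TargetName "HyperVCluster" `
      -InitiatorIds "IQN:iqn.1991-05.com.microsoft:hv01.lab.local","IQN:iqn.1991-05.com.microsoft:hv02.lab.local"

  # Quorum and virtual machine storage LUNs
  New-IscsiVirtualDisk -Path "D:\iSCSI\Quorum.vhdx" -SizeBytes 512MB
  New-IscsiVirtualDisk -Path "D:\iSCSI\VMStore.vhdx" -SizeBytes 1TB

  # Present both disks through the target
  Add-IscsiVirtualDiskTargetMapping -TargetName "HyperVCluster" -Path "D:\iSCSI\Quorum.vhdx"
  Add-IscsiVirtualDiskTargetMapping -TargetName "HyperVCluster" -Path "D:\iSCSI\VMStore.vhdx"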

To configure the iSCSI initiator on the hosts run iscsicpl.exe from the command prompt.

Add the iSCSI targets

iSCSIConfig3

then click ‘Connect’, specify the initiator and target IP addresses, and enable multi-path.

iSCSIConfig4
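The same connection can be scripted with the iSCSI initiator cmdlets; a sketch, assuming example portal and initiator addresses on the first storage subnet and a hypothetical target IQN – repeat with the second subnet’s addresses for the second path:

  # Make sure the initiator service is running and starts automatically
  Set-Service -Name MSiSCSI -StartupType Automatic
  Start-Service -Name MSiSCSI

  # Discover the target, then connect with multi-path enabled (addresses/IQN are examples)
  New-IscsiTargetPortal -TargetPortalAddress 192.168.10.100
  Connect-IscsiTarget -NodeAddress "iqn.1991-05.com.microsoft:storage-hypervcluster-target" `
      -InitiatorPortalAddress 192.168.10.21 -TargetPortalAddress 192.168.10.100 `
      -IsMultipathEnabled $true -IsPersistent $true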

Once you have connected to both iSCSI targets using their associated adapters you can configure MPIO.

Click ‘Devices’ then ‘MPIO’; you’ll need to configure MPIO for each device.

iSCSIConfig5

then select the applicable multi-path policy.

iSCSIConfig6
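If you’d rather set the policy once for every claimed device instead of per device in the GUI, the MPIO module has a global default; a sketch, using round robin as an example:

  # RR = round robin; other values include FOO (fail over only) and LQD (least queue depth)
  Set-MSDSMGlobalDefaultLoadBalancePolicy -Policy RR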

Failover cluster configuration

I have installed the failover clustering management tools on a computer running the full GUI; I suppose you could also use a domain-joined Windows client operating system with the Remote Server Administration Tools installed.

Once installed, open the failover cluster tool and create a new cluster – make sure you run all validation tests; the end result should be green ticks against all tests, and any warnings or errors are usually descriptive enough to suss out the underlying problem.

FailoverClusterConfig2

IP Addressing and network names

The failover clustering wizard will prompt for a cluster network name and a cluster IP address, which can float between hosts.

FailoverClusterConfig6
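Validation, creation, and the cluster name and IP can all be handled from PowerShell with the failover clustering module; a sketch, with hypothetical node names and an example address on the management subnet:

  # Run the full validation suite against both hosts (names are examples)
  Test-Cluster -Node HV01,HV02

  # Create the cluster with its floating network name and IP address
  New-Cluster -Name HVCLUSTER -Node HV01,HV02 -StaticAddress 192.168.4.50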

Naming and configuring cluster networks

Failover clustering will identify the networks on each host and label them ‘Cluster Network #’; I tend to rename them to reflect their function, e.g. ‘Cluster’, ‘Hyper-V Management’…

FailoverClusterConfig9

In the image above I have configured the storage networks not to pass cluster heartbeats over those interfaces. If you are short on network interfaces and have to share, then you can configure QoS on those interfaces; look up New-NetQosPolicy for more info.
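Both the renaming and the heartbeat exclusion can be scripted; a sketch, assuming the auto-detected ‘Cluster Network 1’ is one of the storage networks and renaming it ‘Storage1’ (Role 0 = no cluster traffic, 1 = cluster only, 3 = cluster and client):

  # See what failover clustering auto-detected
  Get-ClusterNetwork | Format-Table Name, Address, Role

  # Rename by function, then stop cluster traffic crossing the storage network
  (Get-ClusterNetwork -Name "Cluster Network 1").Name = "Storage1"
  (Get-ClusterNetwork -Name "Storage1").Role = 0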

Cluster storage

The disks I presented to the hosts earlier are now configured; the 512MB disk is used as the quorum disk, as per the image below, and the 1TB disk is used to store highly available virtual machines.

FailoverClusterConfig4

The 1TB disk will need to be configured as a Cluster Shared Volume to support highly available virtual machines; simply right-click the disk and select ‘Add to Cluster Shared Volumes’.

FailoverClusterConfig5

Note: The image shows the remove option because I’ve already added the disk to Cluster Shared Volumes.
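The equivalent cmdlet is Add-ClusterSharedVolume; a sketch, assuming the 1TB disk came up as ‘Cluster Disk 2’ – check the actual resource name first:

  # Check the disk resource names before adding
  Get-ClusterResource | Where-Object { $_.ResourceType -eq "Physical Disk" }
  Add-ClusterSharedVolume -Name "Cluster Disk 2"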

FailoverClusterConfig7

Cluster Shared Volumes can be found as mount points under C:\ClusterStorage\.

Diskpart shows this too.

FailoverClusterConfig8

The next step is to deploy some virtual machines… then deploy System Center VMM…
