Tagged: Hyper-V

Kerberos double-hop with Hyper-V and File shares

This article describes how to troubleshoot and fix the errors received when trying to attach an ISO to a Hyper-V virtual machine from a client computer running the Hyper-V management tools or System Center Virtual Machine Manager.

The scenario looks similar to the diagram below.


If you try to mount an ISO hosted on \\fileserver\isos\ to a virtual machine running on the Hyper-V server from management tools hosted on another server with the default configuration, you will see an error similar to the one below.


At first you may want to check permissions on the file share hosted on \\fileserver\isos\, e.g. the share permissions.


Next, check the NTFS permissions. You might expect the Hyper-V computer accounts to need read access, but I believe the account that actually needs access is the account of the user attaching the ISO, so I have granted Domain Users read access.
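The same NTFS grant can be scripted rather than set through the GUI. A minimal sketch with icacls, assuming the share maps to a local D:\isos folder in a CONTOSO domain (both names are examples, not from this setup):

```powershell
# Grant Domain Users read access, inherited by subfolders (CI) and files (OI).
# 'D:\isos' and 'CONTOSO' are hypothetical - substitute your own path and domain.
icacls "D:\isos" /grant "CONTOSO\Domain Users:(OI)(CI)R"
```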


The next step is to troubleshoot authentication, as the authorisation looks to be in place.

The security event log on the file server at the time I attempted to attach the ISO image shows an anonymous logon from the Hyper-V server where the virtual machine is hosted.


This would suggest the Hyper-V host cannot forward the credentials of the user attempting to attach the ISO image. One way to prove this would be to log on to the Hyper-V server, open the Hyper-V tools and attach the ISO; that way your credentials should be used to access the share.

So if I configure delegation on the computer accounts of the Hyper-V hosts for the file server where the share exists, my user account should be forwarded to the file server.

The image below shows that the cifs service has been allowed for delegation on the Hyper-V computer accounts.
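The delegation shown in the image was configured via the Delegation tab in Active Directory Users and Computers; as a sketch, the same constrained delegation can be set with the ActiveDirectory PowerShell module (the host and file server names below are examples):

```powershell
# Requires the ActiveDirectory module (RSAT) and rights to modify the computer objects.
# 'HV01'/'HV02' and 'fileserver.contoso.com' are hypothetical names.
Import-Module ActiveDirectory

foreach ($hvHost in 'HV01', 'HV02') {
    # Allow delegation to the cifs service on the file server
    # (the attribute behind the Delegation tab's service list).
    Set-ADComputer -Identity $hvHost -Add @{
        'msDS-AllowedToDelegateTo' = 'cifs/fileserver.contoso.com', 'cifs/fileserver'
    }
}
```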


If I attempt to attach an ISO to a virtual machine, I can see an event that shows my user account successfully authenticating and, more importantly, the ISO was attached without error.


Hopefully this helps someone else.


Building a Hyper-V 2012 cluster


  • Hardware to run at least four computers – I’ve gone for a PowerEdge T110 II with VMware ESXi 5.5 installed; check out this article regarding running Hyper-V on top of ESX.
  • Active Directory domain – I’ve gone for a single domain controller.
  • At least two computers running Windows Server 2012 with the Hyper-V role installed – I’ve gone for two computers running Windows Server 2012 R2 core edition with the Hyper-V role and failover clustering feature installed.
  • Shared storage of some form – I’ve presented storage to Windows Server 2012 R2 and utilised the built-in iSCSI target functionality.
  • At least one network interface per function e.g. one for management, one for virtual machines, one for cluster heartbeat and one for storage (if you’re using iSCSI) – I’ve gone for dual storage links, single cluster link, single management link and a single link for virtual machine traffic.

Network configuration

Hyper-V management interfaces.


Hyper-V Cluster interfaces


Storage interfaces


Virtual machine traffic interfaces


The network interface binding order on the Hyper-V hosts should be as follows:

  1. Failover cluster virtual adapter
  2. Management adapters
  3. Cluster heartbeat adapters
  4. Storage adapters
  5. Microsoft virtual switch adapter

If you are using Windows Server 2012 R2 core edition then you’ll need to compare the output of the following WMI query against the registry key below to configure the binding order.

WMI: wmic nicconfig get description, settingid


Registry: HKLM\System\CurrentControlSet\Services\Tcpip\Linkage\Bind


The binding order is top to bottom, so GUID {34CCAEDD…} should match the failover cluster virtual adapter listed in the WMI output, GUID {486EB2A7…} should match the management adapter (in my case an Intel E1000), the next GUID should match the cluster adapter, and so on.
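The GUID matching can be done in one pass rather than by eye; a sketch that joins the Bind value against the WMI adapter list and prints the binding order with descriptions:

```powershell
# Print the TCP/IP binding order with adapter descriptions by joining the
# Linkage\Bind GUIDs against Win32_NetworkAdapterConfiguration.SettingID.
$configs = Get-WmiObject Win32_NetworkAdapterConfiguration
$bind = (Get-ItemProperty 'HKLM:\System\CurrentControlSet\Services\Tcpip\Linkage').Bind

foreach ($entry in $bind) {
    # Bind entries look like \Device\{GUID}; strip the prefix to get the GUID.
    $guid = $entry -replace '^\\Device\\'
    $match = $configs | Where-Object { $_.SettingID -eq $guid }
    '{0}  {1}' -f $guid, $match.Description
}
```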

If you are not sure which adapter is which, e.g. which one is ‘vmxnet3 Ethernet Adapter #2’, run ipconfig /all, make a note of the adapter description and MAC address, and cross-reference this with whatever network that MAC address lives on. In VMware that’s really easy, but in a physical environment you would more than likely need to check the MAC address table on the switch.

Management adapter configuration

Each adapter is assigned an IP address within the management subnet.

Cluster adapter configuration

Each adapter is assigned an IP address within the cluster subnet.

Storage adapter configuration

I have chosen to have two adapters for storage; each host has an adapter connected to a different subnet, which in turn connects to a specific adapter on the iSCSI target; see the storage adapters image above. It is worth noting that the storage vendor will usually provide guidance on how your storage adapters should be configured.

I have configured my storage adapters to use jumbo frames. In Windows Server 2012 R2 core edition jumbo frames can be configured via the registry.

First of all, get the network adapter IDs of the storage adapters. The easiest way is to run sconfig.cmd, select 8 for network settings and make a note of the index number associated with the storage adapters.

Open the registry, navigate to HKLM\System\CurrentControlSet\Control\Class\{4d36e972-e325-11ce-bfc1-08002be10318}\00[index number from above] and edit the JumboPacket value.


9014 is jumbo frames whereas 1514 is the default MTU.
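The registry edit above can be scripted; a sketch assuming the storage adapter landed under the 0007 subkey (substitute the index noted from sconfig, and note that some drivers name the value *JumboPacket instead):

```powershell
# Set jumbo frames on a storage adapter via the network class registry key.
# '0007' is an example subkey - use the index number noted from sconfig.
$key = 'HKLM:\System\CurrentControlSet\Control\Class\{4d36e972-e325-11ce-bfc1-08002be10318}\0007'
Set-ItemProperty -Path $key -Name 'JumboPacket' -Value '9014'
# Restart the adapter (or the host) for the change to take effect.
```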


I have configured Windows Server 2012 R2 to be my iSCSI target; this computer has two adapters dedicated to iSCSI.


Once the networking is in place the MPIO feature should be installed on the Hyper-V hosts and the iSCSI initiator should be configured.

To install MPIO run – Install-WindowsFeature -Name Multipath-IO

To configure MPIO run – mpiocpl.exe from the command prompt, select ‘Add support for iSCSI devices’, click ‘Add’ then ‘OK’.


Restart when prompted.
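On core edition the same ‘Add support for iSCSI devices’ step can be done without the dialog, using the MPIO module that ships with the Multipath-IO feature:

```powershell
# Install the MPIO feature and claim iSCSI devices for multipathing -
# the PowerShell equivalent of ticking 'Add support for iSCSI devices'.
Install-WindowsFeature -Name Multipath-IO
Enable-MSDSMAutomaticClaim -BusType iSCSI
# A restart is still required before the devices are claimed.
```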

Hyper-V virtual switch configuration

An adapter on each host will be assigned to an external switch. The external switch name must be identical across the Hyper-V hosts, and it is best practice not to share this network adapter with the Hyper-V host.


Network adapter naming

The network adapters across all hosts should be consistently named.

In Server Core, network adapters can be renamed using netsh.

Use ‘netsh interface ipv4 show interfaces’ to get a list of interfaces, then use netsh interface set interface name=”[Current adapter name]” newname=”[New name]” to rename an adapter.
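The NetAdapter module available on Server 2012 R2 core offers an alternative to netsh for the same job; the adapter names below are examples:

```powershell
# List adapters so names can be matched to hardware, then rename one.
# 'Ethernet 2' and 'Storage1' are example names - substitute your own.
Get-NetAdapter | Format-Table Name, InterfaceDescription, MacAddress
Rename-NetAdapter -Name 'Ethernet 2' -NewName 'Storage1'
```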

iSCSI configuration

The iSCSI target will need to present a quorum disk and one or more disks for the virtual machine storage; I’ve configured a 512MB LUN for the quorum and a 1TB LUN for the virtual machine storage.

The iSCSI target has been configured to only present storage to the initiators of the Hyper-V hosts.
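As a sketch, the target, the two LUNs and the initiator restriction described above can be built with the iSCSI Target Server cmdlets; the paths, sizes and initiator IQNs below are examples, not the actual values from this lab:

```powershell
# Create a target restricted to the two Hyper-V host initiators
# (IQNs are hypothetical - check iscsicli or iscsicpl on each host).
New-IscsiServerTarget -TargetName 'HyperVCluster' `
    -InitiatorIds 'IQN:iqn.1991-05.com.microsoft:hv01.contoso.com',
                  'IQN:iqn.1991-05.com.microsoft:hv02.contoso.com'

# Create the quorum and virtual machine storage LUNs (paths are examples).
New-IscsiVirtualDisk -Path 'D:\iSCSI\Quorum.vhdx' -SizeBytes 512MB
New-IscsiVirtualDisk -Path 'D:\iSCSI\VMStore.vhdx' -SizeBytes 1TB

# Present both LUNs through the target.
Add-IscsiVirtualDiskTargetMapping -TargetName 'HyperVCluster' -Path 'D:\iSCSI\Quorum.vhdx'
Add-IscsiVirtualDiskTargetMapping -TargetName 'HyperVCluster' -Path 'D:\iSCSI\VMStore.vhdx'
```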


To configure the iSCSI initiator on the hosts run iscsicpl.exe from the command prompt.

Add the iSCSI targets


Then click Connect, specify the initiator and target IP addresses and enable multi-path.
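The same connections can be made from PowerShell with the iSCSI initiator cmdlets; a sketch for one host, with example addresses and an example target IQN:

```powershell
# Register both target portals (one per storage subnet - addresses are examples).
New-IscsiTargetPortal -TargetPortalAddress '10.0.1.10'
New-IscsiTargetPortal -TargetPortalAddress '10.0.2.10'

# Connect over each storage subnet with multi-path enabled and make the
# connections persistent across reboots (the IQN is hypothetical).
Connect-IscsiTarget -NodeAddress 'iqn.1991-05.com.microsoft:fs01-hypervcluster-target' `
    -InitiatorPortalAddress '10.0.1.11' -TargetPortalAddress '10.0.1.10' `
    -IsMultipathEnabled $true -IsPersistent $true
Connect-IscsiTarget -NodeAddress 'iqn.1991-05.com.microsoft:fs01-hypervcluster-target' `
    -InitiatorPortalAddress '10.0.2.11' -TargetPortalAddress '10.0.2.10' `
    -IsMultipathEnabled $true -IsPersistent $true
```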


Once you have connected to both iSCSI targets using their associated adapter you can configure MPIO.

Click Devices then MPIO; you’ll need to configure MPIO for each device.


Then select the applicable multi-path policy.


Failover cluster configuration

I have installed the failover clustering management tools on a computer running the full GUI; I suppose you could also use a domain-joined Windows client operating system.

Once installed, open the failover cluster tool and create a new cluster – make sure you run all validation tests; the end result should be green ticks against all tests, and any warnings or errors are usually descriptive enough to suss out the underlying problem.
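If you prefer to skip the wizard, the validation and creation steps can also be run from the FailoverClusters module; the node names, cluster name and address below are examples:

```powershell
# Run the full validation suite against both nodes, then create the cluster.
# 'HV01'/'HV02', 'HVCLU01' and the static address are hypothetical values.
Test-Cluster -Node 'HV01', 'HV02'
New-Cluster -Name 'HVCLU01' -Node 'HV01', 'HV02' -StaticAddress '192.168.1.50'
```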


IP Addressing and network names

The failover clustering wizard will prompt for a cluster network name and cluster IP address which can float between hosts.


Naming and configuring cluster networks

Failover clustering will identify the networks on each host and label them ‘Cluster Network #’; I tend to rename them to reflect their function, i.e. ‘Cluster’, ‘Hyper-V Management’…


In the image above I have configured the storage networks so that cluster heartbeats are not passed over those interfaces. If you are short on network interfaces and have to share, then you can configure QoS on those interfaces; look up New-NetQosPolicy for more info.
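As a sketch of that QoS approach, a policy matching the cluster service port (3343) with an example bandwidth weight might look like this; the name and weight are illustrative only:

```powershell
# Reserve a minimum bandwidth share for cluster heartbeat traffic on a
# shared interface. Port 3343 is the cluster service port; the 30% weight
# is an example value, not a recommendation.
New-NetQosPolicy -Name 'Cluster' -IPDstPortMatchCondition 3343 -MinBandwidthWeightAction 30
```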

Cluster storage

The disks I presented to the hosts earlier are now configured; the 512MB disk is used as the quorum disk as per the image below and the 1TB disk is used to store highly available virtual machines.


The 1TB disk will need to be configured as a cluster shared volume to support HA; simply right-click the disk and select ‘Add to Cluster Shared Volumes’.


Note: The image shows remove because I’ve already enabled cluster shared volumes.
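The same step from PowerShell, assuming the 1TB disk came in as ‘Cluster Disk 2’ (check the actual resource name first, as it will vary):

```powershell
# List cluster resources to find the disk's resource name, then add it
# to cluster shared volumes. 'Cluster Disk 2' is an example name.
Get-ClusterResource
Add-ClusterSharedVolume -Name 'Cluster Disk 2'
```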


Cluster shared volumes appear as mount points under c:\clusterstorage\.

Diskpart shows this too.


The next step is to deploy some virtual machines… and System Center VMM…