Category: Windows Server

Kerberos double-hop with Hyper-V and File shares

This article describes how to troubleshoot and fix the errors received when trying to attach an ISO to a Hyper-V virtual machine from a client computer running the Hyper-V management tools or System Center Virtual Machine Manager.

The scenario looks similar to the diagram below.


If you try to mount an ISO hosted on \\fileserver\isos\ to a virtual machine running on the Hyper-V server, from management tools hosted on another server with the default configuration, you will see an error similar to the one below.


At first you may want to check the permissions on the file share hosted on \\fileserver\isos\, e.g. the share permissions.


Next check the NTFS permissions; you might expect the Hyper-V computer accounts to need read access, but the account that actually needs access is that of the user attaching the ISO, so I have granted Domain Users read access.


The next step to troubleshoot is authentication, as the authorisation looks to be in place.

The security event log on the file server at the time I attempted to attach the ISO image shows anonymous logon from the Hyper-V server where the virtual machine is hosted.


This would suggest the Hyper-V host cannot forward the credentials of the user attempting to attach the ISO image. One way to prove this is to log on to the Hyper-V server, open the Hyper-V tools and attach the ISO; that way your credentials should be used to access the share.

So if I configure delegation on the computer accounts of the Hyper-V hosts for the file server where the share exists, my user credentials should be forwarded to the file server.

The image below shows that the cifs service has been allowed for delegation from the Hyper-V computer accounts.
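The same delegation can be configured with the Active Directory PowerShell module. This is only a sketch; HV01, HV02, FILESERVER and the contoso.com domain are placeholder names for the Hyper-V hosts and file server.

```powershell
Import-Module ActiveDirectory

# Allow each Hyper-V computer account to delegate to the cifs service on the file server
# (the equivalent of ticking the cifs service on the Delegation tab)
foreach ($hv in 'HV01','HV02') {
    Set-ADComputer $hv -Add @{
        'msDS-AllowedToDelegateTo' = 'cifs/fileserver.contoso.com', 'cifs/fileserver'
    }
}
```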


If I attempt to attach an ISO to a virtual machine I can see an event showing my user account successfully authenticating and, more importantly, the ISO is attached without error.


Hopefully this helps someone else.


Building a Hyper-V 2012 cluster


  • Hardware to present at least four computers – I’ve gone for a PowerEdge T110 II with VMware ESXi 5.5 installed; check out this article regarding running Hyper-V on top of ESX.
  • Active Directory domain – I’ve gone for a single domain controller.
  • At least two computers running Windows Server 2012 with the Hyper-V role installed – I’ve gone for two computers running Windows Server 2012 R2 core edition with the Hyper-V role and failover clustering feature installed.
  • Shared storage of some form – I’ve presented storage to Windows Server 2012 R2 and utilised the built-in iSCSI target functionality.
  • At least one network interface per function e.g. one for management, one for virtual machines, one for cluster heartbeat and one for storage (if you’re using iSCSI) – I’ve gone for dual storage links, single cluster link, single management link and a single link for virtual machine traffic.

Network configuration

Hyper-V management interfaces.


Hyper-V Cluster interfaces


Storage interfaces


Virtual machine traffic interfaces


The network interface binding order on the Hyper-V hosts should be as follows:

  1. Failover cluster virtual adapter
  2. Management adapters
  3. Cluster heartbeat adapters
  4. Storage adapters
  5. Microsoft virtual switch adapter

If you are using Windows Server 2012 R2 core edition then you’ll need to compare the output of the following WMI query and registry key to configure the binding order.

WMI: wmic nicconfig get description, settingid


Registry: HKLM\System\CurrentControlSet\Services\Tcpip\Linkage\Bind


The binding order is top to bottom, so GUID {34CCAEDD…} should match up with the failover cluster virtual adapter listed in the WMI output, the GUID {486EB2A7…} should match up with the management adapter (in my case an Intel E1000), the next GUID should match the cluster adapter, and so on.
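On a core installation the comparison can be done from one PowerShell session; a rough sketch:

```powershell
# Adapter descriptions against their GUIDs
wmic nicconfig get description, settingid

# Current binding order (top to bottom), one adapter GUID per line
(Get-ItemProperty 'HKLM:\SYSTEM\CurrentControlSet\Services\Tcpip\Linkage').Bind
```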

If you are not sure which adapter is which, e.g. what is ‘vmxnet3 Ethernet adapter #2’, then run ipconfig /all, make a note of the adapter description and MAC address, and cross-reference this with whatever network that MAC address lives on. In VMware that’s really easy, but in a physical environment you would more than likely need to check the MAC table on the switch.

Management adapter configuration

Each adapter is assigned an IP address within the management subnet.

Cluster adapter configuration

Each adapter is assigned an IP address within the cluster subnet.

Storage adapter configuration

I have selected to have two adapters for storage; I have an adapter on each host connected to a different subnet, which in turn connects to a specific adapter on the iSCSI target; see the storage adapters image above. It is worth noting that the storage vendor will usually provide guidance on how your storage adapters should be configured.

I have configured my storage adapters to use jumbo frames. In Windows Server 2012 R2 core edition jumbo frames can be configured via the registry.

First of all get the network adapter IDs of the storage adapters. The easiest way is to run sconfig.cmd and then select 8 for network settings then make a note of the index number which is associated with the storage adapters.

Open the registry and navigate to HKLM\System\CurrentControlSet\Control\Class\{4d36e972-e325-11ce-bfc1-08002be10318}\00[index number from above], then set the JumboPacket value.


A value of 9014 enables jumbo frames, whereas 1514 is the default MTU.
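The same change can be scripted; this is a sketch, and the instance key 0007 and adapter name ‘Storage1’ are assumptions – check the DriverDesc value of each instance before writing anything.

```powershell
# Assumes the storage adapter lives under instance 0007 of the network class key
$key = 'HKLM:\SYSTEM\CurrentControlSet\Control\Class\{4d36e972-e325-11ce-bfc1-08002be10318}\0007'
Set-ItemProperty -Path $key -Name JumboPacket -Value 9014

# Restart the adapter so the driver re-reads the setting ('Storage1' is an assumed name)
Restart-NetAdapter -Name 'Storage1'
```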


I have configured Windows Server 2012 R2 to be my iSCSI target; this computer has two adapters dedicated to iSCSI.


Once the networking is in place the MPIO feature should be installed on the Hyper-V hosts and the iSCSI initiator should be configured.

To install MPIO run – Install-WindowsFeature -Name Multipath-IO

To configure MPIO run – mpiocpl.exe from the command prompt then select ‘Add support for iSCSI devices’ click ‘Add’ then ‘Ok’.


Restart when prompted.
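If you would rather avoid mpiocpl.exe entirely, Enable-MSDSMAutomaticClaim should be the PowerShell equivalent of ticking ‘Add support for iSCSI devices’ – a sketch:

```powershell
Install-WindowsFeature -Name Multipath-IO
Enable-MSDSMAutomaticClaim -BusType iSCSI   # claim iSCSI devices for MPIO
Restart-Computer                            # restart when prompted
```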

Hyper-V virtual switch configuration

An adapter on each host will be assigned as an external switch. The external switch name must be identical across the Hyper-V hosts and it is best practice not to share this network adapter with the Hyper-V host.
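A sketch of creating such a switch on each host; ‘External-VM’ and the adapter name ‘VM-NIC’ are placeholders.

```powershell
# -AllowManagementOS $false stops the host sharing this adapter,
# as per the best practice above; run on each host with the same switch name
New-VMSwitch -Name 'External-VM' -NetAdapterName 'VM-NIC' -AllowManagementOS $false
```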


Network adapter naming

The network adapters across all hosts should be consistently named.

In server core network adapters can be renamed using netsh.

Use ‘Netsh Interface Ipv4 Show Interfaces’ to get a list of interfaces then use Netsh Interface Set Interface Name=”[Current adapter name]” NewName=”[New Name]” to set the adapters new name.

iSCSI configuration

The iSCSI target will need to present a quorum disk and one or more disks for the virtual machine storage; I’ve configured a 512MB LUN for the quorum and a 1TB LUN for the virtual machine storage.

The iSCSI target has been configured to only present storage to the initiators of the Hyper-V hosts.
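The iSCSI target role can be driven from PowerShell too. This is a hedged sketch: the paths, target name and initiator IQNs are assumptions, and on Windows Server 2012 (non-R2) the virtual disks are .vhd rather than .vhdx.

```powershell
# Create the quorum and VM storage LUNs
New-IscsiVirtualDisk -Path 'C:\iSCSI\Quorum.vhdx'  -SizeBytes 512MB
New-IscsiVirtualDisk -Path 'C:\iSCSI\VMStore.vhdx' -SizeBytes 1TB

# Restrict the target to the Hyper-V hosts' initiators only
New-IscsiServerTarget -TargetName 'HyperVCluster' -InitiatorIds `
    'IQN:iqn.1991-05.com.microsoft:hv01.contoso.com','IQN:iqn.1991-05.com.microsoft:hv02.contoso.com'

Add-IscsiVirtualDiskTargetMapping -TargetName 'HyperVCluster' -Path 'C:\iSCSI\Quorum.vhdx'
Add-IscsiVirtualDiskTargetMapping -TargetName 'HyperVCluster' -Path 'C:\iSCSI\VMStore.vhdx'
```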


To configure the iSCSI initiator on the hosts run iscsicpl.exe from the command prompt.

Add the iSCSI targets


Then click Connect, specify the initiator and target IP addresses and enable multi-path.


Once you have connected to both iSCSI targets using their associated adapter you can configure MPIO.

Click Devices, then MPIO; you’ll need to configure MPIO for each device.


Then select the applicable multi-path policy.


Failover cluster configuration

I have installed the Failover Clustering management tools on a computer running the full GUI; you could also use a domain-joined Windows client operating system.

Once installed, open the failover cluster tool and create a new cluster – make sure you run all validation tests; the end result should be green ticks against all tests. Any warnings or errors are usually descriptive enough to suss out the underlying problem.
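The validation and creation steps can also be run from PowerShell; host names and the cluster IP below are placeholders.

```powershell
# Run all validation tests first, then create the cluster
Test-Cluster -Node HV01,HV02
New-Cluster -Name HVCLUSTER -Node HV01,HV02 -StaticAddress 192.168.1.50
```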


IP Addressing and network names

The failover clustering wizard will prompt for a cluster network name and cluster IP address which can float between hosts.


Naming and configuring cluster networks

Failover clustering will identify the networks on each host and label them ‘cluster network #’; I tend to name them to define their function, i.e. ‘Cluster’, ‘Hyper-V Management’…


In the image above I have configured the storage networks not to pass cluster heartbeats over those interfaces. If you are short on network interfaces and have to share, then you can configure QoS on those interfaces; look up New-NetQosPolicy for more info.

Cluster storage

The disks I presented to the hosts earlier are now configured; the 512MB disk is used as the quorum disk as per the image below and the 1TB disk is used to store highly available virtual machines.


The 1TB disk will need to be configured as a cluster shared volume to support HA; simply right click and select enable Cluster Shared Volumes.


Note: The image shows remove because I’ve already enabled cluster shared volumes.
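The equivalent in PowerShell is a one-liner; ‘Cluster Disk 2’ is an assumed resource name, so check with Get-ClusterResource first.

```powershell
Get-ClusterResource                               # find the 1TB disk's resource name
Add-ClusterSharedVolume -Name 'Cluster Disk 2'    # assumed resource name
```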


Cluster shared volumes can be found as a mount point in c:\clusterstorage\

Diskpart shows this too.


Next step is to deploy some virtual machines…deploy System Center VMM…

Windows Server 2012 – Deploy and manage IPAM


Installation requirements and prerequisites

IPAM has recommended hardware requirements of a 2GHz CPU, 4GB RAM and 80GB of free disk space.

The prerequisites of IPAM are:

  • IPAM must be installed in an Active Directory domain
  • IPAM cannot be installed on a domain controller
  • Only one forest can be managed
  • IPv6 must be enabled in order for IPAM to manage IPv6 addresses
  • Only works with Microsoft servers using the TCP/IP protocol
  • In Windows Server 2012 the IPAM database is the Windows Internal Database (WID); Windows Server 2012 R2 also supports SQL Server

You can install IPAM from the Add Roles and Features wizard or via PowerShell using Install-WindowsFeature IPAM -IncludeManagementTools

Provisioning methods

Manual provisioning requires you to configure each managed server by hand. The following links describe the configuration required for each role.

Group Policy automated deployment eases configuration and deployment, i.e. it configures firewall rules, shares, security group membership, etc.

If you have hardware firewalls then the following ports will be used:

  • RPC end point mapper – TCP 135
  • SMB – TCP 445
  • Dynamic RPC ports – use ‘netsh interface ipv4 show dynamicports tcp’
  • NetBIOS Session service – TCP 139
  • DNS – TCP 53

To configure the Group Policy objects use the Invoke-IpamGpoProvisioning cmdlet.
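For example, something along these lines – the domain and IPAM server names are placeholders:

```powershell
Invoke-IpamGpoProvisioning -Domain contoso.com -GpoPrefixName IPAM `
    -IpamServerFqdn ipam01.contoso.com
```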


This will create the following Group Policy Objects and link them to the domain.


Configure server discovery

Select configure server discovery and select the domain which will be managed and which server roles exist within the domain e.g. Domain Controller, DNS and DHCP.


Once the domain is selected, select Start Server Discovery or wait for the next run of the ServerDiscovery scheduled task; once per day by default.

Create and manage IP blocks and ranges

IP Address blocks can be added manually if required; note that IP address blocks discovered by DHCP are automatically added to IPAM but IP address lease information is not. See below.

To import DHCP lease information from DHCP download this PowerShell script from the TechNet script center.

Once you have downloaded the script, run IpamIntegration_dhcp.ps1 then run Invoke-IpamDhcpLease. You may need to register IPAM with PowerShell if you get an error related to the Microsoft.ipam session configuration; to register it run Register-PSSessionConfiguration -Name Microsoft.ipam.

If you’re running Windows Server 2012 R2 and the import of DHCP leases fails then be sure to check out the Q & A section of the script download page.


If you run Invoke-IpamDhcpLease with the periodic parameter, PowerShell will create a scheduled task that runs every 6 hours.


IP addresses can also be imported from CSV files. Scope utilisation can be viewed from the IP address range groups within IPAM.

The IP address inventory contains all IP addresses which have been imported from DHCP (if the script above has been used) and manually added IP addresses. Note that manually added IP addresses cannot be managed by MS DHCP.

A manually created IP address block cannot be set to managed by MS DHCP.


The IP address inventory shows the status of each IP address lease.


If you have devices which need statically assigned IP addresses or DHCP reservations then you can use the ‘Find and Allocate Available IP Address…’ functionality to find an available IP address and set a DHCP reservation.


The ‘Reclaim IP Addresses…’ functionality just removes the IP Address entry from the IPAM database, it does not affect DHCP or DNS. To remove DHCP reservations and DNS records use the IP Address Inventory and right click the IP address you want to remove.


IPAM Delegation

IPAM administrators can view all IPAM data and manage all features.

IPAM ASM administrators can manage IP blocks, IP ranges and IP addresses.

IPAM IP Audit administrators can view IP address tracking data.

IPAM MSM administrators can manage DHCP and DNS servers from within IPAM.

IPAM Users can view IPAM data but cannot manage features or view IP tracking data.

IP Address Tracking

The domain controllers and NPS servers should have ‘Account Logon Events’ auditing enabled to collect user and computer authentication events and cross reference against DHCP leases.



Best Practice

The IPAM best practices can be found here.

Windows Server 2012 – Configure local storage

Storage spaces

Design storage spaces


Primordial pool – One or more disks with no partitions that are at least 10GB and not yet allocated to a storage pool

Storage pool – One or more disks allocated to a storage pool

Storage space – A disk carved out of a storage pool

Requirements of storage spaces

  • Disks that have no partitions or volumes and are at least 10GB
  • SAS storage must support SCSI Enclosure Services (SES) for use with clustering
  • SATA, SCSI, iSCSI, SAS, USB or JBOD enclosures
  • Windows Server 2012 or Windows 8
  • Disks presented via a RAID controller must not be configured as a RAID set

Storage space functionality

Storage pools are the building blocks of storage spaces; storage pools can contain spinny (SAS and SATA) and SSD disks. New functionality in Windows Server 2012 R2 introduces storage tiers, which move frequently accessed data onto SSD and less frequently accessed data onto slower storage.

Storage space resiliency is configured when you create a storage space. The options are as follows:

  1. Mirror disk (RAID 1) – requires the underlying storage pool to contain at least two disks for a two-way mirror and three disks for a three-way mirror.
  2. Parity disk (RAID 5) – requires the underlying storage pool to contain at least three disks. Not supported in a failover cluster.
  3. Simple disk (RAID 0) – requires one disk; good for hosting temporary files and files which can be recreated easily.

Storage space provisioning supports both fixed and thin provisioning; thin provisioning is not supported in a failover cluster. It is worth noting that thin provisioning will impact latency when the metadata I/O provisions the blocks for data to be written. Windows Server 2012 also supports trim, which allows you to reclaim space that is no longer in use. This functionality requires hardware that is certified compatible with Windows Server 2012 and uses standards-compliant hardware for identification.

The space reclamation can impact performance as it is configured by default to reclaim space in real-time. If this is not acceptable then this registry key can be configured: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\FileSystem\DisableDeleteNotification = 1. Space can then be reclaimed using the ‘Defragment and Optimise Drives’ tool.
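Setting that registry value can also be scripted:

```powershell
# Disable real-time space reclamation; reclaim later with the
# 'Defragment and Optimise Drives' tool
Set-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Control\FileSystem' `
    -Name DisableDeleteNotification -Value 1
```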

Storage spaces support cluster deployments and Cluster Shared Volumes for scale out deployments.

Storage spaces supports multi-tenancy for hosting scenarios and supports Active Directory security model.

Configure storage pools and disk pools

Storage pools are configured from Server Manager.


To create a storage pool right click in the storage spaces white space and select new storage pool…, then select from the available physical disks.

Verify the storage pool or primordial pool has disks with a healthy status:






Verify the virtual disk (storage space) and the physical disks which underpin the storage pool.





Get-StoragePool -FriendlyName [Name] | Get-VirtualDisk | Format-Table FriendlyName, OperationalStatus, HealthStatus


Get-StoragePool -FriendlyName [Name] | Get-PhysicalDisk | Format-Table FriendlyName, OperationalStatus, HealthStatus


To remove a degraded disk from a pool for repair run:

Set-PhysicalDisk -FriendlyName [Name of Disk] -Usage Retired

Get-PhysicalDisk -FriendlyName [Name of Disk] | Get-VirtualDisk | Repair-VirtualDisk

Deploy and Manage Storage Spaces with PowerShell

Enumerating the commands available to PowerShell:

Get-Command -Module Storage | sort Noun, Verb

Storage spaces requires the disk be blank, so it is important to check the disks you’re going to use for partitions.

Get-Disk | Get-Partition | Get-Volume


You could specify a disk number or a list of disk numbers separated by commas.

To clear data from the disk run:

Clear-Disk -Number [disk number] -RemoveData

So which disks can I use to create a storage pool? Running Get-PhysicalDisk -CanPool $True will show you.


Grab the disks you want to pool into a variable.


Use the disk in the variable to create a storage pool


Create a virtual disk (storage space) from the storage pool. Note the storage space cannot use mirror or parity functionality because there is only one disk.


I have had to specify the resiliency type as simple because the storage pool default was mirror, even though I only have one disk!
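Putting the steps above together, a minimal sketch; the pool and space names are my own, and the storage subsystem lookup assumes the default local ‘Storage Spaces’ subsystem.

```powershell
# Disks eligible for pooling, and the local Storage Spaces subsystem
$disks = Get-PhysicalDisk -CanPool $true
$ss    = Get-StorageSubSystem -FriendlyName '*Storage Spaces*'

# Create the pool from those disks
New-StoragePool -FriendlyName 'Pool1' -StorageSubSystemUniqueId $ss.UniqueId `
    -PhysicalDisks $disks

# Simple resiliency because the pool only contains one disk
New-VirtualDisk -StoragePoolFriendlyName 'Pool1' -FriendlyName 'Space1' `
    -ResiliencySettingName Simple -UseMaximumSize
```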


Other useful PowerShell cmdlets are Initialize-Disk, New-Partition and Format-Volume.

Configure basic and dynamic disks

Disks within Windows can be basic or dynamic regardless of the partition type or whether they’re part of a storage pool.

Disks presented by a storage device which has multiple paths should not be configured as dynamic disks, as this will more than likely break multipathing.

Dynamic disks are good for providing fault tolerance when fault-tolerant storage controllers are not available.

The system partition is always a basic disk as this contains the boot files.

Configure Master Boot Record (MBR) and GUID Partition Table (GPT) disks

MBR disks

The Master Boot Record (MBR) is the traditional disk configuration, defining the partition configuration and disk layout; in addition the MBR contains boot code to load the operating system boot loaders (winload.exe, NTLDR…).

The MBR is located on the first sector of the disk and supports a 32-bit partition table, which limits an MBR disk to 2TB.

MBR disks are limited to four partitions: either four primary partitions, or three primary partitions and one extended partition.

GPT Disks

GUID Partition Table (GPT) disks were introduced in Windows Server 2003 SP1 and support disks greater than 2TB. GPT disks have redundant copies of the partition table, which allows for self-healing.

The GPT disk has a protective MBR at LBA0, which allows BIOS-based computers to boot from GPT disks as long as the boot loader and operating system are GPT-aware.

GPT disks have a 16KB partition table, which allows for 128 partitions if each partition entry is 128 bytes, whereas MBR disks have a 64-byte partition table with each entry being 16 bytes.

Disks created within Server Manager are automatically GPT.

Noteworthy changes

EFS has been deprecated because BitLocker and Rights Management are more scalable. The number of files in a volume is 18 quintillion. ReFS is targeted at data volumes and does not support deduplication.

The iSCSI target is now a role service.

De-duplication is built into the operating system and works at the block level. Example scenarios where de-duplication can be used are general file storage and VHD libraries. Deduplication cannot be enabled on boot or system disks, i.e. primarily C:\.

Storage tiers – data is moved between fast and slow disk depending on access frequency; this is new in Windows Server 2012 R2.

Windows Server 2012 introduces a new file system, ReFS. ReFS is a self-healing file system and, I believe, is the file system used in the back end to provide the fault tolerance and reliability of storage pools.

Windows Server 2012 – Configure server for remote management


Windows remote management is enabled by default in Windows Server 2012; if it is disabled for some reason it can be re-enabled from the command line using Configure-SMRemoting.exe [-Enable | -Disable].


WinRM runs as a service within the Windows operating system and listens on TCP 5985 or 5986; the latter is used for SSL.

Use winrm get winrm/config to get the current configuration.

To use WinRM to run commands on another server run winrs /r:[server name] [command].


You can also right-click the server in Server Manager and select the tool you wish to run remotely.


If you are running in a workgroup environment then first of all add the computer you want to manage to the trustedhosts of your client.


Configure UAC override on the server you want to manage.
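Both workgroup steps can be done from PowerShell; ‘SRV01’ is an assumed server name.

```powershell
# On the managing client: trust the workgroup server
Set-Item WSMan:\localhost\Client\TrustedHosts -Value 'SRV01' -Force

# On the server: the UAC override (LocalAccountTokenFilterPolicy)
New-ItemProperty -Path 'HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System' `
    -Name LocalAccountTokenFilterPolicy -PropertyType DWord -Value 1 -Force
```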


Then add the server to Server Manager and right click Manage As…


Enter the builtin administrator username and password of the server you want to manage.

As previously discussed Server Manager can be used to group servers by role or a custom grouping, from the group you can view the events, services, roles and features, BPA info and performance information. See here.

MMC tools and DCOM

To remotely manage systems using MMC tools you must enable the following firewall rules using wf.msc or Enable-NetFirewallRule:

COM+ Network Access (DCOM-In)

The remote event log management group rules.


The Windows firewall remote management group rules.


The remote service management group rules.


This is because the MMC tools still use WMI over DCOM for network communication, whereas Server Manager uses WMI over WinRM.
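The rule groups above can be enabled in one go with PowerShell:

```powershell
Enable-NetFirewallRule -DisplayName  'COM+ Network Access (DCOM-In)'
Enable-NetFirewallRule -DisplayGroup 'Remote Event Log Management'
Enable-NetFirewallRule -DisplayGroup 'Windows Firewall Remote Management'
Enable-NetFirewallRule -DisplayGroup 'Remote Service Management'
```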

Configure down-level server management

Windows Server 2012 R2 can manage down-level operating systems when they have the Windows Management Framework 4.0 and Microsoft .NET framework 4.5 installed.

Windows Server 2012 can manage down-level operating systems when they have the Windows Management Framework 3.0 and Microsoft .NET Framework 4.0 installed.

In order for performance data to be collected from Server 2008 SP2 or R2 the hotfix detailed in KB2682011 must be installed.

Once the above has been completed enable remote management.

Server 2008 R2


NOTE: Server Manager is backward compatible i.e. Windows Server 2012 can manage down-level clients and other Windows Server 2012 servers but cannot manage Windows Server 2012 R2 servers.

Remember to access MMC tools you will need to open firewall ports too.

You will more than likely see errors related to WinRM not being able to register the SPN for the WinRM service; this is because the Network Service account does not have the ‘validated write to service principal name’ permission within Active Directory. To fix this use AdsiEdit.


Configure Server Core

Server Core is configured using the sconfig.cmd server configuration tool. If you need to enable remote management of MMC tools you will need to configure the Windows firewall using Enable-NetFirewallRule.

Group Policy configuration of WinRM and Windows Firewall

Group policies can be configured to enable WinRM on all IP addresses or a range of IP addresses. The Windows firewall can be configured via Group policy to open the DCOM ports for MMC tool management.

Windows Server 2012 – configuring NIC teaming

Configuring NIC teaming

NIC teaming is now built into the Windows operating system. This post briefly looks at what is supported, the configurations possible and some example configurations.

What is supported

Datacenter bridging, IPsec task offload, large send offload, receive side coalescing, receive side scaling, receive side checksum offloads, transmit side checksum offloads and virtual machine queues.

What is not supported

SR-IOV, RDMA and TCP Chimney are not supported by NIC teaming because these technologies bypass the network stack. SR-IOV can be implemented with NIC teaming within a virtual machine. 802.1X authentication and QoS are not supported either.

Mixing different speed network adapters in teams is not supported in active / active teams.

NIC teaming configurations

There are two teaming modes: switch independent and switch dependent. Switch independent does not require the switch to be intelligent, as all the intelligence is provided in software; this mode also allows the team to span switches. Switch dependent requires that the switch supports Link Aggregation Control Protocol (LACP / 802.1ax) or static teaming (802.3ad draft v1) and only supports one switch.

The load balancing algorithms available are address hashing and Hyper-V port; address hashing will hash the TCP or UDP port and IP address of the outgoing packet. If the TCP or UDP port is hidden, e.g. the traffic is protected by IPsec, or the traffic is not TCP or UDP, then the IP addresses are hashed instead. Finally, if the traffic is not IP, a hash of the MAC address is generated.

Hyper-V port load balancing uses the virtual port identifier to load balance traffic.

The combination of the teaming modes and algorithms defined above are:

  1. Switch independent with address hashing
    1. Uses all team members to send outbound traffic. Can only receive inbound traffic on the primary network adapter of the NIC team. This mode is best suited for applications such as IIS, teaming within a virtual machine, teaming where switch diversity is a concern, and where active / standby teaming is required.
  2. Switch independent with Hyper-V ports
    1. Each virtual machine will be limited to the bandwidth of one network adapter in the NIC team. The virtual machines will send and receive data on the same network adapter. This mode is best suited when the number of virtual machines far exceeds the number of team members and the limitation of bandwidth is not a concern. This mode also is best suited when virtual machine queue is used.
  3. Switch dependent with address hashing
    1. Uses all team members to send outbound traffic, inbound traffic will be distributed by the switch. This mode is best suited for applications which run on Windows natively and require maximum network performance and can also be used where virtual machines have a requirement for more bandwidth than one network adapter can provide.
  4. Switch dependent with Hyper-V ports
    1. More or less the same as the switch independent mode, with the following exceptions: the inbound network traffic is distributed by the switch, which may result in inbound traffic arriving on all team members; thus a VMQ will be present on all team members for a particular VM, so this mode may not be best suited when VMQ is configured. This mode suits when the Hyper-V host is densely populated and policy dictates that LACP should be used.

NIC teaming within Hyper-V

NIC teaming within the virtual machine allows you to provide access to two or more virtual switches. Each network adapter must be configured to allow teaming. The team within the virtual machine cannot be anything other than switch independent with address hashing. NIC teaming is also useful where SR-IOV is being used; SR-IOV allows the virtual machine direct access to hardware, more here.

The vSwitches must connect to physically different switches. The virtual machine should be configured with a network adapter assigned to each vSwitch.

The image below shows three virtual switches, each with one network adapter assigned.


Within the virtual machine create two network adapters connecting each one to a different virtual switch.


Within Server Manager you will notice two network adapters, nothing new there.


Click on the NIC Teaming link to configure NIC teaming.


Select tasks > New Team


Name the team and select the adapters; you’ll notice because the operating system is installed within a virtual machine you’re restricted to address hashing in switch independent mode.



As an alternative to using the GUI, you can do the same in one line of PowerShell using New-NetLbfoTeam.

Native NIC teaming

NIC teaming natively allows you to handle multiple VLANs using team interfaces; the most common NIC teaming configuration is switch dependent with address hashing.

NIC teaming configuration example:
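A hedged sketch of such a team (switch dependent via LACP with address hashing); ‘Team1’, NIC1 and NIC2 are placeholder names:

```powershell
New-NetLbfoTeam -Name 'Team1' -TeamMembers NIC1,NIC2 `
    -TeamingMode Lacp -LoadBalancingAlgorithm TransportPorts
```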


Before running the command above I would suggest the network ports are not connected, to protect against loops and broadcast storms. I have an HP ProCurve switch which supports dynamic LACP, so I only had to configure the switch ports to be LACP active and the switch does the rest. HP documentation here.

To remove the NIC teaming configuration use Remove-NetLbfoTeam -Name [Team name]

NIC teaming can also be used where VLAN tagging at the operating system is required.

Windows Server 2012 – Monitor and maintain servers

Monitor Servers

Server Manager

Server Manager allows you to pull event log data, service status information, management and accessibility information, performance counter data and best practice information from servers on your local LAN, private cloud and public cloud into one management console.

The thumbnails show manageability, i.e. is the server online, is it accessible, is it reporting information back to Server Manager, etc. NOTE: for Server 2003, Server Manager can only query for online or offline status. Other down-level operating systems such as Server 2008 SP2 / 2008 R2 require the Windows Management Framework 3.0 components and a hotfix for performance data capture.


The data is retained for one day but can be configured as you wish.


Filtering information in the dashboard


Configure Data Collector Sets (DCS)

The data collection set configuration is the same as 2008; see here for review.

Configure alerts

Alert configuration is the same as 2008; see here for review. Alerts can be configured within Server Manager too.


Monitor real-time performance

Resource Monitor still exists as it did in 2008. Task Manager has had an overhaul; the Processes tab now displays applications and processes, and a number of tabs have links to respective tools which allow you to dig deeper.


Monitor virtual machines (VMs)

Virtual machine monitoring is a feature of failover clustering. The administrator running failover clustering must be an administrator of the guest virtual machine and the guest must have the ‘Virtual Machine Monitoring’ firewall rule enabled. PowerShell: Set-NetFirewallRule -DisplayGroup “Virtual Machine Monitoring” -Enabled True.

Monitoring is configured from within Failover Cluster Manager and works by monitoring services selected by the administrator. If a service fails, the cluster agent honours the settings defined within the recovery tab of the monitored service. If the service recovery action is ‘Take No Action’, the cluster agent will raise Event ID 1250 from the FailoverClustering source within the system log; this can be picked up by a SCOM monitor which has diagnostic and recovery configurations.
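The same monitoring can be configured from PowerShell on a cluster node; the VM name and the Print Spooler service below are assumptions for illustration.

```powershell
# Monitor the Print Spooler service inside guest VM01
Add-ClusterVMMonitoredItem -VirtualMachine VM01 -Service Spooler

# Confirm what is being monitored
Get-ClusterVMMonitoredItem -VirtualMachine VM01
```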

By default failover clustering will restart the guest on the same host using a forced but graceful shutdown; if that does not fix the issue, the guest is restarted on another Hyper-V server.

More information here.

Monitor events

Event monitoring is still the same as 2008; see here for review.

Event logs are now integrated into Server Manager and can be aggregated to your workstation for quick analysis and remediation where necessary. The events shown in server manager are context sensitive to the role or selected servers.


Configure event subscriptions

Event subscription configuration is the same as 2008; see here for review.

Configure network monitoring

Network monitoring configuration is the same as 2008; see here for review.

MCTS 70-646 Monitor and maintain security and policies

Monitor and maintain security and policies

May include but is not limited to: remote access, monitor and maintain NPAS, network access, server security, firewall rules and policies, authentication and authorization, data security, auditing

Consider what information needs to be protected, how sensitive the data is, whether it is business critical, and who should be authorised to access it; then implement a workable monitoring policy.

MCTS 70-646 Monitor servers for performance evaluation and optimisation

Monitor servers for performance evaluation and optimisation

May include but is not limited to: server and service monitoring, optimization, event management, trending and baseline analysis

A baseline of a computer system's performance and reliability should be captured for a number of weeks when the system is first deployed and, if applicable, when month-end processes run; it is advisable to define quiet, busy and business-as-usual periods. Baselines should be re-captured when a computer system is changed, e.g. more resources are added or a new application is deployed.
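A baseline counter log can be created from the command line with logman; a minimal sketch (the counter set, interval and output path are example choices):

```powershell
# Create a counter collection sampling every 15 seconds
logman create counter Baseline -c "\Processor(_Total)\% Processor Time" "\Memory\Available MBytes" "\LogicalDisk(_Total)\Avg. Disk Queue Length" -si 00:00:15 -o C:\PerfLogs\Baseline

# Start collecting
logman start Baseline
```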

Performance Monitor

Performance monitor allows you to view real time data and data collected over a longer period using a data collector set; see data collector sets under performance monitor here for more info.

By default the real-time data in Performance Monitor is overwritten every 140 seconds; to change this behaviour, select Properties > Graph > Scroll style > Scroll.

System diagnostic reports

These reports can be created using perfmon /report or by reviewing the reports within the performance monitor GUI.

Action Center

Monitors the computer and reports problems with security, maintenance and related settings, such as Windows Firewall, anti-virus and anti-spyware configuration.

Task Manager

Task manager can be used for viewing process utilisation.

Resource Monitor

Resource Monitor is accessible from Task Manager and can be used to stop, start, suspend or resume processes and services. It is useful for troubleshooting and can highlight CPU, memory, disk and network usage for a particular process or service.

Active Directory – Lab 1 ADDS with internal subdomain

The majority of Active Directory domains I have seen use a standalone internal domain such as .local, .internal or .companyname.

When you suggest that the Active Directory domain should be a subdomain of the company’s public domain, you get the worried look that you’re exposing Active Directory to the internet… o_O.

If you take time to read Microsoft TechNet you’ll discover an article which details an internal subdomain of your public domain as the recommended way to deploy a DNS namespace, see here.

So if you're building a new Active Directory domain then please feel free to follow the instructions detailed below.

Windows Server 2008 R2


  • Windows Server 2008 R2 media – download an evaluation from here
  • A static IPv4 address
  • Your internet provider's DNS server IP addresses
  • A public DNS name registered with an applicable authority


Install Windows Server 2008 R2, assign a static IP address, give the computer a suitable name and then run dcpromo.
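If you prefer an unattended promotion, dcpromo on 2008 R2 accepts its answers on the command line; a sketch, where the domain name and password are placeholders for your own values:

```powershell
# Create a new forest with DNS installed (example domain and password)
dcpromo /unattend /ReplicaOrNewDomain:Domain /NewDomain:Forest /NewDomainDNSName:office.example.com /InstallDns:Yes /SafeModeAdminPassword:P@ssw0rd! /RebootOnCompletion:Yes
```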

Computer Name: ADDS01

IP address, gateway and primary DNS: as appropriate for your network


Forest root domain FQDN: office.[public domain name]

Reboot when prompted

DNS Configuration

Create a reverse DNS zone for your local subnet.


Create DNS forwarders which point to your ISP's DNS servers and deselect 'Use root hints if no forwarders are available'; this passes recursive DNS queries on to your ISP rather than resolving them on your own DNS server. If the root hints option is left ticked and your ISP's DNS servers become unavailable, your internal DNS server will perform recursive queries itself. Just FYI, recursive DNS can be vulnerable to DoS attacks, cache poisoning and other issues commonly found when a DNS server is incorrectly configured.


Remove the root hints.
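The zone and forwarder steps above can also be done with dnscmd; a sketch, assuming a 192.168.1.0/24 subnet and example forwarder addresses:

```powershell
# Create an AD-integrated reverse lookup zone for 192.168.1.0/24
dnscmd /ZoneAdd 1.168.192.in-addr.arpa /DsPrimary

# Point forwarders at the ISP's DNS servers (example addresses);
# /Slave stops the server falling back to root hints
dnscmd /ResetForwarders 203.0.113.1 203.0.113.2 /Slave
```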


Testing DNS resolution using Network Monitor

Run Network Monitor, scope the display filter to the DNS protocol and click Start.

Because forwarding is enabled there are only two Ethernet frames.


Initial query to my ISP's DNS servers.


Response from my ISP's DNS servers.
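To generate the captured query/response pair yourself, resolve an external name from the server while the capture is running; the hostname below is just an example:

```powershell
# Trigger a forwarded recursive lookup
nslookup www.example.com
```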