Exporting E-Mail to Outlook data files programmatically

Introduction

This post discusses a simple script to export one or more Exchange Online mailboxes to an Outlook data file programmatically.

Why am I doing this?

I wrote the script to export the mailbox data from an Office 365 tenant for an Office 365 tenant-to-tenant migration. Now, I know there are tools out there that do this, but money constraints meant I had to come up with something else.

How does it work?

First of all, your Outlook profile must contain all the mailboxes you want to export, i.e. you have been granted ‘Full Access’ to the mailboxes, and they must be in online mode, not cached; if they’re cached, the copy process might not capture all the mailbox data. Cached mailboxes are reported in the logs, as per the image below.

GetOutlookDataLog01

So, the script. The script uses the Outlook Object Model to connect to Outlook and enumerate the MAPI stores. Each store is assigned an associated Outlook data file, which is attached and named ‘PST: mailbox store name’. Next, the script enumerates the mailbox store folder structure and copies the data from each folder to the Outlook data file store; once complete, it disconnects the Outlook data file store and moves on to the next mailbox store in the profile.
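The heart of the approach looks something like the sketch below – a simplified reconstruction rather than the actual script; the constants and the folder-copy logic are assumptions based on the description above.

    # Simplified sketch of the export loop (assumes Outlook is installed and the
    # profile already contains the mailboxes, as described above)
    $outlook   = New-Object -ComObject Outlook.Application
    $namespace = $outlook.GetNamespace("MAPI")

    # Snapshot the Exchange mailbox stores before attaching any PSTs
    $mailboxStores = @($namespace.Stores | Where-Object { $_.ExchangeStoreType -eq 0 }) # 0 = olPrimaryExchangeMailbox

    foreach ($store in $mailboxStores) {
        # Attach a Unicode Outlook data file named after the mailbox store
        $pstPath = Join-Path $PSScriptRoot "exports\$($store.DisplayName).pst"
        $namespace.AddStoreEx($pstPath, 3)              # 3 = olStoreUnicode
        $pstRoot = ($namespace.Stores | Select-Object -Last 1).GetRootFolder()

        # Copy each top-level folder from the mailbox into the data file
        foreach ($folder in $store.GetRootFolder().Folders) {
            $folder.CopyTo($pstRoot) | Out-Null
        }

        # Disconnect the data file store and move on to the next mailbox
        $namespace.RemoveStore($pstRoot)
    }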

The Outlook data file exports are stored in a directory named exports within the directory where the script is run from and the log file is stored in the script root directory.

GetOutlookDataStore01

Conclusion

The script has scope for lots of improvements, such as better error checking, retries if the copy process fails on a particular folder, and maybe even consuming user credentials so that mailboxes from both tenants are available before running the copy process.

The script can be downloaded from GitHub here: https://github.com/heathen1878/ExportMailboxToPST

References

https://social.technet.microsoft.com/Forums/scriptcenter/en-US/c76c7167-8336-4261-ac40-2fb44ff3b3f3/powershell-and-outlook-removestore-method?forum=ITCG – used to name the Outlook data file store in order to disconnect it reliably.

http://www.bytemedev.com/powershell-disable-cached-mode-in-outlook-20102013/ – used for the disabling of the Outlook cache mode.

 


Spinning up and spinning down Azure VMs

Introduction

This post discusses a relatively simple piece of code used to start or stop one or more Azure virtual machines within a defined time window.

Why am I doing this?

So, I came up with this code for one of our customers who wanted to de-allocate their VMs during non-business hours using PowerShell; the alternative would be to log in to the Azure portal and select start…

AzureVMStart

…or shut down every day; not going to happen.

AzureVMStop

How does it work?

This post references an earlier article I wrote about authenticating against Azure when scripting – see here.

The script reads in data from a csv file called AzureIaaSVMs.csv; the script looks for this file within the directory the script is being run from.

CSVFile

CheckForCsv

Once the data has been imported, the script works out what day it is, checks whether each Azure VM should be running or shut down, and does the necessary.

AzureVMLogic
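As a rough illustration of the logic – the csv column names (Name, ServiceName, RunDays, StartHour, StopHour) are assumptions on my part, and the classic service management cmdlets are used:

    $vms = Import-Csv (Join-Path $PSScriptRoot 'AzureIaaSVMs.csv')
    $now = Get-Date

    foreach ($vm in $vms) {
        # Should this VM be running right now?
        $shouldRun = ($vm.RunDays -split ',' -contains [string]$now.DayOfWeek) -and
                     ($now.Hour -ge [int]$vm.StartHour) -and ($now.Hour -lt [int]$vm.StopHour)

        $azureVm = Get-AzureVM -ServiceName $vm.ServiceName -Name $vm.Name
        if ($shouldRun -and $azureVm.Status -eq 'StoppedDeallocated') {
            Start-AzureVM -ServiceName $vm.ServiceName -Name $vm.Name
        }
        elseif (-not $shouldRun -and $azureVm.Status -eq 'ReadyRole') {
            # -Force suppresses the prompt when de-allocating the last VM in a cloud service
            Stop-AzureVM -ServiceName $vm.ServiceName -Name $vm.Name -Force
        }
    }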

I have this script scheduled to run every hour using Task Scheduler. The command, for example, would be c:\…\powershell.exe -file c:\…\deallocateAzureVMs.ps1

Conclusion

So, this script is relatively simple at the minute but does the job. The script could be extended to…

…report failures using email notification

…include multiple subscriptions

…be more granular regarding start or stop time and improved time logic

The current version of the script can be downloaded from GitHub here – https://github.com/heathen1878/AzureVMControl

Copying VHDs between Windows Azure Storage Accounts

Script overview

Thought I’d write an interactive script which allows you to specify the source and destination Storage Account name, container and blob.

It requires that you’re already connected to Windows Azure – see this article for more info; I should really integrate connecting to Azure into this script.

Functions

The functions are there basically to check whether the storage accounts, containers and blobs exist.

CheckStorageAccount
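A minimal version of such a check might look like this – the function name and structure here are illustrative rather than lifted from the script:

    # Returns $true when the named storage account exists in the current subscription
    function Test-StorageAccountExists {
        param([string]$Name)
        [bool](Get-AzureStorageAccount -StorageAccountName $Name -ErrorAction SilentlyContinue)
    }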

Script body

The main body of the script uses a ‘valid’ flag variable to determine whether it can move on to the next step.

CheckStorageAccount1
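For reference, the underlying copy operation boils down to something like this; the account, container and blob names are placeholders:

    # Build a storage context for each side of the copy
    $srcKey = (Get-AzureStorageKey -StorageAccountName 'srcaccount').Primary
    $dstKey = (Get-AzureStorageKey -StorageAccountName 'dstaccount').Primary
    $srcCtx = New-AzureStorageContext -StorageAccountName 'srcaccount' -StorageAccountKey $srcKey
    $dstCtx = New-AzureStorageContext -StorageAccountName 'dstaccount' -StorageAccountKey $dstKey

    # Start the asynchronous server-side copy and wait for it to finish
    Start-AzureStorageBlobCopy -SrcContainer 'vhds' -SrcBlob 'disk0.vhd' -Context $srcCtx `
        -DestContainer 'vhds' -DestBlob 'disk0.vhd' -DestContext $dstCtx |
        Get-AzureStorageBlobCopyState -WaitForComplete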

Example of script output:

BlobCopyExample

The script is available here.

Connecting to Windows Azure with PowerShell

Installing the PowerShell module

First of all you need the Windows Azure PowerShell module which can be downloaded from here.

InstallPoSHModule

The module is installed via the Microsoft Web PI; simply follow the installer. If PowerShell is open when you install the Windows Azure module, restart PowerShell, i.e. close it and reopen it.

Next check the module is available using Get-Module -ListAvailable

GetModule

The Azure module will be listed at the bottom if it is available.

AzureModule

Import the module using Import-Module Azure

ImportModule

Get-Command -Module Azure will give you a list of the available commands.

Connecting to Azure

[Updated 16/03/2016]:

If you’re looking to use an interactive PowerShell session with Azure then the Add-AzureAccount cmdlet is suitable; this will give you a 12 hour session token; after this time you need to re-authenticate.

AddAzureAccount

Enter your email address associated with the Azure subscription and follow the prompts. Once you have authenticated return to PowerShell and type Get-AzureSubscription to see your subscriptions.

If you have multiple subscriptions then you’ll need to determine which one is default and which is current.

AzureSubscriptions

Get-AzureSubscription -Default returns the default subscription.

defaultSub

Get-AzureSubscription -Current returns the currently selected subscription.

currentSub

If you have multiple subscriptions with the same name then the -ExtendedDetails parameter is useful to determine what is what.

extendedSub

Now comes the question…how do I authenticate against Azure when scripting? Well, you need to use the PublishSettingsFile, but if you’ve already added the subscription using the Add-AzureAccount cmdlet you’ll need to remove it first.

scriptingSub1

Use Remove-AzureAccount, then use Get-AzurePublishSettingsFile to obtain the certificate for the subscription and enable non-interactive authentication.

removeSub

[Updated 16/03/2016]

Using PublishSettingsFile

First of all run Get-AzurePublishSettingsFile

GetPublishSettingsFile

This will open an internet browser and you’ll be prompted to enter the credentials associated with your Azure subscription; once authenticated, a publishsettings file will be downloaded.

PublishSettingsFile

Next import the publish settings file using Import-AzurePublishSettingsFile -PublishSettingsFile FileName…

PublishSettingsFile1

When you run Get-AzureSubscription you’ll notice your subscription now contains a certificate; you should delete the downloaded .publishSettings file at this point.

Subscription1
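The whole non-interactive setup condenses to a few lines; the file name below is a placeholder:

    Get-AzurePublishSettingsFile                      # opens a browser and downloads *.publishsettings
    Import-AzurePublishSettingsFile -PublishSettingsFile '.\MySub.publishsettings'
    Get-AzureSubscription                             # the subscription now carries a certificate
    Remove-Item '.\MySub.publishsettings'             # delete the file once it has been imported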

Run some commands against your subscription…Get-AzureVM…Get-AzureStorageAccount…

 

Kerberos double-hop with Hyper-V and File shares

This article describes how to troubleshoot and fix the errors received when trying to attach an ISO to a Hyper-V virtual machine from a client computer running the Hyper-V management tools or System Center Virtual Machine Manager.

The scenario would look similar to the diagram below.

KerberosDelegation1

If you try to mount an ISO hosted on \\fileserver\isos\ to a virtual machine running on the Hyper-V server, from management tools hosted on another server with the default configuration, you will see an error similar to the one below.

KerberosDelegation4

At first you may want to check permissions on the file share hosted on \\fileserver\isos\, e.g. the share permissions.

KerberosDelegation5

Next, check the NTFS permissions; I suppose you’d look to have the Hyper-V computer accounts granted read access, but I believe the account that needs access is that of the user attaching the ISO, so I have granted domain users read access.

KerberosDelegation8

The next thing to troubleshoot is authentication, as the authorisation looks to be in place.

The security event log on the file server at the time I attempted to attach the ISO image shows anonymous logon from the Hyper-V server where the virtual machine is hosted.

KerberosDelegation7

This would suggest the Hyper-V host cannot forward the credentials of the user attempting to attach the ISO image. I suppose one way to prove this would be to log on to the Hyper-V server and open the Hyper-V tools, then attach the ISO; this way your credentials should be used to access the share.

So, if I configure delegation on the computer accounts of the Hyper-V hosts for the file server where the share exists, my user account should be forwarded to the file server.

The image below shows that the cifs service has been allowed for delegation from the Hyper-V computer accounts.

KerberosDelegation9
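If you prefer PowerShell to the GUI, the same delegation can be set by writing the msDS-AllowedToDelegateTo attribute; the computer and server names below are placeholders, and the ActiveDirectory module is assumed:

    Import-Module ActiveDirectory

    # Allow the Hyper-V host's computer account to delegate to the cifs
    # service on the file server ('Kerberos only' constrained delegation)
    Set-ADComputer -Identity 'HYPERV01' -Add @{
        'msDS-AllowedToDelegateTo' = @('cifs/fileserver', 'cifs/fileserver.contoso.com')
    }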

If I attempt to attach an ISO to a virtual machine, I can see an event showing my user account successfully authenticating and, more importantly, the ISO was attached without error.

KerberosDelegation10

Hopefully this helps someone else.

 

Building a Hyper-V 2012 cluster

Prerequisites

  • Hardware to present at least four computers – I’ve gone for a PowerEdge T110 II with VMware ESXi 5.5 installed; check out this article regarding running Hyper-V on top of ESX.
  • Active Directory domain – I’ve gone for a single domain controller.
  • At least two computers running Windows Server 2012 with the Hyper-V role installed – I’ve gone for two computers running Windows Server 2012 R2 core edition with the Hyper-V role and failover clustering feature installed.
  • Shared storage of some form – I’ve presented storage to Windows Server 2012 R2 and utilised the built-in iSCSI target functionality.
  • At least one network interface per function e.g. one for management, one for virtual machines, one for cluster heartbeat and one for storage (if you’re using iSCSI) – I’ve gone for dual storage links, single cluster link, single management link and a single link for virtual machine traffic.

Network configuration

Hyper-V management interfaces.

NetworkConfiguration1

Hyper-V Cluster interfaces

NetworkConfiguration2

Storage interfaces

NetworkConfiguration3

Virtual machine traffic interfaces

NetworkConfiguration4

The network interface binding order on the Hyper-V hosts should be as follows:

  1. Failover cluster virtual adapter
  2. Management adapters
  3. Cluster heartbeat adapters
  4. Storage adapters
  5. Microsoft virtual switch adapter

If you are using Windows Server 2012 R2 core edition then you’ll need to compare the output of the following WMI query and registry key to configure the binding order.

WMI: wmic nicconfig get description, settingid

NetworkBindingOrder1

Registry: HKLM\System\CurrentControlSet\Services\Tcpip\Linkage\Bind

NetworkBindingOrder2

The binding order is top to bottom, so GUID {34CCAEDD…} should match up with the failover cluster virtual adapter listed in the WMI output, GUID {486EB2A7…} should match up with the management adapter, in my case an Intel E1000, the next GUID should match the cluster adapter…

If you are not sure which adapter is which, e.g. what is ‘vmxnet3 Ethernet adapter #2’, then run ipconfig /all, make a note of the adapter description and MAC address, and cross-reference this with whatever network that MAC address lives on. In VMware that’s really easy, but in a physical environment you would more than likely need to check the MAC table on the switch.

Management adapter configuration

Each adapter is assigned an IP address within the management subnet of 192.168.4.0/24

Cluster adapter configuration

Each adapter is assigned an IP address within the cluster subnet of 192.168.2.0/24.

Storage adapter configuration

I have opted for two adapters for storage; each host has an adapter connected to a different subnet, which in turn connects to a specific adapter on the iSCSI target; see the storage adapters image above. It is worth noting that the storage vendor will usually provide guidance on how your storage adapters should be configured.

I have configured my storage adapters to use jumbo frames. In Windows Server 2012 R2 core edition jumbo frames can be configured via the registry.

First of all, get the network adapter IDs of the storage adapters. The easiest way is to run sconfig.cmd, select option 8 for network settings, then make a note of the index numbers associated with the storage adapters.

Open the registry and navigate to HKLM\System\CurrentControlSet\Control\Class\{4d36e972-e325-11ce-bfc1-08002be10318}\00[index number from above] and set the *JumboPacket value.

iSCSIConfig7

A value of 9014 enables jumbo frames, whereas 1514 is the default MTU.

StorageNetworkConfig2
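On Server Core the same change can be made without regedit; a sketch, where the 0007 subkey and adapter name are examples that must match your own adapter’s index:

    # Set the *JumboPacket value for the adapter whose index maps to subkey 0007
    $key = 'HKLM:\System\CurrentControlSet\Control\Class\{4d36e972-e325-11ce-bfc1-08002be10318}\0007'
    Set-ItemProperty -Path $key -Name '*JumboPacket' -Value '9014'

    # Bounce the adapter so the new MTU takes effect
    Restart-NetAdapter -Name 'Storage1'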

I have configured Windows Server 2012 R2 to be my iSCSI target; this computer has two adapters dedicated to iSCSI.

StorageNetworkConfig1

Once the networking is in place the MPIO feature should be installed on the Hyper-V hosts and the iSCSI initiator should be configured.

To install MPIO run – Install-WindowsFeature -Name Multipath-IO

To configure MPIO run – mpiocpl.exe from the command prompt then select ‘Add support for iSCSI devices’ click ‘Add’ then ‘Ok’.

iSCSIConfig1

Restart when prompted.

Hyper-V virtual switch configuration

An adapter on each host will be assigned to an external switch. The external switch name must be identical across the Hyper-V hosts, and it is best practice not to share this network adapter with the Hyper-V host.

HyperVvSwitchConfig1
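In PowerShell the switch can be created identically on each host like so; the switch and adapter names are examples:

    # Create the external switch without sharing the adapter with the host
    New-VMSwitch -Name 'External-VMTraffic' -NetAdapterName 'VMTraffic' -AllowManagementOS $false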

Network adapter naming

The network adapters across all hosts should be consistently named.

In server core network adapters can be renamed using netsh.

Use ‘Netsh Interface Ipv4 Show Interfaces’ to get a list of interfaces, then use Netsh Interface Set Interface Name=”[Current adapter name]” NewName=”[New Name]” to set the adapter’s new name.

iSCSI configuration

The iSCSI target will need to present a quorum disk and one or more disks for the virtual machine storage; I’ve configured a 512MB LUN for the quorum and a 1TB LUN for the virtual machine storage.

The iSCSI target has been configured to only present storage to the initiators of the Hyper-V hosts.

iSCSIConfig2

To configure the iSCSI initiator on the hosts run iscsicpl.exe from the command prompt.

Add the iSCSI targets

iSCSIConfig3

Then click Connect, specify the initiator and target IP addresses, and enable multi-path.

iSCSIConfig4
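The same connection can be scripted with the built-in iSCSI cmdlets; the addresses below are examples:

    # Register the target portal, then connect with multi-path enabled and
    # make the connection persist across reboots
    New-IscsiTargetPortal -TargetPortalAddress 192.168.10.10
    Get-IscsiTarget | Connect-IscsiTarget -IsMultipathEnabled $true -IsPersistent $true `
        -InitiatorPortalAddress 192.168.10.11 -TargetPortalAddress 192.168.10.10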

Once you have connected to both iSCSI targets using their associated adapter you can configure MPIO.

Click Devices then MPIO; you’ll need to configure MPIO for each device.

iSCSIConfig5

Then select the applicable multi-path policy.

iSCSIConfig6

Failover cluster configuration

I have installed the failover cluster management tools on a computer running the full GUI; I suppose you could use a domain-joined Windows client operating system.

Once installed, open the failover cluster tool and create a new cluster – make sure you run all validation tests; the end result should be green ticks against every test. Any warnings or errors are usually descriptive enough to suss out the underlying problem.

FailoverClusterConfig2

IP Addressing and network names

The failover clustering wizard will prompt for a cluster network name and cluster IP address which can float between hosts.

FailoverClusterConfig6
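For reference, the equivalent PowerShell – node names and the address are placeholders:

    # Validate the configuration first, then create the cluster
    Test-Cluster -Node hv01, hv02
    New-Cluster -Name HVCLUSTER -Node hv01, hv02 -StaticAddress 192.168.4.50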

Naming and configuring cluster networks

Failover clustering will identify the networks on each host and label them ‘cluster network #’; I tend to rename them to reflect their function, i.e. ‘cluster’, ‘Hyper-V Management’…

FailoverClusterConfig9

In the image above I have configured the storage adapters not to pass cluster heartbeats over those interfaces. If you are short on network interfaces and have to share, then you can configure QoS on those interfaces; look up New-NetQosPolicy for more info.

Cluster storage

The disks I presented to the hosts earlier are now configured; the 512MB disk is used as the quorum disk as per the image below and the 1TB disk is used to store highly available virtual machines.

FailoverClusterConfig4

The 1TB disk will need to be configured as a cluster shared volume to support HA; simply right click and select enable Cluster Shared Volumes.

FailoverClusterConfig5

Note: The image shows remove because I’ve already enabled cluster shared volumes.

FailoverClusterConfig7
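In PowerShell the same step is a one-liner; the disk name is an example:

    # Promote the clustered disk to a Cluster Shared Volume
    Add-ClusterSharedVolume -Name 'Cluster Disk 2'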

Cluster shared volumes can be found as a mount point in c:\clusterstorage\

Diskpart shows this too.

FailoverClusterConfig8

Next step is to deploy some virtual machines…deploy System Center VMM…

Windows Server 2012 – Deploy and manage IPAM

IPAM

Installation requirements and prerequisites

IPAM has recommended hardware requirements of a 2GHz CPU, 4GB RAM and 80GB of free disk space.

The prerequisites of IPAM are:

  • IPAM must be installed in an Active Directory domain
  • IPAM cannot be installed on a domain controller
  • Only one forest can be managed
  • IPv6 must be enabled in order for IPAM to manage IPv6 addresses
  • Only works with Microsoft servers using TCP/IP protocol
  • IPAM in Windows Server 2012 only supports the Windows Internal Database (WID), but Windows Server 2012 R2 adds SQL Server support

You can install IPAM from the Add Roles and Features wizard or from PowerShell using Install-WindowsFeature IPAM -IncludeManagementTools

Provisioning methods

Manual provisioning requires you to configure each managed server yourself. The following links describe the configuration required for each role.

Group Policy automated deployment eases configuration and deployment, i.e. it configures firewall rules, shares, security group membership, etc.

If you have hardware firewalls then the following ports will be used:

  • RPC end point mapper – TCP 135
  • SMB – TCP 445
  • Dynamic RPC ports – use ‘netsh interface ipv4 show dynamicports tcp’
  • NetBIOS Session service – TCP 139
  • DNS – TCP 53

To configure the Group Policy objects, use the Invoke-IpamGpoProvisioning cmdlet.

IPAM1
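A typical invocation looks something like this; the domain, prefix and server names are placeholders:

    Invoke-IpamGpoProvisioning -Domain contoso.com -GpoPrefixName IPAM `
        -IpamServerFqdn ipam01.contoso.com -DelegatedGpoUser administrator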

This will create the following Group Policy Objects and link them to the domain.

IPAM2

Configure server discovery

Select ‘configure server discovery’, then select the domain which will be managed and which server roles exist within the domain, e.g. Domain Controller, DNS and DHCP.

IPAM3

Once the domain is selected, select ‘start server discovery’ or wait for the next run of the ServerDiscovery scheduled task; it runs once per day by default.

Create and manage IP blocks and ranges

IP Address blocks can be added manually if required; note that IP address blocks discovered by DHCP are automatically added to IPAM but IP address lease information is not. See below.

To import DHCP lease information, download this PowerShell script from the TechNet Script Center.

Once you have downloaded the script, run IpamIntegration_dhcp.ps1 and then run Invoke-IpamDhcpLease. You may need to register IPAM with PowerShell if you get an error related to the Microsoft.ipam session configuration; to register it, run Register-PSSessionConfiguration -Name Microsoft.ipam.

If you’re running Windows Server 2012 R2 and the import of DHCP leases fails then be sure to check out the Q & A section of the script download page.

IPAM8new

If you run Invoke-IpamDhcpLease with the -Periodic parameter, a scheduled task is created that runs every 6 hours.

IPAM9

IP addresses can also be imported from csv files. Scope utilisation can be viewed from the IP address range groups within IPAM.

IPAM5

The IP address inventory contains all IP addresses which have been imported from DHCP (if the script above has been used) and manually added IP addresses. Note that manually added IP addresses cannot be managed by MS DHCP.

A manually created IP address block cannot be set to managed by MS DHCP.

IPAM4

The IP address inventory shows the status of each IP address lease.

IPAM10

If you have devices which need statically assigned IP addresses or DHCP reservations then you can use the ‘Find and Allocate Available IP Address…’ functionality to find an available IP address and set a DHCP reservation.

IPAM6

The ‘Reclaim IP Addresses…’ functionality just removes the IP Address entry from the IPAM database, it does not affect DHCP or DNS. To remove DHCP reservations and DNS records use the IP Address Inventory and right click the IP address you want to remove.

IPAM11

IPAM Delegation

IPAM administrators can view all IPAM data and manage all features.

IPAM ASM administrators can manage IP blocks, IP ranges and IP addresses.

IPAM IP Audit administrators can view IP address tracking data.

IPAM MSM administrators can manage DHCP and DNS servers from within IPAM.

IPAM Users can view IPAM data but cannot manage features or view IP tracking data.

IP Address Tracking

The domain controllers and NPS servers should have ‘Account Logon Events’ auditing enabled to collect user and computer authentication events and cross reference against DHCP leases.

IPAM12

 

Best Practice

The IPAM best practices can be found here.

Windows Server 2012 – Configure local storage

Storage spaces

Design storage spaces

Terminology

Primordial pool – One or more disks of at least 10GB with no partitions, not yet allocated to a storage pool

Storage pool – One or more disks allocated to a storage pool

Storage space – A disk carved out of a storage pool

Requirements of storage spaces

  • Disks that have no partitions or volumes and are at least 10GB
  • SAS storage must support SCSI Enclosure Services (SES) for use with clustering
  • SATA, SCSI, iSCSI, SAS, USB or JBOD enclosures
  • Windows Server 2012 or Windows 8
  • Disks presented via a RAID controller must not be configured as a RAID set

Storage space functionality

Storage pools are the building blocks of storage spaces; storage pools can contain spinning disks (SAS and SATA) and SSDs. New functionality in Windows Server 2012 R2 introduces Storage Tiers, which move frequently accessed data onto SSD and less frequently accessed data onto slower storage.

Storage space resiliency is configured when you create a storage space. The options are as follows:

  1. Mirror disk (RAID 1) – requires the underlying storage pool contain at least two disks for a two way mirror and three disks for a three way mirror.
  2. Parity disk (RAID 5) – requires the underlying storage pool contain at least three disks. Not supported in a failover cluster.
  3. Simple disk (RAID 0) – requires one disk, good for hosting temporary files and files which can be recreated easily.

Storage space provisioning supports both fixed and thin provisioning; thin provisioning is not supported in a failover cluster. It is worth noting that thin provisioning will impact latency when the metadata I/O provisions the blocks for data to be written. Windows Server 2012 also has trim provisioning, which allows you to reclaim space that is no longer in use. This functionality requires that the hardware is certified compatible with Windows Server 2012 and uses standards-compliant hardware for identification.

The space reclamation can impact performance as it is configured by default to reclaim space in real-time. If this is not acceptable then this registry key can be configured: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\FileSystem\DisableDeleteNotification = 1. Space can then be reclaimed using the ‘Defragment and Optimise Drives’ tool.
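For reference, the change can be made like this; the fsutil command achieves the same thing:

    # Turn off real-time TRIM/delete notifications; reclaim space later with
    # the 'Defragment and Optimise Drives' tool
    Set-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Control\FileSystem' `
        -Name DisableDeleteNotification -Value 1

    # Equivalent: fsutil behavior set disabledeletenotify 1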

Storage spaces support cluster deployments and Cluster Shared Volumes for scale out deployments.

Storage spaces supports multi-tenancy for hosting scenarios and supports the Active Directory security model.

Configure storage pools and disk pools

Storage pools are configured from Server Manager.

StoragePools1

To create a storage pool right click in the storage spaces white space and select new storage pool…, then select from the available physical disks.

Verify the storage pool or primordial pool has disks with a healthy status:

GUI:

getstoragepoolgui

PowerShell:

Get-StoragePool

getstoragepool

Verify the virtual disk (storage space) and the physical disks which underpin the storage pool.

GUI:

getstoragepoolPhygui

getstoragepoolVDgui

PowerShell:

Get-StoragePool -FriendlyName [Name] | Get-VirtualDisk | Format-Table FriendlyName, OperationalStatus, HealthStatus

getstoragepoolVD

Get-StoragePool -FriendlyName [Name] | Get-PhysicalDisk | Format-Table FriendlyName, OperationalStatus, HealthStatus

 getstoragepoolPhy

To remove a degraded disk from a pool for repair run:

Set-PhysicalDisk -FriendlyName [Name of Disk] -Usage Retired

Get-PhysicalDisk -FriendlyName [Name of Disk] | Get-VirtualDisk | Repair-VirtualDisk

Deploy and Manage Storage Spaces with PowerShell

Enumerating the commands available to PowerShell:

Get-Command -Module Storage | sort Noun, Verb

Storage spaces requires the disks to be blank, so it is important to check the disks you’re going to use for partitions.

Get-Disk | Get-Partition | Get-Volume

GetVolumesOnDisks

You could specify a disk number or a list of disk numbers separated by commas.

To clear data from the disk run:

Clear-Disk # -RemoveData

So which disks can I use to create a storage pool? – running Get-PhysicalDisk -CanPool $True will show you.

PoolDisks

Grab the disks you want to pool into a variable.

SelectOneDisk

Use the disk in the variable to create a storage pool

NewStoragePool

Create a virtual disk (storage space) from the storage pool. Note the storage space cannot use mirror or parity functionality because there is only one disk.

StorageSpace1

I had to specify the resiliency type as simple because the storage pool default was mirror, even though I only had one disk!

PoolProvTypeDefault
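Putting those steps together, the flow is roughly as follows; the friendly names are placeholders:

    # Pool every poolable disk, then carve a simple (no resiliency) space from it
    $disks = Get-PhysicalDisk -CanPool $true
    New-StoragePool -FriendlyName 'Pool01' -StorageSubSystemFriendlyName 'Storage Spaces*' `
        -PhysicalDisks $disks
    New-VirtualDisk -StoragePoolFriendlyName 'Pool01' -FriendlyName 'Space01' `
        -ResiliencySettingName Simple -UseMaximumSize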

Other useful PowerShell cmdlets are Initialize-Disk, New-Partition and Format-Volume.

Configure basic and dynamic disks

Disks within Windows can be basic or dynamic regardless of the partition type or whether they’re part of a storage pool.

Disks presented by a storage device which has multiple paths should not be configured as dynamic disks, as this will more than likely break multipathing.

Dynamic disks are good for providing fault tolerance when fault-tolerant storage controllers are not available.

The system partition is always a basic disk as this contains the boot files.

Configure Master Boot Record (MBR) and GUID Partition Table (GPT) disks

MBR disks

The Master Boot Record (MBR) is the traditional disk configuration, defining the partition configuration and disk layout; in addition, the MBR contains boot code to load the operating system boot loaders (winload.exe, NTLDR…).

The MBR is located on the first sector of the disk and supports a 32 bit partition table which limits the MBR disk to 2TB.

MBR disks are limited to four partitions: either four primary partitions, or three primary partitions and one extended partition.

GPT Disks

GUID Partition Table (GPT) disks were introduced in Windows Server 2003 SP1 and support disks greater than 2TB. GPT disks hold redundant copies of the partition table, which allows for self-healing.

The GPT disk has a protective MBR at LBA0, which allows BIOS-based computers to boot from GPT disks as long as the boot loader and operating system are GPT-aware.

GPT disks have a 16KB partition table, which allows for 128 partitions with each partition entry being 128 bytes, whereas MBR disks have a 64-byte partition table with each entry being 16 bytes.

Disks created within Server Manager are automatically created as GPT disks.

Noteworthy changes

EFS has been deprecated because BitLocker and Rights Management are more scalable. A volume can now hold 18 quintillion files. ReFS is targeted at data volumes and does not support deduplication.

The iSCSI target is now a role service.

De-duplication is built into the operating system and works at the block level. Example scenarios where de-duplication can be used are general file storage and VHD libraries. Deduplication cannot be enabled on boot or system disks, i.e. primarily C:\.

Storage tiers – data is moved between fast and slow disk depending on access frequency; this is new in Windows Server 2012 R2.

Windows Server 2012 introduces a new file system, ReFS. ReFS is a self-healing file system and, I believe, is the file system used in the back end to provide the fault tolerance and reliability of storage pools.

Windows Server 2012 – Configure server for remote management

WinRM

Windows Remote Management is enabled by default in Windows Server 2012; if it is disabled for some reason, it can be re-enabled from the command line using Configure-SMRemoting.exe [-Enable | -Disable].

RemoteServerManagement2012

WinRM runs as a service within the Windows operating system and listens on TCP 5985 or 5986; the latter is used for SSL.

Use winrm get winrm/config to view the current configuration.

To use WinRM to run commands on another server run winrs /r:[server name] [command].

WinRS

You can also right-click the server in Server Manager and select the tool you wish to run remotely.

WinRM_ManageAs_RemotePoSH

If you are running in a workgroup environment then, first of all, add the computer you want to manage to the TrustedHosts list on your client.

WinRM_ManageAs_Workgroup
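For example, where server01 is a placeholder for the remote server:

    # Append the server to the client's TrustedHosts list (workgroup scenarios only)
    Set-Item WSMan:\localhost\Client\TrustedHosts -Value 'server01' -Concatenate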

Configure UAC override on the server you want to manage.

WinRM_ManageAs_Workgroup2
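The override in question is, I believe, the well-known LocalAccountTokenFilterPolicy value; setting it from PowerShell looks like this:

    # Disable UAC remote restrictions for local accounts
    New-ItemProperty -Path 'HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System' `
        -Name LocalAccountTokenFilterPolicy -Value 1 -PropertyType DWord -Force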

Then add the server to Server Manager and right click Manage As…

WinRM_ManageAs

Enter the builtin administrator username and password of the server you want to manage.

As previously discussed Server Manager can be used to group servers by role or a custom grouping, from the group you can view the events, services, roles and features, BPA info and performance information. See here.

MMC tools and DCOM

To remotely manage systems using MMC tools you must enable the following firewall rules using wf.msc or Enable-NetFirewallRule:

COM+ Network Access (DCOM-In)

The remote event log management group rules.

FirewallRulesRemoteServerManagement2

The Windows firewall remote management group rules.

FirewallRulesRemoteServerManagement3

The remote service management group rules.

FirewallRulesRemoteServerManagement1
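These can all be enabled from PowerShell in one go; the display group names below are the English-locale defaults:

    Enable-NetFirewallRule -DisplayName 'COM+ Network Access (DCOM-In)'
    'Remote Event Log Management','Windows Firewall Remote Management','Remote Service Management' |
        ForEach-Object { Enable-NetFirewallRule -DisplayGroup $_ }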

This is because the MMC tools still use WMI over DCOM for network communication, whereas Server Manager uses WMI over WinRM.

Configure down-level server management

Windows Server 2012 R2 can manage down-level operating systems when they have the Windows Management Framework 4.0 and Microsoft .NET framework 4.5 installed.

Windows Server 2012 can manage down-level operating systems when they have the Windows Management Framework 3.0 and Microsoft .NET framework 4.0.

In order for performance data to be collected from Server 2008 SP2 or R2 the hotfix detailed in KB2682011 must be installed.

Once the above has been completed, enable remote management.

Server 2008 R2

RemoteServerManagement2008R2

NOTE: Server Manager is backward compatible i.e. Windows Server 2012 can manage down-level clients and other Windows Server 2012 servers but cannot manage Windows Server 2012 R2 servers.

Remember, to access MMC tools you will need to open firewall ports too.

You will more than likely see errors related to WinRM not being able to register the SPN for the WinRM service; this is because the Network Service account does not have the ‘Validated write to service principal name’ permission within Active Directory. To fix this, use AdsiEdit.

WSMANADWrites

Configure Server Core

Server Core is configured using the sconfig.cmd server configuration tool. If you need to enable remote management of MMC tools you will need to configure the Windows firewall using Enable-NetFirewallRule.

Group Policy configuration of WinRM and Windows Firewall

Group policies can be configured to enable WinRM on all IP addresses or a range of IP addresses. The Windows firewall can be configured via Group policy to open the DCOM ports for MMC tool management.

Windows Server 2012 – configuring NIC teaming

Configuring NIC teaming

NIC teaming is now built into the Windows operating system. This post briefly looks at what is supported, the configurations possible and some example configurations.

What is supported

Datacenter bridging, IPsec task offload, large send offload, receive side coalescing, receive side scaling, receive side checksum offloads, transmit side checksum offloads and virtual machine queues.

What is not supported

SR-IOV, rDMA and TCP Chimney are not supported by NIC teaming because these technologies bypass the network stack. SR-IOV can be implemented with NIC teaming within a virtual machine. 802.1x and QoS are not supported either.

Mixing different speed network adapters in teams is not supported in active / active teams.

NIC teaming configurations

There are two teaming modes: switch independent and switch dependent. Switch independent does not require the switch to be intelligent, as all the intelligence is provided in software; this mode also allows the team to span switches. Switch dependent requires that the switch supports Link Aggregation Control Protocol (LACP / 802.1ax) or static teaming (802.3ad draft v1) and only supports one switch.

The load balancing algorithms available are address hashing and Hyper-V ports. Address hashing will hash the TCP or UDP port and IP address of the outgoing packet; if the TCP or UDP port is hidden, i.e. protected by IPsec, or the traffic is not TCP or UDP, then the IP addresses are hashed instead. Finally, if the traffic is not IP, then a hash of the MAC addresses is generated.

Hyper-V port load balancing uses the virtual port identifier to load balance traffic.

The combination of the teaming modes and algorithms defined above are:

  1. Switch independent with address hashing
    1. Uses all team members to send outbound traffic. Can only receive inbound traffic on the primary network adapter of the NIC team. This mode is best suited for applications such as IIS, teaming within a virtual machine, teaming where switch diversity is a concern, and where active / standby teaming is required.
  2. Switch independent with Hyper-V ports
    1. Each virtual machine will be limited to the bandwidth of one network adapter in the NIC team. The virtual machines will send and receive data on the same network adapter. This mode is best suited when the number of virtual machines far exceeds the number of team members and the limitation of bandwidth is not a concern. This mode also is best suited when virtual machine queue is used.
  3. Switch dependent with address hashing
    1. Uses all team members to send outbound traffic, inbound traffic will be distributed by the switch. This mode is best suited for applications which run on Windows natively and require maximum network performance and can also be used where virtual machines have a requirement for more bandwidth than one network adapter can provide.
  4. Switch dependent with Hyper-V ports
    1. More or less the same as the switch independent mode, but with the following exceptions: the inbound network traffic is distributed by the switch, which may result in inbound traffic arriving on all team members; a VMQ will therefore be present on all team members for a particular VM, so this mode may not be best suited when VMQ is configured. This mode suits a densely populated Hyper-V host where policy dictates that LACP should be used.

NIC teaming within Hyper-V

NIC teaming within the virtual machine allows you to provide access to two or more virtual switches. Each network adapter must be configured to allow teaming. The team within the virtual machine cannot be anything other than switch independent with address hashing. NIC teaming is also useful where SR-IOV is being used; SR-IOV allows the virtual machine direct access to hardware, more here.

The vSwitches must connect to physically different switches. The virtual machine should be configured with a network adapter assigned to each vSwitch.
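The ‘allow teaming’ setting on each virtual network adapter can be flipped with Set-VMNetworkAdapter; the VM name is an example:

    # Permit NIC teaming inside the guest on all of the VM's network adapters
    Get-VMNetworkAdapter -VMName 'vm01' | Set-VMNetworkAdapter -AllowTeaming On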

The image below shows three virtual switches, each with one network adapter assigned.

NICTeamingVMConfig0

Within the virtual machine create two network adapters connecting each one to a different virtual switch.

NICTeamingVMConfig01

Within Server Manager you will notice two network adapters; nothing new there.

NICTeamingVMConfig03

Click on the NIC Teaming link to configure NIC teaming.

NicTeamingGUIVM1

Select tasks > New Team

NicTeamingGUIVM2

Name the team and select the adapters; you’ll notice that, because the operating system is installed within a virtual machine, you’re restricted to address hashing in switch independent mode.

NicTeamingGUIVM3

NICTeamingVMConfig05

As an alternative to the GUI, you can do the same in one line of PowerShell using New-NetLbfoTeam.
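Something along these lines, where the team and member names are examples:

    New-NetLbfoTeam -Name 'VMTeam' -TeamMembers 'Ethernet','Ethernet 2' `
        -TeamingMode SwitchIndependent -LoadBalancingAlgorithm TransportPorts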

Native NIC teaming

Native NIC teaming allows you to untag multiple VLANs using team interfaces; the most common NIC teaming configuration is switch dependent with address hashing.

NIC teaming configuration example:

LACP1NicTeam

Before running the command above I would suggest the network ports are not connected, to protect against loops and broadcast storms. I have an HP ProCurve switch which supports dynamic LACP, so I only had to configure the switch ports to be LACP active and the switch does the rest. HP documentation here.

To remove the NIC teaming configuration use Remove-NetLbfoTeam -Name [Team name]

NIC teaming can also be used where VLAN tagging at the operating system is required.
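For example, a tagged team interface can be added with Add-NetLbfoTeamNic; the team name and VLAN ID are examples:

    # Create an additional team interface that tags traffic with VLAN 10
    Add-NetLbfoTeamNic -Team 'LACP1' -VlanID 10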