MCTS 70-649 Deploying servers

Deploying servers

Deploy images by using Windows Deployment Services

Windows Deployment Services (WDS) uses WIM files for deployment and XML answer files for automated and customised installs of Windows Server 2008, Windows Vista, Windows 7, etc.

The XML automated install files must be syntactically correct; to aid the creation of these files you should use Windows System Image Manager (Windows SIM), part of the Windows Automated Installation Kit (WAIK).

The XML file should be named autounattend.xml if deploying locally, i.e. from DVD etc., as Windows Setup automatically looks for this file. If deploying over the network, i.e. using WDS, you specify the XML file to the WDS client, so you can name it as you please, e.g. setup.exe /unattend:{drive letter}:\{unattend.xml}

The WDS server role enables Windows Server to deploy operating systems to clients booting from PXE-compliant network cards. WDS uses multicast to deploy to many clients at once; this is called auto-cast. To create a multicast session use wdsutil.
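
A hedged auto-cast example (the friendly name and image name are illustrative, not from these notes):

wdsutil.exe /New-MulticastTransmission /FriendlyName:"AutoCast 1" /Image:"Windows Server 2008" /ImageType:Install /TransmissionType:AutoCast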

The WDS deployment server is the full install of WDS; the server role requires that the WDS server be joined to a domain and have functioning DNS name resolution and DHCP. The WDS server must also be authorised within the domain. If WDS is installed on the same server as DHCP then WDS must be configured not to listen on UDP port 67 and you must publish option 60 within DHCP, e.g.

wdsutil.exe /Set-Server /UseDHCPPorts:No /DHCPOption60:Yes

The WDS transport server is just the networking components of WDS; a transport server can only be managed with wdsutil.exe and requires a custom PXE provider.

WDS Images

Boot Images

Boot images contain WinPE and the WDS client; WinPE is used to connect to WDS deployment servers to install images. The default boot image can be found on the Windows Server installation DVD under the sources directory; it is named boot.wim.

To add the boot image to WDS use the GUI or wdsutil

wdsutil.exe /Add-Image /ImageFilePath:{path}\boot.wim /ImageType:boot

Install Images

Install images are used to deploy operating systems such as Windows Server 2008, Windows Server 2008 R2, Windows Vista and Windows 7. The default install image can be found on the installation media of that particular operating system in the sources directory; it is named install.wim.

To add the install images to WDS use the GUI or wdsutil.

wdsutil.exe /Add-Image /ImageFilePath:{path}\install.wim /ImageType:Install /ImageGroupName:{group name}

Discover Images

A discover image is used when deploying to servers whose network cards are not PXE-compliant. Discover images can be scoped to a particular WDS server or left to dynamically discover a WDS server.
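
A sketch of creating a discover image with wdsutil (the source image name and destination path are illustrative); as I recall, the optional /WDSServer parameter statically scopes the image to a particular server:

wdsutil.exe /New-DiscoverImage /Image:"Microsoft Windows Setup (x86)" /Architecture:x86 /DestinationImage /FilePath:{path}\discover.wim /WDSServer:{server name}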

Capture Images

Capture images are bootable images which can be used to capture a sysprepped installation to a WIM file.
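
A sketch of creating a capture image from an existing boot image with wdsutil (the source image name and destination path are illustrative):

wdsutil.exe /New-CaptureImage /Image:"Microsoft Windows Setup (x86)" /Architecture:x86 /DestinationImage /FilePath:{path}\capture.wim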

The images within WDS can be secured using wdsutil.exe e.g.

wdsutil.exe /Verbose /Set-ImageGroup /ImageGroup:ImageGroup1 /Server:MyWDSServer /Name:"New Image Group Name" /Security:"O:BAG:S-1-5-21-2176941838-3499754553-4071289181-513 D:AI(A;ID;FA;;;SY)(A;OICIIOID;GA;;;SY)(A;ID;FA;;;BA)(A;OICIIOID;GA;;;BA)(A;ID;0x1200a9;;;AU)(A;OICIIOID;GXGR;;;AU)"

Image deployment can use auto-cast or scheduled-cast; auto-cast deploys images as and when clients connect using multicast. Scheduled-cast deploys images once a set number of computers have connected or the transmission start time has passed.
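
A hedged scheduled-cast example; here the transmission starts once 20 clients have connected or the start time passes (names, client count and time are illustrative):

wdsutil.exe /New-MulticastTransmission /FriendlyName:"ScheduledCast 1" /Image:"Windows Server 2008" /ImageType:Install /TransmissionType:ScheduledCast /Clients:20 /Time:"2011/01/01:17:00"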

Computers can be prestaged in WDS using the MAC address or GUID.

wdsutil.exe /Add-Device /Device:{computer name} /ID:{MAC address or GUID}

Admin approval allows WDS to create a computer account within AD DS for unknown client computers as the deployment phase completes.

wdsutil.exe /Set-Server /AutoAddPolicy /Policy:AdminApproval

Configure Microsoft Windows Activation

Windows Server can be activated via the System control panel applet or slmgr.vbs.

The VBScript slmgr.vbs, examples:

cscript slmgr.vbs -ipk {product key} [install a product key]
cscript slmgr.vbs -ato [start the activation process]
cscript slmgr.vbs -rearm [reset the activation grace period]
cscript slmgr.vbs -dli [show licensing information]

Windows Server product keys:

OEM

System builders such as Dell, HP and IBM etc.

Retail

People purchasing off the shelf copies

Volume

Large businesses, service providers etc.

The volume keys are:

Multiple Activation Key (MAK) and Key Management Service (KMS)

KMS is used in large organisations; it allows for automated activation using DNS service records and requires clients to re-activate every 180 days.

KMS requires that a minimum of 5 physical Windows Server installations or 25 Windows Vista / Windows 7 installations be deployed before KMS will activate clients. The clients must be able to communicate with the KMS server on TCP port 1688. The DNS service record is created when the KMS host is activated; the record is named _vlmcs._tcp. A KMS host is created by changing the product key of a Windows Server installation to a KMS product key, e.g.

cscript slmgr.vbs -ipk {KMS product key}
cscript slmgr.vbs -ato
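
Clients normally find the KMS host via the _vlmcs._tcp service record, but a client can also be pointed at a specific KMS host manually (the host name below is illustrative):

cscript slmgr.vbs -skms kms01.contoso.com:1688
cscript slmgr.vbs -ato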

MAK keys are defined for a specific number of activations; activated computers stay activated, unlike with KMS.

The Volume Activation Management Tool (VAMT) should be used to deploy MAK keys as well as keep track of them; VAMT can keep track of KMS activations too.

Configure Windows Server Hyper-V and virtual machines

Hyper-V is Microsoft’s type-1 hypervisor, based on a micro-kernel design. Hyper-V has a shared virtualisation stack in the parent partition; this is where virtual machines (child partitions) send their I/O requests via the VMBus. The parent partition is also where drivers are installed; this is a distributed driver model rather than a shared driver model where drivers are installed within the hypervisor itself.

Benefits of virtualisation

Availability – virtualisation allows you to create a highly available infrastructure for many applications using fewer hardware resources.

Sand-boxing – isolated or dedicated servers running specific server roles or applications.

Utilisation – greater resource utilisation by grouping many relatively under-utilised servers together, or servers that only run their workload at defined periods, e.g. every quarter.

Portability – servers can be moved to new hardware with relative ease.

Capacity – relatively easy to increase RAM, the number of CPUs, hard disks, network cards etc.

Hyper-V features

  • 64-bit guest support
  • Up to four vCPUs per guest
  • Up to 64GB RAM per guest
  • Guest snapshots

Installation

start /wait ocsetup Microsoft-Hyper-V

or

Server Manager > Roles > Add Roles > Hyper-V

Licensing

Hyper-V licensing is tied to the Windows Server licence as follows:

Microsoft Hyper-V Server – no guest licences included

Microsoft Windows Server 2008 Standard with Hyper-V – one guest licence included

Microsoft Windows Server 2008 Enterprise with Hyper-V – four guest licences included

Microsoft Windows Server 2008 Datacenter with Hyper-V – unlimited guest licences included

Remote Management

Remote management tools are available as a feature in Windows 7 and Windows Server 2008 R2. For Windows Vista you can download the RSAT tools from the Microsoft download center.

Virtual Networks

Each Hyper-V host should have a minimum of two network adapters: one for the parent partition and one for the child partitions (guests).

The virtual network types are as follows:

  • Private – communication between guests on a particular Hyper-V host.
  • External – communication between guests on other hosts, the host itself, or any other computers.
  • Internal – communication between guests on a particular Hyper-V host and the host itself.

Guests which support Hyper-V enlightenments (integration services) should use the standard network adapter as this will improve performance; guests which do not support enlightenments must use the emulated Intel 21140 adapter (legacy adapter). If booting a guest from the network then the legacy adapter must be used too.

Hyper-V virtual networks support VLAN tagging; the virtual switch untags packets sent to it with a particular VLAN ID before passing them on to the guest.

Hyper-V failover clusters

Hyper-V failover clusters use Microsoft failover clustering; the virtual machines are added to failover clustering as a “services and applications” resource. The storage where the virtual machine is hosted should be configured as a Cluster Shared Volume; Cluster Shared Volumes are a feature of Windows Server 2008 R2.

  • Install storage manager for SANs
  • Install failover clustering
  • Create a Cluster Shared Volume (CSV) – see the sketch below
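
A rough PowerShell sketch for Windows Server 2008 R2, assuming Cluster Shared Volumes have already been enabled on the cluster; the disk resource name is illustrative:

Import-Module FailoverClusters
Get-ClusterResource
Add-ClusterSharedVolume -Name "Cluster Disk 1"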

Hyper-V virtual machines

Physical-to-Virtual (P2V) migrations

The following operating systems can be migrated to Hyper-V:

  • Windows Server 2008, Windows Server 2003 SP1, Windows XP Professional SP2 and Windows Vista SP1 can be migrated online using System Center Virtual Machine Manager 2008 (SCVMM).
  • Windows 2000 Server SP4 can be migrated in an offline state using SCVMM.
  • Windows NT Server 4.0 must be migrated using the Virtual Server Migration Toolkit (VSMT) to Microsoft Virtual Server 2005 SP1 and then migrated to Hyper-V.

Hyper-V enlightenments (integration services) provide a number of features:

  • Graceful shutdown from within Hyper-V
  • Time synchronisation between guest and host
  • Data exchange
  • Heartbeats i.e. monitoring of the guest from the parent partition
  • Backup functionality using snapshots

Virtual machines can use the following disk types:

Fixed size disks allocate all the space at creation time; this disk type suffers less from fragmentation. This disk type also zeroes out the underlying block storage at creation time, which improves performance later on.

Dynamic disks have a default block size of 512KB in Windows Server 2008; this was increased to 2MB in Windows Server 2008 R2. When the disk requires more space it zeroes out more blocks and formats them; this can have a performance impact for heavy I/O production systems.

Differencing disks record the changes made by the running virtual machine relative to the parent virtual disk.

Pass-through disks are raw LUNs passed directly to the virtual machine with no encapsulation. NOTE: the disk must be offline within the parent partition.

Virtual machine boot disks are always IDE.

Backup and snapshots

Hyper-V host backups

If the storage presenting the virtual machine storage is compatible with Hyper-V VSS then you will be able to back up the virtual machine configuration, virtual switches, snapshots and virtual hard disks online. Windows Server Backup requires a registry key be added to register the Hyper-V VSS writer.
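
A sketch of the commonly cited registry addition (the GUID is the Hyper-V VSS writer class ID; this is from memory, so verify against the current Microsoft support article before relying on it):

reg add "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\WindowsServerBackup\Application Support\{66841CD4-6DED-4F4B-8F17-FD23F8DDC3DE}" /v "Application Identifier" /t REG_SZ /d "Hyper-V"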

Backups within the Hyper-V host

This option is recommended if your storage does not support Hyper-V VSS e.g.

  • Storage presented by the software iSCSI initiator
  • Pass-through disks
  • Other storage not compatible with Hyper-V VSS

Online backup requirements:

  • Integration services installed
  • NTFS formatted disks
  • Fixed disk type
  • VSS enabled on all volumes that host virtual machines

Offline backups of a virtual machine can be performed when the virtual machine is powered off or paused.

New features in Windows Server 2008 R2

64-bit only.

Live migration using cluster shared volumes; cluster storage is mounted within c:\ClusterStorage\ on the Hyper-V host. Live migration allows for hardware servicing, hypervisor patching and virtual machine workload optimisation with no downtime.

Boot from VHD allows physical machines to boot from virtual hard disks.
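
A hedged bcdedit sketch: copy the current boot entry, then point the copy at the VHD ({guid} stands for the identifier returned by the copy command; the VHD path is illustrative):

bcdedit /copy {current} /d "Boot from VHD"
bcdedit /set {guid} device vhd=[C:]\Images\server.vhd
bcdedit /set {guid} osdevice vhd=[C:]\Images\server.vhd
bcdedit /set {guid} detecthal on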

Hyper-V now supports:

  • hot add and hot removal of virtual hard disks
  • 512 vCPUs per host

and has the following enhancements:

  • Near-native performance of fixed virtual hard disks
  • Core parking (consolidating processing onto the fewest number of cores)
  • SLAT – Second Level Address Translation, hardware-assisted paging below the x86 page tables; think of this as similar in purpose to VMware’s shadow page tables
  • Jumbo frames (9000 bytes per frame rather than 1500 bytes)
  • TCP Chimney and VMQ, which offload network processing

Configure High Availability

DNS Round Robin

This method of high availability is very simple but has a few drawbacks. DNS round robin works by returning more than one resource record for a particular host, e.g. the host http://www.contoso.com may have two DNS A resource records. The first drawback is host failure; DNS will still return multiple A resource records whether the actual host is online or not. The second drawback is skewed balancing of client requests: when a client queries the DNS server it is given the resource records in a particular order, and a second client querying DNS gets the same resource records in a different order, so requests are balanced. But if a client already has a record for that host in its local cache, the cached record will be used.
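
For reference, creating two A records for the same name produces a round robin entry; a dnscmd sketch (server name, zone and addresses are illustrative):

dnscmd dns01 /RecordAdd contoso.com www A 192.168.10.11
dnscmd dns01 /RecordAdd contoso.com www A 192.168.10.12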

Network Load Balancing (NLB)

This method of high availability uses server load (and some other information) to determine where a request is handled. Each host in the NLB cluster simultaneously maps a client’s request to itself, then each host uses the load balancing algorithm to determine which host will actually handle the request. The following are load balancing algorithm considerations:

  • host priority: if defined then the host with the highest priority will handle all traffic for that rule.
  • port rules: used to map port rules to multiple hosts or a single host for a particular port or ports and protocol or protocols (TCP/UDP).
  • affinity: this determines whether client sessions are returned to the same host to maintain session state.
  • load percentage distribution: if defined then load weighting can be used to force a higher percentage of requests to be handled by a particular host.

Whereas round robin wouldn’t detect a failed host, NLB hosts send heartbeats between themselves; if a host fails to send five consecutive heartbeats it is removed from the NLB cluster.

Cluster operation modes: Unicast, Multicast and IGMP Multicast. Unicast requires two network adapters: one for load-balanced traffic and one for heartbeats and general management. The adapter assigned to the load balancer uses the cluster MAC address.

Multicast only requires one network adapter; the network adapter is assigned a multicast MAC address; this adapter can now be used for heartbeats, management / administration and servicing client requests.

It is worth noting that the router in front of the load-balanced hosts must support ARP replies in which the source MAC address differs from the MAC address in the ARP packet payload; this ensures the cluster MAC address is not learnt by the switch fabric, so client requests always flood the switch ports.

IGMP Multicast is the same as Multicast but with improved traffic management, as traffic is only sent to the switch ports populated by NLB cluster nodes.

Failover clustering


Failover clustering uses two or more hosts to ensure applications are highly available. There are two general cluster application types: single-instance and multiple-instance. Single-instance applications execute on one server at a time, whereas multiple-instance applications run on multiple hosts concurrently, sharing or partitioning data so any host in the cluster can respond to client requests.

Applications which support clustering should attempt to re-establish connectivity automatically and use an IP-based protocol; application and configuration data should be stored on a shared disk if applicable or replicated between cluster hosts. All hosts within the cluster must use the same processor architecture and be members of a domain. The storage connection types supported by Windows Server 2008 are Fibre Channel, iSCSI and SAS.

Application resources can be configured to prefer one host over another and if failback is enabled the application resource will failback to that node when it comes back online.

Failover clustering uses heartbeats (UDP 3343) to determine the health of each cluster node; heartbeats are sent every second and the failure threshold is five seconds.

The cluster disk driver is now truly plug and play and is no longer part of the storage stack; the cluster disk driver now interacts with the partition manager; partition manager policies ensure shared disks are offline by default.

Quorum Models

Node majority – best used where you have an odd number of hosts. The cluster configuration is stored on each host within the cluster and each host has a vote to achieve quorum. The cluster survives as long as a majority of hosts (half plus one) are still functioning. The minimum number of hosts supported is three, and the application should have some mechanism for replicating its data between hosts.

Node and disk majority – best used with an even number of hosts. The cluster configuration is stored on each host and on the shared disk, and each host and the shared disk has a vote to achieve quorum. The minimum number of hosts is two.

Node and file share majority – very similar to the node and disk majority quorum model, but this model uses a file share; the file share does not contain any cluster configuration but instead keeps track of where the most up-to-date cluster configuration is located, i.e. which host. This model is generally used where clusters are geographically dispersed or on different subnets.

No majority: disk only – the up-to-date cluster configuration in this model is kept only on the shared disk; copies of the cluster configuration are kept on each host. The shared disk is a single point of failure, although if the disk is lost or corrupted the configuration can be repaired from a host copy. Microsoft does not recommend this configuration.
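
On Windows Server 2008 R2 the quorum model can also be set from PowerShell; a sketch using the FailoverClusters module (resource names and share path are illustrative; run only the line matching the model you want):

Import-Module FailoverClusters
Set-ClusterQuorum -NodeMajority
Set-ClusterQuorum -NodeAndDiskMajority "Cluster Disk 1"
Set-ClusterQuorum -NodeAndFileShareMajority \\fileserver\witness
Set-ClusterQuorum -DiskOnly "Cluster Disk 1"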

Configure storage

RAID

RAID 0

Striping of disks to create one volume. The benefit is fast reads and writes, but it does not provide any redundancy; requires at least two disks.

RAID 1

Mirroring of disks to create one volume. The benefits are disk redundancy, and potentially faster reads if supported by the RAID controller; the fault tolerance cost is 50% of capacity. Data integrity is maintained when a disk fails. Requires two disks, or three if you have a hot spare.

RAID 5

Disk striping with parity; similar to RAID 0 in that respect but with added parity for redundancy. The benefit is faster reads, though this can be hampered by the write penalty; the fault tolerance cost is one disk per array, so in a three-disk array 33% of storage space is required for parity. RAID 5 can maintain data integrity if a single disk fails. Requires a minimum of three disks, or four if you have a hot spare. There may be a sweet spot for the maximum number of disks in a RAID 5 array.

RAID 10

Disk mirroring and disk striping combined; requires a minimum of four disks, with two RAID 1 disk groups striped. An alternative would be RAID 0+1, which is RAID 0 disk groups mirrored.

Windows Server 2008 supports RAID 0, RAID 1 and RAID 5 in software; if using hardware RAID, the operating system is none the wiser to the RAID configuration.
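
A rough diskpart sketch of the built-in software RAID (disk and volume numbers are illustrative and the disks must first be converted to dynamic): create volume stripe builds RAID 0, create volume raid builds RAID 5, and add disk mirrors an existing simple volume for RAID 1.

diskpart
create volume stripe disk=1,2
create volume raid disk=1,2,3
select volume 1
add disk=2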

LUNs

A LUN is a volume presented by the disk storage subsystem; a LUN could comprise a portion of a disk, a single disk or a group of disks.

Windows Server 2008 has Storage Manager for SANs; Storage Manager requires that the SAN supports Virtual Disk Service (VDS) version 1.1. You can check which VDS provider the operating system has loaded by running diskraid from the command line.

Fibre Channel

High performance block I/O storage. The hosts require a Host Bus Adapter (HBA) to connect to the storage controllers. Fibre Channel LUNs are mapped to hosts using the World Wide Port Name (WWPN) of that particular host’s HBA.

iSCSI

iSCSI uses TCP/IP to encapsulate SCSI commands. The client requires an iSCSI initiator, which can be software or hardware. LUNs are mapped to the initiator’s name. The iSCSI storage could be a server with lots of disks running open-source iSCSI software such as Openfiler or FreeNAS.
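
A minimal session with the built-in iscsicli tool (portal address and target IQN are illustrative): add the target portal, list the targets it presents, then log in to one.

iscsicli QAddTargetPortal 192.168.10.50
iscsicli ListTargets
iscsicli QLoginTarget iqn.2008-08.com.example:storage.lun1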

SAN Policies

SAN policies are used to define how shared disks are handled by default; the default is OfflineShared.
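
The policy can be viewed and changed from diskpart; the second command below changes the default to OnlineAll, which you would not want on cluster-shared storage:

diskpart
san
san policy=OnlineAll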

Storage Manager for SANs

Allows you to:

  • View the properties of FC and iSCSI storage subsystems
  • Create new, extend and delete LUNs
  • Assign LUNs to clients
  • Manage HBAs
  • Manage iSCSI security, i.e. CHAP configuration
  • Monitor LUN status and health
