Objective 3.1 – Configure FC SAN Storage

Identify FC SAN hardware components

Disk access for the virtual machines and the Service Console is handled by the Virtual Machine Monitor (VMM) and the VMkernel; the VMkernel interfaces directly with the storage adapters using proprietary drivers, while the VMM intercepts disk access. The VMM does this to ensure the underlying disk subsystem is not exposed to the virtual machines.

The hardware components which make up the Fibre Channel SAN connectivity and storage are:

The Host Bus Adapters (HBAs), the SAN fabric (cabling [LC/LC multimode] and switches), the storage processors (with their cache and array software) and the physical disks.

Identify how ESX Server connections are made to FC SAN storage

ESX/ESXi uses hardware-based HBAs to connect to the FC SAN fabric; each HBA has a unique identifier called a World Wide Node Name (WWNN) and a corresponding World Wide Port Name (WWPN). Port IDs are Fibre Channel addresses which uniquely identify each port; the Port ID allows routing between devices logged into the SAN. N-Port ID Virtualisation (NPIV) registers multiple WWPNs per port; this allows storage to be assigned directly to a virtual machine.
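As a quick check of these identifiers, the storage adapters can be listed from the console; a minimal sketch, assuming a classic ESX 4 host (FC HBAs typically report a UID containing the WWNN and WWPN):

esxcfg-scsidevs -a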

The SAN administrator would zone the WWPNs to a specific storage processor port or to multiple storage processor ports; it is recommended that single-initiator zoning be used, where one initiator (the HBA port) is zoned to all applicable storage processor ports.

Describe ESX Server FC SAN storage addressing

As above, the ESX/ESXi hosts are zoned to all applicable storage processor ports using specific WWPNs.
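For reference, each path on the host is given a runtime name of the form vmhbaAdapter:CChannel:TTarget:LLUN; for example, vmhba1:C0:T0:L2 would be LUN 2 seen through adapter vmhba1, channel 0, target 0 (a storage processor port) – the values here are illustrative only.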

Describe the concepts of zoning and LUN masking

Zoning can be configured in two ways: hard and soft zoning. Hard zoning is configured at the Fibre Channel switch level and allows a specific HBA port to communicate with a specific storage processor port. Soft zoning allows the host's WWPN to communicate with the storage processor on any port.

Masking can be configured at the storage processor or at the host; masking allows a WWPN to see only the LUNs defined by the storage administrator or system administrator.

Configure LUN masking

A LUN masking rule can be created via the vSphere CLI or directly at the console; note that this is host-side masking, not storage-side masking.

Listing claimrules

esxcli corestorage claimrule list

Configuring claimrules

esxcli corestorage claimrule add -P MASK_PATH -r ### -t location -A vmhba# -C # -T # -L #

See the Configure LUN masking section of http://booandjoel.me.uk/vcp-4-prep/vcp-4-prep-objective-1-4/ for more detailed info.

Loading claimrules

esxcli corestorage claimrule load

Running claimrules

esxcli corestorage claimrule run

Removing claimrules 

esxcli corestorage claiming unclaim -t location -A vmhba#
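
Putting the commands together, below is a minimal sketch of masking a single path from the host; the rule number (200) and the location vmhba1:C0:T0:L2 are illustrative values only – use a free rule number and your own path:

esxcli corestorage claimrule add -P MASK_PATH -r 200 -t location -A vmhba1 -C 0 -T 0 -L 2
esxcli corestorage claimrule load
esxcli corestorage claiming unclaim -t location -A vmhba1 -C 0 -T 0 -L 2
esxcli corestorage claimrule run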

Scan for new LUNs

ESX/ESXi will scan for new LUNs at boot and when told to do so by user action. The latter happens when the systems administrator selects Rescan on the storage adapters screen within the vSphere Client; the rescan allows you to scan for new storage devices and/or new VMFS datastores.
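A rescan can also be triggered from the command line; a minimal sketch, assuming vmhba1 is the FC HBA on a classic ESX host (an ESXi host managed via the vSphere CLI would use vicfg-rescan instead):

esxcfg-rescan vmhba1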

An ESX/ESXi host can connect to 256 LUNs with each LUN having a maximum of 32 paths.

Determine and configure the appropriate multi-pathing policy

The three multi-pathing policies natively available to ESX/ESXi through the Native Multipathing Plug-in (NMP) are Fixed, Most Recently Used (MRU) and Round Robin.

Is the storage array Active/Passive or Active/Active? The path selection policy (PSP) you select will be determined by this. An Active/Active array would more than likely use the Fixed PSP or the Round Robin PSP; the storage vendor's best-practice document will define the best PSP to use. It should be noted that the Round Robin PSP should not be used on LUNs which are presented to a Microsoft failover cluster.

An Active/Passive array would more than likely use the Most Recently Used PSP, or the Fixed PSP with an array path preference.
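The PSP in use for a device can be checked and changed via the vSphere CLI; a minimal sketch, where the naa identifier is a placeholder for your own device ID and Round Robin is only an example – confirm the correct PSP with the storage vendor first:

esxcli nmp device list
esxcli nmp device setpolicy --device naa.xxxxxxxxxxxxxxxx --psp VMW_PSP_RR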

Differentiate between NMP and third-party MPP

The VMware NMP is the default multipathing plug-in; it works with sub-plugins called Storage Array Type Plug-ins (SATPs) and Path Selection Plug-ins (PSPs). The default SATPs included with VMware can detect path states, change IO paths etc. The Pluggable Storage Architecture of ESX/ESXi also allows vendors to write third-party plug-ins (complete MPPs, or additional SATPs and PSPs) which let VMware take advantage of specific storage array features.
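
To see which SATPs and PSPs are present on a host (including any installed by third-party packages), the following can be run as a quick check; output will vary by host:

esxcli nmp satp list
esxcli nmp psp list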
