VCP 4 Prep Objective 3.3 – Configure NFS Datastores
Identify the NFS hardware components
The hardware components that make up NFS storage connectivity are:
A NAS appliance exposing NFS shares, Ethernet switches, Ethernet cabling and network adapters.
VMware vSphere 4 supports 10GbE interfaces, and with jumbo frame support NFS can handle large volumes of traffic.
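For illustration, raising a vSwitch MTU for jumbo frames can also be scripted against the vSphere API. The sketch below uses pyVmomi (the Python bindings for the vSphere API); the host name, credentials and vSwitch1 are placeholder assumptions.

import ssl
from pyVim.connect import SmartConnect
from pyVmomi import vim

# Connect straight to the host (placeholder credentials).
ctx = ssl._create_unverified_context()
si = SmartConnect(host='esx01.domain.local', user='root',
                  pwd='password', sslContext=ctx)
content = si.RetrieveContent()
host = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.HostSystem], True).view[0]

netsys = host.configManager.networkSystem
for vswitch in netsys.networkInfo.vswitch:
    if vswitch.name == 'vSwitch1':   # assumed vSwitch carrying NFS traffic
        spec = vswitch.spec
        spec.mtu = 9000              # jumbo frames
        netsys.UpdateVirtualSwitch(vswitchName='vSwitch1', spec=spec)

Remember that the physical switch ports and the NAS interfaces must be configured for jumbo frames as well, otherwise frames will be fragmented or dropped along the path.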
Explain ESX exclusivity for NFS mounts
For NFS, ESX/ESXi creates a lock file with a .lck-xxx extension in the same directory as the .vmdk being locked, whereas VMFS uses its own on-disk metadata locking. I think this is the reason why MSCS / Failover Clustering is not supported on NFS.
Configure ESX/ESXi network connectivity to the NAS device
NAS connectivity is configured via the vSphere client.
NOTE: NFS uses TCP port 2049
Configuration tab > Networking > Add Networking > VMkernel > Next > select the vmnic# > Next > label the port group e.g. NFS > Next > specify a VLAN ID (if applicable) > Next > specify an IP address and subnet mask > Next > Finish.
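The same VMkernel plumbing can be scripted through the vSphere API. A minimal pyVmomi sketch, assuming vmnic1 as the uplink and placeholder addressing:

import ssl
from pyVim.connect import SmartConnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host='esx01.domain.local', user='root',
                  pwd='password', sslContext=ctx)
content = si.RetrieveContent()
host = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.HostSystem], True).view[0]
netsys = host.configManager.networkSystem

# New vSwitch uplinked to vmnic1 (assumed NIC).
netsys.AddVirtualSwitch(
    vswitchName='vSwitch1',
    spec=vim.host.VirtualSwitch.Specification(
        numPorts=64,
        bridge=vim.host.VirtualSwitch.BondBridge(nicDevice=['vmnic1'])))

# Port group labelled NFS; set vlanId if applicable.
netsys.AddPortGroup(portgrp=vim.host.PortGroup.Specification(
    name='NFS', vlanId=0, vswitchName='vSwitch1',
    policy=vim.host.NetworkPolicy()))

# VMkernel interface with a static IP on the NFS network.
ip = vim.host.IpConfig(dhcp=False, ipAddress='10.0.0.50',
                       subnetMask='255.255.255.0')
netsys.AddVirtualNic(portgroup='NFS',
                     nic=vim.host.VirtualNic.Specification(ip=ip))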
Create an NFS Datastore
NFS datastores are created via the vSphere client.
Configuration tab > Storage > Add Storage > NFS > Next > Server: DNS name or IP address e.g. nfsdata.domain.local > Folder: NFS share path e.g. /mnt/NFSData > Datastore Name: something relevant e.g. VirtualMachines > Next > Finish.
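Scripted, the same mount is a single CreateNasDatastore call on the host's datastore system. A minimal pyVmomi sketch reusing the example values above (connection details are placeholders):

import ssl
from pyVim.connect import SmartConnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host='esx01.domain.local', user='root',
                  pwd='password', sslContext=ctx)
content = si.RetrieveContent()
host = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.HostSystem], True).view[0]

# NFS mount specification mirroring the wizard fields.
spec = vim.host.NasVolume.Specification(
    remoteHost='nfsdata.domain.local',   # Server
    remotePath='/mnt/NFSData',           # Folder
    localPath='VirtualMachines',         # Datastore Name
    accessMode='readWrite')
datastore = host.configManager.datastoreSystem.CreateNasDatastore(spec)
print(datastore.summary.name)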
VMware supports 8 NFS datastores per host by default; this can be increased to 64 via the advanced settings (NFS.MaxVolumes, up to 64; this change also requires adjusting Net.TcpipHeapSize and Net.TcpipHeapMax as per the vendor's best practice).
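These advanced settings live in the host's OptionManager and can be set in the same pyVmomi style; the heap values below are illustrative only, so take the actual figures from your storage vendor's best-practice guide:

import ssl
from pyVim.connect import SmartConnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host='esx01.domain.local', user='root',
                  pwd='password', sslContext=ctx)
content = si.RetrieveContent()
host = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.HostSystem], True).view[0]

host.configManager.advancedOption.UpdateOptions(changedValue=[
    vim.option.OptionValue(key='NFS.MaxVolumes', value=64),
    # Illustrative heap values only; follow the vendor's guidance.
    vim.option.OptionValue(key='Net.TcpipHeapSize', value=30),
    vim.option.OptionValue(key='Net.TcpipHeapMax', value=120),
])

Note that the TCP/IP heap changes only take effect after a host reboot.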
Virtual disks created on an NFS datastore are thin provisioned by default.
NFS datastores do not support VMFS (the virtual machine file system), Microsoft clustering (MSCS) or raw device mappings (RDMs).