We have previously discussed the most common approach, VMFS; now we will consider the Raw Device Mapping (RDM), N-Port ID Virtualization (NPIV) and VMware Direct Path I/O concepts.
Raw Device Mapping
This option for virtual machine hard disks allows the guest to directly utilise a LUN provisioned on the storage array. It may not be the most common approach adopted in the datacenter, but it is beneficial in a number of use cases, which I have listed below:
- Applications that require hardware-specific SCSI commands, such as Microsoft Cluster Server, where the quorum disk needs to issue SCSI-3 commands.
- Configuring a virtual machine to use N-Port ID Virtualization (NPIV)
- Enabling a virtual machine to use storage array management software, for example for array-based snapshots.
- Physical to Virtual migration
- Specific I/O requirements
There are two compatibility modes for RDMs. The default is physical compatibility mode (rdmp), which allows SCSI commands to be passed directly from the guest OS to the hardware. The limitation of this mode is that VMware features requiring hypervisor support, such as snapshots, cloning and storage migration, are not available.
RDMs configured in virtual compatibility mode (rdm) pass only a subset of SCSI commands through the hypervisor between the guest OS and the hardware. Because hypervisor support is enabled in this mode, the limitations of physical compatibility mode no longer apply.
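To make the difference between the two modes concrete, below is a minimal sketch using pyVmomi (the Python SDK for the vSphere API) that builds the device change for attaching an RDM to an existing virtual machine. The device path, controller key and unit number are placeholder assumptions you would resolve against your own host and VM, and the exact spec details may vary between vSphere releases.

```python
from pyVmomi import vim

def build_rdm_disk_spec(device_path, controller_key, unit_number,
                        physical_mode=True):
    """Build a device-change spec that adds an RDM-backed virtual disk."""
    backing = vim.vm.device.VirtualDisk.RawDiskMappingVer1BackingInfo()
    backing.deviceName = device_path   # e.g. '/vmfs/devices/disks/naa.xxxx' (placeholder)
    backing.fileName = ''              # let vCenter create the mapping file in the VM folder
    # 'physicalMode' (rdmp) passes SCSI commands straight through to the LUN;
    # 'virtualMode' (rdm) virtualises the mapping and keeps snapshot support.
    backing.compatibilityMode = 'physicalMode' if physical_mode else 'virtualMode'
    if not physical_mode:
        backing.diskMode = 'persistent'

    disk = vim.vm.device.VirtualDisk()
    disk.backing = backing
    disk.controllerKey = controller_key   # key of an existing SCSI controller on the VM
    disk.unitNumber = unit_number

    change = vim.vm.device.VirtualDeviceSpec()
    change.operation = vim.vm.device.VirtualDeviceSpec.Operation.add
    change.fileOperation = vim.vm.device.VirtualDeviceSpec.FileOperation.create
    change.device = disk
    return change

# Usage (assuming 'vm' is a vim.VirtualMachine already retrieved via pyVmomi):
# spec = vim.vm.ConfigSpec(deviceChange=[build_rdm_disk_spec(
#     '/vmfs/devices/disks/naa.600000000000000000000001', 1000, 1)])
# vm.ReconfigVM_Task(spec=spec)
```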
What are the performance considerations when placing a virtual machine hard disk on VMFS, NFS or an RDM? The performance benefit of an RDM is that you can isolate a virtual machine hard disk to its own provisioned LUN; if the hard disk were located on a VMFS datastore, the available I/O operations would typically be shared between multiple virtual machines. If you have specific I/O requirements, creating an RDM isolates the I/O operations to that virtual machine hard disk.
However, the counter-argument is that you can easily achieve the same isolation by placing the virtual machine, or just the virtual machine hard disk, on a dedicated datastore provisioned from its own LUN. You can also, to an extent, use Storage I/O Control to provide QoS in terms of latency to the datastore, and in future releases the concept of VVOLs can be leveraged with your storage array to provide QoS. In my experience, the performance characteristics of the virtual machine have never been the driver for choosing an RDM (whilst a valid use case); rather, it has been to provide hardware-specific SCSI commands or to enable the use of storage array management software in a guest OS.
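For completeness, here is a hedged sketch of the Storage I/O Control side of that argument, again using pyVmomi: it enables SIOC on a datastore with a manual latency congestion threshold. The threshold value is a placeholder assumption and the spec property names may differ slightly between vSphere releases.

```python
from pyVmomi import vim

def enable_sioc(service_instance, datastore, congestion_threshold_ms=30):
    """Enable Storage I/O Control on a datastore with a latency threshold (ms)."""
    spec = vim.StorageResourceManager.IORMConfigSpec()
    spec.enabled = True
    spec.congestionThreshold = congestion_threshold_ms  # latency at which throttling begins
    srm = service_instance.content.storageResourceManager
    return srm.ConfigureDatastoreIORM_Task(datastore=datastore, spec=spec)

# Usage (assuming 'si' is a connected ServiceInstance and 'ds' a vim.Datastore):
# task = enable_sioc(si, ds, congestion_threshold_ms=30)
```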
N-Port ID Virtualization
N-Port ID Virtualization (NPIV) enables a virtual machine to be assigned its own addressable World Wide Port Name (WWPN) visible to the storage array. In normal operations, a virtual machine uses the World Wide Name (WWN) of the host system's physical HBAs. NPIV allows storage to be zoned directly to a virtual machine's unique WWN, for example to meet QoS or security requirements.
So what are the use cases for NPIV? It provides per-virtual-machine storage visibility, which can be leveraged by the storage array management software, and it can also provide the ability to exceed the configuration maximums of an ESXi host, which at the time of writing are 8 HBA adapters and 16 HBA ports.
In order to enable NPIV, the virtual machine must have an RDM created in physical compatibility mode, and both the HBA and the fabric switch must be NPIV aware.
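As a hedged sketch of the configuration step, the pyVmomi snippet below asks vCenter to generate NPIV WWNs for a virtual machine through its config spec. The WWN counts are placeholder assumptions, and the VM must already meet the RDM and fabric prerequisites described above.

```python
from pyVmomi import vim

def assign_npiv_wwns(vm, node_wwns=1, port_wwns=4):
    """Request vCenter-generated NPIV node/port WWNs for a virtual machine."""
    spec = vim.vm.ConfigSpec()
    spec.npivWorldWideNameOp = 'generate'   # have vCenter generate the WWNs
    spec.npivDesiredNodeWwns = node_wwns
    spec.npivDesiredPortWwns = port_wwns
    return vm.ReconfigVM_Task(spec=spec)

# Usage (assuming 'vm' is a vim.VirtualMachine with a physical-mode RDM attached):
# task = assign_npiv_wwns(vm)
```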
VMware Direct Path I/O
Finally, let's discuss VMware Direct Path I/O, which allows a virtual machine to take direct control of a physical adapter such as a NIC or HBA. To support VMware Direct Path I/O, the host system requires either Intel Virtualization Technology for Directed I/O (VT-d) or AMD I/O Virtualization (AMD-Vi, also known as IOMMU) to be enabled in the BIOS.
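As a rough way to check this from the vSphere API rather than the BIOS screen, the sketch below lists the PCI devices a host reports as passthrough capable; it assumes an existing pyVmomi connection and an already resolved vim.HostSystem object.

```python
from pyVmomi import vim

def list_passthrough_capable_devices(host):
    """Print PCI devices that the host reports as Direct Path I/O capable."""
    # Map PCI address -> device description from the host hardware inventory.
    pci_by_id = {dev.id: dev for dev in host.hardware.pciDevice}
    for info in host.config.pciPassthruInfo:
        if info.passthruCapable:
            dev = pci_by_id.get(info.id)
            name = dev.deviceName if dev else 'unknown device'
            print(f"{info.id}  {name}  enabled={info.passthruEnabled}")

# Usage (assuming 'host' is a vim.HostSystem retrieved via pyVmomi):
# list_passthrough_capable_devices(host)
```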
An example use case for VMware Direct Path I/O is performance: by giving a workload with very high packet rates direct control of a NIC, it is likely to achieve greater performance through the CPU savings that come from the direct access.
Also, by providing direct control to the virtual machine, you may access devices from the guest OS that may not yet be on the Hardware Compatibility Guide.
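To round this off, here is a hedged pyVmomi sketch of handing one of those passthrough-capable devices to a virtual machine. The device selection is a placeholder assumption (the first entry returned by the environment browser), the VM should be powered off, and field formatting may differ slightly between releases; note that Direct Path I/O also requires the VM's memory to be fully reserved.

```python
from pyVmomi import vim

def attach_first_passthrough_device(vm, host):
    """Attach the first passthrough-capable PCI device on the host to the VM."""
    # The environment browser lists PCI devices available for passthrough.
    config_target = vm.environmentBrowser.QueryConfigTarget(host=host)
    pci = config_target.pciPassthrough[0]      # placeholder: pick the first device

    backing = vim.vm.device.VirtualPCIPassthrough.DeviceBackingInfo(
        id=pci.pciDevice.id,                   # PCI address, e.g. '0000:0b:00.0'
        deviceId=format(pci.pciDevice.deviceId % 2**16, 'x'),
        systemId=pci.systemId,
        vendorId=pci.pciDevice.vendorId,
        deviceName=pci.pciDevice.deviceName)
    device = vim.vm.device.VirtualPCIPassthrough(backing=backing)

    change = vim.vm.device.VirtualDeviceSpec(
        operation=vim.vm.device.VirtualDeviceSpec.Operation.add,
        device=device)
    # Direct Path I/O requires the VM memory to be fully reserved.
    spec = vim.vm.ConfigSpec(
        deviceChange=[change],
        memoryReservationLockedToMax=True)
    return vm.ReconfigVM_Task(spec=spec)

# Usage (assuming 'vm' is powered off and 'host' is its vim.HostSystem):
# task = attach_first_passthrough_device(vm, host)
```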