Following a recent upgrade to vSphere 5, a single VMFS volume was not mounted on a number of hosts within one of the clusters. After a short period of investigation, the LUN was confirmed to be correctly mapped to the hosts and no configuration issues could be found. The VMFS volume was listed when selecting ‘Add Storage’ from the vSphere Client while managing the host from my vCenter installation; however, on adding the storage and selecting to keep the existing signature, the task would fail with the following error message:
“Cannot change the host configuration”
I then attempted to connect the existing VMFS volume and retain the existing disk signature by managing the ESXi host directly with the vSphere Client, where this operation succeeded. I was also able to add the VMFS volume on another ESXi host by running the following from the command line to obtain information about the VMFS volume that could not be mounted;
esxcli storage vmfs snapshot list
Make a note of either the Volume Name or the VMFS UUID and ensure the ‘Can mount’ value is set to ‘true’; the reasons for un-mountability and non-resignaturability should also be listed.
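To illustrate the step above, the output of the list command looks roughly like the following, and the UUID can be pulled out for the mount step with a one-line awk filter. The volume name and UUID here are invented for the example, not taken from my environment:

```shell
# Hypothetical output of 'esxcli storage vmfs snapshot list'; the volume
# name and UUID below are made up for illustration.
output='49d22e2e-996a0dea-b555-001f29595f9d
   Volume Name: VMFS_Datastore_1
   VMFS UUID: 49d22e2e-996a0dea-b555-001f29595f9d
   Can mount: true
   Reason for un-mountability:
   Can resignature: true
   Reason for non-resignaturability:
   Unresolved Extent Count: 1'

# Pull out the 'VMFS UUID' field so it can be passed to the mount command.
uuid=$(printf '%s\n' "$output" | awk -F': ' '/VMFS UUID/ {print $2}')
echo "$uuid"
```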
To mount the LUN so that it is persistent across reboots, run the following command;
esxcli storage vmfs snapshot mount -l <label>|-u <uuid>
Where label is the ‘Volume Name’ and uuid is the ‘VMFS UUID’ noted in the previous step.
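As a sketch of how the two forms of the command fit together, the tiny wrapper below (a hypothetical helper, not part of esxcli) simply builds the command string from either a label or a UUID, which is handy for double-checking what will be run before executing it on the host:

```shell
# Hypothetical dry-run helper: prints the esxcli mount command that would
# be run, given either -l <label> or -u <uuid>. Not part of esxcli itself.
mount_cmd() {
  case "$1" in
    -l) echo "esxcli storage vmfs snapshot mount -l '$2'" ;;
    -u) echo "esxcli storage vmfs snapshot mount -u '$2'" ;;
    *)  echo "usage: mount_cmd -l <label> | -u <uuid>" >&2; return 1 ;;
  esac
}

mount_cmd -l 'VMFS_Datastore_1'
# → esxcli storage vmfs snapshot mount -l 'VMFS_Datastore_1'
```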
For my specific issue I did not need to resignature the volume; however, the command to perform this action is as follows;
esxcli storage vmfs snapshot resignature -l <label>|-u <uuid>
From browsing the VMware KB, this issue may occur if:
- Multiple ESX/ESXi 4.x and 5.0 hosts are managed by the same vCenter Server and these hosts are in the same datacenter.
- A snapshot LUN containing a VMFS datastore is presented to all these ESX/ESXi hosts.
- One of these ESX/ESXi hosts has force mounted the VMFS datastore that resides on this snapshot LUN.
- A second ESX/ESXi host is attempting to do the same operation at the same time.

When one ESX/ESXi host force mounts a VMFS datastore residing on a LUN which has been detected as a snapshot, an object is added to the datacenter grouping in the vCenter database to represent that datastore. When the second ESX/ESXi host attempts to do the same operation on the same VMFS datastore, the operation fails because an object already exists within the same datacenter grouping in the vCenter database. Since an object already exists, vCenter Server does not allow mounting the datastore on any other ESX host residing in that same datacenter.
In this case the LUN was not a snapshot or replica, and further reading (http://bit.ly/IVVqY9) brought to my attention that VMFS volumes can be incorrectly recognised as snapshots. One of the first items to check is whether the LUN ID differs across hosts; this was not the case in my scenario. However, following the volume being mounted using esxcli and subsequent reboots, the volume is now persistent on all hosts in the cluster.
Below are two VMware KB articles I referenced in order to resolve my issue;
One thought on “Cannot change the host configuration when force mounting a VMFS volume”
We have four ESXi 5.0 hosts connected to a SAN, where we have two IBM v7000 storage arrays. There are primary LUNs replicating to secondaries on both of them. The questions are:
1 - Should we map both the primary and the secondary LUN to the hosts, or only the primary? The secondary is read-only, by the way.
2 - When one of the storage arrays goes down and the secondary becomes read-write, what should be the procedure to bring up the secondary LUN? Should we resignature or keep the signature?