VMware SRM Error: “Failed to create snapshots of replica devices”

Today I encountered VMware SRM error “Failed to create snapshots of replica device Cause: SRA command ‘testFailoverStart’ failed. Storage port not found Either Storage port information provided in NFS list is incorrect else Verify the “isPv4” option in ontap_config file matches the ipaddress in NFS field.”

Found the solution in kb article 000016026: https://kb.netapp.com/support/s/article/ka11A0000001BN5QAM/sra-command-testfailoverstart-failed-storage-port-not-found-either-storage-port-information-provided-in-nfs-list-is-incorrect?language=en_US

Looks to be caused by the firewall-policy of the SVM data LIFs. These were set to “mgmt”, and LIFs with that policy are not detected by the SRA, according to the kb article.

To change the firewall-policy from “mgmt” to “data”:
net int modify -vserver [vserver_name] -lif [data_lif_name] -firewall-policy data  
To list LIFs by firewall-policy:
net int show -fields firewall-policy  
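If multiple data LIFs need the same change, the ONTAP CLI accepts wildcard patterns in the -lif field, so (assuming your data LIFs share a naming pattern such as data_lif*) one command can cover them all:

net int modify -vserver [vserver_name] -lif data_lif* -firewall-policy data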

The article also advises checking the ontap_config file on the SRM server to “ensure that the NFS IP address on the controller is correct and the IP address format mentioned in the NFS address field matches the value set for the isipv4 option in the ontap_config file.”

By default, the configuration file is located at install_dir\Program Files\VMware\VMware vCenter Site Recovery Manager\storage\sra\ONTAP\ontap_config.txt. You’ll look for the “isPv4” option.

Add NFS datastore using VMware PowerCLI

To add an NFS datastore to a single VMHost:
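(A minimal sketch; the host name, datastore name, NFS server, and export path below are placeholders.)

Get-VMHost esxihostname | New-Datastore -Nfs -Name datastorename -NfsHost nfsserver -Path /vol/volumename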


To add an NFS datastore to all VMHosts in a Cluster:
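(Again a sketch with placeholder names; Get-Cluster pipes every host in the cluster into New-Datastore.)

Get-Cluster clustername | Get-VMHost | New-Datastore -Nfs -Name datastorename -NfsHost nfsserver -Path /vol/volumename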


The mount request was denied by the NFS server | VMware | Troubleshooting netgroups

Call “HostDatastoreSystem.CreateNasDatastore” for object “datastoreSystem-663108” on vCenter Server failed.
NFS mount failed: The mount request was denied by the NFS server. Check that the export exists and that the client is permitted to mount it.
An unknown error has occurred.

VMware message encountered while trying to mount a volume from newly added nodes 13 and 14 in a 14-node cluster running 8.3.2P2.

Troubleshooting steps:

  • Checked namespace export policy – looked good; identical to dozens of other volumes backed by node1 through node12 already mounted as VMware datastores.
  • Tested mounting on different vSphere hosts – same message.
  • As the export policy uses local netgroups, I checked the local netgroups definitions file:
vserver services name-service netgroup file show  
  • File looked fine.
  • Checked the status of netgroups definitions across all nodes in the cluster – found the culprit:
::> set -privilege advanced

Warning: These advanced commands are potentially dangerous; use them only when directed to do so by NetApp personnel.  
Do you want to continue? {y|n}: y

::*> vserver services name-service netgroup status
Vserver   Node            Load Time           Hash Value  
--------- --------------- ------------------- -------------------
                          -                   -

                          -                   -

The vserver services name-service netgroup status command lets you verify that netgroup definitions are consistent across all nodes backing an SVM into which netgroup definitions have been loaded.

The command displays the following information:

  • SVM name
  • Node name
  • Load time for netgroup definitions
  • Hash value of the netgroup definitions

Node13 and node14 showed no load time or hash value, as loading the netgroup file is a manual step and doesn’t happen automatically when a node is added. No worries. To resolve, I reloaded the netgroup file onto the SVM:

::> vserver services name-service netgroup load -vserver vservername -source http://ipaddress/netgroup.txt

This loaded the netgroup file onto node13 and node14, which took about 5 minutes to build the netgroup.byhost map. (I tried mounting on vSphere host immediately after netgroup load, and got the same error message. Waited 5 minutes, and it mounted fine.)
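To confirm the reload took on every node, re-run the status check from above; node13 and node14 should now show a load time and a hash value matching the other nodes:

::*> vserver services name-service netgroup status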

More on displaying the local netgroup definitions status here:

Check if a datastore is SIOC Enabled | VMware

Researching unmanaged workloads today… here’s a quick way to check whether your datastore is managed by Storage I/O Control (SIOC):

Get-Datastore | Select-Object Name, Type, StorageIOControlEnabled  
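A small variation on the same cmdlet, if you only want the datastores that have SIOC enabled:

Get-Datastore | Where-Object {$_.StorageIOControlEnabled} | Select-Object Name, Type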

If you want to check VMhosts for SIOC events, like “An unmanaged I/O workload is detected on a SIOC-enabled datastore”:

Get-VMHost | Get-VIEvent | Where {$_.fullformattedmessage -like '*SIOC*'} | Select FullFormattedMessage, CreatedTime  

VMware Error: “msg.hbacommon.outofspace: There is no more space for virtual disk”

msg.hbacommon.outofspace:There is no more space for virtual disk .vmdk. You may be able to continue this session by freeing disk space on the relevant partition, and clicking Retry. Otherwise, click Abort to terminate this session.

I haven’t seen this message in a while (knock on wood), but if it does occur, you will need to (1) add space to the datastore and (2) answer the message prompt. If the datastore housed a number of VMs, you could have your hands full clicking through VMs and answering message prompts.

Or you could use PowerCLI.

To target all VMs:
Get-VM | Get-VMQuestion | Answer-VMQuestion -DefaultOption  
To target all VMs on a specific datastore:
Get-Datastore datastorename | Get-VM | Get-VMQuestion | Answer-VMQuestion -DefaultOption  

To avoid being prompted to confirm the answer, you can add “-Confirm:$false” to each command.
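For example, the datastore-targeted command with the confirmation switch added:

Get-Datastore datastorename | Get-VM | Get-VMQuestion | Answer-VMQuestion -DefaultOption -Confirm:$false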

More information on the Answer-VMQuestion cmdlet (actually named Set-VMQuestion, but I prefer using its alias) is located here: https://www.vmware.com/support/developer/windowstoolkit/wintk40u1/html/Set-VMQuestion.html