How To Add A New Load-Sharing Mirror on NetApp Cluster Mode using PowerShell

To protect the Storage Virtual Machine (SVM) namespace root volume, you can create a load-sharing mirror volume on every node in the cluster, including the node on which the root volume is located. You then create a mirror relationship to each load-sharing mirror volume and initialize the set of load-sharing mirror volumes. As you add new nodes to the cluster, you may choose to add a new load-sharing mirror to the existing set of load-sharing mirrors.
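
Before creating anything, it helps to see which nodes already host an LS mirror of the root volume. Below is a minimal PowerShell sketch using the NetApp PowerShell Toolkit; it assumes the classic DataONTAP module and reuses the cluster, SVM, and volume names from the examples that follow (ntap-clus01, vs01, root_vs01).

# Minimal sketch: connect, then list nodes and existing LS mirrors of the root volume
Import-Module DataONTAP
Connect-NcController ntap-clus01

# Nodes currently in the cluster
Get-NcNode

# Existing load-sharing mirrors of the SVM root volume
Get-NcSnapmirror |
    Where-Object { $_.RelationshipType -eq 'load_sharing' -and $_.SourceLocation -like '*root_vs01' } |
    Select-Object SourceLocation, DestinationLocation, MirrorState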

New-NcVol
PS>New-NcVol -Name root_vs01_m3  -Aggregate sas_aggr1 -JunctionPath $null -Type dp -Size 1g -VserverContext vs01

Name          State       TotalSize  Used  Available Dedupe Aggregate   Vserver  
----          -----       ---------  ----  --------- ------ ---------   -------
root_vs01_m3  online      1.0 GB     0%    1023.9 MB False  sas_aggr1   vs01

 

Create the destination load-sharing mirror volume by using the New-NcVol cmdlet with the -Type parameter set to DP (data-protection volume). The destination volume you create must be the same size as, or larger than, the SVM root volume.
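
If you are scripting this, a quick size check helps avoid creating a mirror that is too small. A hedged sketch, reusing the volume and SVM names from the example above (TotalSize is assumed to be reported in bytes):

# Hedged sketch: confirm the source root volume size before sizing the new LS mirror
$root = Get-NcVol | Where-Object { $_.Vserver -eq 'vs01' -and $_.Name -eq 'root_vs01' }
if ($root.TotalSize -gt 1gb) {
    Write-Warning ("root_vs01 is {0} bytes; make the new LS mirror at least that large" -f $root.TotalSize)
}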

New-NcSnapmirror
PS>New-NcSnapmirror  //vs01/root_vs01_m3 //vs01/root_vs01 -Type ls 

SourceLocation               DestinationLocation             Status       MirrorState  
--------------               -------------------             ------       -----------
ntap-clus01://vs01/root_vs01 ntap-clus01://vs01/root_vs01_m3 idle         uninitialized

 

Use the New-NcSnapmirror cmdlet with the -Type parameter set to LS to create a load-sharing mirror relationship between the source volume and a destination volume.

Note: The -Schedule parameter is not required, because Data ONTAP automatically applies the same schedule to the entire set of load-sharing mirrors that share the same source volume.
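
If you are adding more than one mirror at a time, the relationship creation can be wrapped in a loop. This is only a sketch reusing the destination-then-source positional syntax from the example above; the extra volume name root_vs01_m4 is hypothetical:

# Hedged sketch: create an LS relationship for each new mirror volume
foreach ($mirror in 'root_vs01_m3', 'root_vs01_m4') {
    New-NcSnapmirror "//vs01/$mirror" "//vs01/root_vs01" -Type ls
}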

Get-NcSnapmirror
PS>Get-NcSnapmirror | Select sourcelocation, destinationlocation, relationshipstatus, relationshipType, ishealthy, schedule | ft -a

SourceLocation               DestinationLocation              RelationshipStatus RelationshipType IsHealthy Schedule  
--------------               -------------------              ------------------ ---------------- --------- --------
ntap-clus01://vs01/root_vs01 ntap-clus01://vs01/root_vs01_m1  idle               load_sharing          True 5min  
ntap-clus01://vs01/root_vs01 ntap-clus01://vs01/root_vs01_m2  idle               load_sharing          True 5min  
ntap-clus01://vs01/root_vs01 ntap-clus01://vs01/root_vs01_m3                     load_sharing               5min

 

Use the Get-NcSnapmirror cmdlet, selecting the properties shown above, to confirm that the relationship has been created and to check its health.

Invoke-NcSnapmirrorInitialize
PS>  Invoke-NcSnapmirrorInitialize -DestinationVolume root_vs01_m3 -DestinationVserver vs01 | Get-NcJob

JobId JobName                        JobPriority JobState   JobVserver           JobCompletion  
----- -------                        ----------- --------   ----------           -------------
70    SnapMirror initialize          exclusive   queued     vs01


PS>  Get-NcJob 70

JobId JobName                        JobPriority JobState   JobVserver           JobCompletion  
----- -------                        ----------- --------   ----------           -------------
70    SnapMirror initialize          exclusive   success    vs01       SnapMirror: done  

 

Use the Invoke-NcSnapmirrorInitialize cmdlet, specifying the -DestinationVolume and -DestinationVserver parameters, to perform the initial update of the SnapMirror relationship.

Note: Do not use the Invoke-NcSnapmirrorLsInitialize cmdlet here; it initializes the volumes for an entire set of load-sharing mirrors, not an individual volume.
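
Because initialization runs as a job, a script can poll the job until it finishes rather than checking Get-NcJob by hand. A hedged sketch using the same cmdlets as above (the 'failure' state name is an assumption):

# Hedged sketch: start the initialize and wait for the job to complete
$job = Invoke-NcSnapmirrorInitialize -DestinationVolume root_vs01_m3 -DestinationVserver vs01 | Get-NcJob
while ($job.JobState -notin 'success', 'failure') {
    Start-Sleep -Seconds 5
    $job = Get-NcJob $job.JobId
}
$job | Select-Object JobId, JobState, JobCompletion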

Invoke-NcSnapmirrorLsUpdate
PS>  Invoke-NcSnapmirrorLsUpdate ntap-clus01://vs01/root_vs01 | Get-NcJob

JobId JobName                        JobPriority JobState   JobVserver           JobCompletion  
----- -------                        ----------- --------   ----------           -------------
87    SnapMirror Loadshare update    exclusive   queued     vs01

PS>  Get-NcJob 87

JobId JobName                        JobPriority JobState   JobVserver           JobCompletion  
----- -------                        ----------- --------   ----------           -------------
87    SnapMirror Loadshare update    exclusive   success    vs01                 SnapMirror: done  

 

Upon job completion, update the LS set by using the Invoke-NcSnapmirrorLsUpdate cmdlet, specifying the source endpoint. The cmdlet brings every destination volume in the set of load-sharing mirrors up to date with the source volume; a separate SnapMirror transfer is performed from the source volume to each destination volume in the set.

Use the Get-NcSnapmirror cmdlet once more to confirm the health of the relationship.

PS>  Get-NcSnapmirror | Select sourcelocation, destinationlocation, relationshipstatus, relationshipType, ishealthy, schedule | ft -a

SourceLocation               DestinationLocation              RelationshipStatus RelationshipType IsHealthy Schedule  
--------------               -------------------              ------------------ ---------------- --------- --------
ntap-clus01://vs01/root_vs01 ntap-clus01://vs01/root_vs01_m1  idle               load_sharing          True 5min  
ntap-clus01://vs01/root_vs01 ntap-clus01://vs01/root_vs01_m2  idle               load_sharing          True 5min  
ntap-clus01://vs01/root_vs01 ntap-clus01://vs01/root_vs01_m3  idle               load_sharing          True 5min  

 

Further reading:

  1. Adding a load-sharing mirror to a set of load-sharing mirrors
  2. Initializing an individual load-sharing mirror

Missing Security Tab On New NetApp CIFS Share

Caught this after upgrading from 8.3.2P2 to 8.3.2P12 (Cluster-Mode). We created a new CIFS share and found we could not apply NTFS ACL permissions to it because the Security tab was missing.

Old shares looked and operated fine.

It turned out the culprit was quiesced LS mirrors. Here’s how to fix it:

snapmirror show

From here we can confirm that our LS mirrors are in a quiesced state. As per NetApp doc “FA266”:

To a cluster, a volume is a folder. When you create and mount a volume to /, it appears as a folder to the cluster and clients.

When a read or write request comes through that path into the N-blade of a node, the N-blade first determines whether there are any LS mirrors of the volume that it needs to access. If there are no LS mirrors of that volume, the read request is routed to the R/W volume. If there are LS mirrors of the volume, preference is given to an LS mirror on the same node as the N-blade that fielded the request. If there is no LS mirror on that node, an up-to-date LS mirror from another node is chosen. This is why newly created volumes are invisible: before the LS mirror update, all requests go to the LS mirror destination volume, which is read-only.

Additionally, if we browse the admin share c$, we do not see our new share.

snapmirror resume

Because the LS mirror set in place was quiesced, metadata for the new share was not propagated to the root volume. Once the mirror set is resumed and updated, the new share’s metadata replicates and access is restored.

After resuming the mirror, you can either wait for it to update and sync on its set schedule, or you can update the LS set manually by using “snapmirror update-ls-set”.

snapmirror update-ls-set

We can confirm that the LS mirror set is now in sync because the Security tab appears on the new share.

And it now appears when we browse the admin share.

One last thing to confirm is that the LS set is being updated on a schedule.

snapmirror show -fields schedule
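
The same check and fix can be driven from the NetApp PowerShell Toolkit. This is a hedged sketch only: the quiesced status string is an assumption, the Invoke-NcSsh calls simply pass the clustershell commands shown above, and the vs01:root_vs01 paths are placeholders for your own SVM and volumes:

# Hedged sketch: find quiesced LS mirrors, resume them, then update the LS set
Get-NcSnapmirror | Where-Object {
    $_.RelationshipType -eq 'load_sharing' -and $_.RelationshipStatus -eq 'quiesced'
} | Select-Object SourceLocation, DestinationLocation, RelationshipStatus

Invoke-NcSsh "snapmirror resume -destination-path vs01:root_vs01_m1"
Invoke-NcSsh "snapmirror update-ls-set -source-path vs01:root_vs01"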

Further reading:

How do LS mirrors affect NAS access when new volumes are added?

Reassigning disk ownership in NetApp Cluster Mode

I was working on a multiple-node cluster this morning and mistakenly assigned ownership of two shelves of disks to controller nodes that did not own the stack.

It resulted in 48 disks having an “unknown” state.


Trying to remove the software ownership information in order to reassign disk ownership will fail, and in this case it did.


Here’s how to fix it:

1. If disk ownership automatic assignment is on, turn it off
::> storage disk option modify -node * -autoassign off
2. Start a nodeshell session on the node where the stack resides
::> node run [node_name]
3. Make the disk unowned by using the “-s unowned” option
::> disk assign [disk_name] -s unowned -f 

Exit the nodeshell and return to the clustershell by using the “exit” command.

4. Display all unowned disks
::> storage disk show -container-type unassigned
5. Assign disk
::> storage disk assign -disk [disk_name] -owner [owner_name]
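
If you prefer to script the reassignment, the steps above can also be issued with Invoke-NcSsh from the NetApp PowerShell Toolkit. This is a hedged sketch only; the disk, node, and owner names are placeholders, and the nodeshell step is passed through "system node run" instead of an interactive session:

# Hedged sketch: disable autoassign, release the disk from the nodeshell, then reassign it
Invoke-NcSsh 'storage disk option modify -node * -autoassign off'
Invoke-NcSsh 'system node run -node node01 -command "disk assign 0a.10.5 -s unowned -f"'
Invoke-NcSsh 'storage disk show -container-type unassigned'
Invoke-NcSsh 'storage disk assign -disk 1.10.5 -owner node01'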

NFS users have issues when they belong to more than 16 groups

Recently ran into this issue, as described in NetApp KB000010630:

  • Network File System (NFS) users have access problems when they belong to more than 16 groups.
  • Users have access problems if they are in 17 to 20 groups.

As noted in the KB:

Although the filer currently supports up to 32 UNIX/NFS groups, some NFS clients only support 16 groups, which means an NFS user can only belong to 16 groups while using NFS… While there are hacks for allowing a UNIX user to be part of more than 16 netgroups, per RFC 5531 this is a set limit and cannot be modified. So it is likely that a client vendor would not support changes to the client allowing more than 16 netgroups. ONTAP limits it to 16 as well, following the RFC 5531 standard.

As a workaround, Support recommended setting up and configuring LDAP for clustered Data ONTAP 8.x and using the “extended-groups-limit” and “auth-sys-extended-groups” parameters to extend the maximum number of group IDs. With auth-sys-extended-groups enabled, ONTAP looks up the user’s full group membership from the name service rather than relying on the truncated group list in the AUTH_SYS credential, which is why LDAP is configured first.

1. Gather the schema information (read-only by default)

schema show -vserver [vservername] 
(vserver services name-service ldap client schema show)
Vserver Schema Template Comment
------- --------------- -------------------------------------------------------
[vservername] AD-IDMU Schema based on Active Directory Identity Management for UNIX (read-only)
[vservername] AD-SFU Schema based on Active Directory Services for UNIX (read-only)
[vservername] AD-SFU-Deprecated Schema based on Active Directory Services for UNIX (read-only)
[vservername] RFC-2307 Schema based on RFC 2307 (read-only)
4 entries were displayed.
2. Create a new schema by copying RFC-2307 to a new schema name

::*> set -privilege advanced
::*> ldap client schema copy -schema RFC-2307 -new-schema-name NEW-RFC-2307 -vserver [vservername] 
3. Modify the schema as necessary for Active Directory

::*> vserver services ldap client schema modify -schema NEW-RFC-2307 -comment "NEW-RFC-2307" -gecos-attribute name -home-directory-attribute unixHomeDirectory -uid-attribute sAMAccountName -user-password-attribute unixUserPassword -posix-account-object-class User -posix-group-object-class Group -member-uid-attribute memberUid -enable-rfc2307bis true -group-of-unique-names-object-class group -unique-member-attribute member -vserver [vservername]
4. Verify the schema

::*> ldap client schema show -schema NEW-RFC-2307 -vserver [vservername]      

                                           Vserver: [vservername]
                                   Schema Template: NEW-RFC-2307
                                           Comment: NEW-RFC-2307
                RFC 2307 posixAccount Object Class: User
                  RFC 2307 posixGroup Object Class: Group
                 RFC 2307 nisNetgroup Object Class: nisNetgroup
                            RFC 2307 uid Attribute: sAMAccountName
                      RFC 2307 uidNumber Attribute: uidNumber
                      RFC 2307 gidNumber Attribute: gidNumber
                RFC 2307 cn (for Groups) Attribute: cn
             RFC 2307 cn (for Netgroups) Attribute: cn
                   RFC 2307 userPassword Attribute: unixUserPassword
                          RFC 2307 gecos Attribute: name
                  RFC 2307 homeDirectory Attribute: unixHomeDirectory
                     RFC 2307 loginShell Attribute: loginShell
                      RFC 2307 memberUid Attribute: memberUid
              RFC 2307 memberNisNetgroup Attribute: memberNisNetgroup
              RFC 2307 nisNetgroupTriple Attribute: nisNetgroupTriple
              Enable Support for Draft RFC 2307bis: true
       RFC 2307bis groupOfUniqueNames Object Class: group
                RFC 2307bis uniqueMember Attribute: member
Data ONTAP Name Mapping windowsToUnix Object Class: posixAccount
  Data ONTAP Name Mapping windowsAccount Attribute: windowsAccount
   Data ONTAP Name Mapping windowsToUnix Attribute: windowsAccount
   No Domain Prefix for windowsToUnix Name Mapping: false
                               Vserver Owns Schema: true
 Maximum groups supported when RFC 2307bis enabled: 256
                   RFC 2307 nisObject Object Class: nisObject
                     RFC 2307 nisMapName Attribute: nisMapName
                    RFC 2307 nisMapEntry Attribute: nisMapEntry
5a. Create the ldap client config (using a bind account), or…

Use this option if the SVM is not joined to AD and you have no intention of serving out CIFS.


::*> vserver services ldap client create -client-config ldap1 -ad-domain [domainname] -preferred-ad-servers [ipaddress] -schema NEW-RFC-2307 -port 389 -query-timeout 10 -min-bind-level sasl -base-dn [basedn] -base-scope subtree -user-scope subtree -group-scope subtree -netgroup-scope subtree -bind-dn [binddn] -bind-password [password] -user-dn [userdn] -group-dn [groupdn] -bind-as-cifs-server false -vserver [vservername]

::*> vserver services name-service ldap client show -instance -vserver [vservername]

                                  Vserver: [vservername]
                Client Configuration Name: ldap1
                         LDAP Server List: -
                  Active Directory Domain: [domainname]
       Preferred Active Directory Servers: [ipaddress]
Bind Using the Vserver's CIFS Credentials: false
                          Schema Template: NEW-RFC-2307
                         LDAP Server Port: 389
                      Query Timeout (sec): 10
        Minimum Bind Authentication Level: sasl
                           Bind DN (User): [binddn]
                                  Base DN: [basedn]
                        Base Search Scope: subtree
                                  User DN: [userdn]
                        User Search Scope: subtree
                                 Group DN: [groupdn]
                       Group Search Scope: subtree
                              Netgroup DN: -
                    Netgroup Search Scope: subtree
               Vserver Owns Configuration: true
      Use start-tls Over LDAP Connections: false
(DEPRECATED) Allow SSL for the TLS Handshake Protocol: false
           Enable Netgroup-By-Host Lookup: false
                      Netgroup-By-Host DN: -
                   Netgroup-By-Host Scope: subtree
5b. Create the ldap client config (binding as cifs server)

Use this option if the SVM is already joined to AD.


::*> vserver services ldap client create -client-config ldap1 -ad-domain [domainname] -preferred-ad-servers [ipaddress] -schema NEW-RFC-2307 -port 389 -query-timeout 10 -min-bind-level sasl -base-dn [basedn] -base-scope subtree -user-scope subtree -group-scope subtree -netgroup-scope subtree -bind-dn [binddn] -bind-password [password] -user-dn [userdn] -group-dn [groupdn] -bind-as-cifs-server true -vserver [vservername]

::*> vserver services name-service ldap client show -instance -vserver [vservername]

                                  Vserver: [vservername]
                Client Configuration Name: ldap1
                         LDAP Server List: -
                  Active Directory Domain: [domainname]
       Preferred Active Directory Servers: [ipaddress]
Bind Using the Vserver's CIFS Credentials: true
                          Schema Template: NEW-RFC-2307
                         LDAP Server Port: 389
                      Query Timeout (sec): 10
        Minimum Bind Authentication Level: sasl
                           Bind DN (User): [binddn]
                                  Base DN: [basedn]
                        Base Search Scope: subtree
                                  User DN: [userdn]
                        User Search Scope: subtree
                                 Group DN: [groupdn]
                       Group Search Scope: subtree
                              Netgroup DN: -
                    Netgroup Search Scope: subtree
               Vserver Owns Configuration: true
      Use start-tls Over LDAP Connections: false
(DEPRECATED) Allow SSL for the TLS Handshake Protocol: false
           Enable Netgroup-By-Host Lookup: false
                      Netgroup-By-Host DN: -
                   Netgroup-By-Host Scope: subtree
6. Configure the SVM to use the new LDAP client

::*> vserver services name-service ldap create -client-config ldap1 -client-enabled true -vserver [vservername]

::> vserver services name-service ldap show
               Client        Client
Vserver        Configuration Enabled
-------------- ------------- -------
[vservername] 
               ldap1         true
7. Configure the SVM to use LDAP for name service lookups

::*> vserver services name-service ns-switch show -vserver [vservername]
                               Source
Vserver         Database       Order
--------------- ------------   ---------
[vservername] hosts         files,
                               dns
[vservername] group         files
[vservername] passwd        files
[vservername] netgroup      files
[vservername] namemap       files
5 entries were displayed.

::*> vserver services name-service ns-switch modify -database passwd files,ldap -vserver [vservername]

::*> vserver services name-service ns-switch modify -database group files,ldap -vserver [vservername]

::*> vserver services name-service ns-switch modify -database namemap files,ldap -vserver [vservername]

::> vserver services name-service ns-switch show -vserver [vservername]
                               Source
Vserver         Database       Order
--------------- ------------   ---------
[vservername] hosts         files,
                               dns
[vservername] group         files,
                               ldap
[vservername] passwd        files,
                               ldap
[vservername] netgroup      files
[vservername] namemap       files,
                               ldap
5 entries were displayed.

::>
8. Configure the number of group IDs allowed for NFS users

By default, Data ONTAP supports up to 32 group IDs when handling NFS user credentials using Kerberos (RPCSEC_GSS) authentication. When using AUTH_SYS authentication, the default maximum number of group IDs is 16, as defined in RFC 5531. You can increase the maximum up to 1,024 if you have users who are members of more than the default number of groups.


::*> vserver nfs modify -auth-sys-extended-groups enabled -vserver [vservername] 

::*> vserver nfs modify -extended-groups-limit 256 -vserver [vservername]

::*> vserver nfs show -fields auth-sys-extended-groups,extended-groups-limit -vserver [vservername] 
vserver          auth-sys-extended-groups extended-groups-limit 
---------------- ------------------------ --------------------- 
[vservername] enabled                  256    
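
If you manage several SVMs, the same change can be pushed from PowerShell. This is a hedged sketch that simply wraps the clustershell commands above with Invoke-NcSsh from the NetApp PowerShell Toolkit; the SVM names are placeholders:

# Hedged sketch: enable extended groups on a list of SVMs via the clustershell
foreach ($svm in 'vs01', 'vs02') {
    Invoke-NcSsh "vserver nfs modify -vserver $svm -auth-sys-extended-groups enabled -extended-groups-limit 256"
    Invoke-NcSsh "vserver nfs show -vserver $svm -fields auth-sys-extended-groups,extended-groups-limit"
}
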
9. Test the lookup

Example results for successful gid 308 lookup:


::> set diag

Warning: These diagnostic commands are for use by NetApp personnel only.
Do you want to continue? {y|n}: y

::*> diag secd authentication translate -node [nodename] -gid 308 -vserver [vservername] 
[AD group object]

Example results for failed uid 308 lookup:


::> set diag

Warning: These diagnostic commands are for use by NetApp personnel only.
Do you want to continue? {y|n}: y

::*> diag secd authentication translate -node [nodename] -uid 308 -vserver [vservername]

Vserver: [vservername] (internal ID: 24)

Error: Acquire UNIX credentials procedure failed
  [  1 ms] Entry for user-id: 308 not found in the current source:
           FILES. Ignoring and trying next available source
  [     2] Using a cached connection to [domainserver]
 *[     3] FAILURE: User ID '308' not found in UNIX authorization
 *         source LDAP. 
  [     4] Entry for user-id: 308 not found in the current source:
           LDAP. Entry for user-id: 308 not found in any of the
           available sources
  [     4] Unable to retrieve UNIX username for UID 308

Error: command failed: Failed to resolve User ID '308' to a user name. Reason: "SecD Error: object not found". 

Example results for successful name mapping:


::*> diag secd name-mapping show -node [nodename] -vserver [vservername] -direction win-unix -name [domaindomainusername]

Warning: Mapping of Data ONTAP "admin" users to UNIX user "root" is enabled, but the following information does not reflect this mapping.
Do you want to continue? {y|n}: y

[domaindomainusername] maps to [username]     

Further reading:

Secure Unified Authentication for NFS Kerberos, NFSv4, and LDAP in ONTAP

Failed to change NetApp volume to data-protection (volume busy)

This relationship had been intentionally broken for some testing on the destination volume, and when a resync was issued, it failed because the volume was busy.

                            Healthy: false
                   Unhealthy Reason: Scheduled update failed to start. (Destination volume must be a data-protection volume.)
           Constituent Relationship: false
            Destination Volume Node: 
                    Relationship ID: aa9b0b54-64d9-11e5-be3f-00a0984ad3aa
               Current Operation ID: 1bed480d-1554-11e7-aa85-00a098a230de
                      Transfer Type: resync
                     Transfer Error: -
                   Current Throttle: 103079214
          Current Transfer Priority: normal
                 Last Transfer Type: resync
                Last Transfer Error: Failed to change the volume to data-protection. (Volume busy)

To check the snapshots on the volume for busy status and dependency:
snapshot show -vserver 'vserver_name' -volume 'volume_name' -fields busy,owners

In this case, a running NDMP backup session was preventing the resync.

To list NDMP backup sessions:
system services ndmp status  

The system services ndmp status command lists all the NDMP sessions in the cluster. By default, it lists details about the active sessions.

To list details for an NDMP backup session:
system services ndmp status -node 'node_name' -session-id 'session-id'  

From here you can confirm this is the NDMP session you need to kill by referencing the ‘Data Path’ field. This should be the path to the volume that is failing the resync.

To kill the NDMP backup session:
system services ndmp kill 'session-id' -node 'node_name'  

The system services ndmp kill command is used to terminate a specific NDMP session on a particular node in the cluster. This command is not supported on Infinite Volumes.

After clearing the application dependency on the busy snapshot, I was able to issue the resync successfully, as per normal operations.
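
For reference, the same checks can be driven from the NetApp PowerShell Toolkit. This is only a hedged sketch: each Invoke-NcSsh call wraps one of the clustershell commands above, and the vserver, volume, node, and session values are placeholders:

# Hedged sketch: find busy snapshots, identify and kill the blocking NDMP session, then resync
Invoke-NcSsh 'snapshot show -vserver vs01 -volume vol_dest -fields busy,owners'
Invoke-NcSsh 'system services ndmp status'
Invoke-NcSsh 'system services ndmp kill 1234 -node node01'
Invoke-NcSsh 'snapmirror resync -destination-path vs01:vol_dest'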