FAQ / How to use an auto-snap service with a clustered volume

Currently, the auto-snap service is local to the node on which it was created, so when the volume fails over to the other node, snapshots are no longer taken. As a workaround, take the following steps:

Create the auto-snap service on the primary node (or whichever node the pool is imported on)

nmc@nextest1:/$ setup auto-snap vol02/ create 
Interval   : daily
Time       : 3am
Period     : 2
Keep value : 10
Recursive  : 1
Custom name: vol02-nextest1
Comment    : 
About to create a new auto-snap service for 'vol02'. Note that additional options to configure the service are available via command line - see help (-h) for more information. Proceed?  Yes

  1  show auto-snap ':vol02-nextest1-000' state and properties
     'show auto-snap :vol02-nextest1-000 -v'

  2  show auto-snap ':vol02-nextest1-000' log
     'show auto-snap :vol02-nextest1-000 log'

Press one of the highlighted keys to make a selection, or press any key to quit 

PROPERTY              VALUE                                                 
service             : auto-snap
instance            : vol02-nextest1-000
folder              : vol02
frequency           : every 2 days at 03:00
status              : online since 16:14:21
enabled             : true
state               : idle
keep value          : 10 snapshots
comment             : 
exclude             : 
last_replic_time    : 0
latest-suffix       : 
period              : 2
snap-recursive      : 1
time_started        : N/A
trace_level         : 1
uniqid              : 132a50ee934ab9175d59b52b443cf30c
log                 : /var/svc/log/system-filesystem-zfs-auto-snap:vol02-nextest1-000.log
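The settings above (Interval: daily, Period: 2, Time: 3am, Keep value: 10) produce the reported frequency "every 2 days at 03:00" with only the 10 newest snapshots retained. The following is a minimal sketch of that schedule and retention behavior, for illustration only; it is not NexentaStor's actual scheduler code.

```python
from datetime import datetime, timedelta

def snapshot_times(start, period_days=2, hour=3, count=3):
    """Yield the next `count` snapshot timestamps after `start`."""
    t = start.replace(hour=hour, minute=0, second=0, microsecond=0)
    if t <= start:          # 03:00 already passed today; start tomorrow
        t += timedelta(days=1)
    for _ in range(count):
        yield t
        t += timedelta(days=period_days)

def prune(snapshots, keep=10):
    """Retain only the newest `keep` snapshots (the 'Keep value')."""
    return sorted(snapshots)[-keep:]

# Service created at 16:14, so the first run is 03:00 the next day,
# then every 2 days after that.
times = list(snapshot_times(datetime(2012, 1, 1, 16, 14)))
print([t.isoformat() for t in times])
# ['2012-01-02T03:00:00', '2012-01-04T03:00:00', '2012-01-06T03:00:00']
```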

Fail over the volume service to the secondary node

nmc@nextest1:/$ setup group rsf-cluster HA-Cluster failover                                                                                     
Appliance       : nextest2
Shared volume   : vol02
Waiting for failover operation to complete ........ done.

nextest2:
 vol01        stopped        auto    unblocked  vip01        bge0      20  8  
 vol02        running        auto    unblocked  vip02        bge0      20  8
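The listing above shows the shared volume's owner after failover: vol02 is now running on nextest2. As a hedged sketch, the owning node can be read out of that listing like this; the column layout is taken from the NMC output above, and columns beyond the first two are not interpreted.

```python
# Cluster-status listing as printed by the failover command above.
STATUS = """\
nextest2:
 vol01        stopped        auto    unblocked  vip01        bge0      20  8
 vol02        running        auto    unblocked  vip02        bge0      20  8
"""

def owner_of(volume, listing):
    """Return the node on which `volume` is running, else None."""
    node = None
    for line in listing.splitlines():
        if line.endswith(':'):          # a node header like 'nextest2:'
            node = line.rstrip(':')
        else:
            fields = line.split()
            if len(fields) >= 2 and fields[0] == volume and fields[1] == 'running':
                return node
    return None

print(owner_of('vol02', STATUS))  # vol02 is running on nextest2
```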

Create another auto-snap service on the secondary node

nmc@nextest2:/$ setup auto-snap vol02/ create 
Interval   : daily
Time       : 3am
Period     : 2
Keep value : 10
Recursive  : 1
Custom name: vol02-nextest2
Comment    : 
About to create a new auto-snap service for 'vol02'. Note that additional options to configure the service are available via command line - see help (-h) for more information. Proceed?  Yes

  1  show auto-snap ':vol02-nextest2-000' state and properties
     'show auto-snap :vol02-nextest2-000 -v'

  2  show auto-snap ':vol02-nextest2-000' log
     'show auto-snap :vol02-nextest2-000 log'

Press one of the highlighted keys to make a selection, or press any key to quit 

PROPERTY              VALUE                                                 
service             : auto-snap
instance            : vol02-nextest2-000
folder              : vol02
frequency           : every 2 days at 03:00
status              : offline* since 16:22:11
enabled             : true
state               : idle
keep value          : 10 snapshots
comment             : 
exclude             : 
last_replic_time    : 0
latest-suffix       : 
period              : 2
snap-recursive      : 1
time_started        : N/A
trace_level         : 1
uniqid              : 96f1119e8dbd2224f73b313d9df3f002
log                 : /var/svc/log/system-filesystem-zfs-auto-snap:vol02-nextest2-000.log

An auto-snap service now exists on both nodes, so regardless of which node the volume is imported on, snapshots will be taken at the scheduled times.
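The end state of the workaround can be simulated in a few lines: a service instance exists on each node, but only the node that has the shared pool imported actually takes the snapshot, and after failover the other node's service takes over. The node names come from this article; the logic is illustrative, not NexentaStor code.

```python
class Node:
    """Toy model of a cluster node with a local auto-snap service."""
    def __init__(self, name):
        self.name = name
        self.snapshots = []

    def run_auto_snap(self, volume, imported_on):
        # The service exists on every node, but it can only snapshot
        # a volume whose pool is imported locally.
        if imported_on is self:
            self.snapshots.append(f"{volume}@auto-snap")
            return True
        return False

nextest1, nextest2 = Node("nextest1"), Node("nextest2")

# While vol02 is imported on nextest1, only its service fires.
nextest1.run_auto_snap("vol02", imported_on=nextest1)
nextest2.run_auto_snap("vol02", imported_on=nextest1)

# After failover, the nextest2 service takes over seamlessly.
nextest2.run_auto_snap("vol02", imported_on=nextest2)

print(nextest1.snapshots, nextest2.snapshots)
# ['vol02@auto-snap'] ['vol02@auto-snap']
```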

Posted in: NexentaStor