FAQ / How should I test mapmgr?

1 TESTING PLAN AND ADVANCED FUNCTIONALITY OVERVIEW FOR MAPMGR V3.4.6
1.1 Set up an environment that uses the full capabilities of stmf. The environment should include multiple pools, multiple zvols at multiple depths, multiple target/host groups and multiple views.  An example of such an environment setup is given in the diagram.
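For example, an environment along these lines can be built with standard zfs and stmfadm commands (a minimal sketch only; the pool, zvol, group, target and initiator names are placeholders, and the target/initiator setup depends on the transport in use):

zfs create -V 10G pool1/vol1
zfs create pool1/sub
zfs create -V 10G pool1/sub/vol2
zfs create -V 10G pool2/vol1
stmfadm create-lu /dev/zvol/rdsk/pool1/vol1          # repeat for each zvol; note the GUID each call prints
stmfadm create-tg tg-test
stmfadm add-tg-member -g tg-test <target name>       # the target must already exist and be offline
stmfadm create-hg hg-test
stmfadm add-hg-member -g hg-test <initiator name>
stmfadm add-view -t tg-test -h hg-test -n 0 <GUID of pool1/vol1>
stmfadm add-view -t tg-test -h hg-test -n 1 <GUID of pool1/sub/vol2>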

1.2 It is assumed the reader is familiar with stmf commands, the mapmgr documentation, the intended functionality of mapmgr and how to check that mapmgr has performed its intended functionality.

1.3 This testing plan assumes no file operations are performed anywhere in the /volumes directory (e.g. no files in a .mapping directory are edited, deleted or added, and no .mapping directories are edited), other than what is explicitly mentioned here.  Placing files in the top level of the /volumes directory will break backup-remove; a future version of mapmgr will handle this particular case.

2 BASIC FUNCTIONALITY TESTING:
2.1 Fail over between two nodes.

2.2 Fail over again after changing the stmf state; most importantly rename, add and remove zvols (for example, as sketched below). Repeat this step several times in each environment in which mapmgr is to be deployed to ensure mapmgr functions correctly and integrates as expected.
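For example, a state change between failovers might look like the following (a sketch; names and GUIDs are placeholders):

zfs rename pool1/vol1 pool1/vol1-renamed             # rename an existing zvol
zfs create -V 5G pool1/vol3                          # add a new zvol and map it
stmfadm create-lu /dev/zvol/rdsk/pool1/vol3
stmfadm add-view -t tg-test -h hg-test <GUID printed by create-lu>
stmfadm delete-lu <GUID of pool1/sub/vol2>           # remove a logical unit and its zvol
zfs destroy pool1/sub/vol2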

3 BACKWARD COMPATIBILITY AND INSTALLATION TESTING:
3.1 Perform a failover in an environment where mapmgr has never run before.
3.2 Test asymmetric upgrades of mapmgr from the original mapmgr.
3.3 Test symmetric upgrades of mapmgr from the original mapmgr.
4 ASYNCHRONOUS CALL HANDLING TESTING:
It has been discovered through reading logs that mapmgr is being called multiple times asynchronously.  mapmgr should handle this.  To ensure that it does, the following test is required (an example command sequence is sketched after the steps).

“manual shell failover”:
4.1 in a shell run mapmgr backup for each pool,
4.2 then run mapmgr backup-remove for each pool,
4.3 then run mapmgr backup-remove for each pool again,
4.4 check the mapping is empty (i.e. no LUs and views exist),
4.5 (on second appliance) run mapmgr restore for each pool
4.6 (on second appliance) run mapmgr restore for each pool again,
4.7 (on second appliance) check the mapping has been restored as expected
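A sketch of the sequence above, assuming pool1 and pool2 are the pools in service and that mapmgr takes the pool name as its final argument:

# on the first appliance
mapmgr backup pool1
mapmgr backup pool2
mapmgr backup-remove pool1
mapmgr backup-remove pool2
mapmgr backup-remove pool1            # second run; must be handled gracefully
mapmgr backup-remove pool2
stmfadm list-lu -v                    # should show no LUs left for these pools

# on the second appliance, once the pools have been imported
mapmgr restore pool1
mapmgr restore pool2
mapmgr restore pool1                  # second run; must be handled gracefully
mapmgr restore pool2
stmfadm list-lu -v                    # LUs, views and groups should match the original state
stmfadm list-view -l <GUID>
stmfadm list-tg -v
stmfadm list-hg -v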

5 GROUP AWARENESS TESTING:
It has been discovered through reading logs that groups, in particular target groups, are somehow being destroyed and/or not created on the second appliance, i.e. the destination machine of a failover.  mapmgr should handle this.  To ensure that it does, the following tests are required.

Graceful failover test
5.4 in a shell run mapmgr backup for every pool, then delete the TargetGroups513513099082456 file from every .mapping directory (see the command sketch after step 5.9)
5.5 run mapmgr backup-remove for each pool
5.6 destroy a target group on the second appliance
5.7 (on second appliance) run mapmgr restore for each pool
5.8 The failover should work and the target group should now exist on the second appliance (mapmgr should create it).  Also, the mapmgr restore log should contain a warning saying <group_name> does not exist AND an event notification should have occurred (see EVENT NOTIFICATION).

5.9 repeat 5.4 – 5.8 but with host groups (remember to delete the “HostGroup…” files this time)
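The group-related steps in this test (and in the non-graceful tests below) can be driven with commands along these lines (a sketch; tg-test and the /volumes mount point are assumptions, and the exact TargetGroups file name will differ on your system; for the host-group variants delete the HostGroup… files and destroy/verify a host group instead):

# first appliance, after mapmgr backup for every pool
rm /volumes/pool1/.mapping*/TargetGroups*     # delete the TargetGroups... file from every .mapping directory
rm /volumes/pool2/.mapping*/TargetGroups*

# second appliance, before mapmgr restore
stmfadm delete-tg tg-test

# second appliance, after mapmgr restore for each pool
stmfadm list-tg -v                            # tg-test should have been recreated by mapmgr
stmfadm list-view -l <GUID>                   # views should reference tg-test again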

Non-graceful failover test (i)
5.10 in a shell run mapmgr backup for every pool, then delete the TargetGroups513513099082456 file from every .mapping directory
5.11 run mapmgr backup-remove for ONLY one pool

** First appliance dies **

5.12 destroy a target group on the second appliance
5.13 (on second appliance) run mapmgr restore for each pool
5.14 The failover should work and the target group should now exist on the second appliance (mapmgr should create it).  Also, the mapmgr restore log should contain a warning saying <group_name> does not exist AND an event notification should have occurred (see EVENT NOTIFICATION).

5.15 repeat 5.10 – 5.14 but with host groups (remember to delete the “HostGroup…” files this time)

Non-graceful failover test (ii)
5.16 enable comstar HA max mode by creating the file /opt/HAC/RSF-1/etc/.comstarHAMaxMode
5.17 in a shell run mapmgr backup for every pool, then delete the TargetGroups513513099082456 file from every .mapping directory

** First appliance dies **

5.18 destroy a target group on the second appliance
5.19 (on second appliance) run mapmgr restore for each pool
5.20 The failover should work, but the views that correspond to the target groups will now use target group ALL (a verification sketch follows).  Also there should be some warning messages in the log AND an event notification should have occurred (see EVENT NOTIFICATION).
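The fallback to target group ALL can be verified on the second appliance with something like the following (the GUID is a placeholder):

stmfadm list-view -l <GUID>                   # the target-group field of the affected view entries should show All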
6 COMSTAR ROLLBACK TESTING:
After you have completed the tests above you should now have multiple .mapping directories (directories whose names start with “.mapping”) in the mount point for the zpools.

General checks
6.1 Check that the number of .mapping directories matches the number of calls made to backup or backup-remove for the zpool; note that the maximum number is 11 (a check sketch follows 6.2).
6.2 Check that .mapping.tmp does NOT exist
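Assuming the pools are mounted under /volumes, these checks can be made roughly as follows (pool1 is a placeholder):

ls -d /volumes/pool1/.mapping* | wc -l        # should match the number of backup/backup-remove calls, up to a maximum of 11
ls -d /volumes/pool1/.mapping.tmp             # should fail with "No such file or directory"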

Interactive Rollback Mode Test
6.3 Ensure you have called backup, or backup-remove, for some zpool at a time in the past when the stmf state differed from the way it is now (e.g. one more logical unit, different views, etc.), except that the target/host groups are the same.  Let X be the time corresponding to this call.
6.4 Run “mapmgr remove” for the zpool to clear the mapping, ready for the rollback
6.5 Run “mapmgr -r” for the zpool and follow the on-screen instructions to roll back to X.
6.6 Check that mapmgr has indeed restored the stmf state corresponding to time X.

Non-Interactive Rollback Test
6.7 Do 6.3, then 6.4, then run “mapmgr -R -n X”, where X is the directory you wish to roll back to, then perform the check in 6.6 (see the sketch below).
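As a sketch, assuming the zpool name is passed as the final argument (the exact argument placement for -r and -R is an assumption; check the mapmgr documentation):

mapmgr remove pool1              # 6.4: clear the current mapping
mapmgr -r pool1                  # 6.5: interactive rollback; follow the on-screen instructions to pick the entry for time X
mapmgr -R -n <X> pool1           # 6.7: non-interactive rollback, where <X> is the .mapping directory to roll back to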

 

EVENT NOTIFICATION:

In order to ensure the event notification functions correctly, you first need to check that the “Mailer” settings are correct in NMV.  This can be done on the following page: /settings/appliance/mailer/

Then, to test that your Mailer settings are correct, run the following:

/opt/HAC/RSF-1/bin/event_notifier LOG_WARN test_action test_arg=test_val

You should receive an email containing text like the following:

FAULT: Description : HA Cluster event: LOG_WARN test_action test_arg=test_val
