Help Documentation for mapmgr V18.104.22.168
The basic function of mapmgr is to move COMSTAR state, in particular the state of the views and logical units, between nodes. mapmgr logs extensively and can also synchronize access when it is called by multiple programs at once. It is recommended that the reader first read the definitions if they have not already.
mapmgr currently has six flags:
- -v – Writes the version number to standard error.
- -h – Writes a short message including the address of this FAQ entry to standard error.
- -l <arg> – Writes <arg> as a header for the log file.
- -R <zpool> – Runs a rollback on zpool <zpool> if used in conjunction with the -n flag below.
- -n <n> – Runs a rollback to state <n> if used in conjunction with the -R flag above.
- -r <zpool> – Starts mapmgr interactive rollback mode on zpool <zpool>.
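For illustration, the flags might be used as follows (myPool is a hypothetical zpool name):

```shell
mapmgr -v                 # print the version number (to standard error)
mapmgr -h                 # print the short help message
mapmgr -R myPool -n 3     # non-interactive rollback of myPool to saved state 3
mapmgr -r myPool          # interactive rollback mode for myPool
```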
The first argument given to mapmgr must be one of the following commands: backup, remove, restore, backup-remove or update.
The second argument must be a zfs pool (henceforth zpool) unless the first argument is “update” in which case the second argument must be a zfs volume (henceforth zvol).
backup <zpool> – The map manager loops through every logical unit for <zpool> and saves information describing the logical unit and its views in a file in a directory named .mapping in the root of zpool <zpool>. If the directory does not exist, the map manager creates it. If files describing the logical unit (and its views) already exist in the directory, they are deleted, as they may contain conflicting or out-of-date information.
remove <zpool> – The map manager loops through every logical unit for <zpool> and deletes the logical unit and its views. This command accepts one or two extra arguments: the first is the number of tries the remove command will make; the second is the number of seconds the remove sleeps before trying again. If an argument is not given, the default for that variable is used. E.g. calling
mapmgr remove myZpool 3
will result in the remove command with 3 tries and the default number of seconds. E.g. calling
mapmgr remove myZpool 45 2
will result in the remove command making 45 tries, sleeping for 2 seconds after each failed try (except the last). This functionality exists because the underlying stmf can sometimes be busy.
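The tries/sleep semantics can be pictured with a small shell sketch. This is an illustration only, not mapmgr's implementation; try_remove is an invented stand-in for the real stmf removal work and here pretends stmf is busy for the first two tries:

```shell
# Sketch of "N tries, sleep S seconds between failed tries".
count=0
try_remove() {
    count=$((count + 1))
    [ "$count" -ge 3 ]      # succeed on the third try
}

tries=45    # first extra argument: number of tries
pause=0     # second extra argument: seconds between tries (0 keeps the demo fast)

i=1
while [ "$i" -le "$tries" ]; do
    if try_remove; then
        echo "removed after $i tries"
        break
    fi
    [ "$i" -lt "$tries" ] && sleep "$pause"   # no sleep after the last try
    i=$((i + 1))
done
```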
restore <zpool> – The map manager loops through every file in the .mapping directory (see mapmgr backup <zpool>) and, for each file, creates the logical unit described by the file together with its views.
backup-remove <zpool> – This is like calling “mapmgr backup <zpool>” and then, should the backup succeed, “mapmgr remove <zpool>”. However, the backup step will not delete any existing files in the .mapping directory. Consequently, if the state of the zfs volumes changes and backup-remove is then called without an intervening call to update or backup, the state of stmf may not be saved reliably. The behaviour differs because this command is synchronized and can therefore be called by multiple other programs. This command can also take arguments to control how the remove retries; see remove.
update <zvol> – This is just like calling “mapmgr backup <zpool_of_zvol>”, where <zpool_of_zvol> is the zpool containing <zvol>.
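Putting the commands together, typical invocations look like this (myPool and myPool/myVol are hypothetical names):

```shell
mapmgr backup myPool          # save LU/view descriptions into myPool's .mapping
mapmgr remove myPool          # delete every LU (and its views) for myPool
mapmgr restore myPool         # recreate LUs and views from .mapping
mapmgr backup-remove myPool   # synchronized backup, then remove on success
mapmgr update myPool/myVol    # like backup, but given a zvol; acts on its zpool
```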
Whenever backup or update is called, mapmgr first copies its currently saved data. mapmgr keeps up to ten copies; each has a time stamp of its creation and a reference number from 0 to 9. One can roll the stmf state back to one of these copies. This essentially adds “undo” functionality to COMSTAR: a user who is unhappy with changes they have made in COMSTAR, or with the way they have used mapmgr, can simply roll back to a previous time.
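The copy rotation can be sketched in shell. This is only an illustration of “keep up to ten numbered copies”; the state.N file names and the location used here are invented, and mapmgr's actual on-disk layout may differ:

```shell
# Sketch: rotate saved-state copies 0..9, where 0 is the newest.
DIR=$(mktemp -d)    # stands in for wherever mapmgr keeps its saved data
rotate() {
    # drop the oldest copy, then shift the rest up by one
    rm -rf "$DIR/state.9"
    n=9
    while [ "$n" -gt 0 ]; do
        prev=$((n - 1))
        [ -e "$DIR/state.$prev" ] && mv "$DIR/state.$prev" "$DIR/state.$n"
        n=$prev
    done
}

# simulate twelve backup calls; only the ten most recent copies survive
i=1
while [ "$i" -le 12 ]; do
    rotate
    echo "copy made at call $i" > "$DIR/state.0"
    i=$((i + 1))
done
ls "$DIR"    # state.0 .. state.9, ten copies at most
```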
To execute a rollback you can use the interactive rollback mode described below (advised if you do not know the reference number of the stmf state copy), or you can use the -R and -n flags.
- The -R flag specifies the zpool you wish to rollback.
- The -n flag specifies the reference number of the stmf state you wish to rollback to.
- Note that you must specify both the zpool and the reference number.
NOTE: The rollback will not perform any removal operations on existing views/LUs, as it assumes the user is using rollback as a recovery feature after mapmgr/stmf has been used in error. A rollback is still possible if views/LUs exist, but the user will need to run the remove command first.
WARNING: The intended functionality of mapmgr is to save the state of the views and logical units; hence, if anything other than the views and logical units has changed between now and a saved stmf state, the behaviour of a rollback to that state is unspecified. In particular, if the set of target group and/or host group names has been changed in any way, it is highly recommended that you either do not use the rollback feature or read the advanced section of this FAQ.
The interactive rollback mode provides a command line interface for the user to view and select available previous stmf states to rollback to. In order to start interactive rollback mode you must run mapmgr with the -r flag, and specify the zpool you wish to rollback as the argument to the -r flag. To use the interactive rollback mode please follow the on screen instructions exactly.
mapmgr is fully backward compatible; moreover, it will clean up after older versions of mapmgr.
The following exit values are returned:
- 0 – Successful execution of the command.
- 1 – Something failed, consult logs for more information.
- -1 – Could not obtain lock (30 second timeout).
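Scripts that check mapmgr's exit status should note a generic shell fact (not mapmgr-specific): an exit value of -1 is reported to the calling shell modulo 256, so it is observed as 255. A quick demonstration:

```shell
# A process exiting with -1 is seen as status 255 by its caller.
bash -c 'exit -1'
echo "$?"    # prints 255
```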
Extensive information regarding the execution of mapmgr, successful or not, is written to the file /opt/HAC/RSF-1/log/mapmgr.log (or /var/log/mapmgr.log should /opt/HAC/RSF-1/log not exist); previous log files are renamed or deleted depending on their age.
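A script that wants to inspect the log can select the path using the same rule (a sketch built from the two paths stated above):

```shell
# Choose the mapmgr log path: prefer the RSF-1 log directory,
# fall back to /var/log if it does not exist.
if [ -d /opt/HAC/RSF-1/log ]; then
    LOG=/opt/HAC/RSF-1/log/mapmgr.log
else
    LOG=/var/log/mapmgr.log
fi
echo "mapmgr log: $LOG"
```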
It is not necessary to read this section to understand the basic functionality of mapmgr. A thorough understanding of COMSTAR is recommended before reading this section. The reader should also read the documentation distributed with mapmgr, which explains any bespoke functionality of mapmgr.
An intended application of mapmgr is high availability (HA), and mapmgr therefore has modes that can be enabled or disabled to increase HA through context awareness.
Group awareness: When backup, backup-remove or update is called, mapmgr also saves the state of the target and host groups. When restore is called, if mapmgr attempts to add a view to a logical unit but cannot because a target/host group does not exist, mapmgr will attempt to create and fill the group using its saved data, and then attempt to add the view again. Group awareness means that mapmgr will do everything it can to restore as much as possible from its data.
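In outline, the group-aware retry resembles the following sketch. It uses the standard stmfadm command line for readability; whether mapmgr shells out or uses API calls internally is not specified here, and the GUID, group name and target name are invented:

```shell
LU=600144f000000000000000000000002a   # hypothetical LU GUID
TG=tgt-group-a                        # hypothetical target group name

if ! stmfadm add-view -t "$TG" "$LU"; then
    # The group did not exist: recreate and refill it from saved
    # group-state data, then try to add the view again.
    stmfadm create-tg "$TG"
    stmfadm add-tg-member -g "$TG" iqn.2010-01.com.example:target0
    stmfadm add-view -t "$TG" "$LU"
fi
```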
The diagram below is not intended to represent a cluster exhaustively or formally; rather, it shows what is important for understanding how mapmgr and stmf function. It is recommended that the reader first read the definitions if they have not already. The diagram also shows how mapmgr fits in with RSF-1 and NMS.
- The orientation of the page loosely represents the calling hierarchy, where the programs at the top of node A are higher level and the programs at the bottom of node A are lower level (similarly for node B).
- The arrows represent how each program calls one another.
- Dashed (dark blue) bordered boxes and lines that intersect the stmf box represent structures/links that will exist after a failover.
- Normal (dark red) bordered boxes and lines that intersect the stmf box represent structures/links that will not exist after a failover.
See diagram below.
1. RSF-1 runs stop scripts on NODE A.
2. Stop scripts call mapmgr backup-remove <zpool> for each zpool:
   - On NODE A, for each zpool, mapmgr saves a description of the stmf state in files in the .mapping directory corresponding to the zpool;
   - mapmgr destroys the LUs and views using stmf and ZFS API calls. (This step may repeat if, for whatever reason, it fails.)
3. When the RSF-1 stop scripts have finished, RSF-1 runs start scripts on NODE B.
4. Start scripts call mapmgr restore <zpool> for each zpool.
5. On NODE B, for each zpool, mapmgr uses the .mapping directory from step 2 corresponding to the zpool to import the LUs and create the views.
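The sequence above can be condensed into a stop/start script sketch (the pool names are hypothetical, and RSF-1's actual scripts will differ):

```shell
# NODE A, stop script: save stmf state and tear down LUs/views per zpool.
for pool in pool01 pool02; do
    mapmgr backup-remove "$pool" || exit 1
done

# NODE B, start script: recreate LUs and views from each pool's .mapping.
for pool in pool01 pool02; do
    mapmgr restore "$pool" || exit 1
done
```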