# Settings

## General Settings

This page contains settings that apply to the whole cluster, including the webapp:
### Security settings

| Setting | Description |
|---|---|
| Inactivity timeout | The period of time a user can remain inactive in the webapp (that is, not interact with the system in any way) without any impact on their session. Once the timeout expires, the user is logged out and must log back in. |
| Encrypted Heartbeats | When enabled, heartbeats exchanged between cluster nodes are encrypted, providing an extra level of security. |
### Miscellaneous settings

| Setting | Description |
|---|---|
| Storage Web Interface URL (Optional) | When set to a valid URL, a custom launcher button is added to the ZFS->Pools page; clicking it opens the URL in a new browser window. This is intended to provide a shortcut to the connected storage interface. Note: prefix the URL with `http://` or `https://`. |
| Storage Web Interface Label (Optional) | Sets the label shown on the storage interface launcher button. |
| All Targets All Ports | Some disk arrays do not follow the SCSI persistent reservations specification when making key registrations per I_T nexus; instead, the registration is shared by all target ports connected to a given host. This causes PGR3 reservations to fail whenever RSF-1 tries to register a key, since it receives a registration conflict on some of the paths. The issue manifests when RSF-1 attempts to place reservation keys on a disk; a message of the form `MHDC(nnn): Failed to register key on disk` in the rsfmon log file is indicative of this issue. This setting remedies the problem by making a registration unique to each target port. |
| Log Level | Sets the detail level for logging. |
| Reservation handling | In normal operation the cluster panics a node if it detects that it has lost a reservation, or that another node has taken one. In certain configurations this may not be the desired behaviour; rather, a panic should be triggered only when another node takes reservations. An example would be a JBOD in a mirrored configuration being power cycled, which clears its reservations. The missing reservations would normally trigger a panic, but in this case that is not the desired outcome, as the cluster will continue to run using the other half of the mirror. |
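The registration-conflict symptom described above can be checked for directly in the rsfmon log. A minimal sketch (the helper name and log excerpts are illustrative, not part of RSF-1):

```python
import re

# Pattern of the rsfmon log message indicating a key-registration
# conflict, as described above ("nnn" is a numeric identifier).
_CONFLICT = re.compile(r"MHDC\(\d+\): Failed to register key on disk")

def has_registration_conflict(log_text: str) -> bool:
    """Return True if the log text contains the MHDC registration failure."""
    return _CONFLICT.search(log_text) is not None
```

If the message is present, enabling All Targets All Ports is the documented remedy.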
## Shares Settings

This page contains settings that apply to share content in the cluster.

### Share handling

| Setting | Description |
|---|---|
| NFS Shares | When NFS share handling is enabled, the cluster assumes responsibility for the NFS exports file and its contents. Shares are configured using the Shares->NFS page (visible only when cluster share handling is enabled). NFS shares created or modified are distributed across all cluster nodes. Note: ensure NFS server packages are installed on all cluster nodes when enabling NFS sharing (specifically the NFS kernel server). |
| SMB Shares | When SMB share handling is enabled, the cluster assumes responsibility for the SMB configuration file and its contents. Shares are configured using the Shares->SMB page (visible only when cluster share handling is enabled). SMB shares created or modified are distributed across all cluster nodes. Note: ensure Samba packages are installed on all cluster nodes when enabling SMB sharing (Samba/Winbind etc.). |
| iSCSI Shares | When iSCSI share handling is enabled, the cluster saves and migrates the iSCSI configuration for any clustered ZFS pools/volumes. Shares are configured using the Shares->iSCSI page (visible only when cluster share handling is enabled). iSCSI shares created or modified are synchronised across all cluster nodes. Note: ensure the targetcli package is installed on all cluster nodes when enabling iSCSI sharing. |
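For reference, NFS shares managed by the cluster are ultimately expressed as ordinary entries in the exports file; a typical entry looks like this (the path, network and options shown are illustrative, not values the cluster generates):

```
/pool1/data  192.168.1.0/24(rw,sync,no_subtree_check)
```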
### iSCSI

| Setting | Description |
|---|---|
| Check Frequency | The RSF-1 iSCSI agent checks the local iSCSI configuration for changes, applying any found to the cluster's stored configuration. This setting configures how often that check is made. |
| Backup Copies | Each time a pool failover occurs, a backup copy of the current iSCSI configuration for that pool is created. This setting configures the number of backup copies to retain. |
| Proxmox Support | Enables additional support for Proxmox, including ZFS over iSCSI extensions to create the volume links used by the Proxmox remote client. |
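The Backup Copies rotation amounts to keeping only the newest N copies; a simplified sketch (function and backup names are assumptions for illustration, not the RSF-1 implementation):

```python
def rotate_backups(backups: list[str], retain: int) -> tuple[list[str], list[str]]:
    """Given backup names ordered oldest-first, return (kept, purged)
    so that only the newest `retain` copies remain."""
    if retain <= 0:
        return [], list(backups)
    purged = backups[:-retain] if len(backups) > retain else []
    kept = backups[len(purged):]
    return kept, purged
```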
## Shared Nothing Settings

This page contains settings specific to a Shared-Nothing cluster.

| Setting | Description |
|---|---|
| Active node snapshot interval | Sets how often snapshots are taken of a pool on its active node. The passive node is responsible for replicating these snapshots using ZFS send/receive. The number of snapshots kept locally is governed by the snapshot retention value. |
| Passive node snapshot retrieval interval | Sets how often the passive node checks an active node to ensure it has up-to-date copies of all active pool snapshots. Any missing snapshots are synchronised using ZFS send/receive, and any expired snapshots (as dictated by the snapshot retention value) are removed. Using a pull mechanism ensures that a node recovering after a crash will immediately synchronise any missing snapshots. This approach also removes the requirement for the active node to continually attempt to send snapshots to a passive node that could be unavailable. |
| Snapshot Retention | The number of snapshots to retain for each pool on all nodes. Once the retention number is reached, the oldest snapshots are purged as new ones are taken, maintaining the retention level. |
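The pull cycle described above reduces to a set difference plus retention pruning; a simplified sketch (function names and the oldest-first ordering are illustrative assumptions):

```python
def snapshots_to_pull(active: list[str], passive: list[str]) -> list[str]:
    """Snapshots present on the active node but missing on the passive
    node, in the active node's (oldest-first) order."""
    missing = set(active) - set(passive)
    return [s for s in active if s in missing]

def snapshots_to_expire(snapshots: list[str], retention: int) -> list[str]:
    """Oldest snapshots beyond the retention count (oldest-first input)."""
    excess = len(snapshots) - retention
    return snapshots[:excess] if excess > 0 else []
```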
## Linux Settings

This page contains settings that are specific to a Linux environment:

| Setting | Description |
|---|---|
| Enable multipath support in cluster | If your system storage has multipathing enabled and configured correctly, enable this option for multipath-aware disk reservation and pool import in the cluster. |
| Enable netplan support in cluster | This option must be enabled if your system uses netplan for its network configuration. |
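If you are unsure whether your system uses netplan, check for YAML files under /etc/netplan; a typical configuration looks like this (the file name, interface name and address are illustrative):

```yaml
# /etc/netplan/01-cluster.yaml (illustrative)
network:
  version: 2
  ethernets:
    eth0:
      addresses: [192.168.1.10/24]
```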
## BSD Settings

This page contains settings that are specific to a BSD environment:

| Setting | Description |
|---|---|
| Enable multipath support in cluster | If your system storage has multipathing enabled and configured correctly, enable this option for multipath-aware disk reservation and pool import in the cluster. |