RSF-1 supports both shared and shared-nothing storage clusters.
A shared storage cluster utilises a common set of storage devices accessible to both nodes in the cluster (housed in a shared JBOD, for example). A ZFS pool is created using these devices, and access to that pool is controlled by RSF-1.
Pool integrity is maintained by the cluster software using a combination of redundant heartbeating and PGR3 disk reservations to ensure that any pool in a shared storage cluster can only be accessed by a single node at any one time.
```mermaid
flowchart TD
    SSa("Node A") & SSb("Node B") <-- SAS/FC etc. --> SSS[("Storage")]
```
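RSF-1 places and manages the disk reservations itself, but the reservation state on a shared device can be inspected with the standard `sg_persist` utility from the sg3_utils package. A minimal sketch, assuming `/dev/sdX` stands in for one of the shared JBOD devices:

```shell
# Inspect SCSI-3 persistent reservation state on a shared disk.
# /dev/sdX is a placeholder for one of the shared JBOD devices;
# sg_persist is provided by the sg3_utils package.

# List the reservation keys registered on the device
sg_persist --in --read-keys /dev/sdX

# Show the current reservation holder and reservation type
sg_persist --in --read-reservation /dev/sdX
```

On a healthy cluster only the node currently running the pool should hold the reservation; the passive node's key may be registered but must not hold it.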
A shared-nothing cluster consists of two nodes, each with their own locally accessible ZFS storage pool residing on non-shared storage:
```mermaid
flowchart TD
    SNa("Node A") <-->|SAS/FC etc.| SNSa
    SNb("Node B") <-->|SAS/FC etc.| SNSb
    SNSa[("Storage")]
    SNSb[("Storage")]
```
Data is replicated between nodes by an HA synchronisation process. Replication always runs from the active node to the passive node, where the active node is the one serving the pool out to clients:
```mermaid
flowchart LR
    SNa("Node A (active)<br />Pool-A") -->|HA Synchronisation| SNb
    SNb("Node B (passive)<br />Pool-A")
```
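Conceptually, each replication cycle resembles an incremental ZFS snapshot transfer from the active pool to the identically named pool on the passive node. RSF-1's HA synchronisation process manages this automatically; the sketch below is only an illustration of the underlying idea, with the host name `node-b` and the snapshot names as placeholders:

```shell
# One conceptual replication cycle, active node -> passive node.
# 'node-b' and the sync-N snapshot names are hypothetical examples.

# On the active node: take a new recursive snapshot of Pool-A
zfs snapshot -r Pool-A@sync-2

# Send only the changes since the previously synchronised snapshot,
# and receive them into the passive node's local Pool-A
zfs send -R -i Pool-A@sync-1 Pool-A@sync-2 | ssh node-b zfs receive -F Pool-A
```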
Should a failover occur, the direction of synchronisation is effectively reversed:
```mermaid
flowchart RL
    SNa("Node B (active)<br />Pool-A") -->|HA Synchronisation| SNb
    SNb("Node A (passive)<br />Pool-A")
```
Before creating pools for shared-nothing clusters:

- To be eligible for clustering, the storage pools must have the same name on each node in the cluster.
- It is strongly recommended that the pools are of equal size; otherwise the smaller of the two risks depleting all available space during synchronisation.
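These prerequisites can be satisfied and checked with standard ZFS commands. A minimal sketch, in which the pool name `Pool-A` and the device paths are placeholders for your own layout:

```shell
# On node A: create the pool from node A's local disks
# (device names below are placeholders)
zpool create Pool-A mirror /dev/disk0 /dev/disk1

# On node B: create a pool with the SAME name from node B's local disks
zpool create Pool-A mirror /dev/disk2 /dev/disk3

# On each node, confirm the pool name and size;
# the reported sizes should match as closely as possible
zpool list -H -o name,size Pool-A
```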