NFS shares

Enabling clustered NFS

Note

Before enabling NFS, please ensure all relevant packages (e.g. nfs-kernel-server) are installed and enabled on all nodes in the cluster.
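
For example, on Debian/Ubuntu-based nodes the package and service could be installed and enabled as follows (package and service names may differ on other distributions):

apt install nfs-kernel-server        # install the NFS server package
systemctl enable --now nfs-server    # enable the service at boot and start it now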

By default, RSF-1 does not manage NFS shares; the contents of the /etc/exports file are left for the system administrator to manage manually on each node in the cluster. To enable management of the exports file from the webapp and synchronise it across all cluster nodes, navigate to Shares -> NFS and click ENABLE NFS SHARE HANDLING:

NFS Image 1

Once enabled, the shares table is shown:

NFS Image 2

Before creating new shares, the option to import the existing /etc/exports file is available (this option is disabled once any new shares are added via the webapp).

Clustering an NFS share

  1. Navigate to Shares -> NFS and click +Add on the NFS table, then fill in the required information. The available options are:

    • Description - Description of the Share (optional)
    • Path - Path of the directory/dataset to share - for example /pool1/nfs
    • Export Options - For a detailed description of the available options click the SHOW NFS OPTIONS EXAMPLES button.

    NFS Image 3

  2. Click to add the share:

    NFS Image 4

    The share will now be available and clustered.
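
Behind the scenes the share is written to /etc/exports, which RSF-1 then keeps synchronised across the cluster nodes. Using the example path above, the resulting entry might look like the following (the client specification and options shown here are illustrative and depend on what was entered in the form):

/pool1/nfs  10.10.23.0/24(rw,sync,no_subtree_check)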

FSID setting for failover

NFS identifies each file system it exports using either a file system UUID or the device number of the device holding the file system. NFS clients use this identifier to ensure consistency of mounted file systems; if it changes, the client considers the mount stale and typically reports "Stale NFS file handle", at which point manual intervention is required.

In an HA environment there is no guarantee that these identifiers will be the same after a failover to another node (that node may, for example, use different device numbering). To alleviate this problem, assign each exported file system a unique identifier (starting at 1 - see the note below on the root setting) using the NFS fsid= option, for example:

/tank      10.10.23.4(fsid=1)
/sales     10.10.23.5(fsid=2,sync,wdelay,no_subtree_check,ro,root_squash)
/accounts  accounts.dept.foo.com(fsid=3,rw,no_root_squash)

Here each exported file system has been assigned a unique fsid, ensuring that no matter which cluster node exports the file system, it always presents a consistent identifier to clients.
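
After the exports file has been updated, the fsid assignments can be verified on each node with the exportfs utility:

exportfs -ra    # re-read /etc/exports and apply any changes
exportfs -v     # list active exports together with their full option set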

For NFSv4, the option fsid=0 (or fsid=root) is reserved for the "root" export. When present, all other exported directories must be below it, for example:

/srv/nfs       192.168.7.0/24(rw,fsid=root)
/srv/nfs/data  192.168.7.0/24(fsid=1,sync,wdelay,no_subtree_check,ro,root_squash)

As /srv/nfs is marked as the root export, clients mount the export /srv/nfs/data as nfsserver:/data. For further details see the exports(5) manual page.
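
To illustrate, a client would mount that export relative to the NFSv4 root rather than by its full server-side path (the hostname and mount point below are placeholders):

mount -t nfs4 nfsserver:/data /mnt/data    # /data is resolved relative to the fsid=root export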

Modifying an NFS Share

To modify an NFS share, click the pencil icon to the left of the dataset:

NFS Image 5

When done, click to update the share.

Deleting an NFS Share

To delete an NFS share, click the trash can icon and then confirm the deletion:

NFS Image 6