Proxmox
Introduction
This document describes four common ways of sharing storage from an RSF-1 cluster to a Proxmox server.
-
ZFS over iSCSI - using an iSCSI target where Proxmox is able to directly manage ZFS zvols on the storage server and access them via an iSCSI Qualified Name (IQN)

Supported cluster architectures for ZFS over iSCSI
- Any Linux system with Linux-IO Target (LIO) support (istgt is not a supported iSCSI target subsystem).
- Proxmox is not currently supported as a cluster node architecture.
- The package targetcli-fb must be installed on each cluster node.
-
An iSCSI target accessed using an iSCSI Qualified Name (IQN)
-
An NFS share directly mounted on the Proxmox server
-
An SMB share directly mounted on the Proxmox server
Adding a ZFS over iSCSI share to Proxmox
Proxmox has the ability to use an external ZFS based cluster as a storage backend for its virtual machines/containers. When ZFS over iSCSI is configured correctly, Proxmox automates the process of creating a ZFS volume (essentially raw disk space with no pre-configured file system) and then using that volume as storage for a virtual machine.
The ZFS over iSCSI approach offers many advantages:
- As the iSCSI protocol works at the block level, it can generally provide higher performance than NFS/SMB by manipulating the remote disk directly.
- Multiple Proxmox servers can consolidate their storage to a single, independent, clustered storage server that can grow with the environment.
- There is no interdependency between Proxmox servers for the underlying storage.
- Leverages the benefits of clustered storage, such as redundant backups and hardware acceleration (NVMe for cache/ZIL, etc.).
- Native ZFS snapshots and cloning via the Proxmox ZFS over iSCSI interface.
Note
Volumes created using ZFS over iSCSI can also be used as additional storage for existing VMs.
To configure ZFS over iSCSI a few steps are required:
- Identify and configure the storage pool to be used as the backend storage.
- Configure passwordless SSH access from the Proxmox server(s) to the storage cluster.
- Create an iSCSI target for use by Proxmox.
- Add the storage to Proxmox.
Configure ZFS cluster service storage pool
A clustered ZFS storage pool must be configured into RSF-1 before provisioning any ZFS over iSCSI shares. If this has not already been done, detailed instructions on creating and clustering a pool can be found in the RSF-1 Webapp user guide here.
In this example walkthrough the pool poola with a VIP address of 10.6.19.21 will be used:
Enable ZFS over iSCSI support in the cluster
In the RSF-1 Webapp navigate to Settings->Shares and enable Support for Proxmox including ZFS over iSCSI:
This option enables additional OS checks for iSCSI Proxmox support (for example ensuring the correct ZFS device paths exist). Once this is set you should reboot each cluster node as there are a number of device settings that are only enabled on boot.
Backup frequency and copies
At a regular interval the cluster checks for any updates to the iSCSI configuration and, if any are found, stores a backup of that configuration to the pool those changes relate to.
Multiple backup copies are kept sequentially, newest to oldest, with numbering starting at 1.
The configuration is saved to the file /<poolname>/.rsf-luns, for example:
# ls -al
total 147
drwxr-xr-x 5 root root 58 Apr 25 16:36 .
drwxr-xr-x 26 root root 4096 Apr 24 13:00 ..
-rw-r--r-- 1 root root 3149 Apr 25 16:36 .rsf-luns
-rw-r--r-- 1 root root 1592 Apr 22 17:53 .rsf-luns.1
-rw-r--r-- 1 root root 3133 Apr 16 15:28 .rsf-luns.2
-rw-r--r-- 1 root root 1592 Apr 16 13:55 .rsf-luns.3
-rw-r--r-- 1 root root 3133 Apr 16 13:52 .rsf-luns.4
-rw-r--r-- 1 root root 1592 Apr 16 13:36 .rsf-luns.5
-rw-r--r-- 1 root root 3133 Apr 16 13:35 .rsf-luns.6
-rw-r--r-- 1 root root 1592 Apr 16 13:27 .rsf-luns.7
-rw-r--r-- 1 root root 3133 Apr 10 11:20 .rsf-luns.8
-rw-r--r-- 1 root root 1592 Apr 10 10:55 .rsf-luns.9
-rw-r--r-- 1 root root 1592 Apr 09 17:04 .rsf-luns.10
...
The setting Frequency in seconds to check for iSCSI configuration changes specifies the interval for this check.
The setting Number of iSCSI configuration copies to retain specifies the number of backup copies to keep, with the oldest being purged whenever a new backup is created.
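To see what changed most recently, the current configuration backup can be compared against the previous copy. This is a minimal sketch, assuming the backup files are readable text and using this guide's example pool poola:

# Compare the previous backup (.1) against the current configuration backup
# (pool name "poola" is this guide's example - adjust as required)
diff /poola/.rsf-luns.1 /poola/.rsf-luns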
Configure passwordless SSH access
To use ZFS over iSCSI as a storage provider, Proxmox requires ssh keyless access to the host providing the storage.
This ssh tunnel is used autonomously by Proxmox to issue commands when creating ZFS volumes, snapshots and backups, and also to associate iSCSI LUN connections with ZFS volumes created via the configured iSCSI target.
In order for Proxmox to use the connection successfully there must be no prompts requiring input from ssh; for example, when an ssh connection is first established a prompt is issued regarding the host key (in this example we're using the VIP address of 10.6.19.21):
The authenticity of host '10.6.19.21 (10.6.19.21)' can't be established.
ED25519 key fingerprint is SHA256:norzsHTETV3oR4wjKIokzPs7tR7HWe1bWeXtZB/IOXU.
This key is not known by any other names.
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
Warning: Permanently added '10.6.19.21' (ED25519) to the list of known hosts.
Once the host key is added to the list of known hosts (~/.ssh/known_hosts) by answering yes to the Are you sure you want to continue connecting prompt, any further connections to host 10.6.19.21 will be made with no interaction required.
In a clustered environment, Proxmox connects to storage via a Virtual IP (VIP). However, the storage may reside on either node in the cluster, each with a distinct SSH host key. If a keyless, non-interactive SSH connection is configured to node-a (the current storage host), operations proceed normally. Upon failover to node-b (now hosting both the storage and VIP), SSH connections to the VIP will fail because the presented host key no longer matches the previously trusted key. This results in a connection block with an error message such as:
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@ WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED! @
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY!
Someone could be eavesdropping on you right now (man-in-the-middle attack)!
It is also possible that the RSA host key has just been changed.
The fingerprint for the RSA key sent by the remote host is
*****
Please contact your system administrator.
Add correct host key in /home/user/.ssh/known_hosts to get rid of this message.
Offending key in /home/user/.ssh/known_hosts:1
RSA host key for ***** has changed and you have requested strict checking.
Host key verification failed.
This is because the host key that the Proxmox host expected when connecting to 10.6.19.21 has changed.
To avoid this issue our recommended approach is to disable host key checking for the VIP address.
The following steps show how to configure ssh keyless access for a Proxmox host - in this example the VIP address is 10.6.19.21, and the cluster nodes have addresses 10.6.19.1 and 10.6.19.2 respectively (these steps should be performed as the root user):
-
On the Proxmox host create a private/public key pair based on the VIP address and distribute the public key to each cluster node (when prompted do not use a passphrase):

# mkdir -p /etc/pve/priv/zfs
# ssh-keygen -f /etc/pve/priv/zfs/10.6.19.21_id_rsa
# ssh-copy-id -i /etc/pve/priv/zfs/10.6.19.21_id_rsa.pub root@10.6.19.1
# ssh-copy-id -i /etc/pve/priv/zfs/10.6.19.21_id_rsa.pub root@10.6.19.2
-
On the Proxmox host disable host key checking for the VIP address by creating/updating the file /etc/ssh/ssh_config.d/rsf_config.conf with the contents:

Host 10.6.19.21
    StrictHostKeyChecking no
    UserKnownHostsFile=/dev/null

There should be an entry for each VIP in the cluster; therefore if another VIP were added with the address 192.168.77.9 then the /etc/ssh/ssh_config.d/rsf_config.conf file should look like this:

Host 10.6.19.21
    StrictHostKeyChecking no
    UserKnownHostsFile=/dev/null

Host 192.168.77.9
    StrictHostKeyChecking no
    UserKnownHostsFile=/dev/null

Note that we advise using the separate configuration file /etc/ssh/ssh_config.d/rsf_config.conf as this will always be included in the main ssh configuration file and avoids the "you have local changes" message when performing system upgrades.
-
Test access from the Proxmox host by running ssh to the address of the VIP (note that the private key must be specified when manually testing; in operation Proxmox automatically adds this option whenever it uses ssh):

ssh -i /etc/pve/priv/zfs/10.6.19.21_id_rsa 10.6.19.21

The ssh command should log in to the cluster node where the VIP is currently plumbed in without prompting for a password.
These steps should be performed for each Proxmox host wanting to access clustered storage using ZFS over iSCSI.
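An additional non-interactive check can be made with ssh's BatchMode option, which causes ssh to fail rather than prompt for any input. This is a minimal sketch using this guide's example VIP; zfs list is just an illustrative remote command:

# BatchMode=yes forces ssh to fail instead of prompting; a successful run prints
# the pool/dataset list from whichever node currently holds the VIP (example address)
ssh -o BatchMode=yes -i /etc/pve/priv/zfs/10.6.19.21_id_rsa root@10.6.19.21 zfs list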
Create an iSCSI target for Proxmox use
Note
Before attempting to create iSCSI targets please ensure targetcli-fb is installed on each cluster node.
-
In the RSF-1 Webapp navigate to Shares -> iSCSI and click ADD TARGET:
-
Select Proxmox Share as the Share Type and the IP Address (VIP) to assign to the portal. If the IQN field is left blank one will be generated automatically. A description can be set to provide a more user-friendly name for the target.
VIP addresses and ZFS pools are bound together, so selecting the VIP also selects the backing store the target will be associated with (this association is shown in the selection list, in this example the chosen VIP is bound to poola).
Finally, click SUBMIT to create the target:

Target Portal Group

Whenever a target is created the system automatically adds a target portal group with the name tpg1 to it. To check the underlying target configuration run the command targetcli ls from a shell prompt. This will display the current configuration in effect - note the configuration shown will only be for services running on that node, for example:

# targetcli ls
o- / .................................................................................. [...]
  o- backstores ....................................................................... [...]
  | o- block ........................................................... [Storage Objects: 0]
  | o- fileio .......................................................... [Storage Objects: 0]
  | o- pscsi ........................................................... [Storage Objects: 0]
  | o- ramdisk ......................................................... [Storage Objects: 0]
  o- iscsi ..................................................................... [Targets: 2]
  | o- iqn.2003-01.org.linux-iscsi.mgdeb1.x8664:sn.42770836239d ................... [TPGs: 1]
  |   o- tpg1 ........................................................ [no-gen-acls, no-auth]
  |     o- acls ................................................................... [ACLs: 0]
  |     o- luns ................................................................... [LUNs: 0]
  |     o- portals ............................................................. [Portals: 1]
  |       o- 10.6.19.21:3260 ........................................................... [OK]
  o- loopback .................................................................. [Targets: 0]
  o- vhost ..................................................................... [Targets: 0]
  o- xen-pvscsi ................................................................ [Targets: 0]
-
The newly created target will show up in the list of configured targets along with any description and the cluster node where the target is currently active. Clicking the down arrow to the left of the target name displays further information:
-
Next the ACL list for this target must be updated with the initiator name(s) of all Proxmox hosts wishing to use the backing store (otherwise a permission denied error will be returned to Proxmox when it attempts to use the storage). The initiator name is located in the file /etc/iscsi/initiatorname.iscsi on each Proxmox host:

# cat /etc/iscsi/initiatorname.iscsi
## DO NOT EDIT OR REMOVE THIS FILE!
## If you remove this file, the iSCSI daemon will not start.
## If you change the InitiatorName, existing access control lists
## may reject this initiator.  The InitiatorName must be unique
## for each iSCSI initiator.  Do NOT duplicate iSCSI InitiatorNames.
InitiatorName=iqn.1993-08.org.debian:01:4887dcc472ed

In this example the initiator name is iqn.1993-08.org.debian:01:4887dcc472ed. Click the +ADD button next to the ACL heading, enter the initiator details and click SUBMIT:
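To confirm the ACL is in place on the node currently running the service, targetcli can be queried again from a shell prompt. A minimal sketch using this guide's example IQN (output layout varies slightly between targetcli versions):

# List the ACLs under the target's portal group; the initiator IQN added above
# (iqn.1993-08.org.debian:01:4887dcc472ed in this example) should be listed
targetcli ls /iscsi/iqn.2003-01.org.linux-iscsi.mgdeb1.x8664:sn.42770836239d/tpg1/acls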
Add the storage to Proxmox
Once a target has been created in the cluster with the portal address of the VIP, along with ACLs for all Proxmox nodes requiring access, it can be added as storage to the Proxmox server.
-
In the Proxmox GUI, navigate to Datacenter -> Storage -> Add -> ZFS over iSCSI:
-
In the resulting dialog window fill in the fields with the required information:

Field Name          | Description                                                 | Example
ID                  | Descriptive name for this storage                           | HA-Storage
Portal              | The IP address of the VIP associated with the target        | 10.6.19.21
Pool                | The clustered pool with the Portal VIP address              | poola
Block Size          | Set to the ZFS blocksize parameter                          | 4k
Target              | The IQN created in the previous stage                       | iqn.2003-01.org.linux-iscsi.mgdeb1.x8664:sn.42770836239d
Nodes               | Valid Proxmox nodes                                         | All (No restrictions)
Enable              | Enable/Disable this storage                                 |
iSCSI Provider      | The iSCSI target implementation used on the remote machine | LIO
Thin provision      | Use ZFS thin provisioning                                   |
Target portal group | The TPG created automatically by LIO                        | tpg1

When done click Add.
-
The storage will now show up in the Proxmox storage table with the ID HA-Storage:
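Equivalently, the storage could be added from the Proxmox command line with pvesm instead of the GUI. This is a minimal sketch with this guide's example values - the option names mirror the fields in the dialog above, so verify them against your own Proxmox version:

# Sketch: add the ZFS over iSCSI storage from the CLI (same example values as the table above;
# "zfs" is the ZFS over iSCSI storage type and --sparse 1 enables thin provisioning)
pvesm add zfs HA-Storage \
    --portal 10.6.19.21 \
    --target iqn.2003-01.org.linux-iscsi.mgdeb1.x8664:sn.42770836239d \
    --pool poola \
    --blocksize 4k \
    --iscsiprovider LIO \
    --lio_tpg tpg1 \
    --sparse 1 \
    --content images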
Using the backing storage
To use clustered ZFS over iSCSI as the storage backend for a Proxmox VM or Container select the ID assigned when the storage was added to Proxmox. In the above example the ID assigned was HA-Storage, therefore select this from the available storage types for the Storage field:
Once the VM is finalised the corresponding LUN that Proxmox creates will be visible from the RSF-1 Webapp:
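The volume created by Proxmox can also be listed directly on the active cluster node with the zfs command. A minimal sketch using this guide's example pool (the exact volume name depends on the VM ID and disk number Proxmox assigns):

# List zvols on the clustered pool; Proxmox names them vm-<vmid>-disk-<n>
zfs list -t volume -r poola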
Adding an iSCSI Share to Proxmox
These steps show how to share a Zvol block device via iSCSI to a Proxmox host.
-
Create a Zvol for use as the target backing store. In the RSF-1 Webapp navigate to ZFS -> Zvols and add a new Zvol:
-
Select the pool for the zvol, enter the zvol name and set the desired size. Optionally set a compression method:
-
Click SUBMIT and the resulting zvol will be listed in the zvol table:
-
Navigate to Shares -> iSCSI in the RSF-1 Webapp and, if needed, enable iSCSI handling in the cluster:
-
Click ADD TARGET to add a new iSCSI target:
-
Enter the target details:

VIP Address - the VIP Address to be used as the portal to access this share
Share Type - select iSCSI Share to create a traditional iSCSI target
IQN - leave blank to have a target name generated automatically
Description - can be set to provide a more user-friendly name for the target
Zvol Name - select the Zvol created earlier (in this example proxmox-zvol-1)

VIP addresses and ZFS pools are bound together, so selecting the VIP also selects the pool the target will be associated with (this association is shown in the selection list, in this example the chosen VIP is bound to poola).

Click SUBMIT to create the target:
-
The newly created target will show up in the list of configured targets along with any description and the cluster node where the target is currently active. Clicking the down arrow to the left of the target name displays further information:
-
Next the ACL list for this target must be updated with the initiator name(s) of all Proxmox hosts wishing to use the backing store (otherwise a permission denied error will be returned to Proxmox when it attempts to use the storage). The initiator name is located in the file /etc/iscsi/initiatorname.iscsi on each Proxmox host:

# cat /etc/iscsi/initiatorname.iscsi
## DO NOT EDIT OR REMOVE THIS FILE!
## If you remove this file, the iSCSI daemon will not start.
## If you change the InitiatorName, existing access control lists
## may reject this initiator.  The InitiatorName must be unique
## for each iSCSI initiator.  Do NOT duplicate iSCSI InitiatorNames.
InitiatorName=iqn.1993-08.org.debian:01:4887dcc472ed

In this example the initiator name is iqn.1993-08.org.debian:01:4887dcc472ed. Click the +ADD button next to the ACL heading, enter the initiator details and click SUBMIT:

Repeat this step for all Proxmox hosts requiring access to the share.
-
The initiator name will be displayed in the ACLs:
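Before adding the storage, the target can optionally be verified from a Proxmox host using open-iscsi's discovery mode. A minimal sketch using this guide's example VIP (assumes the open-iscsi utilities are present on the Proxmox host):

# Query the portal for available targets; the IQN created above should be listed
iscsiadm -m discovery -t sendtargets -p 10.6.19.21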
Add the storage to Proxmox
Once a target has been created in the cluster with the portal address of the VIP, along with ACLs for all Proxmox nodes requiring access, it can be added as storage to the Proxmox server.
-
In the Proxmox GUI, navigate to Datacenter -> Storage -> Add -> iSCSI:
-
In the resulting dialog window fill in the fields with the required information:

Field Name        | Description                                                                               | Example
ID                | Descriptive name for this storage                                                         | RSF-iSCSI-114
Portal            | The IP address of the VIP associated with the target                                      | 10.6.19.21
Target            | The IQN created in the previous stage                                                     | iqn.2003-01.org.linux-iscsi.mgdeb1.x8664:sn.42770836239d
Nodes             | Valid Proxmox nodes                                                                       | All (No restrictions)
Enable            | Enable/Disable this storage                                                               |
Use LUNs directly | Use LUNs directly as VM disks instead of putting an LVM on top and using the LVs for VMs |

When done click Add.
-
The storage will now be shown in the Proxmox storage table with the ID RSF-iSCSI-114:
Using the backing store
To use clustered iSCSI as the storage backend for a Proxmox VM or Container select the ID assigned when the storage was added to Proxmox. In the above example the ID assigned was RSF-iSCSI-114, therefore select this from the available storage types for the Storage field.
Next complete the Disk image field - this will automatically be determined from the available LUNs the iSCSI target provides and will normally be a single available option, in this case CH 00 ID 0 LUN 0:
Adding an NFS Share to Proxmox
These steps show how to share a dataset poola/NFS1 via NFS to a Proxmox host.
-
In the RSF-1 Webapp navigate to ZFS -> Datasets and click CREATE DATASET:
-
Select the desired pool and provide a dataset name. Do not select Enable NFS share as this relates to the NFS server built into ZFS.
-
Next navigate to Shares -> NFS in the RSF-1 Webapp and, if needed, enable NFS share handling in the cluster:
-
Click + ADD to create a share using dataset poola/NFS1, the suggested export options are:

fsid=1 - ensures the filesystem has the same ID on all cluster nodes
rw - ensures the filesystem has read/write permission
no_root_squash - is required as the dataset created in the previous step has user/group ownership of root
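The export can optionally be checked from the Proxmox host before adding the storage; a minimal sketch assuming the standard NFS client utilities are installed on the Proxmox host:

# List the exports offered via the VIP; /poola/NFS1 should appear in the output
showmount -e 10.6.19.21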
Add the storage to Proxmox
-
In the Proxmox GUI, navigate to Datacenter -> Storage -> Add -> NFS:
-
In the resulting dialog window fill in the fields with the required information:

Field Name | Description                                                                      | Example
ID         | Descriptive name for this storage                                                | RSF-NFS1
Server     | The IP address of the VIP associated with the share                              | 10.6.19.21
Export     | Automatically populated by Proxmox by interrogating the NFS cluster export list  | /poola/NFS1
Nodes      | Valid Proxmox nodes                                                              | All (No restrictions)
Enable     | Enable/Disable this storage                                                      |

When done click Add.
-
The storage will now be shown in the Proxmox storage table with the ID RSF-NFS1:
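Once added, the storage can also be checked from the Proxmox command line; a minimal sketch using pvesm (the standard Proxmox storage management tool):

# The RSF-NFS1 storage should be listed as active along with its capacity figures
pvesm status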
Using the backing storage
To use clustered NFS as the storage backend for a Proxmox VM or Container, select the ID assigned when the storage was added to Proxmox. In the above example the ID assigned was RSF-NFS1, therefore select this from the available options in the Storage field:
Once the VM is finalised the corresponding disk image will be visible on the NFS share:
# ls -lh /poola/NFS1/images/114/
total 619185
-rw-r----- 1 root root 32.1G Apr 10 11:43 vm-114-disk-0.qcow2
Adding an SMB Share to Proxmox
These steps show how to share a dataset poola/SMB1 via SMB to a Proxmox host. The SMB authentication method used is User (defined as a local UNIX user that must exist on all cluster nodes with the same UID and GID).
-
If you haven't already done so, create a Unix user/group for the Proxmox SMB share from the WebApp. To do so navigate to System -> Unix Groups and click +ADD:

Next navigate to System -> Unix Users and click +ADD:

Select Enable SMB support for user when creating an SMB authentication user.
-
In the RSF-1 Webapp navigate to ZFS -> Datasets and click CREATE DATASET. In this example the user and group ownership of the dataset has been set to the proxmox user created in the previous step:
-
Next navigate to Shares -> SMB in the Webapp and select the shares tab:

Share Name - is the share identifier and will be displayed by Proxmox when selecting available shares
Path - should reference the dataset created in the previous step
Valid Users - should contain the user created for this share
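The share can optionally be verified from the Proxmox host before adding the storage; a minimal sketch assuming the smbclient package is installed on the Proxmox host (it is not required for the Proxmox SMB/CIFS storage itself):

# List the shares offered via the VIP as the proxmox user; SMB1 should appear
smbclient -L //10.6.19.21 -U proxmox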
Add the storage to Proxmox
-
In the Proxmox GUI, navigate to Datacenter -> Storage -> Add -> SMB/CIFS:
-
In the resulting dialog window fill in the fields with the required information:

Field Name | Description                                                                      | Example
ID         | Descriptive name for this storage                                                | RSF-SMB1
Server     | The IP address of the VIP associated with the share                              | 10.6.19.21
Username   | Should correspond to the Valid Users field from the SMB share                    | proxmox
Password   | The user's password set on the SMB server when the user was created              |
Share      | Automatically populated by Proxmox by interrogating the SMB cluster export list  | SMB1
Nodes      | Valid Proxmox nodes                                                              | All (No restrictions)
Enable     | Enable/Disable this storage                                                      |

When done click Add.
-
The storage will now be shown in the Proxmox storage table with the ID RSF-SMB1:
Using the backing store
To use clustered SMB as the storage backend for a Proxmox VM or Container, select the ID assigned when the storage was added to Proxmox. In the above example the ID assigned was RSF-SMB1, therefore select this from the available options in the Storage field:
Once the VM is finalised the corresponding disk image will be visible on the SMB share:
# ls -l /poola/SMB1/images/114/
total 59428
-rwxr--r-- 1 proxmox proxmox 34365243392 Apr 25 16:04 vm-114-disk-0.qcow2
Troubleshooting
Could not open <device-path>
When using ZFS over iSCSI, an error is returned when creating a VM or Container of the form:
Could not open /dev/<poolname>/vm-<id>-disk-0
This indicates that the device links used by Proxmox to add backing store to the iSCSI configuration do not exist. This is a known Proxmox issue and is resolved by ensuring the cluster setting Support for Proxmox including ZFS over iSCSI is enabled - see this section for details.
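A quick way to confirm whether the expected device links exist on the cluster node currently running the service; a minimal sketch using this guide's example pool:

# The vm-<id>-disk-<n> links should be present under the pool's device directory
ls -l /dev/poola/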
Cannot configure StorageObject because device <device-path> is already in use
If, while RSF-1 is restoring an iSCSI configuration, the iSCSI subsystem encounters an error when attempting to open a backing store for exclusive use, the error is logged in the rsfmon log file and the service is marked as broken_safe (to avoid restoring an incomplete configuration). The RSF-1 log file entry will look similar to the following:
Cannot configure StorageObject because device /dev/iscsi01/vm-<id>-disk is already in use
This indicates that some other process has taken exclusive access to the backing store and the iSCSI subsystem is unable to reserve it for itself. One common reason for this is that LVM has detected mountable Logical Volumes and has therefore opened and mounted those devices on the cluster node, blocking access for any other process.
To resolve this issue LVM should ignore any ZFS zvols. On all cluster nodes edit the file /etc/lvm/lvm.conf and add/update a filter line of the form:
filter = [ "r|/dev/zvol|", "r,/dev/zd.*,", "a/.*/" ]
This filter tells LVM to ignore devices starting with /dev/zvol and /dev/zd.
Once the /etc/lvm/lvm.conf file has been updated the node should be rebooted to apply the filter.
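After the reboot the filter can be verified; a minimal sketch (lvmconfig is part of the LVM2 tools, and the exact output depends on your lvm.conf):

# Show the filter LVM is actually using
lvmconfig devices/filter

# List physical volumes; no /dev/zd* or /dev/zvol devices should appear
pvs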