COMSTAR and OmniOS Configuration

COMSTAR Target

Configuring COMSTAR as an iSCSI target host for Solaris, OmniOS, OpenIndiana, Illumos etc.

This document describes how to configure OmniOS as a highly available iSCSI target host in an RSF-1 clustered environment. In this example a previously created ZFS pool named pool1 is used as backing store for the iSCSI targets. This pool is in turn clustered on two nodes, live01 and live02. In the cluster, iSCSI targets are exposed to clients (initiators) via a Virtual IP address or VIP (also referred to as a floating IP address). This virtual IP address is bound to the backing store and moves, or floats, with the storage as it fails over between cluster nodes.

The combination of backing store, application (iSCSI in this case) and virtual IP address is referred to as an RSF-1 clustered service.

OmniOS uses the COMSTAR[1] framework to provide its iSCSI services. For clustering, the utility /opt/HAC/RSF-1/bin/stmfha is used to configure COMSTAR rather than the system-supplied utilities stmfadm and itadm. This is because the actions of stmfha are performed cluster-wide, unlike the system utilities, which operate on a single node only. The stmfha command implements a superset of the operations available in both stmfadm and itadm.

Note that in this walkthrough, pool-specific operations are performed on the host on which the clustered service (and, by implication, the pool) is running.
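
A quick way to confirm which node that is, is to check where the pool is imported; on the node without the pool, zpool reports cannot open 'pool1': no such pool:

    # zpool list pool1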

  1. On both nodes install and enable the iSCSI target package:

    # pkg install network/iscsi/target
    # svcadm enable svc:/system/stmf:default
    # svcadm enable svc:/network/iscsi/target:default
    
    If the required package is already installed, you will receive the message No updates necessary for this image.
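
    To verify that both services came online, query them with svcs (the STIME values shown here are illustrative only):

    # svcs stmf iscsi/target
    STATE          STIME    FMRI
    online         12:01:10 svc:/system/stmf:default
    online         12:01:12 svc:/network/iscsi/target:default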

  2. Create a ZFS block device (zvol) to use as the backing storage for the iSCSI target. The -V option is required to create a volume of the given size (without it the zfs create command would attempt to create a ZFS file system rather than a volume). In this example the volume zvol1 is created as part of storage pool pool1 with a size of 1GB[2]:

    # zfs create -V 1G pool1/zvol1
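
    To confirm the volume was created with the requested size, query its volsize and volblocksize properties (8K is the ZFS default block size; yours may differ):

    # zfs get volsize,volblocksize pool1/zvol1
    NAME         PROPERTY      VALUE  SOURCE
    pool1/zvol1  volsize       1G     local
    pool1/zvol1  volblocksize  8K     default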
    

  3. Create a Target Portal Group (TPG) using the VIP address configured for use with pool1, along with a port on which iSCSI services will listen for incoming requests from clients. In this example the VIP address is 192.168.5.10 and the port is 3260 (the default port for iSCSI services as documented in RFC 3720):

    # stmfha create-tpg TPG01 192.168.5.10:3260
    live01:
      create-tpg: TPG01 successfully created
    
    live02:
      create-tpg: TPG01 successfully created
    
    Because this is a clustered iSCSI configuration, the target portal group is created on both nodes in the cluster - this symmetric configuration is required for iSCSI failover. Note that the TPG will only be active on one node at any one time, because the cluster virtual IP address it was created with is only ever plumbed on the node running the service.

    To check the TPGs configured in the cluster, run the following command (note the -v verbose argument to retrieve as much detail as possible):
    # stmfha list-tpg -v
    live01:
    TARGET PORTAL GROUP           PORTAL COUNT
    TPG01                         1
        portals:    192.168.5.10:3260
    
    live02:
    TARGET PORTAL GROUP           PORTAL COUNT
    TPG01                         1
        portals:    192.168.5.10:3260
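
    As a sanity check, ipadm can confirm which node currently holds the VIP (the address object name e1000g0/vip shown here is hypothetical and will depend on your network configuration):

    # ipadm show-addr | grep 192.168.5.10
    e1000g0/vip       static   ok           192.168.5.10/24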
    

  4. Create an iSCSI target. In this example CHAP authentication is used (--auth-method chap), meaning authentication is required to connect to this target. The target has been given the alias zvol1-iscsi to assist in identifying it, and finally it is associated with the TPG created in the previous step (--tpg TPG01 - note that when creating a target it is possible to specify membership of multiple TPGs using the format --tpg TPG01,TPG03,ACCTPG):

    # stmfha create-target --auth-method chap --chap-secret not-so-secret --alias zvol1-iscsi --tpg TPG01
    live01:
      Target iqn.1995-10.com.high-availability:02:b11f6a06-c9bd-cfeb-ea26-885a25d080c4 successfully created
    
    live02:
      Target iqn.1995-10.com.high-availability:02:b11f6a06-c9bd-cfeb-ea26-885a25d080c4 successfully created
    
    To list the targets:
    # stmfha list-target -v
    live01:
    TARGET NAME                                                  STATE    SESSIONS
    iqn.1995-10.com.high-availability:02:b11f6a06-c9bd-cfeb-ea26-885a25d080c4       online   0
            alias:                  zvol1-iscsi
            auth:                   chap
            targetchapuser:         -
            targetchapsecret:       set
            tpg-tags:               TPG01 = 2
    
    live02:
    TARGET NAME                                                  STATE    SESSIONS
    iqn.1995-10.com.high-availability:02:b11f6a06-c9bd-cfeb-ea26-885a25d080c4       online   0
            alias:                  zvol1-iscsi
            auth:                   chap
            targetchapuser:         -
            targetchapsecret:       set
            tpg-tags:               TPG01 = 2
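
    Note that initiators will need the matching CHAP secret configured before they can log in to this target. On a Solaris-family initiator this is done with iscsiadm (the second command prompts for the secret); other operating systems have equivalent mechanisms:

    # iscsiadm modify initiator-node --authentication CHAP
    # iscsiadm modify initiator-node --CHAP-secret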
    

  5. Create a target group (TG) to which your target will be added.
    # stmfha create-tg TG01
    live01:
      Target group created
    
    live02:
      Target group created
    
    To list the target groups:
    # stmfha list-tg -v
    live01:
        Target Group: TG01
    
    live02:
        Target Group: TG01
    
  6. Next associate the newly created target group with the target. To do this the target must first be offlined:
    # stmfha offline-target iqn.1995-10.com.high-availability:02:b11f6a06-c9bd-cfeb-ea26-885a25d080c4
    live01:
      Target offlined
    
    live02:
      Target offlined
    
    Now add your target to the target group:
    # stmfha add-tg-member --group-name TG01 iqn.1995-10.com.high-availability:02:b11f6a06-c9bd-cfeb-ea26-885a25d080c4
    live01:
      Target group member added
    
    live02:
      Target group member added
    
    Listing the target groups should now show the target as a member:
    # stmfha list-tg -v
    
    live01:
    Target Group: TG01
            Member: iqn.1995-10.com.high-availability:02:b11f6a06-c9bd-cfeb-ea26-885a25d080c4
    
    live02:
    Target Group: TG01
            Member: iqn.1995-10.com.high-availability:02:b11f6a06-c9bd-cfeb-ea26-885a25d080c4
    
    Finally bring your target back online:
    # stmfha online-target iqn.1995-10.com.high-availability:02:b11f6a06-c9bd-cfeb-ea26-885a25d080c4
    live01:
      Target onlined
    
    live02:
      Target onlined
    
  7. Create a logical unit using the zvol created earlier (/dev/zvol/rdsk/pool1/zvol1 - note the full path of the zvol should be provided to the create-lu subcommand). The rdsk path is used rather than dsk because raw disk devices transfer data to and from the disk directly, whereas block devices stage data in an in-memory buffer cache and flush it to disk some time later; at a minimum this hurts performance, and it also increases the chance of data loss, as data held in memory is lost on system failure (the example below shows the difference between the two device nodes).

    The logical unit must be created on the node the service is running on (i.e. the node on which the pool is imported). In the previous steps the iSCSI component parts (target groups, target portal groups etc.) were created on both nodes, as that part of the iSCSI configuration can, and should, be shared across the cluster. However, because the logical unit references the underlying physical volume, it is only created, and is only visible, on one node at any one time - the cluster will handle migration of the logical unit as part of failover.
    # stmfha create-lu /dev/zvol/rdsk/pool1/zvol1
    live01:
      Logical unit created: 600144F09CF9DD070000614DEADE0001
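
    The difference between the dsk and rdsk paths can be seen from the device node type - dsk is a block (b) device while rdsk is a character, or raw, (c) device (size and date columns trimmed here for brevity):

    # ls -lL /dev/zvol/dsk/pool1/zvol1 /dev/zvol/rdsk/pool1/zvol1
    brw-------   1 root     sys   ... /dev/zvol/dsk/pool1/zvol1
    crw-------   1 root     sys   ... /dev/zvol/rdsk/pool1/zvol1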
    
  8. Finally add a view to the logical unit 600144F09CF9DD070000614DEADE0001 using the target group TG01, again on the node where the service is running. A view associates a host group, a target group and a logical unit number with a logical unit. A host group is a group of initiators that are allowed access to the logical unit - when unspecified, as in this example, access is granted to all initiators. The same applies when no target group is specified. If no logical unit number is specified the system automatically assigns one:
    # stmfha add-view -t TG01 600144F09CF9DD070000614DEADE0001
    live01:
      600144F09CF9DD070000614DEADE0001: view entry 0 created for LUN 0
    
    By creating this view, all targets declared in target group TG01 have access to the logical unit; any of those targets that also appear in a target portal group (TPG01 in this example) make the logical unit discoverable to external initiators.

    Use the list-lu subcommand to check the completed iSCSI view:
    # stmfha list-lu -v
    live01:
    LU Name: 600144F09CF9DD070000614DEADE0001
        Operational Status : Online
        Provider Name      : sbd
        Alias              : /dev/zvol/rdsk/pool1/zvol1
        View Entry Count   : 1
         View-entry 0      : Host group 'all' Target group 'TG01' LUN '0'
        Data File          : /dev/zvol/rdsk/pool1/zvol1
        Meta File          : not set
        Size               : 1073741824
        Block Size         : 512
        Management URL     : not set
        Vendor ID          : SUN
        Product ID         : COMSTAR
        Serial Num         : not set
        Write Protect      : Disabled
        Writeback Cache    : Disabled
        Access State       : Active
    
    live02:
    
    Note that the view only exists on the node where the service is running. When the cluster fails the service over to another node, part of the startup procedure will recreate the view there.

    In the above example a single view has been created, labeled View-entry 0. Because no host group was specified, the wildcard all is displayed and the system has assigned logical unit number 0.

Inspecting the configuration

Once the target has been created, the stmfha command can be used to inspect the configuration:

  1. First of all list the targets in the system:

    # stmfha list-target -v
    live01:
    TARGET NAME                                                  STATE    SESSIONS
    iqn.1995-10.com.high-availability:02:b11f6a06-c9bd-cfeb-ea26-885a25d080c4       online   0
            alias:                  zvol1-iscsi
            auth:                   chap
            targetchapuser:         -
            targetchapsecret:       set
            tpg-tags:               TPG01 = 2
    
    live02:
    TARGET NAME                                                  STATE    SESSIONS
    iqn.1995-10.com.high-availability:02:b11f6a06-c9bd-cfeb-ea26-885a25d080c4       online   0
            alias:                  zvol1-iscsi
            auth:                   chap
            targetchapuser:         -
            targetchapsecret:       set
            tpg-tags:               TPG01 = 2
    
    This shows the targets available on both systems, using CHAP authentication, and belonging to the target portal group TPG01.

  2. Next list the target portal groups so the targets can be tied to an IP/port address:

    # stmfha list-tpg -v
    
    live01:
    TARGET PORTAL GROUP           PORTAL COUNT
    TPG01                         1
        portals:    192.168.5.10:3260
    
    live02:
    TARGET PORTAL GROUP           PORTAL COUNT
    TPG01                         1
        portals:    192.168.5.10:3260
    
    This shows us the target iqn.1995-10.com.high-availability:02:b11f6a06-c9bd-cfeb-ea26-885a25d080c4 can be accessed via IP address 192.168.5.10 on port 3260.

  3. At this point we know the target and we know the IP address/port it will be discoverable on. Next check which target groups the target is a member of:

    # stmfha list-tg -v
    
    live01:
    Target Group: TG01
            Member: iqn.1995-10.com.high-availability:02:b11f6a06-c9bd-cfeb-ea26-885a25d080c4
    
    live02:
    Target Group: TG01
            Member: iqn.1995-10.com.high-availability:02:b11f6a06-c9bd-cfeb-ea26-885a25d080c4
    
    This shows us the target is a member of target group TG01.

  4. Now list the logical units to show the views configured on them, along with the associated target groups:

    # stmfha list-lu -v
    
    live01:
    LU Name: 600144F09CF9DD070000614DEADE0001
        Operational Status : Online
        Provider Name      : sbd
        Alias              : /dev/zvol/rdsk/pool1/zvol1
        View Entry Count   : 1
         View-entry 0      : Host group 'all' Target group 'TG01' LUN '0'
        Data File          : /dev/zvol/rdsk/pool1/zvol1
        Meta File          : not set
        Size               : 1073741824
        Block Size         : 512
        Management URL     : not set
        Vendor ID          : SUN
        Product ID         : COMSTAR
        Serial Num         : not set
        Write Protect      : Disabled
        Writeback Cache    : Disabled
        Access State       : Active
    
    live02:
    
    In this example the zvol /dev/zvol/rdsk/pool1/zvol1 has one view, which references target group TG01, and that target group has target iqn.1995-10.com.high-availability:02:b11f6a06-c9bd-cfeb-ea26-885a25d080c4 as its member. Finally, as that target is discoverable via the target portal group TPG01 on IP address 192.168.5.10, port 3260, the path an initiator takes to the underlying storage can be established.
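
    As a final check the target can be discovered from a client. The following is a minimal sketch assuming a Solaris-family initiator with the iSCSI initiator service enabled and the CHAP secret from step 4 already configured; output is trimmed to the target line:

    # iscsiadm add discovery-address 192.168.5.10:3260
    # iscsiadm modify discovery --sendtargets enable
    # iscsiadm list target
    Target: iqn.1995-10.com.high-availability:02:b11f6a06-c9bd-cfeb-ea26-885a25d080c4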

Troubleshooting

My service is going "broken_unsafe" after creating iSCSI Logical units.

This could be because an incorrect device path was used when creating your logical units. Ensure the path begins with /dev/zvol/rdsk (not /dev/zvol/dsk).
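
To check which device path an existing logical unit was created with, inspect the Data File field in the verbose logical unit listing:

    # stmfha list-lu -v | grep 'Data File'
        Data File          : /dev/zvol/rdsk/pool1/zvol1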


  1. COmmon Multiprotocol SCSI TARget 

  2. When creating a volume, the size specified is automatically rounded up by ZFS to the nearest 128 Kbytes.