Upgrading an RSF-1 Cluster

Getting Your Node Ready for Upgrade

The first step is to set your services to manual mode on both nodes; this stops the services from performing any unwanted failovers or migrations during the upgrade. To do this via the webapp, go to Cluster Control > Cluster Actions > Set Services to manual on all cluster nodes.

QS image 1

To do this using the CLI, run the following command. Note that this needs to be run for every running service on every node.

# /opt/HAC/RSF-1/bin/hacli service manual --name <servicename> --node <nodename>
{
  "timeout": 60,
  "errorMsg": "",
  "execTime": 0.032,
  "error": false,
  "output": "Putting <servicename> in manual mode on appliance <nodename>"
}
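
If you have more than one service, a short shell loop saves repeating the command by hand. The following is only a minimal sketch; the service names pool1/pool2 and node names mgc71/mgc72 are the examples used throughout this guide, so substitute your own:

for svc in pool1 pool2; do
  for node in mgc71 mgc72; do
    /opt/HAC/RSF-1/bin/hacli service manual --name "$svc" --node "$node"
  done
done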

Once your services are set to manual, you can upgrade the node where your services are not running, without any risk of the services moving unexpectedly.

After upgrading, check that your upgraded node is connected and communicating with the other node(s).

QS image 2

You can also check that you are now running the latest version in the webapp by going to Help > About.

QS image 3

To check the cluster status, and which version each node is running, via the CLI, run the following command:

# /opt/HAC/RSF-1/bin/hacli cluster info
{
  "timeout": 40,
  "errorMsg": "",
  "execTime": 0.053,
  "error": false,
  "output": {
    "bootstrap": 0,
    "cacheTime": 1,
    "clusterName": "mgc7cluster",
    "crc": "d9ac",
    "description": "No description given",
    "fcMonitoringEnabled": false,
    "health": {
      "alerts": [],
      "clusterHealth": "OK",
      "networkHeartbeatsHealth": "OK",
      "nodesHealth": "OK",
      "servicesHealth": "OK"
    },
    "networkHeartbeats": [
      {
        "dstNode": "mgc71",
        "srcNode": "mgc72",
        "status": "up"
      },
      {
        "dstNode": "mgc72",
        "srcNode": "mgc71",
        "status": "up"
      }
    ],
    "networkMonitoringEnabled": true,
    "nodes": [
      {
        "expireTime": "1970-01-01T00:00:00Z",
        "hostId": "543c4373",
        "ipAddress": "10.6.7.1",
        "licenseStatus": "0",
        "lineType": "V",
        "machineId": "703ACFCA-C165-4D89-859D-C1414462F0A1",
        "machineName": "mgc71",
        "nodeState": "up",
        "releaseDate": "2021-08-18T07:48:00Z",
        "releaseName": "1.4.9",
        "releasePatch": "p",
        "releaseString": "1.4.9",
        "releaseVersion": "1.4.9"
      },
      {
        "expireTime": "1970-01-01T00:00:00Z",
        "hostId": "70b2ac95",
        "ipAddress": "10.6.7.2",
        "licenseStatus": "0",
        "lineType": "V",
        "machineId": "E3B47E94-8EC2-408B-AC80-4FE8A8905269",
        "machineName": "mgc72",
        "nodeState": "up",
        "releaseDate": "2021-08-18T07:48:00Z",
        "releaseName": "1.5.0",
        "releasePatch": "p",
        "releaseString": "1.5.0",
        "releaseVersion": "1.5.0"
      }
    ],
    "pollTime": "1",
    "serialHeartbeatEnabled": false,
    "services": [
      {
        "quickStat": "mgc71:running",
        "serviceName": "pool1",
        "status": [
          {
            "node": "mgc71",
            "status": "running"
          },
          {
            "node": "mgc72",
            "status": "stopped"
          }
        ]
      },
      {
        "quickStat": "mgc71:running",
        "serviceName": "pool2",
        "status": [
          {
            "node": "mgc71",
            "status": "running"
          },
          {
            "node": "mgc72",
            "status": "stopped"
          }
        ]
      }
    ],
    "support_expiry_date": "",
    "vips": []
  }
}
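
Rather than scanning the full JSON by eye, the relevant fields can be pulled out directly. These one-liners are a sketch that assumes jq is installed; the paths match the output shown above:

# /opt/HAC/RSF-1/bin/hacli cluster info | jq -r '.output.nodes[] | "\(.machineName): \(.nodeState), \(.releaseVersion)"'
mgc71: up, 1.4.9
mgc72: up, 1.5.0

# /opt/HAC/RSF-1/bin/hacli cluster info | jq -r '.output.health.clusterHealth'
OK

Against the example output above, the first command makes the half-upgraded state easy to spot: mgc71 is still on 1.4.9 while mgc72 has been upgraded to 1.5.0.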

Once you have confirmed everything is working as expected, move your services over to the upgraded node. This can be done via the webapp Cluster Control page.

QS image 4

To achieve this via the CLI, run this command for each active service:

# /opt/HAC/RSF-1/bin/hacli service move --name <servicename> --dest <nodename>
{
  "timeout": 60,
  "errorMsg": "",
  "execTime": 7.051,
  "error": false,
  "output": "Service <servicename> is now moving to node <nodename>"
}
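
Moving every service can be scripted in the same way. The sketch below assumes jq is installed; it reads the service names from hacli cluster info, moves each one to the example destination node mgc72, and then polls quickStat until everything reports running there:

DEST=mgc72   # substitute your upgraded node
HACLI=/opt/HAC/RSF-1/bin/hacli
for svc in $("$HACLI" cluster info | jq -r '.output.services[].serviceName'); do
  "$HACLI" service move --name "$svc" --dest "$DEST"
done
# Wait for every service's quickStat to show <destination>:running
until "$HACLI" cluster info | jq -e --arg d "$DEST" \
    '[.output.services[].quickStat] | all(. == $d + ":running")' >/dev/null; do
  sleep 5
done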

Once moved, you can repeat the process on the remaining node: upgrade it, then finally check that the cluster is healthy and reporting the latest version on all nodes.