HA Monitor Guide 1.0.7

2021-04-30 17:09

1. Overview

This document describes the RSF-1 external resource availability monitor (referred to in this document as “the monitor”), a software extension for RSF-1 clusters that monitors the end-point availability of clustered HA resources (NFS, SMB, etc) from the perspective of a consumer of those services on the network.

The monitor runs in a docker container on a host machine located anywhere on the network where monitoring the availability of cluster services is desired. Note that this host must have network access to the cluster resources being monitored as the docker container internally mounts any NFS or SMB shares to be monitored through the network stack of the docker host.

Once the desired resources have been configured they are continually monitored for availability. Should availability be lost (or regained, where it was previously lost), the monitor can send a number of differing types of alert depending upon what has been configured (alert types include SNMP, Email, Slack, Teams, etc.).



2. Installation and Upgrade

2.1 Requirements

The resource availability monitor is delivered as a self-contained docker image that uses the Docker Content Trust (DCT) system to ensure the integrity and publisher of all the data downloaded. The image can be installed on any host running docker. Note that at present docker only supports IPv6 on hosts running Linux; if IPv6 monitoring is required, i.e. monitoring of shared resources over IPv6, then a Linux derivative should be chosen as the docker host.

In this document the host actually running the monitor container is referred to as the docker host. This host must have network access to any cluster resources to be monitored so it in turn can make those resources available to the container running the monitor itself.

For example, to monitor a clustered NFS share, the docker host must be able to mount and access that share, as the docker container requires access in order to successfully monitor that resource's availability. Also note that the user running the monitor docker image must belong to the docker group on the host OS (this is a requirement of docker).

Running under docker, the monitor tool requires a persistent location to store its data, logs and ancillary files. This location is provided by the host OS and mapped to the directory /tmp/hamonitor when the container is started (see section 3 Starting the Monitor). This way existing configurations, logs and other components are preserved during upgrades, migrations etc.
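The host-side directory can be prepared in advance. The sketch below is illustrative only (the function name and the paths used are not part of the product); any local filesystem directory will do.

```shell
# Prepare a persistent data directory on the docker host. The path is
# site-specific - whatever is chosen is later passed to docker run via
# the --volume argument.
make_monitor_dir() {
    dir="$1"
    # install -d creates the directory (with parents); -m 750
    # restricts access to the owner and group
    install -d -m 750 "$dir"
}

# Example, using a scratch location rather than a system path:
make_monitor_dir /tmp/hamonitor-data
ls -ld /tmp/hamonitor-data
```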

2.2 Installation

The monitor is distributed from a docker content trust server and is installed as follows:

  1. Download and install the docker application package from www.docker.com onto the docker host machine. Start the docker daemon on the host machine.
  2. Enable docker trust; docker uses environment variables to modify its behaviour, therefore to enable trust set the following:
    Unix command shell:
    # export DOCKER_CONTENT_TRUST=1

    Windows PowerShell:
    $Env:DOCKER_CONTENT_TRUST=1
  3. Set the high-availability notary server for the trust system, again using an environment variable:
    Unix command shell:
    # export DOCKER_CONTENT_TRUST_SERVER=https://notary-server.high-availability.com:4443

    Windows PowerShell:
    $Env:DOCKER_CONTENT_TRUST_SERVER="https://notary-server.high-availability.com:4443"
  4. In order to use the docker registry for trusted downloads it is necessary to have a username/password combination - this should be requested by emailing docker-trust@high-availability.com.
  5. Using the user/password combination retrieved in the previous step, login to the docker framework:
    # docker login dkr.high-availability.com
    The docker login subcommand will prompt for a username and password. Note that once you have successfully logged into the server, docker saves a login token locally in the user's home directory in the file .docker/config.json, thereby avoiding the need for this user to log in again. The token can be cleared using the docker logout subcommand.
  6. Inspect the list of signed monitor images available from the registry:
    # docker trust inspect --pretty dkr.high-availability.com/hamonitor

    Here is some example output showing two signed versions of the monitor:
    Signatures for dkr.high-availability.com/hamonitor

    SIGNED_TAG DIGEST                                                        SIGNERS
    v1.0       46d706ebead9e7746b3c1ffcbc2247562d035a5eed85410dc54eebe5c1aed hacsigner
    v1.1       eaea423478652348753463487563487563324856473285763434876538475 hacsigner

    List of signers and their keys for dkr.high-availability.com/hamonitor

    SIGNER    KEYS
    hacsigner 346c3155f96c

    Administrative keys for dkr.high-availability.com/hamonitor

    Repository Key: 2058e5cfcc725b7b00607c60941e6105d5709ab20d56278ac3a4e1dd386c0

    Root Key:       a6ae8b3cc1ac73aa34d4237d6c0562fb5d69c5eaa3ab9a9e78b2ac3ecce93c39
  7. Download the desired version:
    # docker pull dkr.high-availability.com/hamonitor:v1.0

    Note that with the trust framework enabled, the docker command line tool takes care of validating the digest of the images downloaded and checks against the official signatures held by the notary server. Any image that has not been signed and verified will be blocked from download.

    The output from the docker pull should look similar to the following:
    Pull (1 of 1): dkr.high-availability.com/hamonitor:v1.0@sha256:46d706ebead9e7746b3c1ffcbc2247562d038865a5eed85410dc54eebe5c1aed

    sha256:46d706ebead9e7746b3c1ffcbc2247562d038865a5eed85410dc54eebe5c1aed:

    Pulling from hamonitor

    ab3acf868d91: Pull complete
    bf8f3d9e8100: Pull complete
    4cf71b2b4422: Pull complete
    668c80dc67a6: Pull complete
    1b527012fdfd: Pull complete
    ade8b6ab4354: Pull complete
    4849fab77f68: Pull complete
    ccedab781a09: Pull complete

    Digest: sha256:46d706ebead9e7746b3c1ffcbc2247562d038865a5eed85410dc54eebe5c1aed

    Status: Downloaded newer image for dkr.high-availability.com/hamonitor@sha256:46d706ebead9e7746b3c1ffcbc2247562d038865a5eed85410dc54eebe5c1aed

    Tagging dkr.high-availability.com/hamonitor@sha256:46d706ebead9e7746b3c1ffcbc2247562d038865a5eed85410dc54eebe5c1aed as dkr.high-availability.com/hamonitor:v1.0

    dkr.high-availability.com/hamonitor:v1.0
  8. Confirmation of the downloaded image can be performed by comparing the digest of the local images with those held by the remote trust server (use the trust inspect docker subcommand as detailed earlier to retrieve the remote image digests). To list the digest of locally installed images use the command:
    # docker images --digests

    The output will look similar to the following - the DIGEST field should correspond to the digest listed by the trust server:
    REPOSITORY                           TAG   DIGEST                                                                   IMAGE ID      CREATED      SIZE
    dkr.high-availability.com/hamonitor  v1.0  sha256:46d706ebead9e7746b3c1ffcbc2247562d038865a5eed85410dc54eebe5c1aed  fc254100c107  4 weeks ago  768MB
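The comparison in the final step can be scripted. The sketch below is illustrative and not part of the product: it only compares two digest strings (one from docker images --digests, one from docker trust inspect), normalising the optional sha256: prefix first so the two formats need not match.

```shell
# Compare a locally reported image digest against the digest published
# by the trust server. Both values are passed in as plain strings; the
# sha256: prefix is stripped if present.
digests_match() {
    local_digest="${1#sha256:}"
    trusted_digest="${2#sha256:}"
    if [ "$local_digest" = "$trusted_digest" ]; then
        echo "MATCH"
    else
        echo "MISMATCH"
    fi
}

# Example with two illustrative digest values:
digests_match \
    "sha256:46d706ebead9e7746b3c1ffcbc2247562d038865a5eed85410dc54eebe5c1aed" \
    "46d706ebead9e7746b3c1ffcbc2247562d038865a5eed85410dc54eebe5c1aed"
# prints MATCH
```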

2.2.1 Offline (dark site) installation

Installing the monitor on hosts that have no external internet connection (and thus cannot connect to the docker trust server) necessitates a two-step approach, the result of which is an offline image that can then be used to install the monitor on any host, regardless of whether or not it has external connectivity.

The first step in creating the image is to designate a download host (with internet connectivity) that can be used to download the monitor as described in the previous section. Once that is accomplished the next step is to create an image that can be shipped to the non-connected hosts and installed locally. Creating an image is accomplished as follows:

  1. On the host where the monitor has been downloaded create an image of the monitor:
    # docker image save -o hamonitor_v1.0.tar dkr.high-availability.com/hamonitor:v1.0
  2. The newly created tar file (in this example hamonitor_v1.0.tar) can then be copied to any host and installed using the following command:
    # docker load -i hamonitor_v1.0.tar
  3. Finally, on the host where the image was loaded, check that the image ID is the same; output will be similar to the following:
    # docker images --digests

    REPOSITORY                           TAG   DIGEST  IMAGE ID      CREATED       SIZE
    dkr.high-availability.com/hamonitor  v1.0  <none>  fc254100c107  3 months ago  768MB

The monitor is now installed and ready to run.
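The steps above can be collected into a single dry-run script. The sketch below only prints the commands to run rather than executing them; the dark-site hostname "darkhost" is a placeholder.

```shell
# Dry-run sketch of the offline installation sequence. Nothing is
# executed; each step is echoed so the sequence can be reviewed and
# adapted. "darkhost" stands in for the non-connected host.
IMAGE="dkr.high-availability.com/hamonitor:v1.0"
TARBALL="hamonitor_v1.0.tar"

echo "docker image save -o $TARBALL $IMAGE"  # 1. on the connected host
echo "scp $TARBALL darkhost:/tmp/$TARBALL"   # 2. ship to the dark site
echo "docker load -i /tmp/$TARBALL"          # 3. on the dark-site host
echo "docker images --digests"               #    then verify the image ID
```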

2.2.2 Understanding the <none> digest column for loaded images

When an image is loaded from an image file (created using the docker image save command), the digest field will always be shown as <none>. Understanding why first requires some background on where the digest field originates.

When an image is pushed to a docker registry, the layers that make up that image are transferred in an uncompressed format. When docker saves those layers, it saves them in a compressed format. Once all the layers that make up an image have been received and stored in compressed format, docker creates an image manifest listing all the layers together with a SHA256 checksum for each compressed layer. Once the manifest has been created, a digest is calculated and the image tag is signed. The signatures and digest can be seen by inspecting the trust information for the image using the command:

# docker trust inspect --pretty dkr.high-availability.com/hamonitor:v1.0

The resulting output will look similar to the following:

Signatures for dkr.high-availability.com/hamonitor:v1.0

SIGNED TAG   DIGEST                                                             SIGNERS
v1.0         46d706ebead9e7746b3c1ffcbc2247562d038865a5eed85410dc54eebe5c1aed   hacsigner

List of signers and their keys for dkr.high-availability.com/hamonitor:v1.0

SIGNER      KEYS
hacsigner   96d8fb669c3e

Administrative keys for dkr.high-availability.com/hamonitor:v1.0

  Repository Key: 619faa1dc970583f7b366fe68ecfa48b5e6cd5b07ccea2647d5c0ab7bb50191e
  Root Key: 4dd62ed17d61204196017fff4c1e9f2ded9508d9bfbdf5321a10f401b555a414


The digest listed for the signed tag v1.0 is the same as the one shown when requesting digests for locally installed images using docker images --digests.

The process described so far details how an image is uploaded to a docker registry and verified using the trust framework. When that image is downloaded using docker pull, the trust chain is maintained because a signed digest for the manifest is available, meaning the manifest can be trusted (after login/key exchange etc.), and therefore its contents, and therefore the checksums for the compressed layers, and so on.

However, when an image file is created from a locally installed image, the original manifest cannot be used because, firstly, it contains checksums for the compressed layers while locally generated image files contain uncompressed layers, but more importantly, any manifest shipped with an image file has no way to establish trust, as it is not delivered by the original docker registry and there is thus no way to verify its contents (most importantly the layer checksums).

Because of this, verification of an image file is done using the image ID field of the image once it has been installed. The image ID should correspond to the original image ID on the host where the image file was created. The image ID is calculated by applying a SHA256 checksum to the image's content, so as long as that content has not changed from the originating host, the IDs will match and the image can be trusted.


3. Starting the monitor

3.1 Starting from the command line

Once the monitor image has been installed, use the docker run command to start the monitor's container (and thus the monitor itself). Note that the user starting the container must belong to the docker group on the host OS.

When starting the container there are a number of required arguments:

# docker run \
    --detach \
    --name hamonitor \
    --net host \
    --privileged \
    --restart unless-stopped \
    --volume <host directory>:/tmp/hamonitor \
    dkr.high-availability.com/hamonitor:v1.0 \
    --publish 13514


Once the container has been started it will run unattended; if the docker host is restarted then the container will restart automatically.

Note that the --publish argument should always be last on the command line and that a suitable value for the <host directory> parameter should be provided (see the description below for more details). These arguments have the following effect:

--detach
Runs the monitor in the background.

--name hamonitor
Assigns a friendly name to the container that can then be used as a more memorable argument to other docker commands (such as docker start and stop) as opposed to the less memorable image ID that docker generates (e.g. fa37f3788bb3).

Furthermore, note that although an ID is unique to an installed image, a new image ID is generated on every upgrade, meaning any process that refers to a specific image ID (shell scripts, for example) must be modified whenever the image ID changes. Using a friendly name avoids this problem.

--net host
Use the host’s network stack for the container.

--privileged
Gives all capabilities to the container and also access to the host's devices (those that reside under /dev).

--restart unless-stopped
Specifies the restart policy for the container: always restart the container if it stops, unless it is manually stopped, in which case it will not be restarted even if the docker daemon itself restarts (if always is specified instead of unless-stopped, a manually stopped container will be restarted when the docker daemon restarts). Also note that should the monitor process terminate for any reason, the container itself will exit and docker will start another instance of the container.

--volume <host directory>:/tmp/hamonitor
Maps the directory <host directory> on the docker host to the directory /tmp/hamonitor in the container.

The container directory portion of this mapping (/tmp/hamonitor) is used by the monitor to store all its permanent data (encrypted database, logs, etc.) and cannot be changed. The <host directory> portion should be a suitable local filesystem directory on the docker host; a local filesystem is recommended, as opposed to a remotely mounted one (SMB/NFS, etc.), so that network outages cannot adversely affect the running monitor.

Using a mapping from the docker host to the container, rather than a filesystem local to the container, gives a number of advantages:

  • Upgrades can be performed without having to first backup data in the container (and then re-import afterwards).
  • The monitor data can be backed up without the need for the container to be running.
  • It simplifies migration of the container to another host.

dkr.high-availability.com/hamonitor:v1.0
The name of the docker image to run.
Note the version number (v1.0) should correspond to the version downloaded.

--publish <port-no>
The TCP port that the monitor's REST API listens on for incoming requests; port 13514 is used by default if one is not provided on the command line.

3.2 Create a start script

To simplify starting the monitor, the command line can be saved as a shell script and run as a single command. For example, save the following to a file:

#!/bin/sh
docker run \
    --detach \
    --name hamonitor \
    --net host \
    --privileged \
    --restart unless-stopped \
    --volume <host directory>:/tmp/hamonitor \
    dkr.high-availability.com/hamonitor:v1.0 \
    --publish 13514

Then set execute permission on the file and start the monitor by running the newly created script (assuming the file was saved as start-monitor.sh):

# chmod +x start-monitor.sh
# ./start-monitor.sh
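A slightly more defensive variant checks that the host directory exists before handing it to docker. The start_monitor function and the leading echo (which makes this a dry run) are illustrative additions, not product features; remove the echo on a real docker host.

```shell
# Sketch of a start script that validates the host data directory
# first. The echo before docker run makes this a dry run; the function
# prints the command it would execute.
start_monitor() {
    data_dir="$1"
    if [ ! -d "$data_dir" ]; then
        echo "error: $data_dir does not exist" >&2
        return 1
    fi
    echo docker run \
        --detach \
        --name hamonitor \
        --net host \
        --privileged \
        --restart unless-stopped \
        --volume "$data_dir":/tmp/hamonitor \
        dkr.high-availability.com/hamonitor:v1.0 \
        --publish 13514
}

mkdir -p /tmp/hamonitor-data
start_monitor /tmp/hamonitor-data
```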

4. Initial configuration

Once the monitor is installed and running, the next step is configuration. The first task is to create an administrator. This must be done before any other operations are performed as adding monitored resources, creating alerts, user management etc., can only be performed by a user with the administrator privilege. Configuration is done via the hamonitor command line utility, located in the docker image as /usr/bin/hamonitor.

4.1 Check progress by watching the log file

The log file records all actions taken by the monitor during configuration and during the monitoring process itself (recording lost and regained connections, alerts sent, etc.). It is therefore useful to monitor the contents of the log file during any configuration to assist in debugging.

The log file is held in the shared volume specified with the --volume argument when the monitor is started. The log file name is hamonitor.log, therefore in the docker container the full path is /tmp/hamonitor/hamonitor.log.

The log file can also be accessed from the docker host using the host directory path supplied to the volume argument, by appending hamonitor.log to the end of the path.
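During configuration it can help to filter the log for alert activity rather than reading it whole. A small sketch follows; the keywords matched are illustrative, not a fixed log format, so adjust them to whatever your configured alerts actually log.

```shell
# Show recent alert-related lines from the monitor log. The log file
# path is the host directory given to --volume plus hamonitor.log;
# the keywords below are examples only.
recent_alerts() {
    logfile="$1"
    grep -E 'OFFLINE|ONLINE|alert' "$logfile" | tail -n 20
}

# Example against a fabricated sample log:
printf '%s\n' \
    'resource 1 check ok' \
    'Resource OFFLINE: nfs 192.168.22.6 /pool/nfs-share' \
    'email alert sent' > /tmp/sample.log
recent_alerts /tmp/sample.log
```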

4.2 Create an administrator

  1. Start a shell connected to the docker container:
    # docker exec -it hamonitor bash
  2. Issue the user create command:
    # hamonitor user create
  3. When prompted fill in the details for the administrator:

    Enter username: admin
    Enter password: [hidden]
    Verify password: [hidden]
    Enter real name [None]: admin
    Enter email address [None]: someone@some.domain.com
    Available roles: 0 (view-only), 1 (operator), 2 (admin)
    Enter role [0]: 2

    Initial admin user successfully created.  
    The API will restart now to enforce security.  
    User creation is now limited to admin roles.  
    Please log in as the new user

  4. From now on you will need to login to the monitor in order to perform any operations:
    # hamonitor login
    Enter URL [https://localhost:13514 if empty]:
    Enter Username: admin
    Enter Password: ******
    Welcome admin

Once the administrator is created resources to be monitored can be added.

4.3 Add a monitored resource

  1. From a shell connected to the docker container issue the resource add command:
    # hamonitor resource add
  2. Fill in the resource details:
    Enter protocol [NFS, SMB]: NFS
    IP address of NFS server: 2001:efca:56e4::70e7:fd8:999
    Path of the NFS share: /tank
    Mount options [return for None]:

    User name [return for None]:
    Password [return for None]:

    Resource created, ID is 1

This creates a monitored NFS resource and assigns it the unique ID 1 within the monitor framework. The ID is then used by hamonitor to refer to this resource for all operations (such as adding or removing an alert).

To view configured resources use the resource list command:

# hamonitor resource list
[{
    "path": "/tank",
    "ip": "[2001:efca:56e4::70e7:fd8:999]",
    "protocol": "nfs",
    "enabled": true,
    "creationDate": "2020-08-26T09:48:08+00:00",
    "notifications": {
      "slack": false,
      "teams": false,
      "email": false,
      "snmp": false
    },
    "resourceid": 1
}]

In the above listing all alert notifications are set to false. The next step is to configure and enable an alert for this resource.
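The notifications block can also be checked mechanically. The sketch below is grep-based (jq, which ships with the image, would be the more robust choice) and simply counts notification flags set to true in a saved listing; the function name and file paths are illustrative.

```shell
# Count how many notification types are enabled in a saved resource
# listing. A result of 0 means no alerts are configured yet. This is
# a quick grep sketch; jq gives proper JSON handling.
count_enabled_alerts() {
    grep -Ec '"(slack|teams|email|snmp)": true' "$1" || true
}

# Example: a listing where every flag is false counts as 0
cat > /tmp/resources.json <<'EOF'
    "notifications": {
      "slack": false,
      "teams": false,
      "email": false,
      "snmp": false
    },
EOF
count_enabled_alerts /tmp/resources.json
# prints 0
```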

4.4 Create an alert

Once a resource has been configured, alerts can then be associated with it. In this step an email alert is added and associated with the resource created previously. Note that the resource ID (in this case 1) is used to link the alert to the resource being monitored.

  1. From a shell connected to the docker container issue the following command:
    # hamonitor alert email create -id 1
  2. Fill in the alert details from the prompts:
    This alert method only supports authenticated email delivery over TLS.

    Enter SMTP server address (MX:PORT): mx1.yourcompany.com:587

From now on any changes in the availability of this resource will generate an email alert. Other alerts can be added as required.


5. Users and roles

5.1 User authentication

Before performing any operations on the monitor using the CLI or the REST API, it is necessary to authenticate as a user of the system. The monitor uses a role-based access control approach, with the administrator role providing the most access (an administrator user is created when the monitor is first installed and configured).

5.2 Available roles

There are three roles that can be assigned to users:

Role           ID  Description
View only      0   Basic access: check status of resources and alerts only.
Operator       1   Same access as view only, plus the ability to enable/disable alerts.
Administrator  2   No restrictions.


5.3 Logging into the monitor

To authenticate to the monitor use the following command:

# hamonitor login

You will be prompted to enter a valid URL to connect to (defaulting to localhost if run inside the docker image or on the docker host), followed by user name and password. Upon successful login, the monitor issues the following response:

# hamonitor login
Enter URL [https://localhost:13514 if empty]:
Enter Username: admin
Enter Password:
Welcome admin


5.4 Creating new users

Only users with the administrator role are able to create new users (who can in turn be assigned the administrator role). The monitor enforces that at least one user has the administrator role and will prevent deletion of an administrative user if no other users hold that role.

To create a new user enter the following command:

# hamonitor user create

Here is an example of the creation of a user with the operator role:

Enter username: oper
Enter password: [hidden]

Verify password: [hidden]
Enter real name [None]: Operator
Enter email address [None]: operations@some.domain.com
Available roles: 0 (view only), 1 (operator), 2 (admin)
Enter role [0]: 1
User oper successfully created

6. CLI reference guide

The utility hamonitor is used to perform all monitor actions. It is supplied as part of the docker image. To run the utility first gain shell access to the running docker instance:

# docker exec -it hamonitor bash

The hamonitor utility is self-documented; typing any command or subcommand with no arguments produces a help summary:

# hamonitor
NAME:
   hamonitor - RSF-1 shared resources monitor

USAGE:
   hamonitor [global options] command [command options] [argument...]

VERSION:
   1.6.15

COMMANDS:
   alert       Alert management
   login       Login to RSF-1 shared resource monitor
   logout      Sign out of RSF-1 shared resource monitor
   resource    Resource management
   user        User management
   help, h     Shows a list of commands or help for one command

GLOBAL OPTIONS:
   --help, -h     show help

 


6.1 User management

To administer users the user subcommand is used. It allows for creating, deleting and listing users (modification is not supported in this release).

6.1.1 User addition

To create a new user, enter the subcommand:

# hamonitor user create
Enter username: oper
Enter password: *********

Verify password: *********
Enter real name [None]: Operator
Enter email address [None]: ops@example.com
Available roles: 0 (view only), 1 (operator), 2 (admin)
Enter role [0]: 1
User oper successfully created

6.1.2 Listing users

To show a list of configured users enter the subcommand:

# hamonitor user list

Note that the resulting list is provided in JSON format; to make it more readable, pipe the output through jq (a utility that pretty-prints JSON, shipped with the docker image). Piping the above command through jq results in:

# hamonitor user list | jq
[

  {
    "userid": 1,
    "creation_time": "1600692753.915523",

    "username": "admin",
    "realname": "",
    "password": "<*** HIDDEN ***>",
    "role": 2,
    "enabled": "True"
  },
  {
    "userid": 2,
    "creation_time": "1600701250.404133",
    "username": "user",
    "realname": "",
    "password": "<*** HIDDEN ***>",
    "role": 1,
    "enabled": "True"
  }
]


6.1.3 Deleting a user

To delete a user enter the subcommand:

# hamonitor user delete
Enter userid: 2
Do you really want to remove user 2? [y/n]: y
User successfully deleted


Note that users are referenced by their userid (shown by the user list subcommand).


6.2 Logging in and out

Before any changes are made to the monitor, it is necessary to authenticate using the login command. The connection URL uses localhost and port 13514 by default - an alternative can be entered when logging in:

# hamonitor login
Enter URL [https://localhost:13514 if empty]:
Enter Username: admin
Enter Password:

Welcome admin

To logout issue the logout subcommand:

# hamonitor logout

6.3 Resource management

Resources are managed using the hamonitor resource command.

6.3.1 Adding a monitored resource

To configure a monitored resource, use the resource add subcommand:

# hamonitor resource add
Enter protocol [NFS, SMB]: SMB
IP address of the SMB server: 192.168.4.1
SMB share name: Scratch
Mount options [return for None]:

User name [return for None]: system
Password [return for None]:

Do you want to test the connection now? [y/n]y
Test successful.

201: Created

6.3.2 Removing a monitored resource

To remove a resource from the monitor use the resource remove subcommand:

# hamonitor resource remove --id 1
200: OK

Any alerts associated with the resource are also removed.

6.3.3 Enabling and Disabling a monitored resource

The active monitoring state of a resource can be toggled between enabled and disabled. When a resource is first added its monitor state is enabled. To suspend monitoring without removing the resource entirely (for example when the resource is going offline for maintenance), its state can be set to disabled in the monitor. Reinstate monitoring for a resource by enabling it.

To disable a resource use the resource disable subcommand:

# hamonitor resource disable --id 1
200: OK

To enable a resource use the resource enable subcommand:

# hamonitor resource enable --id 1
200: OK
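The enable/disable pair lends itself to a small maintenance-window wrapper. The sketch below only echoes the commands (on a real system run them inside the container, e.g. via docker exec hamonitor ...); the function name is illustrative.

```shell
# Dry-run sketch of a maintenance window: suspend monitoring, perform
# the maintenance, then reinstate monitoring. Each hamonitor command
# is echoed rather than executed.
maintenance_window() {
    id="$1"
    echo "hamonitor resource disable --id $id"
    echo "... perform maintenance on the resource ..."
    echo "hamonitor resource enable --id $id"
}

maintenance_window 1
```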

6.3.4 Listing resources being monitored

To list all resources configured use the resource list subcommand:

# hamonitor resource list
...

The list is reported back as a JSON object; pipe it through jq for a more human readable form.

6.3.5 Status of an individual resource

To check the status of individual resources use the resource status subcommand:

# hamonitor resource status --id 1 | jq
{
  "path": "Scratch",
  "ip": "192.168.4.1",
  "protocol": "SMB",
  "enabled": true,
  "creationDate": "2020-09-22T12:16:54+00:00",
  "notifications": {
    "slack": false,
    "teams": false,
    "email": false,
    "snmp": false
  }
}

6.4 Alert management

The monitor supports several types of alerts. Alerts are configured on a per-resource basis, so each resource has its own alert schema. Alerts are bound to resources using the mandatory --id argument when adding an alert.

6.4.1 Adding an email alert

To configure an email alert, a valid SMTP server is required along with a user name and password. The monitor only supports email delivery over an encrypted TLS connection. The (optional) TLS port on the email server is specified when the server address is entered:

# hamonitor alert email create --id 5
This alert method only supports authenticated email delivery over TLS.

Enter SMTP server address (MX:[PORT]): mx1.yourcompany.com:587
Enter SMTP server user name: realuseraccount@yourcompany.com
Enter SMTP server user password:
Verify password:
Enter email FROM address: alerts_sender_alias@yourcompany.com
Enter email TO address: alerts_manager_alias@yourcompany.com

201: Created

6.4.2 Adding a Slack alert

A Slack alert will update a Slack channel with the events for which the resource has been configured. Before creating an alert a Slack webhook is required. Webhooks are created using the Slack application itself; please see the Slack documentation for how to create a suitable webhook.

Once a webhook link has been generated it is entered as the hook URL when creating the alert:

# hamonitor alert slack create --id 5
Enter Slack hook URL: https://hooks.slack.com/services/<link>
201: Created

An alert published to slack will be similar to:

Resource OFFLINE: nfs 192.168.22.6 /pool/nfs-share


6.4.3 Adding a Microsoft Teams alert

A Teams alert will update a Teams channel with the events for which the resource has been configured. Before creating an alert a Teams webhook is required. Webhooks are created from the Teams application itself; please see the Teams documentation for how to create a suitable webhook.

Once a webhook link has been generated it is entered as the hook URL when creating the alert:

# hamonitor alert teams create --id 5
Enter Teams hook URL: https://outlook.office.com/webhook/<link>
201: Created

An alert published to teams will look similar to this example:

Resource OFFLINE: smb 10.6.11.12 /pool/smb-share

6.4.4 Adding an SNMP alert

To add an SNMP alert the IP address of an SNMP manager is added to the resource being monitored:

# hamonitor alert snmp create --id 5
Enter SNMP manager address: 10.5.14.22
201: Created

An SNMP MIB file for the monitor is shipped with the docker image in /root/RSF-MIB.txt.


6.5 Changing the HTTPS authentication certificate

The CLI communicates with the monitor using its REST API over HTTPS (TLS version 1.3). A self-signed certificate is shipped in the docker image as /root/cert.pem.

A site-specific certificate can be used instead by installing it in the shared host directory as cert.pem. It is necessary to restart the docker container for it to pick up the new certificate; on the docker host run:

# docker restart hamonitor
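The certificate swap can be sketched as a small helper that backs up the existing certificate first. The paths used are placeholders, and the docker restart is echoed rather than executed so the sketch is safe to review anywhere.

```shell
# Install a site-specific certificate into the shared host directory
# and show the restart command. host_dir must be the directory given
# to --volume when the container was started; the example paths below
# are placeholders.
install_cert() {
    new_cert="$1"
    host_dir="$2"
    # keep the previous certificate for rollback
    if [ -f "$host_dir/cert.pem" ]; then
        cp "$host_dir/cert.pem" "$host_dir/cert.pem.bak"
    fi
    cp "$new_cert" "$host_dir/cert.pem"
    echo "docker restart hamonitor"  # run this on the docker host
}

mkdir -p /tmp/hamonitor-demo
printf 'demo-cert\n' > /tmp/site-cert.pem
install_cert /tmp/site-cert.pem /tmp/hamonitor-demo
```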

7. Troubleshooting

7.1 Configuring resources or alerts results in a “403: Forbidden” response

When creating or modifying any form of resource or alert, if the response returned from the monitor is “403: Forbidden”, this indicates that the CLI is not authenticated to the monitor (or a previously authenticated connection has timed out). To resolve, simply log in to the monitor, for example:

# hamonitor resource add
Enter protocol [NFS, SMB]: SMB

IP address of the SMB server: 10.6.4.68
SMB share name: Acc_mnt
Mount options [return for None]:
Username [return for None]: admin
Password [return for None]:
Do you want to test the connection now? [y/n]y
403: Forbidden

# hamonitor login
Enter URL [https://localhost:13514 if empty]:

Enter Username:
Enter Password:
Welcome admin

# hamonitor resource add
Enter protocol [NFS, SMB]: SMB

IP address of the SMB server: 10.6.4.68
SMB share name: Acc_mnt
Mount options [return for None]:
Username [return for None]: admin
Password [return for None]:
Do you want to test the connection now? [y/n]y
Test successful
201: Created

7.2 Adding an SMB resource results in an unsuccessful test

If, at the final stage of adding an SMB resource to be monitored, the connection test fails, then the underlying cause is likely to be recorded on the SMB server itself in system-specific message files.

For example, in the case where the share name is incorrect you would see a log entry similar in format to:

Dec 9 12:14:35 NOTICE: smbd[MG\guest]: smb share not found

Or for an authentication issue:

Dec 9 12:19:22 NOTICE: smbd[MG\guest]: pool1_smb access denied: guest disabled.

 
