MySQLmon Replication Cluster Software Configuration | High Availability

Installation & Configuration of MySQLmon

Rocky Linux 9.2

Prerequisites for both nodes

A. MySQLmon currently requires SELinux to be disabled to allow the relocation of the MySQL socket. To do this, set the property SELINUX= in the file /etc/selinux/config to disabled and reboot the machine to apply the change.
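
As a sketch, the edit can be scripted as follows (assuming the stock layout of /etc/selinux/config); a backup copy of the original file is kept:

```shell
# Switch SELinux to disabled mode (keeps a .bak copy of the original file)
sed -i.bak 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config

# Confirm the change before rebooting to apply it
grep '^SELINUX=' /etc/selinux/config
```

After the reboot, getenforce should report Disabled.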

B. If using a firewall, allow access to port 1195 (RSF-1/MySQLmon) & 3306 (MySQL):

# firewall-cmd --permanent --zone=public --add-port=1195/tcp
# firewall-cmd --permanent --zone=public --add-port=1195/udp
# firewall-cmd --permanent --zone=public --add-port=3306/tcp
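
Note that --permanent rules are not applied to the running firewall until it is reloaded; a final reload and check makes them live:

```shell
# Apply the permanent rules to the running firewall and verify them
firewall-cmd --reload
firewall-cmd --zone=public --list-ports
```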
C. A private network between the two cluster nodes for replication and RSF-1 communication. Although this step is not strictly necessary, it is highly recommended for performance and security reasons.

D. Add the public and private network IP addresses to the /etc/hosts file, along with a master and slave VIP that clients will use to connect to the MySQL instances. For example:

# cat /etc/hosts

127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6

10.6.26.1      rsql1
10.6.26.2      rsql2

192.168.23.10  rsql1-priv
192.168.23.11  rsql2-priv

10.6.26.21     master-vip
10.6.26.22     slave-vip
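
Once the cluster is configured, clients connect through the VIPs rather than the node addresses, so connections follow the master or slave role across a failover. A hypothetical example (appuser is a placeholder account, not one created by this guide):

```shell
# Connect to whichever node currently holds the master role
mysql --host=master-vip --user=appuser -p
```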

Install packages & create mysqlmon user

  1. Enable the CRB repository, which is required for mysql-libs, a dependency of MySQLmon:

    # dnf config-manager --set-enabled crb
    
  2. Install mysql-server and enable the MySQL service:

    # dnf install mysql-server
    # systemctl enable --now mysqld.service
    

  3. Download and install the RSF-1 and MySQLmon packages from the mysqlmon Package Repo:

    # dnf install ./rsf-1.rpm && dnf install ./mysqlmon.rpm
    
  4. Request trial licenses using the email address support@high-availability.com. You will need to provide the host ID from each node. To get these, run the command /opt/HAC/bin/hac_hostid:

    # /opt/HAC/bin/hac_hostid 
    xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
    

  5. Once received, copy the provided licences.txt to each node and restart RSF-1:

    # cp licences.txt /opt/HAC/RSF-1/etc/ && systemctl restart rsf-1
    
  6. Add the MySQL user mysqlmon, with the password mysqlmon, to RSF-1:

    # /opt/HAC/RSF-1/bin/rsfpasswd -a mysqlmon
    Enter new password for mysqlmon: 
    Repeat new password for mysqlmon: 
    # 
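
With the packages installed and licensed, a quick sanity check (assuming the default unit names used above) confirms both services are active before continuing:

```shell
# Both should report "active"; if not, inspect with systemctl status
systemctl is-active mysqld.service
systemctl is-active rsf-1
```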
    

Create configure scripts for MySQL and RSF-1

The next phase of configuration is automated by running the interactive script
/opt/HAC/RSF-1/agents/mysqlmon/scripts/build_cluster.sh. This will generate a script for each node that will complete the configuration of MySQL and RSF-1.

Here is a walkthrough of that script:

********************************************************************

 Welcome to the RSF-1/MySQL high-availability installer.

This script automates the  processes necessary for the configuration
of either a two node  replicated MySQL  cluster running under  RSF-1
or a two node MySQL cluster with shared storage running under RSF-1.

Replicated Disk Mode
With a two node replicated cluster,  MySQL  automatically replicates
data over a network link, meaning in  effect you have a master and a
replicated slave database. Access is normally through the master DB.
However,  if you experience a failure  on the master DB,  then RSF-1
will automatically  switch the remote  MySQL  slave to master status
and  dynamically  reconfigure  network  access for clients,  thereby
providing seamless failover. Once the original master recovers RSF-1
re-configures it back into the cluster as a slave server.
The only  connectivity  required  between  the  two  databases  is a
private network used for synchronising data and RSF-1 communications.

Shared Disk Mode
With a two node shared disk storage configuration the MySQL database
is located on a disk volume that  is shared (connected to) all nodes
in the RSF-1 cluster.  This database is then made available from one
of the cluster nodes. Should that node subsequently fail, RSF-1 will
migrate the logically connected disk to a standby node and restart
the MySQL service on it.
This type of cluster requires  multi-homed disk (SAN or direct SCSI)
and network connectivity for RSF-1 communications.


********************************************************************

Do you want to configure a Replicated or a Shared disk cluster?
Select configuration type [R/S/Quit - default R]: R
********************************************************************

Replicated Cluster->Networking

This  section configures  the network  addresses to be  used  in the
cluster.  It should be noted  that each node  in the  cluster should
have at  least two  network interfaces;  one is used in the standard
way for  access to  the host,  whilst the  other is used for private
cluster communications and MySQL replication.

This section  therefore prompts  for node names of the cluster hosts
along with the names of the private network interfaces.

********************************************************************

Please enter the host  name of the first node in  the cluster.  Note
that  this  node will  be the  preferred  master,  i.e. under normal
running conditions this node will receive database writes.

[rsql1]: rsql1

First node is: rsql1 [y/n - default y]: y

Next, the host name of the first node's private network in the cluster
which will be used for  RSF-1 communications  and MySQL replication.
Typically this address resides  on a private network between the two
nodes, directly  connected   by a   crossover  network  cable, or if
necessary, can utilise the public network.

[rsql1-priv]: rsql1-priv

First private node address is: rsql1-priv [y/n - default y]: y

Finally enter the  virtual name by which this  node will be known to
clients accessing  the cluster (this  virtual  name will be migrated
between nodes should a failure occur).
[master-vip]: master-vip

First virtual node is: master-vip [y/n - default y]: y
********************************************************************

Replicated Cluster->Networking

This  section configures  the network  addresses to be  used  in the
cluster.  It should be noted  that each node  in the  cluster should
have at  least two  network interfaces;  one is used in the standard
way for  access to  the host,  whilst the  other is used for private
cluster communications and MySQL replication.

This section  therefore prompts  for node names of the cluster hosts
along with the names of the private network interfaces.

First node rsql1
 private rsql1-priv

********************************************************************

Please enter the host name of the second node in the cluster.
[no default]: rsql2

Second node is: rsql2 [y/n - default y]: y

Next, the host name of the second node's private network in the cluster
which will be used for  RSF-1 communications  and MySQL replication.
Typically this address resides  on a private network between the two
nodes, directly  connected   by a   crossover  network  cable, or if
necessary, can utilise the public network.

[rsql2-priv]: rsql2-priv

Second private node address is: rsql2-priv [y/n - default y]: y

Finally enter the  virtual name by which this  node will be known to
clients accessing  the cluster (this  virtual  name will be migrated
between nodes should a failure occur).
[slave-vip]: slave-vip

Second virtual node is: slave-vip [y/n - default y]: y
********************************************************************

Replicated Cluster->Networking (complete)

First node is rsql1
 private network is on rsql1-priv

Second node is rsql2
 private network is on rsql2-priv

********************************************************************

Replicated Cluster->Raw Disk Devices

Next we need to configure which raw disk devices, if any, to use for
disk  heartbeats  between  machines.   Disk  heartbeats are  used in
conjunction with network and serial heartbeats by  RSF-1 to exchange
state and health information between nodes in a cluster. Disk heart-
beats run across a  shared disk,  either SCSI or  Fibre-based,  dual
connected between both machines.

********************************************************************

Would you like to configure disk heartbeats? [y/n - default n]: n
********************************************************************

Replicated Cluster->Networking (complete)

First node is rsql1
 private network is on rsql1-priv

Second node is rsql2
 private network is on rsql2-priv

********************************************************************

Replicated Cluster->Raw disk devices (complete)

No raw disk devices configured.


********************************************************************

Replicated Cluster->Ethernet Device

Finally we need to configure which Ethernet  device will be used for
VIP device access, i.e. if your host has Ethernet port eth0 or hme0,
then RSF-1 will use this  device to configure  a Virtual  IP address
used for access to the MySQL database (eth0:1 or hme0:1 for example)
You only need to enter  the device  name  without the virtual device
suffix, i.e. eth0 not /dev/eth0.

********************************************************************

Please enter the Ethernet device to use on the node rsql1
No leading /dev/ [default ens18]: ens18
Is ens18 correct? [y/n - default y]: y

Please enter the Ethernet device to use on the node rsql2
No leading /dev/ [default ens18]: ens18
Is ens18 correct? [y/n - default y]: y
********************************************************************

Replicated Cluster->Networking (complete)

First node is rsql1
 private network is on rsql1-priv

Second node is rsql2
 private network is on rsql2-priv

********************************************************************

Replicated Cluster->Raw disk devices (complete)

No raw disk devices configured.


********************************************************************

Replicated Cluster->Ether Device (complete)

First  node device is ens18
Second node device is ens18

********************************************************************

OK, press return to start creating the configuration scripts.
********************************************************************

                  ***   Configuration complete    ***

The following scripts have been created:

 ./rsf_inst_rsql1.sh
 ./rsf_inst_rsql2.sh

Please now (as root) run:

          ./rsf_inst_rsql1.sh
  on host rsql1

and

          ./rsf_inst_rsql2.sh
  on host rsql2

For help and assistance please contact support@high-availability.com
or visit the web site www.high-availability.com

We also welcome comments and suggestions regarding this installation
and any of our products.

High-Availability Ltd

********************************************************************

Run the configure scripts

Now that the two scripts have been generated, run each one on its respective node.

Note

Each script is node specific, therefore you will need to copy one of them over to the remote node:

# scp rsf_inst_rsql2.sh root@rsql2:/root/
root@rsql2's password: 
rsf_inst_rsql2.sh                                       100%   18KB  35.2MB/s   00:00  

Here is an example of running the configuration script on a node:

********************************************************************

                      *** CONFIRM OVERWRITE ***

The RSF-1 configuration file /opt/HAC/RSF-1/etc/config
already exists on this machine.

This script  either  needs to  overwrite the  configuration file, or
save it under a different name to be merged later. Please select one
of the following options:

. [s]ave under a different name for merging later
. [o]verwrite the existing config with one suitable for replication
. [q]uit the installation

********************************************************************

Choice [s/o/q - default o]: o
Original configuration file saved as /opt/HAC/RSF-1/etc/config.pre_rsf

Press return to start configuring this machine: 
********************************************************************
Saving MySQL configuration file: /etc/my.cnf.d/mysql-server.cnf...done
Starting MySQL; please wait...
Configuring MySQL for replication, remote is rsql2-priv...done
Saving MySQLmon configuration file: /opt/HAC/RSF-1/agents/mysqlmon/etc/config...done
Saving RSF-1 configuration file: /opt/HAC/RSF-1/etc/config...done
Stopping MySQL and placing under RSF-1 control...
Setting MySQL service to auto...
Restarting RSF-1.
Running systemctl start rsf-1
See output of "systemctl status rsf-1" for info

This machine is now configured for MySQL replication under RSF-1.
Please add /opt/HAC/bin and /opt/HAC/RSF-1/bin to root's PATH

Run /opt/HAC/RSF-1/bin/rsfcli stat to confirm the configuration of MySQL and RSF-1 was successful.

The output should be similar to the following:

# /opt/HAC/RSF-1/bin/rsfcli stat
Contacted 127.0.0.1
 Found cluster "MySQL_Replication", CRC = 0xdc2d, ID = <none>
 - - - - - - - - - - - - - - - - - - - -
Hosts:

 rsql1 (10.6.26.1) UP, service startups enabled
 RSF-1 release 1.11.0 (built on 08-Nov-2023-11:09)

 rsql2 (10.6.26.2) UP, service startups enabled
 RSF-1 release 1.11.0 (built on 08-Nov-2023-11:09)

 2 nodes configured, 2 online.
 - - - - - - - - - - - - - - - - - - - -
Services:
 [00] rsql1-mysqld - "MySQL Database Service on rsql1"
      rsql1 : running automatic unblocked
        Configured vip(s) : -


 [01] rsql2-mysqld - "MySQL Database Service on rsql2"
      rsql2 : running automatic unblocked
        Configured vip(s) : -


 [02] repl-master - "Replication Master Service"
      rsql1 : running automatic unblocked
        Configured vip(s) : master-vip

      rsql2 : stopped automatic unblocked
        Configured vip(s) : master-vip


 [03] repl-slave - "Replication Slave Service"
      rsql1 : stopped manual unblocked
        Configured vip(s) : slave-vip

      rsql2 : running automatic unblocked
        Configured vip(s) : slave-vip


 4 services configured
   2 service instances stopped
   4 service instances running
 - - - - - - - - - - - - - - - - - - - -
Heartbeats:

 00 net rsql1 -> rsql2 [10.6.26.2]: Up, last heartbeat #301 Thu 2023-11-09 13:35:53 GMT
 01 net rsql1 -> rsql2 (via rsql2-priv) [10.6.26.2]: Up, last heartbeat #301 Thu 2023-11-09 13:35:54 GMT
 02 net rsql2 -> rsql1 Up, last heartbeat #117 Thu 2023-11-09 13:35:54 GMT
 03 net rsql2 -> rsql1 (via rsql1-priv) Up, last heartbeat #116 Thu 2023-11-09 13:35:54 GMT
4 cluster heartbeats configured, 4 up, 0 down
 - - - - - - - - - - - - - - - - - - - -

Errors:
No errors detected

Testing Replication

Finally, ensure replication is working by creating a test database on the master node and checking the slave:

rsql1# mysql
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 25
Server version: 8.0.32 Source distribution

Copyright (c) 2000, 2023, Oracle and/or its affiliates.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> create database testDB;
Query OK, 1 row affected (0.02 sec)

mysql> use testDB;
Database changed

mysql> create table test1 (name varchar(20));
Query OK, 0 rows affected (0.06 sec)

mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
| mysqlmon           |
| performance_schema |
| sys                |
| testDB             |
+--------------------+
6 rows in set (0.00 sec)

mysql> describe test1;
+-------+-------------+------+-----+---------+-------+
| Field | Type        | Null | Key | Default | Extra |
+-------+-------------+------+-----+---------+-------+
| name  | varchar(20) | YES  |     | NULL    |       |
+-------+-------------+------+-----+---------+-------+
1 row in set (0.00 sec)

Next, log into the slave and confirm that testDB has been replicated:

# ssh root@rsql2
root@rsql2's password: 
Last login: Thu Nov  9 12:30:20 2023 from 10.100.0.4

rsql2# mysql
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 23
Server version: 8.0.32 Source distribution

Copyright (c) 2000, 2023, Oracle and/or its affiliates.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
| mysqlmon           |
| performance_schema |
| sys                |
| testDB             |
+--------------------+
6 rows in set (0.00 sec)

mysql> use testDB;
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A

Database changed

mysql> describe test1;
+-------+-------------+------+-----+---------+-------+
| Field | Type        | Null | Key | Default | Extra |
+-------+-------------+------+-----+---------+-------+
| name  | varchar(20) | YES  |     | NULL    |       |
+-------+-------------+------+-----+---------+-------+
1 row in set (0.01 sec)

Now delete the testDB; as this is two-way replication, it can be dropped from either node:

mysql> use mysql
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A

Database changed
mysql> drop database testDB;
Query OK, 1 row affected (0.05 sec)