What is a Two Node Cluster Architecture?

A traditional two-node cluster consists of two separate system nodes, or servers, with shared dual-attach storage: both systems are independently and physically connected to the underlying shared storage, and each is capable of accessing its file systems and volumes.

Generally, the two cluster nodes are homogeneous, with identical physical configurations, although this is not strictly necessary provided both nodes are capable of running the highly available services deployed.

Three system topologies for two-node clusters are described below; each has its own merits depending on architecture, performance and budgetary requirements.

Active/Passive High Availability Architecture

An Active/Passive configuration refers to one cluster node always being active and the other passive: only one node, the master, runs all cluster services, while the other is a standby node that remains idle until a fault occurs on the running master. When this happens, a failover event occurs and the critical services are failed over to the surviving node, which becomes the new master. The failed system is then marked as faulted and disabled from the cluster.
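The failover behaviour described above can be sketched in a few lines of code. This is a minimal, illustrative model only, not a real cluster manager (production clusters use dedicated software such as Pacemaker); the node names and class structure here are assumptions made for the example.

```python
class Node:
    """A hypothetical cluster node with a simple health flag."""
    def __init__(self, name):
        self.name = name
        self.healthy = True

class ActivePassiveCluster:
    """Illustrative sketch: one master runs all services, one standby idles."""
    def __init__(self, master, standby):
        self.master = master      # runs all cluster services
        self.standby = standby    # idle until a fault occurs on the master

    def heartbeat(self):
        """Check the master; on a fault, promote the standby to new master."""
        if not self.master.healthy:
            failed = self.master
            # Failover: the surviving node becomes the new master, and the
            # failed system is marked faulted and disabled from the cluster.
            self.master, self.standby = self.standby, None
            return f"failover: {failed.name} faulted, {self.master.name} is new master"
        return "ok"

cluster = ActivePassiveCluster(Node("node-a"), Node("node-b"))
cluster.master.healthy = False    # simulate a fault on the running master
status = cluster.heartbeat()      # node-b is promoted to master
```

Note that after failover the cluster has no standby left: with only one surviving node, a second fault cannot be tolerated until the failed node is repaired and rejoined.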

Illustration showing an Active/Passive configuration

The advantage of an Active/Passive configuration, assuming both cluster nodes have identical physical configurations, is that system and service performance remains identical after a failover event. The main disadvantage is cost: one of the cluster nodes is always redundant and unproductive. If the standby server is less powerful than the failed master, service failover may result in a degraded continuation of service until the failed node can be recovered and brought back online.

Active/Active Server High Availability Architecture

An Active/Active server configuration refers to both cluster nodes delivering highly available services simultaneously. In the event of a cluster node failure, the surviving healthy node runs all services alone. The highly available services being delivered are typically the separate components of a mission-critical application stack, for example a database server, an associated application server and perhaps a web server, each of which can run independently on either cluster node, or on just one in the event of a single node failure.

Illustration showing an active/active configuration

Although the second system still carries additional cost, both cluster nodes are continuously utilised and productive. In the event of a system failure, however, system and service performance may be reduced until the failed node can be recovered and brought back into the cluster.

A loss of performance following a system failure can, however, be mitigated by prioritising the availability of services. For example, in the event of a node failure, it may be desirable to shut down or suspend a less critical service to preserve the performance and availability of a higher-priority service until the failed node is restored and normal Active/Active operation resumes. An example of this is a mission-critical production database as one highly available service, and a lower-priority test database as the other.
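The prioritisation described above can be sketched as a simple placement policy: rank services by priority and suspend the lowest-priority overflow when surviving capacity is exceeded. The service names, load figures and per-node capacity below are illustrative assumptions, not values from any real cluster.

```python
def place_services(services, nodes, capacity_per_node):
    """Assign services to healthy nodes; suspend lowest-priority overflow."""
    healthy = [n for n in nodes if n["healthy"]]
    total_capacity = capacity_per_node * len(healthy)
    # Most critical first (lower priority number = more critical).
    ranked = sorted(services, key=lambda s: s["priority"])
    running, suspended, used = [], [], 0
    for svc in ranked:
        if used + svc["load"] <= total_capacity:
            running.append(svc["name"])
            used += svc["load"]
        else:
            # Not enough surviving capacity: suspend this lower-priority
            # service until the failed node is restored.
            suspended.append(svc["name"])
    return running, suspended

services = [
    {"name": "prod-db", "priority": 1, "load": 60},  # mission critical
    {"name": "test-db", "priority": 2, "load": 60},  # lower priority
]
nodes = [{"name": "node-a", "healthy": True},
         {"name": "node-b", "healthy": False}]       # node-b has failed

running, suspended = place_services(services, nodes, capacity_per_node=100)
```

With both nodes healthy there is capacity for both databases; with node-b failed, only the production database keeps running and the test database is suspended until normal Active/Active service resumes.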

Active/Active Service High Availability Architecture

In this topology, the active element refers to the service rather than the server: certain applications can run concurrently on both cluster nodes simultaneously to provide a load-balanced capability. For many applications this is not architecturally possible, as service availability usually depends on a single unique network address that can only be present on one server at a time. Some applications, however, can be configured to exploit this capability; one example is ZFS storage, whereby each cluster node can provide unique ZFS pool storage services independently and concurrently.
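The ZFS example above can be sketched as an ownership map: each pool is normally served by its preferred node, and on a node failure the survivor takes over the failed node's pools as well. The pool and node names here are hypothetical, and this model abstracts away the actual pool export/import mechanics.

```python
def pool_ownership(pools, nodes):
    """Map each pool to its preferred node, failing over to a survivor."""
    healthy = {n["name"] for n in nodes if n["healthy"]}
    ownership = {}
    for pool in pools:
        if pool["preferred"] in healthy:
            # Normal Active/Active service: each node serves its own pool.
            ownership[pool["name"]] = pool["preferred"]
        elif healthy:
            # Preferred node has failed: a surviving node takes the pool over.
            ownership[pool["name"]] = sorted(healthy)[0]
        else:
            ownership[pool["name"]] = None  # no healthy node available
    return ownership

pools = [{"name": "tank1", "preferred": "node-a"},
         {"name": "tank2", "preferred": "node-b"}]
nodes = [{"name": "node-a", "healthy": True},
         {"name": "node-b", "healthy": False}]  # node-b has failed

ownership = pool_ownership(pools, nodes)  # node-a now serves both pools
```

In normal operation both nodes do productive storage work; after a failure, the surviving node serves both pools until the failed node rejoins and ownership reverts.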

The advantage of this approach is that it offers the best balance of cost versus performance: both nodes do productive work continuously while services remain highly available.