Introduction

A Linux cluster is a group of Linux computers (nodes) and storage devices that work together and are managed as a single system. A traditional clustering configuration has two nodes connected to shared storage (typically a SAN). With Linux clustering, an application runs on one node, and clustering software monitors its operation.

A Linux cluster provides faster processing speed, larger storage capacity, better data integrity, greater reliability and wider availability of resources.

Failover

Failover is the process by which a standby system takes over when a primary system, network, or database fails or terminates abnormally, allowing operations to resume.

Failover Cluster

A failover cluster is a set of servers that work together to provide high availability (HA) or continuous availability (CA). As mentioned earlier, if one of the servers goes down, another node in the cluster can take over its workload with minimal or no downtime. Some failover clusters use physical servers, whereas others involve virtual machines (VMs).

CA clusters let users keep working with services and applications without any timeouts (100% availability) even when a server fails. HA clusters, on the other hand, may cause a short interruption in service, but the system recovers automatically with minimal downtime and no data loss.

A cluster is a set of two or more nodes (servers) that exchange data for processing through cables or a dedicated secure network. Other clustering technologies also enable load balancing, shared storage, and concurrent/parallel processing.

Linux Failover Cluster Monitoring

In the image above, Node 1 and Node 2 share common storage. Whenever one node goes down, the other takes over from there. The two nodes share one virtual IP that all clients connect to.
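
In a Pacemaker-based cluster, such a virtual IP is typically defined as a floating IP resource. A minimal sketch is shown below; the resource name and addresses are placeholders only:

    # Create a floating cluster IP that clients connect to (example values)
    pcs resource create ClusterVIP ocf:heartbeat:IPaddr2 ip=192.168.10.50 cidr_netmask=24 op monitor interval=30s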

Let us take a look at the two types of failover clusters: High Availability Failover Clusters and Continuous Availability Failover Clusters.

High Availability Failover Clusters

In a High Availability failover cluster, a set of servers shares data and resources, and all nodes have access to the shared storage.

High Availability Clusters also include a monitoring connection that servers use to check the “heartbeat” or health of the other servers. At any time, at least one of the nodes in a cluster is active, while at least one is passive.
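
As an illustration, the node states that this heartbeat reflects can be inspected by hand with the standard cluster tools (assuming they are installed on the node):

    # One-shot view of cluster, node, and resource status on Pacemaker clusters
    crm_mon -1
    # Node-only view showing online / standby / offline members
    pcs status nodes
    # Equivalent one-shot member view on RGManager-based clusters
    clustat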

Continuous Availability Failover Clusters

This system consists of multiple systems that share a single copy of the operating system. Software commands issued by one system are also executed on the other systems, so in the event of a failover the user can still access critical data in a transaction.

There are a few failover cluster types, such as Windows Server Failover Clusters (WSFC), VMware failover clusters, SQL Server failover clusters, and Red Hat Linux failover clusters.

Supported Target Versions
Pacemaker: Pacemaker 1.1.23-1.el7_9.1
Non-Pacemaker: RGManager 6.5 (Linux nodes: redhat-6.2.0)
Cluster Resource Manager: CRM version 4.6.0+20240718.c5fa894 (Linux nodes: suse:sles:15:sp6)
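
To confirm which cluster stack and version a target node runs before onboarding it, a quick check such as the following can help (package names assume RHEL/CentOS or SLES packaging; not every package is present on every cluster type):

    # Installed cluster-stack packages and their versions
    rpm -q pacemaker corosync pcs crmsh
    # OS release of the node
    cat /etc/os-release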

Prerequisites

  • OpsRamp Classic Gateway 14.0.0 and above.
  • OpsRamp Nextgen Gateway 14.0.0 and above.
    Note: OpsRamp recommends using the latest Gateway version for full coverage of recent bug fixes, enhancements, etc.
  • Prerequisites for Pacemaker
    • Credentials: root, or a non-root user that is a member of the “haclient” group.
    • Cluster management: Pacemaker
    • Accessibility: All nodes within a cluster should be accessible by a single credential set.
    • For non-root users: add the “pcs” command path to the “~/.bashrc” file on all cluster nodes (see the sketch after this list).
      Ex: export PATH=$PATH:/usr/sbin -> added as a new line in the ~/.bashrc file.
  • Prerequisites for RGManager (non-pacemaker)
    • Credentials: both root and non-root users are supported.

    • Cluster management: RGManager

    • Accessibility: All the nodes within a cluster should be accessible by a single credential set.

    • For non-root users: add the following commands to the “/etc/sudoers” file so that non-root users are allowed to execute them (see the sketch after this list).

      /usr/sbin/cman_tool nodes,/usr/sbin/cman_tool status,/usr/sbin/clustat -l,/sbin/service cman status,/sbin/service rgmanager status,/sbin/service corosync status,/usr/sbin/dmidecode -s system-uuid,/bin/cat /sys/class/dmi/id/product_serial

      Note: A Linux cluster is usually configured with a virtual IP, commonly called the cluster-virtual-ip. Use this IP when adding configurations during installation of the integration.

    • If the cluster-virtual-ip is not configured, provide the IP address of a reachable node associated with the cluster.

  • Prerequisites for CRManager
    • Credentials: root, or a non-root user that is a member of the “haclient” group.
    • Cluster management: crm
    • Accessibility: All nodes within a cluster should be accessible by a single credential set.
    • For non-root users: add the “crm” command path to the “~/.bashrc” file on all cluster nodes. For example, export PATH=$PATH:/usr/sbin → added as a new line in the ~/.bashrc file (see the sketch after this list).
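
    The following sketch illustrates the non-root preparation described above for Pacemaker, CRManager, and RGManager nodes. The username "opsmon" and the NOPASSWD setting are placeholders/assumptions; adapt them to your environment and security policy.

      # On every cluster node: make the pcs / crm commands reachable for the non-root monitoring user
      echo 'export PATH=$PATH:/usr/sbin' >> ~/.bashrc

      # RGManager clusters only: allow the non-root user to run the required commands via sudo.
      # Add a line like the following with visudo ("opsmon" is a placeholder username):
      opsmon ALL=(root) NOPASSWD: /usr/sbin/cman_tool nodes, /usr/sbin/cman_tool status, \
          /usr/sbin/clustat -l, /sbin/service cman status, /sbin/service rgmanager status, \
          /sbin/service corosync status, /usr/sbin/dmidecode -s system-uuid, \
          /bin/cat /sys/class/dmi/id/product_serial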

Hierarchy of Linux Cluster

Linux Cluster
  - Linux Cluster Nodes

Application Migration

  1. As a prerequisite, check the gateway version: Classic Gateway 12.0.1 and above.
    Notes:

    • Follow these steps only when you want to migrate from SDK 1.0 to SDK 2.0.
    • These steps are not required for a first-time installation.
  2. Disable all configurations associated with the SDK 1.0 adaptor integration application.

  3. Install the SDK 2.0 application and add the configuration to it.
    Note: Refer to the Configure and Install the Linux Failover Cluster Integration and View the Linux Failover Cluster Details sections of this document.

  4. Once all discoveries are completed with the SDK 2.0 application, follow one of the approaches below.

    • Directly uninstall the SDK 1.0 adaptor application through the uninstall API with skipDeleteResources=true in the POST request (a curl sketch follows these steps).

      End-Point: https://{{host}}/api/v2/tenants/{tenantId}/integrations/installed/{installedIntgId}

      Request Body:
          {
          "uninstallReason": "Test",
          "skipDeleteResources": true
          }


      (OR)

    • Delete the configurations one by one through the Delete adaptor config API with the request parameter skipDeleteResources=true

      End-Point: https://{{host}}/api/v2/tenants/{tenantId}/integrations/installed/config/{configId}?skipDeleteResources=true

    • Finally, uninstall the adaptor application through the API with skipDeleteResources=true in the POST request.

      End-Point: https://{{host}}/api/v2/tenants/{tenantId}/integrations/installed/{installedIntgId}

      Request Body:
          {
          "uninstallReason": "Test",
          "skipDeleteResources": true
          }
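
    The following curl sketch illustrates both approaches. The bearer-token header and the DELETE method for the per-configuration call are assumptions based on typical OpsRamp REST API usage; replace the host, tenant ID, integration/config IDs, and token with your own values.

        # Approach 1 / final step: uninstall the SDK 1.0 adaptor application, keeping discovered resources
        curl -X POST "https://{host}/api/v2/tenants/{tenantId}/integrations/installed/{installedIntgId}" \
             -H "Authorization: Bearer <access-token>" \
             -H "Content-Type: application/json" \
             -d '{"uninstallReason": "Test", "skipDeleteResources": true}'

        # Approach 2: delete configurations one by one first, again skipping resource deletion
        curl -X DELETE "https://{host}/api/v2/tenants/{tenantId}/integrations/installed/config/{configId}?skipDeleteResources=true" \
             -H "Authorization: Bearer <access-token>"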

Supported Metrics


Resource Type: Cluster

Native Type | Metric Name | Display Name | Metric Label | Unit | Application Version | Cluster Type | Description
Linux Cluster | linux_cluster_nodes_status | Cluster Node Status | Availability |  | 1.0.0 | All | Status of each node present in the Linux cluster: 0 - offline, 1 - online, 2 - standby
Linux Cluster | linux_cluster_service_status_Hawk | High Availability Web Konsole Service Status | Availability |  | 3.0.0 | CRManager | High Availability Web Konsole service status: 0 - failed, 1 - active, 2 - unknown
Linux Cluster | linux_cluster_service_status_Sbd | Storage-Based Death Service Status | Availability |  | 3.0.0 | CRManager | Storage-Based Death (also known as STONITH Block Device) service status: 0 - failed, 1 - active, 2 - unknown
Linux Cluster | linux_cluster_system_OS_Uptime | System Uptime | Availability | m | 1.0.0 | All | Time elapsed since the last reboot, in minutes
Linux Cluster | linux_cluster_system_cpu_Load | System CPU Load | Usage |  | 1.0.0 | All | Monitors the system's last 1 min, 5 min, and 15 min load averages, reported per CPU core
Linux Cluster | linux_cluster_system_cpu_Utilization | System CPU Utilization | Usage | % | 1.0.0 | All | Percentage of elapsed time the processor spends executing non-idle threads (does not include CPU steal time)
Linux Cluster | linux_cluster_system_memory_Usedspace | System Memory Used Space | Usage | Gb | 1.0.0 | All | Physical and virtual memory usage in GB
Linux Cluster | linux_cluster_system_memory_Utilization | System Memory Utilization | Usage | % | 1.0.0 | All | Physical and virtual memory usage in percentage
Linux Cluster | linux_cluster_system_cpu_Usage_Stats | System CPU Usage Statistics | Usage | % | 1.0.0 | All | Monitors CPU time, in percentage, spent in various program spaces: User - time spent running user-space processes; System - time spent running the kernel; IOWait - time the CPU is idle while waiting for an I/O operation to complete; Idle - time the processor spends idle; Steal - time the virtual CPU waits for the hypervisor to service another virtual CPU on a different virtual machine; Kernel Time; Total Time
Linux Cluster | linux_cluster_system_disk_Usedspace | System Disk UsedSpace | Usage | Gb | 1.0.0 | All | Monitors disk used space in GB
Linux Cluster | linux_cluster_system_disk_Utilization | System Disk Utilization | Usage | % | 1.0.0 | All | Monitors disk utilization in percentage
Linux Cluster | linux_cluster_system_disk_Inode_Utilization | System Disk Inode Utilization | Usage | % | 1.0.0 | All | Collects disk inode metrics for all physical disks on a server
Linux Cluster | linux_cluster_system_disk_freespace | System FreeDisk Usage | Usage | Gb | 1.0.0 | All | Monitors free disk space in GB
Linux Cluster | linux_cluster_system_network_interface_Traffic_In | System Network In Traffic | Performance | Kbps | 1.0.0 | All | Monitors in traffic of each interface for Linux devices
Linux Cluster | linux_cluster_system_network_interface_Traffic_Out | System Network Out Traffic | Performance | Kbps | 1.0.0 | All | Monitors out traffic of each interface for Linux devices
Linux Cluster | linux_cluster_system_network_interface_Packets_In | System Network In packets | Performance | packets/sec | 1.0.0 | All | Monitors in packets of each interface for Linux devices
Linux Cluster | linux_cluster_system_network_interface_Packets_Out | System Network out packets | Performance |  | 1.0.0 | All | Monitors out packets of each interface for Linux devices
Linux Cluster | linux_cluster_system_network_interface_Errors_In | System Network In Errors | Availability | Errors per Sec | 1.0.0 | All | Monitors network in errors of each interface for Linux devices
Linux Cluster | linux_cluster_system_network_interface_Errors_Out | System Network Out Errors | Availability | Errors per Sec | 1.0.0 | All | Monitors network out errors of each interface for Linux devices
Linux Cluster | linux_cluster_system_network_interface_discards_In | System Network In discards | Availability | psec | 1.0.0 | All | Monitors network in discards of each interface for Linux devices
Linux Cluster | linux_cluster_system_network_interface_discards_Out | System Network Out discards | Availability | psec | 1.0.0 | All | Monitors network out discards of each interface for Linux devices
Linux Cluster | linux_cluster_service_status_Pacemaker | Pacemaker Service Status | Availability |  | 1.0.0 | Pacemaker/CRManager | Pacemaker High Availability Cluster Manager service status: 0 - failed, 1 - active, 2 - unknown
Linux Cluster | linux_cluster_service_status_Corosync | Corosync Service Status | Availability |  | 1.0.0 | Pacemaker/CRManager | Corosync Cluster Engine (group communication system) service status: 0 - failed, 1 - active, 2 - unknown
Linux Cluster | linux_cluster_service_status_PCSD | PCSD Service Status | Availability |  | 1.0.0 | Pacemaker/RGManager | PCS GUI and remote configuration interface service status: 0 - failed, 1 - active, 2 - unknown
Linux Cluster | linux_cluster_Online_Nodes_Count | Online Nodes Count | Availability | count | 1.0.0 | All | Count of online cluster nodes
Linux Cluster | linux_cluster_Failover_Status | Cluster FailOver Status | Availability |  | 1.0.0 | All | Cluster failover status: 0 - the cluster is running on the same node, 1 - a failover has occurred
Linux Cluster | linux_cluster_node_Health | Cluster Node Health Percentage | Availability | % | 1.0.0 | All | Percentage of online Linux nodes available within the cluster
Linux Cluster | linux_cluster_service_Status | Linux Cluster Service Status | Availability |  | 1.0.0 | Pacemaker/RGManager | Cluster services status: 0 - disabled, 1 - blocked, 2 - failed, 3 - stopped, 4 - recovering, 5 - stopping, 6 - starting, 7 - started, 8 - unknown
Linux Cluster | linux_cluster_service_status_rgmanager | RGManager Service Status | Availability |  | 1.0.0 | RGManager | RGManager service status: 0 - failed, 1 - active, 2 - unknown
Linux Cluster | linux_cluster_service_status_CMAN | CMAN Service Status | Availability |  | 1.0.0 | RGManager | CMAN service status: 0 - failed, 1 - active, 2 - unknown
Linux Cluster | linux_cluster_fence_status | Linux Cluster Fence Status | Availability |  | 2.0.0 | Pacemaker | Cluster fence status: 0 - disabled, 1 - blocked, 2 - failed, 3 - stopped, 4 - recovering, 5 - stopping, 6 - starting, 7 - started, 8 - unknown
Linux Cluster | linux_cluster_fence_failover_status | Cluster Fence FailOver Status | Availability |  | 2.0.0 | Pacemaker | Cluster fence failover status: 0 - the fence is running on the same node, 1 - a failover has occurred
Linux Cluster | linux_cluster_service_failover_status | Cluster Service FailOver Status | Availability |  | 2.0.0 | Pacemaker | Cluster service failover status: 0 - the resource group is running on the same node, 1 - a failover has occurred
Linux Cluster | linux_cluster_failed_actions_count | Cluster Failed Resource Actions Count | Availability | Count | 2.0.0 | Pacemaker | Count of failed cluster resource actions

Resource Type: Server

Native Type | Metric Name | Display Name | Metric Label | Unit | Application Version | Cluster Type | Description
Linux Cluster Node | linux_node_system_OS_Uptime | System Uptime | Availability | m | 1.0.0 | All | Time elapsed since the last reboot, in minutes
Linux Cluster Node | linux_node_system_cpu_Load | System CPU Load | Usage |  | 1.0.0 | All | Monitors the system's last 1 min, 5 min, and 15 min load averages, reported per CPU core
Linux Cluster Node | linux_node_system_cpu_Utilization | System CPU Utilization | Usage | % | 1.0.0 | All | Percentage of elapsed time the processor spends executing non-idle threads (does not include CPU steal time)
Linux Cluster Node | linux_node_system_memory_Usedspace | System Memory Used Space | Usage | Gb | 1.0.0 | All | Physical and virtual memory usage in GB
Linux Cluster Node | linux_node_system_memory_Utilization | System Memory Utilization | Usage | % | 1.0.0 | All | Physical and virtual memory usage in percentage
Linux Cluster Node | linux_node_system_cpu_Usage_Stats | System CPU Usage Statistics | Usage | % | 1.0.0 | All | Monitors CPU time, in percentage, spent in various program spaces: User - time spent running user-space processes; System - time spent running the kernel; IOWait - time the CPU is idle while waiting for an I/O operation to complete; Idle - time the processor spends idle; Steal - time the virtual CPU waits for the hypervisor to service another virtual CPU on a different virtual machine; Kernel Time; Total Time
Linux Cluster Node | linux_node_system_disk_Usedspace | System Disk UsedSpace | Usage | Gb | 1.0.0 | All | Monitors disk used space in GB
Linux Cluster Node | linux_node_system_disk_Utilization | System Disk Utilization | Usage | % | 1.0.0 | All | Monitors disk utilization in percentage
Linux Cluster Node | linux_node_system_disk_Inode_Utilization | System Disk Inode Utilization | Usage | % | 1.0.0 | All | Collects disk inode metrics for all physical disks on a server
Linux Cluster Node | linux_node_system_disk_freespace | System FreeDisk Usage | Usage | Gb | 1.0.0 | All | Monitors free disk space in GB
Linux Cluster Node | linux_node_system_network_interface_Traffic_In | System Network In Traffic | Performance | Kbps | 1.0.0 | All | Monitors in traffic of each interface for Linux devices
Linux Cluster Node | linux_node_system_network_interface_Traffic_Out | System Network Out Traffic | Performance | Kbps | 1.0.0 | All | Monitors out traffic of each interface for Linux devices
Linux Cluster Node | linux_node_system_network_interface_Packets_In | System Network In packets | Performance | packets/sec | 1.0.0 | All | Monitors in packets of each interface for Linux devices
Linux Cluster Node | linux_node_system_network_interface_Packets_Out | System Network out packets | Performance | packets/sec | 1.0.0 | All | Monitors out packets of each interface for Linux devices
Linux Cluster Node | linux_node_system_network_interface_Errors_In | System Network In Errors | Availability | Errors per Sec | 1.0.0 | All | Monitors network in errors of each interface for Linux devices
Linux Cluster Node | linux_node_system_network_interface_Errors_Out | System Network Out Errors | Availability | Errors per Sec | 1.0.0 | All | Monitors network out errors of each interface for Linux devices
Linux Cluster Node | linux_node_system_network_interface_discards_In | System Network In discards | Availability | psec | 1.0.0 | All | Monitors network in discards of each interface for Linux devices
Linux Cluster Node | linux_node_system_network_interface_discards_Out | System Network Out discards | Availability | psec | 1.0.0 | All | Monitors network out discards of each interface for Linux devices

Default Monitoring Configurations

The Linux Failover Cluster application has default Global Device Management Policies, Global Templates, Global Monitors, and Global Metrics in OpsRamp. You can customize these default monitoring configurations for your business use cases by cloning the respective Global Templates and Global Device Management Policies. OpsRamp recommends doing this before installing the application to avoid unwanted alerts and data.

  1. Default Global Device Management Policies

    OpsRamp has a Global Device Management Policy for each Native Type of Linux Failover Cluster. You can find those Device Management Policies at Setup > Resources > Device Management Policies; search with the suggested names in the global scope. Each Device Management Policy follows the naming convention below:

    {appName nativeType - version}

    Ex: linux-failover-cluster Linux Cluster - 1 (i.e, appName = linux-failover-cluster, nativeType = Linux Cluster, version = 1)

  2. Default Global Templates

    OpsRamp has a Global Template for each Native Type of LINUX-FAILOVER-CLUSTER. You can find those templates at Setup > Monitoring > Templates; search with the suggested names in the global scope. Each template follows the naming convention below:

    {appName nativeType 'Template' - version}

    Ex: linux-failover-cluster Linux Cluster Template - 1 (i.e, appName = linux-failover-cluster, nativeType = Linux Cluster, version = 1)

  3. Default Global Monitors

    OpsRamp has a Global Monitor for each Native Type that has monitoring support. You can find those monitors at Setup > Monitoring > Monitors; search with the suggested names in the global scope. Each monitor follows the naming convention below:

    {monitorKey appName nativeType - version}

    Example: Linux Failover Cluster Monitor linux-failover-cluster Linux Cluster 1 (i.e, monitorKey = Linux Failover Cluster Monitor, appName = linux-failover-cluster, nativeType = Linux Cluster, version = 1)

Configure and Install the Linux Failover Cluster Integration

  1. From All Clients, select a client.
  2. Navigate to Setup > Account.
  3. Select the Integrations and Apps tab.
  4. The Installed Integrations page displays all the installed applications. Note: If there are no installed applications, you are navigated to the Available Integrations and Apps page.
  5. Click + ADD on the Installed Integrations page. The Available Integrations and Apps page displays all the available applications along with the newly created application with the version.
  6. Search for the application using the search option available. Alternatively, use the All Categories option to search.
Linux Install Integration
  1. Click ADD in the Linux Failover Cluster application.
  2. In the Configurations page, click + ADD. The Add Configuration page appears.
  3. Enter the following BASIC INFORMATION:
Object Name | Description
Name | Enter the name for the integration.
IP Address/Host Name | IP address or host name of the target.
Credentials | Select the credentials from the dropdown list. Note: Click + Add to create a credential.
Cluster Type | Select Pacemaker, RGManager, or CRManager from the Cluster Type drop-down list.

Note:

  • The IP Address/Host Name should be accessible from the Gateway.
  • Select App Failure Notifications to be notified in case of an application failure, that is, a Connectivity Exception or Authentication Exception.
  1. Select the following Custom Attribute:
Functionality | Description
Custom Attribute | Select the custom attribute from the drop-down list.
Value | Select the value from the drop-down list.

Note: The custom attribute that you add here will be assigned to all the resources that are created by the integration. You can add a maximum of five custom attributes (key and value pair).

  1. In the RESOURCE TYPE section, select:
    • ALL: All the existing and future resources will be discovered.
    • SELECT: You can select one or multiple resources to be discovered.
  2. In the DISCOVERY SCHEDULE section, select Recurrence Pattern to add one of the following patterns:
    • Minutes
    • Hourly
    • Daily
    • Weekly
    • Monthly
  3. Click ADD.

The configuration is saved and displayed on the configurations page.
Note: From the same page, you may Edit and Remove the created configuration.

  1. Under ADVANCED SETTINGS, select the Bypass Resource Reconciliation option if you wish to bypass resource reconciliation when the same resources are discovered by multiple applications.

    Note: If two different applications provide identical discovery attributes, two separate resources will be generated with those respective attributes from the individual discoveries.

  2. Click NEXT.

  3. (Optional) Click +ADD to create a new collector by providing a name or use the pre-populated name.

  1. Select an existing registered profile.
  1. Click FINISH.

The application is installed and displayed on the INSTALLED INTEGRATION page. Use the search field to find the installed integration.

Modify the Configuration

View the Linux Failover Cluster Details

To view the discovered Linux Failover Cluster resources:

  1. Navigate to Infrastructure > Search > OS > Linux Failover Cluster.
  2. The LINUX FAILOVER CLUSTER page is displayed; select the application name.
  3. The RESOURCE page appears on the right.
  4. Click the ellipsis (...) icon on the top right and select View details.
Linux Install Integration

View resource attributes

The discovered resource(s) are displayed under Attributes. On this page you can see basic information about the resources, such as Resource Type, Native Resource Type, Resource Name, and IP Address.

Linux Install Integration

View resource metrics

To confirm Linux Cluster monitoring, review the following:

  • Metric graphs: A graph is plotted for each metric that is enabled in the configuration.
  • Alerts: Alerts are generated for metrics as configured for the integration.
Linux Install Integration

Supported Alert Custom Macros

You can customize the alert subject and description with the macros below; alerts are then generated based on that customization.
Supported macro keys:


    ${resource.name}
    ${resource.ip}
    ${resource.mac}
    ${resource.aliasname}
    ${resource.os}
    ${resource.type}
    ${resource.dnsname}
    ${resource.alternateip}
    ${resource.make}
    ${resource.model}
    ${resource.serialnumber}
    ${resource.systemId}
    ${Custom Attributes in the resource}
    ${parent.resource.name}
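
For example, a customized alert subject and description built from these macros might look like the following (illustrative only):

    Subject: ${resource.name} (${resource.ip}) - Linux cluster threshold breach
    Description: Alert raised on ${resource.name} [type: ${resource.type}, serial: ${resource.serialnumber}]; parent cluster: ${parent.resource.name}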

Risks, Limitations & Assumptions

  • The application can handle Critical/Recovery failure notifications for the below two cases when the user enables App Failure Notifications in the configuration:
    • Connectivity Exception
    • Authentication Exception
  • The application sends duplicate/repeat failure alert notifications every 6 hours.
  • Macro replacement is supported for threshold breach alerts (i.e., customization of the threshold breach alert's subject and description).
  • The application cannot control monitoring pause/resume actions based on the above alerts.
  • Metrics can be used to monitor Linux-Failover-Cluster resources and can generate alerts based on the threshold values.
  • Showing activity logs is not supported.
  • The Template Applied Time will only be displayed if the collector profile (Classic and NextGen Gateway) is version 18.1.0 or higher.
  • This application supports both Classic Gateway and NextGen Gateway.
  • For the metric linux_cluster_failed_actions_count, an alert is generated if the failed actions count is greater than or equal to 1. If an alert has been raised on a component, a repeat alert is generated on that component only after 6 hours, provided a threshold breach still exists. The created alerts are not healed by the application.
  • For the metrics linux_cluster_fence_failover_status and linux_cluster_service_failover_status, an alert is generated if the node on which the service runs changes. The created alerts are healed by the application in a subsequent poll if the service is running on the same node. Also, the metric graphs will show discontinuities because the component name is set as service_name:node.
  • The minimum supported version for the option to get the latest snapshot metric is Nextgen-14.0.0.

Version History

Application Version | Bug fixes / Enhancements
3.0.0 | Added support for Linux CR Manager
2.0.0 | Support given for native type wise discovery. Support given for the following metrics: linux_cluster_fence_status, linux_cluster_fence_failover_status, linux_cluster_service_failover_status, linux_cluster_failed_actions_count
1.0.9 | Support added for metric label changes.
1.0.8 | Fixed discovery response parsing issue.
1.0.7 | Full discovery support added.
1.0.6 | Monitoring parsing issues have been fixed for service status metrics. Fixed the logic to make the SSH connection for RGManager Linux clusters.
1.0.5 | Monitoring parsing issues have been fixed.
1.0.4 | Macro support for alert subject and description customization. Support added to get the latest metric snapshot data (from Gateway v14.0.0). Added support for template-level component filters.
1.0.3 | Added support to alert on the gateway in case initial discovery fails with connectivity/authorization issues.
1.0.2 | Fixed the metrics graphs issue.
1.0.1 | Initial SDK 2.0 app discovery and monitoring implementation.