Supported Target Versions |
---|
Hitachi NAS Platform 4040, 4060, and 4100. |
The HNAS server firmware version must be greater than 12.4. |
Full discovery support requires gateway version 15.0.0 or above. |
Application Version and Upgrade Details
Application Version | Bug fixes / Enhancements |
---|---|
3.0.0 | Added API statistics, metric labels, and API timeouts. |
Introduction
Hitachi Network Attached Storage (HNAS) storage systems are discovered via an SNMP agent that can be configured using the embedded System Management Unit (SMU). When an SNMP discovery finds an HNAS server’s IP address, a Storage Device node representing either a cluster or a standalone HNAS server is created. The creation or confirmation of the Storage Device node triggers the appropriate storage discovery pattern, which continues discovering the storage entity using further SNMP requests to find the associated storage components. Once discovered, the storage entities are modeled in BMC Discovery.
OpsRamp integrates with HNAS using REST APIs exposed through HNAS gateways.
Key Use Cases
Discovery Use Cases
- Discovers the Hitachi HNAS storage device and its components:
- Hitachi NAS File Device
- Hitachi NAS Nodes (component of File Device)
- Hitachi NAS Virtual Servers (component of File Device)
- Hitachi NAS Storage Pools (component of File Device)
- Hitachi NAS File Systems (component of Storage Pools)
- Hitachi NAS System Drives (component of File Device)
Monitoring Use Cases
- Provides metrics related to storage management components and storage server statistics.
- Concern alerts are generated for each metric to notify the administrator about issues with the resource.
Prerequisites
- OpsRamp NextGen Gateway version 14.0.0 and above.
- OpsRamp Classic Gateway version 14.0.0 and above.
Note: OpsRamp recommends using the latest Gateway version for full coverage of recent bug fixes, enhancements, and so on.
Application Migration
Check the gateway version as a prerequisite step: Classic Gateway 12.0.1 and above.
Notes:
- Follow these steps only when you want to migrate from SDK 1.0 to SDK 2.0.
- For a first-time installation, the below steps are not required.
Disable all configurations associated with the SDK 1.0 adapter integration application.
Install the SDK 2.0 application and add the configuration to it.
Note: Refer to the Configure and Install the Hitachi NAS Integration and View the Hitachi NAS Details sections of this document. Once all discoveries are completed with the SDK 2.0 application, follow any one of the below approaches.
Directly uninstall the SDK 1.0 adapter application through the uninstall API, with skipDeleteResources=true in the DELETE request.
End-Point:
https://{{host}}/api/v2/tenants/{tenantId}/integrations/installed/{installedIntgId}
Request Body:
{ "uninstallReason": "Test", "skipDeleteResources": true }
(OR) Delete the configurations one by one through the Delete Adapter Config API, with the request parameter skipDeleteResources=true in the DELETE request.
End-Point:
https://{{host}}/api/v2/tenants/{tenantId}/integrations/installed/config/{configId}?skipDeleteResources=true
Finally, uninstall the adapter application through the API with skipDeleteResources=true in the DELETE request.
End-Point:
https://{{host}}/api/v2/tenants/{tenantId}/integrations/installed/{installedIntgId}
Request Body:
{ "uninstallReason": "Test", "skipDeleteResources": true }
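The two DELETE calls above can be sketched in Python with the standard library. The host, tenant ID, integration/config IDs, and bearer-token header below are placeholders; your OpsRamp tenant may use a different authentication scheme:

```python
import json
from urllib import request

def build_uninstall_request(host: str, tenant_id: str,
                            installed_intg_id: str, token: str) -> request.Request:
    """Build the DELETE request that uninstalls the SDK 1.0 adapter
    application while keeping its resources (skipDeleteResources=true
    in the request body)."""
    url = (f"https://{host}/api/v2/tenants/{tenant_id}"
           f"/integrations/installed/{installed_intg_id}")
    body = json.dumps({"uninstallReason": "Test",
                       "skipDeleteResources": True}).encode()
    return request.Request(url, data=body, method="DELETE",
                           headers={"Authorization": f"Bearer {token}",
                                    "Content-Type": "application/json"})

def build_delete_config_url(host: str, tenant_id: str, config_id: str) -> str:
    """Build the per-configuration delete endpoint, where
    skipDeleteResources=true is passed as a query parameter instead."""
    return (f"https://{host}/api/v2/tenants/{tenant_id}"
            f"/integrations/installed/config/{config_id}"
            "?skipDeleteResources=true")
```

Send the built request with `urllib.request.urlopen(...)` (or any HTTP client) once the placeholders are filled in for your environment.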
Unassign template (or) delete the empty graphs for SDK 1.0
- Navigate to Infrastructure > Resource > Hitachi HNAS.
- Select the resource and click Metric.
- Copy the Template name.
- Navigate to Setup > Monitoring > Template.
- Right-click the Search By Template Name search box, and click Paste on the Quick Access Toolbar.
- Click the Device number. The Devices screen appears.
- Select the resource and click Unassign. The confirmation message is displayed.
- Select the Also delete the associated graphs checkbox, and click Yes.
Supported Metrics
HNAS Component | Metric Name | Metric Display Name | Units | Application Version |
---|---|---|---|---|
Hitachi NAS File Device | hnas_filedevice_Status | hnas filedevice status | | 2.0.0 |
hnas_node_count | hnas node count | Count | 2.0.0 | |
hnas_virtualserver_Count | hnas virtual server count | Count | 2.0.0 | |
hnas_Api_Statistics | hnas API Statistics | | 3.0.0 | |
Hitachi NAS Storage Pools | hnas_storagepool_Healthstatus | hnas storage pool health | BOOLEAN | 2.0.0 |
hnas_storagepool_Freecapacity | hnas storage pool free capacity | Bytes | 2.0.0 | |
hnas_storagepool_Usedcapacity | hnas storage pool used capacity | Bytes | 2.0.0 | |
hnas_storagepool_Capacityutilization | hnas storage pool percentage | Percentage | 2.0.0 | |
SystemDrive | hnas_systemdrive_Capacity | hnas system drives capacity | Bytes | 2.0.0 |
hnas_systemdrive_Status | hnas device system drives status | | 2.0.0 | |
hnas_systemdrive_IsAccessAllowed | hnas system drives is access allowed | BOOLEAN | 2.0.0 | |
hnas_systemdrive_IsAssignedToStoragePool | hnas system drives is assigned to storage pool | BOOLEAN | 2.0.0 | |
hnas_systemdrive_IsMirrored | hnas system drive is mirrored | BOOLEAN | 2.0.0 | |
hnas_systemdrive_IsMirrorPrimary | hnas system drives is mirror primary | BOOLEAN | 2.0.0 | |
Hitachi NAS VirtualServer | hnas_virtualserver_Status | hnas virtual server status | | 2.0.0 |
hnas_evs_migration_Status | hnas virtual server migration status | Boolean | 2.0.0 | |
Hitachi NAS Nodes | hnas_node_Status | hnas node status | | 2.0.0 |
hnas_node_disk_ReadLatency | hnas disk read latency for nodes | milliSeconds | 2.0.0 | |
hnas_node_diskstripe_WriteLatency | hnas disk stripe write latency | milliSeconds | 2.0.0 | |
hnas_node_disk_WriteLatency | hnas disk write latency | milliSeconds | 2.0.0 | |
hnas_node_ethernet_Throughput_rx | hnas ethernet throughput rx | Mbps | 2.0.0 | |
hnas_node_fibrechannel_Throughput_rx | hnas fibre channel throughput for rx | Mbps | 2.0.0 | |
hnas_node_fibrechannel_throughput_tx | hnas fibre channel throughput for tx | Mbps | 2.0.0 | |
hnas_node_fsi_CacheUsage | hnas node fsi cache usage | Percentage | 2.0.0 | |
hnas_node_HeapUsage | hnas node heap usage | Percentage | 2.0.0 | |
hnas_node_Mfb_load | hnas node mfb Usage | Percentage | 2.0.0 | |
hnas_node_Mmb_load | hnas node mmb load percentage | Percentage | 2.0.0 | |
hnas_node_TotalOperation | hnas total operations per sec for node | PerSec | 2.0.0 | |
hnas_node_nvram_WaitedAllocs | hnas nvram waited allocs | Count | 2.0.0 | |
hnas_node_PI_tcpSockets_receiveFibres | hnas pi and tcp socket receive fibres | Count | 2.0.0 | |
hnas_node_Running_bossock_fibres | hnas running bossock fibres | Count | 2.0.0 | |
hnas_node_filesystem_Opspersec | hnas file system operations per sec for each node | Per Sec | 2.0.0 | |
hnas_node_virus_scanCount | hnas virus scan count | Count | 2.0.0 | |
hnas_node_virus_scanCleancount | hnas virus scan clean count | Count | 2.0.0 | |
hnas_node_virus_scanErrors | hnas virus scan errors | Count | 2.0.0 | |
hnas_node_virus_scanInfectionsFound | hnas virus scan infections found | Count | 2.0.0 | |
hnas_node_virus_scanActionsTaken | hnas virus scan actions taken | Count | 2.0.0 | |
hnas_node_virus_scanInfectionsRepaired | hnas virus scan infections repaired | Count | 2.0.0 | |
hnas_node_virus_scanfiles_DeleteCount | hnas virus scan files delete count | Count | 2.0.0 | |
hnas_node_virus_scanfiles_QuarantinedCount | hnas virus scan files quarantined count | Count | 2.0.0 | |
hnas_node_fibre_channel_PortStatus | hnas fibre channel port status | ENUM(UP) | 2.0.0 | |
hnas_node_iscsi_Current_sessions | hnas iscsi current sessions | Count | 2.0.0 | |
hnas_node_iscsi_Current_connections | hnas iscsi current connections | Count | 2.0.0 | |
hnas_node_tcpip_Failed_connections | hnas tcpip failed connections | Count | 2.0.0 | |
hnas_node_nvram_Size | hnas nvram size | Giga Bytes | 2.0.0 | |
hnas_node_nvram_MaxUse | hnas nvram maximum use | Giga Bytes | 2.0.0 | |
hnas_node_nvram_CurrentUse | hnas nvram currently in use | Mega Bytes | 2.0.0 | |
hnas_node_nvram_Utilization | hnas nvram utilization | Percentage | 2.0.0 | |
hnas_node_snmp_input_General_errors | hnas snmp input general errors | Count | 2.0.0 | |
hnas_node_snmp_input_AsnParse_errors | hnas snmp input asn parse errors | Count | 2.0.0 | |
hnas_node_snmp_output_Packets | hnas snmp output packets | Count | 2.0.0 | |
hnas_node_snmp_output_GeneralErrors | hnas snmp output general errors | Count | 2.0.0 | |
hnas_node_snmp_output_Traps | hnas snmp output traps | Count | 2.0.0 | |
hnas_node_snmp_drops_SilentDrops | hnas snmp silent drops | Count | 2.0.0 | |
hnas_node_snmp_drops_ProxyDrops | hnas snmp proxy drops | Count | 2.0.0 | |
hnas_node_ftp_sessions_Active | hnas ftp active sessions | Count | 2.0.0 | |
hnas_node_ftp_sessions_Total_sessions | hnas ftp total sessions | Count | 2.0.0 | |
hnas_node_ftp_data_incoming_ActiveSessions | hnas ftp data incoming active sessions | Count | 2.0.0 | |
hnas_node_ftp_files_outgoing_ActiveSessions | hnas ftp files outgoing active sessions | Count | 2.0.0 | |
hnas_node_ftp_files_Total_outgoing | hnas ftp files total outgoing | Count | 2.0.0 | |
hnas_node_ftp_files_Total_incoming | hnas ftp files total incoming | Count | 2.0.0 | |
hnas_node_ftp_Data_incoming | hnas ftp data incoming | Count | 2.0.0 | |
hnas_node_ftp_Data_outgoing | hnas ftp data outgoing | Count | 2.0.0 | |
hnas_node_ftp_data_incoming_ActiveSessions | hnas ftp data incoming active sessions | Count | 2.0.0 | |
hnas_node_ftp_data_outgoing_ActiveSessions | hnas ftp data outgoing active sessions | Count | 2.0.0 | |
hnas_node_cifs_smb_statistics_ConnectionsCount | hnas cifs smb connections | Count | 2.0.0 | |
hnas_node_cifs_smb_statistics_SharesMappedCount | hnas cifs smb shares mapped connections | Count | 2.0.0 | |
hnas_node_cifs_smb_TransportDisconnects | hnas cifs smb transport disconnects | Count | 2.0.0 | |
hnas_node_cifs_smb_UnorphanedFilereopens | hnas cifs smb unorphaned file reopens | Count | 2.0.0 | |
hnas_node_cifs_smb_Durreopenedfileidallocfailures | hnas cifs smb durreopened file id alloc failures | Count | 2.0.0 | |
hnas_node_cifs_smb_Durorphanfilereopenfailures | hnas cifs smb dur orphan file reopen failures | Count | 2.0.0 | |
hnas_node_cifs_smb_Durpreserveorphanfailures | hnas cifs smb dur preserve orphan failures | Count | 2.0.0 | |
hnas_ethernet_Interfaceutil | hnas ethernet interface utilization | Count | 2.0.0 | |
hnas_transmit_Rate_Instantaneous | hnas ethernet transmit rate instantaneous | Mb/sec | 2.0.0 | |
hnas_transmit_rate_Peak | hnas ethernet transmit rate peak | Mb/sec | 2.0.0 | |
hnas_receive_rate_Instantaneous | hnas ethernet receive rate instantaneous | Mb/sec | 2.0.0 | |
hnas_receive_rate_Peak | hnas ethernet receive rate peak | Mb/sec | 2.0.0 | |
Hitachi NAS FileSystems | hnas_filesystem_Freecapacity | hnas filesystem free capacity | Bytes | 2.0.0 |
hnas_filesystem_Usedcapacity | hnas file system used capacity | Bytes | 2.0.0 | |
hnas_filesystem_Logicalfreecapacity | hnas file system logical free capacity | Bytes | 2.0.0 | |
hnas_filesystem_Utilization | hnas file system percentage | Percentage | 2.0.0 | |
hnas_filesystem_Status | hnas file system status | | 2.0.0 | |
hnas_filesystem_IsDedupeEnabled | hnas filesystem isdedupeenabled | BOOLEAN | 2.0.0 | |
hnas_filesystemquotas_ByteQuotaUtilization | hnas file system byte quotas utilization | | 2.0.0 | |
hnas_filesystemquotas_FileQuotaUtilization | hnas file system file quotas utilization | | 2.0.0 | |
hnas_filesystemquotas_ByteUsage | hnas file system quotas byte usage | GB | 2.0.0 | |
hnas_filesystemquotas_FileUsage | hnas file system quotas file usage | Count | 2.0.0 | |
hnas_filesystemquotas_byteusageHardSetting | hnas file system quotas byte usage hard setting | Boolean | 2.0.0 | |
hnas_filesystemquotas_fileusageHardSetting | hnas file system quotas file usage hard setting | Boolean | 2.0.0 | |
hnas_virtualvolumequotas_FileQuotaUtilization | hnas virtual volume file quotas utilization | Percentage | 2.0.0 | |
hnas_virtualvolumequotas_ByteQuotaUtilization | hnas virtual volume byte quotas utilization | Percentage | 2.0.0 | |
hnas_virtualvolumequotas_byteusage | hnas virtual volume quotas byte usage | GB | 2.0.0 | |
hnas_virtualvolumequotas_fileusage | hnas virtual volume quotas file usage | Count | 2.0.0 | |
hnas_virtualvolumequotas_byteusageHardSetting | hnas virtual volume quotas byte usage hard setting | Boolean | 2.0.0 | |
hnas_virtualvolumequotas_fileusageHardSetting | hnas virtual volume quotas file usage hard setting | Boolean | 2.0.0 | |
hnas_object_replication_BytesTransferred | hnas object replication bytes transferred rate | Mb/sec | 2.0.0 | |
hnas_object_replication_BytesRemaining | hnas object replication bytes remaining | Bytes | 2.0.0 | |
hnas_object_replication_BytesTransferred_Rate | hnas object replication bytes transferred rate | | 2.0.0 | |
hnas_object_replication_Status | hnas object replication status | | 2.0.0 | |
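The utilization metrics in the table (for example, hnas_storagepool_Capacityutilization and hnas_filesystem_Utilization) report a percentage alongside the raw used and free capacity metrics. A minimal sketch of the assumed relationship, used/(used+free), is shown below; the exact formula the application uses is not documented here:

```python
def capacity_utilization(used_bytes: int, free_bytes: int) -> float:
    """Percentage utilization derived from used and free capacity.
    Assumption: utilization = used / (used + free) * 100, rounded to
    two decimals; guard against a zero-capacity pool."""
    total = used_bytes + free_bytes
    if total == 0:
        return 0.0
    return round(100.0 * used_bytes / total, 2)
```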
Default Monitoring Configurations
Hitachi NAS has default Global Device Management Policies, Global Templates, Global Monitors, and Global Metrics in OpsRamp. You can customize these default monitoring configurations for your business use cases by cloning the respective Global Templates and Global Device Management Policies. OpsRamp recommends performing the below activity before installing the application to avoid unwanted alerts and data.
Default Global Device Management Policies
OpsRamp has a Global Device Management Policy for each Native Type of hitachi-nas. You can find those Device Management Policies at Setup > Resources > Device Management Policies; search with the suggested names in the global scope. Each Device Management Policy follows the below naming convention:
{appName nativeType - version}
Ex: hitachi-nas Hitachi NAS File Device - 2 (i.e., appName = hitachi-nas, nativeType = Hitachi NAS File Device, version = 2)
Default Global Templates
OpsRamp has a Global Template for each Native Type of hitachi-nas. You can find those templates at Setup > Monitoring > Templates; search with the suggested names in the global scope. Each template follows the below naming convention:
{appName nativeType 'Template' - version}
Ex: hitachi-nas Hitachi NAS File Device Template - 2 (i.e., appName = hitachi-nas, nativeType = Hitachi NAS File Device, version = 2)
Default Global Monitors
OpsRamp has a Global Monitor for each Native Type that has monitoring support. You can find those monitors at Setup > Monitoring > Monitors; search with the suggested names in the global scope. Each monitor follows the below naming convention:
{monitorKey appName nativeType - version}
Ex: Hitachi NAS File Device Monitor hitachi-nas Hitachi NAS File Device 2 (i.e., monitorKey = Hitachi NAS File Device Monitor, appName = hitachi-nas, nativeType = Hitachi NAS File Device, version = 2)
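The three naming conventions above can be expressed as small helpers, which is handy when scripting searches for the default policies, templates, and monitors. This is only a restatement of the conventions documented above (note that the monitor example omits the hyphen before the version, and the sketch follows the example):

```python
def policy_name(app_name: str, native_type: str, version: int) -> str:
    # Convention: {appName nativeType - version}
    return f"{app_name} {native_type} - {version}"

def template_name(app_name: str, native_type: str, version: int) -> str:
    # Convention: {appName nativeType 'Template' - version}
    return f"{app_name} {native_type} Template - {version}"

def monitor_name(monitor_key: str, app_name: str,
                 native_type: str, version: int) -> str:
    # Follows the documented example: {monitorKey appName nativeType version}
    return f"{monitor_key} {app_name} {native_type} {version}"
```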
Configure and Install the Hitachi NAS Integration
- From All Clients, select a client.
- Navigate to Setup > Account.
- Select the Integrations and Apps tab.
- The Installed Integrations page displays all the installed applications. If there are no installed applications, you are navigated to the Available Integrations and Apps page.
- Click + ADD on the Installed Integrations page. The Available Integrations and Apps page displays all the available applications along with the newly created application with the version.
Note: Search for the application using the search option available. Alternatively, use the All Categories option to search.
- Click ADD in the Hitachi NAS application.
- In the Configurations page, click + ADD. The Add Configuration page appears.
- Enter the below mentioned BASIC INFORMATION:
Functionality | Description |
---|---|
Name | Enter the name for the configuration. |
Host Name / IP Address | Enter the Host name or the IP address. |
Port | Enter the API port. Note: By default, 8444 is selected. |
Serial Number | Enter the serial number. |
Credentials | Select the credentials from the drop-down list. Note: Click + Add to create a credential. |
Notes:
- By default, the Is Secure checkbox is selected.
- Host Name / IP Address and Port should be accessible from Gateway.
- Select the following:
- App Failure Notifications: if turned on, you will be notified in case of an application failure, that is, a Connectivity Exception or an Authentication Exception.
- Select the below mentioned CUSTOM ATTRIBUTE:
Functionality | Description |
---|---|
Custom Attribute | Select the custom attribute from the drop-down list. |
Value | Select the value from the drop-down list. |
Note: The custom attribute that you add here will be assigned to all the resources that are created by the integration. You can add a maximum of five custom attributes (key and value pair).
- In the RESOURCE TYPE section, select:
- ALL: All the existing and future resources will be discovered.
- SELECT: You can select one or multiple resources to be discovered.
- In the DISCOVERY SCHEDULE section, select Recurrence Pattern to add one of the following patterns:
- Minutes
- Hourly
- Daily
- Weekly
- Monthly
- Click ADD.
The configuration is saved and displayed on the Configurations page.
Note: From the same page, you may Edit and Remove the created configuration.
- Click NEXT.
- In the Installation page, select an existing registered gateway profile, and click FINISH.
- Below are the optional steps you can perform on the Installation page.
- Under ADVANCED SETTINGS, select the Bypass Resource Reconciliation checkbox.
Note: If you bypass resource reconciliation for the same resource discovered by multiple applications, two separate resources are created, each with the attributes from its own discovery.
- Click +ADD to create a new collector by providing a name, or use the pre-populated name.
- Select an existing registered profile.
- Click FINISH.
The integration is now installed and displayed on the Installed Integration page. Use the search field to find the installed integration.
Modify the Configuration
See Modify an Installed Integration or Application article.
Note: Select the Hitachi NAS application.
View the Hitachi NAS Details
The Hitachi NAS integration is displayed at Infrastructure > Search > Storage Arrays > Hitachi-Nas. Navigate to the Attributes tab to view the discovery details, and the Metrics tab to view the metric details for Hitachi NAS.
Resource Filter Input Keys
Hitachi NAS application resources are filtered and discovered based on the below keys:
Resource Type | Supported Input Keys |
---|---|
All Types | resourceName |
hostName | |
aliasName | |
dnsName | |
ipAddress | |
macAddress | |
os | |
make | |
model | |
serialNumber |
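The filter keys in the table above can be thought of as attribute/value matches against discovered resources. The sketch below is illustrative only; the actual matching semantics in OpsRamp (patterns, wildcards, case handling) are an assumption here:

```python
def matches_filter(resource: dict, filters: dict) -> bool:
    """Return True when every filter key/value pair matches the resource.
    Supported keys per the table above: resourceName, hostName, aliasName,
    dnsName, ipAddress, macAddress, os, make, model, serialNumber.
    Assumption: simple case-insensitive equality; the platform may also
    support pattern matching."""
    for key, wanted in filters.items():
        value = resource.get(key)
        if value is None or str(value).lower() != str(wanted).lower():
            return False
    return True
```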
Supported Alert Custom Macros
Customize the alert subject and description with the below macros; alerts are then generated based on the customization.
Supported macros keys:
${resource.name}
${resource.ip}
${resource.mac}
${resource.aliasname}
${resource.os}
${resource.type}
${resource.dnsname}
${resource.alternateip}
${resource.make}
${resource.model}
${resource.serialnumber}
${resource.systemId}
${Custom Attributes in the resource}
${parent.resource.name}
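The macro expansion itself is performed by the OpsRamp platform when the alert is generated; the sketch below only illustrates the `${...}` substitution behavior, with unknown macros left untouched:

```python
import re

def expand_macros(text: str, values: dict) -> str:
    """Replace ${...} macros in an alert subject/description with the
    supplied values. Unknown macros are left as-is. Illustrative only --
    the real substitution happens inside the OpsRamp platform."""
    return re.sub(r"\$\{([^}]+)\}",
                  lambda m: str(values.get(m.group(1), m.group(0))),
                  text)
```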
Risks, Limitations & Assumptions
- The application can handle Critical/Recovery failure notifications for the below two cases when the user enables App Failure Notifications in the configuration:
- Connectivity Exception
- Authentication Exception
- The application will not send any duplicate/repeat failure alert notification until the already existing critical alert is recovered.
- Metrics are used to monitor the resources and to generate alerts when the threshold values are breached.
- The application cannot control monitoring pause/resume actions based on the above alerts.
- This application supports both Classic Gateway and NextGen Gateway.
- Activity logs are not supported.
- The Template Applied Time will only be displayed if the collector profile (Classic and NextGen Gateway) is version 18.1.0 or higher.
- The minimum supported version for the option to get the latest snapshot metric is NextGen Gateway 14.0.0.
- Alerts are generated for the metrics that are configured as per the defined integration.