Hello, we monitor a Synology DS218+ and everything works fine. The web interface of the NAS shows a warning message for Disk 1 because of too many bad sectors. The problem is that in PRTG the state of the drive shows OK. We would like to propagate the warning state to our PRTG server. With the Synology System Health sensor, the Physical Disk sensor, the Synology MIBs from PRTG, and the MIB directly from Synology we get the same results. Is there a way to show the warning state in PRTG?
Regards
Marco Riechen
Article Comments
Hello,
Thank you for your messages.
Can you please tell us which version of DSM you are using? Please execute a walk on the OID "1.3.6.1.4.1.6574.2.1.1.5" with SNMP Tester (from the PRTG server) against your Synology device, then provide us the output so we can check the value that PRTG receives.
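If it is easier, the same walk can also be scripted instead of using the SNMP Tester GUI. Here is a minimal sketch with the Python pysnmp library; the host address and community string are placeholders you would need to replace, and SNMP v2c is assumed to be enabled on the DiskStation:

# Minimal sketch: walk the Synology diskStatus subtree with pysnmp.
# HOST and COMMUNITY are placeholders; SNMP v2c is assumed (mpModel=1).
from pysnmp.hlapi import (
    SnmpEngine, CommunityData, UdpTransportTarget, ContextData,
    ObjectType, ObjectIdentity, nextCmd,
)

HOST = '192.168.1.10'     # placeholder: address of the Synology NAS
COMMUNITY = 'public'      # placeholder: SNMP community string

for error_indication, error_status, _, var_binds in nextCmd(
        SnmpEngine(),
        CommunityData(COMMUNITY, mpModel=1),
        UdpTransportTarget((HOST, 161)),
        ContextData(),
        ObjectType(ObjectIdentity('1.3.6.1.4.1.6574.2.1.1.5')),
        lexicographicMode=False):              # stay inside the subtree
    if error_indication or error_status:
        print(error_indication or error_status.prettyPrint())
        break
    for name, value in var_binds:
        print(f'{name.prettyPrint()} = {value.prettyPrint()}')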
Regards.
Jan, 2021 - Permalink
I have exactly the same problem - and I have a NAS with a failing HDD for testing. Screenshot: https://www.dropbox.com/sh/1jn42d6ha90sb37/AADojy7afqXGTqb_zUxnl6hta?dl=0
Results of walk:
Paessler SNMP Tester - 20.2.4 Computername: PRTG-GRIFFIN Interface: xxx.xxx.xxx.xxx
09/03/2021 18:30:38 (1 ms) : Device: xxx.xxx.xxx.yyy
09/03/2021 18:30:38 (4 ms) : SNMP v2c
09/03/2021 18:30:38 (6 ms) : Custom OID 1.3.6.1.4.1.6574.2.1.1.5
09/03/2021 18:30:38 (11 ms) : SNMP Datatype: SNMP_EXCEPTION_NOSUCHINSTANCE
09/03/2021 18:30:38 (13 ms) : -------
09/03/2021 18:30:38 (15 ms) : Value: #N SNMP_EXCEPTION_NOSUCHINSTANCE223
09/03/2021 18:30:38 (17 ms) : Done
Paessler SNMP Tester - 20.2.4 Computername: PRTG-GRIFFIN Interface:
09/03/2021 18:39:06 (1 ms) : Device:
09/03/2021 18:39:06 (3 ms) : SNMP v2c
09/03/2021 18:39:06 (5 ms) : Walk 1.3.6.1.4.1.6574.2.1.1.5
09/03/2021 18:39:06 (53 ms) : 1.3.6.1.4.1.6574.2.1.1.5.0 = "1" [ASN_INTEGER]
09/03/2021 18:39:06 (56 ms) : 1.3.6.1.4.1.6574.2.1.1.5.1 = "1" [ASN_INTEGER]
09/03/2021 18:39:06 (59 ms) : 1.3.6.1.4.1.6574.2.1.1.5.2 = "1" [ASN_INTEGER]
09/03/2021 18:39:06 (62 ms) : 1.3.6.1.4.1.6574.2.1.1.5.3 = "1" [ASN_INTEGER]
09/03/2021 18:39:06 (65 ms) : 1.3.6.1.4.1.6574.2.1.1.5.4 = "1" [ASN_INTEGER]
09/03/2021 18:39:06 (68 ms) : 1.3.6.1.4.1.6574.2.1.1.5.5 = "1" [ASN_INTEGER]
09/03/2021 18:39:06 (70 ms) : 1.3.6.1.4.1.6574.2.1.1.5.6 = "1" [ASN_INTEGER]
09/03/2021 18:39:06 (74 ms) : 1.3.6.1.4.1.6574.2.1.1.5.7 = "1" [ASN_INTEGER]
09/03/2021 18:39:06 (77 ms) : 1.3.6.1.4.1.6574.2.1.1.5.8 = "1" [ASN_INTEGER]
09/03/2021 18:39:06 (80 ms) : 1.3.6.1.4.1.6574.2.1.1.5.9 = "1" [ASN_INTEGER]
Yet another reply. I have a Synology with a "failing disk". Walking the MIB and looking at the docs in https://global.download.synology.com/download/Document/Software/DeveloperGuide/Firmware/DSM/All/enu/Synology_DiskStation_MIB_Guide.pdf, I get the following results.
1.3.6.1.4.1.6574 = Base MIB
Results: System MIB - this is the system status
1.3.6.1.4.1.6574.1.1.0 = "1" [ASN_INTEGER] - Status Normal
Results: Disk MIB (for the specific disk)
09/03/2021 19:00:23 (267 ms) : 1.3.6.1.4.1.6574.2.1.1.3.4 = "WD20EFRX-68EUZN0 " [ASN_OCTET_STR]
09/03/2021 19:00:23 (396 ms) : 1.3.6.1.4.1.6574.2.1.1.5.4 = "1" [ASN_INTEGER]
All statuses show as 1 = Functioning Normally.
Results: RAID MIB
09/03/2021 19:03:02 (11 ms) : 1.3.6.1.4.1.6574.3.1.1.1.0 = "0" [ASN_INTEGER]
09/03/2021 19:03:02 (14 ms) : 1.3.6.1.4.1.6574.3.1.1.2.0 = "Volume 1" [ASN_OCTET_STR]
09/03/2021 19:03:02 (17 ms) : 1.3.6.1.4.1.6574.3.1.1.3.0 = "1" [ASN_INTEGER]
RAID status shows as 1 = normal; I think this should be showing as 11 = degrade ("Degrade is shown when a tolerable failure of disk(s) occurs").
The question is whether this is a bug or by design. The RAID is functioning. The disk is functioning - it just has a "this disk is failing, please back up your data" warning, etc. Not sure how PRTG can deal with this: if the Synology SNMP agent doesn't report it, then PRTG can't report it.
Mar, 2021 - Permalink
Hello,
Thank you for your message.
Regarding the status of the RAID, the possible values are documented in the Synology MIB documentation (raidStatus, OID 1.3.6.1.4.1.6574.3.1.1.3); see the mapping in the sketch below.
Then, regarding the disks, the OID to use is 1.3.6.1.4.1.6574.2 (table). It returns a lot of information, including disk model, type, and status, and you can use it with the SNMP Custom Table sensor.
The possible disk status values (diskStatus, OID 1.3.6.1.4.1.6574.2.1.1.5) are documented there as well.
Therefore, if the Synology doesn't return a value different from 1, PRTG can't trigger an alert. Please make sure you are using the latest version of DSM and check whether the issue is still there. If it is, contact Synology support.
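For reference, here is a rough Python sketch of how those numeric codes can be read and translated outside of PRTG, with the status mappings taken from the Synology DiskStation MIB Guide linked earlier in this thread. The host and community string are placeholders:

# Rough sketch: read diskStatus and raidStatus and translate the codes.
# The mappings below are the ones documented in the Synology DiskStation
# MIB Guide; HOST and COMMUNITY are placeholders.
from pysnmp.hlapi import (
    SnmpEngine, CommunityData, UdpTransportTarget, ContextData,
    ObjectType, ObjectIdentity, nextCmd,
)

DISK_STATUS = {   # diskStatus, 1.3.6.1.4.1.6574.2.1.1.5
    1: 'Normal', 2: 'Initialized', 3: 'NotInitialized',
    4: 'SystemPartitionFailed', 5: 'Crashed',
}
RAID_STATUS = {   # raidStatus, 1.3.6.1.4.1.6574.3.1.1.3
    1: 'Normal', 2: 'Repairing', 3: 'Migrating', 4: 'Expanding',
    5: 'Deleting', 6: 'Creating', 7: 'RaidSyncing', 8: 'RaidParityChecking',
    9: 'RaidAssembling', 10: 'Canceling', 11: 'Degrade', 12: 'Crashed',
}

def walk(host, community, oid):
    """Yield (oid, integer value) pairs for one SNMP subtree via v2c."""
    for err, status, _, var_binds in nextCmd(
            SnmpEngine(), CommunityData(community, mpModel=1),
            UdpTransportTarget((host, 161)), ContextData(),
            ObjectType(ObjectIdentity(oid)), lexicographicMode=False):
        if err or status:
            break
        for name, value in var_binds:
            yield name.prettyPrint(), int(value)

HOST, COMMUNITY = '192.168.1.10', 'public'   # placeholders

for oid, code in walk(HOST, COMMUNITY, '1.3.6.1.4.1.6574.2.1.1.5'):
    print(f'{oid}: disk status {code} = {DISK_STATUS.get(code, "unknown")}')
for oid, code in walk(HOST, COMMUNITY, '1.3.6.1.4.1.6574.3.1.1.3'):
    print(f'{oid}: RAID status {code} = {RAID_STATUS.get(code, "unknown")}')

In PRTG itself, the SNMP Custom Table sensor delivers the same raw integers, so channel limits can be set there to raise a warning for any value other than 1.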
Regards.
Mar, 2021 - Permalink
Could you then please include 1.3.6.1.4.1.6574.2.1.1.9 "diskBadSector" in the "Synology Physical Disk" sensor?
Sep, 2022 - Permalink
Hello Klaus,
Thank you for your message.
According to Synology's documentation, there are indeed new OIDs implemented in DSM 7.0 and above, including diskBadSector. I will inform our development team about it so they can improve the sensor when they rewrite it (to make it compatible with the new multi-platform probe).
Please note that it might take a while until the new version gets released, so in the meantime I invite you to use an SNMP Custom sensor with the corresponding OID ("1.3.6.1.4.1.6574.2.1.1.9").
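If you would rather have a single scripted check across all disks instead of one SNMP Custom sensor per disk index, a rough sketch could look like the one below. diskBadSector is only exposed on DSM 7.0 and above, and the host, community string, and threshold are placeholders to adapt; the exit code is just a generic warning signal for whatever wraps the script:

# Rough sketch: walk diskBadSector (1.3.6.1.4.1.6574.2.1.1.9, DSM 7.0+) and
# flag any disk whose bad-sector count exceeds a threshold.
# HOST, COMMUNITY and THRESHOLD are placeholders.
import sys
from pysnmp.hlapi import (
    SnmpEngine, CommunityData, UdpTransportTarget, ContextData,
    ObjectType, ObjectIdentity, nextCmd,
)

HOST, COMMUNITY, THRESHOLD = '192.168.1.10', 'public', 50   # placeholders

worst = 0
for err, status, _, var_binds in nextCmd(
        SnmpEngine(), CommunityData(COMMUNITY, mpModel=1),
        UdpTransportTarget((HOST, 161)), ContextData(),
        ObjectType(ObjectIdentity('1.3.6.1.4.1.6574.2.1.1.9')),
        lexicographicMode=False):
    if err or status:
        print(f'SNMP error: {err or status.prettyPrint()}')
        sys.exit(2)
    for name, value in var_binds:
        count = int(value)
        worst = max(worst, count)
        print(f'{name.prettyPrint()} bad sectors: {count}')

# Exit 1 as a "warning" signal when any disk exceeds the threshold, else 0.
sys.exit(1 if worst > THRESHOLD else 0)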
Regards.
Oct, 2022 - Permalink
Hi there,
same problem here with the Synology RS2416RP+. Over 700 bad sectors and a Warning state on the Synology for Disk 10, but neither the Synology Physical Disk sensor nor the System Health sensor shows a Warning at all.
I would also like to know if there is a possibility to get that status over to the PRTG system.
Best regards
Leonard Barth
Jan, 2021 - Permalink