Understanding Netdata: bad HDD or just RAID5 redundancy drive?

I am very new to NAS… I’m hoping someone can help me interpret this Netdata info.

(The breaks in the graph are where I restarted the Cube and tested swapping 2 of the drives.)

Is this warning/failure a true HDD warning?

Or is it simply picking up the 1 RAID5 redundancy drive?

I’m trying to understand whether there’s a serious issue here or not. One thing that concerns me: the first night I had the Cube with the RAID set up, by the next morning I couldn’t connect to the Cube, and the RAID had become “broken.” Only the 6th drive was lit up green while the others were offline.

When I deleted the RAID and rebuilt it and investigated Netdata for the first time, I started seeing the warnings. (I hadn’t checked Netdata before the broken RAID.)

I don’t think the Backlog messages are an issue.

When I created RAID0 or RAID1, there were still Backlog warnings, but I didn’t notice the red ‘failure’. If RAID1 takes half the drives for redundancy, shouldn’t I have seen 3 ‘failures’ if that’s how Netdata is interpreting the RAID?

Does anyone else have a RAID5 to see how it shows up in Netdata?

Are there any better SMART apps I can install to verify everything is in order?
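A related thing I’m wondering: if I can SSH into the Cube and it has smartmontools installed (just an assumption about the firmware, I haven’t checked), would reading smartctl directly be the way to go? As a sketch, these are the attributes I’d look at, using a made-up output line since I don’t have real output handy:

```shell
# Made-up smartctl -A output line for illustration; on a real disk you'd run:
#   smartctl -A /dev/sda   (requires smartmontools; device name is an assumption)
attr='  5 Reallocated_Sector_Ct   0x0033   100   100   010    Pre-fail  Always       -       0'

# The last column is the raw count; anything above 0 on Reallocated_Sector_Ct,
# Current_Pending_Sector, or Offline_Uncorrectable points at a genuinely bad disk.
echo "$attr" | awk '{print $2, $NF}'
```

If those raw counts are 0 across all six drives, that would line up with CrystalDisk saying they were fine.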

If there is a failed drive (I just ordered them), I don’t know how to interpret which drive is the issue … unless it’s #6 from when the RAID ‘broke’.


I just saw this thread where you mention a software issue making RAIDs disappear…

Is that issue related to my experience where, the next day, the RAID was “broken”? I’m not sure whether the Cube turned off or not. I don’t believe so, as I didn’t turn it off, and it was still on when I checked it physically. I just couldn’t log in to the Dashboard. (I have already updated to 1.2.1.)

This does not encourage confidence in me for transferring all of my data to the Cube. I need to be able to know the RAID isn’t going to break … I need to understand/check to make sure an HDD isn’t bad … I need to mount the Cube to Finder like a normal drive… It just seems like “things aren’t ready.”


That issue was fixed in the recent update; the RAID now stays intact after a restart or shutdown. The only other issue I’m having is with file permissions: the RAID and USB drives appear to be read-only when accessed through Windows Explorer, or any file explorer for that matter (Linux, Android, etc.). Another example: qBittorrent won’t save to those drives, only the M.2 drives…

Yeah, I saw the update. I’m just hoping, fingers crossed, that the “RAID break” I experienced was truly that bug, and not a physical HDD issue that I’m unable to pinpoint. (I pre-tested all HDDs on a PC with CrystalDisk and they seemed fine…) I don’t think there was a restart when the RAID ‘broke’.

As for your issue, I also had that issue. See ETWang’s workaround + my response. I got it to work.
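For the read-only behavior, one other thing that might narrow it down (assuming you can get a shell on the Cube; the device and path names below are made up) is checking whether the array is actually mounted read-only, versus a permissions problem in the share layer:

```shell
# Made-up mount output line for illustration; on the Cube you'd run something like:
#   mount | grep md       (device/path names here are assumptions)
m='/dev/md0 on /mnt/raid type ext4 (ro,relatime)'

# "ro" in the options means the filesystem itself is mounted read-only;
# if it says "rw", the read-only behavior is coming from the share/permission layer.
echo "$m" | grep -oE '\((ro|rw)' | tr -d '('
```

In my case it was the share layer, which is why the workaround fixed it.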

I did some tests with my six drives.

I created a RAID5 out of the first three while the last three were unplugged. Netdata showed “1 failed device.”

Then I created a RAID5 out of the last three drives while the first three were unplugged. Netdata again showed “1 failed device.”

Then I recreated my 6-drive RAID5. Netdata again said “1 failed device.” Not 2.

So from what I’m understanding, the Netdata “failed device” is not a failed HDD, but rather the redundancy/parity drive that’s being marked as failed.
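One way to double-check that interpretation, assuming you can SSH into the Cube, would be /proc/mdstat — the kernel’s own view of the array, which is what Netdata’s md charts are reading. Here’s a sketch of what I’d expect a healthy 6-disk RAID5 line to look like (the layout is my assumption, not actual Cube output):

```shell
# Assumed layout of a healthy 6-disk RAID5 entry in /proc/mdstat
# (not actual Cube output); on the device you'd run: cat /proc/mdstat
line='md0 : active raid5 sdf1[5] sde1[4] sdd1[3] sdc1[2] sdb1[1] sda1[0]
      9767278080 blocks super 1.2 level 5, 512k chunk, algorithm 2 [6/6] [UUUUUU]'

# [6/6] = 6 devices configured, 6 active; [UUUUUU] = every member is Up.
# A genuinely failed member would show [6/5] with an underscore, e.g. [UUUU_U].
echo "$line" | grep -oE '\[[0-9]+/[0-9]+\] \[[U_]+\]'
```

If mdstat shows all devices up while Netdata still reports “1 failed device,” that would confirm it’s Netdata’s accounting, not a dead drive.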

That’s a relief I guess. I don’t know why Netdata isn’t more accurate.