Drive failure(s) in Linux mdadm RAID array. Help!
It looks like my system is suffering from some sort of catastrophic failure, and I'm panicking at the moment; I'm not sure what to do.
I had a 3-drive RAID 10 array. I noticed this morning that I was having trouble accessing the array (all my photos are on there). I checked mdadm and it said that one drive had been dropped from the array (drive 2). I think this may have been because the computer was shut down accidentally (there was a blackout), and the drive was kicked as a result.
I tried adding the drive back, and that worked. Then I checked the progress of the rebuild in mdstat, and it is rebuilding at 10 kB/s. Yes, really: it would take years to rebuild a 3 TB drive at that speed. I checked dmesg, and I was getting a bunch of I/O errors for drive 3. It seems that drive 3 has some kind of hardware failure. I checked the drive's health with the Disks tool (I think it's from GNOME), and it reported almost 6000 bad sectors, but apart from that it said the drive was OK.
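That isn't hyperbole; a quick sanity check (assuming decimal units for TB and kB):

```shell
# How long does rebuilding 3 TB take at 10 kB/s?
bytes=$((3 * 1000 * 1000 * 1000 * 1000))   # 3 TB
rate=$((10 * 1000))                        # 10 kB/s
seconds=$((bytes / rate))
echo "$((seconds / (365 * 24 * 3600))) years"   # prints "9 years"
```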
So now I'm panicking, thinking that drive 2 was actually still good, and now that I've re-added it, it's resyncing, probably destroying the good data it had. I then tried turning off the computer (I read it was safe to do so even while mdstat shows a resync).
Unfortunately, the shutdown is not completing. Pressing Esc showed me the console, and it's displaying a stream of "print_req_error: I/O error, dev sdd, sector ..." errors. I don't know what to do. Just wait? It's been going on like this for 30 minutes.
Any advice?
raid disk mdadm system-failure
Power-off the entire system NOW until you get some (good) advice. Don't try to shutdown cleanly. Just pull the plug.
– roaima
8 hours ago
Do you have a backup of your data?
– roaima
8 hours ago
Ok, just pulled the plug. I only have old backups (about 2 years old). I moved houses, and never got around to rebacking everything up. The irony is that I was actually looking to do this now. I thought that I was relatively safe since I'd notice if one drive failed and I could deal with it.
– user361233
8 hours ago
OK. Please edit your question to explain how you're running RAID 10 with (only) three drives.
– roaima
8 hours ago
I'm about to head to bed so I can't follow this through to its conclusion, but one possible option is to boot a system rescue disk with only one disk of your RAID array installed. DO NOT let it try to start your array. Then carefully, using something like `mount -o ro,noload /dev/...`,
mount each partition's filesystem in turn. DON'T START THE RAID - I'm referring to the filesystem inside the RAID that you're never supposed to see. Find out which disk has the most complete set of filesystems (i.e. boot three times, once for each disk)...
– roaima
8 hours ago
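For later readers, the procedure from this comment sketched as a script. It's a dry run (commands are printed, not executed), the device names are placeholders, and `noload` applies to ext3/ext4 journals; adjust for your filesystem:

```shell
# Dry-run sketch of the rescue-disk inspection procedure above.
run() { echo "+ $*"; }   # replace the body with "$@" (as root) to actually run

# Boot a rescue disk with only ONE array member attached and the array
# NOT assembled, then mount each partition's filesystem read-only:
for part in /dev/sdX1 /dev/sdX2; do      # placeholders for the real partitions
    mnt="/mnt/inspect-$(basename "$part")"
    run mkdir -p "$mnt"
    # ro = read-only; noload = skip ext3/ext4 journal replay, so nothing
    # is written to the member while you look at it
    run mount -o ro,noload "$part" "$mnt"
done
```

Repeat once per disk and note which member carries the most complete set of filesystems.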
asked 8 hours ago by user361233 (new contributor); edited 1 hour ago by frostschutz
1 Answer
It's difficult to answer your question, and this is too long for a comment, so here are some general pointers.
So now I'm panicking, thinking that drive 2 was actually still good, and now that I've re-added it, it's resyncing, probably destroying the good data it had.
Unless there is a kernel bug, re-adding a disk (in the same role and at the same offset it had before) does not "destroy" data. It just re-writes most of the same data that was already there; no harm done.
- the role might change if more than one drive was missing from the array
- the offset usually only changes if you add `sdx` when it was `sdx1` before
- if you are very unlucky, the offset might also change if the array was in a weird state before
The main problem with a kicked drive, even if the drive itself was innocent, is that it's no longer part of the array. As soon as the array is mounted read-write, data on the array is modified, and the data on the kicked drive is not updated along with it, so it becomes outdated and, as such, is no longer "good".
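One way to gauge how far behind a kicked member is, is to compare the md `Events` counter across members. A sketch, assuming the usual `Events : N` line in `mdadm --examine` output; the demo parses a captured sample line rather than touching real disks:

```shell
# Extract the Events counter from `mdadm --examine` output on stdin.
events_of() { sed -n 's/^ *Events *: *\([0-9][0-9]*\).*/\1/p'; }

# Real usage (as root), once per member:
#   mdadm --examine /dev/sdb1 | events_of
# A kicked drive shows a lower count than members that kept running.

# Demo on a sample line (not from a real array):
printf '          Events : 15467\n' | events_of   # prints 15467
```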
I checked the drive's health with the Disks tool (I think it's from GNOME), and it reported almost 6000 bad sectors, but apart from that it said the drive was OK.
You can't do data recovery on drives that have issues. If those 6000 bad sectors didn't appear overnight, you should have replaced that drive a long time ago. RAID arrays die if you don't self-test, monitor, and promptly replace any drives that are going bad.
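That upkeep usually comes down to a handful of commands. A dry-run sketch (the wrapper only prints; drop it, and run as root, to do it for real; `sdX` and `md0` are placeholders):

```shell
run() { echo "+ $*"; }   # print only

# Periodic SMART self-tests, and reading the results:
run smartctl -t long /dev/sdX    # start an extended self-test
run smartctl -a /dev/sdX         # check reallocated/pending sector counts

# Ask md to scrub the array for inconsistencies:
run sh -c 'echo check > /sys/block/md0/md/sync_action'

# Get mailed when a member is kicked, instead of noticing months later:
run mdadm --monitor --scan --daemonise --mail root@localhost
```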
Get new drives, use `ddrescue` to copy what you can from the old drives, then use copy-on-write overlays for data-recovery experiments. With overlays you can write without modifying the original, so you don't have to re-do the disk copy and don't need a copy of the copy either. But overlays, too, require drives that work; you can't use them with drives that produce errors.
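That workflow can be sketched end to end. A dry run (commands printed, not executed); the device names and size are placeholders, and the overlay uses the device-mapper snapshot target, a common recipe for md recovery experiments:

```shell
run() { echo "+ $*"; }   # print only; remove the wrapper to execute (as root)

# 1. Image the failing member onto a new, healthy drive. The mapfile lets
#    ddrescue resume and retry bad areas without re-reading good ones.
run ddrescue -f /dev/old_disk /dev/new_disk rescue.map

# 2. Put a copy-on-write overlay over the *copy*, so experiments write to
#    a scratch file instead of the rescued image:
size=5860533168   # placeholder; sectors, from: blockdev --getsz /dev/new_disk
run truncate -s 10G overlay.img
run losetup /dev/loop1 overlay.img
run dmsetup create rescued-overlay \
    --table "0 $size snapshot /dev/new_disk /dev/loop1 P 8"

# 3. Point mdadm / fsck / mount at /dev/mapper/rescued-overlay only,
#    never at the originals.
```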
answered 1 hour ago by frostschutz; edited 1 hour ago