Dmesg full of I/O errors, SMART OK, four disks affected
I'm working on a remote server (Dell PowerEdge) that is a new install. It has four 2 TB mechanical drives and two 250 GB SSDs. One SSD contains the OS (RHEL 7), and the four mechanical disks will eventually hold an Oracle database.
Trying to create a software RAID array leads to the disks constantly being marked as faulty. Checking dmesg shows a slew of errors like the following:
[127491.711407] blk_update_request: I/O error, dev sde, sector 3907026080
[127491.719699] sd 0:0:4:0: [sde] FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
[127491.719717] sd 0:0:4:0: [sde] Sense Key : Aborted Command [current]
[127491.719726] sd 0:0:4:0: [sde] Add. Sense: Logical block guard check failed
[127491.719734] sd 0:0:4:0: [sde] CDB: Read(32)
[127491.719742] sd 0:0:4:0: [sde] CDB[00]: 7f 00 00 00 00 00 00 18 00 09 20 00 00 00 00 00
[127491.719750] sd 0:0:4:0: [sde] CDB[10]: e8 e0 7c a0 e8 e0 7c a0 00 00 00 00 00 00 00 08
[127491.719757] blk_update_request: I/O error, dev sde, sector 3907026080
[127491.719764] Buffer I/O error on dev sde, logical block 488378260, async page read
[127497.440222] sd 0:0:5:0: [sdf] FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
[127497.440240] sd 0:0:5:0: [sdf] Sense Key : Aborted Command [current]
[127497.440249] sd 0:0:5:0: [sdf] Add. Sense: Logical block guard check failed
[127497.440258] sd 0:0:5:0: [sdf] CDB: Read(32)
[127497.440266] sd 0:0:5:0: [sdf] CDB[00]: 7f 00 00 00 00 00 00 18 00 09 20 00 00 00 00 00
[127497.440273] sd 0:0:5:0: [sdf] CDB[10]: 00 01 a0 00 00 01 a0 00 00 00 00 00 00 00 00 08
[127497.440280] blk_update_request: I/O error, dev sdf, sector 106496
[127497.901432] sd 0:0:5:0: [sdf] FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
[127497.901449] sd 0:0:5:0: [sdf] Sense Key : Aborted Command [current]
[127497.901458] sd 0:0:5:0: [sdf] Add. Sense: Logical block guard check failed
[127497.901467] sd 0:0:5:0: [sdf] CDB: Read(32)
[127497.901475] sd 0:0:5:0: [sdf] CDB[00]: 7f 00 00 00 00 00 00 18 00 09 20 00 00 00 00 00
[127497.901482] sd 0:0:5:0: [sdf] CDB[10]: e8 e0 7c a0 e8 e0 7c a0 00 00 00 00 00 00 00 08
[127497.901489] blk_update_request: I/O error, dev sdf, sector 3907026080
[127497.911003] sd 0:0:5:0: [sdf] FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
[127497.911019] sd 0:0:5:0: [sdf] Sense Key : Aborted Command [current]
[127497.911029] sd 0:0:5:0: [sdf] Add. Sense: Logical block guard check failed
[127497.911037] sd 0:0:5:0: [sdf] CDB: Read(32)
[127497.911045] sd 0:0:5:0: [sdf] CDB[00]: 7f 00 00 00 00 00 00 18 00 09 20 00 00 00 00 00
[127497.911052] sd 0:0:5:0: [sdf] CDB[10]: e8 e0 7c a0 e8 e0 7c a0 00 00 00 00 00 00 00 08
[127497.911059] blk_update_request: I/O error, dev sdf, sector 3907026080
[127497.911067] Buffer I/O error on dev sdf, logical block 488378260, async page read
These errors occur on all four mechanical disks (sdc/sdd/sde/sdf). smartctl passes all four disks on both the short and long self-tests. I'm currently running badblocks (write-mode test, ~35 hours in, probably another 35 to go).
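For reference, the SMART and badblocks runs were roughly along these lines (the exact flags may have differed, and the badblocks write-mode test is destructive):

# SMART self-tests, repeated for each of sdc/sdd/sde/sdf; results reviewed afterwards
smartctl -t short /dev/sdc
smartctl -t long /dev/sdc
smartctl -a /dev/sdc

# Destructive write-mode surface scan with progress output
badblocks -wsv /dev/sdc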
These are the causes I've suspected/considered after some research:
- Failed HDDs - it seems unlikely that four "refurbished" disks would all be DOA, doesn't it?
- Storage controller issue (bad cable?) - wouldn't that affect the SSDs too? (A quick way to see which controller each device sits on is shown below.)
- Kernel issue - the only change to the stock kernel was the addition of kmod-oracleasm. I really don't see how it would cause these faults; ASM isn't set up at all.
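To see which SCSI host/port each device (including the SSDs) sits on, something like the following should do; lsscsi may need to be installed, and lsblk -S gives the same information:

lsscsi                                    # host:channel:target:lun, type, vendor/model, device node
lsblk -S -o NAME,HCTL,TYPE,VENDOR,MODEL   # equivalent listing via util-linux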
Another noteworthy event: when trying to zero the disks (part of early troubleshooting), the command $ dd if=/dev/zero of=/dev/sdX yielded these errors:
dd: writing to ‘/dev/sdc’: Input/output error
106497+0 records in
106496+0 records out
54525952 bytes (55 MB) copied, 1.70583 s, 32.0 MB/s
dd: writing to ‘/dev/sdd’: Input/output error
106497+0 records in
106496+0 records out
54525952 bytes (55 MB) copied, 1.70417 s, 32.0 MB/s
dd: writing to ‘/dev/sde’: Input/output error
106497+0 records in
106496+0 records out
54525952 bytes (55 MB) copied, 1.71813 s, 31.7 MB/s
dd: writing to ‘/dev/sdf’: Input/output error
106497+0 records in
106496+0 records out
54525952 bytes (55 MB) copied, 1.71157 s, 31.9 MB/s
If anyone here could share some insight into what might be causing this, I'd be grateful. I'm inclined to follow Occam's razor and go straight for the HDDs; the only doubt stems from the unlikelihood of four HDDs failing out of the box.
I will be driving to the site tomorrow for a physical inspection and to report my assessment of this machine to the higher-ups. If there's something I should physically inspect (beyond cables/connections/power supply), please let me know.
Thanks.
Tags: redhat, hard-drive, io
1 Answer
Your dd tests show all four disks failing at the same LBA address. As it is extremely improbable that four disks would all fail at exactly the same location, I strongly suspect a controller or cabling issue.
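A rough cross-check of the dd numbers (assuming dd's default 512-byte block size and 512-byte logical sectors):

106496 records x 512 bytes/record = 54,525,952 bytes written before the error
54,525,952 bytes / 512 bytes/sector = LBA 106496

That matches the "blk_update_request: I/O error, dev sdf, sector 106496" line in dmesg, and the identical byte counts in the dd output show every drive stopping at the same offset.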
Okay, thanks. This was actually one of the things that made me suspect a controller fault. Wouldn't that affect the SSDs too?
– Scu11y
13 hours ago
It's difficult to tell without further testing. Anyway, the first thing I would check/replace is the cables attaching the controller to the backplane.
– shodanshok
13 hours ago
Sounds good. Would testing the cables for continuity with a multimeter yield any meaningful data? I'd like to give the higher-ups a definitive answer/fix tomorrow.
– Scu11y
13 hours ago
Also, I'm going to accept your answer; it makes sense and I appreciate your time. If anyone else has anything they think is worth checking (either hardware or software), I'd still love to hear it. Thanks again to this community for being a reliable source of knowledgeable second opinions. :)
– Scu11y
13 hours ago
High data-rate cables, such as 6/12 Gb/s SATA/SAS ones, are not only about electrical continuity but mainly about signal integrity and low noise. Try physically cleaning the connectors and reseating the cables. If the error persists, try replacing them and, finally, try a different controller.
– shodanshok
12 hours ago
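A software-side check that can help corroborate a cabling or signal-integrity problem, assuming the controller passes the drives' PHY event logs through to smartctl:

smartctl -l sataphy /dev/sdc   # SATA PHY event counters; CRC/disparity errors point at cables or connectors
smartctl -l sasphy /dev/sdc    # equivalent log for SAS drives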