Why is ddrescue slow when it could be faster on error-free areas?
This question addresses the first pass of ddrescue on the device to be rescued. I had to rescue a 1.5 TB hard disk. The command I used is:
# ddrescue /dev/sdc1 my-part-img my-part-map
When the rescue is started (with no optional parameters) on a good area of the disk, the read rate ("current rate") stays around 18 MB/s. It occasionally slows a bit, but then comes back to this speed. However, when it encounters a bad area of the disk, it may slow down significantly, and then it never comes back to 18 MB/s, but stays around 3 MB/s, even after reading 50 GB of good disk with no problem.
The strange part is that, when it is scanning a good disk area at 3 MB/s, if I stop ddrescue and restart it, it restarts at the higher reading rate of 18 MB/s. I actually saved about 2 days by stopping and restarting ddrescue whenever it was going at 3 MB/s, which I had to do 8 times to finish the first pass.
My question is: why does ddrescue not try to go back to the highest speed on its own? Given the policy, explicitly stated in the documentation, of doing the easy areas first and fast, that is what should be done, and the behavior I observed seems to me to be a bug.
I have been wondering whether this can be addressed with the option -a or --min-read-rate=…, but the manual is so terse that I was not sure. Besides, I do not understand on what basis one should choose a read rate for this option. Should it be the above 18 MB/s?
Still, even with an option to specify it, I am surprised this is not done by default.
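For concreteness, a first pass with this option might be invoked as sketched below; the 2M threshold is a guess (roughly a tenth of the 18 MB/s healthy rate), not a documented recommendation, and the command is shown rather than run since it reads a real device:

```shell
# Sketch of a first pass with a minimum read rate; -n (--no-scrape)
# defers the slow scraping phase, and the map file allows stop/resume.
# The command is echoed, not executed, because it reads a real device.
CMD="ddrescue -n --min-read-rate=2M /dev/sdc1 my-part-img my-part-map"
echo "$CMD"
```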
Meta note
Two users have voted to close the question as primarily opinion-based. I would appreciate knowing in what sense it is.
I describe with some numerical precision the behavior of an important piece of software on an actual example, showing clearly that it does not meet a major design objective stated in its documentation (doing the easy parts as quickly as possible), and that very simple reasoning could improve that.
The software is well known, from a very trusted source, with precise algorithms, and I expect that most defects were weeded out long ago. So I am asking experts for a possible known reason for this unexpected behavior, not being an expert myself on this issue.
Furthermore, I ask whether one of the options of the software should be used to resolve the issue, which is an even more precise question. And I ask about a detailed aspect (how to choose the parameter for this option), since I did not find documentation for that.
I am asking for facts that I need for my work, not opinions. And I motivate it with experimental facts, not opinions.
hard-disk data-recovery gnu ddrescue
Did you try sdd from schilytools? Sdd is much older than ddrescue, so it learned from the time when disks had more problems (in the 1980s). Its read speed only depends on the error state of the source disk.
– schily, Aug 6 '18 at 7:27
Two users voted to close the question (not to mention one downvote), but without a word of explanation. I am not experienced with ddrescue and disk reading issues, but I did put some care and time into researching my problem and writing the question, which does differ from the many questions just asking "why is ddrescue so slow?". I would much appreciate a word of comment regarding the reason(s) for wanting to close the question.
– babou, Aug 6 '18 at 12:47
@schily From what I read on SourceForge, sdd is a replacement for dd, not for ddrescue. Also, not being very experienced on this issue, I must confess that I tend to stay with the tool for which I will more easily find help on the net. And I tend to trust GNU software in general. But I will look ... I assume you wrote it.
– babou, Aug 6 '18 at 12:59
Sdd has the needed properties inside. The main options to control that are -noerror and try=. I know that I repaired hundreds of disks with sdd, and I usually tend to distrust GNU software because I've seen too many problems in too many GNU programs, and a reported bug typically takes 20 years to fix.
– schily, Aug 6 '18 at 13:46
Not being competent to judge, I will not discuss software sources. Two things I like in ddrescue are the policy of doing the easy parts first, and more generally the map feature allowing stop and restart with changed parameters, or prioritizing a specific part of the disk. Would sdd do that? BTW, I love your avatar; it reminds me of the caricature of an Italian free-software activist baking his disk in a pizza oven.
– babou, Aug 6 '18 at 14:09
asked Aug 5 '18 at 21:55 by babou (edited Aug 6 '18 at 17:58)
2 Answers
I have been wondering whether this can be addressed with the option -a or --min-read-rate=…, but the manual is so terse that I was not sure. Besides, I do not understand on what basis one should choose a read rate for this option. Should it be the above 18 MB/s?
The --min-read-rate= option should help. Modern drives tend to spend a lot of time in their internal error checking, so while the rate slows down extremely, this isn't reported as an error condition.
even after reading 50 GB of good disk with no problem.
Which also means: you don't even know whether there are problems anymore. The drive might have a problem and decide not to report it.
Now, ddrescue supports using a dynamic --min-read-rate= value; from info ddrescue:
If BYTES is 0 (auto), the minimum read rate is recalculated every
second as (average_rate / 10).
But in my experience, the auto setting doesn't seem to help much. Once the drive gets stuck, especially if that happens right at the beginning, I guess the average_rate never stays high enough for it to be effective.
So in a first pass, when you want to grab as much data as possible, fast areas first, I just set it to average_rate / 10 manually, average_rate being what the drive's average rate would be if it were intact. So, for example, you can go with 10M here (for a drive that is supposed to go at ~100 MB/s) and then you can always go back and try your luck with the slow areas later.
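That rule of thumb (a tenth of the healthy rate) is simple enough to state as a formula; the function below is purely illustrative and not part of ddrescue:

```python
def min_read_rate(intact_rate_mb_s: float) -> float:
    """Suggested --min-read-rate threshold in MB/s: one tenth of the
    rate the drive would sustain if it were intact."""
    return intact_rate_mb_s / 10

print(min_read_rate(100))  # 10.0 -> --min-read-rate=10M, as suggested above
print(min_read_rate(18))   # 1.8  -> roughly 2M for the asker's 18 MB/s drive
```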
the behavior I observed seems to me to be a bug.
If you have a bug then you have to debug it. It's hard to reproduce without having the same kind of drive failure. It could just as well be the drive itself that is stuck in some recovery mode.
When dealing with defective drives, you also have to check dmesg for any odd things happening, such as bus resets and the like. Some controllers are also worse at dealing with failing drives than others.
Sometimes manual intervention just can't be avoided.
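A quick way to spot such events is to grep the kernel log for resets and I/O errors; the log lines below are made-up samples for illustration (on a live system you would pipe dmesg itself through the same filter):

```shell
# Count lines that look like bus resets or I/O errors in a sample log.
# The sample lines are illustrative, not from the asker's system.
log='ata3.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x6
ata3: hard resetting link
blk_update_request: I/O error, dev sdc, sector 123456'
printf '%s\n' "$log" | grep -icE 'reset|i/o error'
```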
Even then, I am surprised this is not done by default.
Most programs don't come with sane defaults. dd still uses a 512-byte block size by default, which is the "wrong" choice in most cases... What is considered sane might also change over time.
I am asking for facts that I need for my work, not opinions.
Having good backups is better than having to rely on ddrescue. Getting data off a failing drive is a matter of luck in the first place. Data recovery involves a lot of personal experience, and thus opinions.
Most recovery tools we have are also stupid. The tool does not have an AI that reports to a central server and says "Oh, I've seen this failure pattern on this particular drive model before, so let's change our strategy...". So this part has to be done by humans.
Quote: "It could just as well be the drive itself that is stuck in some recovery mode." It could also be S.M.A.R.T. running some periodic check.
– Pro Backup, Aug 6 '18 at 17:48
Much to think about, and you do provide some clarification of apparently weird behaviour. Still, it does not explain why ddrescue is not able to regain its former speed unless I stop and restart it. And now I suspect the --min-read-rate= option will not help with that, but only controls skipping. I will try to check that when I am finished rescuing the disk, which has lost 10 MB out of 1.5 TB. But I also lack experience in repairing file systems.
– babou, Aug 6 '18 at 21:59
@babou I don't have sufficient 'reputation' to upvote your question. I tried to and found that I cannot. I found your question after experiencing the exact behaviour you described. It is clear to me that the people who think your observations are 'opinion' are wrong, and wrong with mathematical certainty. They should be congratulated for achieving maximum wrongness. Your valid questions asking why this is an opinion have gone unanswered, which is a pity, as that is exactly the question I would have asked. My opinion (and it is more a guess) is that people are closed-minded about things they don't want to believe (for example, climate change denial) and simply discard new information that doesn't fit their model of the world. You have stated an observable fact. An apple falling down from a tree is now an opinion?
Sorry that my two cents' worth is much too late to be useful to you, but I thought that (a) I would like to let you know that you have been wronged (that is a fact, not an opinion), and (b) having seen exactly what you describe, I have something that remedies the problem, at least for me on this occasion. I used the -d switch on the command line, aka 'direct disc access'.
Now -d is a remedy (not necessarily a solution), but it is not an explanation. Sorry, I don't have that. All I know at this time is that I had your problem, and -d is a remedy, working pretty much how you described (sarcasm mode on) 'in your opinion' (sarcasm mode off) you would expect ddrescue to work. I am doing a disk recovery using system-rescue-cd, and even with the output file as /dev/null, I see the behaviour you reported.
BTW in my current problem, the 'current rate' drops from 100 MB/s to 60 MB/s, so the difference is significant. With -d, this saves more than one hour on a 1 TB disk.
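For reference, enabling direct access is just one more flag on the invocation from the question (file names follow the question's example; the command is shown rather than run, since it reads a real device):

```shell
# -d (--idirect) bypasses the kernel cache when reading the input device,
# which is one plausible mechanism for the slowdown described above.
# Echoed, not executed, because it reads a real device.
CMD="ddrescue -d /dev/sdc1 my-part-img my-part-map"
echo "$CMD"
```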
I have been wondering whether this can be dealt with with the option -a or --min-read-rate= ... but the manual is so terse that I was not sure. Besides, I do not understand on what basis one should choose a read rate for this option. Should it be the above 18 MB/s?
The --min-read-rate=
option should help. Modern drives tend to spend a lot of time in their internal error checking, so while the rate slows down extremely, this isn't reported as error condition.
even after reading 50 GB of good disk with no problem.
Which also means: you don't even know if there are problems anymore. The drive might have a problem, and decide to not report it.
Now, ddrescue
supports using a dynamic --min-read-rate=
value, from info ddrescue
:
If BYTES is 0 (auto), the minimum read rate is recalculated every
second as (average_rate / 10).
I have been wondering whether this can be dealt with using the option -a or --min-read-rate= ... but the manual is so terse that I was not sure. Besides, I do not understand on what basis one should choose a read rate for this option. Should it be the above 18 MB/s?
The --min-read-rate= option should help. Modern drives tend to spend a lot of time on internal error checking, so while the read rate slows down dramatically, this isn't reported as an error condition.

even after reading 50 GB of good disk with no problem.

Which also means: you no longer know whether there are any problems. The drive might have a problem and decide not to report it.
Now, ddrescue supports a dynamic --min-read-rate= value; from info ddrescue:
If BYTES is 0 (auto), the minimum read rate is recalculated every
second as (average_rate / 10).
But in my experience, the auto setting doesn't seem to help much. Once the drive gets stuck, especially if that happens right at the beginning, the average_rate never stays high enough for it to be effective.

So on a first pass, when you want to grab as much data as possible, fast areas first, I just set it manually to average_rate / 10, where average_rate is what the drive's average rate would be if it were intact.

For example, you can go with 10M here (for a drive that is supposed to run at ~100 MB/s), and then you can always go back and try your luck with the slow areas later.
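Put together, a two-pass run along those lines might look like this (device and file names are placeholders; adjust the 10M threshold to roughly a tenth of your drive's healthy rate):

```shell
# Pass 1: grab the fast areas first, skipping any region that
# reads below 10 MB/s (placeholder device and image paths)
ddrescue --min-read-rate=10M /dev/sdX rescued.img rescued.map

# Later: resume from the same map file and work on the slow/bad
# areas, retrying failed blocks a few times
ddrescue --retry-passes=3 /dev/sdX rescued.img rescued.map
```

Because both runs share the same map file, the second invocation only touches the areas the first one skipped or failed on.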
the behavior I observed seems to me to be a bug.

If you have a bug, then you have to debug it. It's hard to reproduce without having the same kind of drive failure. It could just as well be the drive itself that is stuck in some recovery mode.
When dealing with defective drives, you also have to check dmesg to see whether any odd things are happening, such as bus resets and the like. Some controllers are also worse than others at dealing with failing drives.
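One way to keep an eye on this while the rescue runs (the grep pattern is only an illustration, not an exhaustive list of symptoms):

```shell
# Follow new kernel messages and highlight lines that often
# accompany a struggling drive or controller (illustrative pattern)
dmesg --follow | grep -iE 'reset|i/o error|timeout'
```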
Sometimes manual intervention just can't be avoided.
Even then, I am surprised this is not done by default.

Most programs don't come with sane defaults. dd still uses a 512-byte block size by default, which is the "wrong" choice in most cases... What is considered sane might also change over time.
I am asking for facts that I need for my work, not opinions.

Having good backups is better than having to rely on ddrescue. Getting data off a failing drive is a matter of luck in the first place. Data recovery involves a lot of personal experience, and thus opinions.

Most recovery tools we have are also stupid. The tool does not have an AI that reports to a central server and goes, "Oh, I've seen this failure pattern on this particular drive model before, so let's change our strategy..." So that part has to be done by humans.
answered Aug 6 '18 at 15:01 – frostschutz
Quote: "It could just as well be the drive itself that is stuck in some recovery mode." It could also be S.M.A.R.T. that is running some periodic check. – Pro Backup, Aug 6 '18 at 17:48

Much to think about, and you do provide some clarification on apparently weird behaviour. Still, it does not explain why ddrescue is not able to regain its former speed unless I stop and restart it. And now I suspect the --min-read-rate= option will not help with that, but only controls skipping. I will try to check that when I am finished rescuing the disk, which has lost 10 MB out of 1.5 TB. But I also lack experience in repairing file systems. – babou, Aug 6 '18 at 21:59
@babou I don't have sufficient 'reputation' to upvote your question. I tried to and found that I cannot. I found your question after experiencing the exact behaviour you described. It is clear to me that the people who think your observations are 'opinion' are wrong, and wrong with mathematical certainty. They should be congratulated for achieving maximum wrongness. Your valid questions asking why this is an opinion have gone unanswered, which is a pity, as that is exactly the question I would have asked. My opinion (and it is more a guess) is that people are closed-minded to things they don't want to believe (for example, climate change denial) and simply discard new information that doesn't fit their model of the world. You have stated an observable fact. An apple falling down from a tree is now an opinion?

Sorry that my two cents' worth is much too late to be useful to you, but I thought that (a) I would like to let you know that you have been wronged (that is a fact, not an opinion) and (b) having seen exactly what you describe, I have something that remedies the problem, at least for me on this occasion. I used the -d switch on the command line, aka 'direct disk access'.

Now -d is a remedy (not necessarily a solution), but it is not an explanation. Sorry, I don't have that. All I know at this time is that I had your problem, and -d is a remedy, working pretty much how you described (sarcasm mode on) 'in your opinion' (sarcasm mode off) you would expect ddrescue to work. I am doing a disk recovery using system-rescue-cd, and even with the output file as /dev/null, I see the behaviour you reported.

BTW, in my current problem the 'current rate' drops from 100 MB/s to 60 MB/s, so the difference is significant. With -d, this saves more than one hour on a 1 TB disk.
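For reference, the invocation described above might look like this (device and file names are placeholders):

```shell
# -d (--idirect) reads the source device with direct access,
# bypassing the kernel page cache, which avoided the throughput
# collapse described above (placeholder paths)
ddrescue -d /dev/sdX rescued.img rescued.map
```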
answered 28 mins ago – Robert Watson (new contributor)
Did you try sdd from schilytools? Sdd is much older than ddrescue, so it learned from the time when disks had more problems (in the 1980s). Its read speed only depends on the error state of the source disk. – schily, Aug 6 '18 at 7:27
Two users voted to close the question (not to mention one downvote), but without a word of explanation. I am not experienced with ddrescue and disk reading issues, but I did put some care and time into researching my problem, and into writing the question, which does differ from the many questions just asking "why is ddrescue so slow?". I would much appreciate a word of comment regarding the reason(s) for wanting to close the question. – babou, Aug 6 '18 at 12:47
@schily From what I read on SourceForge, sdd is a replacement for dd, not for ddrescue. Also, not being very experienced with this issue, I must confess that I tend to stay with the tool for which I will more easily find help on the net. And I tend to trust GNU software in general. But I will look ... I assume you wrote it. – babou, Aug 6 '18 at 12:59
Sdd has the needed properties inside. The main options to control that are -noerror and try=. I know that I repaired hundreds of disks with sdd, and I usually tend to distrust GNU software because I've seen too many problems in too many GNU programs, and a reported bug typically takes 20 years to fix. – schily, Aug 6 '18 at 13:46
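For comparison, an sdd invocation using the two options named in the comment might look roughly like this. This is only a sketch built from the option names given above; the exact syntax, the bs= value, and the retry count are assumptions, so check the schilytools sdd man page before relying on it:

```shell
# Copy a failing disk, continuing past read errors (-noerror)
# and retrying each failing block several times (try=).
# Device and image names are placeholders.
sdd if=/dev/sdX of=disk.img bs=32k -noerror try=5
```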
Not being competent to judge, I will not discuss software sources. Two things I like in ddrescue are the policy of doing the easy parts first and, more generally, the map feature allowing stop and restart with changed parameters, or prioritizing a specific part of the disk. Would sdd do that? BTW, I love your avatar; it reminds me of the caricature of an Italian free-software activist baking his disk in a pizza oven. – babou, Aug 6 '18 at 14:09