Make disk/disk copy slower
Is there a method of slowing down the copy process on Linux?
I have a big file, say 10 GB, and I'd like to copy it to another directory, but I don't want to copy it at full speed. Let's say I'd like to copy it at 1 MB/s, not faster. I'd like to use the standard Linux cp command.
Is this possible? (If yes, how?)
Edit: I'll add more context about what I'm trying to achieve.
I have a problem on Arch Linux when copying large files over USB (to a pendrive, USB disk, etc.). After the USB buffer cache fills up, my system stops responding (even the mouse freezes; it moves only sporadically). The copy operation is still ongoing, but it takes 100% of the box's resources. When the copy operation finishes, everything goes back to normal -- everything is perfectly responsive again.
Maybe it's a hardware error, I don't know, but I do know I have two machines with this problem (both run Arch Linux; one is a desktop box, the other a laptop).
The easiest and fastest "solution" (I agree it's not the 'real' solution, just an ugly 'hack') would be to prevent this buffer from filling up by copying the file at the average write speed of the USB drive; for me that would be enough.
linux performance file-copy block-device limit
migrated from serverfault.com Mar 1 '14 at 18:20
This question came from our site for system and network administrators.
What are you trying to accomplish? Why do you want to slow down a file operation? – Michael Hampton, Mar 1 '14 at 14:41
If you are seeking to limit disk-to-disk copy speed in an effort to be "nice" to other I/O-bound processes in the system, you are probably better off taking advantage of the kernel's ability to tune I/O scheduling instead. Specifically, ionice can be used to ensure that your disk-to-disk copy process is scheduled I/O at a lower priority than regular processes. – Steven Monday, Mar 1 '14 at 15:13
This is a classic XY problem question. You should instead ask why your desktop becomes unresponsive when you copy files to a USB device. – Michael Hampton, Mar 1 '14 at 18:50
Linux actually has ridiculously large I/O buffers these days. RAM sizes have grown faster than mass storage speeds. Maybe you could perform the copy using dd(1) and sync so that it would actually be synced periodically instead of being buffered? And pipe viewer (pv) has a rate-limiting option, something like cat file | pv -L 3k > outfile. Neither is the same as using cp(1), though. – ptman, Mar 1 '14 at 18:56
@MichaelHampton, there are several unresolved threads about this issue on the Arch Linux forums, so I figured I'd try to cope with it in a different way, just to make it work. – antonone, Mar 1 '14 at 22:02
8 Answers
You can throttle a pipe with pv -qL (cstream -t provides similar functionality):
tar -cf - . | pv -qL 8192 | tar -C /your/usb -xvf -
The -L limit is in bytes per second; -q suppresses the progress reporting on stderr.
This answer originally pointed to throttle, but that project is no longer available and has slipped out of some package repositories.
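For comparison, a roughly equivalent invocation with cstream, as a sketch only; the rate of 1000000 bytes per second and the /your/usb path are illustrative placeholders, not values from the answer:
tar -cf - . | cstream -t 1000000 | tar -C /your/usb -xvf -   # -t caps throughput in bytes/second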
If cp can't be slowed down, then using a custom command is the only option, I guess. – antonone, Mar 2 '14 at 9:33
Sounds too complicated in comparison with the rsync approach. – LinuxSecurityFreak, Dec 15 '16 at 8:03
Looks more complicated but more usable to me. I need to test a file locking mechanism and need to slow copying down to some bytes/s, which seems not possible with rsync. I'll give it a try and 'cat' a file through the throttle pipe. – cljk, Jul 18 at 11:56
Sad to say, but the project is dead: bugs.debian.org/cgi-bin/bugreport.cgi?bug=426891 – cljk, Jul 19 at 12:03
@cljk Updated to pv. Thanks. – Matt, 11 mins ago
Instead of cp -a /foo /bar you can also use rsync and limit the bandwidth as you need.
From rsync's manual:
--bwlimit=KBPS
limit I/O bandwidth; KBytes per second
So the actual command, also showing progress, would look like this:
rsync -av --bwlimit=100 --progress /foo /bar
This sounds like a nice idea for copying old drives I don't want to beat up. – jeremyjjbrown, Jun 7 '15 at 22:13
Doesn't work for reading from /dev/zero or /dev/random. – cdosborn, Jan 25 '16 at 21:23
rsync -a --bwlimit=1500 /source /destination works perfectly to copy giant folders at 1.5 MB/s (which is a good trade-off between avoiding any server slowdown and not taking too much time). – lucaferrario, Jul 28 '17 at 10:06
Side note: even though the man page may say that you can use letters for units, e.g. 20m, it is not supported on all platforms, so better stick to the KBytes notation. – Hubert Grzeskowiak, Aug 22 '17 at 2:15
Saved my day! The cgroup approach, cgexec -g ... cp /in /out, was not working all the time (it worked sometimes from a terminal, never from a script) and I have no idea why... – Aquarius Power, Oct 29 '18 at 22:56
I would assume you are trying not to disrupt other activity. Recent versions of Linux include ionice
which does allow you to control the scheduling of IO.
Besides allowing various priorities, there is an additional option to limit IO to times when the disk is otherwise idle. The command man ionice
will display the documentation.
Try copying the file using a command like:
ionice -c 3 cp largefile /new/directory
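If the idle class stalls the copy entirely while other I/O is going on, a possible middle ground (an assumption, not part of the original answer) is the best-effort class at its lowest priority, optionally combined with CPU niceness:
ionice -c 2 -n 7 nice -n 19 cp largefile /new/directory   # best-effort I/O class at lowest priority, plus CPU nice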
If the two directories are on the same device you may find linking the file does what you want. If you are copying for backup purposes, do not use this option. ln
is extremely fast as the file itself does not get copied. Try:
ln largefile /new/directory
Or if you just want to access it from a directory on a different device try:
ln -s largefile /new/directory
Does ionice work well in Linux? I read that it just "emulates" the work and there is no real difference? +1 for the links. – Nick, Nov 30 '15 at 16:51
@Nick When I've used it, it has behaved as expected. The process to which I applied ionice slowed significantly, and the other processes that needed I/O were able to perform as expected. With a moderate I/O load from other processes, I was able to effectively suspend a high-I/O process by applying maximal 'niceness', as expected. Once there was no competing I/O, the ioniced process performed as normal. – BillThor, Dec 2 '15 at 0:47
With a 400 MB file I was copying from an HD to an SSD, it worked perfectly for the first 10 s, then suddenly I saw a high I/O load and had to wait for about a minute with the machine frozen :/. I have the same problem with the cgroup write I/O throttle, where it works sometimes and other times it won't work at all. – Aquarius Power, Oct 29 '18 at 23:08
If the ionice solution is not enough (for whatever reason) and you really want to limit I/O to an absolute value, there are several possibilities:
- Probably the easiest: ssh. It has a built-in bandwidth limit. You would use e.g. tar (instead of cp) or scp (if that's good enough; I don't know how it handles symlinks and hard links) or rsync. These commands can pipe their data over ssh. In the case of tar you write to /dev/stdout (or -) and pipe that into the ssh client, which executes another tar on the "remote" side.
- Elegant but not in the vanilla kernel (AFAIK): the device mapper target ioband. This, of course, works only if you can unmount either the source or the target volume.
- Some self-written fun: grep "^write_bytes: " /proc/$PID/io gives you the amount of data a process has written. You could write a script which starts cp in the background, sleeps for e.g. 1/10th of a second, stops the background cp process (kill -STOP $PID), checks the amount which has been written (and read? about the same value in this case), calculates how long cp must pause in order to bring the average transfer rate down to the intended value, sleeps for that time, wakes cp up (kill -CONT $PID), and so on. A rough sketch of this idea follows below.
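A minimal, untested sketch of that pacing loop, assuming a 1 MiB/s target and placeholder source/destination paths (these values are illustrative, not from the answer):
#!/bin/bash
# Untested sketch of the kill -STOP / kill -CONT pacing idea described above.
LIMIT=$((1024 * 1024))        # target average rate: 1 MiB/s (placeholder)
SRC=/path/to/largefile        # hypothetical source file
DST=/mnt/usb/largefile        # hypothetical destination

cp "$SRC" "$DST" &
PID=$!
START=$(date +%s)

while kill -0 "$PID" 2>/dev/null; do
    sleep 0.1
    kill -STOP "$PID" 2>/dev/null
    # bytes the cp process has written so far, from /proc/$PID/io
    WRITTEN=$(awk '/^write_bytes:/ {print $2}' "/proc/$PID/io" 2>/dev/null)
    ELAPSED=$(( $(date +%s) - START ))
    # if cp is ahead of the target rate, stay stopped until the average drops back down
    if [ -n "$WRITTEN" ] && [ "$ELAPSED" -gt 0 ] && [ $((WRITTEN / ELAPSED)) -gt "$LIMIT" ]; then
        sleep $(( WRITTEN / LIMIT - ELAPSED ))
    fi
    kill -CONT "$PID" 2>/dev/null
done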
Yes, normally I'm just using lftp to connect to localhost via scp, and limit the bandwidth from there. – antonone, Mar 1 '14 at 22:06
Congrats on 10K, just pushed you over. – slm♦, Mar 2 '14 at 15:39
Your problem is probably not with your computer per se - it's probably fine. But the USB flash translation layer (FTL) has a processor of its own that has to map out all of your writes to compensate for what could be as much as a 90% faulty flash chip, who knows? You flood it, then you flood your buffers, then you flood the whole bus, and then you're stuck, man - after all, that's where all your stuff is. It may sound counter-intuitive, but what you really need is blocking I/O - you need to let the FTL set the pace and then just keep up.
(On hacking FTL microcontrollers: http://www.bunniestudios.com/blog/?p=3554)
All of the above answers should work, so this is more a "me too!" than anything else: I've totally been there, man. I solved my own issues with rsync's --bwlimit arg (2.5 MB/s seemed to be the sweet spot for a single, error-free run - anything more and I'd wind up with write-protect errors). rsync was especially suited to my purpose because I was working with entire filesystems - so there were a lot of files - and simply running rsync a second time would fix all of the first run's problems (which was necessary when I'd get impatient and try to ramp past 2.5 MB/s).
Still, I guess that's not quite as practical for a single file. In your case you could just pipe to dd set to raw-write - you can handle any input that way, but only one target file at a time (though that single file could be an entire block device, of course).
## OBTAIN OPTIMAL IO VALUE FOR TARGET HOST DEV ##
## IT'S IMPORTANT THAT YOUR "bs" VALUE IS A MULTIPLE ##
## OF YOUR TARGET DEV'S SECTOR SIZE (USUALLY 512b) ##
% bs=$(blockdev --getoptio /local/target/dev)
## START LISTENING; PIPE OUT ON INPUT ##
% nc -l -p $PORT | lz4 |
## PIPE THROUGH DECOMPRESSOR TO DD ##
> dd bs=$bs of=/mnt/local/target.file
## AND BE SURE DD'S FLAGS DECLARE RAW IO ##
> conv=fsync oflag=direct,sync,nocache
## OUR RECEIVER'S WAITING; DIAL REMOTE TO BEGIN ##
% ssh user@remote.host <<-REMOTECMD
## JUST REVERSED; NO RAW IO FLAGS NEEDED HERE, THOUGH ##
> dd if=/remote/source.file bs=$bs |
> lz4 -9 | nc local.target.domain $PORT
> REMOTECMD
You might find netcat to be a little faster than ssh for the data transport if you give it a shot. Anyway, the other ideas were already taken, so why not?
[EDIT]: I noticed the mentions of lftp, scp, and ssh in the other post and thought we were talking about a remote copy. Local's a lot easier:
% bs=$(blockdev --getoptio /local/target/dev)
% dd if=/src/fi.le bs=$bs iflag=fullblock of=/tgt/fi.le
> conv=fsync oflag=direct,sync,nocache
[EDIT2]: Credit where it's due: just noticed ptman beat me to this by like five hours in the comments.
Definitely you could tune $bs for performance here with a multiplier - but some filesystems might require it to be a multiple of the target fs's sectorsize so keep that in mind.
On my machine, the flag is --getioopt, not --getoptio.
– Michael Mior
May 9 '17 at 17:46
The problem is that the copy is filling up your memory with blocks "in flight," crowding out "useful" data. This is a known (and very hard to fix) bug in the Linux kernel's handling of I/O to slow devices (USB in this case).
Perhaps you can try to parcel out the copying, e.g. with a script like the following (a proof-of-concept sketch, totally untested!):
while true; do
  dd if=infile of=outfile bs=4096 count=... seek=... skip=... conv=notrunc
  sleep 5
done
adjusting seek and skip by count each round. Tune count so it doesn't fill up (too much) memory, and the 5 so the cache has time to drain.
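To make that sketch a bit more concrete, here is one hypothetical way to fill in the blanks; the 4 MiB block size, 64 MiB chunk per round, 5-second pause, and the infile/outfile names are illustrative placeholders, not tuned values:
#!/bin/bash
# Hypothetical expansion of the proof-of-concept above; all sizes are guesses to be tuned.
bs=$((4 * 1024 * 1024))            # 4 MiB dd block size
count=16                           # 16 blocks = 64 MiB copied per round
size=$(stat -c %s infile)          # total number of bytes to copy
i=0
while [ $(( i * count * bs )) -lt "$size" ]; do
    dd if=infile of=outfile bs="$bs" count="$count" \
       seek=$(( i * count )) skip=$(( i * count )) conv=notrunc status=none
    sync                           # flush this chunk to the device
    sleep 5                        # let the cache drain before the next round
    i=$(( i + 1 ))
done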
Lower the dirty page limit. The default limit is insane.
Create /etc/sysctl.d/99-sysctl.conf with:
vm.dirty_background_ratio = 3
vm.dirty_ratio = 10
Then run sysctl -p or reboot.
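For example (a sketch; adjust the path if your config file lives elsewhere), you can check the current limits and load the new ones without rebooting:
sysctl vm.dirty_background_ratio vm.dirty_ratio      # show the current limits
sudo sysctl -p /etc/sysctl.d/99-sysctl.conf          # apply the new settings immediately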
What's happening is that data is being read faster than it can be written to the destination disk. When Linux copies files, it reads them into RAM, then marks the pages as dirty for writing to the destination. Dirty pages cannot be swapped out. So if the source disk is faster than the destination disk and you're copying more data than you have free RAM, the copy operation will eat up all available RAM (or at least whatever the dirty page limit is, which could be more than the available RAM) and cause starvation, as the dirty pages cannot be swapped out and clean pages get used and marked dirty as they are freed.
Note that this will not completely solve the problem... what Linux really needs is some way to arbitrate the creation of dirty pages, so that a large transfer in progress does not eat up all available RAM/all allowed dirty pages.
This problem has nothing to do with errors or faults in hardware or software; it's just your kernel trying to be nice to you, giving your prompt back and copying in the background (it uses an in-kernel cache: more RAM means more cache, but you can limit it by writing somewhere in /proc - not recommended, though). Flash drives are too slow, and while the kernel writes to one, other I/O operations can't be performed fast enough. ionice, mentioned several times in other answers, is OK. But have you tried just mounting the drive with -o sync to avoid OS buffering? It's probably the simplest solution out there.
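For example (a sketch; /dev/sdb1 and /mnt/usb are placeholder names for your device and mount point):
sudo mount -o sync /dev/sdb1 /mnt/usb     # mount the stick with synchronous writes
sudo mount -o remount,sync /mnt/usb       # or remount an already-mounted stick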
After enabling -o sync, my Internet is faster than the write speed to this USB drive. What I don't understand is why the kernel doesn't track how quickly cache pages are getting flushed, and schedule future flushes based on that. It's like it always goes full-speed, even if this poor drive can't keep up. But that's a topic for another question, I guess.
– antonone
Mar 4 '14 at 11:07
add a comment |
Your Answer
StackExchange.ready(function() {
var channelOptions = {
tags: "".split(" "),
id: "106"
};
initTagRenderer("".split(" "), "".split(" "), channelOptions);
StackExchange.using("externalEditor", function() {
// Have to fire editor after snippets, if snippets enabled
if (StackExchange.settings.snippets.snippetsEnabled) {
StackExchange.using("snippets", function() {
createEditor();
});
}
else {
createEditor();
}
});
function createEditor() {
StackExchange.prepareEditor({
heartbeatType: 'answer',
autoActivateHeartbeat: false,
convertImagesToLinks: false,
noModals: true,
showLowRepImageUploadWarning: true,
reputationToPostImages: null,
bindNavPrevention: true,
postfix: "",
imageUploader: {
brandingHtml: "Powered by u003ca class="icon-imgur-white" href="https://imgur.com/"u003eu003c/au003e",
contentPolicyHtml: "User contributions licensed under u003ca href="https://creativecommons.org/licenses/by-sa/3.0/"u003ecc by-sa 3.0 with attribution requiredu003c/au003e u003ca href="https://stackoverflow.com/legal/content-policy"u003e(content policy)u003c/au003e",
allowUrls: true
},
onDemand: true,
discardSelector: ".discard-answer"
,immediatelyShowMarkdownHelp:true
});
}
});
Sign up or log in
StackExchange.ready(function () {
StackExchange.helpers.onClickDraftSave('#login-link');
});
Sign up using Google
Sign up using Facebook
Sign up using Email and Password
Post as a guest
Required, but never shown
StackExchange.ready(
function () {
StackExchange.openid.initPostLogin('.new-post-login', 'https%3a%2f%2funix.stackexchange.com%2fquestions%2f117680%2fmake-disk-disk-copy-slower%23new-answer', 'question_page');
}
);
Post as a guest
Required, but never shown
8 Answers
8
active
oldest
votes
8 Answers
8
active
oldest
votes
active
oldest
votes
active
oldest
votes
You can throttle a pipe with pv -qL
(or cstream -t
provides similar functionality)
tar -cf - . | pv -qL 8192 | tar -C /your/usb -xvf -
The -L
limit is in bytes.-q
removes stderr progress reporting.
This answer originally pointed to throttle
but that project is no longer available so has slipped out of some package systems.
Ifcp
can't be slowed down, then using a custom command is the only option I guess.
– antonone
Mar 2 '14 at 9:33
Sounds too complicated in comparison with thersync
– LinuxSecurityFreak
Dec 15 '16 at 8:03
looks more complicated but more usable to me. Need to test a file lockingechanism and need slowing down copying down to some bytes/s which seems not possible with rsync. Ill give it a try and 'cat' a file through the throttle pipe
– cljk
Jul 18 at 11:56
sad to say but the project is dead bugs.debian.org/cgi-bin/bugreport.cgi?bug=426891
– cljk
Jul 19 at 12:03
@cljk updated topv
. thanks.
– Matt
11 mins ago
add a comment |
You can throttle a pipe with pv -qL
(or cstream -t
provides similar functionality)
tar -cf - . | pv -qL 8192 | tar -C /your/usb -xvf -
The -L
limit is in bytes.-q
removes stderr progress reporting.
This answer originally pointed to throttle
but that project is no longer available so has slipped out of some package systems.
Ifcp
can't be slowed down, then using a custom command is the only option I guess.
– antonone
Mar 2 '14 at 9:33
Sounds too complicated in comparison with thersync
– LinuxSecurityFreak
Dec 15 '16 at 8:03
looks more complicated but more usable to me. Need to test a file lockingechanism and need slowing down copying down to some bytes/s which seems not possible with rsync. Ill give it a try and 'cat' a file through the throttle pipe
– cljk
Jul 18 at 11:56
sad to say but the project is dead bugs.debian.org/cgi-bin/bugreport.cgi?bug=426891
– cljk
Jul 19 at 12:03
@cljk updated topv
. thanks.
– Matt
11 mins ago
add a comment |
You can throttle a pipe with pv -qL
(or cstream -t
provides similar functionality)
tar -cf - . | pv -qL 8192 | tar -C /your/usb -xvf -
The -L
limit is in bytes.-q
removes stderr progress reporting.
This answer originally pointed to throttle
but that project is no longer available so has slipped out of some package systems.
You can throttle a pipe with pv -qL
(or cstream -t
provides similar functionality)
tar -cf - . | pv -qL 8192 | tar -C /your/usb -xvf -
The -L
limit is in bytes.-q
removes stderr progress reporting.
This answer originally pointed to throttle
but that project is no longer available so has slipped out of some package systems.
edited 12 mins ago
answered Mar 1 '14 at 22:07
MattMatt
6,3951 gold badge16 silver badges26 bronze badges
6,3951 gold badge16 silver badges26 bronze badges
Ifcp
can't be slowed down, then using a custom command is the only option I guess.
– antonone
Mar 2 '14 at 9:33
Sounds too complicated in comparison with thersync
– LinuxSecurityFreak
Dec 15 '16 at 8:03
looks more complicated but more usable to me. Need to test a file lockingechanism and need slowing down copying down to some bytes/s which seems not possible with rsync. Ill give it a try and 'cat' a file through the throttle pipe
– cljk
Jul 18 at 11:56
sad to say but the project is dead bugs.debian.org/cgi-bin/bugreport.cgi?bug=426891
– cljk
Jul 19 at 12:03
@cljk updated topv
. thanks.
– Matt
11 mins ago
add a comment |
Ifcp
can't be slowed down, then using a custom command is the only option I guess.
– antonone
Mar 2 '14 at 9:33
Sounds too complicated in comparison with thersync
– LinuxSecurityFreak
Dec 15 '16 at 8:03
looks more complicated but more usable to me. Need to test a file lockingechanism and need slowing down copying down to some bytes/s which seems not possible with rsync. Ill give it a try and 'cat' a file through the throttle pipe
– cljk
Jul 18 at 11:56
sad to say but the project is dead bugs.debian.org/cgi-bin/bugreport.cgi?bug=426891
– cljk
Jul 19 at 12:03
@cljk updated topv
. thanks.
– Matt
11 mins ago
If
cp
can't be slowed down, then using a custom command is the only option I guess.– antonone
Mar 2 '14 at 9:33
If
cp
can't be slowed down, then using a custom command is the only option I guess.– antonone
Mar 2 '14 at 9:33
Sounds too complicated in comparison with the
rsync
– LinuxSecurityFreak
Dec 15 '16 at 8:03
Sounds too complicated in comparison with the
rsync
– LinuxSecurityFreak
Dec 15 '16 at 8:03
looks more complicated but more usable to me. Need to test a file lockingechanism and need slowing down copying down to some bytes/s which seems not possible with rsync. Ill give it a try and 'cat' a file through the throttle pipe
– cljk
Jul 18 at 11:56
looks more complicated but more usable to me. Need to test a file lockingechanism and need slowing down copying down to some bytes/s which seems not possible with rsync. Ill give it a try and 'cat' a file through the throttle pipe
– cljk
Jul 18 at 11:56
sad to say but the project is dead bugs.debian.org/cgi-bin/bugreport.cgi?bug=426891
– cljk
Jul 19 at 12:03
sad to say but the project is dead bugs.debian.org/cgi-bin/bugreport.cgi?bug=426891
– cljk
Jul 19 at 12:03
@cljk updated to
pv
. thanks.– Matt
11 mins ago
@cljk updated to
pv
. thanks.– Matt
11 mins ago
add a comment |
Instead of cp -a /foo /bar
you can also use rsync
and limit the bandwidth as you need.
From the rsync
's manual:
--bwlimit=KBPS
limit I/O bandwidth; KBytes per second
So, the actuall command, also showing the progress, would look like this:
rsync -av --bwlimit=100 --progress /foo /bar
This sounds like a nice idea for copying old drives I don't want to beat up.
– jeremyjjbrown
Jun 7 '15 at 22:13
Doesn't work for reading from/dev/zero
or/dev/random
– cdosborn
Jan 25 '16 at 21:23
rsync -a --bwlimit=1500 /source /destination
works perfectly to copy giant folders at a 1,5 MB/s speed (which is a good trade off between avoiding any server slow down and not taking too much time)
– lucaferrario
Jul 28 '17 at 10:06
Sidenote: even while the man page might say that you can use letters for units, e.g.20m
, it is not supported on all platforms, so better stick to the KBytes notation.
– Hubert Grzeskowiak
Aug 22 '17 at 2:15
saved my day! cgroupcgexec -g ... cp /in /out
was not working all the time (from terminal worked some times, from script never) and I have no idea why...
– Aquarius Power
Oct 29 '18 at 22:56
|
show 1 more comment
Instead of cp -a /foo /bar
you can also use rsync
and limit the bandwidth as you need.
From the rsync
's manual:
--bwlimit=KBPS
limit I/O bandwidth; KBytes per second
So, the actuall command, also showing the progress, would look like this:
rsync -av --bwlimit=100 --progress /foo /bar
This sounds like a nice idea for copying old drives I don't want to beat up.
– jeremyjjbrown
Jun 7 '15 at 22:13
Doesn't work for reading from/dev/zero
or/dev/random
– cdosborn
Jan 25 '16 at 21:23
rsync -a --bwlimit=1500 /source /destination
works perfectly to copy giant folders at a 1,5 MB/s speed (which is a good trade off between avoiding any server slow down and not taking too much time)
– lucaferrario
Jul 28 '17 at 10:06
Sidenote: even while the man page might say that you can use letters for units, e.g.20m
, it is not supported on all platforms, so better stick to the KBytes notation.
– Hubert Grzeskowiak
Aug 22 '17 at 2:15
saved my day! cgroupcgexec -g ... cp /in /out
was not working all the time (from terminal worked some times, from script never) and I have no idea why...
– Aquarius Power
Oct 29 '18 at 22:56
|
show 1 more comment
Instead of cp -a /foo /bar
you can also use rsync
and limit the bandwidth as you need.
From the rsync
's manual:
--bwlimit=KBPS
limit I/O bandwidth; KBytes per second
So, the actuall command, also showing the progress, would look like this:
rsync -av --bwlimit=100 --progress /foo /bar
Instead of cp -a /foo /bar
you can also use rsync
and limit the bandwidth as you need.
From the rsync
's manual:
--bwlimit=KBPS
limit I/O bandwidth; KBytes per second
So, the actuall command, also showing the progress, would look like this:
rsync -av --bwlimit=100 --progress /foo /bar
edited Dec 15 '16 at 7:58
LinuxSecurityFreak
9,05017 gold badges74 silver badges162 bronze badges
9,05017 gold badges74 silver badges162 bronze badges
answered Mar 2 '14 at 0:35
user55518
This sounds like a nice idea for copying old drives I don't want to beat up.
– jeremyjjbrown
Jun 7 '15 at 22:13
Doesn't work for reading from/dev/zero
or/dev/random
– cdosborn
Jan 25 '16 at 21:23
rsync -a --bwlimit=1500 /source /destination
works perfectly to copy giant folders at a 1,5 MB/s speed (which is a good trade off between avoiding any server slow down and not taking too much time)
– lucaferrario
Jul 28 '17 at 10:06
Sidenote: even while the man page might say that you can use letters for units, e.g.20m
, it is not supported on all platforms, so better stick to the KBytes notation.
– Hubert Grzeskowiak
Aug 22 '17 at 2:15
saved my day! cgroupcgexec -g ... cp /in /out
was not working all the time (from terminal worked some times, from script never) and I have no idea why...
– Aquarius Power
Oct 29 '18 at 22:56
|
show 1 more comment
This sounds like a nice idea for copying old drives I don't want to beat up.
– jeremyjjbrown
Jun 7 '15 at 22:13
Doesn't work for reading from/dev/zero
or/dev/random
– cdosborn
Jan 25 '16 at 21:23
rsync -a --bwlimit=1500 /source /destination
works perfectly to copy giant folders at a 1,5 MB/s speed (which is a good trade off between avoiding any server slow down and not taking too much time)
– lucaferrario
Jul 28 '17 at 10:06
Sidenote: even while the man page might say that you can use letters for units, e.g.20m
, it is not supported on all platforms, so better stick to the KBytes notation.
– Hubert Grzeskowiak
Aug 22 '17 at 2:15
saved my day! cgroupcgexec -g ... cp /in /out
was not working all the time (from terminal worked some times, from script never) and I have no idea why...
– Aquarius Power
Oct 29 '18 at 22:56
This sounds like a nice idea for copying old drives I don't want to beat up.
– jeremyjjbrown
Jun 7 '15 at 22:13
This sounds like a nice idea for copying old drives I don't want to beat up.
– jeremyjjbrown
Jun 7 '15 at 22:13
Doesn't work for reading from
/dev/zero
or /dev/random
– cdosborn
Jan 25 '16 at 21:23
Doesn't work for reading from
/dev/zero
or /dev/random
– cdosborn
Jan 25 '16 at 21:23
rsync -a --bwlimit=1500 /source /destination
works perfectly to copy giant folders at a 1,5 MB/s speed (which is a good trade off between avoiding any server slow down and not taking too much time)– lucaferrario
Jul 28 '17 at 10:06
rsync -a --bwlimit=1500 /source /destination
works perfectly to copy giant folders at a 1,5 MB/s speed (which is a good trade off between avoiding any server slow down and not taking too much time)– lucaferrario
Jul 28 '17 at 10:06
Sidenote: even while the man page might say that you can use letters for units, e.g.
20m
, it is not supported on all platforms, so better stick to the KBytes notation.– Hubert Grzeskowiak
Aug 22 '17 at 2:15
Sidenote: even while the man page might say that you can use letters for units, e.g.
20m
, it is not supported on all platforms, so better stick to the KBytes notation.– Hubert Grzeskowiak
Aug 22 '17 at 2:15
saved my day! cgroup
cgexec -g ... cp /in /out
was not working all the time (from terminal worked some times, from script never) and I have no idea why...– Aquarius Power
Oct 29 '18 at 22:56
saved my day! cgroup
cgexec -g ... cp /in /out
was not working all the time (from terminal worked some times, from script never) and I have no idea why...– Aquarius Power
Oct 29 '18 at 22:56
|
show 1 more comment
I would assume you are trying not to disrupt other activity. Recent versions of Linux include ionice
which does allow you to control the scheduling of IO.
Besides allowing various priorities, there is an additional option to limit IO to times when the disk is otherwise idle. The command man ionice
will display the documentation.
Try copying the file using a command like:
ionice -c 3 cp largefile /new/directory
If the two directories are on the same device you may find linking the file does what you want. If you are copying for backup purposes, do not use this option. ln
is extremely fast as the file itself does not get copied. Try:
ln largefile /new/directory
Or if you just want to access it from a directory on a different device try:
ln -s largefile /new/directory
is ionice works well in linux? i read it just "emulate" work and there is no real difference? +1 for links
– Nick
Nov 30 '15 at 16:51
1
@Nick When I've used it, it has behaved as expected. The process to which I applied ionice slowed significantly, an the other processes that needed I/O were able to perform as expected. With a moderate I/O load from other processes, I was able to effectively suspend a high I/O process by applying maximal 'niceness' as expected. Once there was no competing I/O, the ioniced process performed as normal.
– BillThor
Dec 2 '15 at 0:47
with the 400MB file I was copying from one HD to a SSD, the initial 10s it worked perfectly, then suddenly I saw I high IO load and had to wait for like 1minute machine frozen :/. I have the same problem with cgroup write io throttle where it works sometimes and others it wont work at all.
– Aquarius Power
Oct 29 '18 at 23:08
add a comment |
I would assume you are trying not to disrupt other activity. Recent versions of Linux include ionice
which does allow you to control the scheduling of IO.
Besides allowing various priorities, there is an additional option to limit IO to times when the disk is otherwise idle. The command man ionice
will display the documentation.
Try copying the file using a command like:
ionice -c 3 cp largefile /new/directory
If the two directories are on the same device you may find linking the file does what you want. If you are copying for backup purposes, do not use this option. ln
is extremely fast as the file itself does not get copied. Try:
ln largefile /new/directory
Or if you just want to access it from a directory on a different device try:
ln -s largefile /new/directory
is ionice works well in linux? i read it just "emulate" work and there is no real difference? +1 for links
– Nick
Nov 30 '15 at 16:51
1
@Nick When I've used it, it has behaved as expected. The process to which I applied ionice slowed significantly, an the other processes that needed I/O were able to perform as expected. With a moderate I/O load from other processes, I was able to effectively suspend a high I/O process by applying maximal 'niceness' as expected. Once there was no competing I/O, the ioniced process performed as normal.
– BillThor
Dec 2 '15 at 0:47
with the 400MB file I was copying from one HD to a SSD, the initial 10s it worked perfectly, then suddenly I saw I high IO load and had to wait for like 1minute machine frozen :/. I have the same problem with cgroup write io throttle where it works sometimes and others it wont work at all.
– Aquarius Power
Oct 29 '18 at 23:08
add a comment |
I would assume you are trying not to disrupt other activity. Recent versions of Linux include ionice
which does allow you to control the scheduling of IO.
Besides allowing various priorities, there is an additional option to limit IO to times when the disk is otherwise idle. The command man ionice
will display the documentation.
Try copying the file using a command like:
ionice -c 3 cp largefile /new/directory
If the two directories are on the same device you may find linking the file does what you want. If you are copying for backup purposes, do not use this option. ln
is extremely fast as the file itself does not get copied. Try:
ln largefile /new/directory
Or if you just want to access it from a directory on a different device try:
ln -s largefile /new/directory
I would assume you are trying not to disrupt other activity. Recent versions of Linux include ionice
which does allow you to control the scheduling of IO.
Besides allowing various priorities, there is an additional option to limit IO to times when the disk is otherwise idle. The command man ionice
will display the documentation.
Try copying the file using a command like:
ionice -c 3 cp largefile /new/directory
If the two directories are on the same device you may find linking the file does what you want. If you are copying for backup purposes, do not use this option. ln
is extremely fast as the file itself does not get copied. Try:
ln largefile /new/directory
Or if you just want to access it from a directory on a different device try:
ln -s largefile /new/directory
answered Mar 1 '14 at 16:10
BillThorBillThor
7,85314 silver badges26 bronze badges
7,85314 silver badges26 bronze badges
is ionice works well in linux? i read it just "emulate" work and there is no real difference? +1 for links
– Nick
Nov 30 '15 at 16:51
1
@Nick When I've used it, it has behaved as expected. The process to which I applied ionice slowed significantly, an the other processes that needed I/O were able to perform as expected. With a moderate I/O load from other processes, I was able to effectively suspend a high I/O process by applying maximal 'niceness' as expected. Once there was no competing I/O, the ioniced process performed as normal.
– BillThor
Dec 2 '15 at 0:47
with the 400MB file I was copying from one HD to a SSD, the initial 10s it worked perfectly, then suddenly I saw I high IO load and had to wait for like 1minute machine frozen :/. I have the same problem with cgroup write io throttle where it works sometimes and others it wont work at all.
– Aquarius Power
Oct 29 '18 at 23:08
add a comment |
is ionice works well in linux? i read it just "emulate" work and there is no real difference? +1 for links
– Nick
Nov 30 '15 at 16:51
1
@Nick When I've used it, it has behaved as expected. The process to which I applied ionice slowed significantly, an the other processes that needed I/O were able to perform as expected. With a moderate I/O load from other processes, I was able to effectively suspend a high I/O process by applying maximal 'niceness' as expected. Once there was no competing I/O, the ioniced process performed as normal.
– BillThor
Dec 2 '15 at 0:47
with the 400MB file I was copying from one HD to a SSD, the initial 10s it worked perfectly, then suddenly I saw I high IO load and had to wait for like 1minute machine frozen :/. I have the same problem with cgroup write io throttle where it works sometimes and others it wont work at all.
– Aquarius Power
Oct 29 '18 at 23:08
is ionice works well in linux? i read it just "emulate" work and there is no real difference? +1 for links
– Nick
Nov 30 '15 at 16:51
is ionice works well in linux? i read it just "emulate" work and there is no real difference? +1 for links
– Nick
Nov 30 '15 at 16:51
1
1
@Nick When I've used it, it has behaved as expected. The process to which I applied ionice slowed significantly, an the other processes that needed I/O were able to perform as expected. With a moderate I/O load from other processes, I was able to effectively suspend a high I/O process by applying maximal 'niceness' as expected. Once there was no competing I/O, the ioniced process performed as normal.
– BillThor
Dec 2 '15 at 0:47
@Nick When I've used it, it has behaved as expected. The process to which I applied ionice slowed significantly, an the other processes that needed I/O were able to perform as expected. With a moderate I/O load from other processes, I was able to effectively suspend a high I/O process by applying maximal 'niceness' as expected. Once there was no competing I/O, the ioniced process performed as normal.
– BillThor
Dec 2 '15 at 0:47
with the 400MB file I was copying from one HD to a SSD, the initial 10s it worked perfectly, then suddenly I saw I high IO load and had to wait for like 1minute machine frozen :/. I have the same problem with cgroup write io throttle where it works sometimes and others it wont work at all.
– Aquarius Power
Oct 29 '18 at 23:08
with the 400MB file I was copying from one HD to a SSD, the initial 10s it worked perfectly, then suddenly I saw I high IO load and had to wait for like 1minute machine frozen :/. I have the same problem with cgroup write io throttle where it works sometimes and others it wont work at all.
– Aquarius Power
Oct 29 '18 at 23:08
add a comment |
If the ionice
solution is not enough (whyever) and you really want to limit I/O to an absolute value there are several possibilities:
the probably easiest:
ssh
. It has a built-in bandwidth limit. You would use e.g.tar
(instead ofcp
) orscp
(if that's good enough; I don't know how it handles symlinks and hard links) orrsync
. These commands can pipe their data overssh
. In case oftar
you write to/dev/stdout
(or-
) and pipe that into thessh
client which executes anothertar
on the "remote" side.elegant but not in the vanilla kernel (AFAIK): The device mapper target
ioband
. This, of course, works only if you can umount either the source or target volume.some self-written fun:
grep "^write_bytes: " /proc/$PID/io
gives you the amount of data a process has written. You could write a script which startscp
in the background, sleeps for e.g. 1/10th second, stops the backgroundcp
process (kill -STOP $PID
), checks the amount which has been written (and read? about the same value in this case), calculates for how longcp
must pause in order to take the average transfer rate down to the intended value, sleeps for that time, wakes upcp
(kill -CONT $PID
), and so on.
Yes, normally i'm just using lftp to connect to localhost via scp, and limit the bandwich from there.
– antonone
Mar 1 '14 at 22:06
1
Congrats on 10K, just pushed you over.
– slm♦
Mar 2 '14 at 15:39
add a comment |
If the ionice
solution is not enough (whyever) and you really want to limit I/O to an absolute value there are several possibilities:
the probably easiest:
ssh
. It has a built-in bandwidth limit. You would use e.g.tar
(instead ofcp
) orscp
(if that's good enough; I don't know how it handles symlinks and hard links) orrsync
. These commands can pipe their data overssh
. In case oftar
you write to/dev/stdout
(or-
) and pipe that into thessh
client which executes anothertar
on the "remote" side.elegant but not in the vanilla kernel (AFAIK): The device mapper target
ioband
. This, of course, works only if you can umount either the source or target volume.some self-written fun:
grep "^write_bytes: " /proc/$PID/io
gives you the amount of data a process has written. You could write a script which startscp
in the background, sleeps for e.g. 1/10th second, stops the backgroundcp
process (kill -STOP $PID
), checks the amount which has been written (and read? about the same value in this case), calculates for how longcp
must pause in order to take the average transfer rate down to the intended value, sleeps for that time, wakes upcp
(kill -CONT $PID
), and so on.
Yes, normally i'm just using lftp to connect to localhost via scp, and limit the bandwich from there.
– antonone
Mar 1 '14 at 22:06
1
Congrats on 10K, just pushed you over.
– slm♦
Mar 2 '14 at 15:39
add a comment |
If the ionice
solution is not enough (whyever) and you really want to limit I/O to an absolute value there are several possibilities:
the probably easiest:
ssh
. It has a built-in bandwidth limit. You would use e.g.tar
(instead ofcp
) orscp
(if that's good enough; I don't know how it handles symlinks and hard links) orrsync
. These commands can pipe their data overssh
. In case oftar
you write to/dev/stdout
(or-
) and pipe that into thessh
client which executes anothertar
on the "remote" side.elegant but not in the vanilla kernel (AFAIK): The device mapper target
ioband
. This, of course, works only if you can umount either the source or target volume.some self-written fun:
grep "^write_bytes: " /proc/$PID/io
gives you the amount of data a process has written. You could write a script which startscp
in the background, sleeps for e.g. 1/10th second, stops the backgroundcp
process (kill -STOP $PID
), checks the amount which has been written (and read? about the same value in this case), calculates for how longcp
must pause in order to take the average transfer rate down to the intended value, sleeps for that time, wakes upcp
(kill -CONT $PID
), and so on.
If the ionice
solution is not enough (whyever) and you really want to limit I/O to an absolute value there are several possibilities:
the probably easiest:
ssh
. It has a built-in bandwidth limit. You would use e.g.tar
(instead ofcp
) orscp
(if that's good enough; I don't know how it handles symlinks and hard links) orrsync
. These commands can pipe their data overssh
. In case oftar
you write to/dev/stdout
(or-
) and pipe that into thessh
client which executes anothertar
on the "remote" side.elegant but not in the vanilla kernel (AFAIK): The device mapper target
ioband
. This, of course, works only if you can umount either the source or target volume.some self-written fun:
grep "^write_bytes: " /proc/$PID/io
gives you the amount of data a process has written. You could write a script which startscp
in the background, sleeps for e.g. 1/10th second, stops the backgroundcp
process (kill -STOP $PID
), checks the amount which has been written (and read? about the same value in this case), calculates for how longcp
must pause in order to take the average transfer rate down to the intended value, sleeps for that time, wakes upcp
(kill -CONT $PID
), and so on.
edited Feb 8 '17 at 8:57
mwfearnley
3713 silver badges10 bronze badges
3713 silver badges10 bronze badges
answered Mar 1 '14 at 21:33
Hauke LagingHauke Laging
59.5k12 gold badges93 silver badges139 bronze badges
59.5k12 gold badges93 silver badges139 bronze badges
Yes, normally i'm just using lftp to connect to localhost via scp, and limit the bandwich from there.
– antonone
Mar 1 '14 at 22:06
1
Congrats on 10K, just pushed you over.
– slm♦
Mar 2 '14 at 15:39
add a comment |
Yes, normally i'm just using lftp to connect to localhost via scp, and limit the bandwich from there.
– antonone
Mar 1 '14 at 22:06
1
Congrats on 10K, just pushed you over.
– slm♦
Mar 2 '14 at 15:39
Yes, normally i'm just using lftp to connect to localhost via scp, and limit the bandwich from there.
– antonone
Mar 1 '14 at 22:06
Yes, normally i'm just using lftp to connect to localhost via scp, and limit the bandwich from there.
– antonone
Mar 1 '14 at 22:06
1
1
Congrats on 10K, just pushed you over.
– slm♦
Mar 2 '14 at 15:39
Congrats on 10K, just pushed you over.
– slm♦
Mar 2 '14 at 15:39
add a comment |
Your problem is probably not with your computer, per se, it's probably fine. But that USB flash transition layer has a processor of its own that has to map out all of your writes to compensate for what could be as much as a 90% faulty flash chip, who knows? You flood it then you flood your buffers then you flood the whole bus, then you're stuck, man - after all, that's where all your stuff is. It may sound counter-intuitive but what you really need is blocking I/O - you need to let the FTL set the pace and then just keep up.
(On hacking FTL microcontrollers: http://www.bunniestudios.com/blog/?p=3554)
All of the above answers should work so this is more a "me too!" than anything else: I've totally been there, man. I solved my own issues with rsync's --bwlimit arg (2.5mbs seemed to be the sweet spot for a single, error-free run - anything more and I'd wind up with write-protect errors). rsync was especially suited to my purpose because I was working with entire filesystems - so there were a lot of files - and simply running rsync a second time would fix all of the first run's problems (which was necessary when I'd get impatient and try to ramp past 2.5mbs).
Still, I guess that's not quite as practical for a single file. In your case you could just pipe to dd set to raw-write - you can handle any input that way, but only one target file at a time (though that single file could be an entire block device, of course).
## OBTAIN OPTIMAL IO VALUE FOR TARGET HOST DEV ##
## IT'S IMPORTANT THAT YOUR "bs" VALUE IS A MULTIPLE ##
## OF YOUR TARGET DEV'S SECTOR SIZE (USUALLY 512b) ##
% bs=$(blockdev --getoptio /local/target/dev)
## START LISTENING; PIPE OUT ON INPUT ##
% nc -l -p $PORT | lz4 |
## PIPE THROUGH DECOMPRESSOR TO DD ##
> dd bs=$bs of=/mnt/local/target.file
## AND BE SURE DD'S FLAGS DECLARE RAW IO ##
> conv=fsync oflag=direct,sync,nocache
## OUR RECEIVER'S WAITING; DIAL REMOTE TO BEGIN ##
% ssh user@remote.host <<-REMOTECMD
## JUST REVERSED; NO RAW IO FLAGS NEEDED HERE, THOUGH ##
> dd if=/remote/source.file bs=$bs |
> lz4 -9 | nc local.target.domain $PORT
> REMOTECMD
You might find netcat to be a little faster than ssh for the data transport if you give it a shot. Anyway, the other ideas were already taken, so why not?
[EDIT]: I noticed the mentions of lftp, scp, and ssh in the other post and thought we were talking about a remote copy. Local's a lot easier:
% bs=$(blockdev --getoptio /local/target/dev)
% dd if=/src/fi.le bs=$bs iflag=fullblock of=/tgt/fi.le
> conv=fsync oflag=direct,sync,nocache
[EDIT2]: Credit where it's due: just noticed ptman beat me to this by like five hours in the comments.
Definitely you could tune $bs for performance here with a multiplier - but some filesystems might require it to be a multiple of the target fs's sectorsize so keep that in mind.
edited Mar 4 '14 at 7:39
slm♦
265k73 gold badges573 silver badges717 bronze badges
answered Mar 1 '14 at 23:49
mikeserv
46.7k6 gold badges72 silver badges169 bronze badges
On my machine, the flag is --getioopt, not --getoptio.
– Michael Mior
May 9 '17 at 17:46
The problem is that the copy is filling up your memory with blocks "in flight," crowding out "useful" data. This is a known (and very hard to fix) shortcoming of the Linux kernel's handling of I/O to slow devices (USB in this case).
Perhaps you can try to parcel out the copying, e.g. by a script like the following (proof-of-concept sketch, totally untested!):
while true; do
    dd if=infile of=outfile bs=4096 count=... seek=... skip=...
    sleep 5
done
adjusting seek and skip by count each round. You need to tune count so it doesn't fill up (too much) memory, and the sleep of 5 seconds so the device has time to drain.
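In the same spirit, here is one way the bookkeeping might be filled in - still only a sketch, and the file names, block size and 64 MiB chunk size are placeholders, not values from the question:
#!/bin/sh
## All names and sizes below are placeholders - adjust for your own setup.
infile=/path/to/source.file
outfile=/mnt/usb/target.file
bs=4096                     ## bytes per block
chunk=16384                 ## blocks per round: 16384 * 4096 bytes = 64 MiB
total=$(( ( $(stat -c %s "$infile") + bs - 1 ) / bs ))

i=0
while [ "$i" -lt "$total" ]; do
    ## copy one chunk, then pause so the slow device can drain
    ## (status=none is GNU dd; drop it for other dd implementations)
    dd if="$infile" of="$outfile" bs="$bs" count="$chunk" \
       seek="$i" skip="$i" conv=notrunc,fsync status=none
    i=$(( i + chunk ))
    sleep 5
done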
answered Mar 2 '14 at 1:55
vonbrand
14.5k2 gold badges27 silver badges45 bronze badges
Lower the dirty page limit. The default limit is insane.
Create /etc/sysctl.d/99-sysctl.conf with:
vm.dirty_background_ratio = 3
vm.dirty_ratio = 10
Then run sysctl -p or reboot.
What's happening is that data is being read faster than it can be written to the destination disk. When Linux copies files, it reads them into RAM and then marks the pages as dirty for writing to the destination. Dirty pages cannot be swapped out. So if the source disk is faster than the destination disk and you're copying more data than you have free RAM, the copy operation will eat up all available RAM (or at least up to the dirty page limit, which can exceed the available RAM) and cause starvation, because the dirty pages cannot be swapped out and clean pages get used and marked dirty as they are freed.
Note that this will not completely solve the problem... what Linux really needs is some way to arbitrate the creation of dirty pages, so that a large transfer does not eat up all available RAM / all allowed dirty pages.
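If you want to try this, a quick way to check the current values and apply the new ones without a reboot (the 3/10 figures are this answer's suggestion, not kernel defaults):
# What the kernel is using right now
sysctl vm.dirty_background_ratio vm.dirty_ratio

# Write the suggested thresholds and load them immediately
printf 'vm.dirty_background_ratio = 3\nvm.dirty_ratio = 10\n' | sudo tee /etc/sysctl.d/99-sysctl.conf
sudo sysctl -p /etc/sysctl.d/99-sysctl.conf

# Watch how much dirty data is outstanding while a copy runs
grep -E 'Dirty|Writeback' /proc/meminfo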
answered Dec 1 '16 at 1:41
alex.forencich
2901 gold badge3 silver badges10 bronze badges
This problem has nothing to do with errors or faults in hardware or software; it's just your kernel trying to be nice to you: it gives your prompt back and does the copy in the background (using the in-kernel cache: more RAM means more cache, though you can limit it by writing somewhere in /proc - not recommended, though). Flash drives are too slow, and while the kernel writes to them, other IO operations can't be serviced fast enough. ionice, mentioned several times in other answers, is OK. But have you tried just mounting the drive with -o sync to avoid OS buffering? It's probably the simplest solution out there.
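For example (the device and mount point here are made up):
# Remount an already-mounted stick with synchronous writes
sudo mount -o remount,sync /mnt/usb

# ...or mount it that way from the start
sudo mount -o sync /dev/sdb1 /mnt/usb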
After enabling -o sync, my Internet is faster than the write speed to this USB drive. What I don't understand is why the kernel doesn't track how quickly cache pages are getting flushed, and schedule future flushes based on that. It's like it always goes full-speed, even if this poor drive can't keep up. But that's a topic for another question, I guess.
– antonone
Mar 4 '14 at 11:07
edited Mar 4 '14 at 8:22
Raphael Ahrens
7,2965 gold badges29 silver badges46 bronze badges
answered Mar 4 '14 at 7:59
orion
9,57620 silver badges33 bronze badges
What are you trying to accomplish? Why do you want to slow down a file operation?
– Michael Hampton
Mar 1 '14 at 14:41
If you are seeking to limit disk-to-disk copy speed in an effort to be "nice" to other I/O-bound processes in the system, you are probably better off taking advantage of the kernel's ability to tune I/O scheduling instead. Specifically, ionice can be used to ensure that your disk-to-disk copy process is scheduled at a lower I/O priority than regular processes.
– Steven Monday
Mar 1 '14 at 15:13
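A minimal sketch of that suggestion (the paths are hypothetical):
# Run the copy in the "idle" I/O scheduling class, so it only gets disk
# time when no other process is asking for it
ionice -c 3 cp /path/to/bigfile /mnt/usb/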
This is a classic XY problem question. You should instead ask about why your desktop becomes unresponsive when you copy files to a USB device.
– Michael Hampton
Mar 1 '14 at 18:50
Linux actually has ridiculously large I/O buffers these days. RAM sizes have grown faster than mass storage speeds. Maybe you could perform the copy using dd(1) and sync so that it would actually be synced periodically instead of being buffered? And pipe viewer (pv) has a rate limiting option. Something like cat file | pv -L 3k > outfile. Neither is the same as using cp(1), though.
– ptman
Mar 1 '14 at 18:56
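A slightly more practical variant of that pv idea (file names made up; -L takes bytes per second and accepts the usual k/m suffixes):
# Copy at roughly 1 MB/s; pv reports progress and enforces the limit
pv -L 1m /path/to/bigfile > /mnt/usb/bigfile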
@MichaelHampton, there are several unresolved topics on this issue on ArchLinux's forum, so I figured I'd try to cope with it in a different way, just to make it work.
– antonone
Mar 1 '14 at 22:02