How do I get commands in a bash script to complete? (files get truncated with cp)
I have a server that receives backupname.tar.gz files in the /home/my_user/drop directory every hour. I installed the incron utility and use an incrontab -e entry to run a script whenever a new file shows up in /drop.
Here is the script:
#!/bin/sh
#
# First clear the 2 immediate use directories
rm /home/my_user/local_ready/*
wait
sleep 1
rm /home/my_user/local_restore/*
wait
sleep1
# Copy the file from /drop to /local_ready
cp /home/my_user/drop/*.tar.gz /home/my_user/local_ready/
wait
sleep 5
# Now move the file to the /current folder
mv /home/my_user/drop/*.tar.gz /home/my_user/current/
wait
sleep 1
# Next we delete any stray files dropped that are not
# of the target type so we can keep /drop clean.
rm /home/my_user/drop/*
wait
sleep 1
# Un-Tar the files into the /local_restore directory
tar -xzf /home/my_user/local_ready/*.tar.gz -C /home/my_user/local_restore/
wait
sleep 1
# This should complete the movement of files
The problem I have been running into is that the file that gets copied to the /local_restore directory is truncated, as if the next command in the script were interrupting the cp command.
At first I put sleep commands in the script to try to get it to work; then I added wait commands after each command, thinking that would force everything to wait until the cp command had finished copying the file to the next location.
I cannot even tell if the tar command is working at all, because it depends on the success of the cp command further up the chain to have the file in place. Based on a test I ran with only a command to un-tar one of the files, I suspect it will not complete before the script exits either. At least that is what occurred in a different 3-line test I used to check my timing theory.
BTW... the mv command works just fine and the whole file gets moved as it should.
Can anyone identify why the commands run in the script seem to be unable to complete their task?
I have been asked to show the contents of the incrontab entry so here it is:
/home/my_user/drop/ IN_CREATE /home/my_user/bin/cycle_backups
(cycle_backups is obviously the name of the script file)
This is a KVM-type VPS cloud server running Ubuntu 16.04 LTS with 10 GB of memory and over 100 GB of disk space. When the file is dropped, this is the only thing the server has to do other than sit idle!
I will admit that my server is a bit slow, so copying a 200 MB file to another directory takes a second or two even when I do it right at the command line.
I am at a loss to explain the problem, which makes it even harder to identify a solution.
Fair Warning: I am not the best at any of this, but I didn't think this should be such an impossible thing to accomplish.
shell-script cp
asked yesterday by BKM, edited 11 hours ago
Does /bin/sh point to /bin/dash?
– drewbenn
yesterday
The wait is pointless and unnecessary. Commands run synchronously in a script unless you explicitly state otherwise.
– roaima
yesterday
Well, if the wait is pointless then why is the cp command unable to complete its task before the other commands in the script run? Your comment makes no sense.
– BKM
yesterday
When the incrontab entry runs the script, it calls it out with the full path /home/my_user/bin/cycle_backups, so I'm not sure it matters where /bin/sh points.
– BKM
yesterday
Which mask is used "whenever a new file shows up in /drop"? Please add the output of incrontab -l so we can see how your script is triggered.
– Freddy
yesterday
2 Answers
None of the calls to wait will do anything in your script as there are no background tasks. You may safely delete these.
I would delete the calls to sleep as well. They will only delay the script execution at those points; a command will not start until the previous one has properly finished anyway. Also, sleep1 (with the missing space) is likely to generate a "command not found" error.
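For contrast, wait only has an effect when commands have been started in the background with &. A minimal sketch (illustrative only; the file names here are made up, and your script does not need any of this):

cp /home/my_user/drop/a.tar.gz /home/my_user/local_ready/ &   # started in the background
cp /home/my_user/drop/b.tar.gz /home/my_user/local_ready/ &   # started in the background
wait   # only here does wait actually block, until both background cp jobs have finished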
The only real issue that I can see with your script is the last call to tar:
tar -xzf /home/my_user/local_ready/*.tar.gz -C /home/my_user/local_restore/
If there are multiple archives in /home/my_user/local_ready, then this command would extract the first one and try to extract the names of the other archives from that archive. The -f flag takes a single archive, and you can't really extract multiple archives at once.
Instead, use a loop:
for archive in /home/my_user/local_ready/*.tar.gz; do
    tar -xzf "$archive" -C /home/my_user/local_restore/
done
I've ignored considerations of what happens if this script is run concurrently with itself. You mention that you have some facility to execute the script when a new file shows up, but it's unclear what would happen if two or more files showed up at about the same time. Since the script handles all files in a single invocation, two concurrently running scripts may well step on each other's toes.
Personally, I might instead run the script on a regular five-minute interval. Alternatively, use some form of locking to make sure that the script is not started while another copy of it is already in progress (see e.g. "Correct locking in shell scripts?").
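As a rough illustration of both options (a sketch only; the lock-file path is an arbitrary choice, and the script path is the one from the question):

# crontab entry (crontab -e): run the script every five minutes instead of from incron
*/5 * * * * /home/my_user/bin/cycle_backups

# or keep the incron trigger and serialize runs near the top of cycle_backups
# with flock(1) from util-linux; the lock file itself is arbitrary:
exec 9>/tmp/cycle_backups.lock   # open (or create) a lock file on file descriptor 9
flock 9                          # block until no other copy of the script holds the lock
# ... the rest of the script then runs with the lock held ...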
Here's my own rewrite of your code (not doing any form of locking):
#!/bin/sh -e
cd /home/my_user
# clear directories
rm -f local_ready/*
rm -f local_restore/*
# Alternatively, remove directories completely
# to also get rid of hidden files etc.:
#
# rm -rf local_ready; mkdir local_ready
# rm -rf local_restore; mkdir local_restore
# handle the archives, one by one
for archive in drop/*.tar.gz; do
    tar -xzf "$archive" -C local_restore
    cp "$archive" current
    mv "$archive" local_ready
done
This would clear out the directories of non-hidden names and then extract each archive. Once an archive has been extracted, it would be copied to the current directory, and then the archive would also be moved from drop to local_ready.
I'm using sh -e to make the script terminate on errors, and I cd to the /home/my_user directory to avoid having long paths in the script (this also makes it easier to move the whole operation to a subdirectory or elsewhere later). I'm using rm -f for clearing out those directories, as rm would complain if the * glob did not expand to anything.
You could also obviously handle archive copying and extraction separately:
cp drop/*.tar.gz current
mv drop/*.tar.gz local_ready
for archive in local_ready/*.tar.gz; do
    tar -xzf "$archive" -C local_restore
done
To save space, you may want to look into hard-linking the files in local_ready and current:
mv drop/*.tar.gz local_ready
for archive in local_ready/*.tar.gz; do
    ln "$archive" current
    tar -xzf "$archive" -C local_restore
done
answered 21 hours ago by Kusalananda♦, edited 20 hours ago
You should change the mask to IN_CLOSE_WRITE.
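Concretely, based on the incrontab entry shown in the question, only the second field (the event mask) changes:

/home/my_user/drop/ IN_CLOSE_WRITE /home/my_user/bin/cycle_backups

IN_CREATE fires as soon as the file is created in /drop, possibly while it is still being written; IN_CLOSE_WRITE fires only once the process writing the file has closed it, so the cp in the script sees the complete file.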
answered yesterday by alecxs (new contributor)
The script looks fine to me. My guess is that the script is started before the copy into /drop has finished (this doesn't affect mv, because a move only renames the directory entry, but cp runs too early).
– alecxs
yesterday
Not sure what you mean by changing the mask. How exactly would I use IN_CLOSE_WRITE to solve this problem?
– BKM
yesterday
Have you created the incrontab -e entry? That's where you set the second word: <path> <mask> <command>.
– alecxs
19 hours ago
A new file "showing up in /drop" is not an atomic event. A new file gets created (initially empty), things get written into it over some period of time, and eventually (when everything has been written) the creating process will close it. If this script runs when it's created, the cp can run before the file is all there, and will therefore only copy part of it (the part that exists when cp runs). Solution: don't run the script until the file has been created and closed.
– Gordon Davisson
18 hours ago
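One common way to remove this race entirely, not mentioned in the thread, is to have the sender upload the archive under a different path and only move it into /drop once the transfer is complete; a rename within the same filesystem is atomic, so the script (triggered on IN_CLOSE_WRITE or IN_MOVED_TO) never sees a half-written file. A sketch, assuming the files arrive over SSH and a hypothetical staging directory /home/my_user/incoming on the same filesystem as /drop:

# on the sending side (illustrative; adapt to however the file actually reaches the server)
scp backupname.tar.gz my_user@server:/home/my_user/incoming/
ssh my_user@server \
    'mv /home/my_user/incoming/backupname.tar.gz /home/my_user/drop/'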