


How do you have granular control of dd?



I have a 64 GB MicroSD card. I wrote a 16 GB image to it with this:



$ sudo dd if=my_image.img of=/dev/sdb bs=4M
3798+1 records in
3798+1 records out
15931539456 bytes (16 GB, 15 GiB) copied, 657.848 s, 24.2 MB/s


Now, I want to take an image of the first 15931539456 bytes (16GB) of the same 64GB SD card with dd, to end up with an image that has the same checksum as the image I started with.



From what I understand, dd's result above (3798+1) shows that there were 3798 complete reads from the source image, and 1 partial read, because the size of the source image doesn't split evenly into 4M chunks. So how do I tell dd to now copy 15931539456 bytes from the SD card into a new file, 4M at a time?
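(As a quick sanity check of that arithmetic, using the numbers from the dd output above:)

$ echo $((3798 * 4 * 1024 * 1024))                # bytes covered by the 3798 full 4 MiB blocks
15929966592
$ echo $((15931539456 - 3798 * 4 * 1024 * 1024))  # bytes left over for the final partial block
1572864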



I'm assuming I can do something like:



sudo dd if=/dev/sdb of=new_image.img bs=1 count=15931539456


but having a buffer that small would make the operation take forever. Is there any way to tell it to use a buffer of 4M, but only copy X number of bytes, even if that results in a short read at the end?










dd

asked 4 hours ago by Tal

  • What if you did it the other way 'round? sudo dd if=/dev/sdb of=new_image.img bs=15931539456 count=1? Also, your file size is a multiple of several reasonable block sizes, up as high as 2^19 (512k). Or if you prefer to use a block size that is not a divisor of the file size, then transfer slightly too much data and then use truncate(1) to shrink the file to the exact size you want.

    – Jim L.
    4 hours ago













  • @JimL. That would require 16GB of RAM. Unless OP has that much RAM, that's not a good idea.

    – FUZxxl
    3 hours ago











  • @FUZxxl Sorry, yes, I tested on my machine, but I have 32GB RAM. I'd opt for the truncate route then, or the 512k blocksize.

    – Jim L.
    3 hours ago




















1 Answer

A few possibilities:





  1. Use a smaller bs, yet not very small:



    dd if=/dev/sdb of=new_image.img bs=512k count=30387


    This is not a general solution. It works here because 15931539456 = 30387 × 524288, i.e. the byte count happens to be an exact multiple of 512 KiB (one way to find such a block size is sketched after this list).




  2. Use your desired bs, read more and truncate on the fly:



    dd if=/dev/sdb bs=4M | head -c 15931539456 >new_image.img


    You don't need count=3799 here. Well, you don't even need dd:



    head -c 15931539456 /dev/sdb >new_image.img


    I expect head to read reasonably sized chunks and perform well. Note that head -c is not required by POSIX, although common implementations (e.g. GNU coreutils) support it.




  3. Use your desired bs, read more and truncate later:



    dd if=/dev/sdb of=new_image.img bs=4M count=3799
    truncate -s 15931539456 new_image.img


    truncate is not portable though.




  4. Use your desired bs, read less; then read the remaining part with bs=1:



    dd if=/dev/sdb of=new_image.img bs=4M count=3798
    offset=$((3798*4*1024*1024))
    remaining=$((15931539456-offset))
    dd if=/dev/sdb of=new_image.img bs=1 skip="$offset" seek="$offset" count="$remaining"
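
As a side note (not part of the original answer), here is a minimal shell sketch of how a block size like the 512k in option 1 can be found mechanically: start from the desired block size and halve it until it divides the byte count evenly.

    bytes=15931539456
    bs=$((4 * 1024 * 1024))                 # start from the desired 4 MiB
    while [ $((bytes % bs)) -ne 0 ]; do
        bs=$((bs / 2))                      # halve until it divides the byte count
    done
    echo "bs=$bs count=$((bytes / bs))"     # prints: bs=524288 count=30387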



Note that in general dd may read a partial block, and that partial read still counts toward count (compare this answer). When reading from a block device this probably won't happen, but the general solution is to specify iflag=fullblock whenever you rely on count with bs greater than 1. Unfortunately fullblock is not required by POSIX, so your dd may not support it.
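
One more option, not mentioned above: GNU dd (reasonably recent coreutils) accepts iflag=count_bytes, which makes count a byte count instead of a block count, so the exact size can be requested directly while keeping a large bs. Like fullblock, it is not required by POSIX. A sketch, reusing the filenames from the question (the comparison at the end assumes sha256sum, or any other checksum tool, is available):

# GNU dd only: interpret count= as bytes rather than blocks
dd if=/dev/sdb of=new_image.img bs=4M count=15931539456 iflag=count_bytes

# verify that the copy matches the image that was originally written
sha256sum my_image.img new_image.img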










answered 3 hours ago, edited 3 hours ago
Kamil Maciorowski

































