
Does BTRFS guarantee data consistency on power outages?


Although ZFS is often claimed to be invulnerable, ZFS explicitly accepts that it might be vulnerable to power failures.



I couldn't find such a statement for BTRFS. Is it (or is it designed/planned to be) durable across power outages?










btrfs failure-resistance






edited Apr 10 '17 at 17:01 by ceremcem

asked Jan 29 '17 at 10:03 by ceremcem













  • Read again: "If your pool is damaged due to failing hardware or a power outage, see Repairing ZFS Storage Pool-Wide Damage." (...) "Attempt to recover the pool by using the zpool clear -F command."

    – Michael D.
    Jan 29 '17 at 11:09
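
    For reference, a minimal sketch of the recovery commands that documentation refers to, assuming a pool named tank (a placeholder):

        # Attempt recovery of a damaged but imported pool; -F discards the last
        # few transactions to roll back to a consistent state ("tank" is a placeholder)
        zpool clear -F tank
        # If the pool cannot be imported normally, retry the import in recovery mode
        zpool import -F tank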











  • So you say "ZFS does not guarantee data consistency, it only attempts to recover"?

    – ceremcem
    Feb 9 '17 at 8:10











  • Yes. There are several caches to deal with: a hard drive's built-in cache, and OS caches/buffers. At some point there is a sync or a flush which writes the caches to disk; if that doesn't happen before a power outage, that data will be lost. ZFS might work perfectly if the hard disk is healthy and there are no power outages (or a UPS is connected to properly shut down the computer on an outage), which you can't say about FAT32 and the like.

    – Michael D.
    Feb 9 '17 at 8:33
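
    A minimal sketch of poking at those cache layers from a shell, assuming an ATA/SATA drive at the placeholder path /dev/sda and that hdparm is installed:

        # Flush dirty pages from the OS page cache / buffers out to the devices
        sync
        # Query whether the drive's own volatile write-back cache is enabled
        # (/dev/sda is a placeholder device path)
        hdparm -W /dev/sda
        # Disable the drive's write-back cache, trading performance for safety
        hdparm -W 0 /dev/sda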






  • Data loss is not a concern, as it is a natural consequence when a power loss occurs, but data consistency is a concern in my case. A file system might lose data in such extreme conditions, but it should not leave inconsistent data on disk. I need a continuous-snapshots facility, so I'll keep going with BTRFS. NILFS2 is the closest alternative in my case, though.

    – ceremcem
    Feb 9 '17 at 11:58






  • I've asked the question on the #btrfs IRC channel; they said it should be OK if your hardware isn't "buggy", where not-"buggy" means your hardware has correct flush/barrier semantics. I have posted a link to this question on IRC; hopefully somebody will take the time to elaborate, but for now this is it.

    – Hi-Angel
    Nov 13 '17 at 9:33
1 Answer
I've asked the question on the #btrfs IRC channel; they said it should be OK if your hardware isn't "buggy", where not-"buggy" means your hardware has correct flush/barrier semantics.




TL;DR: This means that btrfs is protected against data corruption due to power loss in much the same way as ZFS.



Here is why: the general idea behind ZFS and btrfs is similar. Both use Merkle trees as their on-disk data structure. A write might require multiple blocks on the disk(s) to be updated. The file system handles this by writing the new data to empty blocks (so it doesn't need to modify blocks that reflect the old state) and building a new, updated tree. Once all the heavy lifting is done and the data plus the updated tree have been written to the disk, it updates the head pointer to the new tree.



Here is how things are supposed to behave when writing to a file:




  1. Write data to free blocks on the disk.

  2. Make a copy of the Merkle tree, update it according to the changes written in (1).

  3. Ask the hardware to flush the data to disk; the hardware writes all pending data.

  4. Update head pointer to new Merkle tree.

  5. Free old blocks that aren't needed anymore.


If power is lost after (4), the transaction is complete. If power is lost during steps (1) to (3), the file system will come up with the old state (the data written in step (1) is lost, but the file system is consistent). Note that there is no need to check for file system errors, which means the file system is available immediately; this is a big advantage (checking large file systems can take a very long time!).
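
For illustration, a minimal sketch of watching that "head pointer" advance on a btrfs filesystem, assuming the placeholder device /dev/sdb1 mounted at /mnt; each committed transaction bumps the generation recorded in the superblock, which is what points at the current tree:

    # Show the superblock's current generation (the committed "head" state);
    # /dev/sdb1 and /mnt are placeholder paths
    btrfs inspect-internal dump-super /dev/sdb1 | grep '^generation'
    # Write something and force a transaction commit
    touch /mnt/somefile
    btrfs filesystem sync /mnt
    # The superblock now points at a newer tree: the generation has increased
    btrfs inspect-internal dump-super /dev/sdb1 | grep '^generation'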



Here is an example of how things can go wrong with "buggy" hardware:




  1. Write data to free blocks on the disk.

  2. Make a copy of the Merkle tree, update it according to the changes written in (1).

  3. Ask the hardware to flush the data to disk; the hardware confirms completion but doesn't flush all the way (e.g. the data might remain in the disk's write-back cache).

  4. Update head pointer to new Merkle tree. This update reaches the disk before the other pending data (e.g. because the disk's head happens to be at the right location).

  5. The data written in steps (1) and (2) finally reaches the disk.

  6. Free old blocks that aren't needed anymore.


The file system will become inconsistent if power is lost between (4) and (5) or while performing step (5): the head pointer already refers to the new tree, but the Merkle tree and/or the data it points to may be only partially written.



In practice, you have to be particularly careful when using RAID controllers. They usually disable the write-back caches on the disks and use their own write-back cache instead. There are two common ways for things to go wrong here:




  • The RAID controller's cache is RAM. If it's not backed by a battery or some similar technology you might lose a significant amount of data.


  • Some disks don't disable their write-back caches when asked to do so. This is more likely to happen with cheap desktop drives than with server drives designed for RAID use (the sketch below shows one way to check).
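
To sanity-check that last point, a minimal sketch assuming the drive is directly visible to the host at the placeholder path /dev/sda and that hdparm and smartctl are installed; drives hidden behind a hardware RAID controller usually have to be inspected with the controller's vendor tool instead:

    # Ask the drive whether its volatile write cache is enabled
    # (/dev/sda is a placeholder device path)
    hdparm -W /dev/sda
    smartctl -g wcache /dev/sda
    # Try to disable the write cache, then read the setting back to see
    # whether the drive actually honoured the request
    hdparm -W 0 /dev/sda && hdparm -W /dev/sda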






answered 53 mins ago by Martin (new contributor)