Btrfs RAID1: How to replace a disk drive that is physically no longer there?

I have a btrfs RAID1 system with the following state:



# btrfs filesystem show
Label: none uuid: 975bdbb3-9a9c-4a72-ad67-6cda545fda5e
Total devices 2 FS bytes used 1.65TiB
devid 1 size 1.82TiB used 1.77TiB path /dev/sde1
*** Some devices missing


The missing device is a disk drive that failed completely and that the OS could no longer recognize. I removed the faulty disk and sent it for recycling.



Now I have a new disk installed as /dev/sdd. Searching the web, I failed to find instructions for this scenario (bad choice of search terms?). There are many examples of how to recover a RAID system when the faulty disk is still somewhat accessible to the OS, but the btrfs replace command requires a source disk.



I tried the following:



# btrfs replace start 2 /dev/sdd /mnt/brtfs-raid1-b
# btrfs replace status /mnt/brtfs-raid1-b
Never started


There is no error message, but the status indicates that the replace never started. I cannot figure out what is wrong with my attempt.



I am running Ubuntu 16.04 LTS Xenial Xerus, Linux kernel 4.4.0-57-generic.



Update #1



OK, when running the command in non-background mode (-B), I see an error that did not show up before:



# btrfs replace start -B 2 /dev/sdd /mnt/brtfs-raid1-b                                                                                                                     
ERROR: ioctl(DEV_REPLACE_START) failed on "/mnt/brtfs-raid1-b": Read-only file system


/mnt/brtfs-raid1-b is mounted RO (read-only). I have no choice; Btrfs does not allow me to mount the remaining disk RW (read-write). When I try to mount the filesystem RW, I get the following error in syslog:



BTRFS: missing devices(1) exceeds the limit(0), writeable mount is not allowed


In RO mode, it seems I cannot do anything: I can neither replace, add, nor delete a disk. And there is no way for me to mount the filesystem RW. What option is left?
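For reference, this is roughly what the degraded read-write mount attempt looks like (a sketch using the UUID and mountpoint shown above); it is this kind of command that triggers the syslog error:

# attempt a degraded, writeable mount of the surviving RAID1 member
mount -t btrfs -o rw,degraded /dev/disk/by-uuid/975bdbb3-9a9c-4a72-ad67-6cda545fda5e /mnt/brtfs-raid1-b
# on this 4.4 kernel it is refused with:
#   BTRFS: missing devices(1) exceeds the limit(0), writeable mount is not allowed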



It shouldn't be this complicated when a simple disk fails. The system should continue running RW and warn me of the failed drive. I should be able to insert a new disk and have the data copied over to it while applications remain unaware of the disk issue. That is what a proper RAID does. It seems that Btrfs is not production ready, even for RAID1.

Tags: btrfs, disk, replace, raid1

asked Jan 2 '17 at 5:25 by Hans Deragon, edited Nov 7 at 19:35

  • IMHO it is very bad to use the sd* naming convention; it is better to use the disk UUID or label when building an array or similar.
    – elbarna
    Jan 2 '17 at 6:00

  • Try btrfs device del /dev/sdd /mnt... && btrfs device add /dev/sdd /mnt...
    – elbarna
    Jan 2 '17 at 6:01

3 Answers

It turns out that this is a limitation of btrfs as of the beginning of 2017. To get the filesystem mounted RW again, one needs to patch the kernel. I have not tried that, though. I am planning to move away from btrfs because of this; one should not have to patch a kernel to be able to replace a faulty disk.



Click on the following links for details:




  • Kernel patch here

  • Full email thread


Update: As of 2019-01-01, I have moved to good old mdadm and lvm and am very happy with my RAID10 of 4x4GiB (8GiB total space). It is proven, works well, is not resource intensive, and I have full trust in it.
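For illustration only, a minimal sketch of the kind of mdadm RAID10 setup described above (every device name below is a placeholder rather than something taken from this answer, and the LVM layer is left out):

# placeholder member devices; substitute your own four partitions
mdadm --create /dev/md0 --level=10 --raid-devices=4 /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1
mkfs.ext4 /dev/md0
# replacing a dead member later, while the array keeps running (placeholder names again)
mdadm /dev/md0 --fail /dev/sdb1 --remove /dev/sdb1
mdadm /dev/md0 --add /dev/sdb1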






answered Feb 18 '17 at 14:44 by Hans Deragon

  • Woah, that's insane. Nobody there knows that RAID exists only to improve availability?
    – Navin
    Dec 13 '17 at 22:02

Add the new drive to the filesystem with btrfs device add /dev/sdd /mountpoint, then remove the missing drive with btrfs dev del missing /mountpoint. Remounting the filesystem may be required before btrfs dev del missing will work.
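A sketch of the whole sequence, assuming the kernel accepts a degraded read-write mount (device names and mountpoint taken from the question; on the asker's 4.4 kernel that first mount is exactly the step that fails):

# mount the surviving member degraded and writeable
mount -o degraded,rw /dev/sde1 /mnt/brtfs-raid1-b
# attach the new, empty disk to the filesystem
btrfs device add /dev/sdd /mnt/brtfs-raid1-b
# drop the record of the lost disk; btrfs re-creates the missing RAID1 copies as part of this
btrfs device delete missing /mnt/brtfs-raid1-b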






answered Jan 2 '17 at 5:30 by llua

  • Thank you for your response. I updated my question to inform you that I can only mount the Btrfs filesystem in RO, which does not allow me to perform any operation on it.
    – Hans Deragon
    Jan 2 '17 at 12:42

  • use the -o degraded option for mount
    – llua
    Jan 2 '17 at 15:46

  • Here is the command I used: mount -t btrfs -o ro,degraded,recovery,nosuid,nodev,nofail,x-gvfs-show /dev/disk/by-uuid/975bdbb3-9a9c-4a72-ad67-6cda545fda5e /mnt/brtfs-raid1-b. If I remove 'ro' from the options, I cannot get the filesystem mounted.
    – Hans Deragon
    Jan 2 '17 at 16:01

  • Will btrfs rebalance (duplicate from the remaining good drive) the RAID automatically once the new drive is added and the other deleted?
    – rrauenza
    Feb 25 at 4:39

  • (I believe the answer is YES if my eyes believe what btrfs fi usage /mount is showing me ...)
    – rrauenza
    Feb 25 at 4:41
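On the rebalance question in the comments above, a hedged sketch of how the outcome could be checked and, if chunks were written with the single profile while the filesystem was degraded, converted back to RAID1 (mountpoint from the question; whether the balance is actually needed depends on what was written while degraded):

# show how data and metadata chunks are spread across devices and which profile they use
btrfs filesystem usage /mnt/brtfs-raid1-b
# convert any leftover single-profile chunks back to RAID1
btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt/brtfs-raid1-b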


btrfs replace is indeed the thing to try, but there are two gotchas regarding its invocation: it will only show errors when you use -B (otherwise it'll exit with status 0, as if everything is fine, but you'll see "never started" when you check the status), and invalid parameters will throw unrelated errors.



For example, I think my disk is fine but the RAID1 got out of sync somehow (probably a power outage during which the host survived, but the disks are not on backup power and might have come online at slightly different times). To check, when I power down disk B (while mounted), I can read data just fine. When I power down disk A instead (disk B is turned on, and the filesystem was already mounted) then I get errors and corrupt data. So clearly disk A is fine and disk B is corrupt. But disk B appears to function, so I want to re-use it and just rebuild. Therefore I want to replace /dev/diskB with /dev/diskB.



When I used btrfs replace start -B /dev/diskB /dev/diskB /mnt/btrfs it showed me ERROR: ioctl(DEV_REPLACE_START) failed on "/mnt/btrfs": Invalid argument, <illegal result value>. So there is a problem with the mountpoint it seems, right? Nope, when I changed the first /dev/diskB to /dev/diskA, it just worked. The mistake was in the devices, not in the mountpoint.



Similarly, I see the first argument (2) is kind of weird. Perhaps the error is wrong and it would work with a device in place of the 2?



btrfs replace has two modes of operation: one where you give the broken device as the first argument (after start -B or whatever), and one (if the first option is unavailable) where you give the working device to be copied from. In either case, the second argument is the disk you wish to rebuild onto.



Whether the filesystem is mounted read-only or read-write does not seem to matter. That's why I suspect it is rejecting your arguments and giving you a wrong error, rather than the error being correct.
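For the scenario in the question, where the source disk is physically gone, the devid form of the source argument is the way to name it; a sketch combining that with a degraded read-write mount (devid, devices, and mountpoint taken from the question, and it still depends on the kernel accepting that mount):

# mount degraded and writeable first (the step the asker's 4.4 kernel refuses)
mount -o degraded,rw /dev/sde1 /mnt/brtfs-raid1-b
# rebuild missing devid 2 onto the new disk, in the foreground so errors are visible
btrfs replace start -B 2 /dev/sdd /mnt/brtfs-raid1-b
btrfs replace status /mnt/brtfs-raid1-b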






answered Apr 8 at 18:08 by Luc