ZFS Pool out of whack












I'm in the process of switching from a classic mdadm RAID to a ZFS pool, and I've made a few stumbles that I'm trying to recover from.



Originally I had two 4 TB drives in a RAID 1 mirror.



I then put two new 4 TB drives in the machine and disconnected the originals. I created a zpool with the new drives in a mirror, but I used /dev/sda and /dev/sdb because that's what the guide I was following told me to do, and I wasn't thinking.



So of course when I reconnected the old drives to copy the data over, they took /dev/sdb and /dev/sdc, which shifted one of my two ZFS drives to /dev/sdd. That messed up the pool and showed the drive as UNAVAIL.



After working with someone online, I managed to get the pool into UUID mode with zpool export pool followed by zpool import -d /dev/disk/by-uuid pool.
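Spelled out, that recovery sequence (with the pool literally named pool, as in the commands above) was:

```
# Take the pool offline cleanly
zpool export pool

# Re-import it, telling ZFS to scan /dev/disk/by-uuid for the member devices
zpool import -d /dev/disk/by-uuid pool
```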



This then allowed me to detach the UNAVAIL drive, which I wiped clean and attached back to the pool as a mirror of the first drive, using its /dev/disk/by-id path. After a few days it resilvered successfully.
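The wipe-and-reattach cycle can be sketched as follows; the by-id names here are placeholders, not the actual drives:

```
# Drop the UNAVAIL device from the mirror
zpool detach pool /dev/sdd

# Clear the old ZFS labels so the disk can be reused (wipefs -a also works)
zpool labelclear -f /dev/sdd

# Re-add it as a mirror of the surviving disk, using stable by-id paths
# (both device names below are placeholders)
zpool attach pool /dev/disk/by-id/ata-WDC_WD40-EXAMPLE-AAAA /dev/disk/by-id/ata-WDC_WD40-EXAMPLE-BBBB
```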



Now I have a zpool where one device has a long integer as its identifier and the other has a string along the lines of ata-WDC_WD.... I wanted to get them all on the same page, so I planned to detach the first disk (the one with the integer identifier) and re-add it using its /dev/disk/by-id path. However, attempting to detach gives me the error: cannot detach 13419994393693470939: only applicable to mirror and replacing vdevs.



OK, so I tried to replace it with a different drive, and got this error: cannot open '13419994393693470939': name must begin with a letter



While the pool is working, I would like everything to be in a consistent state. I could use the two old drives to make a new pool, copy the data back over, destroy the old pool, and then add its drives to the new one (which would require renaming the pools and cause some interruption in service in the meantime), but I hope there is a way around this that I just haven't found.










server zfs
      asked 4 hours ago









      sharf

          1 Answer
          Just rerun the process you used to re-identify the disks the first time:




          1. zpool export pool

          2. zpool import -d /dev/disk/by-id pool


This will unify the drives to the by-id format. You could use by-uuid instead if you prefer that format.



          The two errors you're getting are:





• cannot detach: The detach is being refused because ZFS thinks there are no other valid replicas of the data. Are you sure you configured the pool as a mirror correctly? It's also possible the CLI is giving you a misleading error message; maybe you're actually running the command with the wrong name for the drive by accident (see the next point below).


• cannot open: I can't quite tell from the information you've given, but I suspect you need to give the full path to this device instead of just its UUID.
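Concretely, the retried commands would then look something like this; the by-uuid path is an assumption about where that device node appears, and the new-disk name is a placeholder:

```
# Give zpool the full device path instead of the bare numeric identifier
zpool detach pool /dev/disk/by-uuid/13419994393693470939

# Same idea for replace: full paths for both the old and the new device
zpool replace pool /dev/disk/by-uuid/13419994393693470939 /dev/disk/by-id/ata-EXAMPLE-NEWDISK
```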






answered 2 hours ago
Dan
• Excellent! I hadn't realized I could use the -d option to specify by-id. I have mostly been going by guides and posts and following the commands without understanding them. That's got them mapped correctly, I think. As for the other points, running zpool status shows pool -> mirror-0 -> Drive1,Drive2, all with a state of ONLINE, and since it just resilvered, I assume the mirror is set up correctly. But I suspect both the detach and replace were erroring because I was using just the UUID rather than the full path, as you suggested.
            – sharf
            1 hour ago










