“cannot allocate memory” error when trying to create folder in cgroup hierarchy



























We ran into an interesting bug today. On our servers, we put users into cgroup folders to monitor and control their usage of resources like CPU and memory. We started getting errors when trying to add user-specific memory cgroup folders:



mkdir /sys/fs/cgroup/memory/users/newuser
mkdir: cannot create directory ‘/sys/fs/cgroup/memory/users/newuser’: Cannot allocate memory


That seemed strange, because the machine actually had a reasonable amount of free memory and swap. Changing the sysctl value vm.overcommit_memory from 0 to 1 had no effect.



We did notice that we were running with quite a lot of user-specific subfolders (about 7,000 in fact), and most of them were for users that were no longer running processes on that machine.



ls /sys/fs/cgroup/memory/users/ | wc -l
7298


Deleting unused folders in the cgroup hierarchy fixed the problem:



cd /sys/fs/cgroup/memory/users/
ls | xargs -n1 rmdir
# fails for folders still in use, succeeds for unused ones
mkdir /sys/fs/cgroup/memory/users/newuser
# now works fine
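The `ls | xargs -n1 rmdir` approach works but produces an error for every in-use group. A quieter sketch, assuming a cgroup v1 hierarchy where an unused group has an empty `tasks` file (the function name `list_unused_cgroups` is illustrative, not from the original post):

```shell
# List child cgroup directories whose "tasks" file is empty, i.e. groups
# with no member processes. Assumes a cgroup v1 hierarchy.
list_unused_cgroups() {
    base="$1"
    for d in "$base"/*/; do
        [ -d "$d" ] || continue
        # cgroupfs control files stat as size 0, so read the contents
        # instead of testing with [ -s ].
        if [ -z "$(cat "$d/tasks" 2>/dev/null)" ]; then
            printf '%s\n' "${d%/}"
        fi
    done
}

# On a real system, cleanup might then look like:
#   list_unused_cgroups /sys/fs/cgroup/memory/users | xargs -r rmdir
```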


Interestingly, the problem only affected the memory cgroup. The cpu/cpuacct cgroup was fine, even though it had more users in its hierarchy:



ls /sys/fs/cgroup/cpu,cpuacct/users/ | wc -l
7450
mkdir /sys/fs/cgroup/cpu,cpuacct/users/newuser
# fine


So, what was causing these out-of-memory errors? Does the memory cgroup subsystem have some sort of memory limit of its own?



The contents of the cgroup mounts may be found here.










memory cgroups

edited Sep 22 '17 at 8:54
asked Aug 21 '17 at 16:21 by hwjp
1 Answer
































There are indeed limits per cgroup; you can read about them on LWN.net:




Each cgroup has a memory controller-specific data structure (mem_cgroup) associated with it. [...] Accounting happens per cgroup.




The maximum amount of memory is stored in /sys/fs/cgroup/memory/memory.limit_in_bytes. If the problem you experienced was really connected with a cgroup memory limit, then /sys/fs/cgroup/memory/memory.max_usage_in_bytes should be close to that limit. You can also check memory.failcnt, which records the number of times your actual usage hit the limit.



You might also check memory.kmem.failcnt and memory.kmem.tcp.failcnt for similar statistics on kernel memory and TCP buffer memory.
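To test this hypothesis quickly, you could compare those counters directly. A minimal sketch, assuming the v1 memory controller's control-file names (the function name `memcg_limit_report` is illustrative):

```shell
# Print the limit, the peak usage, and the number of times allocation
# hit the limit for one memory cgroup directory. Assumes cgroup v1
# control-file names.
memcg_limit_report() {
    base="$1"
    limit=$(cat "$base/memory.limit_in_bytes")
    peak=$(cat "$base/memory.max_usage_in_bytes")
    fails=$(cat "$base/memory.failcnt")
    echo "limit=$limit peak=$peak failcnt=$fails"
    if [ "$fails" -gt 0 ]; then
        echo "allocations hit the limit $fails times"
    else
        echo "limit never hit"
    fi
}

# e.g. memcg_limit_report /sys/fs/cgroup/memory
```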






























• I'm not sure you've understood the question. The error I'm getting seems to come from the operating system when I try to create a new cgroups folder -- it's not about the limits applied to any particular cgroup. Correct me if I've misunderstood, myself...

  – hwjp
  Sep 20 '17 at 14:32











• To answer your question, the top-level /sys/fs/cgroup/memory folder has the following: /sys/fs/cgroup/memory/memory.max_usage_in_bytes = 14010560512, /sys/fs/cgroup/memory/memory.limit_in_bytes = 9223372036854771712 (i.e. max usage roughly 8 orders of magnitude under the limit)

  – hwjp
  Sep 22 '17 at 9:00













answered Sep 20 '17 at 10:35 by MariusMatutiae



