“cannot allocate memory” error when trying to create folder in cgroup hierarchy

























We ran into an interesting bug today. On our servers we put users into cgroup directories to monitor and control their use of resources such as CPU and memory. We started getting errors when trying to create user-specific memory cgroup directories:



mkdir /sys/fs/cgroup/memory/users/newuser
mkdir: cannot create directory ‘/sys/fs/cgroup/memory/users/newuser’: Cannot allocate memory


That seemed a little strange, because the machine actually had a reasonable amount of free memory and swap. Changing the sysctl value vm.overcommit_memory from 0 to 1 had no effect.
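Before blaming the cgroup layer, it is worth confirming the system-wide picture in one place. A minimal sketch, using only standard /proc files (the values will of course differ per machine):

```shell
# Rule out a system-wide shortage: free memory, swap, and the
# overcommit accounting the kernel is applying.
grep -E '^(MemFree|SwapFree|CommitLimit|Committed_AS)' /proc/meminfo

# 0 = heuristic overcommit, 1 = always allow, 2 = strict accounting
cat /proc/sys/vm/overcommit_memory
```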



We did notice that we were running with quite a lot of user-specific subdirectories (about 7,000, in fact), and most of them belonged to users that were no longer running processes on that machine.



ls /sys/fs/cgroup/memory/users/ | wc -l
7298


Deleting unused directories in the cgroup hierarchy actually fixed the problem:



cd /sys/fs/cgroup/memory/users/
ls | xargs -n1 rmdir
# errors for folders in-use, succeeds for unused
mkdir /sys/fs/cgroup/memory/users/newuser
# now works fine
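A slightly safer variant of the cleanup above is to list only cgroups whose cgroup.procs file is empty, rather than relying on rmdir failing for in-use ones. A sketch (the function name is ours; it takes the parent directory as an argument so it can be dry-run against a scratch directory first):

```shell
# list_idle_cgroups BASE: print subdirectories of BASE whose
# cgroup.procs file exists but is empty, i.e. cgroups with no
# member processes -- the only ones rmdir can succeed on.
list_idle_cgroups() {
    for d in "$1"/*/; do
        if [ -f "${d}cgroup.procs" ] && [ ! -s "${d}cgroup.procs" ]; then
            printf '%s\n' "${d%/}"
        fi
    done
}

# Real use (as root):
#   list_idle_cgroups /sys/fs/cgroup/memory/users | xargs -r -n1 rmdir
```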


Interestingly, the problem only affected the memory cgroup. The cpu/cpuacct cgroup was fine, even though it actually had more users in its hierarchy:



ls /sys/fs/cgroup/cpu,cpuacct/users/ | wc -l
7450
mkdir /sys/fs/cgroup/cpu,cpuacct/users/newuser
# fine


So what was causing these out-of-memory errors? Does the memory-cgroup subsystem have some sort of memory limit of its own?



The contents of our cgroup mounts may be found here.



































Tags: memory, cgroups






asked Aug 21 '17 at 16:21 by hwjp, edited Sep 22 '17 at 8:54

          1 Answer








          There are indeed per-cgroup limits; you can read about them on LWN.net:




          Each cgroup has a memory controller
          specific data structure (mem_cgroup) associated with it.



          .... Accounting happens per cgroup.




          The maximum amount of memory is stored in /sys/fs/cgroup/memory/memory.limit_in_bytes. If the problem you experienced was really connected with the cgroup memory limit, then /sys/fs/cgroup/memory/memory.max_usage_in_bytes should be close to that value. You can also check memory.failcnt, which records the number of times actual usage hit the limit.



          You might also check memory.kmem.failcnt and memory.kmem.tcp.failcnt for similar statistics on kernel memory and TCP buffer memory.
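The control files mentioned above can be compared in one pass. A small helper sketch (the function name is ours; it simply prints whichever of these files exist under the directory you pass, since some are absent on older kernels):

```shell
# dump_memcg_counters DIR: print the limit, peak usage, and failure
# counters of a memory-cgroup directory, skipping absent files.
dump_memcg_counters() {
    for f in memory.limit_in_bytes memory.max_usage_in_bytes \
             memory.failcnt memory.kmem.failcnt memory.kmem.tcp.failcnt; do
        [ -f "$1/$f" ] && printf '%-28s %s\n' "$f" "$(cat "$1/$f")"
    done
}

# e.g. dump_memcg_counters /sys/fs/cgroup/memory
```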






          answered Sep 20 '17 at 10:35 by MariusMatutiae
























          • I'm not sure you've understood the question. The error I'm getting seems to come from the operating system when I try to create a new cgroup directory -- it's not about the limits applied to any particular cgroup. Correct me if I've misunderstood.

            – hwjp
            Sep 20 '17 at 14:32











          • To answer your question, the top-level /sys/fs/cgroup/memory folder has the following: /sys/fs/cgroup/memory/memory.max_usage_in_bytes = 14010560512, /sys/fs/cgroup/memory/memory.limit_in_bytes = 9223372036854771712 (i.e. max usage eight orders of magnitude under the limit).

            – hwjp
            Sep 22 '17 at 9:00








































