I'm currently setting up a new MongoDB Ops Manager machine. Installation works fine, but I can't start the mongodb-mms service: the start of Instance 0 fails with a java.lang.OutOfMemoryError. I use the same configuration as on my test server (2 CPU cores, 8 GB RAM), where the service starts without any problem. Changing the ulimit configuration / starting the service
Tag: out-of-memory
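Since the question mentions ulimit, a quick first check is to compare the resource limits the service actually sees on the two machines. Below is a minimal C++ sketch (not part of the original question) that prints a few limits which commonly differ between hosts and can make a JVM fail at startup; run it as the same user that starts mongodb-mms so the numbers are comparable.

```cpp
// rlimits.cpp - print resource limits that commonly differ between machines
// and can cause a JVM to fail with OutOfMemoryError at startup.
// Build: g++ -std=c++17 -o rlimits rlimits.cpp
#include <sys/resource.h>
#include <cstdio>

static void print_limit(const char* name, int resource) {
    rlimit rl{};
    if (getrlimit(resource, &rl) != 0) {
        std::perror(name);
        return;
    }
    auto show = [](rlim_t v) {
        if (v == RLIM_INFINITY) std::printf("%-10s", "unlimited");
        else                    std::printf("%-10llu", (unsigned long long)v);
    };
    std::printf("%-18s soft=", name);
    show(rl.rlim_cur);
    std::printf(" hard=");
    show(rl.rlim_max);
    std::printf("\n");
}

int main() {
    print_limit("RLIMIT_AS",      RLIMIT_AS);      // virtual address space
    print_limit("RLIMIT_DATA",    RLIMIT_DATA);    // data segment size
    print_limit("RLIMIT_NPROC",   RLIMIT_NPROC);   // processes/threads per user
    print_limit("RLIMIT_NOFILE",  RLIMIT_NOFILE);  // open file descriptors
    print_limit("RLIMIT_MEMLOCK", RLIMIT_MEMLOCK); // locked-in-memory size
    return 0;
}
```

If the service is started by systemd, the effective limits come from the unit's Limit* settings rather than the shell's ulimit, so any difference between the two machines should show up here.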
What is using so much memory on an idle Linux server? Comparing the output of “htop” and “ps aux”
I am trying to understand and compare the output I see from htop (sorted by MEM%) and “ps aux --sort=-%mem | grep query.jar”, and to determine why 24.2G out of 32.3G is in use on an idle server. The ps command shows a single parent (not a child process, I assume), whereas htop shows PID 6790 as well as many other PIDs
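Part of the confusion is that per-process RSS double-counts shared pages, htop lists every thread by default, and page cache is attributed to no process at all. A minimal sketch (my own, using the /proc layout documented in proc(5)) that sums RSS over /proc/*/statm and compares it with MemTotal/MemAvailable from /proc/meminfo:

```cpp
// meminfo_vs_rss.cpp - compare the sum of per-process RSS with the
// kernel's own view of memory use from /proc/meminfo.
// Build: g++ -std=c++17 -o meminfo_vs_rss meminfo_vs_rss.cpp
#include <cstdio>
#include <filesystem>
#include <fstream>
#include <iostream>
#include <string>
#include <unistd.h>

namespace fs = std::filesystem;

int main() {
    const long page_kb = sysconf(_SC_PAGESIZE) / 1024;
    unsigned long long rss_kb_total = 0;

    // Sum the resident set size (2nd field of /proc/<pid>/statm, in pages)
    // over all processes.  Shared libraries and shared memory are counted
    // once per process, so this sum overstates real usage.
    for (const auto& entry : fs::directory_iterator("/proc")) {
        const std::string name = entry.path().filename();
        if (name.find_first_not_of("0123456789") != std::string::npos) continue;
        std::ifstream statm(entry.path() / "statm");
        unsigned long long size = 0, resident = 0;
        if (statm >> size >> resident) rss_kb_total += resident * page_kb;
    }

    // The kernel's own accounting: MemAvailable excludes reclaimable cache.
    unsigned long long mem_total = 0, mem_available = 0;
    std::ifstream meminfo("/proc/meminfo");
    std::string line;
    while (std::getline(meminfo, line)) {
        unsigned long long v = 0;
        if (std::sscanf(line.c_str(), "MemTotal: %llu kB", &v) == 1) mem_total = v;
        else if (std::sscanf(line.c_str(), "MemAvailable: %llu kB", &v) == 1) mem_available = v;
    }

    std::cout << "Sum of per-process RSS : " << rss_kb_total / 1024 << " MiB\n"
              << "MemTotal               : " << mem_total / 1024 << " MiB\n"
              << "MemTotal - MemAvailable: " << (mem_total - mem_available) / 1024
              << " MiB (memory actually unavailable for reclaim)\n";
    return 0;
}
```

The gap between the two figures is largely shared pages counted once per process plus page cache and kernel memory that belong to no process, which is why an “idle” server can look 24G deep in use.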
Why can a user process trigger the Linux OOM-killer due to memory fragmentation, even though plenty of RAM is available?
I’ve got a headless ARM-based Linux (v3.10.53-1.1.1) system with no swap space enabled, and I occasionally see processes get killed by the OOM-killer even though there is plenty of RAM available. Running echo 1 > /proc/sys/vm/compact_memory periodically seems to keep the OOM-killer at bay, which makes me think that memory fragmentation is the culprit, but I don’t understand why a
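One way to see whether fragmentation is plausible is /proc/buddyinfo: each column is the number of free blocks of a given order, and if the higher-order columns are near zero the kernel cannot satisfy a physically contiguous (higher-order) allocation even though the total free memory looks fine, which matches compact_memory helping. A small sketch (a convenience wrapper around what `cat /proc/buddyinfo` already shows) that summarises free memory per zone:

```cpp
// buddyinfo.cpp - summarise /proc/buddyinfo: total free memory per zone and
// how much of it sits in larger (physically contiguous) blocks.
// Build: g++ -std=c++17 -o buddyinfo buddyinfo.cpp
#include <fstream>
#include <iostream>
#include <sstream>
#include <string>
#include <vector>
#include <unistd.h>

int main() {
    const long page_kb = sysconf(_SC_PAGESIZE) / 1024;
    std::ifstream f("/proc/buddyinfo");
    std::string line;
    while (std::getline(f, line)) {
        std::istringstream in(line);
        // Format: "Node 0, zone   Normal   145   62   30 ..." (orders 0..MAX)
        std::string node_word, node_id, zone_word, zone_name;
        in >> node_word >> node_id >> zone_word >> zone_name;

        std::vector<unsigned long long> counts;
        unsigned long long n;
        while (in >> n) counts.push_back(n);

        unsigned long long free_kb = 0, high_order_kb = 0;
        for (std::size_t order = 0; order < counts.size(); ++order) {
            unsigned long long kb = counts[order] * (page_kb << order);
            free_kb += kb;
            if (order >= 4) high_order_kb += kb;  // blocks of >= 64 KiB with 4 KiB pages
        }
        std::cout << "Node " << node_id << " zone " << zone_name
                  << ": free " << free_kb / 1024 << " MiB, of which "
                  << high_order_kb / 1024 << " MiB is in order>=4 blocks\n";
    }
    return 0;
}
```

Ordinary user-space allocations are backed by order-0 pages, so the allocations that fail are usually kernel-side ones made on the process's behalf (drivers, network buffers, and similar); if the higher-order columns collapse while order-0 stays large, the RAM is there but fragmented.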
How much memory can a program allocate?
How much memory can I allocate for a C++ program running under Linux? In my test case, using new or malloc I can allocate more than 170 GB of memory. By comparison, the same code can only allocate 1.8 GB on Windows before being terminated. My test machine is a virtual machine under VirtualBox, running 64-bit CentOS 7 with 2 GB of memory. The host
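The asymmetry is mostly Linux overcommit: malloc/new reserve virtual address space, and physical pages are committed only when they are first written, whereas Windows charges commit at allocation time. A minimal C++ sketch that reserves memory in 1 GiB chunks without touching it (the 256 GiB cap is arbitrary, just to keep the demo bounded):

```cpp
// reserve_vs_touch.cpp - show how far plain allocation gets under Linux
// overcommit when the pages are never written to.
// Build: g++ -std=c++17 -o reserve_vs_touch reserve_vs_touch.cpp
#include <cstdlib>
#include <iostream>
#include <vector>

int main() {
    constexpr std::size_t chunk = 1ULL << 30;   // 1 GiB per allocation
    constexpr std::size_t cap   = 256;          // stop after 256 GiB regardless
    std::vector<void*> chunks;

    for (std::size_t i = 0; i < cap; ++i) {
        void* p = std::malloc(chunk);           // reserves address space only
        if (!p) break;                          // first real failure
        chunks.push_back(p);
        // NOTE: the pages are never written.  Touching them (e.g. memset)
        // would commit physical memory and can invite the OOM killer.
    }

    std::cout << "Allocated (untouched) " << chunks.size() << " GiB before "
              << (chunks.size() == cap ? "hitting the demo cap" : "malloc failed")
              << "\n";

    for (void* p : chunks) std::free(p);
    return 0;
}
```

With the default vm.overcommit_memory=0 heuristic this typically reserves far more than RAM plus swap; switching to strict accounting with vm.overcommit_memory=2 makes Linux behave much more like the Windows result in the question.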
What can kill a Java process on Linux other than OOM?
I have two virtual servers hosting my web app. They are identical, running Debian 6 with 1.5 GB of RAM. I configure the OS and Tomcat using a script from a fresh install, so I know they are identical. My webapp runs in Tomcat, and I set an 850M heap and 100M perm size. My app regularly dies on one of
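If it is the kernel OOM killer rather than a Java-level OutOfMemoryError, the kill shows up in the kernel log (`dmesg | grep -i 'killed process'`), and the victim is chosen by oom_score. A small sketch (my own, reading /proc/&lt;pid&gt;/oom_score and /proc/&lt;pid&gt;/comm) that lists the processes the kernel would target first:

```cpp
// oom_candidates.cpp - list processes with the highest kernel oom_score,
// i.e. the likeliest victims of the OOM killer.
// Build: g++ -std=c++17 -o oom_candidates oom_candidates.cpp
#include <algorithm>
#include <filesystem>
#include <fstream>
#include <iostream>
#include <string>
#include <vector>

namespace fs = std::filesystem;

int main() {
    struct Candidate { long score; std::string pid, comm; };
    std::vector<Candidate> candidates;

    for (const auto& entry : fs::directory_iterator("/proc")) {
        const std::string pid = entry.path().filename();
        if (pid.find_first_not_of("0123456789") != std::string::npos) continue;

        long score = -1;
        std::ifstream(entry.path() / "oom_score") >> score;
        if (score < 0) continue;               // not readable, skip

        std::string comm;
        std::getline(std::ifstream(entry.path() / "comm"), comm);
        candidates.push_back({score, pid, comm});
    }

    std::sort(candidates.begin(), candidates.end(),
              [](const Candidate& a, const Candidate& b) { return a.score > b.score; });

    std::cout << "oom_score\tpid\tcomm\n";
    for (std::size_t i = 0; i < candidates.size() && i < 10; ++i)
        std::cout << candidates[i].score << "\t\t" << candidates[i].pid
                  << "\t" << candidates[i].comm << "\n";
    return 0;
}
```

With an 850M heap, 100M permgen, per-thread stacks and native JVM overhead on a 1.5 GB box, a large Tomcat JVM usually tops this list, which would explain a process that dies with no Java stack trace at all.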