Allocate more than 2GB on the heap using C++ on a 32bit Linux kernel

This seems to be a very common problem, but I still haven’t found a definitive answer.

I have access to a server which runs linux, has 16 GB of RAM and a 16-core (64bit) CPU (/proc/cpuinfo gives “Intel(R) Xeon(R) CPU E5520 @ 2.27GHz”). However, the kernel is 32bit (uname -m gives i686). Of course, I have no root access, so I cannot change that.

I am running a C++ program I wrote, which does some memory-hungry calculations, so I need a big heap – but whenever I try to allocate more than 2GB, I get a bad_alloc, although ulimit returns “unlimited”. For simplicity, let’s just say my program is this:

#include <iostream>
#include <vector>

int main() {
    int i = 0;
    std::vector<std::vector<int> > vv;
    for (;;) {
        ++i;
        vv.resize(vv.size() + 1);
        std::vector<int>* v = &(vv.at(vv.size() - 1));
        v->resize(1024 * 1024 * 128);  // 128M ints = 512 MB per chunk
        std::cout << i * 512 << " MB.\n";
    }
    return 0;
}

After compiling with g++ (no flags), the output is:

512 MB.
1024 MB.
1536 MB.
2048 MB.
terminate called after throwing an instance of 'std::bad_alloc'
  what():  std::bad_alloc
Aborted

As far as I understand, this is a limitation of 32bit systems, apparently because a 32bit pointer can only hold 2^32 different addresses. (Am I right to suppose that if I compiled the same program on the same server, but that server ran a 64bit kernel, then the program could allocate more than 2GB?)
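
(For what it’s worth, a tiny check like the one below, compiled the same way with no flags, confirms the pointer width the build uses; the 4 GiB figure in the comment is simply 2^32 bytes.)

#include <cstdio>

int main() {
    // On an i686 userland this prints 4: pointers are 32 bits wide, so a
    // single process can address at most 2^32 bytes (4 GiB) of virtual
    // memory, and the kernel reserves part of that range for itself.
    std::printf("sizeof(void*) = %u bytes\n", (unsigned)sizeof(void*));
    return 0;
}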

This is not a question of whether the allocated memory is contiguous: if I allocate in thinner slices in the example program, the same problem occurs, and in my actual program there are no big blocks of memory, just many small ones.

So my question is: Is there anything I can do? I cannot change the OS, but of course I could modify my source code, use different compiler options etc. … but I do need more than 2GB of memory, all at once, and there is no obvious way to further optimise the algorithm the program uses.

If the answer is a clear “no”, it would still be good to know that.

Thanks, Lukas

Answer

If you need the memory all at once, no, there is really no way to do it without changing to a 64-bit kernel (which, yes, would allow you to allocate more memory in a single process).

That said, if you don’t need the memory all at once, but just need fast access to it, you can always offload parts of the memory storage to another process.

That could, for example, work by storing the data in the other process and having your process temporarily map shared memory from that process into its own address space when required. The data will still be stored in memory, but you’ll have some overhead whenever you switch memory ranges. Whether the overhead is acceptable depends on your memory access patterns.
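
A rough sketch of that idea, using POSIX shared memory as the transport, could look something like the following; the object name, sizes, and the plain memset stand-in are just placeholders, and on older glibc you’d link with -lrt. In the real setup, another process would own and fill the object.

#define _FILE_OFFSET_BITS 64   // let the 32-bit build use file offsets past 4 GiB
#include <cstdio>
#include <cstring>
#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>

int main() {
    const char*  kName   = "/big-store";              // hypothetical object name
    const off_t  kTotal  = 6LL * 1024 * 1024 * 1024;  // 6 GiB of data in total
    const size_t kWindow = 256u * 1024 * 1024;        // map 256 MiB at a time

    // Create a RAM-backed object that is larger than the process address space.
    int fd = shm_open(kName, O_CREAT | O_RDWR, 0600);
    if (fd == -1 || ftruncate(fd, kTotal) == -1) {
        std::perror("shm_open/ftruncate");
        return 1;
    }

    // Walk the object one window at a time: map, work on it, unmap.
    for (off_t off = 0; off < kTotal; off += kWindow) {
        void* p = mmap(NULL, kWindow, PROT_READ | PROT_WRITE, MAP_SHARED, fd, off);
        if (p == MAP_FAILED) {
            std::perror("mmap");
            return 1;
        }
        std::memset(p, 0, kWindow);    // stand-in for the real computation
        munmap(p, kWindow);            // frees the address range; data stays in the object
        std::printf("processed window at offset %lld MB\n",
                    (long long)(off / (1024 * 1024)));
    }

    close(fd);
    shm_unlink(kName);                 // remove the object when done
    return 0;
}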

It’s not a very straightforward approach, but without changing the kernel to give you a 64-bit address space, it sounds like you’re in a bit of a bind.

EDIT: You may be able to raise the limit a little above 2GB by reconfiguring the kernel, but that only means you’ll hit the hard limit there instead. Also, that would require root access.
