What would be the disadvantage of creating an array of really big size on 64 bit systems?

Operating systems like Linux work on the principle of copy-on-write, so even if you allocate an array of, say, 100 GB but only use up to 10 GB of it, you would only be using 10 GB of memory. So, what would be the disadvantage of creating such a big array? I can see an advantage, though, which is that you won’t have to worry about using a dynamic array, which would have the cost of reallocations.

Answer

The main disadvantage is that by doing this you are making a strong assumption about exactly how the standard library allocators¹ and the underlying Linux allocators work. In fact, the allocators and the underlying system do not always work the way you describe.

Now, you mentioned “copy on write”, but what you are most likely referring to is the combination of lazy page population and overcommit. Depending on the configuration, this means that memory you allocate but never touch may not count against memory limits and may not occupy physical memory. The sketch below illustrates the effect.
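To make that concrete, here is a minimal C sketch, assuming a 64-bit Linux system with overcommit left at its default heuristic setting and a malloc that does not pre-fault its allocations. It reserves far more address space than it touches and then prints VmSize and VmRSS from /proc/self/status; you would expect VmSize to be near the full 8 GB while VmRSS stays near the 64 MB that was actually written.

```c
/* Minimal sketch: allocate far more than we touch and compare the
 * process's virtual size to its resident set size.
 * Assumes 64-bit Linux with overcommit enabled (the default heuristic
 * mode) and a malloc that does not pre-fault the allocation. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

static void print_vm_stats(void)
{
    /* VmSize = virtual address space; VmRSS = pages actually resident. */
    FILE *f = fopen("/proc/self/status", "r");
    char line[256];
    if (!f)
        return;
    while (fgets(line, sizeof line, f)) {
        if (strncmp(line, "VmSize:", 7) == 0 || strncmp(line, "VmRSS:", 6) == 0)
            fputs(line, stdout);
    }
    fclose(f);
}

int main(void)
{
    size_t total = 8ull * 1024 * 1024 * 1024;   /* "reserve" 8 GB */
    size_t used  = 64ull * 1024 * 1024;         /* but touch only 64 MB */

    char *buf = malloc(total);
    if (!buf) {
        perror("malloc");
        return 1;
    }

    memset(buf, 0, used);   /* only these pages become resident */
    print_vm_stats();       /* expect VmSize ~ 8 GB, VmRSS ~ 64 MB */

    free(buf);
    return 0;
}
```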

The problem is that this often doesn’t hold in practice. For example:

  • Many allocators have modes where they touch the allocated memory, e.g., in debug mode, to fill it with a known pattern that helps diagnose references to uninitialized memory. Most allocators also touch at least a few bytes before your allocated region to store metadata that is used on deallocation. So you are making a strong assumption about allocator behavior that is likely to break.
  • The Linux overcommit behavior is completely configurable. In practice, many server-side Linux users disable it in order to reduce uncertainty and unrecoverable problems related to the OOM killer. So your claim that Linux behaves lazily is only true for some overcommit configurations and false for others (the sketch after this list shows how to check the current mode).
  • You might assume that memory is being committed in 4K chunks and adjust your algorithm around that. However, systems have different page sizes: 16K and 64K are not uncommon as base page sizes, and x86 Linux systems have transparent huge pages enabled by default, so you may actually be getting 2,048K (2 MB) pages without realizing it! In that case you may end up committing nearly the entire array, depending on your access pattern. The sketch after this list also shows how to query the page size and opt a mapping out of huge pages.
  • As mentioned in the comments, the “failure mode” for this type of use is pretty poor. You think you’ll only use a small portion of the array, but if you do end up using more than the system can handle, at best you may get a signal delivered to your application on some seemingly random access to a new page, and more likely the OOM killer will simply kill some other random process on your machine.

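As a rough illustration of the second and third bullets, the following Linux-specific C sketch reads the overcommit mode from /proc/sys/vm/overcommit_memory, queries the base page size with sysconf, and uses madvise(MADV_NOHUGEPAGE) to ask the kernel to keep a mapping on base pages. The 1 GB mapping size is arbitrary and only for demonstration.

```c
/* Illustrative only: query the knobs the bullets above depend on. */
#include <stdio.h>
#include <unistd.h>
#include <sys/mman.h>

int main(void)
{
    /* 0 = heuristic overcommit, 1 = always overcommit, 2 = strict accounting.
     * Under strict accounting, a huge "just in case" allocation can simply fail. */
    int mode = -1;
    FILE *f = fopen("/proc/sys/vm/overcommit_memory", "r");
    if (f) {
        if (fscanf(f, "%d", &mode) != 1)
            mode = -1;
        fclose(f);
    }
    printf("overcommit_memory = %d\n", mode);

    /* The base page size is not always 4K. */
    long page = sysconf(_SC_PAGESIZE);
    printf("base page size = %ld bytes\n", page);

    /* With transparent huge pages, touching one byte may fault in 2 MB.
     * madvise(MADV_NOHUGEPAGE) asks the kernel to keep this range on base
     * pages; the range must be page-aligned, hence mmap rather than malloc. */
    size_t len = 1ull << 30;  /* 1 GB of address space, arbitrary */
    void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (p != MAP_FAILED) {
        madvise(p, len, MADV_NOHUGEPAGE);
        munmap(p, len);
    }
    return 0;
}
```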
¹ Here I’m assuming you are using something like malloc or new to allocate the array, since you didn’t mention mmapping it directly or anything.
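If you are willing to drop down to mmap yourself (which the footnote assumes you are not doing), you can make the reservation explicit and sidestep allocator metadata and debug fills entirely. A Linux-specific sketch, with minimal error handling and arbitrary sizes:

```c
/* Sketch: reserve a large range of address space explicitly with mmap.
 * MAP_NORESERVE asks the kernel not to charge swap/commit space up front;
 * pages are only backed when first written. Linux-specific. */
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

int main(void)
{
    size_t len = 100ull * 1024 * 1024 * 1024;   /* 100 GB of address space */

    char *arr = mmap(NULL, len, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANONYMOUS | MAP_NORESERVE, -1, 0);
    if (arr == MAP_FAILED) {
        perror("mmap");
        return 1;
    }

    /* Touch only a small slice: only these pages become resident. */
    memset(arr, 0, 10ull * 1024 * 1024);

    munmap(arr, len);
    return 0;
}
```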

User contributions licensed under: CC BY-SA