While attempting to test "Is it allowed to access memory that spans the zero boundary in x86?" in user space on Linux, I wrote a 32-bit test program that tries to map the low and high pages of the 32-bit virtual address space. After echo 0 | sudo tee /proc/sys/vm/mmap_min_addr, I can map the zero page, but I don’t know why I
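A minimal sketch of the kind of 32-bit test program described here, assuming mmap_min_addr is already 0 and a 32-bit build (gcc -m32); the choice of 0xFFFFF000 as the "high page" address is an illustration, not taken from the question:

/* Try to map the lowest and highest pages of a 32-bit address space.
 * Assumes /proc/sys/vm/mmap_min_addr is 0; whether the high page can be
 * mapped at all is exactly what the original question is probing. */
#include <stdio.h>
#include <sys/mman.h>

int main(void)
{
    void *low  = mmap((void *)0x0, 4096, PROT_READ | PROT_WRITE,
                      MAP_PRIVATE | MAP_ANONYMOUS | MAP_FIXED, -1, 0);
    void *high = mmap((void *)0xFFFFF000, 4096, PROT_READ | PROT_WRITE,
                      MAP_PRIVATE | MAP_ANONYMOUS | MAP_FIXED, -1, 0);

    printf("low page:  %p (%s)\n", low,  low  == MAP_FAILED ? "failed" : "ok");
    printf("high page: %p (%s)\n", high, high == MAP_FAILED ? "failed" : "ok");
    return 0;
}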
Tag: mmap
Random mmaped memory access up to 16% slower than heap data access
Our software builds a data structure in memory that is about 80 gigabytes in size. It can then either use this data structure directly to do its computation, or dump it to disk so it can be reused several times afterwards. A lot of random memory accesses happen in this data structure. For larger inputs this data structure can grow even
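A hedged sketch of the dump-and-reuse pattern the question describes; the file name structure.bin, the Node layout, and the access loop are illustrative guesses, not the poster's actual code:

/* Map a previously dumped data structure and read it at random offsets.
 * Pages are faulted in on first access, so random reads hit the page
 * cache (or disk) rather than ordinary heap memory. */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

typedef struct { uint64_t key; uint64_t next; } Node;  /* placeholder layout */

int main(void)
{
    int fd = open("structure.bin", O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    struct stat st;
    if (fstat(fd, &st) < 0) { perror("fstat"); return 1; }

    Node *nodes = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
    if (nodes == MAP_FAILED) { perror("mmap"); return 1; }

    size_t count = st.st_size / sizeof(Node);
    uint64_t sum = 0;
    for (size_t i = 0; i < 1000000 && count > 0; i++)
        sum += nodes[rand() % count].key;   /* random access pattern */

    printf("checksum: %llu\n", (unsigned long long)sum);
    munmap(nodes, st.st_size);
    close(fd);
    return 0;
}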
How is shmat() etc. implemented in the Linux kernel? Is there any other way to share memory?
With mmap(), processes must inherit the mapping from a parent in order to share memory. Is there a way to share memory between processes that don’t share a parent? shmat() seems to be the best solution, but it requires cleanup if the processes did not detach the memory before exiting or dying. Domain sockets come close to sharing memory… Answer With mmap, processes must
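One alternative for unrelated processes is POSIX shared memory (shm_open plus mmap); a minimal sketch, with the object name /demo_region chosen purely for illustration:

/* Any process that knows the name can open the same object; no parent/child
 * relationship is required. Link with -lrt on older glibc. */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    int fd = shm_open("/demo_region", O_CREAT | O_RDWR, 0600);
    if (fd < 0) { perror("shm_open"); return 1; }

    if (ftruncate(fd, 4096) < 0) { perror("ftruncate"); return 1; }

    char *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (p == MAP_FAILED) { perror("mmap"); return 1; }

    strcpy(p, "hello from another process");

    /* The object persists until someone calls shm_unlink("/demo_region"),
     * so the cleanup caveat raised in the question still applies here. */
    munmap(p, 4096);
    close(fd);
    return 0;
}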
What would be the disadvantage of creating a really big array on 64-bit systems?
Operating systems like Linux work on the principle of copy-on-write, so even if you allocate an array of, say, 100 GB but only use up to 10 GB of it, you would only be using 10 GB of memory. So, what would be the disadvantage of creating such a big array? I can see an advantage, though, which is that you won’t have
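A small sketch of the scenario being asked about: reserve far more anonymous address space than is ever touched. The 100 GiB / 10 GiB split mirrors the numbers in the question; MAP_NORESERVE is an added assumption to sidestep strict overcommit settings:

/* Reserve a huge anonymous mapping but touch only a fraction of it;
 * only the touched pages consume physical memory. 64-bit only. */
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

int main(void)
{
    size_t reserved = 100ULL << 30;           /* 100 GiB of address space */
    char *p = mmap(NULL, reserved, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS | MAP_NORESERVE, -1, 0);
    if (p == MAP_FAILED) { perror("mmap"); return 1; }

    /* Touch only the first 10 GiB; the rest stays as untouched virtual
     * address space and costs no physical memory. */
    memset(p, 0, 10ULL << 30);

    printf("reserved %zu bytes, touched %llu\n", reserved, 10ULL << 30);
    munmap(p, reserved);
    return 0;
}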
Can /proc/self/exe be mmap’ed?
Can a process read /proc/self/exe using mmap? This program fails to mmap the file: Answer You are making two mistakes here: the mapped size must be > 0, since zero-size mappings are invalid, and you have to specify whether you want to create a shared (MAP_SHARED) or a private (MAP_PRIVATE) mapping. The following should work, for example: If you wish to map the
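A minimal sketch of the fix the answer outlines (non-zero length plus an explicit MAP_PRIVATE); this is an illustration, not the answer's exact code:

/* Map this process's own executable image read-only via /proc/self/exe. */
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/proc/self/exe", O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    struct stat st;
    if (fstat(fd, &st) < 0) { perror("fstat"); return 1; }

    /* Length must be > 0 and a sharing flag (here MAP_PRIVATE) must be given. */
    void *p = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
    if (p == MAP_FAILED) { perror("mmap"); return 1; }

    printf("mapped %lld bytes at %p, bytes 1-3: %.3s\n",
           (long long)st.st_size, p, (char *)p + 1);   /* should print "ELF" */
    munmap(p, st.st_size);
    close(fd);
    return 0;
}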
ARM Linux userspace GPIO operations using the mmap /dev/mem approach (able to write to GPIO registers, but failing to read from them)
Kernel version 3.12.30-AM335x-PD15.1.1 by PHYTEC. If I use the /sys/class/gpio way, I can see that the button input pin (gpio103 of the AM3359) value changes from 0 to 1. Following this exercise, http://elinux.org/EBC_Exercise_11b_gpio_via_mmap, and executing the command below to read GPIO pins using the /dev/mem approach (base of GPIO bank 3, which is 0x481ae000, plus the 0x13c dataout offset), I get the
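For reference, a hedged C sketch of the same /dev/mem approach: the GPIO bank 3 base 0x481ae000 and the 0x13c dataout offset come from the question, while the 0x138 datain offset is an assumption taken from the AM335x reference manual and should be verified against it:

/* Map GPIO bank 3 through /dev/mem and read the DATAIN/DATAOUT registers.
 * Must run as root; base and offsets are assumptions, see note above. */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

#define GPIO3_BASE   0x481AE000u
#define GPIO_DATAIN  0x138u   /* input pin state  (assumed offset) */
#define GPIO_DATAOUT 0x13Cu   /* output pin state (offset from the question) */

int main(void)
{
    int fd = open("/dev/mem", O_RDWR | O_SYNC);
    if (fd < 0) { perror("open /dev/mem"); return 1; }

    volatile uint32_t *gpio = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                                   MAP_SHARED, fd, GPIO3_BASE);
    if (gpio == MAP_FAILED) { perror("mmap"); return 1; }

    printf("DATAIN:  0x%08x\n", gpio[GPIO_DATAIN  / 4]);
    printf("DATAOUT: 0x%08x\n", gpio[GPIO_DATAOUT / 4]);

    munmap((void *)gpio, 4096);
    close(fd);
    return 0;
}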
Can mmap and O_DIRECT be used together?
As I understand it, when you mmap a file you are basically mapping the pages from the page cache for that file directly into your process, and when you use O_DIRECT you are bypassing the page cache. Does it ever make sense to use the two together? If my understanding is right, how would it even work? mmap seems to
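A small probe sketch for this question: open a file with O_DIRECT and then try to mmap the same descriptor. The file name is arbitrary, the file must live on a filesystem that supports O_DIRECT, and the program only reports whether the kernel accepts the combination, not whether it is useful:

/* Open with O_DIRECT, then mmap the same fd and report the outcome. */
#define _GNU_SOURCE           /* for O_DIRECT */
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    const char *path = argc > 1 ? argv[1] : "odirect_probe.bin";

    int fd = open(path, O_RDWR | O_CREAT | O_DIRECT, 0600);
    if (fd < 0) { perror("open(O_DIRECT)"); return 1; }
    if (ftruncate(fd, 4096) < 0) { perror("ftruncate"); return 1; }

    void *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (p == MAP_FAILED)
        perror("mmap on O_DIRECT fd");
    else {
        printf("mmap on an O_DIRECT fd succeeded at %p\n", p);
        munmap(p, 4096);
    }
    close(fd);
    return 0;
}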
Will memory allocated via mmap without munmap cause a leak after the process exits or terminates?
Here is the code that allocates memory via mmap: void *ret = mmap(NULL, 4 * 1024, PROT_READ | PROT_WRITE, MAP_PRIVATE | MAP_ANON, -1, 0); When the process exits normally, will the memory be returned to the OS? Answer According to the man page, under munmap: The region is also automatically unmapped when the process is terminated. Which sounds very reasonable as
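A complete version of that snippet, deliberately omitting munmap(), for anyone who wants to watch the mapping in /proc/<pid>/maps before the process exits; it relies on the man-page sentence quoted above:

/* The mapping is created and never munmap()ed; the kernel tears down the
 * whole address space at exit, so nothing leaks past the process lifetime. */
#include <stdio.h>
#include <sys/mman.h>

int main(void)
{
    void *ret = mmap(NULL, 4 * 1024, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANON, -1, 0);
    if (ret == MAP_FAILED) { perror("mmap"); return 1; }

    /* Deliberately no munmap() here. */
    return 0;   /* on exit, the kernel unmaps everything in the process */
}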
Is a write-only mapping of an O_WRONLY-opened file supposed to work?
Is mmap() supposed to be able to create a write-only mapping of an O_WRONLY-opened file? I am asking because the following fails on a Linux 4.0.4 x86-64 system (strace log): The errno equals EACCES. Replacing the open flag O_WRONLY with O_RDWR yields a successful mapping. The Linux mmap man page documents the errno as: Thus, that behaviour is documented with the
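A reproduction sketch of the comparison in the question (a PROT_WRITE mapping of an O_WRONLY descriptor versus an O_RDWR one); the temporary file path is made up:

/* Try the same PROT_WRITE, MAP_SHARED mapping with two open modes and
 * print the result for each. */
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

static void try_mapping(int open_flags, const char *label)
{
    int fd = open("/tmp/mmap_wronly_test", open_flags | O_CREAT, 0600);
    if (fd < 0) { perror("open"); return; }
    if (ftruncate(fd, 4096) < 0) { perror("ftruncate"); close(fd); return; }

    void *p = mmap(NULL, 4096, PROT_WRITE, MAP_SHARED, fd, 0);
    if (p == MAP_FAILED)
        perror(label);                   /* O_WRONLY: "Permission denied" */
    else {
        printf("%s: mapped at %p\n", label, p);
        munmap(p, 4096);
    }
    close(fd);
}

int main(void)
{
    try_mapping(O_WRONLY, "O_WRONLY + PROT_WRITE");
    try_mapping(O_RDWR,   "O_RDWR   + PROT_WRITE");
    return 0;
}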
mmap vs sbrk, performance comparison
Which of these calls is faster on average? I’ve heard that mmap is faster for smaller allocations, but I haven’t seen a comparison of the two. Any information on their performance would be nice. Answer You should tag this with a particular implementation (like Linux), since the answer surely varies by implementation. For now I’ll assume Linux, since it’s the
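A rough, non-rigorous timing sketch of the comparison being asked about; the allocation size, iteration count, and the fact that the sbrk side never shrinks the break are arbitrary choices, and the numbers will vary widely by kernel and libc:

/* Time N small allocations done via sbrk() versus anonymous mmap()/munmap(). */
#include <stdio.h>
#include <sys/mman.h>
#include <time.h>
#include <unistd.h>

#define N  100000
#define SZ 4096

static double secs(struct timespec a, struct timespec b)
{
    return (b.tv_sec - a.tv_sec) + (b.tv_nsec - a.tv_nsec) / 1e9;
}

int main(void)
{
    struct timespec t0, t1;

    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (int i = 0; i < N; i++)
        sbrk(SZ);                         /* grow the program break, never shrink */
    clock_gettime(CLOCK_MONOTONIC, &t1);
    printf("sbrk:        %f s\n", secs(t0, t1));

    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (int i = 0; i < N; i++) {
        void *p = mmap(NULL, SZ, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        munmap(p, SZ);
    }
    clock_gettime(CLOCK_MONOTONIC, &t1);
    printf("mmap/munmap: %f s\n", secs(t0, t1));
    return 0;
}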