I am trying to allocate physically contiguous pages in DRAM using the alloc_pages_exact function. When I try to allocate 10MB of pages, the returned address is always 0. But when I try to allocate 1MB of pages, the allocation is almost immediate. Also, can someone please tell me how to find the exact row size of DRAM? I had to
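A minimal sketch of the call in question, assuming a kernel-module context and a 4 KiB page size (the function name example_alloc is made up for illustration): alloc_pages_exact() still draws from the buddy allocator, so a request larger than the MAX_ORDER-1 block size (commonly 4 MiB) returns NULL, which is consistent with 10 MB failing while 1 MB succeeds.

```c
#include <linux/gfp.h>
#include <linux/mm.h>

static void *buf;
static const size_t BUF_SIZE = 1 * 1024 * 1024;   /* 1 MiB works; 10 MiB usually exceeds MAX_ORDER */

static int example_alloc(void)
{
        buf = alloc_pages_exact(BUF_SIZE, GFP_KERNEL);
        if (!buf)
                return -ENOMEM;                    /* allocation failed: returned address is NULL */
        return 0;
}

static void example_free(void)
{
        if (buf)
                free_pages_exact(buf, BUF_SIZE);   /* must pass back the same size */
}
```

For contiguous regions larger than the buddy-allocator limit, CMA (e.g. dma_alloc_coherent against a device with a CMA region) or boot-time reservation are the usual routes.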
Tag: memory-management
Is there a different memory allocation path other than the buddy allocator in Linux?
I’m studying memory allocation in Linux and making some changes to the buddy allocator (__alloc_pages_nodemask) for my experiments. I create a new flag in struct page->flags by adding a new entry to enum pageflags in page-flags.h. I set this bit permanently in __alloc_pages_nodemask, so that it is not cleared once set and survives all further allocations and frees. I modify PAGE_FLAGS_CHECK_AT_PREP to
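A hedged sketch of the kind of change described; PG_experiment and the generated PageExperiment() helpers are hypothetical names, and the exact PAGEFLAG() arguments and the PAGE_FLAGS_CHECK_AT_PREP definition vary between kernel versions:

```c
/* include/linux/page-flags.h */
enum pageflags {
        /* ... existing flags ... */
        PG_experiment,                  /* new experimental bit */
        __NR_PAGEFLAGS,
};

/* generates PageExperiment()/SetPageExperiment()/ClearPageExperiment() */
PAGEFLAG(Experiment, experiment, PF_ANY)

/* mm/page_alloc.c, somewhere on the successful allocation path: */
static inline void mark_experiment_pages(struct page *page, unsigned int order)
{
        unsigned int i;

        for (i = 0; i < (1U << order); i++)
                SetPageExperiment(page + i);    /* tag every page of the allocated block */
}
```

For the bit to survive, it also has to be masked out of PAGE_FLAGS_CHECK_AT_PREP (and PAGE_FLAGS_CHECK_AT_FREE), otherwise the prep/free paths will treat it as a bad-page flag or clear it.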
Is it possible to add a customized name for the (non file-backed) mmap region?
Just curious whether it is possible to specify a name for a non-file-backed mmap region? Something like the [New VMA area] in the following example: Answer: The content of maps comes from the show_map_vma function in fs/proc/task_mmu.c. Looking at it, if you want a custom name for a non-file-backed mapping, it’d need to come from either vma->vm_ops->name or
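A rough sketch of the vm_ops->name route the answer points at, assuming a character-device driver; all mydev_* names are invented for illustration:

```c
#include <linux/mm.h>

static const char *mydev_vma_name(struct vm_area_struct *vma)
{
        return "[New VMA area]";        /* string shown in /proc/<pid>/maps */
}

static const struct vm_operations_struct mydev_vm_ops = {
        .name = mydev_vma_name,
        /* a real driver would also provide .fault or pre-populate the range */
};

static int mydev_mmap(struct file *file, struct vm_area_struct *vma)
{
        vma->vm_ops = &mydev_vm_ops;
        return 0;
}
```

On newer kernels (5.17+ built with CONFIG_ANON_VMA_NAME), userspace can also label purely anonymous mappings with prctl(PR_SET_VMA, PR_SET_VMA_ANON_NAME, addr, len, name).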
Why do I get different results for finding the peak of memory usage?
In Linux, I am using the /usr/bin/time -f %M tool (GNU time) to get the peak memory used by a single process/program. But every time I run this command, I get a different result. What is the reason for this difference? How can I get an accurate value for the maximum memory consumed by a process? P.S. I’ve already used
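For comparison, a small self-measuring sketch that reads the same counter GNU time reports as %M (the maximum resident set size, in kilobytes on Linux), via getrusage(); run-to-run variation typically comes from how many pages actually become resident, not from the counter itself:

```c
#include <stdio.h>
#include <stdlib.h>
#include <sys/resource.h>

int main(void)
{
        struct rusage ru;

        /* do some work that allocates memory */
        char *p = malloc(50 * 1024 * 1024);
        for (size_t i = 0; i < 50 * 1024 * 1024; i += 4096)
                p[i] = 1;               /* touch pages so they become resident */

        if (getrusage(RUSAGE_SELF, &ru) == 0)
                printf("peak RSS: %ld KiB\n", ru.ru_maxrss);

        free(p);
        return 0;
}
```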
Code works logically on macOS but not on Ubuntu 16.04.5
I have a task to write the function: int read_palindrome(); // input comes from stdin which reads one line from standard input and returns 1 if the line is a palindrome and 0 otherwise. A line is terminated by the newline character ('\n') and does not include the newline. There are requirements to be met: There is no
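A hedged sketch of one way read_palindrome() could look; the assignment's exact constraints are cut off in the excerpt, so this only illustrates the basic shape: read up to the newline, then compare characters from both ends.

```c
#include <stdio.h>
#include <string.h>

int read_palindrome(void)
{
        char line[4096];

        if (fgets(line, sizeof line, stdin) == NULL)
                return 0;

        size_t len = strcspn(line, "\n");    /* length without the newline */

        for (size_t i = 0, j = len; i + 1 < j; i++, j--)
                if (line[i] != line[j - 1])
                        return 0;             /* mismatch: not a palindrome */
        return 1;
}
```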
DMA Engine Timeout and DMA Memory Mapping
I am trying to use a Linux DMA driver. Currently, when I send the transaction out and begin waiting, my request times out. I believe this has to do with the way I am setting up my buffers when I am performing DMA Mapping. In Xilinx’s DMA driver, they take special care to look at memory alignment. In particular, they
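A generic sketch of one common way to get an aligned, DMA-safe buffer, assuming dev is the struct device of the DMA controller and XFER_LEN is a made-up transfer size; dma_alloc_coherent() hands back both the CPU pointer and the bus address the engine must be programmed with:

```c
#include <linux/dma-mapping.h>

#define XFER_LEN 4096

static void *cpu_buf;
static dma_addr_t dma_handle;

static int setup_dma_buffer(struct device *dev)
{
        /* declare which addresses the engine can reach */
        if (dma_set_mask_and_coherent(dev, DMA_BIT_MASK(32)))
                return -EIO;

        cpu_buf = dma_alloc_coherent(dev, XFER_LEN, &dma_handle, GFP_KERNEL);
        if (!cpu_buf)
                return -ENOMEM;

        /* program the engine with dma_handle, not a virtual address */
        return 0;
}

static void teardown_dma_buffer(struct device *dev)
{
        dma_free_coherent(dev, XFER_LEN, cpu_buf, dma_handle);
}
```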
Segmentation Fault in pthreads, Linux Ubuntu
I’m getting a segmentation fault when I run this code. Surprisingly, when I set thread_count to 16 or fewer, it doesn’t give any error. When I debug the code with gdb, the error occurs at the line local_answer += vec_1[j] * vec_2[j]; in the Calculate() thread function. What is the reason for this behavior? How can I fix it?
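A hedged sketch of a dot-product worker whose partitioning stays in bounds even when the vector length is not divisible by the thread count; indexing past the end of vec_1/vec_2 in the last thread is a classic cause of exactly this kind of crash. The names Calculate, vec_1 and vec_2 follow the question; the partition arithmetic is an assumption about what the original code intends.

```c
#include <pthread.h>
#include <stdio.h>

#define N        1000000
#define NTHREADS 32

static double vec_1[N], vec_2[N];
static double partial[NTHREADS];

static void *Calculate(void *arg)
{
        long rank  = (long)arg;
        long chunk = N / NTHREADS;
        long start = rank * chunk;
        long end   = (rank == NTHREADS - 1) ? N : start + chunk;  /* last thread takes the remainder */
        double local_answer = 0.0;

        for (long j = start; j < end; j++)
                local_answer += vec_1[j] * vec_2[j];

        partial[rank] = local_answer;
        return NULL;
}

int main(void)
{
        pthread_t tid[NTHREADS];
        double sum = 0.0;

        for (long r = 0; r < NTHREADS; r++)
                pthread_create(&tid[r], NULL, Calculate, (void *)r);
        for (long r = 0; r < NTHREADS; r++) {
                pthread_join(tid[r], NULL);
                sum += partial[r];
        }
        printf("dot product: %f\n", sum);
        return 0;
}
```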
How can I get a guarantee that when memory is freed, the OS will reclaim that memory for its use?
I noticed that this program, which allocates 3 blocks of memory 3 times and each time frees a different one, returns the memory to the OS according to watch free -m; that is, the OS reclaimed the memory on every free regardless of the block’s position inside the program’s address space. Can I somehow get a guarantee of this effect? Or is there already anything
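A hedged sketch of two glibc-specific ways to make freed memory visibly go back to the kernel (neither is a C-standard guarantee): allocations above the mmap threshold are munmap()ed as soon as free() runs, and malloc_trim() asks the allocator to return unused heap pages.

```c
#include <malloc.h>
#include <stdlib.h>

int main(void)
{
        /* Force every allocation >= 1 MiB to be a private mmap, so free()
         * munmap()s it immediately and the drop shows up in free -m. */
        mallopt(M_MMAP_THRESHOLD, 1024 * 1024);

        char *big = malloc(64 * 1024 * 1024);
        /* ... use big ... */
        free(big);                      /* unmapped right away with this threshold */

        /* For many small allocations served from the heap, explicitly ask
         * glibc to hand free pages back to the kernel: */
        malloc_trim(0);
        return 0;
}
```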
What is contained in code/internal sections of JCMD?
Dimensioning a Docker container for a JVM-based service is tricky (as we all know). I’m pretty sure we have slightly under-dimensioned a container and want to clear up a few questions I have relating to specific jcmd (Native Memory Tracking) outputs that we see when monitoring. Questions: Are Direct Byte Buffers included in “Internal” as reported by jcmd? What
Linux page table of the process
I’m reading about memory paging here and now trying to experiment with it. I wrote a simple assembly program to trigger a segmentation fault and ran it in gdb. Here it is: I assemble and link this into a 64-bit ELF static executable. As far as I have read, each process has its own page table, which the cr3 register points to. Now
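The page-table walk rooted at CR3 is not directly visible from userspace, but /proc/self/pagemap exposes one 64-bit entry per virtual page; a small sketch for experimenting, reading the present bit and (with root privileges) the physical frame number of a mapped address:

```c
#include <fcntl.h>
#include <inttypes.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
        long psize = sysconf(_SC_PAGESIZE);
        int fd = open("/proc/self/pagemap", O_RDONLY);
        if (fd < 0) { perror("open"); return 1; }

        int probe = 42;                                   /* any mapped object */
        uint64_t vpn = (uint64_t)(uintptr_t)&probe / psize; /* virtual page number */
        uint64_t entry;

        if (pread(fd, &entry, sizeof entry, vpn * sizeof entry) != sizeof entry) {
                perror("pread");
                return 1;
        }

        printf("present=%llu pfn=0x%llx\n",
               (unsigned long long)(entry >> 63),                 /* bit 63: page present */
               (unsigned long long)(entry & ((1ULL << 55) - 1))); /* bits 0-54: PFN (root only) */

        close(fd);
        return 0;
}
```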