glibc posix_memalign allocation overhead


I am trying to understand the memory overhead associated with posix_memalign - in other words, if posix_memalign relies on boundary tagging, how big such a tag is.

So I wrote the following (very simple) program (Linux x86_64 platform, "gcc -m64"):

#include <stdio.h>
#include <stdint.h>
#include <stdlib.h>

int main(int argc, char* argv[])
{
    int   i;
    void* prev = NULL;
    void* memp;

    int align = atoi(argv[1]);
    int alloc = atoi(argv[2]);

    for (i = 0; i < 5; ++i)
    {
        if (posix_memalign(&memp, align, alloc))
        {
            fprintf(stderr, "allocation failed\n");
            return 1;
        }

        if (i == 0)
            printf("allocated %d bytes @ 0x%08x\n",
                   alloc, (unsigned)(uintptr_t)memp);
        else
            printf("allocated %d bytes @ 0x%08x (offset: %d)\n",
                   alloc, (unsigned)(uintptr_t)memp,
                   (int)((char*)memp - (char*)prev));

        prev = memp;
    }

    return 0;
}
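Assuming the source is saved as test.c (the file name is arbitrary; the -m64 flag and the /tmp/test path come from the runs below), it is built with something like:

$ gcc -m64 -o /tmp/test test.c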

However, the results baffle me...

$ /tmp/test 8 1
allocated 1 bytes @ 0x0133a010
allocated 1 bytes @ 0x0133a030 (offset: 32)
allocated 1 bytes @ 0x0133a050 (offset: 32)
allocated 1 bytes @ 0x0133a070 (offset: 32)
allocated 1 bytes @ 0x0133a090 (offset: 32)

The same for allocations of 2 to 24 bytes, until:

$ /tmp/test 8 25
allocated 25 bytes @ 0x0198c010
allocated 25 bytes @ 0x0198c040 (offset: 48)
allocated 25 bytes @ 0x0198c070 (offset: 48)
allocated 25 bytes @ 0x0198c0a0 (offset: 48)
allocated 25 bytes @ 0x0198c0d0 (offset: 48)

The same for allocations of 26 to 40 bytes, until:

$ /tmp/test 8 41
allocated 41 bytes @ 0x0130c010
allocated 41 bytes @ 0x0130c050 (offset: 64)
allocated 41 bytes @ 0x0130c090 (offset: 64)
allocated 41 bytes @ 0x0130c0d0 (offset: 64)
allocated 41 bytes @ 0x0130c110 (offset: 64)

So I concluded that the minimum allocation is 32 bytes and that posix_memalign uses an 8-byte boundary tag.
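One way to cross-check that conclusion is to ask the allocator itself how big each chunk turns out to be. This is a minimal sketch using malloc_usable_size(), which is a glibc extension declared in <malloc.h>, not part of POSIX:

#include <malloc.h>   /* malloc_usable_size() -- glibc extension */
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    size_t req;
    void*  p;

    for (req = 1; req <= 41; req += 8)
    {
        if (posix_memalign(&p, 8, req))
            return 1;

        /* malloc_usable_size() reports the bytes actually available in
           the chunk, which exposes the rounding and per-chunk overhead */
        printf("requested %2zu bytes, usable %2zu bytes\n",
               req, malloc_usable_size(p));
        free(p);
    }

    return 0;
}

On glibc/x86_64 I would expect the reported usable sizes to step in the same increments as the offsets above (24, then 40, then 56, ...), i.e. the requested size plus one size word, rounded up.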

The same results were obtained with 16-byte alignment. Things got weird with 32-byte alignment:

$ /tmp/test 32 1
allocated 1 bytes @ 0x0064c040
allocated 1 bytes @ 0x0064c080 (offset: 64)
allocated 1 bytes @ 0x0064c120 (offset: 160)
allocated 1 bytes @ 0x0064c160 (offset: 64)
allocated 1 bytes @ 0x0064c200 (offset: 160)

The same for allocations of 2 to 24 bytes, until:

$ /tmp/test 32 25
allocated 25 bytes @ 0x01e0c040
allocated 25 bytes @ 0x01e0c0c0 (offset: 128)
allocated 25 bytes @ 0x01e0c140 (offset: 128)
allocated 25 bytes @ 0x01e0c1c0 (offset: 128)
allocated 25 bytes @ 0x01e0c240 (offset: 128)

The same for allocations of 26 to 40 bytes, until:

$ /tmp/test 32 41
allocated 41 bytes @ 0x00a72040
allocated 41 bytes @ 0x00a720a0 (offset: 96)
allocated 41 bytes @ 0x00a72160 (offset: 192)
allocated 41 bytes @ 0x00a721c0 (offset: 96)
allocated 41 bytes @ 0x00a72280 (offset: 192)

Can anyone explain such behaviour? I'm at a total loss...

posix_memalign guarantees a minimum alignment. If an allocation happens to come back at 0x00a72000, it might coincidentally be aligned to 0x1000 bytes, but that doesn't mean a lot of memory was wasted to pad it that way. Your first example almost showed this: the allocations started at 0x0198c010 and incremented by 0x30 after each one, and a sequence like that will occasionally hit an address with many of its low bits cleared.
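To make that concrete, here is a small standalone sketch; the 0x0198c010 start address and the 0x30 stride are simply the numbers from the 25-byte run above. It walks the same arithmetic sequence and prints the largest power-of-two boundary each address happens to fall on:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uintptr_t addr = 0x0198c010;   /* first address from the 25-byte run */
    int i;

    for (i = 0; i < 8; ++i)
    {
        /* the lowest set bit of an address is the largest power of two
           that divides it, i.e. its "natural" alignment */
        uintptr_t align = addr & (~addr + 1);
        printf("0x%08lx happens to be %lu-byte aligned\n",
               (unsigned long)addr, (unsigned long)align);
        addr += 0x30;              /* stride observed between allocations */
    }

    return 0;
}

Some addresses in that sequence turn out to be 32-, 64- or even 256-byte aligned purely by accident, which is why an allocation can come back more aligned than requested without any extra padding.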

Keep in mind that malloc in glibc is not a "stupid" one that hands out pages/chunks linearly. If you want to learn more about its internal behaviour, you should read up on the allocators it is based on: glibc uses ptmalloc, which is in turn based on dlmalloc. I don't think the glibc manual goes into much depth on the internal design decisions.
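For reference, the per-chunk bookkeeping in ptmalloc/dlmalloc looks roughly like this; this is a simplified sketch, not the exact struct from glibc's malloc.c:

#include <stddef.h>

/* simplified sketch of a ptmalloc/dlmalloc chunk (not the real definition) */
struct chunk_sketch {
    size_t prev_size;            /* size of the previous chunk, valid only
                                    while that chunk is free */
    size_t size;                 /* this chunk's size plus status bits
                                    (PREV_INUSE etc.) */
    /* user data starts here for an allocated chunk; for a free chunk
       these two words hold the free-list links instead */
    struct chunk_sketch* fd;
    struct chunk_sketch* bk;
};

On x86_64 that layout gives a minimum chunk of four 8-byte words (32 bytes), chunk sizes rounded up to a multiple of 16, and an effective per-allocation overhead of a single 8-byte size word (prev_size overlaps the tail of the previous in-use chunk), which matches the 32-, 48- and 64-byte offsets observed above.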

