    David Goldblatt
    @davidtgoldblatt
    The jemalloc build doesn’t do profiling in its default configuration
    So, “malloc” is an interface, and there are multiple implementations of that interface (i.e. jemalloc is a malloc implementation, glibc malloc is another implementation, etc.)
    It’s hard to say which will be faster on any given workload
    But we do try to be fast in general
    Alexander Fedorov
    @sith
    are there public benchmark docs? I found this one http://www.adms-conf.org/2019-camera-ready/durner_adms19.pdf
    so if I compile without the enable-prof param it should just work as a malloc implementation
    David Goldblatt
    @davidtgoldblatt
    Right
    (although, profiling is quite low overhead in general; it shouldn’t make a big difference unless something is pathologically broken)
    As far as I know, there are no good malloc benchmarks out there (there are quite a few bad ones)
    The state of the art as far as I know is to try out a variety of implementations on some program of interest
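    For anyone trying that, a minimal sketch of such an A/B harness (the allocation pattern below is a placeholder; substitute your real workload, then run the same binary once normally and once under LD_PRELOAD=/path/to/libjemalloc.so):

        /* bench.c - build: cc -O2 bench.c -o bench */
        #include <stdio.h>
        #include <stdlib.h>
        #include <time.h>

        #define ROUNDS 1000000
        #define SLOTS  256

        int main(void) {
            void *slot[SLOTS] = {0};
            struct timespec t0, t1;

            clock_gettime(CLOCK_MONOTONIC, &t0);
            for (int i = 0; i < ROUNDS; i++) {
                int s = rand() % SLOTS;
                free(slot[s]);  /* free(NULL) is a no-op */
                slot[s] = malloc((size_t)(rand() % 4096) + 1);
            }
            clock_gettime(CLOCK_MONOTONIC, &t1);

            double secs = (t1.tv_sec - t0.tv_sec)
                + (t1.tv_nsec - t0.tv_nsec) / 1e9;
            printf("%d alloc/free rounds in %.3f s\n", ROUNDS, secs);
            return 0;
        }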
    Alexander Fedorov
    @sith
    that makes sense
    I have a situation where my Java (with some JNI) app consumes all the memory on the machine and gets killed by the OOM killer, but the same does not happen when I use jemalloc
    Zac Policzer
    @ZacAttack
    Do you use something which utilizes a native library extensively? That's the scenario I had. RSS would steadily grow over time under ptmalloc, causing virtual memory to rise until overcommit settings induced process death. By swapping to jemalloc I didn't have the RSS growth anymore. I'm doing a write-up on it for my company's tech blog.
    Oh wait. You said 'with some JNI'. Yup. That'd do it. The JVM actually makes very little use of malloc under the hood, preferring to manage its memory via mmap (though with a few exceptions).
    Zac Policzer
    @ZacAttack
    If you're using glibc, I suggest taking a look at a project called gdb-heap. With some minor tweaks you can use it to actually see how much memory is getting left on the floor (that is, claimed by the process but unused). We found that arena heaps were getting fragmented over time, but jemalloc seems (from poking around the code a bit) to make sure there is a ceiling to your worst case wastage (I think max 20% in the absolute worst case?). glibc has no ceiling as far as I could tell.
    David Goldblatt
    @davidtgoldblatt
    Especially in cases where native deallocation is triggered by some finalizer, GC’d languages calling into malloc can be hard to handle from a fragmentation standpoint
    a lot of the usual things you assume about lifetime stop being true
    Shouchao Jiang
    @lnparker
    @davidtgoldblatt I have tried the method of iterating over tsd_nominal_tsds (tsd_list_t) and accumulating (tsd_thread_allocated_get(tsd) - tsd_thread_deallocated_get(tsd)) across all the threads in tsd_nominal_tsds, but it seems that it does not work.
    pthread_create in nptl will first call tsd_cleanup() and then do some other free work; that "other free work" will not be captured by tsd
    is there any better method to record the allocated / deallocated size? thank you! @davidtgoldblatt
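    For reference, a sketch of reading such counters through the public mallctl interface instead of walking tsd_nominal_tsds (assuming an unprefixed jemalloc build with stats enabled; "thread.allocated" and "thread.deallocated" cover only the calling thread, while "stats.allocated" is process-wide once refreshed via "epoch"):

        #include <inttypes.h>
        #include <stdio.h>
        #include <jemalloc/jemalloc.h>

        /* Counters for the calling thread only. */
        static void print_thread_counters(void) {
            uint64_t alloc = 0, dealloc = 0;
            size_t sz = sizeof(uint64_t);
            mallctl("thread.allocated", &alloc, &sz, NULL, 0);
            sz = sizeof(uint64_t);
            mallctl("thread.deallocated", &dealloc, &sz, NULL, 0);
            printf("thread: allocated=%" PRIu64 " deallocated=%" PRIu64 "\n",
                alloc, dealloc);
        }

        /* Process-wide live bytes; writing "epoch" refreshes cached stats. */
        static void print_global_allocated(void) {
            uint64_t epoch = 1;
            size_t esz = sizeof(epoch);
            mallctl("epoch", &epoch, &esz, &epoch, esz);

            size_t allocated = 0, sz = sizeof(size_t);
            mallctl("stats.allocated", &allocated, &sz, NULL, 0);
            printf("process: allocated=%zu bytes\n", allocated);
        }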
    Bernd Prager
    @bprager
    Being confined at work to a small Cygwin environment, I embarked on a brave/naive journey to get 'jemalloc' working in this environment. I found a (closed) issue on GitHub (#285) which I implemented. It basically sets the Linux definitions and disables background threads in Cygwin environments.
    When it successfully compiled I got very excited, until I ran the test suite. This is where I got stuck.
    The test fails at:
    === test/unit/binshard ===
    test_bin_shard (non-reentrant): pass
    test/test.sh: line 34: 7140 Segmentation fault (core dumped) $JEMALLOC_TEST_PREFIX ${t}.exe /home/berndpra/Tmp/jemalloc/jemalloc-dev/ /home/berndpra/Tmp/jemalloc/jemalloc-dev/
    Test harness error: test/unit/binshard w/ MALLOC_CONF="narenas:1,bin_shards:1-160:16|129-512:4|256-256:8"
    Use prefix to debug, e.g. JEMALLOC_TEST_PREFIX="gdb --args" sh test/test.sh test/unit/binshard
    make: *** [Makefile:603: check_unit] Error 1
    Running 'binshard.exe' reports:
    $ ./test/unit/binshard.exe
    test_bin_shard:test/unit/binshard.c:136: Failed assertion: (nshards) == (16) --> 1 != 16: Unexpected nshards
    test_bin_shard:test/unit/binshard.c:136: Failed assertion: (nshards) == (16) --> 1 != 16: Unexpected nshards
    test_bin_shard:test/unit/binshard.c:136: Failed assertion: (nshards) == (16) --> 1 != 16: Unexpected nshards
    test_bin_shard:test/unit/binshard.c:136: Failed assertion: (nshards) == (16) --> 1 != 16: Unexpected nshards
    test_bin_shard:test/unit/binshard.c:136: Failed assertion: (nshards) == (16) --> 1 != 16: Unexpected nshards
    test_bin_shard:test/unit/binshard.c:136: Failed assertion: (nshards) == (16) --> 1 != 16: Unexpected nshards
    test_bin_shard:test/unit/binshard.c:136: Failed assertion: (nshards) == (16) --> 1 != 16: Unexpected nshards
    test_bin_shard:test/unit/binshard.c:136: Failed assertion: (nshards) == (16) --> 1 != 16: Unexpected nshards
    test_bin_shard:test/unit/binshard.c:136: Failed assertion: (nshards) == (16) --> 1 != 16: Unexpected nshards
    test_bin_shard:test/unit/binshard.c:140: Failed assertion: (nshards) == (4) --> 1 != 4: Unexpected nshards
    test_bin_shard:test/unit/binshard.c:140: Failed assertion: (nshards) == (4) --> 1 != 4: Unexpected nshards
    test_bin_shard:test/unit/binshard.c:140: Failed assertion: (nshards) == (4) --> 1 != 4: Unexpected nshards
    test_bin_shard:test/unit/binshard.c:138: Failed assertion: (nshards) == (8) --> 1 != 8: Unexpected nshards
    test_bin_shard:test/unit/binshard.c:140: Failed assertion: (nshards) == (4) --> 1 != 4: Unexpected nshards
    test_bin_shard:test/unit/binshard.c:140: Failed assertion: (nshards) == (4) --> 1 != 4: Unexpected nshards
    test_bin_shard:test/unit/binshard.c:140: Failed assertion: (nshards) == (4) --> 1 != 4: Unexpected nshards
    test_bin_shard:test/unit/binshard.c:140: Failed assertion: (nshards) == (4) --> 1 != 4: Unexpected nshards
    test_bin_shard (non-reentrant): fail
    Can somebody give me a hint on where I should start looking to fix this?
    David Goldblatt
    @davidtgoldblatt
    @lnparker, I’m not completely sure I follow; what isn’t working? Could you clarify what precisely you’re trying to do in the bigger picture sense? I.e. the motivating problem you’re trying to solve
    @bprager, the test failures you get from running the unit test directly aren’t quite right; that test needs to be run in the environment set up by test/unit/binshard.sh (or, if you have gdb access, you can copy the command from the error message: JEMALLOC_TEST_PREFIX="gdb --args" sh test/test.sh test/unit/binshard)
    that should give you the segfault stack trace, rather than the config errors you saw
    Bernd Prager
    @bprager_gitlab
    [image: gdb.png]
    Something like that?
    Bernd Prager
    @bprager_gitlab
    [image: -bash 09_15_2020 11_18_39 AM.png]
    Bernd Prager
    @bprager_gitlab
    [image: -bash 09_15_2020 11_35_43 AM.png]
    Bernd Prager
    @bprager_gitlab
    I seem to get SIGSEGV in different locations.
    [image: -bash 09_15_2020 12_03_18 PM.png]
    David Goldblatt
    @davidtgoldblatt
    Yeah. Looks like something might be going wrong with the thread shutdown hooks
    Maybe they're not getting executed on Cygwin?
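    One way to test that hypothesis is a standalone probe (hypothetical program, not part of the jemalloc tree): register a pthread TLS destructor and see whether it fires on thread exit under Cygwin:

        /* tsd_check.c - build: cc tsd_check.c -o tsd_check -lpthread */
        #include <pthread.h>
        #include <stdio.h>

        static pthread_key_t key;

        static void dtor(void *arg) {
            (void)arg;
            printf("TLS destructor ran on thread exit\n");
        }

        static void *worker(void *arg) {
            (void)arg;
            /* Destructors only fire for keys whose value is non-NULL. */
            pthread_setspecific(key, (void *)1);
            return NULL;
        }

        int main(void) {
            pthread_t t;
            pthread_key_create(&key, dtor);
            pthread_create(&t, NULL, worker, NULL);
            pthread_join(t, NULL);
            /* If the destructor line never printed, thread shutdown
             * hooks aren't running, matching the hypothesis above. */
            return 0;
        }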
    Zac Policzer
    @ZacAttack
    I have a usability question (maybe feature request?). I want to dump stats when a new high watermark of virtual memory is reached (which is supported), but I'd prefer to only do it after a certain threshold is reached. I have a process which is going to allocate 200GB of memory to start, but I'm only really interested in profiling allocations that come AFTER that :). Any tips?
    David Goldblatt
    @davidtgoldblatt
    To clarify: stats dumping or prof dumping?
    I.e. bin utilization and arena metadata stuff (the output of malloc_stats_print), or the set of sampled allocations?
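    (For the stats side, the call in question is a one-liner; a NULL write callback sends output to stderr, and passing "J" as the opts string emits JSON:)

        #include <jemalloc/jemalloc.h>

        int main(void) {
            malloc_stats_print(NULL, NULL, NULL);   /* human-readable */
            malloc_stats_print(NULL, NULL, "J");    /* JSON */
            return 0;
        }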
    Zac Policzer
    @ZacAttack
    prof dumping, I think. I'm interested in which code paths are doing the allocations in this situation
    David Goldblatt
    @davidtgoldblatt
    You could set prof_active to false at startup, and then to true once you’ve done the bootstrapping
    or initialization rather
    Zac Policzer
    @ZacAttack
    Oh, is it dynamic?
    David Goldblatt
    @davidtgoldblatt
    Yeah
    Zac Policzer
    @ZacAttack
    niiicccee. Ok
    thanks!
    David Goldblatt
    @davidtgoldblatt
    Somewhat confusingly, you need prof:true,prof_active:false, and then can set prof_active:true
    no problem!
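    A sketch of that flow, assuming an unprefixed jemalloc built with --enable-prof and started with MALLOC_CONF=prof:true,prof_active:false (the runtime knob's mallctl name is "prof.active"):

        #include <stdbool.h>
        #include <jemalloc/jemalloc.h>

        /* Call once the ~200GB of startup allocation is done. */
        static void enable_profiling(void) {
            bool active = true;
            mallctl("prof.active", NULL, NULL, &active, sizeof(active));
        }

        /* Trigger a profile dump on demand, e.g. at a high-water mark;
         * writing NULL uses the default opt.prof_prefix-based filename. */
        static void dump_profile(void) {
            mallctl("prof.dump", NULL, NULL, NULL, 0);
        }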
    Jarrett Lusso
    @jclusso
    Is there any simple way to see if LD_PRELOAD worked?
    David Goldblatt
    @davidtgoldblatt
    if you do LD_PRELOAD=/path/to/libjemalloc.so MALLOC_CONF="not_a_real_malloc_conf_option:true" ./myprog, the jemalloc bootup code will print warnings
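    Another check, as a hypothetical probe program (assumes an unprefixed jemalloc build; a prefixed build exports je_mallctl instead): jemalloc exports symbols that glibc's malloc does not, so you can look one up at runtime:

        /* preload_check.c - build: cc preload_check.c -o preload_check -ldl
         * run: LD_PRELOAD=/path/to/libjemalloc.so ./preload_check */
        #define _GNU_SOURCE
        #include <dlfcn.h>
        #include <stdio.h>

        int main(void) {
            /* jemalloc exports mallctl; glibc's allocator does not. */
            int loaded = dlsym(RTLD_DEFAULT, "mallctl") != NULL;
            printf("jemalloc %s\n", loaded ? "loaded" : "NOT loaded");
            return 0;
        }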