These are chat archives for HdrHistogram/HdrHistogram

21st Jul 2015
Michael Barker
@mikeb01
Jul 21 2015 04:30
@giltene I've pushed a change to make the use of big endian explicit in the encoding of histograms.
Michael Barker
@mikeb01
Jul 21 2015 06:50
@ahothan I think it should be possible to have a single instance that can be encoded into and reused, to avoid constantly reallocating a new one. It would need a little bit of logic to discard the old one if the various configuration parameters don't match (e.g. bucketSize, etc.), but if they do match, reusing the instance should be possible. I'll have a look at tweaking the C implementation to work this way.
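(A minimal sketch of the reuse check being described, assuming the Java API's configuration getters and three-argument constructor; the actual change, especially on the C side, may look different.)

```java
import org.HdrHistogram.Histogram;

// Sketch only: recycle the existing instance if its configuration matches,
// otherwise discard it and allocate a fresh one.
static Histogram reuseOrAllocate(Histogram existing, long lowestDiscernibleValue,
                                 long highestTrackableValue, int significantDigits) {
    boolean matches = existing != null
        && existing.getLowestDiscernibleValue() == lowestDiscernibleValue
        && existing.getHighestTrackableValue() == highestTrackableValue
        && existing.getNumberOfSignificantValueDigits() == significantDigits;
    if (matches) {
        existing.reset();   // same bucket layout, so the old counts storage can be reused
        return existing;
    }
    return new Histogram(lowestDiscernibleValue, highestTrackableValue, significantDigits);
}
```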
Michael Barker
@mikeb01
Jul 21 2015 07:43
@giltene I've added support for Direct ByteBuffers to the encoding classes. It's not very efficient as it needs to allocate a temporary array, but it is functional.
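(Rough illustration of where the temporary array comes from: a direct ByteBuffer has no accessible backing array, so its bytes have to be staged through a byte[] before an array-based compression path can use them. This helper is illustrative, not the code that was pushed.)

```java
import java.nio.ByteBuffer;

static byte[] toByteArray(ByteBuffer buffer) {
    if (buffer.hasArray()) {
        return buffer.array();        // heap buffer: backing array is usable (offset handling omitted)
    }
    byte[] temp = new byte[buffer.remaining()];
    buffer.duplicate().get(temp);     // direct buffer: copy contents into a temporary array
    return temp;
}
```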
Alec
@ahothan
Jul 21 2015 16:19
@mikeb01 if there is a mismatch you can simply return an error and keep the target/aggregation histogram. I think in most use cases all histogram settings should match. What might be important is to minimize memory allocation (especially in Java/Python) for every decode+add operation, which could mean keeping around the memory to decompress into (in my case that would be 27*2048*4 = 221KB). Just the idea of unpacking this struct/array in Python as suggested by @tmontgomery will yield a new list of 55296 integers (which will take another 200+KB) for every add, which is not terribly exciting to say the least. Looks like the only way to avoid this alloc overhead in Python would be to use some C code to access that array...
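(For context, the decode+add loop being discussed looks roughly like this in Java terms; signatures are approximate and the 0 passed as the minimum-bar argument is a placeholder. Each pass allocates a fresh histogram, which is the per-add overhead being raised here; the same concern applies to the Python port.)

```java
import java.nio.ByteBuffer;
import java.util.zip.DataFormatException;
import org.HdrHistogram.Histogram;

static Histogram aggregate(Iterable<ByteBuffer> encoded) throws DataFormatException {
    Histogram total = null;
    for (ByteBuffer buffer : encoded) {
        // A new Histogram is allocated on every iteration; decoding into a
        // reused, configuration-matched target would avoid this.
        Histogram decoded = Histogram.decodeFromCompressedByteBuffer(buffer, 0);
        if (total == null) {
            total = decoded;
        } else {
            total.add(decoded);
        }
    }
    return total;
}
```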
Todd L. Montgomery
@tmontgomery
Jul 21 2015 18:01
@ahothan that is a bummer. Is there a way to reuse a list to make that easier?