These are chat archives for HdrHistogram/HdrHistogram
jonhoo/hdrsample@5f5f6ae is a quick test about the aforementioned multiplication behavior -- it compares multiplying a quantile (`f64`) by a count (`u64` cast to `f64`) with multiplying a quantile (`f64` converted to an arbitrary-precision rational) by a count (`u64` as rational) and then converting the result to an `f64`. This isn't a perfect test (maybe I should convert the first `f64` product to a rational and compare that to the rational product?), but when running 10 million random iterations, here's some sample output:
`high 5002030 low 0 same 4997969`
In other words, the fp product was higher than the rational product about half the time and equal the other half, but never lower. So, that lends some justification for stripping an ulp off of the product.
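Here's a rough Python sketch of that comparison (the actual test is in Rust; `fractions.Fraction` plays the role of the arbitrary-precision rational, and this does the "convert the fp product to a rational" variant rather than converting the rational product back to `f64` -- the ranges and iteration count are arbitrary):

```python
from fractions import Fraction
import random

random.seed(1)  # arbitrary seed, just for reproducibility
high = low = same = 0
for _ in range(100_000):
    quantile = random.random()        # f64 in [0, 1)
    count = random.getrandbits(48)    # stand-in for a u64 count
    fp = Fraction(quantile * count)   # the rounded fp product, taken exactly
    exact = Fraction(quantile) * count  # the exact rational product
    if fp > exact:
        high += 1
    elif fp < exact:
        low += 1
    else:
        same += 1
print("high", high, "low", low, "same", same)
```

Since `Fraction(f)` is exact for any finite `f64`, this variant isolates exactly how the one fp multiply rounded relative to the true product.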
`(quantile.prev() * count).ceil()`, `(quantile * count).ceil()`, and `(quantile * count).prev().ceil()` vs the equivalent calculation with rational arithmetic. All three did about the same with 10m random numbers (different random numbers for each one):
`mult then ceil: high 0 low 1 same 9999999`
`mult, prev, ceil: high 0 low 3 same 9999997`
`prev, mult, ceil: high 0 low 2 same 9999998`
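A rough Python stand-in for that three-way comparison (the original test is Rust; `prev` here is a hypothetical helper modeled with `math.nextafter`, and the seed, count range, and trial count are arbitrary):

```python
from fractions import Fraction
import math
import random

def prev(x: float) -> float:
    # Next representable f64 toward zero, standing in for the chat's prev().
    return math.nextafter(x, 0.0)

random.seed(2)  # arbitrary seed, just for reproducibility
trials = 100_000
agree = {"mult, ceil": 0, "mult, prev, ceil": 0, "prev, mult, ceil": 0}
for _ in range(trials):
    q = random.random()               # quantile in [0, 1)
    c = random.getrandbits(32) + 1    # stand-in for a u64 count
    exact = math.ceil(Fraction(q) * c)  # exact rational reference
    agree["mult, ceil"] += math.ceil(q * c) == exact
    agree["mult, prev, ceil"] += math.ceil(prev(q * c)) == exact
    agree["prev, mult, ceil"] += math.ceil(prev(q) * c) == exact
print(agree)
```

As in the chat's run, mismatches are rare: all three orderings disagree with the rational answer only when rounding happens to cross an integer boundary.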
`quantile = 0.5000000000000001` (1 ulp past 0.5) with a histogram containing `1, 2` produces a value of `1` from `(quantile * count).prev().ceil() as u64`. However, why are we using `ceil()`? Seems like that is just introducing yet another FP opportunity to not express an integer well. If I use `(quantile * count).prev() as u64 + 1`, that value becomes `2`, which is what we want for 1 ulp past 0.5.
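That example can be checked directly in Python (a sketch; `math.nextafter` stands in for `prev()`, and `count = 2` comes from the two-value histogram):

```python
import math

quantile = math.nextafter(0.5, 1.0)  # 0.5000000000000001, one ulp past 0.5
count = 2
product = quantile * count           # 1.0000000000000002, exact in f64

def prev(x: float) -> float:
    # Next representable f64 toward zero, standing in for prev().
    return math.nextafter(x, 0.0)

via_ceil = math.ceil(prev(product))  # prev gives 1.0, ceil keeps it: 1
via_plus1 = int(prev(product)) + 1   # truncate to 1, then add 1: 2
print(via_ceil, via_plus1)           # 1 2
```

The problem is that `prev()` lands exactly on `1.0`, and `ceil()` of an exact integer goes nowhere, so the rank stays one too low.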
`ceil()` will return its input unchanged if it is already an fp integer-ish number.
`1 << 60` in size, you're probably gonna have a bad day precision-wise, and I'm not sure if we can really bound that based on ulps at the scale of the input, which since it's in `[0, 1]` will have little teeny ulps.
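To make the scale concrete (a Python illustration of f64 granularity at those two magnitudes; the numbers below are just properties of IEEE 754 doubles, not from the original code):

```python
import math

# f64 carries 53 significant bits, so near 1 << 60 adjacent floats are
# 2**(60 - 52) = 256 apart and most integers up there aren't representable.
big = 1 << 60
assert float(big + 1) == float(big)  # the +1 is silently rounded away
print(math.ulp(float(big)))          # 256.0

# Meanwhile, ulps of a quantile in [0, 1] really are little and teeny:
print(math.ulp(0.5))                 # 1.1102230246251565e-16, i.e. 2**-53
```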
`.prev()` in there to bias things down, we're unlikely to get a bucket that's too high, but I haven't convinced myself that it couldn't still happen.)