I believe there could be an argument over interpretation. Although it's a mouthful, I think using the term "inverse probability" is helpful. I'm also in favor of calling it a lower bound--where a lower bound on inverse probability equates with an upper bound on probability. It's saying that "at the time of Extract on a context, we believed the sampling rate, a.k.a. inverse probability, was no less than the indicated value."
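To make the equivalence concrete, here is a minimal sketch; the function name is illustrative, not from any SDK:

```python
def probability_upper_bound(inverse_probability_lower_bound: float) -> float:
    """A lower bound of N on inverse probability (a 1-in-N sampling rate)
    is the same statement as an upper bound of 1/N on probability."""
    return 1.0 / inverse_probability_lower_bound

# A context propagating "inverse probability >= 4" asserts that, at Extract
# time, each item was believed to be kept with probability at most 1-in-4.
assert probability_upper_bound(4) == 0.25
```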
I say this because some sampling schemes are a bit speculative about what is kept-- I'm thinking of reservoir sampling approaches.
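Reservoir sampling illustrates why "kept" is speculative: an item admitted to the reservoir may be evicted later, and its final inclusion probability (k/n) is only known once the stream ends. A sketch of Algorithm R, for illustration only:

```python
import random

def reservoir_sample(stream, k, seed=0):
    """Algorithm R: keep a uniform sample of k items from a stream of
    unknown length. Admission is speculative: an item in the reservoir
    can be replaced by a later arrival."""
    rng = random.Random(seed)
    reservoir = []
    n = 0
    for item in stream:
        n += 1
        if len(reservoir) < k:
            reservoir.append(item)
        else:
            j = rng.randrange(n)  # uniform in [0, n)
            if j < k:
                reservoir[j] = item
    # The per-item inclusion probability is k/n, known only at the end;
    # return the sample and its inverse probability (n/k).
    return reservoir, (n / k if n >= k else 1.0)

sample, inv_p = reservoir_sample(range(100), 10)
# Each surviving item stands in for n/k = 10 items.
```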
The Sampler is required to re-evaluate its decision when new attributes are set.
My position is (1) that callers ought to be able to tell whether a span operation will have no effect without a lazy interface, (2) that UpdateName should not exist, while SetName is OK, and (3) that Sampler should be considered a "head" sampler.
The Sampler decision informs whether a SpanData will be built and processed. The span processors can all implement their own sampling designs after the decision is made to build a SpanData, and these will each be recorded with different sampling rates. It's in this setting that I consider the propagated sampling rate to be a lower bound--it's the result of a head-sampling decision to build a span or trace based on the initial conditions, whereas the span or trace could eventually be recorded with a higher sampling rate if it survives (through random chance) some sort of selection process.