@pmineiro In this case, should we ignore reward signals after a window has expired, or should we still process them, trusting that the central limit theorem will help us achieve accuracy over time as we observe more events?
I'm not trying to be salty, but there's no CLT issue here. When you update VW, you are saying "for this context I observed this reward". If you do it again, you are saying "I happened to observe the exact same context again, but this time I got this other reward". So the best estimate after that is the average of the first and second reward, which is probably not what you want. With respect to the time limit, if you define reward as, e.g., "1 if a click within 30 minutes of presentation else 0", then what happens after 30 minutes is irrelevant.
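In code, that reward definition is just something like this (a small sketch; the function name and the unix-seconds timestamp convention are mine):

# Sketch of the "1 if a click within 30 minutes of presentation else 0" reward.
WINDOW_SECONDS = 30 * 60

def click_reward(presented_at, click_times):
    """Return 1 if any click landed within the window after presentation, else 0."""
    return int(any(0 <= t - presented_at <= WINDOW_SECONDS for t in click_times))

# Example: a click 10 minutes after presentation counts, one 2 hours later doesn't.
print(click_reward(0, [600]))    # 1
print(click_reward(0, [7200]))   # 0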
@wmelton yeah, just to be clear:
If you have a Bernoulli bandit, what some people do is record +1 trials and update the posterior when an arm is pulled, and only record +1 successes when they get a reward for that pull. In a context-free setting this is sort of OK and will be kind of eventually consistent. I've done this before, primarily because it saves me from keeping track of pulls that get zero reward and assigning those explicitly. It isn't "correct", however: in bandit settings you should learn when the reward is available, not do such a half-step. But it's a practical compromise.
In contextual bandits, and in VW, doing this will fail because of the issue @pmineiro mentioned. The way to overcome this is to keep track of all predictions and their context in some DB or memory store and learn only when a reward arrives for a particular prediction/context, or a suitable amount of time has passed such that you can assume zero reward and learn on that.
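As a rough sketch of that join (pure illustration: the dict standing in for a DB/memory store, the names, and the --cb_explore 2 setup are mine, not a recommended architecture):

import time
from vowpalwabbit import pyvw

# Store every prediction; learn only when its reward arrives or the window expires.
REWARD_WINDOW = 30 * 60           # after this, assume zero reward (cost 0)

vw_model = pyvw.vw("--cb_explore 2 --epsilon 0.1 --quiet")
pending = {}                      # event_id -> (action, probability, context, timestamp)

def record_decision(event_id, action, prob, context):
    # Called at prediction time: remember what we showed, with what probability.
    pending[event_id] = (action, prob, context, time.time())

def on_reward(event_id, cost):
    # Called when the reward for an earlier decision arrives
    # (VW wants a cost, so e.g. reward 1 -> cost -1).
    action, prob, context, _ = pending.pop(event_id)
    vw_model.learn(f"{action}:{cost}:{prob} | {context}")

def flush_expired(now=None):
    # Called periodically: assume zero reward for decisions older than the window.
    now = time.time() if now is None else now
    expired = [k for k, v in pending.items() if now - v[3] > REWARD_WINDOW]
    for event_id in expired:
        action, prob, context, _ = pending.pop(event_id)
        vw_model.learn(f"{action}:0:{prob} | {context}")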
If anyone has any comments on this message I posted I'd be very grateful:
@pmineiro thanks for your patience answering all my questions. I did a quick sanity check: I'd expect that explore_eval with 100% exploration, against a "world" that never changes and where exactly half of the actions are positive (-1 cost) and half negative (+1), would report an estimated average loss of 0, but that's not the case. I'm not sure if this is due to some systemic bias, because in this particular case --cb_explore_adf reports the loss I'd expect. I made an issue but I'm not sure if it's a bug or intended behaviour: VowpalWabbit/vowpal_wabbit#2621
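A rough sketch of the kind of dataset I mean: a stationary two-action world where one action always has cost -1 and the other +1, logged uniformly at random (file name, feature names and the exact flags below are just illustrative):

import random

# Build a logged CB (ADF) dataset: action_0 always costs -1, action_1 always
# costs +1, logged by a uniform-random (100% exploration) policy, so the
# logging policy's true average cost is 0.
random.seed(0)
with open("sanity_check.dat", "w") as f:
    for _ in range(10000):
        chosen = random.randint(0, 1)       # uniform logging, probability 0.5
        cost = -1 if chosen == 0 else 1
        lines = ["shared | user_feature"]
        for action in range(2):
            label = f"0:{cost}:0.5 " if action == chosen else ""
            lines.append(f"{label}| action_{action}")
        f.write("\n".join(lines) + "\n\n")

# I then compare something like:
#   vw --explore_eval --epsilon 1.0 -d sanity_check.dat
# against:
#   vw --cb_explore_adf --epsilon 1.0 -d sanity_check.dat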
@maxpagels_twitter: you definitely do not ever run --cb_explore (or --cb_explore_adf) on an offline CB dataset without --explore_eval. You only run --cb_explore either 1) online, i.e., acting in the real world, 2) offline with a supervised dataset and --cbify (to simulate #1), or 3) offline with --explore_eval and an offline CB dataset (to simulate #1). Nothing else is coherent.
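For concreteness, the three setups might look roughly like this via pyvw (the epsilon values, class count, feature names and example strings are placeholders, not a recommendation):

from vowpalwabbit import pyvw

# 1) online: act in the real world and learn as real rewards come back
online_policy = pyvw.vw("--cb_explore_adf --epsilon 0.1 --quiet")

# 2) offline with a supervised multiclass dataset and --cbify, simulating 1)
simulator = pyvw.vw("--cbify 3 --epsilon 0.1 --quiet")
simulator.learn("2 | feature_a feature_b")   # ordinary multiclass example

# 3) offline with a logged CB dataset and --explore_eval, simulating 1)
evaluator = pyvw.vw("--explore_eval --epsilon 0.1 --quiet")
logged = evaluator.parse("shared | user_feature\n0:-1:0.5 | action_one\n| action_two")
evaluator.learn(logged)
evaluator.finish_example(logged)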
@maxpagels_twitter I think Paul was referring to your question here
In contextual bandits, and in VW, doing this will fail because of the issue @pmineiro mentioned. The way to overcome this is to keep track of all predictions and their context in some DB or memory store and learn only when a reward arrives for a particular prediction/context, or a suitable amount of time has passed such that you can assume zero reward and learn on that.
This join operation is done for you by Azure Personalizer (https://azure.microsoft.com/en-us/services/cognitive-services/personalizer/). We've done presentations and workshops at AI NextConn conferences where we show the detailed dataflow diagram, maybe you can find one of those ... or you could just use APS.
More questions: why, in cb_explore_adf with epsilon set to 0.0, do I see probability distributions with values other than 0.0 or 1.0? This only happens at the start of a dataset:
maxpagels@MacBook-Pro:~$ vw --cb_explore_adf test --epsilon 0.0
Num weight bits = 18
learning rate = 0.5
initial_t = 0
power_t = 0.5
using no cache
Reading datafile = test
num sources = 1
average since example example current current current
loss last counter weight label predict features
0.666667 0.666667 1 1.0 known 0:0.333333... 6
0.833333 1.000000 2 2.0 known 1:0.5... 6
0.416667 0.000000 4 4.0 known 2:1... 6
0.208333 0.000000 8 8.0 known 2:1... 6
0.104167 0.000000 16 16.0 known 2:1... 6
0.052083 0.000000 32 32.0 known 2:1... 6
0.026042 0.000000 64 64.0 known 2:1... 6
0.013021 0.000000 128 128.0 known 2:1... 6
0.006510 0.000000 256 256.0 known 2:1... 6
finished run
number of examples = 486
weighted example sum = 486.000000
weighted label sum = 0.000000
average loss = 0.003429
total feature number = 4374
maxpagels@MacBook-Pro:~$
All examples have the same number of arms (3), and on different datasets I see the same thing at the start of the data. One large dataset I have takes some 20,000 examples before giving correct probabilities. --first works as expected, but not --epsilon, which at 0.0 exploration should be greedy, i.e. the probability vector should have one value of 1.0 and the rest 0.0.
With vw --cb_explore_adf, is there a command line argument to make the policy class decision trees?
Hi @darlwen,
The stack of reductions for every vw run is defined by two things:
1) The DAG of dependencies defined in the setup function of every reduction,
e.g. here:
https://github.com/VowpalWabbit/vowpal_wabbit/blob/b8732ffec3f8c7150dace1c41434bf3cdb4d8436/vowpalwabbit/cb_explore_adf_greedy.cc#L96
if the cb_explore_adf reduction is included, we also include the cb_adf one.
2) The topological order here: https://github.com/VowpalWabbit/vowpal_wabbit/blob/b8732ffec3f8c7150dace1c41434bf3cdb4d8436/vowpalwabbit/parse_args.cc#L1246
So the final reduction stack for each vw run is the sub-stack of 2) that contains:
1) reductions that you explicitly provided on your command line,
2) reductions defined in the input model file (if any),
3) reductions populated as dependencies.
In your case, ccb_explore_adf and ftrl are provided explicitly by you; the others are populated as dependencies:
ccb_explore_adf -> cb_sample
ccb_explore_adf -> cb_explore_adf_greedy -> cb_adf -> csoaa_ldf
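Purely as an illustration of that selection logic (this is not VW source code; the names, order and dependency table are just lifted from the example above):

# Illustrative only: close the requested reductions under their dependencies,
# then keep the topological order to get the final stack.
TOPOLOGICAL_ORDER = ["ftrl", "csoaa_ldf", "cb_adf",
                     "cb_explore_adf_greedy", "cb_sample", "ccb_explore_adf"]
DEPENDENCIES = {
    "ccb_explore_adf": ["cb_sample", "cb_explore_adf_greedy"],
    "cb_explore_adf_greedy": ["cb_adf"],
    "cb_adf": ["csoaa_ldf"],
}

def final_stack(requested):
    enabled = set(requested)
    changed = True
    while changed:
        changed = False
        for r in list(enabled):
            for dep in DEPENDENCIES.get(r, []):
                if dep not in enabled:
                    enabled.add(dep)
                    changed = True
    return [r for r in TOPOLOGICAL_ORDER if r in enabled]

print(final_stack(["ccb_explore_adf", "ftrl"]))
# ['ftrl', 'csoaa_ldf', 'cb_adf', 'cb_explore_adf_greedy', 'cb_sample', 'ccb_explore_adf']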
Thanks @ataymano, much clearer now. In VW::LEARNER::base_learner* setup_base(options_i& options, vw& all), when it enters the following logic,
else
{
  all.enabled_reductions.push_back(std::get<0>(setup_func));
  return base;
}
my understanding is that it won't do auto setup_func = all.reduction_stack.top(); any more. For example, when we get "ftrl_setup" it enters the else branch, so how does it make the rest of the reductions (scorer, ccb_explore_adf etc.) enabled?
I can't get a pyvw.vw object to process a data file when I instantiate it with a --data argument. Based on this fairly recent s.o. answer https://stackoverflow.com/a/62876763, my understanding is that it should do just that, but I am not having any luck. I'm using vw version 8.9.0; did something change in a recent release? I have confirmed that using the same options from the command line works, so I don't think I'm doing something obviously wrong like using a wrong file name.
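Roughly what I'm doing (the file name is a placeholder):

from vowpalwabbit import pyvw

# I expected this constructor to read and train on the file right away, the
# same way `vw --data train.dat` does from the command line.
model = pyvw.vw("--data train.dat")
model.finish()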
This is the only thing I've found that describes the implementation for csoaa: http://users.umiacs.umd.edu/~hal/tmp/multiclassVW.html. As I read it, that means csc-based bandit methods set the cost of each action to the reported cost/probability, or 0 if the cost is not reported (c(a) = cost/probability * I(observed action = a)). That's unbiased if the probabilities are correct, but usually high variance.
If so, is it reasonable to think of ips and mtr as essentially the same, except that ips uses cost * I(action = observed action)/probability as the target and 1 as the weight, while mtr uses cost as the target and I(action = observed action)/probability as the weight?
And is the reported average loss just mean(cost * I(observed action = predicted action) / probability), or something more sophisticated, like https://arxiv.org/abs/1210?
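To make the estimator I mean concrete, a tiny sketch (my own notation, not VW internals):

import numpy as np

# IPS-style value estimate of a policy from logged (action, cost, probability)
# tuples plus the policy's predicted action for each logged context.
def ips_estimate(observed_actions, costs, probabilities, predicted_actions):
    observed = np.asarray(observed_actions)
    costs = np.asarray(costs, dtype=float)
    probs = np.asarray(probabilities, dtype=float)
    predicted = np.asarray(predicted_actions)
    matches = (observed == predicted).astype(float)
    return float(np.mean(costs * matches / probs))

# Example: 4 logged rounds, uniform logging over 2 actions, policy always picks action 0.
print(ips_estimate([0, 1, 0, 1], [-1, 1, -1, 1], [0.5] * 4, [0, 0, 0, 0]))  # -1.0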