Hi @pmineiro, thank you for the clarification. I haven't had the chance to fully play with VW yet because (for now) I just want to do something very basic (with no context), so I went with TS, which was simple to understand and to code.
Drawing the distribution is the solution I'm currently using to see how the variation evolves over time, and the only way I've found so far is, like you mentioned, to simulate with different parameters to see how many impressions are needed to reach, for example, 95% confidence.
But I was wondering if there are some general formulas that would apply to any MAB algorithm.
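Here is roughly the kind of simulation I mean, just a minimal Beta-Bernoulli Thompson Sampling sketch outside of VW; the conversion rates, the 95% threshold, and the check interval are all made up:

import numpy as np

rng = np.random.default_rng(0)
true_rates = [0.04, 0.05]            # assumed (unknown) per-arm conversion rates
alpha = np.ones(len(true_rates))     # Beta posterior: 1 + successes
beta = np.ones(len(true_rates))      # Beta posterior: 1 + failures

for impression in range(1, 200001):
    draws = rng.beta(alpha, beta)    # Thompson Sampling: sample each posterior
    arm = int(np.argmax(draws))      # play the arm with the highest sample
    reward = rng.random() < true_rates[arm]
    alpha[arm] += reward
    beta[arm] += 1 - reward
    if impression % 1000 == 0:       # Monte Carlo estimate of P(arm is best)
        samples = rng.beta(alpha, beta, size=(10000, len(true_rates)))
        p_best = np.bincount(samples.argmax(axis=1), minlength=len(true_rates)) / 10000
        if p_best.max() >= 0.95:
            print("reached 95% confidence after", impression, "impressions")
            break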
--eval is only supported for --cb and not --cb_adf, so the flag is being silently ignored. The difference is between this line in cb_algs.cc and the lack of something similar in cb_adf.cc.
vw --cb_explore_adf --bootstrap 100 -d train.dat, and to get confidence intervals out on the final PVL, add --progress 1, that is, the loss of each training example.
Hi everybody,
I'm new here! It's been a few years since I've used VW so I'm really glad I have found this community :) I'm currently writing the second edition of my book Data Science at the Command Line and VW will play a big role in Chapter 9: Modeling Data. I'm also working on the Data Science Toolbox which will include VW and many other command-line tools.
I was wondering, when installing VW via pip, is the command-line tool vw also installed? The documentation seems to suggest so, but I'm unable to locate it. I'm on Ubuntu.
Thanks,
Jeroen
pip will not install the command-line tool as far as I know. https://vowpalwabbit.org/start.html has info about how to get the C++/command-line tool by building from source (or brew on macOS). Please feel free to reach out to me if you have any more questions!
Hi all, I’m a newbie to contextual bandits and learning to use VW.
Could anyone help me understand whether I'm using it correctly?
Problem: I have a few hundred thousand historical data points that I want to use to learn a warm-start model. I saw there are some tutorials in the wiki showing how to use the CLI, but I wonder if I can use the Python version in this way, assuming the data has already been formatted:
import vowpalwabbit.pyvw as pyvw

vw = pyvw.vw("--cb 20 -q UA --cb_type ips")
for example in historical_data:   # historical_data holds the pre-formatted CB lines
    vw.learn(example)
my questions are:
1) Is this the correct way to warm-start the model?
2) If so, what probability should I use for each training instance? If the logging policy is deterministic, I guess it would be 1.0?
3) For exploitation/exploration after having this initial model, can I save the policy and then apply --cb_explore 20 -q UA --cb_type ips --epsilon 0.2 -i cb.model to continue the learning? (A rough sketch of what I have in mind is below.)
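Something like this is what I have in mind; the file name, the feature names, and the two example lines are just placeholders I made up:

import vowpalwabbit.pyvw as pyvw

# made-up pre-formatted examples: action:cost:probability | features
# (probability 1.0 because the logging policy was deterministic)
historical_data = [
    "3:0.0:1.0 |U age=25 region=us |A article=7",
    "12:-1.0:1.0 |U age=33 region=uk |A article=2",
]

# 1) + 2): warm start offline from the logged data
warm = pyvw.vw("--cb 20 -q UA --cb_type ips")
for example in historical_data:
    warm.learn(example)
warm.save("cb.model")

# 3): continue with exploration, starting from the saved weights
online = pyvw.vw("--cb_explore 20 -q UA --cb_type ips --epsilon 0.2 -i cb.model")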
Thanks for the help in advance!
Hi guys, I am working on a project similar to a news recommendation engine, which predicts the most relevant articles given a user feature vector. I wanted to use VW's contextual bandits for this.
I have tried using VW, but it seems that VW only outputs a single action per trial. Instead, I want some sort of ranking mechanism so that I can get the top k articles per trial.
Is there any way to use VW for such a use case?
I have asked this question on Stack Overflow as well. (https://stackoverflow.com/questions/63635815/how-to-learn-to-rank-using-vowpal-wabbits-contextual-bandit )
Thanks in Advance.
Hi! Thanks to the VW authors for the CCB support, I'm finding it very useful!
Quick question: how is offline policy evaluation handled for CCBs in VW? IPS, DM, something else? I was wondering if there is a paper I can read about this. I was looking into https://arxiv.org/abs/1605.04812 but wasn't sure this estimator is the one VW uses specifically for CCBs.
@pmineiro excellent, thanks for the response.
A second question: let's say I have collected bandit data from several policies deployed to production one after the other, i.e., thought of as a whole it is nonstationary.
Can I use all of the logged data to train a new policy, even though it was generated by X different policies? If so, are IPS/DM/DR all acceptable choices, or do they break down against nonstationary logged data?
How about offline evaluation of a policy? This paper https://arxiv.org/pdf/1210.4862.pdf suggests that IPS can't be used; is explore_eval the right option?
What I'm looking for is the "correct" way for a data scientist to offline test & learn new policies, possibly with different exploration strategies, using as much data as possible from N previous deployments with N different policies. The same question also applies to automatic retraining of policies on new data as part of a production system; I'm unsure of the "proper" way to do it.
Nice, thanks! I've used the Personalizer service, just curious as to how it works under the hood. So with IPS & DM it's ok to train a model on logged dataset A -> deploy the model -> collect logged data B -> train on A+B -> repeat with an ever-growing dataset (roughly the loop sketched below)?
What is the purpose of explore_eval then?
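(By "repeat with an ever-growing dataset" I mean roughly the loop below; the flags, the file name, and the dr cost estimator are just my placeholders.)

import vowpalwabbit.pyvw as pyvw

def retrain(logged_batches):
    # fresh off-policy learner over everything logged so far (A, then A+B, ...)
    model = pyvw.vw("--cb_adf --cb_type dr --quiet")
    for batch in logged_batches:
        for example in batch:       # each example is a multiline cb_adf record
            model.learn(example)
    model.save("latest_policy.model")
    return model

# after each deployment, append its logged examples and retrain on the union:
logged_batches = []
# logged_batches.append(batch_a); retrain(logged_batches)
# logged_batches.append(batch_b); retrain(logged_batches)   # trains on A+B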
Good day, Vowpal Community, @all
We wanted to switch our contextual bandit models from the epsilon-greedy approach to the online cover approach. However, when we ran this simple snippet of code (see below) to check how online cover would perform for us, the result was not as expected.
import vowpalwabbit.pyvw as pyvw
data_train = ["1:0:0.5 |features a b", "2:-1:0.5 |features a c", "2:0:0.5 |features b c",
"1:-2:0.5 |features b d", "2:0:0.5 |features a d", "1:0:0.5 |features a c d",
"1:-1:0.5 |features a c", "2:-1:0.5 |features a c"]
data_test = ["|features a b", "|features a b"]
model1 = pyvw.vw(cb_explore=2, cover=10, save_resume=True)
for data in data_train:
    model1.learn(data)
model1.save("saved_model.model")
model2 = pyvw.vw(i="saved_model.model")

# predict on the test data twice in a row (no learning), with both models
for data in data_test:
    print(data)
    print(model1.predict(data))
    print(model2.predict(data))
for data in data_test:
    print(data)
    print(model1.predict(data))
    print(model2.predict(data))
Output for this snippet was like this:
|features a b
[0.75, 0.25]
[0.5, 0.5]
|features a b
[0.7642977237701416, 0.2357022762298584]
[0.5, 0.5]
|features a b
[0.7763931751251221, 0.22360679507255554]
[0.5, 0.5]
|features a b
[0.7867993116378784, 0.21320071816444397]
[0.5917516946792603, 0.40824827551841736]
For some reason, the model2 initialized from the saved model does not seem to produce results influenced by the loaded weights (it starts with a uniform distribution over the two actions). Moreover, even though no learning happened for model1 or model2 on the test dataset, the predicted probabilities changed over time for both models. Is this expected behavior for the online cover approach? If so, could you please point me to any documentation or article where I could find an explanation of why this happens?
explore_eval is to estimate the online performance of a learning algorithm as it learns, but using an off-policy dataset. It's different from evaluating or learning a policy over an off-policy dataset, because you have to account for the change in information revealed to the algorithm as a result of making different decisions. As such, it is far less data efficient, but sometimes necessary. One use case is to evaluate exploration strategies offline, hence the name.
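For instance, a rough sketch (not an official recipe) of comparing two epsilon values offline: run explore_eval over the same logged CB data once per setting and compare the reported losses. The logged example, the epsilons, and the loss readout below are my assumptions:

import vowpalwabbit.pyvw as pyvw

# one made-up logged interaction in cb_adf format: a shared context line, then
# one line per action; the chosen action carries action:cost:probability as
# logged by the production policy
logged = [
    [
        "shared |User country=us hour=9",
        "0:0.0:0.6 |Action article=sports",
        "|Action article=politics",
        "|Action article=finance",
    ],
]

for eps in (0.1, 0.2):
    evaluator = pyvw.vw(f"--explore_eval --cb_explore_adf --epsilon {eps} --quiet")
    for example in logged:
        evaluator.learn(example)
    # average loss tracked by vw over the (rejection-sampled) examples
    print("epsilon", eps, "average loss",
          evaluator.get_sum_loss() / max(evaluator.get_weighted_examples(), 1.0))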
@pmineiro thanks. So just to be clear, let's say I have logged bandit data and want to know whether an epsilon-greedy algorithm at 10% or 20% would be better. Do I: 1) run explore_eval over the logged data once with each epsilon and compare the estimates, or 2) simply run --cb_explore with each epsilon and compare the reported losses?
As far as I can tell I should be using explore_eval, which is why I'm wondering what the use case for the second option is, i.e. comparing different exploration algorithms by simply comparing the losses of the respective --cb_explore experiments. Is there any situation where this is a valid approach?
You never run --cb_explore (or --cb_explore_adf) on an offline CB dataset without --explore_eval. You only run --cb_explore either 1) online, i.e., acting in the real world, 2) offline with a supervised dataset and --cbify (to simulate #1), or 3) offline with --explore_eval and an offline CB dataset (to simulate #1). Nothing else is coherent.