    Allegra Latimer
    @alatimer
    Hi again all. Does anyone know whether I can use the --bootstrap flag with cb_adf (or with the cb functionality generally)? It would be great to have an idea of the trained model's variance. Aside from taking longer to train, my stdout doesn't seem to change at all with or without the flag. For example, I would like to run a command like vw --cb_explore_adf --bootstrap 100 -d train.dat and get confidence intervals on the final progressive validation loss.
    wangtianyu61
    @wangtianyu61
    Hi all. I am wondering whether the command-line version of vw for online contextual bandits can return the loss at each time step t (rather than just the final progressive validation loss) with a command like vw --cbify <num_classes> -d <dataset path> --epsilon 0.05.
    Allegra Latimer
    @alatimer
    Hi @wangtianyu61 , do you mean just having stdout print the loss for every example? If so, you can use the --progress flag, e.g. vw --cbify N -d data_path --epsilon 0.05 --progress 1
    The column "since last" prints the average loss since the last printout, so with --progress 1 that is the loss of each individual training example.
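    As a rough Python alternative (a sketch only: it assumes the Python bindings expose get_sum_loss(), and the file name and --cbify class count here are made up):

    import vowpalwabbit.pyvw as pyvw

    vw = pyvw.vw("--cbify 10 --epsilon 0.05 --quiet")
    prev_loss = 0.0
    with open("train.dat") as f:
        for line in f:
            vw.learn(line.strip())
            total = vw.get_sum_loss()          # cumulative progressive loss so far
            print("loss on this example:", total - prev_loss)
            prev_loss = total
    vw.finish()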
    wangtianyu61
    @wangtianyu61
    Thanks @alatimer ! That works well.
    Jeroen Janssens
    @jeroenjanssens

    Hi everybody,

    I'm new here! It's been a few years since I've used VW so I'm really glad I have found this community :) I'm currently writing the second edition of my book Data Science at the Command Line and VW will play a big role in Chapter 9: Modeling Data. I'm also working on the Data Science Toolbox which will include VW and many other command-line tools.

    I was wondering, when installing VW via pip, is the command-line tool vw also installed? The documentation seems to suggest so, but I'm unable to locate it. I'm on Ubuntu.

    Thanks,

    Jeroen

    Jack Gerrits
    @jackgerrits
    Hi @jeroenjanssens, welcome back! pip will not install the command line tool as far as I know. https://vowpalwabbit.org/start.html has info about how to get the C++/command line tool by building from source (or brew on MacOS). Please feel free to reach out to me if you have any more questions!
    Jeroen Janssens
    @jeroenjanssens

    Thanks @jackgerrits, that's good to know. Building from source works great on Ubuntu, so I'll just stick to that.

    The Getting Started tutorial assumes that the command-line tool is installed. Would it be a good idea to add a note that clarifies that?

    Jack Gerrits
    @jackgerrits
    There is a prerequisites note on the tutorial page - do you think it needs to be more prominent?
    Jeroen Janssens
    @jeroenjanssens
    I think it would be helpful to mention that the command-line tool is needed for this tutorial and that installing VW via pip is not sufficient. Related thought: would it be possible and desirable to let pip install the command-line tool as well?
    Jack Gerrits
    @jackgerrits
    Okay, I'll make a note to review the wording there. I am not sure whether we want to distribute the CLI as part of the Python package or not, but I agree it would be good to have an easier way to get the CLI executable.
    I created an issue to track here: VowpalWabbit/vowpalwabbit.github.io#153
    Jeroen Janssens
    @jeroenjanssens
    Excellent @jackgerrits!
    pmcvay
    @pmcvay
    On the wiki page, it states that the squared loss is the default loss function for vw. Is this true even for binary classification? I've always thought that using squared loss for binary classification is frowned upon
    Srinath
    @SrinathNair__twitter
    Hi Everyone,
    I have a very basic question. Based on what I have understood, the goal of a contextual bandit algorithm is to find the best policy within a policy class, i.e. the one that provides the maximum average reward over a period of time.
    So, what is the policy class used by Vowpal Wabbit's contextual bandit tool? Is it a neural network, a decision tree, or something else?
    Allegra Latimer
    @alatimer
    Hi @SrinathNair__twitter , try reading the Bake-off paper (https://arxiv.org/abs/1802.04064), it does a good job of explaining VW's CB implementation
    Yiqiang Zhao
    @YiQ-Zhao

    Hi all, I'm a newbie to contextual bandits and learning to use VW.
    Could anyone help me understand whether I'm using it correctly?

    Problem: I have a few hundred thousand historical data points and I want to use them to learn a warm-start model. I saw there are some tutorials in the wiki showing how to use the CLI, but I wonder whether I can use the Python version in this way, assuming the data has already been formatted:

    import vowpalwabbit.pyvw as pyvw

    vw = pyvw.vw("--cb 20 -q UA --cb_type ips")
    for example in historical_data:
        vw.learn(example)

    My questions are:
    1) Is this the correct way to warm-start the model?
    2) If so, what probability should I use for each training instance? If the action choice was deterministic, I guess it would be 1.0?
    3) For exploitation/exploration after having this initial model, can I save the policy and then apply --cb_explore 20 -q UA --cb_type ips --epsilon 0.2 -i cb.model to continue learning?

    Thanks for the help in advance!

    Srinath
    @SrinathNair__twitter

    Hi guys, I am working on a project similar to a news recommendation engine, which predicts the most relevant articles given a user feature vector. I wanted to use VW's contextual bandits for this.
    I have tried using VW, but it seems that VW only outputs a single action per trial. Instead, I want some sort of ranking mechanism so that I can get the top k articles per trial.

    Is there any way to use VW for such a use case?

    I have asked this question on Stack Overflow as well: https://stackoverflow.com/questions/63635815/how-to-learn-to-rank-using-vowpal-wabbits-contextual-bandit
    Thanks in Advance.

    Avighan Majumder
    @AvighanMajumder_twitter
    Is there any good technical literature on the package? Can anyone suggest a good place to look for Vowpal Wabbit material?
    Max Pagels
    @maxpagels_twitter

    Hi! Thanks to VW authors for the CCB support, finding it very useful!

    Quick question: how is offline policy evaluation handled for CCBs in VW? IPS, DM, something else? I was wondering if there is a paper I can read about this. I was looking into https://arxiv.org/abs/1605.04812 but wasn't sure whether that estimator is the one VW uses specifically for CCBs.

    Paul Mineiro
    @pmineiro
    @maxpagels_twitter : re OPE in CCB, great question. CCB currently uses a sum-over-IPS estimate on each slot independently, which is biased (it doesn't account for the effect of earlier actions on subsequent actions). We're investigating alternate strategies, so this might change in another release. The slates estimator you reference is distinct: in slates there is a single reward (not one per slot) and the pseudoinverse does a form of credit assignment. Slates will eventually be released as a distinct feature.
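    A minimal sketch of that per-slot IPS idea (illustrative only, not VW's actual implementation; the record layout and the target_prob signature are hypothetical):

    def per_slot_ips(logged_episodes, target_prob):
        # logged_episodes: list of episodes, each a list of per-slot records
        # (chosen_action, logging_prob, cost); target_prob(slot, record) returns
        # the probability the evaluated policy would pick the logged action there.
        num_slots = len(logged_episodes[0])
        total = 0.0
        for slot in range(num_slots):
            slot_sum = 0.0
            for episode in logged_episodes:
                _action, p_log, cost = episode[slot]
                slot_sum += cost * target_prob(slot, episode[slot]) / p_log
            total += slot_sum / len(logged_episodes)
        # treating slots independently ignores how earlier actions change later
        # ones, which is exactly the bias mentioned above
        return total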
    Max Pagels
    @maxpagels_twitter

    @pmineiro excellent, thanks for the response.

    A second question: let's say I have collected bandit data from several policies deployed to production one after the other; thought of as a whole, the logged data is nonstationary.

    • Can I use all of the logged data to train a new policy, even though it was generated by X different policies? If so, are IPS/DM/DR all acceptable choices, or do they break on nonstationary logged data?

    • How about offline evaluation of a policy? This paper https://arxiv.org/pdf/1210.4862.pdf suggests that IPS can't be used; is explore_eval the right option?

    What I'm looking for is the "correct" way for a data scientist to offline test & learn new policies, possibly with different exploration strategies, using as much data as possible from N previous deployments with N different policies. The same question also applies to automatic retraining of policies on new data as part of a production system; I'm unsure of the "proper" way to do it.

    Paul Mineiro
    @pmineiro
    @maxpagels_twitter : first, regarding offline evaluation: IPS (and DR) forms a martingale, so the estimator is unbiased even if the behaviour policy changes on every decision. The only thing prohibited is the behaviour policy "looking into the future". However, this assumes the world is IID, producing (context, reward vector) pairs, and that the behaviour policy then draws an action a from p(a|x) and reveals r_a. If the world is actually nonstationary, then IPS can be biased even if the behaviour policy is constant. Furthermore, DM is typically biased. Note that unbiasedness isn't everything; a biased estimator can have better overall accuracy.
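    To make the first point concrete, a toy IPS estimator over logged CB data (illustrative only; the record layout is hypothetical). The only thing it needs from each record is the probability the logging policy assigned at the time, so it does not matter that several different behaviour policies produced the log:

    def ips_value(logged, target_prob):
        # logged: iterable of (context, action, cost, p_logged) tuples, where
        # p_logged is the probability the behaviour policy gave the chosen action
        # when it was logged; target_prob(context, action) is the probability the
        # policy being evaluated would choose that action.
        total, n = 0.0, 0
        for context, action, cost, p_logged in logged:
            total += cost * target_prob(context, action) / p_logged
            n += 1
        return total / n  # unbiased under the IID-world assumption discussed above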
    @maxpagels_twitter : second, regarding learning new policies and automatic retraining: Azure Personalizer Service is VW wrapped in a system that does this. It uses an IPS estimator along with counterfactual evaluation to test CB algorithms offline, which supports model selection strategies similar to supervised learning. It's a pain in the butt to get all this right, so just use the product; that's why we made it.
    Max Pagels
    @maxpagels_twitter

    Nice, thanks! I've used the Personalizer service, just curious as to how it works under the hood. So with IPS & DM it's OK to train a model on logged dataset A -> deploy the model -> collect logged data B -> train on A+B -> repeat with an ever-growing dataset?

    What is the purpose of explore_eval then?

    Fedor Shabashev
    @fshabashev
    I wonder if it is possible to use Vowpal Wabbit with a Unix domain socket (file socket) instead of a TCP socket.
    The documentation only describes TCP socket usage, while a file socket would be convenient because I wouldn't have to use a port.
    Diana Omelianchyk
    @omelyanchikd

    Good day, Vowpal community, @all
    We wanted to switch our contextual bandit models from the epsilon-greedy approach to the online cover approach. However, when we ran the simple snippet of code below to check how online cover would perform for us, the result was not as expected.

    import vowpalwabbit.pyvw as pyvw

    data_train = ["1:0:0.5 |features a b", "2:-1:0.5 |features a c", "2:0:0.5 |features b c",
                  "1:-2:0.5 |features b d", "2:0:0.5 |features a d", "1:0:0.5 |features a c d",
                  "1:-1:0.5 |features a c", "2:-1:0.5 |features a c"]
    data_test = ["|features a b", "|features a b"]

    # train a cb_explore model with online cover and save it
    model1 = pyvw.vw(cb_explore=2, cover=10, save_resume=True)
    for data in data_train:
        model1.learn(data)
    model1.save("saved_model.model")

    # load the saved model into a second vw instance
    model2 = pyvw.vw(i="saved_model.model")

    # predict twice on the same test examples; no learning happens here
    for data in data_test:
        print(data)
        print(model1.predict(data))
        print(model2.predict(data))
    for data in data_test:
        print(data)
        print(model1.predict(data))
        print(model2.predict(data))

    Output for this snippet was like this:

    |features a b
    [0.75, 0.25]
    [0.5, 0.5]
    |features a b
    [0.7642977237701416, 0.2357022762298584]
    [0.5, 0.5]
    |features a b
    [0.7763931751251221, 0.22360679507255554]
    [0.5, 0.5]
    |features a b
    [0.7867993116378784, 0.21320071816444397]
    [0.5917516946792603, 0.40824827551841736]

    For some reason, the loaded model2 does not seem to produce results influenced by the saved weights (it starts with a uniform distribution over the two actions). Moreover, even though no learning happens for model1 or model2 on the test dataset, the predicted probabilities change over time for both models. Is this expected behavior for the online cover approach? If so, could you please point me to any documentation or article that explains why this happens?

    Diana Omelianchyk
    @omelyanchikd
    Many thanks in advance :)
    Paul Mineiro
    @pmineiro
    @maxpagels_twitter : the purpose of explore_eval is to estimate the online performance of a learning algorithm as it learns, but using an off-policy dataset. it's different than evaluating or learning a policy over an off-policy dataset, because you have to account for the change in information revealed to the algorithm as the result of making different decisions. as such, it is far less data efficient, but sometimes necessary. one use case is to evaluate exploration strategies offline, hence the name.
    Wes
    @wmelton
    Are there any practical examples in the wild of taking action and context data in JSON format, like that shown in Personalizer's documentation (https://docs.microsoft.com/en-us/azure/cognitive-services/personalizer/concepts-features#actions-represent-a-list-of-options), and converting it to the VW format for use with cb or cb_adf? The VW website example for news recommendation only uses static strings as actions, whereas real-world news recommendations would use article features in the actions to improve decision quality. Appreciate any help/guidance.
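    For what it's worth, a rough sketch of what such a conversion could look like (a sketch only: the Personalizer-style payload, field names, and namespaces are hypothetical; the chosen action's line carries the cost:probability label, as in the cb_adf examples on the wiki):

    # Hypothetical Personalizer-style event: a shared context plus a list of
    # candidate actions, each with its own features.
    event = {
        "context": {"timeofday": "morning", "device": "mobile"},
        "actions": [
            {"id": "article-a", "topic": "politics", "length": "long"},
            {"id": "article-b", "topic": "sports", "length": "short"},
        ],
        "chosen": "article-b",  # action that was actually shown
        "cost": -1.0,           # e.g. -1 for a click, 0 otherwise
        "prob": 0.8,            # probability the logging policy chose it
    }

    def to_vw_adf(event):
        lines = ["shared |User " + " ".join(
            f"{k}={v}" for k, v in event["context"].items())]
        for action in event["actions"]:
            feats = " ".join(f"{k}={v}" for k, v in action.items() if k != "id")
            label = ""
            if action["id"] == event["chosen"]:
                # the label goes on the line of the action that was shown
                label = f"0:{event['cost']}:{event['prob']} "
            lines.append(f"{label}|Action {feats}")
        return "\n".join(lines)

    print(to_vw_adf(event))

    Each event then becomes one multi-line example: separate examples with a blank line when writing a file for the CLI, or pass the list of lines to learn() in the Python bindings.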
    Max Pagels
    @maxpagels_twitter

    @pmineiro thanks. So just to be clear, let's say I have logged bandit data and want to know whether an epsilon-greedy algorithm at 10% or 20% would be better. Do I:

    • use explore_eval for both and choose the one with the best average loss?
    • run vw --cb_explore <n> --epsilon 0.1 and vw --cb_explore <n> --epsilon 0.2 and choose the one with the best average loss?

    As far as I can tell I should be using explore_eval, which is why I'm wondering what the use case for the second option is, i.e. comparing different exploration algorithms by simply comparing the losses of the respective --cb_explore runs. Is there any situation where this is a valid approach?

    Paul Mineiro
    @pmineiro
    @maxpagels_twitter : you definitely do not ever run --cb_explore (or --cb_explore_adf) on an offline CB dataset without --explore_eval. you only run --cb_explore either 1) online, i.e., acting in the real-world, 2) offline with a supervised dataset and --cbify (to simulate #1) or 3) offline with --explore_eval and an offline CB dataset (to simulate #1). nothing else is coherent.
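    As an illustration of option 3, comparing two epsilon values over the same logged dataset might look roughly like this (a sketch: it assumes the vw binary is on the PATH, logged_cb.dat is a hypothetical file of logged cb_adf examples, and the exact flag set may vary by version):

    import subprocess

    # run --explore_eval twice over the same logged bandit data and compare
    # the reported average loss
    for eps in ("0.1", "0.2"):
        result = subprocess.run(
            ["vw", "--explore_eval", "--epsilon", eps, "-d", "logged_cb.dat"],
            capture_output=True, text=True,
        )
        # vw writes its progress table and final average loss to stderr
        print(f"epsilon={eps}")
        print(result.stderr)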
    Wes
    @wmelton
    @pmineiro In online cb scenarios, if you are predicting clicks on 3 pieces of content, why is it necessary to explicitly update the model when no action was taken by the user? In traditional Bayes-Bernoulli approaches, "regret" was implicit because rewards and trials were tracked separately. I'm trying to make the mental shift here. The challenge I see in our current implementation is that if we update the model with, say, a cost of 0 (no action) but the user takes an action shortly after (cost -1), the model now sees the probability as 50%, which seems odd to me. Outside of batch updates (which seem to defeat the purpose of "online" learning), is there a way to tell VW the incremental value of a given prediction so as not to dilute the model?
    Paul Mineiro
    @pmineiro
    @wmelton : the short answer is that by fitting the zeros you are regressing against an unbiased target. the long answer is very long.
    Wes
    @wmelton
    @pmineiro haha, that makes sense. Am I correct in assuming that omitting zero-cost outcomes would reduce performance significantly? Are there any solid papers or videos describing typical real-time data flows for using VW in an RL scenario like this? It seems like, outside of fixed-window batch scenarios, it would be very difficult to do this efficiently.
    Paul Mineiro
    @pmineiro
    @wmelton it's hard to understand your question. In your 3-pieces-of-content recommendation problem, when a user takes no action in response to a piece of content, that content is presumed bad (cost 0) and you need to tell the learning algorithm about it; why is that surprising? Of course you wait for some amount of time before concluding the user has taken no action, and you only update the model once per decision. Azure Personalizer (https://azure.microsoft.com/en-us/services/cognitive-services/personalizer/) parametrizes this delay as the "experimental unit window". I suggest you use that, as the dataflows have all been worked out already.
    Wes
    @wmelton
    @pmineiro I appreciate your help and feedback. We considered using Personalizer but it is exceptionally expensive for a startup. From what you've shared here, I think I now understand the correct way to handle this. Thanks for your time and help! If you have a coffee or beer fund, I'm happy to drop something in there for the help. Thanks!
    Max Pagels
    @maxpagels_twitter
    Regarding @wmelton's question, I think he is wondering because in, e.g., standard Bernoulli bandits with Beta posterior updates, you only need to record trials + successes, and that can technically be done by incrementing a trial count at the point of prediction and incrementing successes only if you get a positive reward. Whereas the algorithms VW uses require you to make a prediction, keep track of the context, and then wait for a positive or negative reward. Both types of feedback are explicitly needed.
    Paul Mineiro
    @pmineiro
    @maxpagels_twitter : ok. in the bandit (no context) case with discrete actions, the model parameters are a (c, n) pair per action. the update is (c += 1 if success else 0, n += 1) ... since the n update is constant you can apply it anytime you want and still get the same answer. you do have to remember what action was taken to be able to apply the "c" update later, so that's the analog of "remembering context". however, by applying the "n update" before you "know c" you are actually creating a pessimistic (over recent trials) estimator by assuming "c = 0 for now", whereas the rest of the technique uses an optimistic estimator (to explore). but counts are "data linear" (e.g., you can just remove some of the c-and-n if you decide later those interactions were lying spammers) whereas in more complicated model spaces we don't know how to be data linear.
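    A toy version of that (c, n) bookkeeping (illustrative only, not VW code):

    # one (successes, trials) pair per action for a context-free Bernoulli bandit
    counts = {action: {"c": 0, "n": 0} for action in ("a1", "a2", "a3")}

    def record_trial(action):
        # bumping n at prediction time, before the reward is known, amounts to
        # assuming "c = 0 for now", i.e. the pessimistic estimate described above
        counts[action]["n"] += 1

    def record_success(action):
        # the c update can arrive later, as long as we remember which action it was
        counts[action]["c"] += 1

    def success_rate(action):
        c, n = counts[action]["c"], counts[action]["n"]
        return c / n if n else 0.0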
    Wes
    @wmelton
    @maxpagels_twitter exactly! Thanks for helpfully asking what I apparently wasn't asking clearly, lol. @pmineiro In this case, should we ignore reward signals after a window has expired, or should we still process them, trusting that the central limit theorem will help us achieve accuracy over time as we observe more events?
    @maxpagels_twitter on a different topic, in your analysis, how are you extracting confidence intervals for prediction accuracy? I haven't yet found in the documentation how to observe model confidence per inference, or in total over time.
    Paul Mineiro
    @pmineiro

    @pmineiro In this case, should we ignore reward signals after a window has expired, or should we still process them, trusting that the central limit theorem will help us achieve accuracy over time as we observe more events?

    I'm not trying to be salty, but there's no CLT issue here. When you update VW, you are saying "for this context I observed this reward". If you do it again, you are saying "I happened to observe the exact same context again, but this time I got this other reward". So the best estimate after that is the average of the first and second reward, which is probably not what you want. With respect to the time limit, if you define reward as, e.g., "1 if a click within 30 minutes of presentation else 0", then what happens after 30 minutes is irrelevant.

    Wes
    @wmelton
    @pmineiro I don't mind saltiness - just here to learn. I referenced the CLT to highlight what, at least in my mind, is the same situation as in the example you gave. At a sufficiently large sample size, errors in reporting reward with perfect accuracy should regress to the mean over time, correct? I may have wrongly assumed this conclusion given my current understanding that features are "shared" across many users in a given model, so I assumed attribution errors would ultimately more or less tend to the mean given a large enough corpus of events. If I'm totally wrong, no sweat, haha. Like I said, just here to learn - trying to make the mental leap from a more traditional non-contextual approach to this one.
    Paul Mineiro
    @pmineiro
    @wmelton using the bandit analogy, if you do 2 vw updates you'll get the equivalent of (n += 2) in the bandit setting. with vw, every time you send in a reward ("c") you get the equivalent of an increment in the number of trials ("n"). so it'll cause you problems.
    Max Pagels
    @maxpagels_twitter

    @wmelton yeah, just to be clear:

    If you have a bernoulli bandit, what some people do is that when an arm is pulled, they record +1 trials and update the posterior, and only when they get a reward for that pull do they update +1 successes. In a context-free setting this is sort of OK and will be kind of eventually consistent. I've done this before, primarily because it saves me from keeping track of pulls that get zero rewards and assigning those explicitly. It isn't "correct", however. In bandit settings you should learn when the reward is available, not do such a half-step. But it's a practical compromise.

    In contextual bandits, and in VW, doing this will fail because of the issue @pmineiro mentioned. The way to overcome this is to keep track of all predictions and their context in some DB or memory store and learn only when a reward arrives for a particular prediction/context, or a suitable amount of time has passed such that you can assume zero reward and learn on that.
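
    A rough sketch of that bookkeeping (assumptions: an in-memory dict stands in for the DB, examples are cb_adf-style lists of text lines, the 30-minute timeout is arbitrary, and learn() in the Python bindings is assumed to accept a list of example lines):

    import time

    pending = {}         # event_id -> (adf_lines, chosen_index, prob, timestamp)
    TIMEOUT_S = 30 * 60  # assume zero reward after 30 minutes

    def on_prediction(event_id, adf_lines, chosen_index, prob):
        # adf_lines: ["shared |User ...", "|Action ...", "|Action ...", ...]
        pending[event_id] = (adf_lines, chosen_index, prob, time.time())

    def on_reward(vw, event_id, cost):
        adf_lines, chosen_index, prob, _ = pending.pop(event_id)
        labeled = list(adf_lines)
        # put the cost:probability label on the line of the action that was shown
        labeled[chosen_index + 1] = f"0:{cost}:{prob} " + labeled[chosen_index + 1]
        vw.learn(labeled)

    def expire_old_events(vw):
        now = time.time()
        for event_id, (_, _, _, ts) in list(pending.items()):
            if now - ts > TIMEOUT_S:
                on_reward(vw, event_id, cost=0.0)  # no reward observed in time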

    @wmelton regarding the second question, I've found no flags directly in VW for this; I've made my own system with bootstrapping.
    Max Pagels
    @maxpagels_twitter

    If anyone has any comments on this message I posted I'd be very grateful:

    @pmineiro thanks for your patience in answering all my questions. I did a quick sanity check: I'd expect that explore_eval with 100% exploration, against a "world" that never changes and where exactly half of the actions are positive (-1 cost) and half negative (+1 cost), would report an estimated average loss of 0, but that's not the case. I'm not sure if this is due to some systematic bias, because in this particular case --cb_explore_adf reports the loss I'd expect. I made an issue but I'm not sure whether it's a bug or intended behaviour: VowpalWabbit/vowpal_wabbit#2621

    olgavrou
    @olgavrou

    @maxpagels_twitter : you definitely do not ever run --cb_explore (or --cb_explore_adf) on an offline CB dataset without --explore_eval. you only run --cb_explore either 1) online, i.e., acting in the real-world, 2) offline with a supervised dataset and --cbify (to simulate #1) or 3) offline with --explore_eval and an offline CB dataset (to simulate #1). nothing else is coherent.

    @maxpagels_twitter I think Paul was referring to your question here

    Max Pagels
    @maxpagels_twitter
    @olgavrou yeah, I already read that and tested explore_eval as suggested, but it gives a loss I wouldn't expect against a uniformly random dataset with exactly as much positive as negative feedback. The reported loss is systematically wrong, which is why I'm wondering whether it's a feature of explore_eval or a bug.