    raubitsj
    @raubitsj
    @dspoka, we released a new version of wandb with a fix which should make the duplicate parameters much less likely in your case, but there is likely one more fix needed in our cloud service to prevent this from happening in all cases.
    The version with the fix is 0.8.8. Run pip install --upgrade wandb to get the latest version.
    Tommaso
    @tommy9114
    hi guys, I was checking how to rerun an old test, but I cannot find the branch in my local git repo... If I understood correctly, wandb should store a new branch for each new run with config and data, is that right? Also, in the overview of the run there is a git command to retrieve the code to reproduce the run, but as I already said, there is no such branch in my local repo. What am I missing? Thanks
    Chris Van Pelt
    @vanpelt
    Hey @tommy9114, wandb stores a pointer to the commit that the experiment was run with. The command in the run overview page will create a local branch from the git pointer we store, i.e. git checkout -b "atomic-jazz-6" 92205f5e5b49f7ffa80a3655c6c51e94ca962582 creates a local branch named atomic-jazz-6 from that commit id.
    Tommaso
    @tommy9114
    I see, so it does not do it automatically, I have to do it myself, correct?
    Chris Van Pelt
    @vanpelt
    There's a wandb restore RUN_ID command that automates this.
    Tommaso
    @tommy9114
    Thanks! I was also interested in understanding whether it's possible to do this from the Python API, so that when I run some code it also automatically saves the run in a new branch/commit
    Michael Hsieh
    @Microsheep
    Hi~
    There was previously a dropdown to quickly select the steps with plotly plots
    We were using those plots to visualize certain metrics, but we don't add them at every single step
    Currently it is not shown anymore, which makes it really difficult to pinpoint the steps with plots using the slider, and typing in or calculating the numbers is a pain because those plots might not be at step 100, 200, ... but maybe at 160, 480, ...
    I think this is just a bug, because the dropdown still exists in the popup to add custom visualizations
    Michael Hsieh
    @Microsheep

    Also, after switching to the new UI (which looks great BTW)
    There is a bug in the tab for models: there is no way to scroll left/right anymore
    For large models or custom models with a lot of blocks containing lots of parameters, the text in the Type column gets cut off
    A quick fix might just be to add back the scrolling, but I think a better way to show the column is to format the text in it

    For example,

    BasicConv1d( (conv): Conv1d(32, 64, kernel_size=(5,), stride=(1,)) (bn): BatchNorm1d(64, eps=0.001, momentum=0.1, affine=True, track_running_stats=True) (dropout): Dropout(p=0.0) )

    Could be changed to something like

    BasicConv1d(
        (conv): Conv1d(32, 64, kernel_size=(5,), stride=(1,))
        (bn): BatchNorm1d(64, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
        (dropout): Dropout(p=0.0)
    )

    This really makes a difference, since many of the custom models we have use blocks within blocks via nn.Sequential

    John Qian
    @Xyzrr
    @Microsheep Thanks for letting us know, I'm on it right now
    John Qian
    @Xyzrr
    @Microsheep You should be able to see a cell's full content (formatted correctly) by hovering over it now. Please let me know if there are still problems.
    Michael Hsieh
    @Microsheep
    @Xyzrr Thanks for the quick response!
    I can now see the full content~
    The dropdown problem for plotly plots is still there, but take your time fixing it.
    Vibhav Kunj
    @vibhavk
    dumb question alert: How do I add people to my team on wandb?
    Deyan Ulevinov
    @du210
    Hi all, I am having problems getting the wandb client to log plotly charts. I tried running this vanilla example: wandb.log({'responses': plotly.graph_objects.Scatter(x=[1, 2, 3])}) and I do get a "responses" plot box in the "MEDIA" section of my run's charts, but the box is empty, i.e. there are no points corresponding to [1, 2, 3] there. Any idea what might be going on? Thanks in advance.
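    (Not a fix confirmed in this thread, but for comparison, a hedged sketch that wraps the trace in a full plotly Figure with explicit y values before logging; the project name is hypothetical.)

    import plotly.graph_objects as go
    import wandb

    wandb.init(project="plotly-test")  # hypothetical project name
    # Log a complete Figure (rather than a bare trace) with both x and y values
    fig = go.Figure(data=go.Scatter(x=[1, 2, 3], y=[1, 2, 3]))
    wandb.log({"responses": fig})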
    Deyan Ulevinov
    @du210
    Probably worth pointing out that I've just tried running the matplotlib code from https://docs.wandb.com/docs/log.html#logging-plots and that's working fine.
    Deyan Ulevinov
    @du210
    Python version: 3.6.9 (default, Aug 22 2019, 12:39:52)
    [GCC 4.2.1 Compatible Apple LLVM 10.0.1 (clang-1001.0.46.4)]
    TensorFlow version: 1.14.0
    Keras version: 2.2.4-tf
    wandb version: 0.8.9
    Carey Phelps
    @cvphelps
    @vibhavk go to app.wandb.ai/teams/<your-team-name>
    basil
    @bask0
    Hi - how do I delete an entire project online? Somehow I can't figure it out...
    Deyan Ulevinov
    @du210
    go to https://app.wandb.ai/[username]/[projectname]/overview, hit the three vertical dots in the top right corner, and select "Delete project"
    basil
    @bask0
    Thank you @du210
    psmaragdis
    @psmaragdis
    I'm often having trouble finding the right line when I have too many runs in a single plot. As a suggestion, it would be really helpful to bold (or otherwise highlight) the line that corresponds to the run my mouse pointer is hovering over in the runs table.
    Benjamin
    @bestrauc
    The WandbHook seems to slow down training significantly for me. Does that sound possible? Looking at the code, the WandbHook seems to generate the summaries at every step in before_run (but saves them only every steps_per_log in after_run). The estimator's default SummarySaverHook for instance only generates summaries at the steps when it actually outputs them. Could that be the reason? (I'm not an expert with tf's Estimator framework, though).
    Chris Van Pelt
    @vanpelt
    Hey @bestrauc, the hook calls merge_all, which will pull down the metrics from the graph. The most performant way to log tf summaries is to use our tensorboard integration by adding sync_tensorboard=True; this will just mirror what's already being sent to tensorboard.
    @psmaragdis thanks for the suggestion, we'll add it to the backlog!
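    (A minimal sketch of the sync_tensorboard pattern mentioned above; the project name is hypothetical.)

    import wandb

    # Mirror whatever is already written to tensorboard instead of calling merge_all
    wandb.init(project="my-project", sync_tensorboard=True)  # hypothetical project name

    # ... build and run the estimator / training loop as usual; tf summaries
    # written to the tensorboard logdir are also sent to wandb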
    Benjamin
    @bestrauc
    Thanks @vanpelt, makes sense. I'll use sync_tensorboard then; I initially didn't use it because it complained about my tfexample being on a cloud path (GCS). But I figured I don't need to copy the tfexample files themselves and silenced the warning by using util.get_module("wandb.tensorboard").patch(save=False) explicitly.
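    (For reference, a sketch of the explicit patch call described above, assuming wandb 0.8.x; the project name is hypothetical.)

    import wandb
    from wandb import util

    wandb.init(project="my-project")  # hypothetical project name
    # Mirror tensorboard summaries to wandb without uploading the underlying
    # event files (save=False), which silences the cloud-path warning mentioned above
    util.get_module("wandb.tensorboard").patch(save=False)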
    Pierre Snell
    @Ierezell
    Hi, does anyone have a good solution for running wandb on a compute server that has no regular internet access (only login over ssh)?
    Chris Van Pelt
    @vanpelt
    Hey @Ierezell, as long as the machine can access the internet, it's fine if the machine itself is only accessible over SSH.
    If the machine can't access the internet, you can set WANDB_MODE=dryrun and the results will only be logged to that machine's file system. You can then download the run directory onto a machine that can access the internet and run wandb sync dirname to upload the stats to the cloud.
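    (A minimal sketch of the offline workflow described above; the project name and logged metric are hypothetical.)

    import os
    import wandb

    # On the machine without internet access: log to the local file system only
    os.environ["WANDB_MODE"] = "dryrun"
    wandb.init(project="my-project")  # hypothetical project name
    wandb.log({"loss": 0.42})

    # Later, copy the run directory (under ./wandb) to a machine with internet
    # access and upload it with:  wandb sync <run-directory>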
    Pierre Snell
    @Ierezell
    Thanks a lot! I needed the second solution :D
    Pierre Snell
    @Ierezell
    I'm really sorry to bother you again... but wandb sync will copy the new run online (and create a copy)... If I sync 3 times I will have the same run 3 times. Should I open an issue on GitHub for that, or is it a feature? (It would be nice to just update an existing run rather than create a new one when syncing.) In any case, thanks a lot for this beautiful tool and for your help
    Chris Van Pelt
    @vanpelt
    @Ierezell please file a ticket. Currently sync always creates new runs; we should be able to detect existing runs and just update them. It's complicated, so we don't currently, but it's definitely possible.
    ayhyap
    @ayhyap
    Hey, new here. Something seems to be wrong with my wandb.init(), because all of my runs are getting dumped into Uncategorized with randomly generated names, although I have specified the 'name' and 'project' keyword arguments. Somehow the 'config' argument is recognized. What am I missing?
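    (For reference, the call pattern described above; all values are hypothetical.)

    import wandb

    # 'project' and 'name' should set the target project and the run's display name;
    # 'config' is the argument reported as working above
    wandb.init(project="my-project", name="my-run", config={"lr": 1e-3})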
    Pierre Snell
    @Ierezell
    @vanpelt Okay, understood. I will file an issue on GitHub. And I believe you when you say it's complicated!
    raubitsj
    @raubitsj
    @ayhyap, can you PM me any runs you are having a problem with? It looks like most of your runs are getting named properly.
    ayhyap
    @ayhyap
    @raubitsj I'm manually renaming them afterwards so I don't go crazy
    (and moving them into their own projects)
    but basically all of my runs have had the same issue so far
    Pierre Snell
    @Ierezell
    Hi, sorry to bother you again... (I switched from TensorBoard to W&B and I'm using it intensively). It runs perfectly fine on Linux (my laptop and computing servers), but I needed to run it on Windows (with more trouble) and wandb.init() isn't working correctly. It runs and logs scalars, but not with the name I specified, and it's not saving weights or syncing images... (I ran it with wandb run python my_file.py). Is it my fault or a bug? (I checked GitHub and cannot find a related problem)
    Pierre Snell
    @Ierezell
    Quite often it says that wandb.init was not run (but I definitely call it at the top of my file). I don't know whether the Windows integration is stable now
    raubitsj
    @raubitsj
    @Ierezell Regarding the Windows tensorboard logging, I have seen issues with pathnames on Windows (forward slashes vs back slashes); can you send me (PM me if you would like to keep it private) a run URL where you had this issue?
    And with respect to the message about wandb.init() not being run, that is something I haven't seen. Do you have an example of what type of operation you were trying when you got that message? And can you confirm you were running with wandb run python my_file whenever you ran on Windows? If you have a sample Python file you would like me to look at, I might be able to help. If your program uses multiple processes, you will run into a similar issue.
    Rujikorn Charakorn
    @51616
    Is there any way to plot 2 values in the same figure (e.g. to compare training/test loss)?
    Chris Van Pelt
    @vanpelt
    @51616 if you click the "Pin" button on a chart, you can then edit it using the pencil icon to add additional metrics.
    Bishal Santra
    @bsantraigi
    Hi, I am new here. Found an issue on the wandb web platform. Don't know if this is the right place to raise it. Anyway, the problem is that for the last few days, the line-plot smoothing functionality on plots has stopped working correctly. Earlier it used to make the original plot transparent and the smoothed version fully opaque. But now the transparency is gone and both lines are fully opaque.
    meghabyte
    @meghabyte
    Hi, for several runs I can see the summary variables on the wandb web platform (and when I click 'raw', I can see them in dictionary form). However, when I call "run.summary" in a script for a particular run, it returns an empty dictionary. "run.config" returns as expected. How can I access the summary variables?
    Priyansh Trivedi
    @geraltofrivia
    Hi folks, small organizational query.
    Is there a way to delete a team (no members added yet)? There was a typo in the team name, and it's just sitting there now; no one actually uses it for projects.
    raubitsj
    @raubitsj
    @meghabyte, we have a bug in the current release. If you use run.summary_metrics right now you will get a read-only version of the metrics. We will fix this in the next release.
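    (A sketch of the workaround above using the public API; the run path is hypothetical.)

    import wandb

    api = wandb.Api()
    run = api.run("<entity>/<project>/<run_id>")  # hypothetical run path

    print(run.config)           # returns as expected
    print(run.summary_metrics)  # read-only copy of the summary, per the workaround above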
    raubitsj
    @raubitsj
    @geraltofrivia Let me know the name of the team and I can look into having it deleted; you can private message me the info if you want.
    Rujikorn Charakorn
    @51616
    I ran a training process yesterday and the internet died during training. That caused the process to stop running. Is there any fix for that? Or do I have to turn off wandb sync entirely?