    Amir
    @amir-abdi
    @spMohanty the total execution time does not take into account the time spent waiting in the queue for evaluation to start. Consequently, I believe my latest submission went over the time limit.
    @spMohanty is it possible for you to check this and compensate for the wasted time if that was the case?
    SP Mohanty
    @spMohanty
    @amir-abdi: Will do!
    And @mseitzer: v1.2 is being used on the evaluator now.
    Loris Michel
    @lorismichel
    due to the versioning of libs as pointed out by @mseitzer?
    Thanks a lot for your help in advance.
    Loris Michel
    @lorismichel

    CUDA out of memory. Tried to allocate 6.22 GiB (GPU 0; 11.17 GiB total capacity; 6.70 GiB already allocated; 4.17 GiB free; 4.19 MiB cached) (malloc at /opt/conda/conda-bld/pytorch_1556653099582/work/c10/cuda/CUDACachingAllocator.cpp:267)
    frame #0: c10::Error::Error(c10::SourceLocation, std::string const&) + 0x45 (0x7f13a088fdc5 in /srv/conda/envs/notebook/lib/python3.6/site-packages/torch/lib/libc10.so)
    frame #1: <unknown function> + 0x16ca7 (0x7f13a044dca7 in /srv/conda/envs/notebook/lib/python3.6/site-packages/torch/lib/libc10_cuda.so)
    frame #2: <unknown function> + 0x17347 (0x7f13a044e347 in /srv/conda/envs/notebook/lib/python3.6/site-packages/torch/lib/libc10_cuda.so)
    frame #3: THCStorage_resize + 0x96 (0x7f137180e706 in /srv/conda/envs/notebook/lib/python3.6/site-packages/torch/lib/libcaffe2_gpu.so)
    frame #4: at::native::empty_strided_cuda(c10::ArrayRef<long>, c10::ArrayRef<long>, c10::TensorOptions const&) + 0x4f1 (0x7f1372fa5851 in /srv/conda/envs/notebook/lib/python3.6/site-packages/torch/lib/libcaffe2_gpu.so)
    frame #5: at::CUDAType::empty_strided(c10::ArrayRef<long>, c10::ArrayRef<long>, c10::TensorOptions const&) const + 0x1b4 (0x7f13716ba244 in /srv/conda/envs/notebook/lib/python3.6/site-packages/torch/lib/libcaffe2_gpu.so)
    frame #6: at::TensorIterator::allocate_outputs() + 0x526 (0x7f136db06f66 in /srv/conda/envs/notebook/lib/python3.6/site-packages/torch/lib/libcaffe2.so)
    frame #7: at::TensorIterator::Builder::build() + 0x48 (0x7f136db075e8 in /srv/conda/envs/notebook/lib/python3.6/site-packages/torch/lib/libcaffe2.so)
    frame #8: at::TensorIterator::binary_op(at::Tensor&, at::Tensor const&, at::Tensor const&) + 0x31f (0x7f136db0843f in /srv/conda/envs/notebook/lib/python3.6/site-packages/torch/lib/libcaffe2.so)
    frame #9: <unknown function> + 0x629d09 (0x7f136d966d09 in /srv/conda/envs/notebook/lib/python3.6/site-packages/torch/lib/libcaffe2.so)
    frame #10: at::native::threshold(at::Tensor const&, c10::Scalar, c10::Scalar) + 0x3d (0x7f136d96757d in /srv/conda/envs/notebook/lib/python3.6/site-packages/torch/lib/libcaffe2.so)
    frame #11: at::TypeDefault::threshold(at::Tensor const&, c10::Scalar, c10::Scalar) const + 0x6d (0x7f136ddc87ad in /srv/conda/envs/notebook/lib/python3.6/site-packages/torch/lib/libcaffe2.so)
    frame #12: at::native::relu(at::Tensor const&) + 0x5f (0x7f136d96577f in /srv/conda/envs/notebook/lib/python3.6/site-packages/torch/lib/libcaffe2.so)
    frame #13: at::CUDAType::relu(at::Tensor const&) const + 0xc2 (0x7f1371756212 in /srv/conda/envs/notebook/lib/python3.6/site-packages/torch/lib/libcaffe2_gpu.so)
    frame #14: torch::autograd::VariableType::relu(at::Tensor const&) const + 0x479 (0x7f1365e06619 in /srv/conda/envs/notebook/lib/python3.6/site-packages/torch/lib/libtorch.so.1)
    frame #15: <unknown function> + 0xa1d09d (0x7f13663cf09d in /srv/conda/envs/notebook/lib/python3.6/site-packages/torch/lib/libtorch.so.1)
    frame #16: <unknown function> + 0xa73df8 (0x7f1366425df8 in /srv/conda/envs/notebook/lib/python3.6/site-packages/torch/lib/libtorch.so.1)
    frame #17: torch::jit::InterpreterState::run(std::vector<c10::IValue, std::allocator<c10::IValue> >&) + 0x22 (0x7f1366421372 in /srv/conda/envs/notebook/lib/python3.6/site-packages/torch/lib/libtorch.so.1)
    frame #18: <unknown function> + 0xa5b2d9 (0x7f136640d2d9 in /srv/conda/envs/notebook/lib/python3.6/site-packages/torch/lib/libtorch.so.1)
    frame #19: <unknown function> + 0x457f18 (0x7f13a0f00f18 in /srv/conda/envs/notebook/lib/python3.6/site-packages/torch/lib/libtorch_python.so)
    frame #20: <unknown function> + 0x12ce4a (0x7f13a0bd5e4a in /srv/conda/envs/notebook/lib/python3.6/site-packages/torch/lib/libtorch_python.so)

    <omitting python frames>
    :
    operation failed in interpreter:
    bias = _11.bias
    _12 = _0.head_mu
    weight0 = _12.weight
    bias0 = _12.bias
    input0 = torch._convolution(input, _3, _4, [2, 2], [1, 1], [1, 1], False, [0, 0], 1, False, False, True)
    input1 = torch.relu(input0)
    input2 = torch._convolution(input1, _6, _7, [1, 1], [1, 1], [1, 1], False, [0, 0], 1, False, False, True)
    input3 = torch.relu(input2)
    input4 = torch._convol

    Here is the error message I get when trying to evaluate the FactorVAE metric. It looks like too much memory is being used, but the model is not overly big...
    SP Mohanty
    @spMohanty
    @lorismichel: This seems to be because of the large batch size used in the FactorVAE evaluation metric. If you look at the v1.2 release (on GitHub: google-research/disentanglement-lib), you'll see the new release addresses this problem.
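    A minimal sketch of the idea behind that kind of fix (the helper name compute_representations and its arguments are illustrative, not the actual disentanglement-lib API): encode the observations in smaller mini-batches instead of one large batch, so no single forward pass has to allocate several GiB at once as in the traceback above.

    import torch

    def compute_representations(model, observations, batch_size=64, device="cuda"):
        # Encode observations in mini-batches so no single allocation
        # has to hold activations for the full set at once.
        chunks = []
        with torch.no_grad():
            for start in range(0, observations.shape[0], batch_size):
                batch = observations[start:start + batch_size].to(device)
                chunks.append(model(batch).cpu())  # move results off the GPU right away
        return torch.cat(chunks, dim=0)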
    Loris Michel
    @lorismichel
    thanks @spMohanty
    mseitzer
    @mseitzer
    @spMohanty Could you check what happened at https://gitlab.aicrowd.com/mseitzer/disentanglement-challenge/issues/9 please? I get "HTTPSConnectionPool(host='gitlab.aicrowd.com', port=443): Read timed out."
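    For context, "Read timed out" is the message the Python requests/urllib3 stack produces when the server accepts the connection but does not reply within the read timeout. A purely illustrative reproduction (the URL and timeout values are placeholders, not anyone's submission code):

    import requests

    try:
        # (connect timeout, read timeout) in seconds; a slow or overloaded
        # server triggers ReadTimeout once the read timeout elapses.
        requests.get("https://gitlab.aicrowd.com", timeout=(5, 10))
    except requests.exceptions.ReadTimeout as err:
        print(err)  # e.g. HTTPSConnectionPool(host='gitlab.aicrowd.com', port=443): Read timed out.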
    Insu Jeon
    @InsuJeon
    Hi, @spMohanty. My model submission has had the "waiting_in_queue_for_evaluation" label for almost 16 hours. The model training stage finished long ago, but the wait for evaluation seems a bit too long. Is this normal and okay? https://gitlab.aicrowd.com/isjeon/neurips2019_disentanglement_challenge_starter_kit/issues/3
    SP Mohanty
    @spMohanty
    @InsuJeon: The queue is clogged because of the many submissions, and your submission hasn't started being evaluated yet.
    We are increasing capacity soon.
    Insu Jeon
    @InsuJeon
    @spMohanty Thank you! :)
    Sourabh Balgi
    @sobalgi
    @spMohanty My submission shows training in progress and then fails after some time: https://gitlab.aicrowd.com/sourabh_balgi/neurips2019_disentanglement_challenge_starter_kit/issues/5
    @spMohanty Any logs available for debugging?
    Amir
    @amir-abdi
    @spMohanty any feedback on why this failed? https://gitlab.aicrowd.com/amirabdi/disentanglement/issues/65
    And
    Amir
    @amir-abdi
    The evaluation for the following issue never started; yet, I was told that this is an overtime problem, which doesn't sound right. Please double-check. Thanks.
    ShabnamGh
    @ShabnamGh
    @spMohanty Would you please help with this submission: https://gitlab.aicrowd.com/Shab7nam/neurips2019_disentanglement_challenge_shabnam/issues/5. Training is in progress, but there is a failed message for the evaluation!
    Shivam Khandelwal
    @skbly7
    Hi all,
    AIcrowdHQ has moved to Discord for IM. We will be archiving this channel shortly.
    Join link: https://discord.gg/3jwn25E