THCudaCheck FAIL file=/tmp/luarocks_cutorch-scm-1-1460/cutorch/lib/THC/generic/THCStorage.c line=32 error=39 : uncorrectable ECC error encountered
/torch/install/bin/luajit: cuda runtime error (39) : uncorrectable ECC error encountered at /tmp/luarocks_cutorch-scm-1-1460/cutorch/lib/THC/generic/THCStorage.c:32
Do I have to be cautious with GPU usage? Is there any documentation or known way to handle multiple processes using a GPU?
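Uncorrectable ECC errors usually point to the state of the GPU hardware/driver rather than to OpenNMT itself. As a minimal sketch (assuming a standard nvidia-smi install; the device indices below are only an example), you can inspect the ECC counters and keep concurrent jobs on separate devices:

```sh
# Show ECC error counters for all GPUs (volatile and aggregate counts).
nvidia-smi -q -d ECC

# Check which processes are currently using each GPU.
nvidia-smi

# Keep separate training jobs on separate devices so they do not
# compete for memory on one card. With CUDA_VISIBLE_DEVICES set,
# -gpuid 1 refers to the first *visible* device.
CUDA_VISIBLE_DEVICES=0 th ./train.lua ... -gpuid 1
CUDA_VISIBLE_DEVICES=1 th ./train.lua ... -gpuid 1
```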
t is concatenated to the word embedding input at t + 1. cf. https://arxiv.org/abs/1508.04025
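To make that concrete (my own notation, not the paper's exact equations): writing $\tilde{h}_t$ for the attentional vector at step $t$ and $e_{t+1}$ for the word embedding fed to the decoder at step $t+1$, input feeding means the decoder input is the concatenation

$$x_{t+1} = [\, e_{t+1} \,;\, \tilde{h}_t \,],$$

so the network is kept aware of its previous alignment decisions (Luong et al., 2015).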
The brnn_merge option only applies to bidirectional layers, which the rnn encoder type does not have.
-encoder_type brnn -layers 4 -rnn_size 1000.
-layers 4 -rnn_size 1000 -encoder_type brnn -word_vec_size 600 for the next training run (ENJA), with 4M segments, and Epoch 1 has been training for ~2 days. Maybe that is OK, but it also takes a lot of GPU memory, and it constantly crashes with "out of memory" errors.
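If memory is the bottleneck, a common first step is to lower the batch size (or the model size) rather than the data. A rough sketch, assuming the -max_batch_size option is available in your OpenNMT-lua version (check th train.lua -h) and with placeholder data/model paths:

```sh
# Same model shape as above, but a smaller batch size to reduce
# peak GPU memory; data and model names are placeholders.
th train.lua -data data/demo-train.t7 -save_model demo-model \
   -encoder_type brnn -layers 4 -rnn_size 1000 -word_vec_size 600 \
   -max_batch_size 32 -gpuid 1
```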
-update_vocab merge. If the vocabulary is merged, I guess I should also merge the BPE models to use when translating, right?
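For reference, continuing from an existing checkpoint with a merged vocabulary would look roughly like this (file names are placeholders, and the -train_from/-update_vocab usage is how I understand the OpenNMT-lua options; double-check against your version):

```sh
# Re-run preprocessing on the new data to build the new dictionaries,
# then continue training from the old checkpoint, merging vocabularies.
th preprocess.lua -train_src new-src.txt -train_tgt new-tgt.txt \
   -valid_src valid-src.txt -valid_tgt valid-tgt.txt -save_data data/new
th train.lua -data data/new-train.t7 -save_model new-model \
   -train_from old-model.t7 -update_vocab merge -gpuid 1
```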
THCudaCheck FAIL file=/tmp/luarocks_cunn-scm-1-1394/cunn/lib/THCUNN/generic/SoftMax.cu line=72 error=48 : no kernel image is available for execution on the device
/torch/install/bin/luajit: /torch/install/share/lua/5.1/nn/THNN.lua:110: cuda runtime error (48) : no kernel image is available for execution on the device at /tmp/luarocks_cunn-scm-1-1394/cunn/lib/THCUNN/generic/SoftMax.cu:72
after
[07/06/18 11:09:23 INFO] Preparing memory optimization...?
I run training on a machine with 4 Tesla V100 GPUs and can't work around this issue. I run training using
th ./train.lua ... -gpuid 1 2 3 4.
CUDA_VISIBLE_DEVICES=0,1,2,3 th ./train.lua ... -gpuid 1 2 3 4, but for the life of me, it still says there is no NCCL even though I installed it from the NVIDIA site, both the Ubuntu 16.04 and the OS-agnostic versions (for CUDA 9.0, though I have CUDA 9.1 installed - but there is no NCCL for 9.1 :().
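In case it helps, the "no nccl" warning usually means the Torch binding cannot find libnccl at runtime, not that multi-GPU training is impossible. A sketch of what I would try (the library path is an example, and the luarocks package name assumes the nccl.torch binding):

```sh
# Make the NCCL shared library visible to Torch at runtime
# (adjust the path to wherever libnccl.so was installed).
export LD_LIBRARY_PATH=/usr/local/nccl/lib:$LD_LIBRARY_PATH

# Install the Torch binding for NCCL (nccl.torch).
luarocks install nccl

# Then launch training on all four GPUs.
CUDA_VISIBLE_DEVICES=0,1,2,3 th ./train.lua ... -gpuid 1 2 3 4
```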
Does <unk> tag replacement work in that case?
-phrase_table option that can be used for this: for any <unk> token in the target, the corresponding source token is looked up in the table to find a translation. Other approaches include splitting names into characters and letting the model learn the translation or, more commonly, replacing entities with placeholder tokens and having a separate post-processing step to replace these placeholders.
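As a rough sketch of how that looks in practice (the file format below, one source|||target pair per line, is how I understand the -phrase_table option; the entries and file names are made up, so check the translate.lua docs for your version):

```sh
# phrase_table.txt - hypothetical entries, one "source|||target" pair per line:
#   München|||Munich
#   Donau|||Danube

# Translate with <unk> replacement backed by the phrase table.
th translate.lua -model model.t7 -src input.txt -output output.txt \
   -replace_unk -phrase_table phrase_table.txt
```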