`maximum_decoding_length` in the parameters: https://opennmt.net/OpenNMT-tf/configuration.html. Maybe `length_penalty` can also help.
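For example, the decoding section of the run configuration could look like this (the values below are illustrative, not recommendations; see the linked configuration page for the full list of options):

```yaml
# Decoding options in the OpenNMT-tf run configuration.
# Values are illustrative only.
params:
  beam_width: 5
  # Hard cap on the number of decoded tokens per sentence.
  maximum_decoding_length: 250
  # Favors longer hypotheses during beam search when > 0.
  length_penalty: 0.2
```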
`replace_unknown_target` uses the model attention to select the corresponding source token. However, it is well known that Transformer attention usually cannot be interpreted as target-to-source alignments. You should either constrain the attention to behave like an alignment or use subword tokenization (like SentencePiece) to avoid UNK tokens altogether. Note that the UNK token does not appear in the vocabulary file but is automatically added when the training starts.
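If you go the SentencePiece route, a sketch of the configuration could look like the following (the file names and the SentencePiece model path are hypothetical, and the exact keys should be checked against the tokenization documentation):

```yaml
# Run configuration: reference a tokenization file for both sides
# (tokenizer.yml and sp.model are hypothetical names).
data:
  source_tokenization: tokenizer.yml
  target_tokenization: tokenizer.yml
---
# tokenizer.yml: apply a trained SentencePiece model.
type: OpenNMTTokenizer
params:
  mode: none
  sp_model_path: sp.model
```

With subword units, rare words are split into pieces that are all in the vocabulary, so no UNK is produced and `replace_unknown_target` is no longer needed.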