In transformer-based neural machine translation (NMT), the positional
encoding mechanism helps the self-attention network learn source
representations with order dependency, which enables Transformer-based NMT
to achieve state-of-the-art results on various translation tasks. How