We propose a novel positional encoding for learning graphs with Transformer
architectures. Existing approaches either linearize a graph to encode absolute
positions in the resulting sequence of nodes, or encode relative positions
between pairs of nodes using bias terms. The former loses preciseness of rela