Recent works have introduced GNN-to-MLP knowledge distillation (KD)
frameworks that combine the superior performance of GNNs with the fast
inference speed of MLPs. However, existing KD frameworks are primarily designed for node
classification within single graphs, leaving their applicability to