Recently, neural language representation models pre-trained on large corpora
have been shown to capture rich co-occurrence information and can be fine-tuned
on downstream tasks to improve performance. As a result, they have achieved
state-of-the-art results in a wide range of language tasks. However,
such models do not explicitly represent the relational knowledge that knowledge graphs (KGs) store as structured facts. This paper introduces KGs and the integration of contextual information with relational knowledge, discussing the limitations of triple-based KGs and the advantages of contextualized KGs. It proposes KGR$^3$, a paradigm that leverages large language models (LLMs) for KG reasoning. Experiments show that KGR$^3$ significantly improves performance on KG completion and KG question answering tasks, validating the effectiveness of integrating contextual information into KG representation and reasoning.
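As background for the terms used above, the following is a minimal sketch of a triple-based KG and the KG-completion task it supports. The entity and relation names are illustrative examples, and this is not the KGR$^3$ method itself, only a toy illustration of the underlying data structure:

```python
# A triple-based KG is a set of (head, relation, tail) facts.
# Names below are illustrative, not from any specific benchmark.
kg = {
    ("Paris", "capital_of", "France"),
    ("Berlin", "capital_of", "Germany"),
    ("France", "located_in", "Europe"),
}

def complete_tail(kg, head, relation):
    """KG completion: given a query (head, relation, ?),
    return the candidate tail entities, sorted for determinism."""
    return sorted(tail for h, r, tail in kg if h == head and r == relation)

print(complete_tail(kg, "Paris", "capital_of"))  # ['France']
```

Real KG-completion systems score unseen candidate triples rather than looking up stored ones; the point here is only the shape of the data a triple-based KG exposes, which the contextualized KGs discussed above enrich with surrounding textual context.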