Suvir Mirchandani, Fei Xia, Pete Florence, Brian Ichter, Danny Driess...
TL;DR: Pre-trained language models can be used, without additional training, for pattern completion, sequence modeling, robot control, and related tasks.
Abstract
We observe that pre-trained large language models (LLMs) are capable of autoregressively completing complex token sequences -- from arbitrary ones procedurally generated by probabilistic context-free grammars (PCFG), to richer spatial patterns found in the Abstraction and Reasoning Corpus (ARC), a general AI benchmark, prompted in the style of ASCII art. Surprisingly, pattern completion proficiency can be partially retained even when the sequences are expressed using tokens randomly sampled from the vocabulary. These results suggest that, without any additional training, LLMs can serve as general sequence modelers, driven by in-context learning. In this work, we investigate how these zero-shot capabilities may be applied to problems in robotics -- from extrapolating sequences of numbers that represent states over time to complete simple motions, to least-to-most prompting of reward-conditioned trajectories that can discover and represent closed-loop policies (e.g., a stabilizing controller for the CartPole benchmark). While difficult to deploy today for real systems due to latency, context size limitations, and compute costs, the approach of using LLMs to drive low-level control may provide an exciting glimpse into how the patterns among words could be transferred to actions.
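As a rough illustration of the prompting setup the abstract describes, the sketch below feeds an LLM a few-shot sequence-completion prompt and reads off its continuation. This is not the authors' code: the model choice (gpt2 as a small stand-in) and the grid-as-tokens prompt format are assumptions for illustration only.

```python
# Minimal sketch of in-context pattern completion with an off-the-shelf LM.
# NOTE: illustrative only -- the paper uses much larger models; gpt2 here is
# just a convenient stand-in, and the prompt format is an assumption.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

# Few-shot prompt: each line maps an input row to an output row, expressed
# as plain tokens (the paper prompts ARC grids in an ASCII-art style).
prompt = (
    "0 0 1 -> 1 0 0\n"
    "0 1 0 -> 0 1 0\n"
    "1 0 0 -> "
)

# Greedy decoding; the completion after the prompt is the model's guess
# at the pattern's continuation.
completion = generator(prompt, max_new_tokens=8, do_sample=False)
print(completion[0]["generated_text"][len(prompt):])
```

The same setup extends to the robotics experiments the abstract mentions: states or rewards are serialized as token sequences, and the model's continuation is decoded back into numbers or actions.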