Recent works on word representations mostly rely on predictive models.
Distributed word representations (aka word embeddings) are trained to optimally
predict the contexts in which the corresponding words tend to appear. Such
models have succeeded in capturing word similarities as well as semantic and
syntactic regularities. Instead, we aim at reviving intere
This paper studies applications of distributed text representations, comparing word-vector-based representations with context-based sentence vectors on a range of classification problems. The results show that contextual representations based on ELMo, Universal Sentence Encoder, Neural-Net Language Model, and FLAIR perform best, improving classification accuracy by 2-4% over word-vector-based representations.