SemEval 2017 sentiment analysis summary
DataStories. Abstract: This paper builds two sentiment analysis systems based on LSTMs with two kinds of attention mechanism on top of word embeddings pre-trained on a large collection of Twitter messages.
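The attention-on-top-of-LSTM idea can be sketched as a pooling step over the hidden states: score each timestep, softmax the scores, and take the weighted sum. This is a generic sketch, not the DataStories architecture itself; `attention_pool`, the parameter `w`, and the toy dimensions are all illustrative assumptions.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_pool(hidden_states, w):
    """Collapse a sequence of LSTM hidden states into one sentence vector.

    hidden_states: (T, d) array, one row per token.
    w: (d,) attention parameter (learned in a real system).
    Returns the attention-weighted sum of the hidden states, shape (d,).
    """
    scores = hidden_states @ w     # one scalar relevance score per timestep
    alphas = softmax(scores)       # normalize scores to a distribution
    return alphas @ hidden_states  # weighted sum of timesteps

# toy usage with random stand-ins for LSTM hidden states
rng = np.random.default_rng(0)
H = rng.normal(size=(5, 8))
w = rng.normal(size=8)
sentence_vec = attention_pool(H, w)
```

With uniform scores the pooling degenerates to mean pooling, which is a useful sanity check on the softmax step.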
Abstract: Compares two variants of retrofitting and a joint-learning approach. Although refining word embeddings with external knowledge is sometimes desirable, it may in other cases be detrimental to downstream performance.
Abstract and Introduction: They introduce a new vector space representation where antonyms lie on opposite sides of a sphere: in the word vector space, synonyms have cosine similarities close to one, while antonyms have cosine similarities close to minus one.
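The stated geometry is easy to verify with cosine similarity. The vectors below are toy stand-ins for learned embeddings, chosen only to illustrate the near-parallel (synonym) and near-antiparallel (antonym) cases; they are not output of the paper's method.

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Toy vectors: a synonym pair points in nearly the same direction,
# an antonym pair in nearly opposite directions on the sphere.
good = np.array([0.9, 0.1, 0.2])
great = np.array([0.8, 0.15, 0.25])  # synonym: nearly parallel, cosine near +1
bad = -good + np.array([0.01, 0.0, 0.0])  # antonym: nearly antiparallel, cosine near -1
```

Checking `cosine(good, great)` and `cosine(good, bad)` shows values near +1 and -1 respectively, matching the property the abstract describes.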
Abstract: A new learning objective that incorporates both a neural language model objective and prior knowledge from semantic resources to learn improved lexical semantic embeddings.
Abstract: They reveal that much of the performance gain of word embeddings is due to certain system design choices, rather than the embedding algorithms themselves.
Abstract: This paper proposes a novel morph-fitting procedure that injects morphological constraints generated using simple language-specific rules. Result: improves estimates for low-frequency words.
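The "simple language-specific rules" can be illustrated with a toy English suffix rule that pairs a word with its inflected forms. The suffix inventory and the helper `morph_pairs` are assumptions for illustration; the paper's actual rules may differ.

```python
def morph_pairs(vocab, suffixes=("s", "ed", "ing")):
    """Generate morphological "attract" constraints from suffix rules.

    Pairs a word with any in-vocabulary form made by appending a suffix,
    e.g. (look, looks), (look, looking). Only a sketch of rule-based
    constraint generation, not the paper's exact procedure.
    """
    vocab = set(vocab)
    pairs = set()
    for w in vocab:
        for suf in suffixes:
            if w + suf in vocab:
                pairs.add((w, w + suf))
    return pairs
```

Such pairs would then be fed to the fitting procedure as constraints that pull the two vectors together, which is how rare inflected forms can inherit information from their more frequent base forms.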
Vocabulary: congruent (全等). Abstract: The Paraphrase Database (PPDB) consists of a list of phrase pairs with heuristic confidence estimates. Its coverage is also necessarily incomplete. They propose models to:
Introduction: This paper proposes a method for refining vector space representations using relational information from semantic lexicons, by encouraging linked words to have similar vector representations.
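A minimal sketch of that refinement step, loosely following the standard iterative retrofitting update (each word is nudged toward the average of its lexicon neighbours while staying anchored to its original vector). Uniform weights are assumed, and `retrofit` with its `alpha`/`beta` parameters is an illustrative simplification, not the paper's exact formulation.

```python
import numpy as np

def retrofit(embeddings, lexicon, iters=10, alpha=1.0, beta=1.0):
    """Refine vectors so lexicon-linked words end up closer together.

    embeddings: dict word -> np.ndarray (original vectors, kept fixed).
    lexicon: dict word -> list of neighbour words.
    alpha weights fidelity to the original vector; beta weights each neighbour.
    """
    new = {w: v.copy() for w, v in embeddings.items()}
    for _ in range(iters):
        for w, neighbours in lexicon.items():
            nbrs = [n for n in neighbours if n in new]
            if not nbrs:
                continue
            # closed-form update for the quadratic objective:
            # minimize alpha*||q_w - q̂_w||^2 + beta*sum_n ||q_w - q_n||^2
            total = alpha * embeddings[w] + beta * sum(new[n] for n in nbrs)
            new[w] = total / (alpha + beta * len(nbrs))
    return new
```

After a few iterations, linked words move toward each other without collapsing onto the same point, since the alpha term keeps each vector tethered to its original position.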
ACL 2016, Representation Learning:
- Context-Dependent Sense Embedding
- Jointly Embedding Knowledge Graphs and Logical Rules
- Learning Connective-based Word Representations for Implicit Discourse Relation Identification
- Learning the Curriculum with Bayesian Optimization for Task-Specific Word Representation Learning
- Pointing the Unknown Words (this paper uses existing information to predict out-of-vocabulary words; worth a look)
- Literal and Metaph