TY - GEN
T1 - Contextual bidirectional long short-term memory recurrent neural network language models: A generative approach to sentiment analysis
T2 - 15th Conference of the European Chapter of the Association for Computational Linguistics, EACL 2017
AU - Mousa, Amr El Desoky
AU - Schuller, Björn
N1 - Publisher Copyright:
© 2017 Association for Computational Linguistics.
PY - 2017
Y1 - 2017
AB - Traditional learning-based approaches to sentiment analysis of written text use the concept of bag-of-words or bag-of-n-grams, where a document is viewed as a set of terms or short combinations of terms, disregarding grammar rules and word order. Novel approaches de-emphasize this concept and view the problem as a sequence classification problem. In this context, recurrent neural networks (RNNs) have achieved significant success. The idea is to use RNNs as discriminative binary classifiers to predict a positive or negative sentiment label at every word position, and then to perform a type of pooling to obtain a sentence-level polarity. Here, we investigate a novel generative approach in which a separate probability distribution is estimated for every sentiment using language models (LMs) based on long short-term memory (LSTM) RNNs. We introduce a novel type of LM using a modified version of the bidirectional LSTM (BLSTM), called the contextual BLSTM (cBLSTM), in which the probability of a word is estimated based on its full left and right contexts. Our approach is compared with a BLSTM binary classifier. Significant improvements are observed in classifying the IMDB movie review dataset, and further improvements are achieved via model combination.
UR - http://www.scopus.com/inward/record.url?scp=85021656130&partnerID=8YFLogxK
U2 - 10.18653/v1/e17-1096
DO - 10.18653/v1/e17-1096
M3 - Conference contribution
AN - SCOPUS:85021656130
T3 - 15th Conference of the European Chapter of the Association for Computational Linguistics, EACL 2017 - Proceedings of the Conference
SP - 1023
EP - 1032
BT - Long Papers - Continued
PB - Association for Computational Linguistics (ACL)
Y2 - 3 April 2017 through 7 April 2017
ER -