Is Language Modeling Enough?
Evaluating Effective Embedding Combinations
Rudolf Schneider, Tom Oberhauser, Paul Grundmann, Felix Alexander Gers, Alexander Löser and Steffen Staab
Resources: PubMedSection Dataset | GitHub: Dataset Creation | GitHub: Extended SentEval | PDF
Abstract
Universal embeddings, such as BERT or ELMo, are useful for a broad set of natural language processing tasks like text classification or sentiment analysis. Specialized embeddings also exist for tasks like topic modeling or named entity disambiguation. We study whether we can complement these universal embeddings with specialized embeddings. We conduct an in-depth evaluation of nine well-known natural language understanding tasks with SentEval. We also extend SentEval with two additional tasks from the medical domain and present PubMedSection, a novel topic classification dataset focused on the biomedical domain. Our comprehensive analysis covers 11 tasks and combinations of six embeddings. We report that combined embeddings outperform state-of-the-art universal embeddings without any embedding fine-tuning. We observe that adding topic-model-based embeddings helps for most tasks and that differing pre-training tasks encode complementary features. Moreover, we present new state-of-the-art results on the MPQA and SUBJ tasks in SentEval.
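The evaluation setup sketched in the abstract can be illustrated with a short example. The snippet below is not the authors' code: it assumes the SentEval toolkit and its task data are installed locally (the path PATH_TO_SENTEVAL_DATA is a placeholder), and it uses two hypothetical encoders, embed_universal and embed_topic, standing in for a universal embedding (e.g. BERT) and a specialized, topic-model-based embedding. The two representations are combined by simple vector concatenation, and SentEval trains its standard classifier on top, without fine-tuning either embedding.

import numpy as np
import senteval

PATH_TO_SENTEVAL_DATA = 'SentEval/data'  # assumption: local path to the SentEval task data

def embed_universal(sentences):
    # Placeholder for a universal sentence encoder (e.g. mean-pooled BERT states).
    return np.random.rand(len(sentences), 768).astype(np.float32)

def embed_topic(sentences):
    # Placeholder for a specialized, topic-model-based sentence representation.
    return np.random.rand(len(sentences), 100).astype(np.float32)

def prepare(params, samples):
    # No vocabulary building is needed for this sketch.
    return

def batcher(params, batch):
    # SentEval passes a batch of tokenized sentences; join tokens back into strings.
    sentences = [' '.join(tokens) if tokens else '.' for tokens in batch]
    # Combine the two embedding spaces by concatenation; SentEval then trains its
    # classifier on the combined vectors.
    return np.hstack([embed_universal(sentences), embed_topic(sentences)])

params = {'task_path': PATH_TO_SENTEVAL_DATA, 'usepytorch': True, 'kfold': 10,
          'classifier': {'nhid': 0, 'optim': 'adam', 'batch_size': 64,
                         'tensorboard': False, 'epoch_size': 4}}
se = senteval.engine.SE(params, batcher, prepare)
results = se.eval(['SUBJ', 'MPQA'])  # two of the eleven tasks evaluated in the paper
print(results)

Replacing the placeholder encoders with real embedding models reproduces the kind of combination the paper evaluates; concatenation keeps the underlying embeddings frozen, which matches the "without any embedding fine-tuning" claim above.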
How to cite
@InProceedings{schneider-etal-2020-language,
author = {Schneider, Rudolf and Oberhauser, Tom and Grundmann, Paul and Gers, Felix Alexander and Loeser, Alexander and Staab, Steffen},
title = {Is Language Modeling Enough? Evaluating Effective Embedding Combinations},
booktitle = {Proceedings of The 12th Language Resources and Evaluation Conference},
month = {May},
year = {2020},
address = {Marseille, France},
publisher = {European Language Resources Association},
pages = {4741--4750},
url = {https://www.aclweb.org/anthology/2020.lrec-1.583}
}