Article 423
  • Pablo Gamallo
  • Machine Learning and Knowledge Extraction, 2018

Using the Outlier Detection Task to Evaluate Distributional Semantic Models

In this article, we define the outlier detection task and use it to compare neural-based word embeddings with transparent count-based distributional representations. Using the English Wikipedia as a text source to train the models, we observed that embeddings outperform count-based representations when their contexts are made up of bags of words. However, there are no sharp differences between the two models when the word contexts are defined as syntactic dependencies. In general, syntax-based models tend to perform better than those based on bags of words for this specific task. Analogous experiments carried out for Portuguese yielded similar results. The test datasets we have created for the outlier detection task in English and Portuguese are freely available.
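The outlier detection task mentioned above is typically scored as follows: given a small set of semantically related words plus one intruder, rank each word by its average similarity to the rest and flag the lowest-scoring word as the outlier. A minimal sketch of this setup, using hand-made toy vectors rather than any of the trained models described in the article:

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two dense vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def detect_outlier(vectors):
    """Given {word: vector}, return the word whose mean cosine
    similarity to the other words is lowest (the presumed outlier)."""
    words = list(vectors)
    compactness = {}
    for w in words:
        sims = [cosine(vectors[w], vectors[o]) for o in words if o != w]
        compactness[w] = sum(sims) / len(sims)
    return min(compactness, key=compactness.get)

# Toy vectors, invented purely for illustration: three similar
# "fruit" directions and one dissimilar "vehicle" direction.
toy = {
    "apple":  np.array([1.0, 0.1, 0.0]),
    "pear":   np.array([0.9, 0.2, 0.1]),
    "banana": np.array([1.0, 0.0, 0.2]),
    "truck":  np.array([0.0, 1.0, 0.9]),
}
print(detect_outlier(toy))  # prints "truck"
```

In the evaluation itself, the toy vectors would be replaced by vectors from the compared models (neural embeddings or count-based representations), and accuracy is measured by how often the true intruder receives the lowest compactness score.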
Keywords: distributional semantics, dependency analysis, evaluation, word similarity