  • Javier González Corbelle, José María Alonso Moral, Alberto Bugarín Diz
  • 1st Workshop on Evaluating NLG Evaluation (EvalNLGEval), co-located with the 13th International Conference on Natural Language Generation (INLG). Dublin, Ireland. 2020

A proof of concept on triangular test evaluation for Natural Language Generation

The evaluation of Natural Language Generation (NLG) systems has recently aroused much interest in the research community, since it must address several challenging aspects, such as the readability of the generated texts, their adequacy for the user in a particular context and moment, and linguistic quality issues (e.g., correctness, coherence, understandability), among others. In this paper, we propose a novel technique for evaluating NLG systems that is inspired by the triangular test used in the field of sensory analysis. This technique allows us to compare two texts produced by different subjects and i) determine whether human evaluators detect statistically significant differences between them, and ii) quantify to what extent the number of evaluators affects the sensitivity of the results. As a proof of concept, we apply this evaluation technique to a real use case in the field of meteorology, showing the advantages and disadvantages of our proposal.
Keywords: NLG evaluation, sensory analysis, triangular test
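In a standard triangular test, each evaluator receives three samples, two from one source and one from the other, and must identify the odd one out; under the null hypothesis of no perceivable difference, the chance of a correct identification is 1/3, so significance can be assessed with a one-sided binomial test. The sketch below illustrates this general statistic (it is not taken from the paper, and the example numbers are hypothetical):

```python
from math import comb

def triangular_test_pvalue(n_evaluators: int, n_correct: int,
                           p_chance: float = 1 / 3) -> float:
    """One-sided binomial p-value for a triangular test: the probability
    of observing at least n_correct correct identifications of the odd
    sample if evaluators were guessing at random (chance level 1/3)."""
    return sum(
        comb(n_evaluators, k) * p_chance**k * (1 - p_chance)**(n_evaluators - k)
        for k in range(n_correct, n_evaluators + 1)
    )

# Hypothetical example: 30 evaluators, 16 correctly pick the odd text.
p = triangular_test_pvalue(30, 16)
# A small p suggests the two text sources are perceivably different;
# with more evaluators, the same proportion correct yields a smaller p,
# which is how panel size drives the sensitivity of the test.
```

This also makes the role of panel size concrete: comparing `triangular_test_pvalue(15, 8)` with `triangular_test_pvalue(30, 16)` shows that the same success rate becomes more significant as the number of evaluators grows.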