ReproHum: Investigating Reproducibility of Human Evaluations in Natural Language Processing

In this foundational project, our key goals are to develop a methodological framework for testing the reproducibility of human evaluations in NLP and a multi-lab paradigm for carrying out such tests in practice, and to conduct the first study of this kind in NLP. We will (i) systematically diagnose the extent of the human evaluation reproducibility problem in NLP and survey current work that addresses it (WP1); (ii) develop the theoretical and methodological underpinnings of reproducibility testing in NLP (WP2); (iii) test the suitability of the shared-task paradigm (uniformly popular across NLP fields) for reproducibility testing (WP3); (iv) create a design for multi-test reproducibility studies and run the ReproHum study, an international, large-scale, multi-lab effort conducting 50+ individual, coordinated attempts to reproduce human evaluations in NLP from the past 10 years (WP4); and (v) nurture and build international consensus on how to address the reproducibility crisis, via technical meetings and by growing our international network of researchers (WP5).

Objectives

To test and quantify the reproducibility of human evaluations in Natural Language Processing (NLP). ReproHum is a consortium of 20 internationally leading NLP groups that will carry out the first multi-lab, multi-test reproducibility study in NLP.

Link to the project website