ReproHum: Investigating Reproducibility of Human Evaluations in Natural Language Processing

In this foundational project, our key goals are to develop a methodological framework for testing the reproducibility of human evaluations in NLP, and a multi-lab paradigm for carrying out such tests in practice, including the first study of this kind in NLP. We will (i) systematically diagnose the extent of the human evaluation reproducibility problem in NLP and survey related current work to address it (WP1); (ii) develop the theoretical and methodological underpinnings for reproducibility testing in NLP (WP2); (iii) test the suitability of the shared-task paradigm (uniformly popular across NLP fields) for reproducibility testing (WP3); (iv) create a design for multi-test reproducibility studies, and run the ReproHum study, an international large-scale multi-lab effort conducting 50+ individual, coordinated reproduction attempts on human evaluations in NLP from the past 10 years (WP4); and (v) nurture and build international consensus on how to address the reproducibility crisis, via technical meetings and by growing our international network of researchers (WP5).

Objectives

Testing and quantifying the reproducibility of human evaluations in natural language processing (NLP). ReproHum brings together 20 leading NLP groups from around the world to carry out the first multi-test, multi-lab reproducibility study in NLP.

Link to the project website