Disentangling 20 years of confusion: the need for standards in human evaluation

Human assessment remains the most trusted form of evaluation in natural language generation, as in other areas of NLP, but there is huge variation in both what is assessed and how it is assessed. We recently surveyed 20 years of publications in the NLG community to better understand this variation, and we conclude that the community needs to work together to develop clear standards for human evaluations.

Research Fellow in Natural Language Generation

Dave Howcroft is a computational linguist working at Edinburgh Napier University.