Workshops, etc., on Evaluating Text Quality

Over the past year (writing in April 2020) I have become increasingly interested in how we conduct evaluations in the NLG community. We generally agree that human evaluation is the best tool we’ve got, but we don’t even agree on what we want to measure (e.g. clarity, readability, fluency, naturalness; adequacy, informativeness, truthfulness). As part of my efforts to dig into this topic with Verena Rieser, I’ve been learning more about the different workshops and tasks that have been run over the years.

This page serves as an archive of these resources, primarily for myself, but stored publicly in case it is helpful for others as well. The list is (necessarily) incomplete, because I am adding things as I remember them, but if something important is missing, please let me know!

Unless otherwise specified, these focus on the evaluation of text or speech in a dialect of English.

(Emphasis in quotes added by me.)

Current Events

  • CHAVAL is a JSALT summer workshop aimed at both natural language understanding (NLU) and automated evaluation for open-domain spoken dialogue systems. (NB: I am involved with this.)
  • Eval4NLP focuses on “the [design] of adequate metrics for evaluating performance in high-level text generation tasks…; properly evaluating word and sentence embeddings; and rigorously determining whether and under which conditions one system is better than another; etc.”
  • eval.gen.chal is a mailing list that grew out of the SIGGEN community following INLG 2019, bringing together folks interested in organizing Evaluating Generation Challenges (the abbreviation’s a bit wonky…). I am leading this group, and we’d love to have more active participants working to make NLG evaluation more consistent and meaningful.
  • EVALITA “promote[s] the development of language and speech technologies for the Italian language, providing a shared framework where different systems and approaches can be evaluated in a consistent manner.”

Past Events

Datasets

  • NLG system outputs with evaluation data
  • WMT & GEC post-edit data
  • Data from digital humanities, or from survey & information delivery design?

Other Resources
