Neural natural language generation (NNLG) systems are known for their pathological outputs, i.e., generating text that is unrelated to the input specification. In this paper, we show the impact of semantic noise on state-of-the-art NNLG models that implement different semantic control mechanisms. We find that cleaned data can improve semantic correctness by up to 97%, while maintaining fluency. We also find that the most common error is omitting information, rather than hallucination.