In Deep Learning for Natural Language Processing, supervised text generation models are usually trained by minimizing the cross-entropy loss (i.e., an error measure) between the ground-truth sequence and the predicted sequence. However, when we tackle an unsupervised text generation task, we may …
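To make the supervised objective concrete, here is a minimal sketch of token-level cross-entropy training, assuming PyTorch; the batch size, sequence length, vocabulary size, and random logits are illustrative placeholders, not the post's actual model.

```python
# Minimal sketch (assuming PyTorch) of the standard supervised objective:
# token-level cross-entropy between predicted logits and ground-truth tokens.
import torch
import torch.nn.functional as F

vocab_size, seq_len, batch_size = 1000, 12, 4  # illustrative sizes

# Stand-in for model outputs: one logit vector per target position.
logits = torch.randn(batch_size, seq_len, vocab_size, requires_grad=True)
# Ground-truth token ids for each position in the target sequence.
targets = torch.randint(0, vocab_size, (batch_size, seq_len))

# F.cross_entropy expects (N, C) logits and (N,) targets, so flatten
# the batch and time dimensions and average over all tokens.
loss = F.cross_entropy(logits.view(-1, vocab_size), targets.view(-1))
loss.backward()  # gradients flow back to the (stand-in) model outputs
```

In practice, padding positions are typically excluded from the average via the `ignore_index` argument of `F.cross_entropy`.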

September 9, 2019
by Léo Laugier