Automatic generation of the complete vocal tract shape from the sequence of phonemes to be articulated
Abstract
Articulatory speech synthesis requires generating realistic vocal tract shapes from the sequence of phonemes to be articulated. This work proposes the first model trained from rt-MRI films to automatically predict the contours of all vocal tract articulators. The data are the contours tracked in the rt-MRI database recorded for one speaker. These contours were used to train an encoder-decoder network to map the sequence of phonemes and their durations to the exact gestures performed by the speaker. Unlike other works, all individual articulator contours are predicted separately, allowing the investigation of their interactions. We measure four tract variables closely coupled with critical articulators and observe their variations over time. The evaluation demonstrates that our model can produce high-quality shapes of the complete vocal tract, with a good correlation between the predicted and the target tract variables observed in rt-MRI films, even though the tract variables are not included in the optimization procedure.
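The abstract describes an encoder-decoder network that maps a phoneme sequence and phoneme durations to per-frame articulator contours. The sketch below is only an illustration of that general idea, not the authors' architecture: the GRU layers, the duration-based upsampling, and the dimensions (number of phoneme classes, number of articulators, points per contour) are all assumptions chosen for the example.

```python
# Minimal sketch (assumed architecture, not the paper's implementation):
# phoneme IDs + per-phoneme durations -> per-frame (x, y) articulator contours.
import torch
import torch.nn as nn


class PhonemeToContours(nn.Module):
    def __init__(self, n_phonemes=40, emb_dim=64, hidden=128,
                 n_articulators=11, pts_per_contour=40):  # sizes are illustrative
        super().__init__()
        self.embed = nn.Embedding(n_phonemes, emb_dim)
        # Encoder: contextualizes the phoneme sequence.
        self.encoder = nn.GRU(emb_dim, hidden, batch_first=True, bidirectional=True)
        # Decoder: one step per rt-MRI frame after duration-based upsampling.
        self.decoder = nn.GRU(2 * hidden, hidden, batch_first=True)
        # Projects each frame to (x, y) points of every articulator contour.
        self.proj = nn.Linear(hidden, n_articulators * pts_per_contour * 2)
        self.n_articulators = n_articulators
        self.pts = pts_per_contour

    def forward(self, phonemes, frame_durations):
        # phonemes: (B, T) phoneme IDs; frame_durations: (B, T) frames per phoneme
        enc, _ = self.encoder(self.embed(phonemes))                    # (B, T, 2H)
        # Repeat each phoneme's encoding for the number of frames it spans.
        frames = [torch.repeat_interleave(e, d, dim=0)
                  for e, d in zip(enc, frame_durations)]
        frames = nn.utils.rnn.pad_sequence(frames, batch_first=True)   # (B, F, 2H)
        dec, _ = self.decoder(frames)                                  # (B, F, H)
        out = self.proj(dec)                                           # (B, F, A*P*2)
        return out.view(out.size(0), out.size(1), self.n_articulators, self.pts, 2)


# Example: one utterance of 3 phonemes lasting 4, 6, and 5 frames.
model = PhonemeToContours()
phonemes = torch.tensor([[5, 12, 3]])
durations = torch.tensor([[4, 6, 5]])
contours = model(phonemes, durations)
print(contours.shape)  # torch.Size([1, 15, 11, 40, 2])
```

A model of this kind would typically be trained with a point-wise regression loss against the tracked contours; the tract variables mentioned in the abstract are then computed afterwards from the predicted contours rather than being part of the loss.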
Main file: Speech_Communication_Generate_Vocal_Tract_Shape.pdf (1.41 MB)
Origin: Files produced by the author(s)