Conference paper · Year: 2023

Discourse Structure Extraction from Pre-Trained and Fine-Tuned Language Models in Dialogues

Chuyuan Li
Patrick Huber
Wen Xiao
Maxime Amblard
Chloé Braud
Giuseppe Carenini

Abstract

Discourse processing suffers from data sparsity, especially for dialogues. As a result, we explore approaches to build discourse structures for dialogues, based on attention matrices from Pre-trained Language Models (PLMs). We investigate multiple tasks for fine-tuning and show that the dialogue-tailored Sentence Ordering task performs best. To locate and exploit discourse information in PLMs, we propose an unsupervised and a semi-supervised method. Our proposals thereby achieve encouraging results on the STAC corpus, with F1 scores of 57.2 and 59.3 for the unsupervised and semi-supervised methods, respectively. When restricted to projective trees, our scores improve to 63.3 and 68.1.
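The abstract does not spell out how attention matrices become discourse structures, so the following is a minimal sketch of the general idea only, not the paper's actual method: aggregate a PLM's token-to-token attention into an utterance-to-utterance matrix, then attach each utterance to the earlier utterance it attends to most. The model choice (`bert-base-uncased`), the head-averaged last-layer attention, and the greedy attachment rule are all illustrative assumptions.

```python
import torch
from transformers import AutoModel, AutoTokenizer

MODEL_NAME = "bert-base-uncased"  # assumption: any PLM exposing attention weights

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModel.from_pretrained(MODEL_NAME, output_attentions=True)
model.eval()


def utterance_attention(utterances, layer=-1):
    """Aggregate token-to-token attention (averaged over heads at one
    layer) into an utterance-by-utterance score matrix."""
    text = " ".join(utterances)

    # Character span of each utterance inside the joined text.
    spans, pos = [], 0
    for u in utterances:
        start = text.index(u, pos)
        spans.append((start, start + len(u)))
        pos = start + len(u)

    enc = tokenizer(text, return_tensors="pt",
                    return_offsets_mapping=True, truncation=True)
    offsets = enc.pop("offset_mapping")[0].tolist()
    with torch.no_grad():
        att = model(**enc).attentions[layer][0].mean(dim=0)  # (seq, seq)

    # Map each token to its utterance; special tokens ([CLS], [SEP]) get -1.
    tok2utt = [next((i for i, (s, e) in enumerate(spans) if s <= ts < e), -1)
               if ts != te else -1
               for ts, te in offsets]

    n = len(utterances)
    scores = torch.zeros(n, n)
    counts = torch.zeros(n, n)
    for i, ui in enumerate(tok2utt):
        if ui < 0:
            continue
        for j, uj in enumerate(tok2utt):
            if uj >= 0:
                scores[ui, uj] += att[i, j]
                counts[ui, uj] += 1
    return scores / counts.clamp(min=1)


def greedy_tree(scores):
    """Attach each utterance to the earlier utterance it attends to most;
    a deliberately simple stand-in for the paper's tree construction."""
    heads = [-1]  # the first utterance is the root
    for i in range(1, scores.size(0)):
        heads.append(int(torch.argmax(scores[i, :i])))
    return heads


dialogue = ["anyone want to trade wheat?",
            "i can give you sheep for it",
            "ok, two sheep for one wheat"]
print(greedy_tree(utterance_attention(dialogue)))  # e.g. [-1, 0, 1]
```

Attaching only to preceding utterances guarantees a tree but not projectivity; enforcing projective trees (e.g., with an Eisner-style dynamic program) is the restricted setting in which the abstract reports the higher 63.3 and 68.1 scores.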
Main file: lisa_eacl23_codi_submission.pdf (1.67 MB)
Origin: files produced by the author(s)

Dates and versions

hal-04031267, version 1 (15-06-2023)

Cite

Chuyuan Li, Patrick Huber, Wen Xiao, Maxime Amblard, Chloé Braud, Giuseppe Carenini. Discourse Structure Extraction from Pre-Trained and Fine-Tuned Language Models in Dialogues. Findings of the Association for Computational Linguistics: EACL 2023, May 2023, Dubrovnik, Croatia. pp. 2562-2579. ⟨10.18653/v1/2023.findings-eacl.194⟩. ⟨hal-04031267⟩