Department of Natural Language Processing & Knowledge Discovery
Conference paper, 2022

Multilingual Transformer Encoders: a Word-Level Task-Agnostic Evaluation

Félix Gaschi
François Plesse
Parisa Rastin
Yannick Toussaint

Abstract

Some Transformer-based models can perform cross-lingual transfer learning: trained on a specific task in one language, they achieve relatively good results on the same task in another language, despite having been pre-trained only on monolingual tasks. However, there is no consensus yet on whether these models learn universal patterns across languages. We propose a word-level, task-agnostic method for evaluating the alignment of the contextualized representations built by such models. We show that our method yields more accurate translated word pairs than previous methods for evaluating word-level alignment. Our results also show that some inner layers of multilingual Transformer-based models outperform other, explicitly aligned representations, and even more so under a stricter definition of multilingual alignment.
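The abstract does not spell out the evaluation procedure, but a common way to score word-level alignment is nearest-neighbor translation retrieval over contextualized embeddings of translated word pairs. The sketch below is a minimal illustration of that idea, not the authors' exact method: it assumes vectors for translated word pairs have already been extracted from some layer, and the function name nn_retrieval_accuracy and the index-aligned pairing are illustrative choices.

import numpy as np

def nn_retrieval_accuracy(src_vecs: np.ndarray, tgt_vecs: np.ndarray) -> float:
    """Fraction of source words whose nearest target vector (by cosine
    similarity) is their gold translation; row i of each array holds one
    side of the i-th translated word pair (hypothetical setup)."""
    src = src_vecs / np.linalg.norm(src_vecs, axis=1, keepdims=True)
    tgt = tgt_vecs / np.linalg.norm(tgt_vecs, axis=1, keepdims=True)
    sims = src @ tgt.T  # pairwise cosine similarities, shape (n, n)
    return float(np.mean(sims.argmax(axis=1) == np.arange(len(src))))

# Toy usage: random vectors stand in for layer-l contextualized embeddings
# of word pairs extracted from a bitext (illustrative data, not real results).
rng = np.random.default_rng(0)
src = rng.normal(size=(1000, 768))
tgt = src + 0.1 * rng.normal(size=(1000, 768))  # near-aligned target side
print(nn_retrieval_accuracy(src, tgt))

Under a setup like this, a higher retrieval accuracy at a given layer would indicate that translated words receive closer representations at that layer, which is one way to compare inner layers against explicitly aligned embeddings.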
Main file: 2415-1.pdf (333.27 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-03723760, version 1 (15-07-2022)

Identifiers

  • HAL Id: hal-03723760, version 1

Cite

Félix Gaschi, François Plesse, Parisa Rastin, Yannick Toussaint. Multilingual Transformer Encoders: a Word-Level Task-Agnostic Evaluation. WCCI2022 - IEEE World Congress on Computational Intelligence, Jul 2022, Padua, Italy. ⟨hal-03723760⟩
