Conference paper, 2024

What do BERT word embeddings learn about the French language?

Abstract

Pre-trained word embeddings (for example, from BERT-like models) have been used successfully in a variety of downstream tasks. However, do all embeddings obtained from models of the same architecture encode information in the same way? Does the size of a model correlate with the quality of its information encoding? In this paper, we attempt to dissect the dimensions of several BERT-like models trained on the French language to find where grammatical information (gender, plurality, part of speech) and semantic features might be encoded. In addition, we propose a framework for comparing the quality of encoding across different models.
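To give a concrete idea of what dissecting the dimensions of a French BERT-like model can look like, below is a minimal sketch of per-dimension probing for grammatical gender. It is not the framework described in the paper: the camembert-base checkpoint, the toy word list, and the single-dimension logistic-regression probe are illustrative assumptions only.

# Minimal sketch: probe each embedding dimension of a French BERT-like model
# for grammatical gender. Assumptions (not from the paper): camembert-base,
# a tiny hand-made word list, and a per-dimension logistic-regression probe.
import numpy as np
import torch
from transformers import AutoTokenizer, AutoModel
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

MODEL = "camembert-base"  # assumed French BERT-like checkpoint
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModel.from_pretrained(MODEL)
model.eval()

# Toy labeled data: French nouns with grammatical gender (0 = masculine, 1 = feminine).
words = ["chien", "table", "livre", "maison", "arbre", "voiture",
         "soleil", "lune", "pain", "fleur", "mur", "porte"]
labels = [0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1]

def embed(word):
    """Mean of the last hidden states over the word's subword tokens."""
    inputs = tokenizer(word, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state[0]   # (seq_len, dim)
    return hidden[1:-1].mean(dim=0).numpy()             # drop <s> and </s>

X = np.stack([embed(w) for w in words])                 # (n_words, dim)

# Probe each dimension separately: how well does a single coordinate
# predict gender? Higher accuracy suggests gender information is encoded there.
scores = []
for d in range(X.shape[1]):
    acc = cross_val_score(LogisticRegression(), X[:, d:d + 1], labels, cv=3).mean()
    scores.append(acc)

for d in np.argsort(scores)[::-1][:5]:
    print(f"dimension {d}: cross-validated accuracy {scores[d]:.2f}")

With a realistic word list, the same loop can be repeated for plurality or part of speech, and across models of different sizes, which is the kind of comparison the abstract alludes to.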
Main file: WE_CLIB_2024.pdf (773.73 KB)
Origin: Files produced by the author(s)
Dates and versions

hal-04727384, version 1 (09-10-2024)

Identifiers

  • HAL Id: hal-04727384, version 1

Cite

Ekaterina Goliakova, David Langlois. What do BERT word embeddings learn about the French language?. Computational Linguistics in Bulgaria, Department of Computational Linguistics Institute for Bulgarian Language, Sep 2024, Sofia, Bulgaria. pp.14-32. ⟨hal-04727384⟩