DeepSPIN at SIGMORPHON 2020: One-Size-Fits-All Multilingual Models
Peters, B.; Martins, A.
DeepSPIN at SIGMORPHON 2020: One-Size-Fits-All Multilingual Models, Proc. Workshop on Computational Research in Phonetics, Phonology, and Morphology (SIGMORPHON), Seattle, United States, July 2020.
Digital Object Identifier: 10.18653/v1/2020.sigmorphon-1.4
Abstract
This paper presents DeepSPIN’s submissions to Tasks 0 and 1 of the SIGMORPHON 2020 Shared Task. For both tasks, we present multilingual models, training jointly on data in all languages. We perform no language-specific hyperparameter tuning – each of our submissions uses the same model for all languages. Our basic architecture is the sparse sequence-to-sequence model with entmax attention and loss, which allows our models to learn sparse, local alignments while still being trainable with gradient-based techniques. For Task 1, we achieve strong performance with both RNN- and transformer-based sparse models. For Task 0, we extend our RNN-based model to a multi-encoder set-up in which separate modules encode the lemma and inflection sequences. Despite our models’ lack of language-specific tuning, they tie for first in Task 0 and place third in Task 1.
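The central component described in the abstract, entmax attention, can be illustrated with a minimal sketch. The snippet below is not the authors' released code; it assumes PyTorch and the open-source entmax package (pip install entmax), and the toy attention scores are invented for illustration. It shows how entmax-1.5 assigns exactly zero weight to low-scoring positions, whereas softmax spreads probability mass over all of them, which is what permits the sparse, local alignments mentioned above.

    # Minimal sketch (not the authors' code): entmax-1.5 vs. softmax attention weights.
    # Assumes PyTorch and the open-source `entmax` package.
    import torch
    from entmax import entmax15

    torch.manual_seed(0)

    # Toy scores: one decoder step attending over six encoder positions (invented values).
    scores = torch.tensor([[2.3, 1.9, 0.2, -0.5, -1.1, 1.7]])

    softmax_weights = torch.softmax(scores, dim=-1)  # dense: every position gets some mass
    entmax_weights = entmax15(scores, dim=-1)        # sparse: low-scoring positions get exactly 0

    print("softmax :", softmax_weights)
    print("entmax15:", entmax_weights)

In the paper's models, the same family of transformations also replaces the output softmax, which is the entmax loss referred to in the abstract.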