Modelling vowel acquisition using the Birkholz synthesizer

Abstract:

Human infants have a remarkable ability to learn to speak. To examine theories of certain aspects of speech production development, we previously developed Elija, a computational model of infant speech acquisition. Elija is an agent that influences its environment by controlling an articulatory synthesizer to generate acoustic output, and that receives somatosensory feedback from the environment. We first describe the Elija model more formally within the framework of reinforcement learning. We then implement Elija's vocal apparatus using the more sophisticated 3-D Birkholz articulatory synthesizer instead of the Maeda model used previously. Here we focus on vowel learning and show that, despite the increase in synthesizer complexity, the Elija agent can still learn to generate vocalic speech sounds unassisted. Subsequently, using a selection process performed by a caregiver, Elija can refine these utterances into a set of L1 vowels. We present examples of the discovered vowels and show that they compare favorably to the standard vowel configurations supplied with the Birkholz synthesizer.
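
The abstract describes Elija as an agent that explores articulatory actions, receives acoustic and somatosensory consequences, and is later refined by caregiver selection. The following is a minimal sketch of such a two-stage agent-environment loop, under strong assumptions: `synthesize`, `salience`, and `caregiver_accepts` are hypothetical placeholders standing in for the Birkholz (VocalTractLab) synthesizer, Elija's intrinsic reward, and the human caregiver respectively; none of them reflects the actual interfaces or reward definitions used in the paper.

```python
import numpy as np

N_PARAMS = 10          # assumed size of the articulatory parameter vector
N_EXPLORE = 500        # number of unassisted exploration trials

def synthesize(articulation: np.ndarray) -> np.ndarray:
    """Placeholder for the articulatory synthesizer: maps articulatory
    parameters to an acoustic signal (here only a dummy transformation)."""
    return np.tanh(articulation)

def salience(acoustic: np.ndarray, somatosensory: np.ndarray) -> float:
    """Placeholder intrinsic reward favouring strong, low-effort output.
    The reward actually used by Elija is defined in the paper, not here."""
    return float(np.linalg.norm(acoustic) - 0.1 * np.linalg.norm(somatosensory))

def caregiver_accepts(acoustic: np.ndarray) -> bool:
    """Placeholder for caregiver selection: a dummy threshold stands in
    for a human listener judging whether the utterance sounds like an L1 vowel."""
    return float(np.mean(acoustic)) > 0.0

rng = np.random.default_rng(0)

# Stage 1: unassisted exploration -- sample articulations and keep salient ones.
candidates = []
for _ in range(N_EXPLORE):
    articulation = rng.uniform(-1.0, 1.0, N_PARAMS)   # action
    acoustic = synthesize(articulation)               # acoustic consequence
    somatosensory = np.abs(articulation)              # proprioceptive feedback
    if salience(acoustic, somatosensory) > 0.5:       # intrinsic reward test
        candidates.append(articulation)

# Stage 2: caregiver selection -- refine the candidates into a vowel inventory.
vowel_inventory = [a for a in candidates if caregiver_accepts(synthesize(a))]
print(f"{len(candidates)} salient candidates, {len(vowel_inventory)} accepted as vowels")
```

The two-stage structure (intrinsic exploration followed by caregiver selection) mirrors the sequence described in the abstract: unassisted discovery of vocalic sounds first, refinement toward an L1 vowel set second.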


Year: 2019
In session: Sprachproduktion
Pages: 304–311