Learning what not to say
Entrenchment and pre-emption effects in artificial language learning.
- Project funded by the British Academy (BA/Leverhulme Small Research Grant)
Our study addresses long-standing questions in the area of language learning. Using new methods, we test two usage-based hypotheses about how learners may infer, from the input they are exposed to, what not to say in a language: (1) entrenchment, whereby the frequency with which words occur in the input reduces the likelihood that they will be used in novel ways, and (2) pre-emption, whereby learners avoid using a word in a novel way if they have already witnessed it in another construction fulfilling the same function.
In previous research based on natural data, these two accounts have proven hard to operationalize, test, and, above all, disentangle, which calls for more controlled methods. We propose new artificial language learning experiments, in which participants are exposed to a made-up language, to test the predictions of entrenchment and pre-emption by manipulating frequencies in the input. With these experiments, we aim to better understand both mechanisms and their relative importance.
Project team
- Dr Florent Perek (PI, University of Birmingham)
- Professor Adele Goldberg (CI, Princeton University)
More information
Email: f.b.perek@bham.ac.uk