[PS-2.14] A statistical model from information theory to explain Zipf's law of brevity

Hernández-Fernández, A. 1 , González-Torre, I. 2, 4 , Lacasa, L. 3 , Kello, C. T. 4 & Luque, B. 2

1 Complexity and Quantitative Linguistics Lab; Laboratory for Relational Algorithmics, Complexity and Learning (LARCA); Institut de Ciències de l'Educació; Universitat Politècnica de Catalunya, Barcelona (Catalonia, Spain)
2 Departamento de Matemática Aplicada ETSIAE; Universidad Politécnica de Madrid; Plaza Cardenal Cisneros 28040 Madrid (Spain)
3 School of Mathematical Sciences; Queen Mary University of London; Mile End Road, London E1 4NS (UK)
4 Cognitive and Information Sciences, University of California, Merced, 5200 North Lake Rd., Merced, CA 95343 (USA)

Brevity and frequency are two crucial factors in the processes of statistical learning. The compression principle has previously been used to explain the origin of Zipf's law for the frequency of words. Here we use a model from information theory to also explain Zipf's law of abbreviation: the statistical tendency of more frequent elements in language to be shorter (in characters in the case of written language, and in time duration for oral communication).
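The brevity effect described above can be illustrated on any text sample: more frequent word types should tend to be shorter. A minimal sketch (the toy corpus and the choice of Pearson correlation are illustrative assumptions, not the authors' data or model):

```python
from collections import Counter

# Toy corpus; a real test would use a large text or speech corpus.
text = (
    "the cat sat on the mat and the dog sat on the rug "
    "the cat and the dog ran because the cat saw the dog"
)
freq = Counter(text.split())

# Zipf's law of abbreviation predicts a negative correlation between
# a word type's frequency and its length (here, in characters).
pairs = [(f, len(w)) for w, f in freq.items()]

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

r = pearson([f for f, _ in pairs], [length for _, length in pairs])
print(r)  # a negative r indicates the brevity effect
```

Even on this tiny sample the correlation comes out negative, driven by short high-frequency function words like "the" against long low-frequency words like "because".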

To our knowledge, we show for the first time that Zipf's law of abbreviation is a global speech process that holds regardless of the linguistic units under study. In addition, the model derived from information theory allows us to fit empirical linguistic data for both acoustic elements (phonemes, words and sentences) and their transcripts.

This suggests that the patterns measured in units of written text are a byproduct of spontaneous speech patterns. The more a word is used, the greater the compressive pressure that makes it shorter; conversely, the shorter a word is, the more often it tends to be used. This work paves the way for new experimental approaches to the study of statistical learning.