Encoding words in an attractor neural network

Pirmoradian, S. & Treves, A.

Cognitive Neuroscience, SISSA, Trieste, Italy

How does language, a distinctively human cognitive ability, emerge from the microscopic (or mesoscopic) properties of individual neurons and of networks of neurons in the brain?
We would like to tackle this question by developing and analyzing a Potts attractor neural network model, whose units hypothetically represent patches of cortex. The network can spontaneously hop (or latch) between memory patterns stored as dynamical attractors, thus producing, at least in some regimes, an indefinitely long sequence of patterns. We would like to train the network on a corpus of sentences in BLISS, a scaled-down synthetic language of intermediate complexity, with about 150 words and about 40 rewrite rules. We expect the Potts network to generate sequences of memorized words, with statistics reflecting, to some degree, those of the BLISS corpus used in training it.
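To make the kind of dynamics involved concrete, the sketch below implements a much simplified Potts associative network in Python, with a Hebbian covariance rule over sparse Potts patterns and greedy threshold updates. The parameters (N, S, a, the threshold U) and the update rule are illustrative assumptions rather than the model actually simulated in this work, and the adaptation mechanisms that drive latching are omitted; only storage and cued retrieval of patterns are shown.

import numpy as np

# Minimal sketch of a simplified Potts associative network (assumed
# parameters and update rule; not the exact model used in the study).
# Latching would additionally require unit adaptation/inhibition,
# which is not included here.

rng = np.random.default_rng(0)

N = 300        # number of Potts units (each a "patch" of cortex)
S = 5          # active states per unit; state S denotes quiescent
a = 0.2        # sparsity: fraction of units active in each stored pattern
P = 10         # number of stored memory patterns

def make_pattern():
    """Random sparse Potts pattern: a*N active units, each in a random state."""
    xi = np.full(N, S)                      # start all-quiescent
    active = rng.choice(N, int(a * N), replace=False)
    xi[active] = rng.integers(0, S, active.size)
    return xi

patterns = np.array([make_pattern() for _ in range(P)])

# Hebbian tensor J[i, k, j, l]: coupling between state k of unit i and
# state l of unit j, built from covariances of the stored patterns.
J = np.zeros((N, S, N, S))
for xi in patterns:
    v = np.zeros((N, S))
    for i in range(N):
        if xi[i] < S:
            v[i, xi[i]] = 1.0
    v -= a / S                              # subtract mean activity per state
    J += np.einsum('ik,jl->ikjl', v, v)
J /= N * a * (1 - a / S)
for i in range(N):
    J[i, :, i, :] = 0.0                     # no self-coupling

def update(sigma, U=0.1, sweeps=20):
    """Greedy asynchronous updates: each unit adopts the active state with
    the largest field, or goes quiescent if no field exceeds threshold U."""
    sigma = sigma.copy()
    for _ in range(sweeps):
        for i in rng.permutation(N):
            active = np.where(sigma < S)[0]
            h = J[i][:, active, sigma[active]].sum(axis=1)  # field per state
            sigma[i] = np.argmax(h) if h.max() > U else S
    return sigma

# Cued retrieval: corrupt a stored pattern and let the dynamics clean it up.
cue = patterns[0].copy()
noisy = rng.choice(N, N // 5, replace=False)
cue[noisy] = rng.integers(0, S + 1, noisy.size)
retrieved = update(cue)
overlap = np.mean(retrieved == patterns[0])
print(f"fraction of units matching the stored pattern after retrieval: {overlap:.2f}")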
Before the network can be trained on the corpus, a critical issue must be addressed, and it is the central one here: how should words be represented in the network in a cognitively plausible manner?
We represent words in a distributed fashion over 900 units: 541 express the semantic content of a word, while the remaining 359 represent its syntactic characteristics. The distinctions between the semantic and syntactic characteristics of a word, and between the encoding of function words and content words, are loosely inspired by a large body of neuropsychological studies.
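The Python sketch below illustrates one possible way to build such split representations. Since the abstract does not specify the generation procedure, the number of active Potts states, the sparsity values, and the scheme of assigning a shared syntactic sub-pattern per grammatical category are assumptions made purely for illustration; only the 541/359 split of the 900 units is taken from the text.

import numpy as np

# Illustrative sketch only: the generation scheme below (word-specific
# semantic states plus category-shared syntactic states) is an assumption,
# not the authors' procedure.

rng = np.random.default_rng(1)

N_SEM, N_SYN = 541, 359          # semantic and syntactic sub-networks
S = 7                             # active Potts states per unit (assumed)
QUIESCENT = S
a_sem, a_syn = 0.25, 0.25         # fraction of active units in each part (assumed)

def encode_word(word, category, category_states):
    """Return a length-900 Potts pattern for `word`.

    Semantic part: word-specific random sparse states.
    Syntactic part: states shared by all words of the same category,
    so e.g. all nouns look alike on the syntactic units.
    """
    sem = np.full(N_SEM, QUIESCENT)
    active = rng.choice(N_SEM, int(a_sem * N_SEM), replace=False)
    sem[active] = rng.integers(0, S, active.size)

    if category not in category_states:
        syn = np.full(N_SYN, QUIESCENT)
        active = rng.choice(N_SYN, int(a_syn * N_SYN), replace=False)
        syn[active] = rng.integers(0, S, active.size)
        category_states[category] = syn
    syn = category_states[category]

    return np.concatenate([sem, syn])

category_states = {}
lexicon = {
    "dog":  encode_word("dog",  "noun", category_states),
    "cat":  encode_word("cat",  "noun", category_states),
    "runs": encode_word("runs", "verb", category_states),
}

# Words of the same syntactic category overlap on the syntactic units,
# while their semantic units remain (almost) uncorrelated.
syn_slice = slice(N_SEM, N_SEM + N_SYN)
same_cat = np.mean(lexicon["dog"][syn_slice] == lexicon["cat"][syn_slice])
diff_cat = np.mean(lexicon["dog"][syn_slice] == lexicon["runs"][syn_slice])
print(f"syntactic overlap, dog vs cat (both nouns): {same_cat:.2f}")
print(f"syntactic overlap, dog vs runs (noun vs verb): {diff_cat:.2f}")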
A preliminary analysis of the resulting patterns indicates that the statistics of the word representations resemble those of patterns that support latching behavior in the network. This is a promising step towards building a neural network that can spontaneously generate sequences of words (sentences) with the desired syntactic and semantic relationships between words.