MCQUEEN, T., HOPGOOD, A.A., ALLEN, T.J. and TEPPER, J.A., 2005. Extracting finite structure from infinite language. Knowledge-Based Systems, 18 (4-5), pp. 135-141. ISSN 0950-7051.
This paper presents a novel connectionist memory-rule based model capable of learning the finite-state properties of an input language from a set of positive examples. The model is based upon an unsupervised recurrent self-organizing map [T. McQueen, A. Hopgood, J. Tepper, T. Allen, A recurrent self-organizing map for temporal sequence processing, in: Proceedings of the Fourth International Conference on Recent Advances in Soft Computing (RASC2002), Nottingham, 2002] with laterally interconnected neurons. A derivation of functional equivalence theory [J. Hopcroft, J. Ullman, Introduction to Automata Theory, Languages and Computation, vol. 1, Addison-Wesley, Reading, MA, 1979] is used that allows the model to exploit similarities between the future context of previously memorized sequences and the future context of the current input sequence. This bottom-up learning algorithm binds functionally related neurons together to form states. Results show that the model is able to learn the Reber grammar [A. Cleeremans, D. Servan-Schreiber, J. McClelland, Finite state automata and simple recurrent networks, Neural Computation, 1 (1989) 372-381] perfectly from a randomly generated training set and to generalize to sequences beyond the length of those found in the training set.
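For readers unfamiliar with the benchmark, the Reber grammar cited above is a small finite-state machine over the alphabet {B, T, P, S, X, V, E}. The sketch below is a minimal illustration, assuming the standard transition diagram from Cleeremans et al. (1989); it is not part of the paper's model, only of the task the model learns: a generator for positive examples and a checker for grammaticality.

```python
import random

# Transition table for the (assumed standard) Reber grammar FSM:
# each state maps to the symbols it may emit and the state each
# symbol leads to. None marks the accepting step after 'E'.
REBER = {
    0: [("B", 1)],
    1: [("T", 2), ("P", 3)],
    2: [("S", 2), ("X", 4)],
    3: [("T", 3), ("V", 5)],
    4: [("X", 3), ("S", 6)],
    5: [("P", 4), ("V", 6)],
    6: [("E", None)],
}

def generate(rng=random):
    """Generate one positive example by a random walk through the FSM."""
    state, out = 0, []
    while state is not None:
        symbol, state = rng.choice(REBER[state])
        out.append(symbol)
    return "".join(out)

def accepts(string):
    """Return True iff the string is a grammatical Reber sentence."""
    state = 0
    for symbol in string:
        moves = dict(REBER.get(state, []))
        if symbol not in moves:
            return False
        state = moves[symbol]
    return state is None  # must end exactly on the terminating 'E'
```

Because the self-loops on states 2 and 3 can repeat arbitrarily, the grammar produces strings of unbounded length from a finite structure, which is why generalization beyond the training-set lengths is a meaningful test.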
Item Type: Journal article
Publication Title: Knowledge-Based Systems
Creators: McQueen, T., Hopgood, A.A., Allen, T.J. and Tepper, J.A.
Publisher: Elsevier
Place of Publication: Amsterdam
Divisions: Schools > School of Science and Technology
Depositing User: EPrints Services
Date Added: 09 Oct 2015 10:13
Last Modified: 19 Oct 2015 14:29