by Prof. Dominic Palmer-Brown and Dr Chrisina Jayne
Professor Dominic Palmer-Brown
London Metropolitan University
166-220 Holloway Road, London, N7 8DB, UK
e-mail: d.palmer-brown@londonmet.ac.uk
Dr Chrisina Jayne
London Metropolitan University
166-220 Holloway Road, London, N7 8DB, UK
e-mail: c.jayne@londonmet.ac.uk
Abstract
Modal learning in neural computing refers to the strategic combination of modes of adaptation and
learning within a single artificial neural network structure. Modes, in this context, are learning methods
that are transferable from one learning architecture to another, such as weight update equations.
Two or more modes may proceed in parallel in different parts of the neural computing structure
(layers and neurons), or they may occupy the same part of the structure, in which case a
mechanism allows the neural network to switch between modes. The switching can be periodic,
random, or performance-guided.
When we look at human and machine learning in a wider context, there are many reasons and
motivations to consider modal learning, as it allows for a range of learning methods to be taken
into account, along the spectrum from memorisation to generalisation. From a theoretical perspective,
any individual mode has inherent limitations because it optimises a particular objective
function. Since we cannot, in general, know a priori the most effective learning method or combination
of methods for solving a given problem, we should equip the system (the neural network) with the
means to discover the optimal combination of learning modes during the learning process. There is
potential to furnish a neural system with numerous modes. Most of the work conducted so far
concentrates on the effectiveness of two to four modes. The modal learning approach applies equally
to supervised and unsupervised (including self-organisational) methods.
Snap-Drift Neural Network (SDNN), introduced by Lee and Palmer-Brown (2004), is an example of
a modal learning method which toggles its weight update equation between two modes:
'Min' (Fuzzy AND) and Learning Vector Quantization. This tutorial focuses on the Snap-Drift Neural
Network and two recent developments of the algorithm related to self-organising maps and
sequence learning. The Snap-Drift SOM (SDSOM) adopts the Kohonen SOM architecture, while the
Recurrent SDNN (RSDNN) uses the Simple Recurrent Network architecture. In the tutorial we review
modal learning in general, and present the Snap-Drift algorithms. We demonstrate their use and
results obtained with Matlab implementations for well-known data sets and real-world applications.
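The two modes and the switching between them can be sketched in a few lines. The tutorial's implementations are in Matlab; the minimal Python sketch below is only illustrative. The function names, the learning rate, and the simple per-epoch toggling schedule are assumptions for illustration, not the authors' exact algorithm.

```python
import numpy as np

def snap(w, x):
    # Snap mode ('Min' / Fuzzy AND): element-wise minimum of weights and
    # input, capturing the features common to the patterns seen so far.
    return np.minimum(w, x)

def drift(w, x, alpha=0.5):
    # Drift mode (LVQ-style): move the weights a fraction alpha of the
    # way toward the input pattern, generalising over the group.
    return w + alpha * (x - w)

def train(patterns, w, epochs=4, alpha=0.5):
    # One hypothetical switching scheme: toggle mode on every epoch.
    # Performance-guided or random switching are alternatives.
    for epoch in range(epochs):
        mode = snap if epoch % 2 == 0 else drift
        for x in patterns:
            w = mode(w, x) if mode is snap else mode(w, x, alpha)
    return w
```

With inputs and weights in [0, 1], snapping can only shrink each weight toward the smallest value seen, while drifting pulls the weights back toward recent inputs; alternating the two balances memorisation of common features against generalisation.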
Professor Dominic Palmer-Brown is Dean of the Faculty of Computing, a fellow of the British Computer
Society and a professor of neural computing. He received a BSc(Hons) in Electrical and Electronic
Engineering from Leeds University, MSc (Dist) in Intelligent Systems from Plymouth University, and a
PhD in Neural Networks from Nottingham University. He researches modal learning neural computing
methods for data mining, processing language, and modelling interaction in virtual learning environments.
Dominic was the editor of the Elsevier review journal Trends in Cognitive Sciences, and professor of
neurocomputing at Leeds Metropolitan University. He has also worked for GEC Marconi, British Aerospace,
Nottingham Trent University and the publishers Elsevier Science London. He became Dean of the Faculty
of Computing, at London Metropolitan University in 2009. Publications include articles in many journals
such as IEEE Transactions on Neural Networks,
Neurocomputing, Connection Science, Ecological Modelling and Information Sciences and over 80 conference
papers. He has supervised 14 PhD students to successful completion. Dominic is co-chair of the International
Neural Network Society’s Special Interest Group on Engineering Applications of Neural Networks.
Chrisina Jayne has an MSc in Computing Science, an MSc in Mathematics and Informatics, and a PhD
in Applied Mathematics. She has worked at several universities, including London Metropolitan University,
South Bank University, Kingston University, and University College London in the UK, and Veliko Turnovo University in Bulgaria.
Chrisina was awarded a UK National Teaching Fellowship in 2009. Her research includes the development of
new methods in the area of approximation and interpolation with spline functions, and innovative applications of
neural networks. In recent years her research has concentrated on applying effective neural network methods
to a number of applications, including face interpretation, automatic age estimation, restoration of partially occluded
face shapes, isolating sources of variation in multivariate distributions, and the enhancement of learning and
teaching. She has published research results in numerous refereed conference proceedings and in
peer-reviewed journals. Chrisina is also co-chair of the International Neural Network Society’s Special Interest
Group on Engineering Applications of Neural Networks.