Keynotes
Professor Alessandro E.P. Villa
NeuroHeuristic Research Group
Information Science Institute, University of Lausanne, Switzerland
Institut des Neurosciences, Université Joseph Fourier, Grenoble, France
Title: Spatiotemporal firing patterns and dynamical systems in neural networks
Abstract
The simultaneous recording of the time series formed by the timing of neuronal
discharges produced by a cell assembly reveals important features of the dynamics of
information processing in the brain. Experimental evidence of firing sequences with a
precision of a few milliseconds over intervals lasting hundreds of milliseconds has
suggested that particular topologies of converging/diverging chains of neuronal
assemblies may propagate the activity with the necessary temporal accuracy. Simulation
studies of critical phases of brain development suggest the emergence of stimulus-driven
cell assemblies that will form the 'wiring' of the adult brain out of randomly connected
large-scale networks. These results are presented from the viewpoint of dynamical
systems and chaotic attractors.
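For readers unfamiliar with this kind of analysis, the following Python sketch illustrates, in a deliberately simplified way and not as the speaker's actual method, how one might look for inter-neuron spike delays that recur with millisecond precision; the spike times, jitter tolerance, and count threshold are made-up example values.

```python
import numpy as np
from collections import defaultdict

def recurring_delay_patterns(spikes_a, spikes_b, jitter=0.003, window=0.5, min_count=3):
    """Count delays from neuron A to neuron B that recur within a given jitter.

    spikes_a, spikes_b : spike times in seconds (hypothetical recordings)
    jitter             : timing tolerance in seconds (here 3 ms)
    window             : maximum delay considered (here 500 ms)
    min_count          : minimum number of repetitions to report a pattern
    """
    counts = defaultdict(int)
    for ta in spikes_a:
        for tb in spikes_b:
            delay = tb - ta
            if 0.0 < delay <= window:
                # Bin the delay at the jitter resolution so near-identical
                # delays fall into the same bin and are counted together.
                counts[round(delay / jitter)] += 1
    return {bin_ * jitter: n for bin_, n in counts.items() if n >= min_count}

# Toy example: a delay of about 120 ms repeats three times across the two trains.
a = np.array([0.100, 0.400, 0.800, 1.300])
b = np.array([0.221, 0.519, 0.921, 1.150])
print(recurring_delay_patterns(a, b))
```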
Bio
Alessandro E.P. Villa received his PhD in Neurophysiology from the Faculty of Science of the
University of Lausanne and a Master's degree in Applied Computer Science from the Swiss
Federal Institute of Technology in Lausanne (EPFL). In the late 1980s and 1990s he worked with several
international institutions including the Hebrew University, the Brain Research Institute
of UCLA, the Fidia Research Laboratories in Italy and the French CNRS Research Center in
Cognitive Neuroscience at Marseille. He received awards from the Swiss Society for
Biological Psychiatry and from the French Atomic Agency (CEA) for his studies on neural
network dynamics. In 1995 Alessandro Villa founded the NeuroHeuristic Research Group,
promoting a transdisciplinary approach involving neuroscience, molecular biology,
physiology and computer science. In 2001 he moved to the Faculty of Medicine of the
University Joseph Fourier in Grenoble as Chair of Neuroscience and Biophysics, where he
leads an interdisciplinary team in the field of clinical and preclinical neuroscience
at Inserm (the French NIH). Villa's research
group initiated an innovative open source project (openAdap.net), a dynamic
"socialware" for data processing aimed at bridging the digital gap between people who
have data and people who have ideas on how to process them, as well as between developing
countries and first world entities. The Faculty of Business and Economics of the
University of Lausanne appointed Alessandro Villa to a professorship for his
transdisciplinary research in neuroinformatics and neuroeconomics. He collaborates with
several institutions in Japan and Spain, and his current interests focus on the
nonlinear dynamical analysis of brain activity and on sensorimotor integration in
natural and artificial brain systems. He is among the editors of the journal Neural
Networks and is a member of the executive committee of the European Neural Network Society.
Professor Stephen Grossberg
Department of Cognitive and Neural Systems,
Center for Adaptive Systems, and
Center of Excellence for Learning in Education, Science, and Technology,
Boston University
Title: The Predictive Brain: Autonomous Search, Learning, Recognition, and Navigation in a Changing World
Abstract
We live in a world filled with unexpected events and autonomously learn how to search for, recognize,
navigate towards, acquire, and manipulate desired objects and other goals. In all of these competences,
top-down attentive and predictive mechanisms play a key role. This talk will provide an overview of an
emerging neural architecture that has begun to clarify how the brain achieves these goals, and that may be
useful in autonomous mobile robots. The talk will review topics such as: How the brain predictively
searches a scene with eye movements to discover and learn view-invariant object categories. How the brain
stores sequences of events in working memory and learns sequential plans that can be performed at variable
speeds under volitional control. How the brain combines predictive smooth pursuit and ballistic eye movements
to efficiently track unpredictably moving targets. How the brain uses visual information, notably optic
flow information, to navigate towards a goal while avoiding obstacles. How the brain uses path integration
information to form self-stabilizing spatial representations of locations in the world, and how visual
information may be merged with spatial representations to achieve the benefits of both vision and path
integration during navigation. How the brain acquires objects using motor-equivalent reaching and tool use.
How the brain uses adaptively timed reinforcement learning to print successful
perception-cognition-action cycles into long-term memory, and controls the motivation to perform these
actions. These various competences use many different brain regions interacting coherently together.
The talk will indicate how such coherence may be achieved.
See http://cns.bu.edu/~steve for recent results.
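One of the listed topics, path integration, is at its core dead reckoning: updating a position estimate from self-motion signals alone. The Python sketch below shows only that elementary computation, not the neural models discussed in the talk, and the trajectory values are invented; the drift that accumulates in such an estimate is precisely why merging it with visual information, as mentioned above, is beneficial.

```python
import math

def path_integrate(steps, start=(0.0, 0.0), heading=0.0):
    """Dead-reckoning position estimate from self-motion signals alone.

    steps   : list of (turn, distance) pairs, e.g. from vestibular/proprioceptive cues
    start   : initial (x, y) position
    heading : initial heading in radians
    Returns the estimated final (x, y) position. Noise in each step accumulates,
    which is why combining this estimate with visual landmarks helps in practice.
    """
    x, y = start
    for turn, distance in steps:
        heading += turn
        x += distance * math.cos(heading)
        y += distance * math.sin(heading)
    return x, y

# Toy trajectory: walk 2 m forward, turn left 90 degrees, walk 3 m forward.
print(path_integrate([(0.0, 2.0), (math.pi / 2, 3.0)]))  # -> approximately (2.0, 3.0)
```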
Bio
Stephen Grossberg is Wang Professor of Cognitive and Neural Systems and Professor of Mathematics,
Psychology, and Biomedical Engineering at Boston University. Grossberg founded and was first President
of the International Neural Network Society (INNS), and attracted 3700 members from 49 states of the
United States and 38 countries during the fourteen months of his presidency. He founded the Society's official
journal, Neural Networks, and built the journal with the cooperation of two other editors-in-chief and over 100
action editors worldwide into the world's leading journal publishing the full range of research about the
biology and technology of neural networks. Grossberg has also served as an editor for more than 25 other
journals. He was general chairman of the IEEE First International Conference on Neural Networks and played
a key role in organizing the first annual meeting of INNS; the fusion of these two meetings led to the International Joint
Conference on Neural Networks (IJCNN). He founded the Department of Cognitive and Neural Systems at Boston
University, which he built into a leading institution for advanced training in biological neural networks
and neuromorphic technology. He is the founder and Director of the Center for Adaptive Systems, which
he built into one of the world's leading academic research institutes in computational neuroscience and
neural network technology. His year-long lecture series at MIT Lincoln Laboratory on neural network
technology motivated the laboratory to initiate the national DARPA Neural Network Study in 1987. He
organized and is founding Director of the NSF Center of Excellence for Learning in Education, Science, and
Technology. He won the 1991 IEEE Neural Network Pioneer Award, the 1992 INNS Leadership Award, the 1992 Boston
Computer Society Thinking Technology Award, the 2000 Information Science Award of the Association for
Intelligent Machinery, the 2002 Charles River Laboratories prize of the Society for Behavioral Toxicology,
and the 2003 INNS Helmholtz Award. He is a 1994 Fellow of the American Psychological Association, a 1996
member of the Society of Experimental Psychologists, a 2002 Fellow of the American Psychological Society,
and a 2005 IEEE Fellow. Grossberg has published 15 books and over 480 research articles, and holds 7 patents.
In the 1960s, he introduced the paradigm and nonlinear systems of differential equations that form the
foundation of neural network research today. He introduced and made critical contributions to adaptive
resonance theory (ART), competitive learning and self-organizing maps, and content-addressable memories and
their Liapunov functions. His subsequent work introduced key equations and seminal concepts used in many
models currently under investigation. This work focuses upon the design principles and mechanisms that
enable the behavior of individuals, or machines, to adapt autonomously in real-time to unexpected
environmental challenges. His research has stimulated work by many scientists and engineers, and
includes vision and image processing; object and event recognition; audition, speech and language;
cognitive information processing; reinforcement learning and cognitive-emotional interactions; navigation;
sensory-motor control and robotics; mental disorders; and neural network technology. See his web site
http://cns.bu.edu/~steve for further information.
Professor Sergios Theodoridis
Department of Informatics and Telecommunications,
National and Kapodistrian University of Athens
Title: Adaptive Learning in a World of Projections
Abstract
The task of parameter/function estimation has been at the center of scientific attention for
a long time and appears under different names such as filtering, prediction,
beamforming, classification, and regression. Conventionally, the task has been treated as
the optimization of an appropriately chosen loss function. However, in most
cases, the choice of the loss function is dictated mainly by its mathematical tractability
and not by physical reasoning related to the specific problem at hand. The task is
further complicated when a-priori information, in the form of constraints, becomes
available. The presence of constraints in estimation tasks has recently been gaining in
importance, due to the revival of interest in robust learning schemes.
In this talk, the estimation task is treated in the context of set theoretic estimation arguments. Instead of a single optimal point, we are searching for a set of solutions that are in agreement with the available information, which is provided to us in the form of a set of training points and a set of constraints.
The goal of this talk is to present a general tool for parameter/function estimation, both for classification as well as regression tasks, in a time adaptive setting in (infinite dimensional) Reproducing Kernel Hilbert spaces (RKHS). The general framework is that of convex set theory via the powerful and elegant tool of projections.
The structure of this talk evolves along the following directions:
- It presents, using simple geometric arguments, the basic principles behind the convex set-theoretic approach via projections in the generalized online setting. In contrast to the classical POCS theory, developed for a fixed number of convex sets, we consider convex sets which are built "around" the training data, and thus their number increases as time evolves and new data samples are received.
- It presents two case studies of particular interest in the adaptive learning community:
- On line classification
- Adaptive constrained regression in the context of robust beamforming
The resulting algorithms are of linear complexity with respect to the number of unknown parameters and are supported by strong convergence results.
The work has been carried out in cooperation with Isao Yamada and Kostas Slavakis.
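As a rough illustration of the set-theoretic idea, and not of the speaker's actual algorithms, the Python sketch below performs online estimation by projecting the current parameter vector onto a hyperslab, i.e. the convex set of parameters consistent with the newest training pair up to a tolerance eps; the tolerance, data, and dimensions are invented example values.

```python
import numpy as np

def project_onto_hyperslab(w, x, y, eps):
    """Project w onto the hyperslab {v : |<v, x> - y| <= eps}.

    The hyperslab is the convex set of parameter vectors that are
    'in agreement' with the training pair (x, y) up to tolerance eps.
    """
    err = np.dot(w, x) - y
    if err > eps:
        return w - (err - eps) * x / np.dot(x, x)
    if err < -eps:
        return w - (err + eps) * x / np.dot(x, x)
    return w  # already inside the set: nothing to do

def online_estimate(stream, dim, eps=0.01):
    """Process (x, y) pairs one at a time, each pair defining a new convex set."""
    w = np.zeros(dim)
    for x, y in stream:
        w = project_onto_hyperslab(w, x, y, eps)
    return w

# Toy usage: noisy linear data generated from a hypothetical true parameter vector.
rng = np.random.default_rng(0)
w_true = np.array([1.0, -2.0, 0.5])
stream = []
for _ in range(500):
    x = rng.normal(size=3)
    y = x @ w_true + 0.005 * rng.normal()
    stream.append((x, y))
print(online_estimate(stream, dim=3, eps=0.01))
```

Each projection has linear cost in the number of parameters, which is consistent with the complexity claim in the abstract, though the sketch omits the concurrent processing of multiple sets and the constraint handling that the actual algorithms employ.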
Bio
Sergios Theodoridis is currently Professor of Signal Processing and Communications in the
Department of Informatics and Telecommunications at the National and Kapodistrian University
of Athens. His research interests lie in the areas of Adaptive Algorithms and Communications,
Machine Learning and Pattern Recognition, Signal Processing for Audio Processing and Retrieval.
He is the co-editor of the book "Efficient Algorithms for Signal Processing and System Identification",
Prentice Hall 1993, the co-author of the book "Pattern Recognition", Academic Press, 4th Ed. 2009,
and the co-author of three books in Greek, two of them for the Greek Open University.
He is the co-author of four papers that have received best paper awards, including the IEEE Computational Intelligence Society Transactions on Neural Networks Outstanding Paper Award. He currently serves as Distinguished Lecturer of the IEEE Signal Processing Society.
He has served as President of the European Association for Signal Processing (EURASIP) and he is currently a member of the Board of Governors of the IEEE Circuits and Systems (CAS) Society. He was the general chairman of EUSIPCO-98, the Technical Programme co-chairman of ISCAS-2006 and the Co-chairman of CIP-2008. He has served as an Associate Editor for all major signal-processing-related journals, including IEEE Transactions on Signal Processing, IEEE Signal Processing Magazine, IEEE Transactions on Neural Networks, IEEE Transactions on Circuits and Systems, and Signal Processing. He is currently the Editor-in-Chief of the EURASIP Signal Processing book series of Academic Press.
He is a member of the Greek National Council for Research and Technology and Chairman of the SP advisory committee for the Edinburgh Research Partnership (ERP). He has served as vice chairman of the Greek Pedagogical Institute, and for four years he was a member of the Board of Directors of COSMOTE (the Greek mobile phone operating company). He is a Fellow of the IET, a Corresponding Fellow of the RSE, and a Fellow of the IEEE.
Professor Nikola Kasabov
FIEEE, FSRNZ,
Knowledge Engineering and Discovery Research Institute, KEDRI,
Auckland University of Technology
Title: Evolving Integrative Spiking Neural Networks: A Computational Intelligence Approach
Abstract
Spiking neural networks (SNN) are computational models that mimic biological neurons
in their main functions. They have been widely used for biological modelling, and nowadays they are
increasingly used for computational intelligence and engineering applications. Due to some
limitations of the existing SNN models, new models based on new biological facts are required.
Many factors act in concert to determine how neuronal connections evolve and whether a neuron will spike
at a given time. These factors include external stimuli, gene and protein expression, and quantum properties
of the nervous system. The talk describes how these factors can be integrated into new types
of evolving SNN models, such as probabilistic SNN, neuro-genetic models, and quantum-inspired SNN.
In the latter, a quantum-inspired probability distribution estimation algorithm is applied for
integrated feature selection and model parameter optimisation. The talk illustrates how these models
can be used to model brain functions and to solve complex engineering problems efficiently.
The talk also presents a list of challenging problems for future research in the area of spiking neural
networks and their applications.
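To make the notion of a probabilistic SNN concrete, the sketch below implements a deliberately simplified leaky integrate-and-fire neuron in which both synaptic transmission and the firing decision are stochastic; it illustrates the general idea only, not any specific model presented in the talk, and all parameter values are invented.

```python
import random

class ProbabilisticLIFNeuron:
    """A leaky integrate-and-fire neuron with probabilistic elements.

    - Each incoming spike is transmitted only with probability p_syn.
    - When the membrane potential exceeds the threshold, the neuron
      fires only with probability p_fire (and then resets).
    All parameter values are illustrative.
    """
    def __init__(self, threshold=1.0, leak=0.9, p_syn=0.8, p_fire=0.9):
        self.threshold = threshold
        self.leak = leak          # multiplicative decay per time step
        self.p_syn = p_syn        # probability that a synapse transmits a spike
        self.p_fire = p_fire      # probability of firing once above threshold
        self.potential = 0.0

    def step(self, inputs, weights):
        """Advance one time step; inputs is a list of 0/1 presynaptic spikes."""
        self.potential *= self.leak
        for spike, w in zip(inputs, weights):
            if spike and random.random() < self.p_syn:
                self.potential += w
        if self.potential >= self.threshold and random.random() < self.p_fire:
            self.potential = 0.0
            return 1
        return 0

# Toy simulation: three presynaptic neurons firing randomly for 20 time steps.
neuron = ProbabilisticLIFNeuron()
weights = [0.4, 0.3, 0.5]
spikes_out = [neuron.step([random.random() < 0.5 for _ in range(3)], weights)
              for _ in range(20)]
print(spikes_out)
```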
Bio
Professor Nikola Kasabov is the Director of the Knowledge Engineering and Discovery
Research Institute (KEDRI), Auckland. He holds a Chair of Knowledge Engineering at the
School of Computing and Mathematical Sciences at Auckland University of Technology. He is a Fellow
of IEEE, Fellow of the Royal Society of New Zealand and Fellow of the New Zealand Computer
Society. He is the President of the International Neural Network Society (INNS) and a Past President
of the Asia Pacific Neural Network Assembly (APNNA). He is a member of several technical committees
of IEEE Computational Intelligence Society and IFIP. Kasabov has served as Associate Editor of Neural
Networks, IEEE Transactions on Neural Networks, IEEE Transactions on Fuzzy Systems, Information Sciences,
J. Theoretical and Computational Nanosciences, Applied Soft Computing, and other journals.
Kasabov holds MSc and PhD degrees from the Technical University of
Sofia, Bulgaria. His main research interests are in the areas of intelligent information systems, soft computing,
neuro-computing, bioinformatics, brain study, novel methods for data mining and knowledge discovery. He
has published more than 400 works, including 15 books, 120 journal papers, 60 book chapters,
32 patents and numerous conference papers. He has extensive academic experience at various academic
and research organisations: University of Otago, New Zealand; University of Essex, UK; University of Trento,
Italy; Technical University of Sofia, Bulgaria; University of California at Berkeley; RIKEN and KIT, Japan;
TU Kaiserslautern, Germany, and others. Prof. Kasabov has received the Bayer Science Innovation
Award, the RSNZ Science and Technology Silver Medal, the APNNA Excellent Service Award and other
awards. More information about Prof. Kasabov can be found on the KEDRI web site:
http://www.kedri.info.