Keynote speakers

Prof. Barbara Hammer

  • Bielefeld University, Germany

Subject: Autonomous model selection for prototype-based architectures

Abstract

Prototype-based learning techniques enjoy wide popularity due to their intuitive training procedures and model interpretability. Applications include biomedical data analysis, image classification, and fault detection in technical systems. One striking property of such models is that they represent data in terms of typical representatives; this property allows an efficient extension of the techniques to life-long learning and model adaptation for streaming data. Within the talk, we will mainly focus on modern variants of so-called learning vector quantization (LVQ) due to their strong learning-theoretical background and exact mathematical derivation from explicit cost functions.

We will focus on three aspects which are of particular interest if these models are used as autonomous learning models: 1) metric learning in prototype-based models, 2) incremental learning with adaptive model complexity, and 3) optimum reject options. Metric learning autonomously adjusts the used metric, usually the Euclidean one, towards a richer and more problem-adjusted representation of the data. Metric learning not only greatly enhances model performance, but it usually also increases model interpretability, a very important property e.g. in biomedical data applications. We will discuss recent results which investigate metric learning mechanisms with a focus on their uniqueness, and we will present efficient schemes which account for a regularization of this process, in particular for high-dimensional data. Further, we will show that metric learning in LVQ techniques can be extended towards non-vectorial data such as sequences.
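To make the adaptive-metric idea concrete, the following sketch replaces the plain Euclidean distance by a distance parameterized by a matrix Omega, in the spirit of matrix-relevance LVQ (cf. reference 9 below). The data, prototypes, and matrix values are illustrative toy choices, not material from the talk.

```python
import numpy as np

def omega_distance(x, w, omega):
    # Adaptive squared distance d(x, w) = (x - w)^T Omega^T Omega (x - w),
    # i.e. a Euclidean distance measured in the space transformed by Omega.
    diff = omega @ (x - w)
    return float(diff @ diff)

def glvq_mu(x, w_plus, w_minus, omega):
    # GLVQ-style cost term mu(x) = (d+ - d-) / (d+ + d-): negative when
    # the closest correct prototype (w_plus) is nearer than the closest
    # incorrect one (w_minus).
    d_plus = omega_distance(x, w_plus, omega)
    d_minus = omega_distance(x, w_minus, omega)
    return (d_plus - d_minus) / (d_plus + d_minus)

x = np.array([1.0, 0.0])
w_plus = np.array([0.8, 0.1])   # nearest prototype with the correct label
w_minus = np.array([0.0, 1.0])  # nearest prototype with a wrong label
omega = np.eye(2)               # starts as plain Euclidean; adapted during training
print(glvq_mu(x, w_plus, w_minus, omega))  # negative => correctly classified
```

In full matrix LVQ training, both the prototypes and Omega are updated by gradient descent on the summed cost terms, which is how the metric adapts itself to the problem.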

Incremental learning and the possibility to reject a classification are tightly interwoven aspects. These properties make it possible to autonomously adjust model complexity, and they give the system the capability to judge its own limitations in classification accuracy. We will present recent work which investigates different measures for quantifying model uncertainty, including the notion of conformal measures as one approach with a very clear statistical background. Based on such measures, we will present incremental models with self-adjusting model complexity, on the one hand, and an efficient strategy for an optimum combination of rejects in a mathematically precise sense, on the other hand.

References:

1. Bassam Mokbel, Benjamin Paassen, Frank-Michael Schleif, Barbara Hammer, Metric learning for sequences in relational LVQ, Neurocomputing, to appear, 2015.
2. Lydia Fischer, Barbara Hammer, Heiko Wersing: Optimum Reject Options for Prototype-based Classification. CoRR abs/1503.06549 (2015)
3. Xibin Zhu, Frank-Michael Schleif, Barbara Hammer: Adaptive conformal semi-supervised vector quantization for dissimilarity data. Pattern Recognition Letters 49: 138-145 (2014)
4. Barbara Hammer, Daniela Hofmann, Frank-Michael Schleif, Xibin Zhu: Learning vector quantization for (dis-)similarities. Neurocomputing 131: 43-51 (2014)
5. Benoit Frenay, Daniela Hofmann, Alexander Schulz, Michael Biehl, Barbara Hammer: Valid interpretation of feature relevance for linear data mappings. CIDM 2014: 149-156
6. Michael Biehl, Barbara Hammer, Thomas Villmann: Distance Measures for Prototype Based Classification. BrainComp 2013: 100-116
7. Marc Strickert, Barbara Hammer, Thomas Villmann, Michael Biehl: Regularization and improved interpretation of linear data mappings and adaptive distance measures. CIDM 2013: 10-17
8. Kerstin Bunte, Petra Schneider, Barbara Hammer, Frank-Michael Schleif, Thomas Villmann, Michael Biehl: Limited Rank Matrix Learning, discriminative dimension reduction and visualization. Neural Networks 26: 159-173 (2012)
9. Petra Schneider, Michael Biehl, Barbara Hammer: Adaptive Relevance Matrices in Learning Vector Quantization. Neural Computation 21(12): 3532-3561 (2009)

Bio

Barbara Hammer received her Ph.D. in Computer Science in 1995 and her venia legendi in Computer Science in 2003, both from the University of Osnabrueck, Germany. From 2000 to 2004, she was chair of the junior research group 'Learning with Neural Methods on Structured Data' at the University of Osnabrueck before accepting an offer as professor for Theoretical Computer Science at Clausthal University of Technology, Germany, in 2004. Since 2010, she has held a professorship for Theoretical Computer Science for Cognitive Systems at the CITEC cluster of excellence at Bielefeld University, Germany.

Several research stays have taken her to Italy, the U.K., India, France, the Netherlands, and the U.S.A. Her areas of expertise include hybrid systems, self-organizing maps, clustering, and recurrent networks, as well as applications in bioinformatics, industrial process monitoring, and cognitive science. She chaired the IEEE CIS Technical Committee on Data Mining in 2013 and 2014, and she is chair of the Fachgruppe Neural Networks of the GI and vice-chair of the GNNS. She has published more than 200 contributions to international conferences and journals, and she is coauthor/editor of four books.

Prof. Nikola Kasabov, Fellow IEEE

  • Director, Knowledge Engineering and Discovery Research Institute (KEDRI), Auckland University of Technology

Subject: Neuromorphic Predictive Systems based on Deep Learning

Abstract

The current development of the third generation of artificial neural networks, the spiking neural networks (SNN) [1,5,9], along with the technological development of highly parallel neuromorphic hardware systems with millions of artificial spiking neurons as processing elements [2,3], makes it possible to model big and fast data in a fast, on-line manner, enabling large-scale problem solving across domain areas, including building better predictive systems. The latter topic is covered in this talk. The talk first presents some principles of deep learning inspired by the human brain, such as automated feature selection, 'chain fire', and polychronisation. These principles are implemented in a recent evolving SNN (eSNN) architecture called NeuCube [4] and its software development system, available from www.kedri.aut.ac.nz/neucube/. These principles allow an eSNN system to predict events and outcomes: once the eSNN is trained on whole spatio-temporal patterns, it can be made to spike early, when only a part of a new pattern is presented as input data.

The talk presents a methodology for the design and implementation of NeuCube-based eSNN systems for deep learning and early, accurate outcome prediction from large-scale spatio-/spectro-temporal data, referred to here as spatio-temporal data machines (STDM). An STDM has modules for preliminary data analysis, data encoding, pattern learning, classification, regression, prediction, and knowledge discovery. This is illustrated on early event prediction tasks using benchmark large spatio-/spectro-temporal data with different spatial/temporal characteristics, such as EEG data for brain-computer interfaces, personalised and climate data for stroke occurrence prediction, and data for the prediction of ecological and seismic events [6-8]. The talk discusses implementation on highly parallel neuromorphic hardware platforms such as the Manchester SpiNNaker [2] and the ETH Zurich chip [3,10]. STDMs are not only significantly more accurate and faster than traditional machine learning methods and systems, but they also lead to a significantly better understanding of the data and the processes that generated it.
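The "spike early" idea can be illustrated with a minimal leaky integrate-and-fire neuron: given a sustained input, the first spike arrives well before the full input pattern has been presented. This is a generic textbook LIF model, not NeuCube's actual neuron model or encoding.

```python
def lif_first_spike(input_current, tau=10.0, threshold=1.0, dt=1.0):
    # Minimal leaky integrate-and-fire (LIF) neuron: the membrane
    # potential v leaks toward zero with time constant tau while
    # integrating the input current; the neuron fires as soon as v
    # crosses the threshold.
    v = 0.0
    for t, i in enumerate(input_current):
        v += dt * (-v / tau + i)
        if v >= threshold:
            return t  # first-spike time (in simulation steps)
    return None  # threshold never reached

# With a constant drive, the first spike occurs after only a few steps,
# i.e. long before the 20-step input pattern has ended.
pattern = [0.3] * 20
print(lif_first_spike(pattern))
```

In an STDM-style classifier, the spike times of trained output neurons play this role: a decision can be read out as soon as the first spikes occur, enabling early event prediction.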

References

  1. EU Marie Curie EvoSpike Project (Kasabov, Indiveri): http://ncs.ethz.ch/projects/EvoSpike/
  2. Furber, S. et al (2012) Overview of the SpiNNaker system architecture, IEEE Trans. Computers, 99.
  3. Indiveri, G., Horiuchi, T.K. (2011) Frontiers in neuromorphic engineering, Frontiers in Neuroscience, 5, 2011.
  4. Kasabov, N. (2014) NeuCube: A Spiking Neural Network Architecture for Mapping, Learning and Understanding of Spatio-Temporal Brain Data, Neural Networks, 52, 62-76. http://dx.doi.org/10.1016/j.neunet.2014.01.006
  5. Kasabov, N., Dhoble, K., Nuntalid, N., Indiveri, G. (2013). Dynamic evolving spiking neural networks for on-line spatio- and spectro-temporal pattern recognition. Neural Networks, 41, 188-201.
  6. Kasabov, N., et al. (2014). Evolving Spiking Neural Networks for Personalised Modelling of Spatio-Temporal Data and Early Prediction of Events: A Case Study on Stroke. Neurocomputing, 2014.
  7. Kasabov, N. (ed) (2014) The Springer Handbook of Bio- and Neuroinformatics, Springer.
  8. Kasabov, N. (2015) Deep Machine Learning and Predictive Data Modelling with Spiking Neural Networks, Springer.
  9. Schliebs, S., Kasabov, N. (2013). Evolving spiking neural network-a survey. Evolving Systems, 4(2), 87-98. doi:10.1007/s12530-013-9074-9.
  10. Scott, N., N. Kasabov, G. Indiveri (2013) NeuCube Neuromorphic Framework for Spatio-Temporal Brain Data and Its Python Implementation, Proc. ICONIP 2013, Springer LNCS, 8228, pp.78-84.

Bio

Professor Nikola Kasabov is a Fellow of the IEEE, Fellow of the Royal Society of New Zealand, and DVF of the Royal Academy of Engineering, UK. He is the Director of the Knowledge Engineering and Discovery Research Institute (KEDRI), Auckland, and holds a Chair of Knowledge Engineering at the School of Computing and Mathematical Sciences at Auckland University of Technology. Kasabov is a Past President and Governors Board member of the International Neural Network Society (INNS) and of the Asia Pacific Neural Network Assembly (APNNA). He is a member of several technical committees of the IEEE Computational Intelligence Society and was a Distinguished Lecturer of the IEEE CIS (2011-2013). He is a Co-Editor-in-Chief of the Springer journal Evolving Systems and has served as Associate Editor of Neural Networks, IEEE Transactions on Neural Networks, IEEE Transactions on Fuzzy Systems, Information Sciences, and other journals. Kasabov holds MSc and PhD degrees from the TU Sofia, Bulgaria. His main research interests are in the areas of neural networks, intelligent information systems, soft computing, bioinformatics, and neuroinformatics. He has published more than 560 publications, including 15 books, 180 journal papers, 90 book chapters, 30 patents, and numerous conference papers. He has extensive experience at various academic and research organisations in Europe and Asia, including TU Sofia, the University of Essex, the University of Trento, the University of Otago, guest professorships at Shanghai Jiao Tong University and ETH/University of Zurich, and a DAAD professorship at TU Kaiserslautern. Prof. Kasabov has received the APNNA 'Outstanding Achievements Award', the INNS Gabor Award for 'Outstanding contributions to engineering applications of neural networks', the EU Marie Curie Fellowship, the Bayer Science Innovation Award, the APNNA Excellent Service Award, the RSNZ Science and Technology Medal, and others. He has supervised 40 PhD students to completion. More information about Prof. Kasabov can be found on the KEDRI website: http://www.kedri.aut.ac.nz.

Prof. Paul Verschure

  • Catalan Institute of Advanced Research (ICREA)
  • Technology Department, Universitat Pompeu Fabra
  • Laboratory of Synthetic Perceptive, Emotive and Cognitive Systems (SPECS), Center of Autonomous Systems and Neurorobotics (specs.upf.edu)

Subject: Engineering Biologically and Psychologically Grounded Living Machines: The Distributed Adaptive Control Theory of Mind, Brain and Behaviour

Abstract

Our society is facing a number of fundamental challenges in a range of domains that will require a new class of machines. I will call these Living Machines and will describe how their engineering will depend on extracting fundamental design principles from nature. In particular, I will emphasize the emergence of a new class of machines that are based on our advancing understanding of mind and brain. The argument that this can lead to a new form of engineering is based on the Distributed Adaptive Control (DAC) theory, which has been applied in a range of domains including robotics, the clinic, and education. DAC is based on the assumption that the brain evolved to maintain a dynamic equilibrium between an organism and its environment through action. The fundamental questions that such a brain has to solve in order to deal with the how of action in a physical world are: why (motivation), what (objects), where (space), and when (time), i.e. the H4W problem. After the Cambrian explosion of about 560M years ago, a last factor became of great importance for survival: the who of other agents. I propose that H5W defines the top-level objectives of the brain, and I will argue that brain and body evolved to provide specific solutions to it by relying on a layered control architecture that is captured in DAC.

I will show how DAC addresses H5W through interactions across multiple layers of neuronal organization, suggesting a very specific structuring of the brain which can be captured in robot control architectures. In explaining how the function of the brain is realized, I will show how the DAC theory provides very specific predictions that have been validated at the level of behaviour and the neuronal substrate. Subsequently, I will show how the DAC theory has given rise to a qualitatively new class of clinical interventions for the rehabilitation of deficits following stroke, illustrating the notion of deductive medicine. These examples will show that robot-based models of mind and brain not only advance our understanding of ourselves and other animals but can also lead to novel technical solutions to complex applied problems.
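The layered-control idea behind DAC can be sketched schematically: lower layers can pre-empt higher ones, so reflexes override learned behaviour, which in turn overrides deliberate plans. The layer names follow the DAC literature, but the rules and data below are purely illustrative toy stand-ins, not the actual architecture.

```python
def reactive_layer(sensors):
    # Hard-wired reflex: back off when an obstacle is very close.
    if sensors["proximity"] > 0.8:
        return "retreat"
    return None

def adaptive_layer(sensors, learned_associations):
    # Learned sensor-action couplings acquired during interaction.
    return learned_associations.get(sensors["cue"])

def contextual_layer(goal):
    # Goal-directed plan selection (the why/what/where/when/who level).
    return {"find_food": "explore"}.get(goal)

def dac_step(sensors, learned_associations, goal):
    # Arbitration across layers: the first layer that proposes an
    # action wins, so reflexes pre-empt learned behaviour, which
    # pre-empts deliberate plans.
    for action in (reactive_layer(sensors),
                   adaptive_layer(sensors, learned_associations),
                   contextual_layer(goal)):
        if action is not None:
            return action
    return "idle"

print(dac_step({"proximity": 0.9, "cue": "light"}, {"light": "approach"}, "find_food"))  # retreat
print(dac_step({"proximity": 0.1, "cue": "light"}, {"light": "approach"}, "find_food"))  # approach
```

The same arbitration scheme scales to richer layers: in DAC-style robot controllers, the adaptive layer is filled in by learning during interaction with the environment rather than by a fixed lookup table.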

References

1. Arsiwalla, X. D., R. Zucca, A. Betella, E. Martinez, D. Dalmazzo, P. Omedas, G. Deco and P. F. Verschure (2015). "Network dynamics with BrainX3: a large-scale simulation of the human brain network with real-time interaction." Frontiers in Neuroinformatics 9.
2. Verschure , P. F. M. J., C. M. A. Pennartz and G. Pezzulo (2014). "The why, what, where, when and how of goal-directed choice: neuronal and computational principles." Proceedings of the Royal Society B: Biological Sciences.
3. Cameirao, M. S., S. B. i Badia, E. Duarte, A. Frisoli and P. F. M. J. Verschure (2012). "The Combined Impact of Virtual Reality Neurorehabilitation and Its Interfaces on Upper Extremity Functional Recovery in Patients With Chronic Stroke." Stroke 43(10): 2720-2728.
4. Verschure, P. F. M. J. (2012). "The Distributed Adaptive Control Architecture of the Mind, Brain, Body Nexus." Biologically Inspired Cognitive Architecture - BICA 1(1): 55-72.
5. Mathews, Z. and P. F. M. J. Verschure (2011). "PASAR-DAC7: An Integrated Model of Prediction, Anticipation, Sensation, Attention and Response for Artificial Sensorimotor Systems." Information Sciences 186(1): 1-19.
6. Verschure, P. F. M. J., T. Voegtlin and R. J. Douglas (2003). "Environmentally mediated synergy between perception and behaviour in mobile robots." Nature 425: 620--624.