Tutorials
Giacomo Boracchi, Ph.D., Assistant Professor
- Department of Electronics and Informatics, Politecnico di Milano, Italy
Subject: Learning Under Concept Drift: Methodologies and Applications
Most machine learning techniques assume that the process generating the data is stationary. This guarantees that the model learned during the initial training phase remains valid during subsequent operation. Unfortunately, stationarity is often an oversimplifying assumption, because real-world processes typically change over time. In the classification literature, changes in the data-generating process are referred to as concept drift.
Learning under concept drift is a challenging research topic. In addition to the usual issues of online learning, the learner has to deal with possible changes, which would make it obsolete and unfit. Since changes are often unpredictable, as they might occur at any time and shift the data-generating process to an unforeseen state, the learner has to either undergo continuous adaptation to match the current operating conditions (passive approaches) or steadily monitor the data stream to detect changes and, when necessary, react (active approaches). In the last few years there has been a flourishing of algorithms designed for learning under concept drift, also owing to the large number of applications where these techniques can be employed.
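To make the active/passive distinction concrete, here is a minimal, self-contained Python sketch of an active approach on a synthetic stream. Everything in it is an illustrative assumption rather than material from the tutorial: the one-dimensional stream with an abrupt drift at t = 1000, the toy NearestMean classifier, the crude error-rate detector (window of 100, margin of 0.2), and the assumption that true labels arrive with every sample; principled detectors such as CUSUM-type schemes or DDM would be used in practice.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample(t):
    """Synthetic 1-D stream (illustrative): the class-conditional means
    swap at t = 1000, an abrupt concept drift that makes any model
    trained beforehand obsolete."""
    y = int(rng.integers(0, 2))
    mean = y if t < 1000 else 1 - y
    return rng.normal(mean, 0.3), y

class NearestMean:
    """Toy classifier: assign x to the class whose mean is closest."""
    def fit(self, X, y):
        X, y = np.asarray(X), np.asarray(y)
        self.m0, self.m1 = X[y == 0].mean(), X[y == 1].mean()
        return self
    def predict(self, x):
        return int(abs(x - self.m1) < abs(x - self.m0))

# Initial training on stationary data.
X0, y0 = zip(*(sample(t) for t in range(200)))
model = NearestMean().fit(X0, y0)

# Active approach: monitor the 0/1 error stream; when the recent error
# rate exceeds the post-training reference rate by a margin, declare a
# change, gather a fresh window of data, and retrain on it.
window, errors = 100, []
adapting, buffer = False, []
for t in range(200, 2000):
    x, y = sample(t)
    if adapting:                      # drift was flagged: collect fresh data
        buffer.append((x, y))
        if len(buffer) == window:
            Xr, yr = zip(*buffer)
            model = NearestMean().fit(Xr, yr)   # adapt to the new concept
            adapting, buffer, errors = False, [], []
            print(f"t={t}: retrained on {window} post-change samples")
        continue
    errors.append(int(model.predict(x) != y))
    if len(errors) >= 2 * window:
        ref = np.mean(errors[:window])     # error rate right after (re)training
        cur = np.mean(errors[-window:])    # error rate now
        if cur > ref + 0.2:                # crude threshold test
            adapting = True
            print(f"t={t}: drift detected (error {cur:.2f} vs {ref:.2f})")
```

A passive approach would instead skip the detection step entirely and simply refit the model on the most recent window at regular intervals, regardless of whether a change has occurred.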
The tutorial introduces the main issues of learning under concept drift, presents the active and passive approaches as two extreme adaptation strategies, and illustrates a few relevant applications, such as fraud detection and the detection of anomalies/changes in streams of signals and images.
RNDr. Vera Kurkova, DrSc.
- Institute of Computer Science, Czech Academy of Sciences, Czech Republic
Subject: Strength and Limitations of Shallow Networks
Although neural networks were originally introduced as biologically inspired multilayer computational models, shallow (one-hidden-layer) architectures later became dominant in applications. Recently, interest in architectures with several hidden layers has been renewed by the successes of deep convolutional networks. These experimental results have motivated theoretical research aiming to characterize the tasks which can be computed more efficiently by deep networks than by shallow ones. This tutorial will review recent developments in the theoretical analysis of the strength and limitations of shallow networks. The tutorial will focus on the following topics:
- Universality and tractability of representations of multivariable mappings by shallow networks.
- Trade-off between maximal generalization capability and model complexity.
- Limitations of computation of highly-varying functions by shallow networks (see the sketch after this list).
- Probability distributions of functions which cannot be tractably represented by shallow networks.
- Examples of representations of high-dimensional classification tasks by one- and two-hidden-layer networks.
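As a hedged illustration of the highly-varying-functions topic above, and not material from the tutorial itself, the following self-contained Python sketch implements the well-known sawtooth construction used in depth-separation results (in the style of Telgarsky): composing a two-unit ReLU "triangle" block k times produces a function with 2^k linear pieces using only 2k units, whereas a one-hidden-layer ReLU network needs on the order of 2^k units to produce that many pieces.

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def triangle(x):
    # One 2-unit ReLU layer: maps [0, 0.5] linearly up to [0, 1]
    # and [0.5, 1] linearly back down (the "tent" map on [0, 1]).
    return 2.0 * relu(x) - 4.0 * relu(x - 0.5)

def deep_sawtooth(x, depth):
    # Depth-k network: k layers of 2 ReLU units each (2k units total),
    # computing a sawtooth with 2**k teeth on [0, 1].
    for _ in range(depth):
        x = triangle(x)
    return x

# Count linear pieces by counting slope sign changes on a fine grid;
# this is exact here because every tooth side of the sawtooth is a
# single linear piece of constant slope magnitude.
xs = np.linspace(0.0, 1.0, 200001)
for k in (1, 2, 4, 8):
    ys = deep_sawtooth(xs, k)
    slopes = np.sign(np.diff(ys))
    slopes = slopes[slopes != 0]          # drop flat grid artifacts, if any
    pieces = 1 + np.count_nonzero(np.diff(slopes))
    print(f"depth {k:>2} ({2 * k:>2} units): {pieces} linear pieces; "
          f"a shallow ReLU net needs on the order of 2**{k} = {2**k} units")
```

The exponential gap between the 2k units used here and the roughly 2^k units a shallow network would require is one concrete instance of the separation results the tutorial surveys.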
Attendees will learn about the consequences of these theoretical results for the methodology of choosing a neural network architecture, as well as about open problems related to deep and shallow architectures.