Communications and Multimedia Engineering (Master of Science)

  Selected Topics on Machine Learning (STASC)

Lecturer
Guest lecturer

Details
Lecture, ECTS studies, ECTS credits: 5
For degree programme students only; language of instruction: English. Lecturer: Sergios Theodoridis (Professor of Signal Processing and Machine Learning in the Department of Informatics and Telecommunications of the University of Athens)
Time and place: Mon 8:15 - 9:45, 05.025; Tue 14:15 - 15:45, 05.025; Wed 16:15 - 17:45, 05.025; Thu 10:15 - 11:45, 05.025; Fri 12:15 - 13:45, 05.025
from 24.6.2019 to 26.7.2019

Fields of study
WPF (elective) CME-MA, semester 2 or later (ECTS credits: 5)
PF (compulsory) ASC-MA, semester 2 (ECTS credits: 5)

Contents
Introduction: What is Machine Learning, some typical examples

Learning in Parametric Modelling - Basic Concepts: Parametric vs non-parametric modelling, regression, least squares, classification, supervised vs unsupervised and semi-supervised learning, biased and unbiased estimation, MSE-optimal estimation, the bias-variance trade-off, inverse problems and overfitting, regularization, the maximum likelihood method, the curse of dimensionality, cross-validation.
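As a rough illustration of the regularized least-squares and cross-validation themes listed above, here is a minimal NumPy sketch of polynomial ridge regression with 5-fold cross-validation; the dataset, polynomial degree, and lambda grid are invented for illustration and are not taken from the course.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 30)
y = np.sin(2 * np.pi * x) + 0.2 * rng.standard_normal(x.size)  # noisy targets

def design(x, degree=9):
    # Polynomial feature matrix with columns x^0 .. x^degree.
    return np.vander(x, degree + 1, increasing=True)

def ridge_fit(X, y, lam):
    # Closed-form regularized least squares: w = (X^T X + lam I)^{-1} X^T y.
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

# 5-fold cross-validation over a small grid of regularization strengths.
folds = np.array_split(rng.permutation(x.size), 5)
for lam in [1e-8, 1e-4, 1e-2, 1.0]:
    errs = []
    for val in folds:
        train = np.setdiff1d(np.arange(x.size), val)
        w = ridge_fit(design(x[train]), y[train], lam)
        errs.append(np.mean((design(x[val]) @ w - y[val]) ** 2))
    print(f"lambda={lam:g}  CV MSE={np.mean(errs):.4f}")
```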

Classification - A Tour of the Classics: Bayes-optimal classification, minimum distance classifiers, the naïve Bayes classifier, risk-optimal classification, nearest neighbor classifiers, logistic regression, decision trees.
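For the classification classics, a k-nearest-neighbour classifier is easy to write from scratch. The following sketch uses invented 2-D toy data and k = 5, purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
X0 = rng.normal(loc=[-1.0, -1.0], scale=0.5, size=(50, 2))  # class 0 cloud
X1 = rng.normal(loc=[+1.0, +1.0], scale=0.5, size=(50, 2))  # class 1 cloud
X = np.vstack([X0, X1])
y = np.array([0] * 50 + [1] * 50)

def knn_predict(X_train, y_train, x_query, k=5):
    # Euclidean distance from the query to every training point.
    d = np.linalg.norm(X_train - x_query, axis=1)
    nearest = np.argsort(d)[:k]                    # indices of the k closest points
    return np.bincount(y_train[nearest]).argmax()  # majority vote

print(knn_predict(X, y, np.array([0.8, 0.9])))  # expected output: 1
```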

Learning in Reproducing Kernel Spaces: The need for nonlinear models, Cover's theorem on the capacity of linear dichotomies, reproducing kernel Hilbert spaces, kernels and the kernel trick, the representer theorem, kernel ridge regression, support vector regression, margin classifiers and support vector machines.
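The representer theorem reduces kernel ridge regression to a single linear solve in the dual variables, f(x) = sum_i alpha_i k(x, x_i). Below is a minimal sketch with a Gaussian (RBF) kernel; the data, kernel width gamma, and regularizer lambda are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
X = np.sort(rng.uniform(-3, 3, 40))[:, None]          # 40 scalar inputs
y = np.sinc(X).ravel() + 0.1 * rng.standard_normal(40)

def rbf(A, B, gamma=1.0):
    # k(a, b) = exp(-gamma * ||a - b||^2), evaluated for all pairs.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

lam = 1e-2
K = rbf(X, X)                                         # Gram matrix
alpha = np.linalg.solve(K + lam * np.eye(len(X)), y)  # dual coefficients

x_test = np.array([[0.5]])
print(rbf(x_test, X) @ alpha)  # prediction f(0.5) = sum_i alpha_i k(0.5, x_i)
```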

Bayesian Learning: Maximum likelihood, maximum a posteriori, the Bayesian approach, the evidence function and Occam's razor rule, the Laplace approximation, the exponential family of probability distributions, latent variables and the EM algorithm, linear regression via the EM algorithm, Gaussian mixture models, variational approximation to Bayesian learning, variational Bayesian linear regression, the variational approach to mixture modelling, the relevance vector machine framework, Gaussian processes, nonparametric Bayesian learning: the Chinese restaurant process and the Indian buffet process.
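As a concrete instance of latent variables and the EM algorithm, here is a minimal EM loop for a two-component 1-D Gaussian mixture; the data, initialisation, and iteration count are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)
x = np.concatenate([rng.normal(-2.0, 0.7, 200), rng.normal(2.0, 1.0, 300)])

# Initial guesses for mixing weights, means, and standard deviations.
w = np.array([0.5, 0.5])
mu = np.array([-1.0, 1.0])
sigma = np.array([1.0, 1.0])

def gauss(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

for _ in range(50):
    # E-step: posterior responsibility of each component for each point.
    r = w * gauss(x[:, None], mu, sigma)
    r /= r.sum(axis=1, keepdims=True)
    # M-step: re-estimate parameters from the responsibilities.
    n_k = r.sum(axis=0)
    w = n_k / x.size
    mu = (r * x[:, None]).sum(axis=0) / n_k
    sigma = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / n_k)

print(w, mu, sigma)  # should land near (0.4, 0.6), (-2, 2), (0.7, 1.0)
```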

Neural Networks and Deep Learning: The perceptron and the perceptron rule, feed-forward neural networks, the backpropagation algorithm, selecting the cost function and the output nonlinearity, vanishing and exploding gradients, the ReLU activation function, network pruning and the dropout method, universal approximation properties of neural networks, the need for deep architectures: representation, optimization and generalization properties, convolutional networks, convolution over volumes, 1x1 convolutions, inception and residual networks, recurrent neural networks, adversarial training, transfer learning, generative adversarial networks, capsule modules, deep belief networks, autoencoders.
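The perceptron rule is the simplest of the neural network topics above and fits in a few lines. This sketch uses invented, linearly separable 2-D data with labels in {-1, +1} and an illustrative learning rate.

```python
import numpy as np

rng = np.random.default_rng(4)
X = rng.uniform(-1, 1, size=(100, 2))
y = np.where(X[:, 0] + X[:, 1] > 0, 1, -1)  # separable by the line x1 + x2 = 0

w, b, eta = np.zeros(2), 0.0, 0.1
for _ in range(100):                         # epochs over the training set
    for xi, yi in zip(X, y):
        if yi * (w @ xi + b) <= 0:           # point misclassified (or on the boundary)
            w += eta * yi * xi               # perceptron update: move w toward the point
            b += eta * yi

print(w, b)  # an approximately separating hyperplane, w roughly proportional to (1, 1)
```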

ECTS information:
Credits: 5

Additional information
Keywords: Machine Learning
Expected participants: 15, maximum number of participants: 20

Used in the following UnivIS modules
Starting semester SS 2019:
Selected Topics in ASC (STASC)

Department: Chair of Multimedia Communications and Signal Processing (Prof. Dr. Kaup)