||Prof.dr.ir. P.P. Jonker
|Contact Hours / Week x/x/x/x:
||0/0/0/2 Course canceled in 2013-2014
||In robotics, the environment is always uncertain. We can try to limit the uncertainty, such as in industrial robotics, but this is not an option in less structured environments such as homes or offices. This course is organized around the various methods to deal with such uncertainty, in both sensing and planning. Model-based as well as model-free approaches will be discussed. The course will treat the following topics: Probability and robotics, Sensing, Localization, Mapping and SLAM, Planning, Learning from demonstration, Learning from exploration.
||The student is able to describe the role of probability in robotics, why it has to be taken into account, and which methods are available for this purpose. The student is able to select one of these methods based on its relative advantages and disadvantages, and to apply it to a specific problem.
After successful completion of each topic, the student is able to:
* Probability and robotics
- Describe some state-of-the-art robots and their capabilities.
- Understand the need for statistical models and dedicated methods in robotics.
- Understand basic concepts of statistics.
- Be familiar with Bayes' formula and the Markov hypothesis and their implications.
- Use a Bayesian framework to solve some simple problems.
* Sensing
- Describe the application areas of the Kalman filter.
- Understand under which conditions the Kalman filter is optimal.
- Apply the Kalman filter in linear and nonlinear systems.
- Use the Kalman filter to integrate the outputs of multiple sensors.
* Localization
- Understand the concepts of Monte Carlo methods for function estimation.
- Understand the problem of model-free pdf estimation and how to solve it using Monte Carlo methods.
- Understand the degeneracy problem and how to apply resampling techniques to solve it.
- Implement several resampling strategies in a particle filter framework and compare them.
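The degeneracy and resampling objectives above can be made concrete with a small sketch. The 1-D random-walk motion model, the Gaussian measurement model, and all noise levels below are illustrative assumptions, not course material:

```python
import numpy as np

rng = np.random.default_rng(0)

def multinomial_resample(particles, weights):
    """Draw N independent indices according to the weights (higher variance)."""
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx]

def systematic_resample(particles, weights):
    """One random offset, then evenly spaced pointers (lower variance)."""
    n = len(particles)
    positions = (rng.random() + np.arange(n)) / n
    idx = np.minimum(np.searchsorted(np.cumsum(weights), positions), n - 1)
    return particles[idx]

def particle_filter(observations, resample, n=500, motion_std=0.2, obs_std=0.5):
    """Toy 1-D filter: random-walk motion model, Gaussian measurement model."""
    particles = rng.normal(0.0, 1.0, n)
    estimates = []
    for z in observations:
        particles = particles + rng.normal(0.0, motion_std, n)    # predict
        weights = np.exp(-0.5 * ((z - particles) / obs_std) ** 2)  # weight
        weights /= weights.sum()
        estimates.append(np.sum(weights * particles))
        if 1.0 / np.sum(weights ** 2) < n / 2:        # low effective sample
            particles = resample(particles, weights)  # size -> resample
    return np.array(estimates)

# Simulated ground truth and noisy measurements of it.
true_states = np.cumsum(rng.normal(0.0, 0.2, 50))
observations = true_states + rng.normal(0.0, 0.5, 50)

results = {}
for name, resample in [("multinomial", multinomial_resample),
                       ("systematic", systematic_resample)]:
    est = particle_filter(observations, resample)
    results[name] = float(np.sqrt(np.mean((est - true_states) ** 2)))
    print(f"{name:11s} RMSE: {results[name]:.3f}")
```

Comparing the two strategies' estimation error and particle diversity at equal particle counts is the kind of comparison the last objective asks for.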
* Mapping and SLAM
- Describe different ways of modeling a robot’s environment.
- Understand the difficulties in modeling an environment when the robot doesn’t know its own position.
- Apply simultaneous localization and mapping (SLAM) using the extended Kalman filter.
* Planning
- Describe the properties and components of a (Partially Observable) Markov Decision Process.
- Understand the use of dynamic programming to solve MDPs, and apply it to simple problems.
- Describe in which way partial observability affects planning.
- Understand the extension of dynamic programming to POMDPs.
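Dynamic programming on a simple MDP can be sketched as follows; the four-state chain, its rewards, and the discount factor are illustrative choices, not course material:

```python
import numpy as np

# Toy MDP: states 0..3 on a line, actions step left (-1) or right (+1),
# deterministic moves, reward 1 for reaching the absorbing goal state 3.
n_states, gamma = 4, 0.9
actions = [-1, +1]

def step(s, a):
    if s == 3:                                # absorbing goal state
        return s, 0.0
    s2 = min(max(s + a, 0), n_states - 1)
    return s2, (1.0 if s2 == 3 else 0.0)

# Value iteration: V(s) <- max_a [ r(s,a) + gamma * V(s') ] until convergence.
V = np.zeros(n_states)
for _ in range(100):
    V_new = np.array([max(step(s, a)[1] + gamma * V[step(s, a)[0]]
                          for a in actions) for s in range(n_states)])
    if np.max(np.abs(V_new - V)) < 1e-9:
        break
    V = V_new

# Greedy policy extraction from the converged value function.
policy = [max(actions, key=lambda a: step(s, a)[1] + gamma * V[step(s, a)[0]])
          for s in range(n_states)]
print("V =", V)              # values decay by gamma with distance to the goal
print("policy =", policy)    # +1 (right) in every state before the goal
```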
* Learning from demonstration
- Describe the concepts of a Learning from Demonstration (LD) system and explain its technical and industrial need.
- Explain the general architecture of an LD system and its performance requirements, and indicate, for given tasks, which probabilistic model should be applied.
- Use a toolbox to train the parameters of a probabilistic model for a specific LD system and a specific imitation task, and evaluate the performance of the system.
- Use prior information to improve the learning process of a specific LD system.
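The course works with a dedicated toolbox; as a minimal stand-in, the sketch below trains the simplest probabilistic model one could fit to demonstrations. The 1-D reaching motion, the number of demonstrations, the noise level, and the per-timestep Gaussian model are all hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical setup: 5 noisy demonstrations of a 1-D reaching motion,
# each a trajectory of 50 samples of the same underlying smooth curve.
t = np.linspace(0.0, 1.0, 50)
true_motion = np.sin(np.pi * t / 2)                    # illustrative target
demos = true_motion + rng.normal(0.0, 0.05, (5, 50))   # the demonstrations

# Simplest probabilistic model: an independent Gaussian per time step.
mu = demos.mean(axis=0)      # learned mean trajectory (used for reproduction)
sigma = demos.std(axis=0)    # per-step uncertainty across demonstrations

# The variance tells the robot where the demonstrations agree (low sigma:
# track tightly) and where they differ (high sigma: tracking may be loose).
error = float(np.max(np.abs(mu - true_motion)))
print(f"max deviation of learned mean from target: {error:.3f}")
```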
* Learning from exploration
- Describe the relative merits of model-based and model-free planning.
- Understand how reinforcement learning can avoid the need for a model.
- Understand how environmental uncertainty affects reinforcement learning.
- Use a reinforcement learning toolbox to design a model-free solution to a motor control task.
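Model-free learning from exploration can be sketched with tabular Q-learning on a toy task; the chain environment, its rewards, and the hyper-parameters below are illustrative assumptions, not a course-prescribed toolbox:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy task: 5 states in a chain; action 0 = step left, 1 = step right.
# Reaching state 4 yields reward 1 and ends the episode. The agent never
# uses a model of the transitions: Q(s, a) is learned from experience alone.
n_states, n_actions = 5, 2
alpha, gamma = 0.5, 0.9
Q = np.zeros((n_states, n_actions))

def env_step(s, a):
    s2 = min(max(s + (1 if a == 1 else -1), 0), n_states - 1)
    done = s2 == n_states - 1
    return s2, (1.0 if done else 0.0), done

for _ in range(500):                         # episodes of exploration
    s = 0
    for _ in range(50):
        a = int(rng.integers(n_actions))     # uniform random behaviour policy
        s2, r, done = env_step(s, a)
        # Off-policy Q-learning update: bootstrap from the greedy value of s2,
        # so the greedy policy can be extracted despite random exploration.
        target = r + (0.0 if done else gamma * np.max(Q[s2]))
        Q[s, a] += alpha * (target - Q[s, a])
        s = s2
        if done:
            break

greedy = np.argmax(Q, axis=1)
print("greedy policy:", greedy[:4])   # 1 = 'right' in every state before goal
```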
||Lectures (2 hours/week)
Instruction (2 hours/week)
PC practical (2 hours/week)
||During the course, groups of 2-3 students each study a problem from the viewpoint of one of the course topics (e.g. Kalman filtering, SLAM). Literature study, the design and programming of solutions, an intermediate presentation, lecturing, and the final report are all part of the examination procedure.