This laboratory course teaches fundamental concepts in computational science and machine learning based on matrix factorization.


X ≈ A · B

where a data matrix X is (approximately) factorized into two matrices A and B. Depending on the chosen measure of approximation quality and the constraints imposed on A and B, this scheme provides a powerful framework of numerical linear algebra that encompasses many important techniques, such as dimension reduction, clustering and sparse coding.
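As a concrete (unofficial) illustration of such a factorization: by the Eckart–Young theorem, the truncated singular value decomposition yields the best rank-k approximation of X in the Frobenius norm. The course exercises use Matlab; the sketch below uses Python/NumPy purely for illustration, with all names chosen here:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((6, 5))   # toy data matrix

# Truncated SVD: keep only the k largest singular values.
k = 2
U, s, Vt = np.linalg.svd(X, full_matrices=False)
A = U[:, :k] * s[:k]   # 6 x k factor
B = Vt[:k, :]          # k x 5 factor
X_hat = A @ B          # rank-k approximation, X ≈ A · B

# Frobenius error of the approximation; by Eckart-Young, no rank-k
# factorization achieves a smaller value.
err = np.linalg.norm(X - X_hat, "fro")
```

Different choices of quality measure and constraints (non-negativity, sparsity, cluster-indicator structure) lead to the other methods covered in the course.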

Date   News
20.02  Remember to bring a laptop with Matlab running to the tutorial sessions!
10.02  Website is now running.
Week 8: Introduction to the course
  Lecture:  lecture01.pdf
  Exercise: exercise01.pdf, submission-stub.zip, data_gen.zip, matlab_solutions.zip

Week 9: Principal Component Analysis
  Lecture:  lecture02.pdf
  Exercise: exercise02.pdf, tutorial02.pdf, step_by_step_pca.m

Week 10: Singular Value Decomposition
  Lecture:  lecture03.pdf
  Exercise: exercise03.pdf, tutorial03.pdf

Week 11: K-means
  Lecture:  lecture04.pdf, norms.pdf
  Exercise: exercise04.pdf, tutorial04.pdf

Week 12: Mixture Models
  Lecture:  lecture05.pdf
  Exercise: exercise05.pdf, tutorial05.pdf, PredictMissingValues.m, k_means.m

Week 13: Multi-Assignment Clustering
  Lecture:  lecture06.pdf
  Exercise: exercise06.pdf, tutorial06.pdf, mf.pdf, gmm_template.m, demoGMM.m

Week 14: Non-negative Matrix Factorization
  Lecture:  lecture07.pdf, NMF (Nature)
  Exercise: tutorial07.pdf

Week 15: Sparse Coding
  Lecture:  lecture08.pdf
  Exercise: exercise08.pdf, tutorial08.pdf, haarTrans.m

Week 16: Overcomplete Dictionaries
  Lecture:  lecture09.pdf
  Exercise: exercise09.pdf, tutorial09.pdf, haarTrans.m, overDCTdict.m

Some of the material is password protected for copyright reasons. Please send an email to CILAB to obtain it. Note that we can only provide access to ETH students.

Time and Place

Lectures Tue 08 - 10 HG E 5
Exercises Thu 16 - 18 CAB G 61
Fri 08 - 10 CAB G 11
Presence hour Mon 11 - 12 CAB H 53

Exercises and Assignments

Each exercise session provides a pen-and-paper problem, with the solution discussed in the session. These problems help you consolidate the theory presented in the lecture and identify gaps in your understanding.

Assignments are larger problems that you work on in groups of two or three students. For each assignment, you develop and implement an algorithm that solves one of the four application problems. Submitting your Matlab code gives you feedback on correctness, accuracy, speed and efficiency.

Semester Project

Based on the implementations you developed during the semester, you create a novel solution for one of the application problems by extending or combining previous work. You write up your methodology and an experimental comparison to baselines in the form of a short scientific paper. You then submit your novel solution to the online ranking system for competitive comparison.

Projects are due on Friday 20 June.

Written Exam

The examination is written, 120 minutes in length. The language of examination is English. As a written aid, you may bring two A4 pages (i.e. one A4 sheet of paper, both sides), either handwritten or typed in a font size of at least 11 points.

Grade

You need to satisfy two requirements to pass this course:

  1. Your average grade (33.3 % group project, 66.6 % written exam) is greater than or equal to 4.
  2. Your written exam grade is greater than or equal to 3.5.

Therefore, your final grade will be:

  1. Your average grade, if your written exam grade is greater than or equal to 3.5.
  2. Your written exam grade, if it is below 3.5.
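A small worked example of this rule (the numbers are hypothetical; Python is used here only for illustration):

```python
def final_grade(project, exam):
    """Final grade per the rules above: the weighted average
    (1/3 project, 2/3 exam) if the exam grade is at least 3.5,
    otherwise the exam grade alone."""
    average = (project + 2 * exam) / 3
    return average if exam >= 3.5 else exam

# Exam grade 4.0 (>= 3.5): the weighted average counts.
print(final_grade(5.5, 4.0))  # 4.5
# Exam grade 3.0 (< 3.5): the exam grade alone counts.
print(final_grade(6.0, 3.0))  # 3.0
```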

This lab course has a strong focus on practical assignments. Students work in groups to develop solutions to four application problems; see the applications page to learn more.

Solving Assignments

For solving assignments, you...

  1. work in groups of two to three students (no more, no less)
  2. download the assignment description sheet (see the exercise sheets in the syllabus)
  3. download Matlab function skeletons, training data and an evaluation script
  4. develop, debug and optimise your solution on the training data
  5. submit your solution to online evaluation on test data
  6. see where you stand in a ranking of all submissions

You can find more details on the Submission System pages.

Problem Reports

If you are experiencing problems with the submission system, then consult the instructions. The same page also gives details on how to report problems with the submission system.

Your semester project is a group effort. It consists of four parts:

  1. The programming assignments you solve during the semester (not graded!).
  2. Developing a novel solution for one of the assignments, e.g. by combining methods from previous programming assignments into a novel solution.
  3. Comparing your novel solution to previous assignments.
  4. Writing up your findings in a short scientific paper.

If you don't belong to any group so far, please write to Brian McWilliams by 25 May 2013.

Developing a Novel Solution

As your final programming assignment, you develop a novel solution to one of the four application problems. You are free to exploit any idea you have, provided it is not identical to any other group's submission or to an existing Matlab implementation of an algorithm available on the internet.

Two examples for developing a novel solution:

  1. You implemented a collaborative filtering algorithm based on dimension reduction as part of an assignment. Now you apply dimension reduction to inpainting.
  2. You implemented both a clustering and a sparse coding algorithm for image compression. Now you combine both techniques into a novel compression method.

Comparison to Baselines

You compare your novel algorithm to at least two baseline algorithms. For the baselines, you can use the implementations you developed as part of the programming assignments.

Ranking of Novel Solution

You submit your novel algorithm to the online ranking system.

Write Up

Technical report: how to write a scientific paper. [PDF] [source]

The document should be a maximum of 4 pages.

Grading

There are two different types of grading criteria applied to your project, with the corresponding weights shown in brackets.

Competitive

The following criteria are scored based on your rank in the submission system tables, in comparison with the rest of the class. Ranking is performed only over the projects; no assignments are included. Only your very last submission counts.

  • time taken for computation (10%)
  • average rank for all other criteria relevant to the task, for example reconstruction error and sparsity (20%)

The ranks will then be converted on a linear scale into a grade between 4 and 6.
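The exact conversion formula is not specified here; one plausible linear mapping (an assumption for illustration, not the official formula: best rank maps to 6, worst to 4) would be:

```python
def rank_to_grade(rank, n_groups):
    """Hypothetical linear rank-to-grade mapping: rank 1 -> 6.0,
    rank n_groups -> 4.0. The official conversion may differ."""
    if n_groups == 1:
        return 6.0
    return 6.0 - 2.0 * (rank - 1) / (n_groups - 1)

print(rank_to_grade(1, 11))   # 6.0
print(rank_to_grade(6, 11))   # 5.0
print(rank_to_grade(11, 11))  # 4.0
```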

Non-competitive

The following criteria are graded based on an evaluation by the teaching assistants.

  • quality of paper (30%)
  • quality of implementation (20%)
  • creativity of solution (20%)

Submission

We are grateful to Microsoft Research for providing us with their Conference Management Toolkit to manage the submission and review of project reports.

To submit your report, please go to https://cmt.research.microsoft.com/CIL2013, register and follow the instructions given there. You can resubmit any number of times until the deadline passes.

Instructions

For a successful submission please follow these steps:

  1. Your group should consist of two or three students registered to the CIL 2013 course.
  2. Register your group on the code and paper submission webpages.
  3. Upload the code of your final submission. Make sure to check the checkbox "Include for final project". The latest submission marked in this way will be used for the final ranking. Also make sure that you receive a confirmation that your code ran successfully, and check that your submission was included in the ranking.
  4. Prepare your project paper as described on the course webpage. Include the name of your group in the header of the submitted PDF file, e.g: \author{Hans Mustermann and John Doe\\group: mustermann_doe, Department of Computer Science, ETH Zurich, Switzerland}
  5. Submit the paper and your code (as supplementary material) through the CMT system.

Here is a list of additional material if you want to read up on a specific topic of the course.

Probability Theory and Statistics

Chapter 1.2 "Probability Theory" in: Christopher M. Bishop (2006). Pattern Recognition and Machine Learning. Springer.

Larry Wasserman (2003). All of Statistics. Springer.

Linear Algebra

Gene Golub and Charles Van Loan (1996). Matrix Computations. The Johns Hopkins University Press.

Lloyd Trefethen and David Bau (1997). Numerical Linear Algebra. SIAM.

Dave Austin. We recommend a Singular Value Decomposition. (SVD tutorial)

Michael Mahoney. Randomized algorithms for matrices and data. (Recent review paper)

Collaborative Filtering

Yehuda Koren, Robert Bell and Chris Volinsky (2009). Matrix Factorization Techniques for Recommender Systems. IEEE Computer.

Clustering

Chapter 9 "Mixture Models and EM" in: Christopher M. Bishop (2006). Pattern Recognition and Machine Learning. Springer.

Mario Frank, Andreas Streich and Joachim M. Buhmann (2012). Multi-Assignment Clustering for Boolean Data. JMLR.

Sparse Coding

Chapter 1 "Sparse Representations" in: Stephane Mallat (2009). A Wavelet Tour of Signal Processing - The Sparse Way. Academic Press.

Chapter 6 "Wavelet Transforms", pp. 244-276; in: Andrew S. Glassner (1995). Principles of Digital Image Synthesis, Vol. 1. Morgan Kaufmann Publishers, Inc.

Chapter 13 "Fourier and Spectral Applications", pp. 699-716; in: William H. Press, Saul A. Teukolsky, William T. Vetterling and Brian P. Flannery (2007). Numerical Recipes: The Art of Scientific Computing. Cambridge University Press.

Michal Aharon, Michael Elad and Alfred Bruckstein (2005). K-SVD: Design of Dictionaries for Sparse Representation. Proceedings of SPARS.

Richard Baraniuk (2007). Compressive sensing. IEEE Signal Processing Magazine.

We maintain a forum on the VIS board, which we attend regularly. Please post questions there so that others can see them and join the discussion.

If you have questions which are not of general interest, please don't hesitate to contact us directly.

The main email point of contact for the course is CILAB (David Balduzzi).

Instructor: Prof. J. M. Buhmann
Head Assistant: Dr David Balduzzi
Assistants: Sharon Wulff, Dr Brian McWilliams, Dr Dwarikanath Mahapatra, Alkis Gkotovos, Dmitry Laptev