CVPR 2018 Tutorial on

Interpretable Machine Learning for Computer Vision

Salt Lake City, USA
Monday afternoon, June 18, 2018


Speakers

Been Kim, Laurens van der Maaten, Bolei Zhou, Andrea Vedaldi
Overview

Complex machine learning models such as deep convolutional neural networks and recursive neural networks have made great progress in a wide range of computer vision applications, such as object/scene recognition, image captioning, and visual question answering. But they are often perceived as black boxes. As models grow deeper in pursuit of better recognition accuracy, it becomes even harder to understand the predictions a model makes and why it makes them.

This tutorial aims to broadly engage the computer vision community with the topic of interpretability and explainability of models used in computer vision. We will introduce the definition of interpretability, explain why it is important, and review visualization and interpretation methodologies for analyzing both the data and the models used in computer vision.


Schedule

14:00 - 14:10  Welcome and Overview

14:10 - 14:50  Talk 1 by Been Kim: Introduction to Interpretable Machine Learning

14:50 - 15:30  Talk 2 by Laurens van der Maaten: Dos and Don'ts of Using t-SNE to Understand Vision Models

15:30 - 16:15  Break

16:15 - 16:55  Talk 3 by Bolei Zhou: Revisiting the Importance of Single Units in Deep Networks

16:55 - 17:35  Talk 4 by Andrea Vedaldi: Understanding Deep Networks Using Natural Pre-images, Meaningful Perturbations, and Vector Embeddings


Please contact Bolei Zhou if you have questions. The webpage template is courtesy of the awesome Georgia.