1. Course Information
- Instructors: Hanjie Chen, Yangfeng Ji
- Semester: Spring 2022
- Location: Rice Hall 340
- Time: Tuesday and Thursday 11 AM - 12:15 PM
- TA: Wanyu Du
- Office hours:
- Schedule
- We will use Campuswire for online discussion. By the time of our first class, all students registered for this course should have received an invitation from Campuswire. Please let the instructors know if you have not received one.
2. Course Description
Machine learning models have achieved remarkable performance in a wide range of AI fields, such as Natural Language Processing and Computer Vision. However, the lack of interpretability of machine learning models raises concerns about the trustworthiness and reliability of their predictions. This problem limits their deployment in the real world, especially in high-stakes scenarios such as healthcare, economics, and criminal justice. The goal of this course is to familiarize students with this emerging problem in machine learning and with recent advances in interpretable and explainable AI.
2.1 Topics
This course will cover topics including, but not limited to, the following:
- Background of interpretable machine learning
- Interpretability in machine learning
- Brief introduction of deep learning
- Techniques in exploring the interpretability of machine learning models
- Different classes of interpretable models (e.g., prototype based approaches, sparse linear models, rule based techniques, generalized additive models)
- Post-hoc explanations (e.g., white-box explanations, black-box explanations, saliency maps)
- Connections between model interpretability and other properties, such as robustness, uncertainty, and fairness
- Implementation of model interpretability in real-world applications, including natural language processing, computer vision, healthcare, etc.
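To give a flavor of the post-hoc, black-box explanation techniques listed above, here is a minimal illustrative sketch using permutation feature importance in scikit-learn (one of the packages listed under Prerequisites). The dataset and model choices are arbitrary examples, not part of any course assignment:

```python
# Illustrative post-hoc explanation: permutation feature importance
# applied to a black-box classifier. The dataset and model here are
# hypothetical choices for demonstration only.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Train a black-box model (no access to its internals is needed below).
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature column and measure the drop in held-out accuracy:
# the larger the drop, the more the model relies on that feature.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=5, random_state=0)
top = result.importances_mean.argsort()[::-1][:3]
for i in top:
    print(f"feature {i}: mean importance {result.importances_mean[i]:.3f}")
```

Because the method only queries the model's predictions, it applies to any classifier regardless of architecture, which is what makes it a black-box (rather than white-box) explanation technique.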
2.2 Format
- Hybrid: lectures will be given in person at Rice Hall 340 and also streamed and recorded on Zoom. Students can find the Zoom link on Collab.
- From Week 4 onward: one lecture + one discussion per week
2.3 Prerequisites
- Machine Learning: Students are expected to have a machine learning background, for example, from taking one of our machine learning classes (CS 4774 or CS 6316).
- Programming: Students are also expected to have the programming and software engineering skills to work with machine learning packages in Python (e.g., scikit-learn, PyTorch, TensorFlow).
- Calculus and Linear Algebra: Multivariable derivatives, matrix/vector notations and operations; singular value decomposition, etc.
- Probability and Statistics: Mean and variance, multinomial distribution, conditional dependence, maximum likelihood estimation, Bayes' theorem, etc.
2.4 Textbook/Materials
- Textbook
- Christoph Molnar, Interpretable Machine Learning, 2021
- Kush R. Varshney, Trustworthy Machine Learning, 2021
- Selected readings (full reading list)
- Du, Mengnan, Ninghao Liu, and Xia Hu. Techniques for interpretable machine learning. Communications of the ACM 63.1 (2019): 68-77.
- Murdoch, W. James, et al. Definitions, methods, and applications in interpretable machine learning. Proceedings of the National Academy of Sciences 116.44 (2019): 22071-22080.
- Lipton, Zachary C. The Mythos of Model Interpretability: In machine learning, the concept of interpretability is both important and slippery. Queue 16.3 (2018): 31-57.
3. Assignments and Evaluation Schemes
- Application-oriented (for undergraduates)
- 3 programming assignments (45%)
- 1 paper presentation (15%)
- 10 paper summaries (10%)
- Final project (20%)
- In-class discussion (7%) + attendance (3%)
- Research-oriented (for graduates)
- 2 programming assignments (30%)
- 2 paper presentations (30%)
- 10 paper summaries (10%)
- Final project (20%)
- In-class discussion (7%) + attendance (3%)
- Rubrics
4. Additional Information
Acknowledgement
Hanjie Chen is supported by the UVa Engineering Graduate Teaching Internship Program (GTI) for designing and teaching this course.