INDIAN INSTITUTE OF SCIENCE EDUCATION AND RESEARCH (IISER) PUNE
where tomorrow’s science begins today
An Autonomous Institution, Ministry of Education, Govt. of India
Events:

Workshop on Machine Learning Models  Mar 12, 2020

This event is postponed until further notice as a pre-emptive measure in view of the current COVID-19 (coronavirus) outbreak.

Data Science at IISER Pune is organising a workshop on:

Opening the Black Box: How to Interpret Machine Learning Models; Techniques, Tools, and Takeaways

Date: March 12, 2020

Time: 10:00 am to 1:00 pm

Venue: New Lecture Hall, Smt. Indrani Balan Centre, IISER Pune

Resource person: Dr Farhat Habib, Director, Data Science, TruFactor

About: Interpretability of a model is the degree to which a human can consistently predict the model’s result. The higher the interpretability of a machine learning model, the easier it is to comprehend why certain decisions or predictions have been made. While interpretability matters little in low-risk domains, where black-box models abound, it is a strong requirement in domains such as medicine and finance, and in high-risk domains such as self-driving cars and weapons systems. As privacy-preserving legislation such as the GDPR becomes the norm across the globe, interpretability is also important for explaining how particular recommendations or decisions were made. The more a model’s decision affects a person’s life, the more important it is for that decision to be interpretable. Moreover, the training data fed to a model may contain biases, inconsistencies, and other artifacts, and interpretability serves as a useful debugging tool for detecting such bias in machine learning models.

The workshop will explain various interpretation methods in depth: how they work under the hood, what their strengths and weaknesses are, and how their outputs can be read. We will start with models that are easily interpretable, such as linear regression and decision trees, and then move on to model-agnostic interpretation methods such as feature importance, local surrogates (LIME), and Shapley values (SHAP). In the traditionally inscrutable domain of deep learning, we will look at gradient-based and attention-based methods for interpreting deep neural networks.
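To make the model-agnostic part of this description concrete, here is a minimal sketch of permutation feature importance using scikit-learn; the dataset, model, and parameter choices are illustrative placeholders, not material from the workshop itself.

```python
# A minimal sketch of model-agnostic feature importance with scikit-learn's
# permutation_importance; the dataset, model, and settings below are
# illustrative placeholders, not material from the workshop.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A black-box model: each individual tree is readable, but the ensemble is not.
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# Permutation importance shuffles one feature at a time on held-out data and
# records how much the score drops; a large drop means the model relies on it.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)

for idx in result.importances_mean.argsort()[::-1][:5]:
    print(f"{X.columns[idx]:>25}  {result.importances_mean[idx]:.4f} "
          f"+/- {result.importances_std[idx]:.4f}")
```

The printed output is simply the five features whose shuffling hurts the held-out score the most, which is one quick way to see what an otherwise opaque model is relying on.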

Broad Outline:

  1. Importance of interpretability
  2. Uninterpretable model failures
  3. Evaluation of interpretability
  4. Human-friendly explanations
  5. Interpretable models
    1. Linear and Logistic regression
    2. GLM and GAM
    3. Decision trees
  6. Model Agnostic Methods
    1. Partial Dependence Plots
    2. Feature Interaction
    3. Feature Importance
    4. Global and local surrogate (LIME)
    5. Shapley Values (a brief sketch follows this outline)
  7. Interpretability of Deep Learning Models
    1. Gradient based methods
    2. Attention based methods
  8. Counterfactual explanations
  9. Adversarial examples
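
To give a flavour of the Shapley values item above, here is a minimal sketch that assumes the third-party `shap` package alongside scikit-learn; the dataset and model are illustrative placeholders only.

```python
# A minimal sketch of Shapley-value attributions, assuming the third-party
# `shap` package is available; the dataset and model here are placeholders.
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
model = GradientBoostingClassifier(random_state=0).fit(X, y)

# TreeExplainer computes Shapley values efficiently for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)   # one attribution per sample and feature

# Attribution for the first sample: each feature's contribution, which together
# with the explainer's base value adds up to the model's raw (margin) output.
top = sorted(zip(X.columns, shap_values[0]), key=lambda t: -abs(t[1]))[:5]
for name, value in top:
    print(f"{name:>25}  {value:+.4f}")
```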

Registration link
