A proposal for a 180-minute hands-on tutorial at ACM FAT* 2020, Barcelona, Spain.
All tutorial code and materials are available here: https://github.com/h2oai/xai_guidelines. All materials may be re-used and re-purposed, even for commercial applications, with proper attribution of the authors.
For the tutorial outline, please see: responsible_xai.pdf.
- Navigate to https://aquarium.h2o.ai.
- Click `Create a new account` below the login. Follow the Aquarium instructions to create a new account.
- Check the registered email inbox and use the temporary password sent there to login to Aquarium.
- Click `Browse Labs` in the upper left.
- Find `Open Source MLI Workshop` and click `View Details`.
- Click `Start Lab` and wait for several minutes as a cloud server is provisioned for you.
- Once your server is ready, click on the `Jupyter URL` at the bottom of your screen.
- Enter the token `h2o` in the Jupyter security `Password or Token` text box at the top.
- Click the `xai_guidelines` folder. (For those interested, the `patrick_hall_mli` folder contains resources from a 2018 FAT* tutorial.)
- You now have access to the tutorial materials. You may browse them at your own pace or wait for instructions. You may also come back to them at any time using your Aquarium login.
- Guideline 2.1: An explainable, but untrustworthy, model
- Guideline 2.3: Augmenting surrogate models with direct explanations
- Corollary 2.3.1: Augmenting LIME with direct explanations
- Corollary 2.4.1: Combining interpretable models and explanations
- Corollaries 2.4.2 - 2.4.3: Combining constrained models, explanations, and bias testing
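As background for the surrogate-model guidelines listed above, here is a minimal sketch of the basic technique: fitting an interpretable decision tree to a complex model's predictions so the tree can serve as an approximate, global explanation. The data, models, and parameters below are illustrative placeholders, not the tutorial's own code.

```python
# Hypothetical surrogate-model sketch (synthetic data, arbitrary parameters).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import accuracy_score
from sklearn.tree import DecisionTreeClassifier

# Synthetic training data standing in for a real dataset.
X, y = make_classification(n_samples=500, n_features=5, random_state=0)

# A complex, hard-to-interpret model.
complex_model = GradientBoostingClassifier(random_state=0).fit(X, y)
yhat = complex_model.predict(X)

# An interpretable surrogate trained on the complex model's *predictions*,
# not on the original labels.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, yhat)

# Fidelity: how closely the surrogate mimics the complex model. Low fidelity
# means the surrogate's explanations should not be trusted.
fidelity = accuracy_score(yhat, surrogate.predict(X))
print(f"surrogate fidelity: {fidelity:.2f}")
```

Checking fidelity before reading explanations off the surrogate is one concrete instance of the guideline that surrogate models should be augmented, and cross-checked, with direct explanations.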
Preliminary tutorial slides: Guidelines for Responsible Explainable ML
Patrick Hall: Patrick Hall is senior director for data science products at H2O.ai, where he focuses on increasing trust and understanding in machine learning through interpretable models, post-hoc explanations, model debugging, and bias testing and remediation. Patrick is also currently an adjunct professor in the Department of Decision Sciences at George Washington University, where he teaches graduate classes in data mining and machine learning. Prior to joining H2O.ai, Patrick held global customer-facing and research and development roles at SAS Institute. Find out more about Patrick on GitHub, LinkedIn, or Twitter.
Navdeep Gill: Navdeep Gill is a senior data scientist and engineer at H2O.ai. Navdeep is a founding member of the interpretability team at H2O.ai and has worked on various other projects there, including the open source h2o, automl, and h2o4gpu machine learning libraries. Before joining H2O.ai, Navdeep worked at Cisco, focusing on data science and software development, and before that he conducted research in neuroscience. Find out more about Navdeep on GitHub, LinkedIn, or Twitter.
Nick Schmidt: Nick Schmidt is the director of the AI Practice at BLDS, a leading fair-lending advisory firm. At BLDS, Nick concentrates on creating real-world ethical AI systems for some of the largest financial institutions in the world. Prior to BLDS, Nick worked as an analyst and consultant at several well-respected economic and financial firms. Find out more about Nick on LinkedIn.