The last decade has seen rapid strides in Artificial Intelligence (AI), which has moved from fantasy to a reality embedded in technologies that are part of our everyday lives. A catalyst for this rapid uptake has been the enormous success of deep learning methods on problems across domains including computer vision, natural language processing, and speech understanding. However, as AI makes its way into risk-sensitive and safety-critical applications such as healthcare, autonomous navigation, and finance, it is essential for AI models not only to make predictions but also to explain them. This tutorial will introduce the audience to this increasingly important area of explainable AI and describe the most popular methods for explaining deep neural network models.
Vineeth N Balasubramanian
Associate Professor, IIT-H