Conference registration is open. Student travel grant applications are invited.




Tutorials



Robust Query Processing: Mission Possible


Speaker:
Jayant Haritsa, Sr. Professor, Indian Institute of Science (IISc)


Abstract
Robust query processing with strong performance guarantees is an extremely desirable objective in the design of industrial-strength database engines. However, it has proved to be a largely intractable and elusive challenge in spite of sustained efforts spanning several decades. The good news is that in recent times, there have been a host of exciting technical advances, at different levels in the database architecture, that collectively promise to materially address this problem. In this tutorial, we will present these novel research approaches, characterize their strengths and limitations, and enumerate open technical problems that remain to be solved to make robust query processing a contemporary reality.






Causal Inference and Counterfactual Reasoning


Speakers:
Amit Sharma, Senior Researcher, Microsoft Research India
Emre Kıcıman, Senior Principal Researcher, Microsoft Research AI


Abstract
As computing systems are more frequently and more actively intervening to improve people’s work and daily lives, it is critical to correctly predict and understand the causal effects of these interventions. Conventional machine learning methods, built on pattern recognition and correlational analyses, are insufficient for causal analysis. This tutorial will introduce participants to concepts in causal inference and counterfactual reasoning, drawing from a broad literature on the topic from statistics, social sciences and machine learning. We will first motivate the use of causal inference through examples in domains such as recommender systems, social media datasets, health, education and governance. To tackle such questions, we will introduce the key ingredient that causal analysis depends on — counterfactual reasoning — and describe the two most popular frameworks based on Bayesian graphical models and potential outcomes. Based on this, we will cover a range of methods suitable for doing causal inference with large-scale online data, including randomized experiments, observational methods like matching and stratification, and natural experiment-based methods such as instrumental variables and regression discontinuity. We will also focus on best practices for evaluation and validation of causal inference techniques, drawing from our own experiences. We will show application of these techniques using DoWhy, a Python library for causal inference. Throughout, the emphasis will be on considerations of working with large-scale data, such as logs of user interactions or social data.
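The stratification method mentioned above can be illustrated with a small plain-Python toy (a hand-rolled sketch of backdoor adjustment for a single discrete confounder, not the tutorial's material and not DoWhy itself): a binary confounder makes the naive treated-vs-control comparison biased, while comparing within strata of the confounder and averaging recovers the true effect.

```python
import random
random.seed(0)

# Simulate data where a confounder z raises both treatment probability and outcome.
# The true treatment effect is +1.0; z adds +2.0 to the outcome directly.
data = []
for _ in range(20000):
    z = random.random() < 0.5                    # binary confounder
    t = random.random() < (0.8 if z else 0.2)    # z makes treatment more likely
    y = 1.0 * t + 2.0 * z + random.gauss(0, 0.1)
    data.append((z, t, y))

def mean(xs):
    return sum(xs) / len(xs)

# Naive estimate: ignores z, so it is biased upward (roughly 2.2 here).
naive = (mean([y for z, t, y in data if t])
         - mean([y for z, t, y in data if not t]))

# Stratified estimate: compare treated vs control within each stratum of z,
# then average strata weighted by their size (backdoor adjustment).
strata = {}
for z, t, y in data:
    strata.setdefault(z, []).append((t, y))
adjusted = 0.0
for z, rows in strata.items():
    treated = [y for t, y in rows if t]
    control = [y for t, y in rows if not t]
    adjusted += (len(rows) / len(data)) * (mean(treated) - mean(control))
```

After stratification, `adjusted` is close to the true effect of 1.0, while `naive` absorbs the confounder's contribution.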






Graph-based Deep Learning in Natural Language Processing


Speakers:
Shikhar Vashishth, PhD student, Indian Institute of Science (IISc)
Naganand Y, Google PhD Fellow, Indian Institute of Science (IISc)
Partha Talukdar, Assistant Professor, Indian Institute of Science (IISc)


Abstract
This tutorial aims to introduce recent advances in graph-based deep learning techniques such as Graph Convolutional Networks (GCNs) for Natural Language Processing (NLP). It provides a brief introduction to deep learning methods on non-Euclidean domains such as graphs and justifies their relevance in NLP. It then covers recent advances in applying graph-based deep learning methods to various NLP tasks, such as semantic role labeling, machine translation, relation extraction, and many more.
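As a rough illustration of the GCN building block the tutorial covers, here is a single graph convolutional layer in plain Python (a toy sketch of the standard propagation rule H' = ReLU(D^-1/2 (A+I) D^-1/2 · H · W), written for this page rather than taken from the tutorial):

```python
import math

def matmul(A, B):
    # Naive dense matrix multiply over nested lists.
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def gcn_layer(adj, H, W):
    """One GCN layer: ReLU of symmetrically normalized adjacency times H times W."""
    n = len(adj)
    # Add self-loops so each node keeps its own features.
    A = [[adj[i][j] + (1 if i == j else 0) for j in range(n)] for i in range(n)]
    deg = [sum(row) for row in A]
    # Symmetric normalization: A_norm[i][j] = A[i][j] / sqrt(deg[i] * deg[j]).
    A_norm = [[A[i][j] / math.sqrt(deg[i] * deg[j]) for j in range(n)]
              for i in range(n)]
    Z = matmul(matmul(A_norm, H), W)
    return [[max(0.0, v) for v in row] for row in Z]
```

For NLP tasks the graph would typically be a dependency parse or co-occurrence graph over tokens, with `H` holding word embeddings; real systems stack several such layers with learned weights.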






Software Testing & Quality Assurance for Machine Learning Applications: from research bench to real world


Speakers:
Sandya Mannarswamy, Conduent Labs India
Shourya Roy, American Express Big Data Labs
Saravanan Chidambaram, Hewlett Packard Enterprise India


Abstract
Rapid progress in Machine Learning (ML) has seen a swift translation to real-world commercial deployment. While research and development of ML applications have progressed at an exponential pace, the required software engineering process for ML applications, and the corresponding ecosystem of testing and quality assurance tools that make software reliable, trustworthy, safe, and easy to deploy, have sadly lagged behind. Specifically, the challenges and gaps in quality assurance (QA) and testing of AI applications have largely remained unaddressed, contributing to a poor translation rate of ML applications from research to the real world. Unlike traditional software, which has a well-defined software testing methodology, ML applications have largely taken an ad-hoc approach to testing. ML researchers and practitioners either fall back on traditional software testing approaches, which are inadequate for this domain due to its inherently probabilistic and data-dependent nature, or rely largely on non-rigorous, self-defined QA methodologies. These issues have driven the ML and Software Engineering research communities to develop newer tools and techniques designed specifically for ML. These research advances need to be publicized and put into practice in real-world ML development and deployment to enable the successful translation of ML from research prototypes to the real world. This tutorial intends to address this need. This tutorial aims to:
1) Provide a comprehensive overview of testing of ML applications
2) Provide practical insights and share community best practices for testing ML software (Besides scientific literature, we derive our insights from our conversations with industry experts in ML).
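One family of techniques developed specifically for testing ML software is metamorphic testing: instead of checking exact expected outputs (which a probabilistic model rarely has), it checks relations that should hold between outputs under controlled input transformations. A minimal sketch using a toy 1-nearest-neighbour classifier of our own (illustrative only, not code from the tutorial):

```python
def predict_1nn(train, test_x):
    """1-nearest-neighbour classifier: return the label of the closest training point."""
    def dist2(a, b):
        return sum((u - v) ** 2 for u, v in zip(a, b))
    return min(train, key=lambda xy: dist2(xy[0], test_x))[1]

# Metamorphic relation: uniformly scaling every feature (in both training and
# test data) must not change the predicted label, because relative Euclidean
# distances are preserved. A violation would signal a bug in the pipeline.
train = [((0.0, 0.0), "a"), ((1.0, 1.0), "b"), ((0.2, 0.1), "a")]
x = (0.15, 0.05)
scaled_train = [(tuple(3.0 * f for f in feats), y) for feats, y in train]
scaled_x = tuple(3.0 * f for f in x)
assert predict_1nn(train, x) == predict_1nn(scaled_train, scaled_x)
```

The value of such relations is that they can be checked automatically on unlabeled production data, sidestepping the test-oracle problem that makes traditional expected-output testing inadequate for ML.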






Fairness in Algorithmic Decision Making


Speakers:
Abhijnan Chakraborty, Post Doctoral Researcher, Max Planck Institute for Software Systems (MPI-SWS)
Krishna P. Gummadi, Faculty & Scientific Director, Max Planck Institute for Software Systems (MPI-SWS)


Abstract
Algorithmic (data-driven) decision making is increasingly being used to assist or replace human decision making in domains with high societal impact, such as banking (estimating creditworthiness), recruiting (ranking applicants), judiciary (offender profiling) and journalism (recommending news-stories). Consequently, in recent times, multiple research works have attempted to identify (measure) bias or unfairness in algorithmic decisions and propose mechanisms to control (mitigate) such biases. In this tutorial, we introduce the related literature to the CoDS-COMAD community. Moreover, going beyond the more prevalent works on fairness in classification or regression tasks, we explore fairness issues in other decision making scenarios, where the decision needs to account for preferences of multiple stakeholders. Specifically, in this tutorial, we cover our own past and ongoing research works on fairness in recommendation and matching systems. We discuss the notions of fairness in these contexts and propose techniques to achieve them. Additionally, we briefly touch upon the possibility of utilizing the user interface of platforms (choice architecture) to achieve fair outcomes in certain scenarios. We conclude the tutorial with a list of open questions and directions for future work.
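As a small illustration of what "measuring bias" can mean concretely, one widely used fairness notion is demographic parity: the rate of positive decisions should be equal across groups. A toy sketch (the function name and data are our own, not from the tutorial):

```python
def demographic_parity_gap(decisions):
    """Absolute difference in positive-decision rates between two groups.

    `decisions` is a list of (group, decision) pairs with binary decisions;
    a gap of 0 means the decisions satisfy demographic parity exactly.
    """
    by_group = {}
    for group, d in decisions:
        by_group.setdefault(group, []).append(d)
    rates = [sum(ds) / len(ds) for ds in by_group.values()]
    return abs(rates[0] - rates[1])

# Example: group "m" is approved 2/3 of the time, group "f" only 1/3,
# so the demographic parity gap is 1/3.
audit = [("m", 1), ("m", 1), ("m", 0), ("f", 1), ("f", 0), ("f", 0)]
gap = demographic_parity_gap(audit)
```

Other notions covered in the fairness literature (equalized odds, calibration, individual fairness) measure different quantities and can conflict with one another, which is part of what makes the design space interesting.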