Conference recordings for CODS-COMAD 2021 are available.


Trustworthy AI

Speakers :
Richa Singh, IIT Jodhpur
Mayank Vatsa, IIT Jodhpur
Nalini Ratha, SUNY Buffalo

Abstract AI is playing a significant and growing role in every walk of life. AI systems are being employed for mundane day-to-day decisions such as healthy food choices and dress recommendations, as well as for mission-critical and life-changing decisions such as diagnosis of diseases, detection of financial fraud, and selection of new employees. Emerging applications such as autonomous driving, automated financial loan approval, and cancer treatment recommendation have left many worrying about the level of trust associated with AI today. Such concerns are genuine: many weaker sides of rapidly evolving modern AI systems have been exposed through adversarial attacks, bias, and lack of explainability. While these systems reap the advantages of novel learning methods, they are brittle to minor changes in the input data and lack the capability to explain their decisions to a human. Furthermore, they are unable to address the bias in their training data, as demonstrated by non-uniform performance across different groups, and are often highly opaque about the lineage of the system: how it was trained and tested, and under which parameters and conditions it can reliably guarantee a certain level of performance. Present AI systems have not demonstrated the ability to learn without compromising the privacy and security of data, nor can they assign appropriate credit to the data sources. This tutorial on “Trustworthy AI” addresses five critical issues in enhancing user and public trust in AI systems, namely: (i) bias and fairness, (ii) explainability, (iii) robust mitigation of adversarial attacks, (iv) improved privacy and security in model building, and (v) model attribution, including the right level of credit assignment to data sources, model architectures, and transparency in lineage.

Exploring State-of-the-Art Nearest Neighbor (NN) Search Techniques

Speakers :
Parth Nagarkar, New Mexico State University
Arnab Bhattacharya, IIT Kanpur
Omid Jafari, New Mexico State University

Abstract Finding nearest neighbors (NN) is a fundamental operation in many diverse domains, such as machine learning, information retrieval, and multimedia retrieval. Given the data deluge and the many applications of nearest neighbor queries where fast performance is necessary, efficient index structures are required to speed up the search. Different application domains have different data characteristics, which in turn require different types of indexing techniques. While the internal searches are often hidden from the top-level application, it is beneficial for a data scientist to understand these fundamental operations and choose the correct indexing technique to improve the performance of the overall end-to-end workflow. Choosing the correct index structure for a nearest neighbor query can be a daunting task: a wrong choice can lead to low accuracy, slow execution times, or, in the worst case, both. The objective of this tutorial is to equip the audience with the knowledge to choose the correct index structure for their specific application. We present state-of-the-art nearest neighbor indexing techniques for different data characteristics, and show the effect, in terms of time and accuracy, of choosing the wrong index structure for different application needs. We conclude the tutorial with a discussion of future challenges in the nearest neighbor search domain.
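To make the baseline concrete, here is a minimal brute-force k-NN sketch in Python (the points and query are illustrative, not from the tutorial). Its O(n)-per-query linear scan is precisely the cost that the index structures surveyed in the tutorial, such as tree-based, hashing-based, and graph-based indexes, aim to avoid.

```python
import heapq
import math

def euclidean(p, q):
    """Euclidean distance between two equal-length points."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def knn_brute_force(data, query, k):
    """Return the k nearest neighbors of `query` by scanning every point.

    This linear scan per query is the cost an index structure amortizes
    away, usually by trading a little accuracy (approximate NN) for speed.
    """
    return heapq.nsmallest(k, data, key=lambda p: euclidean(p, query))

points = [(0.0, 0.0), (1.0, 1.0), (2.0, 2.0), (5.0, 5.0)]
print(knn_brute_force(points, (0.9, 1.1), 2))
```

For a handful of points the scan is instant; the tutorial's subject is what to do when `data` has millions of high-dimensional vectors and this loop becomes the bottleneck.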

Opening the NLP Blackbox - Analysis and Evaluation of NLP Models: Methods, Challenges and Opportunities

Speakers :
Sandya Mannarswamy, Independent Researcher
Saravanan Chidambaram, Independent Researcher

Abstract Rapid progress in NLP research has seen swift translation to real-world commercial deployment. While a number of NLP application success stories have emerged, failures to translate scientific progress in NLP into real-world software have also been considerable. Evaluation of NLP models is often limited to held-out test-set accuracy on a handful of datasets, and analysis of NLP models is often limited to ablation studies. Lack of rigorous evaluation leads to over-estimation of a model's generalization performance, while lack of understanding of a model's inner workings results in ‘Clever Hans’ models that fail in real-world deployments. This tutorial aims to address this gap by providing a detailed overview of NLP model analysis and evaluation methods, discussing their strengths and weaknesses, and pointing towards future research directions in this area.
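One family of evaluation methods that goes beyond held-out accuracy is behavioral testing, e.g. invariance tests, where a label-preserving perturbation should not flip the prediction. The sketch below is a toy illustration of the idea, not a method from the tutorial: the "model" is a hypothetical keyword counter standing in for a real classifier.

```python
# Hypothetical toy sentiment "model": keyword counting stands in for a
# trained classifier; only the testing pattern matters here.
POSITIVE = {"good", "great", "excellent"}
NEGATIVE = {"bad", "awful", "terrible"}

def toy_sentiment(text):
    tokens = text.lower().split()
    score = sum(t in POSITIVE for t in tokens) - sum(t in NEGATIVE for t in tokens)
    return "pos" if score >= 0 else "neg"

def invariance_test(model, pairs):
    """Behavioral invariance test: each pair differs only by a
    label-preserving edit (e.g., a swapped name), so the prediction
    should match. Returns the pairs where it does not."""
    return [(a, b) for a, b in pairs if model(a) != model(b)]

pairs = [
    ("Alice had a great flight", "Bob had a great flight"),
    ("The service was awful", "The staff was awful"),
]
print(invariance_test(toy_sentiment, pairs))  # [] means the model passes
```

A real test suite would also include directional tests (an edit that should move the prediction a known way) and minimum-functionality tests on simple templated inputs.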

Coresets in Machine Learning

Speaker :
Anirban Dasgupta, IIT Gandhinagar

Abstract In the face of the data onslaught, smart algorithms have a big role to play. Over the last couple of decades, coresets, small and efficiently computable summaries of the data, have grown in popularity in both theoretical and practical settings. They enable approximating large optimization problems while needing only a fraction of the resources. In this tutorial, we will do an ab-initio survey of the recent coreset literature, aiming to highlight a few design principles as well as some of the interesting use cases in machine learning, e.g. in numerical linear and multilinear algebra and in supervised and unsupervised setups, ranging from Bayesian inference to deep learning. The tutorial is geared towards students and early researchers in the field; the only background needed is basic courses in machine learning, linear algebra, and probability.
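The idea of a coreset as a weighted summary that approximates an optimization cost can be sketched in a few lines. Below, a hypothetical example (not from the tutorial) uses uniform sampling with weights n/m so that the weighted coreset cost is an unbiased estimate of the full 1-means cost; practical coresets use importance (sensitivity) sampling to get worst-case guarantees.

```python
import random

def cost(weighted_points, center):
    """Weighted sum of squared distances to `center`.
    `weighted_points` is a list of (point, weight) pairs."""
    return sum(w * sum((x - c) ** 2 for x, c in zip(p, center))
               for p, w in weighted_points)

def uniform_coreset(points, m, seed=0):
    """Sample m points uniformly at random, each weighted n/m, so the
    weighted coreset cost estimates the full cost. (A sketch only:
    real coresets bias sampling towards high-sensitivity points.)"""
    rng = random.Random(seed)
    n = len(points)
    return [(p, n / m) for p in rng.sample(points, m)]

rng = random.Random(42)
data = [(rng.uniform(0, 10), rng.uniform(0, 10)) for _ in range(1000)]
full = cost([(p, 1.0) for p in data], (5.0, 5.0))
core = cost(uniform_coreset(data, 100), (5.0, 5.0))
print(full, core)  # the coreset cost should be close to the full cost
```

The point of the construction is that any candidate center can now be evaluated on 100 weighted points instead of 1000, with the approximation quality controlled by the sample size.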

Explainable AI using Knowledge Graphs

Speakers :
Amit Sheth, University of South Carolina
Manas Gaur, University of South Carolina
Keyur Faldu, Embibe, Bangalore
Ankit Desai, Embibe, Bangalore

Abstract: During the last decade, traditional data-driven deep learning (DL) has shown remarkable success in essential natural language processing tasks, such as relation extraction. Yet challenges remain in developing artificial intelligence (AI) methods for real-world cases that require explainability through human-interpretable and traceable outcomes. The scarcity of labeled data for downstream supervised tasks, and the entangled embeddings produced by self-supervised pre-training objectives, also hinder interpretability and explainability. Additionally, data labeling in many unstructured domains, particularly healthcare and education, is expensive as it requires a pool of human expertise. Consider education technology, where AI systems fall along a “capability spectrum” depending on how extensively they exploit various resources, such as academic content, granularity of student engagement, and knowledge bases, to identify concepts that would help achieve knowledge mastery and to nudge behavioral attributes that improve student performance. Likewise, the task of assessing human health from online conversations challenges current statistical DL methods with evolving cultural and context-specific discussions. Hence, strategies are needed that merge AI with stratified knowledge to identify concepts that delineate patterns in healthcare conversations and help healthcare professionals decide. Such technological innovations are imperative, as they provide consistency and explainability in outcomes. This tutorial discusses the notion of explainability and interpretability through the use of knowledge graphs in (1) healthcare on the Web and (2) education technology. It will provide details of knowledge-infused learning algorithms and their contribution to explainability for these two applications, which can be applied to any other domain using knowledge graphs.
Background: Semantics of the Black-Box: Can knowledge graphs help make deep learning systems more interpretable and explainable?
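The core mechanism, grounding a model's output in traceable graph structure, can be illustrated with a toy triple store. The triples and relation names below are entirely hypothetical and do not come from the tutorial; the sketch only shows how a relation path can serve as a human-readable explanation.

```python
# Minimal, hypothetical knowledge-graph sketch: a model that links two
# concepts can surface the graph path connecting them as its explanation.
from collections import defaultdict

TRIPLES = [
    ("insomnia", "symptom_of", "depression"),
    ("depression", "treated_by", "cbt"),
    ("cbt", "is_a", "psychotherapy"),
]

def build_index(triples):
    index = defaultdict(list)
    for s, r, o in triples:
        index[s].append((r, o))
    return index

def explain(index, start, goal, path=()):
    """Depth-first search returning every relation path from start to
    goal. Each path is a traceable chain a prediction can point to."""
    if start == goal:
        return [path]
    paths = []
    for r, o in index.get(start, []):
        paths.extend(explain(index, o, goal, path + ((start, r, o),)))
    return paths

index = build_index(TRIPLES)
print(explain(index, "insomnia", "cbt"))
```

The returned path ("insomnia" is a symptom of "depression", which is treated by "cbt") is the kind of traceable outcome that a purely statistical model's entangled embeddings cannot provide on their own.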

Big Data Analysis Using Multilayer Networks (MLNs)

Speakers :
Sharma Chakravarthy, University of Texas, Arlington
Abhishek Santra, University of Texas, Arlington

Abstract: In this tutorial, we argue that graph analysis techniques are extremely important and are receiving renewed attention as data sets become more complex and grow in size. There is also renewed interest in answering graph queries both exactly and approximately, owing to the presence of large graph-based knowledge bases such as Freebase and very large entity-based graphs. Although aggregate analysis techniques (such as communities, hubs, and subgraphs) exist for single, simple graphs, extending them to attributed graphs is not easy. As an alternative, Multilayer Networks (MLNs) are being explored and new analysis approaches are being developed. We start with single-graph analysis, establish its limitations for the analysis of complex data sets, and introduce MLNs. We will discuss the notions of communities and hubs and their relevance to aggregate analysis, then extend these notions to multilayer networks, covering the challenges as well as current approaches and solutions. We will apply the techniques discussed to several case studies to appreciate their need, utility, applicability, efficiency, and scalability.
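A multilayer network can be sketched as one adjacency structure per layer, with aggregate analyses (here, degree-based hubs) computed per layer. The layer names and edges below are illustrative placeholders, not the tutorial's case studies.

```python
# Minimal multilayer-network (MLN) sketch: each layer is its own
# undirected graph over the same node set, e.g. one layer per
# relationship type (coauthorship, citation, ...).
from collections import defaultdict

def add_edge(layers, layer, u, v):
    """Add an undirected edge u--v to the given layer."""
    layers[layer][u].add(v)
    layers[layer][v].add(u)

def hubs(layers, layer, top=1):
    """Return the `top` highest-degree nodes in one layer; the same
    node may or may not be a hub in other layers."""
    adj = layers[layer]
    return sorted(adj, key=lambda n: len(adj[n]), reverse=True)[:top]

layers = defaultdict(lambda: defaultdict(set))
for u, v in [("a", "b"), ("a", "c"), ("a", "d"), ("b", "c")]:
    add_edge(layers, "coauthor", u, v)
for u, v in [("b", "c"), ("c", "d")]:
    add_edge(layers, "citation", u, v)

print(hubs(layers, "coauthor"), hubs(layers, "citation"))
```

The point the representation makes concrete is that node "a" is a hub in one layer but absent from another, which is exactly the kind of cross-layer structure that single-graph analysis flattens away.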