Abstract: Research in AI suffers from a longstanding ambivalence toward humans, swinging as
it does between their replacement and their augmentation. Now, as AI technologies enter our
everyday lives at an ever-increasing pace, there is a greater need for AI systems to work
synergistically with humans. To do this effectively, AI systems must pay more attention to
the aspects of intelligence that help humans work with each other, including emotional and
social intelligence.
I will discuss the research challenges in designing such human-aware AI systems, including
modeling the mental states of humans in the loop, recognizing their desires and intentions,
providing proactive support, exhibiting explicable behavior, giving cogent explanations on
demand, and engendering trust. I will survey the progress made so far on these challenges,
and highlight some promising directions. I will also touch on the additional ethical
quandaries that such systems pose.
I will end by arguing that the quest for human-aware AI systems broadens the scope of the AI
enterprise, necessitates and facilitates truly interdisciplinary collaborations, and can go a
long way toward increasing public acceptance of AI technologies.
Bio: Subbarao Kambhampati (Rao) is a professor of Computer Science at Arizona State
University. He received his B.Tech. in Electrical Engineering (Electronics) from the Indian
Institute of Technology, Madras (1983), and his M.S. (1985) and Ph.D. (1989) in Computer
Science from the University of Maryland, College Park. Kambhampati studies
fundamental problems in planning and decision making, motivated in particular by the
challenges of human-aware AI systems. Kambhampati is a fellow of AAAI and AAAS, and
was an NSF Young Investigator. He has received multiple teaching awards, including a
university last-lecture recognition. Kambhampati served as the President of AAAI and as a
trustee of IJCAI. He was the program chair for IJCAI 2016, ICAPS 2013, AAAI 2005 and
AIPS 2000. He serves on the board of directors of the Partnership on AI. Kambhampati’s
research as well as his views on the progress and societal impacts of AI have been
featured in multiple national and international media outlets. URL: rakaposhi.eas.asu.edu;
Twitter: @rao2z
Abstract: Artificial Intelligence systems’ ability to explain their conclusions is crucial to their utility and trustworthiness. Deep neural networks have enabled significant progress on many challenging problems such as visual question answering (VQA), the task of answering natural language questions about images. However, most of them are opaque black boxes with limited explanatory capability. The goal of Explainable AI (XAI) is to increase the transparency of complex AI systems such as deep networks. We have developed a novel approach to XAI and used it to build a high-performing VQA system that can elucidate its answers with integrated textual and visual explanations that faithfully reflect important aspects of its underlying reasoning while capturing the style of comprehensible human explanations. Crowd-sourced human evaluation of these explanations demonstrates the advantages of our approach.
Bio: Raymond J. Mooney is a Professor in the Department of Computer Science at the University of Texas at Austin. He received his Ph.D. in 1988 from the University of Illinois at Urbana-Champaign.
He is the author of over 170 published research papers, primarily in the areas of machine learning and natural language processing. He was the President of the International Machine Learning Society from 2008 to 2011, program co-chair for AAAI 2006, general chair for HLT-EMNLP 2005, and co-chair for ICML 1990. He is a Fellow of the American Association for Artificial Intelligence, the Association for Computing Machinery, and the Association for Computational Linguistics, and the recipient of best paper awards from AAAI-96, KDD-04, ICML-05 and ACL-07.
Abstract: For a variety of reasons, including privacy, scalability, bandwidth restrictions and robustness, data is often aggregated or obfuscated in various ways before being released to the public. Is it possible to learn predictive models from aggregated data that come close to the predictive performance or parameter recovery that would be possible if the full-resolution (non-aggregated) data were available? This is a challenging problem that requires significant algorithmic innovation, since simple approaches that impute the missing data and then learn a model can fail dramatically. In this talk I will present new approaches that are able to obtain reasonable results from aggregated data in certain scenarios.
Bio: Professor Joydeep Ghosh is currently the Schlumberger Centennial Chaired Professor at UT Austin. He has worked on a wide variety of data mining and machine learning problems, resulting in 400+ refereed publications (including 90+ full-length archival journal papers), several successful industrial projects and 16 best paper awards. His service to the community includes chairing top data mining conferences (KDD'11, SDM'12, SDM'13, etc.), giving keynote talks (ICHI'15, ICDM'13, MCS, ANNIE, etc.), and consulting with a wide range of companies, from startups to large corporations such as IBM. He is currently Chief Scientist of CognitiveScale, which was selected by the World Economic Forum in 2018 as one of the 100 emerging companies worldwide most likely to benefit humanity.
Abstract: Relational database systems, the workhorse of today's information industry,
have been extensively researched for over four decades, and a consensus
has emerged on the implementation of most of their components. The design
of the declarative query processing module, however, continues to be mired in
challenging technical problems. In this talk, we will present promising new
approaches to address these chronic difficulties.
Bio: Jayant Haritsa has been on the faculty of the Computer Science & Automation
department at the Indian Institute of Science, Bangalore, since 1993. He
received a BTech degree from IIT Madras, and the MS and PhD degrees from
the University of Wisconsin-Madison. He is a Fellow of ACM and IEEE.
Abstract: Historically, Artificial Intelligence has taken either a symbolic route, for representing and reasoning about objects at a higher level, or a statistical route, for learning complex models from large data. To achieve true AI, it is necessary to make these different paths meet and enable seamless human interaction. First, I will introduce methods for learning from rich, structured, complex and noisy data. One of the key attractive properties of the learned models is that they use a rich representation for modeling the domain that potentially allows for seamless human interaction. I will present recent progress that allows for more natural human interaction, where the human input is taken as “advice” and the learning algorithm combines this advice with data. Finally, I will discuss more recent work on “closing the loop”, where information is solicited from humans as needed, allowing for seamless interactions with the human expert. I will discuss these methods in the context of supervised learning, planning, reinforcement learning and inverse reinforcement learning.
Bio: Sriraam Natarajan is an Associate Professor in the Department of Computer Science at the University of Texas at Dallas. He previously held faculty positions at Indiana University and the Wake Forest School of Medicine, was a post-doctoral research associate at the University of Wisconsin-Madison, and received his PhD from Oregon State University. His research interests lie in the field of Artificial Intelligence, with emphasis on Machine Learning, Statistical Relational Learning and AI, Reinforcement Learning, Graphical Models and Biomedical Applications. He has received the Young Investigator award from the US Army Research Office, an Amazon Faculty Research Award, an Intel Faculty Award, a Xerox Faculty Award and the Indiana University Trustees Teaching Award. He is an editorial board member of the MLJ, JAIR and DAMI journals and is the electronic publishing editor of JAIR. He has organized key workshops in the field of Statistical Relational Learning, including the AAAI 2010, UAI 2012, AAAI 2013, AAAI 2014 and UAI 2015 workshops on Statistical Relational AI (StarAI), the ICML 2012 Workshop on Statistical Relational Learning, and the ECML PKDD 2011 and 2012 workshops on Collective Learning and Inference on Structured Data (Co-LISD). He was also the co-chair of the AAAI student abstracts and posters at AAAI 2014 and AAAI 2015 and the chair of the AAAI student outreach at AAAI 2016 and 2017.
Abstract: With the increasing number of data sources, cross-modal matching is becoming an
increasingly important area of research. It has several applications, such as matching text with
images, matching near-infrared images with visible images for night-time or low-light surveillance,
and matching sketches with photographs for forensic applications. This is an extremely challenging
task due to the significant differences between data from different modalities.
In this talk, I will discuss the different challenges of this problem and some of the approaches
we are working on to address them. I will also touch upon some related problems, such as
zero-shot learning and low-resolution face recognition.
Bio: Dr. Soma Biswas is an Assistant Professor in the Electrical Engineering department at IISc. She
received her PhD degree in Electrical and Computer Engineering from the University of Maryland,
College Park, in 2009. She then worked as a Research Assistant Professor at the University of Notre
Dame and as a Research Scientist at GE Research before joining IISc. Her research interests include
image processing, computer vision, and pattern recognition.
Abstract: Fashion designers and fashion houses usually start conceptualizing and designing products for a new season six months to one year before the actual selling season, though in recent times this lead time has been drastically reduced with the emergence of fast-fashion retailers. For most apparel retailers, and the fashion industry in general, knowing the trends customers would like to wear next season is therefore extremely important. This talk will describe how AI-based tools that can understand fashion images and articles can be used to provide a more data-driven approach to trend analysis and forecasting. I will also describe some of our recent collaborations with various fashion designers.
Bio: Vikas C. Raykar works as a researcher at IBM Research, India. An expert in machine learning, he is currently focused on building machines that can understand natural language and images on par with humans. He finished his doctoral studies in the computer science department at the University of Maryland, College Park. He is also defining a roadmap for what can be done for the fashion industry, primarily leveraging deep image and text understanding together with other AI capabilities.
Abstract: E-commerce websites such as Amazon, Alibaba, and Walmart typically process billions of orders every year. Semantic representation and understanding of these orders is extremely critical for an e-commerce company. Each order can be represented as a tuple of
Bio: Arijit Biswas is currently a Senior Machine Learning Scientist on the India machine learning team at Amazon, Bangalore. His research interests are mainly in deep learning, machine learning and computer vision. Earlier he was a research scientist at Xerox Research Centre India (XRCI) from June 2014 to July 2016. He received his PhD in Computer Science from the University of Maryland, College Park in April 2014. His PhD thesis was on Semi-supervised and Active Learning Methods for Image Clustering. His thesis advisor was David Jacobs and he closely collaborated with Devi Parikh and Peter Belhumeur during his stay at UMD. While doing his PhD, Arijit also did internships at Xerox PARC and the Toyota Technological Institute at Chicago (TTIC). He has published papers in CVPR, ECCV, ACM-MM, BMVC, IJCV and CVIU. Arijit has a Bachelor's degree in Electronics and Telecommunication Engineering from Jadavpur University, Kolkata. He is also a recipient of the MIT Technology Review Innovators Under 35 award from India in 2016.
Abstract: Code-mixing, or code-switching, is the linguistic phenomenon of mixing more than one language in a single conversation, sometimes even in a single sentence or utterance. Such fluid alternation between languages is frequently observed in multilingual societies across the world, primarily in casual speech, but these days also in online user-generated text. Processing of code-mixed text is challenging because of the absence of adequate datasets, compounded by the combinatorial explosion of language varieties and mixing patterns. In this talk, I will give a brief overview of these challenges and present a few techniques that can effectively address these hurdles. I will also touch upon some of our research on the social and pragmatic aspects of code-mixing.
Bio: Dr. Monojit Choudhury has been a researcher at the Microsoft Research Lab in India since 2007. His research spans many areas of Artificial Intelligence, cognitive science and linguistics. In particular, Dr. Choudhury has been working on technologies for low-resource languages, code-switching (the mixing of multiple languages in a single conversation), computational sociolinguistics and conversational AI. He has more than 100 publications in international conferences and refereed journals. Dr. Choudhury is an adjunct faculty member at the International Institute of Information Technology, Hyderabad. He also organizes the Panini Linguistics Olympiad for high school children in India, and is the founding chair of the Asia-Pacific Linguistics Olympiad. Dr. Choudhury holds a B.Tech and PhD degree in Computer Science and Engineering from IIT Kharagpur.