Keynotes

Prof. Thomas Dietterich

Oregon State University

What’s Wrong with Large Language Models and What We Should Be Building Instead

Large Language Models provide a pre-trained foundation for training many interesting AI systems. However, they have several shortcomings. They are expensive to train and to update, their non-linguistic knowledge is poor, they make false and self-contradictory statements, and these statements can be socially and ethically inappropriate. This talk will review these shortcomings and current efforts to address them within the existing LLM framework. It will then argue for a different, more modular architecture that decomposes the functions of existing LLMs and adds several additional components. We believe this alternative can address all of the shortcomings of LLMs.

Speaker's Bio: Dr. Dietterich (AB Oberlin College 1977; MS University of Illinois 1979; PhD Stanford University 1984) is Distinguished Professor Emeritus in the School of Electrical Engineering and Computer Science at Oregon State University. Dietterich is one of the pioneers of the field of Machine Learning and has authored more than 200 refereed publications and two books. His current research topics include robust artificial intelligence, robust human-AI systems, and applications in sustainability.

Prof. David Page

Duke University

AI for Electronic Health Records (EHRs)

Virtual Talk

AI has great potential to improve human health, and EHRs are an excellent data source. This talk begins with examples of past AI-based EHR analyses, including prediction of dangerous conditions from EHR data. It then discusses limitations and risks of such analyses, including issues of safety, efficacy, privacy, and confounding. Motivated by these examples, the talk presents two new AI innovations. The first shows how every trained deep neural network can be understood statistically as a conditional random field. The second is a new computational model of confounding and causation analogous to PAC-learning, but for causal discovery rather than supervised learning.

Speaker's Bio: David Page is the J.B. Duke Professor and Chair of the Department of Biostatistics and Bioinformatics at Duke University, and he is a Fellow of the American College of Medical Informatics. His 1993 PhD dissertation in Computer Science at the University of Illinois at Urbana-Champaign focused on theoretical aspects of AI and machine learning. He then became involved in biomedical applications of AI/ML as a postdoctoral researcher in the Oxford University Computing Laboratory, working with Stephen Muggleton. David served on the executive committee of the International Warfarin Pharmacogenetics Consortium (IWPC) and on the Scientific Advisory Board of the Observational Medical Outcomes Partnership. Before coming to Duke in 2019, he was Kellett and Vilas Distinguished Achievement Professor at the University of Wisconsin-Madison.

Prof. Ragini Verma

Cohen Veterans Bioscience and University of Pennsylvania

Harnessing the Power of Big Brain Data: Decoding Sex Differences for Targeted Therapeutics and Equitable Medicine

The talk will delve into how data science has revolutionized the investigation of the intricate landscape of the human brain. It will focus on the fascinating dimension of sex differences, with the goal of unraveling the complexities of how male and female brains differ in anatomy and, consequently, in function. We will embark on a scientific journey through research findings that highlight the structural differences and the associated functional nuances that contribute to diverse cognitive abilities and susceptibilities between the sexes. By examining the role of these differences in mental health, disease prevalence and progression, and treatment outcomes, we can pave the way for more effective and personalized therapeutics.

For the data science community, the talk will underscore the pivotal role of data collation and multi-modal data integration in uncovering sex-specific patterns and responses. We will explore the transformative potential of computational tools in interrogating vast datasets, elucidating complex relationships, and extracting meaningful insights. By harnessing the power of big data in clinical research, we can gain a comprehensive understanding of sex-related variations, paving the way for more precise and personalized medical interventions. The talk will also suggest how other therapeutically important factors, such as race and ethnicity, can be investigated, enabling medical care that transcends borders. Finally, by identifying data challenges in the brain space, it will highlight avenues for clinically meaningful research.

Speaker's Bio: Ragini is a Professor in Diffusion & Connectomics In Precision Healthcare Research (DiCIPHR), Department of Radiology, as well as a Professor of Neurosurgery at the University of Pennsylvania. She holds a master's degree in Mathematics and Computer Applications and a PhD in computer vision and mathematics from IIT Delhi (India). She spent two years as a postdoc at INRIA Rhône-Alpes with the MOVI project (currently LEAR and PERCEPTION), followed by two years of postdoctoral work in medical imaging at SBIA, prior to taking up her current position. Ragini's research interests span diffusion tensor imaging, multi-modality statistics, and facial expression analysis. She is actively involved in several clinical studies (in schizophrenia, aging, tumors, and multiple sclerosis) as well as projects in animal imaging.

Prof. Susan Davidson

University of Pennsylvania

Provenance and Explainability

Understanding the why and how of query results – data provenance – is a well-studied problem in the database community. A variety of approaches have been applied to this problem, ranging from provenance polynomials, which explain how an output tuple is constructed from input tuples, to notions of causality and responsibility. More recently, considerable attention has been paid to explainable AI (XAI). In particular, the use of Shapley values has become increasingly popular as a model-agnostic method for explaining individual predictions.

In this talk, I will discuss connections between provenance and work in XAI. In particular, I will discuss how Shapley values can be used in the context of data provenance and rule-based data insights, and the different perspectives that can be gained. I will also discuss how provenance can be used for incrementally updating machine learning models for linear or logistic regression, as well as in symbolic reasoning for AI applications.

Speaker's Bio: Susan B. Davidson received the B.A. degree in Mathematics from Cornell University, Ithaca, NY, in 1978, and the M.A. and Ph.D. degrees in Electrical Engineering and Computer Science from Princeton University, Princeton, NJ, in 1980 and 1982. Dr. Davidson is the Weiss Professor of Computer and Information Science (CIS) at the University of Pennsylvania, where she has been since 1982. She was the founding co-director of the Penn Center for Bioinformatics from 1997-2003, the founding co-director of the Greater Philadelphia Bioinformatics Alliance, and served as Deputy Dean of the School of Engineering and Applied Science from 2005-2007 and Chair of CIS from 2008-2013. Her research interests include data management for data science, database and web-based systems, provenance, crowdsourcing, and data citation.

Dr. Davidson is a Fellow of the AAAS, a Fellow of the ACM, and a Corresponding Fellow of the Royal Society of Edinburgh; she received the Lenore Rowe Williams Award and was a Fulbright Scholar and recipient of a Hitachi Chair. She received the IEEE Technical Committee on Data Engineering Impact Award, the Lindback Distinguished Teaching Award, the Ruth and Joel Spira Award for Excellence in Teaching, and the Trustees' Council of Penn Women/Provost Award for her work on advancing women in engineering, and served as Chair of the board of the Computing Research Association.

Prof. Partha Talukdar

Google Research, India and IISc Bangalore

Towards Responsible and Inclusive Large Language Modelling

Even though there are more than 7000 languages in the world, language technologies are available for only a handful of them. Lack of training data poses a significant challenge in developing language technologies for the rest. Recent advances in Multilingual Large Language Modeling present an opportunity to transfer knowledge and supervision from high web-resource languages to languages with fewer web resources. In this talk, I shall present an overview of research in this promising area in the Languages group at Google Research India.

Speaker's Bio: Partha is a Senior Staff Research Scientist at Google Research, India, where he leads the Natural Language Processing group. He is also an Associate Professor at IISc Bangalore. Previously, Partha was a Postdoctoral Fellow in the Machine Learning Department at Carnegie Mellon University. He received his PhD (2010) in CIS from the University of Pennsylvania. Partha is broadly interested in Natural Language Processing, Machine Learning, and in making language technologies more inclusive. He is a recipient of several awards, including an Outstanding Paper Award at ACL 2019 and the ACM India Early Career Award 2022. He is a co-author of a book on Graph-based Semi-Supervised Learning. Homepage: https://parthatalukdar.github.io/

Prof. Sunita Sarawagi

IIT Bombay

Re-introducing Structure in AI Models for Structured Data Analysis

Despite being trained with the sequence-to-sequence generation paradigm, Large Language Models (LLMs) have demonstrated impressive capabilities across a wide range of tasks, including code generation. Even for natural language interfaces for structured data analysis, LLMs currently dominate the leaderboard on standard benchmarks.

In this talk, I will make the case for a model that goes beyond the sequence-to-sequence paradigm for the Text-to-SQL task. We will see that structure is critical in how we select relevant schemas for retrieval augmentation, how we adapt to new schemas with few-shot prompting, and how we represent uncertainty. I hope that this discussion will lead to alternative designs of custom models in other domains as well.

Speaker's Bio: Sunita Sarawagi's research spans the fields of databases and machine learning. She is an Institute Chair Professor at IIT Bombay and an ACM Fellow. She received her PhD in databases from the University of California, Berkeley, and a bachelor's degree from IIT Kharagpur. She has also worked at Google Research (2014-2016), CMU (2004), and IBM Almaden Research Center (1996-1999). She was awarded the 2019 Infosys Prize in Engineering and Computer Science and the Distinguished Alumnus Award from IIT Kharagpur.