Responsible Data Science and AI speaker series

In Fall 2021, we are offering a speaker series on "Responsible Data Science and AI". We will discuss topics such as explainability, reproducibility, biases, data curation and governance, and privacy. The presentations and discussions will take place on Fridays, 9-10am Central time (unless otherwise noted below), and will be virtual (via Zoom). Our first session will be on August 27th and the last on December 3rd. This series is hosted by the Center for Informatics Research in Science and Scholarship (CIRSS) at the School of Information Sciences at the University of Illinois at Urbana-Champaign, and co-organized by Jana Diesner and Nigel Bosch.

If you have any questions about the speaker series, please contact Jana Diesner or Nigel Bosch.

To join the Zoom session, please click the "View Event" link for each talk, and click the "PARTICIPATE online" button from the event page. The "PARTICIPATE online" button will go live closer to the event date.


08/27/2021 [.ics]
09:00am - 10:00am
Su Lin Blodgett, PhD
Postdoctoral researcher in the Fairness, Accountability, Transparency, and Ethics (FATE) group at Microsoft Research Montréal

Title: Towards Building Equitable Language Technologies

Bio: Su Lin Blodgett is a postdoctoral researcher in the Fairness, Accountability, Transparency, and Ethics (FATE) group at Microsoft Research Montréal. She is broadly interested in examining the social implications of natural language processing technologies. Her work currently focuses on better conceptualizing and measuring harms arising from language technologies, and on uncovering practices, assumptions, and constraints surrounding the production of these technologies. Previously, she completed her Ph.D. in computer science at the University of Massachusetts Amherst.

09/03/2021 [.ics]
09:00am - 10:00am
Allison Morgan, PhD
Data Scientist at Twitter

Title: Faculty hiring, social class, and epistemic inequality

Bio: Allison Morgan is currently a data scientist at Twitter. Broadly, she is interested in using causal inference and network science, joining surveys with big data, and studying fairness and social inequality in sociotechnical systems. Her research has measured the structural factors that drive a lack of diversity in science and highlighted their consequences. She earned her PhD in Computer Science at the University of Colorado Boulder, where she was supported by the NSF GRFP. Her research has been published in PNAS and Science Advances, and covered by outlets such as the Washington Post and Scientific American.

09/10/2021 [.ics]
09:00am - 10:00am
Ryan Baker, PhD
Associate Professor in the Graduate School of Education, University of Pennsylvania, USA

Title: Algorithmic Bias in Education: From Unknown Bias to Known Bias to Fairness to Equity

Bio: Ryan Baker is an Associate Professor at the University of Pennsylvania, and Director of the Penn Center for Learning Analytics. His lab conducts research on engagement and robust learning within online and blended learning, seeking to find actionable indicators that can be used today but which predict future student outcomes. Baker has developed models that can automatically detect student engagement in over a dozen online learning environments, and has led the development of an observational protocol and app for field observation of student engagement that has been used by over 150 researchers in 7 countries. Predictive analytics models he helped develop have been used to benefit over a million students, over a hundred thousand people have taken MOOCs he ran, and he has coordinated longitudinal studies that spanned over a decade. He was the founding president of the International Educational Data Mining Society, is currently serving as Editor of the journal Computer-Based Learning in Context, is Associate Editor of the Journal of Educational Data Mining, was the first technical director of the Pittsburgh Science of Learning Center DataShop, and currently serves as Co-Director of the MOOC Replication Framework (MORF). Baker has co-authored published papers with over 400 colleagues.

09/17/2021 [.ics]
09:00am - 10:00am
Lindah Kotut, PhD
Assistant Professor in the Information School at the University of Washington, USA

Title: Amplifying the Griot: (Ancient) Stories Guiding the Design of Fair, Equitable and Transparent Systems

Selected publications:

  • Kotut, L., & McCrickard, S. (2022). The TL;DR Charter: Speculatively Demystifying Privacy Policy Documents and Terms Agreements. In GROUP ’22: ACM Conference on Supporting Group Work, January 23–26, 2022, Sanibel Island, FL. ACM, New York, NY, USA.
  • Kotut, L., Bhatti, N., Saaty, M., Haqq, D., Stelter, T. L., & McCrickard, D. S. (2020). Clash of Times: Respectful Technology Space for Integrating Community Stories in Intangible Exhibits. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems.

09/24/2021 [.ics]
09:00am - 10:00am
Pardis Emami-Naeini, PhD
Postdoctoral scholar at the School of Computer Science, University of Washington, USA

Title: Designing an Informative and Usable Security and Privacy Label for IoT Devices

Bio: Pardis Emami-Naeini is a postdoctoral scholar in the Security and Privacy Research Lab at the University of Washington. Her research is broadly at the intersection of security and privacy, usability, and human-computer interaction. Her work has been published at top venues in security (IEEE S&P, SOUPS) and human-computer interaction and social sciences (CHI, CSCW) and covered by multiple outlets, including Wired and the Wall Street Journal. Pardis received her B.Sc. degree in computer engineering from Sharif University of Technology in 2015 and the M.Sc. and Ph.D. degrees in computer science from Carnegie Mellon University in 2018 and 2020, respectively. She was selected as a Rising Star in electrical engineering and computer science in October 2019 and was awarded the 2019-2020 CMU CyLab Presidential Fellowship.

10/01/2021 [.ics]
09:00am - 10:00am
Rainer Böhme, PhD
Professor for Security and Privacy, Department of Computer Science, University of Innsbruck, Austria

Title: Privacy preferences and choice architecture: the case of consent management on the web

Bio: Rainer Böhme is professor of Computer Science and head of the Security & Privacy Lab at the University of Innsbruck in the Austrian Alps. His background is interdisciplinary with degrees in Communication Science, Economics, and Computer Science. A large part of his research concerns the design or evaluation of technical systems with impact on society at large. This includes privacy, forensics, cyber risk, and most recently digital money.

Selected publications:

  • Hils, M., Woods, D. W., & Böhme, R. (2021). Privacy Preference Signals: Past, Present and Future. Proceedings on Privacy Enhancing Technologies, 2021(4), 249–269.
  • Hils, M., Woods, D. W., & Böhme, R. (2020). Measuring the Emergence of Consent Management on the Web. In Proceedings of the ACM Internet Measurement Conference (IMC).
  • Woods, D. W., & Böhme, R. (2020). The Commodification of Consent. In Workshop on the Economics of Information Security (WEIS). Brussels, Belgium.
  • Machuletz, D., & Böhme, R. (2020). Multiple Purposes, Multiple Problems: A User Study of Consent Dialogs after GDPR. Proceedings on Privacy Enhancing Technologies, 2020(2), 481–498.
  • Böhme, R., & Köpsell, S. (2010). Trained to Accept? A Field Experiment on Consent Dialogs. In Proceedings of the 28th International Conference on Human Factors in Computing Systems (CHI ’10), Atlanta, Georgia, pp. 2403–2406.

10/08/2021 [.ics]
09:00am - 10:00am
Alexandra Olteanu, PhD
Principal Researcher in the Fairness, Accountability, Transparency, and Ethics (FATE) group at Microsoft Research

Title: Challenges to the Foresight and Measurement of Computational Harms

Bio: Alexandra Olteanu is a principal researcher at Microsoft Research Montréal, part of the Fairness, Accountability, Transparency, and Ethics (FATE) group. Her work currently examines practices and assumptions made when evaluating a range of computational systems, particularly measurements aimed at quantifying possible computational harms. Before joining Microsoft Research, Alexandra was a Social Good Fellow at IBM's T.J. Watson Research Center. Her work has been featured in governmental reports and in popular media outlets. Alexandra has co-organized tutorials and workshops and has served on the program committees of all major web and social media conferences, including SIGIR, ICWSM, KDD, WSDM, and WWW, and served as Tutorial Co-chair for ICWSM 2018 and 2020 and for FAccT 2018. She also sits on the steering committee of the ACM Conference on Fairness, Accountability, and Transparency. Alexandra holds a Ph.D. in Computer Science from École Polytechnique Fédérale de Lausanne (EPFL), Switzerland.

Selected publications:

  • Robertson, R. E., Olteanu, A., Diaz, F., Shokouhi, M., & Bailey, P. (2021). “I Can’t Reply with That”: Characterizing Problematic Email Reply Suggestions. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems (CHI ’21). doi: 10.1145/3411764.3445557.
  • Olteanu, A., Diaz, F., & Kazai, G. (2020). When Are Search Completion Suggestions Problematic? Proceedings of the ACM on Human-Computer Interaction, 4(CSCW2), 1–25. doi: 10.1145/3415242.
  • Boyarskaya, M., Olteanu, A., & Crawford, K. (2020). Overcoming Failures of Imagination in AI Infused System Development and Deployment. Presented at the Navigating the Broader Impacts of AI Research Workshop (NeurIPS 2020).

10/15/2021 [.ics]
09:00am - 10:00am
Babak Salimi, PhD
Assistant Professor in the Halıcıoğlu Data Science Institute, University of California, San Diego, USA

Title: Generating Post-hoc Explanations for ML Models Using Contrastive Counterfactuals

Bio: Babak Salimi is an assistant professor in HDSI at UC San Diego. Before joining UC San Diego, he was a postdoctoral research associate in the Department of Computer Science and Engineering, University of Washington, where he worked with Prof. Dan Suciu and the database group. He received his Ph.D. from the School of Computer Science at Carleton University, advised by Prof. Leopoldo Bertossi. His research seeks to unify techniques from theoretical data management, causal inference, and machine learning to develop a new generation of decision-support systems that help people with heterogeneous backgrounds interpret data. His ongoing work in causal relational learning aims to develop the necessary conceptual foundations to make causal inference from complex relational data. Further, his research in the area of responsible data science develops needed foundations for ensuring fairness and accountability in the era of data-driven decisions. His research contributions have been recognized with a Research Highlight Award in ACM SIGMOD, a Best Demonstration Paper Award at VLDB, and a Best Paper Award in ACM SIGMOD.

10/22/2021 [.ics]
09:00am - 10:00am
Katrin Weller, PhD
Team lead “Digital Society Observatory”, GESIS Leibniz Institute for the Social Sciences
Co-lead “Research Data and Methods” Unit, CAIS Center for Advanced Internet Studies

Title: Acknowledging potential pitfalls in social media research – between researchers' practices and structured documentation approaches

Bio: Dr. Katrin Weller leads the Digital Society Observatory team as part of GESIS' Computational Social Science department. From 2021 to 2023, she is also co-leading the Research Data & Methods unit at the Center for Advanced Internet Studies (CAIS). In her work she examines how researchers across disciplines use data from web and social media platforms as new types of research data, and how this leads to new challenges along the research process.

Selected publications:

  • Sen, I., Flöck, F., Weller, K., Weiß, B., & Wagner, C. (2021). A Total Error Framework for Digital Traces of Human Behavior on Online Platforms. Public Opinion Quarterly, 85(S1), 399–422.
  • Kinder-Kurlanda, K. E., & Weller, K. (2020). Perspective: Acknowledging Data Work in the Social Media Research Lifecycle. Frontiers in Big Data, 3:509954. doi: 10.3389/fdata.2020.509954.
  • Weller, K., & Kinder-Kurlanda, K. E. (2015). Uncovering the Challenges in Collection, Sharing and Documentation: The Hidden Data of Social Media Research? In Standards and Practices in Large-Scale Social Media Research: Papers from the 2015 ICWSM Workshop, Proceedings of the Ninth International AAAI Conference on Web and Social Media, Oxford University, May 26–29, 2015, pp. 28–37. Ann Arbor, MI: AAAI Press.

10/29/2021 [.ics]
09:00am - 10:00am
Julia Stoyanovich, PhD
Institute Associate Professor of Computer Science & Engineering, Tandon School of Engineering
Associate Professor of Data Science, Center for Data Science
Director, Center for Responsible AI (R/AI)
New York University, USA

Title: Teaching Responsible Data Science

Abstract: Although an increasing number of ethical data science and AI courses are available, pedagogical approaches used in these courses rely primarily on texts rather than on algorithmic development or data analysis. Technical students often consider these courses unimportant and a distraction from the “real” material. To develop instructional materials and methodologies that are thoughtful and engaging, we must strive for balance: between texts and coding, between critique and solution, and between cutting-edge research and practical applicability. In this talk, I will discuss responsible data science courses that I have been developing and teaching to technical students at New York University since 2019. I will also speak about a public education course called "We are AI" that is offered in a peer-learning setting. I will draw on these efforts to chart a path towards strengthening distributed accountability structures, to make the design, development, use, and oversight of automated decision systems responsible.

Bio: Julia Stoyanovich is an Institute Associate Professor of Computer Science & Engineering at the Tandon School of Engineering, Associate Professor of Data Science at the Center for Data Science, and Director of the Center for Responsible AI at New York University (NYU). Her research focuses on responsible data management and analysis: on operationalizing fairness, diversity, transparency, and data protection in all stages of the data science lifecycle. She established the "Data, Responsibly" consortium and served on the New York City Automated Decision Systems Task Force, by appointment from Mayor de Blasio. Julia developed and has been teaching courses on Responsible Data Science at NYU, and is a co-creator of an award-winning comic book series on this topic. In addition to data ethics, Julia works on the management and analysis of preference and voting data, and on querying large evolving graphs. She holds M.S. and Ph.D. degrees in Computer Science from Columbia University, and a B.S. in Computer Science and in Mathematics & Statistics from the University of Massachusetts at Amherst. She is a recipient of an NSF CAREER award and a Senior Member of the ACM.

11/05/2021 [.ics]
09:00am - 10:00am
Ceren Budak, PhD
Assistant Professor of Information, School of Information, University of Michigan, USA

Title: Quantifying the Role of Display Advertising in the Disinformation Ecosystem

Bio: Ceren Budak is an Assistant Professor at the School of Information, University of Michigan. Her research interests lie in the area of computational social science. She utilizes network science, machine learning, and crowdsourcing methods and draws from scientific knowledge across multiple social science communities to contribute computational methods to the field of political communication.

Selected publications:

  • Bozarth, L., & Budak, C. (2021). Market Forces: Quantifying the Role of Top Credible Ad Servers in the Fake News Ecosystem. Proceedings of the International AAAI Conference on Web and Social Media, 15(1), 83–94.
  • Bozarth, L., & Budak, C. (2021). An Analysis of the Partnership between Retailers and Low-credibility News Publishers. Journal of Quantitative Description: Digital Media, 1.
  • Bozarth, L., Saraf, A., & Budak, C. (2020). Higher Ground? How Groundtruth Labeling Impacts Our Understanding of Fake News about the 2016 U.S. Presidential Nominees. Proceedings of the International AAAI Conference on Web and Social Media, 14(1), 48–59.

11/12/2021 [.ics]
09:00am - 10:00am
Laura Nelson, PhD
Assistant Professor of Sociology, University of British Columbia, Canada

Title: Partial Perspective and Situated Knowledge: A Feminist Appraisal of Machine Learning and AI

Bio: Laura K. Nelson is an assistant professor of sociology at the University of British Columbia. She uses computational methods – principally text analysis, natural language processing, machine learning, and network analysis techniques – to study social movements, culture, gender, and organizations and institutions. Previously, she was an assistant professor of sociology at Northeastern University, a postdoctoral research fellow at Northwestern University, and a postdoctoral fellow at the Data Science Institute and Digital Humanities at the University of California, Berkeley, which is also where she received her PhD. She has published in outlets such as the American Journal of Sociology, Gender & Society, Poetics, Mobilization: An International Quarterly, and Sociological Methods & Research. She is currently on the editorial board of Sociological Methodology and is an associate editor at EPJ Data Science.

11/19/2021 [.ics]
09:00am - 10:00am
Ana-Andreea Stoica
PhD student in Computer Science, Columbia University, USA

Title: Diversity and inequality in social networks

Bio: Ana-Andreea Stoica is a Ph.D. candidate at Columbia University. Her work focuses on mathematical models, data analysis, and inequality in social networks. From recommendation algorithms to the way information spreads in networks, Ana is particularly interested in studying the effect of algorithms on people's sense of privacy, community, and access to information and opportunities. Since 2019, she has been co-organizing the Mechanism Design for Social Good initiative.

Selected publications:

  • Stoica, A.-A., Han, J. X., & Chaintreau, A. (2020). Seeding Network Influence in Biased Networks and the Benefits of Diversity. In Proceedings of The Web Conference 2020 (WWW ’20). ACM.
  • Stoica, A.-A., Riederer, C., & Chaintreau, A. (2018). Algorithmic Glass Ceiling in Social Networks. In Proceedings of the 2018 World Wide Web Conference (WWW ’18). ACM.
  • Finocchiaro, J., Maio, R., Monachou, F., Patro, G. K., Raghavan, M., Stoica, A.-A., & Tsirtsis, S. (2021). Bridging Machine Learning and Mechanism Design towards Algorithmic Fairness. In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency (FAccT ’21). ACM.

12/03/2021 [.ics]
09:00am - 10:00am
Tjitze Rienstra, PhD
Assistant Professor in the Department of Data Science & Knowledge Engineering, Faculty of Science and Engineering, Maastricht University, The Netherlands

Title: Explanation through argumentation

Abstract: The field of computational argumentation is concerned with models of reasoning that mimic how humans reason when they settle issues by exchanging arguments in a dialogue. In this talk I provide an overview of the development and current state of the field, with a special focus on recent applications of computational argumentation as a basis of explainable and interactive AI techniques.

Bio: Tjitze Rienstra is an Assistant Professor at the Department of Data Science & Knowledge Engineering, Faculty of Science and Engineering, Maastricht University, The Netherlands. His research focuses on explainable AI, computational models of argumentation, and reasoning under uncertainty.