Attending Program

August 15, 2022, Washington DC Convention Center, Room 209B

8:00 – 8:05 am Opening Remarks
8:05 – 8:10 am Speaker Intro
8:10 – 8:40 am Keynote Talk 1: Virtue Throughout the Artificial Intelligence Lifecycle
Dr. Matt Turek, Defense Advanced Research Projects Agency
8:40 – 9:40 am Accepted Paper Talks 1
  • Multiple Attribute Fairness: Application to Fraud Detection.
  • Democratizing Ethical Assessment of Natural Language Generation Models.
  • Consensus-determinacy Space and Moral Components for Ethical Dilemmas.
9:40 – 9:45 am Speaker Intro
9:45 – 10:15 am Keynote Talk 2: Striving for Socially Responsible AI
Dr. Huan Liu, Arizona State University
Abstract: AI has never been this pervasive and effective. AI algorithms are used in news feeds, friend and purchase recommendations, hiring and firing decisions, and political campaigns. Data empowers AI algorithms and is then collected again to further train them. We have come to realize that AI algorithms have biases, and some biases might result in deleterious effects. Facing these existential challenges, we explore how socially responsible AI can help in data science: what it is, why it is important, how it can protect and inform us, and how it can help prevent or mitigate the misuse of AI. We show how socially responsible AI works via use cases of privacy preservation, cyberbullying identification, and disinformation detection. Knowing the problems with AI and our own conflicting goals, we further discuss some quandaries and challenges in our pursuit of socially responsible AI.
10:15 – 10:30 am Coffee Break
10:30 – 11:30 am Accepted Paper Talks 2
  • Fair Collective Classification in Networked Data.
  • Information Theoretic Framework For Evaluation of Task Level Fairness.
  • Stress-testing Fairness Mitigation Techniques under Distribution Shift using Synthetic Data.
11:30 – 11:35 am Speaker Intro
11:35 am – 12:05 pm Keynote Talk 3: Algorithmic Foundation of Fair Graph Mining
Dr. Hanghang Tong, University of Illinois Urbana-Champaign
Jian Kang (Presenter), University of Illinois Urbana-Champaign
Abstract: Network (i.e., graph) mining plays a pivotal role in many high-impact application domains. The state of the art offers a wealth of sophisticated theories and algorithms, primarily focused on answering who- or what-type questions. On the other hand, the why- or how-type questions of network mining have not been well studied. For example, how can we ensure network mining is fair? How do mining results relate to the input graph topology? Why does the mining algorithm 'think' a transaction looks suspicious? In this talk, I will present our work on addressing individual fairness in graph mining. First, we present a generic definition of individual fairness for graph mining, which naturally leads to a quantitative measure of the potential bias in graph mining results. Second, we propose three mutually complementary algorithmic frameworks to mitigate the proposed individual bias measure, namely debiasing the input graph, debiasing the mining model, and debiasing the mining results. Each algorithmic framework is formulated from the optimization perspective, using effective and efficient solvers, and is applicable to multiple graph mining tasks. Third, accommodating individual fairness is likely to change the graph mining results obtained without the fairness consideration. We develop an upper bound to characterize this cost (i.e., the difference between the graph mining results with and without the fairness consideration). Toward the end of my talk, I will also introduce some other recent work on addressing the why and how questions of network mining, and share my thoughts about future work.
12:05 – 12:10 pm Closing