AI models have shown significant promise in many application areas, including high-stakes domains such as healthcare. However, ethical concerns remain about adopting AI models in healthcare. In particular, recent work has shown that AI models may produce predictions with biased and discriminatory impacts, such as under-recommending Black patients for additional treatment, thus diverting resources away from already-undertreated populations.

This talk focuses on an underexplored mechanism for such bias: differences in laboratory testing, which we call disparate censorship. First, I provide a theoretical analysis of how disparate censorship could result in disproportionate harm across patient populations. Next, I explore the limitations of existing methods for addressing disparate censorship and propose a solution.

Trenton Chang is a third-year EECS PhD candidate at the University of Michigan in the MLD3 Lab. His primary research area is machine learning fairness in healthcare. To date, his work has focused on analyzing and mitigating the downstream impact of biases in clinical decision-making on the performance and fairness of AI models for clinical decision support. Previously, Trenton earned degrees in American Studies (BA ’20) and Computer Science (MS ’21) from Stanford University. In his free time, Trenton enjoys playing speed chess and practicing jazz piano.

Logistics:

  • Special thanks to Ann Arbor SPARK for sponsoring the venue, pizza, and light beverages.
  • Doors open at 5:30 p.m. Event starts at 6:00 p.m.
  • In-person attendance is highly encouraged to help build a community. (If you wish to attend virtually, leave a comment and we’ll consider adding a Zoom link.)