I apply resource allocation and market design methods to real-world settings, including Kidney Exchange and Blood Donation. This work combines theory (such as matching and complexity analysis), empirical methods (such as data-driven simulations), and real-world experiments. To ensure that these systems align with stakeholder interests, I also study AI and Human Decision Making. My academic advisor is John P. Dickerson.
Patients with kidney failure have only two options: a lifetime on dialysis, or kidney transplantation. Dialysis is far more expensive and burdensome than transplantation; however, donor kidneys are scarce: on average, 20 people die each day in the US while waiting for a transplant. Furthermore, many patients in need of a kidney have willing living donors but cannot undergo transplantation due to medical incompatibilities.
To address this supply-demand mismatch, kidney exchange allows patients with willing living donors to swap donors so that each patient receives a compatible (or better-matched) kidney. Formulated as an optimization problem, kidney exchange is NP-hard and APX-hard, though modern exchange instances can typically be solved in a reasonable amount of time.
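To make the optimization concrete, here is a minimal sketch of the classic cycle formulation, assuming a toy compatibility graph and the common cap of three on cycle length; the brute-force search stands in for the integer-programming solvers that fielded exchanges actually use.

```python
from itertools import combinations

# Hypothetical compatibility digraph: edge (u, v) means the donor paired
# with patient u can donate to patient v.
edges = {(0, 1), (1, 0), (1, 2), (2, 3), (3, 1), (3, 4), (4, 3)}
N_PAIRS, MAX_CYCLE_LEN = 5, 3  # fielded exchanges commonly cap cycles at 3

def find_cycles():
    """Enumerate simple cycles of length <= MAX_CYCLE_LEN, deduplicated
    by vertex set (only coverage matters when counting transplants)."""
    cycles = set()
    def extend(path):
        for v in range(N_PAIRS):
            if (path[-1], v) not in edges:
                continue
            if v == path[0] and len(path) >= 2:
                cycles.add(tuple(sorted(path)))
            elif v not in path and len(path) < MAX_CYCLE_LEN:
                extend(path + [v])
    for start in range(N_PAIRS):
        extend([start])
    return list(cycles)

# The NP-hardness bites here: select a set of vertex-disjoint cycles
# covering the most pairs. Brute force suffices on toy data.
cycles = find_cycles()
best = max(
    (p for k in range(len(cycles) + 1) for p in combinations(cycles, k)
     if len({v for c in p for v in c}) == sum(len(c) for c in p)),
    key=lambda p: sum(len(c) for c in p),
)
print(sum(len(c) for c in best), "transplants via cycles:", best)
```

The selection step is exactly where the hardness lives: the number of candidate cycle subsets grows exponentially, which is why production exchanges rely on integer programming with careful pruning rather than enumeration.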
How can we prioritize marginalized patients without severely impacting the overall exchange? We study several methods for enforcing this notion of fairness and demonstrate their effects on data collected from real exchanges.
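As one hypothetical illustration of how such a priority can be encoded (a sketch of the general idea, not the specific mechanisms we study): add a small per-transplant bonus for prioritized patients to the matching objective, so that priority acts as a tiebreaker rather than overriding efficiency.

```python
from itertools import combinations

# Hypothetical instance: candidate donor-swap cycles (as sets of
# patient-donor pairs) and a set of prioritized patients.
cycles = [{0, 1}, {1, 2}]
prioritized = {2}
BONUS = 0.1  # small per-transplant bonus for prioritized patients

def score(packing):
    pairs = set().union(*packing) if packing else set()
    return len(pairs) + BONUS * len(pairs & prioritized)

# Enumerate vertex-disjoint cycle packings and keep the best-scoring one.
packings = [p for k in range(len(cycles) + 1)
            for p in combinations(cycles, k)
            if sum(len(c) for c in p) == len(set().union(*p) if p else set())]
best = max(packings, key=score)
print(best, score(best))
```

Here both candidate packings yield two transplants, and the bonus steers the exchange toward the one that serves patient 2; larger bonuses would trade overall efficiency for stronger prioritization.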
Designing a kidney exchange program requires input from medical professionals, policymakers, computer scientists, and ethicists. A "good" program should be both technically and morally sound; however, technical experts (e.g., computer scientists) and stakeholders (e.g., medical professionals) often work independently. We propose a formal division of labor between technical experts and stakeholders, and outline a framework through which they can collaborate. Through this framework we analyze existing kidney exchange programs, survey the technical literature on kidney exchange algorithms, and identify areas for future collaboration between the two groups.
Recent advances in immunosuppression allow some patients to receive a kidney transplant from otherwise medically incompatible donors. We develop a theoretical model for this setting and provide an optimization framework that supports a wide variety of objectives (e.g., prioritizing certain types of patients or transplants). Simulations indicate that even a small number of immunosuppressant treatments (10 or 20) can double the number of transplants facilitated by realistically sized exchanges with hundreds of patients.
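One rough way to capture this, sketched below on assumed toy data: label each candidate cycle with the number of immunosuppressant treatments it would consume, then maximize transplants subject to a fixed treatment budget.

```python
from itertools import combinations

# Hypothetical candidate cycles: (patient-donor pairs, number of
# immunosuppressant treatments the cycle would consume).
cycles = [({0, 1}, 0), ({1, 2, 3}, 1), ({3, 4}, 2)]
BUDGET = 1  # total immunosuppressant treatments available

def feasible(packing):
    pairs = [v for c, _ in packing for v in c]
    within_budget = sum(k for _, k in packing) <= BUDGET
    return len(pairs) == len(set(pairs)) and within_budget

best = max(
    (p for k in range(len(cycles) + 1) for p in combinations(cycles, k)
     if feasible(p)),
    key=lambda p: sum(len(c) for c, _ in p),
)
print(best, "->", sum(len(c) for c, _ in best), "transplants")
```

With BUDGET = 0 the best packing here yields two transplants; a single treatment unlocks a three-transplant cycle, mirroring on a toy scale the kind of budget effect the simulations examine.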
Blood is a scarce resource that can save the lives of those in need, and managing the blood supply chain has been a topic of research for decades. We consider an aspect of the blood supply chain that is seldom addressed in the literature: coordinating a network of donors to meet demand from a network of recipients.
In a collaboration with Facebook, we deployed the first large-scale algorithmic system for matching blood donors with recipients. We focused on the Facebook Blood Donation Tool, a platform that connects prospective blood donors with nearby recipients. This setting has many objectives, including increasing the number of blood donations, treating recipients fairly, and respecting user preferences. To formalize these goals we developed an online matching framework, along with matching policies for automatic donor notification. Both simulations and a fielded experiment demonstrate that our methods increase expected donation rates by 5%, which, when generalized to the entire Blood Donation Tool, corresponds to tens of thousands of additional donations every few months.
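To convey the flavor of an automatic notification policy (a hypothetical sketch, not the deployed system's actual logic or parameters): when a request arrives, notify the eligible donors most likely to respond, subject to a per-donor notification cap that respects user preferences.

```python
import heapq

# Hypothetical donor pool: donor id -> [estimated response prob, pings sent]
donors = {"d1": [0.10, 0], "d2": [0.30, 0], "d3": [0.05, 0], "d4": [0.25, 0]}
MAX_NOTIFICATIONS = 2   # per-donor cap, protecting user experience
NOTIFY_PER_REQUEST = 2  # donors to ping per incoming blood request

def notify(request_id):
    """Greedy policy: ping the eligible donors most likely to respond."""
    eligible = [(p, d) for d, (p, sent) in donors.items()
                if sent < MAX_NOTIFICATIONS]
    for p, d in heapq.nlargest(NOTIFY_PER_REQUEST, eligible):
        donors[d][1] += 1
        print(f"request {request_id}: notify {d} (p={p:.2f})")

for req in ["r1", "r2", "r3"]:
    notify(req)
```

The cap is what makes this an online problem: pinging the most responsive donors today consumes notification budget that future requests might have used more effectively.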
AI is increasingly used to influence, or make, important decisions in a wide range of domains, including medicine, education, criminal justice, and financial services. Indeed, two key examples are kidney exchange and blood donation. I study the interaction between stakeholders and algorithmic decision tools, with an eye toward developing responsible decision support tools.
AI and ML researchers have proposed several mathematical definitions of fairness; however, it is not clear whether stakeholders understand or agree with these notions. We develop and validate a comprehension score to measure people's understanding of mathematical fairness definitions. Using a hypothetical decision scenario related to hiring, we translate several mathematical fairness definitions into "rules" that a hiring manager must follow. Using our comprehension score, we find that most people do not understand these rules, and those who do often disagree with them. This raises questions about the usefulness of algorithmic fairness: can an AI system be truly "fair" if its stakeholders do not understand its behavior?
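For context, one of the simplest such definitions is demographic parity; the check below is an illustrative translation into the hiring setting (the applicant data and rule wording are hypothetical, not our survey materials).

```python
# Hypothetical applicants: (group, hired)
decisions = [("A", True), ("A", False), ("A", True), ("B", True),
             ("B", False), ("B", False), ("B", False), ("A", True)]

def demographic_parity_gap(decisions):
    """Absolute difference in hire rates, assuming exactly two groups."""
    rates = {}
    for group in {g for g, _ in decisions}:
        outcomes = [hired for g, hired in decisions if g == group]
        rates[group] = sum(outcomes) / len(outcomes)
    a, b = rates.values()
    return abs(a - b), rates

gap, rates = demographic_parity_gap(decisions)
print(rates, f"gap = {gap:.2f}")  # the rule is satisfied if the gap is small
```

Even this one-line rule asks a manager to reason about group-level rates rather than individual decisions, which hints at why comprehension can be difficult.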
Many modern AI methods are guided by models of stakeholder preferences, which are often learned through observed decisions (such as product purchases) or through hypothetical decisions (e.g., surveys). There are many cases where stakeholders may be unwilling to express a preference: for example, if more information is needed to arrive at a decision, or if all available options are bad; in these cases we say they are indecisive. Drawing from moral philosophy and psychology, we develop a class of indecision models, which can be fit to observed data; in two survey studies we find that indecision is common, and that several causes of indecision are plausible. This raises many questions for the use of AI in decision making: from a theoretical perspective, how should we aggregate indecisive voters if indecision has multiple meanings? From an empirical perspective, how can we identify an indecisive agent, and what characteristics of a decision scenario lead to indecision?
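As a minimal sketch of what fitting an indecision model can look like (an illustration on assumed survey data, not the model class from the paper): suppose an agent declares indecision whenever the utility gap between options falls below a threshold, and fit that threshold to observed responses.

```python
# Hypothetical survey data: (utility gap between two options, response),
# where response is True if the respondent expressed indecision.
data = [(0.1, True), (0.2, True), (0.4, True), (0.6, False),
        (0.9, False), (0.3, True), (0.7, False), (0.5, False)]

def accuracy(tau):
    """Fraction of responses explained by the rule 'indecisive iff gap < tau'."""
    return sum((gap < tau) == indecisive for gap, indecisive in data) / len(data)

# Fit the threshold by a simple grid search over candidate values.
best_tau = max((t / 10 for t in range(11)), key=accuracy)
print(f"tau = {best_tau:.1f}, accuracy = {accuracy(best_tau):.2f}")
```

Richer models would make the threshold probabilistic or tie it to specific causes (missing information, all options bad), but even this toy version shows how indecision can be treated as data rather than noise.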
Using a survey study, we simulate the effect of an AI tool on decision making: we show participants random predictions of their own preferences, attributing each prediction to an "AI system" or a human "expert." We find that participants follow these predictions, even though they are random, both when the predictions are attributed to "AI" and when they are attributed to "experts" (compared with a control group that receives no prediction). This has serious implications for AI in decision making: if random decision support can influence a stakeholder's behavior, it is easy to imagine that adversarial AI systems can manipulate a stakeholder's decisions. Furthermore, if AI tools both are trained on and influence stakeholders' behavior, it is difficult to define what a "correct" decision is.
𝔸 = alphabetical author ordering.