Joint work with Leah Rosenzweig.
During mass vaccination campaigns, social media platforms can facilitate the dissemination of public health information but may also contribute to vaccine hesitancy by serving as a vehicle for the spread of false and misleading information. Although talking with health professionals is an important avenue for addressing individuals’ concerns, one-on-one conversations with healthcare providers are difficult to scale. Can automated, personalized messaging delivered by a chatbot address individuals’ concerns and increase vaccine acceptance? To answer this question, we designed and deployed a Facebook Messenger chatbot to address questions and concerns that social media users in Kenya and Nigeria had about the COVID-19 vaccine. After optimizing messaging using an adaptive experimental design on 3,905 respondents, we compare the interactive concern-addressing chatbot to a chatbot that delivers a non-interactive public service announcement (PSA), as well as to a no-information control chatbot condition. We find that the concern-addressing chatbot increases COVID-19 vaccine intentions and willingness by 4-5% relative to the control condition, and by 3-4% relative to the PSA intervention. Among the 22,052 respondents in our evaluation sample, who at the time of the survey in early 2022 had not yet received a single COVID-19 vaccine, we observe the largest treatment effects among those most hesitant at baseline. With advertising costs as low as $0.21 per person engaged and $4.33 per person influenced, policymakers may want to consider personalized messaging on digital platforms as a quick and inexpensive way to reach many people and encourage compliance with public health programs during disease outbreaks.
Joint work with Leah Rosenzweig and Susan Athey.
How can we induce social media users to be discerning when sharing information during a pandemic? An experiment on Facebook Messenger with users from Kenya (n = 7,498) and Nigeria (n = 7,794) tested interventions designed to decrease intentions to share COVID-19 misinformation without decreasing intentions to share factual posts. The initial stage of the study incorporated (1) a factorial design with 40 intervention combinations and (2) a contextual adaptive design that increased the probability of assignment to treatments that had worked better for previous subjects with similar characteristics. The second stage evaluated the best-performing treatments and a targeted treatment assignment policy estimated from the data. We precisely estimate null effects from warning flags and related-article suggestions, two tactics used by social media platforms. However, nudges to consider the accuracy of information reduced misinformation sharing relative to control by 4.9% (estimate = −2.3 percentage points, 95% CI = [−4.2, −0.35]). Such low-cost, scalable interventions may improve the quality of information circulating online.
Joint work with Alexander Coppock and Donald P. Green.
Experimental researchers in political science frequently face the problem of inferring which of several treatment arms is most effective. They may also seek to estimate mean outcomes under that arm, construct confidence intervals, and test hypotheses. Ordinarily, multi-arm trials conducted using static designs assign participants to each arm with fixed probabilities. However, a growing statistical literature suggests that adaptive experimental designs that dynamically allocate larger assignment probabilities to more promising treatments are better equipped to discover the best-performing arm. Using simulations and empirical applications, we explore the conditions under which such designs hasten the discovery of superior treatments and improve the precision with which their effects are estimated. Recognizing that many scholars seek to assess performance relative to a control condition, we also develop and implement a novel adaptive algorithm that seeks to maximize the precision with which the largest treatment effect is estimated.
Developed with Vitor Hadad and Susan Athey.
Much of my recent work has been on developing tools for designing and analyzing data from adaptive experiments. This tutorial provides an overview of adaptive experimental design, describes some basic algorithms for treatment assignment, and discusses considerations for inference.
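To give a concrete flavor of the kind of treatment-assignment algorithm such a tutorial covers, here is a minimal sketch of Thompson sampling for a two-arm trial with binary outcomes. The function name and the Beta-Bernoulli setup are illustrative choices, not drawn from the tutorial itself.

```python
import numpy as np

def thompson_assign(successes, failures, rng):
    """Assign the next participant via Thompson sampling.

    successes/failures: per-arm counts of binary outcomes observed so far.
    Draws one sample from each arm's Beta posterior (uniform Beta(1, 1)
    prior) and assigns the arm with the largest draw.
    """
    draws = rng.beta(np.asarray(successes) + 1, np.asarray(failures) + 1)
    return int(np.argmax(draws))

# Simulated run: arm 1 has the higher true success rate, so the adaptive
# design should route most assignments to it over time.
rng = np.random.default_rng(0)
true_rates = [0.3, 0.5]
succ, fail = [0, 0], [0, 0]
for _ in range(2000):
    arm = thompson_assign(succ, fail, rng)
    outcome = rng.random() < true_rates[arm]
    succ[arm] += outcome
    fail[arm] += 1 - outcome

share_best = (succ[1] + fail[1]) / 2000  # fraction assigned to the better arm
```

Note that this sketch ignores the inference complications the tutorial discusses: because assignment probabilities depend on past outcomes, naive sample means from adaptively collected data can be biased.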
Suppose we have a factorial experiment in which we want to account for two-way and higher-order interactions. We may believe that interaction effects are small but not exactly zero, and that higher-order interactions tend to have smaller effects than lower-order ones.
Accounting for all interactions in a standard linear model may be costly in terms of variance, so we want to use some form of regularization.
Hierarchical ridge regression allows the penalty to increase with the order of the interaction.
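The idea above can be sketched in closed form: expand the binary factors into all interaction terms, then solve a ridge problem whose per-coefficient penalty grows with interaction order. The function names and the geometric penalty schedule below are illustrative assumptions, not a specific proposal from this work.

```python
import numpy as np
from itertools import combinations

def factorial_design(X):
    """Expand binary factors into intercept, main effects, and all interactions.

    Returns the expanded matrix and the interaction order of each column
    (0 for the intercept, 1 for main effects, 2 for two-way terms, ...).
    """
    n, k = X.shape
    cols, orders = [np.ones(n)], [0]
    for order in range(1, k + 1):
        for idx in combinations(range(k), order):
            cols.append(np.prod(X[:, idx], axis=1))
            orders.append(order)
    return np.column_stack(cols), np.array(orders)

def hierarchical_ridge(X, y, base_penalty=1.0, growth=4.0):
    """Ridge fit whose penalty grows geometrically with interaction order.

    base_penalty and growth are illustrative tuning choices: order-k terms
    are penalized by base_penalty * growth**(k - 1), so three-way terms are
    shrunk harder than two-way terms, which are shrunk harder than main
    effects; the intercept is left unpenalized.
    """
    Z, orders = factorial_design(X)
    penalties = np.where(orders == 0, 0.0, base_penalty * growth ** (orders - 1.0))
    beta = np.linalg.solve(Z.T @ Z + np.diag(penalties), Z.T @ y)
    return beta, orders

# Example: a 2^3 factorial with 200 units and true effects on two main terms.
rng = np.random.default_rng(1)
X = rng.integers(0, 2, size=(200, 3)).astype(float)
y = 1 + 2 * X[:, 0] - X[:, 1] + rng.normal(size=200) * 0.5
beta, orders = hierarchical_ridge(X, y)
```

In practice the penalty levels would be chosen by cross-validation or given a Bayesian interpretation as priors that tighten with interaction order.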
Joint work with Drew Dimmery.
When the Stable Unit Treatment Value Assumption (SUTVA) is violated and there is interference among units, there is no uniquely defined Average Treatment Effect (ATE), and alternative estimands may be of interest, among them average unit-level differences in outcomes under different homogeneous treatment policies. We term this target the Homogeneous Assignment Average Treatment Effect (HAATE). We consider approaches to experimental design with multiple treatment conditions under partial interference and, given the estimand of interest, we show that difference-in-means estimators may outperform correctly specified regression models in finite samples in terms of root mean squared error (RMSE). With errors correlated at the cluster level, we demonstrate that two-stage randomization procedures with intra-cluster correlation of treatment strictly between zero and one may dominate one-stage randomization designs on the same metric. Simulations demonstrate the performance of this approach; an application to online experiments at Facebook is discussed.
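One simple way to implement a two-stage randomization with intra-cluster correlation of treatment strictly between zero and one is a two-saturation design: clusters first draw a high or low treatment saturation, then units are randomized at their cluster's saturation. The sketch below is an illustration under these assumptions, not necessarily the exact procedure studied in the paper; the function name is hypothetical.

```python
import numpy as np

def two_stage_assign(clusters, p=0.5, rho=0.5, rng=None):
    """Two-stage assignment with intra-cluster treatment correlation rho.

    Stage 1: each cluster is randomized to saturation p - delta or p + delta,
    with delta = sqrt(rho * p * (1 - p)), which makes the intra-cluster
    correlation of the treatment indicator equal to rho while keeping the
    marginal assignment probability at p.
    Stage 2: units draw treatment Bernoulli at their cluster's saturation.
    rho = 0 recovers unit-level randomization; rho = 1, cluster-level.
    """
    if rng is None:
        rng = np.random.default_rng()
    clusters = np.asarray(clusters)
    delta = np.sqrt(rho * p * (1 - p))
    saturation = {c: p + delta * rng.choice([-1.0, 1.0]) for c in np.unique(clusters)}
    probs = np.array([saturation[c] for c in clusters])
    return (rng.random(len(clusters)) < probs).astype(int)
```

The mapping from rho to delta follows because the between-cluster variance of the saturation is delta squared, while the total variance of a Bernoulli(p) treatment indicator is p(1 - p), so their ratio is the intra-cluster correlation.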