Dissertation: Value-Aligned Recommendation Systems

My dissertation addresses the gap between what users say they prefer (stated preferences) and what their behaviors suggest they prefer (revealed preferences). Social media algorithms infer preferences from engagement behavior, so they often amplify content that is attention-grabbing but misaligned with what users actually value.

How can we design social media feed algorithms that better align with what users consciously value? And what effects might such value-aligned systems have on individuals and society? To address these questions, I develop a stated-preference-based recommendation algorithm that leverages large language models (LLMs) to re-rank content based on users’ explicitly expressed values. I test its effects on both individual- and system-level outcomes through randomized controlled trials and large-scale simulations.
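
As a rough illustration of the re-ranking idea (a minimal sketch, not my actual implementation), the Python snippet below scores each candidate post for alignment with a user's stated values and blends that score with the platform's original engagement signal. The function score_value_alignment stands in for the LLM prompt that would do the rating in practice; the keyword-overlap heuristic and the 0.7 weight are placeholders chosen so the example runs on its own.

from dataclasses import dataclass

@dataclass
class Item:
    item_id: str
    text: str
    engagement_score: float  # the platform's original (revealed-preference) ranking signal

def score_value_alignment(item: Item, stated_values: list[str]) -> float:
    """Placeholder for an LLM call that would rate how well item.text
    reflects the user's stated values (e.g., on a 0-1 scale).
    Here: a crude keyword-overlap heuristic so the sketch runs offline."""
    text = item.text.lower()
    hits = sum(1 for value in stated_values if value.lower() in text)
    return hits / max(len(stated_values), 1)

def rerank_feed(items: list[Item], stated_values: list[str], weight: float = 0.7) -> list[Item]:
    """Blend the engagement signal with the stated-value alignment score,
    then sort the feed by the blended score (highest first)."""
    def blended(item: Item) -> float:
        return weight * score_value_alignment(item, stated_values) + (1 - weight) * item.engagement_score
    return sorted(items, key=blended, reverse=True)

if __name__ == "__main__":
    feed = [
        Item("a", "Outrage bait about the latest scandal", engagement_score=0.9),
        Item("b", "In-depth local news explainer", engagement_score=0.4),
    ]
    values = ["local news", "in-depth"]
    for item in rerank_feed(feed, values):
        print(item.item_id, item.text)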


Ongoing Projects

In addition to my dissertation, I have led and collaborated on several projects that explore how digital platforms shape political behavior and discourse:


Future Directions: Human-AI Interaction for Democratic Engagement

Building on these lines of work, I am expanding my research into the domain of human-AI interaction, with a focus on civic applications. I explore how AI can support more prosocial and pro-democratic engagement online, for example by facilitating constructive political dialogue or helping to bridge partisan divides.

Methodologically, I am also interested in integrating AI into social science experiments. This includes AI-augmented survey designs that simulate dynamic conversations or deliberative settings at scale, enabling the study of opinion change, persuasion, and engagement in more interactive environments.
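
To sketch what such a design might look like in practice (a purely illustrative example, not an existing instrument), the snippet below runs a short adaptive interview in which a follow-up probe is generated after each answer. The generate_probe function stands in for an LLM call that would tailor the probe to the respondent's reply; here it uses a fixed template so the example is self-contained.

def generate_probe(question: str, answer: str) -> str:
    # Placeholder for an LLM-generated, answer-specific follow-up question.
    return f"You said '{answer}' to '{question}'. What experience most shaped that view?"

def run_adaptive_interview(questions: list[str], get_answer) -> list[dict]:
    """Ask each question, then an AI-generated probe, and log the exchange."""
    transcript = []
    for question in questions:
        answer = get_answer(question)              # initial response
        probe = generate_probe(question, answer)   # adaptive follow-up
        follow_up = get_answer(probe)
        transcript.append({"question": question, "answer": answer,
                           "probe": probe, "follow_up": follow_up})
    return transcript

if __name__ == "__main__":
    # Scripted respondent for demonstration; a real study would collect live input.
    canned = iter(["Somewhat agree", "A town hall meeting I attended",
                   "Agree", "Conversations with neighbors"])
    log = run_adaptive_interview(
        ["Political discussion online is mostly constructive.",
         "I trust my local news sources."],
        get_answer=lambda prompt: next(canned),
    )
    for turn in log:
        print(turn["probe"], "->", turn["follow_up"])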