I'm a human-AI interaction researcher and strategist specializing in AI participation design: the behavioral architecture that determines how AI engages with human thinking, decision-making, and emotional processes over time.
My focus: when agents should talk vs. show, how to design confirmation gates for high-stakes decisions, how to prevent over-reliance, and how to recover trust after failures.
What makes my approach different: I've been both the researcher generating insights and the PM deciding what to build with them. I've led 0→1 conversational AI products from concept through launch, which means I understand the constraints and tradeoffs that determine whether research actually ships.
My background includes training in mental health and relational dynamics alongside HCI, which helps me recognize patterns most teams miss: dependency formation, reassurance loops, and self-trust erosion. These patterns appear in any agentic system, not just therapeutic AI.
Methods: Mixed-methods AI UX research, diary studies, Wizard-of-Oz prototyping, human-in-the-loop evaluation, log analysis, behavioral analytics, SQL
Deliverables: Interaction pattern libraries, evaluation rubrics, agent UX heuristics, dependency risk assessments, decision-gate designs
Education: MS Human Factors, Bentley University; BA Cognitive Science, Brown University
Experience: Meta FAIR, Native Voice AI, Matter Neuroscience, IBM