👋 Hi, I’m Amia.

I’m a Senior UX Researcher with 8 years of experience driving product strategy through mixed-methods research. With a background in Product Management, I take a strategic, business-aligned approach to UX research—securing stakeholder buy-in and influencing executive decisions.

I thrive in ambiguous, fast-moving environments where research shapes not just usability but product vision and market differentiation.

I specialize in:

  • AI & Human Interaction – Trust, personalization, and usability in AI-powered experiences

  • Behavioral Science & Habit Formation – Designing engagement-driven, retention-focused products in wellness

  • Multimodal & Connected Experiences – Researching seamless interactions across voice, touch, and IoT

  • Strategic UX Research – Translating insights into high-impact product and business decisions

I hold a BA in Cognitive Science and an MS in User Research / Human Factors, and I'm passionate about shaping the future of tech-enabled communities, human-AI experiences, and behavior-driven design.

Recent Work

Building an In-Car Voice Assistant System

Led foundational, generative, and evaluative UX research for a voice AI car device and companion app, delivering a seamless, hands-free driving experience.

My research secured a 50,000-device Walmart deal, forged API partnerships with iHeartRadio, TuneIn, and AccuWeather, and optimized UX across the hardware, mobile app, and AI-human interactions for a successful launch.

Enhancing Engagement for a Mental Health App

Led foundational, generative, and evaluative UX research for Matter, a neuroscience-backed mental health app aimed at building long-term well-being habits.

My work drove a 2x improvement in onboarding conversion and a 210% increase in engagement by introducing habit-forming features, gamification, clear value propositions, and enhanced privacy.

Exploring User Expectations for AI Assistants

This research was conducted before OpenAI launched voice functionality for GPT, making it a first-mover exploration of user expectations and opportunities for LLM-driven voice interactions.

I conducted foundational and generative research to understand how users would engage with an AI assistant in a multimodal experience, and which characteristics would make the assistant most compelling.