GPT-Powered Voice Assistant Persona

Year
2023

Company
Native Voice AI

OVERVIEW

To define the future of multimodal AI assistants, I led research to determine whether users preferred a general-purpose AI or specialized persona-driven assistants. My work guided product strategy, assistant design, and roadmap prioritization.

Role: Sole UX Researcher

Time Frame: 2 Months

Methods: Focus Groups, Survey, Concept Testing

RESEARCH LIMITATIONS

While this research provided actionable insights, I acknowledge several limitations:

  • Sample Representation Bias: Participants skewed urban and tech-savvy, potentially underrepresenting less tech-experienced users.
    Mitigation: Broader survey validation (n=101) captured more diverse sentiment, and future research should expand outreach.

  • Time Constraints & Longitudinal Gaps: The two-month timeline limited observation of long-term assistant adoption.
    Mitigation: Recommended post-launch studies to track long-term retention and behavioral shifts.

  • Prototype Fidelity: Concept testing used scripted interactions rather than fully functional AI.
    Mitigation: Follow-up usability testing with interactive prototypes will provide richer behavioral insights.

  • Market Evolution: Research was conducted before OpenAI launched native voice functionality, which may have shifted expectations.
    Mitigation: Findings were framed in a technology-agnostic way to ensure continued relevance.

These constraints were carefully factored into research interpretation and product decisions.

FINAL IMPACT & STRATEGIC IMPLICATIONS

This research didn’t just inform AI design—it shaped company strategy.

  • From Generic AI to Specialized Assistants: Improved trust and usability with persona-driven models.

  • Data-Driven Prioritization: Ensured engineering focused on high-demand assistants.

  • Business-User Alignment: Balanced adoption potential with market feasibility.

  • Long-Term Impact: Established core AI principles—context-awareness, personalization, and specialization.

By grounding AI development in real user needs, I ensured assistants weren’t just technically possible—they were genuinely valuable.

CHALLENGE

Native Voice had developed voice assistants using traditional intent-based entity matching, which worked for structured commands but lacked true conversational intelligence.

With OpenAI's API release, we faced a pivotal decision:

  • Develop a single general-purpose assistant?

  • Or create specialized AI personas tailored to user needs?

Why Now?

This study was conducted before OpenAI launched native voice functionality for GPT, making it a first-mover exploration into multimodal AI experiences—without established UX heuristics.

This research aimed to uncover:

  1. How users engage with AI assistants in multimodal experiences.

  2. What personas, tone, and use cases would make AI compelling.

  3. Whether to build one AI model or multiple persona-driven assistants.

  4. How to prioritize development based on both user demand and business strategy.

A key tension emerged: A general AI could be more versatile but risked feeling generic and unfocused, while persona-driven assistants could build trust and usability but required validation.

Early Insight

Users were frustrated with existing AI assistants like Siri and Alexa:

“Alexa just sets timers and plays music. It doesn’t understand what I actually need.” – Focus Group Participant

This feedback shaped our core research questions and reinforced the need for deeper investigation.

RESEARCH APPROACH

This study followed a mixed-method approach, starting with qualitative focus groups, followed by a survey for validation, and concluding with concept testing.

  1. Focus Groups – Uncovered mental models, frustrations, and desired AI behaviors.

  2. Survey – Quantified feature demand, adoption likelihood, and assistant preferences (n=101).

  3. Concept Testing – Validated engagement, usability, and adoption intent through scripted AI interactions.

Focus Groups & Affinity Diagram

To explore frustrations with existing AI assistants and uncover core user needs, I conducted two 1-hour focus groups (n=8) with a mix of frequent and casual assistant users.

Methodology

  • Participants mapped pain points and expectations for AI assistants using Miro for digital whiteboarding.

  • We explored mental models around how users expect AI to behave, engage, and provide value.

  • Insights were synthesized into an Affinity Diagram, categorizing key pain points and needs.

Key Insights

  • Lack of Context Awareness – Users hated repeating themselves and expected AI to retain memory across interactions.

  • Proactive AI Over Reactive AI – Users wanted assistants that anticipated needs and suggested actions, rather than waiting for commands.

  • Specialization Over General AI – Trust was higher with niche assistants over a one-size-fits-all AI.

“If I’m commuting and something pops into my head, I want an assistant that remembers it—not one that makes me rephrase my thoughts.”

Top Insight

Users didn’t want AI to be more “human”—they wanted it to be smarter, more context-aware, and seamlessly integrated into their lives.

Impact on Next Steps

These findings directly shaped the survey, where we quantified user preferences and validated which assistant personas had the highest demand.

Survey

To quantify user demand, excitement levels, and usage intent, I conducted a survey with 101 participants to refine our AI assistant strategy.

Methodology

The survey measured:

  • Preference for persona-driven vs. general AI – Do users prefer one adaptable AI or multiple specialized assistants?

  • Usage Frequency & Excitement Levels – Participants rated how often they’d use each assistant and how excited they were about its potential.

  • Feature Prioritization – Users ranked key assistant capabilities by importance.

Key Findings

  • 74% of participants preferred persona-driven assistants over a general AI, reinforcing the need for specialized, context-aware assistants.

  • Research & Productivity assistants had the highest daily use potential, indicating strong practical value.

  • The Social assistant generated high excitement (4.4/5) but had niche adoption potential—users found it engaging but less essential for daily use.

  • The Health assistant ranked lowest overall (3.7/5), but its mental wellness features stood out as a promising opportunity.

Unexpected Insight

  • We assumed the Research Assistant would be a niche tool, but it had the highest engagement potential—users wanted an AI that remembered their research history and helped them synthesize information over time.

  • This data directly shaped the next phase: Concept Testing, where we validated real-world adoption intent and assistant interactions.

Developing AI Personas

Method

  • Collaborated with Product, Design, and Engineering to synthesize survey and focus group insights into four AI personas.

  • Defined each persona’s core function, personality, and primary use cases based on user needs.

Concept Testing & Prioritization

To validate engagement, perceived usefulness, and adoption intent, I conducted structured concept testing with 12 participants.

Methodology

  • Scenario-Based Testing – Participants engaged in scripted AI interactions reflecting real-world use cases (e.g., scheduling a meeting, planning an event).

  • Usability & Adoption Ratings – Rated likelihood to use & perceived usefulness (1-5 scale).

  • Qualitative Feedback – Gathered insights on response helpfulness, tone, and intuitiveness.

Key Findings & Insights

Research Assistant (4.6 usefulness, 4.3 likelihood to use)

  • Most indispensable for research-heavy tasks, particularly for users in academic or professional settings.

  • Biggest surprise: Users didn’t just want accurate answers—they wanted verifiable sources to ensure reliability.

  • Adoption challenge: Trust concerns—users wanted citations and source transparency to validate AI-generated insights.

"I don’t just want answers—I want proof."

Social Assistant (4.4 usefulness, 4.1 likelihood to use)

  • High engagement but situational use case—users loved music/event recommendations but wouldn’t use it daily.

  • Biggest surprise: Users expected a more interactive chat-style experience for social planning.

  • Adoption challenge: Lacked integrations with messaging platforms, limiting real-world functionality.

"I’d use this for music, but I need it to sync with my group chats."

Productivity Assistant (4.0 usefulness, 3.9 likelihood to use)

  • High potential, but redundancy concerns—users compared it to existing task-management tools like Siri and Google Assistant.

  • Biggest surprise: Memory-based task tracking significantly boosted perceived usefulness.

  • Adoption challenge: Users wanted proactive nudges and better integration with existing workflows.

"If it reminded me of unfinished tasks, it’d be a game-changer."

Health Assistant (3.7 usefulness, 3.5 likelihood to use)

  • Lower immediate adoption, but strong mental wellness appeal—habit tracking and stress check-ins resonated more than fitness tracking.

  • Biggest surprise: Users were more interested in emotional support and wellness nudges than workout suggestions.

  • Adoption challenge: Lacked personalization—users wanted tailored insights based on mood, energy levels, and stress triggers.

"If this could check in on my stress levels, I’d actually use it."

This concept testing informed how we prioritized the AI assistants, shaping the roadmap to align with user needs and market viability.
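As a minimal illustration of the quantitative side of that prioritization, the concept-testing scores can be ranked by a simple composite of usefulness and likelihood to use. This is a sketch only—the actual roadmap decision also weighed feasibility and competitive market trends, which are not modeled here:

```python
# Concept-testing scores from this study:
# (perceived usefulness, likelihood to use), each on a 1-5 scale.
scores = {
    "Research Assistant": (4.6, 4.3),
    "Social Assistant": (4.4, 4.1),
    "Productivity Assistant": (4.0, 3.9),
    "Health Assistant": (3.7, 3.5),
}

# Simple composite: mean of the two ratings. The real prioritization
# also factored in feasibility and market trends (not modeled here).
ranked = sorted(scores.items(), key=lambda kv: -(kv[1][0] + kv[1][1]) / 2)

for rank, (name, (useful, likely)) in enumerate(ranked, start=1):
    print(f"{rank}. {name}: composite {(useful + likely) / 2:.2f}")
```

Even this naive composite reproduces the phased ordering of the roadmap below, which is why the score pairs are cited alongside each phase.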

ROADMAP & NEXT STEPS

To translate research findings into a strategic product roadmap, I facilitated a cross-functional workshop with Product, Design, and Engineering. The final roadmap prioritizes assistants based on user demand, feasibility, and competitive market trends.

Phase 1: Research Assistant – Immediate Development Priority

  • Highest perceived usefulness and strong likelihood to use (4.6, 4.3).

  • Users saw it as indispensable for knowledge work, but trust concerns must be addressed.

  • Development focus: Multi-turn dialogue capabilities and source verification features to enhance credibility and user confidence.

Phase 2: Social Assistant – Engagement-Driven Expansion

  • High engagement and excitement (4.4, 4.1).

  • Users valued personalized recommendations, but integrations with social platforms were a key request.

  • Development focus: Enhancing personalization through past preference tracking and proactive recommendations.

Phase 3: Competitive & User Validation Before Expanding Further

  • Productivity Assistant (4.0, 3.9) – Market Monitoring Required

    • Users liked automation features but saw overlap with existing tools like Siri and Google Assistant.

    • Strategic pause: Monitor major competitors (Apple, Google, Amazon) to assess if they launch LLM-powered task assistants before investing in development.

    • Potential differentiation: Context-aware automation and memory-driven task tracking to stand out.

  • Health Assistant (3.7, 3.5) – Further Research Needed

    • Initial adoption intent was low for fitness tracking, but mental health support showed promise.

    • Follow-up research focus: Test whether integrating stress check-ins, habit tracking, and AI-driven wellness recommendations significantly improves adoption.

    • If validation is strong, this assistant could become a long-term priority for AI-driven well-being.

This roadmap ensures resources are allocated to the highest-impact assistants first.