Human-AI Systems Researcher & Strategist

I study and shape how AI participates in human thinking, emotion, and agency.

Defining how AI should support humans without overwhelming them, replacing them, or creating dependency.

Participation design

When AI should help, how much to help, and when to step back

Relational architecture

Trust, dependency, attachment, agency, identity

Trust & adoption diagnostics

Why users trust the demo, then abandon or over-rely

Amia Oberai

AI capability ≠ adoption. And adoption ≠ sustainable human-AI use.

Teams build powerful AI agents, ship them, and then wonder why adoption stalls, or why users become over-reliant or anxious without them. This happens everywhere AI becomes a thinking or emotional partner.

I work on two hard problems:

  • Agent usability and adoption: interaction architecture, structured outputs, decision gates
  • Human autonomy, trust, and relational safety: shaping how AI supports confidence, preserves agency, and prevents over-reliance, skills erosion, and unhealthy dependency patterns

Enterprise Copilots

Users who can't decide without checking AI. Junior employees who never learn the work. Skills erosion hidden behind productivity gains.

Therapeutic AI

Where's the line between supportive and surrogate? How do you help without creating dependency?

Scientific Agents

Researchers losing confidence in their own hypotheses. Assistance that quietly becomes a crutch.

Consumer Companions

Users distressed when AI is unavailable. Erosion of social and decision confidence.

The human side of AI systems

I'm a human-AI interaction researcher and strategist specializing in AI participation design: the behavioral architecture that determines how AI engages with human thinking, decision-making, and emotional processes over time.

My focus: when agents should talk vs. show, how to design confirmation gates for high-stakes decisions, how to prevent over-reliance, and how to recover trust after failures.

What makes my approach different: I've been both the researcher generating insights and the PM deciding what to build with them. I've led 0→1 conversational AI products from concept through launch, which means I understand the constraints and tradeoffs that determine whether research actually ships.

My background includes training in mental health and relational dynamics alongside HCI, which helps me recognize patterns most teams miss: dependency formation, reassurance loops, and self-trust erosion in any agentic system, not just therapeutic AI.

Methods: Mixed-methods AI UX research, diary studies, Wizard-of-Oz prototyping, human-in-the-loop evaluation, log analysis, behavioral analytics, SQL

Deliverables: Interaction pattern libraries, evaluation rubrics, agent UX heuristics, dependency risk assessments, decision-gate designs

MS, Human Factors, Bentley University · BA, Cognitive Science, Brown University
Meta FAIR · Native Voice AI · Matter Neuroscience · IBM

Core Principle

As human constraint increases (cognitive load, emotional intensity, decision pressure), AI participation complexity should decrease.

This principle guides how I shape and evaluate interaction gates, pacing, intervention levels, and handoffs.
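
As a rough illustration, here is how that inversion could look if encoded directly in an agent's turn logic. This is a minimal sketch under loud assumptions: the signal names, the 0–1 normalization, the quartile mapping, and the mode labels below are hypothetical placeholders, not the frameworks referenced on this page.

```python
from dataclasses import dataclass


# Hypothetical constraint signals, each normalized to 0.0 (low) .. 1.0 (high).
@dataclass
class HumanConstraint:
    cognitive_load: float
    emotional_intensity: float
    decision_pressure: float

    def level(self) -> float:
        # Take the strongest signal: any single constraint is enough
        # to warrant simpler AI participation.
        return max(self.cognitive_load, self.emotional_intensity, self.decision_pressure)


# Illustrative participation modes, ordered from most to least complex.
PARTICIPATION_MODES = [
    "co-create",     # open-ended suggestions, multi-turn elaboration
    "recommend",     # one ranked option with a short rationale
    "confirm-only",  # ask before acting; no unsolicited ideas
    "step-back",     # hold state, stay available, do not intervene
]


def select_participation(constraint: HumanConstraint) -> str:
    """Higher human constraint -> simpler AI participation (the core inversion)."""
    c = constraint.level()
    # Map constraint quartiles onto the ordered modes above.
    index = min(int(c * len(PARTICIPATION_MODES)), len(PARTICIPATION_MODES) - 1)
    return PARTICIPATION_MODES[index]


# Example: a user under heavy decision pressure gets a confirmation gate, not brainstorming.
print(select_participation(HumanConstraint(0.2, 0.3, 0.7)))  # -> "confirm-only"
```

In practice the thresholds and modes would be tuned per domain through the gates, pacing, and intervention levels described above; the sketch only shows the direction of the relationship.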

Reusable architecture for human-AI relationships

The Seven Governors of AI Participation

A framework for calibrating how AI participates in human work, thinking, and emotional support.

  • Pacing
  • Intervention calibration
  • Repair
  • Boundary setting
  • Strategic non-intervention
  • Relational stance modulation
  • Epistemic calibration

Human-AI Relational Health Taxonomy

The five dimensions that determine whether a human-AI relationship is healthy or harmful.

  • Trust
  • Dependency (emotional, cognitive, skill, self-efficacy)
  • Attachment
  • Power / Agency
  • Identity

Attuned-Not-Attached (ANA)

Boundary architecture for emotionally expressive AI. Warm, not intimate. Supportive, never surrogate.

  • Tone Zones
  • Emotional Firewall (5 layers)
  • Relational Topology
  • Persona Safety Matrix
Read the framework →

Full frameworks available upon request.

Selected Work

Meta FAIR 2025
Foundational Research · Human-AI Interaction · Evaluation Frameworks

Defining Emotional & Social Intelligence for AI Assistants

Led human-AI interaction research and framework development for emotional and social intelligence within Meta's FAIR lab. Integrated cognitive science, HCI, and internal model analysis to define capability requirements and evaluation frameworks for AI assistants, including intention understanding, trust calibration, user control, and adaptability.

→ Informed AI research and product direction across FAIR, GenAI, Wearables, and AR/VR teams.

Native Voice AI 2023
Wizard-of-Oz · Diary Study · Trust Calibration

Trust Calibration & Real-World Validation

Repeated AI errors caused rapid trust breakdown. I ran Wizard-of-Oz experiments comparing neutral vs. empathic error responses in realistic driving simulations, identifying over-apology as harmful and brief empathy as beneficial. Error-handling guidelines were adopted into core product logic.

To validate the full system for Walmart distribution, I then designed a 5-day diary study with 50 Walmart shoppers, tracking how trust and satisfaction evolved through repeated use in noisy driving conditions. Identified wake-word failures and assistant-switching confusion, and partnered with the ML team to retrain acoustic models.

→ Contributed to product readiness and user validation supporting device distribution through Walmart (online and in-store).

Read case study →

Native Voice AI 2023
Concept Testing · Strategic Research · Product Direction

Resolving the Personas vs. Single Assistant Debate

Leadership was debating whether to build one adaptive assistant or multiple specialized personas, a decision that would affect engineering architecture, brand, and roadmap. The debate was driven by intuition, not evidence. I ran Wizard-of-Oz studies and concept testing to understand real user mental models.

→ Discovered both user archetypes disliked managing multiple personas. Synthesized a hybrid solution: one assistant with contextual modes. Research directly shaped product positioning and engineering direction.

Read case study →

Matter Neuroscience 2024
Mixed Methods · Product Analytics · Behavior Change

Closing the Gap Between Belief and Behavior

Users believed in the mental health app's science but dropped off after the first week. I diagnosed a misalignment between scientific timescales and human motivation: feedback loops weren't matching what users needed to sustain behavior change. Co-designed daily goals with immediate feedback and A/B tested progress metaphors.

→ The ring metaphor (vs. bar) felt calming and motivating. Achieved a 210% increase in engagement and a strong DAU/WAU ratio.

Read case study →

Advisory & Consulting

I advise AI teams on participation design, trust calibration, and relational safety. If you're building agents, copilots, or collaborative AI and want to get the human side right before you ship, rather than discovering adoption problems after launch, let's talk.

Open to advisory engagements and embedded contract roles.

Interaction Architecture Review

Audit your agent's participation patterns, autonomy boundaries, and trust dynamics before they become adoption problems.

Dependency Risk Assessment

Identify where your agent might cause over-reliance, skills erosion, or unhealthy attachment patterns, and how to prevent them.

Evaluation Framework Design

Build rubrics that capture what matters: trust, adoption, collaboration quality, and user autonomy, not just task success.

Let's talk about human-AI relational experiences

I'm always interested in conversations about AI participation design, trust calibration, and how to build systems that genuinely work for humans.