The promise of AI-augmented research is that you never have to choose between speed and depth again. The reality is more nuanced: AI scales the parts of research that benefit from scale and introduces new risks in the parts that don’t.

What AI Does Well in Research

Transcription and tagging. Let the model transcribe interviews and apply your taxonomy of tags. This is high-volume, low-stakes work. A tagging error is recoverable. Review a sample (10–15%) to calibrate accuracy, then trust the rest.
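That calibration step can be mechanical. The sketch below, in plain Python, draws a random ~12% sample for manual re-tagging and computes an agreement rate; the function names and the 100-segment example are illustrative, not a prescribed tool.

```python
import random

def calibration_sample(items, fraction=0.12, seed=7):
    """Draw a random ~10-15% sample of tagged items for manual review."""
    k = max(1, round(len(items) * fraction))
    rng = random.Random(seed)  # fixed seed so the sample is reproducible
    return rng.sample(items, k)

def agreement_rate(ai_tags, human_tags):
    """Fraction of sampled items where the AI tag matches the reviewer's tag."""
    matches = sum(1 for a, h in zip(ai_tags, human_tags) if a == h)
    return matches / len(ai_tags)

# Example: 100 tagged transcript segments, 12 pulled for hand review.
segments = [f"segment-{i}" for i in range(100)]
sample = calibration_sample(segments)
```

If the agreement rate on the sample is high enough for your tolerance, trust the remaining tags; if not, tighten the taxonomy and re-run before scaling up.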

Pattern surfacing across large datasets. If you have 200 support tickets, an AI can cluster them by theme in minutes. Use this as a starting point for your own pattern analysis, not as a replacement for it. The clusters tell you where to look; your judgment tells you what they mean.
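To make "cluster them by theme" concrete, here is a minimal sketch using word-overlap (Jaccard) similarity and a greedy single pass, with no external libraries. A real pipeline would likely use embeddings or an LLM; the threshold and sample tickets here are assumptions for illustration.

```python
import re

def tokens(text):
    """Lowercase word set, ignoring very short words."""
    return {w for w in re.findall(r"[a-z']+", text.lower()) if len(w) > 2}

def jaccard(a, b):
    """Overlap between two word sets, 0.0 to 1.0."""
    return len(a & b) / len(a | b) if a | b else 0.0

def cluster_tickets(tickets, threshold=0.2):
    """Greedy one-pass clustering: join the first cluster whose seed
    ticket is similar enough, otherwise start a new cluster."""
    clusters = []  # list of (seed_token_set, member_tickets)
    for t in tickets:
        tok = tokens(t)
        for seed, members in clusters:
            if jaccard(tok, seed) >= threshold:
                members.append(t)
                break
        else:
            clusters.append((tok, [t]))
    return [members for _, members in clusters]

tickets = [
    "Export to CSV fails with large files",
    "CSV export fails on large file",
    "Password reset email never arrives",
    "Reset password email not arriving",
]
groups = cluster_tickets(tickets)  # two themes: export failures, reset emails
```

The point of even a crude version is the triage it buys you: two piles to read deliberately instead of 200 tickets in arrival order.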

Generating synthesis hypotheses. Feed a model 20 interview summaries and ask it to generate a list of potential insights. You will get 80% noise and 20% things that make you think. That 20% is worth the 5 minutes it takes.
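The mechanical part of that step is just prompt assembly. A sketch, assuming you already have a model client of your own (the call itself is deliberately left out, and the wording of the prompt is one possible framing, not a recommendation):

```python
def hypothesis_prompt(summaries, n=15):
    """Assemble one prompt asking a model for candidate insights.
    Send the returned string with whatever client you already use."""
    numbered = "\n\n".join(
        f"Interview {i + 1}:\n{s}" for i, s in enumerate(summaries)
    )
    return (
        f"Below are {len(summaries)} interview summaries.\n\n"
        f"{numbered}\n\n"
        f"Generate {n} candidate insights. For each, cite which "
        "interviews support it and rate your confidence low/medium/high."
    )

prompt = hypothesis_prompt(
    ["Struggles to find the onboarding checklist",
     "Exports data weekly; loves the CSV feature"]
)
```

Asking for supporting interviews and a confidence rating makes the 80% noise faster to discard, because unsupported hypotheses announce themselves.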

Where to Stay Hands-On

The moment of insight. The thing a user says that reframes how you think about the problem. Models summarise what is typical; they don't flag what should surprise you. Read every transcript yourself for the first 10 sessions on any new problem space.

Emotional tone. Frustration, resignation, delight: the affective content of an interview shapes how much weight an insight deserves. AI-generated summaries reliably flatten this, and what gets flattened is often what matters most.

Contradictions. Research value often lives in what doesn’t fit the pattern. AI optimises for consistency. Train yourself to notice what the model left out.

The Hybrid Workflow That Works

Use AI for volume processing. Handle the ten most important interviews yourself. Write the synthesis yourself, with AI-generated notes as a reference, not as a draft to edit. The act of writing the synthesis is where the insight actually forms.