AI as fuzzy search

A modern interview stack


The idea started with Benjamín Labatut’s The MANIAC, a fictional oral history-style narrative about mathematician John von Neumann. What starts as a near-reality account of von Neumann’s work and genius descends into near-madness as one technological development after another snowballs into machines that can think, each step presented in the imagined quoted voice of a contemporary. One step after another, you cease to understand the world, and with Labatut, begin questioning what happens when machines outthink humans.

Found history, oral narratives, quotes melded into a patchwork, now this person telling their perspective, now this person building on it, now another contrasting it, until a fuller picture of the idea emerges than would be possible in a traditional narrative.

And so I set out to write one.

The idea was to interview a dozen people about their workflows, then turn those conversations into a series of articles about what it takes to grow a budding idea for a blog or newsletter into a successful publication. And, perhaps, the quote-heavy format could breathe more life into what could otherwise easily be dry, prescriptive documentation.

Simple enough at face value, far harder when you have eight hours of call recordings, pages of notes, and only a fuzzy recollection of what each person said. Scrubbing works when you know the desired quote is at 07:39 in the recording; search excels when you remember a specific keyword the interviewee used, and only then. But if you recall only that they said something about missing a deadline somewhere in the latter half of the call, and you can’t remember the exact phrasing, you’d better have a half hour free to find that needle in the proverbial haystack. Rinse, repeat for every other idea you need to resurface from the hours of recordings.

Or you could use AI to lighten the load. Maybe.

I’d be remiss to ignore the elephant in the room: AI-hallucinated quotes, from a workflow not all that dissimilar from the one I’m recommending below, are what recently tripped up an Ars Technica writer, leading them to publish a piece with quotes the AI made up but that they believed had come directly from their source material. The risk is real. Yet I believe it’s a manageable risk—that the benefit of the tool merits use with guardrails, just as a bandsaw isn’t worth dismissing for woodwork but is worth using with healthy caution and failsafes—and that you need a workflow marrying the best of AI and human-directed research.

Finding a job for the robots without abdicating your own

I’d written, six months before ChatGPT launched, that AI was here to save you from busywork. “Robots aren’t ready to write your blog posts,” I wrote, optimistic for team human.

That aged poorly.

Yet I’d still argue, at least today in early 2026, that it’s fairly easy to sniff out AI-written content. AI has overused my beloved em dash to the point that em dashes are seen as de facto proof of AI writing (I present to the jury my decade of em dash use prior to the great AI-fication as evidence that humans, too, can use the best dash, and rest my case). AI models are fond of “it’s not this, it’s that” comparisons, one-sentence paragraphs, and rhetorical questions followed by an answer. I now purposefully avoid all of those in my writing, along with other AI tropes. Beyond them, there’s still a feeling, a certain je ne sais quoi, if you will, to AI-written text. You know it when you read it. Or, at least, this cheerleader for team human hopes so. To ignore it, though, to fully discount AI as slop, risks myopia veering into Luddism, akin to obstinately rejecting spellcheck in 1997.

You shouldn’t have AI write for you, not when writing is thinking and half the job of writing is to process your ideas about a thing. But you should have it do the tasks it’s best at.

AI is a very good proofreader, adept at catching spelling mistakes (including inconsistent name spelling), miswritten words, tense changes, clichés, and repetitive phrasings, much better than a traditional spellcheck is at any of those. You should definitely have AI run through your draft writing and tell you what to change and why (don’t have it wholesale make the changes for you, unless you want a smoothed-out simulacrum of your original writing).

AI is great as a fuzzy search. When you want a quote that’s about something in general but you don’t remember the exact wording, Google’s likely to fail where AI is far more likely to succeed. You have to carefully double-check and ask for citations; quotes are where hallucinations are most likely to surface. Still, AI is reasonably good at finding needles in the internet’s haystack, as long as you accept the additional job of verification: never take a quote from AI at face value, and instead treat it as a possible clue in a wider research project, one you must verify before ever considering using it in your work.

Combine the two ideas—AI’s good at focused tasks around existing text, and good at more abstract extraction but more fallible when searching against the corpus of human knowledge—and you find another use case: fuzzy search inside your existing text. All you need is a large enough context window. Temper that with guardrails: use AI only as a first-level search, then search for the quote yourself and copy it directly from the original source material.

And so I switched to Kiro, Amazon’s take on an AI coding app (Cursor, Visual Studio Code, and Claude Code running in the terminal should, generally, work just as well, as could, in theory, chatting directly in ChatGPT or Claude, albeit with more limited context windows).

AI for fuzzy search

The basic setup is simple:

  • Save your source material—notes, call transcripts, quotes, and so on—in plain text files, ideally with one file per person or source.
  • Open the folder of text files in a coding app—Cursor, Kiro, Claude Code, or other similar apps.
  • Use the AI sidebar to search for quotes—but never actually copy the quote from the AI chat.
  • Open the original source document in the coding app, and CMD/Ctrl+F search for the quote that AI surfaced, then copy the original text in context from your source material.
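The verification step in that list is mechanical enough that you could even script it. Here is a minimal sketch in Python (a hypothetical helper, not part of any of these apps; the folder name and quote below are illustrative) that checks whether an AI-surfaced quote actually appears in your folder of transcripts:

```python
# Hypothetical guardrail script: confirm an AI-surfaced quote exists
# verbatim in the source transcripts before using it in a draft.
from pathlib import Path


def find_quote(folder: str, quote: str) -> list[tuple[str, int]]:
    """Return (filename, line number) pairs where the quote appears.

    Whitespace and case are normalized on both sides so transcript
    formatting quirks (double spaces, odd capitalization) don't hide
    a genuine match. Only single-line matches are detected.
    """
    normalized_quote = " ".join(quote.lower().split())
    hits = []
    for path in sorted(Path(folder).glob("*.txt")):
        lines = path.read_text(encoding="utf-8").splitlines()
        for lineno, line in enumerate(lines, start=1):
            if normalized_quote in " ".join(line.lower().split()):
                hits.append((path.name, lineno))
    return hits


# Usage sketch: an empty result is a red flag that the "quote" may be
# a hallucination rather than something your interviewee actually said.
# matches = find_quote("interviews", "we missed the deadline by a week")
```

An empty result doesn’t always mean a hallucination (the quote could span a line break), but it tells you exactly when to slow down and re-read the source by hand.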

I conducted each interview over Zoom, recording the conversation both with Zoom’s transcripts and with Granola for a quick summary and outline. I saved each interview individually in a plain text file titled with the interviewee’s name, all in a folder for this project.

Then came the code editor. I opened the folder in Kiro, then opened the chat and asked for quotes about specific topics. “Find a quote from PersonX about a time that they ScenarioY”-type prompts, or broader ones like “Find all of the quotes about ScenarioZ.” In my experience, AI in a coding app focused on local documents is far less likely to be inaccurate, and yet it’s still not perfect. Citations were tricky; I could ask for line references from better-formatted interviews, but that only narrowed the scope in longer conversations with run-on paragraphs.

And if anything, that’s good: You should always copy quotes from your raw source material, never directly from the AI chat. AI results require eternal vigilance because, as writers have already found, AI tools can still hallucinate quotes, no matter how fine-tuned they are. Using AI as a search tool, then actually copying the quotes from your original notes, is the only insurance against hallucinated quotes that no human ever said.

90% of the time or better, the AI returned exact quotes. AI tools inside coding environments are built to execute on the files you currently have open, and that process is generally the same no matter what type of text the files contain: less likely to delete critical code, and less likely to make up quotes that don’t exist in the source material. It only went off the rails when I asked for quotes that weren’t there—and even then, it tended to respond with less-relevant quotes rather than fully hallucinated ones. It was also less accurate when I stored the article draft and my notes files in the same folder; it works best when it only has access to the interviews.

So I would ask the AI chat for a quote from a person, then open their interview document, press Cmd+F, and search for a few words of the quote the AI surfaced. Formatting oddities often meant the full quote wouldn’t show up in a search, so I’d search for a seemingly unique phrase to find the original quote.

Sometimes I’d go with the quote the AI surfaced, copying it, cleaning up filler words and repetition from the transcribed audio, then using that in my final draft. Often, though, the quotes the AI found were clues in a wider discovery process. I’d take a quote the AI surfaced, search for it in the source material, and often find a wider story or a more human anecdote that wasn’t the precise quote I was looking for but that would add color to the story.
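When even a unique-phrase search fails, a sliding-window fuzzy match can point you at the closest passage in the transcript. This is a minimal sketch of that idea using Python’s standard-library difflib (the function name, window size, and example strings are all illustrative assumptions, not part of the workflow above):

```python
# Hypothetical fallback for when transcript formatting keeps an exact
# Cmd+F match from working: slide a window of words across the
# transcript and score each passage against the AI-surfaced quote.
import difflib


def closest_passage(transcript: str, quote: str,
                    window: int = 40) -> tuple[float, str]:
    """Return (similarity ratio, best-matching passage).

    The ratio runs from 0.0 (nothing in common) to 1.0 (identical);
    a low best score suggests the quote isn't really in this source.
    """
    words = transcript.split()
    best_ratio, best_passage = 0.0, ""
    for i in range(max(1, len(words) - window + 1)):
        passage = " ".join(words[i:i + window])
        ratio = difflib.SequenceMatcher(
            None, quote.lower(), passage.lower()).ratio()
        if ratio > best_ratio:
            best_ratio, best_passage = ratio, passage
    return best_ratio, best_passage
```

The passage it returns is a place to start reading in context, not text to quote directly; the copy you publish should still come straight from the source file.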

And so I worked through the material, drafting early outlines from post-call intuition, refining them with AI-surfaced quotes, digging deeper into the source material as the AI found spots worth highlighting, then discovering something I’d forgotten or otherwise missed and sending the AI on a quest to find similar threads in other interviews. Rinse, repeat, as the piece took shape, written by me, with AI as a gofer.

That left me with the work of thinking through the interviews and turning them into a narrative. That’s hard enough. And I could only do that by reading through the interviews and thinking through the answers.

A similar workflow could work for any research project. Whether you point the AI at your carefully curated notes or at original source material, it can help you find quotes that are similar to, but not exactly like, what you’re looking for. It can organize those, too—but here there be dragons, and I’d argue your best work is done when you think through the content and come to your own conclusions on meaning and order, relying on your writing intuition about what to feature when. Research. Pull your thesis together. Use AI to help resurface quotes. Search for those quotes in your source material, and pull them into your writing. Revise. Proofread. Double-check everything.

With AI to help do the busywork of finding needles in the haystack of research, you’ll have more time to think through the material and waste less time hunting for something you know was there but can’t manage to find again. AI has already saved you time; invest a bit of that savings in double- and triple-checking everything, and in using the quotes as a springboard to zoom out and work in more context.

And that’s a win—even if it does still require more work, more searching and re-checking, than just copying surfaced quotes directly from chat.

Header Image by Niklas Hamann via Unsplash

 