
The most experienced person in the room isn’t always the most useful one.
There’s a particular kind of interview that goes very well on the surface. The candidate is polished. Their case studies are structured. They use the right language — governance, systems thinking, enablement, adoption. They’ve worked at impressive organizations, managed complex programs, and built things at scale. You leave the room thinking: this person has clearly done a lot.
And then you try to pin down what, exactly, they did. And the answers get soft.
“It depends on the context.” “I’d need to understand the org first.” “There’s no single right answer here.”
This is the specificity problem. And it shows up most reliably in experienced candidates — not junior ones.
Why Experience Doesn’t Automatically Produce Specificity
When someone is early in their career, they have a small number of things they’ve done. They can describe those things in granular detail because those details are what they have. Ask a junior designer to walk you through a component they built and they’ll tell you the exact problem, the constraints, the iteration cycle, what didn’t work, and what they shipped.
Ask someone with twenty years of experience the same question and you often get a category of experience instead of an instance of it. “I’ve built governance models across several organizations.” “I typically approach this by understanding the maturity level first.”
This isn’t dishonesty. It’s a real cognitive shift that happens as careers accumulate. Patterns replace memories. Frameworks replace specific decisions. And in many contexts, that abstraction is genuinely useful — it’s what allows experienced people to adapt quickly to new environments.
But in an interview, abstraction is not evidence. It’s a claim.
The Gap Between Vocabulary and Craft
One of the most useful mental models for evaluating candidates is distinguishing between vocabulary fluency and craft depth.
Vocabulary fluency is the ability to talk about a domain correctly. It means knowing the right words, the right frameworks, the right questions to ask. Almost anyone who has worked in a field for a decade has high vocabulary fluency. It’s a prerequisite, not a differentiator.
Craft depth is something different. It’s the ability to describe specific decisions, name specific trade-offs, and recall specific things that didn’t work. It’s the ability to say: “At one point we had three teams all trying to contribute to the same pattern library, and what we ended up doing was building a formal intake process where the requesting team had to document the use case and demonstrate why the existing component was insufficient. It slowed things down, but it reduced the noise significantly. Here’s how we knew it was working.”
That kind of answer is almost impossible to fake. It requires that you actually did the thing.
The interview technique that separates the two is deceptively simple: just keep asking for the next level of detail. “Can you give me a specific example of that?” “Who made that call?” “What did that actually look like in practice?” The candidate with real craft depth will keep answering. The candidate with vocabulary fluency will eventually start to loop back to abstractions.
The 90-Day Deliverable Question
One of the most revealing questions in any senior-hire interview is some version of: “If you joined next month and had 90 days to demonstrate the value of this work, what would you actually ship?”
The word “ship” is doing a lot of work there. It forces a move from process-speak to artifact-speak. From “I would audit and understand” to “I would produce X for teams Y and Z by the end of month two.”
Process answers — audit, align, understand, establish trust — are not wrong. They reflect a legitimate consulting instinct toward responsible discovery before action. The problem is that a 90-day plan that ends with “and then I’d suggest some quick wins” is not a plan. It’s a posture.
Candidates who have actually done the work at a specific organization will often answer this question with something concrete and slightly too narrow. That narrowness is a good sign. It means they’re drawing on a real memory. “In my last role, the first thing I could show was a standardized component annotation format that cut the back-and-forth with engineering by about half. Something like that.” That answer tells you what they value, what they can execute, and how they think about demonstrating impact. A vague audit plan tells you none of those things.
When “It Depends” Is the Answer
To be fair to experienced candidates: sometimes “it depends” really is the correct answer. Governance models do depend on organizational maturity. Contribution processes do depend on team size, tooling, and culture. Being overly prescriptive can be its own signal of inexperience — the candidate who walks in with a rigid framework regardless of context.
The distinction is whether “it depends” is followed by anything.
“It depends on the maturity of the organization and the relationship between design and engineering — at one org I came into, design and dev had almost no shared language, so we started by just mapping what we each meant by the same terms before we built anything. At another place, the system already existed but nobody owned it, so the first move was establishing a lightweight review process.” That’s a real answer. The dependency is defined, and the speaker can demonstrate how they navigated it in practice.
“It depends on a lot of factors” is not an answer. It’s a deferral.
What to Listen For
When evaluating experienced candidates for hands-on individual contributor roles, a few signals are worth calibrating around:
Recency matters. If the most specific example a candidate can give is from eight years ago, ask about it directly. Expertise is not static. How has their thinking evolved since then? What would they do differently now? The inability to discuss growth is itself a signal.
Specificity under pressure is the test. The first answer to a question is often rehearsed. The real information comes in the follow-up. “Can you walk me through what that actually looked like?” “Who else was involved in that decision?” “What happened when someone pushed back?” Candidates who have genuine depth will get more specific, not less.
Watch the flip. One of the more telling patterns in interviews with experienced candidates is the question flip: when asked “What would you do in the first 90 days?”, they respond with “Well, what would you want me to do?” It’s a consulting instinct, and it’s not always wrong. But in a permanent individual contributor role, you need someone who walks in with a hypothesis and adjusts, not someone who waits to be assigned a direction. The flip is useful information.
Numbers are easy; decisions are hard. Candidates often cite impressive outcome metrics: bounce rates reduced, adoption doubled, compliance percentages reached. These are real signals of impact, but they’re also easy to claim partial credit for. The more revealing question is: which decisions did you make that led to those outcomes, and why did you make them that way? That answer is much harder to manufacture.
The Underlying Issue
The specificity test isn’t really about catching candidates out. It’s about finding the people who have genuinely wrestled with a hard problem in a specific context — not just read about it or observed it, but actually made the call, shipped the thing, watched it succeed or fail, and learned from that experience.
That kind of knowledge is irreplaceable. It’s also, unfortunately, easy to imitate at a surface level. The work of the interview is to get past the surface.
The candidates who can’t get specific usually aren’t lying. They’ve often done real things. But somewhere along the way, the specifics got abstracted into principles, and the principles got polished into a presentation, and the presentation stopped being tested against memory. Asking for the next level of detail is how you find out whether the foundation is there.
Sometimes the most useful question in an interview is the simplest one: “Can you just tell me exactly what you did?”