Testing the wrong skill
Memorizing algorithms matters less when every engineer has AI. What matters is how they decompose problems, prompt effectively, and validate AI output.
AI-native interviews
Your engineers work with AI every day. Your interviews pretend AI doesn't exist. That gap is why you keep hiring people who can invert a binary tree but can't ship an agent. Use the same IDE your team builds in — and see how candidates actually think.
Can they break a complex task into clear prompts? Do they verify AI output or accept it blindly? Do they know when to switch models? You can't see any of this in a whiteboard interview.
Every interview uses a different setup. Candidates fight the tooling instead of showing their thinking. Agent packs give every candidate the same starting point.
Define the problem, starter code, constraints, and evaluation criteria in a versioned pack. Every candidate gets identical conditions. Update the pack, not a wiki page.
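Concretely, a pack could be a small typed config checked in alongside your code. The TypeScript sketch below is an illustration; the schema, field names, and sample task are assumptions, not a published format.

```ts
// Hypothetical shape of a versioned interview pack.
// Field names are illustrative, not a documented schema.
interface AgentPack {
  name: string;
  version: string;                                 // bump when the task or rubric changes
  prompt: string;                                  // the task statement the candidate sees
  starterFiles: Record<string, string>;            // path -> initial contents
  constraints: string[];                           // time box, allowed models, etc.
  rubric: { criterion: string; weight: number }[]; // evaluation criteria
}

const pack: AgentPack = {
  name: "refactor-retry-logic",
  version: "2.1.0",
  prompt: "Extract the inline retry logic in src/payments.ts into a tested, reusable module.",
  starterFiles: {
    "src/payments.ts": "// legacy service with inline retries ...",
    "test/payments.test.ts": "// failing test skeleton ...",
  },
  constraints: ["90 minutes", "any model", "no new dependencies"],
  rubric: [
    { criterion: "problem decomposition", weight: 0.4 },
    { criterion: "verification of AI output", weight: 0.4 },
    { criterion: "final code quality", weight: 0.2 },
  ],
};
```

Because the pack is data, updating it is a version bump and a diff, reviewable like any other change.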
Not a toy sandbox. Full editor, terminal, file tree, Git — with the AI engine of their choice. They work the way they'd work on your team.
See every prompt, tool call, edit, and model response. Understand their reasoning process, not just the final output. How did they break the problem down? Did they test? Did they iterate?
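One way to picture the trace: an append-only event log per session. The event kinds and fields below are assumptions for illustration, not the product's actual format.

```ts
// A session trace as an append-only event log (illustrative shape).
type TraceEvent =
  | { kind: "prompt"; at: number; text: string; model: string }
  | { kind: "model_response"; at: number; text: string; tokens: number }
  | { kind: "tool_call"; at: number; tool: string; args: string }
  | { kind: "file_edit"; at: number; path: string; diff: string }
  | { kind: "test_run"; at: number; passed: boolean };

// Replaying the log answers the questions above: how the task was
// decomposed, whether output was verified, where the candidate iterated.
function summarize(trace: TraceEvent[]): string {
  const prompts = trace.filter((e) => e.kind === "prompt").length;
  const tests = trace.filter((e) => e.kind === "test_run").length;
  return `${prompts} prompts, ${tests} test runs, ${trace.length} events total`;
}
```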
Analytics spotlight patterns across interview sessions. Compare prompt strategies, token efficiency, tool usage, and time-to-solution. Hiring decisions backed by traces, not gut feel.
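As a sketch of what "backed by traces" can mean in practice, assume each session reduces to a handful of metrics. The metric names mirror the prose above; the formulas and numbers are illustrative.

```ts
// Hypothetical per-session metrics derived from traces.
interface SessionMetrics {
  candidate: string;
  prompts: number;           // prompts sent to the model
  tokens: number;            // total tokens consumed
  toolCalls: number;
  minutesToSolution: number;
}

// Tokens spent per prompt: a rough proxy for prompt efficiency.
const tokensPerPrompt = (s: SessionMetrics) => s.tokens / s.prompts;

const sessions: SessionMetrics[] = [
  { candidate: "A", prompts: 14, tokens: 38_000, toolCalls: 22, minutesToSolution: 51 },
  { candidate: "B", prompts: 6, tokens: 21_000, toolCalls: 9, minutesToSolution: 43 },
];

// Rank by time-to-solution, breaking ties on token efficiency.
for (const s of [...sessions].sort(
  (a, b) => a.minutesToSolution - b.minutesToSolution || tokensPerPrompt(a) - tokensPerPrompt(b),
)) {
  console.log(`${s.candidate}: ${s.minutesToSolution} min, ${tokensPerPrompt(s).toFixed(0)} tokens/prompt`);
}
```

No single metric decides a hire; the point is that the comparison is inspectable, not anecdotal.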
This isn't a separate product. It's your agent IDE doing what it already does — running packs, tracing sessions, showing you what happened. The same environment your team builds in is the most honest interview environment you can offer. No extra subscription. No new vendor. Just a new pack.