# Hire on Hooks, Not Skills: The New Senior Engineer Bar on HN
A May 2026 Hacker News job ad rewrote the senior engineer bar around Claude Code hooks and skills. Here's how to source for it.
A trak-pro job ad in this month's Hacker News "Who is hiring?" thread quietly moved the senior engineer bar. It doesn't ask for "experience with AI tools." It asks for a daily Claude Code or Codex user with their own skills, hooks, and commands, plus opinions on when to override the agent. If you're sourcing senior ICs in May 2026 and still keyword-matching "Cursor" or "Copilot" on LinkedIn, you are screening for the wrong layer of the stack.
## The job spec that changed the bar
The trak-pro listing (HN item 47975571) is for a Rails 8 / Hotwire role. The line that matters:
> Daily Claude Code or Codex user with a real workflow, your own skills/hooks/commands, opinions on when to override the agent, habit of reviewing AI output as critically as a junior PR.
Applicants are asked to send a paragraph on hooks and commands they've built. That requirement is doing more work than it appears to. It rules out anyone who has only used Claude Code through chat. It rules out anyone who treats the agent as autocomplete. What survives is engineers who have shaped Claude Code into a tool with deterministic guardrails around a probabilistic core.
trak-pro isn't alone in the May 2026 thread. MixRank wants engineers with "real (pre-Claude Code) experience" who can iterate fast in a post-AI world "while actually reading what the AI does." Another listing on the same thread asks for "an understanding of the internals of things like Claude Code, and a broader understanding of orchestrating agents to complete complex discrete tasks." Axis describes a team where "using agentic tooling isn't a side experiment, it's central to how the team operates." Several listings now require a written paragraph on AI workflow with the warning: "Applications without this information will not be considered."
The pattern is clear. The hiring bar shifted from tool familiarity to workflow depth, and it shifted in public.
## Why "Cursor" on a LinkedIn profile is now a negative signal
Listing "Cursor" or "Claude Code" as a skill on LinkedIn used to be a weak positive. In May 2026 it's closer to neutral, and arguably negative for senior IC roles. A bootcamp grad who installed Cursor last week can list the same skill. The information value collapses to zero.
The bar moved to artifacts: custom hooks, slash commands, skills files, and a coherent point of view on when to override the agent. None of that lives on a LinkedIn skill tag. It lives in .claude/ directories on GitHub, in dotfiles repos, and in the way a candidate talks about their workflow during a 20-minute call.
Search LinkedIn for that artifact vocabulary and you get forty-seven people. Globally. That is roughly the entire public LinkedIn surface. If your sourcing pipeline starts with LinkedIn Recruiter title or skill search, you are working from a 47-person funnel for a hiring requirement that is rapidly becoming standard at AI-native shops.
## What "real workflow" actually means in artifacts
Before you can source for this, you need to know what you're sourcing for. Three reference points worth bookmarking:
### 1. harperreed/dotfiles
Harper Reed's public dotfiles repo has 305 stars and 48 forks, and inside it is a .claude/commands/ directory with files like careful-review.md, security-review.md, plan-tdd.md, and find-missing-tests.md. Each one is a slash command that encodes a workflow: how to review code carefully, how to plan a feature with TDD, how to surface missing tests. Read three of these files and you know more about Harper's engineering taste than you'd learn from his last five job titles.
### 2. MaxGhenis/.claude
Max Ghenis (PolicyEngine) publishes his entire .claude configuration as a public repo: a CLAUDE.md, a settings.json, five hook scripts, a set of slash commands, and local plugins. The hooks are the part most recruiters miss. Hooks are user-defined shell commands that fire at specific points in Claude Code's lifecycle (PreToolUse, PostToolUse, before commit). They give deterministic control: enforce that tests run, block destructive bash, format on edit, deny commits without a ticket reference. From the docs:
> Hooks provide deterministic control over Claude Code's behavior, ensuring certain actions always happen rather than relying on the LLM to choose to run them.
CLAUDE.md is advisory. Hooks are mandatory. An engineer who has written enforcement hooks is thinking about production safety. An engineer with only a CLAUDE.md is still in chat mode.
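To make the hook mechanism concrete, here is a minimal sketch of an enforcement hook of the kind described above: a PreToolUse script that refuses to delegate destructive bash. The stdin payload shape (`tool_name`, `tool_input.command`) and the convention that exit code 2 blocks the tool call follow Anthropic's hooks documentation, but verify both against your Claude Code version; the denylist patterns are illustrative, not exhaustive.

```python
#!/usr/bin/env python3
"""Minimal PreToolUse hook sketch: block obviously destructive bash.

Claude Code pipes a JSON payload describing the pending tool call to the
hook on stdin; exiting with code 2 blocks the call and feeds stderr back
to the model.
"""
import json
import re
import sys

# Decisions we refuse to delegate, ever. Extend to taste.
DENYLIST = [
    r"\brm\s+-[a-z]*r[a-z]*f",    # rm -rf and friends
    r"\bgit\s+push\s+--force\b",  # force pushes
    r"\bdrop\s+table\b",          # raw SQL drops
]

def is_dangerous(command: str) -> bool:
    """Return True if the bash command matches any denylisted pattern."""
    return any(re.search(p, command, re.IGNORECASE) for p in DENYLIST)

def main() -> int:
    raw = sys.stdin.read()
    if not raw:
        return 0  # nothing to inspect; allow
    payload = json.loads(raw)
    if payload.get("tool_name") != "Bash":
        return 0  # only gate bash commands in this sketch
    command = payload.get("tool_input", {}).get("command", "")
    if is_dangerous(command):
        print(f"Blocked by hook: {command!r} matches the denylist.",
              file=sys.stderr)
        return 2  # exit code 2 = block the tool call
    return 0

# Installed as a real hook, the script would end with: sys.exit(main())
```

Wired into `settings.json` as a PreToolUse command, a script like this is exactly the "deterministic guardrail around a probabilistic core" the job specs are asking candidates to describe.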
### 3. PaulDuvall/claude-code
Paul Duvall's repo has 57 custom commands (13 active, 44 experimental) plus a layered hooks architecture that intercepts operations for security and audit purposes. This is what a power-user setup looks like at the high end. You don't need every candidate to ship 57 commands. You need them to recognize this exists and have opinions about why most of it is overkill.
If you want a single seed list to start from, hesreallyhim/awesome-claude-code curates skills, agents, hooks, status lines, and orchestrators. Its contributor graph is a sourcing list in disguise.
## The interview signal: "When do you override?"
Hooks are deterministic. The model is probabilistic. The senior signal is the engineer who can tell you, in concrete terms, which decisions they refuse to delegate and how they enforce that boundary in code.
Listen for things like: "I block bash commands matching rm -rf at the PreToolUse hook." "I switch from Sonnet to Opus when planning a migration." "I have a /careful-review command that forces the agent to surface assumptions before writing code." "I never let the agent run unattended past the test-write step."
Engineers who can't answer "when do you override the agent?" with specifics are still in passenger mode. Years of experience don't help here. The hiring philosophy is shifting accordingly. Boris Cherny, who leads Claude Code at Anthropic, has said his team now hires mostly generalists rather than specialists, "since many traditional programming skills are less relevant when AI handles implementation details. The model can fill in the details." That is the implicit screen being adopted across AI-native shops: you're not hiring for what someone knows, you're hiring for how they orchestrate.
> CLAUDE.md is advisory. Hooks are mandatory. Source for the engineers who know the difference.

## Sourcing surface: where these engineers actually live

If LinkedIn skill search returns 47 people globally, where do you go?

### GitHub dotfiles and `.claude/` repos

The single highest-signal query is "find me public repos with a populated `.claude/commands/` or `.claude/skills/` directory and non-trivial hook scripts." That isn't a search you can run cleanly in the GitHub UI. You need to combine code search ("path:.claude/commands"), repo search (dotfiles repos with recent commits), and a way to rank by depth, not just presence. This is exactly the kind of fuzzy, multi-source sourcing question that breaks Boolean search and where [Refolk](/) earns its keep: you describe the engineer in plain English ("senior backend engineer with a public `.claude` config that includes hook scripts and at least five slash commands, ideally Rails or Python") and get a ranked shortlist across GitHub, LinkedIn, and the open web.

### Contributor graphs of canonical repos

The contributors and forkers of `harperreed/dotfiles`, `hesreallyhim/awesome-claude-code`, and high-quality public `.claude` repos are a curated population. These are people who care enough about agentic workflow to study it in public. Pull the contributor list, deduplicate against your existing pipeline, and you have a warm sourcing seed of engineers who already self-identify as the bar trak-pro is hiring for.
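A rough approximation of the GitHub side of that search can be scripted against the code-search endpoint (`GET /search/code`). A sketch with assumptions: code search requires an authenticated token, GitHub has historically required at least one bare search term alongside path qualifiers (hence the `PreToolUse` keyword here), and ranking owners by match count is only a crude proxy for depth.

```python
"""Sketch: surface GitHub users with populated `.claude/` directories by
searching code for hook-related terms under that path. Requires a token;
treat the output as a seed list, not a ranked shortlist.
"""
import json
import urllib.parse
import urllib.request

def search_claude_configs(token: str,
                          query: str = "PreToolUse path:.claude") -> list[dict]:
    """Return raw code-search items for the given query (first page)."""
    url = "https://api.github.com/search/code?" + urllib.parse.urlencode(
        {"q": query, "per_page": 100}
    )
    req = urllib.request.Request(
        url,
        headers={
            "Accept": "application/vnd.github+json",
            "Authorization": f"Bearer {token}",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp).get("items", [])

def owners_by_depth(items: list[dict]) -> list[tuple[str, int]]:
    """Rank repo owners by how many matching files they have -- a rough
    proxy for workflow depth (many hooks/commands beats one stub file)."""
    counts: dict[str, int] = {}
    for item in items:
        owner = item.get("repository", {}).get("owner", {}).get("login")
        if owner:
            counts[owner] = counts.get(owner, 0) + 1
    return sorted(counts.items(), key=lambda kv: -kv[1])
```

Even this crude version gets you past the "presence vs. depth" problem a LinkedIn skill tag can't solve.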
> **Refolk prompt:** "Senior backend engineers with a public `.claude/` directory on GitHub that includes hook scripts and custom slash commands, active in the last 90 days, US or EU."
>
> You'd get a ranked list with the actual repo links, hook-script counts, and command files surfaced inline, so you can skim taste before you reach out.
### Conference talks, blog posts, and the HN comment graph
Engineers writing about Claude Code internals, agent harness design, or override heuristics are publicly self-selecting into this hiring bar. The HN comment threads under every "Who is hiring?" post since January 2026 contain more concrete workflow detail than 99% of resumes. Indexing those comments by author is a sourcing project most teams skip.
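That indexing project is smaller than it sounds: Hacker News comments are queryable through the public HN Algolia API (`hn.algolia.com/api/v1`). A sketch, with assumptions flagged: `THREAD_ID` is a placeholder (each month's "Who is hiring?" thread has its own item id), only one page of hits is fetched, and the keyword filter is a crude stand-in for real relevance ranking.

```python
"""Index "Who is hiring?" comments by author using the public HN Algolia
API. THREAD_ID is a placeholder -- substitute the real item id of the
thread you want to index.
"""
import json
import urllib.request

ALGOLIA = "https://hn.algolia.com/api/v1/search_by_date"
THREAD_ID = 0  # placeholder: each month's thread has its own item id

def fetch_thread_comments(thread_id: int) -> list[dict]:
    """Fetch one page of comments belonging to a given HN story."""
    url = f"{ALGOLIA}?tags=comment,story_{thread_id}&hitsPerPage=1000"
    with urllib.request.urlopen(url) as resp:
        return json.load(resp).get("hits", [])

def by_author(hits: list[dict], keyword: str = "claude") -> dict[str, list[str]]:
    """Group comment texts by author, keeping only comments mentioning the
    keyword -- a crude filter for agentic-workflow detail."""
    index: dict[str, list[str]] = {}
    for hit in hits:
        text = hit.get("comment_text") or ""
        if keyword.lower() in text.lower():
            index.setdefault(hit.get("author", "unknown"), []).append(text)
    return index
```

Run it across every hiring thread since January 2026 and you have the author-indexed corpus most teams never bother to build.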
## The numbers that justify the new bar
The reason job specs are tightening is that the production data is in, and it's lopsided.
Stripe deployed Claude Code across 1,370 engineers. One team completed a 10,000-line Scala-to-Java migration in four days, work that had been estimated at ten engineer-weeks. Ramp cut incident investigation time by 80%. Claude Code went from over $500M in run-rate revenue in September 2025 to over $2.5B by February 2026, with weekly active users doubled and business subscriptions quadrupled since the start of 2026. Daily installs in VS Code went from a 17.7M 30-day moving average at the start of 2026 to 29M and rising.
When a tool delivers those numbers, hiring around it stops being optional. The trak-pro spec isn't avant-garde. It's catching up.
## A practical screen you can run this week
Three steps.
1. Replace skill keywords with artifact requirements. In your sourcing notes, drop "uses Claude Code" and replace it with "has a public .claude/ directory with at least three slash commands or one non-trivial hook script." The first is a keyword. The second is a binary you can verify in 30 seconds.
2. Add an "AI workflow paragraph" to your application. Borrow the line from May 2026 HN listings: "A short explanation of how you currently use AI tools in your workflow. Applications without this information will not be considered." This single field will sort your inbound pipeline harder than any take-home.
3. Run a 20-minute "override interview." Ask one question: "Walk me through a recent task where you stopped the agent and did it yourself, and one where you delegated and were surprised by the result." The senior signal is specificity. Vague answers fail the screen.
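Step 1's "verify in 30 seconds" check can itself be scripted against GitHub's contents API (`GET /repos/{owner}/{repo}/contents/{path}`). A minimal sketch, assuming public repos; the thresholds mirror the bar above (three slash commands or one hook script), and the `.claude/hooks` path is a common convention, not a guarantee of where a given candidate keeps hook scripts.

```python
"""Check the step-1 artifact bar for one candidate repo: a populated
`.claude/commands/` directory, or at least one hook script. Uses GitHub's
contents API; unauthenticated calls are rate-limited.
"""
import json
import urllib.error
import urllib.request

def list_dir(repo: str, path: str) -> list[dict]:
    """Return the contents listing for `path` in `repo`, or [] if absent."""
    url = f"https://api.github.com/repos/{repo}/contents/{path}"
    try:
        with urllib.request.urlopen(url) as resp:
            return json.load(resp)
    except urllib.error.HTTPError:
        return []  # 404 (path missing) or rate-limited

def meets_artifact_bar(commands: list[dict], hooks: list[dict]) -> bool:
    """At least three slash commands, or at least one hook script file."""
    n_commands = sum(1 for e in commands if e.get("name", "").endswith(".md"))
    n_hooks = sum(1 for e in hooks if e.get("type") == "file")
    return n_commands >= 3 or n_hooks >= 1
```

Usage would be `meets_artifact_bar(list_dir(repo, ".claude/commands"), list_dir(repo, ".claude/hooks"))`: a true/false answer in one API round trip instead of a keyword guess.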
If your team is doing all three but stuck on step 1 because GitHub code search isn't built for this, that's the gap Refolk closes: ask in plain English for engineers whose public artifacts match the new bar, get a ranked shortlist back, and skip the LinkedIn skill-tag dead end entirely.
## FAQ
### Is requiring "Claude Code experience" legal and defensible in a job ad?
Tool-specific requirements have always been allowed (the same way "5 years of Postgres" is allowed) as long as they're tied to the actual job. The shift in May 2026 is that AI workflow is now considered core to senior IC work at a growing number of shops, not a nice-to-have. trak-pro and others are framing it that way deliberately. The defensibility question is the same as for any tool requirement: is it actually how the team works day to day?
### What if a strong candidate uses Codex or Cursor instead of Claude Code?
Most of the listings explicitly say "Claude Code or Codex," and the underlying signal (custom hooks, commands, override heuristics) translates across tools. Cursor has its own rules and project config surface. The screen isn't "did they pick the right vendor," it's "have they shaped the tool to enforce their engineering taste." A candidate with a deep Cursor .cursorrules and MCP setup is the same archetype as one with a deep .claude/ config.
### How do I avoid filtering out senior engineers who just don't post their dotfiles publicly?
Two-track it. Public artifacts are the cheapest, fastest signal, but they're not the only one. Pair the artifact search with an outbound message that asks for a workflow paragraph, the way trak-pro does. Plenty of senior engineers keep their config private and will happily describe it in writing. The combination catches both the public-portfolio and private-but-deep populations.
### Does this mean years of experience no longer matter?
It means the weighting shifted. Anthropic's own hiring philosophy under Boris Cherny is generalists over specialists, on the thesis that "the model can fill in the details." That doesn't make experience worthless. It does mean that a candidate with three years of experience and a deeply considered agentic workflow may now beat a candidate with eight years and no public artifacts, especially for greenfield AI-native teams. Source accordingly.