Refolk
May 13, 2026 · 9 min read

Codex Hit 3M WAU. Claude Code Owns 75% of Startups. Source Accordingly.

Codex jumped to 3M weekly users while Claude Code dominates 75% of tiny startups. Here is how to read coding agent mentions as a seniority and tier signal.

sourcing AI engineers · Claude Code vs Codex hiring · AI coding agent adoption by company · technical recruiter Boolean AI tools · senior engineer AI tool signal

Sam Altman confirmed on April 8 that Codex hit 3 million weekly active users, up from roughly 600K at the start of 2026. A week earlier, JetBrains' April 2 research drop showed Claude Code awareness at 57% globally. Add The Pragmatic Engineer's 906-engineer survey and the picture is clear: the AI coding tool a candidate names is no longer a generic "AI-native" tag. It is a stratification marker that tells you company size, seniority, and likely compensation expectations before you write your first message.

If your Boolean still treats "AI" as one bucket, you are missing the signal. A candidate who lists "Claude Code + MCP + subagents" is a very different hire from one who lists "Copilot Enterprise + JetBrains." Same keyword family. Different humans, different orgs, different offers.

The 75/56 split is procurement, not preference

The Pragmatic Engineer's February 2026 survey (median 11 to 15 years of experience, fielded Jan 27 to Feb 17) gave the industry its cleanest by-segment cut so far. At companies with 1 to 10 employees, 75% use Claude Code. At companies with 10,000+ employees, GitHub Copilot leads at 56%. Cursor sits at 42% in the tiniest startups. Across all respondents, Claude Code is the #1 most-used tool 8 months after launch, and "most loved" by 46% versus 19% for Cursor and 9% for Copilot.

Read this carefully. Gergely Orosz explicitly attributes enterprise Copilot dominance to existing GitHub and Microsoft contracts and compliance review, not engineer choice. That means Copilot on a Fortune-500 engineer's resume tells you almost nothing about their actual AI fluency. It tells you their procurement team approved one vendor. To find out what they actually reach for, you have to ask what they run on personal projects, or look at their GitHub activity outside work hours.

The reverse inference is sharper. Claude Code on a profile, especially with terminal or MCP language attached, is a very strong signal of a sub-50-person company or a founder-tier IC. Refolk's own index of US profiles mentioning "Claude Code" returns 247 people, heavily concentrated in the San Francisco Bay Area (8 of the top 25), and the dominant current titles are Founder and Co-Founder. The mention itself is a self-selecting filter.

75%
Claude Code adoption at companies with 1 to 10 employees
GitHub Copilot leads at 56% in 10,000+ employee enterprises. Pragmatic Engineer survey, Feb 2026.

What Codex's 3M jump actually means for sourcing

Codex head Thibault Sottiaux confirmed the tool went from 2M weekly actives "a little under a month ago" to 3M by April 8, a 50% jump in about four weeks. The longer arc is steeper: npm downloads grew 177x between April 2025 (82K) and March 2026 (14.53M). OpenAI committed to resetting usage limits at every additional 1M users up to 10M, and launched a $100 ChatGPT Pro tier with 5x more Codex usage than Plus.

The takeaway for sourcers is not that Codex is "winning." Claude Code's run-rate is estimated at $2.5B annualized by March, Codex's revenue crossed $2.5B in February with >100% growth since the start of the year, and SemiAnalysis estimates Claude Code accounts for around 4% of all public GitHub commits. Both are growing into adjacent populations.

The takeaway is that the population mentioning Codex on profiles in April and May 2026 is different from the population that mentioned it in December 2025. Early Codex mentions came mostly from OpenAI-adjacent engineers and ChatGPT power users. The 2M to 3M wave pulled in former Claude Code users who switched after GPT-5.5 shipped. That switching cohort is your highest-signal pool right now, because they have used both tools in anger and chosen.

The Datadog post is the canonical switching artifact

Jonathan Fulton, Staff Software Engineer at Datadog (previously Eppo, SVP Product and Engineering at Storyblocks, McKinsey before that) published a May 2026 Medium post titled "Codex vs Claude Code: Why I decided to switch to Codex." The trigger, in his telling, was a coworker watching GPT-5.5 one-shot a task that took Opus 4.7 many iterations. After nearly a year on Claude Code, he moved.

If you are sourcing senior ICs, that post is a template. Search for engineers writing comparison posts after April 2026 about switching in either direction between Claude Code, Codex, Cursor Composer 2, Google Antigravity, or Junie. A switching post is a credibility marker. It demonstrates the candidate evaluates tools rather than collecting them, and it almost always reveals their actual workflow (harnesses, MCP servers, subagent patterns) in the body.

The seniority filter is "agents," not "AI"

Across the Pragmatic Engineer dataset, 95% of respondents use AI weekly. That number is now useless as a screen. The sharper cut is agent use: 55% regularly use agents, and staff-plus engineers lead at 63.5%. Agent users are roughly 2x as likely to be positive about AI overall.

That gives you a much better Boolean. Stop searching for "AI" and "Copilot." Start searching for the agent vocabulary that staff-plus engineers actually use:

  • "Claude Code" AND ("skills" OR "subagents" OR "MCP" OR "Agent SDK")
  • "Codex" AND ("/goal" OR "Codex CLI" OR "harness")
  • "Cursor Composer 2" OR "Antigravity" OR "Junie"
  • "cc-switch" (the harness has 49K stars on GitHub and surfaces in bios of multi-tool operators)
  • "BEADS" OR "Metaswarm" OR "Archon" OR "Citadel" OR "awesome-harness-engineering"

These are the phrases that staff engineers use because they have actually wired the tools into a workflow. They are also rare enough that very few recruiters know to search for them. Pragmatic Engineer's data shows 70% of respondents juggle 2 to 4 tools, so the people writing about orchestration are signaling a real and uncommon skill: multi-agent ops. That is not a buzzword on a resume. It is the actual job in 2026.
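
Because the vocabulary shifts with every model release, it is worth generating these strings rather than hand-editing saved searches. Below is a minimal Python sketch: the tool-to-vocabulary pairs mirror the list above, while the X-ray site targets and the build_xray helper are illustrative assumptions, not a Refolk feature or any platform's API.

    # Minimal sketch: generate the Boolean strings above as Google
    # X-ray queries instead of hand-editing them each quarter.
    # AGENT_VOCAB mirrors the list above; the site targets and the
    # build_xray helper are illustrative assumptions.
    AGENT_VOCAB = {
        "Claude Code": ["skills", "subagents", "MCP", "Agent SDK"],
        "Codex": ["/goal", "Codex CLI", "harness"],
    }
    SITES = ["linkedin.com/in", "github.com"]  # assumed X-ray targets

    def build_xray(tool: str, vocab: list[str], site: str) -> str:
        """One site-restricted Boolean string per tool/site pair."""
        or_clause = " OR ".join(f'"{term}"' for term in vocab)
        return f'site:{site} "{tool}" ({or_clause})'

    for tool, vocab in AGENT_VOCAB.items():
        for site in SITES:
            print(build_xray(tool, vocab, site))

Refreshing the Boolean each quarter then means editing one dict, not rewriting five saved searches.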

The problem with that Boolean is that it falls apart fast. LinkedIn truncates, GitHub bios are short, blog posts live on Medium and Substack, and the most credible signal (a switching post) lives outside any one platform. This is exactly the friction we built Refolk to remove: you describe the engineer in plain English ("senior IC who has posted about switching from Claude Code to Codex after GPT-5.5, ideally at a sub-200-person infra company") and get a ranked shortlist with the source artifact attached.

How to read each tool as a tier signal

Treat the tool mention as a first-pass classifier. You will still need to verify, but the prior is strong.

Claude Code on the profile

Almost certainly a founder, early engineer, or staff IC at a sub-50-person company. Probably uses Anthropic's Agent SDK or custom MCP servers. Comp expectations skew toward equity-heavy startup packages, and the candidate will expect real autonomy. Will not respond well to a generic "we use AI tools" pitch. Will respond to a specific technical problem you cannot solve internally.

Codex on the profile (post-April 2026)

Mixed pool. The pre-April cohort is OpenAI-adjacent. The post-April cohort includes the switchers, who are your highest-signal subgroup. If they mention /goal, Codex CLI, or the harness pattern, treat them like the Claude Code cohort above. If they only mention "ChatGPT" without Codex specifically, downgrade to "uses AI casually" and verify on personal repos.

GitHub Copilot on the profile

Procurement signal. Often a tenured engineer at a 1,000+ employee org. Compensation expectations are anchored to large-cap base plus RSUs. AI fluency is unknowable from this alone. The Sonar State of Code 2026 survey also flags that Cursor, Perplexity, and Codex skew junior, while command-line AI tools skew SMB, so cross-check against role and tenure.

Cursor, Antigravity, Junie, or Cursor Composer 2

Cursor in particular skews toward newer-to-AI engineers and junior developers per Sonar. Antigravity hit 6% workplace adoption within about two months of its November 2025 launch (JetBrains data), so an Antigravity mention right now signals an early adopter inside Google's developer relations orbit or a tools-curious senior. Junie is JetBrains-native and almost always paired with IntelliJ shops.
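
To make the first-pass-classifier framing concrete, here is a rough Python sketch encoding the priors above. The segment labels and signal strengths are editorial shorthand for this section's claims, not survey-calibrated probabilities, and the TOOL_PRIORS and classify names are ours.

    # Rough first-pass classifier: map tool mentions in a profile to
    # the segment priors described above. Labels and strengths are
    # editorial shorthand, not calibrated probabilities.
    TOOL_PRIORS = {
        "claude code": ("sub-50 startup / founder-tier IC", "strong"),
        "codex":       ("mixed pool; post-April switchers are high-signal", "medium"),
        "copilot":     ("1,000+ employee org; procurement, not preference", "weak"),
        "cursor":      ("skews junior / newer-to-AI per Sonar", "weak"),
        "antigravity": ("early adopter or tools-curious senior", "medium"),
        "junie":       ("JetBrains-native, IntelliJ shop", "medium"),
    }

    def classify(profile_text: str) -> list[tuple[str, str, str]]:
        """Return (tool, inferred segment, prior strength) per mention."""
        text = profile_text.lower()
        return [(tool, segment, strength)
                for tool, (segment, strength) in TOOL_PRIORS.items()
                if tool in text]

    # A founder bio with Claude Code + MCP gets the strongest prior.
    print(classify("Founder. Shipping with Claude Code and custom MCP servers."))

The prior is a starting point, not a verdict: you still verify against role, tenure, and the workflow detail in their writing.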

A switching post is a credibility marker. Tool collection is not.

Stop screening for "AI-native." Start reading the harness.

The reason "AI-native" failed as a keyword is that it conflated four very different populations: enterprise engineers running whatever procurement approved, juniors who installed Cursor last month, senior ICs orchestrating multi-agent workflows, and founders shipping production code through Claude Code alone. Those people want different jobs. They cost different money. They reject different pitches.

JetBrains' April 2 data backs this up at scale. Across more than 10,000 professional developers, workplace adoption is now a three-way race: Copilot leads at 29%, with Cursor and Claude Code tied at 18% each. Claude Code awareness jumped from 31% (April to June 2025) to 57% by January 2026, with a 91% CSAT and +54 NPS among regular users. The middle of the market is fragmenting. The signal value of a specific tool name is going up, not down.

For founders and engineering leaders sourcing the staff-plus tier right now, the practical move is to build two pipelines in parallel. One for the Claude Code founder-tier pool concentrated in the Bay Area (Refolk's index pegs the SF concentration at 8 of the top 25 Claude Code mentions). One for the post-April Codex switcher pool, where the freshest senior-IC signal lives. Treat them as distinct funnels with distinct outreach.

For recruiters in larger orgs, the move is to stop treating "uses Copilot" as a credential. Ask candidates what they reach for at home. The answer to that question is the actual AI fluency screen.

One more thing. The tool landscape in 2026 changes on a roughly six-week cycle. Anthropic's pricing reshuffle briefly removed Claude Code from the $20 Pro tier, then reversed course after Amol Avasare confirmed only about 2% of new prosumer signups were affected; even so, it shifted some power users to Codex in a single weekend. Sourcing on tool keywords means you have to refresh the Boolean every quarter. That is another reason describing the candidate in plain English, the way Refolk lets you, beats locking yourself into a static keyword list that will be stale by the next model release.

FAQ

Is mentioning Claude Code on a resume always a senior signal?

No, but it is a strong company-size signal. Refolk's index of 247 US profiles mentioning Claude Code skews to founders and co-founders, and the Pragmatic Engineer survey puts adoption at 75% in 1 to 10 employee companies. Combined with role and tenure, a Claude Code mention is a good prior for "early-stage IC or founder," but you should still verify seniority through commit history, prior titles, and whether they describe agent or harness workflows rather than just naming the tool.

How should I rewrite my Boolean strings for AI coding agents in 2026?

Drop generic "AI" and "AI-native" terms. Replace them with specific tool names plus workflow vocabulary: "Claude Code" AND ("MCP" OR "subagents" OR "skills"), "Codex" AND ("/goal" OR "CLI" OR "harness"), and harness-engineering terms like cc-switch, BEADS, or Metaswarm. The agent vocabulary correlates with staff-plus seniority (63.5% agent adoption per Pragmatic Engineer) far better than the tool name alone.

What does a candidate switching from Claude Code to Codex actually tell me?

It tells you they evaluate tools instead of collecting them, and it usually surfaces their real workflow in the post. The May 2026 Datadog Medium post by Jonathan Fulton is the canonical example: a year on Claude Code, switch triggered by GPT-5.5 one-shotting a task. Engineers who publish switching posts after April 2026 are your highest-signal pool because they have used both tools in production and made an explicit choice.

Why is Copilot on a big-company resume a weak AI signal?

Because at 10,000+ employee enterprises, Copilot leads at 56% largely due to existing GitHub and Microsoft contracts and compliance approval, not engineer preference. The same engineer might use Claude Code or Codex on personal projects. To screen for actual fluency in that cohort, ask what they reach for outside work, look at their public GitHub activity, or check for talks, posts, and side projects that reveal the tool they choose when procurement is not in the room.
