Refolk
May 10, 2026 · 6 min read

# Freshworks Just Broke GitHub Screening. Here's What to Measure Instead.

Freshworks cut 500 jobs while admitting AI writes over half its code. Here's why GitHub screening stopped working and what senior recruiters should measure now.

screening engineers AI generated code · GitHub screening 2026 · Freshworks layoffs AI code · technical recruiter signals · sourcing engineers AI era

On May 6, 2026, Freshworks cut 500 people (about 11% of staff) on the same day it posted a Q1 beat: $228.6M in revenue, up 16% year over year, and the two largest contracts in company history. CEO Dennis Woodside told Reuters, "Over half of our code is written by AI." That's the first time a profitable, growing public software company has tied layoffs explicitly to AI authorship of its codebase. If you screen senior engineers on GitHub, your evaluation surface just changed underneath you.

## The disclosure that breaks the screen

Plenty of executives have hinted at AI's share of their codebase. Satya Nadella said 20-30% at LlamaCon in April 2025. Sundar Pichai put Google north of 25-30% on an earnings call. Microsoft CTO Kevin Scott projected 95% by 2030. Snap has disclosed at least 65% of new code is AI-generated. Meta has set an internal target of 75% AI-generated committed code by mid-2026.

Freshworks is different in two ways. First, the percentage crossed 50%. Second, the company is profitable and growing, and it told the public the layoff and the AI authorship are the same story. That removes the usual hedge ("we're cutting because revenue softened, AI is incidental") and puts a number on it: roughly 22% headcount reduction since late 2024 across two AI-rationale rounds, while revenue kept climbing.

For sourcing, the takeaway isn't the layoff math. It's the authorship math. If a profitable shop is over 50% AI-authored in production, the assumption that a candidate's public repos reflect their unaided engineering ability is no longer defensible.

50%+
Share of Freshworks' production code written by AI
Disclosed by CEO Dennis Woodside on the same day the company cut 500 jobs and beat Q1 estimates.

## What GitHub actually shows you in 2026

GitHub still anchors most senior sourcing pipelines. Over 90% of Fortune 100 companies use it as their build platform. HireEZ, SeekOut, and Findem all weight GitHub artifacts heavily in their scoring. A query for US senior, staff, or principal software engineers returns roughly 162,000 profiles in our index, and a sizable chunk of recruiters sort that pool by stars, languages, and commit cadence.
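
Those ranking signals persist partly because they're trivially cheap to compute. As a minimal sketch of what the filter amounts to, here is the kind of star-sorted query a pipeline might run against GitHub's public search API. The search qualifiers and `sort` parameter are real GitHub search syntax; the language and thresholds are illustrative, not anyone's actual pipeline:

```python
import requests

# The star-sorted filter shape many sourcing pipelines still run.
# Qualifiers (language:, stars:, pushed:) and sort=stars are standard
# GitHub search syntax; the specific thresholds are illustrative.
resp = requests.get(
    "https://api.github.com/search/repositories",
    params={
        "q": "language:python stars:>200 pushed:>2026-01-01",
        "sort": "stars",   # the exact signal whose meaning just changed
        "order": "desc",
        "per_page": 10,
    },
    headers={"Accept": "application/vnd.github+json"},
    timeout=10,
)
for repo in resp.json().get("items", []):
    print(repo["full_name"], repo["stargazers_count"])
```

Nothing in that response says who, or what, wrote the code.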

Here's the problem. Those signals were calibrated for a world where a clean, idiomatic, well-commented repo meant the candidate could write clean, idiomatic, well-commented code. In 2026, polish is at least as likely to indicate good prompt hygiene with Copilot, Cursor, or Claude Code. The candidate may still be excellent. They may also be a competent operator of a model that's doing 60% of the lifting. You cannot tell from the artifact.

This hits senior screening harder than junior. Juniors get live coding rounds and fundamentals questions in a controlled setting. Seniors are still screened heavily on past artifacts: open-source contributions, design docs, side projects, blog posts. That's exactly the surface most contaminated by AI authorship, because senior engineers were the earliest and most aggressive adopters of agentic coding tools.

## The detector trap

The first instinct is to reach for an AI-code detector. Codequiry, Overchat, and aicodedetector.org self-report 80-90% accuracy on fresh snippets. That number drops sharply once a human edits the output, which is what every competent senior engineer does. Using a detector as a pass/fail gate will produce false positives on the engineers who edit AI output heavily, which is to say the engineers you most want to hire.

iDiallo and others have made this case in detail: there is no reliable, language-agnostic way to fingerprint AI code in a repo after human revision. Treating "this looks AI-generated" as disqualifying punishes good editors and rewards people who deliberately roughen their commits. That's a worse signal than the one you started with.

> Polish used to be evidence of skill. In 2026 it is evidence of tooling.


## What candidates are actually doing

Codility cites a 2025 survey where 88% of students admitted using generative AI on technical tests, up from 53% the year before. 59% of hiring managers suspect candidates have used AI to misrepresent ability. 31% have caught a candidate who turned out to be a different person entirely. The candidate-side tooling has names now: InterviewMan, Cluely, AceRound, all marketing themselves as "undetectable" interview copilots.

The assessment vendors have already conceded the arms race on artifacts. HackerRank, Codility, CodeSignal, and Sherlock have pivoted from plagiarism detection to "AI collaboration" evaluation. Codility's Cody and HackerRank's AI Copilot let candidates use AI inside the assessment but record the full prompt transcript. The diagnostic artifact is no longer the code. It's the prompts and the diffs.

If your funnel still treats "did they use AI?" as the question, you're playing 2022's game. The right question is "how do they use AI, and is their judgment about when to trust it any good?"

## What to measure instead

Five things actually carry signal on a senior engineer in 2026. None of them are commit count.

### 1. Prompt and diff transcripts

This is the new high-signal artifact. When a candidate uses Cody or AI Copilot in an assessment, you get the prompts they wrote, the suggestions they accepted, the ones they rejected, and what they edited afterward. That sequence tells you whether they understand the problem, whether they recognize bad output, and whether they can shape a model toward something correct. A candidate who accepts everything verbatim is not the same hire as one who rejects three suggestions and writes a prompt that pins down the constraint the model kept missing.
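
Vendor transcript formats differ, but a first-pass read on one looks something like this sketch. The event schema, field names, and thresholds below are hypothetical, not Codility's or HackerRank's actual export format:

```python
from dataclasses import dataclass

# Hypothetical transcript event. Real vendor exports (Cody, AI Copilot)
# use their own schemas; this is just the shape of the signal.
@dataclass
class SuggestionEvent:
    prompt: str           # what the candidate asked the model for
    accepted: bool        # whether they took the suggestion
    chars_suggested: int  # size of the model's output
    chars_edited: int     # how much of it they rewrote afterward

def first_pass_read(events: list[SuggestionEvent]) -> str:
    if not events:
        return "no AI use recorded: score it as a classic live session"
    accepted = [e for e in events if e.accepted]
    accept_rate = len(accepted) / len(events)
    suggested = sum(e.chars_suggested for e in accepted)
    edit_depth = sum(e.chars_edited for e in accepted) / max(suggested, 1)
    # Illustrative thresholds: verbatim acceptance is the pattern to probe;
    # rejections plus deep edits are the judgment signal you want.
    if accept_rate > 0.9 and edit_depth < 0.05:
        return "accepts everything verbatim: probe understanding hard"
    if accept_rate < 0.9 and edit_depth > 0.2:
        return "rejects and reshapes output: strong judgment signal"
    return "mixed: read the individual prompts"
```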

### 2. Live design conversation about a system the candidate actually built

Architecture, data model, failure modes, what they'd change. AI is much worse at this than at code. A senior who built and operated something nontrivial can answer in detail; a senior who stitched together AI output cannot. Forty-five minutes of this beats two days of take-home review.

### 3. PR review behavior in their current company, when you can get it

This is hard to source from outside, but candidates will describe it if you ask the right way. "Walk me through the last PR you blocked and why" tells you more than their entire public commit history. People who review well in an AI-heavy codebase are doing the job that most senior engineering now is.

### 4. Operational war stories with specifics

Outage handling, on-call instincts, the migration that almost shipped broken. AI does not produce these. Senior engineers who lived through them have details that a model cannot fabricate on demand.

### 5. Tool fluency, named explicitly

What's in their setup? Cursor, Claude Code, Aider, Continue? Which models for which tasks? When do they reach for an agent versus inline completion? An engineer who has thoughtful answers here is doing their actual job. One who is vague is either pretending or hasn't kept up. Both matter.

## The sourcing arbitrage hiding in the layoff narrative

Here's the part most recruiters are missing. Freshworks, Coinbase, Atlassian, Meta, Amazon, and Snap have all publicly attributed cuts to AI productivity gains. Tech-sector layoffs hit roughly 80,000 to 93,000 jobs by early May 2026, with AI cited as the rationale for a meaningful share. The engineers being displaced from these companies are disproportionately the ones who built the AI-augmented workflows that made the cuts possible.

That is a high-quality pool, and GitHub-keyword sourcing will systematically miss it. Their best work for the last two years lived behind corporate firewalls: internal copilots, prompt libraries, evaluation harnesses, agent frameworks. Their public GitHub may look quiet or stale because their employer absorbed all their output. If you sort the 162,000 senior IC pool by recent public commit activity, you'll filter out exactly the people who just shipped the most interesting AI-native infrastructure in the industry.

162,000
US senior, staff, and principal software engineers in our index
Most are being sorted by GitHub artifacts whose authorship is now ambiguous.


This is the case for describing the person you want in plain English instead of stacking keyword filters. "Engineers who left Freshworks or Snap in the last 90 days, worked on internal developer tooling, and have a public talk or blog post about AI-assisted coding" is a sentence that returns the right people. It is not a query you can build cleanly in a Boolean search. That's exactly the gap Refolk closes: you describe the person, the system does the cross-source resolution across GitHub, LinkedIn, and the open web, and you get a ranked shortlist instead of a 1,000-result cap.
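
Written as logic instead of keywords, that sentence is a conjunction of cross-source conditions, something like this sketch. The records and field names are hypothetical; the point is that each clause lives in a different data source:

```python
from datetime import date, timedelta

# Hypothetical merged candidate records. Each field comes from a different
# source in practice (tenure from LinkedIn, talks and posts from the open
# web), which is exactly why one-source Boolean strings can't express this.
candidates = [
    {"name": "A. Engineer", "last_company": "Freshworks",
     "left_on": date(2026, 5, 6), "internal_dev_tooling": True,
     "public_ai_talk_or_post": True},
    {"name": "B. Engineer", "last_company": "Snap",
     "left_on": date(2025, 11, 1), "internal_dev_tooling": True,
     "public_ai_talk_or_post": True},
]

def matches(c, as_of=date(2026, 5, 10)):
    return (
        c["last_company"] in {"Freshworks", "Snap"}
        and (as_of - c["left_on"]) <= timedelta(days=90)  # "last 90 days"
        and c["internal_dev_tooling"]
        and c["public_ai_talk_or_post"]
    )

print([c["name"] for c in candidates if matches(c)])  # ['A. Engineer']
```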

## A practical screening rebuild

If you're a recruiter or engineering manager rewriting your senior loop this quarter, here's a workable shape.

Top of funnel: Stop using GitHub stars or commit cadence as a primary filter. Use them as a tiebreaker. Source on the description of the person and the team they came from. When you're sourcing engineers in the AI era, the "where they worked and what shipped" signal is more durable than artifact polish. This is another place Refolk earns its keep, because plain-English queries pull together the company-trajectory signal that LinkedIn and GitHub each only show half of.

Screen: Replace the take-home with a 60- to 90-minute live session that includes AI tools and records the transcript. Codility's Cody, HackerRank's AI Copilot, or your own setup with screen capture all work. Score on prompt quality, suggestion judgment, and final correctness, in that order.
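
One way to make "in that order" concrete is to weight the rubric explicitly. The weights and 1-5 scale in this sketch are assumptions, not any vendor's scoring model:

```python
# Illustrative weights encoding "in that order": prompt quality first,
# suggestion judgment second, final correctness last.
WEIGHTS = {"prompt_quality": 0.45, "suggestion_judgment": 0.35, "final_correctness": 0.20}

def session_score(ratings: dict) -> float:
    """ratings: interviewer's 1-5 rubric score per dimension."""
    assert set(ratings) == set(WEIGHTS), "score every dimension"
    return sum(WEIGHTS[k] * ratings[k] for k in WEIGHTS)

# A candidate who prompts well but ships a minor bug still outranks
# one who got lucky with a verbatim accept.
print(session_score({"prompt_quality": 5, "suggestion_judgment": 4, "final_correctness": 3}))  # 4.25
```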

Onsite: One round on a system they actually built. One round on operational reality. One round on tool fluency and judgment about when not to use AI. Drop the algorithm round or move it earlier as a quick filter; it stopped being predictive a while ago and AI made that worse.

Reference checks: Ask peers about review behavior and AI judgment specifically. "Did they catch bad AI suggestions in PRs?" is a more useful question than "are they a strong coder?"

The Freshworks disclosure didn't break engineering hiring. It broke one specific assumption in it: that public code artifacts are a clean proxy for individual ability. That assumption was already strained. Now it's named. The recruiters and managers who adjust their technical recruiter signals this quarter will hire faster and better than the ones still sorting by stars.

## FAQ

### Should I use AI-code detectors as part of screening?

Not as a gate. The accuracy claims (80-90%) are for fresh, unedited snippets, and they collapse once a human revises the output. The engineers you want most are the ones doing heavy revision, so a detector-driven filter actively selects against them. If you use detectors at all, treat the output as a conversation prompt ("walk me through this file"), not a pass/fail.

### Is the GitHub profile useless now?

No, but its weight should drop. Treat it as one signal among several, useful for confirming someone works on the kind of systems you care about and for surfacing public talks, blog posts, or design docs they've authored. Stop using commit graphs and star counts as quality proxies. They were always weak; AI authorship made them weaker.

### How do I evaluate an engineer who claims heavy AI tool use?

Ask for specifics. Which tools, which models, which workflows, where it fails. Have them drive a real task with their setup for 30 minutes while you watch. Strong candidates will describe failure modes ("Claude is great at X but terrible at Y, so I switch to Z") and have opinions about agent design. Weak candidates will be generic.

### What about engineers from Freshworks, Coinbase, or Snap on the market right now?

This is the highest-leverage pool of the year for AI-native engineering hires, and it will not last. Their best work is internal, so their public footprint understates them. Source on company, role, and timeline rather than GitHub activity, reach out within two weeks of the announcement, and lead with a problem they'd find interesting rather than a JD.
