Refolk
May 12, 2026 · 9 min read

"Top 1% With Lovable or Cursor": May's HN Thread Rewrote the JD

The May 2026 HN "Who is Hiring" thread made Lovable, Cursor, and Replit fluency the hard screen. Here's how to source for it without keywords.

sourcing AI engineers · vibe coding hiring · Lovable Cursor Replit talent · Hacker News who is hiring May 2026 · screening for AI engineers
"Top 1% With Lovable or Cursor": May's HN Thread Rewrote the JD

The May 2026 "Ask HN: Who is Hiring" thread went up on May 1 with 321 listings, and somewhere around the tenth post you can feel the JD changing under your feet. Tiger Tracks doesn't ask for Python or React. It asks for "top 1% with Lovable, Replit, or Cursor" and the ability to ship a complex tool, multiple data sources, APIs, server hosting, LLM integration, in under a day. Read down the thread and you see the same shape repeating: adclear.ai listing Cursor, Claude Code, and Greptile as the day-to-day stack. SentiLink declaring an "AI-first engineering culture." A Saigon ad saying no formal coding background required, just fluency with Claude Code or Cursor.

This is the first hiring thread where vibe-coding tool fluency is the hard screen and the language stack is the afterthought. If you are still sourcing AI engineers off a Python-plus-LangChain Boolean, you are looking at the wrong layer of the stack.

What changed in the JD, exactly

The Tiger Tracks ad is the cleanest specimen. Required: a public GitHub or portfolio with shipped projects the hiring manager can actually play with. The interview loop is a 30-minute intro, a 45-minute founder conversation, and a 60-minute live build in Lovable or Replit where they watch you ship. Offer within two weeks. Comp is hourly contract, bi-weekly, net-14. The work to be done is named: landing page conversion analyzers, creative analyzers for video ads, vertical media buying co-pilots wired into Meta, Google, and TikTok APIs. Real LLM tools that go into client production the same week they ship.

Notice what is missing. No years of experience. No degree. No "must have built distributed systems at scale." No specific cloud. The screen is: can you, in 60 minutes, scope a tool with three data sources, an LLM call, and a deploy target, and produce a URL the founder can click?

321 listings in the May 2026 HN "Who is Hiring" thread, and multiple posts now treat Lovable, Cursor, or Replit fluency as a hard requirement instead of a nice-to-have.

The market context underneath this shift is not subtle. Anysphere, the company behind Cursor, raised $900M in May 2025 at a $9B valuation, with Cursor at $500M ARR by June 2025. Replit's effort-based billing, launched in July 2025, maxes out at $4,000 per month per seat on the Pro tier. People are spending real money inside these tools every day. The JDs are catching up to where the work already moved.

Why your existing search query returns almost nothing

Here is the trap nobody warns you about. If you go to LinkedIn and Boolean-search ("Founding AI Engineer" OR "AI Engineer") AND ("Cursor" OR "Lovable" OR "Replit"), you will get a handful of results. Single digits in many US markets. The people who can actually pass the Tiger Tracks 60-minute build are not labeling themselves this way on their profiles yet. The skill is too new, the tools are too new, and the candidates who live in them are mostly shipping under their own name on GitHub, on Lovable showcase pages, in the v0 community gallery, and on Replit Bounties.

In our own index at Refolk, the slice of US profiles that self-list Cursor as a skill at a Founding or AI Engineer title is tiny, and the top employers in that slice are obscure (small health and therapy startups, a few solo-founder shops). The signal is real but it is not on the resume. It is on the deploy URL.

This is the part where keyword sourcing breaks and natural-language sourcing wins. Refolk lets you describe the person the way the JD actually describes them ("engineers who have shipped a public LLM tool in the last 90 days, ideally with a Lovable or v0 deploy URL on their GitHub") and returns a ranked shortlist that doesn't depend on whether the candidate remembered to add "Cursor" to their LinkedIn skills.

The skill is too new to be on the resume. It is on the deploy URL.

Where the candidates actually live

Stop sourcing from LinkedIn first. Source from the artifact, then enrich. The map, roughly:

  • GitHub, filtered for recently shipped projects with LLM dependencies in package.json or requirements.txt. Look for repos with a live demo link in the README and a last commit under 30 days old (a minimal API sketch follows this list).
  • Lovable showcase and templates gallery. The author of a popular template is, by definition, somebody who can scope a tool fast.
  • v0 community gallery for the React/UI-heavy end of the spectrum.
  • Replit Bounties for people already working as 1099 vibe-coders for cash, which (we'll get to this) is exactly the talent pool right now.
  • Cursor community forum for power users who post their workflows.
  • GoodVibeCode and VibeCodeCareers job boards as inverse sourcing: who is hiring and who is posting comp ranges. Those boards list salary bands of $90K junior to $400K+ senior for these roles.
  • Baseten's customer list (Cursor, Notion, OpenEvidence, Abridge, Clay, Gamma, Writer) and Featurebase's customer list (Lovable, Raycast, n8n) are themselves sourcing maps. Engineers who ship infra for the vibe-coding ecosystem are the highest-leverage hires for any team trying to use it.
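
If you want to automate the first pass on the GitHub bullet, the public GitHub Search API covers most of it. Here is a minimal sketch, assuming "openai" as the keyword that proxies for an LLM dependency and a hypothetical list of demo hosts (vercel.app, lovable.app, replit.app, repl.co); swap both for whatever your target stack actually looks like:

```python
import base64
import datetime as dt
import re

import requests

# Repos pushed in the last 30 days that mention the keyword. "openai" is an
# assumption here; any SDK name or framework works as the proxy signal.
CUTOFF = (dt.date.today() - dt.timedelta(days=30)).isoformat()
QUERY = f"openai in:readme,description pushed:>{CUTOFF}"

resp = requests.get(
    "https://api.github.com/search/repositories",
    params={"q": QUERY, "sort": "updated", "per_page": 20},
    headers={"Accept": "application/vnd.github+json"},
    timeout=10,
)
resp.raise_for_status()

# Hypothetical demo-host list; extend as the tool canon shifts (Bolt, v0, etc.).
DEMO_HOSTS = re.compile(
    r"https?://\S*(?:vercel\.app|lovable\.app|replit\.app|repl\.co)\S*"
)

for repo in resp.json()["items"]:
    # Pull the README and look for a live demo URL: the artifact, not the resume.
    readme = requests.get(
        f"https://api.github.com/repos/{repo['full_name']}/readme",
        headers={"Accept": "application/vnd.github+json"},
        timeout=10,
    )
    if readme.status_code != 200:
        continue  # no README, no artifact to judge
    text = base64.b64decode(readme.json()["content"]).decode("utf-8", "ignore")
    match = DEMO_HOSTS.search(text)
    if match:
        print(f"{repo['full_name']}  pushed {repo['pushed_at'][:10]}  demo: {match.group(0)}")
```

Unauthenticated search is rate-limited to roughly ten requests a minute, so add a personal access token in an Authorization header before running this at any volume. The shape is the point: query by recency, confirm the deploy URL, and only then look up the human behind it.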

What the 60-minute build is really screening for

The phrase "top 1% with Lovable" is, on its face, unmeasurable. There is no leaderboard. No certification. No GitHub badge that ranks you among Lovable users worldwide. So what are hiring managers actually testing?

Three things, in order:

  1. Scoping under a clock. Can you take "build me a landing page conversion analyzer" and decide, in the first three minutes, what data sources, what LLM call shape, what UI surface, and what deploy target you'll use? This is product judgment, compressed.
  2. Taste in stack choices. Lovable is, in the words of one technical write-up, "heavily opinionated and tightly coupled with specific backend-as-a-service providers like Supabase." Picking it, or picking against it, says something about how you think about lock-in versus speed.
  3. Velocity to a working URL. Not pseudocode. Not a Figma. A link the founder can click and break.

This is why the take-home is dying in this segment. A 48-hour take-home tests endurance and willingness to do unpaid work. A 60-minute live Lovable build tests the actual job. As one practitioner put it, the old question was "can you code?" and the new question is "can you build and ship?"

The bilingual problem nobody is naming

There is a real counter-argument worth taking seriously, and you'll lose candidates if you ignore it. Senior engineers in adjacent communities (iOS being the loudest) are openly hostile to the live-vibe-build interview. The complaint: companies test candidates on deep platform knowledge with AI tools banned, then ship production features using AI tools that generate code nobody on the team fully reviews. The screen and the job have diverged.

The Tony Trejo Medium piece from April 2026 ("Vibe Coding Is Making It Worse") is the readable version of this take. If you are hiring for a codebase that has to live for five years, a Lovable-only screen optimizes for demo velocity and selects against maintainability. That tradeoff might be the right one for Tiger Tracks, where the deliverables are client tools that ship the same week. It is probably the wrong one if you are hiring a fifth engineer at a Series B SaaS company.

The fix is to be honest about which job you are hiring for, and to write the JD accordingly. Vibe coding hiring as a category is split into two real jobs: "ship me a working demo this week" and "extend a codebase I have to maintain in 2029." Don't screen for one and pay for the other.

The talent pool is currently 1099, not W-2

This is the most under-priced fact in the May 2026 thread. Tiger Tracks is hiring hourly, net-14, with a "path to Head of AI" dangled at the end. Several adjacent posts in the same thread are pre-seed equity or explicit freelance arrangements. The people who win on a 60-minute Lovable build are largely operating as 1099 contractors right now, often with three or four overlapping engagements.

That has two implications for sourcing AI engineers in this segment:

  • Your "are you open to roles?" InMail is the wrong opener. Try "are you open to a 6-week paid pilot?" Conversion will be noticeably higher.
  • The candidate's portfolio is their last four client projects, not their last full-time job. Ask for the deploy URLs, not the resume.

For inbound triage, this is also where natural-language search earns its keep. Instead of trying to keep a Boolean string up to date as new tools (Bolt, v0, Lovable, Replit, Cursor, Claude Code, Greptile, and whatever ships next quarter) get added to the canon, you describe the behavior. We use Refolk internally on queries like "engineers who have shipped at least two LLM-integrated tools as solo projects in the last six months and who currently list themselves as available for contract work" and skip the keyword-maintenance treadmill entirely.

A revised screening rubric you can steal

If you are rewriting your AI engineer JD this week, here is what screening for AI engineers in 2026 actually looks like, mapped to what the May 2026 thread is asking for:

  1. Artifact, not resume. Require two deploy URLs in the application. No URLs, no screen.
  2. Scoping call before build call. A 30-minute conversation about how they would build the next tool. Cheap to run, high signal, catches the candidates who can prompt but can't scope.
  3. A 60-minute live build, tool of their choice. Don't mandate Lovable. Let them pick Lovable, Cursor, Replit, v0, or Bolt. The choice is data.
  4. A read-the-code session. This is the part most teams skip. Have them walk you through one of their existing deploys and explain a decision they regret. This is the maintainability screen that Lovable-only loops are missing.
  5. Reference the deploy, not the manager. Ask the candidate's last paying client whether the tool is still in production and whether they had to rebuild any of it.

That last one filters out the demo-only operators in about one phone call.

FAQ

Is "vibe coding" actually a real hiring category or a fad?

It is real enough that Anysphere is at a $9B valuation and the May 2026 HN thread codifies it as a hard requirement in multiple posts from real companies, not just one viral outlier. Whether the term survives is a separate question from whether the skill (scoping and shipping LLM-integrated tools fast) is being hired for. The skill is being hired for. Plan accordingly.

How do I source Lovable, Cursor, and Replit talent when nobody lists those tools on LinkedIn?

Source from the artifact layer first. GitHub recent activity, Lovable showcase pages, v0 community gallery, Replit Bounties, and the customer rosters of infra companies like Baseten and Featurebase will surface names that LinkedIn skill tags will not. Then enrich those names with contact and context, which is the workflow Refolk is built around.

Should I drop the take-home entirely?

For roles where the deliverable is "ship a client-facing LLM tool this week," yes. Replace it with a 60-minute live build plus a read-the-code session on something the candidate already shipped. For roles where the deliverable is "maintain a codebase for five years," keep some form of code review or system design. The mistake is using one screen for both jobs.

What's the realistic comp range for a "top 1% Cursor" engineer in 2026?

Public data points put vibe-coding roles between $90K junior and $400K+ senior at top companies, with a large 1099 contractor middle that bills hourly. Tiger Tracks itself is hourly net-14. If you are competing on full-time W-2 against a candidate currently running three contract engagements, lead with equity and ownership scope, not base.
