AI lead scoring without burning your funnel
AI lead scoring sounds great until it quietly disqualifies your best buyers. A practical guide to scoring that helps reps without nuking your pipeline.
Lead scoring sells itself: feed an AI your past wins, let it rank inbound, and your reps only call the hot ones. In practice, half the AI lead-scoring rollouts we’ve seen at US SaaS startups disqualified the wrong leads for ninety days before anyone noticed. Here’s how to actually do it.
Why naive scoring breaks
Most models train on closed-won deals. Closed-won is a function of which leads your reps liked enough to work, not which leads were actually best. Train on that and you’ll get a model that loves the leads your team is already biased toward — usually big logos in obvious markets like SF and NYC — while quietly flagging everyone else as cold.
The four signals that actually predict
- Intent depth, not breadth. Three pages on your pricing page beats fifteen pages across your blog. Time on high-intent surfaces is the cleanest signal you have.
- Recency of activity. A lead who came back this week is hotter than one who downloaded six things in March. Decay aggressively.
- Title plus company shape. Founder at a 20-person Brooklyn D2C is a different lead than founder at a 200-person Austin SaaS, even if the title string matches.
- Referral source. A warm intro from an existing customer outperforms every other signal combined. Score it accordingly.
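The four signals above can be sketched as a single scoring function. This is a minimal illustration, not a tuned model: the weights, the 7-day half-life, and field names like `pricing_page_views` and `icp_fit` are all assumptions for the sake of the example.

```python
from datetime import datetime

# Assumed half-life: activity loses half its value every week ("decay aggressively").
HALF_LIFE_DAYS = 7

def recency_weight(last_active: datetime, now: datetime) -> float:
    """Exponential decay so this week's activity outweighs a download from March."""
    age_days = (now - last_active).total_seconds() / 86400
    return 0.5 ** (age_days / HALF_LIFE_DAYS)

def score_lead(lead: dict, now: datetime) -> float:
    # Intent depth, not breadth: pricing-page views count far more than blog views.
    intent = 3.0 * lead.get("pricing_page_views", 0) + 0.2 * lead.get("blog_page_views", 0)
    decayed = intent * recency_weight(lead["last_active"], now)
    # Title plus company shape, collapsed into a hypothetical 0..1 ICP-fit score.
    fit = lead.get("icp_fit", 0.0)
    # Referral source dominates: a warm customer intro outranks everything else.
    referral = 50.0 if lead.get("source") == "customer_referral" else 0.0
    return decayed + 10.0 * fit + referral
```

The point of the shape, not the numbers: intent is multiplied by recency (stale intent is worth almost nothing), while a customer referral is an additive bonus large enough to beat any organic score.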
What to feed the model
Don’t feed it everything. Feed it the ten features your best AE would actually use to triage on a Monday morning. If a feature wouldn’t change a human’s decision, it shouldn’t change the model’s. Feature creep is how you get a black box that nobody trusts.
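Concretely, a small feature set might look like the sketch below. Every feature name here is illustrative, not prescriptive; the test of each one is whether it would change an AE's Monday-morning triage decision.

```python
# A deliberately small, human-auditable feature set (hypothetical names).
FEATURES = [
    "pricing_page_views_7d",
    "docs_page_views_7d",
    "days_since_last_activity",
    "employee_count",
    "title_seniority",       # e.g. 0=IC, 1=manager, 2=VP, 3=founder/C-level
    "industry_fit",          # 0..1 match against your ICP definition
    "source_is_referral",
    "free_email_domain",
    "demo_requested",
    "country_in_territory",
]

def audit_features(lead: dict) -> dict:
    """The exact model inputs, in a form a rep could eyeball and argue with."""
    return {name: lead.get(name) for name in FEATURES}
```

Keeping the inputs this small is what makes the score explainable: when a rep disputes a ranking, you can show them ten values, not ten thousand.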
The shadow-mode rollout
For the first thirty days, run the model in shadow. Score every lead, but don’t route on the score. Compare what the model prioritised against what your reps actually closed. The gap is your bias map. We’ve had clients discover their model was 40% worse than the rep’s gut on inbound from non-coastal cities. Better to find that in shadow than in production.
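A shadow-mode comparison can be as simple as the sketch below, which checks two things: how much the model's top quartile overlaps with what reps chose to work, and what share of actual wins the model's top quartile would have captured. The field names are assumptions.

```python
def shadow_report(leads: list[dict]) -> dict:
    """Each lead: {"model_score": float, "rep_worked": bool, "closed": bool}.
    Scores were logged but never routed on -- this is the shadow period."""
    ranked = sorted(leads, key=lambda l: l["model_score"], reverse=True)
    cutoff = max(1, len(ranked) // 4)
    model_top = ranked[:cutoff]
    # How often the model's picks match the reps' actual behaviour.
    rep_overlap = sum(1 for l in model_top if l["rep_worked"]) / cutoff
    # What fraction of closed-won deals landed in the model's top quartile.
    closed_total = sum(1 for l in leads if l["closed"])
    closed_in_top = sum(1 for l in model_top if l["closed"])
    capture = closed_in_top / closed_total if closed_total else 0.0
    return {"rep_overlap": rep_overlap, "won_deal_capture": capture}
```

The gap between these two numbers is your bias map: high overlap with low capture means the model has learned the reps' habits, not the buyers' behaviour.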
Where AI helps versus where rules win
- Rules are better at: hard disqualifiers (free email domain, no company), routing (round-robin by territory), SLAs (respond in 5 minutes).
- AI is better at: ranking the middle 60% of your inbound where the rules don’t fire either way. That’s where reps waste the most time and where a model earns its keep.
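The split above implies an ordering: rules fire first, and the model only ranks what survives them. A minimal sketch of the hard-disqualifier layer, assuming a small set of free email domains:

```python
from typing import Optional

FREE_DOMAINS = {"gmail.com", "yahoo.com", "hotmail.com", "outlook.com"}

def hard_disqualify(lead: dict) -> Optional[str]:
    """Return the rule that fired, or None to pass the lead on to the model."""
    email = lead.get("email", "")
    domain = email.rsplit("@", 1)[-1].lower() if "@" in email else ""
    if domain in FREE_DOMAINS:
        return "free_email_domain"
    if not lead.get("company"):
        return "no_company"
    return None  # no rule fired: this lead falls to the model for ranking
```

Returning the rule name rather than a bare boolean matters in practice: it lets you report which disqualifiers fire most, and catch a rule that starts eating good leads.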
The metric that tells you it’s working
Not lift on closed-won. That takes too long and confounds too many variables. Track two numbers on your top-quartile leads: speed-to-first-touch and meeting-booked rate. If those move, the model is helping. If they don’t, your reps don’t trust the score and you have a behaviour problem, not a model problem.
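Both numbers are cheap to compute from routing logs. A sketch, assuming each lead record carries a score, minutes until a rep first touched it (None if untouched), and whether a meeting was booked:

```python
from statistics import median
from typing import Optional

def trust_metrics(leads: list[dict]) -> dict:
    """leads: {"score": float, "minutes_to_first_touch": Optional[int],
    "meeting_booked": bool}. Only the top quartile by score matters here."""
    ranked = sorted(leads, key=lambda l: l["score"], reverse=True)
    top = ranked[: max(1, len(ranked) // 4)]
    touched = [l["minutes_to_first_touch"] for l in top
               if l["minutes_to_first_touch"] is not None]
    return {
        "median_speed_to_first_touch_min": median(touched) if touched else None,
        "meeting_booked_rate": sum(l["meeting_booked"] for l in top) / len(top),
    }
```

If median speed-to-first-touch on top-quartile leads isn't dropping week over week, the score is being ignored, whatever the model's offline metrics say.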
How we help at The Nerdish Mic
We build lead-scoring systems that your reps will actually use — usually a thin AI layer over a sharp set of rules, wired into HubSpot or Salesforce, with a feedback loop the team can read. If your pipeline ranking feels like a coin flip, we can fix that in a few weeks.