Marketers keep killing what works because they stare at the wrong numbers. I’ve watched great channels get cut, not because they failed, but because they weren’t measured right. That mistake shrinks brands, stalls momentum, and creates false confidence in bad dashboards.
Here’s my take: influencer marketing works more than most reports admit. If the only metric is coupon code use or link clicks, you’re flying blind. You’re undercounting real impact and making bad calls.
The Metric Problem No One Wants to Admit
We’ve trained ourselves to worship what’s easy to track. But easy doesn’t mean accurate. As I’ve said before:
“People try to track, like, Instagram models promoting their stuff…through, like, a coupon code or a link in their bio…But the percentage of people that actually go use that code or click that link is…way less than the majority.”
That’s the trap. Most people don’t click a bio link while they’re scrolling. They see the post, then Google you later. They swipe, forget, and buy next week. They mention it to a friend who then searches your brand. None of that shows up in a coupon report.
When clicks and codes are your only north star, you’ll kill a channel that’s actually moving revenue. I’ve seen it happen at startups and at nine-figure brands.
What You Should Track Instead
Measurement needs to reflect how people actually buy. One post can spark searches, email sign-ups, and store visits over days or weeks. Your model has to catch that.
- Blended return: track total revenue against total marketing cost, not just last-click.
- Branded search: watch lifts in search volume and click-through after campaigns.
- Correlation windows: measure impact for 7–28 days post-promotion, not 24 hours.
- Attribution sanity checks: compare platform-reported results to actual sales trends.
- Survey and post-purchase data: ask buyers what drove their decision.
These methods won’t give perfect precision. But they will stop the precision theater that leads to bad decisions.
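To make "blended return" and "branded search lift" concrete, here's a minimal sketch in Python. The numbers are hypothetical placeholders, not benchmarks; swap in your own exports from your analytics and search tools.

```python
# Hypothetical figures for illustration only -- replace with your own data.
total_marketing_cost = 50_000   # all channels, full test period
total_revenue = 180_000         # all sales in the same period

# Blended return: total revenue against total cost, not last-click ROAS.
blended_roas = total_revenue / total_marketing_cost
print(f"Blended ROAS: {blended_roas:.2f}x")

# Branded-search lift: compare the 28 days after launch to the 28 days before.
searches_before = 12_400        # branded queries, 28 days pre-campaign
searches_after = 16_900         # branded queries, 28 days post-campaign
lift = (searches_after - searches_before) / searches_before
print(f"Branded search lift: {lift:.0%}")
```

The point isn't the arithmetic; it's that both numbers live in one view, so a flat coupon report can't single-handedly kill the channel.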
Yes, There Are Bad Influencer Bets—Here’s How to Spot Them
Skeptics argue that influencer spend is fluffy. Sometimes it is. That’s not an argument against the channel. It’s an argument against lazy setups.
Watch for:
- Mismatched audience: high reach, wrong buyer.
- Low-quality creative: no hook, no story, no reason to care.
- One-and-done posts: zero repetition means zero memory.
- No landing path: unclear next step or bad offer.
Add a simple filter: would this content make me stop scrolling? If not, fix the creative before you blame the channel.
Stop Cutting Winners Because a Spreadsheet Said So
Here’s the bigger issue. Teams default to the most traceable path and then declare victory. They pour money into bottom-of-funnel clicks while top-of-funnel dries up. A few months later, costs rise and new customers stall. They didn’t “optimize.” They starved the engine.
“If that is the only measurement you’re using…you’re gonna stop doing these marketing campaigns, even though in the broader picture, they’re actually helping your brand a lot.”
Great brands protect what creates demand, not just what closes it. That means holding two truths at once: performance matters, and not every driver shows up in a UTM tag.
How I Decide Whether To Keep an Influencer Program
Here’s a simple, repeatable check that won’t waste your time:
- Run a clear test with 3–5 creators for 30 days.
- Set a blended revenue target for the test budget.
- Track post-week and post-month lifts in branded search, direct traffic, and email sign-ups.
- Survey new buyers on first touch; require at least 15% response.
- Scale only the creators whose posts pass the stop-scroll test and show lift within 28 days.
This creates a fair read without pretending we can trace every step.
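That keep/kill read can be sketched as a simple filter. The creator results below are made up, and the 10% minimum lift is an assumed threshold, not a rule from the checklist; the two conditions (stop-scroll pass, lift within 28 days) mirror the steps above.

```python
# Hypothetical per-creator test results; thresholds are assumptions.
creators = [
    {"name": "creator_a", "stop_scroll": True,  "lift_within_28d": 0.22},
    {"name": "creator_b", "stop_scroll": False, "lift_within_28d": 0.31},
    {"name": "creator_c", "stop_scroll": True,  "lift_within_28d": 0.02},
]

MIN_LIFT = 0.10  # assumed minimum lift to justify scaling a creator

# Scale only creators who pass the stop-scroll test AND show lift within 28 days.
scale = [c["name"] for c in creators
         if c["stop_scroll"] and c["lift_within_28d"] >= MIN_LIFT]
print(scale)  # ['creator_a']
```

Note that creator_b gets cut despite the biggest lift: bad creative that happens to land once is not a repeatable bet.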
I’ve built and sold companies on this kind of thinking. When measurement matches human behavior, growth compounds. When it doesn’t, you spin in circles.
The Bottom Line
Don’t let narrow tracking kill broad impact. Influencer marketing isn’t magic. It’s a demand driver that deserves a grown-up scorecard. Use blended metrics, longer windows, and real customer feedback. Keep what moves revenue, even if it doesn’t light up a last-click report.
My challenge to you: audit the channels you paused in the last year. Look for brand search lifts, cohort improvements, and repeat rates around those periods. If you find signal, bring them back under a smarter test. Your future revenue will thank you.
Frequently Asked Questions
Q: How do I justify influencer spend to a data-driven team?
Pair creative tests with blended revenue targets and show lifts in branded search, direct traffic, and email sign-ups. Present channel impact in a 28-day window, not just last-click.
Q: What’s a reasonable time frame to see brand lift?
You’ll often see signal within 7–14 days and clearer gains by 28 days. Shorter windows tend to miss delayed purchases and word-of-mouth effects.
Q: Should I still use coupon codes and links?
Yes, but treat them as partial indicators. Combine them with surveys, search trends, and total sales movement for a fair assessment.
Q: How many creators do I need to test?
Start with 3–5 creators to reduce variance. Scale with those who drive measurable lift and strong creative resonance.
Q: What’s the simplest success metric to align the team?
Blended return on ad spend across the test period. If total revenue grows faster than total costs, and brand signals climb, you're on the right track.