You finally shipped something people want. A few users are converting. Revenue exists, technically. But every time someone asks, “Why is it priced this way?” you realize the answer is mostly vibes. You picked a number that felt reasonable, copied a competitor, or anchored on what you’d personally pay. That works for about five minutes. Then churn, stalled growth, or awkward sales calls force the real question: what will customers actually pay, and why?
To put this guide together, we reviewed founder essays, pricing teardown posts, and podcast interviews from operators who have run pricing at scale. That includes early pricing work described by Patrick Campbell during his time building ProfitWell, documented experiments shared by Des Traynor at Intercom, and public founder reflections from companies like Atlassian, Basecamp, and Stripe. We focused on what these teams actually tested in market, how they structured experiments, and what changed as a result.
In this article, we will walk through a practical, founder-friendly process for running pricing experiments that surface willingness to pay without torching trust or revenue.
Why Pricing Experiments Matter More Than You Think
At pre-seed and seed, pricing feels secondary to product. That is a mistake. Pricing is not just a monetization decision; it is a positioning decision. It determines who says yes, who churns, and how fast you can reinvest in growth.
Most early founders underprice. Patrick Campbell has repeatedly pointed out, across years of ProfitWell writing and talks, that early-stage SaaS companies routinely leave 20 to 50 percent of revenue on the table simply because they never test pricing. Not because customers would revolt, but because no one ever asked them to choose.
In the next 30 to 60 days, your goal is not to “find the perfect price.” It is to reduce uncertainty. You want evidence about price sensitivity, value drivers, and where resistance actually shows up. If you skip this, you will optimize features, marketing, and sales around a number that might be quietly holding the company back.
What a Pricing Experiment Actually Is
A pricing experiment is a controlled way to observe behavior when price changes, not opinions about price. This distinction matters.
As Des Traynor explained when writing about Intercom’s early monetization decisions, customers are very good at telling you what they dislike, and very bad at predicting what they will pay for. Intercom learned more from watching upgrade behavior than from any survey.
A real pricing experiment has three properties:
- It asks customers to make a real choice.
- It isolates one variable at a time.
- It measures downstream behavior, not just conversion.
If your “experiment” is a survey asking, “Would you pay $49?”, that is feedback, not evidence.
Step 1: Define the Decision You Are Trying to Make
Before touching numbers, write down the decision you need to unlock. Pricing experiments are only useful if they inform a concrete choice.
Examples:
- Should we raise the base price or introduce tiers?
- Is our bottleneck acquisition or monetization?
- Are we undercharging our power users?
Atlassian has shared publicly that its early pricing decisions were driven by one core question: could it stay self-serve while still growing revenue per customer? That clarity guided years of experimentation with user-based pricing.
For an early-stage founder, translate this into one decision per experiment. If you try to learn everything at once, you will learn nothing actionable.
Step 2: Choose the Right Experiment Type
Different questions require different experiment designs. Here are the four most reliable types for early-stage teams.
A. Willingness-to-Pay Interviews (With Teeth)
These are not generic customer interviews. They are structured conversations that force tradeoffs.
Patrick Campbell popularized a version of this by asking customers to rank features and then asking how they would react if their favorite were taken away. The key insight was not what people said, but how upset they got.
In practice:
- Ask about the last time the problem cost them time or money.
- Anchor on concrete outcomes, not features.
- Introduce price only after value is clear.
Your output is not a price; it is a range and a list of value drivers customers reference unprompted.
B. Price Packaging Tests
Instead of changing the price, change what is included.
Intercom has written about how early packaging experiments taught them which features felt core versus premium. Customers did not revolt when features moved between tiers, as long as the core job remained intact.
For early founders, this often looks like:
- One plan vs three plans.
- Usage limits vs feature gates.
- Per seat vs per account.
You are testing perception of fairness more than raw willingness to pay.
C. A/B Tests on New Traffic
This is the cleanest quantitative method, but it only works if you have enough volume.
Stripe has documented that they tested pricing copy and presentation extensively on their pricing page, even when the underlying fees were fixed. Small framing changes produced measurable differences in conversion.
If you run this:
- Only test on new users.
- Keep everything else identical.
- Measure activation and retention, not just signups.
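If you have the volume, the analysis does not need to be fancy. Below is a minimal sketch in Python of how you might compare two price variants shown to new visitors: a simple two-proportion z-test on signup rate, with activation reported alongside it. The variant names, traffic counts, and conversion numbers are illustrative placeholders, not benchmarks.

```python
# Minimal sketch of analyzing a pricing A/B test on new traffic.
# Assumes you can export, per variant, how many new visitors saw the
# pricing page, how many signed up, and how many later activated.
# All counts below are illustrative placeholders, not real data.
from math import sqrt
from statistics import NormalDist

variants = {
    "control_49": {"visitors": 4200, "signups": 168, "activated": 119},
    "variant_59": {"visitors": 4150, "signups": 141, "activated": 112},
}

def rate(numerator, denominator):
    return numerator / denominator if denominator else 0.0

def two_proportion_z(success_a, n_a, success_b, n_b):
    """Two-sided z-test for a difference between two conversion rates."""
    p_pool = (success_a + success_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    if se == 0:
        return 0.0, 1.0
    z = (rate(success_a, n_a) - rate(success_b, n_b)) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value

a, b = variants["control_49"], variants["variant_59"]
z, p = two_proportion_z(a["signups"], a["visitors"], b["signups"], b["visitors"])

for name, v in variants.items():
    print(f"{name}: signup {rate(v['signups'], v['visitors']):.1%}, "
          f"activation {rate(v['activated'], v['signups']):.1%}")
print(f"signup-rate difference: z = {z:.2f}, p = {p:.3f}")
```

Note that in this made-up example the higher price converts fewer visitors but activates a larger share of them, which is exactly why signups alone are a misleading success metric.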
D. Cohort-Based Price Increases
This is the least comfortable, and often the most revealing.
Basecamp has been transparent about raising prices for new customers first, then slowly rolling increases out to existing users. The lesson was that fear of backlash was often worse than reality.
For a young company, this might mean:
- Grandfathering existing users.
- Raising prices for a specific segment.
- Testing annual plans with a discount.
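If you run a cohort-based increase, the readout can be equally simple. Here is a minimal sketch that compares a grandfathered cohort against a cohort signed up at the new price; it assumes you can export per-cohort account counts, churn, and MRR from your billing tool, and the cohort names and numbers are invented for illustration.

```python
# Compare a grandfathered cohort against a new-price cohort.
# Counts are illustrative placeholders; pull real numbers from billing.

cohorts = {
    "grandfathered_29": {"accounts": 180, "churned_60d": 14, "mrr": 5220},
    "new_price_39":     {"accounts": 95,  "churned_60d": 9,  "mrr": 3705},
}

for name, c in cohorts.items():
    churn = c["churned_60d"] / c["accounts"]
    arpa = c["mrr"] / c["accounts"]
    print(f"{name}: 60-day churn {churn:.1%}, revenue per account ${arpa:.0f}")
```

The question is not whether churn ticked up at all, but whether the extra revenue per account more than pays for it.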
Step 3: Pick One Variable, Not Five
Most pricing experiments fail because founders change too much at once.
Only change one of:
- Price point
- Pricing metric (per seat, per usage)
- Packaging
- Discount structure
When Atlassian experimented with user-based pricing, it did not simultaneously overhaul features or onboarding. That isolation made results interpretable.
Write down what is fixed and what is changing. If you cannot explain the experiment in one sentence, it is too complex.
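One lightweight way to enforce this is to write the experiment down as a tiny spec before you launch anything. The sketch below is purely illustrative; the fields and wording are assumptions about what a useful one-pager contains, not a prescribed format.

```python
# Illustrative one-page experiment spec. The point is the discipline,
# not the format: one variable changes, everything else is pinned.
experiment = {
    "sentence": "For new signups, we test $39 vs $29 on the single plan, "
                "holding packaging, onboarding, and discounts fixed.",
    "changing": {"price_point": ["$29 (control)", "$39 (variant)"]},
    "fixed": ["packaging", "pricing metric (per seat)", "onboarding flow",
              "discount structure"],
}

assert len(experiment["changing"]) == 1, "change one variable at a time"
print(experiment["sentence"])
```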
Step 4: Decide What Success Looks Like Before You Run It
Define success metrics in advance. Otherwise, you will rationalize whatever happens.
Common metrics:
- Conversion rate
- Revenue per user
- Retention after 30 or 60 days
- Upgrade frequency
- Sales cycle length
Des Traynor has emphasized that Intercom often accepted lower conversion in exchange for higher expansion revenue, because the long term curve mattered more.
For early founders, pick one primary metric and one guardrail. For example, maximize revenue per user while ensuring activation does not drop more than 10 percent.
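Concretely, that pre-registration can be as small as a few lines you write before the experiment starts. The sketch below assumes revenue per user is the primary metric and a 10 percent activation drop is the guardrail, matching the example above; the control and variant numbers are made up.

```python
# Pre-registered success check: one primary metric (revenue per user)
# and one guardrail (activation must not drop more than 10 percent
# relative to control). All numbers are illustrative.

def evaluate(control, variant, max_activation_drop=0.10):
    rpu_lift = variant["revenue_per_user"] / control["revenue_per_user"] - 1
    activation_drop = 1 - variant["activation_rate"] / control["activation_rate"]
    guardrail_ok = activation_drop <= max_activation_drop
    return rpu_lift, activation_drop, guardrail_ok

control = {"revenue_per_user": 31.0, "activation_rate": 0.62}
variant = {"revenue_per_user": 36.5, "activation_rate": 0.58}

lift, drop, ok = evaluate(control, variant)
print(f"revenue per user lift: {lift:+.1%}")
print(f"activation drop: {drop:.1%} ({'within' if ok else 'breaks'} guardrail)")
```

Writing the check first makes it much harder to rationalize the result after the fact.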
Step 5: Run the Experiment Long Enough to Learn
Pricing behavior has lag. People need time to experience value.
A common mistake is stopping after a week because conversions dipped. That is noise.
As a rule of thumb:
- Self-serve SaaS: run for at least two full buying cycles.
- Sales-assisted: wait until deals close or stall.
Stripe has noted that many pricing insights only showed up after observing how customers expanded usage over time, not at signup.
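Before committing to a run length, it is also worth a back-of-the-envelope check that your traffic can support the test at all. The sketch below uses a rough two-proportion sample-size formula; the baseline rate, the lift you care about, and the daily traffic figure are all assumptions you should replace with your own.

```python
# Back-of-the-envelope check on how long a pricing A/B test needs to run.
# Rough two-proportion sample-size formula; baseline rate, expected lift,
# and daily traffic are illustrative assumptions, not benchmarks.
from math import ceil
from statistics import NormalDist

def samples_per_arm(p_control, p_variant, alpha=0.05, power=0.80):
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_beta = NormalDist().inv_cdf(power)
    variance = p_control * (1 - p_control) + p_variant * (1 - p_variant)
    return ceil((z_alpha + z_beta) ** 2 * variance / (p_control - p_variant) ** 2)

baseline = 0.04            # current signup rate on the pricing page
expected = 0.05            # smallest lift you would act on
daily_new_visitors = 120   # new traffic only, split across both arms

n = samples_per_arm(baseline, expected)
days = ceil(2 * n / daily_new_visitors)
print(f"~{n} new visitors per arm, roughly {days} days at current traffic")
```

If the math says months, that is a signal to lean on interviews, packaging tests, or cohort increases instead of a conversion A/B test.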
Step 6: Synthesize What You Learned Into a Principle
Do not just pick a number and move on. Extract a rule.
Examples:
- Customers pay more for saved time than advanced features.
- Teams tolerate higher prices if limits scale with usage.
- Discounts matter less than clarity.
These principles compound. They inform roadmap, marketing copy, and sales conversations.
Common Pricing Experiment Mistakes to Avoid
Founders repeat the same errors:
- Asking hypothetical questions instead of forcing choices.
- Testing pricing before value is clear.
- Overreacting to loud complaints instead of silent churn.
- Copying competitor prices without context.
- Treating pricing as a one time decision.
Patrick Campbell has said repeatedly that pricing is never “done.” The companies that win revisit it continuously, with discipline.
Do This Week
- Write down the single pricing decision blocking your next stage of growth.
- List three customer segments and rank them by price sensitivity.
- Schedule five willingness-to-pay interviews with customers who felt pain recently.
- Draft one packaging change that isolates perceived value.
- Decide your primary metric and one guardrail before running anything.
- Test only on new users or a clearly defined cohort.
- Run the experiment long enough to observe real usage.
- Document results in a one page memo with numbers and quotes.
- Extract one pricing principle you will reuse.
- Calendar a pricing review every quarter.
Final Thoughts
Pricing experiments feel risky because they touch money and identity at the same time. That discomfort is exactly why they work. The founders who build durable companies do not guess what customers will pay; they observe it. Start small, test one thing, and let behavior teach you. Clarity compounds faster than confidence ever will.





