Careful pricing research enables companies to determine optimal price points that allow new innovations to fully deliver value to customers. Precisely quantifying customer willingness-to-pay and simulating potential product offerings empowers organizations to make products that customers want while earning healthy margins to fund growth. Techniques like conjoint analysis help balance affordability to the consumer and profitability for the business.
In this article, I'll discuss three popular research techniques organizations can use to conduct pricing research:

Van Westendorp Price Sensitivity Meter
Gabor-Granger (Price Laddering)
Conjoint analysis
Each technique has its pros and cons, but we can all agree that using data to inform decision making will help brands build better products and services.
The Van Westendorp technique, also known as the Price Sensitivity Meter, is an age-old approach to finding an optimal price point. It's simple and involves four questions that ask respondents the following:
At what price is this product too expensive?
At what price is this product getting expensive but still worth considering?
At what price is this product a great value?
At what price is this product too cheap?
Plotting responses to these questions shows an acceptable range of prices and an optimal price point.
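As a rough sketch of those mechanics, the Python below locates the point where the descending "too cheap" curve crosses the ascending "too expensive" curve. The function names and the five-respondent dataset are invented for illustration:

```python
def cumulative_share(answers, price, descending=False):
    """Share of respondents whose stated threshold puts `price` in this zone."""
    if descending:  # "too cheap": respondents whose floor is at or above price
        return sum(a >= price for a in answers) / len(answers)
    return sum(a <= price for a in answers) / len(answers)  # "too expensive"

def optimal_price_point(too_cheap, too_expensive, candidates):
    """Price where the descending 'too cheap' curve crosses the ascending
    'too expensive' curve (i.e. where the gap between them is smallest)."""
    return min(candidates, key=lambda p: abs(
        cumulative_share(too_cheap, p, descending=True)
        - cumulative_share(too_expensive, p)))

# Made-up answers from five respondents to two of the four questions.
too_cheap = [20, 25, 30, 22, 28]
too_expensive = [30, 40, 35, 45, 25]
opp = optimal_price_point(too_cheap, too_expensive, range(20, 46))
print(f"Optimal price point: ${opp}")
```

In a full analysis you would plot all four cumulative curves; the crossing of "too cheap" and "too expensive" is the conventional optimal price point.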
While this method is simple for survey participants, there are flaws a researcher must be aware of. The first is that a respondent evaluates only one product, without competitive context. Because of that, we cannot recommend which features to add to command a higher price, or which features to swap out to increase profitability.
Another limitation is that it does not directly ask whether respondents would actually purchase the product at specific price points. The good news is that there is an extension called the Newton-Miller-Smith (NMS) model that combines Van Westendorp data with additional purchase-likelihood questions. This extension creates a purchase probability curve, or price sensitivity curve, across price ranges.
For example, a respondent may indicate up to $50 is entirely acceptable yet they only have a 30% likelihood of actually buying at that price. The intent likelihood clarifies the curve and expected conversion at each price level. This enhanced granularity aids significantly in zeroing in on revenue-maximizing price points - a much more powerful output than just Van Westendorp data alone.
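A simplified sketch of that calculation follows, assuming each respondent supplies an acceptable price range plus stated purchase likelihoods at its endpoints. The panel data and the linear-interpolation rule are illustrative assumptions, not the published NMS procedure:

```python
def trial_probability(price, cheap, expensive, p_cheap, p_expensive):
    """Interpolated purchase likelihood; zero outside the acceptable range
    (a simplifying assumption for illustration)."""
    if price < cheap or price > expensive:
        return 0.0
    if expensive == cheap:
        return p_cheap
    w = (price - cheap) / (expensive - cheap)
    return p_cheap + w * (p_expensive - p_cheap)

def revenue_curve(respondents, candidates):
    """Expected revenue index (price x mean trial probability) at each price."""
    return {
        p: p * sum(trial_probability(p, *r) for r in respondents) / len(respondents)
        for p in candidates
    }

# Made-up (cheap, expensive, likelihood-at-cheap, likelihood-at-expensive) tuples.
panel = [(20, 50, 0.8, 0.2), (30, 60, 0.7, 0.3), (25, 45, 0.9, 0.1)]
curve = revenue_curve(panel, range(20, 61))
best = max(curve, key=curve.get)
print(f"Revenue-maximizing price: ${best}")
```

The curve makes the tradeoff explicit: a price can sit well inside everyone's acceptable range yet lose expected revenue because stated purchase likelihood falls off faster than price rises.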
In summary, while Van Westendorp can be helpful in directionally offering price ranges and expectations for products that are new to the world, there is a risk that the results fall completely outside your expectations. If you do choose to use Van Westendorp, we strongly recommend adding the Newton-Miller-Smith extension to layer on the pivotal purchase-intent probabilities that help hone pricing decisions for optimal lift.
The Gabor-Granger technique, also known as Price Laddering, starts by presenting respondents with an initial price point for a product or service. If a respondent agrees to pay it, a higher price is presented; if they decline, a lower price is tested. This continues until willingness-to-pay thresholds are identified.
For example, when offered freelancing services priced at $100/hour, the researcher would ask about $150 if accepted or $75 if rejected. Eventually, a peak willingness price emerges, say $150.
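That laddering loop can be sketched as a binary search over a fixed price ladder, then aggregated into a demand and revenue curve. The ladder, the respondents, and their willingness-to-pay ceilings below are all made up for illustration:

```python
def price_ladder(prices, would_buy):
    """Step up the sorted ladder after each accept, down after each decline,
    returning the highest accepted price (or None if all are declined)."""
    lo, hi = 0, len(prices) - 1
    best = None
    while lo <= hi:
        mid = (lo + hi) // 2
        if would_buy(prices[mid]):
            best = prices[mid]
            lo = mid + 1
        else:
            hi = mid - 1
    return best

ladder = [75, 100, 125, 150, 175, 200]
# Simulated respondents with hidden willingness-to-pay ceilings.
wtp_ceilings = [120, 160, 90, 150, 200]
accepted = [price_ladder(ladder, lambda p, w=w: p <= w) for w in wtp_ceilings]

# Aggregate into a demand curve (share willing to buy at each rung),
# then pick the revenue-maximizing price on the ladder.
revenue = {p: p * sum(a >= p for a in accepted) / len(accepted) for p in ladder}
best = max(revenue, key=revenue.get)
print(f"Highest accepted prices: {accepted}; revenue-max price: ${best}")
```

Note how quickly the ladder converges: with six rungs, each respondent answers at most three price questions.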
This fast and straightforward method determines the price sensitivity curve for the products and services tested. But, while easy to implement, Gabor-Granger has limitations:
It only evaluates pricing for one product or service at a time, preventing bundling pricing optimization with feature optimization.
It typically does not incorporate competitive pricing data and risks developing unrealistically high/low pricing.
It requires separate price-laddering studies for every distinct offering and fails to leverage product commonalities.
Gabor-Granger offers directional input on pricing thresholds yet falls short of the holistic optimization possible through conjoint analysis and simulations factoring in product configurations, features, competitive data, and scenario modeling. It remains a fast, low-effort option, but strategic applicability is restricted. For most advanced pricing research needs, conjoint leads to far richer insights and revenue lift potential.
Conjoint analysis provides immense strategic value by quantifying customer preferences for product attributes and for specific levels of those attributes. It tests combinations of attributes to quantify preferences and price sensitivity, estimates the price elasticity of demand, and identifies revenue-maximizing prices. Because it best mimics real purchase tradeoffs, it enables holistic product/price optimization inclusive of competition. Conjoint is the most insightful (and intensive) approach, and researchers can use this understanding to model a wide range of feature and pricing configurations.
For example, we could evaluate high, medium, and low specifications for processor speed and storage capacity coupled with premium, moderate, and low pricing alternatives. Combining these variables expands scenario modeling.
Conjoint analysis predicts customer choice share for each configuration, revealing optimal bundles balancing adoption rate and profit goals. This enables incredibly precise price elasticity measurement unavailable through other means.
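As a toy illustration of how choice shares come out of a conjoint model, the sketch below scores two hypothetical laptop profiles using made-up part-worth utilities and a multinomial-logit share-of-preference rule. In practice, the utilities would be estimated from respondents' choice tasks rather than assumed:

```python
import math

# Hypothetical part-worth utilities; real values come from fitting a choice
# model (e.g. multinomial logit) to conjoint survey responses.
utilities = {
    "speed":   {"high": 0.9, "medium": 0.4, "low": 0.0},
    "storage": {"high": 0.6, "medium": 0.3, "low": 0.0},
    "price":   {299: 0.0, 399: -0.5, 499: -1.1},
}

def total_utility(profile):
    """Sum the part-worths for a profile's chosen attribute levels."""
    return sum(utilities[attr][level] for attr, level in profile.items())

def choice_shares(profiles):
    """Multinomial-logit share of preference across competing profiles."""
    exps = [math.exp(total_utility(p)) for p in profiles]
    total = sum(exps)
    return [e / total for e in exps]

ours = {"speed": "high", "storage": "medium", "price": 399}
rival = {"speed": "medium", "storage": "high", "price": 299}
shares = choice_shares([ours, rival])
print(f"Our predicted share: {shares[0]:.1%}")
```

Re-running the simulator with different attribute levels or prices is what makes the scenario modeling described above cheap once the utilities are estimated.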
You can also inject the competitive context to benchmark optimal configurations against current or upcoming offerings in the marketplace, ensuring your pricing remains attuned to real-world consumer decision drivers.
Conjoint analysis empowers recommendations like:
"Priced at $299 with high processor speed but moderate storage, 52% of prospective customers would purchase our product over competitive options, generating $X million revenue at Y% margins."
Such predictive and financially quantified guidance boosts executives' confidence when deciding pricing structures. Conjoint analysis with scenario modeling is unmatched in capturing the interplay between product attributes, pricing, positioning, and customer preferences to identify revenue-maximizing sweet spots.
Of course, there are drawbacks to conjoint analysis, including complexity, cost, and timing. Designing a conjoint study requires careful consideration of the product or service being tested, the market landscape, and the target audience. Interpreting the results can also be complex. Partnering with a consultant who has a deep understanding of the methodology and your business will be crucial.
And while it may cost more and take longer than a Van Westendorp or Gabor-Granger study, the simulator tool is an evergreen deliverable that can be used for months after the research is completed. So while the initial investment may be high, there will be little need to re-run the research if product specs or the roadmap change.
How Pricing Research Empowers Organizations
Pricing techniques range from quick directional inputs to deeply strategic recommendations. Each methodology serves different needs, but conjoint analysis excels at holistic optimization, factoring in configurations, features, positioning, competitive offerings, and scenario projections.
By quantifying buyer preferences and simulating scenarios to pinpoint ideal pricing structures, organizations gain insights to prosper while democratizing access to life-enhancing products for the diverse populations they serve.