CRO metrics that actually matter (and which ones don't)
Focus on metrics that drive decisions. Learn which KPIs predict success, which merely distract, and how to build measurement frameworks that optimize for real business value.
Metric selection determines optimization focus: tracking the wrong metrics optimizes for the wrong objectives while missing critical performance indicators. Yet many programs drown in vanity metrics that provide psychological satisfaction without business value. According to research from Lean Analytics on metric frameworks, successful programs track 3-5 critical metrics that drive decisions, while failed programs monitor 20+ metrics, creating analysis paralysis without actionable insight.
The metric selection challenge lies in distinguishing leading from lagging indicators, actionable from vanity metrics, and business-relevant from merely interesting measurements. Pageviews feel important but rarely drive optimization decisions. Conversion rate matters, but revenue per visitor matters more. Test velocity indicates program health, but only if tests actually improve outcomes. According to metric research from McKinsey, optimal frameworks balance three types: outcome metrics (did we improve business results?), process metrics (are we executing well?), and diagnostic metrics (why did performance change?).
This analysis presents a comprehensive metric framework: critical conversion metrics that must be tracked, supportive secondary metrics that provide context, vanity metrics worth eliminating, segment-specific measurements, cohort analysis approaches, and custom business metrics. The theme throughout is that measurement excellence requires focus: tracking a few right metrics delivers better results than monitoring many wrong ones.
📊 Critical conversion metrics (track always)
Conversion rate: (conversions) ÷ (sessions or visitors). The primary optimization metric, showing the percentage of traffic that converts. According to conversion tracking research, it remains the essential baseline metric even though it is imperfect (it ignores revenue and order quality). Typical e-commerce benchmarks: 1-3% overall, 5-15% product-to-cart, 50-70% cart-to-checkout, 65-80% checkout-to-purchase.
Revenue per visitor: (total revenue) ÷ (total visitors). Superior to pure conversion rate because it accounts for value: converting more visitors at lower order values can reduce profitability. According to RPV research, this metric optimizes business value rather than conversion volume alone, since it combines conversion and monetization in a single number.
Average order value: (total revenue) ÷ (number of orders). Measures transaction size, revealing whether optimization maintains or improves per-transaction value. According to AOV research, tracking AOV catches conversion increases achieved through margin-destroying discounting or shifts toward low-value product categories.
Customer acquisition cost: (total acquisition spending) ÷ (new customers). Measures acquisition efficiency, revealing whether optimization lowers CAC by converting more of the same traffic investment. According to CAC research, optimization that reduces CAC by 20-40% delivers value comparable to an equivalent increase in traffic.
Customer lifetime value: (average revenue per customer) × (average customer lifespan). Measures long-term value, ensuring optimization builds sustainable relationships rather than extractive one-time transactions. According to CLV research, 90-day or 180-day cohort tracking provides a practical proxy for true lifetime measurement.
Return on investment: (incremental revenue - program costs) ÷ (program costs). Quantifies optimization value and justifies continued investment. According to ROI research, effective programs generate 300-600% returns, with measured incremental gains well in excess of program costs.
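To make the arithmetic concrete, here is a minimal Python sketch that computes the six core metrics above for one reporting period. Every figure and variable name is a hypothetical illustration, not a benchmark.

```python
# Hypothetical figures for one month; illustration only, not benchmarks.
visitors = 50_000
orders = 1_000
revenue = 85_000.00                 # total revenue for the period
acquisition_spend = 20_000.00       # total acquisition spending
new_customers = 800
avg_revenue_per_customer = 170.00   # per year (assumed)
avg_customer_lifespan_years = 2.5   # assumed
incremental_revenue = 12_000.00     # lift attributed to optimization (assumed)
program_costs = 5_000.00            # CRO program cost for the period

conversion_rate = orders / visitors                            # 2.0%
revenue_per_visitor = revenue / visitors                       # 1.70
average_order_value = revenue / orders                         # 85.00
customer_acquisition_cost = acquisition_spend / new_customers  # 25.00
customer_lifetime_value = avg_revenue_per_customer * avg_customer_lifespan_years  # 425.00
roi = (incremental_revenue - program_costs) / program_costs    # 1.40 -> 140%

print(f"CR {conversion_rate:.2%} | RPV {revenue_per_visitor:.2f} | AOV {average_order_value:.2f}")
print(f"CAC {customer_acquisition_cost:.2f} | CLV {customer_lifetime_value:.2f} | ROI {roi:.0%}")
```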
📈 Important secondary metrics (track for context)
Cart abandonment rate: (carts created - purchases) ÷ (carts created). Reveals checkout effectiveness, though not the complete picture: high abandonment may stem from comparison shopping rather than problems. According to abandonment research, typical rates run 65-75%; monitor for unusual increases that signal issues.
Bounce rate: (single-page sessions) ÷ (total sessions). Measures immediate engagement, though high bounces are acceptable for certain pages (order confirmations, thank-you pages). According to bounce research, product pages bouncing above 60% signal problems, while 40-50% is normal for a homepage.
Pages per session: (total pageviews) ÷ (total sessions). Indicates engagement depth, though the optimum varies by business model: lead generation wants depth, e-commerce wants efficiency. According to engagement research, 3-5 pages per session is typical for e-commerce and suggests reasonable exploration.
Time on site: (total session duration) ÷ (total sessions). Measures engagement, though longer is not always better: an efficient, quick conversion beats a confused, lengthy session. According to time research, 2-4 minutes is typical for e-commerce, though this varies with product complexity.
Form completion rate: (form submissions) ÷ (form starts). Critical for lead generation and checkout, measuring form effectiveness. According to form research, rates vary by length and purpose: short forms 40-70%, long forms 10-30%, checkout 30-50%.
Repeat purchase rate: (customers with 2+ purchases) ÷ (total customers). Measures retention quality, ensuring optimization doesn't sacrifice long-term value for short-term conversion. According to repeat-purchase research, 25-40% is healthy, though this varies significantly by category.
New versus returning customer conversion compares acquisition and retention effectiveness. According to customer-type research, returning customers typically convert 3-5x higher; tracking the two separately reveals segment-specific opportunities.
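The secondary metrics follow the same pattern. A minimal sketch, assuming the underlying counts can be exported from your analytics tool; all figures here are hypothetical.

```python
# Hypothetical counts for illustration; substitute your own exports.
carts_created = 4_000
purchases = 1_000
form_starts = 2_500
form_submissions = 1_100
customers_total = 6_000
customers_repeat = 1_800
new_sessions, new_orders = 38_000, 450
returning_sessions, returning_orders = 12_000, 550

cart_abandonment = (carts_created - purchases) / carts_created  # 0.75
form_completion = form_submissions / form_starts                # 0.44
repeat_purchase_rate = customers_repeat / customers_total       # 0.30
new_cr = new_orders / new_sessions                               # ~1.18%
returning_cr = returning_orders / returning_sessions             # ~4.58%

print(f"Abandonment {cart_abandonment:.0%}, form completion {form_completion:.0%}, "
      f"repeat {repeat_purchase_rate:.0%}, new CR {new_cr:.2%} vs returning CR {returning_cr:.2%}")
```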
❌ Vanity metrics (stop tracking)
Raw traffic volume without conversion context. 100,000 visitors at 1% conversion generate fewer sales than 50,000 at 2.5% (1,000 versus 1,250 orders). According to traffic research, volume alone provides zero optimization value without accompanying conversion and value metrics.
Social media likes, shares, and followers, unless directly correlated with business outcomes. According to social vanity research, engagement metrics rarely predict conversion; the disconnect between social activity and commercial value makes them poor optimization guides.
Registered users or email subscribers, unless activation is measured. Accounts without purchases provide minimal value. According to subscriber research, focus on activated, engaged subscribers rather than raw counts lacking behavioral context.
Time on page as a primary metric. A long time on page might indicate confusion rather than engagement. According to time interpretation research, combine it with outcome data: long time with high conversion is good, long time with high bounce is bad.
Pageviews without behavioral context. High pageviews might indicate poor navigation forcing excessive clicks. According to pageview research, efficiency matters more than volume; fewer pages to conversion beats many.
Test count without success rate. Running 50 tests sounds impressive but means little if none improve conversion. According to test velocity research, successful tests matter, not total attempts: one winning test beats ten failed tests.
🎯 Segment-specific measurements
Device-specific conversion (mobile, tablet, desktop) reveals platform performance gaps. According to device research, mobile typically converts 30-50% lower than desktop and requires separate tracking and optimization; aggregate metrics hide mobile problems.
Traffic source conversion (organic, paid, social, email, direct, referral) measures channel quality and optimization opportunities. According to source research, conversion varies 2-10x across sources; aggregate metrics mask the differences between high-quality and low-quality sources.
New versus returning customer metrics compare acquisition and retention performance. According to customer type research, differentiated tracking reveals whether optimization is helping acquisition, retention, or both by exposing segment-specific impacts.
Geographic performance reveals market-specific conversion patterns. According to geographic research, international conversion is often 40-70% lower than domestic, and country-specific tracking surfaces localization opportunities.
Product category metrics identify high-performing versus struggling categories. According to category research, 20% of categories typically generate 60-80% of revenue; category-level tracking focuses optimization on the highest-value opportunities.
Cohort-based metrics track acquisition-period groups, measuring whether recent customers show improving or declining lifetime behavior. According to cohort research, temporal tracking reveals impacts on customer quality that are invisible in aggregate metrics. A minimal cohort calculation is sketched below.
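As an illustration of cohort tracking, this sketch groups a hypothetical order log by acquisition month and computes 90-day value per customer. The data and field layout are assumptions for the example, not a prescribed schema.

```python
from collections import defaultdict
from datetime import date, timedelta

# Hypothetical order log: (customer_id, order_date, revenue). Illustration only.
orders = [
    ("c1", date(2024, 1, 5), 80.0), ("c1", date(2024, 2, 20), 60.0),
    ("c2", date(2024, 1, 15), 120.0),
    ("c3", date(2024, 2, 3), 90.0), ("c3", date(2024, 5, 30), 70.0),  # outside 90 days
]

# The first purchase date defines each customer's acquisition cohort (YYYY-MM).
first_purchase = {}
for cust, day, _ in sorted(orders, key=lambda o: o[1]):
    first_purchase.setdefault(cust, day)

# Sum revenue earned within 90 days of acquisition, grouped by cohort month.
cohort_revenue = defaultdict(float)
cohort_customers = defaultdict(set)
for cust, day, revenue in orders:
    acquired = first_purchase[cust]
    cohort = acquired.strftime("%Y-%m")
    cohort_customers[cohort].add(cust)
    if day <= acquired + timedelta(days=90):
        cohort_revenue[cohort] += revenue

for cohort in sorted(cohort_revenue):
    value = cohort_revenue[cohort] / len(cohort_customers[cohort])
    print(f"{cohort}: 90-day value per customer = {value:.2f}")
```

Comparing the 90-day value across cohorts month over month shows whether optimization is attracting better or worse customers, which a single aggregate CLV number would hide.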
📊 Funnel-specific metrics
Product page conversion: (add-to-cart actions) ÷ (product page views). Measures product page effectiveness, revealing whether products are compelling enough to add to the cart. According to product page research, typical rates are 8-15%; below 8% suggests product presentation problems.
Cart conversion: (checkouts initiated) ÷ (carts created). Measures the cart-to-checkout transition, revealing whether the cart design is effective or problematic. According to cart research, typical rates are 50-70%; below 50% suggests cart issues.
Checkout completion: (purchases) ÷ (checkouts initiated). Measures checkout effectiveness, revealing whether the checkout is streamlined or complicated. According to checkout research, typical rates are 65-80%; below 65% suggests checkout friction.
Overall funnel efficiency: (purchases) ÷ (homepage views). An end-to-end measurement showing cumulative effectiveness. According to funnel research, tracking stage-specific rates alongside the overall rate reveals both bottleneck locations and aggregate performance.
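A worked example helps show how the stage rates and the overall rate relate. The stage counts below are hypothetical and chosen only to illustrate the arithmetic.

```python
# Hypothetical stage counts for one month; illustration only.
funnel = {
    "homepage_views": 100_000,
    "product_views": 60_000,
    "add_to_cart": 7_200,       # 12% of product views
    "checkout_started": 4_300,  # ~60% of carts
    "purchases": 3_000,         # ~70% of checkouts
}

# Conversion between consecutive stages.
stages = list(funnel.items())
for (prev_name, prev_count), (name, count) in zip(stages, stages[1:]):
    print(f"{prev_name} -> {name}: {count / prev_count:.1%}")

# End-to-end efficiency: purchases per homepage view.
overall = funnel["purchases"] / funnel["homepage_views"]
print(f"Overall funnel efficiency: {overall:.1%}")
```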
🔬 Testing and experimentation metrics
Test velocity: (tests completed) ÷ (time period). Measures program productivity, though it is only valuable when combined with success rate. According to velocity research, mature programs run 12-25 tests per quarter, indicating active experimentation.
Test success rate: (tests showing positive results) ÷ (tests completed). Measures hypothesis quality and learning effectiveness. According to success-rate research, 40-60% is typical; above 80% suggests insufficient risk-taking, while below 20% indicates poor hypothesis formation.
Average test improvement: mean conversion lift from winning tests. Measures the impact magnitude of successful optimizations. According to improvement research, average lifts of 10-30% are typical; smaller lifts suggest incremental optimization, larger ones fundamental improvements.
Time to significance: average days from test start to a statistical conclusion. Measures decision efficiency. According to timing research, 2-4 weeks is typical; longer suggests insufficient traffic or small effects, while shorter may indicate large samples or premature conclusions.
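A short sketch of how these testing metrics can be computed from a simple test log. The test names, dates, and lifts are invented for illustration.

```python
from datetime import date
from statistics import mean

# Hypothetical test log: (name, start, end, won, lift). Illustration only.
tests = [
    ("hero copy",       date(2024, 1, 8),  date(2024, 1, 29), True,  0.12),
    ("checkout layout", date(2024, 2, 1),  date(2024, 2, 26), True,  0.21),
    ("badge test",      date(2024, 2, 12), date(2024, 3, 4),  False, 0.0),
    ("free shipping",   date(2024, 3, 1),  date(2024, 3, 22), True,  0.08),
    ("sticky cart",     date(2024, 3, 11), date(2024, 4, 1),  False, 0.0),
]

success_rate = sum(t[3] for t in tests) / len(tests)              # 0.60
avg_lift = mean(t[4] for t in tests if t[3])                      # ~13.7%
avg_days_to_conclusion = mean((t[2] - t[1]).days for t in tests)  # ~21.8 days

print(f"Tests this quarter: {len(tests)}, success rate {success_rate:.0%}, "
      f"average winning lift {avg_lift:.1%}, "
      f"average {avg_days_to_conclusion:.0f} days to a conclusion")
```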
💡 Custom business metrics
Margin per visitor: (gross margin) ÷ (visitors). Superior to revenue per visitor for businesses with variable-margin products because it focuses on profit rather than pure revenue. According to margin research, profit-based optimization prevents revenue growth that destroys profitability through low-margin sales.
Lifetime value per acquisition: (90-day or 180-day cohort value) ÷ (cohort size). A practical proxy for true lifetime value: a near-term measurement that is predictive of long-term behavior. According to practical CLV research, 90-180 day tracking balances timeliness with predictive power.
Subscription conversion for SaaS and subscription businesses: (paid subscriptions) ÷ (trial starts). A critical metric measuring trial-to-paid conversion. According to subscription research, 20-40% trial conversion is typical, depending on trial length and product.
Lead quality for B2B: (qualified leads) ÷ (total leads). Prevents volume-focused optimization from creating low-quality leads. According to lead quality research, qualified-lead tracking ensures optimization delivers business value, not just form submissions.
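A minimal sketch of the custom metrics above, with purely hypothetical inputs; gross margin here is taken as revenue minus cost of goods sold.

```python
# Hypothetical inputs; substitute your own figures.
visitors = 50_000
revenue = 85_000.00
cogs = 51_000.00          # cost of goods sold (assumed)
trial_starts = 900
paid_subscriptions = 270
leads_total = 1_200
leads_qualified = 420

margin_per_visitor = (revenue - cogs) / visitors      # 0.68
trial_conversion = paid_subscriptions / trial_starts  # 0.30
lead_quality = leads_qualified / leads_total          # 0.35

print(f"Margin/visitor {margin_per_visitor:.2f}, "
      f"trial-to-paid {trial_conversion:.0%}, qualified-lead share {lead_quality:.0%}")
```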
📈 Metric prioritization framework
Tier 1 (track daily): Conversion rate, revenue per visitor, transactions. Core business metrics requiring constant monitoring. According to priority research, 3-5 Tier 1 metrics provide sufficient daily insight without becoming overwhelming.
Tier 2 (track weekly): AOV, cart abandonment, bounce rate, traffic by source, new vs returning conversion. Context metrics informing optimization priorities. According to weekly research, 8-12 Tier 2 metrics provide comprehensive understanding without excessive granularity.
Tier 3 (track monthly): CLV, cohort analysis, category performance, test velocity, success rate. Strategic metrics guiding long-term decisions. According to monthly research, 10-15 Tier 3 metrics enable strategic assessment without daily noise.
Segment-specific (always track by segment): Device, traffic source, geography, customer type. Critical dimensions that expose differential performance. According to segmentation research, 4-6 key segments reveal patterns invisible in aggregates.
💡 Common metric mistakes
Tracking too many metrics, creating analysis paralysis that prevents action. According to metric overload research, tracking 20+ metrics reduces decision velocity 40-80% because the complexity prevents focus on critical indicators.
Optimizing a single metric while ignoring secondary impacts. A conversion increase that harms AOV can create a net-negative outcome. According to holistic measurement research, balanced optimization prevents tunnel vision from damaging unmeasured metrics.
Comparing incomparable metrics (mobile to desktop, new to returning, January to July) without normalization. According to comparison research, inappropriate benchmarking leads to false conclusions because context is ignored.
Premature metric evaluation, declaring success or failure before statistical validity. According to timing research, a minimum of 7 days and 350+ conversions is required for reliable rates; earlier conclusions are wrong 40-60% of the time.
Ignoring confidence intervals and treating single-point estimates as perfect truth. According to confidence research, ±15% intervals are common at typical traffic levels; accounting for uncertainty prevents false precision.
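As a sketch of that last point, the Wilson score interval is one common way to put a confidence interval around an observed conversion rate. The traffic figures in the example are hypothetical.

```python
from math import sqrt

def wilson_interval(conversions: int, sessions: int, z: float = 1.96):
    """Wilson score interval for a conversion rate (z=1.96 gives ~95% coverage)."""
    p = conversions / sessions
    denom = 1 + z**2 / sessions
    centre = (p + z**2 / (2 * sessions)) / denom
    spread = z * sqrt(p * (1 - p) / sessions + z**2 / (4 * sessions**2)) / denom
    return centre - spread, centre + spread

# Hypothetical example: 350 conversions from 15,000 sessions.
low, high = wilson_interval(350, 15_000)
print(f"Observed {350 / 15_000:.2%}, 95% CI roughly {low:.2%} to {high:.2%}")
```

Reporting the interval alongside the point estimate makes it obvious when an apparent change is still within the noise.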
Focus on the critical conversion metrics: conversion rate, revenue per visitor, AOV, CAC, CLV, and ROI. Track secondary metrics for context: cart abandonment, bounce rate, engagement, repeat rate, and funnel stages. Eliminate vanity metrics: raw traffic, social likes, and counts without behavior. Segment measurements by device, traffic source, customer type, geography, category, and cohort. Track testing metrics: velocity, success rate, and improvement magnitude. Create custom business metrics: margin, practical CLV, and lead quality. Optimal frameworks balance 3-5 Tier 1 daily metrics, 8-12 Tier 2 weekly metrics, and 10-15 Tier 3 monthly metrics. Measurement focus enables optimization focus: tracking a few right metrics delivers better results than tracking many wrong ones.
Track key CRO metrics with Peasy's daily reports. Get conversion rate, AOV, sales, and order count delivered via email every morning. Start at peasy.nu

