10 questions to ask your data after every seasonal event

Standardized post-season debrief framework. Get specific questions that reveal what worked, what failed, and what to change next year.

The event is over. You're exhausted. Your team is exhausted. The last thing anyone wants to do is analyze what happened.

Do it anyway. Right now. Not next week. Not next month. Within 48 hours while everything is fresh. Because the stores that dominate seasonal events don't just execute—they learn systematically from every event and implement those learnings next time.

Here's the problem most stores face: They know they should analyze post-event performance, but they don't know what questions to ask. So they pull a revenue report, look at the number, decide if it's good or bad, and call it done. That's not analysis—that's checking a box.

Real post-event analysis asks specific diagnostic questions that reveal not just what happened, but why it happened and what to do differently next time. These questions have right and wrong answers: your data tells you definitively whether something worked or failed.

According to post-event learning research, stores that use structured question frameworks for seasonal debriefs implement 3-5x more tactical improvements year over year than stores relying on ad-hoc analysis, and that accumulated learning translates directly into better performance.

These ten questions form your post-seasonal debrief framework. Ask them after every seasonal event—Black Friday, Valentine's Day, Mother's Day, back-to-school, whatever. Your data gives you answers. Those answers tell you what to change.

❓ Question 1: What hour showed highest revenue-per-visitor?

Why this matters:

Revenue peaks and traffic peaks often differ. Knowing your most efficient hour (highest revenue per visitor, indicating best conversion and AOV combination) reveals optimal timing for future marketing concentration.

How to answer:

Pull hourly data showing: visitors, revenue, revenue-per-visitor for the entire event. Sort by revenue-per-visitor descending.
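If your analytics export produces an hourly table, the sort takes a few lines. A minimal pandas sketch, assuming a hypothetical CSV with hour, visitors, and revenue columns:

```python
import pandas as pd

# Assumed export: one row per event hour with visitor and revenue totals.
df = pd.read_csv("event_hourly.csv")  # hypothetical columns: hour, visitors, revenue

# Revenue per visitor captures conversion and AOV in one efficiency number.
df["revenue_per_visitor"] = df["revenue"] / df["visitors"]

# Top rows after the sort are your most efficient hours.
print(df.sort_values("revenue_per_visitor", ascending=False).head(5))
```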

What to do with the answer:

Next year, concentrate your highest-value marketing activities (best email sends, highest-budget ad hours, strongest social posts) during the hours showing historical best efficiency. Don't just chase traffic—chase efficient traffic.

Example finding: 2-4 PM showed highest revenue-per-visitor (€8.40/visitor vs €5.20 average). Next year: schedule VIP email for 1 PM delivery, boost social ad spend 1-4 PM, feature strongest offers during this window.

❓ Question 2: Which traffic source showed highest ROAS—and which showed lowest?

Why this matters:

Not all traffic is equal. Some channels deliver profitable returns, others destroy value. Knowing exactly which performed best and worst guides next year's budget allocation.

How to answer:

Calculate for each traffic source: Revenue generated, Marketing spend (if applicable), ROAS (return on ad spend).

Organic search, email, and direct have no direct media cost, so their ROAS is effectively infinite; focus the comparison on paid channels: Google Ads, Facebook, Instagram, TikTok, Display, etc.
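A minimal sketch of the ROAS table, using placeholder channel figures (chosen to mirror the example finding below) and flagging zero-spend channels instead of dividing by zero:

```python
import pandas as pd

# Hypothetical per-channel totals for the event window.
channels = pd.DataFrame({
    "channel": ["Google Shopping", "Facebook Ads", "Email", "Organic search"],
    "revenue": [48_000, 14_000, 22_000, 31_000],
    "spend":   [10_000, 10_000, 0, 0],
})

# ROAS = revenue / spend. Zero-spend channels have no meaningful ROAS,
# so mark them as infinite instead of dividing by zero.
channels["roas"] = channels.apply(
    lambda r: r["revenue"] / r["spend"] if r["spend"] > 0 else float("inf"),
    axis=1,
)
print(channels.sort_values("roas", ascending=False))
```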

What to do with the answer:

Double budget for highest-ROAS channels (room to scale). Investigate or eliminate lowest-ROAS channels (fix or stop spending). Reallocate from losers to winners.

Example finding: Google Shopping: 4.8x ROAS. Facebook Ads: 1.4x ROAS. Decision: Next year, allocate 60% of paid budget to Google Shopping (vs 40% this year), reduce Facebook to 20% (vs 35%) unless issues identified and fixable.

❓ Question 3: What was our biggest missed opportunity (category or product that could have done more)?

Why this matters:

Analysis isn't just about celebrating what worked. It's about identifying what almost worked but fell short due to fixable issues (inventory, positioning, pricing, etc.).

How to answer:

Look for products/categories showing any of these patterns (a filtering sketch follows the list):

  • High traffic but low conversion (interest exists, conversion problem)

  • Sold out early (demand exceeded supply)

  • Good margin potential but underperformed revenue (positioning issue)
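One way to surface the first pattern, high traffic but low conversion, is to flag products whose conversion rate lags the site average despite healthy traffic. A sketch assuming a hypothetical product-level export; the 1,000-view floor and 60%-of-average cutoff are arbitrary starting thresholds to tune:

```python
import pandas as pd

# Hypothetical columns: product, views, orders.
products = pd.read_csv("event_products.csv")
products["conversion"] = products["orders"] / products["views"]

# Site-wide average conversion, weighted by traffic.
site_avg = products["orders"].sum() / products["views"].sum()

# High interest, weak conversion: plenty of views, well below the site average.
missed = products[
    (products["views"] > 1_000) & (products["conversion"] < 0.6 * site_avg)
]
print(missed.sort_values("views", ascending=False))
```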

What to do with the answer:

For high-traffic/low-conversion: Fix conversion barriers (pricing, photos, descriptions, reviews). For stockouts: Deeper inventory next year. For underperforming margin products: Better positioning and marketing emphasis.

Example finding: Premium leather bags category got 8,400 product page views (high interest) but only 2.1% conversion (vs 3.8% site average). Investigation: No reviews, poor mobile photos. Action: Collect reviews, reshoot photos, feature prominently next year—potential 50-80% category lift.

❓ Question 4: When did we stock out of key products—and what did it cost us?

Why this matters:

Stockouts represent pure lost revenue. Quantifying the cost justifies deeper inventory investment next year.

How to calculate:

For each stockout (a worked sketch follows these steps):

  1. Identify when product went out of stock

  2. Calculate hourly/daily sales rate before stockout

  3. Calculate hours/days out of stock

  4. Multiply: Lost hours × hourly sales rate = lost revenue
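The arithmetic is trivial; the real work is pulling the pre-stockout sales rate. A worked sketch using the assumed figures from the example finding below:

```python
# Worked example using the assumed figures from the finding below.
hourly_rate = 420  # EUR/hour sales rate before the stockout
hours_out = 52     # Saturday 11 AM through Monday 3 PM

lost_revenue = hourly_rate * hours_out
print(f"Estimated lost revenue: EUR {lost_revenue:,}")  # EUR 21,840
```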

What to do with the answer:

For products with expensive stockouts (>€10K lost), double inventory depth next year. For products with minor stockouts (<€2K lost), maintain current approach or slight increase.

Example finding: Bestseller stocked out Saturday 11 AM, restocked Monday 3 PM. Sales rate before stockout: €420/hour. Hours out: 52. Lost revenue: €21,840. Decision: Next year, 2.5x inventory depth for proven bestsellers.

❓ Question 5: What was our cart abandonment rate during the event vs normal—and why did it change?

Why this matters:

Abandonment rate changes indicate checkout health or problems. Increased abandonment suggests technical issues, pricing surprises, or friction. Decreased abandonment indicates urgency driving completion.

How to answer:

Compare: Event cart abandonment rate vs baseline 30-day abandonment rate.

If abandonment increased: Drill into the checkout funnel to identify where the drop-off occurred. Check for: unexpected shipping costs, technical errors, slow page loads, mobile-specific issues.

If abandonment decreased: Understand what urgency factors worked for future replication.
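A minimal comparison sketch, assuming you can export session counts per checkout step for both periods (step names and counts are hypothetical, chosen to match the example finding below):

```python
import pandas as pd

# Hypothetical funnel: sessions reaching each checkout step, both periods.
funnel = pd.DataFrame({
    "step": ["cart", "shipping", "payment", "confirmed"],
    "baseline": [10_000, 6_200, 4_100, 3_200],
    "event": [25_000, 12_800, 8_100, 5_250],
})

# Step-to-step retention per period; a step whose retention drops only
# during the event is where the new friction lives.
for col in ["baseline", "event"]:
    funnel[f"{col}_retention"] = funnel[col] / funnel[col].shift(1)
print(funnel)
```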

What to do with the answer:

For increased abandonment: Fix identified friction points before next event. For decreased abandonment: Document what created urgency (countdown timers, limited quantity messaging, clear deadline communication) and replicate.

Example finding: Abandonment increased from 68% baseline to 79% during event. Cause: Shipping cost surprise at checkout (free shipping threshold not clearly communicated on product pages). Action: Prominent free shipping messaging on all product pages next time—estimated 6-8% conversion lift.

❓ Question 6: How did mobile vs desktop performance compare—and did one underperform?

Why this matters:

Device-specific problems often hide in aggregate metrics. Mobile might have melted down while desktop compensated, or vice versa.

How to answer:

Calculate separately for mobile and desktop (a grouping sketch follows the list):

  • Conversion rate vs baseline

  • AOV vs baseline

  • Cart abandonment vs baseline

  • Revenue contribution %
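A grouping sketch, assuming a hypothetical session-level export with a device column and a 0/1 converted flag:

```python
import pandas as pd

# Hypothetical columns: device, converted (0/1), revenue.
sessions = pd.read_csv("event_sessions.csv")

summary = sessions.groupby("device").agg(
    sessions=("converted", "size"),
    conversion=("converted", "mean"),  # mean of a 0/1 flag = conversion rate
    revenue=("revenue", "sum"),
)
summary["revenue_share"] = summary["revenue"] / summary["revenue"].sum()
print(summary)
```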

What to do with the answer:

If mobile underperformed: Prioritize mobile optimization (checkout, page speed, payment options) before next event. If desktop underperformed: Less common, but investigate if true. If balanced: Current approach working, maintain.

Example finding: Desktop conversion: 4.2% (vs 3.8% baseline, +11%). Mobile conversion: 1.4% (vs 1.9% baseline, -26%). Problem: Mobile checkout broke on certain iOS versions. Action: Run a comprehensive mobile testing protocol before the next event to prevent a repeat failure.

❓ Question 7: What percentage of customers were new vs returning—and how did their value compare?

Why this matters:

Understanding customer acquisition effectiveness during seasonal events informs marketing strategy. Heavy new customer acquisition at low lifetime value suggests problems. Strong returning customer activation indicates loyalty success.

How to answer:

Segment orders into: New customers (first purchase ever), Returning customers (prior purchase history).

Calculate for each: Count, % of total orders, Average order value, Total revenue contribution.
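A segmentation sketch, assuming a hypothetical order-level export where an is_new flag marks first-ever purchases:

```python
import pandas as pd

# Hypothetical columns: customer_id, order_value, is_new (True = first-ever purchase).
orders = pd.read_csv("event_orders.csv")

segments = orders.groupby("is_new").agg(
    orders=("order_value", "size"),
    aov=("order_value", "mean"),
    revenue=("order_value", "sum"),
)
segments["order_share"] = segments["orders"] / segments["orders"].sum()
segments["revenue_share"] = segments["revenue"] / segments["revenue"].sum()
print(segments)
```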

What to do with the answer:

If new customers dominated with solid AOV: Good acquisition event, track their retention over next 6-12 months. If returning customers dominated: Strong loyalty activation, consider VIP/exclusive programs next year. If new customers had low AOV: Acquisition quality concerns, may need adjusted targeting or messaging.

Example finding: New customers: 68% of orders, €74 AOV. Returning: 32% of orders, €118 AOV. Returning customers delivered 43% of revenue despite being 32% of orders. Action: Next year, add an early access period for existing customers (VIP treatment), capturing their higher value before opening to the general public.

❓ Question 8: Which day of the event performed best relative to expectations?

Why this matters:

Multi-day events (Black Friday through Cyber Monday) show distinct daily patterns. Knowing which day overperformed or underperformed guides next year's emphasis and pacing.

How to answer:

For each day, calculate: Actual revenue, Expected revenue (forecast), Variance %.

Rank days by variance—highest positive variance = biggest win, highest negative variance = biggest disappointment.
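A ranking sketch, with actuals and forecasts as placeholders chosen to reproduce the example finding below:

```python
import pandas as pd

# Placeholder actuals and forecasts for a four-day event.
days = pd.DataFrame({
    "day": ["Black Friday", "Saturday", "Sunday", "Cyber Monday"],
    "actual": [112_000, 50_400, 75_600, 94_400],
    "forecast": [100_000, 70_000, 70_000, 80_000],
})

# Variance vs forecast; the sort puts the biggest win on top, biggest miss at bottom.
days["variance_pct"] = (days["actual"] - days["forecast"]) / days["forecast"] * 100
print(days.sort_values("variance_pct", ascending=False))
```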

What to do with the answer:

Concentrate resources on historically overperforming days. Investigate underperforming days for fixable causes (technical issues, competitive actions, messaging problems).

Example finding:

  • Black Friday: +12% vs forecast (good)

  • Saturday: -28% vs forecast (bad)

  • Sunday: +8% vs forecast (good)

  • Cyber Monday: +18% vs forecast (excellent)

Saturday investigation reveals: featured products ran out by Saturday morning, so traffic arrived but couldn't buy. Action: Ensure key products are stocked through the entire weekend next year, not just Friday.

❓ Question 9: What was our repeat purchase rate during the event—and what does it tell us?

Why this matters:

Some customers made multiple purchases during the event (bought Friday, came back Sunday), indicating strong engagement that's worth encouraging deliberately.

How to answer:

Calculate: % of event customers who placed 2+ orders during event period, Average number of orders among multi-purchase customers, Revenue contribution from multi-purchase customers.
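A sketch of all three numbers from a hypothetical order-level export with customer_id and order_value columns:

```python
import pandas as pd

# Hypothetical columns: customer_id, order_value.
orders = pd.read_csv("event_orders.csv")

per_customer = orders.groupby("customer_id").agg(
    n_orders=("order_value", "size"),
    revenue=("order_value", "sum"),
)
repeaters = per_customer[per_customer["n_orders"] >= 2]

print(f"Repeat purchase rate: {len(repeaters) / len(per_customer):.1%}")
print(f"Avg orders among repeaters: {repeaters['n_orders'].mean():.1f}")
print(f"Revenue share: {repeaters['revenue'].sum() / per_customer['revenue'].sum():.1%}")
```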

What to do with the answer:

If the repeat purchase rate is high (>15%): Create campaigns encouraging multiple purchases ("Come back tomorrow for different deals"). If it's low (<5%): Either the event structure doesn't support multiple purchases (one big sale rather than evolving offers), or customers are satisfied with a single purchase.

Example finding: 18% of customers placed 2+ orders during event, averaging 2.4 orders each. These multi-purchase customers generated 34% of total event revenue despite being 18% of customers. Action: Next year, design daily deal structure explicitly encouraging return visits—"New deals daily" messaging.

❓ Question 10: If we could change one thing about this event, what would have the biggest impact?

Why this matters:

Forces prioritization. You can't fix everything. What one change would deliver maximum improvement?

How to answer:

Review all findings from questions 1-9. Identify which represents biggest opportunity or biggest problem.

Consider (a simple scoring sketch follows the list):

  • Magnitude of impact (lost revenue, missed opportunity)

  • Feasibility of fix (easy to implement vs major undertaking)

  • Likelihood of recurrence (one-time problem vs systemic issue)
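If you want to make the trade-off explicit, a crude score of impact × recurrence ÷ effort can rank the candidates. A sketch with illustrative numbers borrowed from this article's example findings; the formula is just one reasonable choice:

```python
# Crude prioritization: estimated impact, discounted by effort, weighted by
# how likely the problem is to recur. All inputs are rough estimates; the
# impact figures are borrowed from this article's example findings.
candidates = [
    # (fix, impact_eur, effort: 1=easy .. 3=hard, recurrence probability)
    ("Mobile checkout fix", 21_000, 1, 0.9),
    ("Bestseller inventory depth", 21_840, 2, 0.7),
    ("Free shipping messaging", 12_000, 1, 0.8),
]

for name, impact, effort, recur in sorted(
    candidates, key=lambda c: c[1] * c[3] / c[2], reverse=True
):
    print(f"{name}: priority score {impact * recur / effort:,.0f}")
```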

What to do with the answer:

Make this your #1 priority for next seasonal event preparation. Document it. Assign ownership. Track implementation. Verify it's fixed before next event.

Example conclusion: Biggest impact opportunity = Mobile checkout fix. Cost: €21K+ lost revenue from mobile conversion drop. Feasibility: High (technical fix). Recurrence: Very likely if not addressed. Decision: Top priority for Q1 is comprehensive mobile checkout overhaul and testing protocol.

Post-seasonal analysis isn't about pulling every possible report; it's about asking specific diagnostic questions that reveal actionable improvements. Find your highest-efficiency hour to guide next year's timing concentration. Identify your best- and worst-ROAS channels to inform budget reallocation. Spot your biggest missed opportunity to reveal fixable problems. Quantify stockout costs to justify inventory investment. Analyze abandonment changes to uncover friction points. Compare device performance to catch platform-specific issues. Segment new versus returning customers to understand acquisition effectiveness. Rank daily performance to identify pacing opportunities. Measure repeat purchase rates to reveal engagement potential. And prioritize the single highest-impact change to focus your improvement efforts.

These ten questions take 2-3 hours to answer comprehensively but deliver 10-20x return through implemented improvements. Don't skip post-event analysis. Don't do superficial analysis. Ask these questions, get real answers from your data, implement changes, and watch next year's performance improve through systematic learning.

Get the data to answer these questions automatically. Try Peasy for free at peasy.nu and receive daily reports throughout your seasonal events—sales, conversion, orders, top products, and top channels with automatic week-over-week and year-over-year comparisons.

© 2025. All Rights Reserved