A Practical Protocol for Carryover Experiments

Synthesising the methodologies throughout Chapter 5, Section 5.11 provides a compact, end-to-end checklist for conducting carryover (switchback) experiments in marketing panels. Following this workflow ensures that analyses are transparent, rigorous, and aligned with modern best practices.

1. Align Estimands to Business Question

  • Define the Target Estimand: Clarify the goal: are you looking for immediate effects for a go/no-go decision, cumulative effects for ROI calculation, or long-run multipliers for strategic planning?
  • Mapping: Use the event-time estimands $\theta_k$ to capture dynamic responses. Ensure your estimand definition (e.g., cumulative sums $\sum_{k=0}^{K} \theta_k$) directly answers the business question.
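As a concrete illustration, the mapping from an event-time profile to these estimands reduces to a few array operations; the $\theta_k$ values below are hypothetical, not taken from the text:

```python
import numpy as np

# Hypothetical event-time effect estimates theta_k for k = 0..5
# (illustrative numbers only).
theta = np.array([0.10, 0.18, 0.25, 0.28, 0.30, 0.30])

# Immediate effect for a go/no-go decision: theta_0.
immediate = theta[0]

# Cumulative effect through horizon K, sum_{k=0}^{K} theta_k,
# e.g. as an input to an ROI calculation.
K = 5
cumulative = theta[: K + 1].sum()

# Long-run multiplier: mature per-period effect relative to the
# immediate effect, for strategic planning.
long_run_multiplier = theta[-1] / theta[0]

print(immediate, cumulative, long_run_multiplier)
```

Each business question thus picks out a different functional of the same $\theta_k$ profile.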

2. Assess Data Support

  • Construct Support Tables: Tabulate observations and cohorts by event time $k$.
  • Identify Thin-Support Regions: Flag regions with few observations (e.g., $N < 30$) or sparse cohort contributions. Truncate or bin extreme event horizons where support is thin or where composition changes sharply to avoid composition bias.
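A support table of this kind can be built with a simple cross-tabulation. The toy panel and the flagging threshold below are illustrative (the text suggests, e.g., $N < 30$ in real applications):

```python
import pandas as pd

# Toy panel: unit, calendar period, and adoption cohort
# (hypothetical data for illustration).
panel = pd.DataFrame({
    "unit":   [1, 1, 1, 2, 2, 2, 3, 3, 3],
    "period": [1, 2, 3, 1, 2, 3, 1, 2, 3],
    "cohort": [2, 2, 2, 3, 3, 3, 2, 2, 2],
})

# Event time k = calendar period minus adoption period.
panel["k"] = panel["period"] - panel["cohort"]

# Support table: observations per event time, split by cohort.
support = pd.crosstab(panel["k"], panel["cohort"], margins=True)
print(support)

# Flag thin-support event times (toy threshold of 2 observations).
totals = support.loc[support.index != "All", "All"]
thin_ks = totals[totals < 2].index.tolist()
```

Event times in `thin_ks` are candidates for truncation or binning before estimation.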

3. Select Estimator

  • Heterogeneity-Robust Estimators: Avoid naive Two-Way Fixed Effects (TWFE). Instead, use heterogeneity-robust estimators such as Sun–Abraham (interaction-weighted, suited to balanced panels), Callaway–Sant’Anna (group-time average effects), or Borusyak–Jaravel–Spiess (imputation-based).
  • Benchmark: Estimate TWFE only as a benchmark, and interpret any divergence in light of its potential for negative weighting and opaque aggregations.
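To make the group-time logic concrete, here is a minimal, hand-rolled sketch of the Callaway–Sant’Anna building block — a 2×2 DiD of each cohort against the never-treated group, aggregated to event time — on a toy panel with hypothetical numbers. In practice, use a maintained implementation rather than this sketch:

```python
import numpy as np
import pandas as pd

# Toy staggered-adoption panel: cohort is the adoption period,
# cohort = 0 marks never-treated units. Outcomes embed unit effects,
# period effects, and a dynamic treatment effect (all hypothetical).
rows = []
units = {"A": (1.0, 2), "B": (2.0, 3), "C": (0.0, 0)}  # alpha_i, cohort
for unit, (alpha, g) in units.items():
    for t in range(1, 5):
        tau = {0: 1.0, 1: 1.5, 2: 2.0}.get(t - g, 0.0) if g else 0.0
        rows.append({"unit": unit, "period": t, "cohort": g,
                     "y": alpha + 0.1 * t + tau})
panel = pd.DataFrame(rows)

def att_gt(panel, g, t):
    """Group-time ATT(g, t): a 2x2 DiD comparing cohort g with the
    never-treated group, from baseline period g - 1 to period t."""
    def change(cohort):
        sub = panel[panel.cohort == cohort]
        return (sub.loc[sub.period == t, "y"].mean()
                - sub.loc[sub.period == g - 1, "y"].mean())
    return change(g) - change(0)

# Aggregate group-time ATTs to event-time averages theta_k.
theta = {}
for k in range(0, 3):
    atts = [att_gt(panel, g, g + k) for g in (2, 3) if g + k <= 4]
    theta[k] = float(np.mean(atts))
print(theta)
```

Because each comparison uses only clean control units, the negative-weighting problem of static TWFE cannot arise by construction.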

4. Run Pre-Trend and Anticipation Checks

  • Diagnostic Specifications: Estimate multiple pre-treatment leads. Conduct joint Wald tests to verify that pre-treatment coefficients ($\theta_k$ for $k < 0$) are close to zero.
  • Placebo Tests: Run placebo-in-time tests using only pre-treatment data. If substantial pre-trends or anticipation effects are detected, reconsider the design or consider alternative identification strategies like Synthetic Control.
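The joint Wald test on the pre-treatment leads can be sketched as follows; the coefficient estimates and the (simplified, diagonal) covariance matrix are hypothetical:

```python
import numpy as np
from scipy import stats

# Hypothetical pre-treatment lead estimates theta_k for k = -4..-2
# (k = -1 is the omitted reference period) and a simplified diagonal
# covariance matrix; illustrative numbers only.
theta_pre = np.array([0.02, -0.01, 0.03])
V = np.diag([0.02, 0.02, 0.02])

# Joint Wald statistic: W = theta' V^{-1} theta, distributed chi2(q)
# under the null that all pre-treatment coefficients are zero.
W = theta_pre @ np.linalg.solve(V, theta_pre)
p_value = stats.chi2.sf(W, df=len(theta_pre))
print(W, p_value)
```

A large p-value here is necessary but not sufficient: also inspect the magnitudes of the leads, since an underpowered test can mask economically meaningful pre-trends.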

5. Estimate and Plot Event-Time Profile

  • Visualisation: Produce well-labelled event-study plots tracing $\hat{\theta}_k$ with confidence intervals.
  • Clarification: Clearly mark the vertical line at $k = 0$ (treatment adoption) and the reference period ($k = -1$).
  • Support Data: Annotate plots with sample sizes or cohort counts to ground the interpretation in data support.

6. Choose Inference Procedure

  • Clustering: Default to clustering standard errors by unit $i$. Consider two-way clustering if cross-unit correlation is plausible.
  • Small/Moderate Clusters: If the number of clusters is small (fewer than roughly 20), use the wild cluster bootstrap or randomisation inference. With 20–50 clusters, compare cluster-robust SEs against bootstrap-based inference.
  • Multiplicity: Pre-specify multiplicity adjustments (e.g., Romano–Wolf stepdown) if the analysis involves many hypothesis families.
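A minimal wild cluster bootstrap can be sketched as below, here applied to a simple mean effect with few clusters and simulated data; a real application would bootstrap the regression of interest:

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulated per-observation effects with G = 8 clusters (few clusters,
# so analytic cluster-robust SEs may be unreliable). All numbers are
# hypothetical.
G, n_per = 8, 25
cluster_means = rng.normal(0.3, 0.2, size=G)
y = np.repeat(cluster_means, n_per) + rng.normal(0, 1, size=G * n_per)
clusters = np.repeat(np.arange(G), n_per)

theta_hat = y.mean()

# Wild cluster bootstrap for H0: theta = 0. Impose the null, then
# flip each cluster's residuals with Rademacher weights.
resid = y - 0.0                        # residuals under the null
B = 999
boot_stats = np.empty(B)
for b in range(B):
    w = rng.choice([-1.0, 1.0], size=G)    # one sign draw per cluster
    boot_stats[b] = (w[clusters] * resid).mean()

# Two-sided bootstrap p-value.
p_value = (np.abs(boot_stats) >= abs(theta_hat)).mean()
print(theta_hat, p_value)
```

Because resampling happens at the cluster level, within-cluster correlation is preserved in every bootstrap draw.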

7. Conduct Sensitivity Analyses

  • Vary Specification: Construct specification curves by varying control sets (never-treated vs not-yet-treated), event-time windows, binning rules, and covariate sets.
  • Cross-Estimator Comparison: Confirm that results are robust across different heterogeneity-robust estimators.
  • Diagnostics: Use leave-one-cohort-out and leave-one-period-out procedures to identify influential outliers.
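A leave-one-cohort-out diagnostic can be sketched as follows, using hypothetical cohort-level estimates:

```python
import numpy as np
import pandas as pd

# Hypothetical cohort-level effect estimates (one row per cohort);
# illustrative numbers only, with one cohort built to look anomalous.
est = pd.DataFrame({
    "cohort": [2020, 2021, 2022, 2023],
    "theta":  [0.30, 0.28, 0.95, 0.31],
    "n":      [120, 150, 40, 130],
})

full = np.average(est.theta, weights=est.n)

# Leave-one-cohort-out: re-aggregate dropping each cohort in turn and
# record the shift relative to the full-sample estimate.
loo = {}
for cohort in est.cohort:
    keep = est[est.cohort != cohort]
    loo[cohort] = np.average(keep.theta, weights=keep.n) - full

influential = max(loo, key=lambda c: abs(loo[c]))
print(loo, influential)
```

The same loop applied over calendar periods gives the leave-one-period-out check.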

8. Compute Event-Time Metrics

Translate the estimated $\hat{\theta}_k$ profile into quantitative business outputs using the formulas from Section 5.10:

  • Anticipation: Reconcile pre-trend diagnostics with institutional knowledge of announcements.
  • Ramp-up Rate: Compute average per-period effect growth.
  • Time-to-Maturity: Identify the event time where the effect stabilises.
  • Effect Multiplier / LRM: Compute short-run versus long-run value.
  • Half-Life: Measure the persistence of effects as the time for the effect to decay to 50% of its peak (if applicable).
  • Cumulative Effect: Compute the sum of effects for ROI and CLV inputs.
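Given an estimated profile $\hat{\theta}_k$, several of these metrics reduce to simple array operations. The profile and the 5% maturity threshold below are illustrative, and the half-life is omitted because this example profile does not decay:

```python
import numpy as np

# Hypothetical event-time profile theta_k, k = 0..7: the effect
# ramps up, peaks, and stabilises.
theta = np.array([0.10, 0.20, 0.30, 0.38, 0.40, 0.40, 0.40, 0.40])

# Ramp-up rate: average per-period effect growth up to the peak.
peak = int(theta.argmax())
ramp_up = (theta[peak] - theta[0]) / peak if peak else 0.0

# Time-to-maturity: first event time within 5% of the stabilised level.
mature_level = theta[-1]
time_to_maturity = int(np.argmax(theta >= 0.95 * mature_level))

# Effect multiplier / long-run multiplier: long-run vs short-run value.
lrm = mature_level / theta[0]

# Cumulative effect over the window, an input to ROI and CLV.
cumulative = theta.sum()

print(ramp_up, time_to_maturity, lrm, cumulative)
```

Reporting these metrics alongside the event-study plot gives stakeholders the profile's business translation in a handful of numbers.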

9. Document and Report Transparently

  • Reproducibility: Provide scripts, cleaned data (or simulation), and software versions.
  • Clarity: Clearly state the research question, data structure, estimator rationale, and diagnostic results.
  • Actionability: Provide a substantive interpretation linking the causal profile and metrics back to business decisions.

Conclusion

By following this 9-step workflow, practitioners ensure that event-study analyses of carryover experiments are transparent, rigorous, and actionable. This protocol parallels the DiD workflow in Chapter 4, providing a consistent framework for modern causal inference in marketing panels.