Why templates matter

Section 3.11 turns design-based principles into operational templates. The goal is to standardize what must be written down before analysis: assignment, estimands, diagnostics, inference, and reporting. A good template reduces researcher degrees of freedom and makes credibility checks routine.

Design protocol template

A design protocol is a structured, time-stamped document created before outcome analysis. It should include:

  • The research question and intervention, in plain language.
  • Assignment mechanism (randomized vs observational) and unit of assignment.
  • Data structure ($N$, $T$, panel type, cohorts).
  • Treatment window and measurement windows.
  • Target estimand (ATT, $\tau(g,t)$, $\theta_k$, or $\mu(d)$) aligned to the question.
  • Primary and secondary outcomes with measurement units and transformations.
  • Estimator choice with chapter references.
  • Identification assumptions (parallel trends, low-rank structure, conditional independence, etc.).
  • Inference plan (clustering, bootstrap, randomization inference, multiplicity).
  • Diagnostics and sensitivity analyses.
  • Threats to validity and design mitigations.

The protocol should be archived with immutable timestamps and updated only through addenda, never by overwriting the original.
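A protocol like this can be made machine-readable, which makes the "addenda only" rule easy to enforce. The sketch below is illustrative, not from the text: the field names and the frozen-dataclass approach are our own, but the fields mirror the checklist above, and freezing the record means any change must arrive as a new addendum object rather than an overwrite.

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json

@dataclass(frozen=True)
class DesignProtocol:
    """Minimal machine-readable design protocol (field names illustrative)."""
    research_question: str
    assignment_mechanism: str        # "randomized" or "observational"
    unit_of_assignment: str
    n_units: int                     # N
    n_periods: int                   # T
    estimand: str                    # e.g. "ATT", "tau(g,t)", "theta_k", "mu(d)"
    primary_outcomes: tuple
    estimator: str
    identification_assumptions: tuple
    inference_plan: str
    created_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

def archive(protocol: DesignProtocol) -> str:
    """Serialize with its timestamp; frozen=True blocks in-place edits."""
    return json.dumps(asdict(protocol), indent=2)

protocol = DesignProtocol(
    research_question="Does the loyalty program raise repeat purchase rates?",
    assignment_mechanism="randomized",
    unit_of_assignment="store",
    n_units=200,
    n_periods=52,
    estimand="ATT",
    primary_outcomes=("repeat_purchase_rate",),
    estimator="DiD with event-study diagnostics",
    identification_assumptions=("parallel trends", "no anticipation"),
    inference_plan="cluster-robust SEs at the store level",
)
print(archive(protocol))
```

Because the dataclass is frozen, attempting to reassign a field raises an error; an actual archive would pair this with content-addressed or write-once storage for the immutable timestamp.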

Diagnostic checklist

The diagnostic checklist mirrors the Chapter 17 workflow and ties each diagnostic to an identification requirement:

  • Assignment transparency and balance checks support parallel trends.
  • Pre-trend diagnostics assess no-anticipation and parallel trends.
  • Overlap checks support conditional independence.
  • Spillover checks test SUTVA and exposure assumptions.
  • Seasonality and event checks guard against confounding shocks.
  • Measurement stability ensures consistent interpretation of $Y_{it}(d)$.
  • Power and inference checks align evidential standards with the planned estimand.

When diagnostics fail, the default response is to revise the design or narrow the estimand, not to proceed and hope for the best.
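One way to make that default response routine is to encode the checklist as a mapping from each diagnostic to the identification requirement it supports, so a failed check immediately names the assumption at risk. This is a sketch under our own naming, not an interface from the text:

```python
# Each diagnostic (keys) is tied to the identification requirement it
# supports (values), following the checklist above.
CHECKLIST = {
    "balance": "parallel trends",
    "pre_trends": "no anticipation / parallel trends",
    "overlap": "conditional independence",
    "spillover": "SUTVA / exposure assumptions",
    "seasonality": "no confounding shocks",
    "measurement_stability": "consistent interpretation of Y_it(d)",
    "power": "evidential standards for the planned estimand",
}

def review(results: dict) -> list:
    """Return the identification assumptions put at risk by failed diagnostics."""
    return [CHECKLIST[name] for name, passed in results.items() if not passed]

at_risk = review({"balance": True, "pre_trends": False, "overlap": True})
# A non-empty list is the signal to revise the design or narrow the estimand.
```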

Visual documentation templates

Section 3.11 highlights three visual templates to document assignment:

  • Assignment matrices for phased rollouts and staggered adoption.
  • Geo-experiment maps with buffers and stratification.
  • Switchback schedules with washout periods and day-of-week balance.

These visuals are not decoration. They make the assignment mechanism auditable.
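The first of these templates, the assignment matrix, is simple to generate directly from the adoption schedule. A minimal sketch (the function name and encoding are our own): rows are units, columns are periods, and an entry is 1 once a unit has adopted, which makes staggered rollouts auditable at a glance.

```python
import numpy as np

def assignment_matrix(adoption_period, n_periods):
    """Units x periods 0/1 matrix: entry (i, t) = 1 once unit i has adopted.

    adoption_period[i] is unit i's first treated period, or None if
    the unit is never treated.
    """
    D = np.zeros((len(adoption_period), n_periods), dtype=int)
    for i, g in enumerate(adoption_period):
        if g is not None:
            D[i, g:] = 1  # treatment is an absorbing state after adoption
    return D

# Staggered adoption: unit 0 adopts at t=2, unit 1 at t=4, unit 2 never.
D = assignment_matrix([2, 4, None], n_periods=6)
print(D)
# [[0 0 1 1 1 1]
#  [0 0 0 0 1 1]
#  [0 0 0 0 0 0]]
```

The same matrix doubles as input to cohort-based estimators, since each distinct row pattern identifies an adoption cohort $g$.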

Assignment-to-estimator crosswalk

Table 3.1 provides a practical map from assignment to estimands and recommended estimators. Examples:

  • Randomized block $\rightarrow$ ATT, event-time $\theta_k$ with DiD and event-study diagnostics.
  • Staggered adoption with heterogeneity $\rightarrow$ $\tau(g,t)$ and $\theta_k$ via Callaway-Sant’Anna or Sun-Abraham.
  • Single treated unit $\rightarrow$ synthetic control or SDID with placebo inference.
  • Common shocks $\rightarrow$ factor models or matrix completion when the parallel trends assumption fails.
  • Continuous intensity $\rightarrow$ dose-response $\mu(d)$ with DML and distributed lags.
  • Spillovers $\rightarrow$ exposure mapping, spatial/network models, and bounds.
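A crosswalk like Table 3.1 can also live in code, so the protocol's declared assignment pattern mechanically pins down the estimand and the candidate estimators. The dictionary below is our own hypothetical encoding of the examples above, not the table itself:

```python
# Hypothetical encoding of an assignment-to-estimator crosswalk:
# assignment pattern -> (target estimand, recommended estimators).
CROSSWALK = {
    "randomized_block":     ("ATT, theta_k", ["DiD", "event-study diagnostics"]),
    "staggered_adoption":   ("tau(g,t), theta_k", ["Callaway-Sant'Anna", "Sun-Abraham"]),
    "single_treated_unit":  ("ATT", ["synthetic control", "SDID with placebo inference"]),
    "common_shocks":        ("ATT", ["factor models", "matrix completion"]),
    "continuous_intensity": ("mu(d)", ["DML dose-response", "distributed lags"]),
    "spillovers":           ("exposure-weighted effects",
                             ["exposure mapping", "spatial/network models", "bounds"]),
}

def recommend(assignment: str) -> str:
    """Look up the estimand and estimator menu for an assignment pattern."""
    estimand, estimators = CROSSWALK[assignment]
    return f"Target {estimand}; consider {', '.join(estimators)}."

print(recommend("staggered_adoption"))
```

Keeping the crosswalk in one place means the pre-analysis plan and the analysis code draw on the same mapping, so an estimator swap is a visible diff rather than a silent choice.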

Pre-analysis plan template

The pre-analysis plan is a full, analysis-ready template that captures:

  • Study metadata and finalization date.
  • A one-sentence research question.
  • Assignment mechanism and panel structure.
  • Estimand definition with notation.
  • Primary and secondary outcomes.
  • Estimator choice and identification assumptions.
  • Inference plan and multiplicity adjustment.
  • Ex ante diagnostics and sensitivity analyses.
  • Threats to validity and mitigation actions.
  • Reporting plan with a commitment to publish all pre-specified analyses.
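A small completeness check can stop a draft plan from being finalized with a section missing. The required field names below are our own shorthand for the bullets above; they are illustrative, not a schema from the text:

```python
# Illustrative section names mirroring the pre-analysis plan bullets above.
REQUIRED_PAP_SECTIONS = (
    "metadata", "research_question", "assignment", "estimand",
    "outcomes", "estimator", "assumptions", "inference",
    "diagnostics", "threats", "reporting",
)

def missing_sections(plan: dict) -> list:
    """Return the pre-specified sections still empty or absent from a draft."""
    return [s for s in REQUIRED_PAP_SECTIONS if not plan.get(s)]

draft = {"research_question": "Does the rollout raise retention?",
         "estimand": "ATT"}
todo = missing_sections(draft)
# Finalize (and timestamp) only when `todo` is empty.
```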

Takeaway

Section 3.11 is about operational discipline. Templates and checklists make design decisions explicit, diagnostics routine, and reporting credible. The payoff is not just transparency but better design choices before it is too late to change them.

References

  • Shaw, C. (2025). Causal Inference in Marketing: Panel Data and Machine Learning Methods (Community Review Edition), Section 3.11.
  • Callaway, B., and Sant’Anna, P. H. C. (2021). Difference-in-differences with multiple time periods.
  • Sun, L., and Abraham, S. (2021). Estimating dynamic treatment effects in event studies with heterogeneous effects.
  • Abadie, A., Diamond, A., and Hainmueller, J. (2010). Synthetic control methods for comparative case studies.
  • Arkhangelsky, D., et al. (2021). Synthetic difference-in-differences.
  • Bai, J. (2009). Panel data models with interactive fixed effects.
  • Chernozhukov, V., et al. (2017). Double/debiased machine learning.