Why threats must be handled ex ante
Section 3.8 catalogues the most common threats to validity in marketing panels and emphasizes a design-first mindset. The key principle: identify threats before treatment starts and adapt the design accordingly. Ex post repairs can help, but they are never as credible as ex ante prevention.
Seasonality and event interference
Seasonality is pervasive: holidays, weather, and school calendars move demand. Event interference (elections, pandemics, major sports events) can create asymmetric shocks across treated and control units.
Design adaptations:
- Align treated and control windows to the same seasonal phase.
- Use concurrent controls when feasible (geo experiments with spatial separation).
- If concurrent controls are impossible, extend windows across multiple seasons and pre-specify seasonal fixed effects or detrending (sketched after this list).
If identification relies purely on seasonal modeling rather than concurrent controls, the design is more assumption-heavy, and pre-treatment diagnostics become critical.
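As a minimal sketch of the fixed-effects option: the regression below absorbs seasonal phase with calendar-month dummies alongside unit and year effects. The long-format layout and the column names (`unit`, `year`, `month`, `treat`, `y`) are illustrative assumptions, not from the text.

```python
# Minimal sketch: pre-specified seasonal fixed effects in a panel regression.
# Assumes a long-format DataFrame with illustrative columns:
#   unit, year, month (calendar month 1-12), treat (0/1), y (outcome).
import pandas as pd
import statsmodels.formula.api as smf

def fit_seasonal_fe(df: pd.DataFrame):
    """OLS with unit, year, and calendar-month fixed effects;
    C(month) absorbs the seasonal phase, C(year) the secular trend."""
    model = smf.ols("y ~ treat + C(unit) + C(year) + C(month)", data=df)
    # Cluster standard errors by unit to respect within-unit correlation.
    return model.fit(cov_type="cluster", cov_kwds={"groups": df["unit"]})
```

The pre-treatment diagnostic is then simple: re-running the same specification on pre-period data with a placebo `treat` indicator should return an estimate near zero.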
Policy and algorithm changes
Platforms regularly change algorithms, and policy environments shift. If a policy or algorithm change coincides with treatment, the effect is confounded.
Design adaptations:
- Schedule experiments during stable periods.
- Coordinate with platform roadmaps to avoid planned changes.
- If unavoidable, estimate treatment-by-change interactions and stratify cohorts by regime (see the sketch below).
This remains fragile unless you can argue that the timing of the change is exogenous to potential outcomes.
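A hedged sketch of the interaction approach: `regime` is a hypothetical 0/1 indicator for periods after the platform change (not a column named in the text), and the `treat:regime` coefficient measures how the estimated effect shifts across regimes.

```python
import pandas as pd
import statsmodels.formula.api as smf

def fit_regime_interaction(df: pd.DataFrame):
    """Treatment-by-change interaction: 'regime' marks periods after a
    platform policy/algorithm change (illustrative column name).
    A large treat:regime coefficient signals the regimes differ."""
    model = smf.ols("y ~ treat * regime + C(unit)", data=df)
    return model.fit(cov_type="cluster", cov_kwds={"groups": df["unit"]})
```

Stratifying cohorts by regime amounts to fitting the same model separately on pre- and post-change periods and comparing the two `treat` coefficients.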
Measurement shifts
Changing measurement systems can mimic treatment effects. If the definition of $Y_{it}$ changes mid-study, pre/post contrasts mix real effects with measurement artifacts.
Design adaptations:
- Freeze measurement during the experiment when possible.
- Run overlap periods that record both the old and new measures, and fit a mapping onto a common scale (sketched below).
- Use negative-control outcomes that should not be affected by measurement changes.
When measurement shifts are unavoidable, document the timing and run sensitivity analysis around the break.
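One way to implement the overlap mapping, as a sketch: fit a simple linear bridge from the new measure to the old scale using periods where both were recorded. The function name and the linear form are assumptions; richer mappings (splines, unit-specific bridges) follow the same pattern.

```python
import numpy as np

def fit_measure_bridge(y_new_overlap: np.ndarray, y_old_overlap: np.ndarray):
    """Fit y_old ~ intercept + slope * y_new on the overlap window,
    returning a function that maps new-measure values onto the old scale."""
    slope, intercept = np.polyfit(y_new_overlap, y_old_overlap, deg=1)
    return lambda y_new: intercept + slope * y_new

# Usage sketch: bridge = fit_measure_bridge(new_vals, old_vals)
#               y_harmonized = bridge(post_change_values)
```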
Buffers and robustness windows
Buffers exclude periods or units most vulnerable to contamination. Examples:
- Exclude the first week after launch to avoid anticipation or measurement lag.
- Exclude the last week before treatment ends to avoid decay or announcement effects.
Robustness windows vary pre/post lengths to check whether conclusions hinge on narrow intervals. Specification curves aggregate estimates across plausible windows and control sets to show stability rather than cherry-picking a single specification.
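A minimal specification-curve sketch over window lengths: re-estimate the effect for each plausible pre/post combination and inspect the distribution of estimates. The column names (`period`, `unit`, `treat`, `y`) and a treatment start at period 0 are illustrative assumptions.

```python
import itertools
import pandas as pd
import statsmodels.formula.api as smf

def window_spec_curve(df, pre_lengths=(4, 8, 12), post_lengths=(2, 4, 8)):
    """Estimate the treatment coefficient across pre/post window lengths
    (treatment assumed to start at period 0) and return all estimates."""
    rows = []
    for pre, post in itertools.product(pre_lengths, post_lengths):
        window = df[(df["period"] >= -pre) & (df["period"] < post)]
        fit = smf.ols("y ~ treat + C(unit) + C(period)", data=window).fit()
        rows.append({"pre": pre, "post": post, "estimate": fit.params["treat"]})
    return pd.DataFrame(rows).sort_values("estimate").reset_index(drop=True)
```

A tight cluster of estimates across windows supports stability; estimates that flip sign with the window choice are exactly the fragility the curve is meant to expose.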
When parallel trends is implausible
If pre-trends diverge, parallel trends is not credible. Factor models provide an alternative by assuming a low-rank structure for untreated outcomes:
$$ Y_{it}(0)=\alpha_i+\delta_t+\sum_{r=1}^{R} \lambda_{ir} f_{tr}+\varepsilon_{it}. $$

Here $\alpha_i$ and $\delta_t$ are additive unit and time effects, and $\lambda_{ir}$ are unit-specific loadings on $R$ common factors $f_{tr}$. This relaxes parallel trends but introduces new assumptions: stable factor loadings and sufficient untreated units and pre-periods to estimate the $R$ factors.
Practical implications:
- Ensure $\min(N_0, T_0) \gg R$, where $N_0$ is the number of untreated units and $T_0$ the number of pre-periods; otherwise factor estimates will be unstable.
- Run sensitivity checks over different choices of $R$.
- Expect higher variance than DiD when parallel trends actually holds (an estimation sketch follows this list).
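As an estimation sketch in the spirit of Bai (2009) and Xu (2017): extract $R$ factors from the untreated block by principal components after two-way demeaning. The matrix layout (units in rows, pre-periods in columns) is an assumption of this sketch; production work should use a vetted implementation such as Xu's gsynth.

```python
import numpy as np

def estimate_factors(Y0: np.ndarray, R: int):
    """Principal-components sketch of the factor structure.
    Y0: N0 x T0 matrix of untreated outcomes (units x pre-periods)."""
    # Two-way demeaning removes the additive alpha_i and delta_t terms.
    Y = (Y0
         - Y0.mean(axis=1, keepdims=True)
         - Y0.mean(axis=0, keepdims=True)
         + Y0.mean())
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    loadings = U[:, :R] * s[:R]   # lambda_{ir}, shape (N0, R)
    factors = Vt[:R, :].T         # f_{tr},      shape (T0, R)
    return loadings, factors
```

Re-running this with different values of $R$ is the sensitivity check the list above calls for; loadings that change sharply across $R$ choices are a warning sign.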
Spillovers and interference
Spillovers violate SUTVA (the stable unit treatment value assumption) and change the estimand. If spillovers are plausible:
- Pre-specify exposure mappings $h_i(D_{-i,t})$, i.e., how each unit's outcome depends on other units' treatments (a sketch follows this list).
- Collect data in buffer zones or along network edges.
- Run sensitivity analysis across alternative exposure definitions.
Mis-specifying the spillover structure biases both direct and spillover estimates.
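A minimal sketch of one common exposure mapping, the treated share of a unit's neighbors; the adjacency matrix and the share-of-neighbors functional form are illustrative assumptions, not the text's prescription.

```python
import numpy as np

def neighbor_exposure(adjacency: np.ndarray, d_t: np.ndarray) -> np.ndarray:
    """h_i(D_{-i,t}) as the share of unit i's neighbors treated at time t.
    adjacency: N x N binary matrix (zero diagonal); d_t: length-N 0/1 vector."""
    degree = adjacency.sum(axis=1)
    treated_neighbors = adjacency @ d_t
    # Isolated units (degree 0) get exposure 0 rather than a divide-by-zero.
    return np.divide(treated_neighbors.astype(float), degree,
                     out=np.zeros(len(d_t)), where=degree > 0)
```

Re-running the analysis with alternative mappings (any treated neighbor, distance-weighted shares) implements the sensitivity check in the last bullet.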
Design limits and remaining threats
Some threats cannot be fully eliminated: time-varying unobservables, measurement error in treatment, and model misspecification. Design-based reasoning reduces reliance on modeling, but it does not remove the need for judgement. Sensitivity analysis should accompany final estimates.
Practical design checklist
- Align seasonal phases or secure concurrent controls.
- Verify platform policy and algorithm stability windows.
- Document measurement definitions and test for shifts.
- Plan buffers and robustness windows.
- Pre-specify alternative estimators when pre-trends fail.
- Define spillover exposure mappings before treatment starts.
Takeaway
Threats to validity are not an afterthought. The most credible marketing panel studies anticipate them, adapt design choices accordingly, and report sensitivity to the remaining risks.
References
- Shaw, C. (2025). Causal Inference in Marketing: Panel Data and Machine Learning Methods (Community Review Edition), Section 3.8.
- Bai, J. (2009). Panel data models with interactive fixed effects. Econometrica, 77(4), 1229-1279.
- Xu, Y. (2017). Generalized synthetic control method: Causal inference with interactive fixed effects models. Political Analysis, 25(1), 57-76.
- Simonsohn, U., Simmons, J. P., and Nelson, L. D. (2020). Specification curve analysis. Nature Human Behaviour, 4, 1208-1214.