Why “Did it work?” is hard
Marketing leaders live in a noisy world. Campaigns overlap, competitors react, seasons shift, and customer tastes evolve. The core problem isn’t data scarcity; it’s separating the causal impact of marketing actions from everything else happening at the same time. This is the measurement crisis: we can predict outcomes, but we still struggle to say what would have happened without the intervention.
Prediction isn’t causation
Modern analytics often excels at forecasting: who will churn, what price maximizes revenue, which channel gets clicks. But forecasts answer “what will happen,” not “what if we act differently.” That distinction matters for strategy. If we can’t isolate cause, we can’t confidently reallocate budgets or justify new programs.
Three structural challenges
Marketing settings amplify classic causal inference problems:
- Endogeneity: Budgets and campaigns are not assigned at random. Firms increase spend when they anticipate demand or face competitive threats, which confounds naive correlations.
- Dynamics: Effects unfold over time—adstock, habit formation, and delayed responses mean single pre/post comparisons are misleading.
- Spillovers: Actions in one region or segment can influence others (referrals, cross-market competition), violating the independence assumptions that many standard methods rely on.
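The endogeneity problem can be made concrete with a small simulation. In this hypothetical setup (all coefficients are invented for illustration), spend responds to an unobserved demand shock that also lifts sales, so a naive regression of sales on spend overstates the true effect:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Unobserved demand shock: the firm spends more when it anticipates demand.
demand = rng.normal(size=n)
spend = 1.0 * demand + rng.normal(size=n)                # endogenous budget
sales = 1.0 * spend + 2.0 * demand + rng.normal(size=n)  # true effect of spend = 1.0

# Naive regression of sales on spend ignores the confounder.
naive = np.polyfit(spend, sales, 1)[0]

# Controlling for the demand shock recovers the true effect.
X = np.column_stack([spend, demand, np.ones(n)])
controlled = np.linalg.lstsq(X, sales, rcond=None)[0][0]

print(f"naive slope:      {naive:.2f}")       # biased upward, roughly 2x the truth
print(f"controlled slope: {controlled:.2f}")  # close to the true 1.0
```

In real data the demand shock is, by definition, unobserved, which is exactly why design-based methods are needed rather than a bigger regression.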
Why MMM alone isn’t enough
Traditional marketing mix models (MMMs) often lean on flexible functional forms fit to aggregate time series. They can describe historical patterns, but they still struggle to identify the causal impact of discrete actions (e.g., a loyalty program launch) when confounding, dynamics, and spillovers are present. Attribution methods face similar limits in observational settings.
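To make "flexible functional forms" concrete, here is a minimal sketch of geometric adstock, the carryover transform many MMMs apply to spend before regression (the decay rate here is arbitrary, chosen only for illustration). Note that the transform describes dynamics; it does not by itself solve the identification problem:

```python
import numpy as np

def geometric_adstock(spend: np.ndarray, decay: float) -> np.ndarray:
    """Carryover: today's effective spend is today's raw spend
    plus a decayed share of yesterday's effective spend."""
    out = np.zeros_like(spend, dtype=float)
    carry = 0.0
    for t, s in enumerate(spend):
        carry = s + decay * carry
        out[t] = carry
    return out

# A single burst of spend keeps working in later periods:
burst = np.array([100.0, 0.0, 0.0, 0.0])
print(geometric_adstock(burst, decay=0.5))  # [100.   50.   25.   12.5]
```

This is why the pre/post framing in the previous section fails for media: part of any period's sales response belongs to earlier spend.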
The design-first alternative
The path forward is to bring causal design thinking into marketing measurement:
- Treat each analysis as an identification problem, not just a modeling exercise.
- Use panel data to control for time-invariant differences and to track dynamics.
- Combine econometric tools (e.g., difference-in-differences, synthetic control, factor models) with careful diagnostics that make assumptions explicit.
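As an illustration of the panel/difference-in-differences idea, here is a toy calculation on simulated regions (group baselines, the shared time trend, and the true effect of 2.0 are all invented). Subtracting the control group's pre/post change removes the market-wide trend that would contaminate a naive pre/post comparison:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5_000  # observations per cell

# Treated and control regions differ in baseline; both share a time trend;
# only the treated group receives the campaign (true effect = 2.0).
treated_pre  = 10.0 + rng.normal(0, 1, n)
treated_post = 10.0 + 1.5 + 2.0 + rng.normal(0, 1, n)  # baseline + trend + effect
control_pre  = 8.0 + rng.normal(0, 1, n)
control_post = 8.0 + 1.5 + rng.normal(0, 1, n)         # same trend, no campaign

# Difference-in-differences: treated change minus control change.
did = (treated_post.mean() - treated_pre.mean()) - \
      (control_post.mean() - control_pre.mean())

print(f"DiD estimate:   {did:.2f}")  # close to the true 2.0
print(f"naive pre/post: {treated_post.mean() - treated_pre.mean():.2f}")  # conflates trend + effect
```

The design works here because the simulated groups share the same trend; in practice that "parallel trends" assumption is exactly what the diagnostics mentioned above must probe.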
Takeaway
“MMM 101” starts with a simple premise: marketing measurement is a causal problem, not just a predictive one. Robust decisions require tools that can credibly estimate counterfactuals in noisy, evolving, and interconnected markets. The rest of this series builds from that foundation.