MMM 610: Workflow Checklist for Synthetic Control

Section 6.10 is the operational companion to the theory and applications in the synthetic control chapter. The point is not to introduce a new estimator. It is to make sure the estimator is used in a defensible way: with the right treated unit, a credible donor pool, a sensible pre-period, and diagnostics that are reported before anyone starts talking about business impact.

That makes this section especially useful for MMM work. Synthetic control often looks simple in slides, but the actual analysis is a sequence of design choices. If those choices are not documented, the estimate is hard to trust and hard to reproduce.

1. Define the Treated Unit and Timing

Start by stating exactly what is treated and when treatment begins.

If the intervention is announced before implementation, decide whether the analysis should use the announcement date or the execution date. That choice should follow the likely anticipation pattern, not convenience. In marketing settings, this distinction matters because customers, competitors, and distributors may respond before the official launch.

The output of this step should be unambiguous: one treated unit, one intervention time, and a short institutional explanation of why the treatment happened.

2. Curate the Donor Pool

The donor pool is not just “everyone else.” It should exclude treated units, contaminated units, and units that are fundamentally incomparable to the treated unit.

Useful donor filters include:

  • Geography and adjacency to the treated market.
  • Market size relative to the treated unit.
  • Operational similarity (store format, channel mix, pricing).
  • Media overlap or spillover risk.
  • Known local shocks during the study window.

If spillovers are plausible, define a buffer zone and exclude donors inside it. That is often the difference between a credible design and a biased one.
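These exclusion rules are easiest to trust when they are written as code rather than applied by hand. The sketch below is one minimal way to do that; the field names, the distance-based buffer, and the size band are illustrative choices, not prescriptions from the chapter.

```python
def curate_donors(candidates, treated_id, contaminated, distances, buffer_km,
                  treated_size, size_band=(0.25, 4.0)):
    """Apply donor exclusion rules in order: drop the treated unit itself,
    known-contaminated units, units inside the spillover buffer, and units
    outside a size band around the treated unit.

    `candidates` is a list of dicts with "id" and "size"; `distances` maps
    unit id -> distance (km) to the treated market. All thresholds here
    are illustrative.
    """
    lo, hi = size_band
    pool = []
    for unit in candidates:
        if unit["id"] == treated_id or unit["id"] in contaminated:
            continue  # treated or contaminated: never a valid donor
        if distances[unit["id"]] < buffer_km:
            continue  # inside the spillover buffer zone
        if not lo <= unit["size"] / treated_size <= hi:
            continue  # incomparable in scale to the treated unit
        pool.append(unit["id"])
    return pool
```

The point of encoding the rules is reproducibility: the same candidate list and thresholds always yield the same donor pool, and the thresholds themselves become part of the documented design.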

3. Choose Predictors and the Pre-Period

The pre-treatment window is the raw material for identification. Use all pre-treatment outcome periods you have, together with stable covariates that help anchor the long-run level.

Avoid post-treatment variables and anything affected by the intervention itself. If the pre-period is short, longer lags or differenced outcomes may help capture trends, but they also increase the risk of overfitting if the donor pool is small.

The practical test is simple: can the chosen predictors reproduce the treated unit’s pre-treatment trajectory?
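One mechanical detail worth getting right is scale: if raw outcome periods and covariates sit on very different scales, the fit is dominated by whichever variable happens to be largest. A minimal sketch, assuming units are columns and using standardization as the scaling choice (the function name is my own):

```python
import numpy as np

def build_predictors(Y_pre, Z):
    """Stack every pre-treatment outcome period (rows of Y_pre) with
    time-invariant covariates Z, then standardize each predictor row
    so no variable dominates the fit purely through its scale.
    Columns index units."""
    X = np.vstack([Y_pre, Z]).astype(float)
    mu = X.mean(axis=1, keepdims=True)
    sd = X.std(axis=1, keepdims=True)
    return (X - mu) / np.where(sd > 0.0, sd, 1.0)  # guard constant rows
```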

4. Fit the Synthetic Control and Check Pre-Fit

After fitting the weights, inspect the pre-treatment root mean squared prediction error (RMSPE) and the weight pattern.

The key questions are:

  • Is the fit tight relative to the scale of the outcome?
  • Does the synthetic control track the treated unit over the full pre-period?
  • Are the weights concentrated on a few donors or spread across many?
  • Do the high-weight donors make substantive sense?

Good pre-fit is necessary, but it is not sufficient. If the fit improves only after repeated donor and predictor search, the design may be drifting into hidden specification search.
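The fit step itself is a small constrained optimization: weights that are non-negative and sum to one. A minimal sketch using SciPy's SLSQP solver; a production implementation would also choose predictor weights (the V matrix), which this version omits.

```python
import numpy as np
from scipy.optimize import minimize

def fit_synthetic_weights(x_treated, X_donors):
    """Minimize || x_treated - X_donors @ w ||^2 subject to the standard
    synthetic-control constraints w >= 0 and sum(w) = 1."""
    n_donors = X_donors.shape[1]
    result = minimize(
        lambda w: float(np.sum((x_treated - X_donors @ w) ** 2)),
        np.full(n_donors, 1.0 / n_donors),  # start at equal weights
        bounds=[(0.0, 1.0)] * n_donors,
        constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1.0}],
        method="SLSQP",
    )
    return result.x

def rmspe(y_treated, Y_donors, w):
    """Root mean squared prediction error of the synthetic control."""
    gap = y_treated - Y_donors @ w
    return float(np.sqrt(np.mean(gap ** 2)))
```

After fitting, sorting the returned weights answers the concentration question directly: a handful of large weights on sensible donors is reassuring, while weight spread across implausible donors is a warning sign.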

5. Plot the Gap and Estimate the Effect

Once the pre-period is credible, extend the plot into the post-treatment window and inspect the gap.

The gap plot should answer three basic questions:

  1. Does the gap open immediately after treatment?
  2. Does it ramp up gradually?
  3. Does it persist or fade?

For reporting, compute both the cumulative effect and the average effect over the post-period. In MMM work, those summaries are often easier to translate into revenue, margin, or incremental units than a period-by-period table.
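Both summaries fall directly out of the gap series. A minimal sketch (the function name is my own):

```python
import numpy as np

def effect_summaries(y_treated, y_synthetic, t0):
    """Gap series plus the two reporting summaries: the cumulative
    post-treatment effect and the average per-period effect.
    t0 is the index of the first post-treatment period."""
    gap = np.asarray(y_treated, dtype=float) - np.asarray(y_synthetic, dtype=float)
    post_gap = gap[t0:]
    return {
        "gap": gap,
        "cumulative_effect": float(post_gap.sum()),
        "average_effect": float(post_gap.mean()),
    }
```

The cumulative effect is the natural input to a revenue or incremental-units translation; the average effect is the natural input to a per-period rate comparison.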

6. Run Placebo Checks

In-space placebo checks are the main credibility device. Reassign treatment to each donor in turn and compare the treated unit’s post-treatment gap or RMSPE ratio to the placebo distribution.

The treated unit should look unusual relative to the donor pool if the design is working.

In-time placebo checks are also useful. Move the treatment date into the middle of the pre-period and confirm that the synthetic control does not generate a fake effect before the actual intervention.

These are diagnostics of design stability, not guarantees of identification.
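The in-space placebo loop can be sketched in a few lines. This version assumes a balanced panel with units as columns and reuses the same simplex-constrained fit as the main estimation step; the helper names are my own.

```python
import numpy as np
from scipy.optimize import minimize

def _fit(y_pre, X_pre):
    """Simplex-constrained least squares, as in the main fit step."""
    n = X_pre.shape[1]
    res = minimize(
        lambda w: float(np.sum((y_pre - X_pre @ w) ** 2)),
        np.full(n, 1.0 / n),
        bounds=[(0.0, 1.0)] * n,
        constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1.0}],
        method="SLSQP",
    )
    return res.x

def _rmspe(gap):
    return float(np.sqrt(np.mean(gap ** 2)))

def placebo_ratios(Y, t0):
    """Treat each unit (a column of Y) as if it were treated at t0,
    fit a synthetic control from the remaining units, and return the
    post/pre RMSPE ratio per unit. If the design is working, the
    genuinely treated unit sits in the tail of this distribution."""
    ratios = {}
    for j in range(Y.shape[1]):
        donors = np.delete(Y, j, axis=1)
        w = _fit(Y[:t0, j], donors[:t0])
        gap = Y[:, j] - donors @ w
        ratios[j] = _rmspe(gap[t0:]) / max(_rmspe(gap[:t0]), 1e-12)
    return ratios
```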

7. Choose an Inference Procedure

The inference choice depends on the sample size and the goal.

If the donor pool is reasonably large, permutation or rank-based inference is natural. With a small donor pool, rank p-values can be coarse, so they should be interpreted as relative extremeness rather than as exact probabilities.

If confidence intervals or uniform bands are the main goal, conformal inference is often better suited, because it handles serial dependence and weight uncertainty more gracefully than rank-based placebo comparisons do.

When different synthetic-control-style estimators disagree, it is often informative to report a bounds-based summary rather than forcing one estimate to carry the whole story.
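The rank-based option is simple enough to state exactly. A minimal sketch, using the post/pre RMSPE ratios from the placebo step (the function name is my own):

```python
def rank_p_value(treated_ratio, placebo_ratios):
    """Share of all units (treated included) whose post/pre RMSPE ratio
    is at least as extreme as the treated unit's. With J donors the
    smallest attainable value is 1 / (J + 1), which is why small pools
    give coarse p-values best read as relative extremeness."""
    all_ratios = [treated_ratio] + list(placebo_ratios)
    n_extreme = sum(r >= treated_ratio for r in all_ratios)
    return n_extreme / len(all_ratios)
```

With four donors, for example, the best possible p-value is 1/5 = 0.2, so "the treated unit is the most extreme of five" is the honest reading, not "p = 0.2" in the textbook sense.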

8. Document the Analysis Plan

The checklist is not complete until the analysis is documented.

At minimum, record:

  • The treated unit and intervention time.
  • The donor selection rules.
  • The predictor set and pre-period.
  • The weight solution and donor concentration.
  • The pre-fit diagnostics.
  • The placebo results.
  • The inference method.

If you changed donors or predictors after looking at the outcomes, label that work as exploratory. That distinction matters for interpretation.
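One way to keep this record honest is to make it a typed object that travels with the results. A minimal sketch; every field name and example value below is illustrative, not from the chapter.

```python
from dataclasses import dataclass, asdict

@dataclass
class AnalysisPlan:
    """One record per synthetic control study; field names illustrative."""
    treated_unit: str
    intervention_time: str
    donor_rules: list       # the exclusion rules, stated as text
    predictors: list
    pre_period: tuple       # (start, end) of the pre-treatment window
    weights: dict           # donor -> weight, for concentration checks
    pre_rmspe: float
    placebo_p_value: float
    inference_method: str
    exploratory: bool = False  # True if donors/predictors changed post hoc
```

Serializing the record (for example via `asdict`) alongside the plots gives reviewers everything the checklist asks for in one artifact.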

9. Translate the Result for Decision-Makers

The final step is communication.

Executives usually do not want the full optimization problem. They want to know what changed, how credible the counterfactual is, and what the business implication is. The best way to answer that is with a compact package: donor table, trajectory plot, gap plot, placebo plot, and a plain-language summary of what the estimate means.

Summary

Section 6.10 is the part of synthetic control that practitioners actually need when they move from theory to a real marketing panel. It forces the analyst to make the intervention, donor pool, predictor set, diagnostics, and inference choices explicit. That discipline is what keeps synthetic control from becoming an attractive-looking but fragile curve fit.

For MMM, the right lesson is operational: treat synthetic control as a workflow, not just a model.