What is a block design?

A block design assigns treatment to a subset of units at a common start time, and those treated units stay treated for the rest of the sample. A simple representation is:

$$ D_{it}=D_i \cdot \mathbf{1}\{t\ge t_0\}, $$

where $D_i \in \{0,1\}$ marks treatment-group membership and $t_0$ is the shared launch time.
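The indicator above can be sketched in a few lines. This is a minimal illustration, not from the text: the unit labels, group assignments, and launch time $t_0 = 3$ are all made up for the example.

```python
# Sketch of the block-design treatment indicator D_it = D_i * 1{t >= t0}.
# Unit names, assignments, and t0 = 3 below are illustrative assumptions.

def block_treatment(d_i: dict, periods: range, t0: int) -> dict:
    """Return D_it for every (unit, period) cell of the panel."""
    return {(i, t): d * (1 if t >= t0 else 0)
            for i, d in d_i.items() for t in periods}

d_i = {"store_A": 1, "store_B": 0}          # treatment-group membership D_i
panel = block_treatment(d_i, range(6), t0=3)

print(panel[("store_A", 2)])  # 0: treated unit, pre-period
print(panel[("store_A", 3)])  # 1: treated unit, at and after launch
print(panel[("store_B", 5)])  # 0: control unit, never treated
```

Once a unit crosses $t_0$, its indicator stays on for the rest of the sample, which is exactly the "block" structure.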

Why it maps cleanly to DiD

Block designs align with the canonical two-group, two-period difference-in-differences setup. The estimand is the average treatment effect on the treated (ATT):

$$ \mathrm{ATT}=\mathbb{E}[Y_{it}(1)-Y_{it}(0)\mid D_{it}=1], $$

which averages effects over treated unit-period cells. Under parallel trends, the DiD estimator recovers the ATT by comparing pre/post changes in treated units to pre/post changes in controls.
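The 2x2 comparison can be verified on simulated data. This is a sketch under an assumed data-generating process: parallel trends hold by construction, the true effect is set to 2.0, and all other numbers (baselines, trend, noise) are illustrative.

```python
import random
from statistics import mean

# Simulated 2x2 panel: parallel trends hold by construction and the
# true ATT is 2.0. All constants below are illustrative assumptions.
random.seed(0)
TRUE_ATT = 2.0

def outcome(treated: int, post: int) -> float:
    baseline = 10.0 + 3.0 * treated       # group-specific level difference
    trend = 1.5 * post                    # common time trend (parallel trends)
    effect = TRUE_ATT * treated * post    # effect only in treated post cells
    return baseline + trend + effect + random.gauss(0, 0.5)

cells = {(d, p): [outcome(d, p) for _ in range(2000)]
         for d in (0, 1) for p in (0, 1)}

# DiD: (treated post - pre change) minus (control post - pre change).
did = (mean(cells[(1, 1)]) - mean(cells[(1, 0)])) \
    - (mean(cells[(0, 1)]) - mean(cells[(0, 0)]))
print(round(did, 2))  # close to the true ATT of 2.0
```

The group-level difference (3.0) and the common trend (1.5) both difference out; only the treated-post interaction survives, which is why the estimator targets the ATT.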

When the assumptions are most credible

Block designs are the core of randomized marketing experiments:

  • Geo-experiments randomizing markets or DMAs.
  • Store-level pricing tests with randomly assigned treated stores.
  • Platform A/B tests allocating users to treatment and control groups.

Randomization makes parallel trends hold in expectation (conditional on any stratification), but finite-sample imbalance can still occur. That is why balance diagnostics and randomization-based inference matter even in experimental settings.
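A balance diagnostic can be paired with randomization-based inference directly: re-draw the assignment many times and compare the observed group difference in a pre-treatment covariate to its permutation distribution. The data below are simulated, with the covariate balanced by construction; sample sizes and draw counts are illustrative.

```python
import random
from statistics import mean

# Randomization-based balance check on a simulated pre-treatment covariate.
# Units, covariate values, and the number of draws are illustrative.
random.seed(1)
units = list(range(40))
covariate = [random.gauss(100, 10) for _ in units]   # pre-treatment covariate
treated = set(random.sample(units, 20))              # the actual randomization

def group_diff(assignment: set) -> float:
    t = [covariate[i] for i in units if i in assignment]
    c = [covariate[i] for i in units if i not in assignment]
    return mean(t) - mean(c)

observed = group_diff(treated)
# Permutation distribution: difference under re-drawn assignments.
draws = [group_diff(set(random.sample(units, 20))) for _ in range(2000)]
p_value = mean(abs(d) >= abs(observed) for d in draws)
print(round(observed, 2), round(p_value, 2))
```

Because the test re-uses the actual assignment mechanism, it stays valid even in small samples where normal-approximation tests are shaky.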

Inference and diagnostics

  • Clustering: inference must account for the level of randomization (market, store, or user cluster).
  • Balance checks: pre-treatment covariates should be balanced across groups.
  • Pre-trends: even in randomized designs, checking pre-treatment trends is a useful sanity check.
  • Spillovers: if treatment affects nearby controls, the stable unit treatment value assumption (SUTVA) is violated and estimates can be biased.

Where it sits in the method map

Block designs are the starting point for Chapter 4’s standard DiD estimator and for later diagnostics in Chapter 17. If treatment timing is not synchronized, the design transitions to staggered adoption, which requires different estimators.

Takeaway

Block designs provide the cleanest identification in panel settings: a clear ATT target, a simple DiD estimator, and a transparent diagnostic workflow. Their credibility is strongest when randomization is explicit and spillovers are minimal.
