Transitioning from Calendar Time to Event Time

Building on the difference-in-differences (DiD) framework developed in the 400-level series, Chapter 5 introduces the Event-Study Design.

Rather than plotting panel data by calendar dates, event studies organize data relative to an intervention date. This simple reframing provides powerful diagnostic and substantive benefits. Event studies allow us to:

  • Visualize the dynamic evolution of treatment effects over time.
  • Provide diagnostics for the plausibility of parallel trends and no anticipation.
  • Transparently communicate ramp-up periods, decay rates, and carryover effects—critical dynamics for marketing applications.

The Logic of Event-Time Alignment

If $G_i$ is the adoption time for unit $i$, we define event time as $k = t - G_i$.

This tracks the number of periods since (or until) the treatment:

  • $k = -2$: Two periods before adoption (pre-treatment).
  • $k = 0$: The period of adoption.
  • $k = 3$: Three periods post-adoption.
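As a minimal sketch (with hypothetical unit names and periods), event time can be computed directly from a long panel in pandas:

```python
import pandas as pd

# Hypothetical long panel: two units adopting in different calendar periods
panel = pd.DataFrame({
    "unit": ["A", "A", "A", "B", "B", "B"],
    "t":    [1, 2, 3, 1, 2, 3],   # calendar period
    "G":    [2, 2, 2, 3, 3, 3],   # adoption period G_i for each unit
})

# Event time k = t - G_i: negative values are leads, 0 is the adoption period
panel["k"] = panel["t"] - panel["G"]
print(panel["k"].tolist())  # [-1, 0, 1, -2, -1, 0]
```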

By aligning units in event time rather than calendar time, the design explicitly pools observations from cohorts treated at different calendar dates. For instance, if Store A launches a loyalty program in Q3, Store B in Q5, and Store C in Q7, evaluating all three at $k = 2$ lets us study short-run dynamics by comparing Store A’s Q5, Store B’s Q7, and Store C’s Q9 against concurrent control units.
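The store alignment above can be checked in a few lines (the adoption quarters are hypothetical):

```python
# Each store adopts in a different quarter, but all line up at event time k = 2
adoption = {"Store A": 3, "Store B": 5, "Store C": 7}  # Q3, Q5, Q7
k = 2
aligned = {store: f"Q{g + k}" for store, g in adoption.items()}
print(aligned)  # {'Store A': 'Q5', 'Store B': 'Q7', 'Store C': 'Q9'}
```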

Marketing Contexts: Anticipation and Carryover

Marketing interventions rarely produce flat, immediate effects.

  1. Anticipation: Target units may alter their behavior if they expect a future intervention. If a loyalty program is pre-announced, customers might delay purchases to earn points (leading to negative pre-treatment coefficients). If a price hike is anticipated, customers may stockpile (yielding positive pre-treatment coefficients). Such responses violate the “no anticipation” assumption required for canonical DiD.
  2. Carryover (Dynamics): Advertising takes time to build brand awareness; price promotions often spike immediately but decay rapidly or provoke competitor responses. Standard DiD compresses all of this into a single post-treatment Average Treatment Effect on the Treated (ATT). Event studies relax this restriction, permitting flexible, reduced-form lag structures without strong parametric assumptions.
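As an illustrative sketch of such a flexible lag structure, the code below simulates a panel with a single adoption date ($t = 6$) and a linear post-treatment ramp, then regresses the outcome on event-time dummies plus unit and period fixed effects, omitting $k = -1$ as the base period. All names and the data-generating process are invented for the example; only the regression pattern is the point.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

# Simulated panel: 50 treated units adopt at t = 6, 50 never-treated controls, t = 1..10.
# True dynamics: no effect pre-adoption, then a linear ramp of 0.5 per period.
rows = []
for i in range(100):
    is_treated = i < 50
    for t in range(1, 11):
        effect = 0.5 * max(0, t - 5) if is_treated else 0.0
        rows.append({
            "unit": i,
            "t": t,
            # Controls are pinned at the omitted base period k = -1
            "k": (t - 6) if is_treated else -1,
            "y": 1.0 + 0.1 * t + effect + rng.normal(scale=0.2),
        })
df = pd.DataFrame(rows)

# Event-study regression: event-time dummies (base k = -1) + unit and period fixed effects
res = smf.ols("y ~ C(k, Treatment(reference=-1)) + C(unit) + C(t)", data=df).fit()
print(res.params.filter(like="C(k,").round(2))  # leads near 0; lags ramp toward 2.5 at k = 4
```

The leads ($k < -1$) should hover near zero, which is exactly the pre-trend diagnostic the chapter describes, while the lags trace out the ramp.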

Threats to Validity: The Perils of Pooling

Homogeneity and TWFE

Pooling data in event time is highly effective for precision, but it carries a strict assumption: that different cohorts share a common dynamic profile $\{\theta_k\}$ (homogeneous dynamic effects). If this is violated under staggered adoption, traditional Two-Way Fixed Effects (TWFE) estimation introduces the familiar weighting biases. Prefer modern heterogeneity-robust DiD estimators over naive TWFE.
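To make the robust alternative concrete, here is a minimal, hand-rolled sketch of the Callaway–Sant'Anna-style logic on simulated data: estimate a cohort-specific ATT$(g, t)$ against never-treated units with base period $g - 1$, then average across cohorts at each event time. This is illustrative only, with an invented data-generating process; in practice use a vetted implementation such as the `did` package in R.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)

# Simulated staggered panel: cohorts adopt at G = 4 or G = 7; G = inf means never treated.
# True effect is a flat +1.0 from adoption onward (homogeneous across cohorts here).
rows = []
for i in range(150):
    G = [4.0, 7.0, np.inf][i % 3]
    alpha = rng.normal()  # unit fixed effect
    for t in range(1, 11):
        effect = 1.0 if t >= G else 0.0
        rows.append({"unit": i, "G": G, "t": t,
                     "y": alpha + 0.2 * t + effect + rng.normal(scale=0.3)})
df = pd.DataFrame(rows)

def att_gt(df, g, t):
    """DiD of cohort g against never-treated units, with base period g - 1."""
    cohort = df[df["G"] == g]
    never = df[np.isinf(df["G"])]
    d_cohort = (cohort.loc[cohort["t"] == t, "y"].mean()
                - cohort.loc[cohort["t"] == g - 1, "y"].mean())
    d_never = (never.loc[never["t"] == t, "y"].mean()
               - never.loc[never["t"] == g - 1, "y"].mean())
    return d_cohort - d_never

# Aggregate to event time: average ATT(g, g + k) over cohorts observed at horizon k
theta = {k: np.mean([att_gt(df, g, g + k) for g in (4.0, 7.0)]) for k in range(3)}
print({k: round(v, 2) for k, v in theta.items()})  # each entry should be near 1.0
```

Because each ATT$(g, t)$ uses only a clean comparison (one cohort vs. never-treated), this aggregation avoids the negative weighting that contaminates TWFE under staggered adoption.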

Composition Bias at Long Horizons

At extreme event horizons, the comparisons reflect a changing mix of cohorts.

  • Large positive $k$: Only identifiable from the earliest-adopting cohorts.
  • Large negative $k$ (leads): Only identifiable from late-adopting cohorts.

If early adopters systematically differ from late adopters (e.g., early adopters are historically high-volume stores), an upward-sloping profile in $\hat{\theta}_k$ might purely be an artifact of who is in the sample at that event time, rather than a genuine shift in treatment dynamics. Checking cohort-specific profiles or truncating the study window mitigates this composition bias.
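A quick tabulation (with hypothetical adoption periods) makes the support problem visible: given a panel observed over $t = 1, \dots, 10$, long lags are identified only from the early cohort and long leads only from the late cohort.

```python
# Which cohorts contribute observations at each event time k, for a panel over t = 1..10?
cohorts = {"early": 3, "late": 8}  # hypothetical adoption periods

support = {}
for name, g in cohorts.items():
    for t in range(1, 11):
        support.setdefault(t - g, []).append(name)

# Long leads come only from late adopters; long lags only from early adopters
print(support[-6])  # ['late']  (k = -6 is observable only for the late cohort)
print(support[7])   # ['early'] (k = +7 is observable only for the early cohort)
print(support[0])   # ['early', 'late']  (both cohorts contribute near adoption)
```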

Contrasting with Financial Event Studies

Note that marketing panel event studies are structurally distinct from financial event studies (which analyze stock returns in tight windows around corporate announcements). Panel event studies rely on the logic of parallel trends and use untreated panel units as controls, whereas financial event studies construct counterfactual returns from asset-pricing/market models and typically assume cross-sectional independence for inference. The two designs are complementary, but their identification strategies are fundamentally different.