
Abstract

A causal interpretation of two-way fixed effects DD estimates requires both a parallel trends assumption and treatment effects that are constant over time. I show how to decompose the difference between two specifications, and provide a new analysis of models that include time-varying controls.

Introduction

Difference-in-differences (DD) is both the most common and the oldest quasi-experimental research design, dating back to Snow’s (1855) analysis of a London cholera outbreak.1 A DD estimate is the difference between the change in outcomes before and after a treatment (difference one) in a treatment versus control group (difference two): $(\bar{y}_{TREAT}^{POST}-\bar{y}_{TREAT}^{PRE})-(\bar{y}_{CONTROL}^{POST}-\bar{y}_{CONTROL}^{PRE})$. That simple quantity also equals the estimated coefficient on the interaction of a treatment-group dummy and a post-treatment-period dummy in the following regression:

$$y_{it}=\gamma+\gamma_i\,TREAT_i+\gamma_t\,POST_t+\beta^{2\times 2}\,(TREAT_i\times POST_t)+u_{it}. \tag{1}$$

The elegance of DD makes it clear which comparisons generate the estimate, what leads to bias, and how to test the design. The expression in terms of sample means connects the regression to potential outcomes and shows that, under a common trends assumption, a two-group/two-period (2x2) DD identifies the average treatment effect on the treated. Almost all econometrics textbooks and survey articles describe this structure,2 and recent methodological extensions build on it.3
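To make the 2x2 logic concrete, here is a minimal sketch (simulated data, invented effect sizes) showing that the DD of the four sample means and the interaction coefficient in the saturated regression (1) return the same number:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_units = 200
df = pd.DataFrame({
    "unit": np.repeat(np.arange(n_units), 2),
    "post": np.tile([0, 1], n_units),
    "treat": np.repeat(rng.integers(0, 2, n_units), 2),
})
# Common trend of 1.0 plus a treatment effect of 2.0 in the treated post cell
df["y"] = df["post"] + 2.0 * df["treat"] * df["post"] + rng.normal(0, 1, len(df))

# Difference one (post minus pre) taken within each group, then difference two
m = df.groupby(["treat", "post"])["y"].mean()
dd_means = (m.loc[(1, 1)] - m.loc[(1, 0)]) - (m.loc[(0, 1)] - m.loc[(0, 0)])

# The same number is the interaction coefficient in the saturated regression
dd_ols = smf.ols("y ~ treat * post", data=df).fit().params["treat:post"]
print(dd_means, dd_ols)  # equal up to floating-point error
```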

Most DD applications diverge from this 2x2 setup, though, because treatments usually occur at different times.4 Local governments change policy. Jurisdictions hand down legal rulings. Natural disasters strike across seasons. Firms lay off workers. In this case researchers estimate a regression with dummies for cross-sectional units ($\alpha_i$) and time periods ($\alpha_t$), and a treatment dummy ($D_{it}$):

$$y_{it}=\alpha_i+\alpha_t+\beta^{DD}D_{it}+e_{it}. \tag{2}$$

In contrast to our substantial understanding of the canonical 2x2 DD, we know relatively little about the two-way fixed effects DD when treatment timing varies. We do not know precisely how it compares mean outcomes across groups.5 We typically rely on general descriptions of the identifying assumption like “interventions must be as good as random, conditional on time and group fixed effects” (Bertrand et al., 2004, p. 250). We have limited understanding of the treatment effect parameter that regression DD identifies. Finally, we often cannot evaluate how and why alternative specifications change estimates.6
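A sketch of estimating Eq. (2) under staggered adoption follows; the units, adoption dates, and the constant effect size of 2.0 are invented for illustration:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
panel = pd.DataFrame(
    [(u, t) for u in range(30) for t in range(1980, 1996)],
    columns=["unit", "year"],
)
# A third of units adopt in 1984, a third in 1990, and a third never adopt
adopt_year = {u: 1984 if u < 10 else (1990 if u < 20 else np.inf)
              for u in range(30)}
panel["D"] = (panel["year"] >= panel["unit"].map(adopt_year)).astype(int)
panel["y"] = (0.1 * (panel["year"] - 1980)   # common trend
              + 2.0 * panel["D"]             # constant treatment effect
              + rng.normal(0, 1, len(panel)))

# beta_DD is the coefficient on D after unit and year dummies
twfe_fit = smf.ols("y ~ D + C(unit) + C(year)", data=panel).fit()
print(twfe_fit.params["D"])  # close to 2.0 because effects are constant here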

This paper shows that the two-way fixed effects DD estimator in (2) (TWFEDD) is a weighted average of all possible 2x2 DD estimators that compare timing groups to each other (the DD decomposition). Some use units treated at a particular time as the treatment group and untreated units as the control group. Some compare units treated at two different times, using the later-treated group as a control before its treatment begins and then the earlier-treated group as a control after its treatment begins. The weights on the 2x2 DDs are proportional to timing group sizes and the variance of the treatment dummy in each pair, which is highest for units treated in the middle of the panel.
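The sketch below illustrates these building blocks by computing each pairwise 2x2 DD directly from subsamples of the simulated `panel` and `adopt_year` objects constructed above. It only enumerates the comparisons; it does not re-derive the theorem’s variance-based weights. The helper `dd_2x2` is a hypothetical name introduced here:

```python
import numpy as np

def dd_2x2(data, treat_units, control_units, treat_date, window):
    """Pairwise 2x2 DD: the pre/post change for treat_units minus the
    pre/post change for control_units, inside the given year window."""
    sub = data[data["year"].between(*window)].copy()
    sub["post"] = (sub["year"] >= treat_date).astype(int)
    def change(units):
        g = sub[sub["unit"].isin(units)].groupby("post")["y"].mean()
        return g.loc[1] - g.loc[0]
    return change(treat_units) - change(control_units)

early = [u for u, a in adopt_year.items() if a == 1984]
late = [u for u, a in adopt_year.items() if a == 1990]
never = [u for u, a in adopt_year.items() if np.isinf(a)]

# Timing groups compared to untreated units
print(dd_2x2(panel, early, never, 1984, (1980, 1995)))
print(dd_2x2(panel, late, never, 1990, (1980, 1995)))
# Later-treated units as controls before their treatment begins ...
print(dd_2x2(panel, early, late, 1984, (1980, 1989)))
# ... and earlier-treated units as controls after their treatment begins
print(dd_2x2(panel, late, early, 1990, (1985, 1995)))
```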

I first use this DD decomposition to show that TWFEDD estimates a variance-weighted average of treatment effect parameters, sometimes with “negative weights” (Borusyak and Jaravel, 2017, de Chaisemartin and D’Haultfœuille, 2020, Sun and Abraham, 2020).7 When treatment effects do not change over time, TWFEDD yields a variance-weighted average of cross-group treatment effects and all weights are positive. Negative weights arise only when average treatment effects vary over time. The DD decomposition shows why: when already-treated units act as controls, changes in their outcomes are subtracted, and these changes may include time-varying treatment effects. This does not imply a failure of the design in the sense of non-parallel trends in counterfactual outcomes, but it does suggest caution when using TWFE estimators to summarize treatment effects.
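The following simulation illustrates the point: untreated outcomes are flat, so parallel trends holds by construction, but the treatment effect grows after adoption, and the TWFE coefficient falls well short of the average effect on the treated. All numbers are invented:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
panel2 = pd.DataFrame([(u, t) for u in range(20) for t in range(1980, 1996)],
                      columns=["unit", "year"])
adopt = panel2["unit"].map(lambda u: 1983 if u < 10 else 1990)
panel2["D"] = (panel2["year"] >= adopt).astype(int)
# Untreated outcomes are flat (parallel trends holds by construction);
# the treatment effect grows by 1.0 for every year since adoption
panel2["y"] = (panel2["D"] * (panel2["year"] - adopt + 1)
               + rng.normal(0, 0.5, len(panel2)))

twfe = smf.ols("y ~ D + C(unit) + C(year)", data=panel2).fit().params["D"]
true_att = (panel2["year"] - adopt + 1)[panel2["D"] == 1].mean()
print(twfe, true_att)  # the TWFE estimate falls well short of the true ATT
```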

Next I use the DD decomposition to define “common trends” when one is interested in using TWFEDD to identify the variance-weighted treatment effect parameter. Each 2x2 DD relies on pairwise common trends in untreated potential outcomes, so the overall assumption is an average of these terms using the variance-based decomposition weights. The extent to which a given timing group’s differential trend biases the overall estimate equals the difference between the total weight on 2x2 DDs where it is the treatment group and the total weight on 2x2 DDs where it is the control group. Because units treated near the beginning or the end of the panel have the lowest treatment variance, they can get more weight as controls than as treatments. In designs without untreated units they always do.
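As a toy illustration of this accounting, the snippet below nets out each group’s total weight as a treatment group against its total weight as a control group; the group labels and weights are invented, not computed from data:

```python
from collections import defaultdict

# (treatment group, control group, decomposition weight) for each 2x2 term
terms = [("early", "never", 0.30), ("late", "never", 0.30),
         ("early", "late", 0.25), ("late", "early", 0.15)]

net = defaultdict(float)
for treat_g, control_g, w in terms:
    net[treat_g] += w    # weight received as the treatment group
    net[control_g] -= w  # weight received as the control group
print(dict(net))  # early: +0.40, late: +0.20, never: -0.60
```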

Finally, I develop simple tools to describe the TWFEDD estimator and evaluate why estimates change across specifications.8 Plotting the 2x2 DDs against their weights displays heterogeneity in the components of the weighted average and shows which terms and timing groups matter most. Summing the weights on the timing comparisons quantifies “how much” of the variation comes from timing (a common question in practice) and provides practical guidance on how well the TWFEDD estimator works compared to alternative estimators (Sun and Abraham, 2020, Borusyak and Jaravel, 2017, Callaway and Sant’Anna, 2020, Imai and Kim, 2021, Strezhnev, 2018, Ben-Michael et al., 2019). Comparing TWFEDD estimates across specifications in an Oaxaca-Blinder-Kitagawa decomposition measures how much of the change in the overall estimate comes from the 2x2 DDs (consistent with confounding or within-group heterogeneity), the weights (a changing estimand), or the interaction of the two. Scattering the 2x2 DDs or the weights from different specifications shows which specific terms drive these differences. I also provide the first detailed analysis of specifications with time-varying controls, which can address bias but also change the sources of identification to include comparisons between units with the same treatment but different covariates.
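A sketch of the first diagnostic, the scatter of 2x2 DD estimates against their decomposition weights, is below. The estimates, weights, and labels are placeholders standing in for the output of a decomposition routine (e.g., the bacondecomp Stata module cited in the references):

```python
import matplotlib.pyplot as plt

# Hypothetical 2x2 DD estimates and decomposition weights (placeholders)
weights = [0.30, 0.30, 0.25, 0.15]
estimates = [-4.5, -5.5, -2.0, 1.0]
labels = ["early vs never", "late vs never", "early vs late", "late vs early"]

fig, ax = plt.subplots()
ax.scatter(weights, estimates)
for w, b, lab in zip(weights, estimates, labels):
    ax.annotate(lab, (w, b), textcoords="offset points", xytext=(5, 5))
# The overall TWFE estimate is the weighted sum of the plotted points
twfe = sum(w * b for w, b in zip(weights, estimates))
ax.axhline(twfe, linestyle="--")
ax.set_xlabel("decomposition weight")
ax.set_ylabel("2x2 DD estimate")
plt.show()
```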

To demonstrate these methods I replicate Stevenson and Wolfers (2006), who study the effect of unilateral divorce laws on female suicide rates. The TWFEDD estimates suggest that unilateral divorce leads to 3 fewer suicides per million women. More than a third of the identifying variation comes from treatment timing and the rest comes from comparisons to states whose reform status does not change during the sample period. Event-study estimates show that the treatment effects grow over time, though, which biases many of the timing comparisons. The TWFEDD estimate (−3.08) is therefore a misleading summary of the average post-treatment effect (about −5). Much of the sensitivity across specifications comes from changes in weights, or a small number of 2x2 DDs, and need not indicate bias.

My results show how and why the TWFEDD estimator can fail to identify interpretable treatment effect parameters and suggest that practitioners should be careful when relying on it in designs with treatment timing variation. Fortunately, recent research has developed simple, flexible estimators that address the problems I describe (e.g. Callaway and Sant’Anna, 2020), enabling applied researchers to make better use of variation in treatment timing.

Section snippets

The difference-in-differences decomposition theorem

When units experience treatment at different times, one cannot estimate equation (1) because the post-period dummy is not defined for control observations. Nearly all work that exploits variation in treatment timing uses the two-way fixed effects regression in Eq. (2) (Cameron and Trivedi, 2005, p. 738). Researchers clearly recognize that differences in when units received treatment contribute to identification, but have not been able to describe how these comparisons are made.9
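By Frisch-Waugh logic, the coefficient on $D_{it}$ in Eq. (2) can be computed directly as the covariance of the outcome with the two-way-demeaned treatment dummy divided by that dummy’s variance, which makes the underlying comparisons explicit. A sketch, assuming a balanced panel (where simple demeaning is exact) and reusing the simulated `panel` from the earlier snippets:

```python
def twfe_beta(df):
    # Residualize D on unit and year means (two-way within transformation)
    d_tilde = (df["D"]
               - df.groupby("unit")["D"].transform("mean")
               - df.groupby("year")["D"].transform("mean")
               + df["D"].mean())
    # beta_DD = sum(D_tilde * y) / sum(D_tilde^2)
    return (d_tilde * df["y"]).sum() / (d_tilde ** 2).sum()

print(twfe_beta(panel))  # matches the regression coefficient on D above
```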

Theory: What parameter does DD identify and under what assumptions?

Theorem 1 relates the regression DD coefficient to sample averages, which makes it simple to analyze its statistical properties by writing $\hat{\beta}^{DD}$ in terms of potential outcomes (Holland, 1986, Rubin, 1974). Define $Y_{it}(k)$ as the outcome of unit $i$ in period $t$ when it is treated at $t_i^*=k$, and use $Y_{it}(t_i^*)$ to denote treated potential outcomes under unit $i$’s actual treatment date. $Y_{it}(0)$ is the untreated potential outcome. If $t<t_i^*$, then $Y_{it}(t_i^*)=Y_{it}(0)$. The observed outcome is $y_{it}=D_{it}Y_{it}(t_i^*)+(1-D_{it})Y_{it}(0)$. Following

DD decomposition in practice: Unilateral divorce and female suicide

To illustrate how to use the DD decomposition theorem in practice, I replicate Stevenson and Wolfers’ (2006) analysis of no-fault divorce reforms and female suicide. Unilateral (or no-fault) divorce allowed either spouse to end a marriage, redistributing property rights and bargaining power relative to fault-based divorce regimes. Stevenson and Wolfers exploit “the natural variation resulting from the different timing of the adoption of unilateral divorce laws” in 37 states from 1969–1985 (see

Alternative specifications

The results above refer to parsimonious regressions like (2), but researchers almost always estimate multiple specifications and use differences to evaluate internal validity (Oster, 2016) or choose projects in the first place. This section extends the DD decomposition theorem to different weighting choices and control variables, providing simple new tools for learning why estimates change across specifications.

The DD decomposition theorem suggests a simple way to understand why estimates

Conclusion

Difference-in-differences is perhaps the most widely applicable quasi-experimental research design, but it has primarily been understood in the context of the simplest two-group/two-period estimator. I show that when treatment timing varies across units, the TWFEDD estimator equals a weighted average of all possible simple 2x2 DDs that compare one group that changes treatment status to another group that does not. Many ways in which the theoretical interpretation of regression DD differs from

Acknowledgments

I thank Michael Anderson, Andrew Baker, Martha Bailey, Marianne Bitler, Brantly Callaway, Kitt Carpenter, Eric Chyn, Bill Collins, Scott Cunningham, John DiNardo, Andrew Dustan, Federico Gutierrez, Brian Kovak, Emily Lawler, Doug Miller, Austin Nichols, Sayeh Nikpay, Edward Norton, Jesse Rothstein, Pedro Sant’Anna, Jesse Shapiro, Gary Solon, Isaac Sorkin, Sarah West, and seminar participants at the Southern Economics Association, ASHEcon 2018, the University of California, Davis, University of

References (69)

  • V. Joseph Hotz et al. (2005). Predicting the efficacy of future training programs using past experiences at other locations. J. Econometrics.

  • James J. Heckman et al. Chapter 31 - The economics and econometrics of active labor market programs.

  • Brantly Callaway et al. (2018). Quantile treatment effects in difference in differences models under dependence restrictions and with only two time periods. J. Econometrics.

  • Joshua D. Angrist et al. Chapter 23 - Empirical strategies in labor economics.

  • Joshua D. Angrist (1991). Grouped-data estimation and testing in simple labor-supply models. J. Econometrics.

  • Alberto Abadie (2005). Semiparametric difference-in-differences estimators. Rev. Econom. Stud.

  • Alberto Abadie et al. (2010). Synthetic control methods for comparative case studies: Estimating the effect of California’s tobacco control program. J. Amer. Statist. Assoc.

  • Hunt Allcott (2015). Site selection bias in program evaluation. Q. J. Econ.

  • Douglas Almond et al. (2011). Inside the war on poverty: The impact of food stamps on birth outcomes. Rev. Econ. Stat.

  • Joshua D. Angrist et al. (2009). Mostly Harmless Econometrics: An Empiricist’s Companion.

  • Joshua D. Angrist et al. (2015). Mastering ’Metrics: The Path from Cause to Effect.

  • Susan Athey et al. (2006). Identification and inference in nonlinear difference-in-differences models. Econometrica.

  • Susan Athey et al. (2018). Design-Based Analysis in Difference-in-Differences Settings with Staggered Adoption. Working Paper.

  • Eli Ben-Michael et al. (2019). Synthetic Controls and Weighted Event Studies with Staggered Adoption. Working Paper.

  • Marianne Bertrand et al. (2004). How much should we trust differences-in-differences estimates? Q. J. Econ.

  • Alyssa Bilinski et al. (2019).

  • Marianne P. Bitler et al. (2003). Some evidence on race, welfare reform, and household income. Amer. Econ. Rev.

  • Alan S. Blinder (1973). Wage discrimination: Reduced form and structural estimates. J. Hum. Resour.

  • Kirill Borusyak et al. (2017). Revisiting Event Study Designs. Harvard University Working Paper.

  • Brantly Callaway et al. (2020). Difference-in-differences with multiple time periods. J. Econometrics.

  • Colin Cameron et al. (2005). Microeconometrics: Methods and Applications.

  • Doruk Cengiz et al. (2019). The effect of minimum wages on low-wage jobs. Q. J. Econ.

  • Clément de Chaisemartin et al. (2018). Fuzzy differences-in-differences. Rev. Econom. Stud.

  • Clément de Chaisemartin et al. (2020). Two-way fixed effects estimators with heterogeneous treatment effects. Amer. Econ. Rev.

  • Victor Chernozhukov et al. (2013). Average and quantile effects in nonseparable panel models. Econometrica.

  • Eric Chyn (2018). Moved to opportunity: The long-run effect of public housing demolition on labor market outcomes of children. Amer. Econ. Rev.

  • Scott Cunningham (2021). Causal Inference: The Mixtape.

  • Angus Deaton (1997). The Analysis of Household Surveys: A Microeconometric Approach to Development Policy.

  • Manasi Deshpande et al. (2019). Who is screened out? Application costs and the targeting of disability programs. Amer. Econ. J.: Econ. Policy.

  • Itzik Fadlon et al. (2015). Family Labor Supply Responses to Severe Health Shocks. National Bureau of Economic Research Working Paper Series (21352).

  • Ragnar Frisch et al. (1933). Partial time regressions as compared with individual trends. Econometrica.

  • Charles E. Gibbons et al. (2018). Broken or fixed effects? J. Econometr. Methods.

  • Joshua Goodman (2017). The Labor of Division: Returns to Compulsory High School Math Coursework. National Bureau of Economic Research Working Paper Series (23063).

  • Andrew Goodman-Bacon et al. (2019). Bacondecomp: Stata module for decomposing difference-in-differences estimation with variation in treatment timing. Stata Command.



    This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors.
