
Mastering ’Metrics

Mastering ’Metrics The Path from Cause to Effect

Joshua D. Angrist and

Jörn-Steffen Pischke

PRINCETON UNIVERSITY PRESS ▪ PRINCETON AND OXFORD

Copyright © 2015 by Princeton University Press Published by Princeton University Press, 41 William Street, Princeton, New Jersey 08540 In the United Kingdom: Princeton University Press, 6 Oxford Street, Woodstock, Oxfordshire

OX20 1TW

press.princeton.edu

Jacket and illustration design by Wanda Espana
Book illustrations by Garrett Scafani

All Rights Reserved

Library of Congress Cataloging-in-Publication Data
Angrist, Joshua David.

Mastering ’metrics : the path from cause to effect / Joshua D. Angrist, Jörn-Steffen Pischke. pages cm Includes index.

Summary: “Applied econometrics, known to aficionados as ’metrics, is the original data science. ’Metrics encompasses the statistical methods economists use to untangle cause and effect in human affairs. Through accessible discussion and with a dose of kung fu-themed humor, Mastering ’Metrics presents the essential tools of econometric research and demonstrates why econometrics is exciting and useful. The five most valuable econometric methods, or what the authors call the Furious Five—random assignment, regression, instrumental variables, regression discontinuity designs, and differences-in-differences—are illustrated through well-crafted real-world examples (vetted for awesomeness by Kung Fu Panda’s Jade Palace). Does health insurance make you healthier? Randomized experiments provide answers. Are expensive private colleges and selective public high schools better than more pedestrian institutions? Regression analysis and a regression discontinuity design reveal the surprising truth. When private banks teeter, and depositors take their money and run, should central banks step in to save them? Differences-in-differences analysis of a Depression-era banking crisis offers a response. Could arresting O.J. Simpson have saved his ex-wife’s life? Instrumental variables methods instruct law enforcement authorities in how best to respond to domestic abuse. Wielding econometric tools with skill and confidence, Mastering ’Metrics uses data and statistics to illuminate the path from cause to effect. Shows why econometrics is important. Explains econometric research through humorous and accessible discussion. Outlines empirical methods central to modern econometric practice. Works through interesting and relevant real-world examples.”—Provided by publisher.

ISBN 978-0-691-15283-7 (hardback : alk. paper)—ISBN 978-0-691-15284-4 (paperback : alk. paper) 1. Econometrics. I. Pischke, Jörn-Steffen. II. Title.

HB139.A53984 2014 330.01′5195—dc23 2014024449

British Library Cataloging-in-Publication Data is available

This book has been composed in Sabon with Helvetica Neue Condensed family display using ZzTEX by Princeton Editorial Associates Inc., Scottsdale, Arizona

Printed on acid-free paper. ♾ Printed in the United States of America

1 3 5 7 9 10 8 6 4 2

http://press.princeton.edu
CONTENTS

List of Figures
List of Tables
Introduction

1 Randomized Trials
  1.1 In Sickness and in Health (Insurance)
  1.2 The Oregon Trail
  Masters of ’Metrics: From Daniel to R. A. Fisher
  Appendix: Mastering Inference

2 Regression
  2.1 A Tale of Two Colleges
  2.2 Make Me a Match, Run Me a Regression
  2.3 Ceteris Paribus?
  Masters of ’Metrics: Galton and Yule
  Appendix: Regression Theory

3 Instrumental Variables
  3.1 The Charter Conundrum
  3.2 Abuse Busters
  3.3 The Population Bomb
  Masters of ’Metrics: The Remarkable Wrights
  Appendix: IV Theory

4 Regression Discontinuity Designs
  4.1 Birthdays and Funerals
  4.2 The Elite Illusion
  Masters of ’Metrics: Donald Campbell

5 Differences-in-Differences
  5.1 A Mississippi Experiment
  5.2 Drink, Drank, …
  Masters of ’Metrics: John Snow
  Appendix: Standard Errors for Regression DD

6 The Wages of Schooling
  6.1 Schooling, Experience, and Earnings
  6.2 Twins Double the Fun
  6.3 Econometricians Are Known by Their … Instruments
  6.4 Rustling Sheepskin in the Lone Star State
  Appendix: Bias from Measurement Error

Abbreviations and Acronyms
Empirical Notes
Acknowledgments
Index

FIGURES

1.1 A standard normal distribution
1.2 The distribution of the t-statistic for the mean in a sample of size 10
1.3 The distribution of the t-statistic for the mean in a sample of size 40
1.4 The distribution of the t-statistic for the mean in a sample of size 100
2.1 The CEF and the regression line
2.2 Variance in X is good
3.1 Application and enrollment data from KIPP Lynn lotteries
3.2 IV in school: the effect of KIPP attendance on math scores
4.1 Birthdays and funerals
4.2 A sharp RD estimate of MLDA mortality effects
4.3 RD in action, three ways
4.4 Quadratic control in an RD design
4.5 RD estimates of MLDA effects on mortality by cause of death
4.6 Enrollment at BLS
4.7 Enrollment at any Boston exam school
4.8 Peer quality around the BLS cutoff
4.9 Math scores around the BLS cutoff
4.10 Thistlethwaite and Campbell’s Visual RD
5.1 Bank failures in the Sixth and Eighth Federal Reserve Districts
5.2 Trends in bank failures in the Sixth and Eighth Federal Reserve Districts
5.3 Trends in bank failures in the Sixth and Eighth Federal Reserve Districts, and the Sixth District’s DD counterfactual
5.4 An MLDA effect in states with parallel trends
5.5 A spurious MLDA effect in states where trends are not parallel
5.6 A real MLDA effect, visible even though trends are not parallel
5.7 John Snow’s DD recipe
6.1 The quarter of birth first stage
6.2 The quarter of birth reduced form
6.3 Last-chance exam scores and Texas sheepskin
6.4 The effect of last-chance exam scores on earnings

TABLES

1.1 Health and demographic characteristics of insured and uninsured couples in the NHIS
1.2 Outcomes and treatments for Khuzdar and Maria
1.3 Demographic characteristics and baseline health in the RAND HIE
1.4 Health expenditure and health outcomes in the RAND HIE
1.5 OHP effects on insurance coverage and health-care use
1.6 OHP effects on health indicators and financial health
2.1 The college matching matrix
2.2 Private school effects: Barron’s matches
2.3 Private school effects: Average SAT score controls
2.4 School selectivity effects: Average SAT score controls
2.5 Private school effects: Omitted variables bias
3.1 Analysis of KIPP lotteries
3.2 The four types of children
3.3 Assigned and delivered treatments in the MDVE
3.4 Quantity-quality first stages
3.5 OLS and 2SLS estimates of the quantity-quality trade-off
4.1 Sharp RD estimates of MLDA effects on mortality
5.1 Wholesale firm failures and sales in 1929 and 1933
5.2 Regression DD estimates of MLDA effects on death rates
5.3 Regression DD estimates of MLDA effects controlling for beer taxes
6.1 How bad control creates selection bias
6.2 Returns to schooling for Twinsburg twins
6.3 Returns to schooling using child labor law instruments
6.4 IV recipe for an estimate of the returns to schooling using a single quarter of birth instrument
6.5 Returns to schooling using alternative quarter of birth instruments

INTRODUCTION

BLIND MASTER PO: Close your eyes. What do you hear?

YOUNG KWAI CHANG CAINE: I hear the water, I hear the birds.

MASTER PO: Do you hear your own heartbeat?

KWAI CHANG CAINE: No.

MASTER PO: Do you hear the grasshopper that is at your feet?

KWAI CHANG CAINE: Old man, how is it that you hear these things?

MASTER PO: Young man, how is it that you do not? Kung Fu, Pilot

Economists’ reputation for dismality is a bad rap. Economics is as exciting as any science can be: the world is our lab, and the many diverse people in it are our subjects. The excitement in our work comes from the opportunity to learn about cause and effect in human affairs. The big questions of the day are our questions: Will loose monetary policy spark economic growth or just fan the fires of inflation? Iowa farmers and the Federal Reserve chair want to know. Will mandatory health insurance really make Americans healthier? Such policy kindling lights the fires of talk radio.

We approach these questions coolly, however, armed not with passion but with data. Economists’ use of data to answer cause-and-effect questions constitutes the field of applied econometrics, known to students and masters alike as ’metrics. The tools of the ’metrics trade are disciplined data analysis, paired with the machinery of statistical inference. There is a mystical aspect to our work as well: we’re after truth, but truth is not revealed in full, and the messages the data transmit require interpretation. In this spirit, we draw inspiration from the journey of Kwai Chang Caine, hero of the classic Kung Fu TV series. Caine, a mixed-race Shaolin monk, wanders in search of his U.S.-born half-brother in the nineteenth-century American West. As he searches, Caine questions all he sees in human affairs, uncovering hidden relationships and deeper meanings. Like Caine’s journey, the Way of ’Metrics is illuminated by questions.

Other Things Equal

In a disturbing development you may have heard of, the proportion of American college students completing their degrees in a timely fashion has taken a sharp turn south. Politicians and policy analysts blame falling college graduation rates on a pernicious combination of tuition hikes and the large student loans many students use to finance their studies. Perhaps increased student borrowing derails some who would otherwise stay on track. The fact that the students most likely to drop out of school often shoulder large student loans would seem to substantiate this hypothesis. You’d rather pay for school with inherited riches than borrowed money if you can. As we’ll discuss in detail, however, education probably boosts earnings enough to make loan repayment bearable for most graduates.

How then should we interpret the negative correlation between debt burden and college graduation rates? Does indebtedness cause debtors to drop out? The first question to ask in this context is who borrows the most. Students who borrow heavily typically come from middle and lower income families, since richer families have more savings. For many reasons, students from lower income families are less likely to complete a degree than those from higher income families, regardless of whether they’ve borrowed heavily. We should therefore be skeptical of claims that high debt burdens cause lower college completion rates when these claims are based solely on comparisons of completion rates between those with more or less debt. By virtue of the correlation between family background and college debt, the contrast in graduation rates between those with and without student loans is not an other things equal comparison.

As college students majoring in economics, we first learned the other things equal idea by its Latin name, ceteris paribus. Comparisons made under ceteris paribus conditions have a causal interpretation. Imagine two students identical in every way, so their families have the same financial resources and their parents are similarly educated. One of these virtual twins finances college by borrowing and the other from savings. Because they are otherwise equal in every way (their grandmother has treated both to a small nest egg), differences in their educational attainment can be attributed to the fact that only one has borrowed. To this day, we wonder why so many economics students first encounter this central idea in Latin; maybe it’s a conspiracy to keep them from thinking about it. Because, as this hypothetical comparison suggests, real other things equal comparisons are hard to engineer, some would even say impossibile (that’s Italian not Latin, but at least people still speak it).

Hard to engineer, maybe, but not necessarily impossible. The ’metrics craft uses data to get to other things equal in spite of the obstacles—called selection bias or omitted variables bias—found on the path running from raw numbers to reliable causal knowledge. The path to causal understanding is rough and shadowed as it snakes around the boulders of selection bias. And yet, masters of ’metrics walk this path with confidence as well as humility, successfully linking cause and effect.

Our first line of attack on the causality problem is a randomized experiment, often called a randomized trial. In a randomized trial, researchers change the causal variables of interest (say, the availability of college financial aid) for a group selected using something like a coin toss. By changing circumstances randomly, we make it highly likely that the variable of interest is unrelated to the many other factors determining the outcomes we mean to study. Random assignment isn’t the same as holding everything else fixed, but it has the same effect. Random manipulation makes other things equal hold on average across the groups that did and did not experience manipulation. As we explain in Chapter 1, “on average” is usually good enough.
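The "on average" claim can be illustrated with a minimal simulation. Everything below is invented for illustration: a constant-effects setup with an arbitrary true effect, baseline outcomes drawn from an arbitrary distribution, and coin-toss assignment.

```python
# Sketch: with random assignment, the difference in group means centers on
# the true causal effect, even though no individual comparison is ceteris
# paribus. All numbers here are invented for illustration.
import random

random.seed(0)
kappa = 1.0                                      # assumed true constant effect
y0 = [random.gauss(3, 1) for _ in range(1000)]   # invented baseline outcomes

def experiment():
    """One randomized trial: assign treatment by coin toss, compare means."""
    d = [random.randint(0, 1) for _ in y0]
    y = [y0i + kappa * di for y0i, di in zip(y0, d)]
    treated = [yi for yi, di in zip(y, d) if di]
    control = [yi for yi, di in zip(y, d) if not di]
    return sum(treated) / len(treated) - sum(control) / len(control)

# Averaged over many replications, the estimated gap sits close to kappa.
estimate = sum(experiment() for _ in range(200)) / 200
print(round(estimate, 2))
```

Any single replication misses the true effect by a little; it is the average over replications (or a large enough sample) that makes other things equal.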

Randomized trials take pride of place in our ’metrics toolkit. Alas, randomized social experiments are expensive to field and may be slow to bear fruit, while research funds are scarce and life is short. Often, therefore, masters of ’metrics turn to less powerful but more accessible research designs. Even when we can’t practicably randomize, however, we still dream of the trials we’d like to do. The notion of an ideal experiment disciplines our approach to econometric research. Mastering ’Metrics shows how wise application of our five favorite econometric tools brings us as close as possible to the causality-revealing power of a real experiment.

Our favorite econometric tools are illustrated here through a series of well-crafted and important econometric studies. Vetted by Grand Master Oogway of Kung Fu Panda’s Jade Palace, these investigations of causal effects are distinguished by their awesomeness. The methods they use—random assignment, regression, instrumental variables, regression discontinuity designs, and differences-in-differences—are the Furious Five of econometric research.

For starters, motivated by the contemporary American debate over health care, the first chapter describes two social experiments that reveal whether, as many policymakers believe, health insurance indeed helps those who have it stay healthy. Chapters 2–5 put our other tools to work, crafting answers to important questions ranging from the benefits of attending private colleges and selective high schools to the costs of teen drinking and the effects of central bank injections of liquidity. Our final chapter puts the Furious Five to the test by returning to the education arena. On average, college graduates earn about twice as much as high school graduates, an earnings gap that only seems to be growing. Chapter 6 asks whether this gap is evidence of a large causal return to schooling or merely a reflection of the many other advantages those with more education might have (such as more educated parents). Can the relationship between schooling and earnings ever be evaluated on a ceteris paribus basis, or must the boulders of selection bias forever block our way? The challenge of quantifying the causal link between schooling and earnings provides a gripping test match for ’metrics tools and the masters who wield them.

Mastering ’Metrics

Chapter 1

Randomized Trials

KWAI CHANG CAINE: What happens in a man’s life is already written. A man must move through life as his destiny wills.

OLD MAN: Yet each is free to live as he chooses. Though they seem opposite, both are true. Kung Fu, Pilot

Our Path

Our path begins with experimental random assignment, both as a framework for causal questions and a benchmark by which the results from other methods are judged. We illustrate the awesome power of random assignment through two randomized evaluations of the effects of health insurance. The appendix to this chapter also uses the experimental framework to review the concepts and methods of statistical inference.

1.1 In Sickness and in Health (Insurance)

The Affordable Care Act (ACA) has proven to be one of the most controversial and interesting policy innovations we’ve seen. The ACA requires Americans to buy health insurance, with a tax penalty for those who don’t voluntarily buy in. The question of the proper role of government in the market for health care has many angles. One is the causal effect of health insurance on health.

The United States spends more of its GDP on health care than do other developed nations, yet Americans are surprisingly unhealthy. For example, Americans are more likely to be overweight and die sooner than their Canadian cousins, who spend only about two-thirds as much on care. America is also unusual among developed countries in having no universal health insurance scheme. Perhaps there’s a causal connection here.

Elderly Americans are covered by a federal program called Medicare, while some poor Americans (including most single mothers, their children, and many other poor children) are covered by Medicaid. Many of the working, prime-age poor, however, have long been uninsured. In fact, many uninsured Americans have chosen not to participate in an employer-provided insurance plan.1 These workers, perhaps correctly, count on hospital emergency departments, which cannot turn them away, to address their health-care needs. But the emergency department might not be the best place to treat, say, the flu, or to manage chronic conditions like diabetes and hypertension that are so pervasive among poor Americans. The emergency department is not required to provide long-term care. It therefore stands to reason that government-mandated health insurance might yield a health dividend. The push for subsidized universal health insurance stems in part from the belief that it does.

The ceteris paribus question in this context contrasts the health of someone with insurance coverage to the health of the same person were they without insurance (other than an emergency department backstop). This contrast highlights a fundamental empirical conundrum: people are either insured or not. We don’t get to see them both ways, at least not at the same time in exactly the same circumstances.
In his celebrated poem, “The Road Not Taken,” Robert Frost used the metaphor of a crossroads to describe the causal effects of personal choice:

Two roads diverged in a yellow wood,

And sorry I could not travel both And be one traveler, long I stood And looked down one as far as I could To where it bent in the undergrowth;

Frost’s traveler concludes:

Two roads diverged in a wood, and I— I took the one less traveled by, And that has made all the difference.

The traveler claims his choice has mattered, but, being only one person, he can’t be sure. A later trip or a report by other travelers won’t nail it down for him, either. Our narrator might be older and wiser the second time around, while other travelers might have different experiences on the same road. So it is with any choice, including those related to health insurance: would uninsured men with heart disease be disease-free if they had insurance? In the novel Light Years, James Salter’s irresolute narrator observes: “Acts demolish their alternatives, that is the paradox.” We can’t know what lies at the end of the road not taken.

We can’t know, but evidence can be brought to bear on the question. This chapter takes you through some of the evidence related to paths involving health insurance. The starting point is the National Health Interview Survey (NHIS), an annual survey of the U.S. population with detailed information on health and health insurance. Among many other things, the NHIS asks: “Would you say your health in general is excellent, very good, good, fair, or poor?” We used this question to code an index that assigns 5 to excellent health and 1 to poor health in a sample of married 2009 NHIS respondents who may or may not be insured.2 This index is our outcome: a measure we’re interested in studying.

The causal relation of interest here is determined by a variable that indicates coverage by private health insurance. We call this variable the treatment, borrowing from the literature on medical trials, although the treatments we’re interested in need not be medical treatments like drugs or surgery. In this context, those with insurance can be thought of as the treatment group; those without insurance make up the comparison or control group. A good control group reveals the fate of the treated in a counterfactual world where they are not treated.

The first row of Table 1.1 compares the average health index of insured and uninsured Americans, with statistics tabulated separately for husbands and wives.3 Those with health insurance are indeed healthier than those without, a gap of about .3 in the index for men and .4 in the index for women. These are large differences when measured against the standard deviation of the health index, which is about 1. (Standard deviations, reported in square brackets in Table 1.1, measure variability in data. The chapter appendix reviews the relevant formula.) These large gaps might be the health dividend we’re looking for.
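A comparison like the one in the first row of Table 1.1 can be sketched in a few lines of code. The health-index values below are invented stand-ins for the NHIS microdata, chosen only to show the mechanics of a mean gap and its standard error.

```python
# Sketch of a Table 1.1-style comparison with invented health-index values
# (1 = poor, 5 = excellent); these are NOT the actual NHIS data.
import statistics

insured   = [5, 4, 4, 5, 3, 4, 5, 4]   # health index, insured group
uninsured = [4, 3, 4, 3, 4, 3, 5, 3]   # health index, uninsured group

def mean_gap(treated, control):
    """Difference in average outcomes between two groups."""
    return statistics.mean(treated) - statistics.mean(control)

def se_gap(treated, control):
    """Standard error of the difference in means (unequal variances)."""
    var_t = statistics.variance(treated) / len(treated)
    var_c = statistics.variance(control) / len(control)
    return (var_t + var_c) ** 0.5

gap = mean_gap(insured, uninsured)
se  = se_gap(insured, uninsured)
print(f"gap = {gap:.3f}, standard error = {se:.3f}")
```

As the text goes on to argue, computing such a gap is easy; interpreting it causally is the hard part.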

Fruitless and Fruitful Comparisons

Simple comparisons, such as those at the top of Table 1.1, are often cited as evidence of causal effects. More often than not, however, such comparisons are misleading. Once again the problem is other things equal, or lack thereof. Comparisons of people with and without health insurance are not apples to apples; such contrasts are apples to oranges, or worse.

Among other differences, those with health insurance are better educated, have higher income, and are more likely to be working than the uninsured. This can be seen in panel B of Table 1.1, which reports the average characteristics of NHIS respondents who do and don’t have health insurance. Many of the differences in the table are large (for example, a nearly 3-year schooling gap); most are statistically precise enough to rule out the hypothesis that these discrepancies are merely chance findings (see the chapter appendix for a refresher on statistical significance). It won’t surprise you to learn that most variables tabulated here are highly correlated with health as well as with health insurance status. More-educated people, for example, tend to be healthier as well as being overrepresented in the insured group. This may be because more-educated people exercise more, smoke less, and are more likely to wear seat belts. It stands to reason that the difference in health between insured and uninsured NHIS respondents at least partly reflects the extra schooling of the insured.

TABLE 1.1
Health and demographic characteristics of insured and uninsured couples in the NHIS

Notes: This table reports average characteristics for insured and uninsured married couples in the 2009 National Health Interview Survey (NHIS). Columns (1), (2), (4), and (5) show average characteristics of the group of individuals specified by the column heading. Columns (3) and (6) report the difference between the average characteristic for individuals with and without health insurance (HI). Standard deviations are in brackets; standard errors are reported in parentheses.

Our effort to understand the causal connection between insurance and health is aided by fleshing out Frost’s two-roads metaphor. We use the letter Y as shorthand for health, the outcome variable of interest. To make it clear when we’re talking about specific people, we use subscripts as a stand-in for names: Yi is the health of individual i. The outcome Yi is recorded in our data. But, facing the choice of whether to pay for health insurance, person i has two potential outcomes, only one of which is observed. To distinguish one potential outcome from another, we add a second subscript: The road taken without health insurance leads to Y0i (read this as “y-zero-i”) for person i, while the road with health insurance leads to Y1i (read this as “y-one-i”) for person i. Potential outcomes lie at the end of each road one might take. The causal effect of insurance on health is the difference between them, written Y1i − Y0i.4

To nail this down further, consider the story of visiting Massachusetts Institute of Technology (MIT) student Khuzdar Khalat, recently arrived from Kazakhstan. Kazakhstan has a national health insurance system that covers all its citizens automatically (though you wouldn’t go there just for the health insurance). Arriving in Cambridge, Massachusetts, Khuzdar is surprised to learn that MIT students must decide whether to opt in to the university’s health insurance plan, for which MIT levies a hefty fee. Upon reflection, Khuzdar judges the MIT insurance worth paying for, since he fears upper respiratory infections in chilly New England. Let’s say that Y0i = 3 and Y1i = 4 for i = Khuzdar. For him, the causal effect of insurance is one step up on the NHIS scale:

Y1,Khuzdar − Y0,Khuzdar = 4 − 3 = 1.

Table 1.2 summarizes this information.

TABLE 1.2
Outcomes and treatments for Khuzdar and Maria

                                            Khuzdar Khalat   Maria Moreño
Potential outcome without insurance: Y0i          3                5
Potential outcome with insurance: Y1i             4                5
Treatment (insurance status chosen): Di           1                0
Actual health outcome: Yi                         4                5
Treatment effect: Y1i − Y0i                       1                0
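Table 1.2 can be transcribed directly into code. The only liberty taken here, as in the table itself, is that we get to see both potential outcomes for each student, something no real data set allows.

```python
# The imaginary Table 1.2 in code: potential outcomes, treatment choices,
# and the observed outcome each choice reveals.
students = {
    "Khuzdar": {"y0": 3, "y1": 4, "d": 1},  # buys the MIT insurance
    "Maria":   {"y0": 5, "y1": 5, "d": 0},  # passes it up
}

def observed_outcome(s):
    """Yi = Y1i when treated (Di = 1), Y0i otherwise."""
    return s["y1"] if s["d"] == 1 else s["y0"]

def treatment_effect(s):
    """Individual causal effect Y1i - Y0i (knowable only in this fable)."""
    return s["y1"] - s["y0"]

khuzdar, maria = students["Khuzdar"], students["Maria"]
observed_gap   = observed_outcome(khuzdar) - observed_outcome(maria)  # 4 - 5
causal_effect  = treatment_effect(khuzdar)                            # 1
selection_bias = khuzdar["y0"] - maria["y0"]                          # 3 - 5

# The observed gap splits into a causal effect plus selection bias.
assert observed_gap == causal_effect + selection_bias
print(observed_gap, causal_effect, selection_bias)
```

The final assertion previews the decomposition worked out in the text: the comparison we observe mixes Khuzdar's causal effect with his relative frailty.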

It’s worth emphasizing that Table 1.2 is an imaginary table: some of the information it describes must remain hidden. Khuzdar will either buy insurance, revealing his value of Y1i, or he won’t, in which case his Y0i is revealed. Khuzdar has walked many a long and dusty road in Kazakhstan, but even he cannot be sure what lies at the end of those not taken.

Maria Moreño is also coming to MIT this year; she hails from Chile’s Andean highlands. Little concerned by Boston winters, hearty Maria is not the type to fall sick easily. She therefore passes up the MIT insurance, planning to use her money for travel instead. Because Maria has Y0,Maria = Y1,Maria = 5, the causal effect of insurance on her health is

Y1,Maria − Y0,Maria = 5 − 5 = 0.

Maria’s numbers likewise appear in Table 1.2.

Since Khuzdar and Maria make different insurance choices, they offer an interesting comparison. Khuzdar’s health is YKhuzdar = Y1,Khuzdar = 4, while Maria’s is YMaria = Y0,Maria = 5. The difference between them is

YKhuzdar − YMaria = 4 − 5 = −1.

Taken at face value, this quantity—which we observe—suggests Khuzdar’s decision to buy insurance is counterproductive. His MIT insurance coverage notwithstanding, insured Khuzdar’s health is worse than uninsured Maria’s.

In fact, the comparison between frail Khuzdar and hearty Maria tells us little about the causal effects of their choices. This can be seen by linking observed and potential outcomes as follows:

YKhuzdar − YMaria = Y1,Khuzdar − Y0,Maria
                  = (Y1,Khuzdar − Y0,Khuzdar) + (Y0,Khuzdar − Y0,Maria).

The second line in this equation is derived by adding and subtracting Y0,Khuzdar, thereby generating two hidden comparisons that determine the one we see. The first comparison, Y1,Khuzdar − Y0,Khuzdar, is the causal effect of health insurance on Khuzdar, which is equal to 1. The second, Y0,Khuzdar − Y0,Maria, is the difference between the two students’ health status were both to decide against insurance. This term, equal to −2, reflects Khuzdar’s relative frailty. In the context of our effort to uncover causal effects, the lack of comparability captured by the second term is called selection bias.

You might think that selection bias has something to do with our focus on particular individuals instead of on groups, where, perhaps, extraneous differences can be expected to “average out.” But the difficult problem of selection bias carries over to comparisons of groups, though, instead of individual causal effects, our attention shifts to average causal effects. In a group of n people, average causal effects are written Avgn[Y1i − Y0i], where averaging is done in the usual way (that is, we sum individual outcomes and divide by n):

Avgn[Y1i − Y0i] = (1/n) Σ_{i=1}^{n} (Y1i − Y0i)
                = (1/n) Σ_{i=1}^{n} Y1i − (1/n) Σ_{i=1}^{n} Y0i.   (1.1)

The symbol Σ_{i=1}^{n} indicates a sum over everyone from i = 1 to n, where n is the size of the group over which we are averaging. Note that both summations in equation (1.1) are taken over everybody in the group of interest. The average causal effect of health insurance compares average health in hypothetical scenarios where everybody in the group does and does not have health insurance. As a computational matter, this is the average of individual causal effects like Y1,Khuzdar − Y0,Khuzdar and Y1,Maria − Y0,Maria for each student in our data.

An investigation of the average causal effect of insurance naturally begins by comparing the average health of groups of insured and uninsured people, as in Table 1.1. This comparison is facilitated by the construction of a dummy variable, Di, which takes on the values 0 and 1 to indicate insurance status:

Di = 1 if i has insurance
Di = 0 otherwise.

We can now write Avgn[Yi|Di = 1] for the average among the insured and Avgn[Yi|Di = 0] for the average among the uninsured. These quantities are averages conditional on insurance status.5
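These averages can be sketched for an invented four-person group in which, counterfactually, both potential outcomes are known for everyone; the numbers are arbitrary and serve only to show the mechanics.

```python
# Sketch of Avgn[Y1i - Y0i] (equation (1.1)) and of the conditional averages
# Avgn[Yi | Di = 1] and Avgn[Yi | Di = 0], for an invented group of n = 4
# people whose potential outcomes we pretend to know in full.
y0 = [3, 5, 2, 4]   # health without insurance (invented)
y1 = [4, 5, 4, 4]   # health with insurance (invented)
d  = [1, 0, 1, 0]   # insurance status chosen
y  = [y1[i] if d[i] == 1 else y0[i] for i in range(len(d))]  # observed Yi

def avg(xs):
    return sum(xs) / len(xs)

# Average causal effect over the whole group: sum differences, divide by n.
avg_causal_effect = avg([a - b for a, b in zip(y1, y0)])

# Averages conditional on insurance status, as in Table 1.1.
avg_insured   = avg([yi for yi, di in zip(y, d) if di == 1])
avg_uninsured = avg([yi for yi, di in zip(y, d) if di == 0])

print(avg_causal_effect, avg_insured, avg_uninsured)
```

Note that the conditional averages use only the observed Yi, while the average causal effect needs both potential outcomes; that gap between what we want and what we see is exactly the problem the chapter is unpacking.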

The average Yi for the insured is necessarily an average of outcome Y1i, but contains no information about Y0i. Likewise, the average Yi among the uninsured is an average of outcome Y0i, but this average is devoid of information about the corresponding Y1i. In other words, the road taken by those with insurance ends with Y1i, while the road taken by those without insurance leads to Y0i. This in turn leads to a simple but important conclusion about the difference in average health by insurance status:

Difference in group means = Avgn[Yi|Di = 1] − Avgn[Yi|Di = 0]
                          = Avgn[Y1i|Di = 1] − Avgn[Y0i|Di = 0],   (1.2)

an expression highlighting the fact that the comparisons in Table 1.1 tell us something about potential outcomes, though not necessarily what we want to know. We’re after Avgn[Y1i − Y0i], an average causal effect involving everyone’s Y1i and everyone’s Y0i, but we see average Y1i only for the insured and average Y0i only for the uninsured.

To sharpen our understanding of equation (1.2), it helps to imagine that health insurance makes everyone healthier by a constant amount, κ. As is the custom among our people, we use Greek letters to label such parameters, so as to distinguish them from variables or data; this one is the letter “kappa.” The constant-effects assumption allows us to write:

Y1i = Y0i + κ,   (1.3)

or, equivalently, Y1i − Y0i = κ. In other words, κ is both the individual and average causal effect of insurance on health. The question at hand is how comparisons such as those at the top of Table 1.1 relate to κ.

Using the constant-effects model (equation (1.3)) to substitute for Avgn[Y1i|Di = 1] in equation (1.2), we have:

Avgn[Y1i|Di = 1] − Avgn[Y0i|Di = 0]
    = κ + Avgn[Y0i|Di = 1] − Avgn[Y0i|Di = 0].

This equation reveals that health comparisons between those with and without insurance equal the causal effect of interest (κ) plus the difference in average Y0i between the insured and the uninsured. As in the parable of Khuzdar and Maria, this second term describes selection bias. Specifically, the difference in average health by insurance status can be written:

Difference in group means = Average causal effect + Selection bias,

where selection bias is defined as the difference in average Y0i between the groups being compared.

How do we know that the difference in means by insurance status is contaminated by selection bias? We know because Y0i is shorthand for everything about person i related to health, other than insurance status.
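This decomposition can be checked mechanically on a toy population. Everything below is invented: the constant effect, the baseline health values, and the assumption that the healthier half happens to be the insured half.

```python
# A toy population illustrating: difference in group means = kappa + selection
# bias, under an assumed constant effect. The insured are drawn from healthier
# baseline (Y0) stock, so the naive comparison overstates the causal effect.
kappa = 0.5                        # assumed constant causal effect
y0 = [5, 4, 5, 4, 3, 2, 3, 2]      # invented baseline health for 8 people
d  = [1, 1, 1, 1, 0, 0, 0, 0]      # the healthier four happen to be insured
y  = [y0i + kappa * di for y0i, di in zip(y0, d)]  # observed outcomes

def avg(xs):
    return sum(xs) / len(xs)

treated_y  = [yi  for yi,  di in zip(y,  d) if di == 1]
control_y  = [yi  for yi,  di in zip(y,  d) if di == 0]
treated_y0 = [y0i for y0i, di in zip(y0, d) if di == 1]
control_y0 = [y0i for y0i, di in zip(y0, d) if di == 0]

naive_gap = avg(treated_y) - avg(control_y)       # what Table 1.1 reports
sel_bias  = avg(treated_y0) - avg(control_y0)     # difference in average Y0i
assert naive_gap == kappa + sel_bias              # the decomposition holds
print(naive_gap, kappa, sel_bias)
```

Here the naive gap is five times the assumed causal effect, entirely because of who selected into insurance; nothing in the observed data alone distinguishes the two pieces.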

The lower part of Table 1.1 documents important noninsurance differences between the insured and uninsured, showing that ceteris isn’t paribus here in many ways. The insured in the NHIS are healthier for all sorts of reasons, including, perhaps, the causal effects of insurance. But the insured are also healthier because they are more educated, among other things. To see why this matters, imagine a world in which the causal effect of insurance is zero (that is, κ = 0). Even in such a world, we should expect insured NHIS respondents to be healthier, simply because they are more educated, richer, and so on. We wrap up this discussion by pointing out the subtle role played by

information like that reported in panel B of Table 1.1. This panel shows that the groups being compared differ in ways that we can observe. As we’ll see in the next chapter, if the only source of selection bias is a set of differences in characteristics that we can observe and measure, selection bias is (relatively) easy to fix. Suppose, for example, that the only source of selection bias in the insurance comparison is education. This bias is eliminated by focusing on samples of people with the same schooling, say, college graduates. Education is the same for insured and uninsured people in such a sample, because it’s the same for everyone in the sample. The subtlety in Table 1.1 arises because when observed differences

proliferate, so should our suspicions about unobserved differences. The fact that people with and without health insurance differ in many visible ways suggests that even were we to hold observed characteristics fixed, the uninsured would likely differ from the insured in ways we don’t see (after all, the list of variables we can see is partly fortuitous). In other words, even in a sample consisting of insured and uninsured people with the same education, income, and employment status, the insured might have higher values of Y0i. The principal challenge facing masters of ’metrics is elimination of the selection bias that arises from such unobserved differences.
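To see the selection bias mechanism in numbers, here is a small simulation of our own (an illustration, not the NHIS data): insurance has no causal effect at all (κ = 0), yet the insured look healthier because education raises both health and the likelihood of holding coverage.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Education raises potential health Y0 and the chance of being insured.
educ = rng.normal(size=n)
y0 = 70 + 5 * educ + rng.normal(scale=10, size=n)
insured = rng.random(n) < 1 / (1 + np.exp(-educ))

kappa = 0.0                    # true causal effect of insurance: none
y = y0 + kappa * insured       # observed health

naive_gap = y[insured].mean() - y[~insured].mean()
bias = y0[insured].mean() - y0[~insured].mean()
print(f"naive gap: {naive_gap:.2f}, selection bias: {bias:.2f}")
```

The naive comparison of average health by insurance status comes out positive even though the causal effect is zero; the entire gap is the selection bias term, the difference in average Y0i between the groups being compared.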

Breaking the Deadlock: Just RANDomize

My doctor gave me 6 months to live … but when I couldn’t pay the bill, he gave me 6 months more.
Walter Matthau

Experimental random assignment eliminates selection bias. The logistics of a randomized experiment, sometimes called a randomized trial, can be complex, but the logic is simple. To study the effects of health insurance in a randomized trial, we’d start with a sample of people who are currently uninsured. We’d then provide health insurance to a randomly chosen subset of this sample, and let the rest go to the emergency department if the need arises. Later, the health of the insured and uninsured groups can be compared. Random assignment makes this comparison ceteris paribus: groups insured and uninsured by random assignment differ only in their insurance status and any consequences that follow from it.

Suppose the MIT Health Service elects to forgo payment and tosses a coin to determine the insurance status of new students Ashish and Zandile (just this once, as a favor to their distinguished Economics Department). Zandile is insured if the toss comes up heads; otherwise, Ashish gets the coverage. A good start, but not good enough, since random assignment of two experimental subjects does not produce insured and uninsured apples. For one thing, Ashish is male and Zandile female. Women, as a rule, are healthier than men. If Zandile winds up healthier, it might be due to her good luck in having been born a woman and unrelated to her lucky draw in the insurance lottery. The problem here is that two is not enough to tango when it comes to random assignment. We must randomly assign treatment in a sample that’s large enough to ensure that differences in individual characteristics like sex wash out.

Two randomly chosen groups, when large enough, are indeed comparable. This fact is due to a powerful statistical property known as the Law of Large Numbers (LLN). The LLN characterizes the behavior of sample averages in relation to sample size. Specifically, the LLN says that a sample average can be brought as close as we like to the average in the population from which it is drawn (say, the population of American college students) simply by enlarging the sample.

To see the LLN in action, play dice.6 Specifically, roll a fair die once and save the result. Then roll again and average these two results. Keep on rolling and averaging. The numbers 1 to 6 are equally likely (that’s why the die is said to be “fair”), so we can expect to see each value an equal number of times if we play long enough. Since there are six possibilities here, and all are equally likely, the expected outcome is an equally weighted average of each possibility, with weights equal to 1/6:

E[Yi] = (1/6 × 1) + (1/6 × 2) + (1/6 × 3) + (1/6 × 4) + (1/6 × 5) + (1/6 × 6) = 3.5.

This average value of 3.5 is called a mathematical expectation; in this case, it’s the average value we’d get in infinitely many rolls of a fair die. The expectation concept is important to our work, so we define it formally here.
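The LLN claim is easy to check by simulation (a sketch of our own, not from the text): average more and more rolls of a fair die and watch the average settle near 3.5.

```python
import random

random.seed(42)

def average_roll(n_rolls):
    """Average outcome of n_rolls throws of a fair six-sided die."""
    return sum(random.randint(1, 6) for _ in range(n_rolls)) / n_rolls

# Averages wander for small samples but settle near E[Yi] = 3.5.
for n in (2, 20, 200, 2_000, 100_000):
    print(n, round(average_roll(n), 3))
```

With only two rolls, the average can easily land at 1 or 6; by a hundred thousand rolls, it rarely strays far from the expectation.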

MATHEMATICAL EXPECTATION  The mathematical expectation of a variable, Yi, written E[Yi], is the population average of this variable. If Yi is a variable generated by a random process, such as throwing a die, E[Yi] is the average in infinitely many repetitions of this process. If Yi is a variable that comes from a sample survey, E[Yi] is the average obtained if everyone in the population from which the sample is drawn were to be enumerated.

Rolling a die only a few times, the average toss may be far from the corresponding mathematical expectation. Roll two times, for example, and you might get boxcars or snake eyes (two sixes or two ones). These average to values well away from the expected value of 3.5. But as the number of tosses goes up, the average across tosses reliably tends to 3.5. This is the LLN in action (and it’s how casinos make a profit: in most gambling games, you can’t beat the house in the long run, because the expected payout for players is negative). More remarkably, it needn’t take too many rolls or too large a sample for a sample average to approach the expected value. The chapter appendix addresses the question of how the number of rolls or the size of a sample survey determines statistical accuracy.

In randomized trials, experimental samples are created by sampling from a population we’d like to study rather than by repeating a game, but the LLN works just the same. When sampled subjects are randomly divided (as if by a coin toss) into treatment and control groups, they come from the same underlying population. The LLN therefore promises that those in randomly assigned treatment and control samples will be similar if the samples are large enough. For example, we expect to see similar proportions of men and women in randomly assigned treatment and control groups. Random assignment also produces groups of about the same age and with similar schooling levels. In fact, randomly assigned groups should be similar in every way, including in ways that we cannot easily measure or observe. This is the root of random assignment’s awesome power to eliminate selection bias.

The power of random assignment can be described precisely using the following definition, which is closely related to the definition of mathematical expectation.

CONDITIONAL EXPECTATION The conditional expectation of a variable, Yi, given a dummy variable, Di = 1, is written E[Yi|Di = 1]. This is the average of Yi in the population that has Di equal to 1. Likewise, the conditional expectation of a variable, Yi, given Di = 0, written E[Yi|Di = 0], is the average of Yi in the population that has Di equal to 0. If Yi and Di are variables generated by a random process, such as throwing a die under different circumstances, E[Yi|Di = d] is the average of infinitely many repetitions of this process while holding the circumstances indicated by Di fixed at d. If Yi and Di come from a sample survey, E[Yi|Di = d] is the average computed when everyone in the population who has Di = d is sampled.
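Continuing the dice illustration (a sketch of our own), we can approximate a conditional expectation by averaging rolls while holding a circumstance fixed, say, a dummy Di equal to 1 for even outcomes:

```python
import random

random.seed(3)

# Approximate conditional expectations for a fair die, with a dummy
# Di = 1 when the roll is even and Di = 0 when it is odd.
rolls = [random.randint(1, 6) for _ in range(200_000)]
even = [y for y in rolls if y % 2 == 0]   # outcomes with Di = 1
odd = [y for y in rolls if y % 2 == 1]    # outcomes with Di = 0

e_given_even = sum(even) / len(even)   # tends to (2 + 4 + 6) / 3 = 4
e_given_odd = sum(odd) / len(odd)      # tends to (1 + 3 + 5) / 3 = 3
print(round(e_given_even, 3), round(e_given_odd, 3))
```

Holding Di fixed changes the average: E[Yi|Di = 1] ≈ 4 while E[Yi|Di = 0] ≈ 3, just as the definition describes.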

Because randomly assigned treatment and control groups come from the same underlying population, they are the same in every way, including their expected Y0i. In other words, the conditional expectations, E[Y0i|Di = 1] and E[Y0i|Di = 0], are the same. This in turn means that:

RANDOM ASSIGNMENT ELIMINATES SELECTION BIAS  When Di is randomly assigned, E[Y0i|Di = 1] = E[Y0i|Di = 0], and the difference in expectations by treatment status captures the causal effect of treatment:

E[Yi|Di = 1] − E[Yi|Di = 0] = E[Y1i|Di = 1] − E[Y0i|Di = 0]
    = κ + E[Y0i|Di = 1] − E[Y0i|Di = 0]
    = κ.

Provided the sample at hand is large enough for the LLN to work its magic (so we can replace the conditional averages in equation (1.4) with conditional expectations), selection bias disappears in a randomized experiment. Random assignment works not by eliminating individual differences but rather by ensuring that the mix of individuals being compared is the same. Think of this as comparing barrels that include equal proportions of apples and oranges. As we explain in the chapters that follow, randomization isn’t the only way to generate such ceteris paribus comparisons, but most masters believe it’s the best.

When analyzing data from a randomized trial or any other research design, masters almost always begin with a check on whether treatment and control groups indeed look similar. This process, called checking for balance, amounts to a comparison of sample averages as in panel B of Table 1.1. The average characteristics in panel B appear dissimilar or unbalanced, underlining the fact that the data in this table don’t come from anything like an experiment. It’s worth checking for balance in this manner any time you find yourself estimating causal effects.

Random assignment of health insurance seems like a fanciful proposition. Yet health insurance coverage has twice been randomly assigned to large representative samples of Americans. The RAND Health Insurance Experiment (HIE), which ran from 1974 to 1982, was one of the most influential social experiments in research history. The HIE enrolled 3,958 people aged 14 to 61 from six areas of the country. The HIE sample excluded Medicare participants and most Medicaid and military health insurance subscribers. HIE participants were randomly assigned to one of 14 insurance plans. Participants did not have to pay insurance premiums, but the plans had a variety of provisions related to cost sharing, leading to large differences in the amount of insurance they offered.

The most generous HIE plan offered comprehensive care for free. At the other end of the insurance spectrum, three “catastrophic coverage” plans required families to pay 95% of their health-care costs, though these costs were capped as a proportion of income (or capped at $1,000 per family, if that was lower). The catastrophic plans approximate a no-insurance condition. A second insurance scheme (the “individual deductible” plan) also required families to pay 95% of outpatient charges, but only up to $150 per person or $450 per family. A group of nine other plans had a variety of coinsurance provisions, requiring participants to cover anywhere from 25% to 50% of charges, but always capped at a proportion of income or $1,000, whichever was lower. Participating families enrolled in the experimental plans for 3 or 5 years and agreed to give up any earlier insurance coverage in return for a fixed monthly payment unrelated to their use of medical care.7

The HIE was motivated primarily by an interest in what economists call the price elasticity of demand for health care. Specifically, the RAND investigators wanted to know whether and by how much health-care use falls when the price of health care goes up. Families in the free care plan faced a price of zero, while coinsurance plans cut prices to 25% or 50% of costs incurred, and families in the catastrophic coverage and deductible plans paid something close to the sticker price for care, at least until they hit the spending cap. But the investigators also wanted to know whether more comprehensive and more generous health insurance coverage indeed leads to better health. The answer to the first question was a clear “yes”: health-care consumption is highly responsive to the price of care. The answer to the second question is murkier.

Randomized Results

Randomized field experiments are more elaborate than a coin toss, sometimes regrettably so. The HIE was complicated by having many small treatment groups, spread over more than a dozen insurance plans. The treatment groups associated with each plan are mostly too small for comparisons between them to be statistically meaningful. Most analyses of the HIE data therefore start by grouping subjects who were assigned to similar HIE plans together. We do that here as well.8

A natural grouping scheme combines plans by the amount of cost sharing they require. The three catastrophic coverage plans, with subscribers shouldering almost all of their medical expenses up to a fairly high cap, approximate a no-insurance state. The individual deductible plan provided more coverage, but only by reducing the cap on total expenses that plan participants were required to shoulder. The nine coinsurance plans provided more substantial coverage by splitting subscribers’ health-care costs with the insurer, starting with the first dollar of costs incurred. Finally, the free plan constituted a radical intervention that might be expected to generate the largest increase in health-care usage and, perhaps, health. This categorization leads us to four groups of plans: catastrophic, deductible, coinsurance, and free, instead of the 14 original plans. The catastrophic plans provide the (approximate) no-insurance control, while the deductible, coinsurance, and free plans are characterized by increasing levels of coverage.

As with nonexperimental comparisons, a first step in our experimental analysis is to check for balance. Do subjects randomly assigned to treatment and control groups—in this case, to health insurance schemes ranging from little to complete coverage—indeed look similar? We gauge this by comparing demographic characteristics and health data collected before the experiment began. Because demographic characteristics are unchanging, while the health variables in question were measured before random assignment, we expect to see only small differences in these variables across the groups assigned to different plans.

In contrast with our comparison of NHIS respondents’ characteristics by insurance status in Table 1.1, a comparison of characteristics across randomly assigned treatment groups in the RAND experiment shows the people assigned to different HIE plans to be similar. This can be seen in panel A of Table 1.3. Column (1) in this table reports averages for the catastrophic plan group, while the remaining columns compare the groups assigned more generous insurance coverage with the catastrophic control group. As a summary measure, column (5) compares a sample combining subjects in the deductible, coinsurance, and free plans with subjects in the catastrophic plans. Individuals assigned to the plans with more generous coverage are a little less likely to be female and a little less educated than those in the catastrophic plans. We also see some variation in income, but differences between plan groups are mostly small and are as likely to go one way as another. This pattern contrasts with the large and systematic demographic differences between insured and uninsured people seen in the NHIS data summarized in Table 1.1.

The small differences across groups seen in panel A of Table 1.3 seem likely to reflect chance variation that emerges naturally as part of the sampling process. In any statistical sample, chance differences arise because we’re looking at one of many possible draws from the underlying population from which we’ve sampled. A new sample of similar size from the same population can be expected to produce comparisons that are similar—though not identical—to those in the table. The question of how much variation we should expect from one sample to another is addressed by the tools of statistical inference.
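The logic of the balance check, and of random assignment more generally, can be sketched in simulation (our own illustration, not the RAND data): when treatment is assigned by a virtual coin toss, pre-treatment characteristics line up across groups, and the difference in mean outcomes recovers the causal effect.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50_000

# A population in which education drives potential health Y0.
educ = rng.normal(size=n)
y0 = 70 + 5 * educ + rng.normal(scale=10, size=n)

kappa = 2.0                    # true (constant) causal effect of treatment
d = rng.random(n) < 0.5        # random assignment: a virtual coin toss
y = y0 + kappa * d             # observed outcome

# Balance check: pre-treatment characteristics should be similar by group.
educ_gap = educ[d].mean() - educ[~d].mean()

# With random assignment, the difference in mean outcomes estimates kappa.
effect = y[d].mean() - y[~d].mean()
print(f"education gap: {educ_gap:.3f}, estimated effect: {effect:.2f}")
```

Unlike the insurance comparison in the NHIS, the education gap between the randomly assigned groups here is close to zero, so the naive difference in means is no longer contaminated by selection bias.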

TABLE 1.3 Demographic characteristics and baseline health in the RAND HIE

Notes: This table describes the demographic characteristics and baseline health of subjects in the RAND Health Insurance Experiment (HIE). Column (1) shows the average for the group assigned catastrophic coverage. Columns (2)–(5) compare averages in the deductible, cost- sharing, free care, and any insurance groups with the average in column (1). Standard errors are reported in parentheses in columns (2)–(5); standard deviations are reported in brackets in column (1).
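To make concrete how a difference in sample averages is juxtaposed with its standard error, here is a worked example with illustrative numbers of our own (not taken from Table 1.3): the standard error of a difference in means combines the two groups’ standard deviations and sample sizes.

```python
import math

# Illustrative numbers, not taken from the HIE: average health indices
# and standard deviations in a treatment group (1) and a control group (0).
n1, mean1, sd1 = 1000, 3.9, 1.2
n0, mean0, sd0 = 759, 3.7, 1.3

diff = mean1 - mean0
# Standard error of the difference in means (see the chapter appendix).
se = math.sqrt(sd1**2 / n1 + sd0**2 / n0)
t = diff / se  # a ratio above about 2 flags statistical significance

print(f"difference: {diff:.2f}, standard error: {se:.3f}, t-ratio: {t:.1f}")
```

Here the difference exceeds two standard errors, so it would be judged statistically significant; a difference half as large would be compatible with chance.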

The appendix to this chapter briefly explains how to quantify sampling variation with formal statistical tests. Such tests amount to the juxtaposition of differences in sample averages with their standard errors, the numbers in parentheses reported below the differences in averages listed in columns (2)–(5) of Table 1.3. The standard error of a difference in averages is a measure of its statistical precision: when a difference in sample averages is smaller than about two standard errors, the difference is typically judged to be a chance finding compatible with the hypothesis that the populations from which these samples were drawn are, in fact, the same.

Differences that are larger than about two standard errors are said to be statistically significant: in such cases, it is highly unlikely (though not impossible) that these differences arose purely by chance. Differences that are not statistically significant are probably due to the vagaries of the sampling process. The notion of statistical significance helps us interpret comparisons like those in Table 1.3. Not only are the differences in this table mostly small, only two (for proportion female in columns (4) and (5)) are more than twice as large as the associated standard errors. In tables with many comparisons, the presence of a few isolated statistically significant differences is usually also attributable to chance. We also take comfort from the fact that the standard errors in this table are not very big, indicating differences across groups are measured reasonably precisely.

Panel B of Table 1.3 complements the contrasts in panel A with evidence for reasonably good balance in pre-treatment outcomes across treatment groups. This panel shows no statistically significant differences in a pre-treatment index of general health. Likewise, pre-treatment cholesterol, blood pressure, and mental health appear largely unrelated to treatment assignment, with only a couple of contrasts close to statistical significance. In addition, although lower cholesterol in the free group suggests somewhat better health than in the catastrophic group, differences in the general health index between these two groups go the other way (since lower index values indicate worse health). Lack of a consistent pattern reinforces the notion that these gaps are due to chance.

The first important finding to emerge from the HIE was that subjects assigned to more generous insurance plans used substantially more health care. This finding, which vindicates economists’ view that demand for a good should go up when it gets cheaper, can be seen in panel A of Table 1.4.9 As might be expected, hospital inpatient admissions were less sensitive to price than was outpatient care, probably because admissions decisions are usually made by doctors. On the other hand, assignment to the free care plan raised outpatient spending by two-thirds (169/248) relative to spending by those in catastrophic plans, while total medical expenses increased by 45%. These large gaps are economically important as well as statistically significant.

Subjects who didn’t have to worry about the cost of health care clearly consumed quite a bit more of it. Did this extra care and expense make them healthier? Panel B in Table 1.4, which compares health indicators across HIE treatment groups, suggests not. Cholesterol levels, blood pressure, and summary indices of overall health and mental health are remarkably similar across groups (these outcomes were mostly measured 3 or 5 years after random assignment). Formal statistical tests show no statistically significant differences, as can be seen in the group-specific contrasts (reported in columns (2)–(4)) and in the differences in health between those in a catastrophic plan and everyone in the more generous insurance groups (reported in column (5)).

These HIE findings convinced many economists that generous health insurance can have unintended and undesirable consequences, increasing health-care usage and costs, without generating a dividend in the form of better health.10

TABLE 1.4 Health expenditure and health outcomes in the RAND HIE

Notes: This table reports means and treatment effects for health expenditure and health outcomes in the RAND Health Insurance Experiment (HIE). Column (1) shows the average for the group assigned catastrophic coverage. Columns (2)–(5) compare averages in the deductible, cost- sharing, free care, and any insurance groups with the average in column (1). Standard errors are reported in parentheses in columns (2)–(5); standard deviations are reported in brackets in column (1).

1.2 The Oregon Trail

MASTER KAN: Truth is hard to understand.

KWAI CHANG CAINE: It is a fact, it is not the truth. Truth is often hidden, like a shadow in darkness.
Kung Fu, Season 1, Episode 14

The HIE was an ambitious attempt to assess the impact of health insurance on health-care costs and health. And yet, as far as the contemporary debate over health insurance goes, the HIE might have missed the mark. For one thing, each HIE treatment group had at least catastrophic coverage, so financial liability for health-care costs was limited under every treatment. More importantly, today’s uninsured Americans differ considerably from the HIE population: most of the uninsured are younger, less educated, poorer, and less likely to be working. The value of extra health care in such a group might be very different than for the middle class families that participated in the HIE.

One of the most controversial ideas in the contemporary health policy arena is the expansion of Medicaid to cover the currently uninsured (interestingly, on the eve of the RAND experiment, talk was of expanding Medicare, the public insurance program for America’s elderly). Medicaid now covers families on welfare, some of the disabled, other poor children, and poor pregnant women. Suppose we were to expand Medicaid to cover those who don’t qualify under current rules. How would such an expansion affect health-care spending? Would it shift treatment from costly and crowded emergency departments to possibly more effective primary care? Would Medicaid expansion improve health?

Many American states have begun to “experiment” with Medicaid expansion in the sense that they’ve agreed to broaden eligibility, with the federal government footing most of the bill. Alas, these aren’t real experiments, since everyone who is eligible for expanded Medicaid coverage gets it. The most convincing way to learn about the consequences of Medicaid expansion is to randomly offer Medicaid coverage to people in currently ineligible groups. Random assignment of Medicaid seems too much to hope for. Yet, in an awesome social experiment, the state of Oregon recently offered Medicaid to thousands of randomly chosen people in a publicly announced health insurance lottery.

We can think of Oregon’s health insurance lottery as randomly selecting winners and losers from a pool of registrants, though coverage was not automatic, even for lottery winners. Winners won the opportunity to apply for the state-run Oregon Health Plan (OHP), the Oregon version of Medicaid. The state then reviewed these applications, awarding coverage to Oregon residents who were U.S. citizens or legal immigrants aged 19–64, not otherwise eligible for Medicaid, uninsured for at least 6 months, with income below the federal poverty level, and few financial assets. To initiate coverage, lottery winners had to document their poverty status and submit the required paperwork within 45 days.

The rationale for the 2008 OHP lottery was fairness and not research, but it’s no less awesome for that. The Oregon health insurance lottery provides some of the best evidence we can hope to find on the costs and benefits of insurance coverage for the currently uninsured, a fact that motivated research on OHP by MIT master Amy Finkelstein and her coauthors.11
