An Introduction to Statistical Inference and Its Applications with R

Contents

List of Figures
List of Tables
Preface

1 Experiments
  1.1 Examples
      1.1.1 Spinning a Penny
      1.1.2 The Speed of Light
      1.1.3 Termite Foraging Behavior
  1.2 Randomization
  1.3 The Importance of Probability
  1.4 Games of Chance
  1.5 Exercises

2 Mathematical Preliminaries
  2.1 Sets
  2.2 Counting
  2.3 Functions
  2.4 Limits
  2.5 Exercises

3 Probability
  3.1 Interpretations of Probability
  3.2 Axioms of Probability
  3.3 Finite Sample Spaces
  3.4 Conditional Probability
  3.5 Random Variables
  3.6 Case Study: Padrolling in Milton Murayama's All I asking for is my body
  3.7 Exercises

4 Discrete Random Variables
  4.1 Basic Concepts
  4.2 Examples
  4.3 Expectation
  4.4 Binomial Distributions
  4.5 Exercises

5 Continuous Random Variables
  5.1 A Motivating Example
  5.2 Basic Concepts
  5.3 Elementary Examples
  5.4 Normal Distributions
  5.5 Normal Sampling Distributions
  5.6 Exercises

6 Quantifying Population Attributes
  6.1 Symmetry
  6.2 Quantiles
      6.2.1 The Median of a Population
      6.2.2 The Interquartile Range of a Population
  6.3 The Method of Least Squares
      6.3.1 The Mean of a Population
      6.3.2 The Standard Deviation of a Population
  6.4 Exercises

7 Data
  7.1 The Plug-In Principle
  7.2 Plug-In Estimates of Mean and Variance
  7.3 Plug-In Estimates of Quantiles
      7.3.1 Box Plots
      7.3.2 Normal Probability Plots
  7.4 Kernel Density Estimates
  7.5 Case Study: Forearm Lengths
  7.6 Transformations
  7.7 Exercises

8 Lots of Data
  8.1 Averaging Decreases Variation
  8.2 The Weak Law of Large Numbers
  8.3 The Central Limit Theorem
  8.4 Exercises

9 Inference
  9.1 A Motivating Example
  9.2 Point Estimation
      9.2.1 Estimating a Population Mean
      9.2.2 Estimating a Population Variance
  9.3 Heuristics of Hypothesis Testing
  9.4 Testing Hypotheses about a Population Mean
      9.4.1 One-Sided Hypotheses
      9.4.2 Formulating Suitable Hypotheses
      9.4.3 Statistical Significance and Material Significance
  9.5 Set Estimation
      9.5.1 Sample Size
      9.5.2 One-Sided Confidence Intervals
  9.6 Exercises

10 1-Sample Location Problems
  10.1 The Normal 1-Sample Location Problem
      10.1.1 Point Estimation
      10.1.2 Hypothesis Testing
      10.1.3 Set Estimation
  10.2 The General 1-Sample Location Problem
      10.2.1 Hypothesis Testing
      10.2.2 Point Estimation
      10.2.3 Set Estimation
  10.3 The Symmetric 1-Sample Location Problem
      10.3.1 Hypothesis Testing
      10.3.2 Point Estimation
      10.3.3 Set Estimation
  10.4 Case Study: Deficit Unawareness
  10.5 Exercises

11 2-Sample Location Problems
  11.1 The Normal 2-Sample Location Problem
      11.1.1 Known Variances
      11.1.2 Unknown Common Variance
      11.1.3 Unknown Variances
  11.2 The Case of a General Shift Family
      11.2.1 Hypothesis Testing
      11.2.2 Point Estimation
      11.2.3 Set Estimation
  11.3 Case Study: Etruscan versus Italian Head Breadth
  11.4 Exercises

12 The Analysis of Variance
  12.1 The Fundamental Null Hypothesis
  12.2 Testing the Fundamental Null Hypothesis
      12.2.1 Known Population Variance
      12.2.2 Unknown Population Variance
  12.3 Planned Comparisons
      12.3.1 Orthogonal Contrasts
      12.3.2 Bonferroni t-Tests
  12.4 Post Hoc Comparisons
      12.4.1 Bonferroni t-Tests
      12.4.2 Scheffé F-Tests
  12.5 Case Study: Treatments of Anorexia
  12.6 Exercises

13 Goodness-of-Fit
  13.1 Partitions
  13.2 Test Statistics
  13.3 Testing Independence
  13.4 Exercises

14 Association
  14.1 Bivariate Distributions
  14.2 Normal Random Variables
      14.2.1 Bivariate Normal Samples
      14.2.2 Inferences about Correlation
  14.3 Monotonic Association
  14.4 Explaining Association
  14.5 Case Study: Anorexia Treatments Revisited
  14.6 Exercises

15 Simple Linear Regression
  15.1 The Regression Line
  15.2 The Method of Least Squares
  15.3 Computation
  15.4 The Simple Linear Regression Model
  15.5 Assessing Linearity
  15.6 Case Study: Are Thick Books More Valuable?
  15.7 Exercises

16 Simulation-Based Inference
  16.1 Termite Foraging Revisited
  16.2 The Bootstrap
  16.3 Case Study: Adventure Racing
  16.4 Exercises

R A Statistical Programming Language
  R.1 Introduction
      R.1.1 What Is R?
      R.1.2 Why Use R?
      R.1.3 Installing R
      R.1.4 Learning about R
  R.2 Using R
      R.2.1 Vectors
      R.2.2 R Is a Calculator!
      R.2.3 Some Statistics Functions
      R.2.4 Matrices
      R.2.5 Creating New Functions
  R.3 Functions That Accompany This Book
      R.3.1 Inferences about a Center of Symmetry
      R.3.2 Inferences about a Shift Parameter
      R.3.3 Inferences about Monotonic Association
      R.3.4 Exploring Bivariate Normal Data
      R.3.5 Simulating Random Termite Foraging

Index


Preface

To paraphrase Tuco (Eli Wallach) in The Good, the Bad, and the Ugly, there are two kinds of textbooks: those that sketch essential ideas, on which an instructor elaborates in class, and those that include all of the details for which an instructor rarely has time. This book is of the second kind. I began writing for the simple reason that I didn't have enough time in class to tell my students all of the things that I wanted to share with them.

This book has a long history. In the spring semesters of 1994 and 1995, I taught a second semester of statistics for undergraduate psychology majors at the University of Arizona. That course was an elective, recommended for strong students contemplating graduate study. I originally planned to teach nonparametric methods, then expanded the course and began developing Lecture Notes on Univariate Location Problems, which eventually evolved into Chapters 10–12 of the present book.

From Spring 1994 to Spring 1998, I regularly taught a service course at the University of Arizona, an introduction to statistics for graduate students in various quantitative disciplines. By 1998 my preferred text¹ was no longer in print; hence, when I moved to the College of William & Mary and created a comparable course for undergraduates (Math 351: Applied Statistics), I was obliged to find another. It was only then, and with considerable trepidation, that I decided to expand my own lecture notes and use them as my primary text.

Most of the material that I covered in Math 351 had been completed by 2006. When I moved to Indiana University, I continued to teach the same material as Stat S320: Introduction to Statistics. Presented with the opportunity to help build a new department, I found that at last I was ready to finish my manuscript and move on to other challenges.

¹ L. H. Koopmans (1987). Introduction to Contemporary Statistical Methods, Second Edition. Duxbury Press, Boston.

Who Should Read This Book?

This book is for students who like to think and who want to understand. It is a self-contained introduction to the methods of statistical inference for students who are comfortable with mathematical concepts and who like to be challenged. Students who are not mathematically inclined, who are intimidated by mathematical notation and formulas, should seek a more gentle approach. There are many ways to begin one's study of statistics and no one approach is ideal for everyone.

At the College of William & Mary, my primary constituency was a subset of undergraduate math majors, those who were more interested in how mathematics is used than in mathematical theory. Math 351 had to contain enough mathematics to be a math course, but it was always conceived as an applied course. A second constituency was a subset of students from other disciplines, often students contemplating graduate study, who wanted more than a superficial survey of various statistical recipes. I particularly remember a biology major, Emilie Snell-Rood, who told me that she wanted to understand what she was doing in her lab when she entered data into a computer program and drew scientific conclusions based on the numbers that it returned.

The prerequisite for Math 351 was two semesters of univariate calculus. This requirement provides a crude way of deciding who is prepared to read this book, but it is potentially misleading. Although the book alludes to several basic concepts from calculus, the student is never required to use the methods of calculus. For example, the probability that a continuous random variable X produces a value in the interval [a, b] is described as an area under a probability density function, but these areas are computed by elementary geometry or by computer software, never by integration. What a student needs to read this book is a certain degree of mathematical maturity.

Over the years, I have taught this material to many students who were taking a first course in statistics and quite a few who were taking a second. The ratio of the latter to the former is increasing as the popularity of AP statistics courses grows. Although this book does not suppose previous study of statistics, I can't recall a single student who felt that she/he had already learned the material in this book in a previous course.

Content

This book looks intimidating. It contains a great deal of mathematical notation, and it is not until one immerses oneself in the material that the benefits of that notation become apparent. I have adopted the convention of stating mathematical truths, i.e., facts that are true of logical necessity, as theorems. I have even included a small number of proofs!² To students who point to these theorems and complain that the book is too theoretical, my standard retort is that the theorems describe the conditions under which the statistical procedures can be used, and what's more useful than that? Although my decisions about how much theory to include are idiosyncratic, I hope that anyone who examines the examples, the case studies, and the exercises will be persuaded that this book is first and foremost about applied statistical inference.

² Some proofs were included to illustrate the nature of mathematical deduction. Others use elementary arguments. My hope is that certain students, possibly the statisticians of tomorrow, will be excited by the discovery that they already possess the tools to follow these arguments. All of the proofs are optional and I never cover them when I teach this material.

The organization of this book is also slightly idiosyncratic. I place greater emphasis on probability models than have most authors of recent introductory texts. Furthermore, I develop the tools of probability before progressing to the study of statistics. Although Chapter 1 motivates the study of probability, many pages intervene before Chapter 7 begins the transition from probability models to statistical inference.

Chapter 7 introduces a variety of summary statistics and diagnostic techniques that are used to extract information from random samples. Many introductions to statistics begin with this information, usually represented as descriptive statistics. The potential problem with such an arrangement is that it may be difficult to communicate to the student the sense that sample quantities are calculated for the purpose of drawing inferences about population quantities. This difficulty is avoided by deferring discussion of samples until the probabilistic tools for describing populations have been introduced. One can then introduce a single unifying concept—the plug-in principle—and derive a variety of sample quantities by applying the tools of discrete probability to the empirical distribution.
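To make the idea concrete, here is a minimal sketch in R of the plug-in principle, assuming a small invented sample (the values are not taken from the book): the empirical distribution places probability 1/n on each observation, so the plug-in estimates of the population mean and variance are simply the mean and variance of that discrete distribution.

    # A small invented sample, used only to illustrate the plug-in principle.
    x <- c(2.3, 1.7, 4.1, 3.6, 2.9)
    n <- length(x)

    # The empirical distribution places probability 1/n on each observed value.
    p <- rep(1/n, n)

    # Plug-in estimate of the population mean: the mean of the empirical
    # distribution (this equals mean(x)).
    mu.hat <- sum(p * x)

    # Plug-in estimate of the population variance: the variance of the
    # empirical distribution (note the divisor n, not the n-1 used by var(x)).
    sigma2.hat <- sum(p * (x - mu.hat)^2)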

I have attempted to organize the material on statistical inference so as to emphasize the importance of identifying the structure of the experiment and making appropriate assumptions about it. Thus, Chapter 10 on 1-sample location problems and Chapter 11 on 2-sample location problems each include material on both parametric and nonparametric procedures. I had originally intended to continue this program in subsequent chapters, but eventually it became clear that doing so would make the book far too long.

Computing plays a special role in this book. When I began teaching, either one did not use the computer or one relied on statistical packages like Minitab to perform entire analyses. Believing that students should not rely on computers before understanding the procedures themselves, I preferred the former approach. Naturally, this approach limits one to very small data sets, and even then the calculations can be extremely tedious. The rise of S-Plus and R created a third possibility: a course that frees students from tedious calculations and obsolete tables, but still requires them to understand the procedures that they use.

This book relies heavily on R, but in a way that privileges statistics over computation. It does not use advanced functions to perform entire analyses; rather, it explains how to use elementary functions to perform the individual steps of a procedure. For example, a student who performs a 1-sample t-test must still compute a sample mean and variance, insert these quantities into a test statistic, then compute a significance probability. Using R renders the calculations painless, but the procedure cannot be implemented without thought.
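For instance, a bare-bones version of that step-by-step calculation might look like the following sketch; the data vector and the hypothesized mean are invented for illustration, and the test itself is developed in Chapter 10.

    # Invented data and hypothesized population mean, for illustration only.
    x   <- c(98.6, 101.2, 99.8, 100.4, 102.1, 100.9)
    mu0 <- 100
    n   <- length(x)

    xbar  <- mean(x)                         # sample mean
    s2    <- var(x)                          # sample variance
    t.obs <- (xbar - mu0) / sqrt(s2 / n)     # test statistic

    # Two-sided significance probability from the t distribution
    # with n-1 degrees of freedom.
    p.value <- 2 * pt(-abs(t.obs), df = n - 1)

Each quantity is computed explicitly; R merely spares the student the arithmetic and the table lookup.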

The examples and exercises that appear in this book were constructed with great care. They include idealized experiments that allow one to focus on the concept at hand, imagined experiments that were inspired by real-world considerations, sanitized experiments that are simplified versions of actual experiments, and actual experiments that produced actual data. I have drawn extensively from my own experiences, but I have also made liberal use of data sets included in A Handbook of Small Data Sets.³ Instructors may find it convenient to acquire a copy of this extremely useful resource.

³ D. J. Hand, E. Daly, A. D. Lunn, K. J. McConway, and E. Ostrowski (1994). A Handbook of Small Data Sets. Chapman & Hall, London.

Web Resources

I have created a web page that I will use to disseminate materials that supplement the text. These materials include the R functions that accompany the text (see Appendix R), many of the data sets used in the text, and the inevitable list of errata. Here is the URL:

http://mypage.iu.edu/~mtrosset/StatInfeR.html

Acknowledgments

As this work progressed, I frequently found myself reflecting on how much I owe to so many people. My thinking about statistics has been shaped by many fine teachers and scholars, but C. Ralph Buncher, James R. Thompson, Erich L. Lehmann, and Peter J. Bickel especially influenced my early development. My unusual career trajectory (from academia to consulting to academia) has served me well, but I might not be in academia today without the support of David W. Scott, Yashaswini D. Mittal, and especially Richard A. Tapia, to whom this book is dedicated.

A great many people have contributed to this project. I am grateful for the feedback (sometimes unintentional!) of all the students on whom I tested material. I retain a special fondness for my students at the College of William & Mary, where most of this book was written. Many colleagues offered suggestions. Eva Czabarka, Larry Leemis, and Sebastian Schreiber taught courses from early drafts and offered invaluable criticism. When my creativity waned, I drew inspiration from the interests and exploits of friends and acquaintances. If Arlen Fuhrman appears too frequently in this book, it is because he is so extraordinarily entertaining.

I did not set out to write a book, only notes that would supplement my lectures. David Scott encouraged me to be more ambitious, but in December 2005 I wrote the following:

    Occasionally I toy with the notion of finding a publisher and finishing the project of Math 351, but always I conclude that this is a project that should continue to evolve. One of the challenges of teaching introductory courses lies in finding new and better ways to explain difficult concepts without trivializing them. Even after years of teaching the same material, I continue to discover new examples and improve my exposition of familiar concepts.

Bob Stern approached me at precisely the right moment and persuaded me to seek closure. Nevertheless, I took longer than expected to complete this project and I am profoundly grateful to Bob for his forbearance.

Finally, a special thanks to two very dear friends. This project began in Arizona and, 15 years later, it concludes in Indiana. Susan White always believed in me and bravely supported my decision to leave Tucson. Dana Fielding welcomed me to Bloomington and reminded me that life is an adventure.

Michael W. Trosset
Bloomington, IN
