

The Oxford Handbook of Quantitative Methods

OXFORD LIBRARY OF PSYCHOLOGY

Editor-in-Chief
Peter E. Nathan

Area Editors:

Clinical Psychology David H. Barlow

Cognitive Neuroscience Kevin N. Ochsner and Stephen M. Kosslyn

Cognitive Psychology Daniel Reisberg

Counseling Psychology Elizabeth M. Altmaier and Jo-Ida C. Hansen

Developmental Psychology Philip David Zelazo

Health Psychology Howard S. Friedman

History of Psychology David B. Baker

Methods and Measurement Todd D. Little

Neuropsychology Kenneth M. Adams

Organizational Psychology Steve W. J. Kozlowski

Personality and Social Psychology Kay Deaux and Mark Snyder

OXFORD LIBRARY OF PSYCHOLOGY

Editor-in-Chief

Peter E. Nathan

The Oxford Handbook of Quantitative Methods Edited by

Todd D. Little

Volume 1: Foundations


Oxford University Press, Inc., publishes works that further Oxford University's objective of excellence in research, scholarship, and education. Oxford New York Auckland Cape Town Dar es Salaam Hong Kong Karachi Kuala Lumpur Madrid Melbourne Mexico City Nairobi New Delhi Shanghai Taipei Toronto With offices in Argentina Austria Brazil Chile Czech Republic France Greece Guatemala Hungary Italy Japan Poland Portugal Singapore South Korea Switzerland Thailand Turkey Ukraine Vietnam

© 2013 by Oxford University Press, Inc.
Published by Oxford University Press, Inc.
198 Madison Avenue, New York, New York 10016
www.oup.com
Oxford is a registered trademark of Oxford University Press
All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, electronic, mechanical, photocopying, recording, or otherwise, without the prior permission of Oxford University Press

Library of Congress Cataloging-in-Publication Data
The Oxford handbook of quantitative methods / edited by Todd D. Little.
v. cm. – (Oxford library of psychology)
ISBN 978–0–19–993487–4  ISBN 978–0–19–993489–8
1. Psychology–Statistical methods. 2. Psychology–Mathematical models. I. Little, Todd D.
BF39.O927 2012
150.72 1—dc23
2012015005

9 8 7 6 5 4 3 2 1
Printed in the United States of America on acid-free paper

SHORT CONTENTS

Oxford Library of Psychology vii

About the Editor ix

Contributors xi

Table of Contents xvii

Chapters 1–479

Index 480


OXFORD LIBRARY OF PSYCHOLOGY

The Oxford Library of Psychology, a landmark series of handbooks, is published by Oxford University Press, one of the world’s oldest and most highly respected publishers, with a tradition of publishing significant books in psychology. The ambitious goal of the Oxford Library of Psychology is nothing less than to span a vibrant, wide-ranging field and, in so doing, to fill a clear market need. Encompassing a comprehensive set of handbooks, organized hierarchically, the Library incorporates volumes at different levels, each designed to meet a distinct need. At one level are a set of handbooks designed broadly to survey the major subfields of psychology; at another are numerous handbooks that cover important current focal research and scholarly areas of psychology in depth and detail. Planned as a reflection of the dynamism of psychology, the Library will grow and expand as psychology itself develops, thereby highlighting significant new research that will impact on the field. Adding to its accessibility and ease of use, the Library will be published in print and, later, electronically. The Library surveys psychology’s principal subfields with a set of handbooks that capture the current status and future prospects of those major subdisciplines. This initial set includes handbooks of social and personality psychology, clinical psychology, counseling psychology, school psychology, educational psychology, industrial and organizational psychology, cognitive psychology, cognitive neuroscience, methods and measurements, history, neuropsychology, personality assessment, developmental psychology, and more. Each handbook undertakes to review one of psychology’s major subdisciplines with breadth, comprehensiveness, and exemplary scholarship. In addition to these broadly conceived volumes, the Library also includes a large number of handbooks designed to explore in depth more specialized areas of scholarship and research, such as stress, health and coping, anxiety and related disorders, cognitive development, or child and adolescent assessment. In contrast to the broad coverage of the subfield handbooks, each of these latter volumes focuses on an especially productive, more highly focused line of scholarship and research. Whether at the broadest or most specific level, however, all of the Library handbooks offer synthetic coverage that reviews and evaluates the relevant past and present research and anticipates research in the future. Each handbook in the Library includes introductory and concluding chapters written by its editor to provide a roadmap to the handbook’s table of contents and to offer informed anticipations of significant future developments in that field. An undertaking of this scope calls for handbook editors and chapter authors who are established scholars in the areas about which they write. Many of the


nation's and world's most productive and best-respected psychologists have agreed to edit Library handbooks or write authoritative chapters in their areas of expertise. For whom has the Oxford Library of Psychology been written? Because of its breadth, depth, and accessibility, the Library serves a diverse audience, including graduate students in psychology and their faculty mentors, scholars, researchers, and practitioners in psychology and related fields. Each will find in the Library the information they seek on the subfield or focal area of psychology in which they work or are interested. Befitting its commitment to accessibility, each handbook includes a comprehensive index, as well as extensive references to help guide research. And because the Library was designed from its inception as an online as well as a print resource, its structure and contents will be readily and rationally searchable online. Further, once the Library is released online, the handbooks will be regularly and thoroughly updated. In summary, the Oxford Library of Psychology will grow organically to provide a thoroughly informed perspective on the field of psychology, one that reflects both psychology's dynamism and its increasing interdisciplinarity. Once published electronically, the Library is also destined to become a uniquely valuable interactive tool, with extended search and browsing capabilities. As you begin to consult this handbook, we sincerely hope you will share our enthusiasm for the more than 500-year tradition of Oxford University Press for excellence, innovation, and quality, as exemplified by the Oxford Library of Psychology.

Peter E. Nathan
Editor-in-Chief
Oxford Library of Psychology



ABOUT THE EDITOR

Todd D. Little
Todd D. Little, Ph.D., is a Professor of Psychology, Director of the Quantitative training program, Director of the undergraduate Social and Behavioral Sciences Methodology minor, and a member of the Developmental training program. Since 2010, Todd has been Director of the Center for Research Methods and Data Analysis (CRMDA) at Kansas University. Little is internationally recognized for his quantitative work on various aspects of applied SEM (e.g., indicator selection, parceling, modeling developmental processes) as well as his substantive developmental research (e.g., action-control processes and motivation, coping, and self-regulation). In 2001, Little was elected to membership in the Society for Multivariate Experimental Psychology. In 2009, he was elected President of APA's Division 5 (Evaluation, Measurement, and Statistics) and in 2010 was elected Fellow of the division. In 2012, he was elected Fellow in the Association for Psychological Science. He founded, organizes, and teaches in the internationally renowned KU "Stats Camps" each June (see crmda.KU.edu for details of the summer training programs). Little has edited five books related to methodology, including The Oxford Handbook of Quantitative Methods and the Guilford Handbook of Developmental Research Methods (with Brett Laursen and Noel Card). Little has been principal investigator or co-principal investigator on more than 15 grants and contracts, has served as statistical consultant on more than 60 grants, and has guided the development of more than 10 different measurement tools.



CONTRIBUTORS

Leona S. Aiken Department of Psychology Arizona State University Tempe, AZ Rawni A. Anderson Center for Research Methods and Data Analysis University of Kansas Lawrence, KS Luc Anselin GeoDa Center for Geospatial Analysis and Computation School of Geographical Sciences and Urban Planning Arizona State University Tempe, AZ Amanda N. Baraldi Department of Psychology Arizona State University Tempe, AZ David E. Bard Department of Pediatrics University of Oklahoma Health Sciences Center Oklahoma City, OK Theodore P. Beauchaine Department of Psychology Washington State University Pullman, WA Gabriëlla A.M. Blokland Genetic Epidemiology Laboratory Queensland Institute of Medical Research School of Psychology and Centre for Advanced Imaging University of Queensland Brisbane, Australia Annette Brose Max Planck Institute for Human Development Berlin, Germany Max Planck Institute for Human Cognitive and Brain Sciences

Timothy A. Brown Department of Psychology Boston University Boston, MA Trent D. Buskirk Department of Community HealthBiostatistics Division Saint Louis University Saint Louis, MO Noel A. Card Family Studies and Human Development University of Arizona Tucson, AZ Deborah M. Casper Family Studies and Human Development University of Arizona Tucson, AZ Daniel R. Cavagnaro Department of Psychology The Ohio State University Columbus, OH Rand D. Conger Department of Human and Community Development University of California at Davis Davis, CA David Cook Abt Associates Inc. Thomas D. Cook Institute for Policy Research Northwestern University Evanston, IL Stefany Coxe Department of Psychology Arizona State University Tempe, AZ R.J. de Ayala Department of Educational Psychology University of Nebraska Lincoln Lincoln, NE


Pascal R. Deboeck Department of Psychology University of Kansas Lawrence, KS Sarah Depaoli Department of Educational Psychology University of Wisconsin Madison Madison, WI Cody S. Ding College of Education University of Missouri-Saint Louis Saint Louis, MO M. Brent Donnellan Department of Psychology Michigan State University East Lansing, MI Dawnté R. Early Department of Human and Community Development University of California at Davis Davis, CA Craig K. Enders Department of Psychology Arizona State University Tempe, AZ David M. Erceg-Hurn School of Psychology University of Western Australia Crawley, WA, Australia Aurelio José Figueredo Department of Psychology School of Mind, Brain, & Behavior Division of Family Studies and Human Development College of Agriculture and Life Sciences University of Arizona Tucson, AZ Rafael Antonio Garcia Department of Psychology School of Mind, Brain, & Behavior University of Arizona Tucson, AZ Amanda C. Gottschall Department of Psychology Arizona State University Tempe, AZ Michael J. Greenacre Department of Economics and Business Universitat Pompeu Fabra, Barcelona Barcelona, Spain



Brian D. Haig Department of Psychology University of Canterbury Canterbury, New Zealand Kelly Hallberg Institute for Policy Research Northwestern University Evanston, IL Lisa L. Harlow Department of Psychology University of Rhode Island Kingston, RI Emily J. Hart Department of Psychology University at Buffalo The State University of New York Buffalo, NY Kit-Tai Hau The Chinese University of Hong Kong Hong Kong Joop J. Hox Department of Methodology and Statistics Utrecht University Utrecht, The Netherlands James Jaccard Department of Psychology Florida International University Boca Raton, FL Paul E. Johnson Department of Political Science Kansas University Lawrence, KS Kelly M. Kadlec Department of Psychology University of Southern California Los Angeles, CA David Kaplan Department of Educational Psychology University of Wisconsin-Madison Madison, WI Ken Kelley Department of Management University of Notre Dame Notre Dame, IN Harvey J. Keselman Department of Psychology University of Manitoba Winnipeg, Canada

Neal M. Kingston School of Education University of Kansas Lawrence, KS Yasemin Kisbu-Sakarya Department of Psychology Arizona State University Tempe, AZ Laura B. Kramer School of Education University of Kansas Lawrence, KS Todd D. Little Center for Research Methods and Data Analysis Department of Psychology University of Kansas Lawrence, KS Richard E. Lucas Department of Psychology Michigan State University East Lansing, MI David P. MacKinnon Department of Psychology Arizona State University Tempe, AZ Patrick Mair Institute for Statistics and Mathematics Vienna University of Economics and Business Vienna, Austria Herbert W. Marsh Department of Education University of Oxford Oxford, UK Katherine E. Masyn Graduate School of Education Harvard University Cambridge, MA John J. McArdle Department of Psychology University of Southern California Los Angeles, CA Roderick P. McDonald† Sydney University Sydney, Australia Professor Emeritus University of Illinois at Urbana-Champaign

Professor Emeritus Macquarie University † April 16, 1928 – October 29, 2011 Sarah E. Medland Genetic Epidemiology Laboratory Queensland Institute of Medical Research School of Psychology University of Queensland Brisbane, Australia Peter C. M. Molenaar Department of Human Development and Family Studies Pennsylvania State University University Park, PA Alexandre J.S. Morin Department of Psychology University of Sherbrooke Sherbrooke, Quebec, Canada Miriam A. Mosing Genetic Epidemiology Laboratory Queensland Institute of Medical Research School of Psychology University of Queensland Brisbane, Australia Keith E. Muller Department of Health Outcomes and Policy University of Florida Gainesville, FL Eun-Young Mun Center of Alcohol Studies Rutgers University Piscataway, NJ Alan T. Murray GeoDa Center for Geospatial Analysis and Computation School of Geographical Sciences and Urban Planning Arizona State University Tempe, AZ Jay I. Myung Department of Psychology The Ohio State University Columbus, OH Benjamin Nagengast Department of Education Oxford University Oxford, UK



Sally Gayle Olderbak Department of Psychology School of Mind, Brain, & Behavior University of Arizona Tucson, AZ Jamie M. Ostrov Department of Psychology University at Buffalo The State University of New York Buffalo, NY Trond Peterson Department of Sociology University of California-Berkeley Berkeley, CA Mark A. Pitt Department of Psychology The Ohio State University Columbus, OH Larry R. Price College of Education and College of Science Texas State University-San Marcos San Marcos, TX Nilam Ram Department of Human Development and Family Studies Pennsylvania State University University Park, PA Max Planck Institute for Human Development Berlin, Germany Sergio J. Rey GeoDa Center for Geospatial Analysis and Computation School of Geographical Sciences and Urban Planning Arizona State University Tempe, AZ Joseph L. Rodgers Department of Psychology University of Oklahoma Norman, OK Robert Rosenthal Department of Psychology University of California, Riverside Riverside, CA Ralph L. Rosnow Department of Psychology Temple University Philadelphia, PA



André A. Rupp Department of Measurement, Statistics, and Evaluation (EDMS) University of Maryland College Park, MD Gabriel Lee Schlomer Division of Family Studies and Human Development College of Agriculture and Life Sciences University of Arizona Tucson, AZ Christof Schuster Department of Psychology Justus-Liebig-Universität Giessen Giessen, Germany James P. Selig Department of Psychology University of New Mexico Albuquerque, NM Paul E. Spector Department of Psychology University of South Florida Tampa, FL Peter M. Steiner Department of Educational Psychology University of Wisconsin-Madison Madison, WI Carolin Strobl Ludwig-Maximilians-Universität München Faculty of Mathematics, Informatics and Statistics Institute of Statistics Munich, Germany Bruce Thompson Baylor College of Medicine Austin, TX Terry T. Tomazic Department of Sociology and Criminal Justice Saint Louis University Saint Louis, MO James T. Townsend Department of Psychological and Brain Sciences Indiana University Bloomington, IN Trisha Van Zandt Department of Psychology The Ohio State University Columbus, OH

Alexander von Eye Departments of Psychology Michigan State University East Lansing, MI University of Vienna Vienna, Austria Stefan von Weber Department of Mechanical Engineering Hochschule Furtwangen University Furtwangen im Schwarzwald, Germany Karin J.H. Verweij Genetic Epidemiology Laboratory Queensland Institute of Medical Research School of Psychology University of Queensland Brisbane, Australia Theodore A. Walls Department of Behavioral Science University of Rhode Island Kingston, RI Lihshing Leigh Wang Educational Studies Program University of Cincinnati Cincinnati, OH Amber S. Watts Center for Research Methods and Data Analysis and Lifespan Institute, Gerontology Center University of Kansas Lawrence, KS William W.S. Wei Department of Statistics Temple University Philadelphia, PA Zhonglin Wen South China Normal University Stephen G. West Department of Psychology Arizona State University Tempe, AZ

Keith F. Widaman Department of Psychology University of California at Davis Davis, CA Rand R. Wilcox Department of Psychology University of Southern California Los Angeles, CA Lisa M. Willoughby Department of Psychology Saint Louis University Saint Louis, MO Coady Wing Institute for Policy Research Northwestern University Evanston, IL Pedro Sofio Abril Wolf Department of Psychology University of Cape Town Cape Town, South Africa Vivian Wong School of Education and Social Policy Northwestern University Evanston, IL Carol M. Woods Center for Research Methods and Data Analysis and Department of Psychology University of Kansas Lawrence, KS Wei Wu Center for Research Methods and Data Analysis and Department of Psychology University of Kansas Lawrence, KS Ke-Hai Yuan Department of Psychology University of Notre Dame Notre Dame, IN




CONTENTS

1. Introduction 1
Todd D. Little
2. The Philosophy of Quantitative Methods 7
Brian D. Haig
3. Quantitative Methods and Ethics 32
Ralph L. Rosnow and Robert Rosenthal
4. Special Populations 55
Keith F. Widaman, Dawnté R. Early, and Rand D. Conger
5. Theory Construction, Model Building, and Model Selection 82
James Jaccard
6. Teaching Quantitative Psychology 105
Lisa L. Harlow
7. Modern Test Theory 118
Roderick P. McDonald
8. The IRT Tradition and its Applications 144
R.J. de Ayala
9. Survey Design and Measure Development 170
Paul E. Spector
10. High-Stakes Test Construction and Test Use 189
Neal M. Kingston and Laura B. Kramer
11. Effect Size and Sample Size Planning 206
Ken Kelley
12. Experimental Design for Causal Inference: Clinical Trials and Regression Discontinuity Designs 223
Kelly Hallberg, Coady Wing, Vivian Wong, and Thomas D. Cook
13. Matching and Propensity Scores 237
Peter M. Steiner and David Cook
14. Designs for and Analyses of Response Time Experiments 260
Trisha Van Zandt and James T. Townsend
15. Observational Methods 286
Jamie M. Ostrov and Emily J. Hart


16. A Primer of Epidemiologic Methods, Concepts, and Analysis with Examples and More Advanced Applications within Psychology 305
David E. Bard, Joseph L. Rodgers, and Keith E. Muller
17. Program Evaluation: Principles, Procedures, and Practices 332
Aurelio José Figueredo, Sally Gayle Olderbak, Gabriel Lee Schlomer, Rafael Antonio Garcia, and Pedro Sofio Abril Wolf
18. Overview of Statistical Estimation Methods 361
Ke-Hai Yuan and Christof Schuster
19. Robust Statistical Estimation 388
David M. Erceg-Hurn, Rand R. Wilcox, and Harvey J. Keselman
20. Bayesian Statistical Methods 407
David Kaplan and Sarah Depaoli
21. Mathematical Modeling 438
Daniel R. Cavagnaro, Jay I. Myung, and Mark A. Pitt
22. Monte Carlo Analysis in Academic Research 454
Paul E. Johnson
Index 480

CHAPTER

1

Introduction

Todd D. Little

Abstract
In this introductory chapter to The Oxford Handbook of Quantitative Methods, I provide an overview of the two volumes. More specifically, I describe the rationale and motivation for the selected topics that are presented in the volumes. I also list my instructions to the chapter authors and then describe how the chapters fit together into thematic groupings. I also extend my sincerest gratitude to the persons who assisted me along the way, as no work this comprehensive can be done without the considerable help and assistance of many persons. I conclude with how pleased I am with the quality and comprehensiveness of the chapters that are included.
Key Words: Overview; Quantitative Methods; Methodology; Statistics

Oxford Handbooks provide a crucial venue to communicate the current state of the field. They also provide a one-stop source for learning and reviewing current best practices in a field. The Oxford Handbook of Quantitative Methods serves both of these functions. The field of quantitative methods is quite broad, as you can probably imagine. I have tried to be thorough in my selection of topics to be covered. As with any handbook of this magnitude, some topics were all set to have a contribution submitted, only to have some unforeseen hindrance preclude its inclusion at the last minute (e.g., graphical representations of data, ecological inference, history of quantitative methods). Some topics overlap with others and may not have found their way to become a separate chapter, but their fundamental elements are found in parts of other chapters. This handbook is one of many that Oxford University Press (OUP) is assembling but will be the capstone methodology handbook. As many of you know, OUP is building a comprehensive and

synthetic Library of Handbooks covering the field of psychology (the Editor-in-Chief of the library is Peter Nathan, University of Iowa Foundation Distinguished Professor of Psychology and Public Health). The library comprises handbooks in the truest sense of the word: books that summarize and synthesize a topic, define the current scholarship, and set the agenda for future research. Each handbook is published as a bound book, and it will also be developed for electronic delivery. In this format, the content will be integrated across topics and available as a fully integrated electronic library. I think the idea of a comprehensive electronic library is very forward-thinking. This format is a very attractive opportunity to have a fully comprehensive and up-to-date handbook of methods in our field. Hence, I agreed to take on the role of editor of The Oxford Handbook of Quantitative Methods. I am very pleased with the quality of the work that each author provided. As per my request to the contributing authors, each chapter is meant to be both accessible and comprehensive; nearly all the


authors were very responsive to my requests. The guidelines I asked authors to consider were:
• Handbook chapters should be comprehensive and authoritative; readers will rely heavily on these chapters, particularly when they move to the online format.
• Handbook chapters should present not only the strengths of the topic covered but also any limitations.
• Handbook chapters should make all assumptions underlying the topic explicit.
• Regarding citations, handbook chapters should cover the historical origins as well as the recent renditions of a given key topic.
• Handbook chapters should not present one-sided views on any debate; rather, they should report the issues and present the arguments—both pro and con. Authors can direct readers to other platforms where a position piece is presented.
• To facilitate the online linkages, handbook chapters should point to other online resources related to the topic presented.
• Every element of every formula presented must be explicitly explained; assume no knowledge of how to read formulae.
• Examples, examples, examples, and, when in doubt, provide an example! Concrete examples are absolutely critical to communicate quantitative content.
• Avoid jargon and acronyms. Please spell out acronyms, and if you use jargon, please remind the reader of the meaning or definition of the jargon every three to four times it is used; similarly, if you use an acronym, then remind the reader of what it means every three to four times it is used.
• Use active voice, and do not shy away from the use of I/me or we/us. Channel how you lecture on the topic. It will create a crisp and enjoyable read.
• Do not start a sentence with "This" followed by a verb. The referent to "this" must be restated because of the ambiguity this creates. This general guideline should be followed as a rule!

Authors, like editors, have preferences and habits, so you will find places, chapters, and so on where some of my admonitions were not followed. But the quality of the product that each chapter provides is nonetheless uncompromised. We have established a Wiki-based resource page for the handbook, which can be found at crmda.KU.edu/oxford. Each author has been asked to maintain and upload materials to


support his or her chapter contribution. At the top of that page is a link that encourages you to offer comments and suggestions on the topics and coverage of the handbook. These comments will be reviewed and integrated into future editions of this handbook. I encourage you, therefore, to take advantage of this opportunity to help shape the directions and content coverage of this handbook. Statistical software has blossomed with the advent of hardware that provides the necessary speed and memory, and programming languages coupled with numerical algorithms that are more efficient and optimized than yesteryear. These software advances have allowed many of the advances in modern statistics to become accessible to the typical end-user. Modern missing data algorithms and Bayesian estimation procedures, for example, have been the beneficiaries of these advances. Of course, some of the software developments have included simplified interfaces with slick graphic user interfaces. The critical options are usually prefilled with default settings. These latter two aspects of advancing software are unfortunate because they lead to mindless applications of the statistical techniques. I would prefer that options not be set as default but, rather, have the software prompt the user to make a choice (and give good help for what each choice means). I would prefer that a complete script of the GUI choices and the order in which steps were taken be automatically saved and displayed. I have organized the handbook by starting with some basics. It begins with the philosophical underpinnings associated with science and quantitative methods (Haig, Chapter 2, Volume 1), followed by a discussion of how to construct theories and models so that they can be tested empirically and the best model selected (Jaccard, Chapter 5, Volume 1). I then turn to an enlightened discussion of ethics in the conduct of quantitative research (Rosnow & Rosenthal, Chapter 3, Volume 1) and related issues when quantitative methods are applied in special populations (Widaman, Early, & Conger, Chapter 4, Volume 1). Harlow (Chapter 6, Volume 1) follows with an encompassing and impassioned discussion of teaching quantitative methods. The theme in the next grouping of chapters centers on measurement issues. First, the late McDonald (Chapter 7, Volume 1) provides a thorough overview of Modern Test Theory.1 De Ayala (Chapter 8, Volume 1) adds a detailed discussion of Item Response Theory as an essential measurement and analysis tool. After these principles

of measurement are discussed, the principles and practices surrounding survey design and measure development are presented (Spector, Chapter 9, Volume 1). Kingston and Kramer (Chapter 10, Volume 1) further this discussion in the context of high-stakes testing. A next grouping of chapters covers various design issues. Kelley (Chapter 11, Volume 1) begins this section by covering issues of power, effect size, and sample size planning. Hallberg, Wing, Wong, and Cook (Chapter 12, Volume 1) then address key experimental designs for causal inference: the gold-standard randomized clinical trial (RCT) design and the underutilized regression discontinuity design. Some key quasi-experimental procedures for comparing groups are discussed in Steiner and Cook's (Chapter 13, Volume 1) chapter on using matching and propensity scores. Finally, Van Zandt and Townsend (Chapter 14, Volume 1) provide a detailed discussion of the designs for and analyses of response time experiments. I put observational methods (Ostrov & Hart, Chapter 15, Volume 1), epidemiological methods (Bard, Rodgers, & Muller, Chapter 16, Volume 1), and program evaluation (Figueredo, Olderbak, & Schlomer, Chapter 17, Volume 1) in with these chapters because they address more collection and design issues, although the discussion of program evaluation also addresses the unique analysis and presentation issues. I have a stellar group of chapters related to estimation issues. Yuan and Schuster (Chapter 18, Volume 1) provide an overview of statistical estimation methods; Erceg-Hurn, Wilcox, and Keselman (Chapter 19, Volume 1) provide a nice complement with a focus on robust estimation techniques. Bayesian statistical estimation methods are thoroughly reviewed in the Kaplan and Depaoli (Chapter 20, Volume 1) contribution. The details of mathematical modeling are synthesized in this section by Cavagnaro, Myung, and Pitt (Chapter 21, Volume 1). This section is completed by Johnson (Chapter 22, Volume 1), who discusses the many issues and nuances involved in conducting Monte Carlo simulations to address the what-would-happen-if questions that we often need to answer. The foundational techniques for the statistical analysis of quantitative data start with a detailed overview of the traditional methods that have marked the social and behavioral sciences (i.e., the General Linear Model; Thompson, Chapter 2, Volume 2). Coxe, West, and Aiken (Chapter 3,

Volume 2) then extend the General Linear Model to discuss the Generalized Linear Model. This discussion is easily followed by Woods (Chapter 4, Volume 2), who synthesizes the various techniques of analyzing categorical data. After the chapter on configural frequency analysis by von Eye, Mun, Mair, and von Weber (Chapter 5, Volume 2), I then segue into nonparametric techniques (Buskirk, Tomazic, & Willoughby, Chapter 6, Volume 2) and the more specialized techniques of correspondence analysis (Greenacre, Chapter 7, Volume 2) and spatial analysis (Anselin, Murray, & Rey, Chapter 8, Volume 2). This section is capped with chapters dedicated to special areas of research—namely, techniques and issues related to the analysis of imaging data (e.g., fMRI; Price, Chapter 9, Volume 2). The closely aligned worlds of behavior genetics (i.e., twin studies; Blokland, Mosing, Verweij, & Medland, Chapter 10, Volume 2) and genes (Medland, Chapter 11, Volume 2) follow. The foundations of multivariate techniques are grouped beginning with Ding's (Chapter 12, Volume 2) presentation of multidimensional scaling and Brown's (Chapter 13, Volume 2) summary of the foundations of latent variable measurement models. Hox (Chapter 14, Volume 2) layers in the multilevel issues as handled in both the manifest regression framework and the latent variable work of structural equation modeling. McArdle and Kadlec (Chapter 15, Volume 2) detail, in broad terms, different structural equation models and their utility. MacKinnon, Kisbu-Sakarya, and Gottschall (Chapter 16, Volume 2) address the many new developments in mediation analysis, while Marsh, Hau, Wen, and Nagengast (Chapter 17, Volume 2) do the same for analyses of moderation. The next group of chapters focuses on repeated measures and longitudinal designs. It begins with a chapter I co-wrote with Wu and Selig that provides a general overview of longitudinal models (Wu, Selig, & Little, Chapter 18, Volume 2). Deboeck (Chapter 19, Volume 2) takes things further into the burgeoning world of dynamical systems and continuous-time models for longitudinal data. Relatedly, Walls (Chapter 20, Volume 2) provides an overview of designs for intensive longitudinal data collection and analysis. The wonderful world of dynamic-factor models (a multivariate model for single-subject data) is presented by Ram, Brose, and Molenaar (Chapter 21, Volume 2). Wei (Chapter 22, Volume 2) covers all the issues of traditional time-series models, and Peterson (Chapter 23,


Volume 2) rounds out this section with a thorough coverage of event history models. The volume finishes with two small sections. The first focuses on techniques dedicated to finding heterogeneous subgroups in one's data. Rupp (Chapter 24, Volume 2) covers traditional clustering and classification procedures. Masyn and Nylund-Gibson (Chapter 25, Volume 2) cover the model-based approaches encompassed under the umbrella of mixture modeling. Beauchaine (Chapter 26, Volume 2) completes this first group with his coverage of the nuances of taxometrics. The second of these final sections covers issues related to secondary analyses of extant data. I put the chapter on missing data in here because it generally is applied after data collection occurs, but it is also a little out of order here because of the terrific and powerful features of planned missing data designs. In this regard, Baraldi and Enders (Chapter 27, Volume 2) could have gone into the design section. Donnellan and Lucas (Chapter 28, Volume 2) cover the issues associated with analyzing the large-scale archival data sets that are available via federal funding agencies such as NCES, NIH, NSF, and the like. Data mining can also be classified as a set of secondary modeling procedures, and Strobl's (Chapter 29, Volume 2) chapter covers the techniques and issues in this emerging field of methodology. Card and Casper (Chapter 30, Volume 2) cover the still-advancing world of meta-analysis and current best practices in the quantitative synthesis of published studies. The final chapter of The Oxford Handbook of Quantitative Methods is one I co-authored with Wang, Watts, and Anderson (Wang, Watts, Anderson, & Little, Chapter 31, Volume 2). In this capstone chapter, we address the many pervasive fallacies that still permeate the world of quantitative methodology. A venture such as this does involve the generous and essential contributions of expert reviewers. Many of the chapter authors also served as reviewers for other chapters, and I won't mention them by name here. I do want to express gratitude to a number of ad hoc reviewers who assisted me along the way (in arbitrary order): Steve Lee, Kris Preacher, Mijke Rhemtulla, Chantelle Dowsett, Jason Lee, Michael Edwards, and David Johnson (I apologize now if I have forgotten that you reviewed a chapter for me!). I also owe a debt of gratitude to Chad Zimmerman at OUP, who was relentless in guiding us through the incremental steps needed to herd us all to a final and pride-worthy end product, and to Anne Dellinger, who was instrumental in bringing closure to this mammoth project.


Author note
Partial support for this project was provided by grant NSF 1053160 (Todd D. Little & Wei Wu, co-PIs) and by the Center for Research Methods and Data Analysis at the University of Kansas (Todd D. Little, director). Correspondence concerning this work should be addressed to Todd D. Little, Center for Research Methods and Data Analysis, University of Kansas, 1425 Jayhawk Blvd., Watson Library 470, Lawrence, KS 66045. E-mail: [email protected]. Web: crmda.ku.edu.

Note
1. This chapter was completed shortly before Rod's unexpected passing. His legacy and commitment to quantitative methods were uncompromising, and we will miss his voice of wisdom and his piercing intellect; R.I.P., Rod McDonald, and, as you once said, pervixi….

References
Anselin, L., Murray, A. T., & Rey, S. J. (2012). Spatial analysis. In T. D. Little (Ed.), The Oxford Handbook of Quantitative Methods (Vol. 2, pp. 154–174). New York: Oxford University Press.
Baraldi, A. N. & Enders, C. K. (2012). Missing data methods. In T. D. Little (Ed.), The Oxford Handbook of Quantitative Methods (Vol. 2, pp. 635–664). New York: Oxford University Press.
Bard, D. E., Rodgers, J. L., & Muller, K. E. (2012). A primer of epidemiology methods with applications in psychology. In T. D. Little (Ed.), The Oxford Handbook of Quantitative Methods (Vol. 1, pp. 305–349). New York: Oxford University Press.
Beauchaine, T. P. (2012). Taxometrics. In T. D. Little (Ed.), The Oxford Handbook of Quantitative Methods (Vol. 2, pp. 612–634). New York: Oxford University Press.
Brown, T. A. (2012). Latent variable measurement models. In T. D. Little (Ed.), The Oxford Handbook of Quantitative Methods (Vol. 2, pp. 257–280). New York: Oxford University Press.
Buskirk, T. D., Tomazic, T. T., & Willoughby, L. (2012). Nonparametric statistical techniques. In T. D. Little (Ed.), The Oxford Handbook of Quantitative Methods (Vol. 2, pp. 106–141). New York: Oxford University Press.
Card, N. A. & Casper, D. M. (2012). Meta-analysis and quantitative research synthesis. In T. D. Little (Ed.), The Oxford Handbook of Quantitative Methods (Vol. 2, pp. 701–717). New York: Oxford University Press.
Cavagnaro, D. R., Myung, J. I., & Pitt, M. A. (2012). Mathematical modeling. In T. D. Little (Ed.), The Oxford Handbook of Quantitative Methods (Vol. 1, pp. 438–453). New York: Oxford University Press.
Coxe, S., West, S. G., & Aiken, L. S. (2012). Generalized linear models. In T. D. Little (Ed.), The Oxford Handbook of Quantitative Methods (Vol. 2, pp. 26–51). New York: Oxford University Press.
De Ayala, R. J. (2012). The IRT tradition and its applications. In T. D. Little (Ed.), The Oxford Handbook of Quantitative Methods (Vol. 1, pp. 144–169). New York: Oxford University Press.
Deboeck, P. R. (2012). Dynamical systems and models of continuous time. In T. D. Little (Ed.), The Oxford Handbook of Quantitative Methods (Vol. 2, pp. 411–431). New York: Oxford University Press.

Ding, C. S. (2012). Multidimensional scaling. In T. D. Little (Ed.), The Oxford Handbook of Quantitative Methods (Vol. 2, pp. 7–25). New York: Oxford University Press.
Donnellan, M. B. & Lucas, R. E. (2012). Secondary data analysis. In T. D. Little (Ed.), The Oxford Handbook of Quantitative Methods (Vol. 2, pp. 665–677). New York: Oxford University Press.
Erceg-Hurn, D. M., Wilcox, R. R., & Keselman, H. J. (2012). Robust statistical estimation. In T. D. Little (Ed.), The Oxford Handbook of Quantitative Methods (Vol. 1, pp. 388–406). New York: Oxford University Press.
Figueredo, A. J., Olderbak, S. G., & Schlomer, G. L. (2012). Program evaluation: Principles, procedures, and practices. In T. D. Little (Ed.), The Oxford Handbook of Quantitative Methods (Vol. 1, pp. 332–360). New York: Oxford University Press.
Greenacre, M. J. (2012). Correspondence analysis. In T. D. Little (Ed.), The Oxford Handbook of Quantitative Methods (Vol. 2, pp. 142–153). New York: Oxford University Press.
Haig, B. D. (2012). The philosophy of quantitative methods. In T. D. Little (Ed.), The Oxford Handbook of Quantitative Methods (Vol. 1, pp. 7–31). New York: Oxford University Press.
Hallberg, K., Wing, C., Wong, V., & Cook, T. D. (2012). Experimental design for causal inference: Clinical trials and regression discontinuity designs. In T. D. Little (Ed.), The Oxford Handbook of Quantitative Methods (Vol. 1, pp. 223–236). New York: Oxford University Press.
Harlow, L. (2012). Teaching quantitative psychology. In T. D. Little (Ed.), The Oxford Handbook of Quantitative Methods (Vol. 1, pp. 105–117). New York: Oxford University Press.
Hox, J. J. (2012). Multilevel regression and multilevel structural equation modeling. In T. D. Little (Ed.), The Oxford Handbook of Quantitative Methods (Vol. 2, pp. 281–294). New York: Oxford University Press.
Jaccard, J. (2012). Theory construction, model building, and model selection. In T. D. Little (Ed.), The Oxford Handbook of Quantitative Methods (Vol. 1, pp. 82–104). New York: Oxford University Press.
Johnson, P. E. (2012). Monte Carlo analysis in academic research. In T. D. Little (Ed.), The Oxford Handbook of Quantitative Methods (Vol. 1, pp. 454–479). New York: Oxford University Press.
Kaplan, D. & Depaoli, S. (2012). Bayesian statistical methods. In T. D. Little (Ed.), The Oxford Handbook of Quantitative Methods (Vol. 1, pp. 407–437). New York: Oxford University Press.
Kelley, K. (2012). Effect size and sample size planning. In T. D. Little (Ed.), The Oxford Handbook of Quantitative Methods (Vol. 1, pp. 206–222). New York: Oxford University Press.
Kingston, N. M. & Kramer, L. B. (2012). High stakes test construction and test use. In T. D. Little (Ed.), The Oxford Handbook of Quantitative Methods (Vol. 1, pp. 189–205). New York: Oxford University Press.
MacKinnon, D. P., Kisbu-Sakarya, Y., & Gottschall, A. C. (2012). Developments in mediation analysis. In T. D. Little (Ed.), The Oxford Handbook of Quantitative Methods (Vol. 2, pp. 338–360). New York: Oxford University Press.

Marsh, H. W., Hau, K-T., Wen, Z., & Nagengast, B. (2012). Moderation. In T. D. Little (Ed.), The Oxford Handbook of Quantitative Methods (Vol. 2, pp. 361–386). New York: Oxford University Press.
Masyn, K. E. & Nylund-Gibson, K. (2012). Mixture modeling. In T. D. Little (Ed.), The Oxford Handbook of Quantitative Methods (Vol. 2, pp. 551–611). New York: Oxford University Press.
McArdle, J. J. & Kadlec, K. M. (2012). Structural equation models. In T. D. Little (Ed.), The Oxford Handbook of Quantitative Methods (Vol. 2, pp. 295–337). New York: Oxford University Press.
McDonald, R. P. (2012). Modern test theory. In T. D. Little (Ed.), The Oxford Handbook of Quantitative Methods (Vol. 1, pp. 118–143). New York: Oxford University Press.
Medland, S. E. (2012). Quantitative analysis of genes. In T. D. Little (Ed.), The Oxford Handbook of Quantitative Methods (Vol. 2, pp. 219–234). New York: Oxford University Press.
Ostrov, J. M. & Hart, E. J. (2012). Observational methods. In T. D. Little (Ed.), The Oxford Handbook of Quantitative Methods (Vol. 1, pp. 286–304). New York: Oxford University Press.
Peterson, T. (2012). Analyzing event history data. In T. D. Little (Ed.), The Oxford Handbook of Quantitative Methods (Vol. 2, pp. 486–516). New York: Oxford University Press.
Price, L. R. (2012). Analysis of imaging data. In T. D. Little (Ed.), The Oxford Handbook of Quantitative Methods (Vol. 2, pp. 175–197). New York: Oxford University Press.
Ram, N., Brose, A., & Molenaar, P. C. M. (2012). Dynamic factor analysis: Modeling person-specific process. In T. D. Little (Ed.), The Oxford Handbook of Quantitative Methods (Vol. 2, pp. 441–457). New York: Oxford University Press.
Rosnow, R. L. & Rosenthal, R. (2012). Quantitative methods and ethics. In T. D. Little (Ed.), The Oxford Handbook of Quantitative Methods (Vol. 1, pp. 32–54). New York: Oxford University Press.
Rupp, A. A. (2012). Clustering and classification. In T. D. Little (Ed.), The Oxford Handbook of Quantitative Methods (Vol. 2, pp. 517–611). New York: Oxford University Press.
Spector, P. E. (2012). Survey design and measure development. In T. D. Little (Ed.), The Oxford Handbook of Quantitative Methods (Vol. 1, pp. 170–188). New York: Oxford University Press.
Steiner, P. M. & Cook, D. (2012). Matching and propensity scores. In T. D. Little (Ed.), The Oxford Handbook of Quantitative Methods (Vol. 1, pp. 237–259). New York: Oxford University Press.
Strobl, C. (2012). Data mining. In T. D. Little (Ed.), The Oxford Handbook of Quantitative Methods (Vol. 2, pp. 678–700). New York: Oxford University Press.
Thompson, B. (2012). Overview of traditional/classical statistical approaches. In T. D. Little (Ed.), The Oxford Handbook of Quantitative Methods (Vol. 2, pp. 7–25). New York: Oxford University Press.
Van Zandt, T., & Townsend, J. T. (2012). Designs for and analyses of response time experiments. In T. D. Little (Ed.), The Oxford Handbook of Quantitative Methods (Vol. 1, pp. 260–285). New York: Oxford University Press.
von Eye, A., Mun, E. Y., Mair, P., & von Weber, S. (2012). Configural frequency analysis. In T. D. Little (Ed.), The Oxford Handbook of Quantitative Methods (Vol. 2, pp. 73–105). New York: Oxford University Press.


Walls, T. A. (2012). Intensive longitudinal data. In T. D. Little (Ed.), The Oxford Handbook of Quantitative Methods (Vol. 2, pp. 432–440). New York: Oxford University Press.
Wang, L. L., Watts, A. S., Anderson, R. A., & Little, T. D. (2012). Common fallacies in quantitative research methodology. In T. D. Little (Ed.), The Oxford Handbook of Quantitative Methods (Vol. 2, pp. 718–758). New York: Oxford University Press.
Wei, W. W. S. (2012). Time series analysis. In T. D. Little (Ed.), The Oxford Handbook of Quantitative Methods (Vol. 2, pp. 458–485). New York: Oxford University Press.
Widaman, K. F., Early, D. R., & Conger, R. D. (2012). Special populations. In T. D. Little (Ed.), The Oxford Handbook of Quantitative Methods (Vol. 1, pp. 55–81). New York: Oxford University Press.



Woods, C. M. (2012). Categorical methods. In T. D. Little (Ed.), The Oxford Handbook of Quantitative Methods (Vol. 2, pp. 52–73). New York: Oxford University Press.
Wu, W., Selig, J. P., & Little, T. D. (2012). Longitudinal data analysis. In T. D. Little (Ed.), The Oxford Handbook of Quantitative Methods (Vol. 2, pp. 387–410). New York: Oxford University Press.
Yuan, K-H., & Schuster, C. (2012). Overview of statistical estimation methods. In T. D. Little (Ed.), The Oxford Handbook of Quantitative Methods (Vol. 1, pp. 361–387). New York: Oxford University Press.

CHAPTER

2

The Philosophy of Quantitative Methods

Brian D. Haig

Abstract
This chapter provides a philosophical examination of a number of different quantitative research methods that are prominent in the behavioral sciences. It begins by outlining a scientific realist methodology that can help illuminate the conceptual foundations of behavioral research methods. The methods selected for critical examination are exploratory data analysis, statistical significance testing, Bayesian confirmation theory, meta-analysis, exploratory factor analysis, and causal modeling. Typically, these methods contribute to either the detection of empirical phenomena or the construction of explanatory theory. The chapter concludes with a brief consideration of directions that might be taken in future philosophical work on quantitative methods.
Key Words: scientific realism, methodology, exploratory data analysis, statistical significance testing, Bayesianism, meta-analysis, exploratory factor analysis, causal modeling, latent variables, phenomena detection, hypothetico-deductive method, inference to the best explanation

Introduction

Historically, philosophers of science have given research methods in science limited attention, concentrating mostly on the nature and purpose of theory in the physical sciences. More recently, however, philosophers of science have shown an increased willingness to deal with methodological issues in sciences other than physics—particularly biology, but also psychology to a limited extent. There is, then, a developing literature in contemporary philosophy of science that can aid both our understanding and use of a variety of research methods and strategies in psychology (e.g., Trout, 1998). At the same time, a miscellany of theoretically oriented psychologists, and behavioral and social scientists more generally, have produced work on the conceptual foundations of research methods that helps illuminate those methods. The work of both professional philosophers of science and theoretical scientists deserves to be included in a philosophical examination of behavioral research methods.

This chapter undertakes a philosophical examination of a number of different quantitative research methods that are prominent in the behavioral sciences. It begins by outlining a scientific realist methodology that can help illuminate the conceptual foundations of behavioral research methods. The methods submitted to critical examination are exploratory data analysis, statistical significance testing, Bayesian confirmation theory, meta-analysis, exploratory factor analysis, and causal modeling methods. The chapter concludes with a brief and selective consideration of directions that might be taken in future philosophical work on quantitative methods.

Quantitative Methods and Scientific Realism

The three major philosophies of science that bear on psychology are empiricism, social constructionism, and scientific realism (Greenwood,

1992; Manicas & Secord, 1983). Nineteenth century British empiricism had a major influence on the development of British statistics in the first half of the twentieth century (e.g., Mulaik, 1985). The statistical methods developed in that intellectual milieu remain an important part of psychology’s statistical research practice. For example, Karl Pearson’s product moment correlation coefficient was taken by its founder to be the quantitative expression of a causal relation viewed in empiricist terms. Similarly, Fisher’s endorsement of inductive methods as the proper view of scientific method stemmed from a commitment to the empiricism of his day. Even in the current postpositivist philosophical climate, authors of research methods textbooks sometimes portray quantitative research as essentially positivist in its empiricist commitments (Yu, 2006). Among other things, positivism restricts its attention to what can be observed and regards theories as instruments that organize claims about observables but that do not explain them by appeal to hidden causes. Qualitative methodologists also often bolster their preferred conception of qualitative research by comparing it with an unflattering positivist picture of quantitative research. They tend to adopt the philosophy of social constructionism, which is opposed to the traditional notions of truth, objectivity, and reason, maintaining that our understanding of the world is determined by social negotiation. In one or another of its various forms, it is the philosophy of choice for many qualitative researchers, and it tends to be employed by those who are opposed, or indifferent, to quantitative methods. I shall not consider it further in this chapter. In what follows, I will adopt a scientific realist perspective on research methods. Although the subject of considerable debate, and opposed by many antirealist positions, scientific realism is the dominant philosophy of science today. It is also the tacit philosophy of most working scientists. This fact, combined with its current heavy emphasis on the nature of scientific practice, makes scientific realism a philosophy for science—not just a philosophy of science.

Scientific Realism

The philosophies of positivism, social constructionism, and scientific realism just mentioned are really family positions. This is especially true of scientific realism, which comes in many forms. Most versions of scientific realism display a commitment to at least two doctrines: (1) that there is a real world

of which we are part and (2) that both the observable and unobservable features of that world can be known by the proper use of scientific methods. Some versions of scientific realism incorporate additional theses (e.g., the claims that truth is the primary aim of science and that successive theories more closely approximate the truth), and some will also nominate optional doctrines that may, but need not, be used by scientific realists (e.g., the claim that causal relations are relations of natural necessity; see Hooker, 1987). Others who opt for an “industrial strength” version of scientific realism for the physical sciences are more cautious about its successful reach in the behavioral sciences. Trout (1998), for example, subscribes to a modest realism in psychology, based on his skepticism about the discipline’s ability to produce deeply informative theories like those of the physical sciences. Given that this chapter is concerned with the philosophical foundations of quantitative methods, the remaining characterization of scientific realism will limit its attention to research methodology.

Scientific Realist Methodology

Scientific realism boasts a rich conception of methodology, which is of considerable help in understanding and guiding research. The resourcefulness of realist methodology is suggested in the following description of its major characteristics (see Hooker, 1987; Nickles, 1987). First, realist methodology has three major tasks: to describe how methods function; to evaluate methods critically against their rivals; and to recommend how to use particular methods to pursue chosen research goals. Second, realist methodology is critically aim-oriented. At a broad level, it recommends the pursuit of valuable truth, explanatory understanding, and effective control as primary research goals; and it is concerned with the mutual adjustment of methods and research goals. Third, realist methodology is naturalistic—that is, it is a substantive domain that uses the methods of the various sciences to study method itself. Proctor and Capaldi (2001) advocate a naturalistic approach to methodology in psychology in which the empirical justification of methodological ideas is emphasized. A fourth feature of realist methodology is that it is both generative and consequentialist. Generative methodology involves reasoning to, and accepting, knowledge claims in question from warranted premises. Exploratory factor analysis is a prominent example of a method in psychology that involves


a generative justification of the factorial hypotheses to which it gives rise. By contrast, consequentialist methodology focuses on reasoning from knowledge claims in question to their testable consequences. The widely used hypothetico-deductive method, with its emphasis on predictive accuracy, clearly exhibits a consequentialist approach to justifying knowledge claims. Fifth, realist methodology acknowledges the need for two quite different approaches to justifying knowledge claims. In philosophy these are commonly known as reliabilism and coherentism. With reliabilism, a belief is justified to the extent that it is acquired by reliable processes. In general, the innumerable methods that contribute to the detection of empirical phenomena are concerned with reliabilist justification. With coherentism, a belief is justified in virtue of its coherence with other beliefs. Thagard’s (1992) theory of explanatory coherence, used for the comparative evaluation of scientific theories, embodies an illuminating coherentist perspective on knowledge justification. These two forms of justification are different, complementary, and of equal importance. As a sixth feature, realist methodology regards science as a problem-oriented endeavor in which problems are conceptualized as constraints on their effective solution (Haig, 1987; Nickles, 1981). On this formulation, the constraints are actually constitutive of the problem itself; they characterize the problem and give it structure. Further, by including all the constraints in the problem’s articulation, the problem enables the researcher to direct inquiry effectively by pointing the way to its own solution. In a real sense, stating the problem is half the solution! Finally, realist methodology takes the researcher’s make up as a “knowing subject” seriously. Among other things, the researcher is regarded as a satisficer who makes heavy use of heuristics to guide her inquiries. For example, McGuire (1997) discusses many useful heuristics that can be employed to facilitate the generation of hypotheses in psychological research. Scientific realist methodology undergirds a wide variety of methods, strategies, and heuristics that have been successfully used to produce worthwhile knowledge about both empirical phenomena and explanatory theories. If quantitative researchers in psychology engage this literature seriously, then they will find resources for enhancing their understanding of research methods. I turn now to a philosophical consideration of the selected research methods.

Exploratory Data Analysis

In psychological research, the major emphasis in data analysis is placed on statistical inference, where the task is to find out whether a data set exhibits a designated feature of interest characterized with reference to a probabilistic model. Unfortunately, the dominance of this goal has had the effect of discouraging a concerted examination of data sets in terms of their quality and structure. Detailed explorations of data are important in science, and it often makes good sense to conduct them instead of adopting a probabilistic model, or before such a model is formulated and adopted. Consistent with this emphasis on the close examination of data, the last 30 years have witnessed the strong development of an empirical, data-oriented approach to statistics. One important part of this movement is exploratory data analysis, which contrasts with the more familiar traditional statistical methods with their characteristic emphasis on the confirmation of knowledge claims.

Exploratory Data Analysis and John Tukey

Spelling out a philosophy of exploratory data analysis is difficult, and few methodologists have attempted to do so (for an initial attempt to do this from a Bayesian perspective, see Good, 1983). However, the intellectual progenitor of modern exploratory data analysis, John Tukey, has developed a systematic perspective on the subject that has helped to highlight its importance to research. It deserves to be considered as a philosophy of data analysis in its own right. Therefore, this brief examination of the philosophy of exploratory data analysis pays particular attention to Tukey’s thinking on the topic. According to Tukey (1980), data analysis should be treated as a two-stage compound process in which the patterns in the data are first suggested by exploratory data analysis and then critically checked through the use of confirmatory data analysis procedures. Exploratory data analysis involves descriptive—and frequently quantitative—detective work designed to reveal structure or pattern in the data sets under scrutiny. The data analyst is encouraged to undertake an open-eyed investigation of the data and perform multiple analyses using a variety of intuitively appealing and easily used techniques. The compendium of methods for the exploration of data, many of which were developed by Tukey (1977), is designed to facilitate both discovery and communication of information. These methods are concerned with the effective organization of data, the construction of graphical and semi-graphical displays, and the examination of distributional assumptions and functional dependencies. Two additional attractive features of Tukey’s methods are their robustness to changes in underlying distributions and their resistance to outliers in data sets. Exploratory methods with these two features are particularly suited to data analysis in psychology, where researchers are frequently confronted with ad hoc sets of data on amenable variables, which have been acquired in convenient circumstances.
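To make the ideas of resistance and robustness concrete, here is a minimal numpy sketch (with invented reaction-time values) that computes resistant summaries and applies Tukey’s 1.5 × IQR rule for flagging outliers; the data and variable names are purely illustrative.

```python
import numpy as np

# Hypothetical reaction-time data (ms); one value is an obvious outlier.
data = np.array([312, 287, 301, 295, 330, 278, 305, 299, 1250, 290])

# Resistant summaries: the median and interquartile range are barely
# affected by the single extreme value, unlike the mean.
median = np.median(data)
q1, q3 = np.percentile(data, [25, 75])
iqr = q3 - q1

# Tukey's fences: values beyond 1.5 * IQR from the quartiles are flagged
# for closer inspection rather than automatically discarded.
lower, upper = q1 - 1.5 * iqr, q3 + 1.5 * iqr
outliers = data[(data < lower) | (data > upper)]

print(f"median = {median}, IQR = {iqr}")
print(f"fences = ({lower}, {upper}), flagged values = {outliers}")
print(f"mean = {data.mean():.1f} (pulled upward by the extreme value)")
```

The median and IQR barely move when the extreme value is present, whereas the mean is pulled upward; that contrast is the practical sense in which such summaries are resistant.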

Exploratory Data Analysis and Scientific Method

In his writings on data analysis, Tukey (1969) has emphasized the related ideas that psychology is without an agreed-upon model of data analysis and that we need to think more broadly about scientific inquiry. In an invited address to the American Psychological Association in 1968, Tukey presented the following excerpt from a prominent psychologist for his audience to ponder. I quote in part:

I have the feeling that psychology is currently without a dominant viewpoint concerning a model for data analysis. In the forties and early fifties, a hypothetico-deductive framework was popular, and our mentors were keen on urging the design of “crucial” experiments for the refutation of specific predictions made from one or another theory. Inductive empiricism was said to be disorderly and inefficient. You and I knew then, as we know now, that no one approach is uniformly most powerful. (Tukey, 1969, p. 90)

Consider the hypothetico-deductive and inductive conceptions of scientific methods, which are mentioned here as candidate models for data analysis. Most psychological researchers continue to undertake their research within the confines of the hypothetico-deductive method. Witness their heavy preoccupation with theory testing, where confirmatory data analyses are conducted on limited sets of data gathered in accord with the dictates of the test predictions of theories. In this regard, psychologists frequently employ tests of statistical significance to obtain binary decisions about the credibility of the null hypothesis and its substantive alternatives. However, the use of statistical significance tests in this way strongly blunts our ability to look for more interesting patterns in the data. Indeed, the continued neglect of exploratory data analysis in psychological research occurs in good part because there is no acknowledged place for such work in the hypothetico-deductive conception of inquiry (Wilkinson & The Task Force, 1999). I think the worth of the inductive method as a model for data analysis is dismissed too quickly in the above quotation. The major failing of the inductive account of scientific method lies not so much with its perspective on data analysis, but with its prohibition of the formulation of explanatory theories. A modern conception of inductive method is embedded in the important scientific process of phenomena detection. Phenomena are relatively stable recurrent general features of the world that we seek to explain (Woodward, 1989), and their detection frequently involves an inductive process of empirical generalization. With its emphasis on phenomena detection, inductive method reserves an important place for the exploratory analysis of data. In detecting phenomena, one is concerned to extract a signal from the noise of data, and for this the intensive search of large amounts of data is frequently essential. Securing a heavy information yield from our data is precisely what is likely to throw up potentially interesting data patterns that might turn out to be genuine phenomena. In this context, data mining is encouraged, and the capabilities of exploratory techniques in this regard often make them the appropriate methods of choice. By contrast, Behrens and Yu (2003) suggest that the inferential foundations of exploratory data analysis are to be found in the idea of abduction, or explanation (and by implication, not in the notions of hypothetico-deductive testing and inductive generalization). However, exploratory data analysis is a descriptive pattern-detection process that is a precursor to the inductive generalizations involved in phenomena detection. As will be seen later in the consideration of exploratory factor analysis, abductive inference is reserved for the construction of causal explanatory theories that are introduced to explain empirical phenomena. Behrens and Yu’s suggestion conflates the quite different ideas of descriptive and explanatory inference.

Exploratory Data Analysis and a Model of Data Analysis

In the spirit of Tukey’s (1962; 1980) push for breadth of vision in data analysis, one might usefully take a perspective on data analysis that extends Tukey’s two-stage model (Haig, 2005b). Before exploring data for patterns of potential interest, researchers should assiduously screen their data for their quality. This initial data analysis involves


checking for the accuracy of data entries, identifying and dealing with missing and outlying data, and examining the data for their fit to the assumptions of the data analytic methods to be used. This important, and time-consuming, preparatory phase of data analysis has not received the amount of explicit attention that it deserves in behavioral science education and research practice. Fidell and Tabachnick (2003) provide a useful overview of the task and techniques of initial data analysis. Confirmation of the initial data patterns suggested by exploratory data analysis is a “just checking” strategy and as such should be regarded as a process of close replication. However, it is essential to go further and undertake constructive replications to ascertain the extent to which results hold across different methods, treatments, subjects, and occasions. Seeking results that are reproducible through constructive replications requires data analytic strategies that are designed to achieve significant sameness rather than significant difference (Ehrenberg & Bound, 1993). Exploratory data analysis, then, can usefully be regarded as the second in a four-stage sequence of activities that, in turn, attend to data quality, pattern suggestion, pattern confirmation, and generalization.
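As a rough sketch of the initial, data-quality phase described above, the following pandas fragment screens a small invented data set for missing entries and impossible values before any exploratory or confirmatory analysis is attempted; the variable names and range limits are hypothetical.

```python
import pandas as pd
import numpy as np

# Invented raw data: age in years and a 1-7 Likert score, with typical entry problems.
raw = pd.DataFrame({
    "age":   [23, 35, 219, 41, np.nan, 28],   # 219 is a likely keying error
    "score": [4, 7, 5, np.nan, 3, 9],         # 9 lies outside the 1-7 scale
})

# Count missing entries per variable.
print(raw.isna().sum())

# Flag values that fall outside declared, study-specific limits.
limits = {"age": (18, 99), "score": (1, 7)}
for col, (lo, hi) in limits.items():
    bad = raw[(raw[col] < lo) | (raw[col] > hi)]
    print(f"{col}: {len(bad)} out-of-range value(s)")

# Set impossible values to missing so they can be dealt with explicitly later.
for col, (lo, hi) in limits.items():
    raw.loc[(raw[col] < lo) | (raw[col] > hi), col] = np.nan
```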

Resampling Methods and Reliabilist Justification

Since the 1980s, statisticians have been able to exploit the massive computational power of the modern computer and develop a number of computer intensive resampling methods, such as the jackknife, the bootstrap, and cross-validation (Efron & Tibshirani, 1993). These methods constitute one important set of confirmatory procedures that are well suited to the task of checking on the data patterns thrown up by exploratory data analysis. By exploiting the computer’s computational power, these resampling methods free us from the restrictive assumptions of modern statistical theory, such as the belief that the data are normally distributed, and permit us to gage the reliability of chosen statistics by making thousands, even millions, of calculations on many data points. It is important to appreciate that the resampling methods just mentioned make use of a reliabilist approach to justification. Here, the reliability checks on emergent data patterns are provided by consistency of test outcomes, which are time-honored validating strategies. Our willingness to accept the results of such checks is in accord with what Thagard (1992) calls the principle of data priority. This principle asserts that statements about observational data, including empirical generalizations, have a degree of acceptability on their own. Such claims are not indubitable, but they do stand by themselves better than claims justified solely in terms of what they explain. What justifies the provisional acceptance of data statements is that they have been achieved by reliable methods; what strengthens our provisional belief in the patterns thrown up by exploratory data analysis is their confirmation through use of computer-based resampling methods. Further, it is important to appreciate that the acceptability of claims provided by the reliabilist justification of computer-intensive resampling methods can be enhanced by making appropriate use of a coherentist approach to justification. One important form of coherence is explanatory coherence, and one method that delivers judgments of explanatory coherence is the theory of explanatory coherence (Thagard, 1992). According to this theory, data claims, including empirical generalizations, receive an additional justification if and when they enter into, and cohere with, the explanatory relations of the theory that explains them.
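A minimal sketch of this kind of reliabilist checking, using a bootstrap percentile interval for a median computed with numpy on simulated data; the percentile interval is only one of several bootstrap interval methods.

```python
import numpy as np

rng = np.random.default_rng(seed=1)

# Simulated skewed sample, standing in for an ad hoc psychological data set.
sample = rng.lognormal(mean=0.0, sigma=0.8, size=60)
observed_median = np.median(sample)

# Resample with replacement many times and recompute the statistic each time;
# the spread of the resampled medians gauges the statistic's reliability
# without assuming a normal sampling distribution.
boot_medians = np.array([
    np.median(rng.choice(sample, size=sample.size, replace=True))
    for _ in range(5000)
])
lower, upper = np.percentile(boot_medians, [2.5, 97.5])

print(f"median = {observed_median:.3f}, 95% bootstrap CI = ({lower:.3f}, {upper:.3f})")
```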

A Philosophy for Teaching Data Analysis

An underappreciated, but important, feature of Tukey’s writings on exploratory data analysis is the illuminating remarks on the teaching of data analysis that they contain. These remarks can be assembled into a constructive philosophy for teaching data analysis, which can properly be regarded as an aspect of an overall philosophy of exploratory data analysis. This philosophy of teaching advises us to think about and teach data analysis in a way that is quite different from the prevailing custom. Provocatively, Tukey (1980) maintained that the proper role of statistics teachers is to teach that which is most difficult and leave that which is more manageable to good textbooks and computers. He recommended teaching data analysis the way he understood biochemistry was taught, concentrating on what the discipline of statistics has learned, perhaps with a discussion of how such things were learned. The detail of methods should be assigned to laboratory work, and the practice of learning data analytic techniques should be assigned to a different course in which problems arose. He foresaw that such a redirection in teaching data analysis would have to be introduced in phases. In Tukey’s (1962) words, “The proposal is really to go in the opposite direction from cookbookery; to teach not ‘what to do,’ nor ‘how we learned what to do,’ but rather, ‘what we have learned’” (p. 63). This advice is broadly consistent with the idea that we should teach research methods in terms of their accompanying methodology, a recommendation considered at the end of this chapter. Another prominent feature of Tukey’s philosophy of teaching data analysis is his recommendation that we should teach both exploratory and confirmatory data analysis and that we have an obligation to do so. Tukey’s strong promotion of the value of exploratory data analysis was intended as a counter to the dominance of confirmatory data analysis in statistical practice. However, for Tukey, exploratory data analysis was not to be understood as more important than confirmatory data analysis because both are essential to good data analysis. Tukey also suggested that exploratory data analysis should probably be taught before confirmatory data analysis. There are several reasons why this recommendation has merit. Properly taught, exploratory data analysis is probably easier to learn, and it promotes a healthy attitude to data analysis (encouraging one to be a dataphile without becoming a data junkie). It requires the investigator to get close to the data, analyze them in various ways, and seek to extract as much potentially important information from the data as possible. This is done to detect indicative patterns in the data before establishing through confirmatory data analysis that they are genuine patterns. Tukey emphasized that learning exploratory data analysis centrally involves acquiring an appropriate attitude toward the data, which includes the following elements: exploratory data analysis is sufficiently important to be given a great deal of time; exploratory data analysis should be carried out flexibly with multiple analyses being performed; and exploratory data analysis should employ a multiplicity of methods that enhance visual display.

Statistical Significance Testing

It is well known that tests of statistical significance are the most widely used methods for evaluating hypotheses in psychology (e.g., Hubbard & Ryan, 2000). These tests have been popular in psychology for nearly 50 years and in statistics for about 75 years. Since the 1960s, there has developed a massive critical literature in psychology regarding their worth. Important early contributions to this debate are collected in Morrison and Henkel (1970; see also Giere, 1972). Cohen (1994) provides a short perceptive review of the controversy, whereas Nickerson (2000)

has undertaken a useful extensive review of the controversy since its beginning. Despite the plethora of critiques of statistical significance testing, most psychologists understand these tests poorly, frequently use them inappropriately, and pay little attention to the controversy they have generated (Gigerenzer, Krauss, & Vitouch, 2004). The significance testing controversy is multifaceted. This section will limit its attention to a consideration of the two major schools of significance testing, their hybridization and its defects, and the appropriateness of testing scientific hypotheses and theories using tests of statistical significance. Psychologists tend to assume that there is a single unified theory of tests of statistical significance. However, there are two major schools of thought regarding significance tests: Fisherian and Neyman-Pearson. Initially, Neyman and Egon Pearson sought to build on and improve Fisher’s theory, but they subsequently developed their own theory as an alternative to Fisher’s theory. There are many points of difference between the two schools, which adopt fundamentally different outlooks on the nature of scientific method. The uncritical combination of the two schools in psychology has led to a confused understanding of tests of statistical significance and to their misuse in research.

The Fisherian Significance Testing School

The Fisherian school of significance testing (e.g., Fisher, 1925) tests a hypothesis or theory of substantive interest against the null hypothesis that the experimental effect to be demonstrated is in fact absent. Fisher argued that an experiment is performed solely to give the data an opportunity to disprove the null hypothesis. No alternative hypothesis is specified, and the null hypothesis is the hypothesis to be nullified; it need not be the hypothesis of zero difference. Because one cannot accept the null hypothesis, no provision is made for Type II error, and relatedly, there is no place for a statistical concept of power. Most importantly, and as noted earlier, Fisher subscribed to an inductive conception of scientific method and maintained that significance tests are vehicles of inductive reasoning. As such they are concerned with evidence for beliefs.
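Fisher’s own exact test for a 2 × 2 table illustrates this outlook: a p-value is reported as evidence against the null hypothesis of no association, with no alternative hypothesis, no Type II error rate, and no fixed rejection rule. A brief scipy sketch with invented counts:

```python
from scipy.stats import fisher_exact

# Invented 2 x 2 table: recovery counts under treatment and control.
table = [[18, 7],
         [11, 14]]

# Fisher's exact test returns a p-value interpreted, on Fisher's account,
# as the probability of data at least this extreme if the null (no
# association) were true -- evidence to be weighed, not a decision rule.
odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.4f}")
```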

Should We Use Fisher’s Significance Tests?

The question of whether behavioral scientists should use Fisherian significance tests as a defensible form of hypothesis testing largely centers on whether p-values are good measures of scientific evidence.


Although many psychological researchers think, and some methodologists argue, that p−values can be used to measure strength of evidence, others hold them to be deeply problematic in this respect (e.g., Royall, 1997; Hubbard & Lindsay, 2008). Some have argued that the concept of evidence adopted by Fisher is defective. For one thing, it is widely agreed by philosophers of science that theory or hypothesis evaluation is a comparative affair in which evidence against one hypothesis is evidence for another hypothesis (e.g., Sober, 2008). However, as just noted, Fisher only countenanced the null, and without an alternative hypothesis with which to compare the null, the logic of his significance testing is importantly incomplete; one cannot have evidence against the null without it being evidence for another hypothesis. The idea that one might allow for alternative hypotheses by joining Fisher’s perspective with that of Neyman and Pearson will be seen by many to make matters worse. As will be noted shortly, Neyman and Pearson were concerned with the reliability of errors in decision making in the long run rather than with evidence for believing hypotheses in a particular experiment. Others contend that a further major problem with Fisher’s p-value is that it doesn’t measure evidence properly (e.g., Goodman, 1993). In this regard, four claims about p-values are thought to disqualify it as a proper measure of evidence. First, the p-value is not a direct measure of the probability that the null is false; it is a conditional probability of obtaining the data, calculated on the assumption that the null hypothesis is true. Second, to take a number that quantifies rare data under the null is to confuse the strength of evidence with the probability of its occurrence (Royall, 1986). These two things are different because the probability is an indication of the long-run Type I error rate, which is separate from strength of evidence. Third, the calculation of a p-value combines the rarity of an obtained result with the probability of results that didn’t happen. As Jeffreys (1939) stated long ago, “What the use of P implies . . . is that a hypothesis that may be true may be rejected because it has not predicted observable results that have not occurred” (p. 136). Finally, it is claimed that the p-value can exaggerate the strength of evidence against the point null and small interval hypotheses (Berger & Selke, 1987), which are frequently tested in psychology. These criticisms are not convincing to all. For example, Hurlbert and Lombardi (2009) recently considered these arguments and recommend a shift in focus from the classical Fisherian framework to a

neo-Fisherian alternative. Key elements of this alternative are that the probability of Type I error is not specified, p-values are not misleadingly described as “significant” or “nonsignificant,” judgment is suspended about accepting the null hypothesis on the basis of high p-values, the “three-valued logic” that gives information about the direction of the effect being tested is adopted, accompanying effect size information is provided, and use is made of adjunct information such as confidence intervals, where appropriate. Two things to note here about this neo-Fisherian perspective are that it is concerned with significance assessments but not null hypothesis significance tests and that it is concerned with statistical tests as distinct from the tests of scientific hypotheses. There are empirical studies in psychology that approximate this modified Fisherian perspective on significance tests.
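A sketch of reporting in roughly this neo-Fisherian style on simulated data: an undichotomized p-value, the direction of the effect, a standardized effect size (Cohen’s d with a pooled standard deviation), and a confidence interval for the mean difference. The simulated groups and the particular choices of effect size measure and interval are assumptions of the illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=7)
group_a = rng.normal(loc=102.0, scale=15.0, size=40)  # simulated treatment scores
group_b = rng.normal(loc=95.0, scale=15.0, size=40)   # simulated control scores

# Unstandardized difference and an undichotomized p-value.
_, p_value = stats.ttest_ind(group_a, group_b)
diff = group_a.mean() - group_b.mean()

# Cohen's d using the pooled standard deviation.
n_a, n_b = group_a.size, group_b.size
pooled_var = ((n_a - 1) * group_a.var(ddof=1) + (n_b - 1) * group_b.var(ddof=1)) / (n_a + n_b - 2)
d = diff / np.sqrt(pooled_var)

# 95% confidence interval for the mean difference (equal-variance t interval).
se = np.sqrt(pooled_var) * np.sqrt(1 / n_a + 1 / n_b)
t_crit = stats.t.ppf(0.975, df=n_a + n_b - 2)
ci = (diff - t_crit * se, diff + t_crit * se)

print(f"difference = {diff:.2f} ({'A > B' if diff > 0 else 'B > A'})")
print(f"p = {p_value:.4f}, d = {d:.2f}, 95% CI = ({ci[0]:.2f}, {ci[1]:.2f})")
```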

The Neyman-Pearson Hypothesis Testing School

Neyman and Pearson rejected Fisher’s notion of a significance test and its use of a threshold p-value as a basis for rejecting the null hypothesis. In this regard, they added the requirement of the specification of an alternative hypothesis as well as the null hypothesis, and they replaced Fisher’s evidential p-value with the Type I error rate, α (e.g., Neyman & Pearson, 1933). In addition, Neyman and Pearson permitted a more liberal formulation of the null hypothesis than did Fisher and regarded it as legitimate to speak of its acceptance. Thus, Type II error was admitted, and explicit provision was made for a statistical concept of power. To capture these differences, Neyman and Pearson spoke of their approach as hypothesis testing rather than significance testing. However, the Neyman-Pearson school differs from the Fisherian school most fundamentally in maintaining that significance tests are rules of inductive behavior rather than vehicles for inductive reasoning. On this view, significance testing is regarded as a theory of prudential decision-making; accepting or rejecting a hypothesis amounts to adopting an appropriate course of action, rather than believing it to be probably true or false. At root, Neyman and Pearson held different views from Fisher about the nature of science.
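To make the Neyman-Pearson quantities concrete, the following simulation fixes the Type I error rate, posits one specific alternative hypothesis, and estimates power (one minus the Type II error rate) by Monte Carlo; the effect size, sample size, and number of simulations are arbitrary illustrative choices.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=3)
alpha, effect_size, n, n_sims = 0.05, 0.5, 30, 2000

# Repeat the same two-group experiment many times under one specific
# alternative (a standardized mean difference of 0.5) and count how often
# the test rejects the null at level alpha.
rejections = 0
for _ in range(n_sims):
    control = rng.normal(0.0, 1.0, size=n)
    treatment = rng.normal(effect_size, 1.0, size=n)
    _, p = stats.ttest_ind(treatment, control)
    rejections += p < alpha

power = rejections / n_sims
print(f"estimated power = {power:.2f}, Type II error rate = {1 - power:.2f}")
```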

Should We Use Neyman and Pearson’s Hypothesis Tests?

It might seem that by focusing on hypothesis testing and making provision for alternative hypotheses


and a statistical concept of power, Neyman and Pearson’s approach marks an improvement over Fisher’s significance tests. However, as noted earlier, Neyman and Pearson were concerned with measuring the reliability of errors in decision making in the long run rather than with evidence for believing hypotheses in a particular experiment; Type I and Type II error are objective probabilities, understood as long-run relative frequencies. As such, they belong to reference classes of endless series of trials that might have happened but never did. They are not about single events such as individual experiments. Further, being concerned with decision making understood as behavioral courses of action, Neyman-Pearson statistics do not measure strength of evidence for different hypotheses and thus do not tell us how confident we should be in our beliefs about those hypotheses. It would seem, then, that the Neyman-Pearson approach is suitable for use only when the focus is on controlling errors in the long run (e.g., quality control experiments). However, this does not happen often in psychology.

The Hybrid

Although the Neyman-Pearson theory of testing is the official theory of statistical testing in the field of professional statistics, textbooks in the behavioral sciences ensure that researchers are instructed to indiscriminately adopt a hybrid account of tests of statistical significance, one that is essentially Fisherian in its logic but that is often couched in the decision-theoretic language of Neyman and Pearson. The hybrid logic is a confused and inconsistent amalgam of the two different schools of thought (Acree, 1979; Gigerenzer, 1993; Spielman, 1974; but see Lehmann, 1993, for the suggestion that the best elements of both can be combined in a unified position). To the bare bones of Fisherian logic, the hybrid adds the notion of Type II error (opposed by Fisher) and the associated notion of statistical power (which Fisher thought could not be quantified) but only at the level of rhetoric (thereby ignoring Neyman and Pearson), while giving a behavioral interpretation of both Type I and Type II errors (vigorously opposed by Fisher)! Because the authors of statistical textbooks in the behavioral sciences tend to present hybrid accounts of significance testing, aspiring researchers in these sciences almost always acquire a confused understanding of such tests. It is most unfortunate that many writers of statistics textbooks in the behavioral sciences have unwittingly perpetuated these basic misunderstandings.

To make matters worse, this confusion is compounded by a tendency of psychologists to misrepresent the cognitive accomplishments of significance tests in a number of ways. For example, levels of statistical significance are taken as measures of confidence in research hypotheses, likelihood information is taken as a gage of the credibility of the hypotheses being tested, and reported levels of significance are taken as measures of the replicability of findings (Gigerenzer, Krauss, & Vitouch, 2004).

Significance Tests and Theory Testing Meehl (1967, 1978, 1997) has made one of the strongest criticisms of the use of tests of statistical significance in psychology. He argued that the widespread use of tests of statistical significance to test substantive hypotheses and theories is deeply flawed because the support for a hypothesis or theory obtained by rejecting the null hypothesis is very weak. Sometimes psychological researchers test a hypothesis of substantive interest against the point null hypothesis that the difference between the relevant population parameters is exactly zero. But a fact, long known to professional statisticians and appreciated by Meehl, is that the point null hypothesis is virtually always false in the behavioral and social sciences. The reason for this is that in these sciences, most things are related to one another at least to some small extent. In many parts of psychology, “everything in the brain is connected with everything else,” resulting in a large positive manifold in which many variables correlate positively with one another to a significant degree. Thus, in the “softer” precincts of psychology, where “true” experiments are often not possible, obtaining a reasonable sample size makes the achievement of a statistically significant result the likely outcome of an empirical study. Meehl (1967) reasoned that if the null hypothesis of zero group differences is almost always false, then with sufficient power, directional hypotheses in these parts of psychology have a 50:50 chance of achieving statistical significance! Meehl and Lykken provided some empirical evidence for this claim more than 40 years ago (Meehl, 1967; see also Meehl, 1997). A recent simulation study on real data carried out by Waller (2004) confirmed Meehl’s claim. One can better appreciate what is wrong with using tests of statistical significance to appraise psychological theories by considering the logic involved in such testing. It is helpful to begin by observing the important distinction between scientific


hypotheses and the statistical hypotheses that may be derived from them (Bolles, 1962). Often in psychology, scientific theories about psychological processes or structures are formulated, and then statistical hypotheses are derived to facilitate their empirical testing. The former will characteristically invoke causal mechanisms for explanatory purposes, whereas the latter will appeal to statistical tests of null hypotheses about the population parameters of observed variables. Meehl has argued that psychological researchers tend to conflate the substantive theory and the statistical hypothesis and unwarrantedly take the successful refutation of the null hypothesis as grounds for concluding that the substantive theory has been strongly confirmed. However, if we have good grounds for believing that the point null hypothesis is probably false at the outset, and we use the null as the observation hurdle for our theories to surmount, then support for a theory by rejecting this implausible null alternative is quite feeble. For good reason, then, Meehl (1990) has urged psychologists to abandon tests of statistical significance for purposes of substantive theory testing in psychology. He suggests that psychologists should replace them with a strategy that is adapted from the philosopher of science, Imre Lakatos (1970), who argued against Popper’s strict falsificationist position on theory testing and for the comparative appraisal of research programs over time. Meehl has maintained that one should defend and amend a theory only if it has a good track record of successful or near-miss predictions of low prior probability. In conclusion, it is clear that there are fundamental philosophical differences between the Fisherian and Neyman-Pearson schools of statistical thought. Fisher’s statistical contributions can be seen as a deliberate attempt to develop an objective alternative to Bayesian statistical thinking popular in Europe at the time, whereas those of Neyman and Pearson can be seen as an attempt to develop a position that is even more objective than Fisher’s. Nevertheless, Bayesian thinking today is an attractive option for many statisticians who hold misgivings about one or both of these schools of thought. It is also the focus of much attention in the philosophy of science. It is to the elements of the Bayesian position that we now turn.

Bayesian Confirmation Theory

What is it for empirical evidence to provide confirmation or disconfirmation of a scientific hypothesis or theory? Methodologists of science have worked

long and hard to answer this important and challenging question by developing theories of scientific confirmation. Despite the considerable fruits of their labors, there is widespread disagreement about which theory of confirmation we should accept. In recent times, a large number of philosophers of science have contributed to Bayesian confirmation theory (e.g., Earman, 1992; Howson & Urbach, 2006). Many philosophical methodologists now believe that Bayesianism, including Bayesian philosophy of science, holds the best hope for building a comprehensive and unified theory of scientific inference. Bayesianism is a comprehensive position. It comprises a theory of statistical inference, an account of scientific method, and a perspective on a variety of challenging methodological issues. Today, it also boasts a fully fledged philosophy of science. In this section, attention is limited to a consideration of the strengths and weaknesses of Bayesian statistical inference, the ability of Bayesian confirmation theory to improve upon the hypothetico-deductive method, and the question of whether Bayesianism provides an illuminating account of the approach to theory evaluation known as inference to the best explanation.

Bayesian Statistical Inference

The Bayesian approach to statistical inference is so called because it makes central use of a theorem of the mathematical calculus of probability known as Bayes’ theorem. This theorem can be written in a simple form as:

Pr(H/D) = [Pr(H) × Pr(D/H)] / Pr(D)

With the proviso that Pr (D) and Pr (H) cannot be zero, the theorem says that the posterior probability of the hypothesis is obtained by multiplying the prior probability of the hypothesis by the probability of the data, given the hypothesis (the likelihood), and dividing the product by the prior probability of the data. It is through use of this and other versions of Bayes’ Theorem that Bayesians are able to implement their view of statistical inference, which is the orderly revision of opinion in the light of new information. For Bayesians, a couple of features of this gloss on Bayesian statistical inference recommend themselves. Most importantly, the Bayesian approach squares with the stated purpose of scientific inquiry noted above—namely, securing the probability of a hypothesis in the light of the relevant evidence. The


informational output of a traditional test of significance is the probability of the data, given the truth of our hypothesis, but it is just one input in the Bayesian scheme of things. A second stated desirable feature of the Bayesian view is its willingness to make use of relevant information about the hypothesis before the empirical investigation is conducted and new data are obtained, explicitly in the form of a prior probability estimate of our hypothesis. Traditional tests of statistical significance are premised on the assumption that inferences should be based solely on present data, without any regard for what we might bring to a study in the way of belief or knowledge about the hypothesis to be tested—a position that Bayesians contend is hardly designed to maximize our chances of learning from experience. To achieve their goal of the systematic revision of opinion on the basis of new information, Bayesians are able to employ Bayes’ theorem iteratively. Having obtained a posterior probability assignment for their hypothesis via Bayes’ theorem, they can then go on and use that posterior probability as the new prior probability in a further use of Bayes’ theorem designed to yield a revised posterior probability, and so on. In this way, the Bayesian researcher learns from experience.
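A minimal sketch of this iterative updating, using the conjugate Beta-Binomial case so that each posterior can be written down directly and then serve as the prior for the next batch of data; the prior and the batch counts are invented.

```python
# Beta-Binomial updating: the hypothesis concerns an unknown success
# probability theta, the prior is Beta(a, b), and each batch of binomial
# data updates (a, b) by adding the observed successes and failures.
a, b = 2.0, 2.0                      # invented prior, mildly centered on 0.5
batches = [(7, 3), (12, 8), (4, 6)]  # invented (successes, failures) per study

for successes, failures in batches:
    a, b = a + successes, b + failures   # posterior after this batch
    mean = a / (a + b)                   # posterior mean for theta
    print(f"after batch ({successes}, {failures}): "
          f"posterior Beta({a:.0f}, {b:.0f}), mean = {mean:.3f}")
```

Each posterior distribution becomes the prior for the next round of data, which is the sense in which the Bayesian researcher is said to learn from experience.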

Criticisms of Bayesian Hypothesis Testing

Although my consideration of the merits of conventional significance tests and their Bayesian alternative is both sketchy and selective, some readers will sense that the Bayesian view provides an attractive alternative to the traditional approach, particularly when the latter assumes its hybrid form. However, as with all theories of confirmation, the Bayesian approach has come in for its share of criticism. These criticisms have tended to focus on the alleged problematic nature of prior probabilities. In this regard, it is objected that because Bayesians adopt a subjectivist conception of probability and resort to personal estimates of the prior probabilities of their hypotheses, they introduce an ineliminable, but highly undesirable, subjective element into their calculations. To this objection, the Bayesians have two plausible replies: they can concede that personal estimates of prior probabilities are subjective, that they may differ markedly from person to person, and that they are often very rough estimates and then go on to point out that when prior estimates err, they are brought into line by freshly obtained sets of data; or, they may appeal to the failure of strictly empiricist theories of confirmation, which hold that one may obtain an adequate test of a hypothesis solely on the basis of evidence and logic, and assert that in real-life situations, there is no alternative to relying on a subjective component in our testing efforts. In deciding whether to adopt a Bayesian position on statistical inference, it should be kept in mind that one does not have to embrace a general Bayesian theory of scientific confirmation rather than, say, the hypothetico-deductive alternative. One might be a Bayesian when dealing with problems of statistical inference but remain wedded to a general hypothetico-deductive conception of scientific method. Or, more plausibly, one might employ Bayesian statistical methods when concerned with inferential problems about hypotheses for which we have the relevant probabilistic information, but adopt a non-probabilistic account of theory evaluation such as Thagard’s theory of explanatory coherence, which will be referred to later in the chapter. The general point to be made here is that Bayes’ theorem can help us deal with some problems of statistical inference, but clearly, a great deal of scientific work will be done with the use of other methods—some of them statistical and some of them not.

Bayesianism and the Hypothetico-Deductive Method

One of the clear achievements of Bayesianism is its ability to improve on the unsatisfactory approach to hypothesis and theory appraisal taken by the hypothetico-deductive method. The hypothetico-deductive method has long been the method of choice for the evaluation of scientific theories (Laudan, 1981), and it continues to have a dominant place in psychology. Despite its popularity, it is usually characterized in an austere manner: The researcher takes a hypothesis or theory of interest and tests it indirectly by deriving from it one or more observational predictions that are themselves directly tested. Predictions borne out by the data are taken to confirm the theory to some degree; those predictions that do not square with the data count as disconfirming instances of the theory. Normally, the theory is not compared with rival theories in respect of the data, only with the data themselves. The hypothetico-deductive method, in something like this form, has been strongly criticized by methodologists on a number of counts (e.g., Glymour, 1980; Rozeboom, 1997). One major criticism of the method is that it is confirmationally lax. This laxity arises from the fact that any positive confirming instance of a hypothesis submitted to empirical test can confirm any hypothesis that is


conjoined with the test hypothesis, regardless of how plausible it might be. This state of affairs is known as the fallacy of irrelevant conjunction, or the tacking problem, because confirmation of a test hypothesis also confirms any conjunct that is attached to the test hypothesis. The fallacy of irrelevant conjunction arises with the hypothetico-deductive method because predictions are deduced from hypotheses only by making use of auxiliary hypotheses drawn from background knowledge. Clearly, this is an unacceptable state of affairs. Bayesians have challenged the assumption that the occurrence of the consequences of a theory confirms the theory and its conjuncts holistically. They argue that the Bayesian approach enables the differential support of the elements of a theory, specifying conditions showing that E never increases the probability of H conjoined with any additional hypothesis by more than it increases the probability of H. Another major criticism of the hypothetico-deductive method is that it tests a single hypothesis or theory of interest against the empirical evidence; it does not test a hypothesis or theory in relation to rivals in respect of the evidence. This is held to be a major flaw because it is widely agreed that theory evaluation is a comparative affair involving simultaneous evaluation of two or more hypotheses or theories. The comparative nature of theory evaluation is straightforwardly handled by the Bayesian position by rewriting the simple form of Bayes’ theorem given earlier to deal with two or more hypotheses. Here, Bayes’ theorem is presented for the case of two hypotheses, where the theorem can be written for each hypothesis in turn. For the first hypothesis,

Pr(H1/D) = [Pr(H1) × Pr(D/H1)] / [Pr(H2) × Pr(D/H2) + Pr(H1) × Pr(D/H1)]

This says that the posterior probability of the first hypothesis is obtained by multiplying its prior probability by the probability of the data, given that hypothesis (the likelihood), and dividing the product by the value that results from adding the prior probability of the second hypothesis, multiplied by the likelihood for that hypothesis, to the prior probability of the first hypothesis, multiplied by its likelihood. Bayes’ theorem for the second hypothesis is written in a similar way.
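A worked numerical sketch of the two-hypothesis form just given, with invented priors and likelihoods; the only point is to show how comparative posterior probabilities fall out of the formula.

```python
# Invented inputs: prior probabilities of two rival hypotheses and the
# likelihood of the observed data under each.
prior_h1, prior_h2 = 0.5, 0.5
lik_h1, lik_h2 = 0.20, 0.05   # Pr(D/H1) and Pr(D/H2)

# Denominator: total probability of the data under the two hypotheses.
evidence = prior_h1 * lik_h1 + prior_h2 * lik_h2

posterior_h1 = prior_h1 * lik_h1 / evidence
posterior_h2 = prior_h2 * lik_h2 / evidence
print(f"Pr(H1/D) = {posterior_h1:.2f}, Pr(H2/D) = {posterior_h2:.2f}")  # 0.80 and 0.20
```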

Bayesianism and Inference to the Best Explanation

Recently, some Bayesians have claimed that their perspective on scientific method can also provide an enhanced characterization of the important approach to theory evaluation known as inference to the best explanation. Inference to the best explanation is based on the belief that much of what we know about the world is based on considerations of explanatory worth. In contrast to the Bayesian approach, accounts of inference to the best explanation take theory evaluation to be a qualitative exercise that focuses on explanatory criteria rather than a quantitative undertaking in which one assigns probabilities to theories (Haig, 2009; Thagard, 1992). Although inference to the best explanation has typically been regarded as a competitor for Bayesian theory evaluation, Lipton (2004) has recently argued that the two approaches are broadly compatible and that, in fact, their proponents “should be friends.” In broad terms, he suggests that judgments of the loveliest explanation, which are provided by the evaluative criteria of inference to the best explanation, such as unificatory power, precision, and elaboration of explanatory mechanisms, contribute to assessments of the likeliest explanation, which are provided by the probabilities of the Bayesian approach. Specifically, Lipton maintains that the explanatory considerations invoked in inference to the best explanation guide determination of the prior probabilities (and the likelihoods) that are inserted in Bayes’ Theorem. However, although appeal to explanatory matters might be one way in which Bayesians can determine their prior probabilities, Lipton does not suggest how this might be done. Further, those who hold inference to the best explanation to be a normative approach to scientific theory evaluation, with its own distinctive character, will worry that Lipton relegates it to a descriptive role within a Bayesian normative framework (e.g., Psillos, 2004). Another way of showing the compatibility of inference to the best explanation and Bayesianism is to translate the evaluative criteria employed within inference to the best explanation into probabilistic terms. McGrew (2003) has done this by taking the important theoretical virtue of consilience, or explanatory breadth, and showing that its Bayesian form leads to higher posterior probabilities of the hypotheses being evaluated. Nevertheless, McGrew has acknowledged that by translating consilience into its “flattened” probabilistic form, it no longer


remains a genuine explanatory virtue. Not only is there no guarantee that consilience will be concerned with an explanation of the evidence, there is no way that probabilistic translations of the explanatory virtues can refer to the causal connections that are often appealed to in scientific explanations. Further, Weisberg (2009) has recently argued that the explanatory loss incurred in such translations will occur for any distinctively explanatory virtue that is given such probabilistic treatment. In short, it would seem that Bayesianism cannot capture the intuitively important notion of explanatory power without significant loss.

What Should We Think About Bayesianism?

Philosophical assessments of the worth of Bayesianism range from claims that it is without peer as a theory of scientific reasoning to the view that it is fundamentally wrong-headed. Howson and Urbach (2006) exemplify the former view, claiming that scientific reasoning is both inductive and probabilistic and that the axioms of probability suffice to articulate such reasoning. The latter view is exemplified by Bunge (2008), who has argued that Bayesianism is fundamentally wrong for three reasons: (1) it assigns probabilities to statements rather than taking them as objective features of the world; (2) it conceives of probabilities as subjective; and (3) it appeals to probabilities in the absence of randomness. To add to this mix of views, many statisticians take Bayesian statistical inference to be a superior alternative to classical statistical inference, for the reasons stated earlier. Finally, some advocates of Bayesianism see it as a comprehensive theory of confirmation, whereas others see it as having only context-specific application. The difficulties of deciding just what to think about Bayesianism are captured well by the ambivalence of John Earman (1992), a Bayesian philosopher of science. He confesses to being an enthusiastic Bayesian on Mondays, Wednesdays, and Fridays. But on Tuesdays, Thursdays, and Saturdays, he holds doubts about the totalizing ambitions of Bayesianism and indeed whether it can serve as a proper basis for scientific inference. Faced with such difficulty, it is probably prudent to settle for a contextual application of Bayesian thinking, as indicated earlier in this section. For example, in particular domains such as medical diagnosis, where the relevant probabilistic information is often available, scientists sometimes appeal to the Bayesian corpus to justify the selective use of its methods. By contrast,

in domains where the evaluation of explanatory hypotheses and theories are of primary concern, scientists have, for good reason, often employed something like inference to the best explanation. Like it or not, the intending Bayesian scientist will have to consult the relevant philosophical literature, among other methodological literatures, to furnish an informed justification for their Bayesian practices.

Meta-Analysis

In the space of three decades meta-analysis has become a prominent methodology in behavioral science research, with the major developments coming from the fields of education and psychology (Glass, McGaw, & Smith, 1981; Hedges & Olkin, 1985; Hunter & Schmidt, 2004). Meta-analysis is an approach to data analysis that involves the quantitative analysis of the data analyses of primary empirical studies. Hence, the term meta-analysis, coined by Glass (1976). Meta-analysis, which comes in a variety of forms (Bangert-Drowns, 1986), is concerned with the statistical analyses of the results from many individual studies in a given domain for the purpose of integrating or synthesizing those research findings. The following selective treatment of meta-analysis considers its possible roles in scientific explanation and evaluation research before critically examining one extended argument for the conclusion that meta-analysis is premised on a faulty conception of science.
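As a minimal sketch of such quantitative integration, the following code pools invented standardized mean differences from five primary studies using fixed-effect, inverse-variance weighting; an actual meta-analysis would also consider random-effects models, heterogeneity, and study quality.

```python
import numpy as np

# Invented effect sizes (standardized mean differences) and their variances
# from five primary studies in a common domain.
effects = np.array([0.42, 0.31, 0.55, 0.18, 0.47])
variances = np.array([0.04, 0.09, 0.05, 0.12, 0.06])

# Fixed-effect model: weight each study by the inverse of its variance.
weights = 1.0 / variances
pooled = np.sum(weights * effects) / np.sum(weights)
pooled_se = np.sqrt(1.0 / np.sum(weights))
ci = (pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se)

print(f"pooled effect = {pooled:.3f}, 95% CI = ({ci[0]:.3f}, {ci[1]:.3f})")
```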

Meta-Analysis and Explanation Meta-analysis is a prominent example of a distinctive use of statistical methods by behavioral scientists to aid in the detection of empirical phenomena. By calculating effect sizes across primary studies in a common domain, meta-analysis helps us detect robust empirical generalizations (cf. Schmidt, 1992). By using statistical methods to ascertain the existence of such regularities, meta-analysis can be usefully viewed as the statistical analog of direct experimental replication. It is in this role that metaanalysis currently performs its most important work in science. However, given that the detection of empirical phenomena and the construction of explanatory theories are quite different research tasks, the recent suggestion that meta-analysis can directly contribute to the construction of explanatory theory (Cook et al., 1992; Schmidt, 1993) is an arresting methodological claim. In approving this


extension of meta-analysis beyond a concern with phenomena detection, Schmidt has acknowledged that scientific explanation normally involves the causal explanation of observed phenomena. Nevertheless, he maintains that it is appropriate to take scientific explanation to include “all research processes that contribute ultimately to theory building, including the first step of determining what the relationships are among important variable or constructs and how stable these relationships are” (Schmidt, 1993, p. 1164). Thus, the demonstration of a general effect, such as the pervasive influence of psycho-educational treatments on adult surgical patients, is deemed to be a meta-analysis at the “lowest level of explanation.” On the other hand, the use of meta-analysis to test competing theories of how patients cope with the stress of surgery is viewed as higher level explanatory meta-analysis. However, this attempt to extend the role of metaanalytic methods beyond phenomena detection to explanation obscures the basic methodological distinction between phenomena detection and scientific explanation. As noted earlier in the chapter, the stable general effects gleaned from meta-analysis are empirical phenomena, and statements about phenomena are the objects of scientific explanations; they are not the explanations themselves. The question, “What do statements of empirical phenomena explain?” occasions no natural reply. This is not surprising, for the successful detection of phenomena is essentially a descriptive achievement that involves investigative practices that are, for the most part, quite different from explanatory endeavors. In psychology, these methods are often statistical in kind. By contrast, scientific explanation is often causalmechanistic in nature (Salmon, 1984). On this view, explanation requires the identification of the mechanisms that underlie and give rise to empirical phenomena, along with a detailing of the ways in which those mechanisms produce the phenomena we seek to understand. When meta-analysis enters into the process of testing explanatory theories, it contributes to an evaluation of those theories in terms of predictive success. However, this common strategy for evaluating scientific theories is not directly concerned with their explanatory adequacy. To repeat, it is not being denied that meta-analytic methods can be employed in the course of testing theories, but meta-analysis itself is not an approach to theory testing (Chow, 1996). To employ meta-analysis to assist in the predictive testing of an explanatory theory does not

thereby confer an explanatory role on meta-analysis itself. One does not assign status simply on the basis of association.

Meta-Analysis and Evaluative Inquiry

It is surprising that methodological discussions of meta-analysis and its applications have shown little regard for the rationale that Glass originally provided for its use. Glass claims that many researchers misunderstand meta-analyses of outcome research because they fail to take cognizance of his rationale. Specifically, this failure is offered by him as the reason for the widespread misunderstanding of Smith, Glass, and Miller’s (1980) original meta-analysis of psychotherapy outcome studies. In a number of different publications, Glass insists that meta-analysis should be understood as an exercise in evaluative research rather than in scientific research (Glass, 1972; Smith, Glass, & Miller, 1980; Glass & Kliegl, 1983). The core of Glass’s underlying rationale for meta-analysis involves drawing a strong distinction between scientific and evaluative inquiry. Glass’s position is that researchers as scientists are concerned to satisfy their curiosity by seeking truthful conclusions in the form of theories comprising explanatory laws. By contrast, evaluators undertake research on behalf of a client that is aimed at producing useful decisions based on descriptive determinations of the worth of particular products or programs. For Glass, the meta-analysis of outcome studies properly involves the integration of the products of evaluative research only. The methodology for this conception of meta-analysis fashions the distinction between scientific and evaluative inquiry in terms of the relevance for each of the concepts of truth, explanation, values, problems, and generalizations. Because of space limitations, I will consider just one of these contrasts—that of explanation. Glass contends that scientific inquiry involves the continual search for subsurface explanations of surface phenomena. Evaluative inquiry, on the other hand, does not seek explanations: “A fully proper and useful evaluation can be conducted without producing an explanation of why the product or program being evaluated is good or bad or of how it operates to produce its effects . . . [It] is usually enough for the evaluator to know that something attendant upon the [product or program] is responsible for the valued outcomes.” (Glass, 1972, pp. 5–6)



Glass’s position seems to be that although program treatments can be causally responsible for their measured outcomes, it matters little that knowledge of this gleaned from evaluation studies does not tell us how programs produce their effects, because such knowledge is not needed for policy action. Glass is surely correct in asserting that scientists are centrally concerned with the construction of causal theories to explain phenomena, for this is the normal way in which they achieve understanding of the empirical regularities they discover. However, he is wrong to insist that proper evaluations should deliberately ignore knowledge of underlying causal mechanisms. The reason for this is that the effective implementation and alteration of social programs often benefits from knowledge of the relevant causal mechanisms involved (Gottfredson, 1984), and strategic intervention in respect of these is often the most effective way to bring about social change. Although standard versions of scientific realism are wrong to insist that the relevant causal mechanisms are always unobserved mechanisms, it is the case that appeal to knowledge of covert causal mechanisms will frequently be required for understanding and change. To conclude this highly selective evaluation of Glass’s rationale for meta-analysis, science itself is best understood as a value-laden, problem-oriented human endeavor that tries to construct causal explanatory theories of the phenomena it discovers. There is no sound way of drawing a principled contrast between scientific and evaluative inquiry. These critical remarks are not directed against the worth of evaluation as such or against the use of meta-analysis in evaluating program or product effectiveness. They are leveled specifically at the conception of evaluation research that appears to undergird Glass’s approach to meta-analysis.

Meta-Analysis and the Nature of Science

Proponents of meta-analysis often justify the use of these methods by pointing out the need to glean valuable knowledge from the information in a domain that lies dormant in the cornucopia of scattered primary studies. However, Sohn (1996) has expressed concern about the quality of empirical psychological studies that are used in meta-analysis. He has urged resistance to the generally accepted view that meta-analysis is a form of research rather than a review of research, and he has balked at Schmidt’s (1992) revisionist model of possible future science

as “. . . a two-tiered research enterprise [where] one group of researchers will specialize in conducting individual studies [and] another group will apply complex and sophisticated meta-analysis methods to those cumulative studies and will make the scientific discoveries” (p. 1180). Sohn’s primary concerns are to challenge the claim that meta-analysis is an important vehicle of scientific discovery and to identify the major problems besetting mainstream psychological research. Sohn (1996) has questioned the basic idea of meta-analysis as a standalone literature review capable of discovering truths, whereas traditionally scientific discoveries were contained in the empirical findings of the primary studies themselves. For Sohn, the idea that meta-analytic literature reviews can make discoveries about nature rests on the assumption that the primary research literature is a proxy for nature. It is an assumption that he has roundly rejected. Noting the tendency of meta-analysts to paint a bleak picture of progress in twentieth century psychology, Sohn (1996) has suggested that although meta-analysis has been introduced to improve matters in this regard, it is in fact symptomatic of its poor progress. In his judgment, this lack of good progress is a consequence of psychology adopting a hypothesis-testing view of science. For Sohn, this view of science seeks knowledge by testing research hypotheses about the relationship of descriptive variables without regard for causal mediating variables. Essentially, the approach amounts to the hypothetico-deductive testing of outcome studies through use of significance tests and effect size measures. Sohn maintains that there are, in fact, two deleterious consequences of such an approach to research: one is the lack of agreement about outcomes, and the other is the absence of knowledge of the causal mechanisms that are responsible for those alleged outcomes. Meta-analysis is indicted by Sohn for failing to remedy both types of defect. However, Sohn has supported his claim that meta-analysis does not produce demonstrable evidence for treatment effects in a curious way. He has acknowledged that Smith, Glass, and Miller’s (1980) well-known meta-analytic treatment of the benefits of psychotherapy has been corroborated by subsequent meta-analyses yet has maintained that this does not constitute evidence for replicable effects. He has expressed a distrust of research that relies on statistical methods for making claims about replicable effects. This distrust appears to


be founded in part on an extension of the view attributed to Lord Rutherford that if an experimental study requires statistics, then the experiment is in need of improvement. For Sohn, “If one’s science needs [meta-analysis], one should have done better science.” However, this view flies in the face of widely accepted scientific practice. Woodward’s (1989) detailed examination of the practice of phenomena detection in science strongly supports the view that different parts of the various sciences, from physics to anthropology, appropriately make extensive use of statistical methods in the detection of empirical phenomena. It is hard to imagine that statistics would exist as we currently know it unless it provided a necessary armament for science. In this regard, it is worth noting that Sohn has acknowledged the claim made by Hedges and Olkin (1985) that metaanalysis in some form or other has a long history of use in the hard sciences. Sohn has stated his disagreement with this position, but he has not argued against it. Space limitations preclude further analysis of Sohn’s (1996) argument, but perhaps enough has been said to suggest that it should be regarded with some skepticism. More work on the philosophical foundations of meta-analysis is clearly needed. However, from this highly selective examination of its conceptual foundations, it can be concluded that meta-analysis receives its primary justification in scientific research by articulating one, but only one, way in which researchers can fashion empirical generalization from the findings of primary studies. It derives its importance in this role directly from the importance accorded the goal of phenomena detection in science.
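The empirical generalizations that meta-analysis fashions from primary studies are, at bottom, weighted combinations of study-level effect sizes. The following sketch is not from the chapter; it is a minimal fixed-effect illustration, with hypothetical effect sizes and variances, of how an inverse-variance-weighted summary is formed from a set of primary studies.

```python
from math import sqrt

def fixed_effect_summary(effects, variances):
    """Inverse-variance weighted (fixed-effect) summary of primary-study effect sizes."""
    weights = [1.0 / v for v in variances]
    mean = sum(w * d for w, d in zip(weights, effects)) / sum(weights)
    se = sqrt(1.0 / sum(weights))  # standard error of the summary effect
    return mean, se

# Hypothetical standardized mean differences (d) and their sampling variances
d_values = [0.42, 0.31, 0.55, 0.12]
variances = [0.04, 0.02, 0.09, 0.03]

summary_d, summary_se = fixed_effect_summary(d_values, variances)
print(f"summary d = {summary_d:.2f}, SE = {summary_se:.2f}")
```

This is only one way of combining studies (a fixed-effect model); the philosophical point in the text does not depend on the particular weighting scheme chosen.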

Exploratory Factor Analysis

Despite the advanced statistical state and frequent use of exploratory factor analysis in the behavioral sciences, debate about its basic nature and worth abounds. Thurstone (1947) has appropriately emphasized the exploratory nature of the method, and many methodologists take it to be a method for postulating latent variables that are thought to underlie patterns of correlations. Some, however, understand exploratory factor analysis as a method of data reduction that provides an economical description of correlational data. The present section considers this and other important foundational issues that have figured prominently in discussions of the method.

Factor Analytic Inference

Alongside the debate between the fictionalist and realist interpretations of factors, there is a difference of view about whether the basic nature of factor analytic inference is inductive or abductive. Expositions of exploratory factor analysis seldom consider its inferential nature, but when they do, the method is usually said to be inductive in character. This is not surprising, given that exploratory factor analysis can be plausibly located historically within seventeenth- and eighteenth-century empiricist philosophy of science and its inductive conception of inquiry (Mulaik, 1987). However, even if one relaxes the Baconian ideal that inductive method is an algorithm that produces incorrigible knowledge, an inductive characterization of exploratory factor analysis seems inappropriate. This is because inductive inference, being descriptive inference, cannot take the researcher from manifest effects to theoretical entities that are different in kind from those effects. However, abductive inference, which is concerned with the generation and evaluation of explanatory hypotheses, can do so. For this reason, exploratory factor analysis is better understood as an abductive method of theory generation (Haig, 2005a), a characterization that coheres well with its general acceptance as a latent variable method. With exploratory factor analysis, abductive inference is explanatory inference that leads back from presumed effects to underlying causes. There are different forms of abductive reasoning. Exploratory factor analysis is a method that can facilitate the drawing of explanatory inferences that are known as existential abductions. Existential abductions enable researchers to hypothesize the existence, but not the nature, of entities previously unknown to them. The innumerable examples of existential abduction in science include the initial postulation of hidden entities such as atoms, genes, tectonic plates, and personality traits. In cases like these, the primary thrust of the initial abductive inferences is to claims about the existence of theoretical entities to explain empirical facts or phenomena. Similarly, the hypotheses given to us through the use of exploratory factor analysis postulate the existence of latent variables such as Spearman's g and extraversion. It remains for further research to elaborate on the first rudimentary conception of these variables and their interrelation. The factor analytic use of existential abduction to infer the existence of, say, the theoretical entity g can be coarsely reconstructed in accordance with the following argument schema:

The surprising empirical phenomenon of the positive correlations among different tests of ability is identified.
If g exists, and it is validly and reliably measured by a Wechsler intelligence scale (and/or some other objective test), then the positive manifold would follow as a matter of course.
Hence, there are grounds for judging the hypothesis of g to be initially plausible and worthy of further pursuit.

Note that the schema for abductive inference and its application to the generation of the hypothesis of g is concerned with the form of the arguments involved, rather than with the actual generation of the explanatory hypotheses. The explanatory hypothesis is given in the second premise of the argument. An account of the genesis of the explanatory hypothesis must, therefore, be furnished by some other means. It is plausible to suggest that reasoning to explanatory hypotheses trades on our evolved cognitive ability to abductively generate such hypotheses (Carruthers, 2002). Whatever its origin, an informative methodological characterization of the abductive nature of factor analytic inference must appeal to the scientist’s own psychological resources as well as those of logic. This injunction is motivated by the realist methodological thesis of naturalism stated near the beginning of the chapter. Although exploratory factor analysis exemplifies well the character of existential abduction, it is clearly not an all-purpose method for abductively generating explanatory hypotheses and theories. With its focus on common factors, it can properly serve as a generator of elementary theories only in those multivariate domains that have common causal structures.

The Principle of the Common Cause

It is well known that exploratory factor analysis is a common factor analytic model in which the latent factors it postulates are referred to as common factors. Less well known is the fact that there is an important principle of scientific inference, known as the principle of the common cause (e.g., Sober, 1988), that can be used to drive the nature and shape of the existential abductive inferences involved in exploratory factor analysis. The principle of the common cause can be formulated concisely as follows: "Whenever two or more events are improbably, or significantly,

correlated, infer one or more common causes unless there is good reason not to.” Clearly, the principle should not be taken as a hard-and-fast rule, for in many cases, proper inferences about correlated events will not be in terms of common causes. The qualifier, “unless there is good reason not to,” should be understood as an injunction to consider causal interpretations of the correlated events other than the common causal kind. For example, in a given research situation, the correlated events might be related as direct causes, or their relationship might be mediated by a third variable in a causal sequence. Although exploratory factor analysis is used to infer common causes, expositions of common factor analysis that explicitly acknowledge the importance of the principle of the common cause are rare. Kim and Mueller’s (1978) textbook exposition of factor analysis is a noteworthy exception. In discussing the conceptual foundations of factor analysis, these authors evince the need to rely on what they call the postulate of factorial causation. The postulate of factorial causation is characterized by them as “the assumption that the observed variables are linear combinations of underlying factors and that the covariation between observed variables solely results from their common sharing of one or more of the common factors” (p. 78). The authors make clear that the common factors mentioned in the assumption are to be regarded as underlying causal variables. Understood as a methodological injunction, this postulate functions as a variant of the principle of the common cause. Without appeal to this principle, factor analysts could not identify the underlying factor pattern from the observed covariance structure. There are two features of the principle of the common cause that make it particularly suitable for use in exploratory factor analysis. First, it can be applied in situations where we do not know how likely it is that the correlated effects result from a common cause. The abductive inference to common causes is a basic explanatory move that is non-probabilistic and qualitative in nature. It is judgments about the soundness of the abductive inferences, rather than the assignment of probabilities, that confer initial plausibility on the factorial hypotheses spawned by exploratory factor analysis. Second, the principle can also be used in situations where we are essentially ignorant of the nature of the common cause. With this second feature, the principle of the common cause accommodates the fact the exploratory factor analysis trades in existential abductions.
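By way of illustration (a sketch added here, not part of the original text), the following Python simulation shows the kind of data pattern that the principle licenses researchers to reason back from: a single unobserved common cause produces uniformly positive correlations among its manifest indicators. The number of indicators and the loading values are arbitrary choices made for the example; the abductive step of exploratory factor analysis runs the reasoning in the opposite direction, from the observed correlations to the hypothesized common cause.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5_000

# One latent common cause (e.g., a general ability factor), never observed directly
g = rng.normal(size=n)

# Five manifest "test scores": each loads on g and adds unique noise
loadings = np.array([0.8, 0.7, 0.6, 0.5, 0.4])
unique = rng.normal(size=(n, 5))
scores = g[:, None] * loadings + unique * np.sqrt(1 - loadings**2)

# The single common cause induces a positive manifold: every pairwise
# correlation among the manifest variables is positive
print(np.round(np.corrcoef(scores, rowvar=False), 2))
```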


Further, it is important to appreciate that the principle of the common cause does not function in isolation from other methodological constraints. Embedded in exploratory factor analysis, the principle helps one limit existential abductive inference to those situations where we reason back from correlated effects to one or more common causes. Although covariation is an important basic datum in science, not all effects are expressed as correlations, and of course, not all causes are of the common causal variety. It follows from this that one should not always look for common causal interpretations of multivariate data, for there are numerous alternative latent variable models.

The Underdetermination of Factors

The methodological literature on exploratory factor analysis has given considerable attention to the indeterminacy of factors in the common factor model. Factor indeterminacy arises from the fact that the common factors are not uniquely determined by their related manifest variables. As a consequence, a number of different common factors can be produced to fit the same pattern of correlations in the manifest variables. Although typically ignored by factor analytic researchers, factor indeterminacy is an epistemic fact of life that continues to challenge factor analytic methodologists. Some methodologists regard factor indeterminacy as a serious problem for common factor analysis and recommend the use of alternative methods such as component analysis methods because they are considered to be determinate methods. One constructive perspective on the issue of factor indeterminacy has been suggested by Mulaik and McDonald (Mulaik & McDonald, 1978; McDonald & Mulaik, 1979; Mulaik, 1987). Their position is that the indeterminacy involved in interpreting the common factors in exploratory factor analysis is just a special case of the general indeterminacy of theory by empirical evidence widely encountered in science, and it should not, therefore, be seen as a debilitating feature that forces us to give up on common factor analysis. Indeterminacy is pervasive in science. It occurs in semantic, metaphysical, and epistemological forms (McMullin, 1995). Factor indeterminacy is essentially epistemological in nature. The basic idea of epistemological or, more precisely, methodological indeterminacy is that the truth or falsity (better, acceptance or rejection) of a hypothesis or theory is

not determined by the relevant evidence (Duhem, 1954). In effect, methodological indeterminacy arises from our inability to justify accepting one theory among alternatives on the basis of empirical evidence alone. Mulaik (1987) sees underdetermination in exploratory factor analysis as involving inductive generalizations that go beyond the data. However, inductive underdetermination should be seen as applying specifically to the task of establishing factorial invariance, where one seeks constructive or external replication of factor patterns. Moreover, for exploratory factor analysis there is also a need to acknowledge and deal with the abductive underdetermination involved in the generation of explanatory factorial theories. The sound abductive generation of hypotheses is essentially educated guesswork. Thus, drawing from background knowledge, and constrained by correlational empirical evidence, the use of exploratory factor analysis can reasonably be expected to yield a plurality of factorial hypotheses or theories that are thought to be in competition. This contrasts strongly with the unrealistic expectation held by many earlier users of exploratory factor analysis that the method would deliver them strongly justified claims about the one best factorial hypothesis or theory. How, then, can exploratory factor analysis deal with the specter of underdetermination in the context of theory generation? One plausible answer is that exploratory factor analysis narrows down the space of a potential infinity of candidate theories to a manageable subset by facilitating judgments of initial plausibility (Haig, 2005a). It seems clear enough that scientists often make judgments about the initial plausibility of the explanatory hypotheses and theories that they generate. However, it is less clear just to what this evaluative criterion amounts (cf. Whitt, 1992). With an abductive conception of exploratory factor analysis, judgments of the initial plausibility of theories are judgments about the soundness of the abductive arguments employed in generating those theories. It seems reasonable to suppose that those who employ exploratory factor analysis as an abductive method of theory generation often make compressed judgments of initial plausibility. By conferring judgments of initial plausibility on the theories it spawns, exploratory factor analysis deems them worthy of further pursuit, whereupon it remains for the factorial theories to be further developed and evaluated, perhaps through the use of confirmatory factor analysis. It should be emphasized that


using exploratory factor analysis to facilitate judgments about the initial plausibility of hypotheses will still leave the domains being investigated in a state of considerable theoretical underdetermination. It should also be stressed that the resulting plurality of competing theories is entirely to be expected and should not be thought of as an undesirable consequence of employing exploratory factor analysis. To the contrary, it is essential for the growth of scientific knowledge that we vigorously promote theoretical pluralism (Hooker, 1987), develop theoretical alternatives, and submit them to critical scrutiny.

Exploratory Factor Analysis and Confirmatory Factor Analysis

The aforementioned consideration of exploratory factor analysis supports the conclusion that there is an important role for its use in factor analytic research. However, this conclusion raises the question of how exploratory factor analysis relates to its confirmatory namesake. In contrast to popular versions of the classical inductivist view of science, according to which inductive method can generate secure knowledge claims, the use of exploratory factor analysis as an abductive method of theory generation can only furnish researchers with a weak logic of discovery—one that gives them educated guesses about underlying causal factors. It is for this reason that those who use exploratory factor analysis to generate theories need to supplement their generative assessments of the initial plausibility of those theories with additional consequentialist justification in the form of confirmatory factor analytic testing or some alternative approach to theory appraisal. However, in the factor analytic literature, there is a division of opinion about whether exploratory factor analysis and confirmatory factor analysis should be viewed as complementary or competing methods of common factor analysis. Quite a number of factor analytic methodologists have expressed views that discourage their complementary use in factor analytic research. For example, Gorsuch (1983), in his well-known book on factor analysis, has expressed a view about the relative importance of exploratory and confirmatory factor analysis that seems to be quite widely held today:

Although the next three chapters [of Factor analysis] are primarily concerned with exploratory factor analysis, the space and time given to that technique is a function of the complexity of resolving its problems, not of its theoretical importance. On the contrary, confirmatory factor analysis is the more theoretically important—and should be the much more widely used—of the two major factor analytic approaches. (p. 134)

Although Gorsuch makes his claim in emphatic terms, he provides no justification for it. He seems to assume that theory testing is more important than theory generation. However, this belief is difficult to defend, given the fact that there are many other important phases of scientific inquiry that together demand most of the researcher’s methodological time. Recall, for example, the importance to science of the detection of empirical phenomena and the generation, development, and comparative appraisal of theories. Viewed in this light, theory testing is just one, albeit important, part of scientific method (cf. Simon, 1968). Given the fact that science is as much concerned with theory generation as it is with theory testing, and acknowledging that exploratory factor analysis is a useful abductive method of theory generation, exploratory factor analysis deserves to be regarded as important as confirmatory factor analysis in the theory constructor’s toolkit. To conclude, despite the fact that exploratory factor analysis has been frequently employed in psychological research, the extant methodological literature on the method seldom acknowledges the explanatory and ontological import of the method’s inferential nature. Abduction is a major form of creative reasoning in science, and the principle of the common cause is a maxim of scientific inference with important application in research. By incorporating these two related elements into its fold, exploratory factor analysis is ensured an important, albeit circumscribed, role in the construction of explanatory theories in psychology and other sciences. By generating structural models about common causes, exploratory factor analysis can serve as a valuable precursor to confirmatory factor analysis.

Causal Modeling

During the last 50 years, social and behavioral science methodologists have developed a variety of increasingly sophisticated statistical methods to help researchers draw causal conclusions from correlational data. These causal modeling methods, as they have sometimes been called, include path analysis, confirmatory factor analysis, and full structural equation modeling. Despite the fact that psychological researchers are increasingly employing more sophisticated causal


modeling methods in place of simple regression and partial correlation procedures, worries about both their accompanying methodology and their misuse have been expressed (e.g., Cliff, 1983). In this section, I consider some philosophical aspects of three foundational issues that have been discussed in the literature on causal modeling: the different ideas of causation presupposed by causal modeling; the suggestion that causal modeling can be viewed as a form of inference to the best explanation; and the contested nature of latent variables.

Causal Modeling and Theories of Causation

One central methodological issue in the debates about causal modeling has to do with the appropriateness of the nature of causation involved in various causal modeling procedures. A popular view of the matter is clearly expressed by Kenny (1979), who points out that three conditions must be satisfied for a researcher to claim that one variable is the cause of another. The first condition is that the relationship be asymmetric. The second condition is that a functional relationship be present between cause and effect. The third condition is that the causal relationship be direct or non-spurious. These three conditions are exactly those of the regularity theory of causation, which depicts the causal relationship between events in terms of their regular succession, covariation, and contiguity. The regularity theory, which is more or less Humean in character, provides an important part of the epistemic backdrop against which traditional causal modeling methods like path analysis have been understood. However, like other parts of the standard empiricist enterprise, this theory has received strong criticism. Its claimed limitations can best be appreciated by contrasting it with a scientific realist alternative known as the generative theory of causation (Harré & Madden, 1975). Briefly stated, the generative theory depicts causation as a relationship where, under appropriate conditions, a causal mechanism produces its effect. For this to happen, the causal mechanism must connect to its effect and have the power to generate that effect, usually when stimulated by the appropriate causal condition. It is the productivity of a generative mechanism that makes it a causal mechanism, and for this to occur, there must be a naturally necessary connection that allows for the transmission of power from cause to effect. This causal power exists irrespective of whether it is currently being exercised. As such, it is properly viewed

as a tendency—that is, an existing state of an object, which, if unimpeded, will produce its effect. We are, therefore, able to infer abductively the presence of the causal mechanism on the basis of knowledge of the triggering condition and/or its presumed effect. Advocates of the generative theory of causation claim it has a number of important advantages over the regularity theory. One advantage of the generative theory is that it is able to accommodate deep structural, explanatory theories that postulate unobserved generative mechanisms. It is argued that we need a theory of causation that affords us the conceptual space to do this, because many of the world's causal mechanisms are not open to direct inspection. The latent variables of many of our causal modeling methods are thought by many to be precisely of this kind. A related advantage of the generative theory is that it is needed for enlightened social policy because, as noted in the discussion of evaluation research earlier, the possibility of ameliorative action depends on effecting change based on an understanding of how things work, and for this, knowledge of the relevant underlying causal mechanisms is often essential. A third, and significant, advantage of the theory of generative causation is that it enables us to draw the important distinction between empirical regularities and genuine causal laws. An adequate methodology of causal modeling must be able to draw the distinction between empirical regularities and causal laws, because the ability to do so is a conceptual requirement of being able to differentiate properly direct causal relations from spurious correlations. By collapsing this distinction, empiricists, with their regularity theory of causation, are unable to articulate a satisfactory notion of spuriousness. For example, Simon's (1985) influential analysis of spurious correlation explicitly rejects the generative theory of causation and endeavors to ground the distinction between true and spurious correlations on a commitment to an empiricist view of causation. The common or intervening causes that bring about spurious correlations will typically be unobserved. However, for a statistical treatment of these variables to be consistent with the regularity theory, Simon's view of causation forces researchers to focus on altogether different variables at the manifest level. But this cavalier ontological slide wrecks our efforts to obtain worthwhile causal knowledge, because the manifest replacement variables cannot act as effective surrogates for their


presumed common and intervening causes. They are ontologically distinct from such causes and, although as causal conditions they may trigger their latent counterparts, they do not function as major causal mechanisms that can bring about spurious correlations. Although it can plausibly be argued that a generative view of causation is required to make sense of research that embraces hidden causal mechanisms, it does not follow, as is often supposed (e.g., Manicas, 1989; Sayer, 1992), that the regularity theory has no place in a realist conception of science. With its emphasis on the ideas of regularity, it would seem to be a suitable account of causation for claims about phenomena that take the form of empirical generalizations. Nor should it be thought that the regularity theory and the generative theory together give one a full understanding of causation in science. For example, structural equation modeling provides knowledge of causal networks. As such, it does not so much encourage the development of detailed knowledge of the nature of latent variables as it specifies the range and order of causal relations into which latent and manifest variables enter. For this type of research, a network theory of causation is needed (Thagard, 1999). The suggestion that different conceptions of causation are relevant to causal modeling fits with a philosophy of causal pluralism, which is increasingly being recommended in contemporary methodological studies of the nature of causation (GodfreySmith, 2009).

Structural Equation Modeling and Inference to the Best Explanation

The guess-and-test strategy of the hypothetico-deductive method takes predictive accuracy as the sole criterion of theory goodness. However, it seems to be the case that in research practice, the hypothetico-deductive method is sometimes combined with the use of supplementary evaluative criteria such as simplicity, scope, and fruitfulness. When this happens, and one or more of the supplementary criteria have to do with explanation, the combined approach can appropriately be regarded as a version of inference to the best explanation, rather than just an augmented account of the hypothetico-deductive method (Haig, 2009). This is because the central characteristic of the hypothetico-deductive method is a relationship of logical entailment between theory and evidence, whereas with inference to the best explanation the relationship is also one of explanation. The hybrid version of inference to the best explanation being considered here will allow the researcher to say that a good explanatory theory will rate well on the explanatory criteria and, at the same time, boast a measure of predictive success. Most methodologists and scientists will agree that an explanatory theory that also makes accurate predictions will be a better theory for doing so. Although the use of structural equation modeling in psychology often involves testing models in hypothetico-deductive fashion, it also contains a minority practice that amounts to inference to the best explanation in the sense just noted. This latter practice involves the explicit comparison of models or theories in which an assessment of their goodness-of-fit to the empirical evidence is combined with the weighting of the fit statistics in terms of parsimony indices (Kaplan, 2000). Here goodness-of-fit provides information about the empirical adequacy of the model, whereas parsimony functions as a criterion having to do with the explanatory value of the model. Both are used in judgments of model goodness. Markus, Hawes, and Thasites (2008) recently suggested that in structural equation modeling, model fit can be combined with model parsimony, understood as explanatory power, to provide an operationalized account of inference to the best explanation. They discussed the prospects of using structural equation modeling in this way to evaluate the comparative merits of two- and three-factor models of psychopathy.
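To give the idea of weighing fit against parsimony a concrete, if simplified, form: the sketch below is not drawn from the chapter or from the studies it cites; it uses one commonly reported index, the RMSEA, which folds a degrees-of-freedom (parsimony) adjustment into the assessment of fit, and applies it to purely hypothetical chi-square values for two competing latent-variable models of the same data.

```python
from math import sqrt

def rmsea(chi2, df, n):
    """Root mean square error of approximation: a fit index that penalizes
    model complexity through the model's degrees of freedom."""
    return sqrt(max(chi2 - df, 0) / (df * (n - 1)))

# Hypothetical fit statistics for two competing models fitted to the same sample
n = 500
two_factor   = {"chi2": 260.0, "df": 89}   # more parsimonious (fewer free parameters)
three_factor = {"chi2": 240.0, "df": 87}   # better raw fit, but more parameters

for name, m in [("two-factor", two_factor), ("three-factor", three_factor)]:
    print(name, "RMSEA =", round(rmsea(m["chi2"], m["df"], n), 3))
```

The comparison illustrates the general logic only; in practice, researchers adopting the comparative strategy described above would weigh several fit and parsimony indices, along with substantive explanatory considerations, in judging which model is the better explanation.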

Do Latent Variables Exist?


Many causal modeling methods are latent variable methods, whose conceptual foundations are to be found in the methodology of latent variable theory (Borsboom, 2005; 2008). Central to this theory is the concept of a latent variable itself. However, the notion of a latent variable is a contested concept, and there are fundamental philosophical differences in how it should be understood. A clear example of the contested nature of the concept of a latent variable is to be found in the two quite different interpretations of the nature of the factors produced by exploratory factor analysis. One view, known as fictionalism, maintains that the common factors, the output of exploratory factor analysis, are not theoretical entities invoked to explain why the observed variables correlate the way that they do. Rather, these factors are taken to be summary expressions of the way manifest variables co-vary. Relatedly, theories that marshal


descriptions of such factors are properly considered to serve the instrumentalist function of economically redescribing the original correlational data. This interpretation of exploratory factor analysis has been quite influential in psychometrics (Block, 1976) and has been taught to generations of psychology students through textbooks on psychological testing (e.g., Anastasi & Urbina, 1997). Fictionalism seems to be the preferred option of many factor analytic researchers in the domains of personality and intelligence. However, fictionalism is a difficult position to defend, and it seems to fail in factor analysis for the reason it fails in science generally: it inappropriately grants ontological significance to a sharp distinction between observation and theory to buttress the claim that only observable, or manifest, entities exist, when observability is really a matter of degree. Fictionalists argue that because we do not have perceptual experience of theoretical entities, we do not have grounds for saying they exist; we only have grounds for claiming that observable entities exist. But realist philosophers of science (e.g., Maxwell, 1962) assert in reply that fictionalists cannot maintain a sharp distinction between what is observable and what is unobservable. What cannot be seen directly by the unaided eye might be observable through a magnifying glass and what cannot be observed through a magnifying glass might be observed through a microscope. Importantly, how we draw the observable/unobservable distinction at a particular time is a function of prior knowledge, our physiological make-up, and available instrumentation. Thus, the distinction provides no basis for deciding what entities do, and do not, exist. To assert that factors are theoretical entities is not to regard them as having a special existence; rather, it is to acknowledge that we come to know them indirectly in terms of their correlated effects. On this realist interpretation, the factors are regarded as latent variables that underlie, and give rise to, the correlated manifest variables. Borsboom (2005) has made a strong case for adopting a realist attitude to latent variables more generally by combining an argument similar to Maxwell’s, along with other foundational considerations in philosophy of science and psychometrics. This general argument against fictionalism simultaneously supports the doctrine of realism in science, but it does not by itself establish that the factors of exploratory factor analysis should be given a realist interpretation. Whether this should happen depends also on whether exploratory

factor analysis can facilitate the drawing of sound abductive inference about the existence of latent factors. This highly selective consideration of the philosophy of causal modeling points to three conclusions: (1) that causation in causal modeling manifests itself in a number of different ways; (2) that causal modeling can transcend the limitations of the hypothetico-deductive method and adopt the practice of inference to the best explanation; and (3) that latent variables deserve to be given a realist interpretation as genuine theoretical entities.

Conclusion

The philosophy of research methods is an aspect of research methodology that receives limited attention in behavioral science education. The majority of students and research practitioners in the behavioral sciences obtain the bulk of their knowledge of research methods from textbooks. However, a casual examination of these texts shows that they tend to pay little, if any, serious regard to the philosophy of science and its bearing on the research process. As Kuhn pointed out nearly 50 years ago (Kuhn, 1962; 1996), textbooks play a major role in dogmatically initiating students into the routine practices of normal science. Serious attention to the philosophy of research methods would go a considerable way toward overcoming this uncritical practice. As contemporary philosophy of science increasingly focuses on the contextual use of research methods in the various sciences, it is to be hoped that research methodologists and other behavioral scientists will avail themselves of the genuine methodological insights that it contains.

Future Directions

In this final section of the chapter, I suggest a number of directions that future work in the philosophy of quantitative methods might take. The first three suggestions are briefly discussed; the remaining suggestions are simply listed.

Understand Quantitative Methods Through Methodology

A proper understanding of research methods cannot be had without an appreciation of their accompanying methodology (see Proctor & Capaldi, 2001). Methodology is the interdisciplinary field that studies methods. It draws from the disciplines


of statistics, philosophy of science, and cognitive science, among others. And yet, the professional literature of these disciplines does not figure in the content of research methods courses. Further, it is important to appreciate that methodology has descriptive, critical, and advisory dimensions: Again, the typical methods curriculum does not systematically deal with research methods with these considerations in mind. It is not surprising, therefore, that psychologists’ understanding of research methods often leaves a lot to be desired. A realist-oriented methods curriculum would profitably consider methods in the light of the primary characteristics of realist methodology outlined early in the chapter. To mention just three of these: Greater prominence would be given to generative methodology in which reasoning well to hypotheses and theories would figure in the assessment of those knowledge claims. The sound abductive reasoning to factorial hypotheses using exploratory factor analysis is perhaps psychology’s best example of generative justification. Similarly, the coherentist justification of explanatory theories using methods of inference to the best explanation would feature much more prominently than it does at present. Finally, in adopting methods that are apt for us as knowing subjects, heuristic procedures would receive much more explicit attention in the methods curriculum as realistic guides to our thinking. The British Psychological Society now takes conceptual and historical issues as one of psychology’s seven core areas. Teaching methods through methodology is the appropriate way to employ this core area in the teaching of research methods. The American Psychological Association and the Association of Psychological Science would do well to follow suit, for it is only by making considered use of methodology that a genuine education in research methods can be achieved.

Rethink the Quantitative/Qualitative Distinction

A major feature of the methodological landscape has been the discussion of the distinction between quantitative and qualitative methods. Although perhaps necessary in establishing a legitimate role for the use of qualitative methods in research, the distinction is now the subject of critical scrutiny. The way the original distinction was drawn has been questioned (e.g., Michell, 2004), and the combination of qualitative and quantitative methods in mixed methods strategies has been strongly promoted in recent times.

However, the quantitative/qualitative debate has not considered the possibility that most methods have both quantitative and qualitative dimensions. In many cases, we are likely to gain a better understanding of the research methods we use not by viewing them as either qualitative or quantitative but by regarding them as having both qualitative and quantitative dimensions. Three examples are mentioned here. First, grounded theory (e.g., Strauss, 1987), the most prominent extant qualitative methodology, is in good part the product of a translation from some sociological quantitative methods of the 1950s. Moreover, there is nothing in principle to stop researchers using quantitative methods within the fold of grounded theory. Exploratory factor analysis, for example, could sometimes be used for generating grounded theory. Second, although exploratory factor analysis itself is standardly characterized as a multivariate statistical method, the inferential heart of the method is the important scientific heuristic known as the principle of the common cause. Importantly, this principle, which guides the factor analytic inference from correlations to underlying common factors, can be effectively formulated in qualitative terms. Finally, the theory of explanatory coherence (Thagard, 1992), which evaluates theories in terms of their explanatory power, is a qualitative method of theory appraisal, but it is implemented by a computer program that is part of the method proper, and that has a connectionist architecture that is mathematically constrained. It is recommended, then, that methodologists and researchers seriously entertain the prospect that individual methods are likely to have a mix of qualitative and quantitative features—that is, that individual methods are themselves mixed methods.

Evaluate the Philosophical Critiques of Quantitative Research Methods

Most of the occasional references to scientific realism in psychology are to Bhaskar's (1975; 1979) critical realism (e.g., Manicas & Secord, 1983), a philosophy that has had considerable impact on the social sciences (e.g., Sayer, 1992). Interestingly, critical realists have expressed strong reservations about the use of statistical methods in quantitative research. Bhaskar himself goes so far as to say that causal models should be "totally discarded." There are various reasons for this attitude (see Pratschke, 2003), but perhaps the most fundamental one is the claim that statistical models themselves do not


provide researchers with the substantive models that are sought in causal modeling research. However, this claim rests on a mistaken conception of the relation between statistical models and substantive theoretical models. It is hard to deny that consideration of much more than the statistical machinery of causal modeling is needed to ground substantive conclusions. Indeed, it is difficult to see how any statistical method could be properly understood and used in research without appeal to suprastatistical matters. Consider, for example, the oft-made claim that factors of exploratory factor analysis are statistical entities and that the method cannot, therefore, be used to favor one substantive factorial theory of intelligence over another (e.g., Gould, 1996). This claim is false because factor analysts typically transcend the statistical level of the method and make use of the relevant part of latent variable theory to generate plausible hypotheses about the existence of latent variables. Of central relevance here is the fact that exploratory factor analysis exploits the so-called "principle of the common cause" to sanction inferences to the initial plausibility of interpreted latent variables. We saw earlier that inferences from manifest to latent variables made in accordance with this principle are abductive, or explanatory, in nature and are made by factor analysts themselves. Although the statistical machinery of multiple regression and partial correlation theory is obviously an important part of exploratory factor analysis, its primary function is to facilitate researchers' suprastatistical inferences to latent factors. It is important to appreciate that the interpretive dimension of causal modeling methods is a proper part of their methodology. There is nothing in critical realism, or other variants of scientific realism, that prevents one from taking such an outlook on causal modeling. Indeed, scientific realism comports well with causal modeling methods that countenance latent variables.

Additional Directions

Space considerations prevent discussion of additional future directions in the philosophy of quantitative methods. However, the following points deserve to be on an agenda for future study.

• Develop a modern interdisciplinary conception of research methodology.
• Give more attention to investigative strategies in psychological research.
• Take major philosophical theories of scientific method seriously.
• Apply insights from the "new experimentalism" in the philosophy of science to the understanding of quantitative research methods.
• Develop the philosophical foundations of theory construction methods in the behavioral sciences.
• Assess the implications of different theories of causality for research methods.
• Examine the philosophical foundations of "new" research methods such as data mining, structural equation modeling, and functional neuroimaging.

Author note

Brian D. Haig, Department of Psychology, University of Canterbury, Christchurch, New Zealand. Email: [email protected]

References

Acree, M. C. (1979). Theories of statistical inference in psychological research: A historico-critical study. Dissertation Abstracts International, 39, 5073B. (UMI No. 7907000). Anastasi, A., & Urbina, S. (1997). Psychological testing (7th ed.). Upper Saddle River, NJ: Prentice-Hall. Bangert-Drowns, R. L. (1986). Review of developments in meta-analytic method. Psychological Bulletin, 99, 388–399. Behrens, J. T., & Yu, C-H. (2003). Exploratory data analysis. In J. A. Schinka & W. F. Velicer (Eds.), Handbook of psychology, Vol. 2 (pp. 33–64). New York: Wiley. Berger, J. O., & Selke, T. (1987). Testing a point null hypothesis: The irreconcilability of p values and evidence (with comments). Journal of the American Statistical Association, 82, 112–139. Bhaskar, R. (1975). A realist philosophy of science. Brighton, England: Harvester. Bhaskar, R. (1979). The possibility of naturalism. Brighton, England: Harvester. Block, N. J. (1976). Fictionalism, functionalism, and factor analysis. In R. S. Cohen, C. A. Hooker, & A. C. Michalos (Eds.), Boston Studies in the Philosophy of Science, Vol. 32 (pp. 127–141). Dordrecht, the Netherlands: Reidel. Bolles, R. C. (1962). The difference between statistical and scientific hypotheses. Psychological Reports, 11, 629–645. Borsboom, D. (2005). Measuring the mind: Conceptual issues in contemporary psychometrics. Cambridge, England: Cambridge University Press. Borsboom, D. (2008). Latent variable theory. Measurement: Interdisciplinary Research and Perspectives, 6, 25–53. Bunge, M. (2008). Bayesianism: Science or pseudoscience? International Review of Victimology, 15, 165–178. Carruthers, P. (2002). The roots of scientific reasoning: Infancy, modularity, and the art of tracking. In P. Carruthers, S. Stich, & M. Siegal (Eds.), The cognitive basis of science (pp. 73–95). Cambridge, England: Cambridge University Press. Chow, S. L. (1996). Statistical significance: Rationale, validity, and utility. London, England: Sage. Cliff, N. (1983). Some cautions concerning the application of causal modeling methods. Multivariate Behavioral Research, 18, 115–126. Cohen, J. (1994). The earth is round (p < .05). American Psychologist, 49, 997–1003.

Table 3.7

                        Study 1            Study 2            Study 3
                        (N = 2,000)        (N = 2,000)        (N = 2,000)
Adverse outcome rate    1%                 25%                50%
Condition               Yes       No       Yes       No       Yes       No
Treatment               1         999      25        975      50        950
Control                 19        981      475       525      950       50
Chi-square              16.4 (p < 0.0001)  540.0 (p < 0.0001) 1,620.0 (p < 0.0001)
RR                      0.05               0.05               0.05
OR                      0.05               0.03               0.003
RRR                     94.7%              94.7%              94.7%
r(phi)                  0.090              0.52               0.90
RD = ARR                0.018              0.45               0.90
NNT = 1/ARR = 1/RD      55.6               2.2                1.1
ARR(10,000)             180                4,500              9,000

                        Study 4            Study 5            Study 6
                        (N = 2,000)        (N = 2,000)        (N = 2,000)
Adverse outcome rate    1%                 25%                50%
Condition               Yes       No       Yes       No       Yes       No
Treatment               9         991      225       775      450       550
Control                 11        989      275       725      550       450
Chi-square              0.2 (p = 0.89)     6.7 (p = 0.01)     20.0 (p < 0.0001)
RR                      0.82               0.82               0.82
OR                      0.82               0.77               0.67
RRR                     18.2%              18.2%              18.2%
r(phi)                  0.01               0.06               0.10
RD = ARR                0.002              0.05               0.10
NNT = 1/ARR = 1/RD      500                20                 10
ARR(10,000)             20                 500                1,000

Note: RR, the relative risk, is the ratio of the risk of the adverse outcome in the treated group, A/(A+B), to the risk in the control group, C/(C+D), where A and B are the treated group's Yes and No counts and C and D are the control group's Yes and No counts; an RR < 1.0 favors the treatment effect (risk reduction), and an RR > 1.0 favors the control effect (risk increase). OR, the odds ratio, also called relative odds or the cross-product ratio, is the ratio of A/B to C/D, or the cross-product AD/BC. RRR, the relative risk reduction, is the reduction in risk of the adverse outcome (e.g., illness) in the treated sample relative to the control, indicated as a percentage by dividing RD (defined next) by [C/(C+D)] and then multiplying by 100. RD, the risk difference, also called the absolute risk reduction (ARR), is the reduction in risk of the particular adverse outcome (e.g., cancer, heart attack, stroke) in the treated group compared with the level of baseline risk in the control—that is, [C/(C+D)] − [A/(A+B)]. Multiplying RD (or ARR) by 10,000 estimates the number of people in a group of 10,000 that are predicted to benefit from the treatment. NNT = 1/RD = 1/ARR is the number needed to treat to prevent a single case of the particular adverse outcome.
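To make the definitions in the note concrete, the following sketch (an added illustration, not part of the original chapter) computes the indices for any 2 × 2 outcome table under the A, B, C, D cell labeling used above; the function and variable names are ours. Run on the counts for Study 1, it reproduces the tabled values (RR ≈ 0.05, OR ≈ 0.05, RRR ≈ 94.7%, RD ≈ 0.018, NNT ≈ 55.6, |phi| ≈ 0.090).

```python
from math import sqrt

def risk_indices(a, b, c, d):
    """Risk indices for a 2x2 outcome table.

    a, b: adverse / non-adverse counts in the treated group
    c, d: adverse / non-adverse counts in the control group
    """
    risk_t = a / (a + b)           # risk of the adverse outcome, treated group
    risk_c = c / (c + d)           # baseline risk, control group
    rr = risk_t / risk_c           # relative risk
    oddsratio = (a * d) / (b * c)  # odds ratio (cross-product AD/BC)
    rd = risk_c - risk_t           # risk difference = absolute risk reduction
    rrr = 100 * rd / risk_c        # relative risk reduction, as a percentage
    nnt = 1 / rd                   # number needed to treat
    # phi: product-moment correlation for two dichotomous variables
    # (its sign depends on how rows and columns are coded)
    phi = (a * d - b * c) / sqrt((a + b) * (c + d) * (a + c) * (b + d))
    return {"RR": rr, "OR": oddsratio, "RRR%": rrr, "RD=ARR": rd,
            "NNT": nnt, "ARR(10,000)": 10_000 * rd, "phi": phi}

# Study 1 of Table 3.7: Treatment 1/999, Control 19/981
print(risk_indices(1, 999, 19, 981))
```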


Figure 3.3 Histograms based on the six studies in Table 3.7, in which the total sample size (N ) was 2,000 in each study. Darkened areas of the bars indicate the number of adverse outcomes (event rates), which increased from 1% (20 cases out of 2,000) in Studies 1 and 4, to 25% (500 cases out of 2,000) in Studies 2 and 5, to 50% (1,000 cases out of 2,000) in Studies 3 and 6. However, the relative risk (RR) and relative risk reduction (RRR) were insensitive to these vastly different event rates. In Studies 1, 2, and 3, the RR and RRR remained constant at 0.05 and 94.7%, respectively, whereas in Studies 4, 5, and 6, the RR and RRR remained constant at 0.82 and 18.2%, respectively.

of illness in the control sample. We would still find RR = 0.20 and RRR = 80% . It makes no difference how large we make the sample sizes, as RR and RRR will not budge from 0.20 and 80% so long as we assume 1 case of illness in the treated sample and 5 cases of illness in the control sample. Suppose we now hold the N constant and see what happens to the RR and RRR when the event rate in the overall N changes from one study to another. In Figure 3.3, we see the results of six hypothetical studies in which the event rates increased from 1% in Studies 1 and 4, to 25% in Studies 2 and 5, to 50% in Studies 3 and 6. Nonetheless, in Studies 1, 2, and 3, RR remained constant at 0.05 and RRR remained constant at an attention-getting 95% . In Studies 4, 5, and 6, RR and RRR stayed constant at 0.82 and 18% , respectively. Further details of the studies in Figure 3.3 are given in Table 3.7. The odds ratio (OR), for the ratio of two odds, was for a time widely promoted as a measure of association in 2 × 2 tables of counts (Edwards, 1963; Mosteller, 1968) and is still frequently reported in epidemiological studies

(Morris & Gardner, 2000). As Table 3.7 shows, OR and RR are usually highly correlated. The absolute risk reduction (ARR), also called the risk difference (RD), refers to the absolute reduction in risk of the adverse event (illness) in the treated patients compared with the level of baseline risk in the control group. Gigerenzer et al. (2008) recommended using the absolute risk reduction (RD) rather than the RR. As Table 3.7 shows, RD (or ARR) is sensitive to the differences in the event rates. There are other advantages as well to RD, which are discussed elsewhere (Rosenthal & Rosnow, 2008, pp. 631–632). Phi is the product-moment correlation (r) when the two correlated variables are dichotomous, and Table 3.7 shows it is sensitive to the event rates and natural frequencies. Another useful index is NNT, for the number of patients that need to be treated to prevent a single case of the adverse event. Relative risk may be an easy-to-handle description, but it is only an alerting indicator that tells us that something happened and we need to explore the data further. As Tukey (1977), the consummate exploratory data analyst, stated: “Anything rosnow, rosenthal


that makes a simpler description possible makes the description more easily handleable; anything that looks below the previously described surface makes the description more effective” (p. v). And, we can add, that any index of the magnitude of effect that is clear enough, transparent enough, and accurate enough to inform the nonspecialist of exactly what we have learned from the quantitative data increases the ethical value of those data (Rosnow & Rosenthal, 2011).

Conclusion

In a cultural sphere in which so many things compete for our attention, it is not surprising that people seem to gravitate to quick, parsimonious forms of communication and, in the case of health statistics, to numbers that appear to speak directly to us. For doctors with little spare time to do more than browse abstracts of clinical trials or the summaries of summaries, the emphasis on parsimonious summary statistics such as RR communications in large sample RCTs may seem heavily freighted with clinical meaning. For the general public, reading about a 94.7% reduction in the risk of some illness, either in a pharmaceutical advertisement or in a news story about a "miracle drug that does wonders," is attention-riveting. It is the kind of information that is especially likely to arouse an inner urgency in patients but also in anyone who is anxious and uncertain about their health. Insofar as such information exaggerates the absolute effects, it is not only the patient or the public that will suffer the consequences; the practice of medicine and the progress of science will as well. As Gigerenzer et al. (2008) wrote, "Statistical literacy is a necessary precondition for an educated citizenship in a technological democracy" (p. 53). There are promising opportunities for moral (and societal) rewards for quantitative methodologists who can help us educate our way out of statistical illiteracy. And that education will be beneficial, not only to the public but to many behavioral, social, and medical researchers as well. As that education takes place, there will be increased clarity, transparency, and accuracy of the quantitative methods employed, thereby increasing their ethical value.

Future Directions

An important theoretical and practical question remains to be addressed: To what extent is there agreement among quantitative methodologists in their evaluation of quantitative procedures as to the

degree to which each procedure in a particular study meets the methodological standards of transparency, informativeness, precision, accuracy, and groundedness? The research program called for to address these psychometric questions of reliability will surely find that specific research contexts, specific disciplinary affiliations, and other specific individual differences (e.g., years of experience) will be moderators of the magnitudes of agreement (i.e., reliabilities) achieved. We believe that the results of such research will demonstrate that there will be some disagreement (that is, some unreliability) in quantitative methodologists’ evaluations of various standards of practice. And, as we noted above, that is likely to be associated with some disagreement (that is, some unreliability) in their evaluations of the ethical value of various quantitative procedures. Another important question would be addressed by research asking the degree to which the specific goals and specific sponsors of the research may serve as causal factors in researchers’ choices of quantitative procedures. Teams of researchers (e.g., graduate students in academic departments routinely employing quantitative procedures in their research) could be assigned at random to analyze the data of different types of sponsors with different types of goals. It would be instructive to learn that choice of quantitative procedure was predictable from knowing who was paying for the research and what results the sponsors were hoping for. Recognition of the possibility that the choice of quantitative procedures used might be affected by the financial interests of the investigator is reflected in the increased frequency with which scientific journals (e.g., medical journals) require a statement from all co-authors of their financial interest in the company sponsoring the research (e.g., pharmaceutical companies). Finally, it would be valuable to quantify the costs and utilities of doing and not doing a wide variety of specific studies, including classic and not-so-classic studies already conducted, and a variety of studies not yet conducted. Over time, there may develop a disciplinary consensus over the costs and utilities of a wide array of experimental procedures. And, although such a consensus is building over time, it will be of considerable interest to psychologists and sociologists of science to study disciplinary differences in such consensus-building. Part of such a program of self-study of disciplines doing quantitative research would focus on the quantitative procedures used, but the primary goal would be to apply survey research methods to establish


the degree of consensus on research ethics of the behavioral, social, educational, and biomedical sciences. The final product of such a program of research would include the costs and utilities of doing, and of not doing, a wide variety of research studies.

Notes

1. Where we quote from a document but do not give the page numbers of the quoted material, it is because either there was no pagination or there was no consistent pagination in the online and hard copy versions that we consulted. Tables 3.1–3.4 reprint only the original material, as there were slight discrepancies between original material and online versions.

2. Pattullo (1982) described the logical basis on which "rulemakers" (like DHEW) had proceeded in terms of a syllogism emphasizing not the potential benefits of research but only the avoidance of risks of harm: "(a) Research can harm subjects; (2) Only impartial outsiders can judge the risk of harm; (3) Therefore, all research must be approved by an impartial outside group" (p. 376).

3. Hearings on the recommendations in the Belmont Report were conducted by the President's Commission for the Study of Ethical Problems in Medicine, Biomedical, and Behavioral Research. Proceeding on the basis of the information provided at these hearings and on other sources of advice, the Department of Health and Human Services (DHHS) then issued a set of regulations in the January 26, 1981, issue of the Federal Register. A compendium of regulations and guidelines that now govern the implementation of the National Research Act and subsequent amendments can be found in the DHHS manual known as the "Gray Booklet," specifically titled Guidelines for the Conduct of Research Involving Human Subjects at the National Institutes of Health (available online at http://ohsr.od.nih.gov/guidelines/index.html).

4. Smith, Kimmel, and Klein (2009) reported that 43.4% of the articles on consumer research in leading journals in the field in 1975 through 1976 described some form of deception in the research. By 1989 through 1990, the number of such articles increased to 57.7%, where it remained steady at 56% in 1996 through 1997, increased to 65.7% in 2001 through 2002, and jumped to 80.4% in 2006 through 2007. The issue of deception is further complicated by the fact that active and passive deceptions are far from rare in our society. Trial lawyers manipulate the truth in court on behalf of their clients; prosecutors surreptitiously record private conversations; journalists get away with using hidden cameras and undercover practices to obtain stories; and the police use sting operations and entrapment procedures to gather incriminating evidence (cf. Bok, 1978, 1984; Saxe, 1991; Starobin, 1997).

5. The document, titled "Ethical Principles of Psychologists and Code of Conduct," is available online at http://www.apa.org/ETHICS/code2002.html.

6. Adaptations of the models in Figures 3.1 and 3.2 have been used to cue students about possible ethical dilemmas in research and data analysis (cf. Bragger & Freeman, 1999; Rosnow, 1990; Strohmetz & Skleder, 1992).

7. The confusion of statistical significance with practical importance may be a more far-reaching problem in science. In a letter in Science, the writers noted that "almost all reviews and much of the original research [about organic foods] report only the statistical significance of the differences in nutrient levels—not whether they are nutritionally important" (Clancy, Hamm, Levine, & Wilkins, 2009, p. 676).

8. NICE (2004) also recommended that researchers use a standardized mean difference (SMD) of half a standard deviation or more (i.e., d or g ≥ 0.5) with continuous outcomes as the threshold of clinical significance for initial assessments of statistically significant summary statistics. However, effects far below the 0.5 threshold for SMDs have been associated with important interventions. For example, in the classic Salk vaccine trial (Brownlee, 1955; Francis, Korns, Voight, Boisen, Hemphill, Napier, & Tolchinsky, 1955), phi = 0.011, which has a d-equivalent of 0.022 (Rosnow & Rosenthal, 2008). It is probably the case across the many domains in which clinical significance is studied that larger values of d or g are in fact generally associated with greater intervention benefit, efficacy, or clinical importance. But it is also possible for large SMDs to have little or no clinical significance. Suppose a medication was tested on 100 pairs of identical twins with fever, and in each and every pair, the treated twin loses exactly one-tenth of 1 degree more than the control twin. The SMD will be infinite, inasmuch as the variability (the denominator of d or g) will be 0, but few doctors would consider this ES clinically significant. As Cohen (1988) wisely cautioned, "the meaning of any given ES is, in the final analysis, a function of the context in which it is embedded" (p. 535).

9. The high RR of HS in this study, in which participants (male physicians) took 325 milligrams every other day, might explain in part why the current dose for MI prophylaxis is tempered at only 81 milligrams per day.
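
As a brief illustration of the effect size arithmetic in Note 8 (added here for clarity, and relying on a standard correlation-to-d conversion rather than a formula stated in the chapter), the standardized mean difference and one common r-to-d conversion are

\[
d = \frac{M_1 - M_2}{s_{\text{pooled}}}, \qquad d = \frac{2r}{\sqrt{1 - r^{2}}}.
\]

Treating the Salk-trial phi of 0.011 as r gives \(d = 2(0.011)/\sqrt{1 - 0.011^{2}} \approx 0.022\), matching the value cited. The identical-twins example works the other way: the mean difference is fixed at 0.1 degree while the variability in the denominator is 0, so d is unbounded despite the trivial clinical benefit.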

Author Note

Ralph L. Rosnow is Thaddeus Bolton Professor Emeritus at Temple University, Philadelphia, PA ([email protected]). Robert Rosenthal is a Distinguished Professor at the University of California at Riverside ([email protected]) and Edgar Pierce Professor of Psychology, Emeritus, Harvard University.

References Allison, D. B. (2009). The antidote to bias in research. Science, 326, 522. American Psychological Association. (1973). Ethical principles in the conduct of research with human participants. Washington, DC: Author. American Psychological Association. (1982). Ethical principles in the conduct of research with human participants. Washington, DC: Author. American Statistical Association (1999). Ethical guidelines for statistical practice. http://www.amstat.org/profession/index. cfm?fusaction=ethical Atwell, J. E. (1981). Human rights in human subjects research. In A. J. Kimmel (Ed.), New directions for methodology of social and behavioral science: Ethics of human subject research (No. 10, pp. 81–90). San Francisco, CA: Jossey-Bass. Baltimore, D. (January 27, 1997). Philosophical differences. The New Yorker, p. 8.

Beecher, H. K. (July 2, 1966). Documenting the abuses. Saturday Review, 45–46. Beecher, H. K. (1970). Research and the individual. Boston: Little Brown. Blanck, P. D., Bellack, A. S., Rosnow, R. L., Rotheram-Borus, M. J., & Schooler, N. R. (1992). Scientific rewards and conflicts of ethical choices in human subjects research. American Psychologist, 47, 959–965. Blass, T. (2004). The man who shocked the world: The life and legacy of Stanley Milgram. New York: Basic Books. Blastland, M., & Dilnot, A. (2009). The numbers game. New York: Gotham Books. Bok, S. (1978). Lying: Moral choice in public and private life. New York: Pantheon. Bok, S. (1984). Secrets: On the ethics of concealment and revelation. New York: Vintage Books. Bordia, P., & Rosnow, R. L. (1995). Rumor rest stops on the information highway: A naturalistic study of transmission patterns in a computer-mediated rumor chain. Human Communication Research, 25, 163–179. Bragger, J. D., & Freeman, M. A. (1999). Using a cost–benefit analysis to teach ethics and statistics. Teaching of Psychology, 26, 34–36. Brownlee, K. A. (1955). Statistics of the 1954 polio vaccine trials. Journal of the American Statistical Association, 50, 1005–1013. Carling, C., Kristoffersen, D. T., Herrin, J., Treweek, S., Oxman, A. D., Schünemann, H., et al. (2008). How should the impact of different presentations of treatment effects on patient choice be evaluated? PLoS One, 3(11), e3693. Cicerone, R. J. (2010). Ensuring integrity in science. Science, 327, 624. Clancy, K., Hamm, M., Levine, A. S., & Wilkins, J. (2009), Organics: Evidence of health benefits lacking. Science, 325, 676. Cohen, J. (1988). Statistical power analysis for the behavioral sciences (2nd ed.). Hillsdale, NJ: Erlbaum. Cohen, J. (1990). Things I have learned (so far). American Psychologist, 45, 1304–1312. Cohen, J. (1994). The earth is round (p < .05). American Psychologist, 49, 997–1003. Committee on Science, Engineering, and Public Policy (2009). On being a scientist: A guide to responsible conduct in research (3rd ed.). Washington, DC: National Academies Press. Edwards, A. W. F. (1963). The measure of association in a 2×2 table. Journal of the Royal Statistical Society, 126, 109–114. Fairchild, A. L., & Bayer, R. (1999). Uses and abuses of Tuskegee. Science, 284, 918–921. Francis, T., Jr., Korns, R. F., Voight, R. B., Boisen, M., Hemphill, F., Napier, J., & Tolchinsky, E. (1955). An evaluation of the 1954 poliomyelitis vaccine trials: A summary report. American Journal of Public Health, 45(5), 1–63. Gigerenzer, G., Gaissmaier, W., Kurz-Milcke, E., Schwartz, L. M., & Woloshin, S. (2008). Helping doctors and patients make sense of health statistics. Psychological Science in the Public Interest, 8, 53–96. Harris, G (March 10, 2009). Doctor admits pain studies were frauds, hospital says. The New York Times. Retrieved March 20, 2009, www.nytimes.com/2009/03/11/health/research/ 11pain.html?ref=us. Haywood, H. C. (1976). The ethics of doing research . . . and of not doing it. American Journal of Mental Deficiency, 81, 311–317.

Huff, D. (1982). How to lie with statistics. New York: Norton. Jacobson, N. S., & Truax, P. (1991). Clinical significance: A statistical approach to defining meaningful change in psychotherapy research. Journal of Consulting and Clinical Psychology, 59, 12–19. Jeans, M. E. (1992). Clinical significance of research: A growing concern. Canadian Journal of Nursing, 24, 1–2. Jones, H. H. (1993). Bad blood: The Tuskegee syphilis experiment (Revised edition). New York: Free Press. Kaiser, J. (September 18, 2009). Researcher, two universities sued over validity of prostate cancer test. Science, 235, 1484. Kazdin, A. E. (1977). Assessing the clinical or applied importance of behavior change through social validation. Behavior Modification, 1, 427–451. Kazdin, A. E. (2008). Evidence-based treatment and practice: New opportunities to bridge clinical research and practice, enhance the knowledge base, and improve patient care. American Psychologist, 63, 146–159. Katz, J. (1972). Experimentation with human beings. New York: Russell Sage. Kelman, H. C. (1968). A time to speak: On human values and social research. San Francisco, CA: Jossey-Bass. Kimmel, A. J. (1991). Predictable biases in the ethical decision making of psychologists. American Psychologist, 46, 786–788. Kossinets, G., & Watts, D. J. (2006). Empirical analysis of an evolving social network. Science, 311, 88–90. Mark, M. M., Eyssell, K. M., & Campbell, B. (1999). The ethics of data collection and analysis. In J. L. Fitzpatrick & M. Morris (Eds.), Ethical issues in program evaluation (pp. 47–56). San Francisco, CA: Jossey-Bass. Milgram, S. (1963). Behavioral study of obedience. Journal of Abnormal and Social Psychology, 67, 371–378. Milgram, S. (1975). Obedience to authority: An experimental view. New York: Harper Colophon Books. Miller, J. D. (2007). Finding clinical meaning in cancer data. Journal of the National Cancer Institute, 99(24), 1832–1835. Morris, J. A., & Gardner, M. J. (2000). Epidemiological studies. In D. A. Altman, D. Machin, T. N. Bryant, & M. J. Gardner (Eds.). Statistics with confidence (2nd. ed., pp. 57–72). London: British Medical Journal Books. Mosteller, F. (1968). Association and estimation in contingency tables. Journal of the American Statistical Association, 63, 1–28. National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research. (April 18, 1979). The Belmont Report: Ethical principles and guidelines for the protection of human subjects of research. Retrieved April 8, 2009, http://ohsr.od.nih.gov/guidelines/belmont.html. National Commission on Research. (1980). Accountability: Restoring the quality of the partnership. Science, 207, 1177–1182. NICE. (2004). Depression: Management of depression in primary and secondary care (Clinical practice guideline No. 23). London: National Institute for Health and Clinical Excellence NICHSR. (2010). HTA 101: Glossary. Retrieved March 3, 2010 from http://www.nlm.nih.gov/nichsr/hta101/ta101014.html Orne, M. T. (1962). On the social psychology of the psychological experiment: With particular reference to demand characteristics and their implications. American Psychologist, 17, 776–783. Orne, M. T. (1969). Demand characteristics and the concept of quasi-control. In R. Rosenthal & R. L. Rosnow (Eds.). Artifact in behavioral research (pp. 143–179). New York:

Academic Press. (Reissued in Rosenthal & Rosnow, 2009, pp. 110–137). Panter, A. T., & Sterba, S. K. (2011). Handbook of ethics in quantitative methodology. Taylor & Francis. Pattullo, E. L. (1982). Modesty is the best policy: The federal role in social research. In T. L. Beauchamp, R. R. Faden, R. J. Wallace, Jr., & L. Walters (Eds.), Ethical issues in social research (pp. 373–390). Baltimore, MD: Johns Hopkins University Press. Paulos, J. A. (1990). Innumeracy: Mathematical illiteracy and its consequences. New York: Vintage Books. Paulos, J. A. (2003). A mathematician plays the stock market. New York: Basic Books. PLoS Medicine Editors (September, 2009). Ghostwriting: The dirty little secret of medical publishing that just got bigger. PLoS Medicine, 6, Issue 9, e1000156. Accessed September 19, 2009 from http://www.plosmedicine.org/static/ghostwriting. action. Prasad, K., Jaeschke, R., Wyer, P., Keitz, S., & Guyatt, G. (2008). Tips for teachers of evidence-based medicine: Understanding odds ratios and their relationship to risk ratios. Journal of General Internal Medicine, 23(5), 635–640. Reynold, P. D. (1975). Value dilemmas in the professional conduct of social science. International Social Science Journal, 27, 563–611. Rosenthal, R. (1994). Science and ethics in conducting, analyzing, and reporting psychological research. Psychological Science, 5, 127–134. Rosenthal, R., & Rosnow, R. L. (1969). The volunteer subject. In R. Rosenthal & R. L. Rosnow (Eds.), Artifact in behavioral research (pp. 59–118). New York: Academic Press. (Reissued in Rosenthal & Rosnow, 2009, pp. 48–92). Rosenthal, R., & Rosnow, R. L. (1975). The volunteer subject. New York: Wiley-Interscience. (Reissued in Rosenthal & Rosnow, 2009, pp. 667–862). Rosenthal, R., & Rosnow, R. L. (1984). Applying Hamlet’s question to the ethical conduct of research. American Psychologist, 45, 775–777. Rosenthal, R., & Rosnow, R. L. (2008). Essentials of behavioral research: Methods and data analysis (3rd ed.). New York: McGraw-Hill. Rosenthal, R., & Rosnow, R. L. (2009). Artifacts in behavioral research. New York: Oxford University Press. Rosnow, R. L. (1990). Teaching research ethics through role-play and discussion. Teaching of Psychology, 17, 179–181. Rosnow, R. L. (1997). Hedgehogs, foxes, and the evolving social contract in psychological science: Ethical challenges and methodological opportunities. Psychological Methods, 2, 345–356. Rosnow, R. L., & Rosenthal, R. (1970). Volunteer effects in behavioral research. In K. H. Craik, B. Kleinmuntz, R. L. Rosnow, R. Rosenthal, J. A. Cheyne, & R. H. Walters, New directions in psychology (pp. 211–277). New York: Holt, Rinehart & Winston. Rosnow, R. L., & Rosenthal, R. (1995). “Some things you learn aren’t so”: Cohen’s paradox, Asch’s paradigm, and the interpretation of interaction. Psychological Science, 6, 3–9. Rosnow, R. L., & Rosenthal, R. (1996). Contrasts and interactions redux: Five easy pieces. Psychological Science, 7, 253–257. Rosnow, R. L., & Rosenthal, R. (1997). People studying people: Artifacts and ethics in behavioral research. New York: W. H. Freeman.

Rosnow, R. L., & Rosenthal, R. (2008). Assessing the effect size of outcome research. In A. M. Nezu & C. M. Nezu (Eds.), Evidence-based outcome research (pp. 379–401). New York: Oxford University Press. Rosnow, R. L., & Rosenthal, R. (2011). Ethical principles in data analysis: An overview. In A. T. Panter & S. K. Sterba (Eds.), Handbook of ethics in quantitative methodology (pp. 37–58). New York: Routledge. Rosnow, R. L., Rotheram-Borus, M. J., Ceci, S. J., Blanck, P. D., & Koocher, G. P. (1993). The institutional review board as a mirror of scientific and ethical standards. American Psychologist, 48, 821–826. Rosnow, R. L., Skleder, A. A., Jaeger, M. E., & Rind, B. (1994). Intelligence and the epistemics of interpersonal acumen: Testing some implications of Gardner’s theory. Intelligence, 19, 93–116. Sales, B. D., & Folkman, S. (Eds.). (2000). Ethics in research with human participants. Washington, DC: American Psychological Association. Saxe, L. (1991). Lying: Thoughts of an applied social psychologist. American Psychologist, 46, 409–415. Schwartz, L. M., & Woloshin, S. (2003). On the prevention and treatment of exaggeration. Journal of General Internal Medicine, 18(2), 153–154. Scott-Jones, D., & Rosnow, R. L. (1998). Ethics and mental health research. In H. Friedman (Ed.), Encyclopedia of mental health (Vol. 2, pp. 149–160). San Diego, CA: Academic Press. Smith, M. B. (1969). Social psychology and human values. Chicago, IL: Aldine. Smith, N. C., Kimmel, A. J., & Klein, J. G. (2009). Social contract theory and the ethics of deception in consumer research. Journal of Consumer Psychology, 19, 486–496. Spielmans, G. I., & Parry, P. I. (2010). From evidencebased medicine to marketing-based medicine: Evidence from internal industry documents. Journal of Bioethical Inquiry: doi:10.1007/s11673-010-9208-8. Starobin, P. (January 28, 1997). Why those hidden cameras hurt journalism. The New York Times, p. A21. Steering Committee of the Physicians’ Health Study Research Group. (1989). Final report on the aspirin component of the ongoing Physicians’ Health Study. New England Journal of Medicine, 321, 129–135. Strohmetz, D. B., & Skleder, A. A. (1992). The use of role-play in teaching research ethics: A validation study. Teaching of Psychology, 19, 106–108. Stryker, J. (April 13, 1997). Tuskegee’s long arm still touches a nerve. The New York Times, p. 4. Tatsioni, A., Bonitsis, N. G., & Ioannidis, J. P. A. (2007). Persistence of contradicted claims in the literature. Journal of the American Medical Association, 298, 2517–2526. Trials of War Criminals before the Nuernberg Military Tribunals under Control Council Law No. 10, October 1946-April 1949, Vol. II. Washington, DC: U.S. Government Printing Office. Tufte, E. R. (1983). The visual display of quantitative information. Cheshire, CT: Graphics Press. Tufte, E. T. (1990). Envisioning information. Cheshire, CT: Graphics Press. Tufte, E. T. (2006). Beautiful evidence. Cheshire, CT: Graphics Press. Tukey, J. W. (1977). Exploratory data analysis. Reading, MA: Addison-Wesley.

Turner, E. H., Matthews, A. M., Linardatos, E., Tell, R. A., & Rosenthal, R. (2008). Selective publication of antidepressant trials and its influence on apparent efficacy. New England Journal of Medicine, 358(3), 32–40. VandenBos, G. R. (Ed.). (2007). APA dictionary of psychology. Washington, DC: American Psychological Association. Vedula, S. S., Bero, L., Scherer, R. W., & Dickersin, K. (2009). Outcome reporting in industry-sponsored trials of Gabapentin for off-label use. New England Journal of Medicine, 361(20), 1963–1971. Vinacke, W. E. (1954). Deceiving experimental subjects. American Psychologist, 9, 155. Wainer, H. (1984). How to display data badly. American Statistician, 38, 137–147. Wainer, H. (1996). Depicting error. American Statistician, 50(2), 101–111. Wainer, H. (2000). Visual revelations: Graphical tales of fate and deception from Napoleon Bonaparte to Ross Perot. Mahwah, NJ: Erlbaum.

Wainer, H. (2009). Picturing the uncertain world. Princeton, NJ: Princeton University Press. Wainer, H., & Thissen, D. (1981). Graphical data analysis. Annual Review of Psychology, 32, 191–241. Wasserman, S., & Faust, K. (1994). Social network analysis: Methods and applications. Cambridge, England: Cambridge University Press. Woloshin, S., Schwartz, L. M., Casella, S. L., Kennedy, A. T., & Larson, R. J. (2009). Press releases by academic medical centers: Not so academic? Annals of Internal Medicine, 150(9), 613–618. Zuckerman, M., Hodgins, H. S., Zuckerman, A., & Rosenthal, R. (1993). Contemporary issues in the analysis of data: A survey of 551 psychologists. Psychological Science, 4, 49–53.

CHAPTER

4

Special Populations

Keith F. Widaman, Dawnté R. Early, and Rand D. Conger

Abstract

Special populations offer unique opportunities and challenges for mathematical/statistical modeling of data. First, we discuss several ways of construing the notion of special populations, including the basis on which we argue that the general notion of special populations is a rather recent one. Then, we discuss four key methodological implications when considering special populations: (1) properly defining and accessing participants from the special population; (2) ensuring that the same dimensions are present across multiple populations; (3) assessing whether empirical implications of psychological theories hold across populations; and (4) exploiting unusual variation in special populations that may allow tests of unique hypotheses. Next, we provide examples that illustrate how to deal with each of the methodological issues. We close with a discussion of issues occasioned by our discussion of special populations, issues that represent topics for future developments.

Key Words: Special populations, group differences, factorial invariance, measurement invariance, structural invariance, individual differences

Introduction

The topic of the current chapter is the place or importance of special populations, particularly with regard to how quantitative methods or techniques can be used to understand or characterize special populations or inform about the nature of special populations. Research on special populations has burgeoned during the past quarter-century, and the pace of development of quantitative methods has also expanded rapidly during this period. In this chapter, we deal with the intersection of these two streams of research—special populations and quantitative methods—to illuminate both. That is, we discuss ways in which the use of state-of-the-art quantitative techniques can help explain the nature of special populations in unique and informative ways. In turn, we hope that consideration of special populations may provide feedback that will lead to interesting developments in quantitative
methods to capture better behavioral phenomena in these groups.

We develop several goals for the chapter based on our considerations of the application of quantitative methods to special populations. An initial goal is to identify the nature or conception of special populations. Here, we discuss our observations on how special populations are identified. A second goal is to explore how research on special populations offers challenges to or ready application of methodological or quantitative approaches. To meet this goal, we discuss four major implications we draw when thinking about conducting research with special populations, and we discuss techniques that would be particularly appropriate in pursuing these implications. Our third goal is to describe a series of applications of quantitative techniques in the study of special populations to provide a substantive instantiation of how quantitative techniques can be used to clarify
the nature of special populations and the dynamics of psychological and biological processes in these groups. We close with conclusions and a series of questions that represent issues for future research.

In pursuing the nature and implications of special populations for quantitative methods, we performed several literature searches to help bound or circumscribe our thinking about special populations. A PsycINFO search in August 2010 using "special population" and "special populations" as possible title words yielded 397 citations. The oldest citation was to a publication by Uehling in 1952, the next oldest was to a publication in 1975, and 390 (or more than 98%) of the 397 citations were to publications in 1980 or later. Thus, the clear majority of publications listing "special population" as a title word have occurred relatively recently, within the last 30 years. More inclusively, we next conducted a PsycINFO search using "special population" and "special populations" as possible keywords. This search turned up 1546 citations, a far broader net of citations than provided by the title-word search. Here, the oldest citation was to a chapter in a volume resulting from a White House Conference on Child Health and Protection, published in 1931 during the Hoover administration (see Folks, 1931). The second oldest citation was the Uehling (1952) paper that was the oldest publication in the title-word search, and 1503 (or more than 97%) of the 1546 citations were to publications in 1980 or later. Finally, a PsycINFO search using "special population" or "special populations" as phrases to be found anywhere in the database led to 4266 citations, the most inclusive list of reference citations among our searches. The two oldest citations were the Folks (1931) and Uehling (1952) papers uncovered in the earlier searches, and a full 4221 (or more than 98.9%) of the citations were to works published in 1980 or later. Consistent with the preceding two searches, the vast bulk of citations under the explicit title or heading of "special populations" occurred within the past three decades.

Each of these three searches outlined above supports the conclusion that work on or thinking about special populations under that specific rubric is a fairly recent phenomenon. However, work on special populations has been a hallmark of research in psychology for more than 100 years, if not more, even if this research has not been published under the heading of special populations. For example, research and theory on persons with mental illness can be traced back more than 200 years, as can
research on persons with mental retardation or intellectual disability. Clinical methods in use more than 200 years ago for dealing with persons with mental illness or intellectual disability appear barbaric to the twenty-first-century practicing scientist or informed citizen; indeed, many clinical methods in use only 50 or 75 years ago seem rather unfortunate and misguided. Thus, research on special populations has long been pursued in psychology, allied behavioral and social sciences, and medical sciences. But, the time is ripe for renewed exploration of behavioral phenomena in special populations to provide a fuller understanding of persons in these populations, which should lead to improved ways of treating and improving the lives of persons in special populations.

Conceptions of Special Populations

To consider methodological implications and applications related to special populations, one must consider first what a special population is. The term special population has no obvious and ready referent. That is, if one asked 10 different psychologists what was meant by "special populations," one might get 10 different answers with little in the way of overlap aside from the indication that "special population" implies deviation from general population norms. To confront this definitional problem, we read a random sample of publications identified in our literature searches to identify the ways different researchers used the term "special populations." In the rest of this section, we discuss several ways in which investigators have used the term, admitting that our categorization is not exhaustive.

Disability Groups

One common use of the term special population appears, implicitly, to represent a less pejorative way of referring to one or another disability group. Research on persons with a slowed rate of mental development was long published under the heading of research on mentally retarded persons, with a clear implication that this was a special population. More recently, the term mentally retarded persons was replaced by persons with mental retardation, which was a less pejorative term; still more recently, the accepted term is now persons with intellectual disability. Historical changes in these terms are reflected in the name of the leading professional organization for research and practice in this domain, which has changed its name from the American Association on Mental Deficiency to the American Association on Mental Retardation and then to the American
Association on Intellectual and Developmental Disabilities. Regardless of the precise term used to refer to this population, the key defining feature is that a person is considered to be a member of the special population if he or she has exhibited a significantly slower rate of mental development relative to persons in the general population. As noted above, persons with mental illness are another group that is treated as a special population, a special population defined on the basis of a perceived disability. Many additional disability groups can be identified, including persons with visual deficits or blindness, persons with hearing loss or deafness, persons with physical disabilities, and individuals with learning disabilities, to name only a few. The one family resemblance shared by all of these uses of the term "special population" is that the generic label is applied to a group to denote the presence of a behavioral disability that is common to all members of the group.

One problem with the use of the term "special population" in connection with disability groups arises in the context of comorbidity, or the presence of more than one identifier of disability. For example, consider the case of a person with intellectual disability who is also deaf. Is this person a member of two different special populations—that is, is such a person a member of the special population of persons with intellectual disability and also a member of the special population of persons who are deaf? Or, is the person a member of a new and still more special population, the population of persons with dual diagnoses of intellectual disability and deafness? No resolution of this issue is proposed here. Rather, we merely raise this issue to highlight a vexing issue in evolving notions of disability groups.

Research on disability groups takes a number of forms, and we survey a few of these here. As one example, Martz and Daniel (2010) collected data in the United States and Kenya to determine whether disability prototypes for four disability groups (persons with AIDS, hearing impairment, mental illness, and spinal cord injury) differed across the four groups and whether this was moderated by country of residence. As a second example, Or, Cohen, and Tirosh (2010) used discriminant analysis to determine whether measures from a parent-rated questionnaire could discriminate among three groups of students, including those with attention-deficit hyperactivity disorder (ADHD), those with a learning disability, and those with a combination of ADHD and learning disability. Finally, Cobb, Lehman, Newman-Gonchar, and Alwell (2009)
synthesized results from prior meta-analyses of self-determination interventions for various categories of persons with disability. In some meta-analyses, research on students from any disability group was included, whereas other meta-analyses focused on more restricted groups, such as students with intellectual disability or developmental disabilities or students with ADHD. The synthesis by Cobb et al. ended on an unfortunate note, concluding that interventions to enhance levels of self-determination appear to have relatively weak effects, perhaps because of heterogeneity of the special populations combined in the meta-analyses.

Echoing issues identified above, Barton (2009) has decried the lack of a common definition of the basic term disability by persons with disabilities or their advocates. In 1990, the U.S. Congress passed the landmark Americans with Disabilities Act, which outlawed discrimination on the basis of disability and outlined many ways in which accommodations for persons with disabilities must be provided. Given this legislation and the relatively large numbers of persons in the United States who fall in one or another disability category, the population of persons with disability could and should be a potent political force. But, without a common and inclusive definition of the term disability, the political clout of this population typically is watered down, as advocates for particular disability groups (e.g., persons with intellectual disability, students with deafness) follow their own special agendas. If persons with disability and their advocates could unite subgroups under a commonly accepted and inclusive definition of disability, this would almost certainly lead to a more united stance on issues, leading to greater power. Until that time, the political power of persons with disabilities is likely to remain fragmented and therefore weaker.

“Superability” Groups

In contrast to the notion of disability grouping, "special population" can also be used to define individuals with superior levels of ability or performance, which can be termed superability groups to contrast with the term disability group. One of the most common "superability" groups is the intellectually gifted, often defined as persons with intelligence quotients (IQs) greater than or equal to 130. But, giftedness has many dimensions; some individuals are considered gifted in general, whereas others may be deemed gifted in narrower domains such as the arts or specific academic domains.

One issue in the study of superability groups is the early identification of such individuals. Two notable attempts at early identification were undertaken during the twentieth century. In the first of these, Terman established what became known as the Genetic Studies of Genius. Terman had recently developed the Stanford-Binet Scale of Intelligence, which was published in 1916. Then, from 1921 to 1923, he asked fifth grade teachers in California to nominate the three brightest children in their classes and the youngest child. After testing these children with the new Stanford-Binet scale, children who scored 130 or above (i.e., two or more SDs above the population mean) were invited into the study. Initially, more than 1450 children were enrolled in the study, and later additions resulted in a final sample size of 1528 children.

The second large-scale study of early identification was the work begun in 1971 by Stanley to identify junior high school students with very high levels of math skill, an undertaking subsequently dubbed the Study of Mathematically Precocious Youth (SMPY). These youth were identified at age 12 or 13 years and had to score within the top 3% on a test of school achievement. Then, the young people were given the SAT, a standard test for college admission, and had to score within the top 1% on the test. Once selected into a cohort, the young students were given a variety of accelerated academic experiences to facilitate their learning in domains of science and math. As described by Lubinski and Benbow (2006), who are now the co-directors of SMPY, a total of five cohorts of SMPY youth were recruited. Recruitment for the various cohorts took place between 1972 and 1992, and a total of more than 5300 youth have participated in the program.

A more recent example of early identification of gifted individuals is a study by Kuo, Maker, Su, and Hu (2010), who described an early identification protocol used in Taiwan to identify gifted preschoolers. Children so identified were then enrolled in an enrichment program to offer an optimal environment for them to increase their problem-solving abilities in multiple modalities. The protocol used various kinds of information—from interviews, checklists, portfolios, intelligence tests, and observer ratings—to identify giftedness in several domains. Although most children were deemed gifted in one or another of the domains, almost 20% of children were identified as gifted in more than one domain. Whether these very-early identified children will remain characterized as gifted at later points in their
lifespan will be an interesting result to track in the future.

In an interesting twist, a superior level of ability or performance in one area can be exhibited in the presence of rather low performance in other areas. For example, Olson, Berryhill, Drowos, Brown, and Chatterjee (2010) reported on a patient who had rather severe impairments in episodic memory that presumably arose as a result of anoxia during birth. The impairments in episodic memory were quite general, resulting in rather poor performance on many memory measures. However, the patient had extremely accurate ability to recall calendar data with regard to day, month, and year, and this unusually high skill enables the patient to recall the precise date of many of his personal experiences. Another example is that of hyperlexia, or precocious development of single-word reading, which has typically been identified only in persons who have a developmental disability (Grigorenko, Klin, & Volkmar, 2003). As a result, the identification of a particular form of performance as a "superability" does not ensure that the person displaying such an ability is thereby a member of a generally advantaged group. Rather, the superability may merely represent an unusually superior level of performance in the context of a generally depressed level of functioning.

Demographic Groups

Another way of using the "special population" term is to refer to groups of individuals who are identified on the basis of demographic characteristics. The most common characteristics used to classify individuals in the United States are sex, age, ethnic (or racial) status, and socioeconomic status (SES) grouping, although other demographic variables are also used. For example, Sussman (2006) has discussed the prevention of adolescent alcohol problems in various special populations, citing variation across gender, ethnicity, region of the country, and SES groups.

The issue of special populations often arises when investigating the differential validity of psychological tests with persons from different ethnic groups. Cleary (1968) offered a series of statistical tests using multiple regression analysis to determine whether intercept bias and/or slope bias existed in the use of test scores when evaluating students from different ethnic groups for college admission. If neither intercept nor slope bias were found to occur, then a test would be deemed unbiased for use as a selection device. If fewer applicants from a special population
(e.g., African-American) were selected using such a test, then one could justify the result by claiming that the lower level of selection resulted from lower levels of scores obtained by members of the special population on unbiased tests. Although so-called "Cleary tests" have been used routinely for more than four decades in college admissions and in personnel selection, concerns about the utility of the approach remain (e.g., Meade & Tonidandel, 2010; Colarelli, Han, & Yang, 2010). (A regression sketch of this logic is given at the end of this subsection.)

Retrieving literature that references "special populations," we were struck by the frequency with which authors who referred to special populations cited groups such as women, children, the elderly, and minority ethnic groups. This designation of special populations implies that non-women, non-children, non-elderly, non-minority individuals—that is, adult, White males and, perhaps, college sophomores—are "the" populations that serve as common reference groups, and anyone who deviates from these norms is a member of a special population.

Why White adult males and college sophomores became standard reference groups is anyone's guess. Most likely, the emergence of these reference groups resulted from not just one but a combination of factors. Since the founding of the United States of America, with few notable exceptions, adult White males have tended to occupy the highest positions of power in business, academia, and government. Regardless of the optimality of this distribution of power, the mere presence of this demographic in positions of power may have led, explicitly or implicitly, to acceptance of adult White males as the reference group against which outcomes for other groups would be compared. Additionally, the use of a single demographic, such as adult White males, might have been considered a way to reduce heterogeneity that might otherwise cloud research results. Assuming that results for different demographic groups should not differ dramatically, reduction of heterogeneity might allow trends in data to be seen more clearly.

Other reasons are, almost surely, responsible for the use of college sophomores as a standard or reference group. The top reason for selecting college sophomores as "typical" research participants must be convenience. When taking introductory psychology courses in college, students often must participate in experiments so that they learn about how studies are conducted, and college sophomores are a common demographic in introductory psychology classes. A quarter-century ago, Sears (1986,
updated by Henry, 2008; Sears, 2008) decried the use of college sophomores as typical research participants, arguing that college sophomores may provide systematically different responses than members of the general population on many, if not most, experimental questionnaires and paradigms. College sophomores are in a stage of life when they are attempting to "find" or define their identities and thus may be much more susceptible to various influences, such as experimental inductions, than would others. Despite the potentially limited utility of college sophomores as research participants, given their unrepresentativeness relative to the population, convenience in obtaining their participation in research is a leading cause of their continued predominance in studies in social and personality psychology.

Political pressure often appears to be another factor related to the choice of research participants, either restricting or promoting research on a given topic. If a researcher chooses to study a topic (e.g., romantic love) or a group (e.g., homosexual males) that a member of Congress finds objectionable, the research project might be highlighted as a waste of taxpayer money. Senator William Proxmire (D-Wisconsin) made headlines in the 1970s and 1980s when announcing his Golden Fleece Awards, which derided programs of research he personally opposed. On the other hand, the decades-long support for research on persons with intellectual disability by the members of the Kennedy family in Congress led to far more research funding in this area than otherwise would have occurred. Indeed, continued research support for any special population usually requires the presence of a special champion in Congress for that population, given conflicting funding priorities at the national level.

Yet another reason why certain groups are not often represented as research participants is the sheer difficulty in finding and recruiting participants in these groups. Masten and colleagues (e.g., Obradović, Long, Cutuli, Chan, Hinz, Heistad, & Masten, 2009) have been studying homeless and highly mobile children, and identifying members of this group and then tracking them longitudinally has been difficult. Other challenges often face researchers who study court-related samples, such as victims of physical or sexual abuse or children or adolescents in foster care, where concurrence of legal entities is yet another impediment to research.

Regardless of the basis for the designation of reference populations, signs of the breakup of the hegemony of White adult males and college sophomores as reference groups are clearly in evidence.
For example, when applying for grants through the National Institutes of Health, researchers must carefully describe the projected sample with regard to sex, age, and ethnicity, and any exclusion of persons from a particular demographic category must be justified in convincing fashion. Further, concerted efforts have been made to correct the demographic imbalance in prior research on health. The Women’s Health Initiative (http://www.nhlbi.nih.gov/whi/) was a major program of research funded by the government to investigate various outcomes in postmenopausal women, focusing on cardiovascular disease, cancer, and osteoporosis as common causes of disability, morbidity, and death of women. Furthermore, the Office of Minority Health of the U.S. Department of Health and Human Services has pursued Minority Health Initiatives (http://minorityhealth.hhs.gov/) to investigate causes of disease, morbidity, and death in minority populations that deviate from patterns common in the majority (i.e., White) population. We look forward to the day when women, children, the elderly, and ethnic minorities are not considered special populations but are considered major portions of the general population that deserve just as much attention as research subjects as any other demographic group.
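
To make the regression logic of the Cleary approach described earlier in this subsection concrete, the short Python sketch below fits a moderated regression of a criterion on a test score, a group indicator, and their product. The data, variable names, and use of plain least squares are illustrative assumptions only; they are not drawn from Cleary (1968) or from any study cited here, and a real application would examine the standard errors and significance tests of the group and interaction coefficients.

    import numpy as np

    # Hypothetical data: a test score, a 0/1 group indicator, and a criterion
    # (e.g., first-year GPA). Here the same regression holds in both groups.
    rng = np.random.default_rng(0)
    n = 200
    group = rng.integers(0, 2, size=n)
    test = rng.normal(50, 10, size=n)
    criterion = 0.05 * test + rng.normal(0, 1, size=n)

    # Moderated regression: criterion = b0 + b1*test + b2*group + b3*(test*group)
    X = np.column_stack([np.ones(n), test, group, test * group])
    b, *_ = np.linalg.lstsq(X, criterion, rcond=None)

    # A nonzero group coefficient suggests intercept bias; a nonzero
    # test-by-group coefficient suggests slope bias.
    print(dict(zip(["intercept", "test", "group", "test_x_group"], np.round(b, 3))))

In practice, a test would be judged unbiased as a selection device if neither the group nor the interaction coefficient differed reliably from zero, which is the decision rule described in the text.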

Functional Groups

The term special population may also be applied to identify persons who, in our terminology, are members of identifiable functional groups. By functional groups, we refer to individuals who have particular combinations of behavioral profiles or life situations that may have unique importance for understanding their behavior. That is, individuals can be characterized by the full repertoire of positive and negative behaviors they exhibit and the life situations they have selected or that have been imposed on them. Any of these factors, particularly in combination, may define groups that function in unique fashion to determine their behavior and their susceptibility to environmental presses, such as treatments.

In our literature searches on special populations, we found that a great many of the articles retrieved concerned treatment outcomes in special populations, where these special populations met our description of functional groups. Over a decade ago, Polinsky, Hser, and Grella (1998) described the extremely varied types of clients who received services from drug treatment programs in Los Angeles County,
highlighting characteristics such as the health status, ethnicity, language needs, and gender-related issues of clients. A key issue Polinsky et al. discussed was the application of types of treatments to types of clients, under the assumption that client characteristics may alter the effectiveness of particular treatments. Taking the issue further, Polinsky et al. proposed that treatments might be specially adapted to the characteristics of the client to obtain maximal success. More recently, Diaz, Horton, McIlveen, Weiner, and Nelson (2009), using data from substance abuse programs, found that almost half (48%) of the clients in their sample had dysthymia. Building on this finding, Diaz et al. recommended that treatment programs consider whether clients have psychological disorders such as dysthymia when treatments are formulated, because treatment success may depend on the presence of significant comorbid characteristics.

Expanding the reach of functional groups beyond personal traits or characteristics, aspects of personal life situations may also be used to define functional groups. Individuals are born into families, these families live in communities, and communities are nested within larger geographical entities. In many publications, Bronfenbrenner (e.g., 1977, 1986a, 1986b, 1999) laid out the ever-expanding circles of embedded environments from micro- to macrosystems. A recent book edited by Little, Bovaird, and Card (2007) was dedicated to presenting statistical and other methodological solutions to the modeling of contextual effects, such as embedded social systems, that are rife in studies conducted in representative, everyday contexts. In one contribution to the Little et al. volume, Widaman (2007) discussed the integration of embedded versions of both the social environment and the physical environment as these combine with personal characteristics to influence behavior. Although any of these embedded levels may play a role in moderating behavioral change, variables associated with the more proximal environments of family and community probably play a larger role than do more distal variables. In a recent study, Chassin, Knight, Vargas-Chanes, Losoya, and Naranjo (2009) found that treatments to reduce certain forms of negative behavior were effective only if families were involved in the treatment. The upshot of this finding is that personal characteristics and aspects of both the social and physical life situations within which an individual functions should be considered when attempting to understand the behavior and adaptability of the individual.
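
One minimal way to write down the kind of embedded-context model discussed above is a two-level random-intercept regression; the particular predictors and symbols here are placeholders rather than a model taken from the Little, Bovaird, and Card (2007) volume or from any study cited in this section:

\[
y_{ij} = \gamma_{00} + \gamma_{10}\, x_{ij} + \gamma_{01}\, w_{j} + u_{0j} + e_{ij},
\]

where i indexes individuals and j indexes the contexts in which they are embedded (families, communities), \(x_{ij}\) is a person-level characteristic, \(w_{j}\) is a context-level characteristic, \(u_{0j}\) is a random intercept capturing unmeasured context effects, and \(e_{ij}\) is the person-level residual. Cross-level product terms (e.g., \(x_{ij} w_{j}\)) can be added when the effect of a personal characteristic or treatment is expected to depend on the context, as in the family-involvement finding of Chassin et al. (2009) noted above.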

Biological or Genetic Markers of Group Membership

A fifth and final type of special population is perhaps the most current and state-of-the-art way of defining special populations—by the presence of specific biological or genetic markers that have ties to behavior. Since its rebirth in the early 1970s, behavior genetic research has accumulated at a seemingly ever-increasing rate. The general thrust of findings in this field is that most, if not all, behavioral traits have some heritability, and certain important traits (e.g., intelligence) have high levels of heritability. Consistent with this emphasis, research continues to focus on biological markers related to behavior, where biological markers are broadly defined, from the molecular level of particular genes or single-nucleotide polymorphisms (SNPs) to the molar level of performance on experimental tasks.

Searching for the molecular genetic bases of behavioral traits is a major focus at the present time (Petrill, 2010). The successful identification of particular genes related to a certain condition is often touted in the press. For example, Chakrabarti et al. (2010) recently reported that certain genes associated with sex steroids and neural growth are related to autistic traits and Asperger syndrome. But, the mere presence of certain SNPs may not be the key. Rather, gene expression at the molecular level may be the important feature, as Simunovic et al. (2009) reported in their study of the pathology underlying Parkinson's disease. Although successes in finding certain SNPs or particular forms of gene expression related to a disease or behavior have been reported, most of these involve very small portions of variance explained and suffer from lack of replication in follow-up investigations.

Perhaps more promising is research at a more molar level. For example, Pennington et al. (2008) reported synaptic and metabolic abnormalities in the prefrontal cortex in persons with schizophrenia or bipolar disorder. Focusing on the anterior cingulate, Eastwood and Harrison (2010) found increased synaptic transmission and plasticity in persons with bipolar disorder. At a still more molar level, Koychev, El-Deredy, Haenschel, and Deakin (2010) reported visual information processing deficits in persons with schizotypy, which is a marker of vulnerability to schizophrenia. And, Reichenberg, Caspi, Harrington, Houts, Keefe, Murray, Poulton, and Moffitt (2010) argued that they had identified patterns in performance on standard psychological tests that reflected cognitive deficits related to childhood schizophrenia. The varied levels at which this research is undertaken—from the level of synaptic processes to molar patterns in behavior—are remarkable, yet the patterns uncovered all point to the biological nature of the processes involved.

Two additional ways of understanding biological markers and their effects deserve mention here. First, the search for a solitary SNP or a small number of SNPs responsible for a particular psychological or behavioral trait is almost certainly a rather unrealistic goal. A more likely outcome is reflected in research on phenylketonuria (PKU). As reported on the Phenylalanine Hydroxylase Locus Knowledgebase website (http://www.pahdb.mcgill.ca/), more than 500 mutations on the phenylalanine hydroxylase (PAH) gene have been identified, and all of these lead to reduced metabolism of phenylalanine into tyrosine, the underlying problem in PKU. Although a small number of mutations (e.g., 5 or 6) may account for the majority of mutations in particular populations (e.g., European), any of the mutations can cause PKU. Further, many of the mutations have been categorized with regard to the severity of the mutation, indexed by the degree of disruption of phenylalanine metabolism. If a large number of mutations are found to underlie a single, rather restricted phenotype such as PKU, we should expect that very large numbers of mutations or SNPs are related to broader phenotypes such as intelligence, intellectual disability, or personality disorders.

Second, we think that researchers must pay at least as much attention to the environment as to gene SNPs when searching for genes that affect behavior, an approach that has been characterized as the search for Gene X Environment, or G X E, interactions. Specific genes or SNPs are important in the G X E approach, but the SNPs alone do not directly herald the emergence of a behavioral trait. Rather, behavioral differences among groups identified with different genetic alleles may arise only in particular environmental circumstances or may be clearly exacerbated in such environments, so main effects of genes or environments supply less-than-complete information. Rather than main effects, the G X E interaction indicates that the effects of genetic alleles are moderated by environmental circumstances, so environments modulate how genetic factors are expressed in behavior. Two early papers on the G X E approach, by Caspi et al. (2002) and by Caspi et al. (2003), were both published in Science, the leading general journal in all of science. In the second of these papers, Caspi et al. (2003) investigated the 5-HTTLPR region, which is associated with
61

serotonin function. The 5-HTTLPR region is characterized by either short (s) or long (l) alleles; because individuals obtain one copy from mother and one from father, individuals can be characterized as s/s, s/l, or l/l based on whether they have (1) two short, (2) one short and one long, or (3) two long alleles, respectively. Caspi et al. presented data suggesting that persons with the l/l allele were relatively impervious to stressful environments and therefore tended to have lower levels of negative outcomes (e.g., depressive symptoms, suicide ideation/attempts) in the most stressful environments. In contrast, individuals with the s/s allele tended to have the most negative outcomes in the most stressful environments, and persons with the s/l allele had outcomes that fell midway between the l/l and s/s groups. However, these differences did not hold in all environmental situations. Indeed, in the least stressful environments, essentially no differences across allele groups were found. Thus, in low-stress environments or in environments with no maltreatment, the allele groups did not differ in depressive outcomes, but differences across groups appeared only as the stressfulness of the environment increased. More recently, researchers have been investigating the notion of the genetic basis for differential susceptibility to the environment. The studies by Caspi et al. (2002, 2003) support the contention of G X E interactions, but the allele group that fared worst in the most stressful environments rarely exhibited any benefit versus the other groups in the least stressful environments. However, as Belsky, Bakermans-Kranenburg, and van IJzendoorn (2007) argued, the Caspi et al. studies may not have investigated the widest possible range of environmental circumstances, instead generally looking only at stressful versus average environments. However, if one studied the entire range of environmental circumstances—from worst through average to superior environments—then a true cross-over G X E interaction may be found. That is, persons with certain genetic alleles (e.g., the l/l allele from the 5-HTTPLR) may do relatively well in very poor environments but also may not do much better in superior environments, representing a group of persons who are relatively impervious to environmental circumstances and therefore have low susceptibility to effects of the environment. In contrast, persons with other alleles (e.g., the s/s allele from the 5-HTTPLR) may indeed perform rather poorly in the worst environments but might show the best outcomes of all groups in superior environments. If this were to occur, these individuals would be the 62

s p e c i a l p o p u l at i o n s

most susceptible to environmental influence, with their behavioral outcomes tracking the negativity or positivity of the environments within which they have developed. Although firm conclusions about the presence of differential susceptibility and the resulting cross-over G X E interactions has not yet been provided, many experimental results published in the last few years seem to support this idea. If G X E interactions—particularly cross-over G X E interactions consistent with the differential susceptibility notion—are present in many behavioral domains, a more nuanced picture must be drawn, with genes and environments having co-equal status as the basis for behavioral phenotypes. Groups may still be identified by their genetic alleles, but the implications of their genes for these special populations can only be understood by considering the environments in which the persons have developed.

Summary

As a reading of this section demonstrates, the notion of special populations is often invoked but can refer to very different kinds of groupings of individuals. We have identified five ways of characterizing special populations, based on disability status, superability status, demographic characteristics, functional characteristics, and genetic markers. Others might be able to identify additional classes of variables that might be used to designate special populations. Although the distinctions among different types of special populations underscore the heterogeneous nature of the alternate bases for groupings of persons, all of the distinctions among groups have an important familial resemblance: Researchers must investigate whether patterns of empirical results vary in important ways across special populations or when moving from the general population to a special population. If results vary importantly across groups, then special population status is a moderator of results, and conclusions about "the ways that things work" do not generalize across groups. Thus, special populations constitute a crucible for research in the social sciences, and we must guard against unwarranted generalization of findings across groups unless research supports such conclusions.

Methodological Implications of Special Populations

Having established some guidelines for distinguishing among special populations, we turn next to the methodological or quantitative implications that arise when considering special populations. In
most graduate education in psychology, quantitative experts teach classes in which students are prepared to obtain data from a sample (typically described in nebulous terms); take standard, off-the-shelf statistical methods; estimate population parameters in a particular analytic model; and use their results to make conjectures about the population, as if the population were a single, monolithic entity. But, once one acknowledges the presence of special populations (which represent specially identified subsets of the larger population), our thinking about methodological and statistical procedures must be amended. Rather than estimating "the" population parameter in a monolithic population, we should begin trying to understand the nature of special populations, how parameter estimates might vary across populations, when it is possible to compare parameter estimates across groups, and similar difficult questions. In this section, we slice up this task under four headings—identifying and accessing special populations, measuring the same constructs across groups, exploring the bounds of psychological theories, and exploiting unusual variation in special populations—and discuss the methodological implications of the substantive issues that arise.
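
One simple large-sample device for the comparison question raised above, included here as an illustration rather than as a recommendation drawn from the chapter, is the z-test for the difference between a parameter estimated separately in two independently sampled populations:

\[
z = \frac{\hat{\theta}_1 - \hat{\theta}_2}{\sqrt{SE_1^{2} + SE_2^{2}}},
\]

where \(\hat{\theta}_g\) and \(SE_g\) are the estimate and its standard error in group g. The comparison is meaningful only if \(\theta\) refers to the same quantity on the same scale in both groups, which is precisely what the second issue listed above, measuring the same constructs across groups, is intended to establish.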

Identifying and Accessing Participants in Special Populations The first task of any investigation into participants from special populations is gaining access to the participants. But, this characterization of “gaining access to the participants” masks several necessary steps in the design and conduct of an empirical investigation. To provide some cognitive structuring of this first concern, we have broken down the research process associated with drawing a sample from a special population into four steps or issues. The first task is the clear identification of the special population to be studied. As earlier sections of this chapter have shown, a single, overarching notion of what constitutes a special population is not present in the research literature. Rather, various ways of defining special populations can be used, and some of these will provide partially overlapping subsets of possible participants. Because of the seemingly murky nature of some special populations, we offer only the most general recommendations of how to deal with the problem of identification of the population. Researchers should be careful to develop a clear statement of the population to be studied. If the target population is all persons receiving mental health services in a catchment area, then no explicit

or implicit exclusion criteria should be used, or one might inadvertently draw an unrepresentative sample. Of course, a more restricted population could be the topic of a study, such as persons from minority groups who seek mental health services with regard to alcohol problems but without attendant drug use problems. We hope the reader is clear on the issue of identification of a special population: Both inclusion and exclusion criteria for participants must be clearly elucidated and justified by the nature of the study, and the research team should remain vigilant to ensure that no unexpected factors are biasing the nature of the population defined. A second issue is the development of a plan to access participants from the special population. If one is interested in studying clinical populations, access to potential participants will often be sought through professional agencies, such as mental health centers, regional centers that offer services for members of particular populations (e.g., intellectual disability), state agencies, or the like. If the goal is to study students in elementary school, developing ties to schools is the most obvious approach. But, members of some populations are much harder to access. For example, researchers who study child physical or sexual abuse or neglect must often access their participants through arrangements with court systems, child welfare organizations, and so forth. In these research endeavors, investigators frequently find that a great deal of time must be spent and a large amount of red tape must be surmounted to gain access to research subjects. Still, one should not let the difficulty of access alter the research goals of a program of research. To provide the most valid data from the most representative settings, proper and optimal access must be developed. A third step, once the special population has been identified and modes of accessing participants have been developed, is to develop a plan for sampling participants from the population. In some situations, a sampling plan can be relatively straightforward. For example, a researcher might select each birth (or every second birth) from a set of hospitals during a given time period as the sample to be drawn, and this will often lead to the drawing of a representative sample from the population that is served by the hospitals in question. Studying birth outcomes in low SES families might be approached in this fashion, if the hospitals selected tended to serve the low SES families in a community. But, if a researcher intends to investigate differences across certain strata (e.g., African-American, Hispanic [or Latino]) and participants from these


strata are unequally represented in the population, then oversampling the low-frequency participants and then re-weighting in the analysis stage can ensure statistical analyses that have greater external validity or generalizability. Full discussion of sampling plans is beyond the scope of this chapter; here, we merely emphasize the need to consider the sampling plan with as much attention and concern as other aspects of study design. The final step in an empirical study is the recruitment of participants from the special population. This step involves the initial contact with potential participants and their recruitment into the study. The recruitment rate into a study is a crucial statistic and should be reported in every published report. Recruitment rates vary across types of studies, so a low recruitment rate for a given study may not be a fatal flaw if this is representative of the studies in the domain. However, researchers should collect as much basic data as possible on potential participants so that trends in participation versus nonparticipation might be discerned. If nonparticipants differ systematically from participants on any variables, this may limit the generalizations to be drawn from the study. Researchers should report the recruitment rate in any published paper, so readers will have a basis for placing the research results in context. Implication 1: Informed identification of special populations and members of such groups is often a difficult task. But, the failure to identify, sample, and recruit members of special populations in appropriate ways may render any empirical results of dubious value.
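To make the oversampling and re-weighting logic concrete, the sketch below computes post-stratification weights for a hypothetical two-stratum design and a recruitment rate. The strata labels and sample counts are invented for illustration; the recruitment-rate figures simply mirror the California Families Project numbers reported later in this chapter.

```python
# A minimal sketch of post-stratification weighting after oversampling a small stratum.
# The strata labels and sample counts are hypothetical illustrations.

population_share = {"stratum_A": 0.85, "stratum_B": 0.15}  # known population proportions
sample_counts = {"stratum_A": 300, "stratum_B": 200}       # stratum_B deliberately oversampled

n_sample = sum(sample_counts.values())

# Weight = population proportion / sample proportion, so weighted analyses reflect
# the population mix rather than the oversampled mix.
weights = {
    stratum: population_share[stratum] / (sample_counts[stratum] / n_sample)
    for stratum in sample_counts
}

# Recruitment rate: families enrolled divided by eligible families contacted
# (numbers taken from the CFP example reported later in the chapter).
recruitment_rate = 674 / 982

print(weights)                                       # roughly {'stratum_A': 1.42, 'stratum_B': 0.375}
print(f"recruitment rate = {recruitment_rate:.1%}")  # 68.6%
```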

The history of research in psychology tends to reflect the conduct of studies on samples of convenience. Researchers often are careful to describe the basic demographic characteristics of their participants but infrequently discuss whether the sample is representative of any larger population. As psychology matures as a science, greater attention will be paid to the issues of drawing appropriate samples from well-defined populations. Our hope is that as researchers pay greater attention to this set of issues, research results will begin demonstrating greater replication across studies than has often been the case to date.

Measuring the Same Constructs Across Groups Perhaps the initial analytic task to undertake when studying special populations is to determine

the meaning and the metrics of dimensions of individual differences within and among populations or subpopulations. Researchers typically assess individual differences on key dimensions in a domain in their research projects. For example, a study on mental abilities may include measures of fluid and crystallized intelligence, and a study of personality is likely to assess the Big Five dimensions of Extraversion, Agreeableness, Conscientiousness, Neuroticism, and Openness. Further, the listing of certain dimensions of intelligence and personality represents only a meager sampling of the many different types of characteristics we assess in our work as researchers in psychology. However, using a standard measure developed and normed on the general population to assess individuals from a special population is fraught with problems, and a researcher cannot assume that the individual differences reflected in scores on the measure are directly comparable across populations. Consider the use of a widely employed measuring device in a special population. For example, a researcher might want to assess dimensions of personality in a large sample of persons with intellectual disability. The researcher may intend to determine whether persons with intellectual disability have different mean levels on the personality dimensions (e.g., higher levels of Agreeableness) and whether individual differences on the personality traits relate in meaningful ways to success in community placements. To do so, the researcher might select the Big Five Inventory (BFI) (John, Donahue, & Kentle, 1991), a 44-item measure of the major dimensions of personality that contains 8 to 10 items for each of the five dimensions. On each item, the respondent is asked to indicate, using a 1-to-5 rating scale, his or her degree of agreement with certain adjectives describing personal behaviors or descriptions. In a situation like this, a researcher may blithely assume that responses by persons with intellectual disability can easily be compared at the scale level with responses by persons who do not have intellectual disability. Thus, one might sum up the item scores on the Extraversion scale and compare mean and variance differences on scale scores between relevant samples of persons with and without intellectual disability. However, psychometric investigations over the past 50 years and more have demonstrated that incorrect conclusions may be drawn in such situations unless one is confident that one is assessing “the same constructs” across the different populations. Methods of verifying that one is assessing “the same constructs” across groups

have been published under several rubrics, including methods to assess measurement invariance, factorial invariance, or lack of differential item functioning (DIF). The upshot of this concern leads to an important implication when researching special populations: Implication 2: Establishing the measurement invariance of instruments across groups is a key result in the investigation of any special population. Measurement invariance must hold for meaningful comparisons to be made across samples from different populations.
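As a hedged numerical illustration of why Implication 2 matters, the toy simulation below (all parameter values invented) generates item responses for two groups with identical latent trait distributions but two biased item intercepts in the second group; the naive sum scores then differ even though the groups do not differ on the construct itself.

```python
# Toy simulation with hypothetical parameter values: identical latent distributions,
# non-invariant item intercepts, and the spurious "group difference" that results.
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
loadings = np.array([0.7, 0.8, 0.6, 0.75])      # factor loadings, same in both groups
tau_group1 = np.array([3.0, 3.0, 3.0, 3.0])     # item intercepts, group 1
tau_group2 = np.array([3.4, 3.0, 3.3, 3.0])     # items 1 and 3 biased upward in group 2

def sum_scores(tau):
    eta = rng.normal(0.0, 1.0, n)                # identical latent distribution in both groups
    eps = rng.normal(0.0, 0.5, (n, len(tau)))    # unique factors
    items = tau + np.outer(eta, loadings) + eps  # linear factor model for item scores
    return items.sum(axis=1)                     # naive summed scale score

print(sum_scores(tau_group1).mean())   # approximately 12.0
print(sum_scores(tau_group2).mean())   # approximately 12.7 -- a spurious "group difference"
```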

Measurement invariance is a broad topic, one that subsumes research and statistical models that cross many boundaries. For example, some work on measurement invariance has looked at prediction models, which may be used to predict some behavioral outcome (e.g., college grade point average) from relevant predictors (e.g., high school grade point average, admission test scores). Multiple models can be employed in such research, and regression analysis is often used. In regression analysis, questions arise regarding any potential intercept bias or slope bias across groups when predicting the outcome. When measurement invariance is considered within the context of factor analysis models, the term applied is usually factorial invariance, which is a restricted form of measurement invariance (Meredith, 1993). In factor models, we typically begin with a data model of the form:

Y_ji = τ_j + λ_j1 η_1i + · · · + λ_jr η_ri + ε_ji,   (1)

where Y_ji is the score of person i (i = 1, . . . , N) on manifest variable j (j = 1, . . . , p), τ_j is the intercept for manifest variable j, λ_jk is the factor loading (or regression weight) for predicting manifest variable j from latent variable k (k = 1, . . . , r), η_ki is the factor score for person i on latent variable k, and ε_ji is the score of person i on the unique factor for manifest variable j. The model in Equation 1 is termed the linear factor analysis data model to signify the fact that the linear model was developed as a model for understanding persons’ scores on manifest variables as linear functions of their scores on latent variables. Writing Equation 1 in matrix notation results in:

Y = τ + Λη + ε,   (2)

where Y is a (p × 1) vector of scores for person i on the p manifest variables, τ is a (p × 1) vector of intercepts for the p manifest variables, Λ is a (p × r) matrix of loadings of the p manifest variables on the r latent variables, η is an (r × 1) vector of scores for person i on the r latent variables, and ε is a (p × 1) vector of scores of person i on the p unique factors. One can use the model in Equation 2 to develop moment expectations for the manifest variables, moment expectations that yield expressions for the covariance structure and the mean structure of the manifest variables. In a single-group case, these expectations are:

Σ = ΛΨΛ′ + Θ,   (3a)
μ = τ + Λα,   (3b)

where Σ is the (p × p) matrix of population covariances among manifest variables, Ψ is the (r × r) matrix of covariances among the latent variables, Θ is the (p × p) matrix (usually diagonal) of covariances among unique factors, μ is a (p × 1) vector of population means on manifest variables, α is an (r × 1) vector of means on the latent variables, and other symbols were defined above. Equation 3a is the population covariance structure model, and Equation 3b is the population mean structure model. In any sample from a population, we observe sample covariances among manifest variables, which we signify as S, and sample means on the manifest variables, which we can signify as Ȳ. Given these sample estimators of population values, we can write the sample covariance and mean models as:

S ≅ Σ̂ = Λ̂Ψ̂Λ̂′ + Θ̂,   (4a)
Ȳ ≅ μ̂ = τ̂ + Λ̂α̂,   (4b)

where carets (ˆ) are added to matrices to indicate that sample estimates of population parameters are contained in the matrices, and all symbols are defined above. The covariance structure model in Equation 4a signifies that the matrix of sample covariances among manifest variables S is approximated by the covariance structure model, Λ̂Ψ̂Λ̂′ + Θ̂; with estimates in the three parameter matrices Λ̂, Ψ̂, and Θ̂, the matrix expression yields an estimate of the population covariances among manifest variables, Σ̂, under the assumption that the model is correct in the population. The mean structure model in Equation 4b shows that the sample means are estimators of population means, and these are approximated as a function of intercepts, factor loadings, and means of the latent variables.
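The moment structure in Equations 3a and 3b is easy to verify numerically. The sketch below, using invented parameter values for a three-indicator, one-factor model, computes the model-implied covariance matrix and mean vector.

```python
# Model-implied covariance and mean structure (Equations 3a and 3b) for a single group,
# computed from hypothetical parameter values for a three-indicator, one-factor model.
import numpy as np

Lambda = np.array([[0.8], [0.7], [0.6]])  # p x r factor loadings
Psi = np.array([[1.2]])                   # r x r covariance matrix of the latent variable(s)
Theta = np.diag([0.30, 0.40, 0.45])       # p x p unique factor covariance matrix (diagonal)
tau = np.array([2.0, 2.5, 1.8])           # p x 1 measurement intercepts
alpha = np.array([0.5])                   # r x 1 latent variable mean(s)

Sigma = Lambda @ Psi @ Lambda.T + Theta   # Equation 3a: covariance structure
mu = tau + Lambda @ alpha                 # Equation 3b: mean structure

print(np.round(Sigma, 3))
print(np.round(mu, 3))
```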


If we generalize the model in Equations 4a and 4b to the multiple-group context, we arrive at:

S_g ≅ Σ̂_g = Λ̂_g Ψ̂_g Λ̂_g′ + Θ̂_g,   (5a)
Ȳ_g ≅ μ̂_g = τ̂_g + Λ̂_g α̂_g,   (5b)

where the subscript g (g = 1, . . . , G) has been added to each matrix or vector to indicate that the elements of the equations are derived from group g, and all other symbols are defined above. Levels of factorial invariance. Given the multiple-group model shown in Equations 5a and 5b, consideration of factorial invariance can commence. A simple rendition of factorial invariance is this: The same factors should be present in multiple groups. But, this simple statement masks key issues. How can we tell if the same factors are present in different groups? What empirical results would give us confidence that we have identified the same factors in different groups? One way to get a bit more definite about how to verify that invariant factors have been identified in multiple groups is to consider a mathematical statement regarding the expectations of the manifest variables, which can be written as E(Y|η, g). This equation states that the expected values of the manifest variables in Y are a function of the common latent variables in η and the group g to which a person belongs. Now, if E(Y|η, g) = E(Y|η), or if the expectations of the manifest variables given η and g equal the expectations of the manifest variables given just η, then the expectations are not dependent on the group of which a person is a member. If this identity holds, then the same latent variables are present in the different groups. To translate the expectation equations into implications regarding data, consider the factor model shown in Equations 5a and 5b above. The expectation equality E(Y|η, g) = E(Y|η) will hold only if the common factors are “translated” into manifest variable scores in the same fashion in each group. This requirement has led researchers to discuss several levels of factorial invariance. Many researchers (e.g., Byrne, Shavelson, & Muthén, 1989; Chen, Sousa, & West, 2005; Cheung & Rensvold, 1999; Ferrer, Balluerka, & Widaman, 2008; Hancock, Kuo, & Lawrence, 2001; Little, 1997; McArdle, 1988; Meredith & Horn, 2001; Millsap & Meredith, 2007; Nesselroade, 1983; Rensvold & Cheung, 1998) have written on the topic of factorial invariance, and the preceding listing of contributions merely scratches the surface of work in this area and therefore necessarily misses many contributions that should also be cited.


In our presentation here, we will follow the summary of approaches to factorial invariance research provided by Widaman and Reise (1997), who synthesized prior work by Jöreskog (1971), Horn, McArdle, and Mason (1983), and Meredith (1993) to arrive at four levels of factorial invariance. These four levels constitute levels of increasing restriction on parameters of the factor analysis model. Horn et al. (1983) termed the first level of invariance configural invariance. By configural invariance, we mean that the same pattern of fixed and free loadings in Λ is observed in each group. This form of invariance states merely that within each group, each manifest variable is predicted by the same latent variable(s) as occurs in other groups. Under configural invariance, the g subscript is still employed for each Λ matrix, because different estimates can be found for particular factor loadings across groups and the stipulation of configural invariance will still be satisfied. The second level of factorial invariance was termed weak factorial invariance by Widaman and Reise (1997) and refers to a model in which factor loadings are constrained to be invariant, or identical, on a one-by-one basis across groups. Thus, not only must the factor loadings display the same pattern of fixed and free loadings across groups, but the free loadings across groups are constrained to invariance. If the invariance constraint on factor loadings is supported, then regression weights for predicting manifest variables from latent variables are invariant across groups. This equality is one key element in showing that latent variables are translated into manifest variables in the same fashion. If this holds and the factor loading matrices can be constrained to invariance across groups, the g subscript can be deleted from the loading matrices, leading to:

S_g ≅ Σ̂_g = Λ̂ Ψ̂_g Λ̂′ + Θ̂_g,   (6a)
Ȳ_g ≅ μ̂_g = τ̂_g + Λ̂ α̂_g,   (6b)

where all symbols in Equations 6a and 6b are defined above. The third level of invariance, strong factorial invariance, adds invariance across groups of the measurement intercepts to the weak factorial invariance model. Thus, under strong factorial invariance, the regression equation for predicting each manifest variable from the set of latent variables is invariant across groups, satisfying a key criterion for measurement invariance—that the latent variables relate to manifest variables in the same fashion across groups. The resulting equations are:

S_g ≅ Σ̂_g = Λ̂ Ψ̂_g Λ̂′ + Θ̂_g,   (7a)
Ȳ_g ≅ μ̂_g = τ̂ + Λ̂ α̂_g,   (7b)

where all symbols are defined above. Note that both the factor loading matrix and the vector of intercepts have the g subscript deleted, because these matrices have invariant estimates across groups. Two key outcomes accompany strong factorial invariance: (1) all differences across groups in the means on manifest variables are due to mean differences on the latent variables; and (2) group differences in means and variances on the latent variables are identified in a comparable metric across groups, enabling comparisons at the latent variable level across groups. Thus, the latent variable model in Equations 7a and 7b identifies the latent variables as the sources of differences across groups in mean levels on the manifest variables, and variances on latent variables can also be compared. The fourth and most restricted level of invariance is strict factorial invariance. Under this level of invariance, the unique factor variances are additionally constrained to invariance across groups. The resulting equations for the covariance structure and mean structure, respectively, are:

S_g ≅ Σ̂_g = Λ̂ Ψ̂_g Λ̂′ + Θ̂,   (8a)
Ȳ_g ≅ μ̂_g = τ̂ + Λ̂ α̂_g,   (8b)

where all symbols are defined above. As shown in Equation 8a, the g subscript is deleted from the unique factor covariance matrix Θ̂, because estimates in this matrix are invariant across groups. Under strict factorial invariance, all group differences in mean levels and in variances on the manifest variables are due to mean and variance differences, respectively, in the latent variables. Thus, under strict factorial invariance, we have a concise representation of all between-group differences on manifest variables. Although strict factorial invariance is the most concise of the levels of factorial invariance, researchers often find that equality constraints on unique factor variances across groups are too restrictive. This is not a problem, because comparisons across groups on the latent variables are justified if strong factorial invariance holds. Representing mean and variance/covariance differences across groups. If at least strong factorial invariance is satisfied for a set of data, then latent variables are identified in a form that allows investigation of group differences on the latent variables. These differences across groups are contained in

particular matrices in Equations 8a and 8b. Specifically, group differences in mean levels are obtained from the α̂_g matrices in Equation 8b. Models are often identified by fixing factor means to zero in one group (e.g., group 1, which serves as the reference group), so mean values for the other groups (e.g., groups 2, . . . , G) are estimated as mean differences from the reference group. Group differences in variance on the latent variables or covariances among latent variables are obtained from the Ψ̂_g matrices in Equation 8a. If latent variables are identified by fixing latent variable variances to unity in the reference group (e.g., group 1), then variances on the latent variables are estimated relative to this reference group. In summary, if strong factorial invariance holds for a set of data, latent variables are identified in an invariant fashion across groups. Given this, between-group comparisons on mean and/or variance on the latent variables are justified, and these comparisons can be made at the error-free, latent variable level.
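A compact way to see what the levels of factorial invariance constrain is to write the multiple-group moment structure as a function and note which matrices keep a group subscript at each level. The sketch below is a schematic with invented two-group parameter values, shown here under strong factorial invariance: Λ and τ are shared, while Ψ and α remain group specific.

```python
# Schematic of the multiple-group moment structure (Equations 5a-8b) under strong
# factorial invariance, with hypothetical two-group values. Constraining Theta as well
# would give strict invariance; freeing tau per group would give only weak invariance.
import numpy as np

def implied_moments(Lambda, Psi, Theta, tau, alpha):
    """Return the model-implied (Sigma_g, mu_g) for one group."""
    return Lambda @ Psi @ Lambda.T + Theta, tau + Lambda @ alpha

Lambda_shared = np.array([[0.8], [0.7], [0.6]])
tau_shared = np.array([2.0, 2.5, 1.8])
Theta_shared = np.diag([0.30, 0.40, 0.45])

groups = {
    # Reference group: latent mean fixed to 0 and variance to 1 for identification.
    "reference": dict(Psi=np.array([[1.0]]), alpha=np.array([0.0])),
    # Special population: latent mean and variance estimated relative to the reference.
    "special": dict(Psi=np.array([[1.4]]), alpha=np.array([0.3])),
}

for name, pars in groups.items():
    Sigma_g, mu_g = implied_moments(Lambda_shared, pars["Psi"], Theta_shared,
                                    tau_shared, pars["alpha"])
    print(name, np.round(mu_g, 3), np.round(np.diag(Sigma_g), 3))

# Under strong invariance, the difference in implied means across groups equals
# Lambda_shared @ (alpha_special - alpha_reference), so observed mean differences
# are traceable to latent mean differences.
```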

Exploring the Bounds of Psychological Theories As discussed above, obtaining strong factorial invariance allows the researcher to assume that mean and/or variance differences on dimensions of individual difference are interpretable across populations. However, psychological theories frequently lead to predicted relations among constructs. For example, many researchers have sought to outline dimensions of parenting styles, and the isolation and replication of such dimensions—particularly across groups (e.g., across ethnic groups)—is an important matter. But, once these initial steps have been taken, researchers are typically interested in whether parenting styles have impacts on, or at least consistent patterns of asymmetric relations with, other variables, such as child behavior. To pursue such research, we must focus on the relations among latent variables, determining whether the patterns and strength of relations among latent variables are similar or different across groups. Implication 3: Results of studies of the general population should not be generalized to special populations without research that supports such generalization. Investigations of the structural relations among latent variables across populations hold the key to determining whether conclusions regarding relations among variables generalize to special populations.


In the structural modeling literature, researchers often distinguish between the measurement model and the structural model. The measurement model consists of the relations of the latent variables to the manifest variables, and the structural model contains relations among the latent variables. In the foregoing section, we described several levels of factorial invariance—such as weak and strong factorial invariance—and these concerned invariance of parameter estimates in the measurement model. Once at least strong factorial invariance is established, we may pursue other forms of invariance that are of great importance for generalizing theoretical conclusions across groups. These additional forms of invariance fall under the rubric we are calling structural invariance, because they involve invariance of the parameter estimates in the structural model. A great deal of work has been done on measurement or factorial invariance, but relatively little research has been published under the heading of structural invariance. To document this, PsycINFO searches were made with “measurement invariance” and “factorial invariance” as keywords; these searches returned 412 and 402 citations, respectively. A search with “structural invariance” as keyword returned only 101 citations, a much lower number. In addition, the term structural invariance seemed to be used in a much less consistent, more confused fashion in prior research. Many authors used the term “structural invariance” synonymously with what we have called factorial invariance, arguing that satisfying tests of factorial invariance implied that the factors were structurally invariant across groups. When using the term “structural invariance” to refer to elements in the structural model, “structural invariance” has typically been interpreted as denoting the invariance of the pattern of significant and nonsignificant directed relations among latent variables. To discuss levels of structural invariance, we need to revise slightly the structural model shown in Equations 8a and 8b. If we return to Equation 2 and allow directed relations among latent variables, we can write an equation for the latent variables as: η = Bη + ζ,

(9)

where B is an (r × r) matrix of regression weights for predicting latent variables from other latent variables, ζ is an (r × 1) vector of latent variable residuals, and other terms are defined above. Solving


Equation 9 for η leads to:

η = (I − B)⁻¹ ζ,   (10)

where I is an (r × r) identity matrix, the superscript −1 indicates the inverse of the associated matrix, and all other symbols were defined above. The covariance expectations for Equation 10 are:

E(ηη′) = (I − B)⁻¹ Ψ [(I − B)⁻¹]′,   (11)

where all symbols are defined above. To ease interpretation of two parameter matrices, we will distinguish between independent latent variables and dependent latent variables, using superscripts (i) and (d), respectively. Thus, we will write the matrix of covariances among latent variables as:

Ψ = [Ψ^(i)  0;  0  Ψ^(d)],   (12)

and the matrix of factor means as:

α = [α^(i); α^(d)],   (13)

where Ψ^(i) contains free covariances among the independent latent variables, Ψ^(d) contains covariances among residuals of the dependent latent variables, the independent latent variables are assumed to be uncorrelated with the residuals of the dependent latent variables, α^(i) is a vector of means of the independent latent variables, α^(d) is a vector of intercepts of the dependent latent variables, and other symbols are as defined above. Placing Equations 11, 12, and 13 into Equations 7a and 7b for a multiple-group version of the strong factorial invariance model yields:

S_g ≅ Σ̂_g = Λ̂ (I − B̂_g)⁻¹ [Ψ̂_g^(i)  0;  0  Ψ̂_g^(d)] [(I − B̂_g)⁻¹]′ Λ̂′ + Θ̂_g,   (14a)
Ȳ_g ≅ μ̂_g = τ̂ + Λ̂ [α̂_g^(i); α̂_g^(d)],   (14b)

where g subscripts on vectors or matrices indicate the presence of differing parameter estimates across groups, and all symbols are defined above. Levels of structural invariance. Here, we propose that different levels of invariance can be distinguished for the structural model, levels that are analogous to the levels of factorial invariance. In offering this proposal, we hope we can lead to research on structural invariance that is as illuminating as work on factorial invariance and, in the

process, help establish a common nomenclature for discussing structural invariance. The first and most basic form of structural invariance is configural structural invariance of the pattern of fixed and free regression weights in the B̂_g matrices in Equation 14a. If a restricted pattern of directed paths is estimated from independent latent variables to dependent latent variables and among dependent latent variables, configural structural invariance implies that the same latent variables have directed effects on the same outcome latent variables in each group. The second level of structural invariance is invariance of the regression weight estimates for predicting certain latent η variables from other latent variables. Paralleling distinctions made above for factorial invariance, placing invariance on the regression weights leads to what may be called weak structural invariance. These regression weights are contained in the B̂_g matrices in Equation 14a. If parameter estimates in the B̂_g matrices are constrained to invariance across groups, the g subscript would be dropped from these matrices, leading to:

S_g ≅ Σ̂_g = Λ̂ (I − B̂)⁻¹ [Ψ̂_g^(i)  0;  0  Ψ̂_g^(d)] [(I − B̂)⁻¹]′ Λ̂′ + Θ̂_g,   (15)

where all terms are defined above, and the lack of subscripts on the B̂ matrices indicates that across-group invariance constraints have been imposed on parameter estimates in these matrices. At least two issues must be mentioned with regard to constraints on regression weights in the B̂ matrices. First, across-group constraints on the B̂ matrices are interpretable only if weak or strong factorial invariance has been established. Thus, if only configural factorial invariance holds for a given set of data, no clear and convincing substantive interpretation can be placed on constraints on elements in the B̂ matrices. Data must support at least the hypothesis of weak factorial invariance to yield interpretable constraints on the B̂ matrices, and still stronger interpretations of such constraints accompany the successful specification of strong factorial invariance. For this reason, we placed the restricted B̂ matrices in the strong factorial invariance model in Equation 15, because strong factorial invariance allows a more adequate basis for discussing these constraints. Second, if across-group constraints are imposed on the B̂ matrices, the constraints should generally

be placed on raw score regression weights (or equivalents of these), rather than standard score regression weights. As in typical multiple regression analyses, raw score regression weights are presumed to be invariant across samples from a common population, whereas standardized regression weights are expected to vary as a result of range restriction resulting from sampling. Now, with latent variable models, the scale of each latent variable may be fixed in any of several ways, and the “raw score regression weights” among the latent η variables will vary as a result. However, if the scale of each latent variable is fixed in one group and strong factorial invariance constraints are placed on the Λ̂ and τ̂ matrices, then the latent η variables are on the same scale across groups. As a result, the regression weights in the B̂ matrices are analogous to raw score regression weights and are on a comparable metric, so invariance constraints on these weights are reasonable a priori hypotheses to test. A third form of structural invariance involves the intercepts for the dependent latent variables, contained in the α_g^(d) matrices. Placing invariance constraints on these latent intercepts leads to strong structural invariance. If across-group constraints on latent intercepts are imposed, then the resulting equation for mean expectations has the form:

Ȳ_g ≅ μ̂_g = τ̂ + Λ̂ [α̂_g^(i); α̂^(d)],   (16)

where all terms are defined above, and the lack of subscripts on the α̂^(d) matrix indicates the presence of cross-group invariance constraints on parameter estimates in this matrix. More importantly, if these latent intercept terms are constrained to invariance across groups with no significant loss in model fit, then any group differences in mean level on the dependent latent variables result from group differences in mean level on the independent latent variables. This condition is analogous to the distinction at the measured variable level that distinguishes weak factorial invariance from strong factorial invariance; hence, our distinction here between weak and strong structural invariance. The fourth and final type of structural invariance to be represented and tested involves the residual covariances among the dependent latent variables, contained in the Ψ̂_g^(d) matrices. If these residual covariances are constrained to invariance, the resulting model is characterized as conforming to strict

structural invariance. Under this model, Equation 15 becomes:

S_g ≅ Σ̂_g = Λ̂ (I − B̂)⁻¹ [Ψ̂_g^(i)  0;  0  Ψ̂^(d)] [(I − B̂)⁻¹]′ Λ̂′ + Θ̂_g,   (17)

where all terms are defined above, and the lack of subscripts on certain parameter matrices indicates cross-group invariance constraints on parameter estimates in these matrices. As a summary of issues in structural invariance, distinctions among levels of constraints similar to those made for factorial invariance may be drawn. Specifically, invariance constraints on the α̂^(d) and B̂ matrices are the most important for tests of substantive theory. Once cross-group constraints are imposed on the α̂^(d) and B̂ matrices, identical raw score regression models—both the intercepts and regression weights—hold in each group at the latent variable level. Invariance of the α̂^(d) and B̂ matrices is a reasonable a priori hypothesis; if sustained, then the lack of group differences at this level is an important finding. By comparison, additional constraints on the Ψ̂_g^(d) matrices are nice but not necessary. In fact, there are reasonable bases for expecting that the Ψ̂_g^(d) matrices will vary significantly across groups under sampling from a population, although the α̂^(d) and B̂ matrices may display invariance across groups. Finally, we reiterate our earlier statement that invoking strong or strict structural invariance constraints makes no sense substantively unless at least strong factorial invariance constraints have been placed on the model. Only if invariant latent variables are identified in an identical metric across samples does it make sense to test whether invariant regression parameters hold among these latent variables. Moreover, if the α̂^(d) and B̂ matrices are invariant across the general population and specific special populations, then processes bringing about the dependent latent variables are the same across populations, allowing one to generalize theoretical conclusions across groups. On the other hand, if the α̂^(d) and B̂ matrices are not invariant across the general population and special populations, then results from the general population cannot be extended to the special populations, and theories regarding the nature of relations among latent variables would require modifications in connection with special populations.
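To make the structural-invariance algebra concrete, the sketch below computes the latent covariance matrix implied by Equation 11 and the manifest covariance matrix of Equation 14a for one group, using invented values for a model with one independent and one dependent latent variable.

```python
# Implied latent and manifest covariance structure with directed relations among latent
# variables (Equations 11 and 14a), using hypothetical values for one independent (eta1)
# and one dependent (eta2) latent variable. Under weak structural invariance the same B
# would be reused in every group, while Psi and the latent intercepts could still differ.
import numpy as np

B = np.array([[0.0, 0.0],
              [0.5, 0.0]])            # eta2 regressed on eta1 with raw-score weight 0.5
Psi = np.diag([1.0, 0.75])            # var(eta1) and residual variance of eta2
Lambda = np.array([[0.8, 0.0],
                   [0.7, 0.0],
                   [0.0, 0.9],
                   [0.0, 0.6]])       # two indicators per latent variable
Theta = np.diag([0.30, 0.40, 0.35, 0.50])

inv = np.linalg.inv(np.eye(2) - B)
latent_cov = inv @ Psi @ inv.T                   # Equation 11: E(eta eta')
Sigma = Lambda @ latent_cov @ Lambda.T + Theta   # Equation 14a for a single group

print(np.round(latent_cov, 3))
print(np.round(Sigma, 3))
```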


Exploiting Unusual Variation in Special Populations A final issue that arises in the study of special populations is the frequent finding of unique forms of variability in the special populations. That is, relative to the general population, individuals in a given special population may exhibit substantial variation on key variables that does not exist in marked form in the general population. For example, adaptive behaviors are forms of behavior that enable one to live independently in the community. Because of the way that adaptive behaviors are measured, persons with IQ scores that are at or above the mean of the population may exhibit little variability on a measure of adaptive behavior, as they score at the highest level on every item. This is a quite reasonable outcome, as persons with IQ scores at or above the population mean do not have problems living independently in the community. One tends to find substantially more variability in ratings of adaptive behavior in samples of persons with intellectual disability; again, this is a reasonable outcome, as many persons with intellectual disability have difficulties in one or more domains of adaptive functioning, limiting the quality of their independent living in the community. Of course, if a special population is a subset of the general population, then the marked variation on key variables does exist in the general population. But, if the special population is a rather small subset of the population, then extreme variability on certain variables may be submerged in data on the full population and therefore go unnoticed. This issue of unusual variation within special populations leads to our fourth methodological implication of the presence of special populations: Implication 4: Special populations may exhibit unique forms of variability that can be exploited to test theoretical conjectures in ways unavailable in the general population.

We have not uncovered any particular methodological techniques that are unique to the issue of unusual variation within special populations. In the preceding sections, we outlined how quantitative techniques could provide informative ways of answering questions regarding measurement equivalence and structural invariance. If unusual variation were present on one or more variables in a special population, then this unusual variation could well be uncovered in particular matrices in the general structural equation model we described. Here, we merely highlight the issue of potential unusual variation in a special population. In

emphasizing this issue, we encourage researchers to be vigilant for the presence of unusual variation. Unusual variation may take any of several forms. Persons in a special population may exhibit variation on dimensions on which members of the general population may show no variability at all. An example of this is special artistic abilities, such as individual differences in composing symphonies or directing movies. No one doubts the presence of individual differences among composers and movie directors, and preference for certain music or movies is often a lively topic of discussion. But, we rarely reflect on the fact that relatively few individuals compose symphonies or direct movies, so no individual differences on these dimensions are evident outside the small group of persons who pursue these distinctive endeavors. Alternately, members of special populations may exhibit unusually large or small variation on behavioral traits relative to the general population. Persons in a special population may show a particular symptomatic behavior and so exhibit essentially no variation in this behavior, which may fall at one end of a behavioral continuum on which the general population exhibits wide individual variability. For example, persons with Down syndrome often have facial features, including small chin, round face, almond eye shape, and so forth, that are an unusual combination in the general population. Thus, the common facial appearance of persons with Down syndrome, which is distinct from other members of the population, is an early signal that the individual is a member of this special population. Or, persons in a special population may exhibit enhanced or enlarged variability on a dimension relative to the general population. Regardless of the form of unusual variability, such variability may offer unique ways to understand differences across groups.
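As a small, entirely hypothetical illustration of how variability can be submerged, the simulation below applies a ceiling to an adaptive-behavior-like rating: the rating shows essentially no spread among persons at or above the population mean on ability, but substantial spread in a low-scoring subgroup, which is where the instrument actually discriminates.

```python
# Hypothetical demonstration of variability submerged by a ceiling: an adaptive-behavior-like
# rating tops out for anyone functioning above a modest threshold, so it varies little among
# persons at or above the population mean but varies substantially in a low-scoring subgroup.
import numpy as np

rng = np.random.default_rng(7)
n = 100_000
ability = rng.normal(100, 15, n)                          # IQ-like score
rating = np.minimum(ability + rng.normal(0, 5, n), 85.0)  # instrument ceiling at a score of 85

above_mean = ability >= 100
low = ability < 70                                        # subgroup analogous to intellectual disability

print(round(rating[above_mean].std(), 2))  # essentially zero: everyone sits at the ceiling
print(round(rating[low].std(), 2))         # substantial spread: the items discriminate here
print(round(rating.std(), 2))              # full-population spread mixes a ceiling spike with the subgroup
```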

Examples of Quantitative Explorations of Special Populations Having outlined various methodological issues and approaches that can be used to characterize special populations, we now describe empirical studies that embody each of the methodological issues that we have discussed. The approaches we have taken and the results obtained can be extended readily by other researchers to the study of special populations.

Identifying and Accessing Participants in Special Populations One example of the problems that arise when recruiting members of a special population was

reported by Nary, Froehlich-Grobe, and Aaronson (2011), who sought to recruit wheelchair users into a randomized controlled exercise trial. Nary et al. set out to recruit 180 participants for their study, or 60 persons in each of three cohorts. The research team used a wide variety of recruiting strategies, beginning with the traditional approach of contacting hospitals, health-care organizations, health-care providers, and disability agencies. When these sources of recruitment led to discouraging results, Nary et al. began using more innovative approaches, including putting flyers in movie theaters and public transportation (e.g., buses), advertising in newspapers, newsletters, and so forth, and employing direct mail coupon packets (identified as ValPak). These recruitment efforts were extremely varied, as Nary et al. listed approximately 30 different locations/activities associated with their recruitment. When participants were recruited, they were asked where they heard about the project. Somewhat surprisingly, the single most effective method was the ValPak direct mail coupon packet and other contacts through the media (e.g., radio and TV advertisements, newspaper advertisements); a full third of the sample of participants was recruited through the media contacts. Although the research team did not fully meet their recruitment goal of 180 participants, they did come close to the goal—and only did so by employing a wide array of approaches that cost much more in time and effort than the research team had anticipated. The Nary et al. paper is an entertaining, if sobering, account of the myriad approaches the researchers took to recruit participants and the relative value of the different approaches. Nary et al. have summarized their experiences in a series of five “lessons learned,” such as needing to be cognizant that recruiting of persons with disabilities will be more difficult than recruiting persons without disabilities and that more time, effort, and money will be required in recruiting members of special populations than initially thought. The most unfortunate lesson learned was the fourth lesson, which related to the research team finding that health-care agencies were not especially helpful in recruiting participants, although the special population of wheelchair users has, in general, greater reliance on such agencies. The Nary et al. paper is a wonderful place to start when planning to recruit members of populations with disabilities, and learning from the efforts of these researchers might ease recruitment in future studies. A second example of ways researchers have dealt with identifying and accessing participants in a


special population comes from our California Families Project (CFP), a study of 674 Mexican-origin families in California (e.g., Conger, Song, Stockdale, Ferrer, Widaman, & Cauce, 2012; Conger, Stockdale, Song, Robins, & Widaman, in press). We worked through the school districts in two cities in Northern California to identify potential members of our Mexican-origin population, which we defined as the population of fifth grade public or Catholic school children whose four grandparents were born in Mexico or whose forebears were born in Mexico. Children were drawn at random from the student rosters for the school districts of these two cities. Families of these children were then recruited by telephone or, for cases in which there was no listed phone number, by a recruiter who went to their home. Of the 982 families contacted, 68.6% of these eligible families (N = 674) agreed to participate in the study. All family members were of Mexican origin as determined by their ancestry and their self-identification as being of Mexican heritage. First-, second-, and third-generation children of Mexican origin were eligible for the study. Also, the focal child had to be living with his or her biological mother. Either two-parent (82% of the sample) or single-parent (18% of the sample) families were eligible to participate. In two-parent families, the father had to be the child’s biological father. In addition to the initial recruitment, we face challenges in tracking our families over time, especially given high mobility of this population during times of economic downturn and resulting economic pressure on families. We are currently in the sixth year of assessing families in our longitudinal study and are happy to report that we have a retention rate over 90% for the families in our study. This has taken a number of special approaches to retaining families, including going door-to-door if attempts to contact a family by phone are unsuccessful, contacting persons who know a family (especially if the family has moved), and following families for assessments as they move to other states (e.g., Texas, Arizona) or back to Mexico. In the future, we should document all of the different strategies we have used to keep contact with our sample. An article detailing the different approaches and their relative success would provide a useful comparison to the recruitment problems confronted by Nary et al. (2011) in their study of wheelchair users. Only by keeping recruitment and retention at the highest possible levels will the data generated by our study and by others (e.g., Nary et al., 2011) be optimal for drawing the


conclusions we wish to draw. Optimal recruitment and retention is an unexpectedly time-consuming and expensive proposition, and we have our own list of lessons learned in this necessary, thankless task.

Measuring the Same Constructs Across Groups Economic pressure and depression. As one example of measuring the same constructs across groups, we present an analysis of measurement invariance using data from three ethnically diverse samples living in the United States. Data for these samples were obtained from the following studies: (1) the Family and Community Health Study (FACHS), a study of 889 African-American children and their families; (2) the CFP, a study of 674 Mexican-origin families and children; and (3) the Family Transitions Project (FTP), a study of 550 European American children and families. For these analyses, we focused our attention on the impact of economic pressure on husbands’ and wives’ depressive symptoms. Because we were interested in impacts on two-parent families, all three samples were reduced in number, yielding 300 families from the FACHS study, 482 families from the CFP study, and 281 families from the FTP study, for a total of 1063 families across the three groups. The confirmatory factor model fit to the data for the three samples contained seven manifest variables and three latent variables and is illustrated in Figure 4.1. In the figure, standard figural notation is used: Triangles denote the unit constant used to estimate means (αk) or intercepts (τj), squares or rectangles represent manifest variables, circles or ellipses stand for latent variables, single-headed arrows reflect directed effects (e.g., factor loadings λj, regression weights), and double-headed arrows represent latent variable (ψkk) and unique factor (θj) variances or covariances (ψkk′). In Figure 4.1, three latent variables are shown: Economic Pressure, Husband Depression, and Wife Depression. All three studies included the same three indicators of Economic Pressure: (1) Unmet Material Needs (U), the average of 4 items assessing unmet material needs in different domains, including “having enough money to afford the kind of home, clothing and food your family needs” (response scale: 1 = “strongly agree” to 4 = “strongly disagree”); (2) Can’t Make Ends Meet (C), the average of 2 items measuring difficulties in having money to cover expenses, including “difficulty paying bills” and “the amount of money left at the end of the month” (response scale: 1 = “no difficulty at all” to 4 = “a great deal of difficulty”); and (3) Financial Cutbacks (F), the number of 11 areas in which the family had to make cutbacks, including “the family postponing major household purchases, changing residence, and/or eliminating medical insurance” (each area scored dichotomously, 0 = “no cutbacks” and 1 = “cutbacks”). Husbands and wives provided answers on the preceding scales, and average responses across husbands and wives were used as analyzed scores. For families in which husbands did not participate, scores on the economic pressure variables were treated as missing data. Scores on all three indicators were reverse-scored, when necessary, so that higher scores indicated greater economic pressure on the family. Across the three ethnic groups, the depression latent construct was obtained from the Mini Mood and Anxiety Symptom Questionnaire (Mini-MASQ; Clark & Watson, 1997). For this analysis, we used the five items that measure general distress—depression. These items ask both husband and wife to self-report on how much they had felt depressed, discouraged, and/or worthless in the past week. Each item was answered on a scale ranging from 1 = “not at all” to 4 = “very much.” Two parcels (one two-item and one three-item parcel) were formed, and these are denoted M1H, M2H, M1W, and M2W for parcels 1 and 2 from husband and wife, respectively.
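The scoring steps described above (reverse-scoring where needed, averaging husband and wife reports, and forming item parcels) can be sketched as follows. The simulated arrays and item assignments are hypothetical stand-ins, not the actual FACHS, CFP, or FTP scoring code.

```python
# Hypothetical sketch of the scoring steps described in the text; not the studies' actual code.
import numpy as np

rng = np.random.default_rng(0)
n_families = 200

def reverse_score(x, low=1, high=4):
    """Reverse a low-to-high response so that higher values indicate greater economic pressure."""
    return high + low - x

# Unmet Material Needs: four 1-4 items per informant, reverse-scored, averaged within
# informant, then averaged across spouses (np.nanmean would fall back to the wife report
# for families in which the husband's items were missing).
husband_items = rng.integers(1, 5, (n_families, 4)).astype(float)
wife_items = rng.integers(1, 5, (n_families, 4)).astype(float)
unmet_h = reverse_score(husband_items).mean(axis=1)
unmet_w = reverse_score(wife_items).mean(axis=1)
unmet_needs = np.nanmean(np.column_stack([unmet_h, unmet_w]), axis=1)

# Depression parcels: five 1-4 Mini-MASQ items per spouse split into a two-item and a
# three-item parcel (parcel scores taken as item means here; sums would work equally well).
dep_husband = rng.integers(1, 5, (n_families, 5)).astype(float)
M1H = dep_husband[:, :2].mean(axis=1)
M2H = dep_husband[:, 2:].mean(axis=1)

print(unmet_needs[:3], M1H[:3], M2H[:3])
```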

Assessing factorial invariance across groups. We then fit a series of three-group confirmatory factor models to test factorial invariance of the model across the three samples. The first model, Model 1, was a three-factor model that had the same pattern of fixed and free factor loadings in each group. This configural invariance model fit the data well, χ²(33, N = 1063) = 41.99, with RMSEA of 0.028, and CFI and TLI values of 0.996 and 0.992, respectively. Next, we imposed invariance constraints on factor loadings in the Λ matrix across gender and ethnicity. For example, the loadings for the two indicators of husband depression were constrained to be equal to the loadings for the two indicators of wife depression within each sample; these loadings were then constrained to equality across all three samples. The fit of the resulting model, Model 2, was slightly worse; however, indicators of model fit remained in an acceptable range, χ²(42, N = 1063) = 61.25, p = 0.02, with RMSEA of 0.036, and CFI and TLI scores of 0.991 and 0.987, respectively. The latter indices of practical fit suggest that the invariance constraints imposed on Model 2, which represent weak factorial invariance, are appropriate. In prior research, several authors have discussed changes in TLI or CFI values that might be considered practically significant when comparing nested models. Thus, Widaman (1985) considered a difference in the TLI of less than 0.01 when comparing two models to be a practically unimportant difference in fit, and Cheung and Rensvold (1999) and

Figure 4.1 A figural presentation of the relations between latent variables for Economic Pressure, Husband Depression, and Wife Depression and their respective indicators. (Note: triangles stand for the unit constant, squares for manifest variables, circles or ellipses for latent variables, single-headed arrows for direct effects, and double-headed arrows for symmetric effects such as variances or covariances.)

Little, Card, Slegers, and Ledford (2007) argued that the CFI must meet this criterion (i.e., a difference in CFI of 0.01 or more between models) to be deemed a difference in fit worthy of note. Using these standards, the changes in fit when moving from the configural invariance model to the weak factorial invariance model were small and unimportant. Because fit indices for Model 2 were acceptable, we next invoked invariance constraints on the manifest variable intercepts in τ, leading to Model 3. Invariance constraints on all but one τ value in each sample were acceptable; the τ value allowed to vary freely across the three samples was the intercept for the third economic pressure indicator. This partial strong invariance model resulted in poorer statistical fit than prior models, χ²(49, N = 1063) = 95.29, p < 0.001, reflecting a significant worsening of fit relative to the weak invariance model, Δχ²(7, N = 1063) = 34.04, p < 0.001. However, practical fit indices were only modestly worse, with an RMSEA of 0.052, and CFI and TLI values of 0.979 and 0.973, respectively. Although the worsening in fit slightly exceeded the “change of 0.01” criterion for a practically significant difference in fit, the practical fit indexes for Model 3 were still acceptable (i.e., the CI for the RMSEA included 0.05, and both TLI and CFI values were above 0.95), indicating that the partial strong invariance constraints applied were appropriate. The final confirmatory model, Model 4, imposed invariance constraints on the unique factor variances in Θ and therefore corresponded to strict factorial invariance. Model 4 resulted in relatively poor fit, χ²(65, N = 1063) = 317.84, p < 0.001, with RMSEA of 0.105, and CFI and TLI scores of 0.886 and 0.890, respectively. This represented a significant worsening of both the statistical index of fit, Δχ²(16, N = 1063) = 222.55, p < 0.001, and practical indices of fit when compared to Model 3. Based on the poor practical fit indexes of the strict factorial invariance model, we accepted the partial strong factorial invariance model (Model 3) as the optimal model for the data. Group differences in means and variances on latent variables. Given the fit of the partial strong factorial invariance model, mean and variance differences across groups could be evaluated in an unbiased fashion. For the latent variables of Economic Pressure and Husband Depression, means were fixed at 0 and variances at unity for the FTP sample, and means and variances were estimated for all other latent variables. In the FTP sample, wives had significantly


higher levels of depression (M = 0.40 [SE = 0.10], SD = 1.46) relative to their husbands. The FACHS sample had elevated means of 0.12 [SE = 0.10] (SD = 1.03), 0.28 [SE = 0.13] (SD = 1.37), and 0.45 [SE = 0.12] (SD = 1.27) on the Economic Pressure, Husband Depression, and Wife Depression latent variables, respectively, and the CFP sample also showed higher means on the three factors, respectively, of 0.60 [SE = 0.09] (SD = 0.97), 0.36 [SE = 0.11] (SD = 1.47), and 0.69 [SE = 0.11] (SD = 1.52). Thus, relative to European American families, African-American and Mexican-origin families showed both higher mean levels and greater variability on the three factors assessed in these confirmatory factor models.
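The nested-model comparisons just described combine a likelihood-ratio (chi-square difference) test with the change-in-CFI guideline. The sketch below shows that bookkeeping for the Model 2 versus Model 3 comparison reported above; the helper function and its parameter names are our own illustration, not output from any particular modeling package.

```python
# Chi-square difference test and delta-CFI check for nested invariance models,
# using the Model 2 (weak) versus Model 3 (partial strong) values reported in the text.
from scipy.stats import chi2

def compare_nested(chi2_restricted, df_restricted, chi2_free, df_free,
                   cfi_restricted, cfi_free, delta_cfi_cutoff=0.01):
    d_chi2 = chi2_restricted - chi2_free
    d_df = df_restricted - df_free
    p_value = chi2.sf(d_chi2, d_df)        # upper-tail p-value for the difference test
    d_cfi = cfi_free - cfi_restricted      # drop in CFI caused by the added constraints
    return d_chi2, d_df, p_value, d_cfi, d_cfi >= delta_cfi_cutoff

# Model 2: chi-square(42) = 61.25, CFI = 0.991; Model 3: chi-square(49) = 95.29, CFI = 0.979
result = compare_nested(95.29, 49, 61.25, 42, 0.979, 0.991)
print(result)  # difference of 34.04 on 7 df, p < .001; delta CFI = 0.012, just over the 0.01 guideline
```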

Exploring Bounds of Psychological Theories Ethnic differences in effects of economic pressure on depression. We turn next to ways of exploring the bounds of psychological theories. A substantial body of prior research supports the conclusion that family economic pressure has negative effects on husband and wife depression. Much of this research has been based on data from European American families, and we wanted to see whether the effect of economic pressure on husband and wife depression was similar across ethnic groups. To answer this question, we returned to the three-group data used to test factorial invariance but examined models that substituted direct effects of economic pressure on husband and wife depression in place of the correlations estimated in prior models. In the first of these models, we imposed invariance constraints on the path coefficients leading from economic pressure to depression across husbands and wives within each sample. This cross-gender constraint resulted in a significant worsening in fit compared to the partial strong invariance model, Δχ²(3, N = 1063) = 12.96, p = 0.01, with moderately worse practical fit indexes, RMSEA of 0.055, and CFI and TLI values of 0.975 and 0.969, respectively. Next, we freed the cross-gender constraints, but imposed equality constraints on the path coefficients of economic pressure on husband depression across groups and on wife depression across groups. This model also resulted in a significant worsening of fit when compared to the partial strong factorial invariance model, Δχ²(4, N = 1063) = 19.65, p < 0.001, with worse practical fit indexes, RMSEA of 0.057, and CFI and TLI values of 0.972 and 0.967, respectively.

In our final model, we imposed invariance constraints on the path coefficients for economic pressure on husband and wife depression across the two minority samples, allowing the corresponding coefficients to be free for the European American sample. This resulted in a nonsignificant change in model fit when compared to the partial strong invariance model, Δχ²(2, N = 1063) = 1.52, p = 0.47. The practical fit indexes for this model were essentially identical to those for the strong factorial invariance model, so we chose this model as the optimal representation of our data. Parameter estimates in this model suggest that the impact of economic pressure on the level of depression for minority couples differs from that for European American couples. Specifically, economic pressure appears to have the strongest effect (β = 0.69) on depression in minority husbands (both African-American and Mexican-American husbands), and a significantly weaker effect (β = 0.47) on depression in minority wives. In contrast, economic pressure had a stronger effect (β = 0.54) on European American wives than on European American husbands (β = 0.23), who were least affected by economic pressure. Parenting styles and child behavior. A second example of exploring the bounds of psychological theories developed on the general population is worthy of note. In this study, Widaman and Borthwick-Duffy (April, 1990) reported results from 109 families with a child with intellectual disability. Key findings on parenting styles, or styles of parenting behaviors, by researchers such as Baumrind (1968, 1971, 1991; Baumrind, Larzelere, & Owens, 2010) and Hoffman (1975, 1979, 1994) supported the presence of several dimensions of parenting behavior. The general findings of this research were that authoritative parenting (high control, plus high nurturance/warmth) was associated with optimal forms of child behavior and that authoritarian parenting (high control and power assertion, plus low nurturance and warmth) and permissive parenting (low control, plus high nurturance and warmth) both led to less optimal levels of child behavior. Widaman and Borthwick-Duffy (April, 1990) isolated a total of seven dimensions of parenting behavior in their study, including (1) nurturance/warmth, (2) induction, (3) maturity demands, (4) promoting autonomy, (5) firm control, (6) love withdrawal, and (7) power assertion. Consistent with research on the general population, Widaman and Borthwick-Duffy found that the

positive parenting behaviors of induction, maturity demands, and promoting autonomy had the strongest, positive effects on longitudinal changes in different forms of adaptive functioning, such as practical skills (e.g., dressing oneself ), conceptual competence (e.g., functional academics), and social competence (e.g., making and keeping friends). However, contrary to research on the general population, Widaman and Borthwick-Duffy found that the authoritarian dimension of power assertion consistently had the strongest effects on longitudinal changes in negative behaviors such as social maladaption (e.g., aggression, property destruction) and personal maladaption (e.g., self-injurious behavior). Although the standardized regression weights were not large (βs ranged from 0.15 to 0.24), power assertion was the only parenting style to impact changes in maladaptive behaviors. Thus, the proscription of power assertion as a less useful form of parenting behavior was based on research on families in the general population. However, to reduce the levels of maladaptive behaviors in children with intellectual disability, power assertion appears to be the only viable parenting option.

Exploiting Unusual Variability in Special Populations
Adaptive behavior in persons with intellectual disability. The final analytic issue to illustrate is exploiting unusual variability in special populations. One example of unusual variability in special populations involves the domain of adaptive behavior, which attains significance in the population of persons with intellectual disability. The three key dimensions of adaptive behavior are practical competence (or independent living skills), conceptual competence (or cognitive skills), and social competence (or social skills); all three dimensions represent everyday behaviors that enable a person to live independently in the community. Widaman, Gibbs, and Geary (1987) utilized a database maintained by the Department of Developmental Services (DDS) of the State of California. DDS required all persons receiving state services for developmental disability to be assessed on a 66-item instrument of adaptive behavior. The instrument assessed the three key dimensions of practical, conceptual, and social competence and also assessed three additional dimensions of motor competence, social maladaption, and personal maladaption. Widaman et al. extracted 14 groups of individuals based on a crossing of age (children,
adolescents, and adults), levels of intellectual disability (mild, moderate, and severe), and placement (home, community). Thus, these 14 groups are special subgroups defined by demographic and functional characteristics from the special population of persons with intellectual disability. Widaman et al. confirmed essentially identical six-factor solutions across all 14 groups; interested readers are referred to the published article for details. Of importance here is the fact that, given the way the 66 items are phrased, persons who do not have an intellectual disability would likely score at the highest scale point on each item, therefore failing to exhibit any variability in responses on the items. Only persons with intellectual disability exhibit substantial variability on the items, so the dimensions of adaptive behavior only have ready application in this special population. However, this research on adaptive behavior offers a chance to make recommendations for future research. The instrument developed by the California DDS contained items that exhibit much variability only in samples of persons with developmental disabilities, because persons without developmental disabilities would have ceiling effects on all items. This does not mean, however, that persons without developmental disabilities do not display individual differences in adaptive forms of behavior. More likely, the lack of variance in a sample of persons without developmental disabilities is a measurement issue or problem. This measurement problem could be confronted by developing a set of harder items for each dimension of adaptive behavior, so that items would have a higher ceiling and persons without developmental disabilities might not “top out” on every item. If this were done, then a computerized adaptive testing (CAT) approach could be developed to administer a unique set of items to each examinee, presenting items that would enable precise measurement of the individual’s standing on each dimension while requiring only a minimal number of items to do so. Waller and Reise (1989) described how to apply the CAT approach to personality measurement, and Reise and Waller (2003) discussed application of sophisticated techniques to assessing psychopathology. Similar approaches could undoubtedly be used to assess adaptive behaviors across the entire population in a more adequate way.
Effects of prenatal exposure to phenylalanine. A second example of unusual variability in special populations that provides a unique opportunity to explore relations among variables is derived from
the Maternal PKU Collaborative (MPKUC) study. Phenylketonuria is a well-known genetically based disorder, which results in disrupted metabolism of phenylalanine (PHE) into tyrosine. If left untreated, infants with PKU who are normal at birth suffer permanent brain damage that leads to severe mental retardation (mean IQ of 50) by age 2 years. However, with early identification and placement of infants on a diet low in PHE, the negative effects of PKU can be circumvented, and children with PKU can grow to adulthood with no evidence of intellectual disability (Koch & de la Cruz, 1999a, 1999b). However, the story of maternal PKU is more complex. If a woman with PKU does not maintain a low-PHE diet when pregnant, then the increased PHE in her blood crosses the placental barrier and exposes the developing fetus to high levels of PHE. For persons with PKU, levels of endogenous PHE over 6 mg/dL have been found to be teratogenic, which means that such levels lead to negative effects on behavior. However, for the developing fetus, levels of exogenous PHE (i.e., PHE from the mother) that were sufficient to cause teratogenic effects had never been identified. The MPKUC study (Koch, de la Cruz, & Azen, 2003) began in 1984 as an intervention study to help pregnant women maintain low-PHE diets and therefore maintain low blood PHE levels during pregnancy. A secondary goal of the MPKUC study was to monitor maternal PHE levels throughout pregnancy to study the relation between PHE levels and child outcomes. Table 4.1 shows descriptive statistics for participants from the MPKUC study for three mother variables (Full Scale IQ, PHE level when on an unrestricted diet, and average PHE level during pregnancy) and three child variables (Verbal, Performance, and Full Scale IQ). Various forms of unusual variability are contained in this table. First, all four of the IQ variables exhibit substantial deviation from the population mean: The mean mother IQ was approximately a full standard deviation below the population mean of 100, and the three mean child IQ scores are more than one-half a standard deviation below the population mean. Furthermore, the three child IQ scores show markedly greater variability than in the population, with SDs greater than 21 relative to a population SD of 15. The excessive variability in child IQ scores is the result of a larger-than-expected number of observations with rather low IQ, presumably resulting from higher levels of prenatal exposure to PHE. The indices of mother PHE levels also exhibit unusual variability. Persons who do not have the PKU genetic defect

Table 4.1. Descriptive Statistics on Variables from the Maternal PKU Study

Variable                                   N     Mean    SD      Min    Max
Mother full-scale IQ                       379   85.90   13.65   40     130
Mother PHE level on unrestricted diet      413   22.03    9.18   3.30   51.10
Mother PHE level during pregnancy          412    8.23    4.49   1.30   28.30
Child verbal IQ 7 years                    284   92.06   22.39   40     142
Child performance IQ 7 years               285   92.00   21.86   40     133
Child full-scale IQ 7 years                284   91.35   23.21   35     139

Note: Mother full-scale IQ based on Wechsler Adult Intelligence Scale—Revised; Mother PHE levels on regular diet and during pregnancy are in mg/dL; Child verbal, performance, and full-scale IQ based on Wechsler Intelligence Scale for Children—Revised.

would exhibit extremely low PHE values, below 1.0 and near 0, on these two measures. In contrast to this, mother PHE levels on an unrestricted diet varied between 3.3 and 51.1, and mother PHE levels during pregnancy varied between 1.3 and 28.3. To model the effect of prenatal PHE exposure on child Full Scale IQ at 7 years, one could use a linear regression model. The result of fitting this model is the following equation: predicted IQ = 119.5 − 3.53(PHE). The standard errors for the intercept and regression slope were 2.13 and 0.24, respectively, so interval estimates (95% CIs) for the two parameter estimates were approximately 117.2 to 123.7 and −3.05 to −4.00 for the intercept and slope, respectively. The above equation represents the unrealistic expectation that a child’s IQ at age 7 years would be approximately 120 if his/her mother had maintained a PHE level of 0 during pregnancy, and that IQ would drop about 3.5 points for every 1 mg/dL increase above this value of 0. An alternative regression model is a two-piece linear spline, with three parameter estimates: an intercept, a knot point that estimates the PHE exposure at which a teratogenic effect begins to occur, and the linear slope representing the teratogenic effect after the knot point. The results for this model were: predicted IQ = 103.9 − 4.14 × max(0, PHE − knot), with knot = 5.50. In this equation, children who experienced prenatal PHE levels between 0 and 5.50 all had an expected 7-year IQ of about 104, near the population mean. Moreover, after the knot point of 5.50, the predicted IQ drops over 4.1 IQ points for every 1 mg/dL increase. Because the standard errors for

the three coefficients were 1.70, 0.32, and 0.62, the point estimates and interval estimates for the coefficients are: intercept = 103.9, 95% CI [100.5, 107.3], regression slope for PHE = −4.14, 95% CI [ −3.50, −4.78], and knot point = 5.50, 95% CI [4.25, 6.75]. Exploiting the unusual variability of mother PHE levels during pregnancy allowed us to reach several important goals. First, we could verify the nature of the teratogenic effect of prenatal exposure to PHE on child cognitive outcomes, which is nonlinear in nature with a threshold for teratogenic effects. Second, we could estimate the level of exogenous PHE at which the teratogenic effect begins to occur, which is close to the level of endogenous PHE often assumed to have teratogenic effects for persons with PKU. Third, these results could be used as the basis for recommendations for monitoring PHE levels in mothers with PKU, attempting to ensure that mothers with PKU keep their PHE levels below the level at which teratogenic effects are likely to occur. Additionally, all of these goals were accomplished only because of the unusual variability in PHE levels exhibited by mothers with PKU in the MPKUC study. Readers are referred to Widaman (2009) for an informative summary of findings of the MPKUC study.
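The two-piece linear spline can be fit with standard nonlinear least squares routines. The sketch below is our illustration only, using simulated data in place of the MPKUC data (the sample size, error variance, and generating values are made up); it fits predicted IQ = b0 + b1 × max(0, PHE − knot) and reports approximate 95% confidence intervals of the same form as those given above.

```python
# Minimal sketch: fitting a two-piece linear spline (flat segment, then a linear
# decline after an estimated knot) to simulated PHE-exposure and IQ data.
import numpy as np
from scipy.optimize import curve_fit

def spline(phe, b0, b1, knot):
    # Expected IQ is b0 up to the knot, then declines with slope b1 beyond it.
    return b0 + b1 * np.maximum(0.0, phe - knot)

rng = np.random.default_rng(0)
phe = rng.uniform(1, 28, size=300)                                 # simulated exposures
iq = spline(phe, 104.0, -4.0, 5.5) + rng.normal(0, 12, size=300)   # simulated outcomes

params, cov = curve_fit(spline, phe, iq, p0=[100.0, -3.0, 5.0])
ses = np.sqrt(np.diag(cov))
for name, est, se in zip(["intercept", "slope after knot", "knot"], params, ses):
    print(f"{name}: {est:.2f}, 95% CI [{est - 1.96 * se:.2f}, {est + 1.96 * se:.2f}]")
```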

Conclusions
The presence of special populations—however delineated—provides opportunities beyond those afforded by drawing repeated samples that are representative of the general population. If we disregarded the presence of special populations, then we might develop a corpus of scientific findings that
applies to “the” population—whatever that is—but in actuality applies to very few individuals or, at least, fails to apply to members of important special populations. The mere existence of special populations, however these are defined, challenges us to verify that the most important of our findings are not moderated substantially as a function of special population status. We outlined four primary methodological issues that arise when investigating phenomena in the presence of special populations. These four issues involve identifying and assessing participants in special populations, verifying that we are measuring the same things across populations, discovering whether sociobehavioral processes unfold in the same ways across populations, and searching for unusual variability that might provide unique ways of viewing behavioral phenomena and testing theoretical conjectures. Given the rather recent upsurge of research pursued under the explicit rubric of special populations, the implications of special populations on the way we do science will only increase in the future. Furthermore, we expect that the recognition of the presence of special populations will only enrich the way we view social and behavioral science as we move further into the twenty-first century.

Future Directions
Several future directions for research and theory with regard to quantitative methodology as applied to special populations can be drawn from the material presented earlier in this chapter. These are provided as a series of questions to guide future research.
Question 1: What is the best way to conceptualize special populations? Or, are alternative bases for identifying special populations the best way to proceed?
Question 2: Is a “one-size-fits-all” conception of special populations possible, or will the definition of special populations vary as a function of the research question asked?
Question 3: How should a researcher proceed if full strong factorial invariance does not hold but only partial strong invariance is exhibited by data? How seriously does partial strong factorial invariance impede scientific conclusions relative to full invariance of all factor loadings and intercepts in the strong factorial invariance model?
Question 4: How large must differences be across special populations before differences are considered important? Large sample sizes lead to increased power to detect statistically significant differences across
groups, but how should we characterize the magnitudes of effects, what magnitude of effects should be considered practically important, and would the magnitude of effects considered important vary across domains of research?
Question 5: When does unusual variability represent a valid measurement outcome, and when does unusual (or different) variability across groups represent a failure to assess individual differences adequately or in comparable fashion across groups? Measurement is the basis on which the whole enterprise of science is erected, and concerted attention to accurate measurement of individual differences across the entire span of a dimension is crucial to answering questions such as these. Use of the most up-to-date measurement approaches, such as CAT, would go a long way to resolving issues of differences in variance across groups.
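To make the CAT suggestion concrete, the sketch below uses hypothetical item parameters under a two-parameter logistic (2PL) IRT model, not parameters from any real adaptive-behavior instrument. It shows only the core adaptive step: administer the not-yet-used item with maximum Fisher information at the respondent's current ability estimate, so that high-functioning respondents receive harder items rather than topping out.

```python
# Minimal sketch of item selection in computerized adaptive testing (CAT)
# under a two-parameter logistic (2PL) IRT model.
import numpy as np

def p_correct(theta, a, b):
    """2PL probability of endorsing an item with discrimination a and difficulty b."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def item_information(theta, a, b):
    """Fisher information of a 2PL item at ability theta: a^2 * p * (1 - p)."""
    p = p_correct(theta, a, b)
    return a ** 2 * p * (1.0 - p)

def select_next_item(theta_hat, a_params, b_params, administered):
    """Return the index of the unused item with maximum information at theta_hat."""
    info = item_information(theta_hat, a_params, b_params)
    info[list(administered)] = -np.inf  # exclude items already given
    return int(np.argmax(info))

# Hypothetical 20-item bank spanning a wide difficulty range.
a_params = np.full(20, 1.5)
b_params = np.linspace(-2.0, 3.0, 20)
print(select_next_item(theta_hat=1.2, a_params=a_params,
                       b_params=b_params, administered={0, 5}))
```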

The future is bright for methodological and quantitative innovations in the study of special populations. As we documented at the start of this chapter, labeling groups as special populations is a relatively recent phenomenon, largely a product of the past three decades. Moreover, the majority of statistical methods and techniques for studying differences across groups are also of recent origin. Many of the most advanced techniques have been available for, at most, 30 years, and new ways in which these methods can be used to illuminate similarities and differences across populations are being developed on an almost daily basis. Improved, more sophisticated understanding of the nature of special populations is occurring at the genetic, biological, and psychological/behavioral levels, and optimal use of methodological approaches and quantitative techniques is a crucial element that will push this endeavor forward. Indeed, new quantitative techniques or innovative use of existing techniques may well be the key that unlocks the door to advanced understanding of why special populations deserve their distinctive status.

Author Note Support for this work was provided by grants from the National Institute of Child Health and Human Development, the National Institute on Drug Abuse, and the National Institute of Mental Health (DA017902, HD047573, HD051746, and MH051361). Correspondence regarding this manuscript should be addressed to Keith Widaman, Department of Psychology, University of California at Davis,

One Shields Avenue, Davis, CA 95616. E-mail: [email protected]

References Barton, B. (2009). Dreams deferred: Disability definitions, data, models, and perspectives. Journal of Sociology and Social Welfare, 36(4), 13–24. Baumrind, D. (1968). Authoritarian vs. authoritative parent control. Adolescence, 3, 255–272. Baumrind, D. (1971). Current patterns of parental authority. Developmental Psychology, 4 (1, Pt. 2), 1–103. Baumrind, D. (1991). The influence of parenting style on adolescent competence and substance abuse. The Journal of Early Adolescence, 11, 56–95. Baumrind, D., Larzelere, R. E., & Owens, E. B. (2010). Effects of preschool parents’ power assertive patterns and practices on adolescent development. Parenting: Science and Practice, 10, 157–201. Belsky, J., Bakermans-Kranenburg, M. J., van IJzendoorn, M. H. (2007). For better and for worse: Differential susceptibility to environmental influences. Current Directions in Psychological Science, 16, 300–304. Bronfenbrenner, U. (1977). Toward an experimental ecology of human development. American Psychologist, 32, 513–531. Bronfenbrenner, U. (1986a). Ecology of the family as a context for human development: Research perspectives. Developmental Psychology, 22, 723–742. Bronfenbrenner, U. (1986b). Recent advances in research on the ecology of human development. In R. K. Silbereisen, K. Eyferth, & G. Rudinger (Eds.), Development as action in context: Problem behavior and normal youth development (pp. 287–309). Heidelberg: Springer-Verlag. Bronfenbrenner, U. (1999). Environments in developmental perspective: Theoretical and operational models. In S. L. Friedman & T. D. Wachs (Eds.), Measuring environment across the life span: Emerging methods and concepts (pp. 3–28). Washington, DC: American Psychological Association. Byrne, B. M., Shavelson, R. J., & Muthén, B. (1989). Testing for the equivalence of factor covariance and mean structures: The issue of partial measurement invariance. Psychological Bulletin, 105, 456–466. Caspi, A., McClay, J., Moffitt, T. E., Mill, J., Martin, J., Craig, I. W., et al. (2002). Role of genotype in the cycle of violence in maltreated children. Science, 297, 851–854. Caspi, A., Sugden, K., Moffitt, T. E., Taylor, A., Craig, I. W., Harrington, H., et al. (2003). Influence of life stress on depression: Moderation by a polymorphism in the 5-HTT gene. Science, 301, 386–389. Chakrabarti, B., Dudbridge, E., Kent, L., Wheelwright, S., HillCawthorne, G., Allison, C., et al. (2010). Genes related to sex steroids, neural growth, and social-emotional behavior are associated with autistic traits, empathy, and Asperger syndrome. Autism Research, 2, 157–177. Chassin, L., Knight, G., Vargas-Chanes, D., Losoya, S. H., & Naranjo, D. (2009). Substance use treatment outcomes in a sample of male serious juvenile offenders. Journal of Substance Abuse Treatment, 36, 183–194. Chen, F. F., Sousa, K. H., & West, S. G. (2005). Testing measurement invariance of second-order factor models. Structural Equation Modeling, 12, 471–492. Cheung, G. W., & Rensvold, R. B. (1999). Testing factorial invariance across groups: A reconceptualization and proposed new method. Journal of Management, 25, 1–27.

Clark, L. A., & Watson, D. (1997). The Mini Mood and Anxiety SymptomQuestionnaire (Mini-MASQ). Unpublished manuscript, University of Iowa. Cleary, T. A. (1968). Test bias: Prediction of grades of Negro and White students in integrated colleges. Journal of Educational Measurement, 5, 115–124. Cobb, B., Lehmann, J., Newman-Gonchar, R., & Alwell, M. (2009). Self-determination for students with disabilities: A narrative metasynthesis. Career Development for Exceptional Individuals, 32, 108–114. Colarelli, S. M., Han, K., & Yang, C. (2010). Biased against whom? The problems of “group” definition and membership in test bias analyses. Industrial and Organizational Psychology: Perspectives on Science and Practice, 3, 228–231. Conger, R. D., Song, H., Stockdale, G. D., Ferrer, E., Widaman, K. F., & Cauce, A. M. (2012). Resilience and vulnerability of Mexican origin youth and their families: A test of a culturallyinformed model of family economic stress. In P. K. Kerig, M. S. Schulz, & S. T. Hauser (Eds.), Adolescence and beyond: Family processes and development (pp. 268–286). New York: Oxford University Press. Conger, R. D., Stockdale, G. D., Song, H., Robins, R. W., & Widaman, K. F. (in press). Predicting change in substance use and substance use cognitions of Mexican origin youth during the transition from childhood to early adolescence. In Y. F. Thomas, L. N. Price, & A. V. Lybrand (Eds.), Drug use trajectories among African American and Hispanic Youth. New York: Springer. Diaz, N., Horton, E. G., McIlveen, J., Weiner, M., & Nelson, J. (2009). Dysthymia among substance abusers: An exploratory study of individual and mental health factors. International Journal of Mental Health and Addiction, 7, 357–367. Eastwood, S. L., & Harrison, P. J. (2010). Markers of glutamate synaptic transmission and plasticity are increased in the anterior cingulate cortex in bipolar disorder. Biological Psychiatry, 67, 1010–1016. Ferrer, E., Balluerka, N., & Widaman, K. F. (2008). Factorial invariance and the specification of second-order latent growth models. Methodology, 4, 22–36. Folks, H. (1931). Committee C-1—Socially handicapped— Dependency and neglect. In White House Conference on Child Health and Protection, White House conference 1930: Addresses and abstracts of committee reports, White House conference on child health and protection called by President Hoover (pp. 319–340). New York: The Century Co. Grigorenko, E. L., Klin, A., & Volkmar, F. (2003). Annotation: Hyperlexia: disability or superability? Journal of Child Psychology and Psychiatry, 44, 1079–1091. Hancock, G. R., Kuo, W.-L., & Lawrence, F. R. (2001). An illustration of second-order latent growth models. Structural Equation Modeling, 8, 470–489. Henry, P. J. (2008). College sophomores in the laboratory redux: Influences of a narrow data base on social psychology’s view of the nature of prejudice. Psychological Inquiry, 19, 49–71. Hoffman, M. L. (1975). Altruistic behavior and the parent-child relationship. Journal of Personality and Social Psychology, 31, 937–943. Hoffman, M. L. (1979). Development of moral thought, feeling, and behavior. American Psychologist, 34, 958–966. Hoffman, M. L. (1994). Discipline and internalization. Developmental Psychology, 30, 26–8. Horn, J. L., McArdle, J. J., & Mason, R. (1983). When is invariance not invariant: A practical scientist’s look at the

ethereal concept of factorial invariance. Southern Psychologist, 1, 179–188. John, O. P., Donahue, E. M., & Kentle, R. L. (1991). The Big Five Inventory-Versions 4a and 54. Berkeley, CA: University of California, Berkeley, Institute of Personality and Social Research. Jöreskog, K. J. (1971). Simultaneous factor analysis in several populations. Psychometrika, 36, 409–426. Koch, R., & de la Cruz, F. (1999a). Historical aspects and overview of research on phenylketonuria. Mental Retardation and Developmental Disabilities Research Reviews, 5, 101–103. Koch, R., & de la Cruz, F. (Eds.). (1999b). Phenylketonuria. Mental Retardation and Developmental Disabilities Research Reviews, 5, 101–161. Koch, R., de la Cruz, F., & Azen, C. G. (Eds.). (2003). The Maternal Phenylketonuria Collaborative Study: New developments and the need for new strategies. Pediatrics, 112, 1513–1587. Koychev, I., El-Deredy, W., Haenschel, C., & Deakin, J. F. W. (2010). Visual information processing deficits as biomarkers of vulnerability to schizophrenia: An event-related potential study in schizotypy. Neuropsychologia, 48, 2205–2214. Kuo, C.-C., Maker, J., Su, F.-L., & Hu, C. (2010). Identifying young gifted children and cultivating problem solving abilities and multiple intelligences. Learning and Individual Differences, 20, 365–379. Little, T. D. (1997). Mean and covariance structures (MACS) analyses of cross-cultural data: Practical and theoretical issues. Multivariate Behavioral Research, 32, 53–76. Little, T. D., Bovaird, J. A., & Card, N. A. (Eds.) (2007). Modeling contextual effects in longitudinal studies. Mahwah, NJ: Erlbaum. Little, T. D., Card, N. A., Slegers, D. W., & Ledford, E. C. (2007). Representing contextual effects in multiple-group MACS models. In T. D. Little, J. A. Bovaird, & N. A. Card (Eds.), Modeling contextual effects in longitudinal studies (pp. 121–147). Mahwah, NJ: Erlbaum. Lubinski, D., & Benbow, C. P. (2006). Study of Mathematically Precocious Youth after 35 years: Uncovering antecedents for the development of math-science expertise. Perspectives on Psychological Science, 1, 316–345. Martz, E., & Daniel, S. (2010). Comparing disability prototypes in the United States and Kenya. Journal of Applied Rehabilitation Counseling, 41, 19–25. McArdle, J. J. (1988). Dynamic but structural equation modeling of repeated measures data. In J. R. Nesselroade & R. B. Cattell (Eds.), Handbook of multivariate experimental psychology (2nd ed., pp. 561–614). New York: Plenum. Meade, A. W., & Tonidandel, S. (2010). Not seeing clearly with Cleary: What test bias analyses do and do not tell us. Industrial and Organizational Psychology: Perspectives on Science and Practice, 3, 192–205. Meredith, W., & Horn, J. (2001). The role of factorial invariance in modeling growth and change. In L. M. Collins & A. G. Sayer (Eds.), New methods for the analysis of change (pp. 203– 240). Washington, DC: American Psychological Association. Meredith, W. M. (1993). Measurement invariance, factor analysis and factorial invariance. Psychometrika, 58, 525–543. Millsap, R. E., & Meredith, W. (2007). Factorial invariance: Historical perspectives and new problems. In R. Cudeck & R. C. MacCallum (Eds.), Factor analysis at 100: Historical developments and new directions (pp. 131–152). Mahwah, NJ: Erlbaum.

Nary, D. E., Froehlich-Grobe, K., & Aaronson, L. (2011). Recruitment issues in a randomized controlled exercise trial targeting wheelchair users. Contemporary Clinical Trials, 32, 188–195. Nesselroade, J. R. (1983). Temporal selection and factor invariance in the study of development and change. In P. B. Baltes & O. G. Brim, Jr. (Eds.), Life-span development and behavior (Vol. 5; pp. 59–87). New York: Academic Press. Obradoviæ, J., Long, J. D., Cutuli, J. J., Chan, C.-K., Hinz, E., Heistad, D., & Masten, A. S. (2009). Academic achievement of homeless and highly mobile children in an urban school district: Longitudinal evidence on risk, growth, and resilience. Development and Psychopathology, 21, 493–518. Olson, I. R., Berryhill, M. E., Drowos, D. B., Brown, L., & Chatterjee, A. (2010). A calendar savant with episodic memory impairments. Neurocase, 16, 208–218. Or, D., Cohen, A., & Tirosh, E. (2010). The reliability and validity of the Aggregate Neurobehaviroal Student Health and Educational Review Parent’s Questionnaire (ANSER-PQ). Journal of Child Neurology, 25, 157–164. Pennington, K., Beasley, C. L., Dicker, P., Fagan, A., English, J., Pariante, C. M., et al. (2008). Prominent synaptic and metabolic abnormalities revealed by proteomic analysis of the dorsolateral prefrontal cortex in schizophrenia and bipolar disorder. Molecular Psychiatry, 13, 1102–1117. Petrill, S. A. (2010). Editorial: Identifying the genetic and environmental influences on psychological outcomes. Journal of Child Psychology and Psychiatry, 51, 745–746. Polinsky, M. L., Hser, Y.-I., & Grella, C. E. (1998). Consideration of special populations in the drug treatment system of a large metropolitan area. Journal of Behavioral Health Services & Research, 25, 7–21. Reichenberg, A., Caspi, A., Harrington, H., Houts, R., Keefe, R. S. E., Murray, R. M., et al. (2010). Static and dynamic cognitive deficits in childhood schizophrenia: A 30-year study. The American Journal of Psychiatry, 167, 160–169. Reise, S. P., & Waller, N. G. (2003). How many IRT parameters does it take to model psychopathology items? Psychological Methods, 8, 164–184. Rensvold, R. B., & Cheung, G. W. (1998). Testing measurement model for factorial invariance: A systematic approach. Educational and Psychological Measurement, 58, 1017–1034. Sears, D. O. (1986). College sophomores in the laboratory: Influences of a narrow database on social psychology’s view of human nature. Journal of Personality and Social Psychology, 57, 515–530. Sears, D. O. (2008). College student-it is redux. Psychological Inquiry, 19, 72–77. Simunovic, F., Yi, M., Wang, Y., Macey, L., Brown, L. T., Krichevsky, A. M., et al. (2009). Gene expression profiling of substantia nigra dopamine neurons: Further insights into Parkinson’s disease pathology. Brain, 132, 1795–1809. Sussman, S. (2006). Prevention of adolescent alcohol problems in special populations. In M. Galanter (Ed.), Alcohol problems in adolescents and young adults: Epidemiology, neurobiology, prevention, and treatment (pp. 225–253). New York: Springer Science & Business Media. Uehling, H. F. (1952). Rorschach “shock” for two special populations. Journal of Consulting Psychology, 16, 224–225.

Waller, N. G., & Reise, S. P. (1989). Computerized adaptive personality assessment: An illustration with the Absorption scale. Journal of Personality and Social Psychology, 57, 1051–1058. Widaman, K. F. (1985). Hierarchically nested covariance structure models for multitrait-multimethod data. Applied Psychological Measurement, 9, 1–26. Widaman, K. F. (2007). Intrauterine environment affects infant and child intellectual outcomes: Environment as direct effect. In T. D. Little, J. A. Bovaird, & N. A. Card (Eds.), Modeling contextual effects in longitudinal studies (pp. 387–436). Mahwah, NJ: Erlbaum. Widaman, K. F. (2009). Phenylketonuria in children and mothers: Genes, environments, behavior. Current Directions in Psychological Science, 18, 48–52. Widaman, K. F., & Borthwick-Duffy, S. A. (April, 1990). Parental influences on the development of adaptive behaviors. Paper

presented at the Gatlinburg Conference on Research and Theory in Mental Retardation and Developmental Disabilities, Brainerd, MN. Widaman, K. F., Gibbs, K. W., & Geary, D. C. (1987). The structure of adaptive behavior: I. Replication across fourteen samples of nonprofoundly retarded persons. American Journal of Mental Deficiency, 91, 348–360. Widaman, K. F., & Reise, S. P. (1997). Exploring the measurement invariance of psychological instruments: Applications in the substance use domain. In K. J. Bryant, M. Windle, & S. G. West (Eds.), The science of prevention: Methodological advances from alcohol and substance abuse research (pp. 281–324). Washington, DC: American Psychological Association.


CHAPTER 5

Theory Construction, Model Building, and Model Selection

James Jaccard

Abstract
General issues involved in (1) building causal theories, (2) translating those theories into a set of mathematical representations, (3) choosing an analytic strategy to estimate parameters in the equations implied by the theory, and (4) choosing the “best” model from a set of competing models are discussed. Theory construction fundamentally relies on six relationship types, including direct effects, indirect effects, moderated effects, reciprocal effects, spurious effects, and unanalyzed relationships. Once specified, each type of effect can be represented mathematically, thereby translating a path diagram into a set of (linear) equations. Parameters in the equations are then estimated using either limited information estimation approaches or full information estimation approaches, taking into account measurement properties, population distributions, and matters of robustness. Choices between competing models are based on indices of relative fit with the data and relative fit with competing models, as well as more general theoretical criteria (e.g., parsimony, consistency with related theories).
Key Words: Theory; theory construction; modeling; parameter estimation

Few people dispute that theory is at the heart of the scientific enterprise. We use theories to explain phenomena and to help solve important applied problems. With theoretical propositions in hand, we design research to gain perspectives on the viability of the theory and imbue those propositions with a certain degree of confidence based on the results of research and the quality of the research design. Theory is fundamental to the social sciences. Advances in statistical analysis have been considerable during the past 30 years. Our statistical toolbox was at one time somewhat limited, and it was not uncommon for analysts to adopt practices that forced data to conform to the assumptions of our statistical tools so that we could apply inferential methods of analysis to those data. For example, although we knew that a construct like depression was non-normally distributed in the population and that our measures of depression and data reflected

this, we would transform data on depression so that it would approximate a normal distribution and be amenable to analysis using statistical methods that assumed normality. Such days, fortunately, are over. The statistical tools available now allow us to give precedence to theory and model testing without being slaves to many of the traditional assumptions made by methods of analysis. The present chapter discusses issues that analysts need to take into account as they move from theory to analysis. I focus first on the nature of theory in the social sciences, with an emphasis on describing theories that invoke the language of causality. Although there are many other approaches to theory and modeling, one cannot deny the prominence and pervasiveness of causal thinking in social science research. Hence, causal frameworks capture the bulk of my attention. Next, I discuss issues involved in moving from a well-specified causal

theory to a mathematical representation of that theory for purposes of statistical analysis. I then discuss a range of issues that must be considered as one moves from a set of equations representing a theory to formal data analysis, including measurement considerations, full versus limited information estimation strategies, distributional assumptions, and the possible presence of outliers. Finally, I consider issues in model selection, which refers to the process of choosing one model as being the “best” from a set of candidate models. In the present chapter, I use the terms theory and model interchangeably, although some scientists do not do so. As examples, various authorities contend that models are a special type of theory (e.g., Coombs, Dawes, & Tversky, 1970, p. 4; Kaplan, 1964, p. 263), models are portions of theories (Sheth, 1967, p. 444; Torgerson, 1958, p. 4), models are derived from theories (e.g., Pap, 1962, p. 355), models are simplified versions of theories (e.g., Carnap, 1971, p. 54), models represent correspondence between two or more theories (Brodbeck, 1968), or theories represent specific interpretations of (i.e., are derived from) models (e.g., Green & Tull, 1975, p. 42). Others consider the terms to be synonymous (cf, Dubin, 1976; Simon & Newell, 1956). Although there may indeed be meaningful distinctions between theories and models, it also is the case that models, like theories, involve variables and relationships between variables, usually invoking the concept of causality. Accordingly, I will use the terms theory and model interchangeably.

Specifying a Causal Theory
The Nature of Causality
Theories take many forms in the social sciences. One common form involves specifying presumed relationships between variables while invoking the concept of causality. The nature of causality has been debated extensively by philosophers of science (e.g., Bunge, 1961; Cartwright, 2007; Frank, 1961; Morgan & Winship, 2007; Pearl, 2000; Russell, 1931; Rubin, 1974, 1978; Shadish, Cook, & Campbell, 2002) and most agree that causality is an elusive concept that is fraught with ambiguities. It is beyond the scope of this chapter to delve into the underlying philosophical issues (see Jaccard & Jacoby, 2010). Rather, I emphasize here a “working model” of causality that is adopted by most social scientists. The concept of causality is usually thought of in terms of change—that is, X is a cause of Y if changes in X produce changes in Y (but see Sowa, 2000, and

Lewis, 2000, for alternative conceptualizations). Four properties of causality are typically emphasized. First, a cause always must precede an effect in time. Second, the time that it takes for a change in X to produce a change in Y can vary, ranging from almost instantaneous change to years, decades, centuries, or millennia. Third, the nature and/or strength of the effect of X on Y can vary depending on context. X may influence Y in one context but not another context. Finally, cause and effect must be in some form of spatial contact or must be connected by a chain of intermediate events. We return to these ideas in later sections of this chapter.

The Nature of Theories that Use Causality
The focus of most causal theories is on explaining why variation in one or more variables exists. Some people make a great deal of money and others are poor. Why? What can account for this variation? Some people are able to remember complex material easily whereas for other people, it is difficult to do so. Why? What explains this variability? We answer these questions by specifying the presumed causes of the variability, and then we seek to test our theoretical proposition(s). In any given causal theory, there are six fundamental types of relationships that can be referenced; these are illustrated in Figure 5.1. These six relationship types are the cornerstone of causal theories and define the universe of causal relationships that theorists draw upon. In Figure 5.1, a variable is indicated by a box, and a causal influence is represented by a straight arrow emanating from the cause and pointing to the effect. I discuss the bidirectional curved arrow in Figure 5.1 shortly. Referring to Figure 5.1, a direct causal relationship is one in which a given cause is assumed to have a direct causal impact on some outcome variable. For example, exposure to violence in the media is assumed to influence aggressive behavior. Or, the quality of the relationship between a mother and her adolescent child is assumed to influence whether the child uses drugs. By contrast, an indirect causal relationship is when a variable influences another variable indirectly through its impact on an intermediary variable (see Fig. 5.1). For example, failing to accomplish a goal may lead to frustration that, in turn, causes someone to aggress against another. In this case, the failure to obtain a goal is an indirect cause of aggression. It influences aggression through its impact on frustration. Frustration is formally called a mediating variable or,

Figure 5.1 Relationships in Causal Models (panels: Direct Causal Relationship; Indirect Causal Relationship; Moderated Causal Relationship; Spurious Relationship; Bi-directional Causal Relationship; Unanalyzed Relationship)

more informally, a mediator, because other variables “work through” it to influence the outcome. Indirect relationships are sometimes called mediated relationships. Whenever a theorist posits an indirect relationship, the issue of whether to specify partial or complete mediation presents itself. In partial mediation, the distal variable has a direct effect on the outcome variable over and above its effect on the mediator. In complete mediation, all of the impact of the distal variable on the outcome variable is accounted for by the mediator. With partial mediation, in addition to the two causal paths characterizing the indirect effect illustrated in Figure 5.1, one adds an additional causal path linking the distal variable (X) and the outcome variable (Y) directly. As an example, we might posit that the quality of the relationship between a mother and her adolescent child (X) impacts the child’s motivation to perform well in school (Z) and that one’s motivation to do well in school, in turn, impacts (negatively) the tendency for an adolescent to use drugs (Y). As we think about the matter of specifying partial versus complete mediation, we might decide that there are other mechanisms by which the quality of the relationship between parent and adolescent can impact drug use, such as by lessening the attention that adolescents

give to peers who use drugs. We therefore decide to posit partial mediation to reflect this fact and add a direct causal path from X to Y. If we are able to specify another mechanism by which X influences Y over and beyond Z, then why not just incorporate that additional mediator into the theory? Of course, we could very well do this, but then the issue becomes whether the two mediators in the theory, considered together, are complete or partial mediators of the causal effect of X on Y. This might lead us to speculate about yet a third mechanism, and once we have specified it, the issue of partial or complete mediation will present itself yet again. At some point, we must decide to close out the system and just let a direct path between X and Y stand so as to reflect partial mediation without formally bringing additional mediators into the model. If pressed, we could articulate one, but we simply do not want to complicate the theory further. A spurious relationship is one in which two variables are related because they share a common cause but not because either causes the other (see Fig. 5.1). As an example, if we select a random sample of people in the United States and calculate the association between height and length of hair, then we would find a moderate relationship between the two variables: People with shorter hair grow

taller. Does this mean that a causal relationship exists between these variables—that is, that cutting one’s hair will make one grow taller? Of course not. The reason the variables are correlated is because they share a common cause: gender. Males tend to have shorter hair than females and men tend to grow taller than females. The common cause of gender produces a correlation between length of hair and height, but it is spurious. A moderated causal relationship, like spurious and indirect relationships, involves at least three variables (see Fig. 5.1). In this case, the causal relationship between two variables, X and Y, differs depending on the value of a third variable, Z. For example, it might be found that a given type of psychotherapy (X) is effective for reducing depression (Y) for females but not for males. In this case, the causal relationship between being exposed to psychotherapy and depression is moderated by gender. When gender has the value “female,” X impacts Y. However, when gender has the value “male,” X does not impact Y. Gender is called a moderator variable because the relationship between the presence or absence of psychotherapy (X) and depression (Y) changes as a function of (or is “moderated by”) the levels of gender. A bidirectional causal relationship exists when two variables are conceptualized as influencing each other (see Fig. 5.1). For example, in the area of reproductive health, a theorist might posit a bidirectional influence between a woman’s belief that the rhythm method is effective at preventing pregnancy (X) and her attitude toward the rhythm method (Y). A woman may have a positive attitude toward the rhythm method because she believes it is effective. Simultaneously, she may believe it is effective, in part, because she has a positive attitude toward it, via a mechanism that involves rationalization of behavior. Technically, there can never be simultaneous reciprocal causation because there always must be a time interval, no matter how infinitesimally small, between the cause and the effect that follows from that cause. If we observed the causal dynamics within the appropriate time frames, then the true dynamic underlying a reciprocal causal relationship would appear as follows: Xt1→ Yt2→ Xt3→ Yt4, where Xt1 is variable X at time 1, Yt2 is variable Y at time 2, Xt3 is variable X at time 3, and Yt4 is variable Y at time 4. As an example, suppose we conduct a cross-sectional study and at a given point

in time we measure adolescent drug use and grade point averages in school. It is likely that the measured drug use reflects the influence of prior poor performance in school because adolescents who do poorly in school might turn to drugs as a coping mechanism or as a way of spending free time that normally would have been directed to school work. Similarly, the measured school performance likely reflects the effects of any prior drug use, which can cause students to lose interest in school and to not concentrate on tests and studying. It would be wrong to assume there is unidirectional causality from one construct to the other in this study. More realistically, the two measures reflect a more fine-grained process that has played itself out—that is, poor performance in school at time t influenced drug use at time t + 1, which in turn influenced school performance at time t + 2, which in turn influenced drug use at time t + 3, and so on. It is only when we are unable to capture the more fine-grained time intervals and we must instead work with coarser time intervals that the dynamic of the reciprocal causal relationship as illustrated in Figure 5.1 applies. By working with coarser time units, the more fine-grained temporal causal dynamics are assumed to have already played themselves out (often referred to as the equilibrium assumption). In this sense, there exists reciprocal causality per Figure 5.1. The final type of relationship that can occur in a causal model is an unanalyzed relationship. In Figure 5.1, the two variables for this type of relationship are connected by a double-headed curved arrow. This arrow signifies that the two variables are possibly correlated but that the theorist is not going to specify why they are correlated. The correlation may be spurious or it may result from a causal connection of some kind. The theorist wants to recognize the possible correlation between the variables, but trying to explain it is beyond the scope of the theoretical effort. The relationship will remain unanalyzed. Most causal models have more than one of these six types of relationships in them. An example of a multivariate causal model appears in Figure 5.2, which was based on an endeavor that developed an intervention to reduce unprotected sexual intercourse and thereby the spread of sexually transmitted diseases (STDs). The intervention is represented as a two-level, qualitative variable in which individuals are randomly assigned to either an intervention group or a control group. The intervention is designed to influence (1) the belief that having unprotected sex increases the risk of contracting an STD (see path a in Fig. 5.2) and (2) the belief

Figure 5.2 A Multivariate Causal Model (Intervention versus Control; Belief Unprotected Sex Leads to STDs; Belief STDs Are Serious; Engages in Unprotected Sex; Has an STD; direct paths a through e)

that contracting an STD is bad for one’s health (see path b in Fig. 5.2). These two beliefs, in turn, are assumed to influence the tendency for a person to engage in unprotected sex (paths c and d). The tendency to engage in unprotected sex, in turn, is thought to impact the extent to which the person contracts STDs (path e). Paths a through e each represent direct causal relationships. There also is a spurious relationship in this model, as seen by the fact that the two beliefs share a common cause, the intervention (paths a and b). Because of this common cause, we expect the two beliefs to be correlated to some extent, although there is no presumed causal connection between them. There also are several indirect causal relationships in the model. For example, the intervention indirectly affects the occurrence of STDs through the two belief mediators and, in turn, their influence on the tendency to engage in unprotected sex. In causal theories, distinctions are made between exogenous and endogenous variables. Any variable that has a straight arrow going to it in a path diagram is called an endogenous variable. Endogenous variables, essentially, are outcome variables—that is, they are presumed to be influenced by another variable in the theoretical system. Variables that do not have a straight arrow going to them are called exogenous variables. They do not have presumed causes that are elaborated upon in the theory. In sum, the fundamental orientation to constructing a causal theory is to explain variation in one or more outcomes. This is accomplished with reference to six types of relationships: direct effects, indirect effects, reciprocal effects, moderated effects, spurious effects, and unanalyzed relationships. In the theory construction process, it is not uncommon for the theorist to first identify the outcome variable he or she wishes to explain and then to specify a few direct causes of that variable. One or more of

the direct causes can then be turned into an indirect effect by elaborating the mechanisms by which the cause produces the effect. The “mechanisms” are, essentially, mediators. The theory might be further elaborated by specifying the boundary conditions of effects, thereby introducing moderated relationships into the framework (e.g., the effect holds when condition A is operative but not when conditions B is operative; or the effect holds for one type of individual but not another type of individual). For a discussion of the many heuristics scientists use to identify mechanisms and boundary conditions when developing causal theories see Jaccard and Jacoby (2010). Theories can be represented in the form of path diagrams using the graphical schemes illustrated in Figures 5.1 and 5.2. Such representations are possible no matter what research design is used to test a theory—that is, we can represent the underlying theory with path diagrams for experiments just as readily as for purely observational designs. Figure 5.2 is an example of a causal theory that incorporates an experimental design.

Causal Theories with Explicit Temporal Dynamics An important structure of many (but not all) causal theories is a focus on longitudinal dynamics. Causal theories that focus on longitudinal dynamics contain one or more of the six types of relationships described above, but there also is an explicit temporal dynamic that is of theoretical interest. A nomenclature has emerged around such theories, which I briefly describe here. One popular type of causal model that includes longitudinal dynamics is called a panel model in which multiple variables are modeled at multiple points in times, also called waves. Figure 5.3

Figure 5.3 A Panel Model (alcohol use and delinquency measured at times t through t + 3; paths a and b are autoregressive effects, path c is a contemporaneous effect, and path d is a lagged effect)

presents an example of a two-variable, four-wave panel model. This model examines the relationship between adolescent alcohol use and delinquency across the 4 years of high school. Heavy alcohol use at time t is thought to have a direct effect on alcohol use at time t + 1, and the same type of dynamic is thought to hold for delinquency. A path that specifies a variable at time t + 1 as being a function of that same variable at time t is said to reflect an autoregressive effect (see paths a and b in Fig. 5.3). In the model in Figure 5.3, the autoregressive effects are first-order effects, because the variable at a given time, t, is assumed to be a function of that same variable at the time just prior to it. A second-order autoregressive effect is one where the variable at time t is impacted (also) by the same variable at time t −2. A third-order autoregressive effect implies a direct effect between a variable at time t and that same variable at time t − 3. And so on. The theory in Figure 5.3 also specifies that drinking alcohol at time t predisposes one to delinquency at that same time period under the rationale that people often commit acts of delinquency when they are drunk. When a causal relationship exists between two variables at the same period of time, it is said to represent a contemporaneous causal effect (see path c in Fig. 5.3). Finally, when a variable at time t has an effect on a different variable at time t + 1, it is referred to as a lagged effect (see path d in Fig. 5.3). In this case, alcohol use at time t is thought to have a delayed effect on delinquency at time t + 1 independent of the other indirect causal chains that link alcohol use at time t to delinquency at time t + 1. In sum, it is common in longitudinal models to theorize about autoregressive effects, contemporaneous effects, and lagged effects. These effects are common features of panel models (Collins, 2006; Finkel, 2008). A second type of theory that formally incorporates temporal dynamics is theory based on growth processes. This approach views variation in outcomes across time as arising from an unobserved

“growth” process that causes changing values of an outcome over time. These models are typically associated with latent growth curve models in the social science literature or, more simply, latent curve models (LCMs). The classic form of these models is presented in Figure 5.4, as applied to the alcohol-delinquency example considered earlier. Observed variables or measures are represented by boxes, and latent (unobserved) variables are represented by circles. This model parameterizes a latent “growth” or “maturation” process for alcohol use as students progress through high school (represented by the variable called “Alcohol Slope” in Fig. 5.4) as well as a latent growth or maturation process for delinquency as students progress through high school (see the variable “Delinquency Slope” in Fig. 5.4). The “growth process” for alcohol use is assumed to impact the “growth process” for delinquency across the four time periods. One often will encounter longitudinal causal theories expressed in this form instead of in the more traditional panel model form (Collins, 2006). Of course, it is possible that both types of dynamics in the two types of models operate. When the two types of processes are integrated into a single model, we obtain what is called an autoregressive latent trajectory model (Bollen & Curran, 2006). Figure 5.5 presents this model. In sum, when constructing theories that incorporate longitudinal dynamics, one will explicitly take into account the possible causal dynamics described by panel models, by LCMs, or by autoregressive latent trajectory models.
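As a purely illustrative aside (the coefficients are invented for the example, not estimates from any study), the following sketch simulates data from the two-variable panel process in Figure 5.3, with autoregressive paths (a, b), a contemporaneous path (c), and a lagged path (d). Generating data this way can be a useful check on one's understanding of how these three kinds of effects differ.

```python
# Minimal sketch: simulating the panel-model dynamics of Figure 5.3.
import numpy as np

rng = np.random.default_rng(1)
n, waves = 500, 4
a, b, c, d = 0.6, 0.5, 0.3, 0.2  # hypothetical path coefficients

alcohol = np.zeros((n, waves))
delinquency = np.zeros((n, waves))
alcohol[:, 0] = rng.normal(size=n)
delinquency[:, 0] = c * alcohol[:, 0] + rng.normal(size=n)

for t in range(1, waves):
    # Autoregressive effect (a) plus a disturbance for alcohol use.
    alcohol[:, t] = a * alcohol[:, t - 1] + rng.normal(size=n)
    # Autoregressive (b), contemporaneous (c), and lagged (d) effects for delinquency.
    delinquency[:, t] = (b * delinquency[:, t - 1] + c * alcohol[:, t]
                         + d * alcohol[:, t - 1] + rng.normal(size=n))

print(np.corrcoef(alcohol[:, waves - 1], delinquency[:, waves - 1])[0, 1])
```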

Multilevel Causal Theories
Another common type of causal model that has received considerable attention in the social sciences is one that incorporates multiple levels of analysis, or what is called a multilevel model. These models deal with scenarios where there is nesting—for example, where individuals are nested within different, higher

Figure 5.4 A Latent Curve Model (latent Baseline Alcohol and Alcohol Slope factors underlying alcohol use at times t through t + 3, and latent Baseline Delinquency and Delinquency Slope factors underlying delinquency at times t through t + 3)

Figure 5.5 An Autoregressive Latent Trajectory Model (the autoregressive, contemporaneous, and lagged paths of Figure 5.3 combined with the latent baseline and slope factors of Figure 5.4)

level organizational units. For example, students are nested within schools and both characteristics of the students and characteristics of the schools can influence an outcome variable, such as performance by students on a standardized math test. Employees are nested within organizations, and both characteristics of the employees and characteristics of the organization can influence employee behavior. Patients are nested within hospitals and both characteristics of the patients and characteristics of the hospitals can influence patient recovery.
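For readers who want to see what such an analysis looks like in practice, here is a minimal sketch of the students-within-schools example fit as a random-intercept multilevel model with one student-level and one school-level predictor, using statsmodels. The data are simulated and all variable names and effect sizes are made up; this is only one common estimation route, not the approach the chapter goes on to develop.

```python
# Minimal sketch: a two-level (students nested within schools) random-intercept model.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n_schools, n_per_school = 30, 40
school = np.repeat(np.arange(n_schools), n_per_school)

quality = rng.normal(size=n_schools)              # school-level predictor
ses = rng.normal(size=n_schools * n_per_school)   # student-level predictor
school_effect = 2.0 * quality + rng.normal(scale=0.5, size=n_schools)
math = 50 + 3.0 * ses + school_effect[school] + rng.normal(scale=5.0, size=len(ses))

df = pd.DataFrame({"math": math, "ses": ses,
                   "school": school, "quality": quality[school]})
result = smf.mixedlm("math ~ ses + quality", df, groups=df["school"]).fit()
print(result.summary())
```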

Summary of Causal Model Forms
In sum, causal theories seek to explain variation in one or more outcome variables by identifying causes of those variables. Social scientists posit theories or models to represent the presumed causal relationships among variables, and these theories typically contain one or more of the six fundamental relationship types, namely, direct effects, indirect effects, spurious effects, moderated effects, reciprocal causality, and unanalyzed relationships. The theories can focus on a single time period or explicitly deal with temporal dynamics. If they deal with temporal dynamics, then this usually takes the form of a panel model, an LCM, or an autoregressive latent trajectory model. Theorizing also can occur at a single level of analysis or at multiple levels of analysis in which lower order units are nested within higher order units, with characteristics of the units at both levels influencing outcomes.

Theories and Disturbance Terms
There is a more subtle facet of theory construction beyond those elucidated thus far, and it concerns the concept of disturbance terms. Consider the simple theory in Figure 5.6a. This theory has two direct causes: variables X and Z are assumed to influence variable Y. A fourth "variable" is represented in the system by a circle. This "variable" reflects all unspecified variables that influence Y other than X and Z. It formally recognizes that the theory is incomplete and that we have not specified every cause of the outcome variable. This "variable" is called a disturbance term, and it represents the totality of all unspecified causal effects on the endogenous variable with which it is associated. The presence of a disturbance term explicitly recognizes that not all causal influences on a variable have been specified in the model. Traditionally, each endogenous variable in a theory has a disturbance term associated with it.

[Figure 5.6 Theories with Disturbance Terms. (a) Theory with Disturbance Term. (b) Smoking and Drug Example with Uncorrelated Disturbance Terms. (c) Smoking and Drug Example with Correlated Disturbance Terms]

Consider another example in Figure 5.6b. There are two endogenous variables, and they share a common cause. One of the endogenous variables is adolescent tobacco use, and the other is adolescent drug use. The common cause is gender. The theory posits that boys are more likely than girls to smoke cigarettes and that boys also are more likely than girls to use drugs. There is a disturbance term for each of the endogenous variables. These terms recognize that factors other than gender impact tobacco use and drug use. But there is a problem with this theory. According to the theory, the only reason that smoking cigarettes


and drug use in adolescence are correlated is that they share the common cause of gender. In reality, there are many other common causes of these two constructs. For example, social class impacts both tobacco use and drug use during adolescence, with more economically disadvantaged youth having an increased tendency to smoke cigarettes and to use drugs. Essentially, social class resides within the disturbance term for smoking cigarettes and it also resides within the disturbance term for drug use. If the same unspecified cause is in each disturbance term, then we would expect the two disturbance terms to be correlated. Figure 5.6c presents a more plausible theory that includes this correlation between disturbances. According to this theory, there are two reasons why adolescent cigarette smoking and adolescent drug use are correlated. One reason is that they share the common cause of gender. Another reason is that they share other common causes that are unspecified by the theory and that reside in both disturbance terms.

A well-developed theory provides explicit statements about which disturbance terms in the framework are correlated and which disturbance terms are not. The lazy way out for a theorist is to simply assume all disturbance terms are correlated. But this is not satisfactory, and it can create considerable difficulties for testing the theory empirically. A better approach is to carefully consider every pair of disturbance terms and to articulate a possible common cause that resides in each of the separate disturbance terms. If such a common cause can be articulated, then it makes sense to posit correlated disturbances. If one cannot articulate any such variable, or if its effects are thought to be trivial, then one does not posit correlated disturbances. For models with a longitudinal component, many theorists have a "knee-jerk" reaction that disturbances associated with the same variable at two different points in time must be correlated. Again, if one can articulate a compelling rationale for correlated disturbances, then by all means, correlated disturbances should be incorporated into the theory. Otherwise, correlated disturbances should be viewed with theoretical skepticism.

If a theorist is able to articulate a variable that resides in two disturbance terms to create correlated disturbances, then why not explicitly incorporate the variable into the theoretical system? For example, for the cigarette smoking and drug use example in Figure 5.6, why not explicitly bring social class into the theoretical system? This, of course, is desirable. But at some point, we want to close out

the theoretical system and work just with the variables we have specified. By including disturbance terms and correlated disturbances, we are explicitly recognizing the operation of other variables, but we choose not to give them central focus in our theory.
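The logic of correlated disturbances can be illustrated with a small simulation. In the sketch below, a hypothetical common cause (labeled ses to stand in for social class) influences both outcomes but is omitted from the analyst's model, which includes only gender; the residuals from the two equations then correlate. All numeric values are assumptions chosen only for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5000                                  # hypothetical sample size

gender = rng.integers(0, 2, size=n)       # modeled common cause
ses = rng.normal(size=n)                  # omitted common cause (stand-in for social class)

# Hypothetical data-generating model: the omitted cause affects both outcomes
smoking = 0.4 * gender - 0.5 * ses + rng.normal(size=n)
drug_use = 0.3 * gender - 0.5 * ses + rng.normal(size=n)

def residuals(y, x):
    """Residuals from a regression of y on x (the analyst's incomplete model)."""
    X = np.column_stack([np.ones_like(x, dtype=float), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return y - X @ beta

r_smoke = residuals(smoking, gender.astype(float))
r_drug = residuals(drug_use, gender.astype(float))

# The shared omitted cause shows up as a correlation between the estimated disturbances
print(np.corrcoef(r_smoke, r_drug)[0, 1])   # noticeably positive (about .20 under these values)
```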

Latent Variables, Structural Models, and Measurement Models
Some researchers take matters a step further and also incorporate a measurement theory into their conceptual frameworks when they are performing an empirical test of the theory. This goes beyond the typical province of theory construction per se, but I mention the ideas here as they ultimately impact data analysis and the choice of statistical methods. The integration of conceptual and measurement theories is something that should be standard practice for social science research. An empirical test of a theory necessarily requires developing and using measures of the theoretical constructs in the theory. Just as one can build a theory linking one concept to another concept, so too can one build a theory linking a construct to a measure of that construct. Some theorists combine both types of theories into a single overarching framework.

Traditional measurement models make a distinction between a latent variable and an observed measure of that variable. The latent variable is the true construct about which one is interested in making statements, such as depression. Although we can directly observe many of the symptoms of depression, we can't directly observe the "seat" of depression in a person's mind. Rather, we rely on some observable response(s) to assess the latent variable, such as a multi-item inventory of depression that a person completes. Figure 5.7a presents one representation of a measurement model. The latent variable of depression is contained in a circle, and the observed measure thought to reflect depression is contained in a square (the label "AR" stands for adolescent report of depression). A causal path is drawn from the latent variable to the observed measure, under the assumption that how depressed a person is influences how he or she responds to the questions on the inventory. There also is an error term that reflects measurement error—that is, there are factors other than depression that influence a person's responses on the inventory. Ideally, measurement error is minimal, but it is a fact of life for many research endeavors. The relationship between the latent construct and the observed indicator is


[Figure 5.7 Measurement Models. (a) Single Indicator. (b) Multiple Indicators]

usually assumed to be linear, but it could also be nonlinear. Sometimes we obtain multiple indicators of a construct. For example, a researcher might obtain a self-report of depression from an adolescent as well as a report from the adolescent’s mother about how depressed the child is (MR). Figure 5.7b presents a measurement model for this scenario. The latent variable of depression is assumed to influence both of the observed measures, and each measure is assumed to have some measurement error as reflected by the presence of error terms. The errors are assumed to be uncorrelated because we cannot articulate any viable reason why we would expect them to be correlated. However, one can introduce correlated measurement error, if appropriate. Figure 5.8 presents an example of a more elaborate theoretical framework that incorporates a theory about the relationship between constructs as well as a measurement theory. Although it appears somewhat intimidating, it is a straightforward model. There are five latent constructs, and the main substantive theory is focused on them. The portion of the diagram focused on the causal relations among the latent variables is called the structural model. The primary outcome variable in this model is the birth weight of a newborn. Birth weight is thought to be influenced by two factors: how much alcohol the mother consumes during her pregnancy and how much she smokes during her pregnancy. Both of these variables are thought to be influenced by two other variables. The first determinant is the extent of support the mother has from friends and relatives who can help her quit smoking and drinking. The second is the mother’s locus of control. Locus of control refers to the extent to which the mother believes that what

happens to her is beyond her control. The theory is that the more a mother thinks that what happens is not under her control, the more likely she will be to keep smoking and drinking during pregnancy. These two latent exogenous variables are assumed to be correlated. The three latent endogenous variables each have a disturbance term, indicated by a circle with a δ inside of it. The disturbances are assumed to be uncorrelated. The portion of the diagram with arrows from the latent constructs to the observed measures constitutes the measurement model. Each of the latent variables has multiple indicators. In other words, the researcher obtains three measures of each construct, with the exception of birth weight, which is measured using two different indicators. In the interest of space, we do not describe these measures, but note that each is assumed to be fallible (i.e., subject to some measurement error; see the circles ranging from ε1 to ε14). The measurement errors are assumed to be uncorrelated. Figure 5.8 provides an explicit roadmap for a researcher to test the combined structural theory and measurement theory.

In experiments that involve a formal manipulation, the manipulation typically is considered to be an observed variable in its own right, with no latent construct underlying it (see Fig. 5.2). In some cases, the manipulation is designed to reflect or produce differences in an underlying psychological state (e.g., one's mood when studying the effects of mood on memory). In this case, a measurement model may be introduced into the causal system, treating the manipulation as a formative rather than a reflective indicator of the construct (see Schumacker & Lomax, 2004, for elaboration).

Many social scientists view measurement models as interesting causal theories in their own right rather than just a methodological feature of theory testing. The most common case is when the conceptual focus is on specifying the facets or dimensions of a construct vis-à-vis factor analysis. The causal theory underlying factor analysis is that one or more (possibly correlated) unmeasured latent variables are each a common cause of a set of observed measures of constructs that are of interest in their own right. For example, theorists have suggested there are four facets of social support: informational support, emotional support, tangible support, and companionship support. Each of these facets is conceptualized as a latent variable that impacts distinct manifestations of social support. For elaboration of theory-based expressions of measurement models, see Brown (2006).
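A minimal sketch of the two-indicator case in Figure 5.7b is given below, assuming hypothetical loadings, intercepts, and error variances. It simulates a latent depression score and two fallible reports of it, illustrating how measurement error attenuates the observed correlations relative to the latent variable itself.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 2000                                   # hypothetical sample size

depression = rng.normal(size=n)            # latent variable (unobserved in practice)

# Hypothetical loadings and error standard deviations; errors are uncorrelated by construction
AR = 0.5 + 0.8 * depression + rng.normal(scale=0.6, size=n)  # adolescent report
MR = 0.3 + 0.6 * depression + rng.normal(scale=0.7, size=n)  # mother report

# The correlation between the two fallible indicators is weaker than the correlation
# either indicator has with the (normally unobservable) latent variable.
print(np.corrcoef(AR, MR)[0, 1])
print(np.corrcoef(AR, depression)[0, 1])
```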

[Figure 5.8 Example of Integrated Structural and Measurement Model]

In sum, a well-developed theory will not only include causal relationships between constructs but also will include disturbance terms and a theory of the correlational structure of those disturbance terms. As one moves to an empirical test of a theory, one also specifies a measurement model that links theoretical constructs to measures obtained in the empirical test.

From Theories to Mathematical Representations
Specifying Core Model Equations
With a well-articulated theory in hand, the next step for choosing a form of statistical analysis is to translate the theory into a set of equations that then guide the statistical analysis. The basic strategy for doing so can be illustrated by making some simplifying assumptions, which I relax later. First, unless otherwise specified, I will assume that all variables in

the theoretical system are continuous. Second, I will assume that the measures of all variables have at least interval level properties. Finally, I will assume that all relationships between variables are linear. I adopt a strategy whereby the theory under consideration is expressed in the form of a set of linear equations. This does not mean, however, that researchers always parameterize data analysis in terms of the linear model. Many well-known methods of analysis, such as t tests, analysis of variance, and analysis of covariance, evolved outside of the context of the linear model. However, these methods can be re-expressed as linear equations, and in this sense, they are compatible with the current approach. Complex models of dichotomous, nominal, and count variables also can be approached from the perspective of linear equations using what is known as the generalized linear model (see Yuan & Schuster, Chapter 18, Volume 1). Finally, newer methods of analysis that focus on robust indices of central tendency


[Figure 5.9 Path Model for Defining Equations]

and variability can be viewed in the context of linear equations but where the parameterization shifts from means and variances to trimmed means, M estimators, and the like (see Erceg-Hurn, Wilcox, & Keselman, Chapter 19, Volume 1).

Consider the case where the initial theory is represented as a path or influence diagram, such as the theory in Figure 5.9. A path diagram can be viewed as a pictorial representation of a set of equations. There is a separate equation for each endogenous variable in the theory. More specifically, given the aforementioned assumptions, each endogenous variable can be expressed as being a linear function of all variables with arrows going directly to and explicitly touching the box representing the endogenous variable. For the model in Figure 5.9, there are four linear equations that are of primary theoretical interest because there are four endogenous variables. Using the above rule, the relevant equations are

Y1 = α1 + β1 Y2 + ε1,
Y2 = α2 + β2 Y3 + β3 Y4 + ε2,
Y3 = α3 + β4 X + ε3, and
Y4 = α4 + β5 X + ε4,

where the various α are intercepts, the various β are linear coefficients, and the various ε are disturbance terms. The primary interest of the researcher is in estimating and interpreting the parameters α1 through α4, β1 through β5, and the variances of ε1 through ε4 relative to the variances of Y1 through Y4. We select statistical methods of analysis that provide the best and most well-behaved estimates of these parameters.
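To make the correspondence between the path diagram and the core equations concrete, the sketch below simulates data consistent with the four equations for Figure 5.9 (using hypothetical parameter values) and then estimates each equation separately by ordinary least squares, a limited-information, equation-by-equation strategy. Because this system is recursive and the disturbances are generated independently, the simple approach recovers the parameters here; it is not offered as a general-purpose estimator for all of the models discussed in this chapter.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 5000                                          # hypothetical sample size

# Hypothetical parameter values for the four core equations
a1, a2, a3, a4 = 1.0, 0.5, -0.5, 2.0
b1, b2, b3, b4, b5 = 0.4, 0.3, 0.5, 0.6, 0.2

X = rng.normal(size=n)
Y3 = a3 + b4 * X + rng.normal(size=n)             # Y3 = α3 + β4 X + ε3
Y4 = a4 + b5 * X + rng.normal(size=n)             # Y4 = α4 + β5 X + ε4
Y2 = a2 + b2 * Y3 + b3 * Y4 + rng.normal(size=n)  # Y2 = α2 + β2 Y3 + β3 Y4 + ε2
Y1 = a1 + b1 * Y2 + rng.normal(size=n)            # Y1 = α1 + β1 Y2 + ε1

def ols(y, *predictors):
    """Ordinary least squares for one core equation (limited-information estimation)."""
    design = np.column_stack([np.ones(len(y)), *predictors])
    coefs, *_ = np.linalg.lstsq(design, y, rcond=None)
    return coefs                                   # intercept first, then slopes

print(ols(Y1, Y2))        # approximately (α1, β1)
print(ols(Y2, Y3, Y4))    # approximately (α2, β2, β3)
print(ols(Y3, X))         # approximately (α3, β4)
print(ols(Y4, X))         # approximately (α4, β5)
```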

The rule for expressing a path diagram in terms of a set of core equations also applies to models with latent and observed variables, such as the model in Figure 5.10. For this model, the structural model has the following core equations:

LY = α1 + β1 LM + δ1 and
LM = α2 + β2 LX + β3 LZ + δ2,

with each equation focusing on a latent endogenous variable as an outcome. We are interested in estimating the parameters in these equations, but the task is more challenging statistically because we do not have direct access to a person's scores on the latent variables. Nevertheless, statisticians have derived methods for obtaining such estimates (see McDonald, Chapter 7, Volume 1).

The measurement model for the theory in Figure 5.10 implies the following equations (again, using the rule of specifying an equation for each endogenous variable, in this case, the observed endogenous variables):

X1 = α3 + β4 LX + ε1,
X2 = α4 + β5 LX + ε2,
Z1 = α5 + β6 LZ + ε3,
Z2 = α6 + β7 LZ + ε4,
M1 = α7 + β8 LM + ε5,
M2 = α8 + β9 LM + ε6,
Y1 = α9 + β10 LY + ε7, and
Y2 = α10 + β11 LY + ε8.

[Figure 5.10 Path Model with Latent Variables to Define Equations]

It is not uncommon to express the set of core equations in a theory in more succinct form than the above using matrix algebra. Readers unfamiliar with matrix algebra can skip to the summary paragraph at the end of this section. As an example, the above structural model is expressed by making distinctions between the latent endogenous variables and the latent exogenous variables, defining a vector of latent endogenous variables (η) and a vector of latent exogenous variables (ξ) as

$$\eta = \begin{pmatrix} \eta_1 \\ \eta_2 \end{pmatrix}, \qquad \xi = \begin{pmatrix} \xi_1 \\ \xi_2 \end{pmatrix},$$

and a vector of intercepts (α) and a vector of disturbance terms (δ) for the latent η as

$$\alpha = \begin{pmatrix} \alpha_1 \\ \alpha_2 \end{pmatrix}, \qquad \delta = \begin{pmatrix} \delta_1 \\ \delta_2 \end{pmatrix}.$$

In the present example, there are m = 2 latent endogenous variables, q = 2 latent exogenous variables, r = 4 observed endogenous measures with respect to the two latent η, and p = 4 observed endogenous variables with respect to the two latent ξ. We further specify an m × m matrix (B) representing the linear (regression) coefficients regressing the η onto the η and an m × q matrix (Γ) representing the linear (regression) coefficients regressing the η onto the ξ:

$$B = \begin{pmatrix} \beta_{11} & \beta_{12} \\ \beta_{21} & \beta_{22} \end{pmatrix}, \qquad \Gamma = \begin{pmatrix} \gamma_{11} & \gamma_{12} \\ \gamma_{21} & \gamma_{22} \end{pmatrix}.$$

In the example in Figure 5.10, let LY = η1, LM = η2, LX = ξ1, and LZ = ξ2. Then

$$B = \begin{pmatrix} 0 & \beta_{12} \\ 0 & 0 \end{pmatrix}, \qquad \Gamma = \begin{pmatrix} 0 & 0 \\ \gamma_{21} & \gamma_{22} \end{pmatrix},$$

and the structural model is defined as

$$\eta = \alpha + B\eta + \Gamma\xi + \delta.$$

The measurement model for the latent endogenous and exogenous variables defines separate matrices of factor loadings for the linear (regression) coefficients from the latent endogenous variables to the observed endogenous variables/indicators with respect to them (λY, which is an r × m matrix) and for the linear (regression) coefficients from the latent exogenous variables to the observed endogenous variables/indicators of them (λX, which is a p × q matrix):

$$\lambda_Y = \begin{pmatrix} \lambda_{Y11} & \lambda_{Y12} \\ \lambda_{Y21} & \lambda_{Y22} \\ \lambda_{Y31} & \lambda_{Y32} \\ \lambda_{Y41} & \lambda_{Y42} \end{pmatrix}, \qquad \lambda_X = \begin{pmatrix} \lambda_{X11} & \lambda_{X12} \\ \lambda_{X21} & \lambda_{X22} \\ \lambda_{X31} & \lambda_{X32} \\ \lambda_{X41} & \lambda_{X42} \end{pmatrix}.$$

For the present example, let the four observed endogenous variables with respect to the latent endogenous variables be Y1, Y2, M1, and M2, respectively, and the four observed endogenous variables with respect to the latent exogenous variables be X1, X2, Z1, and Z2, respectively. Then the above matrices are

$$\lambda_Y = \begin{pmatrix} \lambda_{Y11} & 0 \\ \lambda_{Y21} & 0 \\ 0 & \lambda_{Y32} \\ 0 & \lambda_{Y42} \end{pmatrix}, \qquad \lambda_X = \begin{pmatrix} \lambda_{X11} & 0 \\ \lambda_{X21} & 0 \\ 0 & \lambda_{X32} \\ 0 & \lambda_{X42} \end{pmatrix}.$$

A vector of intercepts (τY) and a vector of error terms (εY) are defined for the observed indicators of the latent endogenous variables:

$$\tau_Y = \begin{pmatrix} \tau_{Y1} \\ \tau_{Y2} \\ \tau_{Y3} \\ \tau_{Y4} \end{pmatrix}, \qquad \varepsilon_Y = \begin{pmatrix} \varepsilon_{Y1} \\ \varepsilon_{Y2} \\ \varepsilon_{Y3} \\ \varepsilon_{Y4} \end{pmatrix},$$

and similarly for the observed indicators of the latent exogenous variables:

$$\tau_X = \begin{pmatrix} \tau_{X1} \\ \tau_{X2} \\ \tau_{X3} \\ \tau_{X4} \end{pmatrix}, \qquad \varepsilon_X = \begin{pmatrix} \varepsilon_{X1} \\ \varepsilon_{X2} \\ \varepsilon_{X3} \\ \varepsilon_{X4} \end{pmatrix},$$

and the measurement models are defined as

$$Y = \tau_Y + \lambda_Y \eta + \varepsilon_Y, \qquad X = \tau_X + \lambda_X \xi + \varepsilon_X,$$

where Y is a column vector of the observed endogenous variables with respect to the latent endogenous

variables, and X is a column vector of the observed endogenous variables with respect to the latent exogenous variables. In addition to these matrices and vectors, one also typically specifies a matrix of covariances between the latent exogenous variables, a matrix of covariances for the disturbance terms, a matrix of covariances for the error terms, and a vector of means for the latent exogenous variables, although these are not typically part of the “core” equations in the model. Nevertheless, statisticians make use of these matrices in the analysis of data and model evaluation. In sum, given a well-developed causal theory that takes the form of a path or influence diagram, one can translate that diagram into a set of core equations that guide statistical analysis. This process is central to building correspondence between our theoretical models and our statistical models. The core equations can be represented in traditional algebraic terms or in matrix form, but either way, they guide the choice of the statistical model for purposes of data analysis.
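The matrix expressions above can also be written directly in code. The sketch below fills the pattern matrices with hypothetical numeric values, generates latent scores from the structural model by solving η = (I − B)⁻¹(α + Γξ + δ), and then generates observed indicators from the measurement models (with the τ intercepts set to zero for simplicity). It is a data-generating illustration of the notation, not an estimation routine.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 1000                                   # hypothetical sample size

# Hypothetical numeric values placed in the pattern matrices defined above
B = np.array([[0.0, 0.5],                  # η1 (LY) regressed on η2 (LM)
              [0.0, 0.0]])
Gamma = np.array([[0.0, 0.0],
                  [0.4, 0.3]])             # η2 (LM) regressed on ξ1 (LX) and ξ2 (LZ)
alpha = np.array([0.0, 0.0])

lam_Y = np.array([[1.0, 0.0],              # Y1, Y2 load on η1; M1, M2 load on η2
                  [0.8, 0.0],
                  [0.0, 1.0],
                  [0.0, 0.7]])
lam_X = np.array([[1.0, 0.0],              # X1, X2 load on ξ1; Z1, Z2 load on ξ2
                  [0.9, 0.0],
                  [0.0, 1.0],
                  [0.0, 0.6]])

xi = rng.multivariate_normal([0, 0], [[1.0, 0.3], [0.3, 1.0]], size=n)  # correlated ξ
delta = rng.normal(scale=0.5, size=(n, 2))

# Structural model: η = α + Bη + Γξ + δ  =>  η = (I − B)^(-1)(α + Γξ + δ)
inv = np.linalg.inv(np.eye(2) - B)
eta = (alpha + xi @ Gamma.T + delta) @ inv.T

# Measurement models (τ vectors set to zero here): Y = λ_Y η + ε_Y, X = λ_X ξ + ε_X
Y_obs = eta @ lam_Y.T + rng.normal(scale=0.4, size=(n, 4))
X_obs = xi @ lam_X.T + rng.normal(scale=0.4, size=(n, 4))

print(np.cov(np.column_stack([Y_obs, X_obs]), rowvar=False).round(2))
```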

Some Qualifications
For models with moderated causal effects, the rule for translating a path diagram into a set of core equations must be slightly modified. Moderated effects are traditionally (but not always) represented by including product terms in the equation. Consider a simple causal model in which X has a direct effect on Y and Z moderates that effect:

[Diagram: X → Y, with Z moderating the effect of X on Y]

The core equation in this case describes the outcome variable Y as a linear function of the two variables involved in the moderated relationship and the product of the two variables:

Y = α + β1 X + β2 Z + β3 XZ + δ.

The coefficient associated with the product term reflects the degree of moderation that is operating. Traditionally, the component parts of the product term are included in the core equation as separate predictors in addition to the product term because doing so protects against arbitrary scaling affecting conclusions. For elaboration, see Blanton and


Jaccard (2006) and Yuan and Schuster (Chapter 18, Volume 1). The above equation suggests that another way of diagramming a moderated effect is to include three separate direct effects, one from each of the component parts of the product and a third from the product term. Interaction analysis does not have to be pursued through product terms in a linear equation (Wilcox, 2005), but this is the most common approach. Another qualification to specifying core equations is whether the relationship between variables is nonlinear rather than linear. For example, the relationship between an outcome variable and a causal variable might be characterized by a logarithmic function, an exponential function, a power function, a polynomial function, or a trigonometric function, among others. In such cases, the form of the core equation differs from those specified earlier. Nonlinear modeling can be complex, and consideration of such models is beyond the scope of this chapter. Readers are referred to Jaccard and Jacoby (2010) and Yuan and Schuster (Chapter 18, Volume 1). Although there are other nuances that can further shape the nature of the equations one derives from a causal theory, the general point I stress here is that (1) it is possible to represent most causal models in the form of one or more equations and (2) translating a model into equation form is a fundamental step for choosing an appropriate analytic method for testing the viability of a causal model.
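A minimal illustration of a moderated core equation is sketched below: data are generated with a nonzero product-term coefficient (all values hypothetical), and the equation is then fit with X, Z, and XZ all included as predictors, as recommended above.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 3000                                       # hypothetical sample size

X = rng.normal(size=n)
Z = rng.normal(size=n)
# Hypothetical data-generating values: the effect of X on Y depends on Z
Y = 1.0 + 0.4 * X + 0.2 * Z + 0.5 * X * Z + rng.normal(size=n)

# Core equation with the product term and both component parts included
design = np.column_stack([np.ones(n), X, Z, X * Z])
coefs, *_ = np.linalg.lstsq(design, Y, rcond=None)
print(coefs)   # approximately (α, β1, β2, β3); β3 reflects the degree of moderation
```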

Additional Considerations for Choosing Analytic Strategy
With a carefully specified set of equations that are theoretically guided, the next task of the researcher is to choose a data analytic method to estimate the parameters in those equations. The present section considers the facets that must be taken into account when making such decisions, including measurement/metric considerations, the use of full versus limited information estimation, distributional assumptions of estimation strategies, and the anticipated presence of outliers. Of course, there are a host of more nuanced considerations that researchers must account for, and these are developed in the different chapters in this volume. My focus is on categories of more general considerations.

Measurement and Metric Considerations
Strategies for estimating parameters in core equations, or the act of defining the parameters themselves, are influenced by the psychometric

properties of the measures of the variables that comprise the theory. One important metric consideration is the level of measurement of the measures. The classic distinction between nominal, ordinal, interval, and ratio level measurement is particularly important, because different methods of analysis often are called for depending on the operative levels of measurement. Nominal measurement involves the assignment of numbers to levels of a variable that are categorical in nature and that have no inherent ordering on an underlying dimension. For the variable gender, a researcher might assign the values of 0 to females and 1 to males. The fact that one number is larger than the other has no substantive meaning. The numbers function much like non-numeric labels.

Ordinal, interval, and ratio level measures can be defined relative to the measurement equations between quantitative latent variables and observed measures described earlier. In describing the properties of these levels of measurement, I will assume no measurement error is operating, to simplify the presentation. Assuming no measurement error, an interval level measure is one that is a linear function of the underlying latent variable—that is, X = α + βLX, where X is the observed measure and LX is the latent variable. The intercept and the linear coefficient can take on any value. For ratio level measures, the underlying latent variable, LX, should have a meaningful conceptual zero-point (such as zero weight when measuring how heavy an object is) and the intercept in the measurement equation must equal zero (as would be the case for a weight measure in pounds or a measure in grams). For ordinal measures, the relationship between X and LX is nonlinear but monotonic. For example, X might be a power function of LX, in which case the measure has ordinal properties. As discussed in the other chapters in this volume, analytic strategies for parameter estimation vary as a function of these measurement levels. For example, when a measure of an exogenous variable in a causal model is nominal, it is common to represent that variable in a core equation using dummy variables (Chapter 18). If one wants to compare two groups on their average heights, then a measure of height that has ordinal properties is potentially problematic and not well-suited to comparing means on the underlying latent construct of height.

An important but underappreciated point when taking measurement levels into account is the fact


that the distinction between the different measurement levels is best viewed as approximate rather than all-or-none. Metric properties are not inherent in scales but, rather, are inherent in data and, hence, are influenced by all of the facets of data collection. The extent to which a measure has interval properties not only depends on the scale used to make observations but also possibly on the particular set of individuals on which the observations are made, the time at which the data are collected, the setting in which the data are collected, and so on. Consider the following simplistic yet pedagogically useful example. The height of five individuals is measured on two different metrics, inches and a rank order of height:

Individual    Height in Inches    Rank Order of Height
A             72"                 5
B             71"                 4
C             70"                 3
D             69"                 2
E             67"                 1

As is well known, the measures taken in inches have interval level properties. For example, a difference of 1 between any two scores corresponds to the same physical difference on the underlying dimension of height. The actual height difference between individuals A and B corresponds to the same true underlying height difference between individuals C and D, and the metric reflects this (i.e., 72−71 = 1 and 70 − 69 = 1). Similarly, the difference between D and E is 69 − 67 = 2, and the difference between A and C is 2. These differences also reflect the same amount on the underlying dimension of height. Note, however, that these properties do not hold for the rank order measure. The difference in scores between individuals A and B is 1 (i.e., 5 − 4), and the difference in scores for individuals D and E is also 1 (i.e., 2 − 1). These identical differences correspond to differing degrees of height disparities on the underlying dimension of height (i.e., the true difference between individuals D and E is larger than the true difference between individuals A and B, as is evident for the measure using inches). For these individuals, the rank order measures have ordinal properties but not interval properties. Now consider five different individuals with the following scores:

Individual    Height in Inches    Rank Order of Height
A             72"                 5
B             71"                 4
C             70"                 3
D             69"                 2
E             68"                 1

Note that for these five individuals, the rank order measure has interval level properties. The difference in scores between individuals A and B is 1, as is the difference between individuals D and E. These differences correspond to the exact same amount on the underlying physical dimension. In this case, what we think of as traditionally being an ordinal "scale" actually yields measures with interval level properties. Suppose that individual E was not 68" tall but instead was 67.9" tall. In this case, the rank order measure is not strictly interval. But it is close and probably can be treated as if it is interval level without adverse effects. This example illustrates that the crucial issue is not whether a set of measures is interval or ordinal. Rather, the critical issue is the extent to which a set of measures approximates interval level characteristics. If the approximation is close, then the data often can be effectively analyzed using statistical methods that assume interval level properties. If the approximation is poor, an alternative analytic strategy is called for. In this sense, we often can apply statistical strategies that assume interval level measures to ordinal data without consequence as long as the ordinal data reasonably approximate interval level properties.

Some researchers confuse the concept of measurement level with measurement precision. Precision refers to the number of scale points of a measure, such as a 5-point scale, a 10-point scale, or a 21-point scale. Measures can have interval level properties, for example, but be imprecise, or they can have ordinal properties yet be relatively precise. Precision of a measure may shape the way that one chooses to model data to evaluate a theory, but to the extent it does, it is not because of levels of measurement of the measures. With coarse and imprecise measures of continuous constructs, analytic methods that assume high levels of precision can be problematic and alternative analytic strategies are required (see Yuan & Schuster, Chapter 18, Volume 1).

Another measurement matter that shapes the method of data analysis is whether the outcome measure is discrete and zero-bounded (such as measures of counts, like the number of times an adolescent has


smoked marijuana) and whether measures are censored. Censoring occurs when a value falls outside the range of a measuring instrument. For example, a bathroom scale might only measure up to 300 pounds. If a 350-pound individual is weighed using the scale, then we would only know that the individual's weight is at least 300 pounds. For details of how these metric qualities affect analytic strategy, see Long (1997) and Yuan and Schuster (Chapter 18, Volume 1).
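The bathroom-scale example can be mimicked in a few lines. In the sketch below, hypothetical weights are generated and then right-censored at the instrument's ceiling of 300 pounds; the censoring visibly biases the ordinary sample mean downward.

```python
import numpy as np

rng = np.random.default_rng(6)

# Hypothetical true weights; the scale cannot register values above 300 pounds
true_weight = rng.normal(loc=250, scale=60, size=10000)
observed = np.minimum(true_weight, 300)        # right-censoring at the instrument's ceiling

print(true_weight.mean().round(1))             # mean of the underlying variable
print(observed.mean().round(1))                # biased downward by censoring
print((observed == 300).mean().round(2))       # proportion of censored cases
```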

Full Information versus Limited Information Estimation
Given a set of equations derived from a causal theory, one approach to estimating parameters in those equations is to use a full information estimation approach. In full information estimation, the coefficients in all equations in the model are estimated simultaneously. This can be contrasted with a limited information estimation approach, in which the coefficients are estimated one equation at a time or, alternatively, by focusing on only a portion of the larger model rather than the entire model. Full information estimation is common in structural equation modeling (SEM), but there also are limited information variants of SEM (see Bollen, 2001). Full information estimation has the advantage that parameter estimates typically (but not always) are more efficient (in a strict statistical sense of the term) than limited information estimators as long as the model being tested is correctly specified. Limited information estimation has the advantage that it often is less susceptible to adverse effects of specification error, because the effects of specification error are limited to the particular portion of the model where the error occurs. It also allows one to use the strongest methods of analysis available, depending on the properties of measures and variables in different portions of the model, rather than applying the same homogeneous algorithm throughout the model. A decision point for analysts when choosing a statistical method to estimate parameters defined by a theoretical model is whether to adopt a full information or limited information estimation strategy.

Statistical Assumptions
Another issue that analysts must consider when moving from equations to estimation is the distributional assumptions of the estimation method and whether they are appropriate for the task at

hand. Three assumptions are typical: (1) normality of scores in the population, (2) homogeneity of variances in the population, and (3) independence of replicates. The ways these assumptions are instantiated vary by analytic method, and other assumptions also can come into play (Maxwell & Delaney, 2004). I focus on the above assumptions primarily to make general points about approaches to distributional assumptions. I will make reference to the robustness of a test to violations of underlying assumptions. A statistical test is said to be robust to violations of assumptions if (1) the nominal Type I error rate (alpha level) set by the investigator a priori (usually 0.05) is maintained in the face of assumption violations and (2) the statistical power of the test is relatively unaffected by assumption violations. For a more technical and nuanced discussion of robustness, see Wilcox (2005).

A common perception of many researchers is that traditional F and t tests in analysis of variance (ANOVA) and regression are quite robust to violations of normality and homogeneity of variance assumptions. Because of this, these methods are often applied to data where the population assumptions are tenuous. Studies have shown that ANOVA and regression are not necessarily robust to assumption violations of normality and variance heterogeneity (see Keselman et al., 1998; Maxwell & Delaney, 2004; Wilcox, 2005).

One strategy for dealing with assumption violations is to perform a preliminary test of the viability of the assumption in question and, if the test suggests a problem, perform a metric transformation or use a robust analytic alternative. This two-step strategy is controversial for several reasons. First, many preliminary tests lack power without large sample sizes and yield nonsignificant results for testing an assumption violation, even when the violation is problematic (Wilcox, Charlin, & Thompson, 1986; Wilcox, 2003). Second, the crucial issue is not whether the null hypothesis of normality or variance homogeneity can be rejected but, rather, estimating the degree to which the assumption is violated and making a decision as to whether the degree of violation is consequential. This requires documenting the magnitude of the assumption violation in the sample data and then taking sampling error into account when making decisions. For example, we might find that a variance ratio comparing the variances of two groups is 4.0, with a margin of error of plus or minus 3.0. The margin of error suggests that the variance ratio could be as large as 7.0, which could be problematic. Unfortunately, it is rare for researchers to take


margins of error into account when evaluating preliminary tests. Third, many tests of non-normality are based on asymptotic theory and only perform adequately with large sample sizes (Shapiro & Wilk, 1965). However, with large N , such tests tend to detect minor departures from normality that may be of little consequence. In addition, normality tests can be differentially sensitive to different types of non-normality. Some tests are sensitive mostly to skew, whereas others are sensitive mostly to kurtosis. Fourth, the preliminary tests often make assumptions in their own right and may perform poorly when their assumptions are violated. For example, many tests of variance homogeneity assume the population data are normally distributed (Carroll & Schneider, 1985; Keyes & Levy, 1997; Parra-Frutos, 2009). If the population data are non-normal, then the preliminary test of variance homogeneity may be invalid. Fifth, using preliminary tests as a screen can change the sampling distribution of F tests and t tests in unpredictable ways. Although it seems reasonable, the strategy of conducting preliminary tests of model assumptions faces numerous difficulties. Transformation strategies for dealing with assumption violations have also been criticized. For example, Budescu and Appelbaum (1981) found that transformations to address variance heterogeneity can create more problems than they solve in inferential tests because they can adversely impact normality (see also Blaylock et al., 1980; Milligan, 1987; Doksum & Wong, 1983; Wilcox, 1996, 1998). Transformed variables often are difficult to interpret (e.g., the mean log of annual income is not easily interpreted). In models with multiple predictors, transformations of the dependent variable can create specification error that undermines covariate control because it alters the relationships between the outcome variable and all predictors in the equation. Years ago, before high-speed computers were widespread, analysts had little choice but to use transformations to make measures conform to the assumptions of the limited number of parametric statistical strategies available. Such practices are rarely needed today given the array of modern methods of analysis that are available. A growing number of statisticians recommend that analysts simply abandon the more traditional tests that make strong population assumptions unless they are confident in assumption viability based on theory or extant research. Rather, analysts should routinely use modern-day robust methods of analysis or, at the very least, routinely supplement traditional methods with modern robust methods (Keselman

et al., 2008; Wilcox, 2005). These scientists recognize that cases may occur where defaulting to robust analytic strategies will result in some loss of statistical power and inaccurate probability coverage of confidence intervals (CIs). However, the argument is that in the long run, the use of robust methods will result in better Type I error protection, increased power to detect effects, and CIs that more accurately reflect the desired probability coverage (Wilcox, 1998). Of course, it is always important to explore the shapes of distributions and dispersions of data. However, the recommendation is to view traditional tests of model assumptions and remedial strategies based on transformations with caution, deferring instead to the use of more modern robust analytic methods.

Earlier, I discussed causal theories that are multilevel in nature, such as theories of how characteristics of organizations as well as characteristics of individuals affect the behavior of individuals within an organization. Research that tests multilevel models often strategically samples organizations (called Level 2 units) and individuals nested within those organizations (called Level 1 units). In such cases, the statistical assumption of independent residuals/errors often is untenable because of the impact that individuals within an organization have on one another, either directly or indirectly. Specialized statistical methods must then be used to deal with the dependencies (see Yuan & Schuster, Chapter 18, Volume 1).

In sum, the choice of an analytic method to use with data is impacted by the equations used to represent a theory, the psychometric properties of the measures, whether one seeks full information estimation or limited information estimation, and the population distributional assumptions that the statistical tools available to the researcher make relative to the actual population distributions. A growing number of scientists suggest adopting robust methods in favor of traditional methods because of the complexities of two-step strategies that rely on preliminary tests of assumptions.
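The two-step workflow described above, and two of the simpler alternatives, can be illustrated with standard routines in scipy.stats. This is only meant to show what such checks look like in practice; the Welch test relaxes only the equal-variance assumption, and the fully robust procedures advocated by Wilcox and colleagues (e.g., tests built on trimmed means and winsorized variances) go beyond what is shown here. The group sizes and distributions are hypothetical.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
# Hypothetical two-group data with unequal variances and a skewed second group
g1 = rng.normal(loc=0.0, scale=1.0, size=40)
g2 = rng.lognormal(mean=0.0, sigma=0.8, size=40)

# The two-step strategy criticized above: preliminary tests of assumptions
print(stats.shapiro(g2))           # test of normality
print(stats.levene(g1, g2))        # test of variance homogeneity

# Alternatives that relax some of the classical assumptions
print(stats.ttest_ind(g1, g2, equal_var=False))             # Welch test (no equal-variance assumption)
print(stats.trim_mean(g1, 0.2), stats.trim_mean(g2, 0.2))   # 20% trimmed means as robust summaries
```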

Outliers
Outliers are unusually small or large scores that distort basic trends in the data. For example, for the scores 2, 3, 4, 5, 6, 7, 8, 9, 10, and 50, the last score is an outlier that distorts the mean and makes the use of the mean suspect as a way to characterize the central tendency of the data. Simple methods for outlier detection compare the results of an analysis when the case is included versus the


results of an analysis when the case is deleted. Such approaches, however, can be nondiagnostic when multiple outliers are present. For example, if there are two individuals in an analysis who distort a mean upward, deleting only one of them may not reveal an “outlier” effect as long as the second outlier is still in the data. Only when both outliers are removed is their distorting character revealed. Outlier identification is a complex enterprise, with some of the most sophisticated work being pursued in the literature on robust statistics (see Wilcox, 2003, 2005, for elaboration). Wilcox (1998, 2006) objects to applying traditional inferential methods to data that have eliminated outliers based on simple outlier detection methods. He argues that doing so invalidates the statistical theory on which the inferential tests are based because of dependencies that outlier elimination creates. Others recommend conducting analyses with and without outliers to determine if conclusions change. If conclusions do change, then one moves forward with any conclusions on a tentative basis. Probably the most effective strategy for dealing with outliers is to focus on parameter estimation methods that are resistant to outliers. For an introduction to these methods, see Wilcox (2005).
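Using the ten scores from the example above, the sketch below contrasts the ordinary mean with two outlier-resistant summaries of central tendency.

```python
import numpy as np
from scipy import stats

scores = np.array([2, 3, 4, 5, 6, 7, 8, 9, 10, 50])

print(scores.mean())                 # 10.4, pulled upward by the outlying score of 50
print(np.median(scores))             # 6.5
print(stats.trim_mean(scores, 0.2))  # 20% trimmed mean: 6.5
```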

Model Selection
Model selection refers to the process of choosing what one believes is the "best" model for describing a phenomenon from a set of candidate models. The set of plausible models might consist of many models or of just one or two. Criteria for model selection can be framed in general terms, as criteria for evaluating the overall quality of a theory; or they can be discussed in specific, quantitative terms when choosing between competing models within a given study. I consider both perspectives.

General Criteria for Evaluating Theories
Consensual validation is one basis by which theories are evaluated. This refers to the degree of consensus among the scientific community about the validity of the theory. If a theory enjoys widespread acceptance, then it is seen as being a "good" theory. Shaw and Costanzo (1982) have argued that three criteria are crucial for the acceptance of a theory. First, the theory must be logically consistent—that is, the theoretical statements within the conceptual system must not be contradictory. Second, the

theory must be in agreement with known data and facts. Third, the theory must be testable—that is, a theory must ultimately be subject to empirical evaluation. In addition to these criteria, Shaw and Costanzo (1982) have specified six additional features of a theory that are desirable but not necessarily critical. First, a theory should be stated in terms that can be understood and communicated to most other scientists. Second, the theory should be parsimonious in that it adequately explains a phenomenon but with a minimum of concepts and principles. All other things being equal, preference is given to theories that make fewer assumptions. Third, although we recognize that theories are occasionally so novel that they upset the theoretical apple cart, a theory should be consistent with other accepted theories that have achieved consensus among the scientific community—that is, it should be able to be integrated into existing bodies of theory. A fourth desideratum is scope. Other things being equal, the greater the range of the theory, the better it is thought to be. That said, there are times when narrow range theories tend to hold up better over time than broad range theories. Creativity or novelty is a fifth criterion sometimes suggested for evaluating a theory. A theory that explains the obvious is generally not as highly valued by the scientific community as one that provides a novel insight into an interesting phenomenon. Finally, many scientists suggest that a good theory is one that generates research activity.

Brinberg and McGrath (1985) have noted that the various theory desiderata sometimes conflict with each other. For example, parsimonious theories tend to be more limited in scope. As such, theorists often make trade-offs as they construct theories to maximize what is valued by the scientific community.

Choosing Between Models in a Given Study
As one pursues the theory construction process, one may develop competing theories that either make opposite predictions or that account for the same phenomena using different explanations and assumptions. Faced with such scenarios, we design research to help us choose between the competing theories. Many scientists consider research that chooses between two or more logical and plausible theories to be inherently more interesting than studies that yield results regarding a single theory (Platt, 1964). In some cases, competing models make qualitatively opposite predictions about how data should


pattern themselves in a given study. In these cases, deciding which theory better accounts for the data is reasonably straightforward. As an example, suppose we describe the personal qualities of a political candidate whom a person has not heard of by providing the person with three pieces of information about the candidate. Suppose that the three pieces of information all are quite positive (e.g., the candidate is said to be honest, smart, and empathic). For purposes of developing this example, we characterize how positive each piece of information is considered to be using a metric that ranges from 0 to 10, with higher numbers reflecting higher degrees of positivity. Suppose we want to predict how favorable a person will feel toward the candidate based on the three pieces of information. One plausible model states that the overall degree of favorability is a function of the sum of the positivity of each individual piece of information—that is, that people "tally up" the affective implications of each piece of information when forming their overall impression. Another model, by contrast, specifies a different function—namely, an averaging function. In this case, the overall feeling of favorability is thought to be a function of the average positivity of the information presented. What are the implications of specifying the function as being summative versus averaging in form? It turns out that they are considerable.

Let's explore the summation model first. Suppose the positivity values of the three pieces of information are 8, 8, and 8, respectively. The overall feeling of favorability toward the candidate will be a scaled function of 8 + 8 + 8 = 24. Now suppose we describe a second candidate to this person using the same three pieces of information but we add a fourth descriptor to them (cunning) that has a positivity value of 4. According to the summation model, the overall feeling of favorability toward this new candidate will be a scaled function of 8 + 8 + 8 + 4 = 28, and the person will prefer the second candidate to the first candidate. Psychologically, it is as if the second candidate has all the same qualities as the first candidate, and then "as a bonus," you get a fourth positive attribute as well. Hence, the person prefers the second candidate to the first candidate.

Now consider instead the averaging function. The overall feeling toward the first candidate is predicted to be (8 + 8 + 8)/3 = 8.0, and the overall feeling toward the second candidate is said to be (8 + 8 + 8 + 4)/4 = 7.0. In the averaging model, exactly the reverse prediction is made in terms of candidate preference—namely, the person now will prefer

Table 5.1. Correlations for Intelligence Example

      Y1      Y2      Y3      Y4      Y5
Y1    1.00    0.72    0.63    0.54    0.45
Y2    0.72    1.00    0.56    0.48    0.40
Y3    0.63    0.56    1.00    0.42    0.35
Y4    0.54    0.48    0.42    1.00    0.30
Y5    0.45    0.40    0.35    0.30    1.00

the first candidate to the second candidate. Psychologically, the first candidate has nothing but very positive qualities, whereas the second candidate has very positive qualities but also some qualities that are only somewhat positive. The person prefers the first candidate, who has nothing but very positive qualities, to the second candidate, who has very positive qualities but also moderately positive qualities. In the above example, the summation and averaging models make opposite predictions about candidate preference, and it is straightforward to choose between the theories based on an empirical study that asks people which of the two candidates they prefer. Of course, if the study found that neither candidate tended to be preferred, then this would question both models.

Although studies like the above are compelling, it is common to conduct studies where competing theories do not make qualitatively opposite predictions but, rather, make predictions about how data should pattern themselves that allow researchers to choose between them. As an example, in the area of intelligence testing, theorists agree that there are different kinds of intelligence and cognitive abilities, such as math skills, vocabulary breadth, spatial skills, motor skills, and memory. Studies suggest that measures of these constructs are moderately correlated with one another. Some theorists believe that the correlations among them result from the common influence of general intelligence, sometimes called g. We refer to this as a "single factor model" because it posits that the correlations among the measures can be accounted for by a single underlying factor. Other theorists believe that g does not exist and that the correlations among the different abilities are a result of a more complex constellation of determinants consisting of multiple factors. Suppose in a population of individuals the correlations among the five variables are as presented in Table 5.1. It can be shown using somewhat involved,


but tedious, algebra that the one-factor model predicts that the correlations should pattern themselves in a certain way—that is, they should follow certain mathematical regularities. For example, if we choose any two columns and ignore entries where a 1.00 occurs, then the ratios of the respective row entries for the two columns should be equal in value to the comparable ratio for any row in the matrix. For example, for columns 1 and 2, .63/.56 = .54/.48 = .45/.40 = 1.125, and for columns 2 and 5, .72/.45 = .56/.35 = .48/.30 = 1.600. By contrast, models that posit more complex patterns of underlying determinants (e.g., a two-factor model, a three-factor model) predict a different set of regularities, which we do not elaborate here in the interest of space (see McDonald, 1985). The model selection task is to choose which of the competing models (e.g., a one-factor model, a two-factor model, etc.) is most consistent with the data. Statisticians have developed quantitative indices of fit that represent the overall degree of correspondence between the pattern of data predicted by a model and the observed data pattern. When there are competing models, a fit index is derived for each model and then the fit indices are compared to identify the model that best accounts for the data. The technical details of such comparisons can be quite complex, and these are elaborated throughout the many chapters in this volume. The point I am emphasizing here is that even if two models do not make qualitatively opposite predictions, as was the case for the example on summation versus averaging, models often can be compared in terms of their relative consistency with the data, with one model ultimately being declared more consistent with the data than another model based on the comparison of quantitatively defined fit indices.

There is, however, an additional complication in model selection. In practice, it would be surprising if sample data yielded a set of correlations that perfectly followed the predicted regularities of, say, a one-factor model, even if the one-factor model was operative in the population. This is because sample correlations are subject to sampling error and will randomly deviate to a greater or lesser extent from the true population correlations. Even so, one expects that if the one-factor model

is true in the population, then the sample correlation matrix should at least reasonably approximate the regularities predicted by the one-factor model. A challenge for scientists when comparing models vis-à-vis indices of fit (or, for that matter, evaluating a single model using an index of fit) is to take into account such sampling error.

A final strategy that one encounters when scientists evaluate competing models is the case where scientists prefer the model that explains the most variation in an outcome variable. In these cases, one typically cannot differentiate models in terms of degree of fit to the data, as in the previous examples we considered. Nevertheless, one can differentiate between them in terms of whether one model accounts for significantly more variation in an outcome than the other model. For example, when explaining adolescent drug use, one might compare a model that assumes drug use is a simple additive function of gender and grade in school versus a model that assumes drug use is an additive function of gender, grade, and the interaction effect between gender and grade. If the models account for about the same amount of variation in drug use, then the first model will be preferred on grounds of parsimony and because taking the interaction effect into account does not seem to matter. By contrast, if the three-parameter model explains substantially more variation in drug use than the two-parameter model, then the three-parameter model is preferred.

As scientists consider the complex issues surrounding model selection, a paramount concern is not to mistakenly embrace a model that is misspecified. A misspecified model is a model that is wrong because it (1) left out an important variable from the model whose omission biases parameter estimates and leads to faulty conclusions (also called left-out-variable-error or LOVE problems); (2) assumed one type of function between variables (e.g., linear) when, in fact, a different type of function was operative (e.g., a threshold function), with such misspecification leading to faulty conclusions; and/or (3) incorrectly modeled the temporal dynamics within a model in such a way that parameter estimates and conclusions are non-trivially biased. Throughout this volume, chapters elaborate on issues of model misspecification and ways to gain perspectives on it. Cudeck (1989) has argued that the crucial issue is not whether a model is misspecified because misspecification is inevitable in so much of the research we conduct. Rather, the more central issue is whether the degree of misspecification is


sufficiently large that it leads to conclusions that are incorrect or misplaced.

In sum, model selection can involve two processes: (1) evaluating a model in and of itself to determine whether it meets criteria the scientific community judges to be important (such as being logical, testable, consistent with known facts/data, etc.), and (2) quantitatively comparing a model with other competing models within a given study to identify which model best accounts for the observed data or explains the most variation in an outcome. Three strategies are typically used to choose between models. First, one designs a study where the competing models make qualitatively opposite predictions. Second, one deduces from the model how data should pattern themselves and then derives a quantitative (fit) index representing the correspondence between the predicted patterns and the observed patterns. The model that "fits" the data best is the preferred model, everything else being equal. Third, one selects a model that can explain the most meaningful variation in a targeted outcome variable. There are many nuances surrounding the above strategies, and these are elaborated in other chapters in this volume.
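As a rough illustration of the second strategy, the column-ratio regularity shown earlier can be verified numerically. Suppose the loadings of the five measures on the single factor were .9, .8, .7, .6, and .5 (illustrative values, chosen here only because they reproduce the correlations quoted above); a few lines of Python then confirm that, once diagonal entries are ignored, the ratio of any two columns' entries is constant:

```python
import numpy as np

# Illustrative (assumed) factor loadings for five measures under a one-factor model
loadings = np.array([0.9, 0.8, 0.7, 0.6, 0.5])

# Model-implied correlation matrix: the product of loadings, with 1.00 on the diagonal
R = np.outer(loadings, loadings)
np.fill_diagonal(R, 1.0)

def column_ratios(R, j, k):
    """Ratios of row entries for columns j and k, skipping rows where a 1.00 (diagonal) occurs."""
    rows = [i for i in range(R.shape[0]) if i != j and i != k]
    return [R[i, j] / R[i, k] for i in rows]

print(column_ratios(R, 0, 1))  # columns 1 and 2: every ratio equals 1.125
print(column_ratios(R, 1, 4))  # columns 2 and 5: every ratio equals 1.600
```

With real sample data the ratios would only approximate these values because of sampling error, which is precisely what formal fit indices are designed to take into account.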

Concluding Comments
This chapter has described general issues involved in (1) building causal theories, (2) translating those theories into a set of mathematical representations, (3) choosing an analytic strategy to estimate parameters and sampling error vis-à-vis those equations, and (4) choosing the "best" model from a set of competing models. There is a vast array of technical issues to consider as one approaches these four tasks, and these technicalities are elaborated throughout the chapters in this volume. My purpose here was to provide a "big picture" view of the broader enterprise so that we can "see the forest through the trees" as we approach the task of building cumulative bodies of knowledge and solving social problems.

References Blanton, H. & Jaccard, J. (2006). Tests of multiplicative models in psychology: A case study using the unified theory of implicit attitudes, stereotypes, self esteem and self concept. Psychological Review, 122, 155–169. Blaylock, J., Salathe, L. & Green, R. (1980). A note on the BoxCox transformation under heteroskedasticity. Western Journal of Agricultural Economics, 45, 129–135. Bollen, K.A. (2001). Two-stage least squares and latent variable models: Simultaneous estimation and robustness to misspecifications. In Robert Cudeck, Stephen Du Toit, & Dag Sörbom (Eds.), Structural equation modeling: Present and

future, A festschrift in honor of Karl Jöreskog. (pp. 119–138). Lincoln, IL: Scientific Software. Bollen, K.A., & Curran, P.J. (2006). Latent curve models: A structural equation approach. Hoboken, NJ: Wiley. Brinberg, D. & McGrath, J. (1985). Validity and the research process. Newbury Park: Sage. Brodbeck, M. (1968). Readings in the philosophy of the social sciences. New York: Macmillan. Brown, T. (2006). Confirmatory factor analysis for applied research. New York: Guilford. Budescu, D. & Appelbaum, M. (1981). Variance stabilizing transformations and the power of the F test. Journal of Educational and Behavioral Statistics, 6, 55–74. Bunge, M. (1961). Causality, chance, and law. American Scientist, 69, 432–488. Carnap, R. (1971). A basic system of inductive logic, Part 1. In R. Carnap & R. C. Jeffrey (Eds.), Studies in inductive logic and probability. Vol. I, (pp. 18–39). Berkeley, CA: University of California Press. Carroll, RJ, & Schneider, H. (1985). A note on Levene’s test for equality of variances. Statistics & Probability Letters, 3, 191–194. Cartwright, N. (2007). Hunting causes and using them. New York: Cambridge University Press. Collins, L. (2006). Analysis of longitudinal data: The integration of theoretical model, temporal design, and statistical model. Annual Review of Psychology, 57, 505–528. Coombs, C., Dawes, R. & Tversky, A. (1970). Mathematical psychology: An elementary introduction. Englewood Cliffs, NJ: Prentice Hall. Cudeck, R. (1989). Analysis of correlation matrices using covariance structure models. Psychological Bulletin, 105, 317–327. Doksum, K.A., & Wong, C.-W. (1983). Statistical tests based on transformed data. Journal of the American Statistical Association, 78, 411–417. Dubin, R., (1976). Theory building in applied areas. In Marvin D. Dunnette (Ed.), Handbook of industrial and organizational psychology. (pp. 17–39). Chicago: Rand McNally. Erceg-Hurn, D. M., Wilcox, R. R., & Keselman, H. H. (2012). Robust statistical estimation. In T. D. Little (Ed.), The Oxford Handbook of Quantitative Methods (Vol. 1, pp. 388–406). New York: Oxford University Press. Finkel, S. (2008). Linear panel analysis. In S. Menard (Ed.), Handbook of longitudinal research (pp. 475–504). New York: Elsevier Press Frank, P. (1961). Modern science and its philosophy. New York: Collier Books. Green, P. & Tull, D. S. (1975). Research for marketing decisions. Englewood Cliffs, NJ: Prentice-Hall. Jaccard, J. & Jacoby, J. (2010). Theory construction and model building skills: A practical guide for social scientists. New York: Guilford. Kaplan, A. (1964). The conduct of inquiry. San Francisco, CA: Chandler. Keselman, H., Algina, J., Lix, L., Wilcox, R. & Deering, K. (2008). A generally robust approach for testing hypotheses and setting confidence intervals for effect sizes. Psychological Methods, 13, 110–129. Keselman, H. J., Huberty, C. J., Lix, L. M., Olejnik, S., Cribbie, R., Donahue, B., et al. (1998). Statistical practices of educational researchers: An analysis of their ANOVA, MANOVA, and ANCOVA analyses. Review of Educational Research, 68, 350–386.



Keyes, T. & Levy, M.S. (1997). Analysis of Levene’s test under design imbalance. Journal of Educational and Behavioral Statistics, 22, 227–236. Lewis, D. (2000) Causation as influence. Journal of Philosophy, 97, 182–197. Long, S. (1997).Regression models for categorical and limited dependent variables. Thousand Oaks, CA: Sage. Maxwell, S. E. & Delaney, H.D. (2004). Designing experiments and analyzing data: A model comparison perspective. Mahwah, NJ: Erlbaum. McDonald, R. (1985). Factor analysis and related methods. Hillsdale, NJ: Erlbaum. McDonald, R. P. (2012). Modern test theory. In T. D. Little (Ed.), The Oxford Handbook of Quantitative Methods (Vol. 1, pp. 118–143). New York: Oxford University Press. Milligan, G. (1987). The use of the arc-sine transformation in the analysis of variance. Educational and Psychological Measurement, 47, 563–573. Morgan, S. & Winship, C. (2007). Counterfactuals and causal inference. New York: Cambridge University Press. Pap, A. (1962). An introduction to the philosophy of science. Glencoe, IL: The Free Press of Glencoe (MacMillan). Parra-Frutos, I. (2009). The behaviour of the modified Levene’s test when data are not normally distributed. Journal Computational Statistics, 24, 671–693. Pearl, J. (1999). Causality: Models, reasoning, and inference. New York: Cambridge University Press. Platt, J. (1964). Strong inference. Science, 146, 347–353. Rubin, D. (1974). Estimating causal effects of treatments in randomized and nonrandomized studies. Journal of Educational Psychology, 66, 688–701. Rubin, D. (1978). Bayesian inference for causal effects: The role of randomization. Annals of Statistics, 6, 34–58. Russell, B. (1931). The scientific outlook. London: George Allen & Unwin. Schumacker, R. E. & Lomax, R. G. (2004). A beginner’s guide to structural equation modeling. Mahwah, NJ: Erlbaum.


Shadish, W. R., Cook, T. & Campbell, D. (2002). Experimental and quasi-experimental designs for generalized causal inference. Boston: Houghton Mifflin. Shapiro, S. S. & Wilk, M. B. (1965). An analysis of variance test for normality (complete samples). Biometrika, 52, 591–611. Shaw, M. & Costonzo, P. (1982). Theories of social psychology. New York: McGraw-Hill. Sheth, J. N. (1967). A review of buyer behavior. Management Science, 13, B718–B758. Simon, H. A. & Newell, A. (1956) Models: Their uses and limitations. In L.D. White (Ed.), The state of the social sciences. (pp. 61–83). Chicago, IL: University of Chicago Press. Sowa, J. (2000). Processes and causality. From www.jfsowa.com/ontology/causal.htm. Accessed August 15, 2011. Torgerson, W. S., (1958) Theory and methods of scaling. New York: Wiley. Wilcox, R. (2006). Graphical methods for assessing effect size: Some alternatives to Cohen’s d. Journal of Experimental Education, 74, 353–367. Wilcox, R.R. (1996). Statistics for the social sciences. New York: Academic Press. Wilcox, R. R. (1998). How many discoveries have been lost by ignoring modern statistical methods. American Psychologist, 53, 300–314. Wilcox, R.R. (2003). Applying contemporary statistical techniques. San Diego, CA: Academic Press. Wilcox, R.R. (2005). Introduction to robust estimation and hypothesis testing (2nd ed.). San Diego, CA: Academic Press. Wilcox, R. R., Charlin, V. L., & Thompson, K. (1986). New Monte Carlo results on the robustness of the ANOVA F, W, and F* statistics. Communications in Statistics. Simulation and Computation, 15, 933–944. Yuan, K-H., & Schuster, C. (2012). Overview of statistical estimation methods. In T. D. Little (Ed.), The Oxford Handbook of Quantitative Methods, (Vol. 1, pp. 360–386). New York: Oxford University Press.


CHAPTER 6

Teaching Quantitative Psychology

Lisa L. Harlow

Abstract
This chapter strives to enliven quantitative psychology teaching and encourage statistical literacy. Summaries of quantitative training, at the undergraduate and graduate levels, offer guidelines to improve instruction and call for greater emphasis on measurement, research, and quantitative methods. Strategies for effectively teaching quantitative psychology are suggested, including active, hands-on learning for engaging students, e-learning and Web-based instruction, mentoring and role models, and encouraging conceptual understanding. Quantitative students are encouraged to focus on the nature of research questions, similarities and differences in statistical methods, and interpreting findings with statistical tests, effect sizes, and confidence intervals. Future directions are offered regarding model building, quantitative learning beyond the classroom through workshops, membership in quantitative societies, and reading the quantitative literature. The reach of quantitative training should be widened to more readily include those from disadvantaged, early career, and under-represented groups to further strengthen the field and enlighten greater numbers about the wonders of quantitative psychology.

Key Words: quantitative training, statistical literacy, engaging students, strategies for teaching, active learning, mentoring, underlying concepts, widening quantitative reach

Introduction
Readers of this volume are part of a rare and unique subset of individuals who resonate with the term quantitative psychology. When the topic of psychology is discussed, whether by students, faculty, or the general public, the qualifier "quantitative" does not always enter the conversation. Students in psychology often delay taking required statistics and research courses, short-changing their ability to understand and develop scientific skills necessary to open up their career options and further the field of psychology (Rajecki, Appleby, Williams, Johnson, & Jeschke, 2005). In more than 25 years of teaching quantitative psychology, I have learned not to be too surprised by cringing and flinching, if only subtle, in students enrolled in my courses.

Experience and research make it all too apparent that students often approach quantitative courses with little interest or confidence, coupled with anxiety and misperceptions about learning statistical material (Ashcraft & Kirk, 2001; DeVaney, 2010; Harlow, Burkholder & Morrow, 2006; Onwuegbuzie, 2000; Onwuegbuzie & Wilson, 2003; Piotrowski, Bagui, & Hemasinha, 2002). What compounds the problem is that students with high anxiety tend to have poorer attitudes toward and lower performance in quantitative studies (Budé, Van De Wiel, Imbos, Candel, Broers, & Berger, 2007; Harlow, Burkholder & Morrow, 2002; Mills, 2004; Rodarte-Luna & Sherry, 2008; Tremblay, Gardner, & Heipel, 2000). Fortunately, quantitative attitudes have been shown to predict quantitative performance just as strongly as


pre-course quantitative skill in a class with engaging activities to get students involved, suggesting that it is worthwhile to try to reduce quantitative anxiety by making quantitative learning more appealing (Harlow, Burkholder, & Morrow, 2002). Thus, it is probably not extraordinary that I find it my greatest joy to try to engage and enlighten students with the remarkable prospective of quantitative thinking and analysis. I go so far as to argue that any student interested in psychology has the making of a quantitative scientist, if only at some latent, implicit level. What student has not posed a possible theory for why something is, or is not, a certain way? Doesn’t everyone wake up each morning with one or more hypotheses about how the day will emerge and which variables or factors will help bring about the desired effect, whether small, moderate, or large? And, don’t we all think about how much confidence we can place on our hypotheses or expectations, depending on specific mitigating or confounding circumstances? I believe that we all engage in these activities, however formal or informal, in which quantitative psychology can play a pivotal role. Quantitative psychology has the potential to empower and enlighten with the training, skills, reasoning, and capabilities to formalize these kinds of questions, helping us to describe and understand the essence of myriad data that come our way. With more advanced quantitative training, we could further develop and analyze intricate theoretical models that help explain and predict complex processes and behaviors, integrating information necessary to inform and improve interventions, policies, and the human condition (Harlow, 2010; Rodgers, 2010). In this chapter, I highlight some major issues and topics that go into teaching quantitative psychology. First, I provide an overview of quantitative training to get an overarching picture of the field. Second, I suggest a number of issues that should be taken into account when teaching quantitative courses and research that investigates ways to address these concerns. Third, I offer several themes that run through many statistical methods to help tie together the numerous, seemingly isolated and obscure quantitative procedures. Finally, I summarize the current field of quantitative teaching and give recommendations for other options to supplement, enrich, and expand statistical learning.

Overview of Quantitative Training
Aiken et al. (1990, 2008) surveyed more than 200 graduate training programs regarding statistics,

measurement, and methodology. Only 15% of the graduate programs had a quantitative area, although about 50% offered a minor in quantitative studies. Almost all of the graduate programs required students to take at least one or two quantitative courses during their first year, although more than 25% sent students to other departments to get this training. Required quantitative courses usually included basic analysis of variance (ANOVA) and some regression, with half to two-thirds of programs including limited training in measurement or research. Computer training in SPSS and SAS is often part of the curriculum, with the possibility of EQS, AMOS, Mplus, LISREL, and other software exposure in more advanced courses (e.g., structural equation modeling) in about half of the programs. Over a span of almost two decades between surveys, quantitative training had not improved much; over time there was slightly more emphasis on measurement and deterioration in coverage of research design. Much improvement in advanced methods training is still needed in most departments to impart the expertise needed to effectively compete in quantitative studies and fill needed positions in the workforce. It is becoming readily apparent that efforts are needed to increase the number of quantitatively trained individuals. A recent report from an American Psychological Association (APA) Task Force to Increase the Quantitative Pipeline (Aiken et al., 2007) reported that in recent history, whereas there were approximately two applicants for each nonquantitative doctoral level job (e.g., cognitive, developmental, social), there were almost 2.5 jobs for every quantitatively trained PhD student. Further, the focus of quantitative methodology is moving away from emphasis on learning a set of specific methods and procedures and instead placing greater priority on developing a broader vision of quantitative science through theory building, modeling underlying processes, and integrating information across meta-analytic studies (e.g., Harlow, 2010; Jaccard & Jacoby, 2009; McGrath, 2011; Rodgers, 2010; Rosenthal & DiMatteo, 2001). Consistent with this vision is a focus on encouraging greater statistical reasoning, thinking and literacy, rather than rote learning (Gal, 2003; Garfield & delMas, 2010; Ridgway, Nicholson, & McCusker, 2007). Researchers are realizing that quantitative literacy is a needed goal in undergraduate studies, beginning with the most basic, introductory statistics courses (Ben-Zvi & Garfield, 2004; Mulhern & Wylie, 2004, 2006; Rumsey, 2002; Watson, 2006). Resources are becoming available to help encourage


quantitative thinking (Garfield & Ben-Zvi, 2008; Saville, Zinn, Lawrence, Barron, & Andre, 2008; Stark & Krause, 2009), along with a realization of the challenges involved (Ben-Zvi & Garfield, 2004; Sen, 2004). Garfield and colleagues from the University of Minnesota developed a program in Quantitative Methods in Education that is at the forefront of improving undergraduate statistical training. Much of their research focuses on training statistics teachers (Garfield & Everson, 2009), reforming the way introductory statistics is taught (Garfield, Hogg, Schau, & Whittinghill, 2002), and investigating how to teach and learn statistics (Garfield & Ben-Zvi, 2007). Other researchers (Friedrich, Buday, & Kerr, 2000) also help outline the field by surveying undergraduate programs across the country with respect to quantitative training. In the next section, I review research focusing on strategies for teaching quantitative psychology (e.g., Gelman & Nolan, 2002), including a set of guidelines developed by Garfield and colleagues (Franklin & Garfield, 2006).

Strategies for Teaching Quantitative Psychology
There is an abundance of resources to aid instructors of statistics courses, including best practice volumes (e.g., Dunn, Smith, & Beins, 2007; Hulme, 2007; Hulsizer & Woolf, 2009), compilations of research on teaching statistics (e.g., Bjornsdottir & Garfield, 2010; Zieffler, Garfield, Alt, Dupuis, Holleque, & Chang, 2008), learning objectives for introductory statistics and research (Tomcho, Rice, Foels, Folmsbee, Vladescu, Lissman, Matulewicz, & Bopp, 2009), and suggestions for teaching quantitative courses (Garfield & Everson, 2009; Ware & Johnson, 2000). In this section, I present several strategies that have been researched and recommended to improve statistical literacy.

Active Learning
Research has demonstrated that actively involving students significantly improves performance in quantitative courses (e.g., Helman & Horswill, 2002). Guidelines to improve statistical learning (e.g., encouraging statistical thinking, using examples with technology and real data, emphasizing concepts, promoting active learning) reiterate this approach of engaging students in the process (Everson, Zieffler, & Garfield, 2008; Franklin

& Garfield, 2006). Humor is also a great icebreaker, catching students' attention (e.g., Cobb, 1999, 2007; Zeedyk, 2006), helping to defuse tension, and surreptitiously calling students back to quantitative learning. Creating a lively environment is essential for capturing the interest of students. A number of researchers have emphasized the importance of hands-on and interactive learning (Dolinsky, 2001; Kolar & McBride, 2003; Wulff & Wulff, 2004). A simple strategy could be to invite students to write down what is clear and not clear at the end of each lecture, with the faculty clarifying unclear points at the beginning of the next lecture (Harlow, Burkholder, & Morrow, 2006). It takes little time, and students get fairly immediate feedback on how to clear up misunderstandings, whereas faculty get a clearer idea of what the students understood and what needs to be clarified. Moreover, involving students with creative examples (Chew, 2007; Schwartz & Martin, 2004) and analyzing data (e.g., Nie & Lau, 2010; Watkins, Scheaffer, & Cobb, 2004) bring about more in-depth learning than traditional lecture-based approaches. Demonstrating visual images and graphs of procedures also helps improve understanding (e.g., Peden, 2001). If there is more time, then students can be arranged into small groups and be given a research scenario in which they need to consult among themselves to recommend statistical procedures to address the research question (Harlow, Burkholder, & Morrow, 2006). For a basic example, students could be asked how to assess whether men and women differed on hours of exercise per week (i.e., with a two-sample independent t-test) or whether the number of hours worked at an outside job was related to GPA (i.e., with a correlation). A number of researchers have recommended engaging students in these cooperative small groups (DaRos-Voseles, Collins, Onwuegbuzie, & Jiao, 2008; Onwuegbuzie, Collins, & Jiao, 2009; Peterson & Miller, 2004) or learning communities (e.g., Barren, Benedict, Saville, Serdikoff, & Zinn, 2007) where students work together to understand quantitative material and immerse themselves in the process. Krause, Stark, and Mandl (2009), on the other hand, have found that cooperative groups did not directly improve statistics performance, although students reported greater perceived efficacy when working with others. It may be that group learning is not effective for all students, with more advanced students possibly benefiting the least. For example, Harlow, Burkholder, and Morrow (2002) found that learning activities


that included working in groups and with peer mentors were viewed more favorably when students were more anxious and had lower confidence about quantitative learning. Future research could investigate whether ability or achievement level is a moderator for group learning and performance to clarify who benefits from group learning.
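To make the small-group scenario described above concrete, an instructor might also have students check their recommendations against actual output, using a short script with made-up data such as the following (the values and sample sizes here are purely illustrative):

```python
import numpy as np
from scipy import stats

# Hypothetical classroom data (assumed values for illustration only)
exercise_men = np.array([3.0, 4.5, 2.0, 5.0, 3.5, 4.0, 2.5, 3.0])
exercise_women = np.array([4.0, 5.5, 3.0, 6.0, 4.5, 5.0, 3.5, 4.0])
hours_worked = np.array([0, 5, 10, 15, 20, 25, 30, 35])
gpa = np.array([3.8, 3.7, 3.5, 3.4, 3.2, 3.1, 2.9, 2.8])

# Scenario 1: Do men and women differ in weekly hours of exercise?
# A two-sample independent t-test addresses this group-difference question.
t_stat, t_p = stats.ttest_ind(exercise_men, exercise_women)

# Scenario 2: Is the number of hours worked at an outside job related to GPA?
# A Pearson correlation addresses this association question.
r, r_p = stats.pearsonr(hours_worked, gpa)

print(f"t-test: t = {t_stat:.2f}, p = {t_p:.3f}")
print(f"correlation: r = {r:.2f}, p = {r_p:.3f}")
```

Seeing the two questions answered by different procedures also anticipates the theme, developed later in this chapter, of matching the analysis to the research question.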

Technology and Learning
Other research has revealed the value of technology in heightening quantitative learning using computer-assisted analysis (Bartz & Sabolik, 2001), Web-based tutorials (Bliwise, 2005), and specific online learning programs such as Estimating Statistics (EStat; Britt, Sellinger, & Stillerman, 2002), Simulation-Assisted Learning Statistics (SALS; Liu, Lin, & Kinshuk, 2010), the Utah Virtual Lab (Malloy & Jensen, 2001), Statistical Understanding Made Simple (SUMS; Swingler, Bishop, & Swingler, 2009), or Web Interface for Statistics Education (WISE; Berger, n.d., http://wise.cgu.edu/). In a meta-analysis of 45 studies, there was a small- to medium-sized performance benefit (Cohen's d = 0.33) attributed to learning statistics with computer-assisted instruction versus learning in a lecture-based control group that did not provide such input (Sosa, Berger, Saw, & Mary, 2010). Thus, students who had technology-enhanced instruction demonstrated one-third of a standard deviation higher performance than those students without such advantage; further, the improvement was even more salient when involving those who were more advanced (i.e., graduate students) and when more time was allotted for instruction. Still, whereas research has demonstrated the benefits of e-learning approaches (e.g., Fillion, Limayem, Laferrière, & Mantha, 2008; Hanley, 2004; Sosa et al., 2010; Wender & Muehlboeck, 2003), more research is needed, as others remain unconvinced of the merits of adding technology to the classroom environment (e.g., Härdle, Klinke, & Ziegenhagen, 2007). Even the value of online discussions and whether or how much faculty should facilitate or contribute is not entirely clear (e.g., Mazzolini & Maddison, 2007). Instructors are encouraged to consider literature on cognitive learning and assessment to improve Web-based materials for students, particularly in quantitative learning (e.g., Romero, Berger, Healy, & Aberson, 2000).

Mentors and Role Models
Mentoring can help students get the extra input needed to understand quantitative concepts (e.g.,

Ferreira, 2001) and can help to supplement class lectures and faculty input (e.g., Katayama, 2001). Fortunately, graduate teaching assistants (TAs) are often provided for undergraduate- and graduate-level quantitative courses. When TAs are not funded, I have found it very effective to invite one or more top achievers from a previous semester to serve as volunteer peer mentors or TAs, offering independent study or teaching practicum credit. Students in the course benefit from having a peer of similar age demonstrating and facilitating expertise in quantitative methods. TAs or peer mentors gain efficacy and greater confidence, often choosing to become even more involved with other quantitative courses and research to continue building their skills (e.g., Harlow, Burkholder, & Morrow, 2002, 2006). Mentoring can be particularly valuable for women and individuals from under-represented groups who have few role models in the quantitative field, for providing direction, support, and encouragement (e.g., Kosoko-Lasaki, Sonnino, & Voytko, 2006; Neal-Barnett, Mitchell, & Boeltar, 2002; Zirkel, 2002).

Conceptual Approach to Teaching
Perhaps the most effective idea for conveying complex quantitative material is to focus on the concepts rather than using a strictly mathematical approach (e.g., Atkinson, Catrambone, & Merrill, 2003). Chiou (2009) found that encouraging students to collaborate on conceptually mapping statistical material significantly improved performance compared to having students complete textbook exercises and calculations. In another study, Aberson, Berger, Healy, and Romero (2003) demonstrated that an interactive approach to hypothesis testing concepts was received more positively and improved performance over traditional laboratory exercises. Similarly, Meletiou-Mavrotheris and Lee (2002) found that helping students to understand concepts, improve statistical reasoning, and build intuitions about statistical ideas was more facilitating than using a traditional teaching approach. Presenting and encouraging understanding of core concepts—particularly through hands-on engagement and research—can help foster more in-depth insight and greater involvement in inquiry-based future learning (Aulls & Shore, 2008; Dewey, 1997). Inviting students to seek out solutions to quantitative research problems promotes greater statistical awareness and literacy. In the next section, I present a set of conceptual themes that are common


to many statistical methods and that help provide a foundation for understanding introductory and advanced quantitative learning.

Themes that Run Through Quantitative Psychology
Quantitative understanding is increased when common ideas are revealed that occur in many statistical procedures. Harlow (2005) has emphasized a number of themes that run through multivariate methods and that can be extended to encompass univariate methods as well. Three basic themes are presented below to facilitate approaching, analyzing, and interpreting quantitative methods.

Considering the Research Question
First, it is helpful to encourage students to consider the kind of research question that needs to be addressed. Group difference questions can be analyzed with basic z-tests when information is known about the mean and the variability in the population. For example, we could investigate whether an exceptional students' class evinced an IQ that was different from the known average of 100, with a standard deviation of 15. When only a population mean is known or there are two groups involved, a t-test would be appropriate, requiring that a researcher estimate the population standard deviation(s) from the sample(s). For studies with two or more groups, an ANOVA would be useful for assessing whether there were differences among multiple groups. For example, a teacher may want to compare the level of interest in taking math courses for male and female students, which could be examined with a two-sample independent t-test. To investigate potential differences in math interest among students from three different college majors, an ANOVA would be helpful. To examine whether several groups differ on math interest, after accounting for the number of previous math classes taken, analysis of covariance (ANCOVA) could be used. Further, group differences across several groups could be assessed on several measures (e.g., math interest, math efficacy, and math anxiety) using multivariate analysis of variance (MANOVA). Similarly to the way that ANCOVA extended ANOVA by taking into account another predictor that correlated or covaried with the outcome, a multivariate analysis of covariance (MANCOVA) can extend a MANOVA when wanting to examine whether several groups differ on several outcomes (e.g., the three math attitudes suggested for the

MANOVA) after taking into account one or more covariates such as number of previous math courses or GPA. For those who are not specifically interested in mean differences across groups, a correlational question may be asked. Researchers interested in finding what is associated with achievement could conduct a simple correlation analysis, assessing whether math interest is related to college GPA. This bivariate procedure (between just two variables) could expand to canonical correlation (CC) that allows an examination of two sets of variables. For example, a researcher could examine whether several math attitude scores (e.g., math interest, math efficacy, and math anxiety) are related to several achievement outcomes (e.g., GPA, the number of conferences attended, and the number of memberships in honor societies). Correlation could also be extended to multiple regression (MR) to relate a linear combination of several continuous predictors (e.g., math ability, verbal ability, previous GPA, achievement motivation) to a single outcome such as current GPA. If the outcome were dichotomous (e.g., success or failure in a course or grade level), then the merit of similar predictors could be examined with logistic regression (LR) or discriminant function analysis (DFA), depending on whether there was interest in conveying the odds or the degree of correlation with the outcome, respectively. Multilevel modeling (MLM) would be useful to predict an outcome such as achievement, when there is reason to believe that participants are clustered in separate groups (e.g., classrooms, school districts). Thus, MLM allows, and even models, heterogeneity of variance across several groups in the data, contrary to more restrictive prediction methods (e.g., MR, LR, and DFA) that assume that data are drawn from a homogeneous group. Other, more advanced, research questions could be addressed with more sophisticated methods. Factor analysis (FA) or principal components analysis (PCA) would be useful when faced with a large set of measures with a goal of identifying a few key dimensions that explain the underlying structure or associations (e.g., quantitative and verbal intelligence dimensions in an IQ test of multiple subtests). Factor analysis would allow the delineation of unique or error variance in measures before forming factors with the variance that was in common among the measures. In contrast, PCA analyzes all of the variance in the variables when forming components. In practice, there may not be much actual difference in results across these two


methods if the loadings are high for relevant variables on their respective factors, even with the initial difference in whether unique variance is included in the analysis (Velicer & Jackson, 1990). Structural equation modeling (SEM) could be used to examine whether latent factors, each with several measures, could be theoretically modeled to explain hypothesized underlying processes. For example, using Bandura’s (1997) social learning theory, we could test whether different factors of intelligence (i.e., quantitative and verbal) would predict degree of self-efficacy in learning, which in turn could predict an achievement factor (measured by homework, quiz, and exam scores). In testing the SEM, other covariates (e.g., socioeconomic status, previous GPA) could be added as possible predictors of the mediator (i.e., self-efficacy) to see if they are important, or to rule them out as predictors of achievement. Further, multiple group analyses could examine whether such a predictive SEM of achievement held in different groups (e.g., men versus women, different ethnicities). If findings differed across groups, then it would indicate that the grouping variable moderated the prediction. Longitudinal modeling (e.g., latent growth curve modeling) could examine whether achievement changed over time in mean level and in the rate of change and whether predictors (e.g., previous GPA, IQ, gender) could predict the level (called the intercept) and rate of change (called the slope) across time. Other methods discussed in this volume could also be examined to further fine tune the nature of the question being assessed. Therefore, the main point of the first theme (type of research question asked) is that it is the nature of the research and the questions asked of the data that drive the kind of quantitative method that is selected. There is a whole world of methods to consider, and the choice is not so daunting when realizing that certain methods readily lend themselves to different kinds of research questions (e.g., group difference, correlation or prediction, underlying structure, longitudinal, etc.). In the next section, similarities and differences among methods is the second theme that is discussed. Here, it will become apparent that although there are distinctions among methods that make them more likely to be applied to specific research questions, many quantitative procedures share similarities in how they reveal the essence of the data. For example, researchers who find a significant difference in well being between groups who exercise regularly and those who do not will also find that there is a significant relationship between well 110

being and exercise. Thus, both group difference and correlational methods can examine how much variance is shared between independent and dependent variables. In this regard, Cohen, Cohen, West, and Aiken (2003) have described how categorical grouping variables can be coded for use in correlational and regression procedures.
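This equivalence is easy to demonstrate with a few lines of code. In the sketch below, which uses invented well-being scores purely for illustration, the p-value from a two-sample t-test matches the p-value from the point-biserial correlation obtained by coding exercise group membership as 0 and 1, in the spirit of the coding schemes that Cohen, Cohen, West, and Aiken (2003) describe:

```python
import numpy as np
from scipy import stats

# Hypothetical well-being scores (assumed values for illustration only)
wellbeing_exercisers = np.array([7.0, 8.0, 6.5, 9.0, 7.5, 8.5])
wellbeing_nonexercisers = np.array([5.0, 6.0, 5.5, 7.0, 6.5, 4.5])

# Group-difference framing: two-sample independent t-test
t_stat, t_p = stats.ttest_ind(wellbeing_exercisers, wellbeing_nonexercisers)

# Correlational framing: code group membership as 1 = exerciser, 0 = non-exerciser
scores = np.concatenate([wellbeing_exercisers, wellbeing_nonexercisers])
group = np.array([1] * len(wellbeing_exercisers) + [0] * len(wellbeing_nonexercisers))
r, r_p = stats.pearsonr(group, scores)

print(f"t = {t_stat:.3f}, p = {t_p:.4f}")
print(f"point-biserial r = {r:.3f}, p = {r_p:.4f}")  # same p-value as the t-test
print(f"r squared = {r**2:.3f}  (variance in well-being shared with exercise group)")
```

For two groups, the squared point-biserial correlation equals the eta-squared that the corresponding ANOVA would report, which underscores how much variance-sharing logic the two framings have in common.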

Noting Similarities and Differences in Quantitative Methods
Examining research questions and seeing specific methods that seem to be relevant to such questions leads into a second theme, which is to notice the similarities and differences among various quantitative procedures. Quilici and Mayer (2002) helped students notice the underlying similarities in different statistical word problems. This helped students to know how to analyze specific problems by noting their similarities to other statistical problems that were addressed by certain methods. Similarly, Derryberry, Schou, and Conover (2010) helped students understand how to conduct rank-based statistical tests by revealing their resemblance to already studied parametric tests. Quantitative methods can be classified in several ways to help delineate common aspects or distinguishing differences. Thus, group-difference methods (e.g., z-test, t-test, ANOVA, ANCOVA, and MANOVA) are similar in allowing an examination of potential mean differences between groups on one or more outcome variables. In contrast, correlational methods (e.g., Pearson's r, CC) do not tend to focus on means but, rather, on assessing association between independent and dependent variables. Prediction methods (e.g., MR, LR, DFA, and MLM) all focus on predicting an outcome from a set of predictors and may have some emphasis on grouping variables—particularly LR and DFA, which explicitly include a categorical dependent variable, and MLM, which takes into account different levels or groups in the data. However, each of these prediction methods differs somewhat from ANOVA-based methods in focusing more on weights linking independent and dependent variables and less on mean scores such as group averages or centroids. Dimensional or structural methods (e.g., FA, PCA) are similar in involving one or more sets of measures with a smaller set of underlying factors or components posited or revealed to help explain the patterns of relationships among variables. Thus, dimensional methods are similar to correlational methods in their


focus on associations among variables and different from group difference methods that are more concerned with mean differences among groups. SEM combines the best of group difference, correlation/prediction, and structural methods by allowing an investigation of the means of latent factors across different samples or groups, while still allowing an examination or confirmation of the structural or correlational relationships among variables. Longitudinal methods (e.g., time series, latent growth modeling) add the element of time to allow an examination of temporal ordering that can help in assessing causal precedence among variables. The main point of this second theme is to encourage students to see how various quantitative methods are similar and how they are different. Discerning these distinctions and similarities will go a long way toward highlighting the overriding features and underlying aspects of quantitative methods and in selecting an appropriate method to address a specific kind of research question.
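One classroom device for reinforcing both of the first two themes is to have students encode the question-to-method mapping themselves. The lookup table below is only an illustrative sketch that deliberately simplifies the decision (actual choices also depend on design, measurement level, and model assumptions), but building and extending such a table forces students to articulate what distinguishes the methods:

```python
# A deliberately simplified mapping from the kind of research question to a
# commonly used analysis, following the classification sketched in this section.
# Real choices also depend on design, measurement level, and model assumptions.
METHOD_BY_QUESTION = {
    ("group difference", "one outcome, two groups"): "two-sample independent t-test",
    ("group difference", "one outcome, three or more groups"): "ANOVA (ANCOVA with a covariate)",
    ("group difference", "several outcomes"): "MANOVA (MANCOVA with covariates)",
    ("association", "two variables"): "Pearson correlation",
    ("association", "two sets of variables"): "canonical correlation",
    ("prediction", "continuous outcome"): "multiple regression",
    ("prediction", "dichotomous outcome"): "logistic regression or discriminant analysis",
    ("prediction", "clustered/nested data"): "multilevel modeling",
    ("underlying structure", "many measures"): "factor analysis or PCA",
    ("structure + prediction", "latent variables"): "structural equation modeling",
    ("change over time", "repeated measurements"): "latent growth curve modeling",
}

def suggest_method(question_type: str, data_feature: str) -> str:
    """Return a commonly used analysis for a (question type, data feature) pair."""
    return METHOD_BY_QUESTION.get(
        (question_type, data_feature),
        "no simple match; consider the design and assumptions more closely",
    )

print(suggest_method("group difference", "several outcomes"))  # MANOVA (MANCOVA with covariates)
print(suggest_method("prediction", "clustered/nested data"))   # multilevel modeling
```

Students can then be challenged to add rows of their own and to defend each entry, which is itself an exercise in noticing similarities and differences.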

Interpreting Findings from Quantitative Methods
Third, after identifying a research question and noticing similarities and differences that lead to selecting a method to analyze one's data, it is important to examine and interpret findings from multiple perspectives. Although the field of quantitative psychology has traditionally focused largely on significance testing, there is growing emphasis on considering other alternatives such as effect sizes and confidence intervals (e.g., Harlow, Mulaik, & Steiger, 1997; Kline, 2004; Wilkinson & the Task Force on Statistical Inference, 1999). Initially, it can be helpful to assess whether the overall variation in the data is significantly different from chance. Significance tests usually involve some ratio of variances. We can think of group difference models as explaining the ratio of how scores vary among groups, relative to how much scores vary within each group. This ratio can be readily recognized as the F-test. The closer this ratio is to 1.0, the less we are apt to consider between-groups variance as anything more meaningful than the within-group error variance. Correlational methods examine whether the covariance between variables is salient when contrasted with the variances within each variable. Thus, a correlation is simply a ratio of the covariance between variables over the product of the standardized variance (i.e., standard deviations) within each variable. Using an analogy relating individuals, when the covariance between two people is salient,

despite their individual variance (or uniqueness), a meaningful relationship emerges. After assessing significance, it is important to examine the magnitude of an overall finding, called an effect size (ES). For group-difference methods, an ES can indicate the number of standard deviation units of difference there is among group means (i.e., Cohen's d; Cohen, 1988), with small, medium, and large effects having values of about 0.20 (almost a quarter of a standard deviation), 0.50 (half a standard deviation), and 0.80 (almost a full standard deviation) (Cohen, 1988). For correlational methods, we would hope to show a meaningful relationship between pertinent variables, with values of 0.1, 0.3, and 0.5 indicating small, medium, and large Pearson product moment correlations, respectively. Particularly for prediction methods, as well as other methods, proportions of shared variance effect sizes (e.g., η² or R²) are useful for showing how much the independent and dependent variables have in common, ranging in size from 0 to 1.0. Proportion of variance ES values of 0.01, 0.09, and 0.25 indicate small, medium, and large univariate effects, respectively (obtained by squaring 0.1, 0.3, and 0.5 correlations, respectively); and 0.02, 0.13, and 0.26 or more refer to small, medium, and large multivariate effect sizes, respectively (e.g., Cohen, 1988; Harlow, 2005). When interpreting ES, it is also important to provide an indication of the margin of error with confidence intervals (e.g., Cumming & Fidler, 2009; Gilliland & Melfi, 2010; Odgaard & Fowler, 2010). Confidence intervals provide a range of values, with narrower intervals indicating more precision for an estimated effect. Research journals such as the Journal of Consulting and Clinical Psychology and others are beginning to require that statistical findings be supplemented with effect sizes and confidence intervals (e.g., La Greca, 2005; Odgaard & Fowler, 2010). Thus, the third theme, concerned with interpreting results, involves providing an indication as to whether a result is significantly different from chance, the magnitude of the effect, and the degree of certainty about the result. Ultimately, understanding quantitative themes and concepts (e.g., Abelson, 1995), including framing initial research questions; noticing similarities and differences when selecting statistical methods; and interpreting the significance, extent, and precision of one's finding will go a long way toward moving psychology to the forefront of quantitative science.
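All three elements of interpretation (a significance test, an effect size, and a confidence interval) can be demonstrated in a single short computation. The sketch below uses invented scores for two groups; the formulas are the standard pooled-standard-deviation Cohen's d, eta-squared derived from the t statistic, and a t-based interval for the mean difference, and it is offered as a classroom aid rather than a prescribed analysis routine:

```python
import numpy as np
from scipy import stats

# Hypothetical scores for two groups (assumed values for illustration only)
group1 = np.array([24, 27, 22, 30, 26, 28, 25, 29])
group2 = np.array([20, 23, 19, 25, 22, 21, 24, 18])
n1, n2 = len(group1), len(group2)
df = n1 + n2 - 2

# 1. Significance test: is the mean difference larger than chance would suggest?
t_stat, p_value = stats.ttest_ind(group1, group2)

# 2. Effect size: Cohen's d from the pooled standard deviation,
#    and eta-squared as the proportion of shared variance
pooled_sd = np.sqrt(((n1 - 1) * group1.var(ddof=1) + (n2 - 1) * group2.var(ddof=1)) / df)
cohens_d = (group1.mean() - group2.mean()) / pooled_sd
eta_squared = t_stat**2 / (t_stat**2 + df)

# 3. Precision: 95% confidence interval for the mean difference
mean_diff = group1.mean() - group2.mean()
se_diff = pooled_sd * np.sqrt(1 / n1 + 1 / n2)
ci_low, ci_high = stats.t.interval(0.95, df, loc=mean_diff, scale=se_diff)

print(f"t({df}) = {t_stat:.2f}, p = {p_value:.4f}")
print(f"Cohen's d = {cohens_d:.2f}, eta-squared = {eta_squared:.2f}")
print(f"mean difference = {mean_diff:.2f}, 95% CI [{ci_low:.2f}, {ci_high:.2f}]")
```

Reporting the three quantities together mirrors the reporting practices that the journals cited above are beginning to require.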


Conclusions
This chapter features a number of issues regarding the teaching of quantitative psychology. First, research on quantitative training underscores the need to provide greater guidelines and opportunities for learning more about measurement, research, and statistical methods (e.g., Franklin & Garfield, 2006; Garfield & Everson, 2009). Job opportunities are more available than are applicants for quantitative careers (e.g., Aiken et al., 1990, 2008). An APA task force on increasing the quantitative pipeline (Aiken et al., 2007) is aimed at bringing more people into this field to fill this need. Second, strategies for effectively teaching quantitative psychology emphasize the advantages of active learning (e.g., Onwuegbuzie, Collins, & Jiao, 2009), technology, and Web-based instruction (e.g., Bliwise, 2005; Fillion, Limayem, Laferrière, & Mantha, 2008; Sosa et al., 2010), mentoring and role models (e.g., Ferreira, 2001), and conceptual understanding (e.g., Mulhern & Wylie, 2004; Swingler, Bishop, & Swingler, 2009). All of these strategies implicitly invite students to become more immersed in the quantitative learning process and ultimately to become more statistically literate. Third, several themes have been offered to bring about greater clarity in quantitative learning (Harlow, 2005). One of the themes includes considering the nature of the research question. Students should be encouraged to ask whether their research question involves group differences, correlation, prediction, underlying structure, or longitudinal studies. Another theme encourages students to notice the similarities and differences among quantitative procedures, as this kind of awareness is helpful in selecting appropriate methods to analyze data and address research questions. Still another theme presents suggestions for interpreting quantitative findings using statistical tests, ES, and confidence intervals. Being aware of quantitative training practices, encouraging more engaged learning, and focusing on conceptual thinking and underlying themes can help us feature and spotlight quantitative psychology as a highly worthwhile and exciting field in which to take part.

Future Directions
A number of future directions are suggested to point out areas in which quantitative teaching is emerging and needs to grow. First, teaching should focus on actively engaging students and helping them to understand basic, underlying concepts. This would also involve more emphasis on modeling

overarching constructs and processes rather than limiting teaching to narrow and isolated methodological procedures (Embretson, 2010; McGrath, 2011; Rodgers, 2010). Building quantitative models of behavior will generate more integrated and explanatory understanding that is necessary to keep psychology scientifically rigorous. Second, students, faculty, and professionals should be encouraged to explore options for furthering quantitative learning outside of the classroom. For example, quantitative journals (e.g., Multivariate Behavioral Research, Psychological Assessment, Psychological Methods, Structural Equation Modeling) as well as quantitative handbooks (e.g., Cooper et al., 2012), statistical dictionaries (e.g., Dodge, 2003; Everitt, 2002; Upton & Cook, 2008), and even online resources on quantitative methods (e.g., Journal of Statistics Education, http://www.amstat.org/publications/jse/) provide opportunities for further quantitative studies. There are also numerous quantitative volumes that provide input and guidelines for understanding the main aspects of various quantitative methods (Hancock & Mueller, 2010), including multiple regression/correlation (Cohen, Cohen, West, & Aiken, 2003), multivariate statistics (Stevens, 2009), statistical mediation (MacKinnon, 2008), missing data analysis (Enders, 2010), and structural equation modeling (e.g., Kline, 2011), among others. Those interested in more advanced statistical material could venture into the British Journal of Mathematical and Statistical Psychology and Psychometrika, or even the Millsap and Maydeu-Olivares (2009) volume on Contemporary Psychometrics, realizing that these latter sources would not be as accessible as the more translational articles in the former outlets. Those interested in additional training could consider workshops (e.g., the University of Kansas Summer Institute held by Todd Little), annual conferences featuring quantitative presentations (e.g., Division 5 talks at the APA convention, Division D talks at the American Educational Research Association convention, and presentations and workshops at the Association for Psychological Science). Websites focusing on quantitative material (e.g., UCLA statistical website, http://www.ats.ucla.edu/stat/, and a similar one at Georgetown, http://statpages.org/) and online tutorials in quantitative methods (Dinov & Christou, 2009; Garfield & delMas, 2010) offer additional occasions to heighten quantitative proficiency. Further, there are quantitative societies that provide


useful opportunities and contacts (e.g., American Psychological Association, Division 5: Evaluation, Measurement, and Statistics; American Statistical Association; Psychometric Society; Society for Mathematical Psychology, and the Society of Multivariate Experimental Psychology), including international statistical societies (e.g., European Association of Methodology, International Association for Statistical Education, International Statistical Institute, Society for Multivariate Analysis in the Behavioral Sciences). Participating in quantitative forums helps to enlarge quantitative networks, providing a stronger foundation to further one’s skills. Third, quantitative teaching should be widened to include individuals from low-income (Kitchen, DePree, Celedón-Pattichis, & Brinkerhoff, 2007) and under-represented groups (Kosoko-Lasaki, Sonnino, & Voytko, 2006). We also need to consider whether traditional approaches to quantitative teaching have helped to shrink or amplify the differences among mainstream and marginalized students (Ceci & Papierno, 2005). Other offerings can be encouraged (e.g., Quantitative Training for Underrepresented Groups) to help bring about greater equity and effectiveness that can open doors to more career options in the field of quantitative science to an even larger and more diverse group of individuals. Fourth, it is important to reach out to quantitative researchers who are newly graduated to encourage them to become involved in the larger field. Early career psychologists make up about 10% of the 150,000 members of the APA (2010), and Division 5 (measurement, statistics and evaluation) involves approximately 1% of APA members (i.e., Gruber, 2010). The APA Quantitative Task Force (Aiken et al., 2007) helps in this regard, with a website (http://www.apa.org/research/tools/quantitative/index.aspx) that provides definitions of the areas of focus within quantitative methods, recommendations for preparing for quantitative training, and lists of quantitative graduate programs that can help in recruiting individuals into quantitative psychology and in retaining those that are already participating. Finally, continuing efforts to generate greater interest, participation, and performance in quantitative literacy is an important goal. Unfortunately, psychology is not always considered as having the scientific credentials of other disciplines such as physics, chemistry, and biology (Simonton, 2004). Particularly, recent efforts to improve technological education in the United States (Chopra, 2010) have focused more on traditionally defined science,

Figure 6.1 Bringing about statistical literacy through quantitative teaching, other quantitative learning, and widening the quantitative reach

technology, engineering, and mathematics that sometimes exclude psychology. Further, research reveals that students from the United States fall in the bottom half regarding quantitative performance when compared to other countries (Gonzales, Guzmán, Partelow, Pahlke, Jocelyn, Kastberg, & Williams, 2003; McQuillan & Kennelly, 2005). Figure 6.1 depicts the main focus of this chapter, where innovative quantitative teaching (with active learning, e-learning, mentoring, and concepts learning), other quantitative learning (e.g., model building, workshops, quantitative societies, and quantitative literature), and widening the quantitative reach (to the disadvantaged, early career individuals, and the under-represented) can help bring about greater statistical literacy. More emphasis needs to be placed on involving others in the marvels of quantitative psychology, whether at the undergraduate or graduate level, in the larger profession of psychology or in the general public. Ideas presented in this chapter, and in the larger handbook, are offered to


help in teaching quantitative psychology, within the classroom, in research centers, and in every facet of life that involves quantitative thinking.

Author Note
Lisa L. Harlow is Professor of Quantitative Psychology at the University of Rhode Island. Please address correspondence to Lisa L. Harlow, Department of Psychology, University of Rhode Island, 142 Flagg Rd, Chafee, Kingston, RI 02881-0808.

Acknowledgments
I wish to acknowledge support from the National Science Foundation (NSF) Grant (# 0720063) on Quantitative Training for Underrepresented Groups; Lisa L. Harlow, Principal Investigator (PI). I would like to thank students, faculty and staff at the University of Rhode Island and the Cancer Prevention Research Center in Rhode Island for their ongoing support, inspiration and encouragement. Similarly, I would like to thank Dr. Lisa Dierker (PI) and her colleagues at the Quantitative Analysis Center at Wesleyan University. Serving on the advisory board for their NSF Grant (# 0942246) on An Inquiry-Based, Supportive Approach to Statistical Reasoning and Application has opened up my thinking and those of my students on how to productively engage students in the wonders of quantitative inquiry.

References Abelson, R. P. (1995). Statistics as principled argument. Hillsdale, NJ: Erlbaum. Aberson, C. L., Berger, D. E., Healy, M. R., & Romero, V. L. (2003). Evaluation of an interactive tutorial for teaching hypothesis testing concepts. Teaching of Psychology, 30, 75–78. Aiken, L. S., Aguinis, H., Appelbaum, M., Boodoo, G., Edwards, M. C., Gonzalez, R. D., Panter, A., et al. (2007). Report of the task force for increasing the number of quantitative psychologists. Washington DC: American Psychological Association. Retrieved July 1, 2010 from: http://www.apa.org/research/tools/quantitative/quant-taskforce-report.pdf. Aiken, L. S., West, S. G., & Millsap, R. E. (2008). Doctoral training in statistics, measurement, and methodology in psychology: Replication and extension of Aiken, West, Sechrest, and Reno’s (1990) survey of PhD programs in North America. American Psychologist, 63, 32–50. Aiken, L. S., West, S. G., Sechrest, L., & Reno, R. R. (1990). Graduate training in statistics, methodology, and measurement in psychology: A survey of PhD programs in North America. American Psychologist, 45, 721–734. APA (2010). Early career psychologists. Washington D.C.: American Psychological Association. Retrieved on July 1, 2010 from: http://www.apa.org/membership/index.aspx.


Ashcraft, M. H., & Kirk, E. P. (2001). The relationship among working memory, math anxiety, and performance. Journal of Experimental Psychology: General, 130, 224–237. Atkinson, R. K., Catrambone, R., & Merrill, M. M. (2003). Aiding transfer in statistics: Examining the use of conceptually oriented equations and elaborations during subgoal learning. Journal of Educational Psychology, 95, 762–773. Aulls, Mark W., & Shore, B. M. (2008). Inquiry in education, volume 1: The conceptual foundations for research as a curricular imperative. New York: Taylor & Francis/Erlbaum. Bandura, A. (1997). Self-efficacy: The exercise of control. New York: W. H. Freeman. Barren, K. E., Benedict, J. O., Saville, B. K., Serdikoff, S. L., & Zinn, T. E. (2007). Innovative approaches to teaching statistics and research methods: Just-in-time teaching, interteaching, and learning communities. In Dunn, D. S., Smith, R. A, & Beins, B. (Eds.), Best practices for teaching statistics and research methods in the behavioral sciences, (pp. 143–158). Mahwah, NJ: Erlbaum. Bartz, A. E., & Sabolik, M. A. (2001). Computer and software use in teaching the beginning statistics course. Teaching of Psychology, 28, 147–149. Ben-Zvi, D. & Garfield, J. (Eds.) (2004). The challenge of developing statistical literacy, reasoning, and thinking. Dordrecht, Netherlands: Kluwer. Berger, D. (n.d.). WISE Web Interface for Statistics Education. Retrieved December 20, 2010 from http://wise.cgu. edu/. Bjornsdottir, A., & Garfield, J. (2010). Teaching bits: Statistics education articles from 2009. Journal of Statistics Education, 18, Retrieved May 9, 2012 from www.amstat.org/ publications/jse/v18n1/garfieldtb.pdf Bliwise, N. G. (2005). Web-based tutorials for teaching introductory statistics. Journal of Educational Computing Research, 33, 309–325. Britt, M. A., Sellinger, J., & Stillerman, L. M. (2002). A review of ESTAT: An innovative program for teaching statistics. Teaching of Psychology, 29, 73–75. Budé, L., Van De Wiel, M. W. J., Imbos, T., Candel, M. J. J. M., Broers, N. J., and Berger, M. P. F. (2007). Students’ achievements in a statistics course in relation to motivational aspects and study behaviour. Statistics Education Research Journal [Online], 6, 5–21. Retrieved May 9, 2012 from http://www.stat.auckland.ac.nz/∼iase/serj/SERJ6(1)_ Bude.pdf Ceci, S. J., & Papierno, P. B. (2005). The rhetoric and reality of gap closing: When the “have nots” gain but the “haves” gain even more. American Psychologist, 60, 149–160. Chew, S. L. (2007). Designing effective examples and problems for teaching statistics. In, Dunn, Dana S., Smith, Randolph A, & Beins, Barney (Eds.), Best practices for teaching statistics and research methods in the behavioral sciences, (pp. 73–91). Mahwah, NJ: Erlbaum. Chiou, C.-C. (2009). Effects of concept mapping strategy on learning performance in business and economics statistics. Teaching in Higher Education, 14, 55–69. Chopra, A. (2010). Building a future for technological innovation. White House Office of Science & Technology Policy. Retrieved January 13, 2011 from http://www.whitehouse.gov/sites/default/files/microsites/ ostp/CommClubr081710FINAL.pdf. Cobb, G. (1999). Move over, Bill Nye: George Cobb makes learning about statistics practical and fun. Mount Holyoke College news and events. Retrieved May 9, 2012

t e a c h i n g q u a n t i tat i v e p s yc h o l o g y

from http://www.mtholyoke.edu/offices/comm/csj/990409/ gummy.html. Cobb, G. W. (2007). The introductory statistics course: A ptolemaic curriculum? Technology Innovations in Statistics Education, 1, 1?15. Retrieved May 9, 2012 from http://escholarship.org/uc/item/6hb3k0nz# page-15 Cohen, J. (1988). Statistical power analysis for the behavioral sciences (2nd edition). Hillsdale, NJ: Erlbaum. Cohen, J. Cohen, P., West, S. G., & Aiken, L. S. (2003). Applied multiple regression/correlation analysis for the behavioral sciences (3rd edition). Mahwah, NJ: Erlbaum. Cooper, H., Camic, P. M., Long, D. L., Panter, A. T., Rindskopf, D., & Sher, K. J. (Eds.). (2012). APA handbook of research methods in psychology (Vols. 1–3). Washington, DC: American Psychological Association. Cumming, G., & Fidler, F. (2009). Confidence intervals: Better answers to better questions. Zeitschrift für Psychologie/Journal of Psychology, 217, 15–26. DaRos-Voseles, D. A., Collins, K. M. T., Onwuegbuzie, A. J., & Jiao, Q. G. (2008). Effect of self-perception on performance of graduate-level cooperative groups in research methodology courses. Journal of Instructional Psychology, 35, 254–259. Derryberry, D. R., Schou, S. B., & Conover, W. J. (2010). Teaching rank-based tests by emphasizing structural similarities to corresponding parametric tests. Journal of Statistics Education, 18, Retrieved May 9, 2012 from www.amstat.org/publications/jse/v18n1/derryberry.pdf. DeVaney, T. A. (2010). Anxiety and attitude of graduate students in on-campus vs. online statistics courses. Journal of Statistics Education, 18, Retrieved May 9, 2012 from www.amstat.org/publications/jse/v18n1/devaney.pdf. Dewey, J. (1997). How we think. New York: Dover Publications. Dinov, I. D., & Christou, N. (2009). Statistics online computational resource for education. Teaching Statistics, 31, 49–51. Dodge, Y. (Ed.) (2003). The Oxford dictionary of statistical terms. New York: Oxford University Press. Dolinsky, B. (2001). An active learning approach to teaching statistics. Teaching of Psychology, 28, 55–56. Dunn, D. S., Smith, R. A., & Beins, B. (Eds.) (2007). Best practices for teaching statistics and research methods in the behavioral sciences. Mahwah, NJ: Erlbaum. Embretson, S. E. (Ed.) (2010). Measuring psychological constructs: Advances in model-based approaches. Washington, DC: American Psychological Association. Enders, C. K. (2010). Applied missing data analysis. New York: Guilford. Everitt, B. S. (2002). The Cambridge dictionary of statistics (2nd edition). Cambridge, UK: Cambridge University Press. Everson, M., Zieffler, A., & Garfield, J. (2008). Implementing new reform guidelines in teaching introductory college statistics courses. Teaching Statistics, 30, 66–69. Ferreira, M. (2001). Building communities through role models, mentors, and hands-on-science. The School Community Journal, 11, 27–37. Fillion, G., Limayem, M., Laferrière, T., & Mantha, R. (2008). Integrating ICT into higher education: A study of onsite vs. online students’ and professors’ perceptions. International Journal of Web-Based Learning and Teaching Technologies, 3, 48–72.

Franklin, C., & Garfield, J. (2006). The guidelines for assessment and instruction in statistics education (GAISE) project: Developing statistics education guidelines for grades pre K12 and college courses, In G.F. Burrill, (Ed.), Thinking and reasoning with data and chance: Sixty-eighth annual NCTM yearbook (pp. 345–375). Reston, VA: National Council of Teachers of Mathematics. Friedrich, J., Buday, E., & Kerr, D. (2000). Statistical training in psychology: A national survey and commentary on undergraduate programs. Teaching of Psychology, 27, 248–257. Gal, I. (2003). Teaching for statistical literacy and services of statistics agencies. The American Statistician, 57, 80–84. Garfield, J., & Ben-Zvi, D. (2007). How students learn statistics revisited: A current review of research on teaching and learning statistics. International Statistical Review, 75, 372–396. Garfield, J., & Ben-Zvi, D. (2008). Developing students’ statistical reasoning: Connecting research and teaching practice. New York: Springer. Garfield, J., & delMas, R. (2010). A web site that provides resources for assessing students’ statistical literacy, reasoning and thinking. Teaching Statistics, 32, 2–7. Garfield, J., & Everson, M. (2009). Preparing teachers of statistics: A graduate course for future teachers. Journal of Statistics Education, 17 (2), Retrieved May 9, 2012 from www.amstat.org/publications/jse/v17n2/garfield.html. Garfield, J., Hogg, B., Schau, C., & Whittinghill, D. (2002). First courses in statistical science: The status of educational reform efforts. Journal of Statistics Education, 10(2), Retrieved May 9, 2012 from www.amstat. org/publications/jse/v10n2/garfield.html. Gelman, A., & Nolan, D. (2002). Teaching statistics: A bag of tricks. New York: Oxford University Press. Gilliland, D., & Melfi, V. (2010). A note on confidence interval estimation and margin of error. Journal of Statistics Education, 18(1), Retrieved May 9, 2012 from www.amstat.org/publications/jse/v18n1/gilliland.pdf. Gonzales, P., Guzmán, J. C., Partelow, L., Pahlke, E., Jocelyn, L., Kastberg, D., & Williams, T. (2003). Highlights from the trends in international mathematics and science study (TIMSS). U.S. Department of Education Institute of Education Sciences, National Center for Educational Statistics. Gruber, C. (Ed.) (2010) The score newsletter, Vol. XXXII. No. 3. Division 5, American Psychological Association, Retrieved July 1, 2010 from http://www.apa.org/divisions/div5/pdf/ July10Score.pdf. Hancock, G. R., & Mueller, R. O. (Eds.). (2010). The reviewer’s guide to quantitative methods in the social sciences. New York: Taylor & Francis. Hanley, G. L. (2004). E-learning and the science of instruction. Applied Cognitive Psychology, 18, 123–124. Härdle, W., Klinke, S., & Ziegenhagen, U. (2007). On the utility of e-learning in statistics. International Statistical Review, 75, 355–364. Harlow, L.L. (2005). The essence of multivariate thinking: Basic themes and methods. Mahwah, NJ: Erlbaum. Harlow, L. L. (2010). On scientific research: Invited commentary on the role of statistical modeling and hypothesis testing. Journal of Modern Applied Statistical Methods, 9, 348–358.

harlow

115

Harlow, L. L., Burkholder, G., & Morrow, J. (2002). Evaluating attitudes, skill and performance in a learning enhanced quantitative methods course: A structural modeling approach. Structural Equation Modeling Journal, 9, 413–430. Harlow, L. L., Burkholder, G., & Morrow, J. (2006). Engaging students in learning: An application with quantitative psychology. Teaching of Psychology, 33, 231–235. Harlow, L. L., Mulaik, S., & Steiger, J. (Eds.) (1997). What if there were no significance tests? Mahwah, NJ: Erlbaum. Helman, S., & Horswill, M. S. (2002). Does the introduction of non-traditional teaching techniques improve psychology undergraduates’ performance in statistics? Psychology Learning & Teaching, 2, 12–16. Hulme, J. (2007). Review of Handbook of demonstrations and activities in the teaching of psychology volume 1: Introductory, statistics, research methods and history. Psychology Learning & Teaching, 6, 164–165. Hulsizer, M. R., & Woolf, L. M. (2009). Guide to teaching statistics: Innovations and best practices. New York: WileyBlackwell. Jaccard, J., & Jacoby, J. (2009). Theory construction and model building skills: A practical guide for social scientists. New York: Guilford Press. Katayama, A. D. (2001). Bi-modal instructional practices in educational psychology: Mentoring and traditional instruction. Journal of Instructional Psychology, 28, 171–177. Kitchen, R. S., DePree, J., Celedón-Pattichis, S., & Brinkerhoff, J. (2007). Mathematics education at highly effective schools that serve the poor: Strategies for change. Mahwah, NJ: Erlbaum. Kline, R. B. (2004). Beyond significance testing: Reforming data analysis methods in behavioral research. Washington, DC: American Psychological Association. Kline, R. B. (2011). Principles and practice of structural equation modeling ( 3rd edition). New York: Guilford. Kolar, D. W., & McBride, C. A. (2003). Creating problems to solve problems: An interactive teaching technique for statistics courses. Teaching of Psychology, 30, 67–68. Kosoko-Lasaki, O., Sonnino, R. E., & Voytko, M. L. (2006). Mentoring for women and underrepresented minority faculty and students: experience at two institutions of higher education. Journal of the National Medical Association, 98(9), 1449–1459. Krause, U,-M., Stark, R., & Mandl, H. (2009). The effects of cooperative learning and feedback on e-learning in statistics. Learning and Instruction, 19, 158–170. La Greca, A. M. (2005). Editorial. Journal of Consulting and Clinical Psychology, 73, 3–5. Liu, T.-C., Lin, Y.-C., & Kinshuk (2010). The application of Simulation-Assisted Learning Statistics (SALS) for correcting misconceptions and improving understanding of correlation. Journal of Computer Assisted Learning, 26, 143–158. MacKinnon, D. P. (2008). Introduction to statistical mediation analysis. New York: Taylor & Francis. Malloy, T. E., & Jensen, G. C. (2001). Utah Virtual Lab: JAVA interactivity for teaching science and statistics on line. Behavior Research Methods, Instruments & Computers, 33, 282–286. Mazzolini, M., & Maddison, S. (2007). When to jump in: The role of the instructor in online discussion forums. Computers & Education, 49, 193–213. McGrath, R. E. (2011). Quantitative models in psychology. Washington, DC: American Psychological Association.

116

McQuillan, L., & Kennelly, L. (2005). New study finds U.S. math students consistently behind their peers around the world: Findings challenge conventional wisdom about U.S. math success in early grades. Washington, D.C.: American Institutes for Research. Meletiou-Mavrotheris, M, & Lee, C. (2002). Teaching students the stochastic nature of statistical concepts in an introductory statistics course. Statistics Education Research Journal, 1, 22–37. Mills, J. D. (2004). Students’ attitudes toward statistics: Implications for the future. College Student Journal, 38, 349–361. Millsap, R. E., Maydeu-Olivares, A. (Eds.) (2009). The Sage handbook of quantitative methods in psychology. Thousand Oaks, CA: Sage Publications. Mulhern, G., & Wylie, J. (2004). Changing levels of numeracy and other core mathematical skills among psychology undergraduates between 1992 and 2002. British Journal of Psychology, 95, 355–370. Mulhern, G., & Wylie, J. (2006). Mathematical prerequisites for learning statistics in psychology: Assessing core skills of numeracy and mathematical reasoning among undergraduates. Psychology Learning & Teaching, 5, 119–132. Neal-Barnett, A., Mitchell, M., & Boeltar, C. (2002). Faculty of color serving students, serving self: The psychology group. Teaching of Psychology, 29, 44–45. Nie, Y., & Lau, S. (2010). Differential relations of constructivist and didactic instruction to students’ cognition, motivation, and achievement. Learning and Instruction, 20, 411–423. Odgaard, E. C., & Fowler, R. L. (2010). Confidence intervals for effect sizes: Compliance and clinical significance in the journal of consulting and clinical psychology. Journal of Consulting and Clinical Psychology, 78, 287–297. Onwuegbuzie, A. J. (2000). Attitudes toward statistics assessments. Assessment and Evaluation in Higher Education, 25, 321–339. Onwuegbuzie, A. J., Collins, K. M. T., & Jiao, Q. G. (2009). Performance of cooperative learning groups in a postgraduate education research methodology course: The role of social interdependence. Active Learning in Higher Education, 10, 265–277. Onwuegbuzie, A. J., & Wilson, V. A. (2003). Statistics anxiety: Nature, etiology, antecedents, effects, and treatments-a comprehensive review of literature. Teaching in Higher Education, 8, 195–209. Peden, B. F. (2001). Correlational analysis and interpretation: Graphs prevent gaffes. Teaching of Psychology, 28, 129–131. Peterson, S. E., & Miller, J. A. (2004). Quality of college students’ experiences during cooperative learning. Social Psychology of Education, 7(2), 161–183. Piotrowski, C., Bagui, S. C., & Hemasinha, R. (2002). Development of a measure of statistics anxiety in graduate-level psychology students. Journal of Instructional Psychology, 29, 97–100. Quilici, J. L., & Mayer, R. E. (2002). Teaching students to recognize structural similarities between statistics word problems. Applied Cognitive Psychology, 16, 325–342. Rajecki, D. W., Appleby, D., Williams, C. C., Johnson, K., & Jeschke, M. P. (2005). Statistics can wait: Career plans activity and course preferences of American psychology undergraduates. Psychology Learning & Teaching, 4, 83–89. Ridgway, J., Nicholson, J., & McCusker, S. (2007). Teaching statistics—Despite its applications. Teaching Statistics, 29, 44–48.

t e a c h i n g q u a n t i tat i v e p s yc h o l o g y

Rodarte-Luna, B., & Sherry, A. (2008). Sex differences in the relation between statistics anxiety and cognitive/learning strategies. Contemporary Educational Psychology, 33, 327– 344. Rodgers, J. L. (2010). The epistemology of mathematical and statistical modeling: A quiet methodological revolution. American Psychologist, 65, 1–12. Romero, V. L., Berger, D. E., Healy, M. R., & Aberson, C. L. (2000). Using cognitive learning theory to design effective online statistics tutorials. Behavior Research Methods, Instruments & Computers, 32, 246–249. Rosenthal, R. & DiMatteo, M. R. (2001) Meta-analysis: recent developments in quantitative methods for literature reviews. Annual Review of Psychology, 52, 59–82. Rumsey, D. J. (2002). Statistical literacy as a goal for introductory statistics courses. Journal of Statistics Education, 10(3), Retrieved May 9, 2012 from http://www.amstat.org/publications/jse/v10n3/rumsey2.html. Saville, B. K., Zinn, T. E., Lawrence, N. K., Barron, K. E., & Andre, J. (2008). Teaching critical thinking in statistics and research methods. In Dunn, D. S., Halonen, J. S., & Smith, R. A. (Eds.), Teaching critical thinking in psychology: A handbook of best practices, (pp. 149–160). New York: WileyBlackwell. Schwartz, D., & Martin, T. (2004). Inventing to prepare for future learning: The hidden efficiency of encouraging original student production in statistics instruction. Cognition Instruction, 22, 129–184. Sen, S. (2004). Teaching quantitative thinking: Comment. Science, 304( 5670), 518. Simonton, D. K. (2004). Psychology’s status as a scientific discipline: Its empirical placement within an implicit hierarchy of the sciences. Review of General Psychology, 8, 59–67. Sosa, S., Berger, D. E., Saw, A. T., & Mary, J. C. (2010). Effectiveness of computer-assisted instruction in statistics: A meta-analysis. Review of Educational Research, published online August 12, 2010. DOI: 10.3102/0034654310378174 Stark, R., & Krause, U.-M. (2009). Effects of reflection prompts on learning outcomes and learning behaviour in statistics education. Learning Environments Research, 12, 209–223. Stevens, J. P. (2009). Applied multivariate statistics for the social sciences (5th ed.). New York: Routledge. Swingler, M. V., Bishop, P., & Swingler, K. M. (2009). SUMS: A flexible approach to the teaching and learning of statistics. Psychology Learning & Teaching, 8, 39–45.

Tomcho, T. J., Rice, D., Foels, R., Folmsbee, L., Vladescu, J., Lissman, R., Matulewicz, R., et al. (2009). APA’s learning objectives for research methods and statistics in practice: A multimethod analysis. Teaching of Psychology, 36, 84–89. Tremblay, P. F., Gardner, R. C., & Heipel, G. (2000). A model of the relationships among measures of affect, aptitude, and performance in introductory statistics. Canadian Journal of Behavioural Science, 32, 40–48. Upton, G. & Cook, I. (2008). A dictionary of statistics. New York: Oxford University Press. Velicer, W.F., & Jackson, D.N. (1990). Component analysis vs. common factor analysis: Some issues in selecting an appropriate procedure. Multivariate Behavioral Research, 25, 1–28. Ware, M. E. & Johnson, D. E. (Eds) (2000). Handbook of demonstrations and activities in the teaching of psychology: Introductory, statistics, research methods, and history, Vol. I (2nd ed.). Mahwah, NJ: Erlbaum. Watkins, A. E., Scheaffer, R. L., & Cobb, G. W. (2004). Statistics in action: Understanding a world of data. New York: Key College Publishing. Watson, J. M. (2006). Statistical literacy at school: Growth and goals. Mahwah, NJ: Erlbaum. Wender, K. F., & Muehlboeck, J.-S. (2003). Animated diagrams in teaching statistics. Behavior Research Methods, Instruments & Computers, 35, 255–258. Wilkinson, L., & The Task Force on Statistical Inference (1999). Statistical methods in psychology journals: Guidelines and explanations. American Psychologist, 54, 594–604. Wulff, S. S., & Wulff, D. H. (2004). “Of course I’m communicating, I lecture every day:” Enhancing teaching and learning in introductory statistics. Communication Education, 53, 92–103. Zeedyk, M. S. (2006). Detective work on statistics street: Teaching statistics though humorous analogy. Psychology Learning & Teaching, 5, 97–109. Zieffler, A., Garfield, J., Alt, S., Dupuis, D., Holleque, K., & Chang, B. (2008). What does research suggest about the teaching and learning of introductory statistics at the college level? A review of literature. Journal of Statistics Education, 16 (2), Retrieved May 9, 2012 from http:// www/amstat.org/publications/jse/v16n2/zieffler.html. Zirkel, S. (2002). Is there a place for me? Role models and academic identity among white students and students of color. Teachers College Record, 104, 357–376.

harlow

117

CHAPTER 7

Modern Test Theory

Roderick P. McDonald†

Abstract
This chapter provides a unified treatment of seven major topics in test theory. It relates modern theory based on item response modeling to classical linear modeling through the use of the common factor model. The topics include choosing a metric; measurement and measurement error (reliability); item selection; homogeneity and dimensionality; validity; equating tests; and comparing populations. The treatment of these problems makes no distribution assumptions.
Key Words: Test theory, metric, homogeneity, dimensionality, validity, equating, differential item functioning

Introduction
The subject of this chapter is very large, so the approach to it requires a broad brush and has to be somewhat selective. I apologize for this in advance. We begin, as is commonly the case, with questions of definition. Test theory consists of mathematical models for item scores and theory for the test scores derived from them by a scoring formula. Further, a test (a set of items) is designed to measure a quantifiable attribute or set of related attributes of examinees. The term “attributes” has its ordinary dictionary definition—properties or characteristics of the examinees. There is an inevitable looseness in these three statements, but they should serve as a platform for the following account.
The topics of test theory have developed in a piecemeal fashion, with little attempt to examine the relationships between them. We might date the beginnings of the “classical” period of test theory to Spearman’s papers on True Score Theory (1904a) and on the general factor model (1904b). The two models were illustrated by the same data set. However, for decades Psychometric Theory has treated them as foundations of two major, unrelated topics. The primary problems of classical test theory were the homogeneity, the reliability, and the validity of a test. These remain major topics in the modern era. The treatment of these problems rested on simple linear additive models—nothing could be simpler than Spearman’s true score model—but with an uneasy recognition that linear models were inappropriate for binary items. The interested reader would find Gulliksen’s (1950) neglected classic an excellent account of thinking in the classical period. The incorporation of Alan Birnbaum’s rigorous treatment of item response models in Lord and Novick (1968) marks the beginning of the modern era. Lord and Novick’s text, accompanied by Lord (1980), should still be required reading for any student or researcher who wishes to have a general understanding of the field.

† Editor’s Note: Sadly, Rod passed away in October of 2011. His contributions to quantitative methods are inestimable and he will be dearly missed. I would like to thank Aaron Boulton who completed the tables, figures, and proofs for this chapter.


The Lord and Novick text contains the necessary foundations for a unified treatment of test theory but leaves implicit the relationship between the linear models of classical theory and item response models for binary test items. A general misconception is still apparently widely held that test theory is made up of two parts—Classical Test Theory for quantitative data, and Item Response Theory (IRT) for binary data, with a great gulf fixed between them. The possibility of a unified treatment of linear and nonlinear models for item scores was adumbrated by McDonald (1967), who showed how an item response model could be approximated by a linear model using standard procedures, common in physics, for linearizing a nonlinear model. A more rigorous unified treatment, based on the generalized linear model of Nelder and Wedderburn (1972), was provided by Bartholomew. (See Bartholomew & Knott, 1999, and Skrondal & Rabe-Hesketh, 2004.) However, these accounts do not address the central problems of Test Theory. McDonald (1999) sought to present a “unified treatment” by applying the same psychometric concepts to a linear (unidimensional or multidimensional) model for quantitative data and to the parallel nonlinear model for binary data. This treatment shows how the linear model serves as a first approximation to the nonlinear model. Other authors have contributed to this unification of theory. See, for example, Jones and Thissen (2007) and Thissen and Wainer (2001).
This chapter contains a brief introduction to the treatment given by McDonald (1999), with some revision and with restructuring to exhibit parallels between linear and nonlinear models. I rely on more specialized accounts in the present volume to fill in details that I must omit here. Specifically, I will not examine standard estimation methods for item parameters. (See Hallberg, Wing, Wong, & Cook, Chapter 12, Volume 1; Steiner and Cook, Chapter 13, Volume 1.) Following the next section, which sets out the properties of the linear (factor) model and the parallel nonlinear (item response) model that we need, I will discuss the application of these models to seven problems in test theory. These are: (1) imposing a metric on the measured attribute, (2) measurement and error of measurement, (3) item selection, (4) homogeneity and dimensionality, (5) validity, (6) equating tests, and (7) comparing populations. The last section, not surprisingly, is general discussion.

The Models
With reference to notation, I will use uppercase italics for random variables, and lowercase Roman for the values they take or the scale on which they are distributed. I assume the reader is familiar with the algebra of expectations and with variances, covariances, and of course correlations. I will write an expected value—a mean—as E{ }, covariance as Cov{ }, and variance as Var{ }. This allows us to have E{Y | X = x} for the conditional mean of random Y when random X takes the value x, and Var{Y | X = x} for its conditional variance. Any change from, say, X to x signals a change from representing a random variable to representing a specific value for an individual or points on the scale of x. When a sentence defines a new term, the term will be written in italics. The models we consider are mathematical idealizations and can be regarded only as approximations to the behavior of any real measures (see McDonald, 2010).

A set of m items is given to a large sample of examinees. For our first model, we suppose that the responses to them can be coded by item scores that range over enough numerical values to apply a linear model as an acceptable approximation. The item scores might be subtest scores, or Likert-scaled scores: for example, coding Strongly agree, Agree, Indifferent, Disagree, and Strongly disagree as the integers 5, 4, 3, 2, and 1, respectively. We take as a suitable model Spearman’s general factor model, written in the form

Xj = μj + λj F + Uj ,   (1)

where Xj is the score on the jth item of a randomly chosen subject from a defined population, F is the (unobserved) value of the attribute given by the model, and Uj is a random interaction between the item and the examinee. (To simplify the presentation, it will be understood that the subscript j, wherever it appears, ranges from 1 to m. We leave out the formal j = 1, 2, . . . , m. All summations are over this range, unless otherwise stated.) Equation 1 is a simple regression model, with F the “explanatory” variable, Xj the response variable, and Uj the residual. Accordingly, F and Uj are uncorrelated. We assume that the interaction terms Uj of distinct items are uncorrelated and write ψj² for their variances. Then by the algebra of expectations, the variance of Xj is given by

Var{Xj} = λj² Var{F} + ψj²,   (2)

and the covariance of two distinct item scores Xj and Xk by

Cov{Xj , Xk} = λj λk Var{F}.   (3)

(The reader who is familiar with the factor analysis of correlations needs to be told that for test theory applications we use covariances.) In this mathematical model, F is the common factor that accounts for the covariance of the item scores through the relation given by Equation 3. The common factor F is linked to the real world of applications by identifying it with a measure of the abstract attribute that the items share as their common property. The interaction terms Uj—the unique components—are linked to applications by identifying them with measures of specific properties of the items. This identification requires the strong assumption that their specific properties are strictly unique to the items. The assumption is realized only approximately in practice. (The unique components may also contain an error of replication that cannot be separated in a single set of observations.) The regression constant μj is the item score mean, representing item difficulty in cognitive applications. (Strictly, this should be item facility—the easier the item, the higher the mean score.) The regression slope λj measures the extent to which the item discriminates between subjects at different levels of the attribute. It is traditionally termed a factor loading. (For reasons that will appear, I would like to call it the discrimination parameter, the counterpart term in item response models, but tradition is too strong.) The parameter ψj² is the unique variance of the item score.
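A minimal simulation sketch can make Equations 1 through 3 concrete. The loadings, unique standard deviations, and item means below are hypothetical values chosen only for illustration; the check simply compares sample variances and covariances with the values the model implies when Var{F} = 1.

```python
# Simulate Spearman's general factor model (Equation 1) and check Equations 2 and 3.
# All parameter values here are hypothetical, chosen only for illustration.
import random

random.seed(1)
loadings = [0.7, 0.5, 0.6]      # hypothetical lambda_j
unique_sd = [0.6, 0.8, 0.7]     # hypothetical psi_j (unique standard deviations)
mu = [3.0, 2.5, 3.5]            # hypothetical item means mu_j
n = 200_000

data = []
for _ in range(n):
    f = random.gauss(0.0, 1.0)  # latent trait with Var{F} = 1
    data.append([m + l * f + random.gauss(0.0, s)
                 for m, l, s in zip(mu, loadings, unique_sd)])

def cov(a, b):
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    return sum((x - ma) * (y - mb) for x, y in zip(a, b)) / len(a)

x1 = [row[0] for row in data]
x2 = [row[1] for row in data]
# Equation 2: Var{X_j} = lambda_j**2 + psi_j**2
print(cov(x1, x1), loadings[0] ** 2 + unique_sd[0] ** 2)
# Equation 3: Cov{X_j, X_k} = lambda_j * lambda_k
print(cov(x1, x2), loadings[0] * loadings[1])
```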

We can write Equation 1 as

E{Xj | F = f} = μj + λj f,   (4)

the alternative way to write a regression—that is, as the expected value of Xj conditioning on a value f of F. Defining

Uj = Xj − E{Xj | F = f}   (5)

returns us to Equation 1, but in the form of Equation 4 it allows us to write

Var{Xj | F = f} = ψj²,   (6)

and

Cov{Xj , Xk | F = f} = 0.   (7)

That is, for a fixed value of the attribute the item scores are uncorrelated. Equation 7 is a weak version of the Principle of Local Independence, which governs item response models. A strong form of this principle requires that X1, X2, . . . , Xm are mutually statistically independent for fixed f. The strong form of the principle implies the weak form, and if the data have a multivariate normal distribution, then the weak form implies the strong form. Fitting and testing the model using just limited information from the item covariances is commonly good enough. Higher moments of the joint distribution are extremely unstable in sampling and cannot be relied on to improve estimates. It is implicit in applications of factor models that the weak form of the principle of local independence is used to fit the model, but the strong form is intended. That is, we do not suppose that the common factor accounts for the covariances between the items but leaves other forms of statistical dependence unaccounted for.

Equation 4 gives a simple linear relationship between the item scores and the factor. We might instead suppose that each item score has a distinct relationship to the factor, writing

E{Xj | F = f} = γj(f),   (8)

where γj is intended to represent a different nonlinear function for each item. There has been very little work on this kind of generality (see, for example, McDonald, 1967; Yalcin & Amemiya, 2001). A generalized linear model, following Nelder and Wedderburn (1972), is obtained by substituting a common nonlinear link function γ for a set of distinct functions γj, relating the item scores to a linear function of f, with

E{Xj | F = f} = γ(μj + λj f).   (9)

In principle there is a wide choice of link functions. In applications the choice is motivated by metric properties of the item scores (see Skrondal & Rabe-Hesketh, 2004, for a fairly comprehensive list). For our purposes, it must suffice to consider just two. The first, which we already have, is the simple linear relationship given by Equation 4, with the link function just the identity function. This would be chosen whenever the item score ranges appear to allow a linear model, especially if we can suppose—approximately—multivariate normality. The only alternative we will consider is motivated by binary data, items scored with just two values—0 and 1. This will include multiple choice cognitive items, with one answer—the correct answer, we hope—scored 1 and the rest 0. It also includes noncognitive items with one response keyed for the attribute to be measured, and the other response(s) nonkeyed—for example, a checklist of symptoms of a behavior disorder. In such a case, it is easily shown that E{Xj | F = f} is the probability of giving the keyed response, conditioned on f. To avoid a clash of notations I will write this as:

E{Xj | F = f} = P{Xj = 1 | F = f} = γ(aj + bj f),   (10)

where aj replaces μj and bj replaces λj.

Figure 7.1 P(F) plotted against F.

The former notation is set in stone for the factor model and will be needed again. The notation in Equation 10 is standard for regression coefficients.

The choice of a link function is motivated by the fact that for binary data, E{Xj | F = f} becomes a probability, which is bounded below by 0 and above by 1. If we suppose that the link function should be monotone, then we naturally choose a cumulative distribution function for γ. There are many distribution functions, but theory has been developed for just two—namely, the cumulative normal curve (the normal ogive) and the cumulative form of the logistic distribution (the logistic function). I will just write N(z) = N(aj + bj f) for the normal ogive—given by the integral from negative infinity to z of the normal density function

n(t) = [1/(2π)^(1/2)] exp[−(1/2)t²],   (11)

and L(aj + bj f) for the logistic function—given by

L(z) = 1/[1 + exp(−Dz)].   (12)

With D set equal to 1.701, these functions are virtually indistinguishable, but each has distinct and useful mathematical properties, and I will switch between them as needed. When I wish to refer to both, I will just write P(aj + bj f). Figure 7.1 can be regarded as a graphical representation of either.

There are three accepted ways to define the parameters of this model. The first we already have in Equation 10. I will refer to it as the regression parameterization. The second—Lord’s (1980) parameterization—is conventionally written as

P{Xj = 1 | F = f} = γ(aj*(θ − bj*)).   (13)

(But it is written without the stars, which I have added to distinguish it from my regression notation.) Lord’s notation is firmly established—even to the use of θ for “ability”—but will not be used here. It has the advantage that the constant bj* corresponds to the position on the attribute scale where P(f) is 0.5. Its disadvantage is that it does not generalize to multidimensional models (or allow a simple notational distinction between a random variable and values of it). The regression parameterization in Equation 10 easily generalizes, as we will see. The third parameterization comes from the work of Christoffersson (1975). He developed the normal ogive form of the model by supposing that an underlying tendency Xj* to give the keyed response follows the linear model (Equation 1) with a normal density function. The keyed response is given when Xj* is greater than a threshold value τj. This assumption leads to the parameterization

P{Xj = 1 | F = f} = N[(λj f − τj)/ψj].   (14)

The chief advantage of this form of the model is that with a proper choice of origin and unit for F (see next section), the factor loadings λj are bounded by +1 and −1. They are much more stable than the corresponding parameters in the other forms, which range over the entire real line, and the loadings can be interpreted by the established standards of the common factor model. I will refer to Equation 14 as giving the item factor parameterization.

The functions in Equations 10, 13, and 14 are variously referred to as item characteristic functions, item characteristic curves, and item response functions or curves. Although not an established convention, we can apply the same terms to the linear function in Equation 4. The constant in these models is referred to as the difficulty parameter (although this hardly applies to noncognitive items), and the slope parameter—the multiplier of F—as the discrimination parameter. The threshold parameter τj actually measures in the direction of difficulty for cognitive items, whereas aj measures in the direction of easiness. In Lord’s parameterization, bj* can be called a location parameter. It is the location on the scale of the point of inflexion of the response curve. These parameters serve the same functions as in the counterpart linear model (Equation 4).
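The three parameterizations can be translated into one another. A small sketch, assuming the underlying tendency Xj* is standardized so that ψj² = 1 − λj², converts the threshold and loading values of the item factor parameterization (Equation 14) into the regression parameterization (Equation 10); the numerical inputs are the LSAT6 estimates reported later in Table 7.3, used here only as a check.

```python
# Convert item factor parameters (tau_j, lambda_j) of Equation 14 into the regression
# parameters (a_j, b_j) of Equation 10, assuming a standardized underlying tendency so
# that psi_j = sqrt(1 - lambda_j**2).  Under that assumption,
#   N[(lambda_j * f - tau_j) / psi_j] = N(a_j + b_j * f),
# so b_j = lambda_j / psi_j and a_j = -tau_j / psi_j.
import math

tau = [-1.433, -0.550, -0.133, -0.716, -1.126]   # thresholds reported in Table 7.3
lam = [0.381, 0.379, 0.478, 0.377, 0.345]        # loadings reported in Table 7.3

for j, (t, l) in enumerate(zip(tau, lam), start=1):
    psi = math.sqrt(1.0 - l * l)
    print(f"item {j}: a_j = {-t / psi:.3f}, b_j = {l / psi:.3f}")
# Item 1, for example, comes out near a_1 = 1.549 and b_1 = 0.412, matching Table 7.3.
```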

The model

P(f) = L(aj + bj f)   (15)

is referred to as the two-parameter (2PL) model, a term pointing to its two item parameters and the choice of (L)ogistic function. We can write a one-parameter (1PL) model by equating the slope parameters, giving

P(f) = L(aj + bf).   (16)

This is a member of the family of models identified by the work of Rasch (1960). It is often referred to as “the” Rasch model. A three-parameter (3PL) model,

P(f) = cj + (1 − cj)L(aj + bj f),   (17)

allows for the effects of guessing in multiple-choice items. It will not be considered further here.¹

As in the linear case, we can fit the model by the weak form of the Principle of Local Independence, using just bivariate relations between items. (See McDonald, 1982, 1997; Muthén, 1984.) These estimation procedures can be referred to as limited or bivariate information methods. Alternatively we can fit the model using the strong form of the principle, which for binary items reduces to the statement

Prob{X1 = 1, X2 = 1, . . . , Xm = 1 | F = f}
   = Prob{X1 = 1 | F = f} × Prob{X2 = 1 | F = f} × . . . × Prob{Xm = 1 | F = f}.   (18)

This is a way of saying that the items are related in probability only through their relations with the attribute. Methods of estimating the item parameters using this strong form of the principle are referred to as full information methods (e.g., Bock & Aitkin, 1981).
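A short sketch of the quantity that full information methods work with may help: the probability of a complete response pattern, built from the strong form of local independence (Equation 18) and averaged over a normal latent trait. The 2PL parameters and the response pattern in the sketch are placeholders, and the crude grid integration stands in for the quadrature used by real programs; this is not the Bock and Aitkin algorithm itself.

```python
# Probability of one response pattern under the strong form of local independence
# (Equation 18), marginalized over a standard normal latent trait by a simple grid.
# The parameters and pattern are placeholders, not estimates from any data set.
import math

a = [1.5, 0.6, 0.2]     # placeholder intercepts a_j
b = [0.4, 0.5, 0.6]     # placeholder slopes b_j
pattern = [1, 0, 1]     # one observed response pattern x_1, ..., x_m

def p_item(aj, bj, f):
    """2PL item response function L(a_j + b_j * f) with D = 1.701 (Equation 12)."""
    return 1.0 / (1.0 + math.exp(-1.701 * (aj + bj * f)))

def pattern_prob(xs):
    # Integrate prod_j P_j(f)**x_j * (1 - P_j(f))**(1 - x_j) over f ~ N(0, 1).
    total = 0.0
    for k in range(801):
        f = -4.0 + 0.01 * k
        weight = math.exp(-0.5 * f * f) / math.sqrt(2.0 * math.pi) * 0.01
        cond = 1.0
        for x, aj, bj in zip(xs, a, b):
            p = p_item(aj, bj, f)
            cond *= p if x == 1 else (1.0 - p)
        total += cond * weight
    return total

print(pattern_prob(pattern))
```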


There is at present no clear advantage to fitting these models using the limited information from pairwise relations or the full information from patterns of responses. Each has advantages and disadvantages. (See McDonald, 1999, Chapter 12.) We certainly assume the strong form in applications of the fitted model.

Because the common factor model and the item response models developed independently, established terminology for the former refers to F as the common factor of the items, whereas Item Response Theory calls it a latent trait or latent variable. From here I will refer to it as a latent trait in both the linear factor model and the item response models and regard it as a measure of the attribute in the metric supplied by the model. (See Metric section below.)

It might be desirable to give an example of an application of Equation 1 to a suitable set of quantitative data, and an application of Equation 10 to a set of binary data. But for brevity, and to exhibit parallels, I will deliberately apply both to a single, much-analyzed set: the LSAT6 data set. Responses from 1000 examinees to a section of a Law School Admissions Test (items 11–15 of Section 6) have been reanalyzed by a number of psychometric theorists (e.g., Christoffersson, 1975; Bock & Aitkin, 1981; Muthén, 1978). The items themselves have been lost. Table 7.1 gives the proportion pj passing each item, and the item variances and covariances. Table 7.2 gives the fitted parameters and the discrepancies between the matrix of sample covariances S and the matrix of fitted covariances Σ̂ (given by Equations 2 and 3), using a standard factor analysis program.² The corresponding normal ogive model, fitted to the same data set by the NOHARM program (McDonald, 1982, 1997), gives the parameters in Table 7.3, in the item factor and the regression parameterizations.

Table 7.1. LSAT-6—Difficulties and Covariance Matrix

Item    pj       1        2        3        4        5
1      .924    .0702    .664     .524     .710     .806
2      .708    .0089    .2063    .418     .553     .630
3      .553    .0130    .0259    .2472    .445     .490
4      .763    .0050    .0120    .0231    .1808    .678
5      .870    .0021    .0132    .0089    .0142    .1131

Table 7.2. LSAT-6—Spearman Analysis

Item    Loadings λ    Unique Variances ψ²
1         .0605           .0665
2         .1345           .1882
3         .1861           .2126
4         .1174           .1670
5         .0745           .1076

Discrepancy Matrix (Sample − Fitted Covariance Matrix, S − Σ̂)

  .0      .0008    .0017   −.0021   −.0024
  .0008   .0       .0009   −.0038    .0032
  .0017   .0009    .0       .0012   −.0050
 −.0021  −.0038    .0012    .0       .0054
 −.0024   .0032   −.0050    .0054    .0
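The discrepancy matrix in Table 7.2 can be reproduced directly, since under Equation 3 (with Var{F} = 1) the fitted covariance of items j and k is simply the product of their loadings. A minimal sketch using the Table 7.1 covariances and the Table 7.2 loadings:

```python
# Reproduce the discrepancy matrix of Table 7.2: sample covariance minus fitted
# covariance lambda_j * lambda_k (Equation 3 with Var{F} = 1).
lam = [0.0605, 0.1345, 0.1861, 0.1174, 0.0745]       # loadings from Table 7.2
# Sample covariances from Table 7.1 (below-diagonal entries).
s = {(2, 1): 0.0089, (3, 1): 0.0130, (3, 2): 0.0259,
     (4, 1): 0.0050, (4, 2): 0.0120, (4, 3): 0.0231,
     (5, 1): 0.0021, (5, 2): 0.0132, (5, 3): 0.0089, (5, 4): 0.0142}

for (j, k), cov_jk in sorted(s.items()):
    fitted = lam[j - 1] * lam[k - 1]
    print(f"items {j},{k}: S - fitted = {cov_jk - fitted:+.4f}")
# Items 2,1 come out near +0.0008, items 5,1 near -0.0024, and so on, as in Table 7.2.
```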

Table 7.3. LSAT-6—NOHARM Analysis

Item     τ̂j        λ̂j       aj       bj
1      −1.433     .381    1.549    .412
2      −0.550     .379    0.595    .410
3      −0.133     .478    0.152    .544
4      −0.716     .377    0.773    .406
5      −1.126     .345    1.200    .368

Any direct relations between the parameters of the factor model in Table 7.2 and those of the normal ogive are not visible to inspection, but Table 7.4 gives the values of the fitted normal-ogive item response functions N(aj + bj f), with the corresponding values of the approximating linear functions μj + λj f from the factor model (Equation 4) in parentheses, for six values of f. In this example, the approximation is remarkably good.
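The entries of Table 7.4 can be checked with a few lines of code, assuming only the rounded parameter values reported in Tables 7.1 through 7.3.

```python
# Reproduce entries of Table 7.4: the normal-ogive response function N(a_j + b_j * f)
# (parameters from Table 7.3) and the approximating linear function mu_j + lambda_j * f
# (mu_j = p_j from Table 7.1, lambda_j from Table 7.2).
import math

a = [1.549, 0.595, 0.152, 0.773, 1.200]          # Table 7.3
b = [0.412, 0.410, 0.544, 0.406, 0.368]          # Table 7.3
mu = [0.924, 0.708, 0.553, 0.763, 0.870]         # Table 7.1 (p_j)
lam = [0.0605, 0.1345, 0.1861, 0.1174, 0.0745]   # Table 7.2

def N(z):
    """Cumulative normal distribution function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

j = 0  # item 1; change j to inspect the other items
for f in (-3, -2, -1, 0, 1, 2):
    print(f"f = {f:+d}: N = {N(a[j] + b[j] * f):.2f}, linear = {mu[j] + lam[j] * f:.2f}")
# For item 1 this gives .62 (.74) at f = -3 up to .99 (1.04) at f = 2, as in Table 7.4.
```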

Some Test Theory Problems
We turn now to a series of problems that can be solved with the use of the models.

Metric
The attribute that we first aim to model, and then aim to measure, must be quantifiable in principle. By this I mean that it must have ordinal properties, admitting of “more” or “less.” However, its metric—not only the origin and unit of measurement, but its entire calibration—is not given by data and generally must be imposed by the model. Imagine a meter stick drawn on a rubber sheet, with the millimeters able to be stretched or compressed at will. The units in which the attribute is measured are determined by the choice of (1) a link function and (2) an origin and unit. The distribution of F in the calibration population will depend on the choice of link function and be determined by it. As we will see, it is also possible to determine a metric by choosing a formula score—a function of the item scores. The simple sum of the item scores is an obvious choice but not the only possibility.


To calibrate the model, we fit it to the responses of a (hopefully large, hopefully random) sample of subjects from a suitable population. At least initially we choose the origin of the scale for the latent trait as the mean of F and the unit as its standard deviation. That is, we standardize the latent trait in the population chosen. (This simplifies Equations 2 and 3, setting Var{F} = 1.) The entire metric, and consequently the distribution of F, is then determined by the link function.³

If, in the course of developing the test items, a set is chosen that fits a model with equal coefficients of F, and the link function is monotone, then it can be shown that the difference f1 − f2 in F between two subpopulations is independent of the items chosen to measure them. This property was termed specific objectivity by Rasch (1960). It is sometimes thought of as a special and valuable property of the 1PL model and somewhat hyperbolically referred to as “item-free measurement” (e.g., Wright & Stone, 1979). However, specific objectivity is a property of any model, linear or nonlinear, in which the item slope parameters do not vary (see McDonald, 1999, p. 425). The claim is sometimes also made that the 1PL model, identified as the Rasch model, gives interval scale measurement—equality of attribute differences as measured by the model. However, the choice of distinct link functions for 1PL models gives distinct and mutually contradictory “interval” scales. To repeat, the metric of the measurements is imposed by the model chosen, and alternative metrics are always available. Note that specific objectivity does not make the items of a test interchangeable measures of the attribute. The error of measurement in a 1PL model depends on the difficulty parameter of the item, a fact that makes a problem for equating tests (see Alternate Forms and Test Equating).


Table 7.4. LSAT-6—Normal-Ogive Item Response Functions

Item   f = −3        −2           −1           0            1            2
1      .62 (.74)    .76 (.80)    .87 (.86)    .94 (.92)    .97 (.98)    .99 (1.04)
2      .26 (.30)    .41 (.44)    .57 (.57)    .72 (.71)    .84 (.84)    .92 (.98)
3      .07 (−.01)   .17 (.18)    .35 (.37)    .56 (.55)    .76 (.74)    .89 (.93)
4      .33 (.41)    .48 (.53)    .64 (.65)    .78 (.76)    .88 (.88)    .94 (1.00)
5      .54 (.65)    .68 (.72)    .80 (.80)    .88 (.87)    .94 (.94)    .97 (1.02)

Note. Values in parentheses are from the approximating linear function μj + λj f (Equation 4).

To calibrate the model, we need to sample a carefully chosen population, but we do not need to use the population mean and standard deviation to define the scale. Alternatives are available that do not rest on the population chosen for their origin and unit. A simple alternative measure—long antedating IRT—is the raw sum or the mean of the item scores. The test score, the raw sum of the item scores,

Y = Σ Xj ,   (19)

and the mean score,

M = Y/m,   (20)

provide alternatives that many users of tests may justifiably prefer. For example, a psychiatrist may prefer a count of symptoms of a behavior disorder to a score on the latent trait. In the linear model, from Equation 4,

E{Y | F = f} = Σ μj + (Σ λj) f,   (21)

or, from Equation 1,

Y = Σ μj + (Σ λj) F + Σ Uj .   (22)

We can rewrite Equation 21 as

t = μY + λ. f,   (23)

where μY is the test score mean, and λ. the sum of the loadings λj. We can rewrite Equation 22 as

Y = T + E = [μY + λ. F] + Σ Uj .   (24)

Here, T is the true score of classical test theory, and E the error of measurement of the attribute by the total test score. By Equation 23,

f = (t − μY)/λ.,   (25)

we can define

F^Y = (Y − μY)/λ.,   (26)

and rescale E into E^Y = E/λ., so that

F^Y = F + E^Y.   (27)

That is, the rescaled test score F^Y is a measure of the latent trait F with error of measurement E^Y, the error in F from a rescaling of Y. Note that the correspondence between T and F fails, and consequently the linear model fails, when values of F would give values of T outside the range of possible values of Y. Before proceeding we should note that in Spearman’s original true score model, as in Equation 24, the distinction between true score and error remained undefined for decades, and various devices were proposed to deal with the lack of definition (see McDonald, 1999, Ch. 5). If, as is usual, F has been standardized in the calibration population, Equation 23 gives a simple relation between the attribute so measured and the true score in sum score units, whereas Equation 24 gives an appropriate foundation for the classical model. This relationship justifies using the raw sum score or mean score as a measure of the attribute, each with an origin and unit that can have a direct and important meaning. Consider, as an example, a set of items with the same Likert-scale format, where a respondent’s mean score might represent, say, modal disagreement with all of the statements measuring an attitude.

In any model with a nonlinear link function, we have, correspondingly, from Equation 19,

E{Y | F = f} = Σ γ(aj + bj f).   (28)

For the case of binary items, Equation 28 becomes

t = E{Y | F = f} = Σ P(aj + bj f).   (29)

Equation 29 will give a nonlinear monotone relationship, with f unbounded on the real line and t bounded by 0 and the number of items. The graph of t on f is known as the test characteristic curve. The relation given by Equation 23 for the linear model is also a test characteristic curve, although not always recognized as such.

Table 7.5. LSAT-6—Normal-Ogive Item Response Functions

f       −3      −2      −1      0       1       2
NO     1.82    2.51    3.27    3.88    4.39    4.72
Lin    2.10    2.67    3.24    3.82    4.39    4.96

Note. NO, normal-ogive; Lin, linear

In the LSAT6 data, as Table 7.5 shows, the test characteristic curve from the linear model gives a remarkably good approximation to that from the normal ogive model in this example. Undoubtedly cases can be found in which the approximation is poor.

The metric given by the latent trait f is unbounded on the real line. We can regard Equations 28 and 29 as applicable to any function Y of the item scores. The sum score metric is bounded by 0 and m, and, an equivalent, the proportion keyed—the mean score—is bounded by 0 and 1. The metrics are related nonlinearly by the (formula score) test characteristic curve. As we will see, an important function of the model is to supply information about measurement error in the latent trait, in the sum or mean score, or in any function of the item scores. Because the attribute to be calibrated is not itself bounded, in any sense, it might seem that we should regard the metric given by the latent trait as more fundamental than that given by a (formula) score from a set of items chosen to measure it. Following Lord (1980), we would regard the test characteristic curve as exhibiting “distortions of mental measurement” (Lord, 1980, p. 49) resulting from the test score chosen.

When the metric of the scale has been determined by the choice of a link function, the distribution of F and of any formula score is determined by that choice and is not open to arbitrary assumption. McDonald (1967) gave a method for estimating the moments of the distribution of F from the distribution of the observations. This provides a simple test of departure from normality. Bartholomew and Knott (1999) have taken the view that the distribution is arbitrary along with the choice of link function.
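A minimal sketch reproducing the comparison in Table 7.5, using only the rounded parameter estimates from Tables 7.1 through 7.3:

```python
# Test characteristic curves for the LSAT6: the sum of normal-ogive item response
# functions (Equation 29, parameters from Table 7.3) and the linear curve
# t = mu_Y + lambda. * f (Equation 23, with mu_Y and lambda. from Tables 7.1 and 7.2).
import math

a = [1.549, 0.595, 0.152, 0.773, 1.200]
b = [0.412, 0.410, 0.544, 0.406, 0.368]
mu = [0.924, 0.708, 0.553, 0.763, 0.870]
lam = [0.0605, 0.1345, 0.1861, 0.1174, 0.0745]

def N(z):
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

mu_Y = sum(mu)        # about 3.818
lam_dot = sum(lam)    # about 0.573
for f in (-3, -2, -1, 0, 1, 2):
    tcc_no = sum(N(aj + bj * f) for aj, bj in zip(a, b))
    tcc_lin = mu_Y + lam_dot * f
    print(f"f = {f:+d}: normal ogive {tcc_no:.2f}, linear {tcc_lin:.2f}")
# Compare the resulting rows with the NO and Lin rows of Table 7.5.
```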

Measurement and Error of Measurement (Reliability)
When we have calibrated the attribute by fitting a model, we may then wish to use the created test to assign a measure of the attribute to an examinee. We also wish to obtain a standard error of measurement as scaled by the model or put confidence bounds on the measurement. We may wish to obtain a measurement of an attribute from one or more members of the calibration population or examinees from a different population. This consideration does not affect the measurement process. I suggest as an axiom that a measure of a defined quantity (which requires a calibrated scale) equals the quantity to be measured plus an error of measurement. Also the variance of the measurement should equal the variance of the measured quantity plus the variance of measurement error. Spearman’s true score model essentially expresses this axiom. In the linear (factor) model, Equation 24 satisfies the axiom, with

Var{Y} = Var{T} + Var{E},   (30)

and from

Var{T} = (Σ λj)²,   (31)

and

Var{E} = Σ ψj²,   (32)

we have

Var{Y} = Var{T} + Var{E} = (Σ λj)² + Σ ψj².   (33)

Thus, the fitted factor model gives us the variance of the error of measurement from the unique variances of the item scores, and the true score variance from the factor loadings. Note that in this realization of Spearman’s (1904a) classical true score model, the error of measurement arises in a single administration of the test and results from specific properties of the items, although it may include confounded errors of unrealized replication.

One of the oldest concepts in Test Theory is, of course, the reliability coefficient of a total test score, defined as the ratio of the variance of the true score to the variance of the total score, and conventionally denoted by ρYY. A somewhat neglected alternative concept is the reliability index, the correlation between the true score and the total score, denoted by ρYT. The two coefficients are related by

ρYY = ρYT².   (34)

However, for decades these quantities remained undefined, like the true score itself. From the parameters of the factor model, the reliability coefficient is given by

ρYY = ρYT² = ω = λ.²/(λ.² + Σ ψj²).   (35)

This is also Var{F}/Var{F^Y}, the ratio of the variance of the factor score to the variance of its measure obtained by rescaling the total score in Equation 26.

Coefficient omega, defined by Equation 35, was originally given by McDonald (1970). From the parameters of the linear model in Table 7.2, it is easy to compute coefficient omega. It is 0.307, which, of course, is very low. Guttman (1945) gave a lower bound to the reliability of a total test score, which was then further studied by Cronbach (1951) and is commonly known as Cronbach’s alpha. I prefer to acknowledge Guttman’s original contribution and call it the Guttman-Cronbach alpha. It is defined by

α = [m/(m − 1)][1 − (Σ Var{Xj})/Var{Y}].   (36)

A sample estimate of G.-C. alpha is still commonly used for reliability, although Guttman (1945) clearly showed that it was a lower bound. Novick and Lewis (1967) showed that alpha gives reliability if the items are true score-equivalent, meaning that the items fit the model

Xj = T + Ej ,   (37)

and McDonald (1970) showed further that it gives reliability if and only if the factor loadings in Equation 1 are equal. In applications, G.-C. alpha is often a very good lower bound to coefficient omega. In the LSAT6 data, from Table 7.1 we obtain a value for alpha of 0.295, very little less than omega. The case for using omega rests on the fact that it is a simple byproduct of a prior analysis that gives other useful results, as will be seen.

A reliability coefficient is not an end in itself. From its origin, it was a device for overcoming the problem of replicating test scores, to obtain a standard error of measurement. The measurement error variance can be expected to be approximately invariant over populations, whereas the reliability varies widely (with the true score variance) from population to population. This variability can be seen in lists of reliability estimates from different populations, recorded in handbooks of tests and measurements.
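Both coefficients are easily computed from the tables already given. A minimal sketch, using only the rounded values in Tables 7.1 and 7.2:

```python
# Coefficient omega (Equation 35) from the Table 7.2 loadings and unique variances, and
# Guttman-Cronbach alpha (Equation 36) from the Table 7.1 covariance matrix.
lam = [0.0605, 0.1345, 0.1861, 0.1174, 0.0745]        # Table 7.2 loadings
psi2 = [0.0665, 0.1882, 0.2126, 0.1670, 0.1076]       # Table 7.2 unique variances
item_var = [0.0702, 0.2063, 0.2472, 0.1808, 0.1131]   # Table 7.1 diagonal
covs = [0.0089, 0.0130, 0.0259, 0.0050, 0.0120,
        0.0231, 0.0021, 0.0132, 0.0089, 0.0142]       # Table 7.1 below-diagonal entries

m = len(lam)
lam_dot = sum(lam)
omega = lam_dot ** 2 / (lam_dot ** 2 + sum(psi2))

var_Y = sum(item_var) + 2.0 * sum(covs)    # variance of the sum score
alpha = (m / (m - 1)) * (1.0 - sum(item_var) / var_Y)

print(f"omega = {omega:.3f}")   # about 0.307
print(f"alpha = {alpha:.3f}")   # about 0.295
```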


The simple sum score, possibly rescaled to a mean by dividing by m, or rescaled to the metric of the common factor by Equation 26, is not the best measure of the attribute. With scaling to latent-trait metric (and latent-trait variance 1), the weighted sum

F^B = Σ wj (Xj − μj),   (38)

with

wj = [1/Σ(λj²/ψj²)][λj/ψj²],   (39)

gives a measure (resulting from Bartlett, 1937),

F^B = [1/Σ(λj²/ψj²)] Σ (λj/ψj²)(Xj − μj) = F + E^B,   (40)

with minimum error variance

Var{E^B} = 1/Σ(λj²/ψj²),   (41)

and maximum reliability coefficient 1/[1 + Var{E^B}]. For the LSAT6 data, the maximum reliability, given by these weights, is 0.309—hardly an improvement on 0.307 from the simple sum score.

The reciprocal of the error variance in Equation 41,

I = Σ(λj²/ψj²),   (42)

is a sum of m independent terms, one for each item. Each makes a separate contribution to the reduction of the measurement error. The more informative items are those with the largest ratio of squared loading to unique variance. We can take the test information to be defined by this reciprocal and the item information to be the contribution of each term in Equation 42.⁴ The usefulness of the information concept for us rests on the additivity of these terms, enabling, as we will see, the independent selection of good items. The weights given by Equation 39 minimize the measurement error variance and maximize the information and reliability, among all possible weighted sums of the items.⁵ The maximum reliability can be written in terms of information as I/(I + 1).

The raw sum score is an equally weighted sum of the item scores. Scaled to F^Y as in Equation 26, it has error variance

Var{E^Y} = Σ ψj²/(Σ λj)²,   (43)

and test score (sum score) information

I^Y = (Σ λj)²/(Σ ψj²).   (44)

The ratio

RE = I^Y/I = Var{E^B}/Var{E^Y},   (45)

which is the ratio of the information in the simple sum score to the maximum information in the test (given by Equation 38 with Equation 39), is the relative efficiency of this test score, necessarily less than 1. The relative efficiency of the sum score for the LSAT6 is 0.994. There are other possibilities—for example, we can obtain the relative efficiency of scores from a subset of items.⁶
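The information quantities just defined can be computed directly from Table 7.2. A minimal sketch:

```python
# Test information (Equation 42), the minimum error variance (Equation 41), and the
# maximum reliability I / (I + 1), computed from the Table 7.2 estimates for the LSAT6.
lam = [0.0605, 0.1345, 0.1861, 0.1174, 0.0745]    # Table 7.2 loadings
psi2 = [0.0665, 0.1882, 0.2126, 0.1670, 0.1076]   # Table 7.2 unique variances

item_info = [l * l / p for l, p in zip(lam, psi2)]   # item information terms
info = sum(item_info)                                # Equation 42
var_EB = 1.0 / info                                  # Equation 41
max_rel = info / (info + 1.0)                        # about 0.309

print("item information:", [round(i, 3) for i in item_info])
print(f"I = {info:.3f}, Var(E^B) = {var_EB:.3f}, max reliability = {max_rel:.3f}")
```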

Given the error variance, we have the corresponding standard error of measurement as its square root. By the Central Limit Theorem, the error of measurement will approach a normal distribution as the number of items becomes large, because it is a sum of m independent unique components. We can then put confidence bounds on an examinee’s score by standard methods, without imposing distribution assumptions on the model. Using the linear model for the LSAT6 gives an error variance Var{E} = Σ ψj² of the raw score Y equal to 0.742 and a standard error of measurement of 0.876. Ninety-five percent confidence bounds on an examinee’s score of 3 would be 3 ± 1.96 × 0.876—that is, 1.31 and 4.69, which nearly covers the range (0 to 5) of the scale. The usefulness of the LSAT6 clearly lies in the pleasures of analysis it has given to psychometric theorists rather than in its precision as a measuring instrument. We can rescale these numbers to the scale of the latent trait, from Σ λj = 0.573 and the test mean μY = Σ μj = 3.818, giving f^Y = (3 − 3.818)/0.573 = −1.43, with error variance 0.742/0.573² = 2.260, standard error of measurement 1.503, and confidence bounds −1.43 ± 1.96 × 1.503—that is, −4.377 and 1.517.

On the face of it, the linear (factor) model makes the strong assumption that the errors of measurement are homoscedastic. This assumption is easily tested and will very commonly be found to be false. A classical method for testing it results from Mollenkopf (1949) and Thorndike (1951). (See also Lord, 1984; Feldt et al., 1985; and Qualls-Payne, 1992.) The principle, which is capable of refinement, is: We split the items into two parallel halves, and plot the variance of the difference (which estimates error variance) against the sum (which estimates the true test score). We can call this a practical or empirical method. A method based on IRT is given later. When the item response model fits, there should be little to choose between the methods, and the model-based method has best theoretical justification. The resulting standard error of measurement, a function of test score, is referred to as a conditional standard error of measurement.
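A rough sketch of the empirical split-half procedure just described may be useful. The binary responses are simulated from logistic item response functions with placeholder parameters, purely to show the bookkeeping; with real data the observed item scores would be used instead.

```python
# Empirical (split-half) conditional standard error of measurement: split the items into
# two halves, take the variance of the half-score difference within groups defined by the
# total score, and read it as an error-variance estimate at that total.
import math
import random
from collections import defaultdict

random.seed(0)
a = [0.8, 0.4, 0.0, -0.4, -0.8, 0.6, 0.2, -0.2, -0.6, 0.0]   # placeholder intercepts
b = [0.9] * 10                                               # placeholder slopes

responses = []
for _ in range(20_000):
    f = random.gauss(0.0, 1.0)
    responses.append([1 if random.random() < 1.0 / (1.0 + math.exp(-(aj + bj * f))) else 0
                      for aj, bj in zip(a, b)])

by_total = defaultdict(list)
for x in responses:
    half1, half2 = sum(x[0::2]), sum(x[1::2])   # odd- and even-numbered items
    by_total[half1 + half2].append(half1 - half2)

for total in sorted(by_total):
    d = by_total[total]
    if len(d) < 30:
        continue
    mean_d = sum(d) / len(d)
    var_d = sum((v - mean_d) ** 2 for v in d) / len(d)   # estimates error variance here
    print(f"total = {total:2d}: conditional SEM about {math.sqrt(var_d):.2f}")
```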

In a nonlinear model, the item information and the test information—and hence the corresponding measurement error variance—are, by theory, functions of the position of the examinee on the scale. We can use the model to define a true score and a conditional error of measurement for any formula score—any quantity calculated from the item scores. Let S be any such score. Then a corresponding true score tS is given by, for example,

tS = E{S | F = f} = g(f).   (46)

The error of measurement of the formula score is E^S = S − tS. Then

S = T^S + E^S = g(F) + E^S.   (47)

If the function g(f) is invertible, then formally there is a nonlinear transformation

g⁻¹(S) = F + E_f^S = F + g⁻¹(E^S).   (48)

Thus, g⁻¹(S) is a measure of F, with error of measurement E_f^S = g⁻¹(E^S). Now suppose we take

S^W = Σ wj Xj ,   (49)

a weighted sum with fixed weights. (With weights equal to 1, this gives the simple raw sum.) Then,

T^W = E{S^W | F = f} = Σ wj Pj(F),   (49a)

and

E^W = Σ wj Xj − Σ wj Pj(F).   (49b)

Here T^W and E^W represent the true and error components of the weighted sum S^W given by Equation 49. Then,

Var{E^W} = Σ wj² Pj(1 − Pj).   (50)

If we intend to measure the attribute in the metric of the chosen formula score (the chosen weighted sum), then this last result is all we need. A common choice will be the raw sum score with Y = T Y + EY,

(51)

Var{E Y } = {Pj (f )[1 − P(f )]}.

(52)

where

This result supplies the conditional standard error of measurement of the sum score, based on the item response model. As remarked already, when the model fits, model-based conditional standard errors will agree closely with the empirical results from classical methods. A measure of the latent trait f for any individual can be obtained from her/his formula score sw in Equation 48 by equating it to its expectation, writing sW = tW —that is,  wj xj =  wj Pj (f ).

(53)

Equation 53 can be solved for a measure fW by plotting tW against f, and finding the point where the equality is satisfied—or by an equivalent computer program. This corresponds to applying Equation 47. Then,

FW = F + EfW,   (54)

where FW is the measure of F given by Equation 53. The corresponding error of measurement, EfW, has variance

Var{EfW} = [Σ wj² Pj(f)(1 − Pj(f))]/[Σ wj P'j(f)]²,   (55)

where P'j(f) is the gradient of Pj(f). Any choice of the weights will give a measure of f for each individual, from Equation 53, and a corresponding variance of measurement error, given by Equation 55. If we set weights equal to the discrimination (slope) parameters of the items, then the variance of the measurement error is the minimum possible from all choices of weights. This is the choice wj = bj in the regression parameterization (Equation 10), a∗j in Lord's parameterization (Equation 13), or λj/ψj in the item factor parameterization (Equation 14). In Lord's (1980) original account of these results, Equation 13 is used. Here it is convenient to use Equation 14, to exhibit the relation to the linear model. That is, the formula score

s = Σ (λj/ψj)[xj − μj]   (56)

gives a measure fb of f for any individual from the solution fb of

Σ (λj/ψj) xj = Σ (λj/ψj) Pj(f).   (57)

It has the property that

Fb = F + Eb,   (58)

with

Var{Fb} = Var{F} + Var{Eb},   (59)

where

Var{F} = 1 and Var{Eb} = Var{Fb | F = f} = 1/I(f).   (60)

Here,

I(f) = Σ [P'j(f)]²/[Pj(f)(1 − Pj(f))],   (61)

and P'j(f) is the gradient of Pj(f). As in the linear case, I(f) is the information, the reciprocal of the error variance. The choice of these weights minimizes the error variance and, equivalently, maximizes the information from the data. We notice that Equation 61 parallels Equation 41 for the linear model. The measures fb are nonlinear counterparts of the Bartlett scores given by Equations 37 and 38. The information is the sum of terms that constitute item information—the contribution of each item to the reduction of error.

There is no counterpart of the reliability coefficient in the nonlinear model. This is not a defect, from the modern perspective. However, Raju et al. (2007) have suggested inventing a conditional reliability for a sum score, defined as the unit complement of the ratio of the conditional variance of the error of measurement to the total variance. The intended use of this index is to compare the conditional precision of two tests of the same attribute. It is not clear whether such an index has any advantages over the use of relative efficiency as suggested by Lord (1980) for such purposes.

Unlike the linear case, the item information is a function of f. Any other formula score must give less information and a greater measurement error variance than the solution of Equation 57—a counterpart of Bartlett's minimum error measure Equation 37 with 38—at every point on the scale of f. In particular, the sum score, Y = Σ Xj, with true score TY = Σ Pj(F), gives a conditional error variance of Y

Var{EY} = Σ Pj(f)[1 − Pj(f)],   (62)

and conditional error variance of the measure of f

Var{Ef} = [Σ Pj(f)(1 − Pj(f))]/[Σ P'j(f)]²,   (63)

parallel to Equation 37. At every point of the scale, this must be greater than the minimum variance given by Equation 57. In applications, the difference may be small, and for some purposes it would be reasonable to use the total test score as a measure or its transformation onto the latent-trait scale. The information function and its reciprocal—the error variance—change radically under transformations of the scale. For example, at the floor and ceiling of the test, the error variance of the latent trait becomes infinite, whereas that of the true score becomes zero. The unbounded metric shows clearly that we cannot get good estimates for examinees for whom the test is too easy or too difficult. At these extremes, the true score is constrained from changing in response to the attribute and conveys no information about it.

For the LSAT6 data, Table 7.6 gives the item information functions. Table 7.7 summarizes the further results relevant to this section. These are: (1) TCC: the test characteristic curve—the sum of the item characteristic curves; (2) I(f): the test information function—the sum of the item information functions; (3) Var{Eb}: the minimum error variance available from the test; (4) S.E.M.(f): the standard error of measurement from fb; (5) the TCC from the linear model; and (6) Var{EfY}: the (constant) error variance of F from the linear model. We observe that all the items are easy for this population, and we have lower standard errors of measurement (more information) for examinees of low ability than for examinees of high ability. For some purposes, this might be desirable, whereas for others it would be a defect of the test. We can immediately see how we could select items from a large calibrated set to form a test information curve of desired form. This is the subject of the next section.
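The computations behind Tables 7.6 and 7.7 can be sketched in a few lines of code. The fragment below is only an illustration: it assumes the normal ogive 2PL in the item factor parameterization, Pj(f) = Φ[(λj f − τj)/ψj] with ψj² = 1 − λj², and uses placeholder loadings and thresholds rather than the LSAT6 estimates. It evaluates the item information terms of Equation 61, the test information I(f), the minimum error variance 1/I(f), and the conditional error variance of the sum-score-based measure from Equation 63.

```python
# Illustrative sketch: item/test information for a normal-ogive 2PL in the item
# factor parameterization P_j(f) = Phi[(lambda_j*f - tau_j)/psi_j], psi_j^2 = 1 - lambda_j^2.
# The loadings and thresholds below are hypothetical placeholders, NOT the LSAT6 values.
import numpy as np
from scipy.stats import norm

lam = np.array([0.38, 0.40, 0.55, 0.42, 0.35])   # hypothetical item factor loadings
tau = np.array([-1.4, -0.6, -0.3, -0.8, -1.1])   # hypothetical thresholds
psi = np.sqrt(1.0 - lam**2)

def p(f):
    """Item characteristic curves P_j(f)."""
    return norm.cdf((lam * f - tau) / psi)

def p_prime(f):
    """Gradients P'_j(f)."""
    return norm.pdf((lam * f - tau) / psi) * (lam / psi)

def item_info(f):
    """Item information terms P'_j(f)^2 / [P_j(f)(1 - P_j(f))] (cf. Equation 61)."""
    pj, dj = p(f), p_prime(f)
    return dj**2 / (pj * (1.0 - pj))

for f in (-3, -2, -1, 0, 1, 2, 3):
    info = item_info(f)                       # item information functions (cf. Table 7.6)
    I_f = info.sum()                          # test information I(f)
    var_eb = 1.0 / I_f                        # minimum error variance, Var{E_b} = 1/I(f)
    tcc = p(f).sum()                          # test characteristic curve (true sum score)
    # conditional error variance of the sum-score-based measure of f (Equation 63)
    var_ef_sum = (p(f) * (1 - p(f))).sum() / p_prime(f).sum()**2
    print(f"f={f:+d}  TCC={tcc:5.2f}  I(f)={I_f:.3f}  "
          f"SEM(f)={np.sqrt(var_eb):.2f}  Var{{Ef,sum}}={var_ef_sum:.3f}")
```

With the actual calibrated item parameters substituted for the placeholders, a loop of this kind should reproduce the pattern seen in Table 7.7, with information falling and the standard error of measurement rising toward the upper end of the scale.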

Table 7.6. Item Information Functions

            f = -3     -2     -1      0      1      2      3
Item 1        .114   .086   .054   .031   .016   .008   .004
Item 2        .092   .117   .118   .095   .063   .037   .020
Item 3        .059   .120   .192   .210   .154   .083   .038
Item 4        .104   .119   .108   .080   .050   .028   .015
Item 5        .097   .084   .062   .040   .024   .013   .008

Over the last couple of decades or so in psychometric theory, there has been a general movement away from the method of maximum likelihood to Bayesian estimators, both for parameters of the items and for predicting values of the attributes of examinees (see, for example, Bartholomew & Knott, 1999). I need to point out that there remains some confusion in terminology about the "estimation" of latent traits, with no distinction made between measurement and prediction, and some writers loosely referring to obtaining, assessing, constructing, or finding "proxies" for factor scores—values of individual latent traits. To discuss the Bayesian treatment, I find it convenient to distinguish measurement, as treated so far, and prediction.

My treatment of measurement and errors of measurement has been free of distribution assumptions. Rather than measures of factor scores/latent variables, in the sense employed here, we can ask for best predictors of them from any information we have about the subjects. In the Spearman model, given no information about a subject beyond his/her item scores, we can use the regression of the latent variable on the item scores as a best linear predictor, minimizing the residual variance. This predictor, given by Thomson (1934), takes the form

fT = E{F | X1 = x1, ..., Xm = xm} = [1/(1 + I)] Σ (λj/ψj²)(xj − μj).   (64)

Thomson's predictor decomposes the measured quantity F into orthogonal components, giving

F = FT + ET,   (65)

with

Var{F} = Var{FT} + Var{ET} = I/(1 + I) + 1/(1 + I) = 1.   (66)

This may be contrasted with the Bartlett measure, which decomposes the measure FB into orthogonal components,

FB = F + EB,   (67)

with

Var{FB} = Var{F} + Var{EB} = 1 + 1/I.   (68)

We note also that

Var{FT} = 1/Var{FB} < Var{F} < Var{FB},   (69)

Var{ET} < Var{EB},   (70)

Table 7.7. LSAT6—Summary of 2PL Results

  f            -3      -2      -1       0       1       2       3
  TCC         1.82    2.52    3.25    3.90    4.39    4.69    4.85
  I(f)        0.466   0.526   0.534   0.455   0.307   0.170   0.084
  Var{Eb}     2.146   1.901   1.873   2.198   3.257   5.882  11.904
  S.E.M.(f)   1.47    1.38    1.37    1.48    1.81    2.43    3.44

  Linear approximation:
  TCC         2.10    2.67    3.24    3.82    4.39    4.96    5.59
  Var{EY}     2.23    2.23    2.23    2.23    2.23    2.23    2.23

and

E{FT | F = f} = [I/(1 + I)] f.   (71)

The regression predictor is a shrunken, conditionally biased estimator of the latent variable. The standard error of prediction is less than the standard error of measurement. It is not presently clear what would motivate the choice between measurement and prediction in applications of the model. Prediction appears to be the current “default” option. If it is found empirically that the latent trait has a normal distribution, then the regression predictor is also a Bayes predictor (see Bartholomew & Knott, 1999). In the corresponding 2PL model, if it turns out that empirically the latent trait has a normal distribution, then we can obtain a Bayes predictor by methods described in Bartholomew and Knott (1999) and Skrondall and Rabe-Hesketh (2004). The Bayes predictor is, again, a shrunken, conditionally biased estimator.
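A minimal sketch of the contrast between measurement and prediction in the linear model may clarify the preceding comparison. The loadings, unique variances, means, and item scores below are hypothetical; the two functions implement the Bartlett-type measure and the Thomson regression predictor of Equation 64.

```python
# Sketch contrasting the Bartlett measure (minimum-error measurement) with the
# Thomson/regression predictor (Equation 64) in a unidimensional linear model.
# All numerical inputs are illustrative placeholders, not values from the text.
import numpy as np

lam = np.array([0.7, 0.6, 0.8, 0.5])        # hypothetical factor loadings
psi2 = np.array([0.51, 0.64, 0.36, 0.75])   # hypothetical unique variances
mu = np.array([3.0, 2.5, 3.2, 2.8])         # hypothetical item means

I = np.sum(lam**2 / psi2)                   # information, the reciprocal of the error variance

def bartlett(x):
    """Minimum-error-variance measure of f (weights lambda_j / psi_j^2)."""
    return np.sum((lam / psi2) * (x - mu)) / I

def thomson(x):
    """Regression (Thomson) predictor of f, Equation 64: a shrunken counterpart."""
    return np.sum((lam / psi2) * (x - mu)) / (1.0 + I)

x = np.array([4.0, 3.0, 4.5, 3.5])          # one examinee's item scores (hypothetical)
fb, ft = bartlett(x), thomson(x)
print(f"I = {I:.3f},  Var{{E_b}} = {1/I:.3f}")
print(f"Bartlett measure  f_b = {fb:.3f}")
print(f"Thomson predictor f_T = {ft:.3f}  (equals f_b * I/(1+I) = {fb * I/(1+I):.3f})")
```

The final line makes the shrinkage explicit: in the unidimensional case the predictor is simply the measure multiplied by I/(1 + I), so the two give the same rank order of examinees.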

Item Selection In developing a test, it is common practice to try out a set of items on a calibration sample, of which only a subset will be kept for actual application— measuring examinees with the final calibrated test. We need a convenient way to choose a “best” subset. It may also happen that the items under trial seem insufficient to measure the attribute well enough, and we wish to know how many more need to be written. The conception of our freedom to shorten or lengthen a test measuring a given attribute contains, at least implicitly, the recognition that the attribute is not “operationally” defined by just the set of items chosen to measure it. The possibility of shortening or lengthening a test for an attribute rests on an idealization. In effect, we suppose that the items written come from a quasi-infinite set of items that would, if written and administered, define and measure the attribute precisely. Such a quasi-infinite set has been called a behavior domain, or a universe of content. I prefer to call it an item domain. Although it will virtually never be the case that the m items we have are a random sample from an item domain, it is necessary to think of them as obtained from it and to take the true score or latent trait to be determined by it. The limit results justifying this do not depend on random sampling. They do depend on the strong assumption that we know how to realize indefinitely more items measuring just the attribute we intend to measure. This requires very careful conceptualization. In Classical Test Theory, a large number of heuristic devices for item selection have been developed, 130


with varying degrees of theoretical motivation. These are no longer needed (see McDonald, 1999, Chapter 11). In the linear case, with a set of items fitting a single-factor model, we see immediately that a best subset of m items would contain the items with the largest information values, λj²/ψj², thus yielding the smallest error variance and largest reliability for a given number of items. As an example in miniature, if we wanted the best three items from the LSAT6, using the linear model as approximating it well enough, then we would take items 3, 2, and 4, with information 0.163, 0.096, and 0.082, respectively, rejecting items 1 and 5, with information 0.055 and 0.052, respectively. Keeping the first three gives test information 0.341 and error variance 2.93, to be compared with information 0.448 and error variance 2.232 from the full set.

The nonlinear counterpart procedure is more complex. We use the item information functions to select a subset of items giving a desirable test information function, recognizing that we may not be able to minimize conditional error variance at all levels of the latent trait, and may wish to have "small" error variance in specified intervals. This depends on the purpose of the test. A careful study of the item information functions in Table 7.6 and the way they give different orders at different points of the scale will indicate the possibilities and difficulties of the task. Of course, the principles just adduced for eliminating items from a set apply also to adding items of known characteristics.

For the problem of "prophesying" the effect of adding items that are not yet available, Modern Test Theory adds little to the classical treatment. The Spearman-Brown classical "prophesy" formula (Spearman, 1910; Brown, 1910) allowed the prediction of the reliability of a lengthened test, of r items, from the reliability of a test of m items. It required the very strong condition that the items in the given test be parallel. In Classical Test Theory, this condition requires that each measures a shared true score with equal error variance. Interpreting this in terms of the linear factor model, the condition is that each of the m items has the same factor loading and the same unique variance. An assumption is also needed that the additional items have the same loading and unique variance. The one mild advantage of applying the factor model over the classical treatment is that the condition and assumption can be weakened considerably. If the m given items fit the Spearman model, we can define an approximate reliability for them as

ω = λ.²/(λ.² + ψ.²),   (72)

where λ. is the sum of the loadings and ψ.² the sum of the unique variances. The Spearman-Brown formula is derived in Classical Test Theory as a special case of G.-C. alpha, and can also be derived from the Spearman model with equal loadings and equal unique variances (see McDonald, 1999, Chapter 6). The formula is given by

ρm = mρ1/[(m − 1)ρ1 + 1].   (73)

This is an expression for the reliability of a test of m parallel items from the reliability of just one of them. We can use it with Equation 72 to obtain the reliability of a projected test of r items with the strong condition eliminated, and the assumption weakened to the hope that the added items have the same average loadings and the same average unique variances as the given set. This is still a strong demand on the item writer. There is no clear strategy in the conceptually parallel model for binary data, allowing the investigator to predict the number of items needed to meet a criterion for error variance. For this purpose, it is not unreasonable to use the linear model as an approximation and apply the modernized classical formula. This is perhaps a little better than consulting a crystal ball, and perhaps the question is not of great importance. The most important function of the behavior domain concept is that it serves to determine the latent trait or true score as the score on a test of infinite length. The behavior domain gives a clear distinction between these scores and a measure of them from a chosen set of items. In Lord and Novick’s account, there is room for alternative treatments of a true score as either the score on a test of infinite length or as the mean of a “propensity distribution.” A propensity distribution is the distribution of the score obtained when one examinee is brainwashed to forget previous responses and retested many times under ideal conditions (but see McDonald, 2003).
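As a small sketch of this use of Equations 72 and 73, the fragment below computes omega from hypothetical loadings and unique variances, backs out the implied single-item reliability, and projects the reliability of a lengthened test under the weakened assumption just described (added items with the same average loading and unique variance).

```python
# Sketch: coefficient omega (Equation 72) for items fitting the Spearman model,
# and a Spearman-Brown projection (Equation 73) for a lengthened test.
# The loadings and unique variances are illustrative placeholders.
import numpy as np

lam = np.array([0.55, 0.60, 0.70, 0.50, 0.45])    # hypothetical loadings
psi2 = np.array([0.70, 0.64, 0.51, 0.75, 0.80])   # hypothetical unique variances

def omega(lam, psi2):
    """omega = (sum lambda)^2 / [(sum lambda)^2 + sum psi^2]  (Equation 72)."""
    s = lam.sum()
    return s**2 / (s**2 + psi2.sum())

def spearman_brown(rho_1, r):
    """Reliability of r 'parallel' items from the single-item reliability rho_1 (Equation 73)."""
    return r * rho_1 / ((r - 1) * rho_1 + 1)

m = len(lam)
om_m = omega(lam, psi2)
rho_1 = om_m / (m - (m - 1) * om_m)    # invert Equation 73 to get a single-item reliability
print(f"omega for {m} items: {om_m:.3f}")
print(f"projected reliability for 10 items: {spearman_brown(rho_1, 10):.3f}")
```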

Homogeneity and the Dimensionality of Tests

An unexamined assumption of the previous discussion is that the items are measures of just one attribute. In the classical period, this was discussed as the question of test homogeneity. In terms of the Greek root, the question is whether the items are of the same (homo-) kind (genos). In that period,

a remarkable number of indices or heuristic devices were invented to measure or test the extent to which a set of items is homogeneous. These were based on rather intuitive notions of what homogeneity means. Hattie (1984, 1985) studied the behavior of 87 of these and found that only one could be recommended. As expected from McDonald (1981), this exception was based on a check to see whether the item scores fit a model with a single latent trait. Indeed, we may now take it that what was always intended by the term “homogeneous test” is one whose item scores fit a model with a single latent trait. This follows from our identification of the latent trait in the mathematical model with the attribute as measured by the item responses. In the early literature, psychometric theorists often treated the single common factor and the m unique factors as on the same conceptual level and described the model as containing m + 1 factors. This way of expressing it makes an ambiguity over what we should mean by the dimensionality of a test.7 Writing the linear model as the expected value of an item score for fixed factor score, as in Equation 4, and the item response model as the corresponding Equation 10, we regard f as the single dimension on which the attribute varies. We now consider the possibility of writing alternative p-dimensional models. I will just illustrate this possibility with two-dimensional models, writing E{Xj |F1 = f1 , F2 = f2 } = μj + λ1 f1 + λ2 f2 , (74) with Uj = Xj − E{Xj |F1 = f1 , F2 = f2 },

(75)

a 2PL model for quantitative data, or Prob{Xj = 1} = P(aj + bj1 f1 + bj2 f2 ),

(76)

a two-latent-trait model for binary data, in regression equation notation. This can also be written as

Prob{Xj = 1} = P[(λj1 f1 + λj2 f2 − τj)/ψj],   (77)

the obvious extension of Equation 10. The extension of the factor model to multiple dimensions has a long history. The history of the corresponding latent trait models is short, and unfortunately it has not always been recognized that multidimensional item response models require the techniques invented for multiple-factor models if we are to fit and interpret them in applications to real data. The latent traits in models of this kind are correlated, and each of the latent traits is standardized. The parameters of the linear model are

the item means μj , the factor loadings λj1 and λj2 , the unique variances ψj2 , and, for this model, the correlation between the latent traits, which we will write as φ12 . Correspondingly, the parameters of the two-dimensional item response model are aj , bj1 , bj2 , and the correlation of the latent traits φ12 . In two or more dimensions, in addition to the problem of choosing an origin and unit for each axis, we face the very old problem of choosing an orientation of the axes in the two- or p-dimensional space—the rotation problem of classical factor analysis. In exploratory studies, this problem has usually been solved by fitting the model on some convenient basis and then transforming the factor loadings to Thurstonian simple structure, with uncorrelated (orthogonal) factors or correlated (oblique) factors (see, for example, Mulaik, 2010). In the context of test construction, we write items to measure an attribute and should not need to use exploratory methods. Even so, in the early stages of developing and calibrating a test, the conceptual denotation of the attribute may be unclear, and it may be that the attribute is conceived at a high level of abstraction from behavior, and the domain of possible items divides into subdomains. The paradigm case is the set of correlated primary mental abilities into which Thurstone divided Spearman’s general “intelligence”—scholastic ability. If the items are written so that they fall neatly into the subdomains, they form clusters that, if fitted separately, are homogeneous and fit the unidimensional models given by Equations 1 or 10. Jointly, they fit multidimensional models here represented by Equations 75 and 77, with correlated factors, and an item with nonzero factor loadings/slope parameters on one latent trait has zero factor loadings on the other. The items are said to be factorially simple, belonging clearly to just one attribute. This case is commonly referred to as having independent clusters (see McDonald 1999, Chapter 9). We may not succeed in creating pure clusters of items measuring just one latent trait. Some items may be factorially complex, with nonzero loadings on two or more factors. Although pure clusters are a desirable goal of measurement, at least the aim should be to create enough factorially simple items for these to form a basis for analyzing the measurement properties of the complex items and possibly eliminating them. (An example following should make these statements clearer.) Without such a basis, we cannot be sure what we are measuring (McDonald, 1999, Chapter 9, calls this case an independent clusters basis). 132


Reckase (2009) has provided a very different treatment of multidimensional item response models. Reckase seeks to describe the multidimensional space without reference to the methods developed in common factor modeling to determine what is measured. This interesting development awaits evaluation. In the linear Equation 4, and the nonlinear counterpart in Equation 10, if the items form independent clusters, then a measure of each latent trait with minimum error variance is given by the corresponding cluster of items, with the same expressions (Equations 37 and 65) as for the unidimensional case. Even if the factors are highly correlated, the errors of measurement are uncorrelated. I suggest that we call this the case of pure measurement, with each attribute measured by just its own items and with uncorrelated measurement errors. This is an important property, because generally we would not wish to have the measurement of one ability, say, affected by items measuring another correlated ability. Some writers would reject the distinction made here between measurement and prediction, regarding both as “estimation.” Bartholomew and Knott (1999), for example, suggest the use of Bayes predictors in place of the measures I recommend. The Bayes predictors have the property that the predicted value of one attribute of an examinee is increased or decreased by a high or low value of another. Thus, being good at English improves an examinee’s mathematics score. This effect can be described as “measurement contamination” or “borrowing strength from extraneous information,” depending on whether we wish to measure or to predict. If, indeed, the intention is to predict the value of an attribute, then we can use items belonging to related attributes and any other information about the examinee (e.g., socioeconomic status, educational history, etc.) that is relevant to prediction. My view is that prediction from extraneous information is not measurement, and it would require a distinct research motive. On the face of it, measurement models are designed for the purpose of measurement. As an example of a multidimensional model, I will give a brief account of an example in McDonald (1999, Chapter 14). Fifteen items taken from the ACT Mathematics Test have the item stems listed in Table 7.8. It seems reasonable to describe items 1 through 5 as measuring geometry achievement, and items 6 through 10 as measuring algebra achievement. Items 11 through 16 are less readily

Table 7.8. Fifteen ACT Mathematics Items

  Item   Item Stem Description
   1     Angles in a right triangle
   2     Areas of bisected triangles
   3     Length hypotenuse—right triangle
   4     Length adjacent—right triangle
   5     Area trapezoid
   6     2√28 + 3√175
   7     1/(√2 − 1)
   8     (−3)² + 3⁻²
   9     x, for which x(x − 2)/[(x − 1)(x − 2)] is undefined
  10     2² + 2⁰ + 2⁻²
  11     Application of 73/3 + 17.85 + 61/2
  12     Slope of line 2x + 3y + 6 = 0
  13     Radius of circle given circumference
  14     Speed given distance and time
  15     Longest diagonal in box

Table 7.9. ACT Independent Clusters Basis Solution

            Loadings
  Item      I        II      Uniqueness
   1      .766               .413
   2      .642               .588
   3      .451               .814
   4      .604               .636
   5      .485               .765
   6               .439      .809
   7               .502      .650
   8               .386      .851
   9               .666      .556
  10               .388      .849
  11      .365     .367      .534
  12      .355     .363      .551
  13      .358     .349      .567
  14      .223     .436      .615
  15     −.335     .548      .859

classified. We fit the model given by Equation 78, specifying a corresponding pattern of zero and nonzero coefficients, allowing the last group of items to be, possibly, composites of geometry and algebra. The fitted parameters are given in Table 7.9.8 The correlation between the latent traits is 0.739. The last five items appear to be fairly equally balanced combinations of geometry and algebra abilities. The clusters formed by the first two sets give simple item characteristic curves and the subtest characteristic curves given in Table 7.10. The complex items in the last group give the subtest characteristic surface tabulated also in Table 7.10. The simplicity of the basis supplied by the pure clusters is what gives us an understanding of the complex items in the last group. Without the basis for interpretation supplied by the geometry and algebra items, it would be difficult, if not impossible, to determine what we are measuring and clearly impossible in models of higher dimensionality. The information functions and error variances have the same structure, with uncorrelated errors for formula scores from the first two sets and high correlations between the errors of measurement of the

two latent traits as given by the last set. We could have pure measurements by keeping only the first 10 items.
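A small check on the independent clusters solution may clarify how Table 7.9 hangs together. Assuming the usual common factor decomposition of a standardized underlying item variable, the uniqueness is one minus the communality λ'Φλ, where Φ contains the factor correlation of 0.739; the sketch below recovers the tabled uniquenesses for three of the complex items from their loadings.

```python
# Sketch: uniqueness = 1 - lambda' Phi lambda for correlated factors (phi = 0.739),
# assuming standardized underlying item variables, as a check on Table 7.9.
import numpy as np

phi = 0.739
Phi = np.array([[1.0, phi], [phi, 1.0]])

# loadings (I, II) of three "mixed" ACT items, taken from Table 7.9
loadings = {11: (0.365, 0.367), 14: (0.223, 0.436), 15: (-0.335, 0.548)}

for item, lam in loadings.items():
    lam = np.array(lam)
    communality = lam @ Phi @ lam          # variance accounted for by the two factors
    print(f"item {item}: uniqueness = {1 - communality:.3f}")
# The printed values should be close to the tabled uniquenesses .534, .615, and .859.
```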

Validity

The classical problem of determining the extent to which a test score is a valid measure of an attribute remains with us, although not quite in its classical form. I accept as a definition the statement: "A test score is valid to the extent that it measures the attribute of the respondents that it is employed to measure, in the population(s) in which it is used." Early validity theory was influenced by an extremely behaviorist psychology and a logical positivist philosophy of science. The attributes we wish to measure were regarded as invented, convenient, fictional "constructs." This view led to a recognition of three forms of validity—namely, predictive validity, concurrent validity, and content validity, with hesitant admission of a fourth—"construct" validity. The many predictive validities of a test were its abilities to predict external measures. Concurrent validity required correlating the test with another test of

Table 7.10. ACT Test Characteristic Curves

Subtest s1 (items 1–5):
  f1:    -2     -1      0      1      2
  TCC:  .149   .326   .586   .821   .938

Subtest s2 (items 6–10):
  f2:    -2     -1      0      1      2
  TCC:  .051   .121   .260   .471   .679

Subtest s3 (items 11–15), characteristic surface over f1 (rows) and f2 (columns):
  f1\f2:   -2      -1       0       1       2
   -2     .026    .076    .175    .325    .507
   -1     .042    .110    .236    .415    .614
    0     .085    .182    .333    .521    .708
    1     .162    .289    .452    .622    .770
    2     .271    .418    .580    .698    .795

the same name. Content validity, regarded with suspicion by behaviorists, rested on the “subjective” judgment that the item contents were indicators of a common attribute. Largely through the work of Cronbach and Meehl (1955), and that of Messick (1989), the common view now seems to be that there is one concept of validity, still called construct validity (out of pure habit), with predictive, concurrent, and content validity seen as evidence of it. Validation would now include all forms of evidence that the score is an acceptable measure of a specified attribute. My account will be limited to those forms of evidence that rest on the models we are considering as a framework for applications of test theory.9 It is possible to take the view that primary evidence of the validity of a test score comes from establishing that the test is homogeneous in two senses—that the test is unidimensional and that the item contents are judged to be measuring an abstract attribute in common. If we regard the attribute as what is perfectly measured by a test of infinite length, then a measure of validity can be taken to be the correlation between the total test score and the true, domain score. The redundant qualifier “construct” can be omitted, and we simply refer to “validity.” In the case of the Spearman model, the validity coefficient is just the reliability index given by the square root of coefficient omega. We can also call this index a coefficient of generalizability, from the m items in the test to the infinite set given by the item domain. Thus, on this view, reliability is validity is generalizability, which is a great simplification (see McDonald, 1985, 1999). In the case of binary items, the error variance is a function of the latent trait, so it seems impossible to define an overall reliability index. However, using the normal ogive model, we can usefully 134


define a validity and generalizability index for a set of binary items. The biserial correlation between a standardized variable Z and a binary variable X is the correlation between Z and an underlying response tendency X∗, such that X = 1 if X∗ exceeds a threshold value τ. It can be shown that Cov{X, Z} = n(τ)Cor{X∗, Z}, where n( ) is the ordinate of the normal density function. In the normal ogive model in Equation 14, Cov{Xj∗, F} = λj, so Cov{Xj, F} = n(τj)λj, giving

Cor{Y, F} = [Σ n(τj)λj]/[Var{Y}]^(1/2).   (78)

Evidence of validity comes from an examination of what the test does not measure as well as what it does. Suppose a set of items fits the multiple factor Equation 75 with independent clusters and correlated factors. Each cluster gives a subtest sum that should be correlated with its own factor, but it may also have a high correlation with the factors belonging to other clusters. The independent clusters model supplies the natural explication of a somewhat intuitive suggestion by Campbell and Fiske (1959). They suggested that multiple measures of a construct have convergent validity if they are sufficiently highly correlated and discriminant validity if they have sufficiently low correlations with tests of other, distinct constructs. The main contribution of Campbell and Fiske (1959) was the suggestion that any item can be regarded as a trait-method unit—a union of a particular trait-content and a method of measurement. To segregate trait from method, they recommend measuring a number of traits by

a number of methods in a crossed (multitrait– multimethod) design. This rather casual suggestion has spawned a large and confusing literature. The position I take is that multitrait–multimethod designs have not yet been shown to contribute to convergent/discriminant validity. To quantify convergent and discriminant validity by the independent clusters model, we compute (1) the correlation between each cluster sum and its own factor, and (2) the correlation between each cluster sum and the other factors. We hope to find the former high for convergent validity and the latter small for discriminant validity. The correlation of each cluster-sum with its own factor is just its reliability index, the square root of omega. The correlation of each with the other factors is just its reliability index multiplied by its correlation(s) with the other factor(s). This holds for binary items also, using Equation 79 for the correlation between the cluster-sum and its latent trait. Table 7.11 gives the correlations for the ACT data between the three cluster sums (s1 for items 1–5, s2 for items 6– 10, and s3 and the geometry [I ] and algebra [II ] latent traits; see McDonald, 1999, pp. 322–323, for details). The cluster sums from the unidimensional subsets yield the necessary conditions for convergent and discriminant validity, each having a higher correlation with its own “construct” than with the other. It is an intriguing observation that the sum of the “mixed” items 11 through 16 has a higher correlation with the algebra latent trait than the algebra cluster-sum and is close to the geometry sum in its correlation with the geometry latent trait. This might be a motive for keeping these items, but we would need to be concerned about the highly correlated errors of measurement that result from the complexity of the items.
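A sketch of the index in Equation 78 follows. The item factor loadings, thresholds, and the sample variance of the total score are illustrative values only, not estimates reported in the text.

```python
# Sketch of the validity/generalizability index for binary items (Equation 78):
# Cor{Y, F} = [sum_j n(tau_j) lambda_j] / Var{Y}^(1/2), with n() the normal density.
# All inputs are hypothetical placeholders.
import numpy as np
from scipy.stats import norm

lam = np.array([0.50, 0.62, 0.71, 0.55, 0.48])   # hypothetical item factor loadings
tau = np.array([-0.8, -0.3, 0.1, -0.5, -1.0])    # hypothetical thresholds
var_Y = 1.9                                      # sample variance of the sum score (assumed)

cov_YF = np.sum(norm.pdf(tau) * lam)             # sum of Cov{X_j, F} = n(tau_j) * lambda_j
validity = cov_YF / np.sqrt(var_Y)
print(f"Cor{{Y, F}} = {validity:.3f}")
```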

Table 7.11. ACT Subtest-Trait Correlations (Independent Clusters)

          I       II
  s1    .751    .555
  s2    .486    .658
  s3    .673    .728

Alternate Forms and Test Equating

In a number of situations, we may wish to measure a given attribute using two (or more) distinct

sets of items. It is customary to refer to these as alternate forms of a test. Two tests Y and V are item-parallel if the items in each are paired to have equal parameters in a jointly unidimensional model. (They are sometimes called “strictly parallel.”) A necessary and sufficient condition for the scores Y and V on test forms Y and V to have the same distribution for examinees with the same value of their latent trait is that the forms are item-parallel. This is a condition for the complete exchangeability of the test forms. In some applications, it is a condition for equity, meaning that it cannot matter to the examinee which form is administered. Two item-parallel test forms have (1) equal test characteristic functions, (2) equal test information functions, (3) equal test-score information functions, and (4) matched true scores and matched error variances at every point on the scale. We can recognize three distinct levels of matching or equivalence between test forms—namely, (1) item-parallel, (2) equal test characteristic and test-score information functions, and (3) equal test characteristic curves— matched true scores, but possibly different error variances. Only the first two can be considered equitably exchangeable. It is easier to select matched item pairs than to select nonmatched items to give equal test characteristic or test score information curves. Note that if the items are unidimensional, as we should first require, then we do not need to consider item content when matching them. Item response models play a central role in a rigorous matching process. (For a simple example, see McDonald, 1999, pp. 353–355.) It may happen that we already have two forms of a test—two tests intended to measure the same attribute. Test form Y is given to one set of randomly drawn examinees and form V to another. It may be that the forms are of comparable difficulty (horizontal equating ) or of different difficulty (vertical equating ). We wish to place the test -sum scores on a common scale. The most natural common scale would be that of the latent trait given by the model, but the common convention is to accept the sum score on one test (say, Y ) as defining the scale and transform the sum score V to give a score Y (V ) on the scale of Y . If we require equity, then the task of equating is unnecessary if it is possible, and impossible if it is necessary. If the tests are item-parallel, then equating is not needed. If a transformation is needed, then error variances cannot be matched on the entire range of scores. For some purposes (e.g., research studies of development over a wide age range) equity mcdonald


may not be a concern. Even so, the best equating methods would at least be able to tell us where on the scale the equating succeeds well enough and where it fails. Only the method known as true score equating seems informative enough to be recommended here.10 If two sets of items are jointly unidimensional, then the true scores from both tests are functions of the latent trait, and so in 1:1 correspondence. Then true scores can be mapped into true scores. If they are not jointly homogeneous, there is no motive for equating. Given two such forms, the scores on form V are easily mapped onto the scale of form Y. However, the equating is successful to the extent that the error variance of the transformed score Y(V) matches that of Y. It is the error variance requirement that is problematic. The procedure is as follows:

(1) Using the methods previously described, we obtain the item parameters of the two sets of items (e.g., in the 2PL model) on a common scale for the latent trait f.
(2) We compute the test characteristic curves for each set, tY and tV, as functions of f, and the test information functions and hence the error variances as functions of f.
(3) From the lists of true scores, by interpolation, or from a graph of tV on tY, we read off the (noninteger) values tY(v) corresponding to (integer) values of tV. These can be directly compared to test Y scores in the sense that an examinee who gets a score V on test V is expected to get a score of tY(V) on test Y.
(4) The problem that remains concerns the comparability of the error of the transformed score to the error of Y. In general, the plot of tV on tY is nonlinear. It may be shown that the variance of the error of measurement of Y(V) is given by

Var{EY(V)} = (dtY/dtV)² Var{EV}.   (79)

Comparing Var{EY(V)} with Var{EY} over the range of y, we see whether there is an interval of values of y (or f) over which the error variances are close enough to allow equitable exchange of the tests. The valuable feature of this method is that it supplies diagnostics for its failure. Other equating methods lack this feature.

As an illustration of the problem of equating, we consider two sets of items taken from an initial set of 60 in the ACT Mathematics test. These are multiple choice items with five answer categories. Their parameters in the item factor metric are given in Table 7.12 (Step 1). We can see from the threshold parameters that test V is more difficult than test Y.
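The four steps of true score equating can be sketched as follows. The item parameters used here are illustrative, not the Table 7.12 values; the code computes the two test characteristic curves, maps integer true scores on form V to the scale of form Y by interpolation, and applies Equation 79 to obtain the error variance of the transformed score.

```python
# Sketch of true-score equating (Steps 1-4 above) for two 2PL forms in the item
# factor parameterization. Item parameters below are illustrative placeholders.
import numpy as np
from scipy.stats import norm

def tcc(f, lam, tau):
    """Test characteristic curve: expected sum score at each latent trait value in f."""
    psi = np.sqrt(1.0 - lam**2)
    return norm.cdf((lam * f[:, None] - tau) / psi).sum(axis=1)

def err_var(f, lam, tau):
    """Conditional error variance of the sum score, sum_j P_j(1 - P_j)."""
    psi = np.sqrt(1.0 - lam**2)
    p = norm.cdf((lam * f[:, None] - tau) / psi)
    return (p * (1 - p)).sum(axis=1)

# Step 1: item parameters of both forms on a common latent-trait scale (assumed given)
lam_Y, tau_Y = np.full(10, 0.55), np.linspace(-1.2, 0.2, 10)   # easier form Y
lam_V, tau_V = np.full(10, 0.55), np.linspace(-0.2, 1.4, 10)   # harder form V

# Step 2: TCCs and error variance functions on a grid of f
f = np.linspace(-4, 4, 161)
tY, tV = tcc(f, lam_Y, tau_Y), tcc(f, lam_V, tau_V)
vY, vV = err_var(f, lam_Y, tau_Y), err_var(f, lam_V, tau_V)

# Step 3: read off t_Y values corresponding to integer true scores on form V
for v in range(3, 10):
    print(f"t_V = {v}:  equated t_Y = {np.interp(v, tV, tY):.1f}")

# Step 4: error variance of the transformed score, Var{E_Y(V)} = (dt_Y/dt_V)^2 Var{E_V}
dtY_dtV = np.gradient(tY, f) / np.gradient(tV, f)
vYV = dtY_dtV**2 * vV
i0 = np.searchsorted(f, 0.0)
print(f"at f = 0: Var{{E_Y}} = {vY[i0]:.2f},  Var{{E_Y(V)}} = {vYV[i0]:.2f}")
```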

Table 7.12. Item Parameters—Easy and Difficult ACT Items

          Test Y                       Test V
  Item      τ        λ         Item      τ         λ
    5     0.830    0.731         3    −1.729     2.544
    4     0.650    0.561        14    −1.798     1.378
   27     0.575    0.855        15    −2.177     1.429
    6     0.247    0.655        25    −1.018     0.359
    7     0.094    0.675        40    −1.554     1.164
    8     0.023    0.860        46    −0.547     0.705
    9    −0.145    0.669        49    −4.514     4.012
   57    −0.331    0.810        52    −3.647     3.513
   10     0.004    1.014        58     0.974     0.982
   18     0.285    0.500        59    −0.912     0.981

From these we obtain the Test Characteristic Curves, labeled TCCY and TCCV , and the error variance functions, labeled Var{EY } and Var{EV } in Table 7.13 (Step 2). At this point we have a 1:1 mapping of true scores on one test to true scores on the other. For example, at f = 0, the expected scores are 6.67 on test Y and 3.31 on test V. We then use a graph as in Figure 7.2 or use interpolation methods to read off the (noninteger) values of tY that correspond to (integer) values of tV (Step 3). These, given in Table 7.14, can be compared to test Y scores. An examinee who gets a score of V on V is expected to get a score Y (V ) on test Y. Note that Table 7.14 omits the perfect score 10 and scores 0, 1, 2—expected by chance in these multiple choice items. Equating cannot be done at the ceiling of the difficult test or in the region of chance responses. Finally, we obtain the error variance function of Y (V ). This is labeled V (E ) in Table 7.13 (Step 4). When referred to the scale of the easy test, the error variance of the difficult test is much larger in the low ability region. There is a small interval, from about f = −0.5 to + 0.5, where we could regard the tests as equitably exchangeable. Other methods of equating would conceal this failure.

Comparing Populations

It was previously supposed that we calibrate the linear model or its counterpart item response model

Table 7.13. Test Characteristic Curves and Error Variance Functions—Easy and Difficult ACT Items

   F      TCC_Y   Var{EY}   TCC_V   Var{EV}    V(E)
 −4.0     2.15     1.69      2.02     1.61       —
 −3.5     2.25     1.74      2.03     1.62     10.92
 −3.0     2.42     1.82      2.06     1.63      9.36
 −2.5     2.68     1.93      2.11     1.66      7.88
 −2.0     3.09     2.08      2.20     1.70      6.82
 −1.5     3.69     2.23      2.36     1.75      6.32
 −1.0     4.52     2.34      2.60     1.79      6.16
 −0.5     5.55     2.32      2.90     1.80      5.47
  0.0     6.67     2.10      3.31     1.84      3.40
  0.5     7.74     1.68      4.08     2.00      1.52
  1.0     8.57     1.19      5.79     2.09      0.79
  1.5     9.14     0.77      7.79     1.41      0.44
  2.0     9.50     0.47      8.73     0.90      0.36
  2.5     9.71     0.28      9.23     0.55      0.25
  3.0     9.83     0.16      9.48     0.37      0.16
  3.5     9.90     0.10      9.62     0.28      0.08
  4.0     9.94     0.06      9.71     0.22       —

by fitting it to a sample from a defined population (the calibration population) and standardize the latent trait in that population. We may recognize

Table 7.14. True-Score IRT Equating—ACT Items

  TV        3     4     5     6     7     8     9
  Y(TV)    6.0   7.6   8.2   8.6   8.9   9.2   9.6

more than one population of interest to us (e.g., the genders, or populations based on ethnicity). In the case of educational testing, especially for selection to educational programs, there are populations whose existence is recognized by laws concerning discrimination. We need to examine the conditions under which a test score on individuals from distinct populations (1) measures the same attribute or (2) gives an unbiased estimate of it. In comparisons of two populations, an accepted convention describes our “calibration” population as the reference group and the second as the focal group. As before, the reference group determines the metric—origin and unit—of the latent trait. We can fit a linear model, or an item response model, simultaneously to two or more populations. However, in most treatments of the problems we now consider, initially the model is fitted separately in each population, and the latent trait is separately standardized in both reference and focal groups. If, in fact, the items will fit the model with the same parameters when the scale of the focal group is changed to standard score units taken from the reference group, then there are simple linear relationships between the lists of item parameters. The necessary change of scale can then be determined from these. Corresponding approximate relationships will be revealed in sample estimates. If the two sets of item parameters cannot

Figure 7.2. Plot of true scores tV against tY for the two ACT forms.

be made to agree well enough, then it may be that substantially different attributes are being measured in the two populations, and comparability is not possible. It may happen that a good proportion of the items have parameters that are nearly linearly related, and can be supposed to be measuring the same trait, whereas the remainder are not. If a binary item gives a different probability of a keyed response for the same ability/trait value in reference and focal groups, then it shows differential functioning. (This is redundantly called differential item functioning for pronunciation of the acronym DIF.) More generally, an item shows differential functioning if it gives a different mean response in the two populations for the same trait value.

A direct method for checking agreement and finding differentially functioning items follows from the properties of our models.11 The method, which is a further development of Lord (1980), applies to both quantitative and binary data. It is given in more detail in McDonald (1999, Chapter 15). I will describe it here for the unidimensional linear model, noting that it carries over to binary items and generalizes to multidimensional cases. For definiteness, and for an example, we suppose two populations: male and female. We have a single factor model for each population, represented as

Xj(m) = μj(m) + λj(m) F(m) + Ej(m),   (80)

for the male population, and

Xj(f) = μj(f) + λj(f) F(f) + Ej(f),   (81)

for the female population, with superscripts (m) and (f) identifying the populations. These functions are separately determined if we standardize the trait in each population. If the item parameters differ only because of their metrics, then

f(f) = k f(m) + c,   (82)

giving

λj(m) = k λj(f),   (83)

and

μj(m) = μj(f) + c λj(f).   (84)

Graphs of μj(f) against μj(m) and λj(f) against λj(m) will show how well these relationships hold in sample estimates. An estimate of the multiplier k from sample factor loadings, given by

k = [Σ λj(m) λj(f)]/[Σ λj(f)²],   (85)

minimizes Σ[λj(m) − k λj(f)]², and an estimate of the constant c, given by

c = [Σ (μj(m) − μj(f)) λj(f)]/[Σ λj(f)²],   (86)

minimizes Σ[μj(m) − μj(f) − c λj(f)]². The summation can be over items believed not to be differentially functioning. The rescaled parameters from the female population are given by

λj(f∗) = k λj(f)   (87)

and

μj(f∗) = μj(f) + c λj(f).   (88)

These may be compared to λj(m) and μj(m). Standard errors for the item parameters give confidence bounds on the differences, aiding a judgment as to which items show differential functioning. Burt (1948) gave a coefficient of congruence measuring the agreement of the loadings. Like a correlation coefficient it ranges from −1 to 1, and equals 1 for perfect agreement. It is given by

gλ = [Σ λj(m) λj(f)]/[(Σ λj(m)²)(Σ λj(f)²)]^(1/2).   (89)

It is natural to define a similar coefficient of agreement for the mean parameters by

gμ = [Σ (μj(m) − μj(f)) λj(f)]/[(Σ (μj(m) − μj(f))²)(Σ λj(f)²)]^(1/2).   (90)

Referred to the zero mean and unit variance of the latent trait in the male group, the mean of the female group is –c/k, and its variance is 1/k2 . There is no mathematical reason why a unidimensional model should have item parameters that are invariant across populations. It might be conjectured that if an item functions differentially it must measure something in addition to the intended attribute in one population but not in the other. However, the something in addition does not have to be an additional latent trait—a second dimension. It may be a specific component in that group, included in its unique component. The possibility that differential functioning occurs in a subset of items because they measure a distinct latent trait in one population can be tested by fitting the multidimensional model suggested by the data. In the counterpart item response model, McDonald (1999) recommends using the item factor parameterization (Equation 14) for this purpose. The factor loadings can be expected to be more stable than the slope parameters in the other two parameterizations (Equation 10 or 13).
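The rescaling and congruence computations of Equations 85 through 90 are easily programmed. The sketch below uses, purely for illustration, the loadings and means of the first five items in Table 7.16; because the chapter's values of k = 1.261 and c = 1.246 are based on the full item set, this fragment will not reproduce them exactly.

```python
# Sketch of the rescaling and congruence computations (Equations 85-90), given
# loadings and means estimated separately, and separately standardized, in a
# reference group (m) and a focal group (f). Inputs: first five rows of Table 7.16.
import numpy as np

lam_m = np.array([1.13, 0.85, 0.79, 1.12, 0.99])   # reference-group loadings
lam_f = np.array([0.88, 0.77, 0.55, 1.01, 0.79])   # focal-group loadings
mu_m = np.array([2.88, 3.12, 2.13, 3.79, 3.01])    # reference-group item means
mu_f = np.array([1.87, 2.32, 1.43, 2.60, 2.11])    # focal-group item means

k = np.sum(lam_m * lam_f) / np.sum(lam_f**2)                # Equation 85
c = np.sum((mu_m - mu_f) * lam_f) / np.sum(lam_f**2)        # Equation 86

lam_f_star = k * lam_f                                      # Equation 87
mu_f_star = mu_f + c * lam_f                                # Equation 88

g_lambda = np.sum(lam_m * lam_f) / np.sqrt(np.sum(lam_m**2) * np.sum(lam_f**2))  # Eq. 89
d = mu_m - mu_f
g_mu = np.sum(d * lam_f) / np.sqrt(np.sum(d**2) * np.sum(lam_f**2))              # Eq. 90

print(f"k = {k:.3f}, c = {c:.3f}")
print("loading differences:", np.round(lam_m - lam_f_star, 2))
print("mean differences:   ", np.round(mu_m - mu_f_star, 2))
print(f"focal-group mean = {-c/k:.3f}, variance = {1/k**2:.3f} (reference-group metric)")
print(f"congruence: g_lambda = {g_lambda:.3f}, g_mu = {g_mu:.3f}")
```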

As an example, I will take the 19 items of the Illinois Rape Myth Scale, which are listed in Table 7.15. The responses are on a 7-point Likert scale, from Strongly Disagree = 1 to Strongly Agree = 7. Acceptance of these beliefs (myths) would serve different functions for men (rationalizing offensive behavior) and women (denying vulnerability). A Spearman model was fitted separately to data from 368 men and 368 women, giving the means, variances, and loadings in the first six columns of Table 7.16. The multiplier k is 1.261, from Equation 86, and the constant c is 1.246, from Equation 87. The mean of the female group in the metric of the males is –0.988 and its variance 0.629. That is, the female group is almost a full standard deviation below the males in their acceptance of these myths, and less diverse in their acceptance, which makes intuitive sense. The congruence coefficients are 0.973 for the slopes and 0.968 for the means. The parameters from the female group transformed to the metric of the male group are the starred parameters in Table 7.16. The standard errors of the loadings are all close to 0.05. The average of the standard errors of the mean parameters is 0.07. This suggests the use of a common set of confidence bounds +/ − 0.14 for

the differences in loadings and +/− 0.20 for the means. The differences in the last two columns suggest that four items have suspicious differences in slope parameters—namely, 19, 15, 7, and 5. Three (19, 13, and 15) have suspect differences in means. The possibility that these differences might vanish in a multidimensional model was checked in the original analysis (McDonald, 1999, Chapter 15). This did not account for the differences. Omitting the five suspect items gives coefficients of congruence 0.991 for the loadings and 0.990 for the means but a negligible change in the transformed parameters.

We can check the effect of including or excluding differentially functioning items by obtaining the resulting mean score characteristic functions—the expected values of the means of the item scores in each group, with and without the suspect items. Differences in these functions define differential test score functioning. Writing for any of these,

E{M | F = f} = μ + λf,   (91)

with μ the average of the item means and λ the average of the item loadings, we obtain

E{M | F = f} = 2.753 + 0.815f   (92)

for the males, with all 19 items,

E{M | F = f} = 2.693 + 0.749f   (93)

Table 7.15. Illinois Rape Myth Acceptance Scale (Items Reordered)

1. When women talk and act sexy, they are inviting rape.
2. When a woman is raped, she usually did something careless to put herself in that situation.
3. Any woman who teases a man sexually and doesn't finish what she started realistically deserves anything she gets.
4. Many rapes happen because women lead men on.
6. In some rape cases, the woman actually wanted it to happen.
7. Even though the woman may call it rape, she probably enjoyed it.
10. When a woman allows petting to get to a certain point, she is implicitly agreeing to have sex.
11. If a woman is raped, often it's because she didn't say "no" clearly enough.
12. Women tend to exaggerate how rape affects them.
16. In any rape case one would have to question whether the victim is promiscuous or has a bad reputation.
18. Many so-called rape victims are actually women who had sex and "changed their minds" afterward.
5. Men don't usually intend to force sex on a woman, but sometimes they get too sexually carried away.
13. When men rape, it is because of their strong desire for sex.
14. It is just part of human nature for men to take sex from women who let their guard down.
8. If a woman doesn't physically fight back, you can't really say that it was a rape.
9. A rape probably didn't happen if the woman has no bruises or marks.
19. If a husband pays all the bills, he has a right to sex with his wife whenever he wants.
15. A rapist is more likely to be Black or Hispanic than White.
17. Rape mainly occurs on the "bad" side of town.

Table 7.16. Unidimensional Quantitative Responses

  Item   μ(m)   σ²m    μ(f)   σ²f    λm     λf     μ∗f    λ∗f    μm−μ∗f   λm−λ∗f
   1     2.88   3.01   1.87   1.87   1.13   0.88   2.96   1.10    −.08      .13
   2     3.12   2.77   2.32   2.27   0.85   0.77   3.28   0.98    −.16     −.13
   3     2.13   1.91   1.43   0.86   0.79   0.55   2.11   0.69     .02      .10
   4     3.79   2.92   2.60   2.69   1.12   1.01   3.86   1.28    −.07     −.16
   6     3.01   2.51   2.11   2.91   0.99   0.79   3.10   1.00    −.09     −.01
   7     1.69   2.74   1.22   2.08   0.62   0.30   1.59   0.37     .10      .24
  10     2.97   1.23   1.86   0.42   1.14   0.86   2.93   1.08     .04      .05
  11     2.52   2.08   1.77   1.02   0.91   0.68   2.62   0.86    −.10      .05
  12     2.25   1.15   1.53   0.33   0.88   0.58   2.26   0.74    −.01      .15
  16     3.63   2.99   2.34   1.89   0.95   0.86   3.42   1.09     .21     −.14
  18     3.40   2.32   2.50   1.58   1.03   0.89   3.61   1.12    −.21     −.09
   5     4.24   2.28   3.47   1.37   0.67   0.74   4.39   0.93    −.15     −.26
  13     3.91   3.52   2.79   3.21   0.85   0.60   3.54   0.76     .37      .09
  14     2.39   2.35   1.89   1.81   0.76   0.44   2.44   0.55    −.05      .21
   8     2.13   2.32   1.46   1.89   0.73   0.50   2.08   0.63     .05      .11
   9     1.72   3.40   1.25   2.54   0.47   0.22   1.52   0.28     .20      .19
  19     1.92   1.92   1.13   1.23   0.73   0.15   1.52   0.19     .60      .54
  15     2.36   2.31   1.89   2.22   0.51   0.15   2.08   0.19     .28      .31
  17     2.24   1.92   1.67   0.23   0.35   0.32   2.06   0.40     .18     −.05

for the females, with all 19, and

E{M | F = f} = 2.727 + 0.864f   (94)

for the males, with items 19, 15, 13, 7, and 5 omitted, and

E{M | F = f} = 2.732 + 0.840f   (95)

for the females, omitting those items. These differences are slight in the range −3 to +3. As more fully presented in McDonald (1999, Chapter 15), this method has the following properties: (1) it applies equally to quantitative and binary items; (2) it applies equally to unidimensional and multidimensional items; (3) it directly assesses the amount of differential functioning; (4) it distinguishes differential item difficulty, differential item discrimination, and differential item dimensionality; (5) it assesses the extent to which the differentially functioning items may bias the test; and (6) it gives the mean and variance of the focal group in the metric of the reference group.
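A sketch of the comparison just described follows, using the intercepts and average loadings reported in Equations 92 through 95. The difference between the male and female functions, with and without the suspect items, is one way to examine what the text calls differential test score functioning; nothing here goes beyond those reported coefficients.

```python
# Sketch: mean score characteristic functions (Equations 91-95) and the
# male-female differences they imply, over the range -3 to +3.
import numpy as np

def mscf(mu_bar, lam_bar, f):
    """E{M | F = f} = average item mean + average loading * f (Equation 91)."""
    return mu_bar + lam_bar * f

f = np.linspace(-3, 3, 7)
male_all = mscf(2.753, 0.815, f)       # Equation 92: males, all 19 items
female_all = mscf(2.693, 0.749, f)     # Equation 93: females, all 19 items
male_omit = mscf(2.727, 0.864, f)      # Equation 94: males, suspect items omitted
female_omit = mscf(2.732, 0.840, f)    # Equation 95: females, suspect items omitted

print("f:", f)
print("male - female, all items:     ", np.round(male_all - female_all, 3))
print("male - female, items omitted: ", np.round(male_omit - female_omit, 3))
```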

Discussion There are a number of limitations on my account of Test Theory of which the reader could already be aware. I have deliberately given a treatment of the topics without any distribution assumptions. Once we have the item parameters of the linear (factor) model or the nonlinear (item response) model, a distribution-free account follows easily, as I hope I have shown. Lord (1980) used Maximum Likelihood to derive an estimate of the latent trait and the additive information properties, but his own presentation shows that the solution can be justified as giving minimum-variance errors of measurement. As mentioned already, a number of writers on item response models speak of “estimating” latent

traits without recognizing a distinction between measurement and prediction. It is possible to predict the values of a set of latent traits from all the item scores (as available in a multidimensional model), taking account of the joint distribution of these in the calibration population. (Indeed, if the purpose is prediction, we would add in all variables believed to be correlated with the latent trait, not just the indicators of the target trait and correlated traits.) In the linear model, the predictors are just the regression estimates given by the regression of F on X1 , X2 , . . . , Xm . If, empirically, the distribution of the latent trait is normal, then these are also Bayesian predictors. In the nonlinear counterpart, if the latent trait has a normal distribution, then again Bayes predictors are available. Theory for a more general class of prior distributions does not seem to have been developed. A “regression to the mean” effect shrinks these predictors from finite numbers of items so that their variance is less than the variance of the latent trait, violating the classical conception of a measurement. Reckase (2009) has provided an example in which the Bayes estimates supplied by TESTFACT (Bock & Schilling, 2003) give considerable “shrinkage” from 10 items per latent variable. In the item response case, a possible motive for choosing Bayes predictors over measurements is that the equation for the latter does not have a finite solution on the scale of f for examinees with all zero or all unity scores. Certainly we would not wish to assign them a score of infinity or negative infinity (with an infinite measurement error variance). The Bayes estimator assigns a finite value to these examinees. My view is that if a test is too difficult or too easy for the examinees, then it cannot determine a position for them on the real line, with finite error variance. Easier or more difficult items are needed. This question perhaps needs further careful examination. In a multidimensional model with independent clusters (as in Thurstone’s primary mental abilities), the measures of each trait are independent of other, correlated traits. The regression or Bayes predictors use information from all the correlated traits, so that an examinee with high numerical ability gets a higher verbal score than one with lower numerical ability. Bartholomew and Knott (1999) have pointed out that in the unidimensional case, the predictor (their recommended choice) gives the same rank order as the measure. This property does not generalize to the multidimensional case with correlated traits. (Bartholomew and Knott consider

the multidimensional case with independent clusters and uncorrelated latent variables. This case will occur very rarely in applications.) The corresponding methodological problem also requires further examination. In empirical applications, how should we choose between a measurement and a prediction? Bartholomew et al. (2009, p. 577) stated that the Bartlett measure “is the best estimate for [a] particular person based on their individual test scores,” whereas the regression estimate “predict[s] the value [. . .] for any member of the population being sampled.” This observation still leaves open the question of choice in applications. At the present time there does not seem to be any work on the possibility (or impossibility) of creating counterparts for the additive information functions associated with the minimum error variance measures, as treated in this chapter. Work of this kind is needed if Bayes estimates are to be used in Test Theory applications. Otherwise, they appear to lack motivation. Attention also needs to be given to the empirical distribution of latent traits and of corresponding formula scores. If a formula score (e.g., the sum score) has a nonlinear test characteristic curve, then the latent trait and the formula score cannot both be normally distributed. Obtaining Bayesian counterparts for the distribution-free results in Lord (1980) is possibly the most important future direction for research in this area. Currently there are computer programs (e.g., TESTFACT) that predict the traits of the calibration sample but apparently no programs for obtaining measures of traits or formula scores, with Conditional Standard Errors of Measurement, from fresh examinees. Programs of this kind would be such a natural development from Lord (1980) that the lacuna is quite strange.12 Further work on this, including a careful comparison with Bayes, is needed. Also, at the time of writing, the NOHARM program appears to be unique in supplying a confirmatory multidimensional model, allowing the investigator to prescribe independent clusters, corresponding to a good test construction design. Reckase (2009) has described a treatment of multidimensional item response models without the confirmatory methods inherited from the common factor model, which I have recommended here. This has many interesting features, but further work is needed to justify such an alternative—especially in the case of models with more than two dimensions, where visualization of the contents of the space


becomes impossible. (A similar problem in multidimensional scaling has never been satisfactorily resolved and perhaps cannot be.) Another matter for further research concerns the development of additive information functions for other link functions, as listed, say, by Skrondal and Rabe-Hesketh (2004). And, finally, there is a plethora of specialized problems in Test Theory, currently treated or needing treatment, for which there is no room here even to acknowledge them.

Notes

1. The 3PL model presents severe estimation problems. In the corresponding linear model, the three parameters are jointly underidentified, and the nonlinear model requires both very large samples and high discrimination parameters to get reasonable estimates by maximum likelihood. Bayesian estimators have been recommended (Swaminathan & Gifford, 1986).
2. The item parameters have been estimated by least squares—minimizing the squared discrepancies between the sample item covariances and the fitted covariances.
3. In a very interesting study, Goldstein (1980) fitted a 1PL model, and a distinguishable one-parameter log-log model, showing that the two did not give the same rank order of estimates of their latent traits for a small sample of examinees. This would follow from the fact that the sum score is a sufficient statistic for the trait in the 1PL model but not for the log-log model. It is my conjecture that the expected order is independent of the chosen link function, given the item parameters.
4. Other, more technical definitions of information have been given—for example, Kendall and Stuart (1961) and Lord and Novick (1968).
5. In practical applications to empirical data, we must be prepared to find that the assumption in the linear model that the information—and error variance—is constant over the range of the test score or factor will be inadequate and must fail at the extremes of the scale.
6. The relative efficiency of a scoring formula, relative to the maximum information in the test, can be extended to the relative efficiency of two subtests and to the relative efficiency of two scoring formulas using the same items (see Lord, 1980; McDonald, 1999, Chapter 13).
7. The representation of the model as containing m + 1 factors also invites us to examine an infamous pseudoproblem—the joint indeterminacy of the general factor and the unique components. (See, for example, Guttman, 1955; McDonald, 1977; Maraun, 1996; and responses to Maraun's article.)
8. The NOHARM program (McDonald, 1982, 1997), used for this purpose, allows the researcher to specify a pattern of zero and nonzero loadings, as in Table 7.9—that is, to specify which items are pure measures of one latent trait and which may be composites of traits. Currently, other programs do not have this feature.
9. McDonald (1999, Chapter 10) recommends regarding the predictive validities of a test as its predictive utilities. Its ability to predict a variety of other measures may, in some applications, bear on the question of what it measures. See also the discussion there of Cronbach and Meehl's (1955) suggestion that we know the meaning of a concept only when it is embedded in a nomological net—that its relationships with other variables are constitutive of its meaning.
10. For other equating methods, see Holland and Rubin (1982).
11. For other methods intended to detect differential functioning, see Holland and Wainer (1993).
12. McDonald (1999, Chapter 13) used a small teaching program for measurements and their errors, written by Brad Crouch, to obtain information functions and measurements of latent traits for individual examinees. I do not know of any commercially distributed programs for this purpose.

References

Bartholomew, D.J., Deary, I.J., & Lawn, M. (2009). The origin of factor scores: Spearman, Thomson and Bartlett. British Journal of Mathematical and Statistical Psychology, 62, 569–582.
Bartholomew, D.J. & Knott, M. (1999). Latent variable models and factor analysis. London: Arnold.
Bartlett, M.S. (1937). The statistical conception of mental factors. British Journal of Psychology, 28, 97–104.
Bock, R.D. & Aitkin, M.A. (1981). Marginal maximum likelihood estimation of item parameters: an application of the EM algorithm. Psychometrika, 46, 443–459.
Bock, R.D. & Schilling, S.G. (2003). IRT based item factor analysis. In M. du Toit (Ed.), IRT from SSI: BILOG-MG, MULTILOG, PARSCALE, TESTFACT (pp. 584–591). Lincolnwood, IL: Scientific Software International.
Brown, W. (1910). Some experimental results in the correlation of mental abilities. British Journal of Psychology, 3, 296–322.
Burt, C. (1948). Factor analysis and canonical correlations. British Journal of Psychology, Statistical Section, 1, 95–100.
Campbell, D.T., & Fiske, D.W. (1959). Convergent and discriminant validity by the multitrait-multimethod matrix. Psychological Bulletin, 56, 81–105.
Christoffersson, A. (1975). Factor analysis of dichotomized items. Psychometrika, 40, 5–32.
Cronbach, L.J. (1951). Coefficient alpha and the internal structure of tests. Psychometrika, 16, 297–334.
Cronbach, L.J. & Meehl, P.E. (1955). Construct validity in psychological tests. Psychological Bulletin, 52, 281–302.
Feldt, L.S., Steffen, M., & Gupta, N.C. (1985). A comparison of five methods for estimating the standard error of measurement at specific score levels. Applied Psychological Measurement, 9, 351–361.
Goldstein, H. (1980). Dimensionality, bias, independence and measurement scale problems in latent trait test score models. British Journal of Mathematical and Statistical Psychology, 33, 234–248.
Gulliksen, H. (1950). Theory of mental tests. New York: Wiley.
Guttman, L. (1945). A basis for analyzing re-test reliability. Psychometrika, 10, 255–282.
Guttman, L. (1955). The determinacy of factor score matrices with implications for five other problems of common-factor theory. British Journal of Mathematical and Statistical Psychology, 8, 65–81.
Hallberg, K., Wing, C., Wong, V., & Cook, T.D. (2012). Experimental design for causal inference: clinical trials and regression discontinuity designs. In T. D. Little (Ed.), The Oxford handbook of quantitative methods, volume 1 (pp. 223–236). New York: Oxford University Press.

Hattie, J.A. (1984). An empirical study of various indices for determining unidimensionality. Multivariate Behavioral Research, 19, 49–78.
Hattie, J.A. (1985). Methodology review: assessing unidimensionality of tests and items. Applied Psychological Measurement, 9, 139–164.
Holland, P.W. & Rubin, D.B. (1982). Test equating. New York: Academic Press.
Holland, P.W. & Wainer, H. (Eds.) (1993). Differential item functioning. Hillsdale, NJ: Lawrence Erlbaum Associates.
Jones, L.V. & Thissen, D. (2007). A history and overview of psychometrics. In C. R. Rao & S. Sinharay (Eds.), Handbook of Statistics, vol. 26 (pp. 1–22). Boston, MA: Elsevier.
Kendall, M.G. & Stuart, A. (1961). The advanced theory of statistics. New York: Hafner.
Lord, F.M. (1980). Applications of item response theory to practical testing problems. Hillsdale, NJ: Lawrence Erlbaum Associates.
Lord, F.M. (1984). Standard errors of measurement at different ability levels. Journal of Educational Measurement, 21, 239–243.
Lord, F.M. & Novick, M.R. (1968). Statistical theories of mental test scores. Reading, MA: Addison-Wesley.
Maraun, M.D. (1996). Metaphor taken as math: indeterminacy in the factor analysis model. Multivariate Behavioral Research, 31, 517–538.
McDonald, R.P. (1967). Nonlinear factor analysis. Psychometric Monograph, No. 15.
McDonald, R.P. (1970). The theoretical foundations of common factor analysis, principal factor analysis, and alpha factor analysis. British Journal of Mathematical and Statistical Psychology, 23, 1–21.
McDonald, R.P. (1977). The indeterminacy of components and the definition of common factors. British Journal of Mathematical and Statistical Psychology, 30, 165–176.
McDonald, R.P. (1981). The dimensionality of tests and items. British Journal of Mathematical and Statistical Psychology, 34, 100–117.
McDonald, R.P. (1982). Unidimensional and multidimensional models for item response theory. IRT/CAT conference, Minneapolis, MN.
McDonald, R.P. (1985). Factor analysis and related methods. Mahwah, NJ: Lawrence Erlbaum Associates.
McDonald, R.P. (1997). Normal ogive multidimensional model. In W. J. van der Linden & R. K. Hambleton (Eds.), Handbook of modern item response theory (pp. 258–270). New York: Springer.
McDonald, R.P. (1999). Test theory: a unified treatment. Mahwah, NJ: Lawrence Erlbaum Associates.
McDonald, R.P. (2003). Behavior domains in theory and in practice. The Alberta Journal of Educational Research, 49, 212–230.
McDonald, R.P. (2010). Structural models and the art of approximation. Perspectives on Psychological Science, 5, 675–686.
Messick, S. (1989). Validity. In R.L. Linn (Ed.), Educational measurement (3rd ed., pp. 13–103). New York: Macmillan.

Mollenkopf, W.G. (1949). Variation of the standard error of measurement. Psychometrika, 14, 189–229.
Mulaik, S.A. (2010). Foundations of factor analysis. New York: CRC Press.
Muthén, B. (1978). Contributions to factor analysis of dichotomous variables. Psychometrika, 43, 551–560.
Muthén, B. (1984). A general structural equation model with dichotomous, ordered category, and continuous latent variable indicators. Psychometrika, 49, 115–132.
Nelder, J.A. & Wedderburn, R.W.M. (1972). Generalized linear models. Journal of the Royal Statistical Society, Series A, 135, 370–384.
Novick, M.R. & Lewis, C. (1967). Coefficient alpha and the reliability of composite measurements. Psychometrika, 32, 1–13.
Qualls-Payne, A.L. (1992). A comparison of score level estimates of the standard error of measurement. Journal of Educational Measurement, 29, 213–225.
Raju, N.S., Price, L.R., Oshima, T.C., & Nering, M.L. (2007). Standardized conditional SEM: A case for conditional reliability. Applied Psychological Measurement, 31, 169.
Rasch, G. (1960). Probabilistic models for some intelligence and attainment tests. Copenhagen: Danmarks Paedagogiske Institut.
Reckase, M.D. (2009). Multidimensional item response theory. New York: Springer.
Skrondal, A. & Rabe-Hesketh, S. (2004). Generalized latent variable modeling. New York: Chapman & Hall.
Spearman, C. (1904a). Proof and measurement of association between two things. American Journal of Psychology, 15, 72–101.
Spearman, C. (1904b). General intelligence, objectively determined and measured. American Journal of Psychology, 15, 201–293.
Spearman, C. (1910). Correlation calculated with faulty data. British Journal of Psychology, 3, 271–295.
Steiner, P.M., & Cook, D. (2012). Matching and propensity scores. In T. D. Little (Ed.), The Oxford handbook of quantitative methods, volume 1 (pp. 237–259). New York: Oxford University Press.
Swaminathan, H., & Gifford, J. A. (1986). Bayesian estimation in the three-parameter logistic model. Psychometrika, 51, 589–601.
Thissen, D., & Wainer, H. (2001). Test scoring. Mahwah, NJ: Lawrence Erlbaum.
Thomson, G.H. (1934). The meaning of 'i' in the determination of 'g.' British Journal of Psychology, 25, 92–99.
Thorndike, R.L. (1951). Reliability. In E.F. Lindquist (Ed.), Educational measurement (pp. 560–620). Washington, DC: American Council on Education.
Wright, B.D. & Stone, M.H. (1979). Best test design: Rasch measurement. Chicago: MESA Press.
Yalcin, I. & Amemiya, Y. (2001). Nonlinear factor analysis as a statistical method. Statistical Science, 16, 275–294.


CHAPTER 8

The IRT Tradition and its Applications

R.J. de Ayala

Abstract

Item response theory (IRT) is based on the premise that one or more unobservable (latent) variables are manifested in observable behaviors. These discrete observable behaviors are converted into continuous measurements through the application of an IRT model. We present and discuss our IRT models in terms of frames of reference, psychometric purpose, and type of response data. The models presented are applicable for affective, attitudinal, and proficiency data. The benefits and advantages of IRT models are given. We briefly discuss parameter estimation and provide a nonexhaustive list of estimation programs. The processes of model-data fit are presented, as are transformations of our continuum's metric.

Key Words: Item response theory, latent variable, logistic model, normal ogive model, psychometrics, maximum likelihood

The Item Response Theory Tradition and Its Applications

In this chapter I discuss a theory of item responses. This paradigm, item response theory (IRT), posits the existence of one or more unobservable (latent) variables that are manifested in observable behaviors. Our construct(s) of interest is represented by the latent variable(s), whereas the observable behaviors may be an individual's responses to items or they may be the observations of (expert/knowledgeable) judges/raters of an individual's behavior. As such, the term item response can reflect an individual's response to a question from an attitude or affective scale, a vocational inventory, a proficiency examination, or it may be a judge's rating. These different ways in which our item responses arise reflect different reference frames. In the typical case, we have persons responding to items, or a two-facet frame of reference. As an example, we might have patient responses to a quality-of-life inventory


or examinee responses to test questions. In contrast, there are assessment situations that involve more than two facets. For example, clinicians' judgments of patients' responses would be a three-facet frame of reference (i.e., patients by responses by clinicians). Stated another way, we may apply IRT in two-facet situations or in cases that have more than two facets, such as persons by items by judges data (i.e., three facets). The two- and three-facet reference frames differ in that with the two-facet we can directly measure an individual on a latent variable without using an intermediary, such as a judge or a rater. Typically, the three-facet reference frame uses a judge or rater as an intervening agent. As will be seen below, the IRT models used in the three-facet reference frame are modified versions of those used in the two-facet frame of reference. We can further modify our models to address different psychometric objectives. Broadly speaking, if

we are interested in locating people (or items) on a latent variable, then we are describing the respondents (or items) in terms of our measurements. In this case, our models might be referred to as descriptive IRT models (e.g., Wilson & De Boeck, 2004). However, a second objective could be to predict or explain the latent item and/or person variables from manifest item and person characteristics. In this case, these models are known as explanatory IRT models (e.g., Wilson & De Boeck, 2004). This may occur when we are interested in testing, for example, a theory of cognitive development by making theory-based predictions about respondent locations. As such, the individual respondent is not necessarily the sole focus of the instrument’s administration. These two objectives, describing and predicting, can be combined in our models. For example, we may be interested in assessing respondents’ efficacy for weight loss and also be interested

in explaining this efficacy in terms of respondent characteristics. The foregoing is intended to show that IRT is more broadly applicable than may be the reader's impression from the literature. In short, when we apply IRT for measurement, we are not restricted to a two-facet frame of reference nor are we forced to confine ourselves to simply describing a person or item's location on a continuum. In the following we will, for simplicity, address the common situation of a two-facet reference frame as well as a descriptive psychometric objective. We begin with some benefits of IRT, progress to a general model, and then proceed to specific IRT models. Following these models we briefly discuss the principles of estimation, model assumptions, fit analysis, metric transformations, and sample sizes. Table 8.1 contains a listing of commonly used symbols used in this chapter, whereas Table 8.2 is a glossary of commonly used terms.

Table 8.1. Commonly Used Symbols

pi(xi): Probability of a response of x on item i
Φ: The cumulative distribution function of the unit normal distribution
Ψ: The logistic distribution function
D: A scaling constant equal to 1.702
γi: Intercept parameter for item i's logit regression line
αi: Slope parameter for item i's logit regression line; item discrimination
αˆi: Estimated slope for item i's logit regression line (discrimination)
δi: Item i's location parameter on the latent construct; δi = −γi/αi
δˆi: Item i's estimated location
θr: Person r's location on the latent construct
θˆr: Person r's estimated location on the latent construct
Ii(θ): Item information
I(θ): Total (or test) information
m (italicized in the original): The number of categories
m (non-italicized in the original): The number of transitions between categories
τk: The transition from rating category k − 1 to rating category k on an item, with k = 1 . . . m; a.k.a. threshold parameter
πν: Latent class ν's proportion
ζ, κ: Metric transformation coefficients

Table 8.2. Commonly Used Terms

Item response function (item characteristic curve), IRF or ICC: The probability of a response of 1 as a function of item and person parameters
Option response function, ORF: The probability of responding in or selecting a particular item option as a function of item and person parameters
Item information, Ii(θ): The reduction in uncertainty about a person's location provided by an item
Total (test) information, I(θ): The reduction in uncertainty about a person's location provided by an instrument
Likelihood function, L: The probability of a set of observations as a function of unknown parameter(s)
Log-likelihood function, lnL: Logarithmic transformation of the likelihood function

Estimation
Maximum likelihood estimation, MLE: A parameter estimation technique in which the location of the likelihood's maximum is the estimate of the unknown parameter(s) underlying the likelihood function
Maximum a posteriori, MAP: A Bayesian parameter estimation technique in which the mode of the posterior distribution of the likelihood is the estimate of the unknown parameter(s) underlying the likelihood function
Expected a posteriori, EAP: A Bayesian parameter estimation technique in which the mean of the posterior distribution of the likelihood is the estimate of the unknown parameter(s) underlying the likelihood function

IRT model assumptions
Conditional independence (local independence): For any group of individuals that are characterized by the same latent location(s), the conditional distributions of the item responses are all independent of one another
Functional form: The data follow the function specified by the model
Dimensionality: Observations on the manifest variables are a function of one or more continuous latent person variables

Invariance: The estimate's characteristic of not changing (in a relative sense) across different samples
Differential item functioning, DIF: An item that displays different statistical properties for different manifest groups after the groups have been matched on a measure of the construct
Focal group: A manifest group of respondents that is investigated to see whether it is disadvantaged by an item
Reference group: A manifest group of respondents that is used as the comparison group to see if the focal group is disadvantaged by an item
Linking: The alignment of two metrics with one another

Benefits of Item Response Theory

With IRT models, it is possible to design instruments with specific characteristics. As an example, we may desire to create an instrument that provides maximum accuracy in person estimation at a particular decision point on the latent continuum (e.g., a cut point). Alternatively, we might need an instrument that provides equiprecise estimates across the continuum. In both cases, we select items not only for appropriate content coverage, but also for their parameter estimates that achieve our objective. Moreover, depending on the IRT model, it is possible to design items that we believe are consistent with cognitive theory and test whether they are. Item response theory has several advantages over Classical Test Theory (CTT). For example, with CTT a person's observed score is directly related to the instrument's characteristics. This is easiest to see in the context of proficiency testing. For example, if one administers a difficult exam to a group of examinees, then their observed scores will be systematically less than the scores they would have received had they been administered an easy exam. In contrast, IRT person estimates are independent of the specific sample of items administered to the person (i.e., "item-free" person estimation). It is this property that allows computerized adaptive testing (CAT) to tailor tests to individual examinees and yield person location estimates that can be compared to one another. (For more information on CAT, see Drasgow & Olson-Buchanan, 1999; Parshall, Spray, Kalohn, & Davey, 2002; Reckase, 1989; Sands, Waters, & McBride, 1997; and van der Linden & Glas, 2010.) A second advantage of IRT is item parameter estimates that are not dependent on the particular sample of examinees (i.e., "person-free" item estimation). In contrast, traditional item statistics, such as item difficulty (i.e., proportion correct) and item discrimination (e.g., the point biserial), depend on the examinee sample. Again this is easiest to see in the context of proficiency testing. For example, an item administered to high-ability examinees will show a higher item difficulty (i.e., an easy item) than when administered to low-ability examinees. Generally, "person-free estimation of item parameters" and "item-free estimation of persons" are examples of item parameter and person parameter invariance, respectively. Therefore, it is possible to create instruments that are free of the particular respondents used in obtaining item parameter estimates as well as obtaining person location estimates

that transcend the particular instruments used in the assessment, provided that one has model-data fit. The third advantage of IRT over CTT concerns measurement error. In CTT, one's assessment of the measurement error for an instrument (i.e., the standard error of measurement) is constant for all persons regardless of his or her observed score; it also depends on the individuals to whom the instrument is administered. However, we know that the amount of measurement error varies across the observed score scale (see Haertel, 2006). As such, the standard error of measurement overestimates the amount of measurement error in some observed scores while underestimating the degree of error in other observed scores. In contrast, with IRT we have an assessment of measurement error for each person rather than this aggregated measurement error. This measurement error statistic, the standard error of estimate (SEE), provides us with an index of the accuracy for each of our person location estimates (θˆ). A person's SEE indicates how uncertain we are about the person location estimate. The larger the SEE, the less certain we are about where the person is located. Conversely, a small SEE means that our instrument is providing us with lots of information about the person's location. This concept of the information that an item and an instrument have for estimating person locations is not found in CTT. In IRT, information is defined at both the item- and instrument-levels. Item information, Ii(θ), refers to the amount of information an item provides for estimating a person's location. Moreover, one can sum the individual item information across the L items on an instrument to produce its total (or test) information, I(θ); that is, I(θ) = Σ_{i=1}^{L} Ii(θ); note the use of subscript i to reflect item information. As an example, Figure 8.1 shows the item information and total information functions for a five-item social anxiety instrument. The total information function (solid bold) shows that our instrument provides the most accurate person location estimates (i.e., has the most information) around 1, but the instrument is useful for accurately estimating persons from roughly –2 to 2.5. Stated another way, the location of an instrument's maximum information is also where the person location estimates have the smallest SEEs, because information is inversely related to the square of the SEE. The figure also shows the item information functions for each of the five items that make up the total information function. These items provide their respective maximum information at different locations across


Figure 8.1 Total and item information plot for a five-item instrument.

our scale and in different amounts; an item’s maximum information has a direct relationship to how well an item discriminates among respondents. For example, item 4 provides its peak information at 1 and over a comparatively narrower range than does item 3. This property allows us to combine items that have different item information maxima and distributions to design an instrument that has specific estimation properties (e.g., equiprecise estimates throughout the continuum). Thus, the information function can be more useful in assessing an instrument’s psychometric quality than reliability estimates. As stated above, these benefits and advantages can only be realized when we have model-data fit. Moreover, implied in some of the advantages is a “degree of reasonableness.” For example, with item-free person estimation we assume that the different samples of items come from an item pool that measures the same construct or that with person-free estimation of item parameters our examinee samples come from the same population. Furthermore, IRT is not a panacea for poorly designed or worded items. However, when standard psychometric principles are followed, the use of IRT will provide benefits over CTT.
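To make the idea of combining item information into total information and a conditional standard error concrete, here is a minimal Python sketch. It is illustrative only: the item parameters are invented, and it uses the standard 2PL item information expression Ii(θ) = αi² pi(θ)[1 − pi(θ)], which is a textbook result rather than a formula given in this chapter.

```python
import numpy as np

def p_2pl(theta, alpha, delta):
    """2PL probability of a response of 1 (logistic metric)."""
    return 1.0 / (1.0 + np.exp(-alpha * (theta - delta)))

def item_information(theta, alpha, delta):
    """Standard 2PL item information: alpha^2 * p * (1 - p)."""
    p = p_2pl(theta, alpha, delta)
    return alpha ** 2 * p * (1.0 - p)

# Hypothetical parameters for a five-item instrument
alphas = np.array([0.8, 1.0, 1.2, 1.8, 1.5])
deltas = np.array([-1.5, -0.5, 0.0, 1.0, 2.0])

theta_grid = np.linspace(-4, 4, 81)
item_info = np.array([item_information(theta_grid, a, d)
                      for a, d in zip(alphas, deltas)])
total_info = item_info.sum(axis=0)        # total (test) information is the sum
see = 1.0 / np.sqrt(total_info)           # SEE is inversely related to sqrt(I(theta))

for th in (-2.0, 0.0, 1.0, 2.5):
    i = int(np.argmin(np.abs(theta_grid - th)))
    print(f"theta={th:+.1f}  I(theta)={total_info[i]:.2f}  SEE={see[i]:.2f}")
```

As in Figure 8.1, the total information peaks where the discriminating items cluster, and the SEE is smallest exactly there.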

A General Model

We begin with a general formulation for an IRT model,

p_i(x_i) = f(\Xi), \qquad (1)

where the probability, pi, of a response x to item i is a function, f(•), of the item and person parameters represented by Ξ; Ξ is the Greek letter Xi. The specific nature of Equation 1 depends on the psychometric context.

One psychometric context concerns our objective in administering our instrument. As mentioned above, this psychometric objective can be to (1) describe the respondents and/or items, (2) predict or explain the latent person and/or item variables from manifest item and person characteristics, or (3) some combination of description and prediction (or explanation). A second psychometric context concerns the responses, x. We categorize our responses as polytomous (e.g., rating scale, Likert scale) or dichotomous (e.g., True/False, correct/incorrect). In this latter case, we will not be concerned with whether these responses arose, for example, from the dichotomization of a normally distributed response variable (cf. tetrachoric correlation). As such, we can classify our IRT models as those used for polytomous data and those used for dichotomous responses. Both polytomous and dichotomous IRT models may be used together to obtain parameter estimates for an assessment. Moreover, specific dichotomous models are constrained variants of specific polytomous models. Note that the responses that we model may be (1) directly provided by a respondent (e.g., responses to a Likert scale, binary responses), (2) assigned to a respondent by a judge/rater according to a rubric, or (3) the outcome of scoring the responses (e.g., correct or incorrect). The application of our IRT model typically translates the discrete responses into a continuous measurement of the respondents and items. It may be obvious that these two psychometric contexts can co-occur (i.e., describing respondents when our responses are polytomous).

We now turn to the issue of the nature of the function f(•). To discuss this we need to adopt a historical perspective; in the following, assume that we have binary responses. Over the previous century, several prominent psychometricians have used the standard cumulative normal distribution (i.e., the normal ogive) as a model for working with item responses (e.g., Lord, 1952; Thurstone, 1925; Tucker, 1946). As such, the function f(•) can be defined as a probit link function between the probability of the response x to an item (p(x)) and Ξ. That is,

p(x) = \Phi(\Xi) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\Xi} \exp\left(-\tfrac{1}{2}z^2\right)\, dz, \qquad (2)

where Φ is the cumulative distribution function of the unit normal distribution (i.e., f(Ξ) = f_probit(Ξ) = Φ(Ξ)) and x is a dichotomous response variable (i.e., x = {0, 1}). Equation 2 is a linear normal

probability unit or probit model; some refer to probit as normit to reflect the use of a normal probability unit. An alternative candidate for the function f(•) is the (inverse) logit link function between the probability of the response and Ξ:

p(x) = \Psi(\Xi) = \frac{e^{\Xi}}{1 + e^{\Xi}} = \frac{1}{1 + e^{-\Xi}}, \qquad (3)

where Ψ is the logistic distribution function (i.e., f(Ξ) = f_logit(Ξ) = Ψ(Ξ)). Equation 3 is a linear logistic probability unit or logit model. In some cases, it is convenient and/or instructive to represent Equation 3 in a log-odds format1 (also known as the logit transformation):

\ln\left[\frac{p(x)}{1 - p(x)}\right] = \Xi. \qquad (4)

To summarize, there are IRT models that use the normal ogive to model the item response function, whereas others use the logistic function. Because the normal ogive models predate the logistic models, we sometimes see the logistic models written with a scaling constant, D, to maximize the similarity of the logistic results with those from the corresponding normal ogive model results (i.e., probit = logit × D = logit × 1.702); de Ayala (2009) contains more information on D. Given that we can transform the logistic class of model results to those of the normal ogive class of models (and vice versa), the decision between the two is based on pragmatic considerations, such as available software. Moreover, whether we use the logistic models (with or without the D scaling constant) or the normal ogive models does not affect our model-data fit. However, because the logistic models do not require integration, they are more commonly used than the normal ogive models. When we use the normal ogive models (or the logistic models with the D scaling constant), our results are said to be on the normal metric; otherwise the results are on the logistic metric. The importance of this distinction comes into play when making comparisons with other related techniques. For example, because there is fundamentally no difference between a single-factor analytic model and a unidimensional IRT model, it is possible to estimate IRT item parameters using a factor analysis routine. However, because these results are on the normal metric, to compare them or to use them with estimates from the logistic class of models requires converting from the normal metric to the logistic metric or vice versa.
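The role of the scaling constant D is easy to check numerically. The sketch below (illustrative only; it uses nothing beyond the Python standard library) compares the normal ogive Φ(x) with the logistic function evaluated at D·x and reports the largest discrepancy over a grid.

```python
import math

D = 1.702  # scaling constant

def probit(x):
    """Normal ogive: CDF of the unit normal, via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def logistic(x):
    return 1.0 / (1.0 + math.exp(-x))

# Compare Phi(x) with the logistic evaluated at D*x over [-4, 4]
max_gap = max(abs(probit(x / 100) - logistic(D * x / 100))
              for x in range(-400, 401))
print(f"largest |probit(x) - logistic(1.702 x)| on [-4, 4]: {max_gap:.4f}")
# roughly 0.01 -- the two families are practically interchangeable
```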

two-parameter model

Let us now turn our attention to Ξ and present our first IRT model, the two-parameter model. We begin with this model because we consider it to be the nexus model. That is, almost all other models are an extension or a constrained version of the two-parameter model. For the two-parameter model we let

\Xi = \gamma_i + \alpha_i\theta_r, \qquad (5)

where γi and αi are the intercept and slope parameters for item i's logit regression line, respectively, and θr is person r's location on the latent construct. We will refer to Equation 5 as either the slope-intercept or the linearized form. The two-parameter model is so-called because it contains two parameters to characterize the item; we discuss these parameters below. If we substitute Equation 5 into Equation 3, then we obtain the two-parameter logistic (2PL) model. The 2PL model states that the probability of a response of 1 given person r's latent location of θ is given by

p_i(x_i = 1 \mid \theta_r) = \Psi(\gamma_i + \alpha_i\theta_r) = \frac{1}{1 + e^{-(\gamma_i + \alpha_i\theta_r)}} = \frac{e^{\gamma_i + \alpha_i\theta_r}}{1 + e^{\gamma_i + \alpha_i\theta_r}}. \qquad (6)

By a "response of 1" we mean that the response x has been categorized or coded as a 1 and may represent, for example, a correct response, a response of True, and so on. A "response of 0" would represent the complementary event, such as an incorrect response, a response of False, and so forth. The theoretical range of θr, αi, and γi is −∞ to ∞. Empirical values for θr typically fall between –3 and 3, with respondents located toward the upper end of the continuum (e.g., θr = 2) reflecting more of whatever the latent variable is than respondents located toward the lower end of the continuum (e.g., θr = −2). Items that discriminate well have values of αi above 0.8; a negative αi indicates an item that is inconsistent with the model or has been coded incorrectly. As will be seen below, the observed range of γi is not as important as that of a related parameter, the item's location. To help us understand the meaning of γi and αi, we present the logit, γ + αθ, as a function of θ in Figure 8.2. Let us assume that the example item shown in this figure is from a personality inventory having to do with social anxiety and uses a true/false response format. Stated another way, our latent variable of interest is social anxiety. Moreover, the right

side of the continuum reflects higher social anxiety than does the left side of the continuum. To make our example more concrete, our item is "I feel socially anxious at parties." Given our true/false response format, we code a response of true as 1 and a response of false as 0. The item's logistic regression line has a slope (α) of 1.5 and an intercept (γ) equal to 0.75. As can be seen, the item's intercept parameter is the point on the logit scale (γ + αθ) where the logit regression line intersects with the vertical axis (ordinate) when θ = 0, and the slope of the line (indicated by the right triangle) is α. The logistic regression line shows that the log odds (or logit) of a true response increases as θ increases. For example, assume a person is located at 1 (i.e., θ = 1). Starting at θ = 1 on the horizontal axis (abscissa), going up to the logit regression line, and then projecting over to the ordinate, we see that the logit is 2.25. That is, a person located at 1 has a log odds of 2.25 (logit = 2.25) of responding true to our item. In terms of odds, a person located at 1.0 is almost 9.5 times more likely to respond true to being socially anxious at parties rather than false (i.e., odds = exp(2.25) = 9.49). Conversely, a person located at –1 has a logit value of –0.75, or odds of responding true to our item of 0.47. In other words, a person located at –1 is more than twice as likely to respond false to being socially anxious at parties as opposed to answering true (1/.47 = 2.12).

Figure 8.2 Logit space plot for an item with α = 1.5 and γ = 0.75.

We can reparameterize Equation 5 into a difference or deviate form by letting γi = −αiδi. In this parameterization, δi is item i's location on the latent construct. Furthermore, this reparameterization permits another way of interpreting γi—namely, as the interaction of an item's discrimination and its location. If we substitute −αiδi for γi in Equation 5 and factor, then we obtain the logistic deviate form of Equation 5,

\Xi = \alpha_i(\theta_r - \delta_i). \qquad (7)

In this form we see that the primary determinant of the probability of a response of 1 is the weighted difference between the person and item locations. As such, persons and items are located on the same latent continuum. The theoretical range of δi is −∞ to ∞, with empirical values for δi typically falling between –3 and 3 logits. Generally speaking, items with δi's greater than 0 are comparatively more difficult to endorse than items located below 0. In the context of proficiency assessment, the item latent location parameter δ is referred to as item difficulty and the person latent location parameter (θ) is known as ability or proficiency. Applying Equation 7 to our logistic distribution function, Ψ(Ξ) (Equation 3), we obtain an alternative representation of the 2PL model (all other terms are essentially the same as above):

p_i(x_i = 1 \mid \theta_r, \alpha_i, \delta_i) = \Psi(\alpha_i(\theta_r - \delta_i)) = \frac{1}{1 + e^{-\alpha_i(\theta_r - \delta_i)}} = \frac{e^{\alpha_i(\theta_r - \delta_i)}}{1 + e^{\alpha_i(\theta_r - \delta_i)}}. \qquad (8)

Equation 8 has a graphical analog to the logit space plot shown in Equation 6. Specifically, the graphical representation of the relationship between θ and the probability of a response of 1 for an item is the (predicted) item response function (IRF); sometimes the IRF is called the item characteristic curve (ICC). Figure 8.3 contains the IRF for our example social anxiety item. As can be seen, as θ increases, so does the probability of a response of 1. To obtain the IRF corresponding to our social anxiety item (αi = 1.5, γi = 0.75), we first determine that the item is located at δi = −γi/αi = −0.75/1.5 = −0.50. We then calculate the probability of a response of 1 according to our IRT model (e.g., Equation 8) for values of θ from –3 to 3. For our item, we see that respondents located above –0.5 on the latent continuum are most likely to respond true to feeling anxious at parties and that persons located below –0.5 are most likely to respond false. In fact, a person located at –2 or below has less than a 10% chance of responding true, whereas a person located at 1 or above has at least a 90% chance of responding true (i.e., given θ = 1, αi = 1.5, and δi = −0.50, then according to Equation 8 the probability of answering true is 0.905). In short, the more socially anxious our respondent is (i.e., the higher his or her θ), the more likely it is that he or she will respond true to our item.
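The arithmetic in this worked example is easy to reproduce. The following minimal Python sketch recomputes the logit, odds, and 2PL probabilities for the hypothetical social anxiety item (α = 1.5, γ = 0.75, hence δ = −0.50); the values are the ones cited in the text, everything else is just illustration.

```python
import math

alpha, gamma = 1.5, 0.75        # slope-intercept parameters of the example item
delta = -gamma / alpha          # item location: -0.75 / 1.5 = -0.50

def p_2pl(theta):
    """2PL probability of responding 'true' (coded 1), deviate form (Equation 8)."""
    return 1.0 / (1.0 + math.exp(-alpha * (theta - delta)))

for theta in (-2.0, -1.0, 1.0):
    logit = gamma + alpha * theta            # slope-intercept form (Equation 5)
    odds = math.exp(logit)
    print(f"theta={theta:+.1f}  logit={logit:+.2f}  odds={odds:5.2f}  p(true)={p_2pl(theta):.3f}")

# theta = +1.0 gives logit 2.25, odds ~9.49, and p(true) ~0.905,
# matching the values reported in the text; theta = -2.0 gives p(true) < .10.
```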


Figure 8.3 IRF for an item with αi = 1.5 and δi = −0.5.

There are other features of the IRF shown in Figure 8.3 worth making salient. First, and as stated above, the item location δ is on the same continuum as the respondent location θ. This characteristic of having items and people located on the same continuum is not found in CTT. (In CTT an item's difficulty is on a 0 to 1 scale and the observed score is not.) Further, it can be seen that δ corresponds to the inflexion point of the IRF and that the probability of a response of 1 at this point is 0.50. In this context and at first glance it may be difficult to visualize how the item's other parameter, αi, comes into play. As was the case with the slope-intercept form of the 2PL model, αi is related to the slope, but in Equation 8 it is proportional to the slope of a line tangent to the IRF at δ. Because the slope of the line relates to how well the item can differentiate among respondents located at different points on the continuum, α is typically referred to as the item's discrimination parameter. As one would expect given the use of the logistic function, the lower and upper asymptotes of the IRF are 0 and 1, respectively.2
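Both properties can be checked numerically. In this small sketch (illustrative only), the probability at θ = δ comes out as .50, and the numerically estimated slope of the IRF at δ equals α/4 on the logistic metric, a standard result that is not derived in the chapter but confirms that the tangent slope is proportional to α.

```python
import math

alpha, delta = 1.5, -0.5

def p(theta):
    """2PL IRF for the example item."""
    return 1.0 / (1.0 + math.exp(-alpha * (theta - delta)))

print(f"p(delta) = {p(delta):.3f}")                  # 0.500 at the inflexion point

h = 1e-5
slope_at_delta = (p(delta + h) - p(delta - h)) / (2 * h)   # central-difference derivative
print(f"slope at delta = {slope_at_delta:.4f} (alpha/4 = {alpha / 4:.4f})")
```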

one-parameter model

Equation 8 can be simplified by imposing the constraint that all items on a scale share a common discrimination parameter. Therefore, items differ from one another only in terms of their locations on the latent continuum. Concerning ourselves solely with the logistic deviate, this constraint would be represented by dropping the subscript on α. Thus, we have

\Xi = \alpha(\theta_r - \delta_i). \qquad (9)

Typically, the one-parameter model is expressed using the logistic distribution function, Ψ(Ξ). In this case the model is called the one-parameter logistic (1PL) model. The implications of imposing a constant α constraint are that the manifest observed score, Xr, is a sufficient statistic for estimating a person r's location and the sum of item responses across respondents (i.e., the manifest item score, qi = Σr xri) is a sufficient statistic for estimating the item i's location. As such, all persons with the same observed score obtain the same estimated location, θˆ, and all items with the same item score have the same estimated location, δˆ. This characteristic can be used to simplify parameter estimation and facilitates communicating the results to laypeople because of the direct relationship between the manifest and latent variables. The 1PL model is sometimes referred to as the Rasch model (Rasch, 1961, 1980), although others restrict the equivalence of the 1PL and the Rasch model to when α = 1.

extending our model

Rather than concerning ourselves only with locating respondents on the latent continuum, we may wish to predict or explain the differences between respondents in terms of their person parameters. In this context, our model could be considered a person explanatory model (see Wilson & De Boeck, 2004). This prediction or explanation is based on a weighted linear composite of manifest variables—that is, θr = Σk bkZk + εr. As an example, assume that we believe that our respondents' social anxiety (i.e., their locations on the social anxiety continuum) can be "explained" in terms of two predictors, the respondent's experience with past public humiliations (a binary variable, yes/no), and tendency to worry. Our analysis would allow us to determine the effect of each predictor in explaining the variability of social anxiety (latent) locations as well as an assessment of how much of the variability is accounted for/explained.
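The person-explanatory idea can be sketched with a toy simulation. In the sketch below every variable name and coefficient is hypothetical, and for simplicity the respondent locations are treated as known rather than latent; in practice the latent regression would be estimated jointly with the IRT model (as in explanatory IRT), but the decomposition is the same.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500

# Hypothetical manifest predictors
humiliation = rng.integers(0, 2, size=n)     # past public humiliation (0/1)
worry = rng.normal(0.0, 1.0, size=n)         # tendency to worry

# Hypothetical generating model: theta_r = b1*Z1 + b2*Z2 + error
theta = 0.8 * humiliation + 0.5 * worry + rng.normal(0.0, 0.6, size=n)

# Regress the locations on the predictors to recover the effects and R^2
X = np.column_stack([np.ones(n), humiliation, worry])
b, *_ = np.linalg.lstsq(X, theta, rcond=None)
resid = theta - X @ b
r2 = 1.0 - resid.var() / theta.var()
print("estimated effects (intercept, humiliation, worry):", np.round(b, 2))
print("proportion of variance in the locations accounted for:", round(r2, 2))
```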

linear logistic test model

We can modify the logistic deviate used with the 1PL/Rasch model to incorporate information about the cognitive operations that underlie our observed responses. The resulting model is the linear logistic test model (LLTM). The LLTM (Fischer, 1973) is an example of the aforementioned psychometric objective of predicting or explaining the latent item variable from manifest item characteristics. Again, concerning ourselves solely with the deviate, we would have

\Xi = \alpha(\theta_r - \delta_i) = \alpha\left(\theta_r - \left[\sum_{s} f_{is}\eta_s + C\right]\right), \qquad (10)

where the item's location is a weighted linear composite of item characteristics—that is, δi = Σs fis ηs + C. The ηs is a basic parameter associated with elementary component s (s = 1 . . . S), fis is the weight of component s for item i, and C is a normalization constant. The fis's could be the hypothetical frequencies with which each component s influences the response to item i or may simply reflect whether a component is necessary for responding to an item; when S equals the number of items, then the LLTM is equivalent to the Rasch model (Embretson, 1984). The ηs's typically reflect the psychological structure of an item. For example, they may correspond to cognitive operations underlying a response (or the difficulties thereof), instructional conditions (characterized by their efficacy), item characteristics determined by an experimental design, and so on. Typically, α = 1 in the LLTM. The LLTM is another example of a model that fulfills the objective of predicting or explaining the latent item variables from manifest item characteristics. If the data used with the LLTM arise from an experimental design that investigates or that uses item characteristics to explicate the response data, then we view the LLTM as serving an explanatory objective. However, if the data arise from a nonexperimental setting (i.e., without experimental control and random selection and assignment), then it is most accurate to consider the LLTM as fulfilling a predictive objective. The LLTM can be considered an example of an item explanatory model (see Wilson & De Boeck, 2004). Although all occurrences of the LLTM of which we are aware formulate the LLTM using Ψ(Ξ) (Equation 3), the model could utilize a probit link (i.e., the linear probit test model). The reader is referred to Baker (1993a), Embretson (1985, 1996), Fischer (1973), Frederiksen, Mislevy, and Bejar (1993), and Irvine and Kyllonen (2002) for greater detail on the LLTM and its application.
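To make the decomposition of item locations concrete, here is a minimal Python sketch. The component weight matrix, the basic parameters, and the normalization constant are all hypothetical; the point is only that the LLTM builds each δi from item characteristics and then evaluates the resulting Rasch/1PL probabilities.

```python
import numpy as np

# Hypothetical design: 4 items, 2 cognitive components
# F[i, s] = weight of component s for item i (e.g., how often it is required)
F = np.array([[1, 0],
              [1, 1],
              [0, 1],
              [2, 1]], dtype=float)
eta = np.array([-0.4, 0.9])   # basic parameters: component difficulties (hypothetical)
C = 0.1                       # normalization constant (hypothetical)

delta = F @ eta + C           # item location as a weighted composite (Equation 10)
print("model-implied item locations:", np.round(delta, 2))

def p_1pl(theta, delta, alpha=1.0):
    """Rasch/1PL probability of a response of 1 (alpha fixed at 1 in the LLTM)."""
    return 1.0 / (1.0 + np.exp(-alpha * (theta - delta)))

print("p(x = 1 | theta = 0):", np.round(p_1pl(0.0, delta), 3))
```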

facet model

We started this chapter talking about frames of reference. One of our scenarios was a three-facet reference frame involving clinicians' judgments of patients' conditions based on the patients' responses. In this case, our manifest observations arise or are the product of the interaction of the patient (facet 1), the responses (facet 2), and the clinician (facet 3). Therefore, the observations that are the basis of our measurement are given by a clinician's judgment of a patient's responses rather than directly from the

patient's responses. As stated above, a key characteristic of a three-facet reference frame is that the data come from an intervening agent's judgments of an individual's interactions with the questions that are posed to him or her. In contrast, in a two-facet reference frame, our data arise directly from our respondent's interactions with the items. We can extend the 1PL/Rasch model from a two-facet frame of reference (i.e., participants by items) to a multifacet frame of reference. This extension is known as the Facet model or the Many-Facet Rasch model (MFRM; Linacre, 1988, 1989). In contrast with the above models' focus on dichotomous responses, the MFRM can be used with polytomous ratings as well as dichotomous responses. Our introduction of modeling polytomous responses reflects the fact that, typically, judges/raters use a rating scale with more than just two categories (i.e., a polytomous rating scale); implicit in a rating scale is that the rating response categories are ordered. Because the MFRM can theoretically be applied to any number of facets, to present it we need to specify the number of facets. In this light, the logistic deviate for the MFRM for a (most common) three-facet framework is given by

\Xi = \alpha(\theta_r - \delta_i - \omega_j - \tau_k), \qquad (11)

where α and δi represent one facet (item i) and are defined as above, θr represents a second facet (respondent r) and is defined as above, ωj represents the third facet (the jth rater/judge/grader), and τk represents the transition from rating category k − 1 to rating category k on an item, with k = 1 . . . m; the non-italicized "m" is the number of transitions between categories.

The symbol ωj represents the jth rater/judge/grader's severity, whereas τk represents the relative difficulty of being rated in the kth category over the (k − 1)th category (e.g., category 1 vs. category 0). The lowest rating category is coded 0, and so the number of rating or response categories is given by the italicized m (which equals the non-italicized m + 1); we italicize "m" to represent the number of categories. (In the context of dichotomous models, our items would have two response categories (x = {0, 1}) and m would equal 2, with one transition between the response categories (i.e., m = 1) that occurs at the item's location, δi.) The three-facet deviate shown in Equation 11 can be extended to include additional facets (e.g., occasions) by adding a corresponding parameter to the deviate. For an example of a four-facet reference frame, see Smith and Kulikowich (2004).


In contrast to the commonly used logistic distribution function presentation of an IRT model, the MFRM is typically represented in the log-odds format (Equation 4). This form is obtained by substituting Equation 11 into Equation 4. Thus, the log-odds of respondent r being given a rating in category k instead of a rating in category (k − 1) on item i by judge j is

\ln\left[\frac{p(x_{rijk})}{p(x_{rij(k-1)})}\right] = \Xi = \alpha(\theta_r - \delta_i - \omega_j - \tau_k), \qquad (12)

where p(x_rijk) is the probability of respondent r being given a rating in category k on item i by judge j, p(x_rij(k−1)) is the probability of receiving a rating in category (k − 1) on item i by judge j, and k = {1, . . . , m}.
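The log-odds form is easy to evaluate directly. The following sketch (all parameter values hypothetical) computes, for one respondent on one item, the log odds of receiving a rating in category k rather than k − 1 from two judges who differ in severity.

```python
import math

alpha = 1.0          # common discrimination (typically fixed at 1 in the MFRM)
theta = 0.8          # respondent location (hypothetical)
delta = -0.2         # item location (hypothetical)
tau = [-0.5, 0.6]    # category transitions for a three-category rating scale (hypothetical)

for omega, label in [(-0.3, "lenient judge"), (0.7, "severe judge")]:
    for k, tau_k in enumerate(tau, start=1):
        log_odds = alpha * (theta - delta - omega - tau_k)   # Equation 12
        odds = math.exp(log_odds)
        print(f"{label}: category {k} vs {k - 1}: log-odds={log_odds:+.2f} (odds={odds:.2f})")
```

As expected, the more severe judge lowers the odds of the higher rating for the same respondent and item.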

In terms of our logistic distribution function, Ψ(Ξ) (Equation 3), our MFRM would be written as

p(x_{rijk} \mid \theta_r, \alpha, \delta_i, \omega_j, \tau_k) = \Psi(\Xi) = \frac{\exp\left[\sum_{k=0}^{x} \alpha(\theta_r - \delta_i - \omega_j - \tau_k)\right]}{1 + \sum_{v=1}^{m} \exp\left[\sum_{k=0}^{v} \alpha(\theta_r - \delta_i - \omega_j - \tau_k)\right]} = \frac{\exp\left[\sum_{k=0}^{x} \alpha(\theta_r - \delta_i - \omega_j - \tau_k)\right]}{\sum_{v=0}^{m} \exp\left[\sum_{k=0}^{v} \alpha(\theta_r - \delta_i - \omega_j - \tau_k)\right]}, \qquad (13)

where p(x_rijk) is the probability that respondent r is judged by rater j to be in item i's category k (i.e., the probability of a rating score of x_rijk, where x_rijk = {0, 1, . . . , m}). As presented above, the MFRM assumes that the rating scale is constant across an item set as well as for all judges/raters/graders; these assumptions may be relaxed. Moreover, as was the case with the LLTM, α is typically set to 1. A MFRM analysis produces estimates of the person and item locations as well as an assessment of the judges' severity.

generalized partial credit and partial credit models

As was the case with the MFRM, the generalized partial credit (GPC) model (Muraki, 1992) can be used with ordered polytomous response data. As an example, consider the National Survey of Student Engagement, in which students respond to a series of questions designed to measure collegiate quality using a response format such as "very often," "often," "sometimes," and "never." Alternatively, the ordered polytomous response data can represent a partial credit proficiency assessment situation (i.e., 0 points = no credit, 1 point = partial credit response, 2 points = full credit) or a Likert response scale. The GPC model assumes that the probability of selecting a particular response category over the previous one is governed by the dichotomous 2PL model (Equation 8). As a result of applying this "dichotomized process" across an item's successive response categories, one obtains a model whose logistic deviate is

\Xi = \sum_{h=1}^{k} \alpha_i(\theta_r - \delta_{ih}). \qquad (14)

Therefore, the GPC model is

p(x_{ik} \mid \theta_r, \alpha_i, \boldsymbol{\delta}_i) = \frac{\exp\left[\sum_{h=1}^{k} \alpha_i(\theta_r - \delta_{ih})\right]}{\sum_{c=1}^{m_i} \exp\left[\sum_{h=1}^{c} \alpha_i(\theta_r - \delta_{ih})\right]}, \qquad (15)

where p(xik | θr, αi, δi) denotes the probability of a person located at θr responding in item i's category k (i.e., xik) given item parameters αi and δi; for notational convenience, "exp[·]" is used in lieu of "e^(·)". As was the case with the 2PL model, the subscript on item i's discrimination parameter, αi, indicates that items can differ in their discrimination. Additionally, δi represents item i's set of transition location parameters, the δih's, so that δi = [δi2, δi3, . . . , δimi]. That is, the transition location parameter δih reflects the transition from the (h − 1) response category to the (next) hth category. Because Muraki arbitrarily defines the first transition location parameter as zero (i.e., δi1 ≡ 0), there are mi − 1 transition locations (i.e., δi2, δi3, . . . , δimi); the number of response categories mi is free to vary across items. (Note that we use italicized "m" to indicate the number of response categories and non-italicized "m" to indicate the number of transition locations [cf. MFRM].) The probability of responding in a category as a function of the latent variable is graphically depicted by a response category's option response function (ORF). Figure 8.4 contains an example ORF for a three-category item. Let us assume that our example item comes from a scale to measure quality of life and asks the respondent to rate the quality of his or her relationships on a three-category scale (1 = "unsupportive/unsympathetic," 2 = "neutral," and


3 = "supportive/sympathetic"). As can be seen, a person who feels that he or she has a low quality of life (e.g., he/she is located at –2) would probably respond in category 1 as opposed to categories 2 or 3. In this case, applying the GPC model (Equation 15) yields a probability of 0.72 of responding in category 1. In fact, any person located below –1 has a higher probability of responding "unsupportive/unsympathetic" than in any of the other categories. This is what is represented by category 1's ORF (the dashed line). Clearly, as the person's quality of life increases, the greater the probability that a person will respond in category 2 and eventually in category 3. For example, a person with high quality of life (e.g., θ = 2.5) has a probability of 0.81 of responding in category 3, a probability of 0.18 of responding in category 2, and a 0.01 probability of responding in category 1. As can be seen from Figure 8.4, for a given θ, the sum of the probabilities across response categories is 1 (e.g., for θ = 2.5 we have 0.81 + 0.18 + 0.01 = 1). Figure 8.4 also shows that the transition location parameter represents the intersection point of adjacent ORFs. With our three-category item (mi = 3) we have two transition location parameters. Our first transition from response category 1 to category 2 occurs at δi2, and our second transition from category 2 to category 3 occurs at δi3. Although for our example item the transition location parameters are in order, there is no requirement in the model that they be ordered. (An alternative to the GPC model is Samejima's [1969] graded response model.) The GPC model can be simplified to obtain the Rasch partial credit model (Masters, 1982) by imposing the constraint that all items have the same discrimination. That is,

\Xi = \sum_{h=1}^{k} \alpha(\theta_r - \delta_{ih}). \qquad (16)


Note the omission of the item subscript on α. Like the GPC model, the PC model can be applied to ordered polytomous data as well as to dichotomous response data. In this latter case, the PC model simplifies to the Rasch/1PL model.
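The category probabilities reported above for the Figure 8.4 item can be reproduced directly from Equation 15. The following is a minimal Python sketch; it fixes δi1 at 0 as in Muraki's parameterization, and the same function serves the PC model simply by using a common α for all items.

```python
import numpy as np

def gpc_probs(theta, alpha, deltas):
    """GPC category probabilities (Equation 15); deltas = [delta_i2, ..., delta_im]."""
    # cumulative deviates: sum_{h=1}^{k} alpha*(theta - delta_ih), with delta_i1 = 0
    d = np.concatenate(([0.0], np.asarray(deltas, dtype=float)))
    cum = np.cumsum(alpha * (theta - d))
    num = np.exp(cum)
    return num / num.sum()

# Figure 8.4 item: alpha = 1.0, delta_i2 = -1, delta_i3 = 1
for theta in (-2.0, 2.5):
    probs = gpc_probs(theta, alpha=1.0, deltas=[-1.0, 1.0])
    print(f"theta={theta:+.1f}:", np.round(probs, 2))

# theta = -2.0 gives a category 1 probability of about 0.72;
# theta = +2.5 gives about [0.01, 0.18, 0.81], matching the values in the text.
```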

generalized rating scale and rating scale models

Although we can use either the GPC or PC models with rating scale data (e.g., a Likert response scale), if we believe or are willing to assume that the relative difficulty of endorsing one rating over another is the same for all items using a common rating scale, then we can further simplify our models. In this case, we can decompose an item's transition locations, the δih's, into an item location parameter and a set of threshold components. That is, each item has a location on the latent continuum (δi), and the transition across adjacent rating categories is captured by a series of threshold parameters (τh's) that are constant for a common rating scale. As an example, imagine that we have two items that use a three-point rating format (D = Disagree, N = Neutral, and A = Agree). Figure 8.5 shows the item locations (δ1 ≈ −0.7 and δ2 ≈ 0.4) and the associated common set of threshold parameters (τ1 and τ2). Although items 1 and 2 are located at different points on the latent variable, the difficulty in endorsing the neutral category over the disagree category (represented by the threshold τ1) is the same for both items. Similarly, the difficulty in endorsing the agree category over the neutral category (represented by the threshold τ2) is also the same for both items. (Note that with four or more categories the thresholds do not have to be equidistant.) As can be seen, the thresholds are offsets from the item's location. Therefore, although the two items share a common set of thresholds, the actual location of the transition from—say, neutral to agree—occurs at different points on the latent variable's continuum. For example, for item 1, the location of the transition from neutral to agree occurs at approximately –0.65, whereas for item 2, the transition occurs at 0.5. The foregoing can be generically represented symbolically as δih = δi − τh, where δi is the item location and τh is the threshold between categories (h – 1) and h. Therefore, by substitution into Equation 14, the logistic deviate for the generalized rating scale (GRS) model (Muraki, 1990) is

\Xi = \sum_{h=1}^{k} \alpha_i(\theta_r - \delta_i + \tau_h). \qquad (17)

Figure 8.4 ORFs for an item with αi = 1.0, δi2 = −1, and δi3 = 1.

Figure 8.5 Conceptualization of two items with one three-point rating scale.

Muraki (1992) has interpretted the τh as the relative difficulty of “step” h “…in comparing other steps within an item” (p. 165); “difficulty” may also be interpreted as the difficulty of endorsing a particular category. Moreover, the τh s do not need to be sequentially ordered across the categories. The GRS allows items to vary in their capacity to discriminate among respondents located at different points on the latent continuum. The GRS model can be simplified to obtain the Rasch rating scale (RS) model (Andrich, 1978a, 1978b, 1978c) by imposing the constraint that all items have the same discrimination. That is, =

∑_{h=1}^{k} α(θr − δi + τh ).    (18)

(Note the omission of the item subscript on α to indicate a constant value of α across items.) As such, the RS model can be viewed as a simplification of the GRS model. Alternatively, one can view the RS model as a constrained or reparameterized version (i.e., δih = δi − τh ) of the PC model (Equation 16). In either case, the RS model simplifies to the Rasch/1PL model when applied to dichotomous data.
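Because the RS and GRS models are simply the PC/GPC models with the transition locations rewritten as δih = δi − τh, their category probabilities can be obtained with the same cumulative-logit computation sketched above. In the sketch below the threshold values are illustrative assumptions chosen only to resemble Figure 8.5; they are not values reported in the chapter.

```python
import math

def gpc_probs(theta, alpha, transition_locations):
    """GPC/PC category probabilities (cumulative-logit formulation)."""
    cum = [0.0]
    for d in transition_locations:
        cum.append(cum[-1] + alpha * (theta - d))
    denom = sum(math.exp(z) for z in cum)
    return [math.exp(z) / denom for z in cum]

def rating_scale_probs(theta, alpha, delta, taus):
    """GRS/RS category probabilities via delta_ih = delta_i - tau_h
    (RS when alpha is held constant across items)."""
    return gpc_probs(theta, alpha, [delta - tau for tau in taus])

# Two items sharing one three-point D/N/A scale (cf. Figure 8.5);
# the tau values below are assumed for illustration only.
taus = [0.5, -0.05]
for delta in (-0.7, 0.4):
    print(delta, [round(p, 2) for p in rating_scale_probs(0.0, 1.0, delta, taus)])
```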

Nominal Response Model

The above polytomous IRT models assume that certain responses indicate more of what is being measured than do other responses. Thus, the corresponding response categories contain information about the magnitude of the construct being measured by the item. In contrast, in some cases the responses are not inherently ordered. In these cases, a response is simply distinct from the other responses. For example, assume that we are interested in measuring social anxiety. One of our items could be “I feel uncomfortable at parties” with a response format of “yes,” “no,” and “not applicable.” Because our format’s response categories cannot be ordered to reflect the degree of social anxiety, any set of numbers that we use to represent the different response categories is arbitrary (e.g., 1 = “yes,” 2 = “no,” 3

= “not applicable”; 1 = “no,” 2 = “not applicable,” 3 = “yes”; etc.). In short, we are using a nominal (also known as a categorical) response format. As is the case with our ordered response formats, this nominal response format consists of mutually exclusive response categories. To model the response behavior involving a nominal response format, we return to Equation 4 (for the reader’s convenience, it is presented here as Equation 19). This equation provides the log odds of a response of 1 (numerator) relative to a response of 0 (denominator). Stated another way, Equation 19 provides the log odds (logit) of a response in category 1 compared to a response in the baseline category 0:

ln[p(x)/(1 − p(x))].    (19)

We can extend this idea to multiple response categories. That is, for each of our response categories, we can determine the log odds (or odds) that a respondent will select a particular response category relative to a baseline category. In terms of our social anxiety example, we might arbitrarily select the “not applicable” response as our baseline category and determine the log odds (or odds) of a respondent providing a “yes” or “no” response relative to a “not applicable” response. Moreover, rather than talking about the log odds (or odds) of a response in one category over another (baseline) category, we can directly express the probability of a particular response given the baseline category. Therefore, we modify Equation 5 to incorporate category parameters, so that the logit is

γih + αih θr ,    (20)

where γih and αih are, respectively, the h th category intercept and slope parameters for item i’s logit regression line and θr is person r’s location on the latent construct. By substitution of Equation 20 into Equation 3 (and simplifying), we obtain Bock’s (1972) nominal response (NR; also called the nominal categories) model:

p(xik |θr , αi , γi ) = exp(γik + αik θr ) / ∑_{h=1}^{mi} exp(γih + αih θr ),    (21)

where p(xik |θr , αi , γi ) is the probability of responding in category k on item i, mi is the number of response categories for item i, the vector αi contains the mi slope parameters (αi = [α1 , . . . , αmi ]), and the vector γi contains item i’s intercept parameters (γi = [γ1 , . . . , γmi ]). To identify the model, the baseline response category’s slope and intercept are set to 0; by convention the baseline category is the response category with the highest frequency. Therefore, item i has mi − 1 slope parameters and mi − 1 intercept parameters. However, these mi − 1 sets of parameters can be transformed so that each response category has a slope and an intercept parameter subject to the constraints ∑_{h=1}^{mi} αih = 0 and ∑_{h=1}^{mi} γih = 0. As is the case with the above polytomous models, the probability of responding in each response category as a function of the latent trait θ can be graphically depicted by the item’s ORFs. In general, these have the appearance of those shown in Figure 8.4. That is, one ORF will be monotonically increasing (e.g., category 3 in Fig. 8.4), one will be monotonically decreasing (e.g., category 1 in Fig. 8.4), with any remaining ORFs appearing as unimodal and symmetric (e.g., category 2 in Fig. 8.4). To obtain the transition point (i.e., intersection) between the ORFs for categories k and k*, δk*,k , we would calculate

δk*,k = (γk* − γk ) / (αk − αk* ),    (22)

where mi > 2, k* < k, and αk* ≠ αk . When we have a dichotomous response format (mi = 2), the NR model reduces to the 2PL model. Moreover, by appropriately reparameterizing the NR model one can obtain the GPC and PC models. This hierarchical relationship would allow one to compare the relative fit of a model assuming a nominal response format with that of a model assuming an ordered format. Therefore, one has a way of investigating those situations in which we may believe responses should be ordered but are not sure of the order.
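As a sketch of Equations 21 and 22, the following computes NR category probabilities and the intersection point between two categories’ ORFs. The item parameters are made up for illustration (with the baseline category’s slope and intercept fixed at 0); only the functional form follows the chapter.

```python
import math

def nominal_probs(theta, slopes, intercepts):
    """Nominal response model (Eq. 21): one slope/intercept per category."""
    logits = [g + a * theta for a, g in zip(slopes, intercepts)]
    denom = sum(math.exp(z) for z in logits)
    return [math.exp(z) / denom for z in logits]

def orf_intersection(a_k_star, g_k_star, a_k, g_k):
    """Transition point between the ORFs of categories k* and k (Eq. 22)."""
    return (g_k_star - g_k) / (a_k - a_k_star)

# Hypothetical three-category item; category 1 is the baseline (0, 0).
slopes = [0.0, 0.8, 1.6]
intercepts = [0.0, 0.4, -0.5]
print([round(p, 2) for p in nominal_probs(0.5, slopes, intercepts)])
print(round(orf_intersection(slopes[1], intercepts[1], slopes[2], intercepts[2]), 2))
```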

multidimensional two-parameter model

The foregoing models all include a single person location parameter to denote a unidimensional construct. However, in some situations it is more plausible that multiple latent variables account for

the observed data. For example, using our social anxiety example, we might hypothesize that there are two factors at the root of socially anxious behavior. One dimension is a self-consciousness factor with opposing endpoints of private and public self-consciousness, whereas the other dimension is a generalized anxiety factor. As such, these factors would be modeled using two latent variables: generalized anxiety and self-consciousness. We can model these data by extending the two-parameter model (e.g., Equation 5) to include F latent variables, so that the logit is

γi + α′i θr ,    (23)

where γi is the intercept of item i’s logit response plane, α′i is a (row) vector containing item i’s discrimination parameters on the F latent variables (i.e., α′i = [αi1 , . . . , αiF ]), and θr is a vector that contains person r’s location parameters on each of the F dimensions (θr = [θr1 , . . . , θrF ]). As is the case with the unidimensional 2PL model, γi is the interaction of the item’s discrimination and location parameters, γi = −∑_{f=1}^{F} αif δif . To obtain the probability of a response of 1 on item i given a person’s latent locations, we substitute Equation 23 into our logistic distribution function (Equation 3) to obtain the multidimensional compensatory 2PL (M2PL) model:

pi (xi = 1|θr , αi , γi ) = e^{γi + α′i θr} / (1 + e^{γi + α′i θr}).    (24)

Although we limit our presentation to the M2PL model for dichotomous data, it should be noted that there are multidimensional extensions of other dichotomous models as well as some polytomous models. In this latter case, however, at present there are no user-friendly packages to estimate the models’ parameters. The M2PL model is an example of a multidimensional item response theory (MIRT) model. Although the M2PL model is useful with multidimensional situations, it is sometimes convenient to have a single (i.e., scalar) value that represents the best that an item can discriminate across the latent variables. This value is known as the item’s multidimensional discrimination capacity, Ai . Similarly, it is useful to have a single value that represents an item’s “location” in the multidimensional space. Technically, item i’s multidimensional item location, i , is defined as the distance from the origin in the latent

t h e i rt t r a d i t i o n a n d i t s a p p l i c at i o n s

space (i.e., θ) to the point of maximum discrimination in a particular direction from the origin. Further discussion of these concepts is beyond the scope of this chapter, and the reader is referred to Reckase (2009).
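The compensatory nature of Equation 24 is easy to see numerically: a low location on one dimension can be offset by a high location on another. The sketch below uses made-up discrimination values and an intercept; only the functional form follows the chapter.

```python
import math

def m2pl_prob(thetas, alphas, gamma):
    """M2PL model (Eq. 24): logit = gamma + sum_f alpha_f * theta_f."""
    logit = gamma + sum(a * t for a, t in zip(alphas, thetas))
    return 1.0 / (1.0 + math.exp(-logit))

# Hypothetical item discriminating on generalized anxiety and self-consciousness.
alphas, gamma = [1.2, 0.6], -0.3
print(round(m2pl_prob([-1.0, 1.5], alphas, gamma), 2))  # low on dim 1, high on dim 2
print(round(m2pl_prob([0.5, 0.5], alphas, gamma), 2))
```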

mixture item response theory model

An alternative multidimensional perspective can be found in a situation that involves a mixture of latent classes and latent continua. In this case, we conceptualize the latent variable as consisting of latent classes, within which are latent continua. For example, using our social anxiety example, we might hypothesize that social anxiety is best explained by a combination of categorical (mutually exclusive and jointly exhaustive) latent classes and a continuous latent variable rather than, as above, as two continuous factors. As such, the self-consciousness dimension is conceptualized as two discrete classes of homogeneous individuals. One of our classes consists of privately self-conscious persons, whereas the other latent class contains publicly self-conscious individuals. Further, within each of these classes is a generalized anxiety continuum on which we can locate our respondents. With this conceptualization, the observed data consist of one or more latent classes, and within each latent class there is an IRT model. In the simplest case, there is only one latent class, the respondent sample contains only members from this class, and one has model-data fit with a simple IRT model; the respondent sample is also known as the calibration sample, and the process of obtaining estimates of person and item parameters is known as calibration. However, when the observed data consist of members from different latent classes, there is not an IRT model that accurately reflects the data for the entire calibration sample (i.e., there is model-data misfit). Rather, there are different item and person parameters that are conditional on the different subpopulations or latent classes. Mixture distribution models such as those of Rost (1990) as well as Mislevy and Verhelst (1990) have addressed this general idea, and their extensions of the Rasch model have been concerned with solution strategies that differ across subpopulations (for a general framework, see also Kelderman & Macready, 1990). In the simplest case, our mixture model is

pi (xi = 1|θr , αν , δiν , ν) = ∑_ν πν e^{αν (θr − δiν )} / (1 + e^{αν (θr − δiν )}),    (25)

where πν is latent class ν’s proportion and αν and δiν are item i’s discrimination and location, respectively, in latent class ν. Just as is the case with the other IRT models, we are interested in obtaining item and person parameter estimates. However, with our mixture IRT model, our person parameters consist of not only a respondent’s location on the latent continuum (θ) but also the individual’s (latent) class membership. Therefore, each item has a location on each latent class’s continuum. Similarly, each respondent has a location on each latent class’s continuum and membership in only one class. This membership is probabilistic in nature. For example, person A has a probability of 0.8 of belonging to latent class 1 and a probability of 0.2 of belonging to latent class 2. Equation 25 may be extended to allow for varying item discrimination (i.e., αiν ) as well as for applicability to polytomous data.
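A sketch of Equation 25: the marginal probability of a response of 1 is the class-proportion-weighted sum of Rasch-type probabilities, one per latent class. The class proportions, discriminations, and item locations below are hypothetical.

```python
import math

def mixture_prob(theta, class_props, class_alphas, item_deltas):
    """Mixture IRT model (Eq. 25): weight each class's IRT probability by pi_nu."""
    total = 0.0
    for pi_nu, alpha_nu, delta_inu in zip(class_props, class_alphas, item_deltas):
        z = alpha_nu * (theta - delta_inu)
        total += pi_nu * (math.exp(z) / (1.0 + math.exp(z)))
    return total

# Two hypothetical latent classes with different item locations for the same item.
print(round(mixture_prob(0.0, [0.6, 0.4], [1.0, 1.0], [-0.5, 0.8]), 2))
```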

Estimation

Obtaining estimates for an instrument is referred to as calibrating the instrument. Generally speaking, some variant of maximum likelihood estimation (MLE) is the approach most commonly seen. The gist of MLE is to determine the parameter estimate that maximizes the likelihood of the observed responses. To clarify what this means, let us, for the sake of simplicity and without loss of generality, assume dichotomous responses and a single latent variable, θ. Then the probability of a set of responses, x, on an L-item instrument is

p(x|θ, ϑ) = ∏_{i=1}^{L} pi^{xi} (1 − pi )^{1−xi} ,    (26)

where pi is given by, say, the 1PL model; xi is the response to item i; and ϑ contains the item parameters. Once individual r’s responses are observed, this expression becomes a likelihood function and we have

L(xr |θr , ϑ) = ∏_{i=1}^{L} pi^{xri} (1 − pi )^{1−xri} .    (27)

For computational reasons, we take the natural log (ln) of Equation 27 and obtain the log-likelihood function:

ln L(xr |θr , ϑ) = ∑_{i=1}^{L} [xri ln pi + (1 − xri ) ln(1 − pi )].    (28)

The location of the maximum of the likelihood function is the same as that of the maximum of the log-likelihood function.


Figure 8.6 Log likelihood function for x = (1, 1, 0, 0, 0).

To help conceptually understand MLE, assume that we have a client, Kim, whose responses to a five-item depression scale are (1, 1, 0, 0, 0), where 1 = true and 0 = false—that is, Kim responded true to the first two items and false to the last three items. Moreover, assume that we are using the 2PL model for our depression scale and that our item locations are δ1 = −1, δ2 = −0.5, δ3 = 0, δ4 = 0.5, and δ5 = 1, with item discriminations of α1 = 1.5, α2 = 1, α3 = 1.25, α4 = 2, and α5 = 1.1. In this example, we are interested in estimating Kim’s location (θ̂) on our depression (latent) variable. In other words, we are searching for the value of Kim’s location that most likely produces (i.e., maximizes the likelihood of observing) the responses of (1, 1, 0, 0, 0). The (log) likelihood function (lnL) for Kim’s response vector is shown in Figure 8.6. To find the value of Kim’s location, one can envision traversing this function to find the location of its peak. This would be our estimate of Kim’s depression. As can be seen, the function has its peak or maximum at approximately –0.3. In other words, θ̂ = −0.3 is the parameter estimate that maximizes the likelihood function for the observed responses of (1, 1, 0, 0, 0). (Below we return to how we can rescale this θ̂ to aid its interpretation.) The log likelihood function depicted in Figure 8.6 clearly has a nice parabolic shape and a single maximum value. In some circumstances, the (log) likelihood function does not have this shape. For example, if our observed responses consisted of a single value—say, all 1s or all 0s—then the (log) likelihood function would not have a maximum value. Graphically, the log likelihood function would increase and then become asymptotic to 0. Therefore, for certain response vectors that have zero variance, it is not possible to obtain a maximum likelihood estimate. However, there are alternative approaches that can be used. These Bayesian approaches still use the likelihood function but also utilize a prior distribution. The incorporation of the prior distribution with the likelihood function results in a distribution called the posterior distribution. The mode or mean of this posterior distribution is used as the estimate of the person parameter. When the mode is used, then the approach is known as maximum a posteriori (MAP), whereas when the mean is used the method is called expected a posteriori (EAP). To summarize, the primary ways we can estimate a person’s location are by MLE, MAP, or EAP. (There are additional variations of MLE that can also be used.) Although our example demonstrates the principles underlying the MLE of a person’s location, the


same principle can be applied to estimating the item parameters. Further, variants of this basic MLE conceptualization, such as marginal MLE (MMLE), joint MLE (JMLE), or conditional MLE, address the additional complexities that arise when estimating item parameters in the context of unknown person parameters (and vice versa), for example by invoking prior distributions for item parameter estimation or by taking advantage of certain model properties. Greater detail on these methods, as well as on other estimation approaches such as Markov chain Monte Carlo (MCMC) or minimum chi-square, may be found in Baker and Kim (2004) and de Ayala (2009). The nonlinearity of our IRT models requires the use of an iterative estimation approach that successively refines the parameter estimates until an acceptable degree of convergence is attained. As such, model estimation is facilitated by using a computer program. Table 8.3 presents several estimation programs and the models that can be estimated with each of the programs. For example, for dichotomous models, such as the 1PL/Rasch, 2PL, and 3PL models, one could use the BILOG-MG (Zimowski, Muraki, Mislevy, & Bock, 2003) program. With respect to polytomous models, the programs MULTILOG (Thissen, Chen, & Bock, 2003) and PARSCALE (Muraki & Bock, 2003) may be used; each of these programs can also estimate dichotomous models as well as unique models. The programs ConQuest (Wu, Adams, & Wilson, 1997), FACETS (Linacre, 1989, 2009), and WINSTEPS (Linacre, 2001) are Rasch model-focused programs (i.e., item discrimination is assumed to be one) and can be used with dichotomous and polytomous data; ConQuest and FACETS can also be used to estimate the MFRM. Additionally, Mplus


Table 8.3. Nonexhaustive List of Programs for Parameter Estimation

Program | Estimation method | Model(s) estimated
BILOG-MG | Items: MMLE; Persons: EAP, MAP, MLE | 1P, 2P, 3P
ConQuest | Items: MMLE, JMLE | 1PL/Rasch, PC, RS, LLTM, MFRM, MIRT
FACETS/MINIFAC³ | Items and Persons: JMLE | MFRM, 1PL/Rasch, PC, RS
NOHARM³ | Items: Ordinary LS¹ on observed & predicted proportions | 1P, 2P, 3P, MIRT models: 2P, M3P⁴
Mplus | ML, Robust weighted LS¹ | 1P/Rasch, 2P, GR, mixture IRT/Latent class model
MULTILOG | Items: MMLE; Persons: EAP, MAP, MLE | 1P, 2P, 3P, GR, PC, NR, MC
PARSCALE | Items: MMLE; Persons: EAP, MAP, MLE² | 1P, 2P, 3P, GR, GPC, GRS, PC, RS, rater-effect
R (ltm add-on) (R, 2007)³ | Items: MMLE | 1PL/Rasch, 2P, 3P, GR, GPC
R (eRm add-on) (R, 2007)³ | Items and Persons: Conditional MLE | 1PL/Rasch, LLTM, PC, RS
SAS (IRT-FIT; NLMIXED) | Maximize an approximate integrated likelihood | IRT-FIT: 1PL/Rasch, 2PL, 3PL, GR, GPC, GRS, PC, RS, NR; NLMIXED: explanatory models
SYSTAT | Items: MLE; Persons: MLE | 1PL, 2PL
TESTFACT (Wood, Wilson, Gibbons, Schilling, Muraki, & Bock, 2003) | Full-information factor analysis | 2P
WINMIRA (von Davier, 2001) | | 1PL/Rasch, PC, latent class analysis, mixture IRT model
WINSTEPS, BIGSTEPS³, MINISTEP³ | Items and Persons: JMLE | 1PL/Rasch, PC, RS
XCALIBRE (Assessment Systems Corporation, 1997) | Items: MMLE | 2P, 3P

Unless otherwise noted, estimation can be performed on both normal ogive and logistic versions of models.
¹ LS = Least-squares
² Warm’s Weighted MLE (WML)
³ Freeware
⁴ Modified 3P model—user provides pseudo-guessing parameter estimates

(Muthén & Muthén, 2007), SAS (SAS, 2002), and SYSTAT (SYSTAT, 2007) may be used to estimate some of the models discussed. In contrast to

the above, programs such as NOHARM (Fraser & McDonald, 2003), MINIFAC, BIGSTEPS, eRM, or ltm are available for free.
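As a concrete illustration of person-location MLE, the sketch below evaluates the 2PL log likelihood (Equations 27–28) for Kim’s responses over a grid of θ values and reports the grid maximum. This is a simple didactic grid search, not any particular program’s algorithm. The likelihood is quite flat near its peak: with the scaling factor D = 1.7 (see Note 2) the maximum lands near −0.3, consistent with Figure 8.6; with D = 1.0 it shifts to roughly −0.4.

```python
import math

def twopl_prob(theta, alpha, delta, D=1.7):
    """2PL probability of a response of 1; D is the optional scaling factor."""
    return 1.0 / (1.0 + math.exp(-D * alpha * (theta - delta)))

def log_likelihood(theta, responses, alphas, deltas, D=1.7):
    """Log likelihood (Eq. 28) of a response vector at a given theta."""
    ll = 0.0
    for x, a, d in zip(responses, alphas, deltas):
        p = twopl_prob(theta, a, d, D)
        ll += x * math.log(p) + (1 - x) * math.log(1 - p)
    return ll

# Kim's example: responses (1, 1, 0, 0, 0) to five items.
responses = [1, 1, 0, 0, 0]
deltas = [-1.0, -0.5, 0.0, 0.5, 1.0]
alphas = [1.5, 1.0, 1.25, 2.0, 1.1]

grid = [i / 100 for i in range(-400, 401)]   # theta from -4 to 4 in steps of 0.01
theta_hat = max(grid, key=lambda t: log_likelihood(t, responses, alphas, deltas))
print(round(theta_hat, 2))                   # about -0.3 with D = 1.7
```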


Assumptions

All IRT models make assumptions about the nature of the data. Specifically, the IRT models discussed are predicated on a dimensionality assumption. This assumption states that the observations on the manifest variables are a function of one or more continuous latent person variables. Typically, this assumption is referred to as the unidimensionality assumption in the context of all of the models discussed before the M2PL model. A second assumption is the local or conditional independence (CI) assumption. This assumption is the keystone of all estimation algorithms. The CI assumption states that for any group of individuals who are characterized by the same latent location(s), the conditional distributions of the item responses are all independent of each other (Lord & Novick, 1968). Therefore, whatever relationships exist among the items disappear when one conditions on the latent location(s). The third assumption is the functional form assumption. This assumption states that the data follow the function specified by the model. For example, for Equation 8, the functional form states that (1) the probability of a response of 1 increases monotonically when there is an increase in θ, and (2) for infinitely low θs the probability of xi = 1 approaches 0 (see Fig. 8.3). The foregoing assumptions are common to all IRT models. Specific models make additional assumptions and/or demands of the data. For example, the 1PL model assumes that all items discriminate to the same degree, whereas the nominal response model assumes that respondents with infinitely low θs will pick a particular option rather than randomly guessing at an item’s options. Whether a model’s unique assumptions are tenable needs to be examined in the context within which it is used.

Fit

To obtain the above-mentioned advantages of IRT, it is necessary to have acceptable model-data fit. Implied in this statement is that we consider fit to be a matter of degree rather than absolute. As such, one question that needs to be asked is “What is our level of misfit tolerance?” Our simple answer is that, all things being equal, we will tolerate misfit up to the point where misfit interferes with our assessment objective. Moreover, it should be noted that model-data fit is necessary, but not sufficient, for obtaining validity evidence for an instrument.

Some of our model-data fit evidence will be obtained across all items (we call this instrument-level fit), whereas other evidence will be for each item (we call this item-level fit). Examples of instrument-level fit evidence include dimensionality assessment or model-level likelihood ratio statistics, whereas an example of item-level fit would be conditional dependence determination for a pair of items. In some cases, a lack of acceptable fit at the instrument level may be explained by a lack of fit for a subset of items (i.e., if this subset is removed, then one would have instrument-level fit). In other situations, it is possible to have evidence of instrument-level fit but not observe fit for each item. For example, we might have evidence supporting the use of a unidimensional model, but we also identify one item-pair exhibiting conditional dependence. (Whether this item-level misfit can be tolerated depends on the context.) As a result, we need to obtain evidence of fit at both the instrument and item levels. Table 8.4 provides different aspects involved in assessing model-data fit along with example approaches. As can be seen, some aspects involve the tenability of assumptions because violations of assumptions may lead to inaccurate parameter estimates. Other aspects utilize some of the advantages of IRT over CTT discussed above (e.g., invariance) as vehicles to assess fit. Table 8.4 shows that performing a fit analysis will generally involve using multiple programs. For example, assessing a model’s dimensionality assumption would involve a statistical package that can perform a factor or principal component analysis or a specialized program (e.g., NOHARM for nonlinear factor analysis of binary data). However, because dimensionality assessment is only one aspect of a fit analysis, this step would then be followed by an examination of functional form, invariance, and conditional independence. As such, a proper fit analysis will involve not only an IRT calibration program but also a statistical package and/or a specialized program. Moreover, each estimation program provides its own approach to model-data fit analysis. For example, some programs will provide a Likelihood Ratio statistic (G²) for instrument-level fit assessment, whereas others will also provide AIC and BIC statistics, and others will provide INFIT and OUTFIT statistics instead of G², AIC, and BIC. This will also be true at the item level (i.e., the fit statistics in one program may not correspond to those available in another program). At the item level and in general, fit analysis approaches involve either fit statistics and/or


Table 8.4. Fit Analysis Aspects

Aspect | Approach example
Instrument-level information
  Assess dimensionality | Dichotomous data: nonlinear factor analysisᵃ (e.g., NOHARM); Polytomous data: linear factor analysisᵃ or principal component analysisᵃ
  Overall model fit | Fit statistics: Likelihood Ratio Statistic (G²), AIC, BIC, INFIT, OUTFIT
Item-level information
  Functional form | Fit statistics (χ², INFIT, OUTFIT); graphical comparison of predicted with observed (dichotomous data: predicted IRF and observed IRF; polytomous data: predicted ORF and observed ORF)
  Conditional independence | Q3 statistic (Yen, 1984), residual correlation
  Invariance | Divide calibration sample into subgroups and compare the parameter estimates either graphically and/or statistically (we include differential item functioning within this aspect)

ᵃ Exploratory or confirmatory

graphical approaches. Typically, the fit statistics compare what is expected on the basis of the model with what was observed. In this regard, a nonsignificant fit statistic indicates a correspondence between what the model predicts and what is observed (i.e., fit). The graphical approach typically compares the predicted IRF (e.g., Fig. 8.3) to the observed IRF; with polytomous data the comparison would be between the predicted ORFs (e.g., Fig. 8.4) and the observed ORFs. When the predicted response function for an item corresponds with the observed response function, then there is evidence of fit. As such, item-level fit statistics and the graphical approach permit an evaluation of the functional form assumption. We believe that the graphical and statistical approaches are complementary and should always be used in conjunction with one another. Conditional item dependence may be observed when (1) using an IRT model with fewer latent traits than are necessary to correctly model the data (Yen, 1984; Tuerlinckx & De Boeck, 2001); (2) the response to one item increases the probability of a particular response to another item (i.e., item chaining or item interaction; Tuerlinckx & De Boeck, 2001; Yen, 1993); or (3) two or more items are related to one another because of some commonality, such as a group of reading comprehension items sharing a common passage. For an extensive list

of additional situations in which conditional item dependence may occur, see Yen (1993). There are several ways to assess the tenability of the CI assumption. Two approaches are examining the residual matrix after fitting a factor model to the data and Yen’s Q3 statistic (Yen, 1984, 1993). With the residual matrix approach we fit—say, a single-factor model (i.e., for one of our unidimensional models)—to the data and examine the residual matrix.5 If the values in the residual matrix are zero or very close to zero, then one has evidence of conditional item independence. The Q3 statistic is the correlation of the residuals (di , dj ) for an item pair, i and j, after the person location estimates are partialed out. The residuals for items i and j are di = xi − pi (θ̂) and dj = xj − pj (θ̂), respectively. The terms pi (θ̂) and pj (θ̂) are the probability of a correct response on items i and j, respectively, according to an IRT model using the estimated item parameters and the location estimate (θ̂). Because the item responses used in calculating the correlations are also used in estimating the person’s location, Q3 is expected to be negatively biased (Yen, 1984). When conditional independence holds, then the expected value of Q3 is approximately –1/(N – 1), where N is the sample size (Yen, 1993). Critical values for flagging the existence of CID with Q3 do not exist. Therefore,


in practice, a cut-point for Q3 of ±0.2 has been used for identifying items that are exhibiting conditional dependence. Alternatively, one can conduct a simulation study in which conditional independence is true and obtain the optimal cut-points for Q3 for a given situation (e.g., a set of item parameter estimates, sample size, etc.). The invariance aspect of model-data fit assessment capitalizes on one of the IRT advantages over CTT. As mentioned above, item- (and person-) parameter estimates are invariant when one has model-data fit. Thus, if we observe invariance in our item parameter estimates, then we have evidence supporting model-data fit. By “invariance of parameter estimates” we mean that our estimates are the same up to a linear transformation. As an example of invariance, assume that we have five items with which we calibrate, for simplicity and without loss of generality, the 1PL model; assume that we have model-data fit. Our calibration produces item location estimates (δ̂i ) of δ̂1 = −2, δ̂2 = −1.75, δ̂3 = −1.5, δ̂4 = −1.25, and δ̂5 = −1. Another administration of our five items to a different sample produces the estimates δ̂1 = −1, δ̂2 = −0.75, δ̂3 = −0.5, δ̂4 = −0.25, and δ̂5 = 0; again assume that we have model-data fit. As can be seen, our second set of estimates is simply shifted up the continuum by one logit. We can transform the second set of estimates to be on the same metric as the first by simply subtracting 1 (i.e., estimateNEW = estimateOLD – 1) from each estimate. Conversely, we can transform the first set of estimates to be on the same metric as the second by adding 1. Moreover, the correlation between the two sets of estimates is perfect with or without transforming one metric to be the same as the other. The gist of invariance assessment is to divide the calibration sample into two subsamples. The two subsamples can be created by random assignment and/or on the basis of a particular interest (e.g., males and females, high person locations and low person locations, etc.). Each of the subsamples is separately calibrated and the item parameter estimates compared to one another. There are several ways of making these comparisons, such as the correlation of item parameters, calculating the Mantel-Haenszel statistic, and/or the calculation of the root mean square difference (RMSD) between IRFs (or ORFs). For the correlational approach, we simply calculate the correlation for a given parameter across our subsamples. For example, for two subsamples,

S and T, and the 2PL model, we would have two correlations, one for item discrimination and one for item location. Thus, we would calculate the Pearson product-moment correlation between subsample S’s and subsample T’s discrimination estimates as well as between the two subsamples’ sets of item location estimates. A large value for the correlation—say, above 0.9—would provide evidence of invariance across our subsamples. The correlational approach is sufficient for models that contain only a single item parameter (e.g., the 1PL model). However, with models that have multiple item parameters (e.g., the 2PL model), the correlation does not reflect the interaction of the item’s parameters represented in the item’s IRF (or ORF). Therefore, to simultaneously compare the item parameter estimates across subsamples, one needs to, in effect, compare an item’s IRFs (or ORFs) across subsamples. The RMSD can be used for making this comparison. To calculate an item’s RMSD we need to specify a range of interest. Typically, this range is from –3 to 3. We then subdivide this range into W equally spaced θs. For example, if our range is –3 to 3, we can divide it into 121 equally spaced θs using logit increments of 0.05 (i.e., –3, –2.95, –2.9, …, 3); the smaller the increment, the greater the index’s accuracy as a measure of the difference between the two IRFs. Then, the RMSD for item i is given by

RMSDi = √( ∑_{w=1}^{W} [piS (θw ) − piT (θw )]² / W ),    (29)

where piS (θw ) and piT (θw ) are calculated using the item parameter estimates from subsamples S and T, respectively; θw is the w th θ value in the range of interest (e.g., –3, –2.95, –2.9, …, 3); and W is the number of equally spaced θs in the range of interest (e.g., W = 121). Conceptually, RMSDi is the average absolute distance between the two IRFs. When RMSDi equals 0, then there is no difference between the two IRFs. However, one should expect that, even with perfect model-data fit, estimation error will be reflected in an item’s non-zero, albeit small, RMSDi value. From this perspective, a small RMSDi reflects two IRFs that may be considered sufficiently similar to not be reason for concern (i.e., subsample S’s IRF would fall within the confidence band for subsample T’s IRF).
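A sketch of Equation 29: given two sets of 2PL estimates for the same item (one per subsample), compute RMSDi over an evenly spaced θ grid. The subsample estimates below are hypothetical.

```python
import math

def twopl(theta, alpha, delta):
    """2PL probability of a response of 1."""
    return 1.0 / (1.0 + math.exp(-alpha * (theta - delta)))

def rmsd_item(params_s, params_t, lo=-3.0, hi=3.0, step=0.05):
    """RMSD between two IRFs for one item (Eq. 29) over [lo, hi]."""
    n = int(round((hi - lo) / step)) + 1            # e.g., 121 points for -3..3 by .05
    grid = [lo + w * step for w in range(n)]
    sq_diffs = [(twopl(t, *params_s) - twopl(t, *params_t)) ** 2 for t in grid]
    return math.sqrt(sum(sq_diffs) / n)

# Hypothetical item calibrated in subsamples S and T: (alpha, delta) pairs.
print(round(rmsd_item((1.2, 0.10), (1.1, 0.25)), 3))
```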


In those cases where one observes a large RMSDi , there may be various reasons for its magnitude. For example, the item may be poorly written and thereby interpreted differently across the subsamples, or the model may have insufficient item parameters to accurately describe the item. Depending on the diagnosis of the cause(s) of the magnitude of RMSDi , one may decide to omit the item from the instrument and retain only those items with small RMSDi values. Unlike the correlational approach, which yields a correlation across items, with RMSDi we have one value for each item. One approach to obtaining an instrument-level invariance assessment is to calculate the difference between the subsamples’ respective total characteristic functions (TCFs). The TCF is based on the expected trait score, Eτ (τ is the Greek letter tau):

Eτ = ∑_{i=1}^{L} pi (θ),    (30)

where θ can be a value from a range of interest or a person’s estimated location, L is the instrument’s length, and pi is given by one of our dichotomous models. (The expected trait scores for polytomous models may be found in de Ayala [2009].) Combining Equation 30 with the idea symbolized by Equation 29, we have that our instrument-level invariance assessment, RMSDTCF , is given by

RMSDTCF = √( ∑_{w=1}^{W} [∑_{i=1}^{L} piS (θw ) − ∑_{i=1}^{L} piT (θw )]² / W ),    (31)

where θw and W are defined as above, the first term in the numerator is the expected trait score for subsample S (i.e., EτS = ∑ piS (θw )), and the second term is the expected trait score for subsample T (i.e., EτT = ∑ piT (θw )). A value of RMSDTCF close to or equal to zero would provide evidence of invariance. Both RMSDTCF and RMSDi should be used in conjunction with plots of the TCFs and IRFs to determine whether the magnitude of these statistics is representative of a systematic difference across the continuum or reflects a difference for a particular portion of the continuum. Although RMSDTCF and RMSDi allow us to simultaneously compare an item’s parameters across subsamples, there is a price to pay for this convenience. Specifically, prior to their use, we need to

align the subsamples’ metrics to one another; otherwise the magnitude of RMSDTCF and RMSDi may reflect, in part, the differences between the two metrics. The alignment of metrics (also known as linking) is discussed in the next section. As mentioned in Table 8.4, we include differential item functioning (DIF) in the invariance aspect. Differential item functioning is defined as an item that displays different statistical properties for different manifest groups after the groups have been matched on a proficiency measure (Angoff, 1993). In the DIF nomenclature, one of the manifest groups is known as the focal group, whereas the other is called the reference group. The focal group (e.g., females) is the one being investigated to see if it is disadvantaged by the item. The reference group is the comparison group (e.g., males). Graphically, DIF can be represented as the difference between two IRFs: one IRF is based on the item’s parameter estimate(s) from the focal group and the other IRF is based on the item’s parameter estimate(s) from the reference group. If an item is not exhibiting DIF, then the groups’ IRFs would be superimposed on one another (i.e., within sampling error) after we link the two groups’ metrics. However, if the item is exhibiting DIF, then the two IRFs are not superimposed after we link the two groups’ metrics. Therefore, the existence of DIF means that the DIF item’s parameter estimates are not invariant across the manifest groups (i.e., item-level misfit). Although defined in terms of proficiency assessment, DIF is potentially applicable to nonproficiency assessments. As an example, we return to our social anxiety example. As part of our fit analysis, we perform separate calibrations for males and females. If we find that females respond differently to an item than males, even after we account for their respective locations on the social anxiety continuum, then our item is exhibiting DIF. It may be that the item’s text elicits a different interpretation from female respondents than from male respondents (e.g., the text is sexist). There are a number of approaches for assessing DIF. Two of these approaches are the Mantel-Haenszel statistic (MH) and the use of logistic regression (LR). The MH statistic allows us to determine whether the responses to an item are independent of group membership after conditioning on the observed scores; MH is evaluated against the standard χ² critical values with degrees of freedom equal to 1. LR is a technique for making predictions about a binary variable from one or more quantitative and/or qualitative variables. In the current context,


the binary variable is the response to an item, and the predictors might be gender and/or some measure of the construct; Zumbo (1999) as well as French and Miller (1996) discuss the technique’s application to ordinal responses. As such, we logistically regress the responses to an item on the construct measure and/or on a manifest group indicator (e.g., gender). Conceptually, the application of LR to DIF analysis requires performing a logistic regression analysis for an item using members of the reference group and a second analysis for the same item with members of the focal group. The group results are compared using the G² statistic. For both the MH and LR approaches, a nonsignificant test statistic indicates that DIF was not detected. See Camilli and Shepard (1994) for more information on DIF analyses.

Metric Transformations and Linking

Examination of our models shows that there is an indeterminacy in our parameter estimates. As an example, consider the 2PL model (Equation 8). We can add or subtract a constant from θ and δi and not change the logistic deviate. As a result, the IRF is unaffected although its location moves up or down the continuum. Stated another way, the origin of the metric is arbitrary. Similarly, multiplying θ and δi by a constant and dividing αi by the same constant would leave αi (θr − δi ) unchanged. This implies that the unit for measuring θ and δi is also arbitrary. This indeterminacy is addressed in different ways by different programs. Thus, the program’s user does not have to be concerned about this matter per se. However, we mention it because this indeterminacy is what allows us to transform our metric to have certain characteristics that aid interpretation of the scale or to align two metrics with one another. The need to align metrics would occur if we administer an instrument to two samples or administer alternate forms of an instrument to a sample or to different samples. We can rescale our parameters or their estimates by using the metric transformation coefficients ζ and κ. The values of ζ and κ may be given for a particular scale, such as the T-score scale (i.e., ζ = 10, κ = 50), or they may be calculated to transform one metric to be the same as another metric (e.g., for use with calculating RMSDTCF and RMSDi ). One simple approach for calculating ζ and κ uses the means and standard deviations of the item locations. In this approach, the transformation coefficient ζ is obtained by taking the ratio of the two metrics’ item location standard deviations,

ζ = sδ∗ / sδ ,    (32)

where sδ∗ is the standard deviation of the item locations on the target metric and sδ is the standard deviation of the item locations on the initial metric. (The initial metric is the metric that is transformed to align with the target metric.) Once ζ is determined, the other transformation coefficient, κ, is obtained by

κ = δ̄∗ − ζ δ̄,    (33)

where δ̄∗ is the mean of the item locations on the target metric and δ̄ is the mean of the item locations on the initial metric. An alternative approach for determining ζ and κ is known as the total (or test) characteristic curve method (Stocking & Lord, 1983). In this method, the total characteristic functions are obtained for the initial and target metrics. The values of ζ and κ that align the TCF on the initial metric with the TCF on the target metric are determined by minimizing the differences between the two TCFs. Typically, this is done by using a program such as EQUATE (Baker, 1993b), ST (Hanson & Zeng, 2004), or POLYST (Kim & Kolen, 2003). Once the values of ζ and κ are known, then we can use them to transform our item and person parameters. Each item’s discrimination parameter (or its estimate) is transformed by

αi∗ = αi / ζ,    (34)

where αi is item i’s discrimination on the initial metric and αi∗ is the transformed discrimination value on the target metric. In terms of a slope–intercept parameterization, the item-wise transformation would be

γi∗ = γi − αi κ / ζ.    (35)

To transform our item locations, we use the standard linear transformation of

ξ∗ = ζ(ξ) + κ,    (36)

where ξ represents the parameter on the initial metric to be transformed (e.g., δi or its estimate) and ξ ∗ represents the same parameter on the target metric. For example, to transform our item locations, δi , to be on another metric we would use δi∗ = ζ (δi ) + κ. (Sometimes the application of Equation 36 with ξ = θ and ξ ∗ = θ ∗ is known as equating.) Equations 32 through 36 are used to transform one (initial) metric to another (target) metric so that we can subsequently use our estimates interchangeably across samples and/or alternate forms. Another transformation that is sometimes useful is


the conversion of our θ scale to one that may be intrinsically useful. As mentioned above, we can apply Equation 36 with ξ = θ to transform our θ scale to, for example, a T-score scale. Alternatively, we may want to transform our θ scale to be on an observed or summed score scale. For example, assume that we have a 10-item social anxiety scale that uses a true/false response format. When we convey a person’s location on the social anxiety continuum, it may be more useful to the respondent to know that he or she has a score of 4 on an 11-point scale (i.e., 0, 1, …, 10) rather than a –1 (on an infinite θ scale). We can perform this transformation by using the expected trait score, Eτ (Equation 30), with L = 10 and θ (actually our estimate of θ) equal to –1, and calculating each item’s probability according to one of our dichotomous models (e.g., the 2PL model). The sum of these 10 probabilities would be our expected trait score for a person located at –1. Although our expected trait score is on the summed score scale, it is simply a conversion of the IRT metric and only has the appearance of a summed score. Therefore, we still have all of the benefits and advantages of IRT (e.g., neither θ nor Eτ depends on the distribution of persons, invariance, etc.). Moreover, because for a given calibration each θ will yield the same Eτ, we can create a concordance table (i.e., a table that shows the corresponding Eτ for each θ) or graph the total characteristic curve to facilitate the conversion of a θ to its Eτ; this curve has an ogival pattern (e.g., Fig. 8.3) with Eτ on the ordinate.
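A sketch of the expected trait score conversion (Equation 30) for a hypothetical 10-item true/false scale under the 2PL model; the item parameters are invented for illustration, so the particular θ-to-Eτ concordance it prints is not the chapter’s.

```python
import math

def twopl(theta, alpha, delta):
    """2PL probability of a response of 1."""
    return 1.0 / (1.0 + math.exp(-alpha * (theta - delta)))

def expected_trait_score(theta, items):
    """Expected trait score (Eq. 30): sum of item probabilities at theta."""
    return sum(twopl(theta, a, d) for a, d in items)

# Hypothetical 10-item scale: (alpha, delta) per item.
items = [(1.0, d / 2.0) for d in range(-5, 5)]   # deltas -2.5, -2.0, ..., 2.0

# A small concordance table: theta -> expected score on the 0-10 summed-score scale.
for theta in (-2, -1, 0, 1, 2):
    print(theta, round(expected_trait_score(theta, items), 1))
```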

Calibration Sample Size

It cannot be stressed enough that sample size guidelines should not be interpreted as hard and fast rules. Specific situations may require more or fewer persons than other situations, given the (mis)match between the instrument’s range of item locations and the sample’s range of person locations, response data characteristics (e.g., missing data; for polytomous data, the distribution of responses across categories), and the purpose of the instrument’s administration (e.g., establishing norms). Some factors that also need to be considered in determining sample size are the calibration model, the estimation procedure and its possible interaction with instrument characteristics (e.g., instrument length), the desired degree of estimation accuracy of items and/or persons, model-data fit diagnostics (e.g., ancillary technique sample size requirements, fit statistics’ power), and the use of prior

distribution(s); in a case using a prior distribution, the match with the population distribution also requires consideration. As an example of the interaction of some of these factors, consider JMLE and MMLE. With JMLE our instruments should consist of at least 20 to 25 items to minimize estimation bias, whereas with MMLE the length is not important for estimation (all things being equal). However, the instrument’s length does have implications for the veracity of chi-squared fit statistics. That is, with MMLE we could calibrate an instrument that had fewer than 20 items, but we should not use the item-level chi-squared fit statistics as part of our model-fit analysis. Additionally, with small calibration samples, our plots of predicted and observed IRFs (or ORFs) would be less useful and the power of our fit statistics would be adversely affected. Although some have suggested that “useful information can be obtained from samples as small as 100” (Wright, 1977, p. 224), typically sample sizes are substantially larger. (Wright’s statement was with respect to the Rasch model, and it was prefaced by a statement of a desirable sample size of 500 or more.) Another caveat about smaller calibration samples (i.e., 100 or fewer) is that with a smaller sample size there is a higher probability, all other things being equal, that everyone will provide the same response (e.g., a response of 0) to one or more items. In these cases one cannot estimate the item parameter(s). The same problem may occur with “short” instruments. That is, with short instruments there is an increased chance of a person providing the same response for all items. As such, this individual’s location could not be estimated using MLE. Assuming MMLE, the use of a prior distribution for estimating item discrimination, and favorable conditions (e.g., θ/prior distribution match, etc.), it appears that a calibration sample size of at least 500 persons tends to produce reasonably accurate item parameter estimates. If we also assume that the respondents distribute themselves across the response categories in reasonable numbers, then this guideline would also be applicable for the GPC and GRS models. With the NR model, and making the same assumptions as above, we suggest that the minimum sample size be 600. Of course, having more respondents than these minima is preferable. However, for all our models we anticipate that there is a sample size—for example, 1200 to 1500 or so—at which one reaches, practically speaking, a point of diminishing returns in terms of improvement in estimation accuracy for a given model. (These maxima


should not be interpreted as upper bounds.) It must be noted that less favorable situations may necessitate larger sample sizes. To reiterate, the caveats and considerations previously mentioned, as well as not interpreting sample size guidelines as hard and fast rules, still apply to our recommendation. With models that do not require estimation of item discrimination, and given the foregoing caveats and considerations from the beginning of this section, a rough sample size guideline is that a calibration sample should have at least a couple hundred respondents. This should not be interpreted as a minimum but, rather, as a reasonable compromise. Certain applications may require more respondents, whereas in others a smaller sample may suffice.

Summary

Item response theory is a latent variable modeling approach that is focused on item responses and their relationship to one or more latent variables that are our constructs of interest. We may apply IRT in two-facet situations (i.e., persons by items) or in cases with more than two facets, such as persons by items by judges data (i.e., three facets). Item response theory models may be used to fulfill either a descriptive or an explanatory (predictive) objective. Further, IRT models may be used with discrete dichotomous and polytomous response data to obtain parameter estimates for items and people that are on a continuous scale. In contrast to Classical Test Theory, IRT offers a number of benefits and advantages, such as person and item parameter invariance and the capacity to design instruments that have specific psychometric properties, such as equiprecise measurement. However, realizing these benefits and advantages requires having model-data fit. The assessment of model-data fit involves determining the tenability of the model’s assumptions as well as the presence of IRT properties (e.g., invariance). Obtaining the item and person parameter estimates is typically accomplished via MLE (conditional or joint) or by MMLE. With MMLE the item parameters are estimated separately from the person parameters. Because this separation allows one to first determine whether there is model-data fit for the instrument before estimating persons, MMLE is a more computationally efficient approach than JMLE. Estimating person parameters can be accomplished using MLE or by one of the Bayesian approaches of expected a posteriori and maximum a posteriori. These Bayesian approaches

allow estimating a person’s location in cases where MLE fails. Once we have estimated our parameters, we have obtained a metric for our latent continuum. This is a relative, not an absolute, metric. As a result, our estimates can be transformed to (1) align the metrics from different samples and/or alternate forms of our instrument, (2) facilitate interpretations, and/or (3) make comparisons (e.g., across groups, instruments, and/or longitudinally). It should be noted that the successful application of IRT does not preclude the necessity of obtaining validity evidence for an instrument.

Future Directions

In the following, I present some areas that either are seeing or should see greater interest, as well as areas in need of greater research. To expand the use of IRT in other fields (e.g., the industrial-organizational and clinical fields), future research needs to develop estimation procedures for small-sample calibration. At present, IRT is constrained to large samples, and this presents an impediment to applying it to situations with 30 or 50 respondents. In this regard, the use of a Bayesian perspective may be beneficial for obtaining estimates. Separate from small sample estimation, but still within a Bayesian framework, we have our second area of interest—Markov chain Monte Carlo (MCMC) techniques for estimation. Markov chain Monte Carlo’s primary advantages are its flexibility and adaptability. In this regard, it allows one to experiment with new models “relatively” easily because one does not have to develop and validate complicated estimation algorithms to estimate these new (complicated and/or intricate) models. Moreover, MCMC would be particularly attractive when these models are of specific interest in a given context and, as a result, one could not justify a large time investment with limited utility. (It should be noted that MCMC’s proper use requires a sophisticated understanding of the particular MCMC algorithm being used.) Another area within the Bayesian perspective is the use and advancement of posterior predictive checking for examining IRT model-data fit. The above IRT models are applicable to single-level data collection schemes. However, because these models are examples of the generalized linear model, it is possible to extend our models to multilevel data collection. These multilevel situations (also known as hierarchical) arise in, for example,


the three-facet reference frames (e.g., judges rating people’s responses). Although these can be modeled using the MFRM, if we use a multilevel IRT model, then we do not have to assume conditional independence among our judges. Other multilevel cases would involve items that are chained together or that share a common passage, cross-national assessments, longitudinal data, and so on. In short, multilevel IRT would have applicability with clustered/nested data and assessment contexts in which conditional dependence is present. In the latter, one could also use a polytomous model or a testlet model (Wainer, Bradlow, & Wang, 2007) as an alternative to a multilevel IRT model. The last areas that we present are cognitive diagnostic modeling and automated item generation. In cognitive diagnostic modeling, we attempt to classify individuals in terms of their mastery of skills. These skills can be those that are used to correctly answer items in a proficiency assessment situation or coping skills for dealing with, for example, social anxiety. By automated item generation, we mean the creation of items in real time and as needed according to a cognitive model or according to a cognitive diagnostic model. This marriage would allow for a dynamic assessment process, more completely incorporate cognitive psychology into assessment, and more fully exploit the computerized administration platform. Moreover, by incorporating cognitive diagnostic capabilities into the item generation, we would have an automated system that would provide each respondent with a profile of information, such as his or her location(s), which skills are mastered, and which skills need to be mastered. Depending on how the system is implemented, some of the above models (e.g., LLTM, 2PL, M2PL, mixture models) or Adams and Wilson’s random coefficient multinomial logit model (or its multidimensional variant [Adams, Wilson, & Wang, 1997]) are applicable.

Notes

1. Probabilities may be expressed in terms of the odds of an event occurring and vice versa. The probability of event x occurring expressed in terms of the odds of x is

\[ p(x) = \frac{\mathrm{odds}(x)}{1 + \mathrm{odds}(x)}. \tag{37} \]

Expressing the odds in terms of probabilities gives us

\[ \mathrm{odds}(x) = \frac{p(x)}{1 - p(x)}. \tag{38} \]

Substituting Equation 3 for p(x) in Equation 38 and simplifying leads to

\[ \mathrm{odds}(x) = e^{\alpha_i(\theta_r - \delta_i)}. \tag{39} \]

Because odds have a range from 0 to ∞, with a value of 1 reflecting no difference between the event occurring and not occurring, we have an asymmetry in the odds scale. As a result, the odds of an event are sometimes transformed to the (natural) logarithmic scale (i.e., ln[odds(x)]). On the log scale, a value of 0 reflects no difference between the event occurring and not occurring, positive values indicate that the odds of success (e.g., x = 1) are greater than those of failure, and negative values indicate that the odds of failure (e.g., x = 0) are greater than those of success. This transformation gives the log odds or the logit of the event occurring. Therefore, taking the natural log of both sides of Equation 39 gives

\[ \ln[\mathrm{odds}(x)] = \ln\!\left(\frac{p(x)}{1 - p(x)}\right). \tag{40} \]

2. In a proficiency assessment context, individuals at the lower end of the latent continuum may be expected to have a non-zero probability of providing a response of 0. For example, examinees that have low mathematics proficiency may be expected to incorrectly respond to, say, a topology question on a mathematics examination. If this mathematics examination uses a multiple-choice item format, then some of these low-proficiency individuals may select the correct option simply by guessing. In these cases the item's response function has a lower asymptote that is not asymptotic with 0 but with some non-zero value. The three-parameter model addresses this non-zero lower asymptote.

The three-parameter model can be viewed as an extension of the two-parameter model. To explain this, we need to be concerned with two cases. The first case is "What is the probability of a response of 1 on an item when an individual responds consistent with his or her location on θ?" In this case, our answer is that the probability of the response of 1 is modeled by the 2PL model. The second case to consider is "What should be the probability of a response of 1 on an item due to chance alone?" To answer this question, let us symbolize this probability as χi. Therefore, when a person can be successful on item i on the basis of chance alone (i.e., irrespective of the person's location), then the corresponding probability is given by χi(1 − pi). In this case, as θ becomes progressively more negative, then pi approaches 0 and χi(1 − pi) simplifies to χi. Stated another way, the probability of a response of 1 for an individual with an infinitely low location is χi. As such, χi represents the IRF's lower bound or asymptote. Putting these two (mutually exclusive) cases together, we can obtain the probability of a response of 1 from

\[ p_i^{*} = p_i + \chi_i(1 - p_i), \tag{41} \]

where pi is given by the two-parameter model. Equation 41 may be rearranged to be

\[ p_i^{*} = \chi_i + (1 - \chi_i)\,p_i. \tag{42} \]

By substitution of the 2PL model for pi in Equation 42, we obtain the three-parameter logistic (3PL) model

\[ p_i^{*}(x_i = 1 \mid \theta_r, \alpha_i, \delta_i, \chi_i) = \chi_i + (1 - \chi_i)\,\frac{e^{\alpha_i(\theta_r - \delta_i)}}{1 + e^{\alpha_i(\theta_r - \delta_i)}}, \tag{43} \]

where χi is item i's pseudo-guessing or pseudo-chance parameter and is the probability of a response of 1 when θ approaches −∞; δi and αi are defined as above. Therefore, with the 3PL model, there are three parameters characterizing item i (i.e., αi, δi, χi). Because there is a normal ogive version of the three-parameter model, Equation 43 is sometimes presented incorporating the scaling factor D.
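As a brief illustration of these notes, the following Python sketch (not part of the original chapter; the parameter values are arbitrary) converts a probability to odds and to a logit and evaluates the 3PL response function of Equation 43.

```python
import math

def odds(p):
    """Odds of an event from its probability (cf. Equation 38)."""
    return p / (1.0 - p)

def logit(p):
    """Log odds (logit) of an event (cf. Equation 40)."""
    return math.log(odds(p))

def p_3pl(theta, alpha, delta, chi):
    """Three-parameter logistic IRF (Equation 43):
    chi + (1 - chi) times the 2PL probability."""
    p_2pl = math.exp(alpha * (theta - delta)) / (1.0 + math.exp(alpha * (theta - delta)))
    return chi + (1.0 - chi) * p_2pl

# A probability of 0.8 corresponds to odds of 4 and a logit of about 1.386.
print(odds(0.8), logit(0.8))

# With a pseudo-guessing parameter of 0.2, even a very low-proficiency respondent
# retains roughly a 0.2 chance of a response of 1 (the lower asymptote).
print(p_3pl(theta=-4.0, alpha=1.2, delta=0.5, chi=0.2))
print(p_3pl(theta=2.0, alpha=1.2, delta=0.5, chi=0.2))
```

The lower asymptote printed for the very negative θ value is essentially χ, which is exactly the feature the 3PL adds to the 2PL.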


Author Note
RJ de Ayala, Quantitative, Qualitative, and Psychometric Methods, Department of Educational Psychology, Teachers College Hall 114, University of Nebraska - Lincoln, Lincoln, NE 68588, USA.

References

Adams, R.J., Wilson, M., & Wang, W. (1997). The multidimensional random coefficients multinomial logit model. Applied Psychological Measurement, 21, 1–23.
Andrich, D. (1978a). Relationships between the Thurstone and Rasch approaches to item scaling. Applied Psychological Measurement, 2, 449–460.
Andrich, D. (1978b). A rating formulation for ordered response categories. Psychometrika, 43, 561–573.
Andrich, D. (1978c). Application of a psychometric rating model to ordered categories which are scored with successive integers. Applied Psychological Measurement, 2, 581–594.
Angoff, W.H. (1993). Perspectives on differential item functioning methodology. In P.W. Holland & H. Wainer (Eds.), Differential Item Functioning (pp. 3–23). Hillsdale, NJ: Lawrence Erlbaum.
Assessment Systems Corporation. (1997). XCALIBRE. St. Paul, MN: Author.
Baker, F.B. (1993a). Sensitivity of the linear logistic test model to misspecification of the weight matrix. Applied Psychological Measurement, 17, 201–210.
Baker, F.B. (1993b). EQUATE 2.0: A computer program for the characteristic curve method of IRT equating. Applied Psychological Measurement, 17, 20.
Baker, F.B., & Kim, S.-H. (2004). Item response theory: Parameter estimation techniques (2nd edition). New York: Marcel Dekker.
Bock, R.D. (1972). Estimating item parameters and latent ability when responses are scored in two or more nominal categories. Psychometrika, 37, 29–51.
Camilli, G., & Shepard, L.A. (1994). Methods for identifying biased test items. Thousand Oaks, CA: Sage.
de Ayala, R.J. (2009). The theory and practice of item response theory. New York: The Guilford Press.
Drasgow, F., & Olson-Buchanan, J.B. (Eds.) (1999). Innovations in Computerized Assessment. Hillsdale, NJ: Lawrence Erlbaum.
Embretson, S.E. (1984). A general latent trait model for response processes. Psychometrika, 49, 175–186.
Embretson, S.E. (1985). Test Design. New York: Academic Press.
Embretson, S.E. (1996). Cognitive design principles and the successful performer: A study on spatial ability. Journal of Educational Measurement, 33, 29–40.
Fischer, G. (1973). The linear logistic test model as an instrument in educational research. Acta Psychologica, 37, 359–374.
Fraser, C., & McDonald, R.P. (2003). NOHARM: A Windows program for fitting both unidimensional and multidimensional normal ogive models of latent trait theory [Computer Program]. Niagara College, Welland, Ontario, Canada. Available at people.niagaracollege.ca/cfraser/download. Last accessed July 16, 2012.
Frederiksen, N., Mislevy, R.J., & Bejar, I.I. (1993). Test Theory for a New Generation of Tests. Hillsdale, NJ: Lawrence Erlbaum.
French, A.W., & Miller, T.R. (1996). Logistic regression and its use in detecting differential item functioning in polytomous items. Journal of Educational Measurement, 33, 315–332.
Haertel, E.H. (2006). Reliability. In R. Brennan (Ed.), Educational Measurement (4th ed., pp. 65–110). Westport, CT: American Council on Education/Praeger.
Hanson, B., & Zeng, L. (2004). ST: A computer program for IRT scale transformation [Computer Program]. Iowa City, IA: ACT. Program available at www.education.uiowa.edu/centers/casma/computer-programs.aspx. Last accessed July 16, 2012.
Irvine, S.H., & Kyllonen, P.C. (2002). Item Generation for Test Development. Hillsdale, NJ: Lawrence Erlbaum.
Kelderman, H., & Macready, G.B. (1990). The use of loglinear models for assessing differential item functioning across manifest and latent examinee groups. Journal of Educational Measurement, 27, 307–328.
Kim, S., & Kolen, M.J. (2003). POLYST: A computer program for polytomous IRT scale transformation [Computer software]. Iowa City, IA: University of Iowa. Program available at www.education.uiowa.edu/centers/casma/computerprograms.aspx. Last accessed July 16, 2012.
Linacre, J.M. (1988). Facets: A computer program for many-facet Rasch measurement. Chicago, IL: Winsteps.com.
Linacre, J.M. (1989). Many-facet Rasch measurement. Chicago, IL: MESA Press.
Linacre, J.M. (2001). A user's guide to WINSTEPS/MINISTEPS. Chicago, IL: Winsteps.com.
Linacre, J.M. (2009). A user's guide to FACETS: Rasch-Model computer programs. Chicago, IL: Winsteps.com.
Lord, F.M. (1952). A theory of test scores. Psychometric Monograph, No. 7.
Lord, F.M., & Novick, M.R. (1968). Statistical theories of mental test scores. Reading, MA: Addison-Wesley Publishing.
Masters, G.N. (1982). A Rasch model for partial credit scoring. Psychometrika, 47, 149–174.
Mislevy, R.J., & Verhelst, N. (1990). Modeling item responses when different subjects employ different solution strategies. Psychometrika, 55, 195–216.
Muraki, E. (1990). Fitting a polytomous item response model to Likert-type data. Applied Psychological Measurement, 14, 59–71.
Muraki, E. (1992). A generalized partial credit model: Application of an EM algorithm. Applied Psychological Measurement, 16, 159–176.
Muraki, E., & Bock, R.D. (2003). PARSCALE (Version 4.1) [Computer Program]. Mooresville, IN: Scientific Software.
Muthén, L.K., & Muthén, B.O. (2007). Mplus User's Guide. Los Angeles, CA: Muthén & Muthén.
Parshall, C.G., Spray, J.A., Kalohn, J.C., & Davey, T. (2002). Practical Considerations in Computerized-Based Testing. New York: Springer.
Rasch, G. (1961). On general laws and the meaning of measurement in psychology. Proceedings of the Fourth Berkeley Symposium on Mathematical Statistics and Probability (pp. 321–333). Berkeley, CA: University of California Press.
Rasch, G. (1980). Probabilistic models for some intelligence and attainment tests. Chicago, IL: University of Chicago Press. (Original work published 1960.)
R Development Core Team. (2007). R: A language and environment for statistical computing [Computer software]. Vienna, Austria: R Foundation for Statistical Computing. Program available at http://www.r-project.org. Last accessed July 16, 2012.
Reckase, M.D. (1989, Fall). Adaptive testing: The evolution of a good idea. Educational Measurement: Issues and Practice, 11–15.
Reckase, M.D. (2009). Multidimensional Item Response Theory. New York: Springer.
Rost, J. (1990). A logistic mixture distribution model for polychotomous item responses. British Journal of Mathematical and Statistical Psychology, 44, 75–92.
Samejima, F. (1969). Estimation of latent ability using a response pattern of graded scores. Psychometrika Monograph Supplement, No. 17.
Sands, W.A., Waters, B.K., & McBride, J.R. (Eds.) (1997). Computerized Adaptive Testing: From Inquiry to Operation. Washington, DC: American Psychological Association.
SAS Institute (2002). SAS for Windows: Version 9.1. Cary, NC: Author.
Smith, E.V., & Kulikowich, J.M. (2004). An application of generalizability theory and many-facet Rasch measurement using a complex problem-solving skills assessment. Educational and Psychological Measurement, 64, 617–639.
Stocking, M.L., & Lord, F.M. (1983). Developing a common metric in Item Response Theory. Applied Psychological Measurement, 7, 201–210.
SYSTAT Software Incorporated (2007). SYSTAT for Windows: Version 12. San Jose, CA: Author.
Thissen, D.J., Chen, W.-H., & Bock, R.D. (2003). MULTILOG (Version 7.0) [Computer Program]. Mooresville, IN: Scientific Software.
Thurstone, L.L. (1925). A method of scaling psychological and educational tests. The Journal of Educational Psychology, 16, 433–451.
Tucker, L.R. (1946). Maximum validity of a test with equivalent items. Psychometrika, 11, 1–13.
Tuerlinckx, F., & De Boeck, P. (2001). The effect of ignoring item interactions on the estimated discrimination parameters in item response theory. Psychological Methods, 6, 181–195.
van der Linden, W.J., & Glas, C.A.W. (Eds.) (2010). Elements of Adaptive Testing. New York: Springer.
Von Davier, M. (2001). WINMIRA 2001 [Computer Program]. Kiel, Germany: Institute für die Pädagogik der Naturwissenschaften an der Universität Kiel. Available at www.winmira.von-davier.de/ Last accessed July 16, 2012.
Wainer, H., Bradlow, E.T., & Wang, X. (2007). Testlet Response Theory. New York: Cambridge University Press.
Wilson, M., & De Boeck, P. (2004). Descriptive and explanatory item response models. In De Boeck, P., & Wilson, M. (Eds.), Explanatory Item Response Models: A generalized linear and nonlinear approach (pp. 43–74). New York: Springer-Verlag.
Wright, B.D. (1977). Misunderstanding the Rasch Model. Journal of Educational Measurement, 14, 219–226.
Wu, M.L., Adams, R.J., & Wilson, M.R. (1997). ConQuest: Multi-Aspect Test Software [Computer Program]. Camberwell, Australia: Australian Council for Educational Research.
Yen, W.M. (1984). Effects of local item dependence on the fit and equating performance of the three-parameter logistic model. Applied Psychological Measurement, 8, 125–145.
Yen, W.M. (1993). Scaling Performance Assessments: Strategies for managing local item dependence. Journal of Educational Measurement, 30, 187–213.
Zimowski, M., Muraki, E., Mislevy, R.J., & Bock, R.D. (2003). BILOG-MG (Version 3.0) [Computer Program]. Mooresville, IN: Scientific Software.
Zumbo, B.D. (1999). A Handbook On The Theory And Methods Of Differential Item Functioning (DIF): Logistic Regression Modeling As A Unitary Framework For Binary And Likert-Type (Ordinal) Item Scores. Ottawa, ON: Directorate of Human Resources Research and Evaluation, Department of National Defense.


CHAPTER 9

Survey Design and Measure Development

Paul E. Spector

Abstract

Fields that study psychological and social phenomena rely heavily on survey methods for data collection. Rigorous methods have been devised for the development of instruments suitable for survey research. Instruments developed with such methods can have adequate reliability and evidence for construct validity. These methods involve a step-by-step process of defining a construct, creating items, administering those items, conducting item analysis and other analyses to choose an internally consistent set of items, and collecting evidence for validity. Drawing inferences from survey studies requires consideration of issues concerning research design (e.g., cross-sectional vs. longitudinal), the nature of samples, and the likelihood of biases that might contaminate measurement. Studies done cross-nationally to draw inferences about country differences raise concerns about the measurement equivalence of measures (whether item intercorrelations are homogeneous across samples) and the equivalence of samples being compared.

Key Words: Construct validity, measurement bias, institutional review board, measurement equivalence, method variance, reliability, sample equivalence, sampling

Fields that study psychological and social phenomena rely heavily on humans as measuring instruments. There are a variety of ways in which people's reports about themselves and others can be assessed. With experimental and quasi-experimental designs, the independent variables are manipulations of the environment or respondent experiences, whereas dependent variables are often reports by the respondents themselves or reports of observers about the respondents. With nonexperimental research, perhaps the most common method is the survey that includes a set of measures, each of which represents a different variable of interest. Such surveys can be administered at one time-point and contain all the variables in a study, or they can be administered repeatedly over time and/or be used in conjunction with other sources and methods of data collection. This chapter will cover the basic principles and procedures involved in the development and use

of surveys. Included will be a discussion of survey designs, the development of instruments that can be used in surveys, issues of sampling, and strategies for putting together a survey for use in a study. As with any investigation, one begins a survey study with a purpose and a delineation of research questions to address. In some cases hypotheses are generated, often based on one or more theories. A key part of the development of research questions/hypotheses is specification of the constructs of interest and the statistics, both descriptive and inferential, that will be computed. Conducting the survey itself requires many choices about the wide variety of methods and procedures that are available for use.

Conducting a Survey Study

The survey can be an extremely useful tool for studying human attitudes, behavior, cognition,

emotion, perceptions, personality, values, and many other variables. They can be studied at the level of the individual person, or aggregated to reflect characteristics of collectives, ranging from groups to organizations and even countries. Surveys are quite flexible and can include measures of numerous variables at one time, with the number limited mainly by respondents' patience and tolerance. They can be used alone in a single-source design or in combination with other methods. They can be used once or repeatedly in a longitudinal design. Surveys can be conducted using some form of an interview or a questionnaire. In the interview, a researcher asks questions of respondents, either one-on-one or in groups. It can be conducted face-to-face or via communication technology, such as telephone or video conferencing. Generally the interview involves relatively open-ended qualitative questions, although it is possible to include questions that require quantitative ratings. The questionnaire, on the other hand, is administered in written form either in paper-and-pencil format or through the use of computer technology, such as Web-based methods (e.g., My Survey Lab, Survey Gizmo, Survey Monkey, and Zoomerang). Most questionnaires ask for quantitative ratings or short answers that are easily quantified, such as age or nationality. Open-ended questions are sometimes included that can be analyzed qualitatively or quantified with content analysis (Weber, 1990; Wilkinson, 2003). As illustrated in Figure 9.1, there are a number of steps involved in conducting a survey study (see Fowler, 1988, for a detailed description). First, one must specify the population of interest. The purpose of the study informs the sorts of individuals who will be surveyed. For example, if one wishes to study how people adjust to retirement, the population will consist of individuals who have recently retired. If one wishes to study student bullying in schools, the population will be schoolchildren. Second, the variables to be measured must be selected. Whether the purpose of the survey is to address a practical problem, such as which of several marketing campaigns is likely to be most effective, or a purely theoretical problem, the variables must be carefully specified to inform measure choice or development. Failure to define variables precisely will often lead to poor choice of measures and ambiguity in interpretation. For example, one might be interested in stress, but the term is quite broad and difficult to precisely define. It would be better to define the variable as either an environmental condition (e.g., exposure to financial problems) or reactions (feeling anxious).

[Figure 9.1. Steps involved in conducting a survey study: Specify population → Select variables → Operationalize variables → Assemble questionnaire → Devise sampling procedure → Collect data → Disseminate results.]

Third, one must operationalize the variables for the study (see Fowler, 1995, for discussion of survey question design). For example, financial problems might be operationalized as the discrepancy between monthly expenses and monthly income. Anxiety might be assessed with an anxiety scale. There are choices to be made about which variables will be assessed with ad hoc measures designed for the particular study and which will be assessed with existing off-the-shelf measures that are available. Some of the latter might be copyrighted measures that can be purchased from testing companies, but there are many measures that researchers make available for noncommercial research use at no cost. Ad hoc measures can address the precise purpose of a study and may be necessary if appropriate existing measures cannot be located. On the other hand, ad hoc measures are untested and might not have adequate psychometric precision (i.e., reliability) or evidence for construct validity, both of which will be discussed later. Fourth, once the measures for a survey study are chosen, they are assembled into either an interview protocol or questionnaire. Interview protocols include a list of questions that are asked of each interviewee. Some protocols are very structured, with little or no deviation from what is asked. In other cases, the protocol allows for probing in which a question is asked, with follow-up questions


customized on the spot, depending on the answer. Questionnaires in most cases include a cover letter explaining the survey purpose and instructions to the respondent. Items are included to assess each of the variables chosen for the study. Fifth, a procedure is devised by which one will draw a sample of respondents from the population of interest. This might involve randomly selecting potential respondents from the phonebook of a city or getting a copy of a mailing list from a national professional association. Sixth, a sampling strategy is carried out to administer the survey to the sample and collect the completed surveys. For example, one might conduct a phone interview by calling every 100th person chosen at random from a phone book or mail a survey to every member of the American Psychological Association. Seventh, once the survey is conducted, the data are analyzed and interpreted either qualitatively or quantitatively. Finally, results of the study are disseminated in written (e.g., journal article) and/or spoken (conference presentation) format.
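As a hedged sketch of the sampling step, the short Python example below (the sampling frame and sample sizes are hypothetical; this is not part of the chapter) draws either a simple random sample or a systematic sample such as the "every 100th person" strategy mentioned above.

```python
import random

# Hypothetical sampling frame: identifiers from a mailing list of 25,000 members.
frame = [f"member_{i:05d}" for i in range(1, 25001)]

def simple_random_sample(frame, n, seed=2013):
    """Draw n respondents at random, without replacement."""
    rng = random.Random(seed)
    return rng.sample(frame, n)

def systematic_sample(frame, k, seed=2013):
    """Select every kth record after a random start (e.g., every 100th person)."""
    rng = random.Random(seed)
    start = rng.randrange(k)
    return frame[start::k]

print(len(simple_random_sample(frame, 500)))   # 500 randomly chosen members
print(len(systematic_sample(frame, 100)))      # roughly 250 members, every 100th record
```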

Variables and Measures in Surveys

Surveys are designed to collect data on one or more variables that can vary from concrete factual information (e.g., gender) to abstract and subjective internal states that reflect a number of theoretical constructs, such as attitudes, cognitions, emotions, or perceptions of the environment. Measures of factual information typically ask respondents to indicate which category they are in (e.g., gender or political party), or they might require respondents to write in a quantity, such as years of tenure on a particular job. Often researchers will create ad hoc questions for this type of variable, although in some cases there may be typical lists of choices that are in common use that can be adopted from published studies. For example, in the United States, many researchers use the racial categories of Asian/Pacific Islander, African-American, Hispanic, Native American, White, and Other. Such categories do not necessarily work well in other countries that may have different views of race. Measuring subjective internal states that reflect an underlying theoretical construct often involves the use of rating scales in which respondents are asked to make ratings along a particular quantitative continuum, for example, by indicating their level of agreement with a statement that might reflect a positive attitude. Sometimes single items are used to reflect such constructs, but

more often multiple-item scales are used in which ratings from several items are summed to yield an overall score. Multiple-item measures are generally preferred because they tend to be more reliable, and they can do a better job of reflecting the range of content when constructs are broad. There are several types of rating scales, but the one that is most frequently used with surveys is the Likert scale (Likert, 1932), also called a summated rating scale. There are two psychometric issues with survey measures that are of particular concern with their use. First, there is the reliability or the extent to which a measure or a series of equivalent measures will yield the same assessment of an individual’s characteristic, assuming that characteristic hasn’t changed. If you measure an adult’s height today and tomorrow, reliability means the two measurements will agree. The second is validity, which is the weight of evidence upon which an inference can be made that a measure reflects the underlying theoretical construct intended. Although for straightforward factual information, such as the respondent’s age, we assume validity; for measures of more abstract and complex psychological variables, the nature of underlying constructs is sometimes open to question. For example, does a measure of whether smokers wish to quit smoking reflect their true feelings or what each person feels is the socially acceptable answer? Evidence should be provided that a particular measure can be reasonably assumed to reflect the construct that is claimed. Ultimately there is no way to prove validity; one can only make a case in support of a measure’s interpretation.

Reliability of Measures

Reliability is the property of a measure that yields consistent measurement of a construct. There are two aspects of reliability: internal consistency and test–retest. Internal consistency is the extent to which items in a multiple-item scale relate to one another and thus presumably reflect the same construct. The individual items in a multiple-item scale are designed to be alternative measures of the same underlying construct. If this assumption is correct, then we would expect respondents to be consistent in their responses to the various items. For example, respondents will uniformly agree or disagree with items that all assess the same underlying attitude about something. We do not expect perfect consistency for a variety of reasons, including potential biases in some items, differences in interpretations


of meaning across items, and clerical errors. There is also the possibility that some items reflect different constructs in part. Coefficient alpha is the statistic most often used as a measure of internal consistency reliability. It assumes that all items reflect the same underlying single construct, which is not always the case. Alpha typically ranges in value between 0 and 1.0, with higher values indicating greater reliability. A generally accepted standard for alpha of 0.70 has emerged, based on advice by Nunnally (1978). Lance, Butts, and Michels (2006) pointed out that Nunnally made a number of recommendations and that his advice was a minimum alpha of 0.80 for basic research where the size of correlations or comparison of means among treatments is of concern. The reason for demanding the higher level for coefficient alpha is that unreliability can attenuate observed correlations among variables, rendering them likely underestimates of population values. In cases where one wishes to estimate the magnitude of correlation or compare correlations among different variables, low reliabilities will decrease the precision of estimation, thus increasing the possibility of reaching erroneous conclusions. Test–retest reliability is the extent to which repeated assessments of the same individuals yields the same score, assuming the underlying construct has not changed. It can be used for single as well as multiple-item measures. Test–retest reliability is indicated by the correlation between repeated assessment of the same respondents over some period of time, which can range from moments to years. The timeframe for determining test–retest reliability is based on the stability of the constructs of interest. Measures of personality are assumed to be relatively stable in adults, and so test–retest reliabilities across months and even years might be reasonable to compute. For more transient variables, such as emotional states, even a few minutes might produce a change in the level of construct, and thus determining test–retest reliability can be problematic. Regardless of the construct, in practice the correlation between two retestings has the potential to confound unreliability of measurement with instability in the construct. Test–retest reliability interpretation is informed by consideration of how stable we expect the underlying construct to be. When constructs are assumed to be unstable (e.g., mood) or for populations in which constructs might change rapidly (e.g., children), test–retest reliability might not be particularly useful as an indicator of a measure’s reliability.
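Because coefficient alpha figures so prominently in judging internal consistency, a minimal worked example may help. The Python sketch below (hypothetical responses; not part of the chapter) computes alpha directly from its definition as a function of the item variances and the variance of total scores.

```python
def cronbach_alpha(item_scores):
    """Coefficient alpha for a respondents-by-items list of lists:
    alpha = (k / (k - 1)) * (1 - sum of item variances / variance of total scores)."""
    k = len(item_scores[0])  # number of items

    def variance(values):
        mean = sum(values) / len(values)
        return sum((v - mean) ** 2 for v in values) / (len(values) - 1)

    item_variances = [variance([row[j] for row in item_scores]) for j in range(k)]
    total_variance = variance([sum(row) for row in item_scores])
    return (k / (k - 1)) * (1 - sum(item_variances) / total_variance)

# Hypothetical responses of six people to a four-item agreement scale (1-5).
responses = [
    [4, 5, 4, 4],
    [2, 2, 3, 2],
    [5, 4, 5, 5],
    [3, 3, 2, 3],
    [1, 2, 1, 2],
    [4, 4, 4, 5],
]
print(round(cronbach_alpha(responses), 2))
```

Consistent responding across the items drives the sum of item variances down relative to the total-score variance, which is what pushes alpha toward 1.0.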

Construct Validity

Whereas reliability is considered a property of a measure, construct validity is not. Rather, it is our interpretation of what construct is represented by scores on a measure. Construct validity cannot be proven, but like a court case, we provide evidence to convince ourselves and others about the nature of the construct we have assessed. There are several kinds of validation evidence that can be part of a case in support of a measure's construct validity. The basis for construct validity interpretation is a theory of what the construct in question is and how it relates to other variables. Although often much of the theorizing may be implicit, there is some theoretical framework that leads to a specification of a network of hypothesized relationships of the measure with other variables. Tests of those hypotheses can support or fail to support the case for construct validity, or in other words, that the researcher's interpretation of the underlying construct being assessed is correct. Although other forms of validity exist, I will discuss six major types of validation evidence relevant to construct validity: convergent, discriminant, factorial, criterion-related, face, and content. The first four involve statistical tests, whereas the last two are primarily based on human judgment. Convergent validity and discriminant validity are typically assessed in relationship to one another, often in the context of the multitrait multimethod (MTMM) approach to validity (Campbell & Fiske, 1959). Convergent validity is the idea that independent measures of the same construct should converge—that is, be highly related. With an MTMM study, different methods are utilized to assess the same constructs. Discriminant validity, on the other hand, is that measures of different constructs should not be highly related or at least should not be as highly related as measures of the same constructs. Convergent and discriminant validity are assessed relative to one another, so that one shows higher correlations between measures of the same construct than between measures of different constructs. In an MTMM study, one assesses two or more constructs using two or more methods, with the same methods used for each construct. For example, one might survey a sample of employees (method 1) linked to ratings by observers (method 2) about employee workload (trait 1) and closeness of supervision they receive (trait 2). The employees might complete self-report measures of their own workload and closeness, whereas observers might watch each employee work for 4 hours and make


Table 9.1. Hypothetical Multitrait Multimethod Results Showing Evidence for Convergent and Discriminant Validity

                       Closeness     Workload      Closeness
                       employee      employee      observer
Workload employee      0.32
Closeness observer     0.64          0.31
Workload observer      0.28          0.62          0.26

ratings of the same variables, using the same or different measures as the employees themselves. Table 9.1 indicates a pattern of correlations that would support both convergent and discriminant validity in that the correlations among measures of the same traits (0.64 and 0.62) are higher than among measures of different traits both within the same and across different methods. Factorial validity refers to the factor structure among items of a multiple-item measure and whether statistical analyses will show that items form the factors that are expected based on the proposed structure of the scale. For scales that are designed to be unidimensional, an analysis should support that the items form only a single factor. For scales that are multidimensional (designed to assess two or more subdimensions of the construct), items should line up in expected groupings. Furthermore, if items are included from two or more scales, the content of the factors should correspond to the items of the scales. Similar to the idea of convergent and discriminant validity with the MTMM, we expect that items designed to reflect the same construct should be more highly related with one another than items that are designed to reflect different constructs. In fact, sometimes confirmatory factor analysis is used to analyze data from an MTMM study (Hofling, Schermelleh-Engel, & Moosbrugger, 2009; Lorenz, Melby, Conger, & Xu, 2007). Factorial validity is assessed with some form of exploratory or confirmatory factor analysis. In either case, relationships among items—either correlations or covariances—are analyzed to find item groupings or factors based on strength of interitem relationships. Items that are strongly related to one another will tend to load together onto the same factors. Items that are modestly related to one another will tend to load on different factors. Of course, what constitutes strongly versus modestly is to a great extent relative. With a confirmatory method, one specifies in advance the number of factors and which

items load on each factor. Loadings of items on other (nonspecified) factors are set to zero. The analysis provides indices of how well the data fit the proposed factor structure. An exploratory approach allows all items to load on all factors and suggests the best fitting structure based on the data. There is no consensus about which approach is best for scale development, as researchers tend to have varying opinions (for a discussion of issues concerning use of these two approaches, see Hurley et al., 1997). It should be kept in mind that factorial validity tests are not construct validity tests per se. They merely indicate the number and content of item factors reflected in a measure and shed only limited light on the nature of those constructs. Although factor analysis in both confirmatory and exploratory forms is a useful tool, it is only one piece of the construct validation process. Criterion-related validity links the measure in question to other “criterion” variables to which it is theoretically expected to relate or not relate. Because constructs that are presumed to underlie measures are generally embedded in a theoretical framework, one can generate hypotheses concerning the relationship of the construct in question to other variables. For example, suppose one is interested in developing a new scale of economic hardship as a type of stressful life condition. One might reference research and theory in the stress literature to generate an idea about how the new measure might relate to other variables. One might suppose that economic hardship would induce anxiety and worry, leading to physical manifestations of elevated emotional states such as headaches and digestive upset. A survey study could be conducted to test hypotheses that the new economic hardship measure would correlate significantly with measures of anxiety, worry, headache, and stomach distress. Measures would be included in a questionnaire that assessed the proposed economic hardship variable, as well as the supposed effects of this stressful experience. Finding significant relationships as hypothesized would provide evidence that the measure reflects the underlying economic hardship construct, thus providing support for the construct validity of the new measure. If a criterion variable is continuous, such as age or level of anxiety, then a correlation between the measure in question and the criterion would likely be used to test hypotheses about criterion-related validity. If the criterion is categorical, such as gender or race, then one can compare mean levels to see if the groups expected to be higher on the measure


are, in fact, higher. With our economic hardship scale, we might expect that individuals who have lost their jobs would score higher on economic hardship than individuals who are still employed. Statistics can be used that allow comparison of means, such as an independent group t-test for two groups or a one-way analysis of variance for two or more groups. Face validity is the extent to which the underlying construct presumed to be assessed by a given measure is transparent. For measures of attitudes, we normally ask respondents to indicate their agreement with items that ask directly about the attitude object in a way that is obvious. For example, a measure of attitude about the U.S. President might include an item “I think the President is doing a good job.” For some constructs we assume that respondents are able and willing to provide an accurate and honest answer to a straightforward question, particularly when it comes to attitudes or perceptions about aspects of the social environment. There are times, however, when what seems obvious to the researcher might appear otherwise to respondents, and there can be times when respondents do not provide accurate responses to questions. Inaccurate responding can be a particular concern when items are socially sensitive and potentially threatening, such as asking someone about their religious beliefs or about health problems. In such cases, although the intent of the item might be quite clear, responses to those items do not necessarily reflect what the researcher is after. Although face validity might be helpful in many cases, it is far from sufficient in providing evidence for what people’s responses to a measure might represent. Content validity is a judgment that the items in a measure do an adequate job of representing the entire domain of a construct. Being able to adequately represent a domain is important for the development of knowledge tests where one defines the content of a topic and then chooses items so that they broadly sample all the aspects of that topic. For example, a test on knowledge of basic statistics would not be content-valid if it only asked questions about measures of central tendency and dispersion. Including only this content would be too limited, as it omits the entire domain of inferential statistics. Content validity is generally assessed by having a group of subject matter experts (SMEs) review the items of a measure to determine whether it does an adequate job of covering the entire domain of interest. With knowledge tests, SMEs would be individuals who have in-depth knowledge and training in the area. For a measure of basic statistics

knowledge, college professors who teach introductory statistics might be such a group. They would provide judgments about whether the content of a measure is adequate or whether there are important omissions that need to be added.
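As a small illustration of the criterion-related validity comparison described above, the Python sketch below (hypothetical scores on the proposed economic hardship scale; not from the chapter) computes an independent-groups t statistic comparing respondents who lost their jobs with those still employed.

```python
import math

def pooled_t_statistic(group_a, group_b):
    """Independent-groups t statistic (pooled variance) for comparing two group means."""
    n1, n2 = len(group_a), len(group_b)
    m1 = sum(group_a) / n1
    m2 = sum(group_b) / n2
    s1 = sum((x - m1) ** 2 for x in group_a) / (n1 - 1)  # sample variance, group a
    s2 = sum((x - m2) ** 2 for x in group_b) / (n2 - 1)  # sample variance, group b
    pooled = ((n1 - 1) * s1 + (n2 - 1) * s2) / (n1 + n2 - 2)
    return (m1 - m2) / math.sqrt(pooled * (1 / n1 + 1 / n2))

# Hypothetical total scores on the new economic hardship scale.
unemployed = [22, 25, 19, 27, 24, 21, 26]
employed = [15, 18, 14, 20, 16, 17, 13]
print(round(pooled_t_statistic(unemployed, employed), 2))
```

A large positive t here (evaluated on n1 + n2 − 2 = 12 degrees of freedom) would support the hypothesis that individuals who have lost their jobs score higher on economic hardship, which in turn supports the construct validity case for the new measure.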

Summated Rating Scale

Originally designed for the assessment of attitudes, the summated rating scale is a useful device for the assessment of many different types of constructs that can vary along a quantitative continuum. In addition to attitudes, this type of measure can be used to assess behavior, emotions, perceptions of the environment, and personality, among other things. There are four properties that characterize a summated rating scale (Spector, 1992). First, there are multiple items, each of which reflects the underlying construct of interest. It cannot be a summated rating scale without multiple items to combine. The items are combined either by summing or averaging responses to them. Second, the items must reflect a property of something that can vary quantitatively from low to high (unipolar) or from negative to positive (bipolar). An item from an attitude scale, for example, will be a statement that either reflects a favorable or unfavorable opinion about the attitude object. Agreement with a favorable item will reveal a favorable attitude, whereas disagreement suggests an unfavorable attitude. Third, respondents are asked to make ratings for each item along a continuum that typically has four to seven choices, although there can be fewer or more choices in some circumstances. Generally the continuum represents agreement, evaluation, or frequency. Agreement is bipolar, asking respondents to indicate the extent of agreement or disagreement with each item. Some researchers like to include an odd number of choices, with a middle choice indicating neither agreement nor disagreement, whereas others prefer to avoid a middle point. Evaluation is unipolar, asking for ratings ranging from poor to outstanding, much like a course grade. Frequency is also unipolar, asking how often something occurs (from never to often). Table 9.2 provides examples of all three types of response choices (see Spector, 1976, for a scaled list of response choices). Finally, summated rating scale items have no correct answers, which distinguishes them from multiple choice exams and measures of cognitive abilities (e.g., mathematical aptitude), in which respondents are asked to indicate the correct choice from a list of alternatives.


Table 9.2. Examples of Response Choices for Agreement, Evaluation, and Frequency

Agreement (Disagree very much | Disagree somewhat | Neither agree nor disagree | Agree somewhat | Agree very much)
  I like to eat apples.
  I like to eat grapes.

Evaluation (Terrible | Bad | Fair | Good | Excellent)
  Rate Bob's performance in class
  Rate Mary's performance in class

Frequency (Rarely | Seldom | Sometimes | Often | Frequently)
  How often do you exercise vigorously?
  How often do you eat vegetables?

Table 9.3 contains four items from the Work Locus of Control Scale (WLCS; Spector, 1988). The WLCS is a personality scale that assesses an individual’s tendency to believe he or she controls (internality) or does not control (externality) rewards at work. As can be seen, each item reflects a control belief, with the first two indicating a belief in personal control and the second two indicating a belief in control outside of the individual. There are six response choices ranging from disagree very much to agree very much. Respondents are asked to circle the number corresponding to their beliefs for each item. The responses are quantified from 1 to 6, with 1 indicating the most extreme disagreement and 6 indicating the most extreme

agreement. Because items are not all written in the same (internality vs. externality) direction, one cannot combine responses to the items directly because individuals who are internal in their locus of control will tend to agree with the internal items but disagree with the external items. Individuals who are external in their locus of control will tend to do the opposite. Thus, to make responses to items written in opposite directions comparable, the numerical scaling must be reversed for one type of item. In this case, we reverse the internally worded items so that high scores will reflect an external locus of control. Thus, the strongest agreement with an internal item will receive a score of 1, and the strongest disagreement with an internal item will receive a score

Table 9.3. Shortened Version of the Work Locus of Control Scale (Spector, 1988)

The following questions concern your beliefs about jobs in general. They do not refer only to your present job.

Response choices for each item: 1 = Disagree very much, 2 = Disagree moderately, 3 = Disagree slightly, 4 = Agree slightly, 5 = Agree moderately, 6 = Agree very much

1. On most jobs, people can pretty much accomplish whatever they set out to accomplish.
2. If employees are unhappy with a decision made by their boss, they should do something about it.
3. Getting the job you want is mostly a matter of luck.
4. Promotions are given to employees who perform well on the job.

Copyright Paul E. Spector, All rights reserved, 1988.


of 6. After these item reversals are completed, the scores can be combined by summing into a total locus of control score or averaged to compute the mean score per item. Choice of scoring approach is a matter of personal preference, as both will yield the same results with inferential statistical tests. One final feature of the WLCS is that it contains an instruction to the respondent about how the scale should be used. In this case, the respondent is asked to respond with their general views about jobs and not to the particular job they might have at the moment. Such instructions are necessary when the researcher wishes to control or limit how respondents might use a measure, for example, to limit the target (e.g., your oldest child) or timeframe (e.g., past week) to be considered. Instructions about how to complete this kind of measure would be necessary if the population sampled is likely unfamiliar with summated rating scales.
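A minimal scoring sketch may make the reversal concrete. The Python example below (my own illustration, not Spector's scoring code) reverses the internally worded items of the Table 9.3 excerpt on its 6-point scale and returns both the summed total and the mean score per item.

```python
def score_wlcs_short(responses, reverse_items=(1, 2), n_choices=6):
    """Score the four-item WLCS excerpt in Table 9.3.
    responses: dict mapping item number (1-4) to the 1-6 rating.
    The internally worded items (1 and 2, per the text) are reverse-scored
    (7 - rating on a 6-point scale) so that higher totals reflect a more
    external locus of control."""
    total = 0
    for item, rating in responses.items():
        if item in reverse_items:
            rating = (n_choices + 1) - rating
        total += rating
    return total, total / len(responses)  # summed score and mean score per item

# Hypothetical respondent who agrees with the internal items and disagrees with the others.
answers = {1: 6, 2: 5, 3: 2, 4: 1}
print(score_wlcs_short(answers))  # a low total reflects an internal locus of control
```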

Development of a Summated Rating Scale

The development of a summated rating scale proceeds in several steps that I will briefly summarize (for a more detailed treatment, see DeVellis, 1991; Spector, 1992). The process involves both conceptual and empirical work to develop a scale that has reasonable reliability and shows evidence of construct validity.

Step 1: Define the Construct. Often the most difficult part of a measure development effort is to clearly define the construct of interest in an unambiguous way that distinguishes it from other related and unrelated constructs. Ultimately the quality of the measure and the extent to which a strong case can be made for construct validity are based on a clear construct definition. Constructs are generally embedded in a theoretical framework, often implicit, that relates to other existing constructs, and often one construct is defined in relation to others. Embedding a construct in this way can involve explicitly specifying constructs that are similar and how the new construct is different, and explicitly specifying constructs that are clearly different and why they are different. Often the specification is made with ideas about potential antecedents and consequences that can be the basis for tests of criterion-related validity that will be explored in subsequent steps of scale development. Coming up with a clear and unambiguous definition can be particularly challenging with constructs that are abstract and have no firm

objective reality and might exist more in the mind of the researcher than in the environment. Take, for example, the construct of organizational commitment. At a superficial level, it is simply the extent to which an employee is loyal to his or her employer, but what do we mean by loyalty? Is our concept of commitment limited to feelings and internal states, or does it include behavior? Mowday, Steers, and Porter (1979) define commitment as an employee's acceptance of an organization's goals, a willingness to exert effort for the organization, and a desire to remain part of the organization. Meyer and Allen (Allen & Meyer, 1996; Meyer, Allen, & Smith, 1993) suggest that there are three different types of commitment. Affective is a feeling of attachment, continuance has to do with investments that would be lost by leaving, and normative concerns a sense of obligation. Clearly what on the surface is a simple idea of loyalty can be complex, and even multifaceted.

Step 2: Designing the Format of the Scale. There are many options in the design of a summated rating scale. One must choose the number and format of the response choices—that is, will it be agreement, evaluation, frequency, or something else? To a great extent, the nature of the construct will help determine which format makes the most sense. Agreement is almost always used for attitudes and personality. Frequency is used when one wants to know how often something occurs, such as a behavior or a particular type of experience (e.g., been bullied). Evaluation is used when one wishes to assess the quality of something—for example, how well someone (e.g., students) or something (e.g., a college curriculum) serves its purpose. The number of choices is a matter of personal preference. Up to a point, the greater the number of choices, the more precision there will be in the ratings. For example, two frequency choices would only distinguish if something occurs versus doesn't occur or occurs often versus seldom. Five choices would distinguish often from seldom and frequencies in between. There is a limit to human judgment, however, so that one achieves a point of diminishing returns as the number of choices increases, so that it is not clear that having much more than six or seven choices gains much additional precision. Not needing more than six or seven choices is particularly true for summated rating scale measures where there are multiple items that will enhance precision. Some measures also include instructions to the respondent to provide a frame of reference or to explain the nature of the rating task. As noted earlier


and seen in Table 9.3, the WLCS has an instruction that the items refer to jobs in general and not just the current job. Finally, there are presentation issues concerning the modality of presentation/response and formatting issues in how the items and response choices are displayed. Surveys can be printed onto paper with responses made with pen or pencil, or surveys can be administered online with responses made with a mouse or other computer interface device. Responses can involve checking a box, circling a number, writing in a response, or some other option. The WLCS in Table 9.3 asks the respondent to circle the number that best represents their level of agreement with each item.

Step 3: Writing the Items. The theoretical definition of the construct guides the creation of the items. The nature of the construct as well as the response choices influences the types of items that are written. For an attitude scale, each item is a statement that is either favorable or unfavorable about the target of the attitude. For a behavior, the item is a short description of a type of behavior (e.g., "arrive at work late," "fail to do your homework," or "vote in an election"). Evaluation scales will note the target to be rated, and each item is typically a dimension to be considered. For example, a job performance rating scale might have items for work quantity, work quality, professional appearance, and attendance. There are several principles involved in writing good items for a scale. First, each item should be clearly written, using language that is simple and straightforward. Second, avoid colloquial expressions, as their meaning can change over time and they might not translate well into another language should there be interest in doing so. Third, each item should reflect one and only one idea. For example, the item "do you drive to work or carry your lunch" conveys two ideas that will not be consistently endorsed by everyone. This can cause confusion if the person does one and not the other. Fourth, avoid the use of negating words, such as "not" to change the direction of item wording, as it is likely to cause response errors (Schmitt & Stults, 1985). If a respondent fails to see the "not," then his or her response will be opposite to their actual standing on the item. It is difficult in advance to know which items will tend to elicit responses that are consistent with one another and produce a measure with adequate internal consistency reliability. Items that might seem to reflect the same intended construct sometimes fail to inter-relate, as respondents might interpret the meaning of items differently than the researcher. To

deal with this issue, often an initial item pool is generated for a new measure that contains many more items than needed. It is not unusual to begin with 50 or more items for an initial pilot test of the measure, with the final measure containing only a small subset of the items. The number chosen is based on how broad and clearly defined the construct is. Step 4: Pilot Test and Item Selection. Once the design of the scale is chosen and an item pool is generated, the scale is ready to be pilot-tested on a sample of respondents who represent the population on which the scale is intended to be used. The goal of the pilot test is to generate responses from a large enough sample so that an item analysis can be conducted to devise an internally consistent scale. If the scale is intended to reflect different components or facets of the construct, then factor analysis might also be used to see if the items form factors as expected. Each of these statistical analyses will be discussed in detail later. Finally, often additional data are collected to provide some evidence for construct validity. The size of the sample depends on the analyses that are to be conducted. A sample of 100 to 200 is probably sufficient for conducting an initial item analysis. Larger samples are desirable for a factor analysis that might be used to address factorial validity. Because the majority of researchers who develop scales work for or are affiliated with a university, college students are often used for a pilot study. This population is reasonable for the development of many types of scales but is not appropriate in all cases, as students are younger and more educated than the average person, and at many universities, few students are employed or married. It should also be kept in mind that results for a pilot study conducted in one country will not necessarily generalize to another, and often scales with reasonable internal consistency in one country will not have good internal consistency in another. I will discuss this issue further in the section on measurement equivalence/invariance. Responses to items from the pilot sample will usually be subject to an item analysis that helps determine which of the items forms an internally consistent scale with adequate internal consistency. Item analyses provide two statistics that are particularly useful in deciding which items to retain and which to eliminate. The item-remainder coefficient is the correlation of each item with the combination (sum or average) of all the remaining items not counting that one. For example, if there are 10 items, then the item-remainder coefficient for the first item


will be the correlation of item 1 with the combination of items 2 through 10. The item-remainder for the second item will be the correlation of item 2 with the combination of items 1 plus 3 through 10. The larger the item-remainder, the more the item in question relates to the remaining items. Typically at the initial stage, the items with the largest itemremainder coefficients are chosen to remain part of the scale. Another useful statistic that is associated with individual items is the coefficient alpha with the item in question removed. These statistics are compared to the overall alpha for the measure with all items. If the alpha goes up with an item removed, then one would conclude that the item is adversely affecting the measure’s internal consistency, and it might be removed. If the alpha declines, then the item is contributing to internal consistency and might be retained. Of course, often the differences between overall alpha and alpha with an item removed are small and might not be of practical significance. If internal consistency is the only consideration, then we would retain as many items as possible in our measures. There are practical limits, however, to how many items we can reasonably expect potential respondents to complete, and given we often include many measures in the same survey, efficiency in measure length is an important consideration. Thus, although we might begin with a rather large item pool, we usually wish to wind up with a scale that might have only a handful of items. Scales of four to eight items with good internal consistency are not unusual with measures in many domains. The item analysis can be helpful in determining which subset of items relate to one another and form an efficient scale with adequate internal consistency. This type of analysis, however, does not reflect on the validity of the scale. It indicates that the items likely reflect the same construct, but not what that construct might be. Some measures are designed to assess more than one dimension of a construct. For example, measures of job satisfaction sometimes assess a variety of facets, such as satisfaction with pay, supervisors, and the work itself (Spector, 1997). The development of such multidimensional measures involves creating subscales, each of which goes through the measure development process in parallel. Thus one generates item pools for each subscale and conducts an item analysis for each subscale separately. Once items are chosen for each subscale, factor analysis can be conducted to determine factorial validity, as we

would expect items to form factors that conform to the intended subscales. Deviation from the expected pattern would suggest that the items of the subscales are either not assessing different dimensions or some items might be placed into subscales incorrectly. It is also possible that some items reflect two constructs. For example, the item "My supervisor has been fair in giving me raises" reflects both pay and supervisor satisfaction. Another approach to the development of a measure is to generate a broad sample of items and then use exploratory factor analysis to determine the number and nature of subscales (and constructs) represented. This approach can be productive in new areas where the precise nature of constructs is not well understood. In such cases, it might be difficult to anticipate what the underlying structure of a set of items might be. Of course, once a structure is found, one must collect other forms of validation data to explore the construct validity of the factors, as one must be cautious not to automatically equate factors with constructs (Spector, Van Katwyk, Brannick, & Chen, 1997).

Step 5: Collecting Validation Evidence. An internally consistent measure is a reliable measure of something, but the nature of what its scores represent needs additional study. Collecting validation evidence is sometimes done by adding additional measures in the pilot study that can be used to test hypotheses about relationships of the new measure with other variables. It is also done in subsequent studies in which the measure is a part. In many cases, once the measure has been refined in a pilot study, it is used in subsequent substantive studies linking the proposed construct to other variables. Use of a new scale can occur because the researcher's interest is in the substantive questions about the construct, but because there were no appropriate measures available, a new one had to be developed. In such cases, criterion-related validity evidence is provided at the same time as the researcher's main interests are addressed. As with all such research, however, tests of underlying theoretical ideas are confounded with tests of the construct validity of scales, and it is often tough to disentangle the measurement issues from the substantive ones. In other words, we might misinterpret the evidence that appears to support or fail to support our research hypotheses, not because those hypotheses are correct or incorrect but because of a lack of construct validity that leads to our findings being caused by factors other than what we assume.
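To make the item analysis in Step 4 concrete, the following Python sketch (hypothetical pilot data; not part of the chapter) computes the item-remainder coefficient for each item, that is, the correlation of each item with the sum of the remaining items.

```python
def pearson_r(x, y):
    """Pearson correlation between two equally long lists of scores."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5

def item_remainder_coefficients(item_scores):
    """For each item, correlate it with the sum of all remaining items.
    item_scores is a respondents-by-items list of lists."""
    k = len(item_scores[0])
    coefficients = []
    for j in range(k):
        item = [row[j] for row in item_scores]
        remainder = [sum(row) - row[j] for row in item_scores]
        coefficients.append(pearson_r(item, remainder))
    return coefficients

# Hypothetical pilot responses (five respondents, four items); in practice the
# pilot sample would be far larger (100 to 200 respondents or more).
pilot = [
    [4, 4, 5, 2],
    [2, 3, 2, 5],
    [5, 4, 4, 1],
    [3, 3, 3, 4],
    [1, 2, 1, 3],
]
print([round(r, 2) for r in item_remainder_coefficients(pilot)])
# Items with small (or negative) item-remainder coefficients are candidates for removal.
```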


Survey Research Designs

Surveys can be conducted using a variety of research designs that can vary in terms of the timeframe and whether data are combined from multiple sources. The most popular design is the cross-sectional, single-source design in which data are all collected at one point in time from a sample of respondents. The popularity is undoubtedly caused by its efficiency, in that a large number of individuals can be surveyed at once, and there is no need to identify individual respondents to link their data from a given survey to subsequent survey administrations or to other sources of data. One can include a large number of measures in the same survey, thus showing relationships among a wide variety of variables. Drawbacks to this design are that it is unable to provide convincing evidence for a causal connection among variables, and it is vulnerable to potential measurement biases that are shared across measures of different constructs. A practical concern for researchers wanting to publish results of a survey study is that reviewers will often complain about common method variance (also called mono-method or same-source bias) with this design. I will discuss this issue later in the section on survey bias. The basic survey design can be expanded into a multisource design by collecting data from one or more additional sources. Data from the respondents who are the targets of study are linked to data from these additional sources. Often the additional sources are people who are in a particular relationship to the target of study. Additional sources might include coworkers, subordinates, or supervisors if the population being studied consists of employees; or it might include classmates or teachers if the population being studied is students. For adults one might survey partners/spouses, whereas for children one might survey parents. Another possibility is to have trained research assistants make observations of the target individual and then quantify those observations, perhaps by making ratings. In some studies the alternate sources might complete the same measures as the target respondents, thus providing parallel data on the same constructs. Finally, data from a survey can be linked to data in records, such as hospital incident reports in a study of patient outcomes or school records in a study of achievement motivation. The multisource study allows for an assessment of convergent validity of measures that are common across methods. If respondents are asked to report how often they engage in a particular behavior, then

the additional source can verify that this is, in fact, the case. This design also helps control for some forms of bias that might affect variables that are all self-reported by the respondent. For example, if the variables of interest are likely to be affected by someone’s mood, observed relationships among those variables might be distorted. Individuals in a good mood might tend to respond high on all variables, whereas individuals in a bad mood might tend to respond low. This would inflate observed correlations among variables. The use of an additional source for data on one of the variables will likely avoid that bias, assuming the moods of the target person and additional source are not linked. Finding a similar relationship between the same-source and multisource data provides additional confidence in conclusions. Although the use of multisource designs can be an advantage over same-source, they are not a panacea. It has been pointed out that often the individual being studied is the most accurate source of information about his or her own behavior and experiences and that additional sources are not necessarily accurate (Frese & Zapf, 1988). For example, it has been shown in the occupational domain that employees demonstrate better discriminant validity (lower correlations among subscales of a measure) than do additional sources such as supervisors (Glick, Jenkins, & Gupta, 1986; Spector, Fox, & Van Katwyk, 1999). The basic cross-sectional survey design can also be expanded by making it longitudinal whereby the same individuals are surveyed two or more times. This design allows the researcher to explore relationships over time. One can compute the test–retest reliability of measures, although as noted earlier, it can be difficult to disentangle unreliability from instability of the constructs themselves. It can also be used to see if one Time 1 variable (X) can predict another Time 2 variable (Y) with Y’s Time 1 level controlled, which has the potential to yield more definitive tests of potential causal relationships than can cross-sectional designs. The ability to do that, however, assumes that one is able to assess the variables of interest both prior and subsequent to the causal process unfolding. The assessment of two variables at two arbitrary points in time after their causal process has unfolded and they have reached steady-state is unlikely to offer much advantage over cross-sectional designs. It is possible, however, to choose timeframes for longitudinal survey studies that provide tests before and after the occurrence of a condition or event of interest. Use of a longitudinal design can be done


quite readily if the variable in question is dichotomous; in other words, the person can be clearly placed in one category or the other. For example, Manning, Osland, and Osland (1989) conducted a longitudinal study of the effects of smoking cessation in which they surveyed a sample of individuals at two times over a period of 12 to 16 months. At each point in time, they included measures of job satisfaction, mood, and health behaviors. They also asked respondents if they currently smoked, thus allowing them to be placed into four groups depending on whether they smoked at each time period. Of particular interest was the group that smoked at Time 1 but not at Time 2 (the cessation group), with groups that smoked at both time periods or didn’t smoke at either time period serving as controls. Their results showed that all three measures went down from Time 1 to Time 2 only for the cessation group, thus suggesting that smoking cessation might have had a negative impact on people’s attitudes, mood, and health behaviors.
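As a concrete illustration of the lagged analysis described above (predicting a Time 2 outcome from a Time 1 predictor while controlling for the Time 1 level of the outcome), here is a minimal sketch. It is not the analysis used in the studies cited; the data and column names (x_t1, y_t1, y_t2) are hypothetical placeholders for two-wave survey data.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)

# Hypothetical two-wave data: X measured at Time 1, Y measured at both waves.
n = 400
x_t1 = rng.normal(size=n)
y_t1 = 0.4 * x_t1 + rng.normal(size=n)
y_t2 = 0.6 * y_t1 + 0.25 * x_t1 + rng.normal(size=n)  # built-in lagged effect
df = pd.DataFrame({"x_t1": x_t1, "y_t1": y_t1, "y_t2": y_t2})

# Regress Time 2 Y on Time 1 X, controlling for Time 1 Y.
predictors = sm.add_constant(df[["x_t1", "y_t1"]])
model = sm.OLS(df["y_t2"], predictors).fit()
print(model.params)    # coefficient on x_t1 is the lagged effect of interest
print(model.pvalues)
```

A nonzero coefficient on x_t1 is more suggestive of a causal ordering than a cross-sectional correlation, but, as the text notes, only if the two waves actually bracket the period in which the causal process unfolds.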

Biases and Method Variance in Surveys A concern with the use of survey methods is potential biases in people’s responses to questions. These biases can arise because respondents are unable or unwilling to provide accurate information. Inability to respond can occur when researchers ask about things the respondent has limited knowledge about or asks in a way that challenges the respondent’s abilities—for example, the survey questions are at too high a reading level. Bias can also occur inadvertently because the respondents are influenced by extraneous factors that render their responses inaccurate, despite their best efforts to respond honestly. For example, an individual might not be able to accurately evaluate the performance of a member of his or her own family. In other cases, respondents might be unwilling to be candid, particularly when questions deal with sensitive issues. Although some respondents might choose to leave such questions blank, others might respond in a way that does not accurately reflect their standing on the construct of interest. Anonymity can help but not eliminate inaccuracy or lack of response caused by unwillingness. Biases adversely affect the validity of measures because they can inflate or deflate scores. As noted earlier, mood is a potential bias for some variables. In the Manning et al. (1989) smoking cessation study, it is possible that the effects on job satisfaction resulted not from those who ceased smoking having

declining attitudes but, rather, from their bad mood at the time they completed the Time 2 survey. The bad mood of quitters might have led them to rate their jobs as less satisfying.

Other biases might also affect survey responses. Of particular concern in many areas of psychology and related fields is the possibility that there is bias caused by the use of common methods. Common method variance is variance in observed measures that results from the particular methods used (Campbell & Fiske, 1959). The survey is often considered a method that can be the source of method bias, and if multiple variables in a study come from the same survey, then it is assumed that the method used is in common, and the study will suffer from common method variance. As I have written elsewhere (Spector, 1987, 2006), there is little evidence that the survey method does in fact produce bias that is common across all measures and leads to a constant inflation effect. For example, Spector (2006) demonstrated that nonsignificant and near-zero correlations are commonplace in survey studies. Thus the common method itself is not the source of bias. Rather, it is the combination of the method used with the nature of the specific construct that determines the biases in measurement (Spector & Brannick, 1995, 2009). Regardless of the cause (method or the combination of method plus construct), bias can have effects on relationships among variables. When bias is limited to only one variable, it will reduce measurement accuracy and act like error variance that can decrease reliability. When variables share the same biases, however, the relationships among them can be inflated because of the common bias. In other words, the construct of interest is confounded with the biasing factors, making it difficult to determine the underlying cause of the observed relationships. In point of fact, the relative strength of shared and unshared biases in measures determines the effect on observed relationships. Using a simulation, Williams and Brown (1994) showed that bias caused by method in many cases leads to an attenuation, rather than inflation, of observed relationships. Lance, Dawson, Birkelbach, and Hoffman (2010) conducted a simulation showing that even if method variance exists, its effect on observed correlations among variables is likely inconsequential because the amount of likely inflation is approximately equal to the extent to which measurement error (unreliability of measures) attenuates correlations. The possible inconsequentiality of potential measurement biases was illustrated by Williams and


Anderson (1994), who compared structural models with and without potential bias sources (i.e., emotionality). Using structural equation modeling they found that bias had little effect on tests of substantive models hypothesized to explain relationships among their variables.

There is potential for bias in survey measures, as we cannot be certain that extraneous influences are not affecting people's responses to items. The question that is difficult to answer is the extent to which observed relationships among measures in any investigation might have been distorted, or the direction of that distortion. There are a number of strategies that can be used to minimize the risk of distorted results (for detailed discussion, see Spector & Brannick, 2009). The best way is to design a study so that the possible effects of bias are minimized. Two such strategies are separating assessment of different variables over time and using multisource designs. Temporal separation can be useful for eliminating transient occasion factors that might have brief effects on measurement. The effects of mood or day-to-day events that might temporarily color someone's perspective can be controlled by separating measurement over time. The use of multiple methods can remove effects of some biases, but it cannot remove all of them. In general, the closer the relationship between the additional source and the respondent, the more likely it is that there would be shared biases between them. For example, Morrison and Clements (1997) showed that the personality trait of neuroticism was related in cohabitating partners/spouses, and neuroticism has been noted as a possible source of bias in survey studies (Watson, Pennebaker, & Folger, 1986). The role of neuroticism, however, is likely complex (Spector, Zapf, Chen, & Frese, 2000), as this personality variable is not just a source of bias. Further, people in a similar situation, such as coworkers, can share stressful experiences (Semmer, Zapf, & Greif, 1996). To the extent that such experiences might bias responses to a survey, those biases would be shared between the respondent and an additional source of data. Ultimately, the best way to definitely rule out the possibility of biases affecting survey results is the use of a variety of methods, each of which can control for some biases. The scientific principle of converging operations is relevant here. Finding similar results across different methods adds confidence to our conclusions, as each method will be vulnerable to its own set of potential biases and weaknesses, but combined a series of distinct methods will capitalize on the strengths of each.
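The interplay of shared method bias (which can inflate observed correlations) and unreliability (which attenuates them) can be illustrated with a small simulation. This sketch is not the simulation used by Williams and Brown (1994) or Lance et al. (2010); the true correlation, bias loading, and error values are arbitrary choices made only for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# Latent constructs with a known true correlation.
true_r = 0.40
t1, t2 = rng.multivariate_normal([0, 0], [[1, true_r], [true_r, 1]], size=n).T

# A shared "method" factor (e.g., transient mood) and random measurement error.
method = rng.normal(size=n)
bias_loading = 0.30     # how strongly the shared bias colors both measures
error_sd = 0.80         # measurement unreliability

x1 = t1 + bias_loading * method + rng.normal(scale=error_sd, size=n)
x2 = t2 + bias_loading * method + rng.normal(scale=error_sd, size=n)

observed_r = np.corrcoef(x1, x2)[0, 1]
print(f"true correlation      = {true_r:.2f}")
print(f"observed correlation  = {observed_r:.2f}")
```

With these particular values the attenuation from unreliability outweighs the inflation from the shared bias, so the observed correlation falls below the true one despite the common method factor; increasing bias_loading or reducing error_sd tips the balance the other way, which is the trade-off examined in the simulations cited above.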

International and Cross-National Surveys The amount of survey research that is conducted across countries has been expanding, as new forms of communication have reduced barriers to international collaboration. Some of this work consists of simple replications of research using measures and testing theories from one country to another, whereas other work involves a comparison of two or more countries. There are two particular challenges one encounters in conducting cross-national research: measurement equivalence/invariance (ME/I) and sample equivalence. The former issue concerns whether results with measures developed in one country can be compared to another and whether the measures maintain construct validity. The latter issue has to do with assuring that the populations being sampled among countries are comparable, and country is not confounded with other variables, such as age or education of the respondents.

Measurement Equivalence/Invariance

Measurement equivalence/invariance concerns both the semantic meaning of items and the calibration of ratings—in other words, does an item reflect the same construct across countries, and does a given score represent the same level of a construct across countries? These issues exist whether the survey is conducted in different languages or in the same language, although translation likely exacerbates the potential problem. With different languages, there is a potential problem in assuring that translations are accurate. Often there is no one-to-one correspondence of words between languages, so that the translation into the target language is only an approximation of the word from the source language. There are also connotative meanings of words and phrases, so that a literal word-to-word translation does not always best reflect the original meaning. These issues can also exist with the same language used across countries, as the meaning and connotation of words and expressions can vary across countries, such as the United States and the United Kingdom. A related problem has to do with the scale value of items to which individuals make ratings. Measures are designed to assess a person's standing on the underlying continuum of a theoretical construct. Not only do people vary on that continuum, but items vary as well. Consider the following two items on a scale that reflects people's attitudes about their automobile. "I drive a serviceable car."


“I am driving the best car in which I’ve ever ridden.” The first item is modestly favorable, whereas the second item is extremely positive. Thus they vary on the continuum of automobile satisfaction. It has been shown for decades that people’s responses to items that vary in their values along the underlying continuum are not consistent, as people will tend to agree most strongly with items that come closest to their own standing on the continuum and do not necessarily agree with all items that are written in the same direction regardless of scale value (Thurstone, 1928). This unfolding principle explains that if a person is only modestly satisfied with his or her car, he or she might endorse “strongly agree” with the first item but only “slightly agree” with the second. In other words, the score on item 1 would be higher than the score on item 2. In fact, such an individual might even disagree with the second item entirely, despite a favorable leaning toward his or her current automobile because the statement is just too extreme. The tendency for item responses to be determined by the scale value of an item can create problems when items are transported across countries and languages. For the first item, a successful translation into a new language might retain the meaning of the item but not the scale value. The phrase “serviceable car” might be difficult to translate precisely and with the same scale value into a different language, as there might not be an equivalent word that means exactly the same as “serviceable” with the same connotative meaning and the same strength of meaning. Thus a comparison of mean responses between, say, a sample of North Americans responding to the item in English and a sample of Chinese responding to the Chinese translation might confound the scale value of the respective items with country differences in the underlying construct of automobile satisfaction. Finally, there is the possibility of response styles or tendencies that can vary across countries and cultures (both within and across countries). For example, Triandis (1994) noted that Asians have a tendency toward modesty and avoidance of strong agreement about positively worded items. In comparisons of American and Japanese respondents’ responses to items assessing depression, Iwata and colleagues (Iwata, Roberts, & Kawakami, 1995; Iwata et al., 1998) found that there were differences for positively worded items (Japanese scored lower) but not negatively worded items (Americans and Japanese scored the same). This line of research

raises questions about whether observed differences in depression result from the tendency for Japanese to be more depressed than Americans or from culturally determined response tendencies to positively worded items.

There are both procedural and statistical methods for dealing with issues of ME/I. At the procedural stage of a study involving data collection among samples with different languages, the translation and independent back-translation procedure (van de Vijver & Leung, 1997) has become standard practice, with the majority of cross-national studies using it (Schaffer & Riordan, 2003). This method requires the services of two skilled bilinguals, one to translate the questionnaire from the original source language to the target language and another to independently back-translate the target version into the source. It is best to have a native speaker of the source language compare the original with the back-translation to be certain the two are equivalent. Errors of translation can be repaired and rechecked so that the two language versions are as close as possible. As noted earlier, semantic equivalence does not guarantee that responses to the items in different languages will be equivalent. Statistical methods are available to check for ME/I by comparing responses to items among two or more samples. One approach is to use structural equation modeling (SEM) procedures to check for equivalence of underlying factor structure reflected in the relationships among the items of one or more measures. This approach assumes that if the factor structure of a scale is proportionally equivalent across samples, then the underlying constructs being assessed are likely to be equivalent. There are several different tests of ME/I that can address different aspects of equivalence (Vandenberg & Lance, 2000). The most restrictive is an omnibus test that checks for equivalence of two or more inter-item covariance matrices across samples. This test indicates the extent to which all the inter-item covariances and item variances are the same across samples, allowing for an expected amount of sampling error. Equivalence of inter-item covariance implies that the underlying factor structures are also equivalent. Other tests can be used to address more specific aspects of factor structure, such as whether the same items load on the same factors or whether corresponding item factor loadings are equivalent across samples. Another approach is to use Item Response Theory (IRT) methods to check for the equivalence of item characteristics for a measure across samples (Raju, Laffitte, & Byrne, 2002; van de Vijver & Leung,


1997). Item Response Theory can be used on unidimensional measures in which all items entered into an analysis are assumed to reflect the same underlying construct. The method itself is used to compare corresponding pairs of items to determine if they behave equivalently across samples. Data are used to produce item characteristic curves relating the probability of a response to people's standing on the underlying construct of interest. Variations in the curves are indicative of a lack of equivalence. Of the two approaches, SEM is by far the more frequently used to establish ME/I (Schaffer & Riordan, 2003). This popularity perhaps results from several factors, including that SEM does not require unidimensional scales, requires smaller samples, and is perhaps more generally familiar to researchers. It is quite useful for providing overall tests of ME/I. Statistics (loadings) for individual items can be helpful in identifying items that might be adversely affecting ME/I across samples. Item Response Theory is designed specifically for exploring fit at the item level, and it can be very useful for identifying items that might be eliminated to improve ME/I. Raju et al. (2002) compared the two methods on the same data and found few differences in their ability to identify items that were not invariant (SEM flagged one more item as noninvariant than did IRT). Thus, there is little basis at this point to recommend one approach over the other, as both can be potentially useful in establishing ME/I.
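To make the IRT approach concrete, the sketch below compares item characteristic curves for one item across two samples under a two-parameter logistic model. The parameter values are invented for illustration, and the sketch skips the estimation step (in practice the discrimination and difficulty parameters would be estimated with IRT software); it only shows how differing curves signal a lack of equivalence.

```python
import numpy as np

def icc_2pl(theta, a, b):
    """Two-parameter logistic item characteristic curve."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

# Hypothetical estimated parameters for the same item in two samples.
params_group_1 = {"a": 1.2, "b": 0.0}    # e.g., source-language sample
params_group_2 = {"a": 1.2, "b": 0.5}    # e.g., translated-version sample

theta = np.linspace(-3, 3, 121)          # grid of latent trait values
p1 = icc_2pl(theta, **params_group_1)
p2 = icc_2pl(theta, **params_group_2)

# Simple descriptive indices of how far apart the two curves are.
max_gap = np.max(np.abs(p1 - p2))
avg_gap = np.mean(np.abs(p1 - p2))
print(f"maximum ICC difference: {max_gap:.3f}")
print(f"average ICC difference: {avg_gap:.3f}")
```

Identical curves across groups would mean that a given latent standing implies the same response probability in both samples; here the half-point shift in difficulty produces a visible gap, the kind of item-level signal that might lead to revising or dropping the item.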

Sample Equivalence

When conducting research in which samples will be compared across countries or cultures, care must be taken to minimize confounding between the country/culture differences of interest and characteristics of the samples. To accomplish this, one needs to be sure the samples are as equivalent as possible on relevant variables, such as demographics (age, gender, and socioeconomic status relative to the country), and other characteristics that are not of interest. For example, in studies of student learning, one would control for grade level, whereas in studies of employment, one would control for the nature of jobs. Perhaps the best way to maximize sample equivalence is with the use of matching, whereby sampling strategies are used to produce as much equivalence as possible. However, as Schaffer and Riordan (2003) pointed out, one must be cautious in using matching, as it does not necessarily entirely fix the problem. For example, a study that limits its respondents to teachers across samples might control for

occupation, but it does not control for the education level of teachers across samples. Differences between teacher samples might not result from culture but, rather, education. A comparison of teachers with an equivalent level of education across country samples might well yield different results. Perhaps the only way around this problem is to replicate comparisons among countries of interest so that a number of matched samples are compared. Finding consistency of results across different comparison groups would lend confidence to conclusions about country or culture differences.

Sampling Issues A critical step in conducting survey research is to define the underlying population of interest on which a sampling strategy is based. In areas in which researchers wish to make precise estimates of descriptive statistics, populations are carefully defined and sampling procedures are designed to accurately represent them. This strategy can be seen in marketing research and political polling where questions concern likely purchasing decisions and voting patterns. With research more typically found in scientific journals, where one tests theory-based hypotheses concerning relationships among variables, there is less focus on specifying populations and using appropriate sampling procedures. Unfortunately, when samples are taken from undefined populations, generalizeability of results can be uncertain. Rarely in academic research reports are the limits of generalizeability acknowledged or considered. The majority of psychological studies of many human phenomena are conducted on samples of college students who are on average, in comparison to the general population, higher in cognitive ability and education and lower in age and working experience. For many investigations, such factors might not be important, but for others, results might not generalize well to the less cognitively able, less educated, older, and more experienced people. Furthermore, cross-national differences might exist so that results do not generalize far beyond the country where the study was conducted. The purpose of the survey determines the underlying population of interest and the sampling strategy utilized that can best reflect that population. Choice of population to sample is a conceptual/theoretical issue in which the researcher logically determines the nature of the population to which the research is relevant. For example, studies of student learning will specify the age and grade


level of the student population to be sampled. Sampling strategy involves a tradeoff between the optimal approach and practical considerations that may necessitate compromises. Achieving a random and representative sample of a population can be quite expensive and time-consuming. Respondents in such samples are chosen in a nonsystematic way that reduces chances for bias, so that the characteristics of the sample are expected to match the characteristics of the population, allowing for sampling error. For some questions such samples are necessary, but for others a nonrepresentative sample might be sufficient, even if not ideal, although typically we cannot be certain when this might be the case. Surveys can be conducted in a variety of settings where a researcher can access individuals from the population of interest. In some cases, potential respondents can be chosen from a general population (e.g., telephone directory) and contacted via e-mail, phone, or post. In other cases, people might be accessed through an organization, such as an association to which they belong, an employer, or school. The organization might provide contact information on members so that surveys can be distributed. At times the survey might be conducted within the organization itself (e.g., surveys handed out in classes). There are a variety of procedures for recruiting respondents ranging from direct contacts (e.g., via e-mail) to more passive advertisements for individuals to participate in the study (Lee, 1993, discusses strategies for accessing particular populations).

Another issue with conducting surveys is whether you will take samples from a sampling frame (list of all individuals eligible for a study) or attempt to survey the entire frame. For studies done of employees of an organization, it is not uncommon to send surveys to every employee. This approach is particularly likely if the study is being conducted by management to assess areas in which steps might need to be taken to address employee concerns and dissatisfaction. With large organizations, however, sometimes to reduce costs, a restricted sample is drawn that represents a proportion of the total sampling frame.

When sampling is done from a larger population, there are a number of strategies that can be utilized. Sampling procedures are classified as probability, in which it is possible to specify the probability that each member of the population is chosen, versus nonprobability, in which one cannot specify the probability of being chosen (Judd, Smith, & Kidder, 1991). Probability sampling has the advantage of being able

to provide samples that we expect are representative. Although there is no guarantee that a particular sample is in fact representative, we accept that it is highly likely that a properly drawn probability sample is representative (Judd et al., 1991), just as we assume that random assignment to treatment conditions in an experiment likely produced equivalent groups. With nonprobability samples, we have no way of knowing whether a sample is likely to be representative of a given population. It is possible that it is, but it is also possible that it is not, and thus we cannot be certain about the characteristics of the population that was sampled. The simplest probability sample is the random sample in which each member of a sampling frame has an equal probability of being chosen. Such a sample is drawn through a process in which we randomly choose a given number of respondents from the sampling frame that represents the population of interest, such as all residents of Chicago who are listed in the phone directory. Note that this sampling frame does not represent all residents of Chicago, as many do not have land-lines and so would not be in the phone directory. More complex sampling strategies are possible, such as stratified sampling where a population is divided into different groups or strata, with a given number of individuals randomly chosen from each stratum. In political polling, for example, one might stratify by state, gender, income, and political party. The advantage to stratified sampling is greater precision, which means you need fewer respondents to achieve the same level of accuracy in estimating population statistics than with random sampling (Judd et al., 1991). Nonprobability sampling involves a number of techniques that survey individuals who are easily accessible but are not randomly chosen from a specified population. Sometimes referred to as convenience samples, such samples are chosen merely because they are available to the researcher. Thus, a professor might survey members of his or her class, or a researcher studying the workplace might survey members of one organization. The subject pools found in psychology and other university academic departments provide nonprobability samples because rather than choosing respondents randomly from a specified population, students volunteer to participate in studies of their choice. The main advantage of nonprobability sampling is the relatively low cost. The disadvantage is that it can be difficult to accurately define the nature of the underlying population of such samples and the extent to


which such samples might or might not be representative. Further, if results vary across studies using nonprobability sampling, then it can be difficult to determine the factors that were responsible.
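As a concrete illustration of the probability sampling strategies just described, here is a minimal sketch of drawing a simple random sample and a proportionate stratified sample from a sampling frame. The frame, strata labels, and sample size are hypothetical stand-ins, not drawn from any actual study.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(7)

# Hypothetical sampling frame: every employee of an organization,
# with a stratum label (here, four departments of unequal size).
frame = pd.DataFrame({
    "employee_id": np.arange(10_000),
    "department": rng.choice(["A", "B", "C", "D"], size=10_000,
                             p=[0.40, 0.30, 0.20, 0.10]),
})
n_sample = 500

# Simple random sample: every frame member has an equal chance of selection.
srs = frame.sample(n=n_sample, random_state=7)

# Proportionate stratified sample: a random draw within each department,
# sized to the department's share of the frame.
stratified = frame.groupby("department").sample(frac=n_sample / len(frame),
                                                random_state=7)

print(srs["department"].value_counts(normalize=True).round(2))
print(stratified["department"].value_counts(normalize=True).round(2))
```

The stratified draw reproduces the department proportions almost exactly, which is the source of the precision gain noted above; nonprobability approaches offer no comparable guarantee about how well a sample mirrors its population.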

Human Subjects Issues With Surveys

Survey research that involves humans is subject to the same ethical issues and standards as other forms of human research. Although this sort of research typically has limited risk for harm, there are still potential concerns about privacy and sensitive information that might put subjects at risk. Many times these issues can be remedied through the use of anonymous surveys where individuals cannot be identified. Of course, this protection is limited if data are aggregated to identified small groups, such as departments in a small organization. In cases where it is necessary to match data from a survey to other sources, such as members of a married couple, various strategies can be used to match corresponding data without identifying individuals. The nature of such strategies depends on how the survey is conducted. One possibility for matching questionnaires completed by the same person is to ask a few questions similar to the security questions on secure websites, such as mother's middle name, earliest address, or name of first pet. Questionnaires can be matched by responses to the same questions. In some cases, it is impossible to avoid having people identify themselves while being surveyed—for example, with interviews or questionnaires where one is matching data to organizational records (e.g., arrest). In those cases, care should be taken to protect the identities of respondents. One way is to remove identifying information as soon as possible once data are combined across sources, whether that information is collected in electronic or written form. Another is to use subject codes on the survey materials (questionnaire or interview notes), with a cross-reference to the name in a separate list.

In many countries, human research is subject to governmental regulation, such as institutional review boards (IRBs) in the United States. In the United States, institutions that receive federal research funding are required to establish IRBs to oversee all human research. All human research projects must be approved by the IRB before data are collected. Researchers are required to submit applications to their IRB for approval of the project's procedure or protocol and to obtain approval for any changes to the approved protocol. Applications are reviewed by the IRB and are either approved or disapproved

based on IRB members’ judgments about the appropriateness of the procedures, and the extent of risk to subjects. Survey projects in which respondents are not identified are generally exempt from IRB review, but the IRB must determine if that is the case. Researchers can request an exemption by submitting the details of the project to the IRB. It is advantageous to the researcher to design a study so that it will qualify for an exemption for at least three reasons. First, the process of receiving an exemption generally is quicker. Second, the exemption is valid for 5 years rather than 1 year for reviewed projects. Third, researchers with reviewed, but not exempt, projects must submit annual progress reports and complete paperwork if the study continues into subsequent years.
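One way to implement the anonymous matching strategy described above (linking questionnaires through responses to a few security-style questions rather than through names) is sketched below. It is a hypothetical illustration rather than a prescribed procedure: the particular questions, the normalization rules, and the use of a SHA-256 digest are all assumptions, and hashing low-entropy personal details should be treated as a linkage convenience, not strong cryptographic anonymity.

```python
import hashlib

def linking_code(mother_middle_name: str, earliest_street: str, first_pet: str) -> str:
    """Build an anonymous linking key from answers to security-style questions."""
    # Normalize answers so trivial differences (case, stray spaces) still match.
    parts = [p.strip().lower() for p in (mother_middle_name, earliest_street, first_pet)]
    digest = hashlib.sha256("|".join(parts).encode("utf-8")).hexdigest()
    return digest[:12]   # a short code is enough to link records within a study

# The same respondent answering at Time 1 and Time 2 produces the same code,
# so the two questionnaires can be matched without collecting a name.
print(linking_code("Anne", "Maple Street", "Rex"))
print(linking_code(" anne ", "maple street", "REX"))
```

Misspelled or changed answers will break the match, so in practice one might generate several partial codes (for example, one per question) and accept records that agree on most of them.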

Conclusions The survey method is an effective and efficient way to study many social phenomena, so it is not surprising that it is used so often across many fields that study human social phenomena. The survey is flexible and can be used to assess a wide variety of variables. However, there are also limitations to the use of the survey, particularly when it is used in isolation. Nevertheless, it is often the method of choice when one wishes to assess the experiences and internal psychological states of people. Conducting a high-quality survey study that can lead to confident conclusions requires attention to many methodological details. The process begins with the careful consideration of the nature of the population of interest and how that population will be sampled. Ideally a sampling procedure will be used that will likely yield a representative sample from the population of interest. Unfortunately, such procedures are typically expensive and labor-intensive, putting it out of reach for many researchers, such as graduate students and university professors, unless they are able to secure research grants. Thus many studies rely on nonprobability sampling that is likely not to achieve representativeness of a specific population. In fact, in many such studies, the nature of the underlying population is unspecified. A key component of survey research is the measures that are used to assess the variables of interest. In many cases, the variables represent theoretical constructs that require multi-item scales to assess. The psychometric properties of such measures are important. Measures that are unreliable will lead to inaccurate estimates of descriptive statistics and decrease the power to detect significant


relationships. Scales with uncertain construct validity lower the confidence with which one can make inferences about the results of a study. Finally, biases can distort measurement leading to inaccuracy of results and erroneous conclusions. As with all types of research, the value of a survey study depends on the methodological rigor in the design and execution. A well-designed survey study can provide important insights into a wide range of social phenomena. Of course, the survey in isolation is far from sufficient in providing definitive answers to many research questions it is asked to answer. To do that, we need to apply a variety of methods using the principle of converging operations in hopes that combined they will provide the insights we seek from our research.

References Allen, N. J., & Meyer, J. P. (1996). Affective, continuance, and normative commitment to the organization: An examination of construct validity. Journal of Vocational Behavior, 49(3), 252–276. Campbell, D. T., & Fiske, D. W. (1959). Convergent and discriminant validation by the multitrait-multimethod matrix. Psychological Bulletin, 56(2), 81–105. DeVellis, R. F. (1991). Scale development theory and applications. Thousand Oaks, CA: Sage. Fowler, F. J., Jr. (1988). Survey research methods (Revised Ed.). Thousand Oaks, CA: Sage. Fowler, F. J., Jr. (1995). Improving survey questions: Design and evaluation. Thousand Oaks, CA: Sage. Frese, M., & Zapf, D. (1988). Methodological issues in the study of work stress: Objective vs subjective measurement of work stress and the question of longitudinal studies. In C. L. Cooper & R. Payne (Eds.), Causes, coping and consequences of stress at work (pp. 375–411). Oxford, UK: John Wiley & Sons. Glick, W. H., Jenkins, G., & Gupta, N. (1986). Method versus substance: How strong are underlying relationships between job characteristics and attitudinal outcomes? Academy of Management Journal, 29(3), 441–464. Hofling, V., Schermelleh-Engel, K., & Moosbrugger, H. (2009). Analyzing multitrait-multimethod data: A comparison of three approaches. Methodology: European Journal of Research Methods for the Behavioral and Social Sciences, 5(3), 99–111. Hurley, A. E., Scandura, T. A., Schriesheim, C. A., Brannick, M. T., Seers, A., Vandenberg, R. J., et al. (1997). Exploratory and confirmatory factor analysis: Guidelines, issues, and alternatives. Journal of Organizational Behavior, 18(6), 667–683. Iwata, N., Roberts, C. R., & Kawakami, N. (1995). JapanU.S. comparison of responses to depression scale items among adult workers. Psychiatry Research, 58(3), 237–245. Iwata, N., Umesue, M., Egashira, K., Hiro, H., Mizoue, T., Mishima, N., et al. (1998). Can positive affect items be used to assess depressive disorders in the Japanese population? Psychological Medicine, 28(1), 153–158. Judd, C. M., Smith, E. R., & Kidder, L., H. (1991). Research methods in social relations. Fort Worth, TX: Harcourt Brace Jovanovich.

Lance, C. E., Butts, M. M., & Michels, L. C. (2006). The sources of four commonly reported cutoff criteria: What did they really say? Organizational Research Methods, 9(2), 202–220. Lance, C. E., Dawson, B., Birkelbach, D., & Hoffman, B. J. (2010). Method effects, measurement error, and substantive conclusions. Organizational Research Methods, 13, 435–455. Lee, R. M. (1993). Doing research on sensitive topics. Thousand Oaks, CA: Sage. Likert, R. (1932). A technique for the measurement of attitudes. Archives of Psychology, 22(140), 1–55. Lorenz, F. O., Melby, J. N., Conger, R. D., & Xu, X. (2007). The effects of context on the correspondence between observational ratings and questionnaire reports of hostile behavior: A multitrait, multimethod approach. Journal of Family Psychology, 21(3), 498–509. Manning, M. R., Osland, J. S., & Osland, A. (1989). Workrelated consequences of smoking cessation. Academy of Management Journal, 32(3), 606–621. Meyer, J. P., Allen, N. J., & Smith, C. A. (1993). Commitment to organizations and occupations: Extension and test of a threecomponent conceptualization. Journal of Applied Psychology, 78(4), 538–551. Morrison, D. L., & Clements, R. (1997). The effect of one partner’s job characteristics on the other partner’s distress: A serendipitous, but naturalistic, experiment. Journal of Occupational and Organizational Psychology, 70(4), 307–324. Mowday, R. T., Steers, R. M., & Porter, L. W. (1979). The measurement of organizational commitment. Journal of Vocational Behavior, 14, 224–247. Nunnally, J. C. (1978). Psychometric Theory (2nd ed). New York: McGraw-Hill. Raju, N. S., Laffitte, L. J., & Byrne, B. M. (2002). Measurement equivalence: A comparison of methods based on confirmatory factor analysis and item response theory. Journal of Applied Psychology, 87(3), 517–529. Schaffer, B. S., & Riordan, C. M. (2003). A review of crosscultural methodologies for organizational research: A bestpractices approach. Organizational Research Methods, 6(2), 169–215. Schmitt, N., & Stults, D. M. (1985). Factors defined by negatively keyed items: The result of careless respondents? Applied Psychological Measurement, 9(4), 367–373. Semmer, N., Zapf, D., & Greif, S. (1996). ‘Shared job strain’: A new approach for assessing the validity of job stress measurements. Journal of Occupational and Organizational Psychology, 69(3), 293–310. Spector, P. E. (1976). Choosing response categories for summated rating scales. Journal of Applied Psychology, 61(3), 374–375. Spector, P. E. (1987). Method variance as an artifact in selfreported affect and perceptions at work: Myth or significant problem? Journal of Applied Psychology, 72(3), 438–443. Spector, P. E. (1988). Development of the Work Locus of Control Scale. Journal of Occupational Psychology, 61(4), 335–340. Spector, P. E. (1992). Summated rating scale construction: An introduction. Thousand Oaks, CA: Sage Publications. Spector, P. E. (1997). Job satisfaction: Application, assessment, causes, and consequences. Thousand Oaks, CA: Sage Publications. Spector, P. E. (2006). Method Variance in Organizational Research: Truth or Urban Legend? Organizational Research Methods, 9(2), 221–232. Spector, P. E., & Brannick, M.T. (1995). The nature and effects of method variance in organizational research. In C. L. Cooper


& I. T. Robertson (Eds.), International Review of Industrial and Organizational Psychology: 1995 (pp. 249–274). West Sussex, UK: John Wiley. Spector, P. E., & Brannick, M. T. (2009). Common method variance or measurement bias? The problem and possible solutions. In D. A. Buchanan & A. Bryman (Eds.), The Sage handbook of organizational research methods (pp. 346–362). Thousand Oaks, CA: Sage. Spector, P. E., Fox, S., & Van Katwyk, P. T. (1999). The role of negative affectivity in employee reactions to job characteristics: Bias effect or substantive effect? Journal of Occupational and Organizational Psychology, 72(2), 205–218. Spector, P. E., Van Katwyk, P. T., Brannick, M. T., & Chen, P. Y. (1997). When two factors don't reflect two constructs: How item characteristics can produce artifactual factors. Journal of Management, 23(5), 659–677. Spector, P. E., Zapf, D., Chen, P. Y., & Frese, M. (2000). Why negative affectivity should not be controlled in job stress research: Don't throw out the baby with the bath water. Journal of Organizational Behavior, 21(1), 79–95. Thurstone, L. L. (1928). Attitudes can be measured. American Journal of Sociology, 33, 529–554. Triandis, H. C. (1994). Culture and social behavior. New York: McGraw-Hill.


van de Vijver, F. J. R., & Leung, K. (1997). Methods and data analysis for cross-cultural research. Thousand Oaks, CA: Sage Publications. Vandenberg, R. J., & Lance, C. E. (2000). A review and synthesis of the measurement invariance literature: Suggestions, practices, and recommendations for organizational research. Organizational Research Methods, 3(1), 4–69. Watson, D., Pennebaker, J. W., & Folger, R. (1986). Beyond negative affectivity: Measuring stress and satisfaction in the workplace. Journal of Organizational Behavior Management, 8(2), 141–157. Weber, R. P. (1990). Basic content analysis (2nd ed.). Thousand Oaks, CA: Sage. Wilkinson, S. (2003). Focus groups. In J. A. Smith (Ed.), Qualitative psychology: A practical guide to research methods (pp. 184–204). Thousand Oaks, CA: Sage. Williams, L. J., & Anderson, S. E. (1994). An alternative approach to method effects by using latent-variable models: Applications in organizational behavior research. Journal of Applied Psychology, 79(3), 323–331. Williams, L. J., & Brown, B. K. (1994). Method variance in organizational behavior and human resources research: Effects on correlations, path coefficients, and hypothesis testing. Organizational Behavior and Human Decision Processes, 57(2), 185–209.


CHAPTER 10

High-Stakes Test Construction and Test Use

Neal M. Kingston and Laura B. Kramer

Abstract

The development of tests used for high-stakes purposes requires an understanding of measurement theory and the appropriate use of a variety of techniques. This chapter assumes the reader already has an understanding of test theory but not necessarily any experience developing tests. The substance of the chapter starts with the two major approaches to score interpretation used in testing—norm referenced and criterion referenced—and discusses the test development implications of each. A variety of quantitative methods used to develop high-stakes tests are described, and some others, beyond the scope of this chapter, are referenced.

Key Words: Criterion referenced, differential item functioning, norm referenced, psychometric formulae, reliability, test construction, test development

Introduction The purpose of this chapter is not to teach the reader about test theory. That would take a book (or a set of chapters in this book) in itself. Rather, this chapter assumes the reader broadly understands test theory and is looking for a compendium of approaches and methods. A study of quantitative methods for the construction and support of appropriate use of high-stakes tests requires an understanding of the various possible philosophical underpinnings of such tests. Although a good understanding of these quantitative methods is necessary to develop and interpret scores from high-quality assessments, it is insufficient without a broad understanding of the test development process. Consequently, this chapter will cover the goals of the test development process, major score interpretation schema, and the quantitative methods that support this endeavor. Many books have been written on each major component of this process, and no one chapter can hope to cover all important material. The goal of this

chapter is to provide a quick reference and a framework for practitioners that will guide their further explorations.

High-Stakes Testing

High-stakes testing is all around us. More than 100 years of research on test results has provided strong support for the efficacy of well-constructed tests, as well as concerns regarding misuse. Tests produce a number (or numbers) that can readily be used to make decisions about individuals or aggregated to make decisions about groups. This simplicity has great appeal for policymakers who often seem to fall into the trap indicated by a paraphrasing of H.L. Mencken: "For every complex problem there is a simple solution . . . and it's wrong." This same concern is reflected by testing professionals. For example, the American Psychological Association states, ". . . high-stakes decisions should not be made on the basis of a single test score, because a single test can only provide a 'snapshot' of student

achievement and may not accurately reflect an entire year's worth of student progress and achievement" (APA, no date). Nonetheless, testing results are frequently primary determiners or significant parts of a noncompensatory (if you fail based on the test score, or any other single requirement, then no other information will be considered) decision for employment selection, licensure and certification, school promotion and graduation, and school accountability. Millions of individuals are also affected by the typically compensatory use of test scores (a poor test or subtest score can be compensated for or "made up for" by high scores on other variables or another portion of the test) for admission to colleges, graduate schools, many private K-12 schools, and some public high schools. Testing professionals have long realized that with so many important decisions based on or influenced by test scores, it is necessary to have well-researched test development processes bolstered by strong quantitative techniques to maximize testing program quality and minimize test weaknesses. The state of the art has changed over time, although interestingly some artifacts have survived well beyond their useful lives (e.g., the Kelley d statistic, which will be discussed later).
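The distinction between noncompensatory and compensatory score use can be made concrete with a small sketch. The cut scores, weights, and component names below are invented for illustration; they are not taken from any actual testing program.

```python
def noncompensatory_pass(scores: dict, cutoffs: dict) -> bool:
    """Fail any single component and the overall decision is a fail."""
    return all(scores[name] >= cutoffs[name] for name in cutoffs)

def compensatory_pass(scores: dict, weights: dict, composite_cutoff: float) -> bool:
    """A weak component can be made up for by strong performance elsewhere."""
    composite = sum(weights[name] * scores[name] for name in weights)
    return composite >= composite_cutoff

candidate = {"written_exam": 48.0, "performance_task": 92.0}
print(noncompensatory_pass(candidate, {"written_exam": 60.0, "performance_task": 60.0}))  # False
print(compensatory_pass(candidate, {"written_exam": 0.5, "performance_task": 0.5}, 60.0))  # True
```

The same candidate fails under the noncompensatory rule but passes under the compensatory one, which is why the choice between the two policies is itself a high-stakes design decision.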

Score Interpretation Systems The purpose of a test is to provide scores from which useful inferences may be drawn. To this end, there are three major approaches used to support such inferences: norm referenced, criterion referenced, and ipsative. Ipsative methods (Baron, 1996) are based on intra-individual comparisons (usually based on forced-choice preferences) and are typically used for personality assessments and not commonly used for high-stakes purposes. Although the purposeful development of ongoing testing systems can be traced back at least 1,400 years to the systemization of the Chinese Keju system during the Sui dynasty (Dubois, 1966), the modern science of testing could be said to start with the application of statistical methods to test data. Although clearly influenced by the work of Galton and Wundt, Cattell’s (1886) doctoral dissertation, entitled “Psychometric Investigations” and the founding of the Psychometric Laboratory at Cambridge in 1887 might be taken as the beginning of scientific test development and interpretation.

Norm-Referenced Interpretations

From the founding of psychometrics through about 1970, the primary schema for providing test

scores with meaning was norm referencing. An early example of norm referencing was the use of age scores on intelligence tests (Binet, 1903). For example, a score of 7.6 would mean you scored as well as the typical child of age 7 years 6 months. The concept of age-equivalent scores presented logical inconsistencies (Thurstone, 1926), and its use declined, although the related concept of grade-equivalent scores is still popular in educational achievement testing. Around 1912, intelligence quotient scores were developed in an attempt to meld age-normative information with the ability to readily track scores over time (Stern, 1914). Thus, a quotient score of 115 was intended to mean that you scored the same as a typical child 15% older than you. Unfortunately, intellectual growth is not perfectly linear with age, and so norm referencing with quotient scores was replaced by deviation scores. A deviation score of 115 (assuming a mean of 100 and a standard deviation of 15) would mean you scored one standard deviation above the mean, which, if the distribution were normal, would mean you scored better than approximately 84% of the normative population. Because distributions are often not perfectly normal (e.g., intelligence scores are negatively skewed), the normative interpretation of scores can be further bolstered by the reporting of the percent of the normative population whose score was exceeded. Although percentile ranks (the percent of the normative group that a certain score exceeds) have become one of the most common normative approaches to providing scores with meaning, other common approaches include age- and grade-equivalent scores.

The use of normative information as the primary source of meaning for test scores has important implications for test development and the use of statistical tools. Good norm-referenced assessments must create score distributions that maximally differentiate among examinees. If one's interest in differentiating were weighted proportionally to the distribution of examinees, then this would mean focusing the item difficulty, p (within Classical Test Theory, the proportion of examinees responding correctly), of a test so that the median examinee had a 0.5 probability of knowing the correct answer. This can be demonstrated by thinking about a five-item test. For simplification, let us consider items that cannot be answered correctly by guessing. Table 10.1 shows the score distributions (based on the simple binomial distribution) that would arise when p equals 0.5 and 0.9.


Table 10.1. Effect of Item Difficulty on Test Score Distributions

Score    p = 0.5    p = 0.9
0        0.031      0.000
1        0.156      0.000
2        0.313      0.008
3        0.313      0.073
4        0.156      0.328
5        0.031      0.590
s²       0.250      0.090

When p = 0.9, 59% of all examinees achieve a score of 5 and no differentiation can be made among them. Only three of the six possible score points are attained by 1% or more of the examinees. When p = 0.5, the population is more spread out, with all six possible scores attained by at least 3% of the population. This is further reflected by the variance of the two distributions, which for the binomial distribution is at a maximum for p = 0.5. When applied to tests based on multiple-choice items (for which examinees can sometimes respond correctly without knowing the answer), middle difficulty is defined by Equation 1,

p = 0.5 + 0.5 × (1/n_c),    (1)

where n_c is equal to the number of choices. Thus, the middle difficulty p for a four-choice selected-response item is 0.625 (0.5 + 0.5 × 1/4).

Often test developers elect not to choose item difficulties to maximize the variance of the examinee score distribution. This may occur when important decisions are made at a variety of points in the score scale continuum. Under this model, test designers decide to improve measurement at those areas of the score scale by including several very easy and/or very difficult items to improve the ability to differentiate among the relatively few examinees at the ends of the score scales.
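The entries in Table 10.1 and the middle-difficulty formula in Equation 1 can be reproduced with a few lines of code. This is offered only as a hedged illustration of the arithmetic, not as part of the chapter's toolkit; note that the variance row is reported per item, p(1 − p), as in the table.

```python
from math import comb

def score_distribution(n_items, p):
    """Binomial probability of each total score on a test of equally difficult items."""
    return [comb(n_items, k) * p**k * (1 - p) ** (n_items - k) for k in range(n_items + 1)]

for p in (0.5, 0.9):
    probs = score_distribution(5, p)
    print(f"p = {p}:", [round(x, 3) for x in probs],
          " item variance =", round(p * (1 - p), 3))

def middle_difficulty(n_choices):
    """Equation 1: middle difficulty adjusted for guessing on an n-choice item."""
    return 0.5 + 0.5 / n_choices

print("middle difficulty, 4 choices:", middle_difficulty(4))   # 0.625
```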

Criterion-Referenced Interpretations

An alternative to comparisons with a reference group of examinees is comparisons with a criterion external to the test-taking population. Such comparisons can be based on a well-defined content domain (e.g., "Kelsey can answer correctly 95% of

multiplication problems based on the numbers 0–10"). However, it is difficult to define many content domains sufficiently well that all users have a common understanding of the questions that would be answered correctly. Another approach to criterion-referenced measurement for complex content domains requires that a standard of adequate performance be set. Many specific techniques exist to develop such performance standards (see, e.g., Cizek, 2001) and will not be discussed here. More recently, several key features have been combined to create a specific form of criterion-referenced interpretation: standards-based interpretation. Key features of a standards-based system include well-defined content standards and multiple levels of performance standards (such as Below Basic, Basic, Proficient, and Advanced). Another type of criterion-referenced interpretation can be based on the probability (or expected value) of obtaining certain outcomes of interest given a test score (or test score range). For example, if a test was developed to select salespeople, then it might be validated by regressing sales volume in dollars on test score. The regression model would provide an expected sales volume (and standard error of estimate) for each test score. Similarly the probability of being in a particular quartile on a criterion (such as sales volume, undergraduate GPA, or defect-free widgets produced) could be based on the quartile attained on a test. For example, when a predictor and criterion correlate 0.6, attainment of the first quartile on the predictor indicates a 54% chance of attaining the first quartile on the criterion (and only a 5% chance of being in the fourth quartile on the criterion). Recent advances in the use of cognitive-diagnostic test models hold forth the promise of reporting results by categories of cognitive misunderstanding that lend themselves to specific prescription. For example, Tatsuoka (2009) has developed a rule-space method that categorizes examinees according to specific cognitive misconceptions. Rather than producing numerical scores, such cognitive-diagnostic interpretation systems provide information such as, "treats parentheses as absolute value notation." Regardless of the type of criterion-referenced interpretation system, the primary goal of test development is not to produce score distributions that maximally spread out examinees. Thus, the goal of the development process is not to produce middle-difficulty items. Moreover, if the


criterion-referenced assessment system is one where it is expected the majority of examinees will master the content, then one should expect most items to be relatively easy. If this is not the case, then the items are not representative of the intended content. This can lead to conflicting test development goals, as optimally accurate classification of examinees into performance categories would require items to be of middle difficulty (or maximum information) for examinees at the cut score of interest.
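The quartile-based interpretation mentioned above (a 0.6 predictor–criterion correlation implying roughly a 54% chance of landing in the top quartile of the criterion given a top-quartile predictor score) can be checked under a bivariate normal assumption. The sketch below is an illustration of that calculation, not the chapter's method; it reads "first quartile" as the top quarter of each distribution, which is an assumption about the intended direction of scoring.

```python
from scipy.stats import multivariate_normal, norm

r = 0.6
mvn = multivariate_normal(mean=[0.0, 0.0], cov=[[1.0, r], [r, 1.0]])
q75 = norm.ppf(0.75)   # z-score marking the top quartile
q25 = norm.ppf(0.25)   # z-score marking the bottom quartile

# P(predictor in top quartile AND criterion in top quartile), by inclusion-exclusion.
p_both_top = 1 - 0.75 - 0.75 + mvn.cdf([q75, q75])
# P(predictor in top quartile AND criterion in bottom quartile).
p_top_bottom = 0.25 - mvn.cdf([q75, q25])

print("P(top criterion quartile | top predictor quartile):", round(p_both_top / 0.25, 2))
print("P(bottom criterion quartile | top predictor quartile):", round(p_top_bottom / 0.25, 2))
```

The computed conditional probabilities land close to the 54% and 5% figures quoted above, which is reassuring but also a reminder that such expectancy tables inherit the bivariate normality and linearity assumptions of the validation regression.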

Overview of the Test Development Process

Professional test development is a systematic process of moving from an abstract construct to the creation of replicable concrete data collection devices (test forms) that will support meaningful, useful inferences regarding that construct. Depending on many factors—including, but not limited to, cost per examinee—the process will vary in its steps or in the order of those steps. Many (but not all) steps in the process will be supported by well-established quantitative techniques. Key steps may vary in their order and include the following:

1. Construct definition. A brief description of what will be measured that can be used for high-level communication with test takers and other constituencies. The construct definition is best when short—perhaps a sentence or two.

2. Score report design. The number and type of scores that will be reported will have a profound effect on the length and content sampling of the test.

3. Content specification. The content specifications will form the basis of the domain sampling plan. Content specifications can be one- or multidimensional. They can be set up as taxonomies or multiway tables. Often the content specification will address both substantive categories and cognitive levels (Bloom et al., 1956). So content specifications for a seventh grade math test might be developed in a two-dimensional table. One dimension might list ratios, proportions, the number system, expressions, equations, geometry, statistics, and probability. The other dimension might list knowledge, comprehension, application, analysis, evaluation, and synthesis.

4. Test blueprint. A test blueprint provides greater detail than the content specifications. The detail should be sufficient to ensure that if two competent test developers each created forms that met the blueprint requirements, then you would be

equally happy with each form. For example, if the blueprint for a reading comprehension test specifies the aspects of reading comprehension but not the context, one test form might have passages about a variety of topics and another might have passages entirely about illness and death.3

5. Creation of draft items. Draft items are usually created using a process that avoids construct-irrelevant variance. Usually this involves the use of a large number of item writers to minimize the impact of any item writer effect. Each item writer is assigned items from one or more parts of the blueprint. Often content experts (rather than professional test developers) are used at this stage, after first providing the content experts with item writing training.

6. Content review. The content of the question must be aligned with the content specifications. Also, the content must be without error and the answer key must be easily defended. Content review also can be used to allow the test sponsor or its constituents to have a hand in the development process.

7. Editorial review. Correct spelling, grammar, and use of a single clear style help reduce ambiguity, examinee distraction, and other sources of construct-irrelevant variance. Additionally, for tests that receive a high degree of scrutiny from the public or other stakeholders, producing error-free tests is vital for establishing confidence in the assessment or assessment system.

8. Data gathering. Some high-stakes testing programs do not gather data to assess item quality before a test is first administered operationally, but most do. The manner in which data are gathered varies considerably. Sometimes items are first administered to small samples of examinees to avoid further investment (gathering data costs time and money) in items that are of very low quality. Other times, items are administered to larger samples to provide more stable estimates of an item's statistical characteristics. At least as important as sample size is the match between the sample on which data are gathered and the intended test population. A sample should be representative of the intended population both in background characteristics and motivation. Items will look more difficult and less discriminating when data are collected on a nonmotivated sample. An additional consideration—particularly when a test is going to be used to measure attainment or


mastery—is that the data gathering should take place at approximately the same point in the examinee's exposure to the content. For example, if a test is being developed to measure student proficiency in mathematics at the end of fifth grade, then the item data should be collected toward the end of the fifth grade (April or May) rather than halfway through instruction (January or February) to better quantify what those students know and are able to do. Data gathered on the wrong sample are likely to be useless or, worse yet, misleading.

9. Bias review. Test fairness is of critical importance. Fairness does not mean equal results for all definable subgroups; rather, it means there must be no construct-irrelevant variance associated with being a member of a definable subgroup. Many studies have shown human judgment regarding item bias does not align closely with empirical data (Engelhard et al., 1990; Plake, 1980). Also, readily attained empirical data cannot differentiate between construct-irrelevant and construct-relevant differences. Thus, empirical approaches are referred to as differential item functioning (DIF; Holland & Wainer, 1993) as they can demonstrate differences but not whether those differences are construct irrelevant and thus a form of bias. Differential item functioning results are usually brought to committees of experts who use the statistical data to focus discussions of potential bias.

10. Draft test form creation. A pool of items does not a test make! Regardless of the type of interpretations a test is designed to support, test forms are best created using appropriate item and test statistics. Which statistics to use, decision rules for item selection, and target values for certain item and test characteristics are generally established as part of the content specification or the test blueprint. A well-defined content specification and test blueprint will provide a sound framework for developing a test form and will allow test form creation to be automated to some extent. However, unless every item in the pool is highly specified and tagged with an enormous amount of metadata, it is next to impossible to remove a human review from the process to ensure that items are not too similar or provide context or association clues for each other.

11. Sensitivity review. Sensitivity review is different from bias review. Bias review focuses on

individual items. But even if each individual item was fine, the collection of items might not be. For example, if, in a mathematics test for sixth grade students, 25 items referred to students by name and 22 of those names were male and only 3 were female, then the collection of items might be disconcerting to some students or serve as an unnecessary lightning rod for the public. The same would hold true if 22 of the names were female and 3 were male. Anything, such as balance issues, that distracts students from the construct of interest is best avoided.

12. Administration. The test is given to the intended population using a set of prescribed procedures. Some variations in how the test is administered may be allowed for certain subgroups of test takers, such as providing a large-print version of the test for individuals with visual acuity impairments. It is generally agreed that such an accommodation would not affect the interpretation of the test results. Other variations could invalidate the test results entirely, such as allowing extended time for a test whose purpose is to measure how many widgets can be successfully assembled in 1 minute. Administration variations that are believed to change the construct being measured are generally referred to as test modifications as opposed to accommodations.

13. Standard setting. For tests that require cut scores, standard setting must be performed to determine those scores. Although some standard-setting methods can be performed before the final test is administered, most require data from the administration of an intact form of the test, and thus score reporting for the first administration must wait until after the standard setting process is completed.

14. Equating. Equating is usually required to adjust raw scores to account for small differences in difficulty between test forms that occur regardless of the rigor of the test development process. Equating can occur before or after the operational test administration, depending on the chosen data collection model.

15. Technical documentation. The Standards for Educational and Psychological Testing (AERA, APA, NCME, 1999) suggest that certain forms of validity and reliability data be documented. Tests that may be subject to legal challenge or governmental regulation may have a higher burden


of proof. A test developer should be cognizant of the demands of the particular field in which a test will be utilized.

Data Collection Schemata
Is an item easy or hard? Is it clear or ambiguous? Is a test score sufficiently reliable to support the intended inferences and consequences? Although it might seem straightforward to collect data and answer these questions, it is not always so. An item might be hard for the intended population of fifth graders before they receive instruction but easy for high school students. Use of data collected from high school students (or even sixth graders) would likely be misleading. So would fifth grade data that were collected only from high-achieving schools. Reliability estimates are higher when data are obtained from groups with larger variability. Reliability coefficients can be inappropriately inflated by collecting and aggregating the data from a sample of fourth, fifth, and sixth graders rather than having separate reliability estimates for each grade. Good data are the foundation for useful inferences, and good data are representative of the population for which you want to make inferences. However, there are costs to data collection (financial and other), so different models have been developed to best serve different needs.

1. Cognitive Labs. Cognitive labs (Wilson & Peterson, 1999), protocol analysis (Ericsson & Simon, 1999), and think-alouds all refer to a process for gathering broad, rich data from a small number of examinees. In an assessment context, examinees are presented with an item and asked to orally express in detail how they would solve the problem. For example, if presented with the problem, "What is the square of 15?" an examinee might respond as shown in Table 10.2. In this think-aloud process, we see this child does not carry the 2 in the first multiplication, remembers to carry the 1 in the addition, and does not use the proper place value when multiplying 10 times 15.

Information gathered in cognitive labs can help test developers create plausible and useful distractors and identify potential sources of construct-irrelevant variance. Such data are usually not subject to quantitative analysis.

2. Pilot testing. We use the term pilot testing to refer to a relatively small-scale assessment of item quality, typically 30 to 200 examinees. Statistics based on pilot testing will often point out that an item is not working as intended, but item statistics will have relatively large errors of estimation, and thus results will not be overly useful in comparing the efficacy of the majority of the items. Table 10.3 presents the 0.95 confidence intervals of the item difficulty (p) and item-total correlation (point-biserial correlation) at different sample sizes. The confidence interval for p was calculated using the normal approximation $\sqrt{pq/n}$. The confidence interval for the item-total correlation was calculated by using the Fisher z transformation, approximating the sampling variance of z as $1/(n-3)$, and then transforming back to the correlation metric. From Table 10.3 we can see how crude a tool classical item statistics are until a sample size of about 100 is reached.

3. Field testing. We differentiate field testing from pilot testing based on the sample size. As Table 10.3 shows, at sample sizes of 800 and above, classical item statistics are estimated with great precision. However, there are factors other than sample size that can bias estimates of item statistics. Testing on a more able sample than the population of interest will make items appear easier than they will be when administered operationally. Administering test items to examinees who have not been exposed to the content will likewise make items appear more difficult. Similarly, the distribution of examinee proficiency can affect estimates of item discrimination. Moreover, sometimes examinees know that field test items do not count, and therefore the examinees are not motivated to perform to the best of their ability.

Table 10.2. Result of a Hypothetical Think Aloud
A square is a number multiplied by itself, so I need to multiply 15 times 15. I write down the number and start to multiply. 5 times 5 is 25, so I write down the 5. 5 times 1 is 5, so I write down another 5. So that makes 55. Now I multiply 1 times 15. That's easy—15. I add 15 and 55 and get 70. So the square of 15 is 70.



Table 10.3. Confidence Intervals Around Item Difficulties and Item-Total Correlations at Several Sample Sizes

               Item difficulty (true = 0.6)         Item-total correlation (true = 0.4)
Sample size    Lower bound      Upper bound         Lower bound      Upper bound
               of 0.95 CI       of 0.95 CI          of 0.95 CI       of 0.95 CI
25             0.41             0.79                0.01             0.69
50             0.46             0.74                0.14             0.61
100            0.50             0.70                0.22             0.55
200            0.53             0.67                0.28             0.51
400            0.55             0.65                0.31             0.48
800            0.57             0.63                0.34             0.46
1,600          0.58             0.62                0.36             0.44
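The bounds in Table 10.3 can be reproduced directly from the approximations described above. The following Python sketch is a minimal illustration (the function names are ours, and a two-sided 95% confidence level is assumed), not part of any item-analysis package:

```python
import math

Z95 = 1.96  # two-sided critical value for a 0.95 confidence interval

def p_interval(p, n, z=Z95):
    """Normal-approximation CI for an item difficulty (proportion correct)."""
    se = math.sqrt(p * (1 - p) / n)
    return p - z * se, p + z * se

def r_interval(r, n, z=Z95):
    """CI for an item-total correlation via the Fisher z transformation."""
    zr = math.atanh(r)               # Fisher z of the observed correlation
    se = 1 / math.sqrt(n - 3)        # approximate sampling SD of Fisher z
    return math.tanh(zr - z * se), math.tanh(zr + z * se)

for n in (25, 50, 100, 200, 400, 800, 1600):
    lo_p, hi_p = p_interval(0.6, n)
    lo_r, hi_r = r_interval(0.4, n)
    print(f"n={n:5d}  p: ({lo_p:.2f}, {hi_p:.2f})  r: ({lo_r:.2f}, {hi_r:.2f})")
```

Running the loop prints interval widths that shrink roughly in proportion to the square root of the sample size, which is why the gain in precision beyond a few hundred examinees is modest.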

Thus, field test items may appear more difficult than they actually are. Embedded field testing is a technique that can be used to minimize these issues. For embedded field testing, items that do not count toward a student's operational score are mixed in with items that do count. Examinees do not know which items count and which do not, so examinees are equally motivated on all items.

4. Operational administration. An operational administration is one for which examinees' scores will count. Data from an operational administration are often the basis for technical documentation of test quality. A test form is often administered operationally many times and to groups with different distributions of proficiency, which can affect item statistics. Sometimes it is advisable to combine data from several operational administrations so statistics represent the population as a whole. Alternatively, data can be weighted to be more similar to the total population.

Analytical Approaches
1. Classical Test Theory. Classical Test Theory statistics are relatively simple transformations of observed data. Common Classical Test Theory item statistics for item difficulty are p, p+, and delta. Common statistics for item discrimination are biserial and point-biserial correlations (often corrected for part-total contamination). Common test-level statistics include measures of internal consistency (especially coefficient alpha) and test speededness.

2. Latent trait methods (Item Response Theory [IRT]). Lord (1952) and Rasch (1960) have put forth models where observed data are used to estimate examinee scores and item statistics on a common underlying scale (latent trait). Lord’s model provided three item statistics and one examinee statistic. The a-parameter is a measure of item discrimination, b is a measure of item difficulty, and the c is a lower asymptote (the probability of a correct response by a very low proficiency examinee). A theta parameter is estimated for each examinee and represents the examinee’s level of proficiency on the latent trait. Theta and the item b-parameter are reported on the same scale. In the three-parameter item response model, all three item parameters are estimated for each item. In simpler models, one or more of these parameters might be set to a constant. More complex models exist for multidimensional tests and tests with items that provide polytomous scores. In the Rasch model, item difficulty and examinee proficiency are reported on the same logit scale. Although the results for the Rasch model are equivalent to those of a one-parameter (b) item response model, the philosophical underpinnings are quite different, and thus so is the way the model is used in high-stakes test development.

Quantitative Methods
This section will deal with formulae for the estimation of item statistics. Because IRT item statistics require the use of maximum likelihood or Bayesian approaches that do not have closed form


solutions (see, e.g., Hambleton & Swaminathan, 1985, Chapter 5), estimation of these statistics is beyond the scope of this chapter.

Item Analysis
item difficulty p
The simplest measure of item difficulty is proportion correct, p:

$p = \dfrac{n_c}{n_t}$,  (2)

where $n_c$ is the number of correct responses and $n_t$ is the total number of examinees. Note that, particularly for multiple-choice items, the proportion of examinees who answer the item correctly is not necessarily the proportion of examinees who know the correct answer. There is some unknown number of examinees who will answer the item correctly by guessing, and there are some examinees who know the correct answer but have indicated an incorrect answer (such as darkening the circle for the answer choice on an incorrect line of a scannable answer document). Although one hopes that these two types of errors balance each other out in the case of a well-constructed item, in a poorly constructed item, such as a multiple-choice item with obviously incorrect distractors, they may not.

Correcting p for Guessing
Although not commonly used, one can correct p to adjust for the probability of getting a correct answer by guessing:

$p_{cg} = \dfrac{n_c - \dfrac{n_t - n_c}{k - 1}}{n_t}$,  (3)

where $n_c$ is the number of correct responses, k is the number of answer choices for the item, and $n_t$ is the total number of examinees. Use of this formula might be appropriate if a test used item types that did not all have the same number of distractors.

Correcting p for Test Speededness
Another variant of p is $p^+$, which is used on speeded tests to adjust the item difficulty of items at the end of the test under the assumption that examinees respond to items in order:

$p^+ = \dfrac{n_c}{n_{t^+}}$,  (4)

where $n_c$ is the number of correct responses and $n_{t^+}$ is the maximum of the total number of examinees who have responded to that item or any subsequent item.
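For concreteness, a small Python sketch of Equations 2 through 4 follows; the function names simply mirror the notation above and are illustrative only:

```python
def p_value(n_correct, n_total):
    """Equation 2: proportion correct."""
    return n_correct / n_total

def p_corrected_for_guessing(n_correct, n_total, k):
    """Equation 3: difficulty corrected for guessing on a k-choice item."""
    return (n_correct - (n_total - n_correct) / (k - 1)) / n_total

def p_plus(n_correct, n_reached):
    """Equation 4: n_reached is the largest number of examinees responding
    to this item or any later item (assumes in-order responding)."""
    return n_correct / n_reached

print(p_value(60, 100))                      # 0.60
print(p_corrected_for_guessing(60, 100, 4))  # about 0.47
print(p_plus(60, 80))                        # 0.75
```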

Normalized Percent Correct It has been long known that the percent correct scale has undesirable statistical characteristics (including not being an interval level measure), so often percent correct is transformed to a z-score that corresponds to that percent of a normal distribution (Ayres, 1915; Thurstone, 1926; Brigham, 1932). For example, a z of 0 would be equivalent to a p of 0.50, and a z of 1 would be equivalent to a p of 0.84. To avoid negative scores, z’s are sometimes linearly transformed to a new metric: delta.

$\Delta = 13 + 4 \times z$.  (5)

This delta metric, developed by Broyler and described by Brigham (1932) is still commonly used at Educational Testing Service. One advantage of normalized item difficulty indices is that they can be adjusted for differences between the samples on which data were gathered. This is particularly useful for testing programs that use embedded field testing and regularly experience proficiency distribution shifts in groups that take the test at different times of the year. This adjustment is usually performed by setting the means and standard deviations of the deltas of the common items equal.

$y = \dfrac{s_y}{s_x} \times x + \bar{y} - \dfrac{s_y}{s_x} \times \bar{x}$  (6)

Using this relationship derived from the means and standard deviations of the common items administered to groups y and x, deltas for new items administered to group x can be placed on the existing group y delta metric. Because of statistical considerations (see, e.g., Gulliksen, 1950, p. 369), this approach does not actually lead to population-invariant estimates of item difficulty, a goal that cannot be met without the use of IRT (and then only when the assumptions of the IRT model are met).

Item Response Theory b-parameter
The b-parameter, also called the threshold, is the theta value where $c + (1 - c)/2$ of the examinees would be predicted to answer the item correctly. When c = 0, this is equal to 50% of the examinees answering correctly. Recall that one of the advantages of IRT is that items and examinees can be placed on the same scale. Given an item with a threshold of 1.4, a relatively difficult item, examinees with ability estimates below 1.4 would be decreasingly likely to answer this item correctly, whereas examinees with higher ability estimates would be more likely to answer this item correctly.


Item Response Theory scale metrics are arbitrary, and an approach must be chosen to establish the metric. A typical approach used by advocates of the three-parameter model is to scale the metric so the mean theta of the examinee population is 0 and the standard deviation is 1. Advocates of the Rasch model typically scale the logit metric so the average item has a logit of 0. Because these choices are arbitrary, there is little reason to prefer one over the other.

item discrimination
It has long been shown that items vary considerably in their ability to differentiate or discriminate among examinees who are highly proficient and nonproficient on the construct of interest. (Some test developers have started using the phrasing "differentiate among" because of the connotations of "discrimination," which leads to confusion with DIF. It then becomes difficult to persuade laypeople that items discriminate BETWEEN examinees but do not discriminate AGAINST examinees.) Item discrimination is typically measured with one of three statistics: the point-biserial correlation, the biserial correlation, and the IRT a-parameter (also referred to as the slope). A fourth statistic, Kelley's D-Index (Kelley, 1939), is still used; because its only advantage was computational efficiency in a pre-computer world, there is no reason for its continued use. However, like Michael Myers in the never-ending Halloween movie series, it does not seem that it can be killed.

Point-Biserial Correlation
The point-biserial is a special case of the product moment correlation and thus can be calculated as such:

$r = \dfrac{n\sum_{i=1}^{n} x_i y_i - \sum_{i=1}^{n} x_i \sum_{i=1}^{n} y_i}{\sqrt{n\sum_{i=1}^{n} x_i^2 - \left(\sum_{i=1}^{n} x_i\right)^2}\sqrt{n\sum_{i=1}^{n} y_i^2 - \left(\sum_{i=1}^{n} y_i\right)^2}}$,  (7)

where, in the case of test data, $x_i$ is a 0 to 1 variable representing whether examinee i answered the item correctly (= 1) or not (= 0); n is the number of examinees, and $y_i$ is the total test score of examinee i (generally the sum of the 1s, indicating the correct responses).

Biserial Correlation
Biserial correlations are appropriate when the 0 to 1 variable represents an underlying continuous variable. With item analysis of dichotomous data, this can be argued two ways. Some say an item is either answered correctly or incorrectly and thus is a true dichotomy, and a point-biserial correlation is appropriate. Others argue that the item is measuring an underlying continuum that is merely collapsed when represented by the item score. The relationship between the biserial correlation and point-biserial correlation is as follows:

$r_{bis} = r_{pbis} \times \dfrac{\sqrt{p(1 - p)}}{y}$,  (8)

where p is the percent responding correctly and y is the ordinate of the normal distribution at the point that the area under that curve is divided into p and 1 − p. Based on this formula, when p = 0.5, the biserial correlation is 25% larger than the point-biserial. This difference increases as difficulty is larger or smaller than 0.5. For example, when p = 0.75 the difference is 37%. The point-biserial is limited (or contaminated, depending on your point of view) by item difficulty.

Part-Total Contamination
Item-total test score correlations (both point-biserial and biserial) suffer from part-total contamination. That is, both the true and error variance of the studied item is included in the total test score variance. This artificially inflates the observed correlation by

$\sqrt{\dfrac{1}{k}}$,  (9)

where k is the number of items on the test. Thus on a 10-item test, point-biserial correlations have an expected inflation of 0.32, and even on a 100-item test, point-biserial correlations have an expected inflation of 0.10. When comparing item-total correlations from tests of the same length, this might not lead to incorrect inferences. If items for a new test form are selected based on item-total correlations, then there will be a clear bias in favor of items selected from shorter tests. Three methods can be used to avoid this problem: the appropriate correction factor can be subtracted from each item-total correlation, the studied item can be removed from the total score, or the criterion total score might exclude all field test items.

Item Response Theory a-Parameter
The two- and three-parameter item response models contain a discrimination parameter, a, also called the slope. Estimation of a is outside the realm of this chapter, but, like the biserial correlation, a assumes item scores represent an underlying continuous variable and similarly is theoretically independent of item difficulty. In practice, a's have been shown to correlate with b's (Kingston & Dorans, 1982).
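The following Python sketch illustrates Equations 7 and 8 on simulated 0/1 response data, removing the studied item from the total score as one of the corrections for part-total contamination described above. The data-generation step and function names are our own illustration, not part of any operational system:

```python
import numpy as np
from statistics import NormalDist

def point_biserial(item, total):
    """Equation 7: product-moment correlation between a 0/1 item score
    vector and a total-score vector of the same length."""
    n = len(item)
    num = n * np.sum(item * total) - np.sum(item) * np.sum(total)
    den = np.sqrt(n * np.sum(item ** 2) - np.sum(item) ** 2) * \
          np.sqrt(n * np.sum(total ** 2) - np.sum(total) ** 2)
    return num / den

def biserial_from_point_biserial(r_pbis, p):
    """Equation 8: rescale by the normal ordinate at the p / (1 - p) split."""
    y = NormalDist().pdf(NormalDist().inv_cdf(p))
    return r_pbis * np.sqrt(p * (1 - p)) / y

# simulate 0/1 responses for 500 examinees on 20 items driven by one trait
rng = np.random.default_rng(0)
theta = rng.normal(size=500)
b = np.linspace(-1.5, 1.5, 20)
prob = 1 / (1 + np.exp(-(theta[:, None] - b[None, :])))
responses = (rng.random(prob.shape) < prob).astype(int)

item0 = responses[:, 0]
rest_total = responses[:, 1:].sum(axis=1)   # studied item removed from the total
r_pbis = point_biserial(item0, rest_total)
print(r_pbis, biserial_from_point_biserial(r_pbis, item0.mean()))
```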

Response Functions
When using IRT, the item response function represents the probability of a correct response conditioned on theta (proficiency). The formula for the item response function is given in Equation 10, and an example of two item response functions is provided in Figure 10.1.

$p_i(\theta) = c_i + \dfrac{1 - c_i}{1 + e^{-1.7 a_i (\theta - b_i)}}$.  (10)

[Figure 10.1. Two item response functions (a = 1, b = –.5, c = .2 and a = .7, b = 0, c = .25).]

It should be noted that the 1.7 in the denominator of Equation 10 is not strictly necessary. It is a scaling factor that maximizes the similarity of the scales between the logistic model (which was developed for its relative computational simplicity) and the original normal ogive model (Lord & Novick, 1968). Although psychometricians like to think of b as the measure of item difficulty, consideration of the entire item response function can produce a situation that appears paradoxical. For items with relatively low a-parameters and high b-parameters, the c-parameter is the major determiner of the probability of a correct response. Figure 10.2 provides an example of this. When using IRT, it makes more sense to focus on the item response function as a whole than on the item parameters as having unique meaning.

[Figure 10.2. Two almost identical item response functions with very different item parameters (a = .6, b = 10, c = .3 and a = .8, b = 20, c = .32).]
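A short numerical check of Equation 10 makes the point about Figure 10.2 concrete. The Python sketch below (assuming the parameter values shown in the figure legend) evaluates the two response functions across the usual range of theta:

```python
import numpy as np

def p_3pl(theta, a, b, c):
    """Equation 10: three-parameter logistic item response function."""
    return c + (1 - c) / (1 + np.exp(-1.7 * a * (theta - b)))

theta = np.linspace(-3, 3, 7)
# the two "almost identical" curves of Figure 10.2
print(np.round(p_3pl(theta, a=0.6, b=10, c=0.30), 3))
print(np.round(p_3pl(theta, a=0.8, b=20, c=0.32), 3))
# over this range both functions sit barely above their c-parameters,
# so c, not b, governs the probability of a correct response
```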

Information
Information is an important quantity in IRT as it provides information regarding the precision of measurement provided by an item. Further, the information provided by items can be accumulated to present the information provided by the test as a whole. Equation 11 provides the formula for item information, and Figure 10.3 shows the information functions for the two items from Figure 10.1.

$I(\theta) = 1.7^2 a^2 \dfrac{q_i(\theta)}{p_i(\theta)} \left(\dfrac{p_i(\theta) - c}{1 - c}\right)^2$,  (11)

where $q_i(\theta) = 1 - p_i(\theta)$. As in Equation 10, the 1.7 in Equation 11 is simply a scaling factor and is not necessary as long as the user is consistent. When c is 0 (truly 0, not set to 0 for convenience) the item information function is symmetrical with a peak at b. As a gets larger, information peaks at a higher value, but information is high in a narrower range.

[Figure 10.3. Item information functions for the two items in Figure 10.1 (a = 1, b = .5, c = .2 and a = .7, b = 0, c = .25).]

The standard error of estimate of theta can be calculated from the test information function by using Equation 12.

$SE(\theta) = \dfrac{1}{\sqrt{I(\theta)}}$  (12)
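Equations 11 and 12 are easy to script. The sketch below takes its item parameters from the Figure 10.3 legend and accumulates item information into test information, as described above; the function names are ours and the example is illustrative only:

```python
import numpy as np

def p_3pl(theta, a, b, c):
    """Equation 10: three-parameter logistic item response function."""
    return c + (1 - c) / (1 + np.exp(-1.7 * a * (theta - b)))

def item_information(theta, a, b, c):
    """Equation 11: 3PL item information at theta."""
    p = p_3pl(theta, a, b, c)
    q = 1 - p
    return (1.7 ** 2) * (a ** 2) * (q / p) * ((p - c) / (1 - c)) ** 2

theta = np.linspace(-3, 3, 61)
items = [(1.0, 0.5, 0.20), (0.7, 0.0, 0.25)]          # items from Figure 10.3
test_info = sum(item_information(theta, a, b, c) for a, b, c in items)
se_theta = 1 / np.sqrt(test_info)                      # Equation 12
print(np.round(se_theta[::15], 2))                     # SE at a few theta points
```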

Test Construction: Item Selection Methods
difficulty
Depending on the purpose for which a test is to be used, acceptable ranges of item difficulty may vary. It is, however, generally agreed that an item that all examinees answer correctly, or an item that all examinees answer incorrectly, is not a particularly informative item. A test that is used to maximally differentiate examinees should have all items with a middle-difficulty p or item information peaked at the median of the theta distribution. For a test that has a single cut score, items should be chosen to be maximally informative at that cut score. For a test that is used to sort examinees into multiple categories, such as the National Assessment of Educational Progress (NAEP) with its four categories of Below Basic, Basic, Proficient, and Advanced, the test developer would have to compromise and select items with difficulties around each of the cut scores to give the test additional "power" to correctly categorize examinees. Similarly, tests that are used to make decisions at many places along the score scale (such as a college admissions test that is used by colleges with differing degrees of selectivity and for scholarship decisions) will need to have items with a great range of difficulty. Regardless of the approach used, it is important to remember that from a psychometric perspective, no knowledge is gained by administering items that everyone answers correctly or no one answers correctly.

item-total correlations
Test developers, like everyone else, want to use items that provide the most "bang for the buck." Assessment, particularly in the education arena, is frequently criticized for the amount of time it takes to administer the test. Thus, test developers have frequently come under pressure from stakeholders to limit the amount of time spent testing, which generally means making shorter tests. Reliability increases with the number of test items. A perfectly reliable test would thus be infinitely long; however, finding examinees willing to take this test would be challenging, as would finding a funding source to aid in the development of this instrument. With external pressure to put fewer items on a test, to preserve reliability the items selected should clearly contribute to the estimate of examinee proficiency and be well-suited for differentiating between examinees of high and low ability. A common item-selection criterion is that each item has an item-total correlation (biserial correlation) greater than 0.30; however, some domains are more homogeneous than others, and specific guidelines should be determined for different tests. When the theta metric is set based on the mean and standard deviation of examinee thetas, programs that use IRT methods prefer slopes of 1.00 or greater, although depending on the construct being measured, lower slopes may be acceptable (or even necessary to measure the construct of interest).

lower asymptote
The IRT c-parameter is the lower asymptote of the item response function and is often misnamed as a guessing parameter or (slightly better) pseudo-guessing parameter. The c-parameter is also sometimes used as an item-selection criterion for multiple-choice items. The non-IRT assumption that examinees will answer a multiple-choice item correctly by guessing with a probability inversely proportional to the number of answer choices is somewhat naïve. An examinee with absolutely no knowledge of the content whatsoever (or with no motivation to answer correctly or even expend a minimal amount of brainpower in answering the test items) may select all answer choices of A or, in the case of scannable answer documents, another aesthetically appealing pattern and will, by sheer chance, get some of the items correct. However, in most testing situations, examinees have some degree of partial knowledge or motivation to answer correctly. Well-constructed answer choices that use common errors that test takers make or misconceptions that examinees may hold not only make stronger items but, when analyzed, can provide diagnostic information as to an examinee's particular strengths or weaknesses. Even four-choice multiple-choice items, where theoretically one would assume a guessing parameter of 0.25 (one-fourth), with well-thought-out diagnostic foils can have very, even vanishingly, small c-parameters. At certain grades of one state testing program, the average c-parameter for four-choice mathematics items was 0.11, less than half the expected value (Bazemore et al., 2006).


Testing programs that use the three-parameter IRT model may opt to not use items with c-parameters greater than 0.300 or 0.350. These items may instead be revised and field-tested again later with stronger distractors. However, there are cases where no matter what the item writer tries to contrive, there really may be only one or two good distractors. Science test developers everywhere were delighted when a fourth state of matter was announced—solid, liquid, gas, and, at last, plasma! Some other classic examples of items with not very many good distractors: Arrange drinking glasses partially filled with water in order from highest to lowest pitch (when the glass is tapped). The only two logical choices are to start with the glass with the least amount of water and monotonically increase to the glass with the most amount of water, or to start with the most full glass and monotonically decrease to the glass with the least amount of water. Any other answer choice that could be included would be endorsed only by those students who were selecting at random without reading the item at all. Describe the relationship between two variables as shown in a scatterplot. The fundamental answer choices are positive, negative, or no relationship. Any fourth foil here is likely to be ignored. It is well established in the literature (although inexplicably almost completely ignored in practice) that three-choice selected response items are generally more efficient than four- or five-choice items (Rodriguez, 2005).

test information functions
When test scores are reported on the theta metric, the test information function is equal to the sum of the constituent item information functions. Thus, it is possible to set a target information function and add and subtract items until one maximizes the similarity of the obtained and target functions. This is especially useful when one needs to create parallel test forms. Alternatively, if one is creating a single form of a measure, then one can use this approach to maximize measurement efficiency at any desired point (or points) on the score scale. When test scores are reported on a number-right or linearly scaled number-right metric, the calculation of the test information is more complex (Lord, 1980, p. 73), but the same test development principles can be applied.

scale purification
Sometimes the test developer must create several scales that will be used together. To maximize

the utility of these scales, it is important that the reliability of each measure is high and the correlation between measures is relatively low. Although many practitioners use factor analysis to address this issue, factor-analytic approaches will not be optimal when operational scoring is based on raw scores rather than factor scores. Loevinger et al. (1953) developed another approach more appropriate for raw (or linearly scaled raw) scores. Take a large pool of items and determine small sets of items (for example, 3) that appear to measure the intended construct and have high covariances with each other. Determine the saturation, defined as the ratio of the sum of the inter-item covariances to the total variance. For each subscale, remove any items for which doing so would increase the saturation. Then for each subscale, add the item that would maximally increase the saturation. Repeat until all items are used or discarded.
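The saturation criterion is straightforward to compute from an inter-item covariance matrix. The following Python sketch is our own minimal illustration of the procedure just described (not code from Loevinger et al.), showing the saturation calculation and one greedy "add the best item" step:

```python
import numpy as np

def saturation(cov, items):
    """Ratio of the summed inter-item covariances to the total variance
    of the subscale score formed by the listed items."""
    sub = cov[np.ix_(items, items)]
    total_variance = sub.sum()                   # variance of the sum score
    inter_item = total_variance - np.trace(sub)  # off-diagonal part only
    return inter_item / total_variance

def best_addition(cov, items, pool):
    """Return (new saturation, item) for the pool item that helps most."""
    return max((saturation(cov, items + [j]), j) for j in pool)

# toy data: items 0-2 share a common factor, items 3-5 do not
rng = np.random.default_rng(1)
data = rng.normal(size=(300, 6))
data[:, :3] += rng.normal(size=(300, 1))
cov = np.cov(data, rowvar=False)

seed = [0, 1]
print(saturation(cov, seed))
print(best_addition(cov, seed, pool=[2, 3, 4, 5]))   # item 2 should win
```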

more complex constrained approaches Selecting items for a test is usually far more complex than the simple approaches discussed so far can address. Item selection is subject to a large number of constraints that sometimes conflict with each other. You may want to minimize the standard error of measurement at a cutpoint, but you also need to cover content specifications (both for individual items and for stimulus materials), readability, item-type specifications, and many more. Linear and nonlinear optimization approaches have been used to address the item selection problem. Several pertinent articles or chapters that discuss approaches follow: Huitzing, 2004; Luecht, 1998; Stocking & Swanson, 1993; van der Linden, 2010.

Scoring
Although most professionally developed tests are reported on some scaled score metric, before a score can be scaled, an initial score that is a function of an examinee's item responses must be created. The three typical ways of doing this are number right, formula score, and pattern scoring (maximum likelihood estimation of theta). The most straightforward scoring method, number right, is to simply count up the number of test questions each examinee answered correctly. Because examinee guessing adds to error of measurement, sometimes test developers dissuade examinees from responding to questions to which they do not know the answer by instructing them that there will be a penalty for a wrong response. Consistent with such instructions, incorrect and omitted


responses are treated differently under formula scoring. The usual approach to formula scoring is given in Equation 13.

$FS = R - \dfrac{W}{c - 1}$,  (13)

where FS is the formula score, R is the number of items answered right, W is the number of items answered wrong, and c is the number of answer choices per question. Using this formula, if examinees were to guess at random, then their expected formula score would be 0. Generally, the literature comparing formula scoring to number-right scoring shows little difference between the two. Two possible exceptions favoring formula scoring may be for (1) difficult tests with low cut scores and (2) tests that are speeded (Frary, 1988). Although IRT pattern scoring is typically accomplished using maximum likelihood estimation, when item parameters are already known, there are simpler approaches that can be used for the one- and two-parameter models. When the one-parameter model holds, the number-right score is optimal, as the number-right score is a sufficient statistic for theta. For the two-parameter model, weighting each 0 to 1 (incorrect-correct) response by the item's a-parameter is optimal (Lord, 1980, pp. 76–77).
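The three initial scoring approaches can be sketched in a few lines of Python. In the illustration below, k denotes the number of answer choices (written c in Equation 13), omitted responses are coded None, and the a-parameter weighting follows the Lord (1980) result cited above; all names are ours:

```python
import numpy as np

def number_right(resp):
    """Number-right score; resp codes 1 = right, 0 = wrong, None = omitted."""
    return sum(r == 1 for r in resp)

def formula_score(resp, k):
    """Equation 13 with k answer choices: rights minus wrongs/(k - 1);
    omitted items are not penalized."""
    rights = sum(r == 1 for r in resp)
    wrongs = sum(r == 0 for r in resp)
    return rights - wrongs / (k - 1)

def a_weighted_score(resp01, a):
    """Two-parameter-model scoring: weight each 0/1 response by the item's a."""
    return float(np.dot(resp01, a))

resp = [1, 1, 0, None, 1, 0, None, 1]        # 4 right, 2 wrong, 2 omitted
print(number_right(resp))                     # 4
print(formula_score(resp, k=4))               # 4 - 2/3 = 3.33
a = np.array([0.8, 1.2, 1.0, 0.9, 1.1, 0.7, 1.3, 1.0])
print(a_weighted_score(np.array([1, 1, 0, 0, 1, 0, 0, 1]), a))
```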

Test Analysis
reliability
Classical Test Theory starts with the axiom that every observed score can be decomposed into true score and error score. True score is the expected value of examinee scores from an infinite number of strictly parallel tests (assuming the examinee neither learned nor forgot anything from the experience of taking an infinite number of tests!). Error score is the difference between an examinee's observed score and their practically unknowable true score. By the above definition, the expected value of error is 0 and the correlation between true score and error score is 0. There remains some argument in the measurement community about whether the requirement that errors not correlate with each other is a definition within Classical Test Theory or an assumption. Based on these definitions and/or assumptions, reliability is defined in Equation 14 as the ratio of true to observed variance. As such, reliability is bounded between 0 and 1.

$r_{xx'} = \dfrac{\sigma_T^2}{\sigma_x^2} = \dfrac{\sigma_T^2}{\sigma_T^2 + \sigma_E^2}$.  (14)

Understanding reliability and making good choices in how to estimate reliability requires an understanding of sources of error variance so one can choose a data collection design that takes into account the most important ones. Such a discussion is beyond the scope of this chapter and unfortunately is usually not treated in-depth in contemporary texts. The interested reader is referred to Thorndike (1951).

There are several ways to evaluate a test's reliability. Test–retest reliability is calculated when a group of examinees takes the same test form on more than one occasion. The correlation between examinees' test scores from the first test administration and the second test administration, or the consistency of the examinees' test scores from time 1 to time 2, is a measure of the test's reliability. Although the reliability of a test is bounded between 0 and 1, correlational estimates of reliability are bounded by –1 and 1. Another problem with estimating test–retest reliability is that examinees may remember test questions and think about them or discuss them between test administrations, contaminating the reliability estimate.

To minimize the issue of test familiarity, another method is to look at alternate-forms reliability. The same examinee takes two parallel forms of the same test. Again, the correlation between all examinees' test scores from the two administrations is a measure of the test's reliability. Familiarity effects from seeing the same specific items are mitigated; however, there may still be a familiarity effect from the presentation of the test or the manner in which questions are asked.

In many types of assessment situations, it is logistically difficult or politically unpopular to test examinees twice. In this case, reliability must be estimated from a single test administration. These methods are most appropriately referred to as internal consistency rather than reliability. The simplest internal consistency method is to split a test into two equal-length parts and correlate the two. However, this correlation represents the reliability of a half-length test. The obtained correlation should be adjusted using a special case of the Spearman-Brown formula to represent the reliability of a full-length test. This formula is given in Equation 15.

$r_{xx'} = \dfrac{2 r_{x_1 x_2}}{1 + r_{x_1 x_2}}$  (15)
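As a concrete illustration of Equation 15, the Python sketch below computes a split-half estimate from a 0/1 response matrix using one arbitrary split (alternate items) and steps it up to full length; the simulated data and function name are ours:

```python
import numpy as np

def split_half_reliability(responses):
    """Correlate scores on alternate-item halves, then apply the
    Spearman-Brown step-up of Equation 15 for the full-length test."""
    odd = responses[:, 0::2].sum(axis=1)
    even = responses[:, 1::2].sum(axis=1)
    r_half = np.corrcoef(odd, even)[0, 1]
    return 2 * r_half / (1 + r_half)

# simulated 0/1 responses: 300 examinees, 30 items, one underlying trait
rng = np.random.default_rng(2)
theta = rng.normal(size=300)
b = np.linspace(-1.5, 1.5, 30)
prob = 1 / (1 + np.exp(-(theta[:, None] - b[None, :])))
responses = (rng.random(prob.shape) < prob).astype(int)
print(split_half_reliability(responses))
```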

One problem with split-half reliability is that there are many different ways to split a test in two, and each split will give a somewhat different answer.


In fact, some splits might give very different answers. Imagine a test where about half the questions are verbal and half are quantitative (such as the Graduate Management Admissions Test at the time of the writing of this chapter). If one split the items so all of the verbal ones were in one split and all the quantitative ones were in the other, then the correlation is likely to be significantly lower than if half of each item type were in each half-test. An early approach to this problem was called odd–even reliability. Items 1, 3, 5, . . . were placed in one half, and items 2, 4, 6, . . . were placed in the other. If items of different types and content were administered contiguously within their categories, then this approach would serve to stratify on that categorization and prevent grossly uneven splits. Nonetheless, this still remains but one of many possible splits. The solution to this problem was to avoid splitting the test at all but to instead estimate reliability from total test and item variances. Cronbach's coefficient alpha (Cronbach, 1951) is the most commonly used approach and is presented in Equation 16.

$r_{xx'} = \dfrac{n}{n - 1}\left(\dfrac{\sigma_x^2 - \sum \sigma_{x_i}^2}{\sigma_x^2}\right)$,  (16)

where n is the number of items in the test, $\sigma_x^2$ is the total test score variance, and $\sigma_{x_i}^2$ is the variance of item i. A special case of coefficient alpha, KR-20, is appropriate for dichotomously scored items and was developed by Kuder and Richardson (1937). Reliability estimates are greatly affected by the variance of the population on which they are estimated. The appropriate sample on which to estimate reliability is one that is representative of the population for which the test is intended. If a test is intended for fifth grade students but reliability information is based on a combination of fourth, fifth, and sixth grade students, then it is likely that the obtained estimate will be inflated.

standard error of measurement
The standard error of measurement ($s_e$) is a function of the variability of test scores and the test's reliability as expressed in Equation 17.

$s_e = s_x\sqrt{1 - r_{xx'}}$,  (17)

where $s_e$ is the standard error of measurement, $s_x$ is the standard deviation of the test scores, and $r_{xx'}$ is the test's reliability. Equation 17 provides an average standard error of measurement, but the standard errors of measurement actually vary with true score. Methods of estimating conditional standard errors of measurement have been around for more than 60 years (Mollenkopf, 1949), and Qualls-Payne (1992) compared many of these methods and found a quadratic smoothing method developed by Feldt (1984) to be superior. Item Response Theory methods provide a conditional standard error of estimate for theta based on test information, as presented in Equation 12.

The standard error of measurement (se ) is a function of the variability of test scores and the test’s reliability as expressed in Equation 17. % (17) se − sx 1 − rxx  ,

For both ethical and legal reasons, test developers need to know whether items in their tests are biased against members of protected classes. To ensure the validity of inferences made from test scores, test developers need to know whether items in their tests contain construct-irrelevant variance. This second goal is a superset of the first. Unfortunately there is no way to statistically determine whether differences in group performance result from construct-irrelevant versus construct-relevant reasons.

where se is the standard error of measurement, sx is the standard deviation of the test scores and rxx  is the test’s reliability. Equation 17 provides an average standard error of measurement but the standard errors of measurement actually vary with true score. Methods 202

h i g h - s ta k e s t e s t c o n s t r u c t i o n a n d t e s t u s e

As an example, when the first author was Director of Test Development and Research for the Graduate Record Examinations, it was noticed that females taking the GRE Biology Subject Test did less well on biochemistry items than males. This was true even when looking at groups of females and males matched on their total scores. But was this bias? Further analysis of data showed that at that time, women taking the GRE Biology Subject Test had taken far fewer biochemistry and molecular biology courses than had male examinees. This analysis also showed that women had taken more ecology and organismal biology courses than men and on average did better on items tapping into those areas of content. When the full set of evidence was presented to an external bias review committee, they determined there was no evidence that the score differences resulted from construct-irrelevant reasons. There may, in fact, be many construct-relevant reasons why two groups of examinees have different distributions of scores on a test. Thus, the first step in looking for DIF is to use some statistical approach to conditioning differences on total test scores (or other proficiency estimates). The Mantel-Haenszel log odds ratio has long been used in biostatistical research and is a commonly used approach for operational testing programs (Holland & Thayer, 1988). To use the Mantel-Haenszel approach, examinees are stratified on two dimensions: total test score (0, 1, 2, 3, . . . n), and group membership (focal and reference), where the focal group is the group traditionally considered disadvantaged and the reference group is the traditionally non-disadvantaged group or majority. In checking for gender DIF on an engineering test, the focal group may be females while the reference group is males; however, in a test of nursing, the focal and reference groups may be reversed. Once the test developer has decided which is the focal group and which is the reference group, then, conceptually, the probability of getting each item right based on group membership is calculated, given that the two groups have the same total score distribution. Because the set of examinees from each slice of the test score distribution has the same total score, we can sum up the results from those slices and know that the resulting difference does NOT result from any difference in the total score. Thus, any statistically significant difference in proportion correct can be taken as evidence of DIF. Because DIF—even if statistically significant— may signal construct-relevant or construct-irrelevant

differences, all items with DIF are usually reviewed by a committee that has knowledge of both the subject matter and the perspectives and experiences of the different groups. Unfortunately, the research literature shows little agreement between DIF results and human judgments of item bias (Plake, 1980; Sandoval & Miille, 1980; Engelhard et al., 1990), and thus more work is needed on the methodological state of the art. Many other methods exist for assessing DIF.
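The conditioning logic of the Mantel-Haenszel approach can be made concrete with a short sketch. The Python code below computes the Mantel-Haenszel common log odds ratio for a single item across total-score strata; it is a bare-bones illustration only and omits the continuity correction, significance test, and classification rules used in operational DIF analyses:

```python
import numpy as np

def mantel_haenszel_log_odds(item, total, group):
    """Mantel-Haenszel common log odds ratio for one 0/1-scored item.
    total provides the stratifying total scores; group holds 'R' (reference)
    and 'F' (focal) labels. Positive values favor the reference group."""
    num, den = 0.0, 0.0
    for k in np.unique(total):
        stratum = total == k
        ref = stratum & (group == "R")
        foc = stratum & (group == "F")
        A = np.sum(item[ref] == 1)      # reference, correct
        B = np.sum(item[ref] == 0)      # reference, incorrect
        C = np.sum(item[foc] == 1)      # focal, correct
        D = np.sum(item[foc] == 0)      # focal, incorrect
        n_k = stratum.sum()
        num += A * D / n_k
        den += B * C / n_k
    return np.log(num / den)

# simulated data with no DIF built in: the statistic should be near zero
rng = np.random.default_rng(4)
n = 2000
group = np.where(rng.random(n) < 0.5, "R", "F")
theta = rng.normal(size=n)
b = np.linspace(-1, 1, 10)
prob = 1 / (1 + np.exp(-(theta[:, None] - b[None, :])))
resp = (rng.random(prob.shape) < prob).astype(int)
print(mantel_haenszel_log_odds(resp[:, 0], total=resp.sum(axis=1), group=group))
```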

Scaling
Without contextual support, it is very hard to interpret a test score. Is 52 a good score or a bad score? Perhaps it is a good score if it means 52 correct out of 52 items. But the meaning is still obscured without knowledge of whether the test is easy or hard for the population. To facilitate the accrual of meaning, test developers usually transform scores to a scale with better psychological properties. Usually professional test developers try to avoid a 0 to 100 metric because people think they understand that metric from their experiences with classroom tests. For example, the public may believe a grade of 70 is barely passing, yet a test developer might build a test to be of middle difficulty (to better differentiate examinees), and thus if the test is based on four-choice items, then the average examinee will score about 62.

linear
One way to facilitate interpretations of test scores is to incorporate normative information into the scale (Kolen, 2006, p. 163). This can be readily done by transforming the original raw scores to have a set scaled score mean and standard deviation—for example, a mean of 500 and a standard deviation of 100. In this way, a score of 400 means you are one standard deviation below the mean, and if the scores are normally distributed then you have scored better than about 16% of the population.

normalizing
If the distribution of raw scores is non-normal, then one might want to consider using an area transformation to normalize the scaled score distribution. Depending on the construct, this might make the resulting score scale closer to interval in nature. To do this, calculate the percentile associated with each score and then assign to that raw score the z-score that cuts off that portion of the sample. After the z-scores are assigned, choose a mean and standard deviation for the final score scale.
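Both the linear and the normalizing transformations are easy to script. The Python sketch below is a minimal illustration (ties are handled naively in the percentile step, and the target mean of 500 and standard deviation of 100 follow the example above):

```python
import numpy as np
from statistics import NormalDist

def linear_scale(raw, mean=500, sd=100):
    """Linear scaling: fix the scaled-score mean and standard deviation."""
    z = (raw - raw.mean()) / raw.std(ddof=1)
    return mean + sd * z

def normalized_scale(raw, mean=500, sd=100):
    """Area (normalizing) transformation: mid-percentile rank -> normal z -> scale."""
    n = len(raw)
    ranks = np.argsort(np.argsort(raw)) + 1        # 1..n, ties broken arbitrarily
    z = np.array([NormalDist().inv_cdf((r - 0.5) / n) for r in ranks])
    return mean + sd * z

raw = np.random.default_rng(5).integers(10, 50, size=1000).astype(float)
print(np.round(linear_scale(raw)[:5]))
print(np.round(normalized_scale(raw)[:5]))
```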


equal standard error of measurement method
Another approach suggested by Kolen (1988) is to scale scores so that the standard errors of measurement are (nearly) the same at all points along the score scale.

performance standards based
When cut scores are set for multiple tests (such as tests within a battery or a series of tests for grades 1–12), there is an advantage to having the scaled score corresponding to the lowest passing score be the same for all tests—for example, 100 (one will still need to determine a scaled score standard deviation or other approach to transform the other scores). For tests with two significant scores, perhaps a minimally passing score and an honors or exemplary score, setting each of those two points (say to scaled scores of 100 and 150) will define a straight line that can determine the transformation for all other raw score points. For tests with three or more cut scores, a test developer has several choices, the most straightforward of which is to use different linear transformations between each pair of cut scores. If linearity of the transformation is considered particularly important, then one can build this consideration into the standard setting process.

Conclusion
Constructing high-stakes tests is a complex process that cannot be adequately described in one chapter and is seldom explained well in a single book. This chapter has tried to provide some background and describe some important quantitative methods that are commonly used.

Future Directions
Two near-term important areas of research are automated test assembly and computer-facilitated item development. As pointed out in the discussion of DIF, more work is needed in that area too. Whether using human judgment or statistical approaches, we do not know enough to consistently predict from item features which items will show DIF.

Notes
1. Although p is usually referred to as item difficulty, some refer to it as item easiness because larger values of p correspond to easier items.
2. The terms content specifications, test specifications, and test blueprint do not have agreed-upon meaning in the testing community. We will consistently use these terms as defined here.
3. This actually happened in a testing program for which the first author later inherited responsibility. Parents did not react well to a test of this sort. Also, this may have affected the performance of young children.

References American Educational Research Association, American Psychological Association, and National Council on Measurement in Education (AERA, APA, NCME). (1999). Standards for educational and psychological testing. Washington, DC: American Psychological Association. APA (no date) Appropriate Use of High-Stakes Testing in Our Nation’s Schools. Retrieved May, 15 from http://www.lib.wsc.ma.edu/webapa.htm. Ayres, L.P. (1915). A measuring scale for ability in spelling. New York: Russell Sage Foundation. Baron, H. (1996). Strengths and limitations of ipsative measurement. Journal of Occupational and Organizational Psychology, 69, 49–56. Bazemore, M., VanDyk, P., Kramer, L., Brown, R., & Yelton, A. (2006). The North Carolina Mathematics Tests Technical Report. Raleigh, NC: North Carolina Department of Public Instruction, Division of Accountability Services. Binet, S. (1903). Etude expérimentale de l’intelligence. Retrieved May 15, 2012 from ehttp://books.google.com/books?id= kJSwrR_KkwMC& printsec=frontcover& dq=Etude+exp% C3% A9rimentale+de+l’intelligence+(1903).& source=bl& ots=-NJHdAvuiT& sig=cTZ0yPz0yJajq_coKRkXmA_Ylzo& hl=en& ei=5PkDTNCVGJmeMqPAoTw& sa=X& oi=book_ result& ct=result& resnum=2& ved=0CBYQ6AEwAQ# v=onepage& q& f=fals. Bloom, B.S., Englehart, M,D., Furst, E.J., Hill, W.H., & Krathwohl, D.R.(1956). Taxonomy of Educational Objectives. Handbook I: Cognitive Domain. New York: David McKay and Company. Brigham, C. C. (1932). A study of error. New York: College Entrance Examination Board. Cattell, J. McK. (1886) Psychometrische Untersuchungen. Philosophische Studien, 3, 305–335; 452–492. Cizek, G. (2001). Setting Performance Standards: Concepts, Methods, and Perspectives. Edited by Gregory J. Cizek. Mahwah, NJ: Lawrence Erlbaum. Cronbach L J. (1951). Coefficient Alpha and the internal structure of tests. Psychometrika, 16, 297–334. Dubois, P. (1966). A test dominated society: China, 1115 B.C.–1905 A.D. in Anne Anastasi (Ed.), Testing Problems in Perspective (pp. 29–36). Washington, D.C.: American Council on Education. Engelhard, Jr., G., Hansche, L., Rutledge, K. E. (1990). Accuracy of bias review judges in identifying differential item functioning on teacher certification tests. Applied Measurement in Education, 3(4), 347–360. Ericsson, K. A., & Simon, H. A. (1999). Protocol analysis: Verbal reports as data. Cambridge, MA: Massachusetts Institute of Technology. Feldt, L.S. (1984). Some relationships between the binomial error model and classical test theory. Educational and Psychological Measurement, 44(4), 883–891. Frary, R.B. (1988). NCME instructional module: Formula scoring of multiple-choice tests (correction for guessing). Educational Measurement: Issues and Practice, 7(2), 33–38.


Gulliksen, H. (1950). Theory of mental tests. Hoboken, NJ: John Wiley and Sons, Inc. Hambleton, R.K. & Swaminathan, H. (1985). Item response theory: Principles and applications. Boston: Springer. Holland, P.W. & Thayer, D.T. (1988). Differential item functioning and the Mantel-Haenszel procedure. In H. Wainer & H.I. Braun (Eds.), Test validity (pp. 129–145). Hillsdale, NJ: Erlbaum. Holland, P.W. & Wainer, H. (1993). Differential item functioning. Hillsdale, NJ: Lawrence Erlbaum Associates. Huitzing, H.A. (2004). Using Set Covering With Item Sampling to Analyze the Infeasibility of Linear Programming Test Assembly Models. Applied Psychological Measurement, 28(5), 355–375. Kelley, T. L. (1939). The selection of upper and lower groups for the validation of test items. Journal of Educational Psychology, 30, 17–24. Kingston, N.M. & Dorans, N.J. (1982). The feasibility of using item response theory as a psychometric model for the GRE Aptitude Test. GRE Board Professional Report 7912P. Princeton, NJ: Educational Testing Service. Kolen, M.J. (1988). Defining scale scores in relation to measurement error. Journal of Educational Measurement, 25(2), 97–110. Kolen, M.J. (2006). Scaling and norming. In R.L. Brennan (Ed.), Educational measurement (4 ed., pp. 156–186). Westport, CT: Praeger Publishers. Kuder, G. F. & Richardson M.W. (1937). The theory of the estimation of test reliability. Psychometrika, 2, 151–160. Loevinger, J., Gleser, G. & DuBois, P.H. (1953). Maximizing the discriminating power of a multiple-score test. Psychometrika, 18(4), 309–317. Lord, F.M. (1952). A theory of test scores. Psychometric monograph No.7. Lord, F.M. (1980). Applications of item response theory to practical testing problems. Hillsdale, NJ: Lawrence Erlbaum Associates. Lord, F.M., & Novick, M.R. (1968). Statistical theories of mental test scores. Oxford, England: Addison-Wesley. Luecht, R.M. (1998). Computer-assisted test assembling using optimization heuristics. Applied Psychological Measurement, 22, 224–236. Mollenkopf, W.G. (1949). Variation of the standard error of measurement. Psychometrika, 14(3), 189–229.

Plake, B. S. (1980). A comparison of a statistical and subjective procedure to ascertain item validity: One step in the test validation process. Educational and Psychological Measurement, 40(2), 397–404. Qualls-Payne, A. (1992). A comparison of score level estimates of the standard error of measurement. Journal of Educational Measurement, 29(3), 213–225. Rasch, G. (1960). Probabilistic models for some intelligence and attainment tests. Copenhagen: Danish Institute of Educational Research. Rodriguez, M.C. (2005). Three Options Are Optimal for Multiple-Choice Items: A Meta-Analysis of 80 Years of Research. Educational Measurement: Issues and Practice, 24(2), Sum 2005, 3–13. Sandoval, J. & Miille, M. (1980). Accuracy of judgments of WISC-R item difficulty for minority groups. Journal of Consulting and Clinical Psychology, 48(2), 249–253. Stern, W. L. (1914). The psychological methods of testing intelligence. Translated by G. M. Whipple. Baltimore: Warwick & York. Stocking, M.L. & Swanson, L. (1993). A model and heuristic for solving very large item selection problems. Applied Psychological Measurement, 17, 151–166. Swineford, F. (1974). Test analysis manual. Statistical report SR74-06. Princeton, NJ: Educational Testing Service. Tatsuoka, K. K. (2009). Cognitive Assessment: An Introduction to the Rule Space Method. New York: Routledge. Thorndike, R.L. (1951). Reliability. In E.F. Lindquist (Ed.) Educational measurement (pp. 560–620). Washington: American Council on Education. Thurstone, L.L. (1926). The Mental Age Concept. Psychological Review 33, 268–278. van der Linden, W.J. (2010). Constrained adaptive testing with shadow tests. In Wim van der Linden & Cees Glas (Eds.) Elements of adaptive testing (pp. 31–56). New York: Springer. Wilson, B. F., & Peterson, L. S. (1999). Using the NCHS cognitive lab to help design cycle VI of the national survey of family growth. Proceedings of the Survey Research Methods Section, American Statistical Association (2000), 997–1002. Retrieved August 14, 2010, from http://www.amstat.org/sections/srms/Proceedings/papers/ 1999_174.pdf.

kingston, kramer

205

CHAPTER 11

Effect Size and Sample Size Planning

Ken Kelley

Abstract An important aspect of study design is sample size planning. However, research questions in psychology and related disciplines can generally be framed from at least two different perspectives: (1) one in which the existence of an effect addresses the question of interest and (2) one in which the magnitude of an effect addresses the question of interest. Correspondingly, depending on which of these two perspectives on a research question is appropriate, a different approach should be taken to planning an appropriate sample size. In particular, statistical power analysis addresses questions related to the existence of an effect, such as "What size sample is necessary to correctly reject a false null hypothesis with some desired probability?" whereas the accuracy in parameter estimation perspective addresses questions related to the magnitude of an effect, such as "What size sample is necessary to have a sufficiently narrow confidence interval for the population parameter?" Neither one of these questions is necessarily better than the other, but each addresses a fundamentally different question. This chapter focuses on the interplay of the effect size and the research question of interest to plan an appropriate sample size from either the power analytic or the accuracy in parameter estimation perspective. Key Words: Effect size, sample size planning, confidence intervals, null hypothesis significance test, power analysis, accuracy in parameter estimation, study design, research design

Effect Size and Sample Size Planning Inferential statistics provides a means of extracting information from data to answer research questions in psychology and related disciplines. Research questions can be framed from (at least) two different perspectives, one in which the existence of an effect addresses the question of interest and one in which the magnitude of an effect addresses the question of interest. An “effect” in this context is a measure of effect size that quantifies some aspect of the phenomenon of interest as it relates to the research question. Although there are many things to consider when planning a research study, one important aspect of study design is sample size planning. This chapter focuses on the interplay of the effect


size and the question of interest in an effort to plan an appropriate sample size. Before a research study can be adequately planned, the question of interest needs to be precisely articulated. For example, the question of interest could relate to (1) the difference between the means of multiple populations, (2) the difference between a population value and a benchmark, (3) the unique impact of several explanatory variables on an outcome variable, or (4) the correlation between two variables, among others. Vague research questions—such as "Do the treatment and control groups differ?"—do not lend themselves to sample size planning, as the effect size of interest is not precisely defined. For example, differences between a

treatment and a control group can be operationalized as a (1) difference in means, (2) difference in medians, (3) difference in variances, or (4) probability of superiority other than 0.50, among others. Thus, the research question needs to be precisely defined to appropriately map an effect size onto the research question. Once the research question and effect size(s) of interest are chosen, a researcher needs to decide on the perspective to take with regards to the effect size—namely, if showing the existence of an effect is the primary goal of the study or if showing the magnitude of an effect is the primary goal of the study. That is, for whatever effect size addresses the question of interest, the primary goal of the study could be to show the existence of a non-null effect size in the population (e.g., reject the null hypothesis that a population effect size equals zero) or to estimate the magnitude of an effect (e.g., provide a point estimate accompanied by a narrow confidence interval ). In some situations, both existence and magnitude are of interest and the two approaches to study design can, and many times should, be combined. The rationale of a combined approach is to reject the null hypothesis and also provide an accurate estimate of an effect size in the population. This chapter begins with a discussion of effect size and then provides an overview of inferential statistics from the perspectives of null hypothesis significance testing as well as confidence interval formation. Sample size planning approaches for statistical power (with the goal of showing the existence of an effect) and sample size planning for accuracy in parameter estimation (with the goal of showing the magnitude of an effect) are then discussed. The two approaches to sample size planning are explicitly linked to the type of question the researcher seeks to answer. Throughout the chapter, it is assumed that researchers will not plan sample size “by hand” or with the use of specialized tables but, rather, that researchers will use software to plan sample size. A table of sample size planning software titles is provided to assist researchers in finding a way to implement the sample size planning procedure of interest. By discussing from a broad perspective how sample size planning is wedded to effect size and the research question(s) of interest, my hope is that researchers will be able to better link the goals of the research with the study design so that the study will be able to better contribute to the cumulative knowledge of the discipline.

Effect Size An important area of discussion over the last several decades in the methodological literature has been effect size. Emphasis on the estimation of effect size stems from the fact that null hypothesis significance testing, which is discussed later, does not always answer a scientifically interesting research question. In general, null hypothesis significance tests address questions related to the existence of an effect, such as "Is the effect nonzero in the population?" In some cases, the direction of a targeted effect can also be discerned, such as "Is the population regression coefficient positive?" Estimation of the effect size in the population, however, relates to the magnitude of an effect, not simply to its existence. Even if a null hypothesis is rejected, with the implication that the value of the population effect size is not equal to the specified null value (e.g., 0.00), it is important to realize that the population value of the effect size can be very small or very large—the rejection of the null hypothesis provides no specific information about the magnitude of the effect. Null hypothesis significance testing quantifies how improbable an effect size as extreme as or more extreme than the value obtained would be, conditional on the null hypothesis being true. That is to say, the p-value is the probability, given that the null hypothesis is true, that an effect size at least as large as that obtained would be observed by chance alone. The implication is that if a sufficiently small p-value is obtained, then the idea that the null hypothesis is true is rejected, where "sufficiently small" is operationalized as the p-value being less than the prespecified Type I error rate (e.g., α = 0.05). However, a null hypothesis significance test is unable to quantify the magnitude of an effect. Correspondingly, it has become clear in the methodological literature that in almost all cases, it is important to report and interpret the estimated effect sizes of interest in empirical research. Effect size has been defined in several ways in the methodological literature. Kelley and Preacher (2012) discuss common definitions and go on to propose an inclusive definition of effect size that will be used here, which encompasses many existing definitions of effect size but does not unnecessarily wed effect size to other issues (e.g., any single effect size measure, practical significance/importance, null hypothesis significance tests, or standardization). The Kelley and Preacher definition of effect size is "a quantitative reflection of the magnitude of some


phenomenon that is used for the purpose of addressing a question of interest" (2012, p. 140). Effect size can be thought of as a statistic or parameter with a purpose—namely, a purpose that quantifies some aspect of the research question. Kelley and Preacher note that effect size as defined in this manner encompasses a variety of quantities that are of interest in empirical research, such as variability, association, difference, odds, rate, duration, proportionality, superiority, or degree of fit or misfit, among others. Correspondingly, means, mean differences, standardized mean differences, unstandardized or standardized regression coefficients, contrasts among means, correlation coefficients, (co)variances, coefficients of variation, polynomial change coefficients, path coefficients, and fit indices, for example, are all special cases of effect sizes. Effect size in one way or another is the driving force of research, as effect size quantifies the phenomenon of interest, ultimately linking the data to the hypothesis of interest. Kelley and Maxwell (2010) have discussed how effect sizes can generally be classified in a two-by-two-by-two array, where one dimension is scaling (standardized or unstandardized), one dimension is specification (targeted or omnibus), and one dimension is scope (population or sample).1 Considering effect size in such a fully crossed factorial array with eight cells is helpful because it makes clear that effect size is not a narrowly defined concept. Correspondingly, linking the particular effect size to the questions of interest is an important aspect of the research design and data analyses. The scaling of an effect size is important and should always be communicated to readers. A standardized effect size is one that is not wedded to the measurement unit of the variable(s) upon which the effect size is based, because the measurement units cancel through division. Therefore, standardized effect sizes can be regarded as being free of a specific measurement unit. A standardized effect size has the property that any linear transformation of the variable(s) will not change the value of the particular standardized effect size. However, linear transformations will change the value of the corresponding unstandardized effect size. That is, standardized effect sizes are invariant to linear transformations, implying that the locations (i.e., means) or scales (i.e., variances) of the variables involved in the calculation of the effect size can be modified and the value of the standardized effect size itself does not change. In general, unstandardized effect sizes are wedded to the particular scaling of the variable(s) in the model,

and their interpretation must be linked to the scaling of the instrument(s). This implies that different linear transformations of the data will generally yield different values for the unstandardized effect size. For example, consider the standardized mean difference. The standardized mean difference remains the same for linear transformations of the scores, whereas the unstandardized difference between two means will generally change for transformations of the scores in the groups. Kelley and Preacher (2012) discuss standardization more formally as well as the idea of dimensionlessness in the context of effect sizes. The scientific and practical value of standardized versus unstandardized effect sizes has been debated in the field (e.g., Baguley, 2009; Lenth, 2001). For purposes of this chapter, both standardized and unstandardized effect sizes are regarded as potentially useful. Limitations of either type of effect size, however, may be a result of the particular context and the particular research question. Correspondingly, researchers have to decide on a case-by-case basis the most appropriate effect size to communicate the result(s) of interest and at a minimum ensure that readers understand (1) the metric in which the effect size is reported and (2) the information the effect size conveys. The specification of an effect in terms of whether it is omnibus or targeted is another classification dimension. An omnibus effect size relates to the overall model, whereas a targeted effect size relates to a specific well-defined part of the model. Consider multiple regression, where a basic application considers both the overall effectiveness of the model—namely, the squared multiple correlation coefficient—as well as specific relationships linking each regressor to the outcome variable while controlling for the other regressors—namely, the regression coefficients. In multiple regression, the squared multiple correlation coefficient is an omnibus effect, whereas the regression coefficients are targeted effects. Because of the typical situation in which there are multiple effect sizes in a particular statistical model, linking the question of interest to the effect size is very important. If there is a particular regressor that largely drives the research question, then the value of the squared multiple correlation coefficient of the overall (i.e., full) model with all potentially relevant available variables might be of relatively little concern from a scientific perspective. Issues of omnibus and targeted effect sizes are not unique to multiple regression but, rather, are included in many statistical models.
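The invariance property just described can be checked with a few lines of code. The following is a minimal R sketch using simulated data and hypothetical variable names; it is offered only as an illustration of the scaling distinction, not as part of the chapter's recommendations.

```r
## Minimal R illustration (simulated data): a linear transformation of the
## scores changes the unstandardized mean difference but not the standardized
## mean difference (a Cohen's d computed with a pooled standard deviation).
set.seed(123)
group1 <- rnorm(50, mean = 100, sd = 15)   # scores on the original metric
group2 <- rnorm(50, mean = 108, sd = 15)

cohens_d <- function(x, y) {
  nx <- length(x); ny <- length(y)
  sp <- sqrt(((nx - 1) * var(x) + (ny - 1) * var(y)) / (nx + ny - 2))  # pooled SD
  (mean(y) - mean(x)) / sp
}

# Original metric
mean(group2) - mean(group1)        # unstandardized mean difference
cohens_d(group1, group2)           # standardized mean difference

# Linear transformation of the scores (e.g., rescale to a different metric)
g1t <- group1 / 10 - 5
g2t <- group2 / 10 - 5
mean(g2t) - mean(g1t)              # changes (divided by 10)
cohens_d(g1t, g2t)                 # unchanged
```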


In any given situation where an effect size estimate is obtained, there is a corresponding population value that the estimate is intended to estimate. In general, of course, the population effect size is never known. However, the population value is what is ultimately of interest. The primary role of inferential statistics, in fact, is to make a decision about the population effect size (e.g., it is not zero, it is positive, it is negative, the lower and upper limits bracket the population value with 95% confidence) based on sample data. Effect size has been discussed in this section from a general perspective. First, an encompassing definition was provided and then effect size was set in a two-by-two-by-two array framework, where the dimensions are scaling, specification, and scope. Of vital importance when planning a study is linking the question of interest to the particular effect size. From that point, the inferential perspective from which to plan sample size can be chosen and ultimately an appropriate sample size selected. These latter points are discussed in the forthcoming sections. Effect size is a rich topic, and a section in a chapter certainly cannot do it justice. Readers interested in more details about effect size would benefit from reading Grissom and Kim (2012) and the references contained therein.

Making Inferences from Data There are two primary ways researchers make inferences about population effect sizes based on sample data: (1) null hypothesis significance testing, which evaluates, given that the specified null hypothesis is true, how likely an effect size as large as or larger than the one obtained would be, and (2) confidence interval formation for the population effect size of interest. Because null hypothesis significance testing and magnitude estimation are so important for making inferences from data, each approach is reviewed to provide a framework for connecting effect size and research goals to sample size planning.

Inference from Null Hypothesis Significance Testing The rationale of null hypothesis significance testing is to specify a null hypothesis, often with the null value of the population effect size being zero, and then determine the probability of observing data as extreme or more extreme than the data actually observed, if the null hypothesis were actually true (via the test statistic). If the results obtained are sufficiently unlikely under the null hypothesis,

then the null hypothesis is rejected. "Sufficiently unlikely" is operationalized as the p-value from the test of the null hypothesis being less than the specified Type I error rate (e.g., 0.05). Recall that the meaning of a p-value in the context of a null hypothesis significance test is the probability, given that the null hypothesis is true, of obtaining results as or more extreme than those obtained. Thus, when p-value < α, where α is the Type I error rate, the null hypothesis of the population effect size being equal to the null value specified is rejected, with the conclusion being that there is a difference between the population value of the effect size and the specified null value.2 In some cases, the question of interest involves directionality. For example, (1) does the treatment provide an increase in the mean of the outcome variable as compared to the control group?; (2) are increases in the level of a particular regressor associated with decreases in the conditional mean value of the dependent variable after controlling for the other regressors?; (3) is there a higher proportion of a particular subgroup that reports successful completion of a task than another subgroup?; and so forth. Inference for directionality usually is only meaningful for targeted effect sizes that allow positive and negative values, so as to clearly define the direction of the effect. In other cases, the question of interest involves only the existence of an effect, not the direction. For example, consider a fixed-effects one-way analysis of variance situation with more than two groups. In this context, a statistically significant F-value provides probabilistic evidence that not all of the means are equal in the population. However, with more than two groups, it is not clear from the F-test alone which groups have different means in the population. The F-test evaluates an omnibus (i.e., overarching) effect size rather than a targeted effect (e.g., the difference between two particular group means). Inferring that an effect exists without knowing any directional information occurs in many situations in which an omnibus effect is tested. Depending on the research question, in some situations, omnibus null hypotheses are followed up with more targeted research questions (e.g., pairwise comparisons of means or contrasts in the ANOVA context), but that need not be the case.
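The omnibus versus targeted distinction can be made concrete with a short example. The following R sketch uses simulated data with hypothetical group labels; the omnibus F-test only indicates that some difference among the means exists, and the follow-up comparisons are what identify which pairs of means differ.

```r
## Sketch with simulated data: an omnibus one-way ANOVA F-test indicates that
## not all population means are equal, but targeted follow-up comparisons are
## needed to say which groups differ (group labels here are hypothetical).
set.seed(42)
dat <- data.frame(
  group = factor(rep(c("control", "treatmentA", "treatmentB"), each = 30)),
  y     = c(rnorm(30, 0), rnorm(30, 0.1), rnorm(30, 0.8))
)

fit <- aov(y ~ group, data = dat)
summary(fit)          # omnibus F-test: are all population means equal?

# Targeted comparisons of specific pairs of means (Tukey-adjusted)
TukeyHSD(fit)
```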

Inferences From Confidence Intervals The rationale of confidence interval formation for population parameters comes from the realization that in applied research point estimates almost


certainly differ from their corresponding population values. Providing a confidence interval that will bracket the population parameter with (1 – α)100% confidence explicitly acknowledges the uncertainty in the estimated value of the effect size. A (1 – α)100% confidence interval comes from a procedure that, assuming that the correct model is fit, observations are randomly sampled, and the appropriate assumptions are met, provides an interval where (1 – α)100% of intervals computed under the same conditions will bracket the population parameter. The probability of (1 – α)100% is a theoretical value that is based on the realization that (1 – α)100% of an infinite number of confidence intervals calculated in the same situation will contain the population parameter, again, provided the appropriate assumptions are satisfied. Hahn and Meeker have described the meaning of confidence intervals as "if one repeatedly calculates such [confidence] intervals from many independent random samples, 100(1 – α)% of the intervals would, in the long run, correctly bracket the true value of [the parameter of interest]" (1991, p. 31). Because any realized computed confidence interval is a realization from the infinite set of confidence intervals that exist, that particular confidence interval either does or does not contain the population value, leading to a 0 or 1 probability, yet whether it is 0 or 1 is unknown. The probability level refers to the procedures for constructing a confidence interval, rather than to any particular confidence interval (Hahn & Meeker, 1991). Once real limits are obtained, the interval becomes a statement of confidence and is not technically a probabilistic statement. This is why, for example, in the presentation of methods of confidence interval formation (e.g., statistics books), when the general equations are presented for confidence intervals, the probability of the interval is said to be 1 – α, but when limits are calculated the term confidence, rather than probability, is used. In many situations, what is ultimately of interest is the magnitude of the population effect size. Thus, not only should the point estimate itself be reported, so too should the corresponding confidence interval that brackets the parameter with (1 – α)100% confidence (95% confidence intervals are the de facto standard in many areas of research). The values contained within the confidence interval represent the set of parameter values for which the null hypothesis significance test cannot reject at the α level; these values can be regarded as "plausible" parameter values. However, the values outside of the confidence

interval limits can be rejected as the value of the null hypothesis, at the α level; these values can be regarded as "implausible" parameter values. When wide confidence intervals are obtained, the uncertainty with which an observed effect size has been estimated is clearly laid out for the reader. What constitutes a "narrow" or "wide" confidence interval in any given situation is context-specific. However, all other things being equal, when the magnitude of an effect size is of interest, narrower confidence intervals are preferred, as such intervals illustrate a narrower range (i.e., less uncertainty) of plausible parameter values. However, a confidence interval does not necessarily have to be exceedingly narrow for it to be useful, especially when the existence of an effect is of interest. The ideal narrowness depends on the goals of the researcher in the particular situation. In some cases, a confidence interval that is very narrow will be necessary to offer convincing evidence that a particular theory should be supported or that some finding has practical value, whereas in other situations a confidence interval can be relatively wide but still exclude parameter values that would support an alternative theory or provide practical value. Thus, judgment of the usefulness of a narrow confidence interval in a particular situation very much depends on the specifics of the situation.

The Relationship Between Hypothesis Testing and Confidence Intervals Although null hypothesis significance testing and confidence interval formation are two different approaches to statistical inference, there is a clear link between them. In particular, when a particular null hypothesis value is rejected at the α level by a null hypothesis significance test, the corresponding (1 − α)100% confidence interval limits will necessarily exclude the specified null value. More specifically, if a value is outside of the limits of a (1 − α)100% confidence interval, that value, if it were set to the value of the null hypothesis, would be rejected by the corresponding null hypothesis significance test at a Type I error rate of α. An implication of the one-to-one relationship between confidence intervals and hypothesis tests is that it is unnecessary for a null hypothesis test at the α level to be performed solely for purposes of rejecting or failing to reject the null hypothesis if a (1 − α)100% level confidence interval is calculated. This is the case because if the null value is contained within the confidence interval limits, then the null hypothesis cannot be rejected at the


α level. A null hypothesis significance test, however, provides an additional piece of information that a confidence interval cannot provide—namely, the exact p-value. Whereas a confidence interval used for a null hypothesis significance test only shows implicitly if the p-value is greater than α (i.e., if the null value is contained within the interval) or if the p-value is less than α (i.e., if the null value is outside of the confidence interval), the p-value quantifies the exact probability of observing data as extreme or more extreme than that obtained, if the null hypothesis were true, provided appropriate assumptions hold. Therefore, in general, one would not know how close the p-value is to α if only a confidence interval is presented, only that it does or it does not exceed the threshold of reaching statistical significance. Because the p-value and the confidence interval provide different pieces of useful information, both should generally be reported.
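The correspondence between the two approaches can be verified directly. The following is a minimal R illustration with simulated data: the same t.test() call returns both the p-value and the 95% confidence interval for the mean difference, and the interval excludes the null value of zero exactly when the p-value is below .05.

```r
## Minimal illustration (simulated data): a two-sample t-test reports both the
## p-value and the 95% confidence interval for the mean difference. The null
## value of 0 lies outside the interval exactly when p < .05.
set.seed(1)
x <- rnorm(40, mean = 0.0)
y <- rnorm(40, mean = 0.5)

out <- t.test(y, x, var.equal = TRUE)   # equal-variance t-test, for simplicity
out$p.value                             # exact p-value
out$conf.int                            # 95% CI for the mean difference
(out$p.value < 0.05) == !(out$conf.int[1] <= 0 & out$conf.int[2] >= 0)  # TRUE
```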

Types of Sample Size Planning Given the preceding discussion, it becomes clear that before considering sample size planning, the question of interest as it relates to one or more effect sizes needs to be clearly vetted. Furthermore, the research goal regarding the existence (or direction) of an effect and/or its magnitude needs to be articulated so that sample size can be planned based on either or both perspectives. As will be discussed, when interest is in showing that an effect exists, sample size planning is best approached from a power analytic perspective. However, when interest is in the magnitude of an effect, sample size planning is best approached from an accuracy in parameter estimation perspective. Although other perspectives on sample size planning exist, the discussion in the following sections focuses only on these two methods. For a review of sample size planning methods, see Maxwell, Kelley, and Rausch (2008).

Statistical Power and Power Analysis Statistical power is the probability of correctly rejecting the null hypothesis—it is the complement of a Type II error (i.e., statistical power = 1 – p [Type II error]). Statistical power is a function of four things: (1) the effect size, (2) the model error variance, (3) the Type I error rate (i.e., α), and (4) sample size. In many cases, the size of the effect and the model error variance can both be incorporated into a standardized effect size. In cases where directionality is sensible, such as for a t-test of the difference between means, in addition to specifying

only the Type I error rate, the type of alternative hypothesis (e.g., directional or nondirectional) must also be specified. In other situations, for tests that are inherently one-tailed, such as an analysis of variance, such a distinction is unnecessary. The effect size and model error variance depend, in part, on the research design and statistical model used to analyze the data. The Type I error rate is a design factor known a priori, often set to α = 0.05. Correspondingly, after the research design is specified and a particular value is chosen for the (unstandardized) effect size and model error variance (or the standardized effect size) on which to base the sample size planning procedure, statistical power depends only on sample size. Taken together, this implies that sample size can be planned to obtain a desired level of statistical power based on a specified set of conditions articulated by the researcher. If the conditions specified are not correct, then of course the nominal (i.e., stated) power will differ from the empirical (i.e., actual) power. When testing a particular null hypothesis, the sampling distribution of the effect size of interest is transformed to a test statistic (e.g., via a t-test, χ2-test, F-test). When the null hypothesis and appropriate assumptions are true, the test statistic follows a particular statistical distribution (e.g., a central t, χ2, F). However, when the null hypothesis is false, the test statistic follows the noncentral version of the statistical distribution (e.g., a noncentral t, χ2, F). The noncentral version of a statistical distribution has a different mean, skewness, and variance, among other properties, as compared to its central distribution analog. Whereas a known percentage (e.g., 5%) of the sampling distribution under the null hypothesis is beyond the critical value(s) from the null distribution, the noncentral distribution has a larger proportion of its distribution, in the direction of the effect, beyond the critical value from the central distribution, which is how null hypothesis significance tests are evaluated (i.e., assuming a null distribution). If the effect size actually came from a distribution in which the null hypothesis is false, then there will be a higher probability of rejecting the null hypothesis than the value of α specified, provided the effect is in the direction of the rejection region. It is, of course, advantageous to have a sufficiently large area, which translates into a high probability, of the alternative hypothesis distribution beyond the critical value under the null hypothesis (i.e., central distribution). The area of the alternative distribution beyond (i.e., more extreme than) the critical value of the null distribution can be quantified and is termed statistical power.


[Figure 11.1 shows the densities of the null (H0) and alternative (Ha) t-distributions plotted against values of t (with the corresponding values of δ on a secondary axis), marking the nonrejection region (95% of H0), the rejection region (5% of H0), the Type II error region (20% of Ha), and the power region (80% of Ha).]

Figure 11.1 Illustration of the concept of statistical power in the context of a two independent group comparison of means when the population variances are assumed equal.

For a concrete example, suppose a researcher is interested in having a statistical power of 0.80 in a situation in which he or she believes that the true standardized difference (denoted δ) between two independent group means is 0.50 and that the assumption of homogeneity of variance holds for a two-sided alternative hypothesis with a Type I error rate of 0.05. The value of δ = 0.50 implies that the mean difference is equal to one-half of the within-group standard deviation. For the two-sided alternative hypothesis (i.e., μ1 ≠ μ2) situation, the necessary sample size is 64 participants per group (i.e., a total sample size of 128). Figure 11.1 illustrates this scenario, in which the distribution when the null hypothesis is true is on the left (denoted H0) and the distribution when the alternative hypothesis is true is on the right (denoted Ha). Thus, the two distributions and probability values contained within Figure 11.1 are conditional on the null hypothesis being true (for the probability of the nonrejection region and critical regions) or the null hypothesis being false to the degree specified in the figure (for the probability of the Type II error and statistical power). Note that both distributions cannot simultaneously be true but can both be false, which occurs when the null hypothesis is false and the noncentrality parameter is something other than the value specified.

The distribution on the left is the sampling distribution of the t-statistic when μ1 = μ2 is true—that is, under the condition in which there is no mean difference. In this null distribution, the critical regions are the regions more extreme than the critical values (here, –1.98 and 1.98). In this situation, when an observed t-statistic is more extreme than the critical values, the null hypothesis is rejected in favor of the alternative hypothesis, denoted Ha. However, when the null hypothesis is true, 5% of the time the observed t-statistic will be more extreme than the critical values. In those situations the null hypothesis will be rejected, when in reality the null hypothesis is true, which is a Type I error (i.e., a false positive). However, if the null hypothesis is false and in reality the population standardized mean difference is in fact 0.50, the sampling distribution of the observed t-statistic will follow the distribution on the right, which is the distribution under the alternative hypothesis. Eighty percent of the distribution under the alternative hypothesis is more extreme than the upper critical value of the null hypothesis distribution. Correspondingly, if all of the assumptions of the model and conditions as specified by the researcher are true, 80% of the time the null hypothesis will be rejected. Often a statistical power of 0.80 is regarded as sufficient (e.g., Cohen, 1988).
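The numbers in this example can be reproduced directly in R using only base functions. The first call below solves for the per-group sample size; the second block recomputes the power from the noncentral t-distribution, whose noncentrality parameter for two equal groups is δ times the square root of n/2, approximately 2.83 here.

```r
## The chapter's concrete example in base R: per-group sample size for
## delta = 0.50, two-sided alpha = .05, desired power = .80.
power.t.test(delta = 0.5, sd = 1, sig.level = 0.05, power = 0.80,
             type = "two.sample", alternative = "two.sided")
# n is approximately 63.8, i.e., 64 participants per group.

## Verifying the power directly from the noncentral t-distribution with
## n = 64 per group: df = 2n - 2 = 126 and ncp = delta * sqrt(n / 2) ~ 2.83.
n <- 64; delta <- 0.5
df  <- 2 * n - 2
ncp <- delta * sqrt(n / 2)
crit <- qt(0.975, df)                      # two-sided critical values +/- crit
power <- pt(crit, df, ncp, lower.tail = FALSE) + pt(-crit, df, ncp)
power                                      # approximately 0.80
```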


[Figure 11.2 plots the necessary per-group sample size (vertical axis, 0 to 3000) against the population standardized mean difference δ (horizontal axis, 0.1 to 0.5) for desired statistical power of .50, .80, and .95 and for desired confidence interval widths of .35, .25, and .15.]

Figure 11.2 Comparison of the power analytic and the accuracy in parameter estimation approaches to sample size planning for desired statistical power of 0.50, 0.80, and 0.95 and for desired confidence interval width of 0.15, 0.25, and 0.35, both in the situation where the Type I error rate is 0.05 for the standardized mean difference (taken from Kelley & Rausch, 2006).

However, this implicitly implies that a Type II error (i.e., false–negative) is four times more likely (i.e., 0.05 × 4 = 0.20) than the usual Type I error of 0.05. Whether this is reasonable depends on the context. However, it should be clear that even if the null hypothesis is false as described, 20% of the time there would be a failure to reject the null hypothesis (i.e., the area from the alternative distribution to the left of the upper critical value, yet not beyond the lower critical value, from the null distribution). The four scenarios depicted in Figure 11.1 are generalized in Table 11.1. In Table 11.1, the columns distinguish between a true and a false null hypothesis, whereas the rows distinguish between the statistical conclusions. However, in reality a researcher will tend to not know whether the null hypothesis is true or false but must make a decision based on incomplete information (i.e., sample data). For a Type I error rate of 0.05, the value in the upper left cell of Table 11.1 is 0.95 and the value in the lower left cell is 0.05, which both depend on the null hypothesis actually being true (visually these areas are illustrated in the left distribution of Figure 11.1). For statistical power of 0.80, the value in the upper right cell of Table 11.1 is 0.20 and the value in the lower right cell is 0.80, which both depend on the null hypothesis being false with δ = 0.50 (visually these areas are illustrated in the right distribution of

Fig. 11.1). Linking Table 11.1 to Figure 11.1 is helpful to better understand how the null and alternative distributions in the figure relate to the probabilities in the table. Although seemingly only a mean difference exists between the two distributions upon first glance, the two distributions displayed in Figure 11.1 have different variance, skew, and kurtosis values. The alternative distribution is a noncentral t-distribution that is not symmetric, with a noncentrality parameter of 2.828, whereas the null distribution is a central distribution (noncentrality parameter of 0) that is symmetric. Holding everything else constant, increases in sample size will lead to a larger area (i.e., higher probability) of the noncentral distribution being beyond the critical value from the central distribution. Additionally, holding everything else constant, an increase in the mean difference, a decrease in the variability of the scores, and/or an increase in the Type I error rate will lead to increases in statistical power. Researchers interested in increasing the statistical power of the tests of their hypotheses should realize that the design of the study itself can increase the statistical power while holding sample size constant. For example, incorporating the pretest as a covariate in an analysis of covariance for a randomized pretest, posttest, follow-up design increases the statistical


Table 11.1. Decision Table for Null Hypothesis Testing

                                 Truth in Population
Statistical Conclusion     H0 True                          H0 False
Fail to Reject H0          Correct Decision (p = 1 − α)     Type II Error (p = β)
Reject H0                  Type I Error (p = α)             Correct Decision (p = 1 − β)

Note: H0 represents the null hypothesis, p represents probability, α represents the Type I error rate, and β represents the Type II error rate. Statistical power is 1 − β.

power of detecting group differences as compared to incorporating the pretest as part of the dependent variable or including it as a level of the time factor (e.g., Rausch, Maxwell, & Kelley, 2003). Muthén and Curran (1997) have shown how holding constant the time interval but increasing the number of time points or holding constant the number of time points but increasing the time interval leads to more statistical power for the same number of participants in a between-groups longitudinal design in the latent variable modeling framework. Maxwell and Delaney (2004) have discussed how using a within-subjects design rather than a between-subjects design can increase statistical power at a fixed sample size, as can using multivariate statistical methods rather than a simpler univariate analysis for some research questions (e.g., O'Brien & Muller, 1993). The point is, it is often possible to modify the design of the study and/or the analytic method, while still addressing the same or similar question of interest, holding constant the particular sample size, as a way to increase statistical power. Maxwell et al. (2008) have made clear that understanding issues of statistical power is important from the standpoint of an individual investigator as well as the discipline more generally. For example, rejecting a null hypothesis is often seen as being so important that it is an implicit assumption that publications in many empirical journals involve one or more effect sizes that have reached statistical significance. Correspondingly, even if a researcher wished to avoid the whole process of formal study design, which often includes a statistical power analysis, he or she would be setting himself or herself up for potential failure. The "potential failure" results from the fact that with a poorly designed study, the statistical power may be low, which in turn implies that there is only a small probability of showing support for the existence of the primary effect size of interest (i.e., obtaining statistical significance). When a statistical

power analysis is done, a researcher can decide whether the study as currently envisioned is even worth conducting. For example, if the statistical power for finding a statistically significant effect were 0.15 for the particular sample size that the researcher has access to, many researchers may not want to conduct the study because of the small probability (i.e., a 15% chance) of realizing success (i.e., rejecting the null hypothesis). In such situations, with the knowledge provided by a statistical power analysis, it may be decided that (1) the study should be conducted with the realization that the desired outcome is improbable, (2) the study should not be done at the present time, (3) a larger sample size is needed, or (4) a multisite study should be performed.
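A check of this kind takes only one line once planning values are chosen. The following R call is purely illustrative, with hypothetical planning values (30 available participants per group and an assumed standardized difference of 0.20); it shows the sort of low power figure that might prompt one of the four decisions above.

```r
## Hypothetical check before committing to a study: with only 30 participants
## per group available and a plausible population effect of delta = 0.20,
## the power to detect the effect is low.
power.t.test(n = 30, delta = 0.2, sd = 1, sig.level = 0.05,
             type = "two.sample")$power   # roughly 0.11, far below 0.80
```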

Accuracy in Parameter Estimation A point estimate of an effect size almost certainly differs from the population value of the effect size. It is the population value of an effect size that is ultimately of interest. Correspondingly, a point estimate should always be accompanied by a confidence interval. Failing to accompany a point estimate with a confidence interval ignores the sampling variability inherent in all estimates. When a confidence interval is wide and brackets values ranging from small to large (whatever that means in a particular context), it illustrates the uncertainty with which the parameter has been estimated and calls into question the tenability of the magnitude of the observed effect size. Because a wide confidence interval for an effect size is undesirable when interest concerns magnitude estimation, sample size can be planned a priori such that the computed confidence interval has an expected width that is sufficiently narrow or has probabilistic assurance that the observed width will be sufficiently narrow. The idea of the accuracy in parameter estimation approach to sample size planning is to avoid "embarrassingly large" confidence intervals, which was postulated by Cohen as


a reason researchers often fail to provide confidence intervals (1994, p. 1002). When a researcher or consumer of research is interested in the magnitude of a parameter, failing to provide a confidence interval is problematic. Historically, confidence intervals were seldom reported in psychology and related disciplines. However, much has been written in the methodological literature in the not-so-distant past on the importance of providing an effect size and confidence interval for the population effect size. For example, Wilkinson and the APA Task Force on Statistical Inference concluded that researchers should "always present effect sizes for primary outcomes" and went on to say that "interval estimates should be given for any effect sizes involving principal outcomes" (1999, p. 599). These recommendations are made explicit in the newest edition of the American Psychological Association (APA) publication manual, which states that null hypothesis significance tests are "but a starting point" (APA, 2010, p. 33). The newest edition of the APA publication manual goes on to state without ambiguity that the effect size needs to be reported ("it is almost always necessary to include some measure of effect size in the Results section," p. 34) and that confidence intervals should be reported because they provide an indication of the precision of the estimated effect size ("whenever possible, provide a confidence interval for each effect size reported to indicate the precision of estimation of the effect size," p. 34). Because reporting confidence intervals for population quantities is essentially a required component of research studies reported in APA journals, as they are "minimum expectations," and because obtaining narrow confidence intervals is so advantageous, the traditional method of sample size planning from the power analytic perspective can be supplemented or supplanted by an approach where the goal is to obtain a narrow confidence interval. The calls from the APA are not unique to psychology. In fact, education (American Educational Research Association, 2006) and medicine (the Consolidated Standards of Reporting Trials [CONSORT; Moher et al., 2010] and the Transparent Reporting of Evaluations with Nonrandomized Designs [TREND; Des Jarlais et al., 2004]) have authoritative calls for reporting effect sizes and their corresponding confidence intervals in published research that are consistent with the APA expectations. The approach to sample size planning in which the goal is to obtain a narrow confidence interval has been termed accuracy in parameter estimation, which

is often abbreviated AIPE (e.g., Kelley & Maxwell, 2003). The goal of the AIPE approach to sample size planning is that the confidence interval for the parameter of interest will be sufficiently narrow, where "sufficiently narrow" is necessarily context-specific. Sample size planning with the goal of obtaining a narrow confidence interval dates back to at least Guenther (1965) and Mace (1964), yet the AIPE approach to sample size planning has taken on a more important role in the research design literature recently. This is the case due to the increased emphasis on effect sizes, their confidence intervals, and the undesirable situation of "embarrassingly wide" confidence intervals. Recent literature has discussed AIPE as an alternative to, or supplement for, statistical power analysis because of the push for more of an effect-size-based literature for making scientifically based inferences. The AIPE approach to sample size planning seeks to obtain an accurate estimate, which is operationalized by obtaining a narrow (1 – α)100% confidence interval for the population parameter of interest. Confidence interval width is in part, but not wholly, a function of sample size. Holding everything else constant, the larger the sample size, the smaller the standard error of the estimated value, and the smaller the standard error, the narrower the confidence interval. Of course, sample size cannot generally increase without bound. What the AIPE approach to sample size planning addresses is the minimum sample size for which the goal of a narrow confidence interval will be satisfied. To understand why the AIPE approach to sample size planning is termed accuracy in parameter estimation, it is helpful to consider the statistical definition of accuracy, which is operationalized as the square root of the mean square error (RMSE) for estimating some parameter of interest, say θ, which is formally defined as

$$
\mathrm{RMSE} = \sqrt{E\big[(\hat{\theta} - \theta)^2\big]}
= \sqrt{E\big[(\hat{\theta} - E[\hat{\theta}])^2\big] + \big(E[\hat{\theta} - \theta]\big)^2}
= \sqrt{\sigma_{\hat{\theta}}^{2} + B_{\hat{\theta}}^{2}}, \qquad (1)
$$

where $\sigma_{\hat{\theta}}^{2}$ is the variance of the estimated parameter, which is inversely proportional to the precision of the estimator, and $B_{\hat{\theta}}^{2}$ is the squared bias of the estimator. From the third way of expressing Equation 1, it can readily be seen that for a fixed $\sigma_{\hat{\theta}}^{2}$, an increase in $B_{\hat{\theta}}^{2}$ yields a less accurate estimate (i.e., a larger square root of the mean square error), with the converse also being true. Because the goal is an accurate estimate,


precision and bias must be considered simultaneously. In general, the most widely used estimates are unbiased or nearly unbiased, or at least consistent (i.e., they converge to their population value as sample size increases). It would be entirely possible, however, to have a very precise estimate that was biased. For example, suppose that regardless of the observed data, a researcher estimated a parameter based on a theory-implied value. Doing so would not be statistically optimal, in general, but the estimate would be very precise. The AIPE approach to sample size planning is named as such because it simultaneously considers the precision and bias of the estimate. One approach for planning sample size from the AIPE perspective is having a confidence interval whose expected (i.e., mean) width is sufficiently narrow. The standard AIPE approach to sample size planning answers the question "What sample size is necessary such that the 95% confidence interval has a sufficiently narrow expected width?" However, because the confidence interval width is a random variable (as it is based on data that contain one or more random variables), any particular realization of the confidence interval will tend to be either narrower or wider than desired (i.e., the observed width will be narrower or wider than the expected width). An optional specification allows a researcher to incorporate a specified degree of assurance (e.g., 99%) that the obtained confidence interval will be sufficiently narrow. That is, a modification to a standard AIPE procedure would answer the question "What size sample is necessary so that there is 99% assurance that the 95% confidence interval has a sufficiently narrow width?" Other values of assurance and confidence level could be used, of course. As noted, operationalizing what a "sufficiently narrow" width means necessarily depends on the particular context and the research goals. An important point is that a confidence interval will bracket the population value with the specified level of confidence, which ultimately implies that the best estimate of the population effect size is contained within a narrower range of plausible parameter values (i.e., the confidence interval limits). Holding everything else constant, the narrower the confidence interval the better when interest concerns magnitude estimation, as a narrower interval brackets a smaller range of plausible parameter values. Just as the Type I error rate is usually fixed at 0.05, as previously discussed for statistical power, the confidence level is essentially a fixed design factor, generally set to 0.95 (i.e., 1 – 0.05). With the

level of confidence essentially regarded as fixed, and with estimates for the model error variance and, in some situations, the size of the effect, sample size is a design factor that can be planned so that the expected (i.e., mean) confidence interval width is sufficiently narrow or, with some additional assurance, the observed width will be sufficiently narrow. The particulars of how to plan sample size from the AIPE approach, as with the power analytic approach, are relegated to software programs and more technical works, as the implementation of sample size planning for different effect sizes can vary a great deal. The calls for using effect sizes and confidence intervals by methodologists have been unrelenting (e.g., see Morrison & Henkel, 1970; Bakan, 1966; Wilkinson & the APA Task Force, 1999; Harlow, Mulaik, & Steiger, 1997; Thompson, 2002; Schmidt, 1996; Grissom & Kim, 2005; Hunter & Schmidt, 2004; Cohen, 1994). These calls seem to have been heard by various organizations, as evidenced by recent requirements stipulating that effect sizes and confidence intervals be included as part of a research study. Given what can be described as essentially the new requirement of reporting effect sizes and confidence intervals, coupled with the fact that wide confidence intervals are generally undesirable when interest concerns the magnitude of the population effect size, the AIPE approach to sample size planning is poised to become a more widely used approach to planning sample size.
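To make the logic of width-based planning concrete, the following is a minimal R sketch for the unstandardized difference between two independent group means, using an assumed planning value for the within-group standard deviation. The planning values, the target width, and the function name are hypothetical illustrations rather than part of any particular software package; because the sample standard deviation varies from sample to sample, a realized interval may still be wider than desired, which is exactly the problem the assurance modification described above addresses.

```r
## Minimal AIPE-style sketch: smallest per-group n for which the full width of
## the 95% CI for an unstandardized mean difference (two independent groups,
## equal variances) is no larger than a desired value, treating sigma as a
## planning value. Function and argument names here are illustrative only.
n_for_ci_width <- function(width, sigma, conf.level = 0.95) {
  alpha <- 1 - conf.level
  n <- 2                                     # start small and increase
  repeat {
    df <- 2 * n - 2
    full.width <- 2 * qt(1 - alpha / 2, df) * sigma * sqrt(2 / n)
    if (full.width <= width) return(n)
    n <- n + 1
  }
}

n_for_ci_width(width = 4, sigma = 10)        # about 194 per group
# Dedicated AIPE functions (e.g., for the standardized mean difference in the
# MBESS package listed in Table 11.2) implement more refined procedures,
# including assurance; exact usage should be checked against the package
# documentation.
```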

Software for Sample Size Planning Software for sample size planning has been developed for many designs and statistical procedures from different perspectives. However, even with all of the software available, there are still some designs and statistical procedures used in psychology and related disciplines that do not have easy-to-use sample size planning software programs available. Nevertheless, when a software program does exist, the actual planning of sample size given the necessary information can generally be done relatively easily, provided necessary input values are available or can be estimated. Later in the section, a list of selected software titles is provided that may be helpful for planning sample size in many—but certainly not all—situations. The point of devoting a section to software is so that researchers realize some of the resources available that implement the necessary computations in planning sample size, in which those computations can often be thought of as taking place "in


the background.” At one time, sample size planning required hand calculations, the use of tables with selected values of effect size, or tri-entry or quad-entry tables of nonstandard distributions (e.g., noncentral t, χ 2 , and F ) to generally approximate the appropriate sample size value. Two decades ago, an associate editor for Psychological Bulletin believed that researchers failed to perform statistical power analyses because they were too difficult and spurred Cohen (1992) to write a primer on statistical power analysis. In general, almost all implementation of sample size planning today is relegated to computers. Had this chapter been written, say, a decade earlier, it is likely that much of the chapter would have been spent demonstrating how to plan sample size for commonly used designs. However, because it is assumed that readers interested in planning sample size will tend to use a computer software program, considerable attention has been devoted to the underlying concepts and issues involved in planning sample size to answer research questions of interest. Table 11.2 includes various relevant software programs, their publisher/author(s), whether they are freely available, and an Internet address for more information. Revelle and Zinbarg (2009) have argued the lack of quality software will prevent many researchers from implementing important methodological techniques, of which sample size planning would be a special case. One take-away message is for developers of methods: If you develop a method or improve an existing method, unless that method is implemented in user-friendly software, then it will not likely be widely used. Another take-away message is for researchers who apply statistical methods to their data: If a methodological technique is not in one of your favorite statistical packages, then look elsewhere for the method being implemented in another software program. In addition to looking for the implementation of a method in other programs, researchers should not shy away from unfamiliar programs, as they may be easy to use with a relatively small time investment and can expand the size of one’s “methodological toolbox.” Although some researchers have a considerable amount of anxiety about using unfamiliar programs, the benefits of implementing new methods can often be worth the difficulty in using something new. Additionally, using another program may not be nearly as time consuming or difficult as it may seem initially. It is important to keep an open mind about new methodological software because important methodological techniques are not always implemented in the most

widely used packages in psychology and related disciplines. In fact, new developments with relevance to many researchers in psychology and related disciplines often take many years to be implemented, if they are ever implemented. Methodologists who develop software that is available at the time of publication of the article will have the biggest impact. Nevertheless, methodologists should not feel as though they need to develop a new software program for each methodological development. This idea is consistent with the argument set forth by Revelle and Zinbarg—namely, that when implementing new methodological developments, existing open source software systems should be considered, such as R, that run on the main computer platforms (Windows, Macintosh, & Unix/Linux) and supply the underlying code so that exactly what is being done by the program can be examined, updated, and extended. Allowing users access to the underlying code opens up the “black box” that exists in some programs and more easily allows future developments based on the previous programming work to be made.
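As a small illustration of how little effort an unfamiliar package can require, the calculation from the earlier two-group example can be reproduced with one of the R packages listed in Table 11.2, assuming the package is installed.

```r
## Assuming the pwr package (one of the R packages listed in Table 11.2) is
## installed, the earlier two-group power calculation is a single call.
library(pwr)
pwr.t.test(d = 0.5, sig.level = 0.05, power = 0.80, type = "two.sample")
# Reports n of roughly 64 per group, matching power.t.test() in base R.
```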

Discussion The design phase of a research study is an integral part of a research project, as it is advantageous to design a research study so as to have a sufficiently high probability of success in accomplishing the particular goal. Publishing a study in a scientific outlet is an important goal of almost any study, because without such a publication, no new knowledge can be communicated to the discipline. In an effort to increase the likelihood that a study will be publishable and potentially have an impact on the discipline, researchers should carefully design the study with the sample size clearly justified and discuss the design in any manuscripts that are based on the data collected from the study. Without a properly designed research study, the likelihood of the study contributing to the cumulative knowledge of a discipline is drastically reduced. Potentially even more problematic than a study not adding anything to the cumulative knowledge of a discipline is when the study adds incorrect information, resulting in whole or in part from a poorly designed study, which can serve to detract from or confuse the cumulative knowledge. Although there are many important factors to consider when designing a study (e.g., see Shadish, Cook, & Campbell, 2002; Maxwell & Delaney, 2004; Kirk, 1995; Myers & Well, 2003; Winer,


Table 11.2. Software Titles Useful for Planning Sample Size

Software title* | Author(s)/Publisher | Operating system(s) | Free? | Web resource
G*Power | E. Erdfelder, F. Faul, & A. Buchner | Windows/Mac | Yes | http://www.psycho.uni-duesseldorf.de/aap/projects/gpower/
nQuery Advisor | Statistical Solutions | Windows | No | http://www.statistical-solutions-software.com/products-page/nquery-advisor-sample-size-software/
Optimal Design | J. Spybrook, S. W. Raudenbush, R. Congdon, & A. Martinez | Windows | Yes | http://www.wtgrantfoundation.org/resources/overview/research_tools
PASS | NCSS | Windows | No | http://www.ncss.com/pass.html
PinT | T. Snijders, R. Bosker, & H. Guldemond | Windows | Yes | http://stat.gamma.rug.nl/multilevel.htm#progPINT
Power and Precision | Biostat | Windows | No | http://www.power-analysis.com
R (package: asypow) | B. W. Brown, J. Lovato, K. Russel, & K. Halvorsen | Windows/Mac/Unix | Yes | http://cran.r-project.org/web/packages/asypow/index.html
R (package: MBESS) | K. Kelley & K. Lai | Windows/Mac/Unix | Yes | http://cran.r-project.org/web/packages/MBESS/index.html
R (package: pamm) | J. Martin | Windows/Mac/Unix | Yes | http://cran.r-project.org/web/packages/pamm/index.html
R (package: pwr) | S. Champely | Windows/Mac/Unix | Yes | http://cran.r-project.org/web/packages/pwr/index.html
SAS (PROC POWER) | SAS Institute | Windows/Unix | No | http://support.sas.com/documentation/cdl/en/statug/63033/HTML/default/power_toc.htm
SAS (PROC GLMPOWER) | SAS Institute | Windows/Unix | No | http://support.sas.com/documentation/cdl/en/statug/63033/HTML/default/glmpower_toc.htm
SIZ | Cytel | Windows | No | http://www.cytel.com/Software/SiZ.aspx
SPSS (SamplePower) | SPSS | Windows | No | http://www.spss.com/software/statistics/samplepower/
Statistica (Power Analysis and Interval Estimation) | StatSoft | Windows | No | http://www.statsoft.com/products/statistica-power-analysis/
STPLAN | B. Brown, C. Brauner, A. Chan, D. Gutierrez, J. Herson, J. Lovato, K. Russel, & J. Venier | Windows/Unix | Yes | https://biostatistics.mdanderson.org/SoftwareDownload/

Note: Software titles are listed in alphabetical order. The failure to list a sample size planning software does not imply that it should not be considered. Purposely not included, for example, are very narrowly focused sample size planning software titles. Also not included are "web resources" (e.g., online calculators), some of which can be very helpful. Additionally, general software titles that could be made to plan sample size with the appropriate programming are not included, as the listed software titles are those that were developed specifically to plan sample size or contain specialized functions/procedure for planning sample size.

Brown, & Michels, 1991; Keppel & Wickens, 2004), an important factor is sample size planning. Sample size planning can be defined as the systematic approach to selecting an optimal number of participants to include in a research study so that some specified goal or set of goals can be satisfied with some degree of expectation or probabilistic assurance, where the expectation or probabilistic assurance depends on the specified assumptions. Research goals may relate to establishing the existence of an effect and/or estimating the magnitude of an effect. When research goals are concerned with showing the existence of an effect, statistical power analysis is generally the most appropriate approach to sample size planning. However, when research goals are concerned with estimating the magnitude of an effect, the accuracy in parameter estimation approach is generally the most appropriate approach to sample size planning. In cases where both the existence and magnitude are of interest, statistical power and AIPE can be combined into a unified framework (e.g., Jiroutek, Muller, Kupper, & Stewart, 2003). Historically, sample size planning has often been seen as a difficult task. One reason for the inherent difficulty in planning sample size is that there are often multiple effect sizes of interest in a given study. Additionally, the goal of having adequate statistical power, sufficient accuracy in parameter estimation, or both, can potentially lead to a different necessary sample size for each of the effects. That is, for the same study, multiple "appropriate" sample sizes may exist, each of which is linked to a specific goal. In these situations, generally the best solution from a methodological perspective is to use the largest of the planned sample sizes. In some cases, the appropriate sample size is well beyond what is obtainable by the researcher. In

such cases, it is still important to know what a formalized sample size planning procedure suggests, as knowing the ideal sample size value may lead to the realization that the study is simply unlikely to be successful with the available resources. Correspondingly, a cost–benefit analysis can be done to assess whether the study should even be conducted as envisioned. One possibility when the necessary sample size is too large to obtain for a researcher is to conduct a multisite study, which is much more common in medical research than in psychology. The idea of a multisite study is to spread the burden but reap the benefits that arise from appropriately large sample sizes (Kelley & Rausch, 2006). Because multiple null hypothesis significance tests will often be conducted in a single study, it could be the case that statistical power is not adequate for any particular effect size, but overall there is a high degree of statistical power for at least one test because of the multiplicity issue. Maxwell (2004) has reviewed issues of underpowered studies from the perspective of a single researcher and from the perspective of an entire discipline. From the researcher’s perspective, if enough statistical tests are performed, then there will often be a high probability of finding statistical significance somewhere among the set of null hypothesis significance tests. From the discipline’s perspective, however, underpowered studies produce inconsistencies in findings and tend to overestimate the magnitude of effect size. Published but underpowered studies tend to overestimate effect sizes because studies most likely to be published are those with statistically significant findings, which may be caused by sampling error not resulting from a population effect size that differs from the null value. Correspondingly, if a nontrivial proportion of published studies are kelley


based on statistically significant findings that are a result of sampling error (i.e., only the studies with large effect sizes are published because those are the ones that reach statistical significance), then estimates of effect sizes based on the literature are based on a biased sample. Largely because of this issue, Kraemer, Gardner, Brooks, and Yesavage (1998) recommend excluding underpowered studies from meta-analysis. Vickers (2003) studied how estimates of the population standard deviation used in controlled randomized trials tended to underestimate the population value (in approximately 80% of the studies examined), thereby leading to studies that were often underpowered. Browne (1995) has provided correction factors for standard deviations based on pilot studies so that there is probabilistic assurance that sample size planned from those standard deviations will not underestimate power. The point is that using estimates of effect sizes or standard deviations from (1) different populations, (2) under different situations, or (3) pilot studies where there may be substantial sampling error, can lead to erroneous estimates of the corresponding population value which is often used when planning sample size. Correspondingly, some effect sizes will be in the literature because they happen to be large or the standard deviations happen to be small, simply because of sample error. Thus, caution is clearly warranted when basing sample size planning on estimates obtained from a pilot study or the literature, especially when the study used a rather small sample size. A seemingly simple question commonly asked in the initial phase of study design is “What size sample should be used?” However, answering this question is not so easy, as there are multiple issues that need to be considered. These issues relate to the particular effect size that addresses the question of interest and the goals of the researcher. Numerous booklength treatments have been written on the topic of sample size planning (Aberson, 2010; Bausell & Li, 2002; Chow, Shao, & Wang, 2003; Cohen, 1988; Dattalo, 2008; Davey & Savla, 2010; Kraemer & Thiemann, 1987; Lipsey, 1990; Machin, Campbell, Tan, & Tan, 2009; Murphy, Myors, & Wolach, 2008). These books include specifics on exactly how to plan an appropriate sample size in many conditions in a variety of ways. Most sample size planning questions can be addressed with software. Correspondingly, this chapter did not provide specifics on any particular sample size planning method. Rather, this chapter 220

attempted to provide an overview of the variety of issues that need to be considered when planning an appropriate sample size. Hopefully, this chapter has been successful in providing an effective overview of effect sizes, research goals of interest, and sample size planning methods and how each of these three issues are intertwined. A better understanding of these issues will better facilitate the design of research studies, which hopefully will contribute to a more unbiased and cumulative science.
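As a concrete, purely illustrative sketch of how different goals can imply different sample sizes, the following R code uses the pwr and MBESS packages listed in Table 11.2; the anticipated effect size and the desired confidence interval width are assumptions the researcher must justify:

library(pwr)
library(MBESS)

d <- 0.40  # anticipated population standardized mean difference (an assumption)

# Goal 1: 80% power for a two-sided, two-sample test at alpha = .05
n_power <- ceiling(pwr.t.test(d = d, sig.level = .05, power = .80,
                              type = "two.sample")$n)

# Goal 2: a 95% confidence interval for the standardized mean difference
# no wider than 0.30; ss.aipe.smd() returns the necessary sample size per
# group (see the MBESS documentation)
n_aipe <- ss.aipe.smd(delta = d, conf.level = .95, width = 0.30)

n_power
n_aipe
max(n_power, n_aipe)  # plan for the larger of the two per-group sample sizes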

Future Directions
1. When will the majority of top-tier journals in psychology and related disciplines require, rather than encourage, reporting effect sizes and their corresponding confidence intervals?
2. When will the majority of scientific conclusions in psychology and related disciplines be based on effect sizes and their corresponding confidence intervals rather than the dichotomous results of a null hypothesis significance test?
3. When will discussing sample size planning in the methods section of a journal article be given the importance it deserves by editors, reviewers, and readers?
4. When will sample size planning from the perspective of accuracy in parameter estimation (AIPE) be widely used?
5. When will some of the more complicated designs used in psychology and related disciplines be implemented in sample size planning programs?
6. When will widely used computer programs provide commonly used effect sizes, especially standardized effect sizes, and automatically compute the corresponding confidence intervals? (See the sketch following this list.)
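Regarding the final question, confidence intervals for standardized effect sizes can already be obtained with software such as that in Table 11.2; a minimal sketch with the MBESS package, using arbitrary illustrative values, is:

library(MBESS)
# Noncentral-t based confidence interval for an observed standardized
# mean difference of 0.45 with 50 participants per group.
ci.smd(smd = 0.45, n.1 = 50, n.2 = 50, conf.level = .95)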

Author note The author would like to thank Joseph R. Rausch, University of Cincinnati and Cincinnati Children’s Hospital Medical Center, and Keke Lai, University of Notre Dame, for helpful comments on a previous version of this chapter.

Notes
1. Some effect sizes fall between unstandardized and standardized, as they are partially standardized. An example of a partially standardized effect size is a regression coefficient in a model where the predictors/explanatory variables are standardized but the outcome variable is not. Additionally, some effect sizes fall between targeted and omnibus effect sizes. An example of an effect size that is partially targeted (and thus partially omnibus) is the change in the squared multiple correlation coefficient when two variables


are added to a multiple regression model. In such a situation the change in the squared multiple correlation coefficient cannot be attributed to any specific variable, and thus it is not targeted, because both regressor variables are added simultaneously. So as not to complicate the discussion presented in the chapter, partially standardized effect sizes and partially omnibus effect sizes are not explicitly discussed.
2. The chapter has been framed in terms of non-directional alternative hypotheses, where the null hypothesis is set equal to the null value and is rejected without explicit consideration of direction. However, modification to single-sided tests (e.g., testing whether the population mean from group 1 is larger than the population mean from group 2, rather than testing whether the population means from groups 1 and 2 are different) is straightforward (see the brief illustration following these notes).
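The practical consequence of Note 2 can be seen in a small sketch with base R's power.t.test(), using illustrative values only; directional tests require noticeably fewer participants for the same power:

# Per-group n for d = 0.5 and 80% power at alpha = .05
power.t.test(delta = 0.5, power = .80, sig.level = .05,
             alternative = "two.sided")$n   # about 64 per group
power.t.test(delta = 0.5, power = .80, sig.level = .05,
             alternative = "one.sided")$n   # about 50 per group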

References
Aberson, C. L. (2010). Applied power analysis for the behavioral sciences. New York: Psychology Press.
American Educational Research Association (2006). Standards for reporting on empirical social science research in AERA publications. Washington, DC: American Educational Research Association.
American Psychological Association (2010). Publication manual of the American Psychological Association (6th ed.). Washington, DC: American Psychological Association.
Baguley, T. (2009). Standardized or simple effect size: What should be reported? British Journal of Psychology, 100, 603–617.
Bakan, D. (1966). The test of significance in psychological research. Psychological Bulletin, 66, 423–437.
Bausell, R. B., & Li, Y.-F. (2002). Power analysis in experimental research: A practical guide for the biological, medical, and social sciences. New York: Cambridge University Press.
Browne, R. H. (1995). On the use of a pilot sample for sample size determination. Statistics in Medicine, 14, 1933–1940.
Chow, S.-C., Shao, J., & Wang, H. (2003). Sample size calculations in clinical research. New York: Taylor & Francis.
Cohen, J. (1988). Statistical power analysis for the behavioral sciences (2nd ed.). Hillsdale, NJ: Erlbaum.
Cohen, J. (1992). A power primer. Psychological Bulletin, 112, 155–159.
Cohen, J. (1994). The earth is round (p < .05). American Psychologist, 49, 997–1003.
Dattalo, P. (2008). Determining sample size: Balancing power, precision, and practicality. New York: Oxford University Press.
Davey, A., & Savla, J. (2010). Statistical power analysis with missing data: A structural equation modeling approach. New York: Routledge.
Des Jarlais, D. C., Lyles, C. M., Crepaz, N., & the TREND Group. (2004). Improving the reporting quality of nonrandomized evaluations of behavioral and public health interventions: The TREND statement. American Journal of Public Health, 94, 361–366.
Grissom, R. J., & Kim, J. J. (2012). Effect sizes for research: Univariate and multivariate applications (2nd ed.). New York, NY: Routledge.
Guenther, W. C. (1965). Concepts of statistical inference. New York: McGraw-Hill.
Hahn, G. J., & Meeker, W. Q. (1991). Statistical intervals: A guide for practitioners. New York, NY: Wiley.
Harlow, L. L., Mulaik, S. A., & Steiger, J. H. (1997). What if there were no significance tests? Mahwah, NJ: Erlbaum.

Hunter, J. E., & Schmidt, F. L. (2004). Methods of meta-analysis: Correcting error and bias in research findings (2nd ed.). Thousand Oaks, CA: Sage.
Jiroutek, M. R., Muller, K. E., Kupper, L. L., & Stewart, P. W. (2003). A new method for choosing sample size for confidence-interval based inferences. Biometrics, 59, 580–590.
Kelley, K., & Maxwell, S. E. (2003). Sample size for multiple regression: Obtaining regression coefficients that are accurate, not simply significant. Psychological Methods, 8, 305–321.
Kelley, K., & Preacher, K. J. (in press). On effect size. Psychological Methods.
Kelley, K., & Rausch, J. R. (2006). Sample size planning for the standardized mean difference: Accuracy in parameter estimation via narrow confidence intervals. Psychological Methods, 11, 363–385.
Keppel, G., & Wickens, T. D. (2004). Design and analysis: A researcher's handbook (4th ed.). Upper Saddle River, NJ: Prentice Hall.
Kirk, R. E. (1995). Experimental design: Procedures for behavioral sciences (3rd ed.). Pacific Grove, CA: Brooks/Cole.
Kraemer, H. C., Gardner, C., Brooks, J. O., & Yesavage, J. A. (1998). Advantages of excluding underpowered studies in meta-analysis: Inclusionist versus exclusionist viewpoints. Psychological Methods, 3, 23–31.
Kraemer, H. C., & Thiemann, S. (1987). How many subjects? Statistical power analysis in research. Newbury Park, CA: Sage.
Lenth, R. V. (2001). Some practical guidelines for effective sample size determination. The American Statistician, 55, 187–193.
Lipsey, M. W. (1990). Design sensitivity: Statistical power for experimental research. Newbury Park, CA: Sage.
Mace, A. E. (1964). Sample-size determination. New York: Reinhold.
Machin, D., Campbell, M. J., Tan, S. B., & Tan, S. H. (2009). Sample size tables for clinical studies (3rd ed.). Hoboken, NJ: Wiley-Blackwell.
Maxwell, S. E. (2004). The persistence of underpowered studies in psychological research: Causes, consequences, and remedies. Psychological Methods, 9, 147–163.
Maxwell, S. E., & Delaney, H. D. (2004). Designing experiments and analyzing data: A model comparison perspective (2nd ed.). Mahwah, NJ: Erlbaum.
Maxwell, S. E., Kelley, K., & Rausch, J. R. (2008). Sample size planning for statistical power and accuracy in parameter estimation. Annual Review of Psychology, 59, 537–563.
Moher, D., Hopewell, S., Schulz, K., Montori, V., Gøtzsche, P. C., Devereaux, P. J., Elbourne, D., et al. (2010). CONSORT 2010 explanation and elaboration: Updated guidelines for reporting parallel group randomized trials. British Medical Journal, 340, 698–702.
Morrison, D. E., & Henkel, R. E. (Eds.). (1970). The significance test controversy: A reader. Chicago: Aldine.
Murphy, K. R., Myors, B., & Wolach, A. (2008). Statistical power analysis: A simple and general model for traditional and modern hypothesis tests (3rd ed.). Mahwah, NJ: Erlbaum.


Muthén, B. O., & Curran, P. J. (1997). General longitudinal modeling of individual differences in experimental designs: A latent variable framework for analysis and power estimation. Psychological Methods, 2, 371–402.
Myers, J. L., & Well, A. (2003). Research design and statistical analysis (2nd ed.). Mahwah, NJ: Erlbaum.
O'Brien, R. G., & Muller, K. E. (1993). Unified power analysis for t-tests through multivariate hypotheses. In L. K. Edwards (Ed.), Applied analysis of variance in behavioral science (pp. 297–344). New York: Marcel Dekker.
Preacher, K. J., & Kelley, K. (2011). Effect size measures for mediation models: Quantitative strategies for communicating indirect effects. Psychological Methods, 16, 93–115.
Rausch, J. R., Maxwell, S. E., & Kelley, K. (2003). Obtaining power or obtaining precision: Delineating methods of sample-size planning. Evaluation and the Health Professions, 26, 258–287.
Rausch, J. R., Maxwell, S. E., & Kelley, K. (2003). Analytic methods for questions pertaining to a randomized pretest, posttest, follow-up design. Journal of Clinical Child and Adolescent Psychology, 32, 467–486.


Revelle, W., & Zinbarg, R. E. (2009). Coefficients alpha, beta, omega and the glb: Comments on Sijtsma. Psychometrika, 74, 145–154.
Schmidt, F. L. (1996). Statistical significance testing and cumulative knowledge in psychology: Implications for training of researchers. Psychological Methods, 1, 115–129.
Shadish, W. R., Cook, T. D., & Campbell, D. T. (2002). Experimental and quasi-experimental designs for generalized causal inference. Boston, MA: Houghton-Mifflin.
Thompson, B. (2002). What future quantitative social science research could look like: Confidence intervals for effect sizes. Educational Researcher, 31, 25–32.
Vickers, A. J. (2003). Underpowering in randomized trials reporting a sample size calculation. Journal of Clinical Epidemiology, 56, 717–720.
Wilkinson, L., and the Task Force on Statistical Inference. (1999). Statistical methods in psychology journals: Guidelines and explanations. American Psychologist, 54, 594–604.
Winer, B. J., Brown, D. R., & Michels, K. M. (1991). Statistical principles in experimental design (3rd ed.). New York: McGraw-Hill.


CHAPTER 12

Experimental Design for Causal Inference: Clinical Trials and Regression Discontinuity Designs

Kelly Hallberg, Coady Wing, Vivian Wong, and Thomas D. Cook

Abstract
Two evaluation designs are widely accepted as yielding results that are causally interpretable: the randomized experiment (RE) and the regression-discontinuity design (RDD). This paper explores theoretical and practical similarities between these two designs that have led some researchers to view them as "close cousins." We also examine important differences between the designs. We conclude that the theoretical strength and possibility for unbiased implementation in practice warrant the privileged position these two designs hold among researchers concerned with the causal effects of interventions. However, the advantages in statistical power, more generally interpretable effect estimates, and a straightforward approach to statistical modeling lead us to advise researchers to choose REs over RDDs when all else is equal.
Key Words: Causal inference, regression discontinuity designs, randomized controlled trials, quasi-experiments

Introduction Causal questions often dominate scientific and policy debates because they are central to the construction of better theories and more effective policies. Two evaluation designs are widely accepted as yielding results that are causally interpretable: the randomized experiment (RE) and the regressiondiscontinuity design (RDD). In REs, treatment assignment is based on a chance process, such as the flip of a coin, the pull of a lottery ball, or the application of a random number generator. Units do not have to have an equal probability of being assigned to treatment or control; the important feature is that each unit has a known non-zero probability of receiving treatment. Random assignment procedures create two or more groups that are initially comparable on all measured and unmeasured covariates, at least in theory. That is, the groups so constructed are equivalent in expectation, although in any one research application they are equivalent only

within the limits of the sampling error obtained. Estimates of the average effect of the intervention are constructed by comparing mean outcomes in groups exposed to the different treatments under test— often just a treatment and no-treatment comparison group. The pertinent logic is that posttreatment group differences cannot be the product of pretreatment differences or any posttreatment group differences other than the intervention whose effects are being investigated. In RDD, treatment assignment is determined on the basis of a single cutoff score on a continuous assignment variable measured prior to treatment. Units that score on one side of the cutoff are assigned to treatment status, and those scoring on the other side are assigned to the contrast condition, often a no-treatment comparison group. A discontinuity— at the cut-off—in an otherwise smooth relationship between average outcomes and the assignment variable represents the treatment effect. The logic here is 223

that units just below and above the cut-off value are nearly identical in expectation in every way except the treatment condition. In the absence of a treatment effect, there are few alternative explanations for a sharp change in outcomes at such a specific value of the assignment variable. For a RDD to yield valid treatment effect estimates, two design conditions must be met. First, the RDD requires a discontinuity in the probability of treatment at the cutoff, conditional on the assignment variable (Hahn, Todd, & van der Klaauw, 2001). This means that the design successfully induced individuals to enter the appropriate treatment condition solely on the basis of their assignment score and cutoff. The assumption may be examined empirically by modeling the probability of treatment receipt and assessing whether there is a marked discontinuity at the cutoff. The second design requirement is that there should be no discontinuity in potential outcomes at the cutoff. This is often referred to as the “continuity restriction” (Hahn et al., 2001). In practice, it means that there are no alternative explanations that would cause a sudden shift in the regression line at the cutoff. The assumption can be problematic if the assignment variable is used to assign multiple interventions. For example, states often use a threshold value of the percentage of students eligible for free and reduced price lunch as the criterion for eligibility for school activities like after-school programs, academic coaching for teachers, and receipt of additional school funding. In cases like these, RDD would be inappropriate for identifying the causal impact of any specific program because the receipt of one program at the cut-off would be confounded with the receipt of the other programs. Although the continuity condition is not directly testable, it may be probed by examining whether there are discontinuities in observable baseline covariates at the cutoff conditional on the assignment variable, and by examining the conditions under which the assignment rule was developed and implemented. The RE has long been considered the method of choice for causal inference in medicine, agriculture, and parts of psychology. It has more recently gained popularity in education, microeconomics, and criminal justice. Randomized designs offer a number of advantages, foremost among which are transparent and testable assumptions, well-known social dynamics that threaten perfect design implementation, and results that can be presented in a simple group difference form that is intuitive and 224

nontechnical. By contrast, the rising popularity of RDD is a more recent phenomenon. Originally introduced by Thistlewaite and Campbell (1960), RDD languished for more than 30 years in specialized academic textbooks in a few fields. The design was seldom used in practice, and this lack of practical use left its implementation dynamics underexplored (Cook, 2008). But now the design has taken off. Since about 1995, a cadre of microeconomists has promoted RDD as a viable and valuable method for addressing selection bias in observational studies. Their renewed interest improved the theory of the design and generated practical methods for detecting and remediating shortfalls in its implementation to such an extent that RDD is now the officially preferred alternative to RE at the Institute for Educational Sciences whenever the latter is not possible (Schochet, Cook, Deke, Imbens, Lockwood, Porter, et al., 2010). At first glance, REs and RDDs seem to be radically different. Randomized experiments create groups that, in expectation, completely overlap on all observed and unobserved variables. By contrast, RDDs create groups that are totally different from each other on the assignment variable, having no overlap because all units on one side of the cutoff get treatment and all units on the other side do not. The marked overlap difference obscures deeper conceptual similarities that have led Lee and Lemieux (2009) to call REs and RDDs “close cousins.” The main theoretical similarity between REs and RDDs arises because both designs require complete knowledge of the procedures by which study units, such as people, schools, or neighborhoods, are assigned to treatment or comparison status. In a properly implemented RE, chance alone determines treatment assignment, making the receipt of the intervention and potential outcomes statistically independent events. Chance plays almost exactly the same role in a RDD, although in a conditional way, that is restricted to subpopulations immediately above and below the cutoff. The idea is that among units near the cut-off value, chance is the major determinant of whether a unit scores at exactly the cutoff value or one value away. A small difference in assignment scores leads to a complete reversal in treatment assignment. Secure application of any theoretical method depends on more than theory. It also requires evidence demonstrating that all the assumptions required for unbiased application are met in any specific case of research. Many assumptions about treatment implementation are similar across REs


and RDDs and also have similar solutions should the assumptions be violated. Given such similarity in statistical theory and strategies for dealing with implementation shortfalls, it is perhaps not surprising that nearly all carefully implemented studies contrasting RE and RDD estimates have concluded that they rarely differ in the causal estimates achieved (Cook & Wong, 2008). Even so, the two designs are not identical. They do not detect the same effect with equal statistical precision; a RDD requires more cases than a RE because it must account for the effects of differences in both the assignment variable and the treatment assignment. This makes RDD less efficient in a statistical sense. Also, the two designs usually produce estimates of different population parameters. When cleanly implemented, a RE produces an estimate of an average treatment effect in the experimental sample, whereas a RDD produces an estimate of the average treatment effect among units with assignment scores very near to the cut-off value. This means that, all other things being equal, RDD results are less general than RE ones. Also, unless a particular RDD application is characterized by very dense sampling immediately around the cutoff, causal interpretation of the results requires correctly dealing with the possibility that the functional form relating the assignment variable to the outcome is unknown. In contrast, RE is less dependent on functional form issues. It is easy to see, therefore, that when confronted with a choice between a RE and a RDD, a RE is the recommended choice.
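The efficiency contrast can be made concrete with a small simulation; the following R sketch, using arbitrary parameter values rather than data from any study, fits the same regression to outcomes generated under random assignment and under cutoff-based assignment:

set.seed(123)
n   <- 1000
z   <- rnorm(n)              # assignment variable
y0  <- 0.5 * z + rnorm(n)    # untreated potential outcome
tau <- 0.3                   # true treatment effect

# Randomized experiment: treatment independent of z
t_re <- rbinom(n, 1, 0.5)
y_re <- y0 + tau * t_re
fit_re <- lm(y_re ~ t_re + z)

# Regression-discontinuity design: treatment determined by the cutoff z >= 0
t_rd <- as.numeric(z >= 0)
y_rd <- y0 + tau * t_rd
fit_rd <- lm(y_rd ~ t_rd + z)

# Both estimates are unbiased here, but the RDD standard error is larger
# because the treatment indicator and z are highly correlated.
summary(fit_re)$coefficients["t_re", c("Estimate", "Std. Error")]
summary(fit_rd)$coefficients["t_rd", c("Estimate", "Std. Error")]

The larger standard error in the second fit is the practical sense in which a RDD needs more cases than a RE to reach the same precision.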

Similarities Between the Randomized Experiment and Regression Discontinuity Designs This chapter seeks to detail the central similarities and differences between the RE and RDDs noted above. Our discussion requires familiarity with the potential outcomes framework of Rubin (1978), and we provide that before launching into a detailed description of what makes the two designs such close cousins in theory and practice.

Theoretical Justifications for Randomized Experiments and Regression-Discontinuity Designs Randomized experiments and RDDs answer a particular class of causal questions of an if–then nature. The generic research question is of the form: If the treatment is made to vary, then will we later observe an outcome to differ between groups

with and without treatment? In any one research application, the answer we get is always limited to the specific way the treatment is constructed, the specific way the outcomes are measured, the particular population that is studied, the particular settings in which the study takes place, and the specific time period in which the study takes place. A large fraction of the causal research questions pursued in the social and health sciences belongs in this if–then question category, and study conclusions are inevitably conditioned by sampled study specifics. So typical causal questions might be: What is the effect of attending a charter school on student test scores, given the particular charter and control schools sampled, the test score measure used, the grades and localities sampled, and the time the study occurred? What is the effect of taking a particular medicine on the blood pressure of people with a particular health condition, given the many contextual features of the study in question? What is the effect of distributing campaign literature on voting behavior, given the context? This chapter deals with such if–then questions conditioned by many study details that are usually not part of the explicit if–then causal formulation but are nonetheless contained within any answer that might be offered. Such answers require a comparison between those study outcomes that occurred in an observed state of the world and those that would have occurred in an alternative state of the world that is, alas, totally conjectural, totally hypothetical. It is easiest to understand this when considering just two alternatives. In the treated state, we can observe who attends a charter school, takes blood pressure medication, or receives political campaign literature in the mail. We can also observe the postintervention performance of these individuals on study outcomes. In the untreated state, we contemplate what outcomes would have occurred to these same persons on these same outcomes had there been no intervention—that is, if they had attended a neighborhood public school rather than a charter one, if they did not take blood pressure medication and continued their existing lifestyle, or if the campaign literature had not been distributed. It is logically impossible for a single person to experience the treated and untreated states of the world at the same time. Yet this is exactly what is central to interpreting an intervention’s effect. So the absence of the unobserved state of the world is very serious and has been called the “fundamental problem of causal inference” (Holland, 1986, p. 947).
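A small simulated example, with purely illustrative values, may help fix ideas about why individual-level causal effects are unobservable while average effects remain estimable under random assignment:

set.seed(42)
n  <- 10000
y0 <- rnorm(n, mean = 50, sd = 10)   # potential outcome under control
y1 <- y0 + 5                         # potential outcome under treatment
t  <- rbinom(n, 1, 0.5)              # random assignment
y  <- (1 - t) * y0 + t * y1          # only one potential outcome is ever observed

# Individual effects y1 - y0 can never be computed from real data, but under
# randomization the simple difference in group means recovers the average
# treatment effect (here, 5) up to sampling error.
mean(y[t == 1]) - mean(y[t == 0])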


The Solutions to This Problem Offered by Randomized Experiments and RegressionDiscontinuity Designs. Solving the problem of causal inference requires adding assumptions. Rubin (1974) has pointed to one solution that he and many others prefer. Although we cannot obtain valid simultaneous estimates of outcome differences for the same person observed under different treatment conditions, we can observe average group differences. These are causally interpretable so long as it can be assumed that the missing control group data in the treatment group are missing at random. The most convincing practical circumstance meeting this missing variable assumption is RE where the average unit in the randomly formed control group is identical on expectation to the average unit in the treatment group. This is a key point: individual units in the treatment and control groups may not be identical, but successful randomization ensures that the average characteristics and treatment responses of treatment and control group members are identical. A RE is so powerful because it justifies the key assumption needed to attach a causal interpretation to simple mean outcome differences between treatment and control groups. In the classical RE with full compliance to the assigned treatments, the mean difference is interpreted as the average treatment effect for the study population because it is plausible to assume that the distribution of observed and unobserved variables is similar in the treatment and the comparison groups. Hence, selection threats are randomly distributed across the treatment conditions and cannot constitute an internal validity threat like they would if they were differently distributed across the groups being compared. Another way of thinking about the advantage of random assignment is that the selection process into one treatment or the other is completely known and can be modeled by the researcher (Shadish, Cook, & Campbell, 2002). At first glance, the theoretical justification for causal inference under RDD seems at odds with the justification for causal inference in the case of RE. Although RE seeks to solve the problem of causal inference by maximizing the overlap between treatment groups on observed and unobserved characteristics, RDD seeks to achieve the same goal by minimizing overlap, at least on the assignment variable. On closer inspection, however, it becomes clear that these apparent differences are only at the surface level. In fact, the characteristics that produce a causally interpretable result in a 226

well-implemented RDD are actually very similar to the features responsible for the strength of RE. In one conceptualization of RDD, the design is viewed as an actual random assignment experiment among units with assignment scores near the cutoff value (Lee & Lemieux, 2009). To see the argument more clearly, consider two high school sophomores who take the PSAT: one student scores at the cutoff and is considered eligible for a national merit scholarship, whereas the other scores one point below the cutoff. A one-point difference in PSAT scores is very unlikely to reflect a real difference in ability between the two students. It is much more likely that random noise or measurement error in the PSAT, rather than true ability differences, accounts for whether a unit is assigned to the treatment or control condition. The difference in assignment scores between the two students results almost entirely from chance, and it is this chance difference that determines treatment assignment. Seen in this way, RDD draws its interpretive power from the same treatment assignment mechanism as RE. In a second and more traditional conceptualization of RDD, treatment effects are not estimated by extrapolating the relationship between the assignment variable and posttest on the untreated side of the cutoff into the treated side. The counterfactual is given by the slope and intercept of a regression line, and the simplest null hypothesis is that both treatment and comparison group regression lines have the same intercept at the cutoff. Should there be a difference and all other conditions for causal inference are met—especially the comparability of regression functions on each side of the cutoff— then an inference is drawn that the treatment caused the difference in the intercept. Again, note that the theoretical justification for the RDD is the same as that for the randomized experiment—the selection process is perfectly known and can be modeled by the researcher. An additional assumption is needed in the RDD case, as the functional form of the regression relating assignment to outcome has to be perfectly modeled. Nonetheless, both RE and RDD studies, when implemented properly, create conditions where it is reasonable to assume that the potential outcomes in the treatment and control conditions are missing at random, entailing that the potential outcomes in the control group are equal to what would have been found in the treatment group had it not experienced the treatment. Making this Clear Through Potential Outcomes Notation. Let us formalize this with some notation.1 We begin by characterizing each


member of a population on a set of variables (Yi (1), Yi (0), Ti , Xi ). The subscript i indexes members of the population whom, for convenience, we will consider to be individual persons. Then, Xi is a vector of baseline characteristics, and Ti is a treatment indicator such that Ti = 1 if the i th person received the treatment and Ti = 0 if the person received the control condition. Note that the treatment and control conditions are assumed to be internally homogeneous so that, in a job training program, every person with Ti = 1 must receive the same job training2 . Y(0)i and Y(1)i are the person’s potential outcomes under the control and treatment conditions, respectively. Y (0)i is the outcome that the ith subject will experience if he is exposed to the control condition and Y (1)i is the outcome the same subject will receive if he is exposed to the treatment. Although values of (Ti , Xi ) are observable for every member of the population, only one of the two potential outcomes (Y (1)i , Y (0)i ) can be observed for any single individual. The treatment condition a person actually receives determines which potential outcome can be observed for that person. The observed outcome for each person is formally given as Yi = (1 − Ti )Y (0)i + Ti Y (1)i . However, the unobserved, latent, or counterfactual potential outcome represents what this treatment recipient’s outcome would have been if he or she had experienced the alternative treatment. Every person in the population is missing one of these two potential outcomes, and this means that we can never directly measure a treatment-control contrast at the person level. Such missing data are not the product of faulty data collection, as with survey nonresponse. They merely express the physical reality that two distinct treatment conditions cannot be simultaneously experienced by the same person. Although we are never able to estimate an individual treatment effect, we are able to estimate average group difference. In a RE, randomization assures potential outcomes are missing at random in expectation, so the average treatment effect, E [Yi (1) − Yi (0)], can be estimated using the difference in mean outcomes in the treatment and control groups. In practice, it is very common for researchers to estimate treatment effects in a RE using a regression model that includes the vector of measured covariates to improve statistical precision. A typical regression looks like Yi = Xi β + Ti τRE + εi . Under mild assumptions about the distribution of the error term εi , the regression-adjusted experimental treatment effect estimator produces valid estimates of

the treatment effect and the standard error of the treatment effect. Although strictly speaking it is not necessary to control for covariates in randomized experiments, in practice many researchers do so for two reasons. First, including pretreatment covariates controls for any chance differences between the treatment and control groups. Second, including pretreatment covariates can improve statistical power by explaining some of the variance in the outcome. The situation is conceptually very similar in a RDD. We start by extracting a particular covariate from the vector of covariates X. The new covariate is the continuous assignment variable, and we denote it by Z. In a RDD, individuals are assigned to treatment solely on the basis of a cutoff score, z_c, on the assignment variable (Z). When the assignment rule is implemented perfectly, the causal quantity of interest is the discontinuity directly at the cutoff, which can be written as the expected difference in potential outcomes at the cutoff such that

\tau_{SRD} = E[Y_i(1) - Y_i(0) \mid Z_i = z_c] = E[Y_i(1) \mid Z_i = z_c] - E[Y_i(0) \mid Z_i = z_c]. \quad (1)

Because we observe only control cases but no treatment cases at the cutoff, the causal estimand is better defined in terms of the difference in limits of conditional expectations as we approach the cutoff from below and above:

\tau_{SRD} = \lim_{z \uparrow z_c} E[Y_i(1) \mid Z_i = z] - \lim_{z \downarrow z_c} E[Y_i(0) \mid Z_i = z]
           = \lim_{z \uparrow z_c} E[Y_i \mid Z_i = z] - \lim_{z \downarrow z_c} E[Y_i \mid Z_i = z]. \quad (2)

The second equality is with the observed rather than potential outcomes. This holds because we observe only the potential treatment outcomes below the cutoff and only the potential control outcomes above or at the cutoff. The difference in limits represents the discontinuity at the cutoff. There are many estimation strategies in a RDD. One of the most common approaches is a regression model that controls for a flexible polynomial series in the assignment variable as well as the treatment variable. Other methods, such as locally weighted regression, attempt to weaken functional form assumptions even further. As with the analysis of a RE, it is quite common to incorporate covariates into the estimation of treatment effects in RDDs. These methods are a straightforward way of increasing the statistical power of the design.
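To make these estimation strategies concrete, the following R sketch, based on simulated data with arbitrary parameter values, estimates the discontinuity with a global polynomial and with a simple local regression near the cutoff; it is a bare-bones illustration rather than a recommended analysis:

set.seed(7)
n <- 2000
z <- runif(n, -1, 1)                  # assignment variable, cutoff at 0
t <- as.numeric(z >= 0)               # in this sketch, units at or above the cutoff are treated
y <- 0.4 * t + 1.2 * z - 0.5 * z^2 + rnorm(n, sd = 0.5)

# (a) Global polynomial in the assignment variable plus the treatment indicator
fit_poly <- lm(y ~ t + z + I(z^2))
coef(fit_poly)["t"]                   # estimated discontinuity at the cutoff

# (b) Local regression within a bandwidth around the cutoff, with separate
#     slopes on either side
h <- 0.25
dat <- data.frame(y, t, z)
fit_local <- lm(y ~ t * z, data = subset(dat, abs(z) <= h))
coef(fit_local)["t"]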


Implementation Challenges in Practice Given that the theoretical warrants for both designs are so similar, it should not be surprising that the RDD and the RE share common implementation challenges that threaten the validity of their causal estimates. They include violation of the Stable Unit Treatment Assumption (SUTVA), attrition, treatment contamination, treatment misallocation, and treatment manipulation. SUTVA. To produce an unbiased estimate of the treatment, both the RE and the RDD assume that each subject’s potential outcomes are individualistic—that is, they depend on whether he or she receives the treatment but not on whether other people receive the treatment. This common assumption goes by different names. In statistics, Cox (1958) describes it in terms of no interference between units, whereas Rubin (1990) uses the phrase stable unit treatment value assumption to refer to the joint assumptions of individualistic treatment response and homogeneously defined treatment conditions. In economics, the same type of restriction is imposed by assuming that there are no general equilibrium effects, no externalities, and no social interactions. This individualistic response assumption underlies the analysis of most RE and RDD studies. In practice, there are many possible violations of SUTVA. Peer effects are a prominent example. When treatments are administered to individuals in groups rather than independently, the effectiveness of the treatment can depend not only on the treatment itself but on the quality and behavior of the other individuals assigned to the treatment. For example, even if students are randomly assigned to attend a charter school, it is conceivable that their potential outcomes are affected both by attending the charter school and by which other students are assigned to attend the school. General equilibrium effects are another common violation of SUTVA. General equilibrium effects are a concern when there are plans to “scale up” a program by providing the treatment to a larger group of people. Changing the scale of the experiment can alter the potency of the treatment or lead to behavioral adjustments that alter the net response to the treatment in the population. Garfinkel, Manski, and Michalopoulos (1992) have described a series of ways that microlevel experiments may produce economic effects that are very different from full-scale policies that are implemented on a macro level. A recent example from education policy is the effort to mandate reduced class sizes in California public 228

schools. The state mandate was motivated in part by evidence from the Tennessee Star randomized controlled trial, which illustrated that smaller classes had positive causal effects on students’ academic achievement. But efforts to adopt smaller class sizes across California led to lower achievement scores in inner city schools. One interpretation of the California experience is that there is an important interaction between the effects of small class sizes and the supply and distribution of high-quality teachers across suburban and inner city schools. The beneficial effects of smaller class sizes do not survive when the supply of teachers is relatively fixed and teachers are able to sort out of inner city schools and into suburban schools (Stetcher & Bohrstedt, 2000; Krueger & Whitmore, 2001; Mishel & Rothstein, 2002). The efforts to make use of the results of a RE study in California show that violations of SUTVA can be practically important. In most studies, the SUTVA assumption is not directly testable. Theoretical analysis of social processes in which the potential outcomes of one unit might be affected by treatment assignment of other units in the same study are perhaps the central way that researchers can assess the validity of the SUTVA assumption. Study Attrition. In many field experiments, postassignment attrition can be a serious problem when participants drop out of the study after random assignment. When this occurs, the researcher is only able to collect outcome data for some of the participants assigned to each treatment condition. This is particularly problematic when the pattern of attrition from the study varies by treatment condition. If differential attrition occurs, then the treatment and control groups can no longer be assumed to be equivalent, and posttest differences cannot be attributed to the intervention alone, thus threatening the validity of the causal estimates. The best ex ante solutions for attrition are to reduce obstacles for participants’ involvement in the study, to ensure that participants complete outcome measurements, and to institute careful tracking of participants for the full duration of the study. Pretests are the best measures for determining whether differential attrition occurred and the size and direction of the bias because they are more likely to be highly correlated with the outcome than any other variables. Given this high correlation, it is difficult to imagine group differences that affect the outcome but not the pretest measure. In RDD, differential attrition in the treatment and control groups poses similar challenges, producing biased causal estimates even when samples are truncated to a neighborhood near the cutoff.


However, differential attrition often is less of a concern in RDD studies because they analyze “natural policy cutoffs” and examine administrative data sets that include assignment, treatment, and outcome variables for all units in the study. In general, however, researchers should investigate and report any cases of differential attrition that occurs in a RDD as they would in an experiment and take similar steps to mitigate the threat. Treatment Contamination. Estimating treatment effects in both REs and RDDs requires that there is no treatment contamination—that is, all of the units assigned to the treatment group actually receive the treatment, and all of the units assigned to the control group do not receive the treatment. The latter is an often overlooked possibility and can attenuate estimates of program effects. Treatment contamination is particularly problematic when treatment and control units in a study are in close proximity with one another. For example, suppose a group of teachers within a school are either randomly assigned or assigned through a cutoff and assignment score to receive new professional development materials, whereas another set of teachers are assigned to continue using the materials that had previously been available in the school. One could conceive of a situation in which the teachers who were assigned to the treatment group shared the resources they received with teachers in their school that were assigned to the control group. Although such a situation would reflect positively on the collaborative culture within that school, it would also negatively bias the estimate of the effectiveness of the new materials because the effect of the materials in the control classrooms would be differenced out of the effect estimate from the treated classrooms. Researchers should carefully track not just what conditions units were assigned to but also whether they actually received treatment regardless of treatment status. Treatment Noncompliance. Noncompliance with treatment assignment can be a problem in both REs and RDDs, although it is typically labeled as treatment crossover in REs and as “fuzziness” in RDDs. Much of the implementation literature devoted to randomized experiments addresses problems related to treatment noncompliance or misallocation. This occurs when participants have knowledge of treatment conditions and override the assignment mechanism to select into a preferred treatment status. In applied research, there are many instances in which individuals involved in a study may knowingly or unknowingly subvert the randomization process. For example, when students are

randomly assigned to attend a charter school, school officials may make exceptions to randomization for students whose siblings already attend the school, students whose parents are politically connected, and/or students who they think would particularly benefit from attending the school. Treatment noncompliance introduces bias because individuals are no longer assigned randomly but by some process that is not observed by the researcher. In a RDD context, participants also may override the cutoff rule and introduce selection into the assignment mechanism. For example, a prekindergarten (pre-K) program may enroll children based on their birthdates and a state cutoff. Children with birthdays before the cutoff date may be admitted into the program, whereas those with birthdays after the cutoff must wait to enter pre-K. Treatment misallocation would occur if children with birthdays before the cutoff were held back another year before entering school, and children with birthdays after the cutoff were sent to pre-K early. For both the RE design and the RDD, the literature identifies the former cases as “treatment no-shows” (individuals are assigned a treatment, but do not receive it) and the latter cases as “treatment crossovers” (individuals are assigned to a control condition, but receive treatment anyway). These cases are often called “fuzzy” research designs, because assignment to treatment— either through random assignment or side of the assignment variable—does not cause a sharp change from zero to one in a unit’s probability of receiving the treatment. There are several methods of adjusting for noncompliance and crossover. Each method requires that researchers have separate measures of the treatment assigned and the treatment is received by each subject. In the case of treatment noncompliance, Imbens, Angrist, and Rubin (1996) have shown that in experiments, the local average treatment effect (LATE) may be inferred for the subset of units limited to those who actually take up treatment (the TOT estimate). Hahn, Todd, and van der Klaauw (2001) have shown that in a RDD, the LATE may be estimated for a subset of units that are induced to take up the treatment as a result of their score on the assignment variable. In both a RE and a RDD, the LATE can be computed as the difference in mean outcomes for the treatment and comparison groups divided by the difference in treatment receipt rates for both groups at the cutoff. The ratio, called the Wald estimator, is equivalent to two-stage least squares regression estimation when there is a single instrument and a binary treatment. Interpreting



the Wald ratio as the LATE requires the additional assumption that no subjects are strict assignment “defiers” who are entering the opposite of their assigned treatment condition no matter which condition they are assigned too. That is, there are no units that would choose the treatment if assigned to control and that would choose control if assigned to treatment. Although this assumption is usually not verified empirically, researchers should assess the plausibility of the threat by considering whether subjects have motivation to behave as “defiers.” It is important to note that this procedure requires that the researcher has complete knowledge of participants’ assigned treatment status and any deviations from that assigned status. Manipulation of Treatment Assignment. In some REs or RDDs, units or program administrators may deliberately manipulate their assignment status to enter a desired treatment, but the researcher has no knowledge of what condition the participant would have received in the absence of manipulation. Consider a hypothetical experiment where individuals are randomly assigned to two rooms where they will participate in either a reading or math intervention. Individuals assigned to the first room receive a reading intervention, whereas those assigned to the second room receive math training. Say some of the participants are math phobic and wish to avoid the math intervention, so they proceed to the first room. If the researcher fails to record to which rooms participants were originally assigned, then he or she may fail to recognize that participant sorting occurred after random assignment, and thus treatment effects would be biased. In the RDD literature, this implementation threat is described as “manipulation of the assignment score,” and it is most likely to occur when the following three conditions are met: when the assignment score is under the participant or program administrator control, when the cutoff is publicly known, and when the participant has strong motivation to avoid (or enter) treatment. For example, the No Child Left Behind (NCLB) legislation passed by Congress to hold low-performing schools accountable, established well-known cut-off scores for establishing whether a particular school was making Adequate Yearly Progress (AYP). Because of the high-stakes consequences of NCLB, schools near the cut-point had strong incentives to do anything in their power to push their scores above the AYP cutpoint. This pressure around the cutoff can be seen in Figure 12.1, which illustrates AYP data from Texas. The histogram illustrates that there was a marked

Figure 12.1 Example of drop in density of observations at the cutoff from Texas AYP data (horizontal axis: math proficiency score; vertical axis: density).

dropoff in observations just below the cutoff, and more cases than would be expected just above the cutoff. Such a histogram would give the applied researcher pause if he or she wanted to apply a regression discontinuity design (RDD) because it is evidence that there is manipulation around the cutoff (Wong, 2010). Unfortunately, one cannot definitively test whether individuals are manipulating their assignment status in either the RE design or the RDD. For both the RE and RDD, the best way to ensure that there is no participant sorting is to make sure that the assignment process is not publically known to individuals who could manipulate their treatment status. In the randomized experiment, researchers should record what conditions participants were initially assigned and check to make sure that the protocol was followed appropriately. This is often feasible because most experiments are planned prospectively and need only to be implemented thoughtfully. In RDD, however, addressing manipulation of the assignment score may be more difficult, especially in cases where a desired (or undesired) treatment is allocated by a broad-based policy cutoff. Here, the researcher should gather data on how the cutoff was implemented and what information individuals had on the cutoff score prior to the measurement of the assignment variable. If this is not possible, then researchers should examine the distribution of cases empirically to determine whether there is a discontinuity in the density of cases at the cutoff point. Researchers should first examine the data graphically using a histogram or kernel density plot. Then statistical testing can be done using tools such as the McCrary (2008) test, which examines whether there is a discontinuity in the density of cases at the cutoff. However, although these visual representations and


statistical tests can provide reassurance to the analyst, they do not guarantee that there is not manipulation at the cutoff. The best course of action combines thoughtful consideration of the assignment process with empirical analyses of the distribution of cases around the cutoff.
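To make this diagnostic concrete, the following sketch (in R, with simulated data; the variable names, the cutoff value, and the data-generating process are hypothetical choices, not part of the chapter) illustrates the kind of graphical and crude numerical check of the density around the cutoff described above. Formal McCrary-type density tests are available in dedicated regression discontinuity software and would typically be preferred in practice.

```r
# Illustrative sketch (simulated data; variable names, the cutoff, and the
# distribution are hypothetical): a graphical and a crude numerical check for a
# discontinuity in the density of the assignment variable at the cutoff.

set.seed(1)
score  <- rnorm(5000, mean = 50, sd = 15)   # assignment variable (simulated)
cutoff <- 40                                # publicly known threshold

# 1. Graphical check: histogram with a bin edge placed exactly at the cutoff
breaks <- sort(unique(c(seq(floor(min(score)), ceiling(max(score)) + 2, by = 2), cutoff)))
hist(score, breaks = breaks, main = "Density of assignment scores",
     xlab = "Assignment score")
abline(v = cutoff, lty = 2, lwd = 2)

# 2. Crude numerical check: compare counts in narrow bins just below and just
#    above the cutoff; absent manipulation, they should be roughly similar.
h <- 2
n_below <- sum(score >= cutoff - h & score < cutoff)
n_above <- sum(score >= cutoff & score < cutoff + h)
binom.test(c(n_above, n_below))   # rough test of equal density near the cutoff

# Formal McCrary-type density tests are implemented in R packages for
# regression discontinuity analysis (e.g., 'rdd', 'rddensity').
```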

The Similarity of Causal Estimates in Practice

In recent years, researchers have empirically examined the extent to which various kinds of nonrandomized experiments can approximate results from REs for testing the effects of policies and practices in fields such as education, medicine, public health, job training, and psychology (e.g., Cook, Shadish, & Wong, 2008; Glazerman, Levy, & Myers, 2003; Heckman, Ichimura, & Todd, 1997; Shadish, Clark, & Steiner, 2008). Several implementations of the RDD have been compared to similar randomized experiments to test the comparability of their estimates (Aiken, West, Schwalm, Carroll, & Hsiung, 1998; Buddelmeyer & Skoufias, 2004; Black, Galdo, & Smith, 2007; Berk, Barnes, Ahlman, & Kurtz, 2010; Shadish, Galindo, Wong, Steiner, & Cook, 2011). These within-study comparisons take a causal estimate from an experiment and compare it to the estimate from an RDD that may share similar settings, interventions, and/or measures, but with different units. The goal of these studies is to assess whether the RDD produces the same causal estimate as the RE when implemented in the real world. Cook and Wong (2008) and Wong (2010) summarize results from these comparisons and find that the RDD generally replicates the experimental benchmark results. It is especially impressive that all five comparisons arrived at similar conclusions about the empirical validity of the RDD, despite substantial variation in setting, treatment, population type, outcome measures, and timing.

Differences Between the Two Designs

Based on the similarities described above, some scholars as early as Sween (1971) have argued that the RDD should be treated as a RE rather than as an observational study. However, substantial enough differences exist between the two designs that we argue they should be viewed as distinct.

Statistical Power

One primary difference between the RE and the RDD is that they do not have the same statistical precision; the RDD is less efficient. To understand why, it helps to review expressions for the impact estimator and variance of the RDD and RE.3 For ease of comparison, we begin with the basic model for estimating treatment effects in an RE design that includes an unnecessary assignment variable term:

\[
Y_i = \alpha_0 + \alpha_1 T_i + \alpha_2 Z_i + \varepsilon_i. \qquad (3)
\]

As above, Yi is the outcome score for unit i; Ti is an indicator variable for whether unit i was randomly assigned to treatment or control; and Zi is the assignment variable, which is an unnecessary regressor that is uncorrelated with treatment status in the case of random assignment. The treatment effect is estimated by α1, and εi ∼ N(0, σe²) is an iid error term. The formula for the variance of the RE impact estimator with some misallocation of treatment is then

\[
\mathrm{Var}_{re}(\hat{\alpha}_1) = \frac{\sigma^2\left(1 - R^2_{re}\right)}{np(1-p)(1 - ns - co)^2}, \qquad (4)
\]

where σ² is the variance of mean outcomes across units within the treatment or comparison groups, R²_re is the square of the correlation between outcomes and the assignment variable, np(1 − p) is the total variation in treatment status across units, ns is the no-show rate (units assigned to treatment that do not receive it), and co is the crossover rate (units assigned to the comparison group that receive treatment). The expression for the impact estimator for the RDD is the same as the formula presented in Equation 2, with the exception that Ti is an indicator variable for whether unit i was assigned to treatment or control on the basis of the assignment variable and cutoff. In this setting, the assignment variable, Z, is no longer an unnecessary regressor. The expression for the variance of the RDD estimator with misallocation is

\[
\mathrm{Var}_{rd}(\hat{\alpha}_1) = \frac{\sigma^2\left(1 - R^2_{rd}\right)}{np(1-p)(1 - ns - co)^2\left(1 - R^2_{ca}\right)}, \qquad (5)
\]

where the error variances (the numerators in Equations 4 and 5), treatment status variances [np(1 − p)], and misallocation rates [(1 − ns − co)²] are equivalent in the RDD and the RE (Schochet, 2009). The only exception here is the inclusion of the factor 1/(1 − R²_ca) in the RDD expression, where R²_ca is the square of the correlation between the treatment and assignment variables. The variance of the RDD estimator is penalized because of the multicollinearity between the treatment status Ti and the assignment variable Zi. The multicollinearity is measured in the regression context by 1/(1 − R²_ca). Notice that in the RE, there is no correlation between the assignment variable and treatment status because of the random assignment procedure. This important difference is the main reason that the RDD estimator is statistically less precise than the RE estimator. The degree of collinearity between the assignment variable and the treatment status defines the RD design effect relative to an equivalent RE:

\[
\text{RD Design Effect} = \frac{1}{1 - R^2_{ca}}. \qquad (6)
\]

This result suggests that most power considerations for the RE also apply to the RDD, including sample size, distribution of the outcome, treatment misallocation, clustered designs, and R-squares of control covariates. However, other factors, such as the shape of the distribution around the cutoff, the location of the cutoff along the assignment variable, the balance of treatment and comparison units, and the shape of the response function, affect collinearity of the assignment and treatment variables. These are power considerations unique to a RDD. The fact that it is the collinearity of the treatment and assignment variable that is responsible for the design effect in a RDD leads to a surprising result related to the preferred balance of the sample in a RDD compared to a RE. In a RE, balanced sample splits are ideal for increasing power because they maximize p(1 − p) in the denominator of the variance equation. However, in a RDD, this is not the case because the collinearity of the assignment and treatment variables can increase with balanced samples. Depending on the empirical distribution of the assignment variable, unbalanced designs may have greater statistical power than balanced designs.
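As a rough illustration of Equation 6, the following sketch (R, simulated data; the normal distribution for the assignment variable and the chosen cutoff locations are arbitrary assumptions) computes the squared correlation between the treatment indicator and the assignment variable for several cutoffs and converts it into the corresponding design effect.

```r
# Illustrative sketch (simulated data; the normal assignment variable and the
# cutoff locations are arbitrary assumptions): the RD design effect of
# Equation 6, computed for cutoffs at different points of the distribution.

set.seed(2)
z <- rnorm(100000)                       # assignment variable

design_effect <- function(z, cutoff) {
  t     <- as.numeric(z >= cutoff)       # sharp-RDD treatment indicator
  r2_ca <- cor(t, z)^2                   # squared correlation of treatment and assignment
  1 / (1 - r2_ca)                        # variance inflation relative to an RE
}

cutoffs <- quantile(z, probs = c(0.5, 0.7, 0.9))   # balanced vs. unbalanced splits
sapply(cutoffs, function(cut) design_effect(z, cut))

# For this normal assignment variable, the collinearity penalty is largest when
# the cutoff sits at the median (a balanced split) and shrinks toward the tail.
# Net power also depends on p(1 - p) in Equation 4, so the preferred balance
# depends on the empirical distribution, as discussed above.
```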

Analytic Modeling

In RE data, analysis is fairly straightforward: the analyst simply takes the difference in posttest means. In the case of a RDD, however, the analyst must carefully model the relationship between the assignment variable and outcomes using parametric, semi-parametric, or non-parametric approaches. In the parametric approach to modeling regression discontinuity, the outcome is regressed on treatment status and on the assignment variable centered at the cutoff, and the discontinuity at the cutoff is interpreted as the treatment effect. The estimated size of this discontinuity can be very sensitive to functional form assumptions, specifically to nonlinear relationships between the assignment variable and the outcome

and to interactions between the assignment variable and treatment. In every RDD, there are many nonlinear functions and interactions that could be included. The challenge is choosing the right ones. Visual inspection of the data can provide some assistance, and analysts should err on the side of including additional terms in the equation (i.e., overfitting), because doing so does not introduce bias, although it will reduce statistical power. When the data set is large enough to support sensitivity analyses, one can develop the model using a randomly selected half of the data and then use the other half of the data to cross-validate the findings. However, given the power requirements of regression discontinuity, discussed above, this advice is often impractical. Given the difficulty of correctly specifying the functional form in parametric regression discontinuity designs, analysts are increasingly turning to non-parametric and semi-parametric approaches. Although non-parametric and semi-parametric approaches relax the functional form assumptions away from the cutoff, they rely on the specification of a bandwidth. Current best practice is to employ multiple approaches to modeling the response function (parametric, non-parametric, and semi-parametric) and examine the extent to which the results present a consistent picture of program effects.
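As a minimal illustration of the parametric approach just described, the following sketch (R, simulated data; the variable names, the linear response function, and the true effect size are hypothetical) centers the assignment variable at the cutoff, regresses the outcome on treatment, the centered assignment variable, and their interaction, and then refits the model with higher-order terms as a simple sensitivity check.

```r
# Minimal sketch (simulated data; variable names, the linear response function,
# and the true effect are hypothetical) of the parametric RDD analysis described
# above: regress the outcome on treatment, the assignment variable centered at
# the cutoff, and their interaction; the coefficient on treatment is the
# estimated discontinuity.

set.seed(3)
n      <- 2000
z      <- runif(n, 0, 100)                          # assignment variable
cutoff <- 50
t      <- as.numeric(z >= cutoff)                   # sharp assignment rule
y      <- 10 + 0.05 * z + 3 * t + rnorm(n, sd = 4)  # true effect at the cutoff = 3

zc  <- z - cutoff                                   # center at the cutoff
fit <- lm(y ~ t + zc + t:zc)                        # separate slopes on each side
summary(fit)$coefficients["t", ]                    # estimated treatment effect

# Simple sensitivity check: add higher-order terms (overfitting); the estimate
# should be stable if the functional form is adequate.
fit3 <- lm(y ~ t * poly(zc, 3, raw = TRUE))
summary(fit3)$coefficients["t", ]
```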

Different Causal Estimands

A final distinction between a RE and a RDD is that they—in principle—produce information on different counterfactual parameters. The RDD produces estimates of counterfactual parameters that prevail in the subpopulation defined by the cutoff value of the assignment score. For example, if a treatment is assigned to all people in a population who are over age 65 years, then a RDD produces estimates of the average treatment effect for the subpopulation of 65-year-olds. Average treatment effects for other subpopulations, such as 66-year-olds or 65- to 80-year-olds or the entire population, are not identified. Specifically, the typical sharp RDD study produces estimates of E[Y(1)|x = c] and E[Y(0)|x = c] and then combines these estimates to compute an estimate of the mean difference at the cutoff. In contrast, the RE design produces estimates of counterfactual parameters that prevail in the entire study population. Formally, a typical RE study will report estimates of E[Y(1)] and E[Y(0)] and then combine these estimates to form an estimate of a mean difference. The difference is in the conditioning:


an RDD produces estimates of average treatment effects within the cutoff subpopulation, and a RE produces estimates that are not conditional on the value of the assignment variable. Notice that if the RE includes data on the assignment covariate used in a RDD study, then it is possible to estimate the RDD parameter using the RE data. In principle, and ignoring sampling error, valid RDD and RE studies based on the same study population are each capable of estimating the RDD parameters. But without additional assumptions, the RDD study is not capable of estimating the RE parameters. This means, all else being equal, that RDD results are less general than results from REs. Recent research on RDD has focused on extrapolating the local treatment effect at the cutoff to broader populations of interest. The validity of such extrapolations depends on the validity of the assumptions that undergird them. Three approaches to extrapolation are often considered: (1) extrapolations based on estimates of the functional form; (2) extrapolations based on a pretest measure of the outcome; and (3) extrapolations based on a nonequivalent comparison group that was not subject to the RD assignment procedures (Cook & Campbell, 1979). One straightforward way to extrapolate effects away from the cutoff subpopulation is to use estimates of the functional form on the untreated side of the cutoff to estimate the counterfactual on the treated side of the cutoff. Such extrapolations rely heavily on correct estimation of the parametric functional form and on the assumption that there is not a change in functional form across levels of the assignment variable. Because neither condition can be verified empirically, this approach may not yield very convincing results. On the other hand, small extrapolations from the cutoff subpopulation may be quite trustworthy. A potentially more credible approach to extrapolation away from the cutoff involves the use of pretest data on outcomes of interest. For sample units with assignment scores below the cutoff value, both the pretest and posttest untreated outcome data are observed, and their slopes can be compared. For sample units above the cutoff, only the pretreatment untreated outcome data are observable. One strategy is to use information about the relationship between the pretest and posttest outcomes from below the cutoff to make inferences about the unobserved posttest data above the cutoff. The approach is quite similar to the difference-in-differences strategies that are often employed in a panel data context.

The key additional assumptions involve the out-of-sample invariance of the differences between mean pretest and posttest outcomes. Weaker assumptions that do not assume a completely stable difference but only a weak ordering of the two outcomes can be used to produce bounds on treatment effects outside the cutoff subpopulation. One advantage in the RDD setting is that the additional assumptions can be partly validated for units on the nontreated side of the cutoff. A closely related approach is to perform the extrapolation by incorporating information on the outcomes experienced by a comparison population, matched as closely as possible, that was ineligible for the treatment and so did not experience a discontinuity in treatment assignment. Here, again, the key advantage is that the untreated outcomes are observed for the control group on both sides of the cutoff. As with the pretest extrapolation, the idea is to use information about the relationship between the comparison group outcomes and the posttest (untreated) outcomes below the cutoff to make inferences about the untreated outcomes that would have prevailed above the cutoff. Some methodologists have begun to work on using comparison groups in RDD (see Lemieux & Milligan, 2008; Battistin & Rettore, 2008). A fourth approach exists for generalizing treatment effects, but this requires multiple sites that vary in their cutoff points, thus creating the potential to identify average treatment effects at a range of values on the assignment variable rather than a single one (Rubin, 1977). The opportunity to have multiple cutoffs occurs often in education because resource allocation often depends on cutoff decisions made locally—thus at school, district, or state levels. For example, a RDD evaluation of five state pre-kindergarten programs (Wong, Cook, Barnett, & Jung, 2008) is based on states whose enrollment birthdates ranged from September 1 to December 31. One state even varied its cutoff dates by district within the state. With only one cutoff, the average treatment effect is limited to children with birthdays around that date; but with more cutoffs, treatment effects can be estimated across a 6-month interval of birthdays. A related approach includes sites that vary in the variable used for assignment. In Reading First, funds were distributed to some schools by the percentage of students receiving free lunch, to others by the percentage on public aid, and to others by school reading averages. Synthesizing such results requires generalizing beyond a single assignment variable and can also facilitate the inclusion of larger and more



heterogeneous samples. In the Reading First evaluation, 17 different school districts and one state were therefore used, which itself enhances generalization. In cases where multiple cutoffs exist, there are two main options for summarizing RDD estimates. In one approach, researchers conduct a single analysis on an aggregated data set after recentering the assignment variable to create the same threshold for all sites. Alternatively, data from different cutoff points or sites can be analyzed separately, and then meta-analysis can be employed to aggregate the results. Although the first approach is dominant today, it requires intercepts and slopes that are constant across sites. Researchers can add site dummy variables and interaction terms, or they might pool observations only across those sites with homogeneous response functions. But such procedures increase the number and complexity of assumptions. The meta-analytic approach does not require these complicated procedures. However, the power of the meta-analytic approach depends more on the (usually modest) number of effect sizes than on the number of respondents, leading to a tradeoff between the increased efficiency of the aggregated approach and the more transparent bias reduction achieved by meta-analyzing RDD estimates.
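The following sketch (R, simulated data; the sites, cutoffs, and effect sizes are hypothetical) illustrates both options just described: a single pooled analysis after recentering each site's assignment variable at its own cutoff, and a separate-site analysis whose estimates are then combined with a simple inverse-variance weighting.

```r
# Sketch (simulated data; sites, cutoffs, and effect sizes are hypothetical) of
# the two options for summarizing a multiple-cutoff RDD described above:
# (a) one pooled analysis after recentering each site's assignment variable at
# its own cutoff, and (b) site-by-site estimation followed by a simple
# inverse-variance (fixed-effect) combination of the estimates.

set.seed(4)
cutoffs <- c(40, 45, 50, 55, 60)                        # site-specific cutoffs
dat <- do.call(rbind, lapply(seq_along(cutoffs), function(s) {
  z <- runif(400, 0, 100)
  t <- as.numeric(z >= cutoffs[s])
  y <- 5 + 0.04 * z + 2 * t + rnorm(400, sd = 3)        # common effect of 2
  data.frame(site = s, z = z, t = t, zc = z - cutoffs[s], y = y)
}))

# (a) Pooled analysis: common threshold at zero, site fixed effects,
#     assuming constant slopes across sites.
pooled <- lm(y ~ t + zc + t:zc + factor(site), data = dat)
summary(pooled)$coefficients["t", ]

# (b) Separate site estimates, then an inverse-variance weighted average.
site_est <- t(sapply(split(dat, dat$site), function(d)
  summary(lm(y ~ t + zc + t:zc, data = d))$coefficients["t", 1:2]))
w <- 1 / site_est[, 2]^2
sum(w * site_est[, 1]) / sum(w)                         # fixed-effect summary estimate
```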

Conclusion

Randomized experiments and RDDs are the only two research designs that are widely accepted as yielding causally interpretable results. Table 12.1

below summarizes the key similarities and differences between the two designs. Although they may appear radically different at first glance, we have shown that they rely on two common theoretical principles to account for the missing potential outcomes that cause the fundamental problem of causal inference. In both designs, the selection process is completely known and can be modeled, and in both designs chance plays a role in determining treatment receipt, either overall or at the cutoff. Both RDDs and REs share common implementation challenges that threaten the validity of their causal estimates. These include violation of the SUTVA, attrition, treatment contamination, treatment misallocation, and treatment manipulation. However, all of the assumptions and possible threats to validity in these designs are open to empirical probing. Careful examination of the data and the assignment process can rule out most plausible threats to validity in both RDDs and REs. This sets these two designs apart from other quasi-experimental designs that require researchers to put faith in fundamentally untestable assumptions to support causal inference. Propensity score matching, for example, requires the strong ignorability assumption to support causal inference. This assumption requires that we observe all covariates that determined selection into treatment and that are correlated with the outcome. In practice, one never knows whether this assumption holds and must appeal to a theoretical understanding of what covariates are likely to be correlated with selection and the

Table 12.1. Randomized Experiments and Regression Discontinuity Designs: Key Similarities and Differences

Warrant for causal inference. RE: selection process completely known; chance plays a role in determining treatment receipt. RDD: selection process completely known; chance plays a role in determining treatment receipt.

Implementation challenges. RE: violation of SUTVA; attrition; treatment contamination; treatment misallocation; treatment manipulation. RDD: violation of SUTVA; attrition; treatment contamination; treatment misallocation; treatment manipulation.

Causal parameter identified. RE: average treatment effect. RDD: average treatment effect at the cutoff.

Statistical power. RE: fairly high. RDD: lower than RE.

Statistical modeling. RE: fairly straightforward (mean comparison often examined within a regression framework). RDD: more stringent functional form assumptions (overfitting and non-parametric approaches recommended).



outcome to argue for the validity of causal inference. Similarly, using instrumental variables to reach causal conclusions requires an exclusion restriction. This assumption requires that the instrument is related to the outcome only through its relationship with the treatment. With the exception of using random assignment as an instrument, this approach again must resort to a theoretical understanding of the relationship among the variables to support causal inference (Dinardo & Lee, 2010). Based on the similarities described above, scholars since Sween (1971) have argued that the RDD should be treated as a RE rather than as an observational study. However, we have shown that RDDs and REs differ because the two designs produce different causal estimands, a RE has greater statistical power than a RDD, and statistical modeling is more complicated in the RDD. All three of these differences favor the RE, which provides greater efficiency and more general causal effects and relies on fewer modeling assumptions.

Acknowledgment

The authors were supported in part by grant R305U070003 from the Institute of Education Sciences, U.S. Department of Education. In addition, the first author was supported in part by grant R305B080027 from the Institute of Education Sciences, U.S. Department of Education.

Notes

1. The notation we use here has a complicated history in different scientific disciplines. Researchers sometimes attribute the original use of the potential outcomes notation to Neyman's (1923) description of randomized experiments; others give the credit to Rubin (1974) for popularizing the idea of the potential outcomes framework and using it to clarify the basic causal inference problem in observational settings. These debates aside, we think there is little doubt that the framework is a useful way of mathematically expressing the ideas of treatments and counterfactual outcomes.

2. Conversely, if Center A trains some persons and Center B others and each center runs a somewhat different training program, then two programs are at stake here and not one; such a situation violates the assumption that treatments are homogeneous.

3. This section summarizes work presented by Schochet (2009). Readers should refer to Schochet's paper, Statistical Power for Regression Discontinuity Designs in Education Evaluations, for further discussion.

References

Aiken, L. S., West, S. G., Schwalm, D. E., Carroll, J., & Hsiung, S. (1998). Comparison of a randomized and two quasi-experimental designs in a single outcome evaluation: Efficacy of a university-level remedial writing program. Evaluation Review, 22, 207–244.
Battistin, E., & Rettore, E. (2008). Ineligibles and eligible non-participants as a double comparison group in regression-discontinuity designs. Journal of Econometrics, 142, 715–730.
Berk, R., Barnes, G., Ahlman, L., & Kurtz (2010). When second best is good enough: A comparison between a true experiment and a regression-discontinuity quasi-experiment. Journal of Experimental Criminology.
Black, D., Galdo, J., & Smith, J. A. (2007). Evaluating the regression discontinuity design using experimental data. Retrieved from economics.uwo.ca/newsletter/misc/2009/smith_mar25.pdf. Accessed October 15, 2011.
Buddelmeyer, H., & Skoufias, E. (2004). An evaluation of the performance of regression discontinuity design on PROGRESA. World Bank Policy Research Working Paper No. 3386; IZA Discussion Paper No. 827. Retrieved from http://ssrn.com/abstract=434600. Accessed October 17, 2011.
Cook, T., Shadish, W., & Wong, V. (2008). Three conditions under which observational studies produce the same results as experiments. Journal of Policy Analysis and Management, 27(4), 724–750.
Cook, T. D. (2008). "Waiting for life to arrive": A history of regression discontinuity in psychology, statistics, and economics. Journal of Econometrics, 142(2), 636–654.
Cook, T. D., & Campbell, D. (1979). Quasi-experimentation: Design and analysis issues for field settings. Boston, MA: Houghton Mifflin.
Cook, T. D., & Wong, V. C. (2008). Better quasi-experimental practice. In P. Alasuutari, J. Brannen, & L. Bickman (Eds.), The Handbook of Social Research (pp. 134–165). London: Sage.
Cook, T. D., & Wong, V. C. (2008). Empirical tests of the validity of the regression-discontinuity design. Annales d'Economie et de Statistique.
Cox, D. R. (1958). Some problems connected with statistical inference. The Annals of Mathematical Statistics, 29(2), 357–372.
Dinardo, J., & Lee, D. (2010). Program evaluation and research designs. Cambridge, MA: The National Bureau of Economic Research.
Garfinkel, I., Manski, C., & Michalopoulos, C. (1992). Micro experiments and macro effects. In C. Manski & I. Garfinkel (Eds.), Evaluating Welfare and Training Programs. Cambridge, MA: Harvard University Press.
Glazerman, S., Levy, D., & Myers, D. (2003). Nonexperimental versus experimental estimates of earnings impacts. The Annals of the American Academy, 589, 63–91.
Hahn, J., Todd, P., & Van der Klaauw, W. (2001). Identification and estimation of treatment effects with a regression discontinuity design. Econometrica, 69(1), 201–209.
Heckman, J., Ichimura, H., & Todd, P. (1997). Matching as an econometric evaluation estimator: Evidence from evaluating a job training program. Review of Economic Studies, 64(3), 605–654.
Holland, P. (1986). Statistics and causal inference. Journal of the American Statistical Association, 81(396), 945–960.
Imbens, G., Angrist, J., & Rubin, D. B. (1996). Identification of causal effects using instrumental variables. Journal of Econometrics, 71(1), 145–160.
Krueger, A., & Whitmore, D. (2001). The effect of attending a small class in the early grades on college-test taking and middle school test results: Evidence from Project STAR. Economic Journal, 111, 1–28.
Lee, D., & Lemieux, T. (2009). Regression discontinuity designs in economics. Cambridge, MA: National Bureau of Economic Research.
Lemieux, T., & Milligan, K. (2008). Incentive effects of social assistance: A regression discontinuity approach. Journal of Econometrics, 142, 807–828.
McCrary, J. (2008). Manipulation of the running variable in the regression discontinuity design: A density test. Journal of Econometrics, 142, 698–714.
Mishel, L., & Rothstein, R. (Eds.) (2002). The class size debate. Washington, DC: The Economic Policy Institute.
Neyman, J. (1923). Statistical problems in agricultural experiments. Journal of the Royal Statistical Association, 107–108.
Rubin, D. B. (1974). Estimating causal effects of treatments in randomized and non-randomized studies. Journal of Educational Psychology, 66(4), 688–701.
Rubin, D. B. (1977). Assignment to treatment group on the basis of a covariate. Journal of Educational Statistics, 2, 1–26.
Rubin, D. B. (1978). Bayesian inference for causal effects: The role of randomization. Annals of Statistics, 6, 34–58.
Rubin, D. B. (1990). Formal means of statistical inference for causal effects. Journal of Statistical Planning and Inference, 25, 279–292.
Schochet, P., Cook, T., Deke, J., Imbens, G., Lockwood, J., Porter, J., et al. (2010). Standards for regression discontinuity. Retrieved October 15, 2010, from What Works Clearinghouse: http://ies.ed.gov/ncee/wwc/pdf/wwc_rd.pdf.
Schochet, P. Z. (2009). Statistical power for regression discontinuity designs in education evaluations. Journal of Educational and Behavioral Statistics, 34(2), 238–266.
Shadish, W. R., Clark, M. H., & Steiner, P. M. (2008). Can randomized experiments yield accurate answers? A randomized experiment comparing random and non-random assignments. Journal of the American Statistical Association, 103(484), 1334–1344.
Shadish, W. R., Cook, T. D., & Campbell, D. (2002). Experimental and quasi-experimental designs for generalized causal inference. Boston, MA: Houghton Mifflin.
Shadish, W. R., Galindo, R., Wong, V. C., Steiner, P. M., & Cook, T. D. (2011). A randomized experiment comparing random to cutoff-based assignment. Psychological Methods.
Stecher, B. M., & Bohrnstedt, G. W. (Eds.) (2000). Class size reduction in California: The 1998–1999 evaluation findings. Sacramento, CA: California Department of Education.
Sween, J. A. (1971). The experimental regression design: An inquiry into the feasibility of nonrandom treatment allocation. Unpublished doctoral dissertation, Northwestern University, Evanston, IL.
Thistlethwaite, D., & Campbell, D. (1960). Regression-discontinuity analysis: An alternative to the ex post facto experiment. Journal of Educational Psychology, 51, 309–317.
Wong, V. C. (2010). Addressing theoretical and practical challenges in the regression discontinuity design. Unpublished doctoral dissertation, Northwestern University, Evanston, IL.
Wong, V. C., Cook, T. D., Barnett, W. S., & Jung, K. (2008). An effectiveness-based evaluation of five state pre-kindergarten programs. Journal of Policy Analysis and Management, 27(1), 122–154.


CHAPTER

13

Matching and Propensity Scores

Peter M. Steiner and David Cook

Abstract The popularity of matching techniques has increased considerably during the last decades. They are mainly used for matching treatment and control units to estimate causal treatment effects from observational studies or for integrating two or more data sets that share a common subset of covariates. In focusing on causal inference with observational studies, we discuss multivariate matching techniques and several propensity score methods, like propensity score matching, subclassification, inverse-propensity weighting, and regression estimation. In addition to the theoretical aspects, we give practical guidelines for implementing these techniques and discuss the conditions under which these techniques warrant a causal interpretation of the estimated treatment effect. In particular, we emphasize that the selection of covariates and their reliable measurement is more important than the choice of a specific matching strategy. Key Words: Matching, propensity scores, observational study, Rubin Causal Model, potential outcomes, propensity score subclassification, inverse-propensity weighting, propensity score regression estimation, sensitivity analyses.

Introduction

In quantitative research, "matching" or "statistical matching" refers to a broad range of techniques used for two main purposes: matching or integrating different data sets, also known as data fusion, and matching of treatment and control cases for causal inference in observational studies. With regard to matching data sets, researchers or administrators are frequently interested in merging two or more data sets containing information on either the same or different units. If the data sets contain key variables that uniquely identify units, then the matching task is straightforward. However, matching becomes fuzzier if a unique key is not available, so that not all units can be unambiguously identified. Even more challenging is the integration of two independent data sets on different units that share a set of covariates on which the units

may be matched (D’Orazio, Di Zio, & Scanu, 2006; Rässler, 2002). Rässler (2002) has provided an example where researchers are interested in the association between television viewing and purchasing behavior but lack data from a single source panel covering information on both behaviors. Thus, the idea is to combine data from an independent television and consumer panel by matching similar subjects. For each subject in the consumer panel, the matching task consists of finding a corresponding subject that is identical or at least very similar on the shared covariates. Such a matching of subjects is equivalent to imputing missing covariates on the television viewing behavior. Because data from different units are matched on a case-by-case basis, this type of matching is frequently referred to as individual case matching or statistical matching. Note that hot deck procedures for imputing missing data


(item nonresponse) pursue the same goal but within a single dataset. Statistical matching is very popular in causal inference where the goal is the unbiased estimation of treatment effects for an outcome of interest (Heckman, 2005; Rosenbaum, 2002, 2009; Rubin, 2006). Also, here we face a missing data problem: For the treatment units, we only observe the outcome under the treatment condition but miss each unit’s respective control outcome. And for the control units we observe the control outcome but miss their treatment outcome. Hence, for inferring the treatment effect, we need to match the treatment and control group because we cannot estimate the treatment effect from one group alone. However, the treatment and control groups must be matched in such a way that they only differ in the treatment received but are otherwise identical on all other characteristics. The mean difference in the treatment and control group’s outcome reflects the average causal effect of the treatment only if the groups are comparable. If the matched groups differ with respect to some observed or unobserved covariates, then the estimated treatment effect may be biased. One way for creating comparable groups is random assignment of individuals to the treatment and control condition. Randomization statistically equates treatment and control groups such that the distribution of all observed but also all unobserved baseline covariates (covariates that are measured before treatment assignment) is the same for both groups—within the limits of sampling error. Although randomization balances treatment and control groups on average, units are not matched on a case-by-case basis. Individual case matching is not necessarily required as long as we are only interested in the average causal effect for well-defined groups, as opposed to individual causal effects for single units (Steyer, 2005). However, when randomization is not feasible or individual causal effects are of interest, we typically match cases individually on observed baseline covariates. The task is identical to merging two data sets—in this case, the data of the treatment group and the control group. Having a rich set of covariates for both groups, we need to find a control unit for each treatment unit with identical or very similar observed characteristics. The control unit then donates its control outcome to the treatment unit whose control outcome is missing. After imputing the treatment units’ missing control outcomes, the treatment effect for the treated can be estimated.


Although we discuss matching from the causal inference point of view, the same assumptions and techniques apply to matching two different data sets. During the last few decades, many matching strategies have been proposed. These strategies either match units directly on the observed covariates or use a composite score—the propensity score (PS), which represents a unit's probability of belonging to the treatment group. Since its invention by Rosenbaum and Rubin in 1983, the popularity of PS techniques has increased considerably. However, as we will discuss in detail, a causal interpretation of the treatment effect is only warranted if some strong assumptions are met. We begin by giving a brief introduction to the Rubin Causal Model (RCM) and its potential outcomes notation. The RCM framework enables a clear exposition of the causal estimands of interest as well as the assumptions required for warranting a causal interpretation of matching estimates. We then describe the most frequently used matching and PS techniques, including individual case matching, PS subclassification, inverse-propensity weighting, and PS regression estimation. Thereafter, we discuss several issues associated with the practical implementation of PS techniques. We particularly focus on the importance of the choice of baseline covariates for matching, their reliable measurement, the choice of a specific matching technique, and the importance of achieving balance on observed covariates (i.e., matched groups that are homogeneous on observed covariates).

Rubin Causal Model

The RCM, with its potential outcomes notation, offers a convenient framework for defining causal quantities and deriving corresponding estimators (Rubin, 1974, 1978). The RCM also has the advantage that it emphasizes the counterfactual situations of the units in the treatment or control condition. That is, what would the outcome of the treated units have been had they not been treated, and what would the outcome of the untreated have been had they been treated? These two counterfactual situations define the missing outcomes for the treatment and control units, respectively. Matching techniques can be broadly considered as methods for imputing these missing counterfactual outcomes either at the individual level (individual case matching) or the group level. More formally, each unit i has two potential outcomes, the potential control outcome Yi0 under the


control condition (Zi = 0) and the potential treatment outcome Yi1 under the treatment condition (Zi = 1). Yi1 and Yi0 are called potential outcomes because these are the unknown but fixed outcomes before unit i gets assigned or selects into the treatment or control condition. After treatment, only one of the two potential outcomes is revealed—the potential treatment outcome for the treated and the potential control outcome for the untreated. The respective other potential outcome remains hidden. Given the pair of potential outcomes (Y0, Y1), two causal quantities are frequently of main interest: the average treatment effect for the overall target population or sample (ATE) or the average treatment effect for the treated (ATT). The ATE and ATT are defined as the expected differences in potential outcomes—that is,

\[
\tau = E(Y_i^1 - Y_i^0) = E(Y_i^1) - E(Y_i^0) \quad \text{for ATE, and}
\]
\[
\tau_T = E(Y_i^1 - Y_i^0 \mid Z_i = 1) = E(Y_i^1 \mid Z_i = 1) - E(Y_i^0 \mid Z_i = 1) \quad \text{for ATT.} \qquad (1)
\]

The average treatment effect τ is defined as the expectation (mean value) of the difference in potential outcomes across all units in our target population, which is identical to the difference in expected potential outcomes E(Yi1) and E(Yi0). The ATT τT is defined as the conditional expectation of the difference in potential outcomes for the treated units only. The vertical bar within the expectation indicates a conditional expectation; in Equation 1, it is the conditional expectation for those units that are assigned to treatment (Z = 1). In practice, the choice of the causal quantity of interest depends on the research question, that is, whether the interest is in estimating the treatment effect for the overall target population (i.e., treated and untreated units together) or the treatment effect for the treated units only. For example, if we are interested in evaluating the effect of a labor market program, then we are typically interested in the ATT—that is, the effect for those persons that participated in the program or will do so in the future. The ATE might be more appropriate if a successful labor market program should be extended to the entire labor force, or if a new curriculum for fourth graders, which is tested in volunteering schools, should later be adopted by all schools. Sometimes the average treatment effect for the untreated is of interest, but we are not separately discussing this causal estimand

because it is equivalent to ATT except for the conditioning on the control group (Zi = 0) rather than the treatment group (Zi = 1).

If we were able to observe both potential outcomes, then we could determine the causal effect for each unit—that is, Yi1 − Yi0 for i = 1, . . . , N—and simply estimate ATE and ATT by averaging the difference in potential treatment and control outcomes (Imbens, 2004; Schafer & Kang, 2008):

\[
\hat{\tau} = \frac{1}{N}\sum_{i=1}^{N}\left(Y_i^1 - Y_i^0\right) = \frac{1}{N}\sum_{i=1}^{N} Y_i^1 - \frac{1}{N}\sum_{i=1}^{N} Y_i^0 \quad \text{for ATE, and}
\]
\[
\hat{\tau}_T = \frac{1}{N_T}\sum_{i \in T}\left(Y_i^1 - Y_i^0\right) = \frac{1}{N_T}\sum_{i \in T} Y_i^1 - \frac{1}{N_T}\sum_{i \in T} Y_i^0 \quad \text{for ATT,}
\]

where T = {i : Zi = 1} is the index set for the treated units and N_T = Σ_{i=1}^{N} Zi is the number of treated units. However, in practice, we never observe both potential outcomes (Y0, Y1) simultaneously (the "fundamental problem of causal inference"; Holland, 1986). Because the outcome we actually observe for unit i depends on the treatment status, we can define the observed outcome as Yi = Yi0(1 − Zi) + Yi1 Zi (Rubin, 1974). Thus, at the group level, we can only observe the expected treatment outcomes for the treated, E(Yi|Zi = 1) = E(Yi1|Zi = 1), and the expected control outcomes for the untreated, E(Yi|Zi = 0) = E(Yi0|Zi = 0). These conditional expectations differ in general from the unconditional averages E(Yi1) and E(Yi0) because of differential selection of units into the treatment and control condition. Therefore, the simple difference in observed group means,

\[
\hat{\tau} = \frac{1}{N_T}\sum_{i \in T} Y_i - \frac{1}{N_C}\sum_{i \in C} Y_i, \qquad (2)
\]

is, in general, a biased estimator for ATE and ATT, with T and N_T as defined before and where C = {i : Zi = 0} is the index set for the control units and N_C = Σ_{i=1}^{N} (1 − Zi) is the number of control units. The estimator is only unbiased if the design and implementation of a study guarantee an ignorable selection or assignment mechanism. One way of establishing an ignorable selection mechanism is to randomize units into treatment and control conditions. Randomization ensures that potential outcomes (Y0, Y1) are independent of


treatment assignment Z —that is, (Y 0 , Y 1 )⊥Z . Note that independence is required for the potential outcomes but not for the observed outcome (indeed, the latter always depends on the treatment assignment unless treatment has no effect). Because of this independence (i.e., ignorability of treatment assignment), the conditional expectation of the treated units’ outcome is equivalent to the unconditional expectation of the potential treatment outcome, E (Y |Z = 1) = E (Y 1 |Z = 1) = E (Y 1 )—similarly for the control outcome. Thus, the ATE is given by the difference in the expected outcome of the treatment and control group, τ = E (Y |Z = 1) − E (Y |Z = 0), which is identical to ATE in Equation 1 because of the independence established via randomization. The same can be shown for ATT. Therefore, the difference in observed group means as defined in Equation 2 is an unbiased estimator for both ATE and ATT in a randomized experiment. Note that randomization not only establishes independence of potential outcomes from treatment assignment but also independence of all other observed and unobserved baseline characteristics from treatment assignment, which implies that the treatment and control groups are identical in expectation on all baseline characteristics. In that sense, we may consider the treatment and control group as matched or balanced at the group level (but not at the individual level). In practice, randomization is frequently not possible because of practical, ethical, or other reasons such that researchers have to rely on observational studies. In such studies, treatment assignment typically takes place by self-, administrator-, or third-person selection rather than randomization. For example, unemployed persons might select into a labor market program because of their own motivation, friends’ encouragement, or recommendation but also administrators’ assessment of the candidates’ eligibility. This style of selection process very likely results in treatment and control groups that differ not only in a number of baseline covariates but also in potential outcomes. Thus, potential outcomes cannot be considered as independent of treatment selection. In this case we need a carefully selected set of observed covariates X = (X1 , . . . , Xp ) such that potential outcomes (Y 0 , Y 1 ) are independent of treatment selection conditional on X—that is, (Y 0 , Y 1 )⊥Z |X.

(3)

If we observe such a set of covariates and if treatment probabilities are strictly between 0 and 1,

0 < P(Z = 1|X) < 1, the selection mechanism is said to be strongly ignorable (Rosenbaum & Rubin, 1983a). The strong ignorability assumption is frequently called conditional independence, unconfoundedness, or selection on observables. Assuming strong ignorability, we may write the ATE as the difference in conditional expectations of treatment and control group’s outcomes—that is, τ = E {E (Y |Z = 1, X)} − E {E (Y |Z = 0, X)}, which is again identical to E (Y 1 ) − E (Y 0 ) because E {E (Y |Z = 1, X)} = E {E (Y 1 |Z = 1, X)} = E {E (Y 1 |X)} = E (Y 1 ) and similarly E {E (Y |Z = 0, X)} = E (Y 0 ). The inner expectations refer to the expected potential outcomes for a given set of values X, whereas the outer expectations average the expected potential outcomes across the distribution of covariates X. The same can be shown for ATT. From a practical point of view, the strong ignorability assumption requires observing all covariates X that are simultaneously associated with both treatment status Z and potential outcomes (Y 0 , Y 1 ). If ignorability holds, then statistical methods that appropriately control for these confounding covariates are potentially able to remove all the bias. Under certain circumstances (e.g., when ATT is the causal quantity of interest), somewhat weaker assumptions than the strong ignorability assumption are sufficient (Imbens, 2004; Steyer, Gabler, Davier, & Nachtigall, 2000). In the following section we discuss a very specific class of such statistical methods, called matching estimators, for removing selection bias. These methods try to match treatment and control units on observed baseline characteristics X to create comparable groups just as randomization would have done. If treatment selection is ignorable (i.e., all confounding covariates are measured) and if treatment and control groups are perfectly matched on observed covariates X, then potential outcomes are independent of treatment selection. Matching estimators are, of course, not alone in their aim of estimating causal treatment effects. Other methods like standard regression, analysis of covariance models, structural equation models (Kaplan, 2009; Pearl, 2009; Steyer, 2005; Steyer et al., 2000), or Heckman selection models (Heckman, 1974, 1979; Maddala, 1983) also try to identify causal effects. Because these methods have a different focus on causal inference and typically rely on stronger assumptions, particularly functional form and distribution assumptions, they are not discussed in this chapter.
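To make the preceding definitions concrete, the following sketch (R, simulated data; the covariate, the outcome models, and the selection model are hypothetical) computes the true ATE and ATT from simulated potential outcomes, shows the bias of the naive difference in observed group means under self-selection, and shows that the same difference is unbiased under random assignment.

```r
# Illustrative sketch (simulated data; the covariate, outcome models, and
# selection model are hypothetical): true ATE and ATT computed from potential
# outcomes, the bias of the naive difference in group means under self-selection,
# and the unbiasedness of the same difference under random assignment.

set.seed(5)
n  <- 100000
x  <- rnorm(n)                          # baseline covariate (e.g., motivation)
y0 <- 50 + 5 * x + rnorm(n)             # potential control outcome
y1 <- y0 + 10 + 2 * x                   # potential treatment outcome (heterogeneous effect)

z <- rbinom(n, 1, plogis(1.5 * x))      # self-selection: higher x, more likely treated
y <- ifelse(z == 1, y1, y0)             # observed outcome

mean(y1 - y0)                           # true ATE (about 10)
mean(y1[z == 1] - y0[z == 1])           # true ATT (larger, since treated have higher x)
mean(y[z == 1]) - mean(y[z == 0])       # naive difference in means: biased upward

# Under random assignment, the simple difference in means is unbiased for the ATE:
zr <- rbinom(n, 1, 0.5)
yr <- ifelse(zr == 1, y1, y0)
mean(yr[zr == 1]) - mean(yr[zr == 0])   # approximately 10
```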


Matching Techniques

Multivariate Matching Techniques

As discussed above, we observe only the potential treatment outcomes for the treated units while their potential control outcomes are missing. Matching estimators impute each treated unit's missing potential control outcome by the outcome of the unit's nearest neighbor in the control group. In estimating the ATT, the basic concept of matching is rather simple: For each unit in the treatment group, find at least one untreated unit from the pool of control cases that is identical or as similar as possible on all observed baseline characteristics. If our interest is in estimating the ATE, then we also need to find treatment matches for each unit in the control group to impute the control units' missing treatment outcome. Thus, each unit draws its missing potential outcome from the nearest neighbor (or set of nearest neighbors) in the respective other group. Creating a matched data set involves three main decisions: (1) the choice of a distance metric on observed baseline covariates that quantifies the dissimilarity between each treatment and control unit; (2) the decision on a specific matching strategy—that is, the number of matches for each unit, the width of the caliper for preventing poor matches, and whether to match with or without replacement; and (3) the choice of an algorithm that actually performs the matching and creates the matched data set. Given all these choices, which we describe in more detail below, matching results in a complete data set of actually observed and imputed potential outcomes and, therefore, allows the estimation of average treatment effects.

Let M be the predetermined number of matches and JM(i) = {j : unit j belongs to the group of the M nearest neighbors to unit i} the index set of matches for each unit i = 1, . . . , N that indicates the M closest matches for unit i. We then define the (imputed) potential treatment and control outcomes as

\[
\hat{Y}_i^0 =
\begin{cases}
Y_i & \text{if } Z_i = 0 \\
\frac{1}{M}\sum_{j \in J_M(i)} Y_j & \text{if } Z_i = 1
\end{cases}
\quad\text{and}\quad
\hat{Y}_i^1 =
\begin{cases}
\frac{1}{M}\sum_{j \in J_M(i)} Y_j & \text{if } Z_i = 0 \\
Y_i & \text{if } Z_i = 1.
\end{cases}
\]

These (imputed) potential outcomes consist either of unit i's actually observed value or the average outcome of its M nearest neighbors (Imbens, 2004). If M = 1, then only the nearest neighbor donates its outcome for imputing the missing potential outcome. Then, the simple matching estimator is the

average difference in estimated potential outcomes (Abadie & Imbens, 2002)—that is,

\[
\hat{\tau} = \frac{1}{N}\sum_{i=1}^{N}\left(\hat{Y}_i^1 - \hat{Y}_i^0\right) \quad \text{for ATE, and} \qquad (4)
\]
\[
\hat{\tau}_T = \frac{1}{N_T}\sum_{i \in T}\left(\hat{Y}_i^1 - \hat{Y}_i^0\right) = \frac{1}{N_T}\sum_{i \in T}\left(Y_i - \hat{Y}_i^0\right) \quad \text{for ATT.} \qquad (5)
\]

For appropriate standard error estimators, see Abadie and Imbens (2002) or Imbens (2004). Because ATT is most frequently estimated with individual case-matching techniques, we discuss distance metrics and matching strategies for ATT only and assume that the pool of control units is much larger than the pool of treatment units. If the pool of control units is not large enough, then it might be hard to find close matches for each treated unit (Rosenbaum & Rubin, 1985; Rubin & Thomas, 1996).

Distance Metrics. For determining exact or close matches for a given unit i, we first need to define a distance metric (dij) that quantifies the dissimilarity between pairs of observations—say, between units i and j. The metric is defined on the originally observed set of baseline covariates X. A distance of zero (dij = 0) typically implies that the two units are identical on all observed covariates, whereas a nonzero distance suggests a difference in at least one of the baseline covariates—the larger the difference, the less similar are the units on one or more covariates. A large variety of distance metrics has been suggested for different types of scales (Krzanowski, 2000), but the most commonly used metrics are the Euclidean and Mahalanobis distance. The standard Euclidean distance between units i and j is the sum of the squared differences in covariates xg (for g = 1, . . . , p covariates):

\[
d_{ij} = (X_i - X_j)'(X_i - X_j) = \sum_{g=1}^{p}\left(x_{ig} - x_{jg}\right)^2.
\]

Researchers frequently standardize covariates because the Euclidean distance depends on the scaling of covariates. With standardized scores, the Euclidean metric no longer depends on the scaling, but it is still sensitive to the correlation structure of measurements (constructs that are represented by two or more highly correlated measures have more influence on the distance than constructs represented by a single measure only). The sensitivity to the correlation of covariates is avoided by the Mahalanobis distance

\[
d_{ij}^{M} = (X_i - X_j)'\,S_X^{-1}\,(X_i - X_j),
\]

which takes the correlation structure via the inverse variance–covariance matrix S_X into account. For that reason,


the Mahalanobis distance is frequently preferred to the Euclidean distance. However, because the Mahalanobis distance metric exhibits some odd behavior in the case of extremely outlying observations or dichotomous variables, one may consider substituting rank scores for the originally observed covariates (Rosenbaum, 2009).

Matching Strategies. After the computation of all pairwise distances between treatment and control units, we have to decide on a specific matching strategy. First, how many units (M) should we match to each treatment unit? Second, should we allow all possible matches even if the distance is rather large? Third, should matching be done with or without replacement of already matched cases? The number of matches for each treated unit affects the precision and efficiency of matching estimators. With a 1:1 matching strategy, only one control unit is matched to each treatment unit, guaranteeing minimum bias because only the most similar observation is matched (the second- or third-best matches are not considered). But it implies a loss of efficiency, as all unmatched control cases are discarded—not all the information available is exhausted in estimating the treatment effect. In using a 1:M matching strategy, where each treatment unit is matched to its M nearest neighbors, we increase efficiency but very likely increase bias because, with an increasing number of matches, less similar cases are matched. Independent of the number of matches, a researcher also has to decide whether he or she is willing to allow all possible matches even if they are rather distant. Frequently, the permissibility of matches is defined by a benchmark (caliper) on the overall distance metric or some covariate-specific distances (Althauser & Rubin, 1970; Cochran & Rubin, 1973). If the distance exceeds the benchmark, then units are not considered for matching. Calipers are usually defined in terms of standard deviations (SDs) on the original covariate—if two units differ by more than 0.2 SDs, for example, they are not considered as permissible matches. Thus, caliper matching protects against matching very different units and, therefore, against residual bias caused by poor matches. The smaller the caliper, the more accurate but less efficient are the estimated treatment effects. If the variables are of discrete type and the number of variables is small, then one might even consider an exact matching strategy by setting the caliper to 0. With a caliper of 0, only units with identical baseline characteristics are matched.

Finally, we can match cases with or without replacing previously matched cases. Matching with replacement allows a more precise estimation of the treatment effect because a single control case might belong to the nearest neighbor set of two or even more treated units. Once again, the drawback of matching with replacement is a decrease in efficiency because fewer control units are typically matched, as compared to matching without replacement. However, despite the theoretical differences in the matching strategies, several studies have shown that the number of matches and the choice of matching with or without replacement usually has a minor effect on treatment effect’s bias and efficiency (Ho et al., 2007; for a review, see Stuart, 2010). Matching Algorithms. Once we have computed the distance measures between units and decided on a specific matching strategy, units are then matched using a computer algorithm that guarantees optimal matches. For matching strategies with replacement, matching is straightforward because each treatment unit is assigned its nearest neighbor or set of nearest neighbors, regardless of whether these cases have already been matched to another unit. Because each unit is matched according to the minimum distance principle, the overall heterogeneity of the matched data set is automatically minimized. However, if we want to match treatment and control units without replacement, then the choice of a specific matching algorithm matters because matching the first treatment unit in the data set with its nearest control unit may result in rather suboptimal matches for treatment units matched later (already matched control units are no longer available). Here, we discuss two rather different matching algorithms for matching without replacement: greedy matching (which can also be used for matching with replacement) and optimal matching. Greedy matching typically starts with finding the nearest neighbor for the first treatment unit in the data set. After the identification of the nearest neighbor, the matches are put into the matched data set and deleted from the matching pool. Then, the nearest neighbor for the second treatment unit in the data set is identified, and so on. It is clear that the set of matches depends on the order of the data set. With a different ordering, one typically gets a different set of matched pairs. Because greedy matching does not evaluate the obtained matched sample with regard to a global distance measure, greedy matching rarely results in globally optimal matches. Optimal matching avoids this drawback by minimizing a global distance measure


using Network Flow Theory (Gu & Rosenbaum, 1993; Hansen, 2004; Rosenbaum, 2002). Minimizing a global distance measure implies that for some treated observations, only the second best or even a more distant unit is selected if their nearest neighbors need to be matched to other treatment units whose second best matches would have been even worse. Nonetheless, optimal matching selects the cases in a way such that the finally matched sample minimizes the global distance between groups. The optimal matching algorithm allows a more general type of matching, with multiple treatment units matched to one or more control cases and vice versa. It also allows for full matching—that is, matching of all units without discarding any cases (Rosenbaum, 2002, 2009; Hansen, 2004). An alternative to optimal matching is genetic matching, as suggested by Sekhon (2011). Genetic matching makes use of genetic algorithms for exploring the space of potential matches and identifying an optimal solution. As with the choice of a specific matching strategy, using a greedy or optimal matching algorithm usually has a minor effect on the treatment effect of interest. Although optimal matching performs better on average, there is no guarantee that it does better than greedy matching for a given data set (Gu & Rosenbaum, 1993). As we will discuss later, the availability of selection-relevant covariates is much more important than selecting a specific matching procedure. In practice, multivariate matching reaches its limits when treatment and comparison cases are matched on a large set of covariates. With an increasing number of covariates, finding matches that are identical or at least very similar on all observed baseline characteristics becomes inherently impossible because of the sparseness of finite samples (Morgan & Winship, 2007). For example, with 20 dichotomous covariates, we get more than 1 million (2^20) distinct combinations, which makes it very unlikely to find close matches for all units even if the treatment and comparison group samples are rather large. Thus, it would be advantageous to have a single composite score rather than multivariate baseline characteristics. Such a score is the PS, which we discuss next.
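Before turning to propensity scores, the following sketch (R, simulated data; the sample sizes, covariates, and greedy 1:1 rule are illustrative choices) makes the steps of the multivariate matching just described explicit: compute Mahalanobis distances from the observed covariates, match each treated unit to its nearest available control without replacement, and compare covariate means before and after matching. In practice, one would normally rely on dedicated matching software, such as the packages discussed below, rather than on such a hand-rolled procedure.

```r
# Sketch (simulated data; sample sizes, covariates, and the greedy 1:1 rule are
# illustrative choices): greedy nearest-neighbor matching on the Mahalanobis
# distance without replacement, followed by a simple covariate balance check.

set.seed(6)
n_t <- 50; n_c <- 500
x_t <- matrix(rnorm(n_t * 2, mean = 0.5), ncol = 2)   # treated covariates
x_c <- matrix(rnorm(n_c * 2, mean = 0.0), ncol = 2)   # control covariates
S   <- cov(rbind(x_t, x_c))                           # pooled covariance matrix

matched   <- rep(NA_integer_, n_t)
available <- rep(TRUE, n_c)
for (i in seq_len(n_t)) {
  d <- mahalanobis(x_c, center = x_t[i, ], cov = S)   # squared Mahalanobis distances
  d[!available] <- Inf                                # without replacement
  j <- which.min(d)                                   # nearest available control
  matched[i]   <- j                                   # (a caliper could be imposed here)
  available[j] <- FALSE
}

# Covariate means: treated, all controls, and matched controls (balance check)
colMeans(x_t); colMeans(x_c); colMeans(x_c[matched, ])
```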

Propensity Score Techniques Propensity score methods try to solve the sparseness problem by creating a single composite score from all observed baseline covariates X. Units are

then matched on the basis of that one-dimensional score alone. The PS e(X) is defined as the conditional probability of treatment exposure given the observed covariates X—that is, e(X) = P(Z = 1|X). The PS indicates a unit's probability of receiving treatment given the set of observed covariates. It does not necessarily represent the true selection probability because the strong ignorability assumption does not require all constructs determining treatment selection to be measured. Strong ignorability requires only those covariates that are correlated with both treatment Z and the potential outcomes. Rosenbaum and Rubin (1983a) proved that if treatment assignment is strongly ignorable given observed covariates X (see Equation 3), it is also strongly ignorable given the PS e(X)—that is, $(Y^0, Y^1) \perp Z \mid e(X)$. Thus, instead of the overall set of covariates, we may use a single composite for balancing baseline differences in covariates, and multivariate matching techniques can be replaced by univariate PS matching techniques. The PS is a balancing score, meaning that it balances all pretreatment group differences in observed covariates X. Covariates are balanced if the joint distribution of X is the same in the treatment and control group, P(X|Z = 1) = P(X|Z = 0) (Rosenbaum, 2002; Rosenbaum & Rubin, 1983a). In randomized experiments, randomization of units into the treatment and control group guarantees balance of both observed and unobserved covariates within the limits of sampling error. In observational studies, the PS has to establish balance on observed covariates via matching, weighting, subclassification, or covariance adjustment such that the joint distribution of X is the same for the treatment and control group for each specific PS e(X) = e—that is, P(X|e(X) = e, Z = 1) = P(X|e(X) = e, Z = 0). If the treatment and control group are accordingly balanced, all overt bias—the bias that results from observed covariates—can be removed. Hidden bias that results from unobserved covariates cannot be removed by matching or conditioning on the observed covariates or the PS. Hidden bias results when the strong ignorability assumption is not met. However, because the PS e(X) is not known in practice, it has to be estimated from the observed data via binomial regression models (logistic regression or probit models) or other semi- or nonparametric methods (we discuss methods and strategies for estimating the PS in the section on implementation in practice). Note that the strong ignorability assumption might be violated if the PS model is not correctly specified even if all covariates for

establishing strong ignorability are observed. Once the estimated propensity score ê(X) is available, we estimate the treatment effect using one of the many PS methods suggested in the broad literature on PSs. In general, PS methods can be classified into four main categories (overviews of these methods can be found in Guo & Fraser, 2010; Imbens, 2004; Lunceford & Davidian, 2004; Morgan & Winship, 2007; Rubin, 2006): (1) PS matching; (2) PS subclassification; (3) inverse-propensity weighting; and (4) PS regression estimation. Within each main category, several variants of PS techniques exist. In the following we present the rationale of each PS approach and give estimators for the ATE and the ATT. We also discuss appropriate methods for estimating standard errors. Note that the logit of the estimated PS, $\hat{l}(X) = \log\{\hat{e}(X)/(1 - \hat{e}(X))\}$, also called the linear PS, is more frequently used than the PS ê(X) itself because the logit is typically more linearly related to the outcome of interest than the PS—with the exception of PS subclassification, where it makes no difference, and PS weighting, which is based on the PS itself. Propensity Score Matching. Propensity score matching is probably the most frequently applied class of PS techniques, and basically the same matching techniques as described above apply. The only difference is that distance measures are calculated from the (linear) PS as opposed to the original covariates. However, researchers frequently combine both the PS and the original covariates for identifying the optimal matches. One specific strategy is Mahalanobis distance matching on key covariates with PS calipers (Rosenbaum, 2009; Rosenbaum & Rubin, 1985): Units are matched using the Mahalanobis distance computed from key covariates, but only if they are within a caliper of 0.2 SDs of the PS or PS-logit. Given the algorithmic nature of all matching strategies, efficient matching procedures are available in almost all standard statistical software tools. For example, in R, the packages optmatch (Hansen & Klopfer, 2006), MatchIt (Ho, Imai, King, & Stuart, in press), and Matching (Sekhon, 2011) provide efficient algorithms for different matching approaches, including optimal full and pair matching; Stata offers match (Abadie, Drukker, Herr, & Imbens, 2004), psmatch2 (Leuven & Sianesi, 2003), and pscore (Becker & Ichino, 2002). The macros Greedy (Parsons, 2001), Gmatch, and Vmatch (Kosanke & Bergstralh, 2004) are available in SAS (proc assign and proc netflow can also be used for optimal matching). However, a note of caution needs to be made.
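To illustrate, a basic PS-based nearest-neighbor matching with the MatchIt package might look like the following sketch, continuing the simulated data frame dat from the greedy-matching example; the particular settings shown (1:1 matching, no replacement, a 0.2 SD caliper) are illustrative assumptions rather than recommendations.

# Hypothetical example: 1:1 nearest-neighbor matching on the estimated propensity
# score with a caliper, using the MatchIt package (simulated data from above).
library(MatchIt)
m.out <- matchit(z ~ x1 + x2, data = dat,
                 method  = "nearest",    # greedy nearest-neighbor matching
                 replace = FALSE,        # match without replacement
                 caliper = 0.2)          # caliper of 0.2 SD on the distance measure
summary(m.out)                           # covariate balance before and after matching
matched <- match.data(m.out)             # matched data set for the outcome analysis

The summary() call reports the kind of balance statistics discussed later in this chapter.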

All the matching functions usually come with a set of default settings—for example, the size of the caliper or the number of control cases to be matched to each treatment case. Although these defaults are quite reasonable for most analyses, they need to be checked carefully for each single analysis. Guo and Fraser (2010) demonstrate how to implement these methods using Stata. Propensity Score Subclassification. An alternative to PS matching is PS subclassification, where we use the estimated PS ê(X) for subclassifying all observations into q = 1, ..., Q homogeneous strata. The underlying rationale is that observations belonging to the treatment and control groups within each single PS stratum are rather homogeneous—not only on the PS but also with regard to the observed baseline covariates. Ideally, within each stratum, treatment and control cases would show the same covariate distribution (as would be the case if observations within each stratum had been randomized to the treatment and control group). In that case, treatment and control groups are perfectly matched at the group level within each stratum, and thus unbiased estimates of the treatment effect for each stratum would result. We may also interpret PS subclassification in terms of individual case matching, where each unit's missing potential outcome is imputed by the stratum-specific average outcome of the opposite group. More formally, PS subclassification stratifies all observations on the PS into q = 1, ..., Q homogeneous strata, with index sets $I_q = \{i : \text{observation } i \in \text{stratum } q\}$ indicating each unit's stratum membership. For each of the Q strata, the treatment effect is estimated by computing the simple difference in means for the treated and untreated—that is,

$$\hat{\tau}_q = \frac{1}{N_{Tq}} \sum_{i \in T \cap I_q} Y_i - \frac{1}{N_{Cq}} \sum_{i \in C \cap I_q} Y_i,$$

where $N_{Tq} = \sum_{i \in T \cap I_q} Z_i$ is the number of treated units and $N_{Cq} = \sum_{i \in C \cap I_q} (1 - Z_i)$ is the number of control units in stratum q. The average treatment effect, then, is the weighted average of the stratum-specific estimates across strata,

$$\hat{\tau} = \sum_{q=1}^{Q} W_q \hat{\tau}_q \ \text{ for ATE} \qquad \text{and} \qquad \hat{\tau}_T = \sum_{q=1}^{Q} W_{Tq} \hat{\tau}_q \ \text{ for ATT.} \qquad (6)$$

Depending on the treatment effect of interest, the weights are $W_q = (N_{Cq} + N_{Tq})/N$ for the ATE and $W_{Tq} = N_{Tq}/N_T$ for the ATT (for q = 1, ..., Q), where $N = N_C + N_T$ is the total number of control and treatment units across all strata. Hence, ATE weights reflect the distribution of all units across strata, whereas ATT weights represent the treated units' distribution across strata. Similarly, the variances of the treatment effects are obtained by pooling the stratum-specific variances—that is,

$$v^2 = \sum_{q=1}^{Q} W_q^2 v_q^2 \ \text{ for ATE} \qquad \text{and} \qquad v^2 = \sum_{q=1}^{Q} W_{Tq}^2 v_q^2 \ \text{ for ATT,}$$

where $v_q^2 = v_{Cq}^2 + v_{Tq}^2$ is the squared standard error of the mean difference in stratum q, with $v_{Cq}^2 = s_{Cq}^2/N_{Cq}$ and $v_{Tq}^2 = s_{Tq}^2/N_{Tq}$ (the pooled version can also be used). The strata are typically formed using quantiles (e.g., quintiles or deciles), although more optimal strategies for determining the strata exist (Rosenbaum, 2002). The advantage of the subclassification approach is that both the treatment effect and its variance can be estimated easily with any statistical software tool, without more advanced procedures. One drawback, however, is that the within-stratum distributions of PSs usually differ slightly between the treatment and control groups, which results in some residual bias in the treatment effect. Rosenbaum and Rubin (1984; see also Cochran, 1968) showed that with five strata, an average of approximately 90% of the overt bias can be removed. In any case, the number of strata should depend on the number of observations. With a small number of treated or untreated units, using more than five strata is usually not useful because the number of treated or untreated units in the first and last stratum is frequently very small (fewer than 10 observations), such that effect estimates for these strata might not be very reliable. However, with a large number of treatment and control cases, the number of strata can and should be increased to an extent such that the number of treated or untreated cases is still large enough for getting reliable within-stratum estimates.
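A minimal base-R sketch of PS subclassification on quintiles, continuing the simulated running example; the PS model and the choice of five strata are illustrative assumptions:

# Subclassification of the estimated PS into five strata (quintiles) and the
# weighted stratum-specific mean differences for ATE and ATT (Equation 6).
ps.fit      <- glm(z ~ x1 + x2, data = dat, family = binomial)  # assumed PS model
dat$ps      <- fitted(ps.fit)
dat$stratum <- cut(dat$ps, breaks = quantile(dat$ps, probs = seq(0, 1, 0.2)),
                   include.lowest = TRUE, labels = FALSE)
tau.q <- tapply(seq_len(nrow(dat)), dat$stratum, function(idx) {
  s <- dat[idx, ]
  mean(s$y[s$z == 1]) - mean(s$y[s$z == 0])   # stratum-specific mean difference
})
N.q  <- tapply(dat$z, dat$stratum, length)    # stratum sizes
N.Tq <- tapply(dat$z, dat$stratum, sum)       # number of treated units per stratum
ATE  <- sum(N.q  / sum(N.q)  * tau.q)         # ATE weights: distribution of all units
ATT  <- sum(N.Tq / sum(N.Tq) * tau.q)         # ATT weights: distribution of treated units
c(ATE = ATE, ATT = ATT)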

Inverse-Propensity Weighting. Another technique that is easy to implement is PS weighting. The idea of inverse-propensity weighting is the same as for inverse-probability weighting in survey research (Horvitz & Thompson, 1952): Units that are underrepresented in the treatment or control group are upweighted, and units that are overrepresented in one of the groups are downweighted. If ATE is the estimand of interest, then the inverse-propensity weights for the treated units (i ∈ T) are given by $W_i = 1/\hat{e}(X_i)$, and for the control units (i ∈ C) the weights are $W_i = 1/(1 - \hat{e}(X_i))$. For both groups together, we may write the weights as a function of treatment status and PS: $W_i = Z_i/\hat{e}_i + (1 - Z_i)/(1 - \hat{e}_i)$. The difference in the weighted treatment and control means defines the ATE estimator:

$$\hat{\tau} = \frac{\sum_{i \in T} W_i Y_i}{\sum_{i \in T} W_i} - \frac{\sum_{i \in C} W_i Y_i}{\sum_{i \in C} W_i}. \qquad (7)$$

For ATT the same estimator applies but with different weights: $W_{Ti} = 1$ for the treated and $W_{Ti} = \hat{e}(X_i)/(1 - \hat{e}(X_i))$ for the untreated or, as a single formula for both groups together, $W_{Ti} = Z_i + (1 - Z_i)\hat{e}_i/(1 - \hat{e}_i)$. Alternatively to Equation 7, we might estimate $\hat{\tau}$ using a weighted regression analysis (weighted least squares) with $Y_i = \alpha + \tau Z_i + \varepsilon_i$ and weights $W_i$ or $W_{Ti}$, respectively. However, regression estimates of the variance differ from a more appropriate variance estimator for Equation 7 that also reflects the uncertainty associated with the estimated PS. Robins et al. (1995; see also Schafer & Kang, 2008) derived variance estimators for the inverse-propensity weighting estimator that take the uncertainty associated with the estimated PS into account—given that the PS is estimated via a logistic regression. An alternative approach for estimating the treatment effect's variance is bootstrapping, but the bootstrap has to take the uncertainty with respect to the PS into account, requiring at least the re-estimation of the PS model for each bootstrapped sample. In comparison to PS stratification, inverse-propensity weighting is rather sensitive to outliers—treated units with a PS close to 1 or untreated units with a PS close to 0 result in extremely large weights. In estimating ATT, only the latter case matters because the weights for the treated are fixed at 1. Of course, suggestions for trimming the weights exist, but trimming introduces bias (e.g., Potter, 1990). Alternatively, we may use PS subclassification, which can be considered a robust version of inverse-propensity weighting because of its more robust stratum weights, though the increased robustness comes at the price of some residual bias, as discussed above.
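The weighting estimators can be computed directly, as in the following sketch, again continuing the simulated running example with an assumed main-effects PS model:

# Inverse-propensity weighting for ATE (Equation 7) and for ATT.
ps  <- fitted(glm(z ~ x1 + x2, data = dat, family = binomial))
w   <- dat$z / ps + (1 - dat$z) / (1 - ps)            # ATE weights
wT  <- dat$z + (1 - dat$z) * ps / (1 - ps)            # ATT weights (treated fixed at 1)
ATE <- weighted.mean(dat$y[dat$z == 1], w[dat$z == 1]) -
       weighted.mean(dat$y[dat$z == 0], w[dat$z == 0])
ATT <- weighted.mean(dat$y[dat$z == 1], wT[dat$z == 1]) -
       weighted.mean(dat$y[dat$z == 0], wT[dat$z == 0])
c(ATE = ATE, ATT = ATT)
# Equivalent weighted least squares formulation: the coefficient of z estimates ATE,
# but its default standard error ignores the uncertainty in the estimated PS.
coef(lm(y ~ z, data = dat, weights = w))["z"]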

Regression Estimation with Propensity-Related Predictors. Regression estimators rely on regression models for imputing the missing potential outcomes. In determining the ATE, we first estimate a separate regression model for the treatment and control cases, where

$$Y_i = \alpha_1 + X_i'\beta_1 + \varepsilon_i \qquad \text{and} \qquad Y_i = \alpha_0 + X_i'\beta_0 + \varepsilon_i \qquad (8)$$

are the regression models for the treated ($T = \{i : Z_i = 1\}$) and control units ($C = \{i : Z_i = 0\}$), respectively. The predictor vector $X_i$ may represent a cubic polynomial of the PS-logit or a set of dummy variables derived from the PS (different approaches are discussed below). Then, using the estimated regression models, we predict for all units of both groups the expected treatment and control outcomes—that is,

$$\hat{Y}_i^1 = \hat{\alpha}_1 + X_i'\hat{\beta}_1 \qquad \text{and} \qquad \hat{Y}_i^0 = \hat{\alpha}_0 + X_i'\hat{\beta}_0 \qquad (9)$$

for i = 1, ..., N, and use the simple matching estimator $\hat{\tau} = \frac{1}{N}\sum_{i=1}^{N} (\hat{Y}_i^1 - \hat{Y}_i^0)$ as an estimator for ATE (compare Equation 4). For both groups, we can use the predicted rather than the actually observed outcomes because the mean of the predicted values equals the mean of the observed values. Running two separate regressions allows for a different functional form in each group and avoids modeling the treatment effect in a parametric way. Moreover, this regression estimator is well defined in terms of RCM's potential outcomes notation, whereas the parametric modeling of the treatment effect within a single regression model for both groups together would estimate ATE (as defined in Equation 1) only under certain circumstances (like constant treatment effects; Schafer & Kang, 2008). If the researcher's interest is in the ATT, then the estimation procedure is the same, except that we no longer estimate the potential treatment outcomes for the treated but use the observed ones instead. Using the predicted control outcomes $\hat{Y}_i^0 = \hat{\alpha}_0 + X_i'\hat{\beta}_0$, we can estimate ATT by $\hat{\tau}_T = \frac{1}{N_T}\sum_{i \in T} (Y_i - \hat{Y}_i^0)$.
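A minimal sketch of this two-regression estimator, using a cubic polynomial of the PS-logit as the propensity-related predictor (one of the options discussed next); the data and the PS model are the illustrative assumptions of the running example:

# Regression estimation: separate outcome models for treated and controls with a
# cubic polynomial of the PS-logit, then imputation of the missing potential outcomes.
ps      <- fitted(glm(z ~ x1 + x2, data = dat, family = binomial))
dat$lps <- qlogis(ps)                                  # PS-logit (linear PS)
fit1 <- lm(y ~ poly(lps, 3), data = dat, subset = (z == 1))   # treatment-group model
fit0 <- lm(y ~ poly(lps, 3), data = dat, subset = (z == 0))   # control-group model
Y1 <- predict(fit1, newdata = dat)                     # predicted treatment outcomes (Equation 9)
Y0 <- predict(fit0, newdata = dat)                     # predicted control outcomes
ATE <- mean(Y1 - Y0)
ATT <- mean(dat$y[dat$z == 1] - Y0[dat$z == 1])        # observed treated outcomes minus imputed controls
c(ATE = ATE, ATT = ATT)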

As mentioned above, the predictor matrix X may consist of different PS-related predictors. One option is a quadratic or cubic polynomial of the PS-logit. Another option consists of including the inverse-propensity weights as predictors (Bang & Robins, 2005). However, both approaches rely on rather strong functional form assumptions. To avoid such assumptions, Little and An (2004) suggested using more flexible cubic splines. Here, we briefly describe a simpler approach suggested by Kang and Schafer (2007; Schafer & Kang, 2008), which includes stratum dummies derived from subclassifying on the PS. The stratum dummies can be computed algorithmically as follows: (1) Classify all units into Q ≥ 5 strata by using quantiles; (2) iteratively split strata—particularly those with rather heterogeneous PSs—into two separate strata as long as the split does not result in strata with the number of treated or the number of untreated falling below a minimum threshold (e.g., 50 units per group); and (3) for the resulting Q* homogeneous strata, generate Q* − 1 dummy variables. The dummy variables are then included as predictors in the regression models for the treatment and control outcomes (Equation 8). Bootstrapping or variance formulas for regression estimation (Schafer & Kang, 2008) may be used for getting appropriate variance estimates for ATE and ATT. Another regression estimator is kernel-based matching (Heckman, Ichimura, & Todd, 1997, 1998). Generally, the idea is similar to the regression approaches described in the previous paragraphs, but rather than using a parametric regression approach for imputing the missing potential outcomes, nonparametric kernel methods are used (local averaging or local linear regression; see also Imbens, 2004, or, for a more accessible introduction, Guo & Fraser, 2010). In its simplest version for estimating ATT, the predicted potential control outcome for a given treatment unit i is the locally weighted average outcome of control units in the PS-neighborhood of treatment unit i (local averaging). More formally, the predicted potential control outcome for treatment unit i is given by

$$\hat{Y}_i^0 = \frac{\sum_{j \in C} K\!\left(\frac{\hat{e}_j - \hat{e}_i}{h}\right) Y_j}{\sum_{j \in C} K\!\left(\frac{\hat{e}_j - \hat{e}_i}{h}\right)},$$

where K(·) is, for example, a normal, tricube, or Epanechnikov kernel, which assigns decreasing weights to control units j as their PSs $\hat{e}_j$ increasingly differ from unit i's PS $\hat{e}_i$. The bandwidth h controls the width of the local window for estimating the treatment effect: The smaller the bandwidth, the narrower the window, and the more local the estimate. Hence, the estimated control outcome for treatment unit i is a local average of control outcomes. The advantage of this approach is that it does not rely on functional form assumptions. The drawback is its relative inefficiency and its requirement of large sample sizes for minimizing the bias caused by bandwidth selection. Mixed Methods. The PS methods described above use only the PS or transformations thereof for balancing initially heterogeneous treatment and control groups. However, all these methods can be combined with an additional covariance adjustment in the outcome analysis—that is, by regressing the outcome on all or key covariates. The hope with such a covariance adjustment is that it corrects for residual bias caused by a misspecified

PS model (Rubin, 1979). Indeed, as Robins and Rotnitzky (1995) showed, combining PS methods and covariance adjustments protects against residual bias resulting from a misspecified PS model, but only if the outcome model is correctly specified. If both models are misspecified, then there is no guarantee of an unbiased or improved estimate. Kang and Schafer (2007) demonstrated that such a doubly robust adjustment could even increase bias as opposed to using one adjustment alone. However, an additional covariance adjustment usually improves the estimate—because it corrects for residual bias caused by inexact matches or subclassification—and typically reduces its standard error (as covariance adjustment does in randomized experiments). Additional covariance adjustments are easily implemented for all PS methods described above. For the matching approach, it is done by running the standard regression $Y_i = \alpha + \tau Z_i + X_i'\beta + \varepsilon_i$ using the matched data set, where $Z_i$ is the treatment indicator, τ the treatment effect, $X_i$ the vector of covariates, and β the corresponding coefficient vector (Ho et al., 2007; Rubin, 1979). If matching results in a set of weights (indicating the frequency with which units were matched), then they may be used in a weighted least squares (WLS) regression. The same adjustments apply to the subclassification approach, except that we need to run separate regressions for each stratum (Rosenbaum & Rubin, 1984); the resulting stratum-specific treatment effects are then pooled according to Equation 6. If inverse-propensity weighting is the method of choice, then the best way to control for covariates is to estimate a WLS regression with inverse-propensity weights for the treated and control groups separately (Equation 8) and then to proceed as described for the regression estimation approach (Schafer & Kang, 2008). The correspondingly predicted potential outcomes are then used for estimating the treatment effect of interest. Finally, for the regression estimation approach, we add all or only the key covariates to the PS-related predictors (Equation 8).
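For the matching approach, such a covariance adjustment might look like the following sketch, continuing the hypothetical MatchIt example above; the variable names and the use of the matching weights returned by match.data() are assumptions for illustration:

# Covariance adjustment after matching: regress the outcome on the treatment
# indicator and the covariates in the matched data set, weighting by the matching
# weights (a weighted least squares fit); the coefficient of z is the adjusted effect.
adj.fit <- lm(y ~ z + x1 + x2, data = matched, weights = weights)
summary(adj.fit)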

Implementation in Practice Estimating a causal treatment effect from observational data seems to be rather straightforward: Assume a strongly ignorable selection process, choose a PS method, and estimate the treatment effect. However, just “assuming” strong ignorability is not enough. That the assumption actually holds for the data on hand needs to be justified. Moreover,

even if strong ignorability is met, unbiased treatment effects result only if the PS model is correctly specified and an appropriate PS technique is used. In this section, we discuss issues related to the selection of covariates, the choice of method, the estimation of the PS, and the importance of sensitivity analyses.

Selection and Measurement of Baseline Covariates Matching and PS methods can remove all the selection bias only if the strong ignorability assumption is met. If the strong ignorability assumption is violated, then hidden bias caused by unobserved covariates remains and causal claims are hardly warranted. As discussed above, establishing strong ignorability requires observing a set of covariates X that establishes conditional independence of the potential outcomes $(Y^0, Y^1)$ and treatment Z, given X or the corresponding PS e(X). Although the assumption is simple in technical terms, it is very opaque for practitioners, who are therefore frequently not aware of its concrete implications for the data on hand. That the implications of the strong ignorability assumption are not fully understood is reflected in published observational studies using PS analyses, where the crucial ignorability assumption is frequently strongly ignored. Researchers either assume strong ignorability without any substantive reasoning about whether it is actually justified, or do not even mention it, although causal claims are nonetheless made. Here, we give a more detailed discussion of this crucial assumption so that the practical implications become clearer. Strong ignorability implies three requirements. First, it requires the valid measurement of all constructs that are simultaneously correlated with both treatment and potential outcomes. Second, if both the selection process and the outcome model are based on some latent constructs rather than observed covariates alone, as is typical for self-selection processes, these constructs need to be measured reliably—otherwise not all bias can be removed. Third, the treatment and control groups need to overlap—that is, share a region of common support on the PS. Having overlapping groups implies that group membership is not perfectly predictable from observed covariates. If group membership is perfectly predictable (i.e., the treatment and control group do not overlap on the PS), then the treatment and control groups cannot be considered comparable, and causal effects cannot be estimated without relying on extreme extrapolations. The first two requirements are directly implied

by the strong ignorability assumption. The third requirement derives from the necessity that all observations must have a non-zero probability of being in both the treatment and control groups—that is, 0 < e(X) < 1. Only if all three requirements are fulfilled can hidden bias due to unobserved or unreliably measured confounders be ruled out. Selection of Constructs. It is important to note that the set of covariates required for an ignorable selection process is not uniquely determined. A minimal set of covariates consists of nonredundant covariates—that is, covariates that are partially correlated with both treatment and potential outcomes given all other observed covariates. Omitting one of these covariates would necessarily result in hidden bias. For example, if we have two competing measures of the same construct, then either of them could suffice to remove the selection bias together with the other baseline covariates. However, in practice, a set of observed covariates typically includes redundant covariates—covariates that are conditionally independent of either treatment selection or the potential outcomes, given the other observed covariates. Such redundant covariates are ineffective in removing selection bias because they are not related to treatment or the potential outcomes. The crucial question in practice is "Which constructs have to be measured for ruling out hidden bias?" Because the absence of hidden bias is not empirically testable, we have to rely on theory, expert judgment, common sense, and empirical investigations of the actual selection process. In planning a study, it might be worth investigating the actual selection process and its determining factors in a pilot study before conducting the main study. However, even if the most important constructs determining the selection process are presumably known, measuring covariates in addition to the theoretically hypothesized constructs is advisable, as knowledge about the selection mechanism might be imperfect or the selection process might change during the implementation of the main study. Steiner, Cook, Shadish, and Clark (2010) have suggested that researchers should cover different construct domains—particularly motivational or administrative factors that directly determine selection into treatment, but also direct pretest measures of the outcome (or at least proxies if direct measures are not available) and other constructs that are directly related to the outcome or selection process, like demographics. They further suggest taking multiple measures within each of these construct

domains because we rarely know for certain which of several possible constructs of a specific domain better describes the selection process under investigation. This advice is not very satisfying for a given data set where the set of covariates is fixed. Thus, the question is whether there are some general types of covariates that are more important than others. Within-study comparisons that compare the treatment effect of an observational study to the effect of an equivalent randomized experiment within a single study (Cook & Steiner, 2010; Pohl, Steiner, Eisermann, Soellner, & Cook, 2009; Steiner et al., 2010) and meta-analyses (Cook, Shadish, & Wong, 2009; Glazerman, Levy, & Myers, 2003) have shown that at least two types of covariates play a special role. The first type refers to direct pretest measures of the outcome of interest, and the second type refers to direct measures of the selection process. The rationale for pretest measures of the outcome is that they are typically strongly correlated with the outcome and that it is hard to think of selection mechanisms that introduce selection bias to the outcome of interest but not to its pretest measure— particularly if pretest and posttest are measured close in time. Therefore, a pretest measure on the same content and scale as the outcome very likely removes a considerable part or even almost all the selection bias. The higher the correlation between the pretest and posttest, the more bias reduction is typically achieved. The second type of covariates comprises direct measures of the selection process. In the case of administrator or other third-person selection, we need all important measures on which treatment assignment decisions are made. In the case of self-selection, researchers need measurements of all motivational factors affecting participation or avoidance of a specific treatment or control condition. These covariates directly aim at modeling the actual selection process. Even if one has valid and reliable measures of the selection process and pretest measures on the outcome, one should be very careful about making strong causal claims because there is always the possibility of some unobserved and unexpected confounders such that some bias might remain. In any case, without having a reliable pretest measurement of the outcome and direct measures of the selection process, we should be cautious in claiming a causal treatment effect unless the selection mechanism is fully known and observed. Selection should definitively not be considered as ignorable when only untargeted measures from archival

data, like demographics, are available. In selecting covariates for matching treatment and control groups, one also has to pay attention to when the covariates were measured. Because treatment might affect covariate measures during or after treatment, one should only consider baseline covariates that were measured before units got assigned or selected into the treatment or control condition, unless they cannot be affected by treatment, like sex or age. Measurement Error in Observed Covariates. Although having valid measures on all relevant constructs is necessary, it is frequently not sufficient for establishing a strongly ignorable selection process. Whenever selection is on latent constructs, these constructs need to be reliably assessed. Selection on latent covariates typically occurs in self-selection processes but may also occur with administrator selection, when administrators’ assignment decisions are not exclusively based on observed measures but on intuitive assessments. Unreliability in measuring such latent constructs results in hidden bias—but only if the outcome is also determined by the latent construct rather than the observed covariate, as is typically the case in most practical situations. Whenever selection is on directly observed covariates—for example, when an administrator selects participants according to their recorded years of schooling, occupational experience, or income— the selection process is completely known with regard to these covariates and no hidden bias from their unreliable measurement can emerge. In fact, trying to correct for their unreliability would introduce bias. When selection is on latent covariates, the influence of measurement error in covariates on bias reduction depends in a complex way on several factors. First, measurement error in a covariate only matters if the reliably measured construct would effectively reduce selection bias—that is, if it is correlated with both treatment and potential outcomes. Covariates that are unrelated either to treatment or potential outcomes have no bias-reducing potential; hence, measurement error in these covariates is of no harm, although it might decrease the efficiency of the estimated treatment effect. Second, a covariate’s potential to reduce selection bias diminishes as unreliability increases. For the single covariate case, it can be shown that for each decrease in its reliability (0 ≤ ρ ≤ 1) by 0.1 points—say, from ρ = 1.0 to ρ = 0.9—the covariate’s potential for removing bias decreases by 10% (Cochran, 1968; Steiner, Cook, & Shadish, 2011).

Thus, only 90% of the overt bias can be removed by the unreliable covariate. However, if we have a set of (highly) correlated baseline covariates, then they might partially compensate for each other’s unreliable measurement. The degree of compensation depends on the covariates’ correlation structure and each covariates’ potential to reduce selection bias. A covariate that is correlated with other covariates but does not remove any selection bias cannot compensate for the attenuated bias reduction caused by the other covariates’ unreliability. Third, the influence of measurement error depends on the initial heterogeneity of the treatment and control groups on the unreliably measured covariates. If the treatment and control groups do not show baseline differences in observed covariates, then measurement error has no effect on the point estimate of the treatment effect (as there is no selection bias to be removed). As the baseline differences on unreliably measured constructs increase, their reliable measurement becomes more and more vital. For the single covariate case, we know that a reliability of ρ = 0.8, for example, results in a 20% attenuation of the covariate’s bias reduction potential. Assume further that the treatment effect is biased by 0.3 SD of the outcome. Then, the unreliably measured covariate would only remove a bias of 0.24 SD—a bias of 0.06 SD would remain. However, if the initial bias is 1.0 SD, then the remaining bias would be 0.2 SD. This simple example demonstrates how important it is to start with treatment and control groups that are not too different. In any case, when selection is on latent constructs, a careful measurement of these constructs is required for establishing strong ignorability. Structural equation modeling might then be used for addressing the unreliability in measures (Kaplan, 1999; Steyer, 2005).
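For the single-covariate case just described, the relation can be written compactly; this is merely a restatement of the numbers worked through above, under the same assumptions:

$$\text{remaining bias} = (1 - \rho) \times \text{initial bias}, \qquad \text{e.g., } (1 - 0.8) \times 0.3\,SD = 0.06\,SD \quad \text{and} \quad (1 - 0.8) \times 1.0\,SD = 0.2\,SD.$$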

Choice of Methods Given a set of covariates X, matching and PS methods aim at removing overt bias, the bias that is caused by observed covariates. Note that they cannot remove any hidden bias caused by unobserved covariates. Above, we described the rationale of the most frequently used PS methods and outlined their advantages and disadvantages. Now the questions are: Which PS method should be used for a given research question and a specific data set? And does the choice of a specific method really matter? The choice of a PS method depends on the estimand of interest, the number of treatment and

control cases, the robustness and efficiency of the estimators, the expected residual bias, and the potential to deal with residual bias via additional covariance adjustments. Matching estimators are typically used when the causal estimand of interest is ATT and when the pool of control units is large. It should be considerably larger than the number of treatment cases because the likelihood of finding very close matches increases with the number of control units (Stuart, in press; Rubin & Thomas, 1996). Subclassification, weighting, and regression estimation, as well as full optimal matching, work equally well for both ATE and ATT and are presumably more robust when sample sizes are small (Pohl et al., 2010). A drawback of inverse-propensity weighting is that it is sensitive to large weights, which occur whenever the PS is close to 0 or 1. For that reason, standard errors for the weighting approach are usually larger than for other PS methods. On the other hand, PS regression estimation relies on functional form assumptions—kernel matching relaxes them, but the standard errors of the treatment effect are comparatively larger. Matching and subclassification typically result in some residual bias caused by inexact matching and the coarseness of subclasses (i.e., the small number of strata), respectively. However, we can try to remove this residual bias by combining the PS adjustment with an additional covariance adjustment in the outcome analysis. Despite the comparative advantages and disadvantages of each approach, within-study comparisons, simulation studies, and other publications reporting results on different matching and PS methods regularly show that the estimates do not significantly differ. In particular, differences between methods are minimized when mixed methods that combine PS and covariance adjustments are used (Bloom, Michalopoulos, Hill, & Lei, 2002; Glazerman et al., 2003; Pohl et al., 2010; Schafer & Kang, 2008; Shadish, Clark, & Steiner, 2008). Additional covariance adjustments also minimize differences in the treatment effect's standard error. However, the meta-analytic evidence, which is not yet definitive, does not imply that the choice of a specific method does not matter for a single study. For a given data set and hypothesis about the treatment effect, some matching or PS methods might indicate rejecting the null hypothesis and others not. Therefore, it is advisable to analyze the data with different methods and, in case of contradictory results, to be careful in making conclusive claims about the effect of a treatment.

Balancing Baseline Covariates Although selection is ignorable if we have a reliably measured set of covariates that formally establishes conditional independence of potential outcomes and treatment, it does not imply that all the bias is automatically removed in estimating the treatment effect. Propensity score techniques successfully remove bias only if the PS model is correctly specified (or the outcome model if mixed methods are used). With a misspecified PS model, the observed covariates’ potential for removing all the overt bias is not completely captured by the estimated PS. The correct specification of the selection model is probably the most challenging part in implementing a specific PS technique for two main reasons. First, no generally accepted and completely satisfying criteria for assessing the adequacy of an estimated PS model exist. Second, specifying a satisfying PS model is a tedious process with no guarantee of success—particularly if the number of covariates is large. Most of the suggested criteria for specifying a PS model investigate the estimated PS’s ability to balance baseline differences in observed covariates. That means that for each unique value of the estimated PS, the distribution of X is the same for the treatment and control groups. The balancing property of the PS directly reflects the expectation associated with the strong ignorability assumption: Given that all confounding covariates are observed and that the estimated PS balances all their baseline differences between the treatment and control groups, we can expect that the potential outcomes are accordingly balanced (i.e., potential outcomes are independent of the selection mechanism). So, how can we test balance in observed covariates, and how can we specify a PS model such that we obtain PSs that remove at least the observed baseline differences in covariates? Before we discuss a strategy for estimating such a balancing PS, we first describe possible approaches for estimating the PS and criteria for checking balance. Methods for Estimating Propensity Score. Because the true PS e(X) = P(Z = 1|X) is rarely known in practice, we have to estimate the scores from the observed data. In general, two classes of estimation methods may be used: binomial regression models or statistical learning algorithms such as classification trees or ensemble methods (Hastie, Tibshirani, & Friedman, 2001; Berk, 2008). Binomial regression models include logit and probit models but also the linear-probability model. All these models can be estimated with parametric linear or nonlinear

regression models or with (semi-parametric) generalized additive models (Wood, 2006). The drawback of these models is that they rely on functional form assumptions. If the PS model is not correctly specified, then biased estimates of the PSs result. In contrast, statistical learning methods do not depend on functional form assumptions and, thus, are better suited for highly nonlinear relations between the treatment probability and the observed covariates. These methods include classification trees and ensemble methods like boosting, bagging, or random forests (Berk, 2006; McCaffrey, Ridgway, & Morral, 2004). Because classification trees tend to overfit the data, ensemble methods are usually preferred to classification trees. McCaffrey, Ridgway, and Morral (2004) have suggested a boosted regression method that they especially customized to PS estimation. Despite the theoretical advantages of these more flexible methods, they are not frequently used for estimating PSs. Binomial regression models—particularly logistic regression—are most frequently used in research practice for several reasons (Shadish & Steiner, 2010). First, they are easy to use, and researchers are familiar with them. Second, even if the functional form of the true PS model is not linear in practice, linear models (which include higher order terms) frequently result in satisfying approximations and only minor bias (Setoguchi, Schneeweiss, Brookhart, Glynn, & Cook, 2008). Third, there is not yet enough research available that convincingly demonstrates the comparative advantage of statistical learning algorithms in the practice of PS analysis. Fourth, if the initial PS estimate does not balance baseline differences in covariates, then it is even less clear than for binomial regression models how to recalibrate the learning algorithms for achieving better balance. And fifth, statistical learning algorithms aim at correctly predicting the treatment status, which is not the ultimate goal in estimating PSs (the aim is to balance baseline differences in covariates). However, if a researcher suspects a complex nonlinear selection process, then statistical learning algorithms might well outperform binomial regression models (Lee, Lessler, & Stuart, 2009; Lullen, Shadish, & Clark, 2005; Setoguchi et al., 2008). In such a case, it is advisable to compare treatment effect estimates obtained from different PS estimation methods.
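In practice, such a logistic-regression PS model might be estimated as in the following sketch, continuing the simulated running example; the particular higher order terms included are illustrative assumptions, not a recommended specification:

# Estimating the propensity score and its logit with a logistic regression that
# includes quadratic and interaction terms; the specification must be tailored
# to the data at hand (see the balancing procedure described below).
ps.fit  <- glm(z ~ x1 + x2 + I(x1^2) + I(x2^2) + x1:x2,
               data = dat, family = binomial(link = "logit"))
dat$ps  <- fitted(ps.fit)                 # estimated propensity scores
dat$lps <- qlogis(dat$ps)                 # PS-logit (linear PS)
AIC(ps.fit)                               # one traditional model-fitting criterion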

Balancing Criteria. Since the invention of PSs (Rosenbaum & Rubin, 1983a), very different criteria for assessing balance in observed covariates have been proposed. The different suggestions arose from the practical impossibility of comparing the treatment and control groups' multivariate distribution of X (caused by the "curse of dimensionality"). Therefore, most criteria focus on the comparison of univariate distributions, meaning that balance in each observed covariate is assessed separately. All balancing criteria can be categorized into two groups: descriptive criteria and inferential criteria. Descriptive criteria typically compare the first two moments—mean and variance—of the treatment and control groups' covariate distributions. Others focus on the overall distribution by using, for example, cumulative density functions or Q–Q plots (Sekhon, 2011). But they may also investigate differences in bivariate correlations, which focus on characteristics of bivariate distributions. Inferential criteria typically test differences in distributions by comparing means (univariate t-tests or Hotelling's T test statistic for multivariate comparisons) or cumulative density functions (Kolmogorov–Smirnov test). Here we describe the most frequently used descriptive criteria—the standardized mean difference and the variance ratio—in more detail. The standardized difference in covariate means, also called Cohen's d, is probably the most popular criterion for comparing univariate means (Rosenbaum & Rubin, 1985; Rubin, 2001). Cohen's d is given by

$$d = \frac{\bar{x}_t - \bar{x}_c}{\sqrt{(s_t^2 + s_c^2)/2}},$$

where $\bar{x}_t$ and $\bar{x}_c$ are the covariate means of the treatment and control group, respectively, and $s_t^2$ and $s_c^2$ are the corresponding covariate variances (sometimes only the variance of the control group is used). This metric should be applied to each covariate before and after the PS adjustment, but also to the PS-logit, which represents a composite of all covariates entered into the PS model. Before PS adjustment, the standardized mean differences indicate the initial imbalance (i.e., the baseline difference) in covariates and the PS-logit. Huge differences in means—particularly if they exceed 1 SD (|d| > 1)—indicate that the treatment and control groups are very heterogeneous in their composition; they might even be too heterogeneous for a useful causal investigation. Treatment and control groups are heterogeneous if their distributions of the PS-logit overlap only in their tails, such that for a large portion of units no equivalent matches are available. After PS adjustment, the mean differences should ideally be zero or close to zero. In practice, the question is "How close is close enough to establish balance?" Here, no clear guidelines exist. Some researchers suggest that the absolute

standardized mean differences of the PS-logit and each observed covariate should at least be less than 0.25 SD (e.g., Stuart & Rubin, 2007). Others use a benchmark of 0.1 SD (Shadish et al., 2008; Steiner et al., 2010). However, one should be very cautious about these benchmarks because imbalance in a covariate of 0.25 SD may easily result in remaining bias in the outcome of the same magnitude. Assume that the pretest on the outcome is the most important—maybe single—confounder and that, after balancing, the pretest still shows a standardized mean difference of 0.24 SD. Hence, a bias of the same magnitude may very likely result for the outcome of interest. Or assume that an observational study is designed to detect a small effect size of 0.2 SD. Would we be willing to accept standardized biases in covariates of 0.25 SD? Probably not. Thus, in balancing baseline differences, one should try to get standardized mean differences as close as possible to zero—particularly for those covariates that we theoretically expected to be strongly correlated with selection and potential outcomes. Significance testing does not solve the problem (Imai et al., 2008). If the treatment and control groups’ sample sizes are small, then significance tests tend to be underpowered. If the sample sizes are large, even substantively negligible differences might be significant. In addition to the standardized mean difference d , one should also compare higher order moments of the distribution such as the variance between the treatment and control groups by using the variance ratio v = st2 /sc2 (Rubin, 2001). After PS adjustment, variance ratios v for the PS-logit and each observed covariate should be close to one (Rubin, 2001). The drawback of these criteria is that they only focus on the first and second moments of each covariate’s distribution. However, for more thorough balance checks, we may investigate balance for subgroups defined by PS-quantiles (Dehejia & Wahba, 1999, 2002). These checks are useful because, according to theory, for each unique PS or PS-quantile, the covariate distribution of treatment and control cases should be equivalent, at least in expectation (Rosenbaum, 2002). Balancing Procedure. Balancing baseline group differences in covariates is an iterative procedure with no guarantee for success. In the following, we describe the procedure, which involves three steps: (1) Estimate an (initial) PS model and predict the PS and PS-logits; (2) Check overlap on the estimated PS-logit and delete non-overlapping cases;

Figure 13.1 Overlap of the treatment and control groups' PS-logit distributions.

and (3) Check balance on the PS-logit and the observed covariates. If balance is not satisfactory, go back to (1) and improve the PS model. 1. Estimating the PS model and PS. Estimate an initial PS model using traditional model-fitting criteria (for logistic regression, these are likelihood-ratio tests or Akaike's Information Criterion [AIC], for example). Usually it is not sufficient to include main effects only—higher order terms or other transformations of covariates also need to be considered. The aim of this step is to model the unknown selection process as well as possible. If we succeeded in modeling the true selection process, then the estimated PSs could be expected to remove all the overt bias. Thus, model selection is crucial for a successful PS analysis. After a satisfying model is found, obtain the predicted values of the PS and PS-logit. 2. Checking overlap and deleting nonoverlapping cases. Use the estimated PS-logits for checking the overlap of the treatment and control groups' distributions—for example, by plotting a histogram. Because it is usually not possible to achieve balance with groups that show regions of nonoverlap on the PS-logit, nonoverlapping cases need to be discarded. Figure 13.1 gives an example where the PS-logit distributions do not completely overlap. Control units at the left tail of the distribution have no corresponding matches in the treatment group. Thus, without extrapolation, we cannot estimate the ATE, but we can do so for the restricted population with overlap. The ATT can be estimated for the overall population of treated units because their distribution does not show regions of considerable nonoverlap with the control distribution (only at the right tail of the distributions is there a slight lack of overlap). The deletion of cases is not restricted to the

margins of the distribution, it should be done for all regions of nonoverlap. Observations with outlying PSs in one group usually produce inner regions of nonoverlap. Although discarding cases on the observed PS-logit is straightforward, it results in reduced generalizability of results (unless one assumes constant treatment effects). Note that matching with a PS caliper automatically deletes control units that fall outside each treated unit’s caliper-defined neighborhood. 3. Checking balance. After deletion of nonoverlapping cases, check balance on the PS-logit and all the observed covariates using one or multiple balancing criteria as described above. Figure 13.2 shows an example of a balance plot for 25 baseline covariates. Before the PS adjustment, many covariates show absolute standardized mean differences between the treatment and control group of 0.1 SD or more (left panel). The mean difference in the PS-logit is even larger than 1 SD (indicated by the asterisk). After the PS adjustment, in this case subclassification, almost all absolute mean differences are less than 0.1 SD (right panel). Note that the variance ratios between groups also improved: After balancing, they are closer to 1 than before balancing. In checking balance on observed covariates, the same PS method as for the outcome analysis should be used. For example, if a researcher decides to do a PS stratification analysis, then balance should be checked with exactly the same method—the outcome variable is simply replaced by the PS-logit or observed covariates. If PS weighting is the method of choice, then do balance

checks with the same weighting procedure. Or, if we conduct a PS matching, then we check balance on the matched dataset. The rationale for using the same PS method for checking balance as for analyzing the outcome is that the PS method chosen will most likely succeed in removing overt bias from the outcome if the very same method also removes bias from all the observed covariates and the estimated PS-logit. Moreover, if the outcome of interest depends in a nonlinear way on observed covariates, then balance should also be checked for transformed covariates (e.g., the quadratic, cubic, or interaction terms). If balance tests indicate (almost) perfect balance, then one can proceed with the outcome analysis, but if balance statistics reveal remaining imbalance on the PS-logit or some of the observed covariates, then the PS model needs to be improved. Include the previously deleted nonoverlapping cases, and restart with step 1 and try to improve the model by including or deleting terms (particularly include higher order and interaction terms of covariates that were not balanced by the initial PS estimate).
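Continuing the hypothetical running example, a minimal balance check of this kind (standardized mean differences and variance ratios for each covariate and the PS-logit, before and after adjustment) might look as follows; here the adjustment is inverse-propensity weighting with ATE weights derived from the PS model estimated in the sketch above, and the covariate list is an assumption:

# Standardized mean differences (Cohen's d) and variance ratios before and after
# an inverse-propensity-weighting adjustment based on the PS model estimated above.
w <- dat$z / dat$ps + (1 - dat$z) / (1 - dat$ps)            # ATE weights
balance.stats <- function(x, z, w = rep(1, length(x))) {
  m1 <- weighted.mean(x[z == 1], w[z == 1]); m0 <- weighted.mean(x[z == 0], w[z == 0])
  v1 <- sum(w[z == 1] * (x[z == 1] - m1)^2) / sum(w[z == 1])
  v0 <- sum(w[z == 0] * (x[z == 0] - m0)^2) / sum(w[z == 0])
  c(std.mean.diff = (m1 - m0) / sqrt((v1 + v0) / 2), var.ratio = v1 / v0)
}
covs   <- c("x1", "x2", "lps")                              # covariates and the PS-logit
before <- sapply(covs, function(v) balance.stats(dat[[v]], dat$z))
after  <- sapply(covs, function(v) balance.stats(dat[[v]], dat$z, w))
round(before, 2)    # initial imbalance
round(after, 2)     # after weighting: mean differences near 0, variance ratios near 1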

Sensitivity Analysis The causal interpretation of an estimated treatment effect rests on the strong ignorability assumption. If it is violated, then the treatment effect will be biased. Unfortunately, whether treatment assignment is ignorable with regard to the outcome of interest cannot be empirically tested. Indirect tests are possible if highly correlated nonequivalent

Figure 13.2 Balancing plots: Initial imbalance before PS adjustment (left panel) and balance after PS adjustment (right panel) of 25 covariates and the PS-logit (indicated by the asterisk).

outcomes that are not affected by the treatment are available or if a large enough subpopulation of treatment units actually did not receive treatment (Rosenbaum, 1984; Shadish, Cook, & Campbell, 2002). For example, if we are interested in the effect of a math coaching program on students' math achievement scores, then we can test the plausibility of the strong ignorability assumption indirectly on the students' reading scores (the nonequivalent outcome) because we do not expect any impact of the math coaching on reading achievement. A significant difference in the PS-adjusted means of the treatment and control groups' reading outcome would cast strong doubt on the ignorability assumption. Although (nearly) identical group means on the nonequivalent outcome cannot prove strong ignorability with respect to the outcome of interest, their equality at least increases the credibility of the assumption. Another indirect test can be performed if not all units who selected into the treatment condition receive treatment. For example, if some students who choose to participate in a math coaching program cannot attend the program (because of class size limitations or a shortage of teachers), then the plausibility of the ignorability assumption may be probed on the potential control outcomes by comparing the PS-adjusted math means of the untreated "treatment" students and the regular control students. However, such plausibility checks are frequently not possible and cannot verify the strong ignorability assumption. Sensitivity analyses that assess the potential impact of unobserved confounders on the treatment effect are another useful alternative (Rosenbaum, 1986; Rosenbaum, 2002, 2009; Rosenbaum & Rubin, 1983b). They investigate the following questions: How sensitive is the estimated treatment effect to a potentially unobserved covariate that is highly correlated with both treatment and potential outcomes? Or, alternatively, how strongly must an unobserved covariate be associated with treatment and potential outcomes for the treatment effect to vanish? Although sensitivity analyses demonstrate the treatment effect's sensitivity to unobserved confounders, they cannot indicate whether the effect estimate is actually biased—that is, whether the strong ignorability assumption is met. We may implement a sensitivity analysis either within the framework of parametric regression (Rosenbaum, 1986) or with nonparametric test procedures (Rosenbaum, 2002). Guo and Fraser (2010) provide a very accessible introduction to the latter and demonstrate their implementation using

available software in Stata (Gangl, 2007). A similar software package is also available in R (Keele, 2009). Given that we hardly know whether the strong ignorability assumption is actually met for an observational study, sensitivity analysis should always complement a PS analysis.
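As a simple illustration of the indirect test described above (not a formal Rosenbaum-type sensitivity analysis), the chosen PS adjustment can be applied to a nonequivalent outcome that the treatment should not affect. In the hypothetical running example, a simulated reading score is added for this purpose, and its inverse-propensity-weighted group difference should be close to zero:

# Falsification check on a nonequivalent outcome: the PS-adjusted group difference
# in an outcome the treatment cannot plausibly affect should be near zero if the
# observed confounders are adequately controlled.
dat$reading <- dat$x1 + 0.5 * dat$x2 + rnorm(nrow(dat))   # hypothetical nonequivalent outcome
summary(lm(reading ~ z, data = dat, weights = w))         # coefficient of z should be close to 0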

Conclusion In the last decade, individual case matching has become one of the standard tools for causal inference in observational studies. The ultimate goal of matching is to create treatment and control groups that are matched and, therefore, balanced on all observed covariates. For the matched data, the implicit hope is that the potential outcomes are independent of the selection mechanism, which would guarantee an unbiased estimate of the treatment effect—just as in a randomized experiment. However, a causal interpretation of the estimated treatment effect is warranted only if the strong ignorability assumption is actually met and the analytic method is correctly implemented. Most important for establishing a strongly ignorable selection mechanism is the measurement of the constructs that determine the selection process and the outcome of interest. If we fail to measure some of these confounding constructs, then hidden bias remains. Hidden bias also occurs when selection-relevant latent constructs are measured with error: Measurement error attenuates the covariates' potential for reducing selection bias. Thus, without reliable measures of all the confounding constructs, causal claims are hardly warranted. Next in importance is the estimation of a PS that balances all observed baseline differences between the treatment and control group. We can reasonably expect a complete removal of overt bias only if the PS balances all baseline covariates. If some covariates still show imbalance after the PS adjustment, then residual bias very likely results. We can, however, try to reduce this type of residual bias with an additional covariance adjustment in the outcome analysis. Because there is no guarantee that such a mixed strategy will succeed, it is advisable to estimate a PS that achieves balance on the observed covariates as far as possible. Given such a PS and an additional covariance adjustment in the outcome model, the impact of choosing a specific matching or PS method on the treatment effect and its standard error is relatively small (Schafer & Kang, 2008; Shadish, Clark, & Steiner, 2008). However, the relative unimportance of method choice does not imply that conclusions drawn from an observational

study do not depend on the choice of a specific method. Because of slight differences in method-specific treatment effects and standard errors, one method might indicate a significant treatment effect, whereas another might indicate no significant effect. In such a case, it is important to critically assess each method's appropriateness for the data set on hand. That is, which method achieves the best balance on observed covariates, is subject to the least residual bias, or relies on the weakest assumptions (e.g., functional form assumptions)? In this chapter we also discussed four different types of matching methods: individual case matching (on covariates or the PS), PS subclassification, inverse-propensity weighting, and regression estimation with propensity-related predictors. All these methods aim at removing baseline differences in observed covariates by equating the treatment and control groups' covariate distributions. Although we described the matching and PS techniques only with regard to the standard case of one treatment and one control group, they extend to multiple treatments and also to continuous treatment variables like the dosage of a treatment (Imai & Van Dyk, 2004; Imbens, 2000; Joffe & Rosenbaum, 1999). Another class of flexible approaches that also handles multiple and time-varying treatments is marginal mean modeling (Hong, 2009; Hong & Raudenbush, 2008; Murphy, van der Laan, Robins, & CPPRG, 2001; Orellana, Rotnitzky, & Robins, 2010; Robins, 1999).

Future Directions Although an enormous body of literature on matching, and PS matching in particular, has accumulated over the last few decades, there are still open issues. One concerns matching strategies in the context of clustered or multilevel data—for example, when students are nested within schools (Hong & Raudenbush, 2006; Hong, 2009). The application of PS methods for equating pretreatment group differences in multilevel data is more challenging than for non-nested data because selection processes may take place at all levels and may even work in different directions. For that reason, the modeling of the selection mechanism needs careful consideration of covariates at multiple levels. One matching strategy for clustered data might be local matching. For example, if students are nested within schools and treatment assignment or selection is at the school level, then we would like to match comparable schools from the same

neighborhood or at least the same school district as opposed to schools from very distant districts. In doing so, the hope is that even unobserved background characteristics of students, teachers, and the entire environment will be rather similar if we match locally neighboring units. The same applies to matching persons participating in, for example, a labor market program. Matching should take place within the same local labor market or, if that is not feasible, a comparable neighboring labor market. Although local matching is known to be a good strategy in practice, it is not clear how important it is for establishing strong ignorability (Cook, Shadish, & Wong, 2008)—particularly how well local matching does without any further matching of individual cases.

More research is also needed on PS techniques with regard to time-varying treatments (Hong & Raudenbush, 2009; Murphy et al., 2001). That is, units might receive different dosages or types of treatment over time (including no treatment for some periods). For example, some students may attend a math coaching program only for one quarter, whereas others attend for two or three quarters during the year. Even among students who received coaching for three quarters, treatment might vary over time—for example, if some students switch coaching classes and thus get different teachers.

More work is also required on balancing metrics and corresponding benchmarks. Currently, a variety of balancing metrics has been suggested, but it is not yet clear which balancing metrics work best under which conditions and, particularly, when the balance achieved is good enough. Moreover, the challenge of balancing baseline covariates increases as the number of covariates increases—for example, Hong and Raudenbush (2006) had more than 200 covariates. Achieving satisfactory balance on such a large number of covariates is nearly impossible, and finding a useful specification of the PS model is already a challenge on its own. The task gets even more complex if the data set has fewer observations than covariates or includes only very small samples of treated units (Kolar & Vehovar, 2012).

Finally, although PS techniques have become more and more popular for causal inference, they are not a magic bullet that remedies all the problems associated with standard regression methods. Despite the theoretical advantage of PSs with regard to design and analytic issues, it is not clear whether they actually perform better in practice than standard regression methods (i.e., regression analyses with originally observed covariates but without any


PS adjustments). Meta-analyses in epidemiology (Shah, Laupacis, Hux, & Austin, 2005; Stürmer et al., 2006), as well as within-study comparisons and reviews thereof (Cook, Shadish, & Wong, 2008; Glazerman et al., 2003; Shadish, Clark, & Steiner, 2008), demonstrate that PS and standard regression results barely differ, but more systematic meta-analyses on this topic are required. One reason for this negative finding might be that researchers are better trained in regression techniques than in PS techniques and, thus, cannot capitalize on the comparative advantage of PS approaches. We hope this chapter guides researchers toward improved PS analyses.

Acknowledgment The first author was supported in part by a grant from the W.T. Grant Foundation and grants R305U070003 and R305D100033 from the Institute of Education Sciences, U.S. Department of Education.

References Abadie, A., Drukker, D., Herr, J. L., & Imbens, G. W. (2004). Implementing matching estimators for average treatment effects in Stata. The Stata Journal, 4, 290–311. Abadie, A., & Imbens, G. W. (2002). Simple and biascorrected matching estimators. Technical Report. Department of Economics, University of California, Berkley. Althauser, R., & Rubin, D. B. (1970). The computerized construction of a matched sample. American Journal of Sociology, 76, 325–346. Bang, H., & Robins, J. M. (2005). Doubly robust estimation in missing data and causal inference models. Biometrics, 61, 962–972. Becker, S. O., Ichino, A. (2002). Estimation of average treatment effects based on propensity scores. The Stata Journal, 2, 358– 377. Berk, R. A. (2006). An introduction to ensemble methods for data analysis. Sociological Methods & Research, 34, 263–295. Berk, R. A. (2008). Statistical Learning from a Regression Perspective. New York: Springer. Bloom, H. S., Michalopoulos, C., Hill, C. J., & Lei, Y. (2002). Can Nonexperimental Comparison Group Methods Match the Findings from a Random Assignment Evaluation of Mandatory Welfare-to-Work Programs? Washington, DC: Manpower Demonstration Research Corporation. Cochran, W. G. (1968): The effectiveness of adjustment by subclassification in removing bias in observational studies. Biometrics, 24, 295–313. Cochran, W. G., & Rubin, D. B. (1973). Controlling bias in observational studies: A review. Sankhya: The Indian Journal of Statistics, Series A, 35, 417–446. Cook, T. D., Shadish, W. R., & Wong, V. C. (2008). Three conditions under which experiments and observational studies produce comparable causal estimates: New findings from


within-study comparisons. Journal of Policy Analysis and Management, 27(4), 724–750. Cook, T. D., & Steiner, P. M. (2010). Case matching and the reduction of selection bias in quasi-experiments: The relative importance of the pretest as a covariate, unreliable measurement and mode of data analysis. Psychological Methods, 15(1), 56–68. Cook, T. D., Steiner, P. M., & Pohl, S. (2009). Assessing how bias reduction is influenced by covariate choice, unreliability and data analytic mode: An analysis of different kinds of within-study comparisons in different substantive domains. Multivariate Behavioral Research, 44, 828–847. D’Orazio, M., Di Zio, M., & Scanu, M. (2006). Statistical Matching: Theory and Practice. Chichester: Wiley. Dehejia, R., & Wahba, S. (1999). Causal effects in nonexperimental studies: Re-evaluating the evaluation of training programs. Journal of the American Statistical Association, 94, 1053–1062. Dehejia, R., & Wahba, S. (2002). Propensity score-matching methods for nonexperimental causal studies. The Review of Economics and Statistics, 84(1): 151–161. Gangl, M. (2004). RBOUNDS: Stata module to perform Rosenbaum sensitivity analysis for average treatment effects on the treated. Statistical Software Components S438301, Boston College Department of Economics. Glazerman, S., Levy, D. M., & Myers, D. (2003). Nonexperimental versus experimental estimates of earnings impacts. The Annals of the American Academy, 589, 63–93. Gu, X., & Rosenbaum, P. R. (1993). Comparison of multivariate matching methods: Structures, distances, and algorithms. Journal of Computational and Graphical Statistics, 2, 405–420. Guo, S., & Fraser, M. W. (2010). Propensity score analysis. Statistical Methods and Applications. Thousand Oaks, CA: Sage. Hansen, B. B. (2004). Full matching in an observational study of coaching for the SAT. Journal of the American Statistical Association, 99, 609–618. Hansen, B. B, & Klopfer, S. O. (2006). Optimal full matching and related designs via network flows. Journal of Computational and Graphical Statistics, 15, 609–627. Hastie, T., Tibshirani, R., & Friedman, J. (2001). The Elements of Statistical Learning. New York: Springer. Heckman, J. J. (1974). Shadow prices, market wages, and labor supply. Econometrica, 42, 679–694. Heckman, J. J. (1979). Sample selection bias as a specification error. Econometriaca, 47, 153–161. Heckman, J. J. (2005). The scientific model of causality. Sociological Methodology, 35(1), 1–98. Heckman, J. J., Ichimura, H., & Todd, P. E. (1997). Matching as an econometric evaluation estimator: Evidence from evaluating a job training programme. Review of Economic Studies, 64, 605–654. Heckman, J. J., Ichimura, H., & Todd, P. E. (1998). Matching as an econometric evaluation estimator. Review of Economic Studies, 65, 261–294. Ho, D. E., Imai, K., King, G., & Stuart, E. A. (2007). Matching as nonparametric preprocessing for reducing model dependence in parametric causal inference. Political Analysis, 15, 199–236. Ho, D. E., Imai, K., King, G., & Stuart, E. A. (in press). MatchIt: Nonparametric preprocessing for parametric causal inference. Journal of Statistical Software.


Holland, P. W. (1986). Statistics and causal inference. Journal of the American Statistical Association, 81, 945–970. Hong, G. (2009). Marginal mean weighting through stratification: Adjustment for selection bias in multi-level data. Unpublished Manuscript. Hong, G., & Raudenbush, S. W. (2006). Evaluating kindergarten retention policy: A case study of causal inference for multilevel observational data. Journal of the American Statistical Association, 101, 901–910. Hong, G., & Raudenbush, S. W. (2008). Causal inference for time-varying instructional treatments. Journal of Educational and Behavioral Statistics, 33(3), 333–362. Horvitz, D. G., & Thompson, D. J. (1952). A generalization of sampling without replacement from a finite universe. Journal of the American Statistical Association, 47, 663–685. Imai, K. & van Dyk, D. A. (2004). Causal inference with general treatment regimes: Generalizing the propensity score. Journal of the American Statistical Association, 99, 854–866. Imbens, G. W. (2000). The role of the propensity score in estimating dose-response functions. Biometrika, 87, 706–710. Imbens, G. W. (2004). Nonparametric estimation of average treatment effects under exogeneity: A review. Review of Economics and Statistics, 86(1), 4–29. Joffe, M. M., & Rosenbaum, P. R. (1999). Propensity scores. American Journal of Epidemiology, 150, 327–333. Kang, J., & Schafer, J. L. (2007). Demystifying double robustness: a comparison of alternative strategies for estimating population means from incomplete data. Statistical Science, 26, 523–539. Kaplan, D. (1999). An extension of the propensity score adjustment method for the analysis of group differences in MIMIC models. Multivariate Behavioral Research, 34(4), 467–492. Kaplan, D. (2009). Causal inference in non-experimental educational policy research. In D. N. Plank, W. E. Schmidt, & G. Sykes (Eds.), AERA Handbook on Education Policy Research. Washington, DC: AERA. Kosanke, J., & Bergstralh, E. (2004). Match cases to controls using variable optimal matching: URL http://mayoresearch.mayo.edu/mayo/research/biostat/upload/vmatch.sas and Match 1 or more controls to cases using the GREEDY algorithm: URL http://mayoresearch.mayo.edu/mayo/research/biostat/upload/gmatch.sas. Keele, L. J. (2009). rbounds: Perform Rosenbaum bounds sensitivity tests for matched data. R package. http://CRAN.Rproject.org/package=rbounds. Kolar, A., & Vehovar, V. (2012). Small samples and propensity score methods. Working Paper. Krzanowski, W. J. (2000). Principles of Multivariate Analysis: A User’s Perspective. New York: Oxford University Press. Lee, B., Lessler, J., & Stuart, E. A. (2010). Improving propensity score weighting using machine learning. Statistics in Medicine, 29, 337–346. Leuven, E., & Sianesi, B. (2003). PSMATCH2. Stata module to perform full Mahalanobis and propensity score matching, common support graphing, and covariate imbalance testing. Statistical Software Components S432001, Boston College Department of Economics. Little, R. J. A., & An, H. (2004). Robust likelihood-based analysis of multivariate data with missing values. Statistica Sinica, 14, 949–968. Luellen, J. K, Shadish, W. R., & Clark, M.H. (2005). Propensity scores: An introduction and experimental test. Evaluation Review, 29, 530–558.

Lunceford, J. K., & Davidian, M. (2004). Stratification and weighting via propensity score in estimation of causal treatment effects: A comparative study. Statistical Medicine, 23, 2937–2960. Maddala, G. S. (1983). Limited-dependent and Qualitative Variables in Econometrics. Cambridge: Cambridge University Press. Murphy, S. A., van der Laan, M. J., Robins, J. M., & CPPRG (2001). Marginal mean models for dynamic regimes. Journal of the American Statistical Association, 96, 1410–1423. McCaffrey, D. F., Ridgeway, G., & Morral, A. R. (2009). Propensity score estimation with boosted regression for evaluating causal effects in observational studies. Psychological Methods, 9, 403–425. Morgan, S. L., & Winship C. (2007). Counterfactuals and Causal Inference: Methods and Principles for Social Research. Cambridge: Cambridge University Press. Parsons, L. S. (2001). Reducing bias in a propensity score matched-pair sample using greedy matching techniques. SAS Institute Inc., Proceedings of the Twenty-Sixth Annual SAS ®Users Group International Conference, Paper 21426. Cary, NC: SAS Institute Inc., URL http://www2. sas.com/proceedings/sugi26/p214-26.pdf. Pearl, J. (2009). Causality: Models, Reasoning, and Inference (2nd ed.). Cambridge: Cambridge University Press. Pohl, S., Steiner, P. M., Eisermann, J., Soellner, R., & Cook, T. D. (2009). Unbiased causal inference from an observational study: Results of a within-study comparison. Educational Evaluation and Policy Analysis, 31(4), 463–479. Potter, F.J. (1990). A Study of Procedures to Identify and Trim Extreme Sampling Weights. In: Proceedings of the Section on Survey Research Methods of the American Statistical Association, San Francisco, California. (pp. 225–230). Journal of the American Statistical Association. Rässler, S. (2002). Statistical Matching: A Frequentist Theory, Practical Applications and Alternative Bayesian Approaches. New York: Springer. Robins, J. M. (1999). Associations, causation, and marginal structural models. Synthese, 101, 151–179. Robins, J. M., & Rotnitzky, A. (1995). Semiparametric efficiency in multivariate regression models with missing data. Journal of the American Statistical Association, 90, 122–129. Robins, J. M., & Rotnitzky, A. (2001). Comment on ‘Inference for semiparametric models: Some questions and answers’ by Bickel and Kwon. Statistica Sinica, 11, 920–936. Robins, J. M., Rotnitzky, A., & Zhao, L. P. (1995). Analysis of semiparametric regression models for repeated outcomes in the presence of missing data. Journal of the American Statistical Association, 90, 106–121. Rosenbaum, P. R. (1984). From association to causation in observational studies: The role of tests of strongly ignorable treatment assignment. Journal of the American Statistical Association, 79, 41–48. Rosenbaum, P. R. (1986). Dropping out high school in the United States: An observational study. Journal of Educational Statistics, 11, 207–224. Rosenbaum, P. R. (2002). Observational Studies (2nd Ed.). New York: Springer-Verlag. Rosenbaum, P. R. (2009). Design Observational Studies. New York: Springer-Verlag. Rosenbaum, P. R., & Rubin, D. B. (1983a). The central role of the propensity score in observational studies for causal effects. Biometrika, 70 (1), 41–55.



Rosenbaum, P. R. & . Rubin, D. B. (1983b). Assessing sensitivity to an unobserved binary covariate in an observational study with binary outcome. Journal of the Royal Statistical Society, B, 45, 212–218. Rosenbaum, P. R., & Rubin, D. B. (1984). Reducing bias in observational studies using subclassification on the propensity score. Journal of the American Statistical Association, 79, 516– 524. Rosenbaum, P. R., & Rubin, D. B. (1985). Constructing a control group using multivariate matched sampling methods that incorporate the propensity score. The American Statistician, 39, 33–38. Rosenbaum, P. R., & Rubin, D. B. (1985). The bias due to incomplete matching. Biometrics, 41, 103–116. Rubin, D. B. (1974). Estimating causal effects of treatments in randomized and nonrandomized studies. Journal of Educational Psychology, 66, 688–701. Rubin, D. B. (1997). Estimating causal effects from large data sets using propensity scores. Annals of Internal Medicine, 127, 757–763. Rubin, D.B. (1978). Bayesian inference for causal effects: The role of randomization. Annals of Statistics, 6, 34–58. Rubin, D. B. (1979). Using multivariate matched sampling and regression adjustment to control bias in observational studies. Journal of the American Statistical Association, 74, 318–328. Rubin, D. B. (2001). Using propensity scores to help design observational studies: Application to the tobacco litigation. Health Services and Outcomes Research Methodology, 2, 169–188. Rubin, D. B. (2006). Matched Sampling for Causal Effects. Cambridge: Cambridge University Press. Rubin, D. B., & Thomas, N. (1996). Matching using estimated propensity scores: Relating theory to practice. Biometrics, 52, 249–264. Rubin, D. B., & Thomas, N. (2000). Combining propensity score matching with additional adjustments for prognostic covariates. Journal of the American Statistical Association, 95, 573–585. Schafer, J. L., & Kang, J. (2008). Average causal effects from nonrandomized studies: A practical guide and simulated example. Psychological Methods, 13(4), 279–313. Sekhon, J. S. (2011). Multivariate and propensity score matching software with automated balance optimization: the matching package for R. Journal of Statistical Software, 42(7), 1–52. Setoguchi, S., Schneeweiss, S., Brookhart, M. A., Glynn, R. J., & Cook, E. F. (2008). Evaluating uses of data mining techniques in propensity score estimation: A simulation study. Pharmacoepidemiology and Drug Safety, 17, 546–555. Shadish, W.R. (in press). Campbell and Rubin: A primer and comparison of their approaches to causal inference in field settings. Psychological Methods. Shadish, W. R., Clark, M. H., & Steiner, P. M. (2008). Can nonrandomized experiments yield accurate answers? A randomized experiment comparing random to nonrandom assignment. Journal of the American Statistical Association, 103, 1334–1343. Shadish, W. R., Cook, T. D., & Campbell, D. T. (2002). Experimental and Quasi-Experimental Designs for Generalized Causal Inference. Boston: Houghton-Mifflin. Shadish, W. R., & Steiner, P. M. (2010). A primer on propensity score analysis. Newborn and Infant Nursing Reviews, 10(1), 19–26. Shah, B. R., Laupacis, A., Hux, J. E., & Austin, P. C. (2005). Propensity score methods gave similar results to traditional


regression modeling in observational studies: a systematic review. Journal of Clinical Epidemiology, 58, 550–559. Steiner, P. M., Cook, T. D., & Shadish, W. R. (2011). On the importance of reliable covariate measurement in selection bias adjustments using propensity scores. Journal of Educational and Behavioral Statistics, 36(2), 213–236. Steiner, P. M., Cook, T. D., Shadish, W. R., & Clark, M. H. (2010). The importance of covariate selection in controlling for selection bias in observational studies. Psychological Methods, 15(3), 250–267. Steyer, R. (2005). Analyzing individual and average causal effects via structural equation models. Methodology, 1, 39–64. Steyer, R., Gabler, S., von Davier, A. A., Nachtigall, C., & Buhl, T. (2000). Causal regression models I: Individual and average causal effects. Methods of Psychological Research Online, 5(2), 39–71. Steyer, R., Gabler, S., von Davier, A. A. & Nachtigall, C. (2000). Causal regression models II: Unconfoundedness and causal unbiasedness. Methods of Psychological Research Online, 5(3), 55–87. Stuart, E. A. (2010). Matching methods for causal inference: A review and a look forward. Statistical Sciences, 25(1), 1–21. Stuart, E. A., & Rubin, D. B. (2007). Best practices in quasiexperimental designs: matching methods for causal inference. In: Best Practices in Quantitative Methods, Chapter 11, Osborne J (Ed.). (pp. 155–176), Thousand Oaks, CA: Sage Publications. Stürmer, T., Joshi, M., Glynn, R. J., Avorn, J., Rothman, K. J., & Schneeweiss, S. (2006). A review of the application of propensity score methods yielded increasing use, advantages in specific settings, but not substantially different estimates compared with conventional multivariable methods. Journal of Clinical Epidemiology, 59, 437–447. Wood, S. N. (2006). Generalized Additive Models. An Introduction with R. Boca Raton: Chapman & Hall/CRC.

Statistical Symbols

α    intercept in a regression equation
β    vector of regression coefficients
d    Cohen's d; standardized mean difference
ε    error term in a regression equation
e(X)    propensity score
l(X)    logit of the propensity score
M    fixed number of matches for each treatment (or control) case
N    total number of cases
NC    number of control cases
NT    number of treatment cases
ρ    reliability coefficient
τ    average treatment effect for the overall target population (ATE)
τT    average treatment effect for the treated (ATT)
X    vector of observed covariates
Yi    observed outcome
Yi0    potential control outcome; the outcome of unit i under the control condition (Zi = 0)
Yi1    potential treatment outcome; the outcome of unit i under the treatment condition (Zi = 1)


Zi    indicator variable of treatment condition; Zi = 0 if unit i is in the control condition and Zi = 1 if unit i is in the treatment condition

Key Terms and Concepts

Average treatment effect for the overall target population (ATE): The average treatment effect (mean difference in potential treatment and control outcomes) for the treated and untreated populations together.

Average treatment effect for the treated (ATT): The average treatment effect (mean difference in potential treatment and control outcomes) for the treated population only.

Balance: Equality of the treatment and comparison groups with respect to the set of observed covariates. Groups are perfectly balanced if they have an identical joint distribution of observed covariates.

Hidden bias: That part of the total selection bias that is caused by unobserved covariates.

Matching: A statistical technique for equating groups—for example, a treatment and a nonequivalent control group. Matched groups should be balanced on all observed covariates.

Overlap: The treatment and control groups' region of common support on the propensity score or the set of observed covariates. Overlap is required for matching treatment and control cases; without overlap, no comparable treatment and control matches are available.

Overt bias: That part of the total selection bias that is caused by observed covariates.

Potential outcomes: The potential treatment outcome is a unit's outcome if assigned to the treatment condition; the potential control outcome is a unit's outcome if assigned to the control condition. Depending on treatment assignment, only one of the two potential outcomes is observed; the other remains hidden.

Propensity score (PS): A unit's conditional probability of being assigned to or selecting into the treatment condition (as opposed to the control condition), given a set of observed covariates.

Selection bias: Bias that occurs when selection processes (e.g., administrator, third-person, or self-selection) result in heterogeneous groups that differ in observed or unobserved characteristics.

Sensitivity analysis: An analysis that probes the treatment effect's sensitivity to unobserved confounding covariates.

Strong ignorability: The strong ignorability assumption, also called the conditional independence assumption, is one of the main conditions for obtaining an unbiased estimate of the treatment effect. It is met if valid and reliable measures of all confounding constructs are available and if the conditional probability of being in the treatment group, given the set of observed covariates, is strictly between zero and one.
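In the notation of the Statistical Symbols list, the central quantities above can be restated compactly as follows. This is a standard potential-outcomes summary of the verbal definitions, added here for reference rather than taken from the chapter itself.

\[ \tau = E[Y_i^1 - Y_i^0] \quad \text{(ATE)} \]
\[ \tau_T = E[Y_i^1 - Y_i^0 \mid Z_i = 1] \quad \text{(ATT)} \]
\[ e(X) = \Pr(Z_i = 1 \mid X) \quad \text{(propensity score)} \]
\[ (Y_i^0, Y_i^1) \perp Z_i \mid X \ \text{ and } \ 0 < e(X) < 1 \quad \text{(strong ignorability)} \]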



CHAPTER 14

Designs for and Analyses of Response Time Experiments

Trisha Van Zandt and James T. Townsend

Abstract This chapter provides historical background and a review of the design of response time experiments in psychology and human performance research. It also presents the most common techniques for the analysis of response time data, focusing in particular on parameter estimation and some “meta-theoretic” approaches for testing cognitive architecture. Key Words: Response Time, Experimental Design, Cognitive Modeling, Data Analysis.

Response times, sometimes referred to as reaction times or latencies, are measured as the time elapsed between the onset of a stimulus and the response to that stimulus. Response times (RTs) are very widely used in the study of human performance. In cognitive psychology and neuroscience, RTs are used to develop and test models of cognitive processing and brain function (e.g., Ratcliff & Smith, 2004). In ergonomics and human factors, sometimes called engineering psychology, they are used to evaluate training regimens, user interface design, vehicle operation performance and to perform task analyses (e.g., Borowsky, Oron-Gilad, & Parmet, 2009; Stevens, Brennan, Petocz, & Howell, 2009; Sullivan, Tsimhoni, & Bogard, 2008). In clinical psychology, psychiatry, and education, they are used to evaluate medical conditions and assist in diagnoses of such conditions as schizophrenia, learning disorders, and other psychological disorders (e.g., Heiervang & Hugdahl, 2003; Querne & Berquin, 2009). Modeling the processes that give rise to RT data forms the foundation for much work in


cognitive psychology (Luce, 1986; Townsend & Ashby, 1983). This chapter discusses the design of RT experiments and how RTs may be analyzed. It should be noted not only that the vista of research involving RTs is vast but also that the design of any experiment depends less on the variable to be measured than on the question that experiment is intended to answer. Some questions need experimental designs in which RT is controlled. Others require designs where RT is the dependent measure. Therefore, we cannot hope to provide a comprehensive index of all issues and designs relevant to RT data, but we can provide a broad summary of the kinds of designs that are likely to be most useful in varying circumstances. We begin this chapter with a history of RT measurements and the logic behind using RT to discover the structure of mental events. We then present the most common experimental designs, grouping them by the relationship between stimuli and responses. We will then discuss methods of data analysis, including parameter estimation and how

RTs are used to test hypotheses about the structure of a cognitive task. Historically, research that uses RTs can be roughly divided into two major and often overlapping realms. First, RTs have been used to describe changes in performance under different experimental conditions, usually in applied situations. We will present some of these descriptive analyses in the first half of the Analysis section. However, the most influential use of RT data has been to answer a theoretical question or test a theoretical hypothesis—for example, in determining characteristics of cognitive information processing systems. Theoretical approaches can themselves be classified into (1) verbally based models or theories, (2) models expressed as specific stochastic processes with psychologically meaningful parameters, and (3) “meta-theories” in which entire classes of models based on one or more psychological principles are tested via theory-driven experimental methodologies. Throughout this chapter we will emphasize the use of RTs in evaluating theories of mental function. Our main focus will be on modeling and meta-theory, approaches that have been most useful in answering questions about how the mind works. Although the modeling approach is presently far more popular than the meta-theoretic approach, the meta-theoretic approach appeared first, and the modeling approach derived from it. Hence, we will discuss the development of the meta-theoretic approach in our brief history and expand on it later in the second half of the Analysis section. We conclude the chapter by outlining current approaches to RT data and developing methodology.

History: From Astronomy to the Arrangement of Mental Processes Some of the earliest recorded attempts to evaluate task performance with response time were made by seventeenth-century astronomers. They referred to the personal equation to describe individual differences in the times taken by different observers to estimate the transit times of stars as they moved across the visual field. Exactly measuring the personal equation was important because astronomers hoped to calibrate their equipment to cancel out the effects of these individual differences and so arrive at more accurate measurements of the stars (Duncombe, 1945). Astronomer and mathematician Friedrich Bessel (1784–1846) was even more interested in why there should be such a personal equation. Using what we would now

recognize as a psychological approach, he formulated a hypothesis about the interactions between the visual and the auditory systems that we now refer to as the doctrine of prior entry (Shore & Spence, 2005). Over the most recent century, time has been used to explore hypotheses about the architecture of mental processing. Such uses of RT are sometimes called mental chronometry, but it is important to recognize that mental chronometry includes not only the measurement of RTs but also the careful control of stimulus exposure times, which determines how much information gets into the perceptual system. The first application of mental chronometry was performed by Donders, who proposed the method of subtraction (Donders, 1868/1969). Donders was inspired by von Helmholtz’s (1850) work demonstrating that the time taken by a neuron to transmit information could be measured. If it takes time for nerves to send information around the body, then perhaps it might be possible to estimate the time taken by different components of a mental task. These components are called stages, and the tasks he considered are now called simple reactions, go/no-go reactions, and choice reactions. In a simple reaction an observer responds as soon as he sees any stimulus at all (such as red and green lights) with a single response, such as depressing a telegraph key. Donders reasoned that simple responses require only a perceptual encoding stage, where the perceptual system apprehends the presentation of a stimulus, and a response execution stage, where the key is depressed. By contrast, in a go/no-go reaction, an observer responds to only one of two possible stimuli, pressing the key only when, say, the green light is presented. This task requires perceptual encoding and response execution like the simple reaction and an additional stage of stimulus identification in which observers determine the color of the presented stimulus. For a choice reaction, an observer makes one keypress to a red stimulus but a different keypress to a green stimulus. This task requires all the stages of a go/no-go reaction plus a response selection stage in which the observer chooses the key appropriate to the color of the light presented (see Fig. 14.1). To apply the method of subtraction, Donders had to make three important assumptions. The first was that the different stages of each task were arranged serially. That is, only one stage could be operating at any time, and each stage had to be completed before the next could begin (see Fig. 14.1, top panel). The second assumption was the independence of stages. For example, however long the perceptual


[Figure 14.1 Serial processing (top panel) and parallel processing (bottom panel) of stages in Donders’ choice reaction task. Each panel depicts the stages Perceptual Encoding, Stimulus Identification, Response Selection, and Response Execution.]

encoding stage, the response selection stage will not be affected. The third assumption was that the stages in each task did not change as other stages were added. So, for example, the time required for response execution was the same whether or not there was a stage of response selection. This third assumption is sometimes called pure insertion. Given these three assumptions, because the three tasks differ only by a single processing stage, the time required for stimulus identification should equal the go/no-go RT minus the simple RT. Similarly, the time for response selection should equal the choice RT minus the go/no-go RT. Donders’ method of subtraction is the earliest example of a meta-theoretical approach in experimental psychology, and many variants of it are still in use today. The method of subtraction provides a statement about an entire class of models: models with serial processing stages, independence of those processing stages, and invariance of processing with the addition of stages (pure insertion). Furthermore, the assumption that a mental process could be added or subtracted by the experimenter led to many of the experimental designs discussed in the next section. Very little was done with Donders’ idea until the 1960s when Sternberg published two very significant papers (Sternberg, 1966, 1969). The most influential article, which appeared roughly 100 years after Donders’ historic studies, introduced


the additive factors method (Sternberg, 1969). This method, like Donders’, proposed that cognitive architecture could be examined by looking at the difference between RTs in different experimental conditions. In particular, the method required that the experimenter identify experimental factors that selectively influenced different stages of processing. For example, consider a short-term memory search task in which observers are required to determine whether a target stimulus (e.g., a numeral) is one of a previously memorized set of stimuli called the search set. The plot of mean RT as a function of search set size is called the search set function. The search set function is usually a linear increasing function of search set size, as if the addition of each search set member increases the number of comparisons between the search set and the target. Sternberg (1966) therefore proposed (not on the basis of the additive factors method) that this task is accomplished by a serial process in which observers compare the target to each member of the memorized search set one at a time. Sternberg (1969) applied the additive factors method to examine all the stages of processing in the memory search task. He reasoned that the overall task could be broken into at least two serial processing stages. The first was a perceptual encoding stage and the second was the search stage in which the


target was compared to each of the stimuli held in memory. His idea, like Donders’, was to prolong each of these stages and examine the increases in mean RT. Making the target difficult to see would prolong the encoding stage but (he assumed) not influence the search stage. Similarly, increasing the memory load, the size of the search set, would prolong the search stage but not influence the encoding stage. Sternberg (1969) asked his subjects to make memory decisions in a factorial design that varied both memory load and stimulus visibility. He then implemented the additive factors method by plotting the search set functions separately for each stimulus visibility condition. The search set functions under different visibility conditions were parallel; there was no interaction between visibility and memory load. That is, the effects of visibility and search set size were additive: The extent to which RT was prolonged by poor visibility was the same regardless of the size of the search set. Similarly, the extent to which RT was prolonged by increasing the size of the search set was the same regardless of the visibility condition. This noninteraction, this additive effect of the two factors (and not the linearity of the mean RTs), supported the notion of serially organized and selectively influenced stages of processing: encoding, followed by search. The additive factors method had a huge influence across the field of experimental psychology. Later work generalized the additive factors method from the simple serial structures of Donders and Sternberg to other kinds of cognitive architectures (e.g., Schweickert, 1978; Schweickert & Townsend, 1989; Townsend, 1984; Townsend & Wenger, 2004b). One alternative architecture is parallel processing, where all stages operate simultaneously (see Fig. 14.1, bottom panel). Parallel processing has always been considered the antithesis of serial processing and, along with serial processing, has been the most investigated. Most applications of additive factors logic use measurements of mean RT. More modern treatments of additive factors, including factorial methodologies, make use of the detailed RT probability distributions (Ashby & Townsend, 1980; Balakrishnan, 1994; Roberts & Sternberg, 1992; Townsend & Nozawa, 1995; Schweickert, Giorgini, & Dzhafarov, 2000). Factorial approaches are meta-theoretic in that they attempt to rule out entire classes of models with a set of data—for example, all parallel models for a certain kind of task. Another line of RT modeling is less focused on questions

of mental architecture but, rather, on theoretically motivated models of the cognitive system. These models, called sequential sampling models, can also predict the entire distribution of RTs, as well as the accuracy of responding (e.g., Ratcliff & Smith, 2004). Many modern experiments are designed to test predictions generated by the sequential sampling models. Thus, there are many aspects of experimental design and RT analysis that have been derived from consideration of this sort of data-generating mechanism. Other work has followed the additive factors tradition of testing more general cognitive architectures without being very specific about the kinds of mechanisms that might give rise to different processing stage durations. In the rest of this chapter we will discuss both approaches, emphasizing how RT experiments are designed from both theoretical perspectives and touching briefly on how RTs from such experiments are analyzed.
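As a concrete illustration of the additive factors logic discussed above, the following sketch simulates mean RTs from a two-stage serial model in which stimulus visibility selectively prolongs encoding and memory load selectively prolongs search, and then computes the 2 x 2 interaction contrast. All parameter values are hypothetical; the point is only that selective influence plus serial (additive) stages drives the contrast toward zero, which is what the additive factors method tests.

import numpy as np

rng = np.random.default_rng(7)

def simulate_serial_rt(n_trials, encode_ms, search_ms, residual_ms=300.0):
    # Two-stage serial model: stage durations simply add.
    # Each stage duration is drawn from an exponential with the given mean.
    encode = rng.exponential(encode_ms, n_trials)
    search = rng.exponential(search_ms, n_trials)
    return residual_ms + encode + search

# Factor A (visibility) selectively prolongs encoding;
# Factor B (memory load) selectively prolongs search.
encode_means = {"high_vis": 80.0, "low_vis": 140.0}
search_means = {"load_2": 120.0, "load_6": 280.0}

cell_means = {}
for a, enc in encode_means.items():
    for b, srch in search_means.items():
        cell_means[(a, b)] = simulate_serial_rt(20000, enc, srch).mean()

# Interaction contrast: (LL - LH) - (HL - HH); approximately 0 under additivity.
ic = (cell_means[("low_vis", "load_6")] - cell_means[("low_vis", "load_2")]) \
   - (cell_means[("high_vis", "load_6")] - cell_means[("high_vis", "load_2")])
print(f"mean interaction contrast: {ic:.2f} ms (additivity predicts 0)")

A reliably nonzero contrast in real data would instead argue against this class of serial, selectively influenced stage models.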

Design The design of an RT experiment can be classified according to the degree of information compression between the number of possible stimuli that can be presented and the number of possible responses that can be made. Simple response tasks, which have only a single response for a potentially very large number of stimuli, have the highest degree of information compression. Identification tasks, which have N different responses for each of N different stimuli, have no information compression. In this section we will discuss the different kinds of RT experiments, outlining for each the major variables that influence RT and potentially confounding variables. There are some variables that influence RT that we will not discuss. These include level of arousal or fatigue, extent of practice, gender, handedness, intelligence, the effect of drugs, and presentation to different brain hemispheres. The interested reader may consult Welford and Brebner (1980) for older but still accurate and comprehensive reviews of some of these additional variables.

Simple Reaction Tasks Simple RT designs are characterized by having only a single response option, although many different stimuli may be presented. For example, in Donders’ simple RT experiment, there were two stimuli: a red light and a green light. However, there was only one response, which was to press


a key when a light had appeared, regardless of its color. Simple RT tasks are sometimes called detection tasks, because the observer’s job is simply to detect the presence of a stimulus no matter what it is. A number of variables influence simple RT, most importantly the stimulus modality (the sensory system that encodes the stimulus), the intensity of the stimuli, and the temporal structure of the trials.

stimulus modality and intensity Stimuli presented auditorily elicit significantly faster responses than stimuli presented visually (Woodworth & Schlosberg, 1954, p. 16), but this difference decreases as the intensities (detectabilities) of the stimuli increase (Kohfeld, 1971). The difference attenuates because simple RT decreases rapidly overall as intensity increases, attaining a minimum simple RT for both visual and auditory stimuli somewhere between 150 and 200 milliseconds. This decrease is so reliable that, for intensity defined on a physical scale (e.g., amplitude of a tone), it can be captured by a relationship known as Piéron’s law (Piéron, 1920). Piéron’s law states that mean RT is equal to a + bI^(−c), where a, b, and c are parameters, all greater than zero, to be estimated from the data. We can think of simple RT as being influenced more generally by stimulus energy. Energy is computed as the intensity I of the stimulus multiplied by its duration t. Most studies have found that as the energy in a display increases, mean RT decreases (Teichner & Krebs, 1972; Ueno, 1978). For visual signals of very short duration (t < 20 ms, approximately), mean RT is approximately equal to a + b(It)^(−c), but for longer durations, Piéron’s law holds (Mansfield, 1973). The point to remember is that for a range of stimulus durations, intensity I can be traded for increases in duration t (or vice versa) to produce the same effect on RTs. If the stimulus is presented for a fixed duration (say, 50 ms), the total amount of energy presented to the observer is also fixed (at 50I). However, if the stimulus remains on until a response, the amount of energy continues to increase over time until the response is executed. As energy increases, the observer will eventually be able to see the stimulus, a process referred to as “summation,” which is closely related to the evidence accumulation models we will discuss below. Thus response-terminated stimuli introduce a confound into the design: longer RTs mean that some stimuli have been presented for longer durations. Presentations with longer RTs have higher energy displays, so

stimulus energy is no longer a controlled, independent variable. This may place important restrictions on the kinds of conclusions that can be drawn from the data. Constant energy displays introduce a not-insignificant problem in the design of simple RT experiments. Consider, for example, what could happen when a low-intensity stimulus is presented for a fixed duration. This low-energy display may be undetectable by an observer, and so he will not make a response (if he is performing the task correctly). If the experimenter has not designed the experimental trials in light of this possibility, then the experiment will stop at this point: The observer will wait indefinitely for a stimulus that has already been presented, and the next experimental trial can’t begin until he responds that he has seen the stimulus that he couldn’t see. For this reason, simple RT designs may use one of several strategies to ensure the experiment will continue. In addition to using stimuli that are response-terminated, this includes using a stimulus stream that continues even if a response is not made, presenting stimuli at fixed, predictable points in time, and/or using warning signals. All these possibilities are considerations for the temporal structure of the simple RT task.
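As a small illustration of Piéron’s law, the sketch below generates hypothetical mean RTs from RT = a + b * I^(−c) and recovers the parameters by nonlinear least squares. The parameter values, noise level, and intensity units are illustrative assumptions rather than values from the studies cited above.

import numpy as np
from scipy.optimize import curve_fit

def pieron(I, a, b, c):
    # Piéron's law: mean RT = a + b * I**(-c), with a, b, c > 0.
    return a + b * I ** (-c)

# Hypothetical data: intensities in arbitrary units, RTs in milliseconds.
rng = np.random.default_rng(0)
intensity = np.linspace(1.0, 50.0, 12)
true_a, true_b, true_c = 180.0, 220.0, 0.6
mean_rt = pieron(intensity, true_a, true_b, true_c) + rng.normal(0, 5, intensity.size)

# Fit the three parameters; p0 supplies rough starting values.
(a_hat, b_hat, c_hat), _ = curve_fit(pieron, intensity, mean_rt, p0=(150.0, 200.0, 0.5))
print(f"estimated a = {a_hat:.1f} ms, b = {b_hat:.1f}, c = {c_hat:.2f}")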

temporal structure A stimulus stream that continues even in the absence of a response will be either random, with stimuli occurring at unpredictable times, or nonrandom. A nonrandom stream presents signals at the end of fixed time intervals, such as every 500 milliseconds or every 3 seconds. The difference between the onset of a signal and the onset of the next signal is sometimes called the stimulus onset asynchrony. A random stream uses stimulus onset asynchronies drawn at random on each trial. A nonrandom stream implicitly informs the observer about when stimuli are presented by creating a rhythmic context for the task. Such rhythms create a temporal expectation about when the next signal will be presented (Large & Jones, 1999). There are concerns, however, that this fundamentally changes the nature of the task from one of detection to one of timing, in which observers tend to respond by rhythmic tapping. To prevent this, researchers can introduce “catch trials” on which no signal is presented. The number of times observers respond on catch trials (the number of anticipations or false alarms) can be used as an indication of the extent to which they are timing their responses


rather than responding to a detected signal. Even with catch trials, however, RTs are susceptible to the rate at which rhythmic stimuli are presented (Van Zandt & Jones, 2012), which means that nonrandom streams may confound the influence of other variables on simple RT. Because of concerns like this, most simple RT designs use random streams. A random stream with stimuli of fixed duration is called a vigilance task. In a vigilance task, one important dependent variable is the number of misses the observer makes as a function of the amount of time he or she has been performing the task. If the stimulus onset asynchronies are quite large, resulting in “rare” stimulus events, a vigilance task can be quite tiring. The number of misses an observer makes will increase as the task duration increases, an effect called the vigilance decrement. For shorter stimulus onset asynchronies, the vigilance decrement is not as severe, presumably because the higher event rate results in a higher level of arousal in the observer. The distribution of the stimulus onset asynchrony also affects RT. The most common methods of selecting the stimulus onset asynchrony are to select at random from a small set of durations or to generate a random duration from a uniform or exponential distribution. The reason why the choice of distribution is important is that the length of the stimulus onset asynchrony can provide information about when the signal will occur. For example, if the stimulus onset asynchrony is selected at random from any distribution on a fixed interval (e.g., from 0 to 1000 ms) and the observer has waited for 800 milliseconds, then she knows the stimulus must appear within the next 200 milliseconds. This will result in a decrease in RTs to longer stimulus onset asynchronies, a decrease that will be especially pronounced if only a few possible stimulus onset asynchronies are used (e.g., 200 ms, 400 ms, 600 ms, 800 ms, and 1000 ms; Klemmer, 1956). To eliminate this problem, which is usually attributed to increased response preparation or increased attention as uncertainty about target presentation time decreases, some researchers have used stimulus onset asynchronies drawn from an exponential distribution (e.g., Green & Luce, 1971). The exponential distribution has the peculiar statistical property that, regardless of the amount of time the observer has waited for a signal, the likelihood that the signal will appear in the next instant is constant. There is no way, then, to predict the onset of the signal from the amount of time that has elapsed. This kind of structure eliminates the

problem of varying RTs caused by stimulus timing or anticipation, although RT still varies with the length of the stimulus onset asynchrony (e.g., Green & Luce, 1971). One drawback is that the exponential distribution introduces the likelihood that some trials can be delayed by (rare) very long stimulus onset asynchronies.
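The memoryless property of the exponential distribution can be checked by simulation. The sketch below estimates the probability that the signal arrives within the next 100 ms, given that the observer has already waited t ms, for exponentially versus uniformly distributed stimulus onset asynchronies; the mean onset asynchrony and the waiting times are arbitrary illustrative choices.

import numpy as np

rng = np.random.default_rng(42)
n = 1_000_000
exp_soa = rng.exponential(scale=500.0, size=n)   # exponential SOAs, mean 500 ms
uni_soa = rng.uniform(0.0, 1000.0, size=n)       # uniform SOAs on 0 to 1000 ms

def p_signal_soon(soa, waited, window=100.0):
    # P(signal within the next `window` ms | no signal after `waited` ms).
    still_waiting = soa > waited
    return np.mean(soa[still_waiting] <= waited + window)

for waited in (100.0, 400.0, 800.0):
    p_exp = p_signal_soon(exp_soa, waited)
    p_uni = p_signal_soon(uni_soa, waited)
    print(f"waited {waited:4.0f} ms: exponential {p_exp:.3f}, uniform {p_uni:.3f}")

# The exponential column stays roughly constant (about 1 - exp(-100/500), i.e. 0.18),
# whereas the uniform column rises as the remaining interval shrinks.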

warning signals Perhaps the most popular way of informing the subject that a trial has ended or begun is to use a separate, easily detectable warning signal to which a response is not required. Warning signals have at least two benefits: first, they provide a salient point at which the trial begins, and second, they provide a way to identify anticipatory responses, which are not made in response to any signal. In a vigilance task, any response could be to an earlier signal or a false alarm to a signal that the observer thought he saw. It is impossible, then, to determine what kind of response it is. With warning signals, any response made during the time between the warning signal and the target signal (i.e., the foreperiod) is an anticipatory response. Therefore, this procedure eliminates the need for catch trials. The same issues arising with stimulus onset asynchrony arise again with foreperiod durations. However, there is a large literature on foreperiod designs concentrated on the effects of attention in motor learning. Like stimulus onset asynchrony, foreperiods can either be fixed or random. For fixed foreperiods, RT increases as foreperiod increases. For random foreperiods, the reverse is true. A number of explanations have been proposed for this strange pattern of effects, and the most likely seems to involve uncertainty (see Ellis & Jones, 2010; Niemi & Näätänen, 1981, for reviews). As we will discuss later, increased uncertainty, whether about what is going to happen or when it is going to happen, will increase RT. For the fixed foreperiod design, longer foreperiods lead to more uncertainty about when the signal will be presented, perhaps because longer intervals are more difficult to estimate, and this greater uncertainty results in longer RTs for the longer foreperiods. For the random foreperiod designs, there is also uncertainty about what foreperiod will be presented. However, as the foreperiod increases, this uncertainty decreases, resulting in faster RTs to the longer foreperiods. An alternative to presenting a warning signal is to allow the observer to initiate the beginning of the trial with a keypress. This is referred to as a self-paced design. In a self-paced design, the foreperiod is


measured from the keypress (the observer’s signal to begin) to the onset of the signal. The rate of stimulus presentations in a self-paced design is, of course, determined by the observer, and so there is the risk that some observers will pace themselves very slowly, a pace that will likely be associated with slower RTs. Also, the experimenter has less control over intratrial variables, such as stimulus order, which may or may not be important. One last important issue remains, and that concerns the response-stimulus interval, which is important regardless of whether a warning signal is used. The response-stimulus interval is the time between the observer’s response and the next stimulus presented (which may be either a warning signal or the next target signal). It is difficult to simultaneously control both the stimulus onset asynchrony and the response-stimulus interval. Most designs control only the response-stimulus interval. In studies without warning signals, the response-stimulus interval is considered equivalent to a foreperiod, and indeed, the same general effects on RT are observed for increasing and decreasing response-stimulus intervals. If the response-stimulus interval is fixed, observers may use the resulting predictability of the stimulus onset to time their responses. Similarly, increasing the response-stimulus interval may result in increased temporal uncertainty, which can produce slower RTs. Conversely, increasing the response-stimulus interval through a limited range of interval durations can decrease temporal uncertainty for longer response-stimulus intervals, which may speed RTs.

is the simple rt task too simple to be interesting? In many ways, the simple RT task serves as a point of connection between work in psychophysics, which focuses on lower-level perceptual mechanisms, and work in simple choice, which we discuss in the next section. Whereas psychophysical experiments usually concentrate on changes in accuracy with changes in stimulus conditions, choice RT experiments are frequently concerned with simultaneous changes in accuracy and RT. The RTs measured in a simple RT task vary with the same variables that influence accuracy in psychophysical tasks, and many of the variables influencing RT in the simple RT task also influence RT in the choice RT task. Smith (1995) has provided an excellent review of the link between these different areas as well as a model of the simple RT task that explains many of the effects we have presented in this section.

Although the simple RT design is one of the, well, simplest kinds of RT experiment, it is widely used to study highly complex perceptual phenomena. We have barely skimmed the surface of this literature in this brief review. For example, we have focused in this section primarily on data from keypress responses, but in fact almost any overt motor action can be the basis of a simple RT. This includes, for example, eye movements, measured with eye-tracking equipment, or vocal responses (e.g., Diederich & Colonius, 2008). The stimuli can be very complex, including words, pictures, or a combination of sensory modalities. Of course, each of these stimulus types will influence overall RT.

Choice Reaction Tasks If the number of stimuli presented is greater than the number of responses permitted, then the task is either a go/no-go task or an n-choice task, where n is the number of possible responses. Considering first the n-choice task, we can conceive of the cognitive process as one of classifying the signals into one of n possible categories. In many RT experiments, n = 2 and the observer is asked to determine, for possibly very many different stimuli, whether the signal is an “A”-type or a “B”-type. For example, given a burst of white noise in which a pure tone may or may not be embedded, an observer may be asked to say whether a signal is present or whether it is pure noise—a signal detection task. Given an object like a letter, numeral, word, or picture, the observer may be asked to determine whether the object was encountered previously in the experiment (“old”) or not (“new”)—a recognition task. There are also n-choice tasks, which can arise in studies of categorization. A subject may be presented with a stimulus defined as a location in r-dimensional space, where r is the number of unique and not-necessarily-orthogonal features of the stimulus. A geometric shape, for example, could be defined by the number of its sides and its convexity, size, and color. “A”-shapes may tend to be pinkish, have small numbers of sides, and be large and convex. “B”-shapes may be similar but tend to be greenish and are not always convex. “C”-shapes are pinkish but small and have more sides, and so forth. The observer’s job, given a stimulus, is to say whether it is of type “A,” “B,” or “C.” A go/no-go task has at least one more stimulus (but possibly more) than responses, and that one stimulus is the one that requires withholding a response. The simple RT task with catch trials can be considered a go/no-go task, if one defines the


absence of a signal as being a different sort of signal. We will discuss the go/no-go task in more detail in a later section. There are several important design considerations in constructing a choice task. Many of the issues that arise in the design of a simple RT task will still be important, such as the use of warning signals, fixed versus variable foreperiods, and the responsestimulus interval. In addition, we must consider the facts that RT is going to increase as both the number of stimuli and responses increases and that RT is correlated with response accuracy.

transmitted information Response time is a linear function of the amount of uncertainty in the task (Hick, 1952; Hyman, 1953). This effect is so robust that it is referred to as the Hick-Hyman Law of mean RT. We mentioned uncertainty in somewhat vague terms earlier in our discussion of the role that temporal structures play in simple RT task performance. Now we formalize this idea. Uncertainty is a dimensionless quantity that depends on the number of possible outcomes in an experiment and their probability. It can be used to describe many things, but in this context it refers to the amount of information provided by the occurrence of an event. For example, if there is only one possible response, observing that response does not change the amount of information about what response the observer is going to make. However, if there are n equally likely responses, observing one of them changes the amount of information about the observer a lot. In choice RT, the event for which we measure uncertainty is a particular stimulus–response combination. Suppose that a set of stimuli {S1, S2, . . . , Sm} may be presented, and to each the observer can select one response from a set of responses {R1, R2, . . . , Rn}, where n ≤ m. Let the probability that stimulus Si is presented be pi, and let the probability that response Rj is made be qj. Also let the probability that response Rj is made to stimulus Si be rij, which will equal pi qj only when the responses are independent from the stimuli presented. We define the amount of stimulus information to be

H(S) = − Σ_{i=1}^{m} pi log pi,

where log is taken to the base 2. Similarly, H (R) = −

n  i=1

qi log qi

is the amount of response information. Information is measured in bits, which is the fewest number of binary questions that would be required to uniquely identify the event that occurred. Joint information is measured over the collection of stimulus–response pairs. It is given by H (S, R) = −

n  m 

rij log rij .

i=1 j=1
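The calculation of these information measures is mechanical once the joint stimulus–response proportions have been tabulated. The sketch below is ours rather than the chapter's (the confusion matrix is made up for illustration); it computes H(S), H(R), H(S, R), and the transmitted information T(S, R) = H(S) + H(R) − H(S, R) that is discussed next.

import numpy as np

def entropy(p):
    """Shannon entropy in bits; zero-probability cells are ignored."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def transmitted_information(counts):
    """Transmitted information T(S,R) from an m x n stimulus-by-response count matrix."""
    r = counts / counts.sum()          # joint probabilities r_ij
    p = r.sum(axis=1)                  # stimulus probabilities p_i
    q = r.sum(axis=0)                  # response probabilities q_j
    return entropy(p) + entropy(q) - entropy(r.ravel())

# Hypothetical confusion matrix: rows are stimuli, columns are responses.
counts = np.array([[90, 10],
                   [15, 85]])
print(round(transmitted_information(counts), 3))   # bits per trial

With perfectly consistent responding the transmitted information equals the response information; with responses that are independent of the stimuli it is 0.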

For any set of n events {X1, X2, ..., Xn} that occur with probabilities {p1, p2, ..., pn}, if pi = 1/n then the amount of information in the set is log n, which is also the maximum amount of information possible. Transmitted information determines the speed of responding. Transmitted information is given by T(S, R) = H(S) + H(R) − H(S, R). Note that the information measure does not depend on how accurate the observer is but only on how consistent he is. Transmitted information is at its highest possible level when each stimulus is consistently mapped to a single response, that is, when for each stimulus Si, rij = pi for exactly one response Rj and rij = 0 for all the others. Transmitted information is 0 when rij = pi qj—when the response is statistically independent of the stimulus. Hick (1952) and Hyman (1953) both showed that E[RT] = a + b T(S, R): mean RT is a linear function of transmitted information. The coefficient b is called the channel capacity of the observer, and it reflects how quickly information is processed (in time units per bit). The importance of this law for the design of reaction time experiments is that as the number of possible stimulus–response combinations increases, mean RT will also increase.

As a historical aside, recall our earlier discussion of Sternberg's (1966) paper in which he examined mean RT in a memory-search task. Observers were shown a search set of digits that they had to remember and then were shown a target digit. They were asked to determine whether the target digit was present in the search set. Sternberg showed that the mean RT to respond "yes" or "no" increased linearly as a function of search set size. According to the Hick-Hyman Law, we would have expected such an increase in mean RT only because of the change in the amount of information transmitted as the size of the search set increased. However, one underappreciated feature of Sternberg's design is that he very carefully matched search set sizes with stimulus probabilities to keep the amount of transmitted information constant across changes in set size. Thus, the effect he observed had to result from changes in set size alone.

Information theory and the Hick-Hyman Law do not provide a good theory of how choice RTs are generated. More recent work by Usher and McClelland (2001) has suggested that Hick-Hyman Law behavior could be produced by a model of stochastic information accumulation. According to this sort of model, the increase in RT with increases in information transmitted arises from observers' increasing the amount of information necessary for a response in order to avoid making errors, the speed–accuracy tradeoff.

speed–accuracy tradeoff
One difficulty in RT experiments is the fact that accuracy is correlated with RT: the faster an observer responds, the more errors she makes. Earlier studies such as those of Sternberg (1966, 1969) and Donders (1868/1969) assumed (implicitly or explicitly) that if the error rate were small enough, then error responses could be safely ignored. Although in general this is true, it is also true that very small changes in error rate may reflect a very large change in processing strategy and hence RT. It is straightforward but somewhat tedious to try to keep subjects at a constant accuracy so that their RTs can be measured at a single point on the speed–accuracy tradeoff curve (e.g., Santee & Egeth, 1982). This kind of design requires trial-by-trial manipulation of an independent variable that can control accuracy, such as stimulus presentation time or contrast. Using psychophysical procedures (adaptive staircase methods; Garcia-Perez, 1998), the independent variable is adjusted upward or downward on each trial depending on the accuracy of the previous response. Such procedures may indeed control for the speed–accuracy tradeoff but do not provide any explanation for it.

Explanations of the speed–accuracy tradeoff in RT experiments are provided by sequential sampling models, discussed briefly above. These models, perhaps the most successful models of RT, produce the speed–accuracy tradeoff naturally as subjects raise and lower the amounts of information (thresholds) necessary to select a particular response on each trial (see Fig. 14.2). If thresholds are low, then less evidence will be required and therefore less time will elapse before a threshold is reached. However, it will be easier for an inappropriate response to accumulate enough evidence to reach a lower threshold. If the thresholds are higher, then RTs will be slower, but inappropriate responses will be less likely to reach the threshold. Thus, faster RTs will be associated with lower accuracy levels and slower RTs will be associated with higher accuracy levels.

Figure 14.2 The speed–accuracy tradeoff in a sequential sampling model. At the presentation of target A at time 0, there is 0 evidence toward either of the two responses (A or B). Information accumulates randomly over time, reaching one of the two thresholds to determine the response. If the thresholds are reduced, then spurious evidence toward response B results in an incorrect decision.

The sequential sampling models make predictions about the state of the cognitive processing system over time. Although this system produces as output an RT and a response, researchers have tried to peer inside the system to verify these predictions. Of course, researchers can't watch the process unfold, but if they assume that the accumulation process is not influenced by where the thresholds are placed, then they can try to move those thresholds up and down and look for predicted changes in RT and response probability. This desire to look into the heart of the information accumulation process led to the development of deadline and response-signal designs.

Deadline and response-signal experiments attempt to tell people what their RTs should be. Simple deadlines tell subjects "Too slow" (or "Too fast") after the response and could potentially penalize them in some way by taking away points or repeating the trial later in the session (e.g., Pachella & Pew, 1968). More severe deadlines can time-out the trial if the subject hasn't responded. By contrast, response-signal experiments ask subjects to make their responses when they see a signal presented after the target stimulus (e.g., Reed, 1973). Some response signals are presented with a very short foreperiod, and others may be quite long. Researchers assume that under deadlines subjects move their thresholds down or up to permit faster or slower responding (see Fig. 14.2). Deadlines are usually fixed from trial to trial so that subjects can move their thresholds to reliably produce the desired RT. However, under the response-signal paradigm, the thresholds are irrelevant and the response will be based on the level of information accumulated at the time of the response signal. The problem with the deadline design is that the RTs produced may not follow the shape predicted by the model, because they are truncated at the deadline boundaries, distorting their shape (e.g., Van Zandt, Colonius, & Proctor, 2000). However, if the purpose of the experiment doesn't depend on distribution shape, then deadlines are an excellent way to produce a high proportion of RTs within a particular time window. The dependent variables are choice accuracy and the number of responses executed on time. The response-signal design takes the thresholds out of the response process. Because the assumption is that the response will be made based on the amount of evidence accumulated at the point in time at which the response signal appears, the dependent variable is then the change in accuracy as a function of RT (or response-signal time).
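The threshold mechanism behind Figure 14.2 is easy to simulate. The sketch below is our illustration (not code from the chapter) of a simple random-walk accumulator with two response boundaries; lowering the boundaries shortens the simulated RTs while increasing the error rate, which is the tradeoff described above.

import numpy as np

rng = np.random.default_rng(1)

def random_walk_trial(threshold, drift=0.1, noise=1.0, max_steps=10_000):
    """Accumulate noisy evidence until it crosses +threshold ("A") or -threshold ("B").
    A positive drift means that response "A" is the correct response."""
    evidence, t = 0.0, 0
    while abs(evidence) < threshold and t < max_steps:
        evidence += drift + noise * rng.standard_normal()
        t += 1
    return t, evidence > 0           # (RT in steps, correct response?)

for threshold in (10.0, 20.0):       # low versus high response thresholds
    trials = [random_walk_trial(threshold) for _ in range(2000)]
    rts = np.array([rt for rt, _ in trials])
    acc = np.mean([ok for _, ok in trials])
    print(f"threshold={threshold:>4}: mean RT = {rts.mean():6.1f} steps, accuracy = {acc:.3f}")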

Number of Stimuli Equal to the Number of Responses
Designs in which each stimulus presented requires its own unique response are sometimes called identification or absolute identification tasks. For example, four letter stimuli ("A," "B," "C," and "D") may be presented one at a time, and the subject may be asked to press one of four buttons numbered 1, 2, 3, and 4 to each. Response data from this kind of experiment may be arranged in a confusion matrix, which indicates the fidelity of the assigned responses to their stimuli. Response times in identification tasks are influenced by the same variables that influence simple and choice RT, such as information transmitted and the speed–accuracy tradeoff. One important limitation in identification performance was identified by George Miller in his famous (1956) paper, "The Magical Number Seven, Plus or Minus Two: Some Limits on Our Capacity for Processing Information." Miller argued, after reviewing a wide range of literature, that people could efficiently transmit around 2.5 bits of information (approximately 7 equiprobable stimuli) but not much more without decrements in performance. This means that RT will increase with an increased number of stimuli to be identified and that these
increases in RT will occur with a concurrent decrease in accuracy. Another important factor in identification RT is stimulus–response compatibility. Stimulus–response compatibility is a term that describes the extent to which features of the stimulus set (which may or may not be relevant to the responses required to them) overlap or are similar to features of the response set. Although stimulus–response compatibility may influence RTs in any choice-RT design, it is most commonly studied using identification tasks. Compatibility experiments have often focused on the spatial features of stimuli and responses, or where the stimuli appear relative to the location of the responses to be made to them. Highly compatible spatial relationships (e.g., responding with a right button to the stimulus that appears on the right) result in faster RTs than less compatible spatial relationships. Early experiments by Fitts and his colleagues (e.g., Fitts & Seeger, 1953) demonstrated that when stimuli were characterized by different spatial configurations of lights, RTs were fastest when the responses matched those spatial configurations, even when the numbers of stimuli and responses were the same (holding information constant). Compatibility effects can also occur when the spatial stimulus dimension is task-irrelevant. For example, the Simon effect occurs when two stimuli, say red and green lights, are mapped to two different responses, say a left or right button-press (Simon, 1969). Assume that the observer is to respond with a left button-press to the red light and a right button-press to the green light. Response times will be faster if the red light appears to the left of center than if the red light appears to the right of center. Compatibility effects even arise for nonspatial stimulus dimensions such as the positive–negative affect of stimuli and verbal responses. (See Proctor & Vu, 2006, for a thorough review of this and other compatibility effects.)

Stop Signal, Dual-Task, and Task-Switching Designs
Donders' go/no-go task can be viewed as a choice-RT task where one of the possible responses is not to respond at all. A paradigm closely related to the go/no-go task is the stop-signal task. A stop-signal task is a choice-RT task where, on some trials, a stop signal is presented at some time (usually a delay of hundreds of milliseconds) after the stimulus, indicating that the observer should inhibit the response to the
stimulus. Thus, the go/no-go task is a stop-signal task where one of the stimuli is the stop signal, and the stop signal is always presented with 0 milliseconds delay. Stop-signal tasks are used to explore the dynamics of response preparation. As the stop-signal delay increases, observers are less able to inhibit their responses: The function relating the probability of successfully inhibiting a response to stop-signal delay is a smooth, S-shaped curve. It is not a step function, where there is a delay below which the responses are never inhibited but above which they are always inhibited. This suggests that the choice process has components that are gradual and build up over time (e.g., Logan, Cowan, & Davis, 1984).

Changes in mean RT with changes in stop-signal delay are consistent with the idea that the choice process unfolds over time and also that the stimulus is processed at the same time as the stop signal. As the stop-signal delay decreases, RTs on stop-signal trials (responses that the observer failed to inhibit) become faster. This decrease in RT may arise from a "race" between the processing of the stimulus and the processing of the stop signal. For stop signals that appear very close to the stimulus onset, a response will be made only on those trials where stimulus processing was fast enough to beat the processing of the stop signal, resulting in mean RTs that increase with increasing delay.

Research using the stop-signal task tries to answer questions about "executive control" of action. That is, how do people start and stop their behaviors at appropriate times? In these kinds of problems, we have to appreciate that people may be performing many tasks at once. The stop-signal task is a relatively simple task in which observers do two things at the same time: select a response to a stimulus and also prepare to inhibit that response. Another kind of task, the dual task, asks people to make two (possibly different) responses to two (possibly different) stimuli at the same time. The dual-task design, like the stop-signal paradigm, varies the delay between the onsets of the first and second stimuli. This design was used in some early studies of attention (Telford, 1931; Welford, 1952). These studies demonstrated that RT to the second stimulus decreased as the delay between stimulus onsets increased, suggesting that there was only one "channel" through which the stimuli could be processed, and that this channel could only accommodate one stimulus at a time. This interference between the processing of the first and second stimulus is sometimes called the psychological refractory period, and it is interesting that there is
apparently no such interference for the very similar stop-signal task.

Another related paradigm is the task-switching paradigm. In task switching, people are asked to do different things on different trials, and their performance on "switch" trials, in which the task changes from the previous trial, is then compared to their performance on "repetition" trials, in which the task does not change. Typically RTs show a "switch cost": RTs are slower after a switch to a new task than when the task is repeated (Logan & Gordon, 2001). These switch costs are used to measure the time required by executive processes to move between different tasks.

The similarity between the stop-signal, dual-task, and task-switching paradigms was explored by Logan and Burkell (1986). They used two stimuli, a letter followed by a tone. The letter required one response whereas the tone required another. In stop-signal conditions, the tone was the stop signal indicating that the response to the letter should be inhibited. In dual-task conditions, both the letter and the tone required responses. In the task-switching conditions, the tone was both a stop signal to the letter and required its own response, so observers had to stop processing the letter and switch to the tone. The critical comparison was between RTs to the tone when the letter response had been either inhibited or uninhibited. The switch costs for trials in which the tone failed to inhibit the response to the letter were similar in magnitude to the interference in the dual-task conditions. There was little or no switch cost for trials in which the tone successfully inhibited the response to the letter, although the extent of interference should have been approximately the same as in the dual-task conditions. Logan and Burkell (1986) argued that this finding supports the idea that the difference between RTs in stop-signal and dual-task paradigms results from interference between responses and not from competition for processing resources.

The stop-signal, dual-task, and task-switching paradigms have been used to explore mechanisms of response inhibition and automaticity of processing, and more generally to understand executive processing, or how people are able to control their behaviors, and also the factors that contribute to uncontrollable (automatic) behaviors. Interested readers should consult Verbruggen and Logan (2009) for a recent review of the literature in this area.
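The race account of stop-signal performance sketched above is straightforward to simulate. The code below is our illustration (not taken from the chapter, and not the formal race model of Logan and colleagues): finishing times for the go and stop processes are drawn from assumed gamma distributions, a response escapes inhibition when the go process finishes before the delayed stop process, and the simulation reproduces the two signatures described in the text, an S-shaped inhibition function and signal-respond RTs that increase with stop-signal delay.

import numpy as np

rng = np.random.default_rng(0)
n = 20_000

# Assumed (hypothetical) finishing-time distributions, in milliseconds.
go_rt = rng.gamma(shape=8.0, scale=50.0, size=n)     # go process, mean about 400 ms
stop_rt = rng.gamma(shape=4.0, scale=50.0, size=n)   # stop process, mean about 200 ms

for ssd in (50, 150, 250, 350):                      # stop-signal delays
    responded = go_rt < (ssd + stop_rt)              # go wins the race, so the response escapes
    p_inhibit = 1.0 - responded.mean()
    signal_respond_rt = go_rt[responded].mean()
    print(f"SSD = {ssd:3d} ms: P(inhibit) = {p_inhibit:.2f}, "
          f"mean signal-respond RT = {signal_respond_rt:5.1f} ms")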

Analysis of Response Time Data
Response time data, regardless of the experiment in which they were collected, have the following
characteristics. First, the data may be assumed to be a mixture of observations from the process under study and a number of contaminant or outlier observations arising from equipment failures, attentional lapses, subject perversity, and so forth. Second, RTs are samples from positively skewed distributions; the longest RTs may span a very wide range, whereas the shortest RTs are usually concentrated around some modal value. Third, RTs are sequential data and are not, generally speaking, independent from trial to trial. Finally, RTs are subject to a wide range of individual differences, so collapsing data across subjects is not usually a good idea. Analysis of RTs, if done well, takes all these characteristics into account. Unfortunately, there are not many canned procedures that have the ability to accommodate mixtures, skewness, sequential dependencies, and individual differences simultaneously, so most analyses compromise on one or more of these issues.

The most common approach to the analysis of RT data is to compute the mean RT for each subject's responses in each experimental condition. For example, an experiment might ask subjects to make choice responses in four conditions. They may be asked to make their responses as quickly as possible, sacrificing accuracy if necessary, or they may be asked to go as slowly as necessary to maintain a high level of accuracy. Within each of these conditions, subjects may see both high-energy and low-energy stimuli. After completing some number of trials in each condition, each subject will have four mean RTs (fast or accurate responding by high or low energy). These means would then be subjected to a repeated-measures factorial analysis of variance to test for effects of instructions and stimulus energy.

There are a number of unsatisfactory features of this approach. First, the analysis of variance assumes a model in which the relationship between mean RT and the effect of the independent variables is linear. There is no theoretical basis for this assumption. Second, compressing every subject's data into means discards a great deal of information (such as skewness and sequential effects) that may be useful for determining how the RTs were generated, which is the ultimate goal of experimentation. Third, the assumptions required for the analysis of variance, such as normally distributed residuals, independence of observations, and homogeneity of variance across conditions, are routinely violated in RT data.

In this section we present a number of methods for analyzing RT data, including the use of mean RT, estimating the parameters of models for RT data,
and using RT for testing cognitive architecture. For a more thorough treatment of RT analysis, interested readers can consult Van Zandt (2002). For a more general treatment of modeling and parameter estimation issues, interested readers can consult Busemeyer and Diederich (2010).
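For concreteness, the conventional approach just described (per-subject, per-condition mean RTs submitted to a repeated-measures factorial ANOVA) might be carried out as in the sketch below. This is our illustration, with the caveats discussed above and in the next section; the trial-level file and its column names are hypothetical, and statsmodels' AnovaRM is used purely as an example tool.

import pandas as pd
from statsmodels.stats.anova import AnovaRM

# Hypothetical trial-level data: one row per trial, with columns
# subject, instruction ("speed"/"accuracy"), energy ("high"/"low"), and rt in ms.
trials = pd.read_csv("rt_trials.csv")

# Collapse to one mean RT per subject per cell of the 2 x 2 design.
cell_means = (trials
              .groupby(["subject", "instruction", "energy"], as_index=False)["rt"]
              .mean())

# Repeated-measures factorial analysis of variance on the cell means.
aov = AnovaRM(cell_means, depvar="rt", subject="subject",
              within=["instruction", "energy"]).fit()
print(aov)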

Analyses of Mean Response Time
Many hypotheses about cognitive processing are formulated to provide predictions about mean RT. For example, the additive factors method looks for interactive effects of variables on mean RT. The typical procedure involves fitting the general linear model (most commonly the analysis of variance model) to the mean RTs computed for each subject and condition and relying on variance accounted for to argue for effects of different independent variables on performance.

The general linear model is unsatisfactory as an inferential instrument for RTs. The assumptions necessary for application of the general linear model are generally not met in RT data, even in mean RT data. These assumptions include normal or symmetric distributions, independence of observations, and homogeneity of variance across conditions. To understand why the assumptions of normality and symmetry are violated, consider how the RTs from individual subjects are distributed. Response time distributions are positively skewed and hence asymmetric, and RTs are highly correlated across trials and conditions, showing evidence of autocorrelation structure and dependence on previous stimuli and responses. The degree of asymmetry and autocorrelation varies across subjects—that is, each subject's RTs come from a different distribution. Therefore, although the mean RT from a single subject may be approximately normally distributed via the Central Limit Theorem, the mean RTs across different subjects come from different normal distributions with different means and variances. This means that the mean RTs, the dependent variables, are drawn from a mixture of normal distributions, which is unlikely to be normally distributed itself and probably not symmetric.

Homogeneity of variance is violated in mean RT data not only because individual subjects have different mean RT distributions but also because mean RT and RT variance are correlated such that as the mean increases so does the variance. The coefficient of variation (the standard deviation divided by the mean) of RT data is approximately constant (e.g.,
Luce, 1986, p. 64), implying that the standard deviation is a nearly linear function of the mean. This fact provides strong evidence for the kinds of distributions that best describe RT data and hence the classes of models that best explain performance in RT tasks (Wagenmakers & Brown, 2007). That is, we should not choose to model RT using, say, a normal distribution, because the variance of the normal distribution does not increase as its mean increases. By contrast, the standard deviation of the gamma distribution increases linearly with its mean (when its shape parameter is held fixed), so it has a constant coefficient of variation. Therefore, the gamma distribution is a better choice for a model of RT data.

Finally, the linear model itself, which is the basis of procedures like regression and the analysis of variance, relates mean RTs to a linear function of the independent variables. This linear relationship is not an accurate representation of the influences of the independent variables on RTs, which are generated by a highly nonlinear dynamic system.

Apart from the marginal benefits of a linear modeling approach, there are other issues that arise when collapsing across observations to compute a summary statistic for RT. The most perennial of these problems arises from the skewness of RT data. The large upper tail of the RT distribution has the effect of creating "outliers," RTs that are much longer than the bulk of the RTs observed for an individual. Outliers are a problem in all areas of statistical analysis, but the unusual aspect of outliers in RT data is that they potentially derive from the process of interest. That is, they are not necessarily outliers in the sense of contaminations of the data. There is a problem, then, in deciding which observations are contaminants and which are not. Every experimenter has a preferred method for cleaning their RT data, and these methods are usually based on personal preference rather than statistical necessity. One of the authors of this chapter (TVZ), for example, routinely discards choice RT observations faster than 200 milliseconds or greater than 3.5 standard deviations above the mean.

Ratcliff (1993) performed an extensive Monte Carlo study of different methods of RT outlier treatment and their effects on inferential tests on the mean. Some of the methods he examined used cutoff values such as those of TVZ, as well as common data transformations such as the inverse and logarithm. For each of these methods, he computed power and the probability of Type I errors for analyses of variance under different levels of outlier contamination. He demonstrated that the choice of outlier treatment had
no influence on the rate of Type I errors. However, the different methods had strong effects on power. Cutoff methods that use standard deviations can reduce power. Fixed cutoffs that did not depend on sample statistics maintained the highest power. Unfortunately, a fixed cutoff is difficult to apply across all subjects and conditions in an experiment. A cutoff that seems appropriate for one condition (e.g., 5000 ms) might not be appropriate for another condition, especially because the usual purpose of the different conditions of an RT experiment is to observe increases or decreases in mean RT. In addition, slower subjects will have more RTs eliminated as outliers, which will have implications for evaluating mean differences over conditions and may lead to statistical artifacts such as truncation or floor and ceiling effects.

A statistical artifact that arises from cutoffs is estimation bias, which is the extent to which a statistic like the sample mean fails as an estimate of a population parameter. Ulrich and Miller (1994) showed that cutoffs can introduce bias into estimates of the mean, median, and higher moments of the RT distribution and that these effects could be larger than the experimental effects of interest. Van Selst and Jolicoeur (1994) also showed that this bias is influenced by sample size: Smaller sample sizes result in the elimination of fewer high RTs.

One way to avoid the problems associated with cutoffs is to use a data transformation. Both the log (log(RT)) and the inverse (1/RT) transformations have the effect of "squeezing" the distribution and reducing skew. Ratcliff (1993) showed that the inverse transformation had better power than the log transformation, almost as high as that of fixed cutoffs. One important benefit of data transformations is that they do not require the researcher to discard data, which is always risky if the researcher is not absolutely certain that an observation is a contaminant.

Another way to handle the outlier problem is not to use moment-based statistics like the mean and standard deviation at all. Rather, the researcher may turn to robust statistical methods that are based on the median and interquartile range (see Erceg-Hurn, Wilcox, & Keselman, Chapter 19, Volume 1). The median and interquartile range statistics are called robust because they change very little in the presence of outliers and skew. Their use is uncommon in RT analysis (and most other areas in psychology) because they are mathematically more difficult to work with and their standard errors are
larger than those of their moment-based equivalents. The sample sizes required to attain approximate normality of their sampling distributions are much larger (Stuart & Ord, 1999).

Analysis of mean RT is therefore not as simple as it appears. However, there are a few rules of thumb that can be followed. First, the researcher must recognize that the linear model does not portray the data-generating mechanism accurately and consider using more sophisticated modeling schemes such as those presented below. Second, if the researcher decides that she must collapse individual observations into a summary statistic, then she must pay close attention to outliers. If outliers seem to be a problem, then she can use a data transformation method instead of discarding data, or use the median instead of the mean. Third, the researcher should perform the analysis in several ways (with and without the outlier treatment, or on both the means and medians) to make sure statistical artifacts have not been introduced. Finally, the researcher should be aware that collapsing across individual observations and using a linear modeling scheme may hide important effects in the data. If the effects of the independent variables are strong, then the researcher may see them in the means and a regression may easily detect their presence. However, the exact nature of an effect may be obscured, as will the effect itself if it is at all subtle.
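The cutoff and transformation options discussed above are simple to implement. The sketch below is our illustration; the simulated ex-Gaussian-like "data" and the 200-ms, 2000-ms, and 3.5-SD cutoffs are arbitrary choices used only to show the mechanics of comparing treatments on the same sample.

import numpy as np

rng = np.random.default_rng(2)

# Simulated skewed RTs (normal plus exponential components), in milliseconds.
rt = rng.normal(400, 40, size=1000) + rng.exponential(150, size=1000)

# Standard-deviation cutoff: discard RTs more than 3.5 SDs above the mean.
sd_kept = rt[rt < rt.mean() + 3.5 * rt.std()]

# Fixed cutoffs that do not depend on sample statistics.
fixed_kept = rt[(rt > 200) & (rt < 2000)]

# Inverse transformation: analyze 1000/RT instead of discarding anything.
inv_rt = 1000.0 / rt

print(f"raw mean          : {rt.mean():6.1f} ms (n={rt.size})")
print(f"3.5-SD cutoff mean: {sd_kept.mean():6.1f} ms (n={sd_kept.size})")
print(f"fixed cutoff mean : {fixed_kept.mean():6.1f} ms (n={fixed_kept.size})")
print(f"mean of 1000/RT   : {inv_rt.mean():6.3f} (responses per second)")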

Time Series Analysis
An increasingly popular way to analyze RT data is to treat them as time series. A time series is a sequence of measurements with a time index, such as the level of the Dow Jones index at the end of every trading day (see Wei, Chapter 22, Volume 2). For RT data, although the measurements are of time, the index is the trial, which may or may not occur at fixed points in time, depending on the design of the experiment in which the RTs were collected. Despite this deviation from a true time series, treating RT data over a sequence of trials as a time series has a number of benefits. Most important of these benefits is the fact that RTs are, as we noted earlier, correlated across trials. Not only do RTs vary as a function of the previous stimuli and responses (Kirby, 1980; Laming, 1968, 1979; Remington, 1969), but they are autocorrelated, usually positively, so that fast responses tend to follow fast responses and slow responses tend to follow slow responses (Wagenmakers, Farrell, & Ratcliff, 2004). Time series approaches attempt to
model directly these correlations, although there are several pitfalls to doing so. A general time series model for a measurement {T1, T2, ..., Tt} at time t is a (possibly nonlinear) function of three things: the values of the measurement {T1, T2, ..., Tt−1} prior to time t, a trend component {μ1, μ2, ..., μt} that does not depend on any Ti, and a random noise process {ε1, ε2, ..., εt}. Perhaps the simplest, but a quite powerful, time series model is the autoregressive process of order 1 or AR(1) model, which is written

Tt = φ Tt−1 + μ + εt,

where the coefficient φ is a constant with absolute value less than 1 and the trend μ is constant across trials. For the AR(1) model, φ determines the extent to which the observation at time t is correlated with the observation at time t − 1. The constant trend determines the overall mean of the process, and εt is a white noise process, an independent sample from a normal distribution with mean 0 and variance σ².

It is the assumption of white noise that poses the first problem for time series analysis of RTs. Almost all applications of time series models in psychology, including autoregressive and moving average models, as well as integrated moving average models, assume white noise. To understand why this is problematic, consider the simple AR(1) model. Under the white noise assumption, the marginal distribution of the measurements T should be Gaussian (with mean and variance determined by μ, φ, and σ²). However, RTs are not normally distributed. This means that using the AR(1) to estimate, for example, the magnitude of the autocorrelation coefficient for an RT series will not produce accurate results.

A second problem is how to identify trend and isolate it from the process generating the RTs. There are many reasons why mean RT might fluctuate over time. One commonly observed trend is a decrease in RT with practice, which occurs even for simple RT. This trend may be completely separate from the mechanism that produces the RTs or it may be an integral part of it. If trend is separate from the data-generating mechanism, then how can we accurately estimate and remove it so that we may estimate the other important features of the process? If trend is not properly removed, then it will distort the impression of autocorrelation. If trend is an integral part of the data-generating mechanism, then how do we explain it and how it contributes to the autocorrelation structure?

Much current interest in time series analysis of RT data has been spurred by the work of Gilden (1997,
2001) and others (Holden, Van Orden, & Turvey, 2009; Kello, Anderson, Holden, & Van Orden, 2008), who have argued that RT variance shows evidence of "1/f noise" or long-range dependence. Long-range dependence means that the RT on trial t is influenced not just by the RTs on trials t − 2 and t − 1 but by all the RTs up to that point (RT1, RT2, ..., RTt−1). Long-range dependence is characteristic of a number of natural processes (such as heart rhythms) and is associated with system complexity and fractal structures. Fractal structures are usually formed by simple iterative processes that produce regular patterns at arbitrarily small scales of measurement (see, for example, the Mandelbrot set; "Mandelbrot Set," 2010). The recurrence of these patterns over different measurement scales is called self-similarity. Self-similar processes can exhibit long-range dependence. For RTs, this may imply scale invariance: RTs may behave the same way whether measured in milliseconds, seconds, minutes, and so forth. However, much of the work exploring long-range dependence uses techniques appropriate only to Gaussian processes and does not adequately deal with trend, which means that measurements of long-range dependence may be distorted.

Many demonstrations of long-range dependency have focused on the power spectrum of RT series. There are several nonparametric approaches to spectral density estimation that can be used to support the notion of long-range dependence and fractal structure in RT data. Holden (2005) advocates the use of nonparametric dispersion analysis together with classic parametric estimation methods. Dispersion analysis provides an estimate of the fractal dimension of the series, which in turn can provide evidence of long-range dependence (Van Orden, Holden, & Turvey, 2003).

The problem of separating trend or experimental effects from dependence is a difficult one (e.g., Peruggia, Van Zandt, & Chen, 2002). Trend that has not been removed from the analyzed series will inflate the perception of long-range dependency. One simple way to detrend a series, which also permits the use of Gaussian process techniques, is to "normalize" the log RTs by passing them through the inverse normal cumulative probability function—that is, convert the RT quantiles to normal scores. These scores can then be detrended using a number of different techniques, and the detrended scores can then be passed back through the normal probability function and converted to the original RT scale. Craigmile, Peruggia, and Van Zandt (2010b) have
shown how this procedure can quite accurately recover even very complicated patterns of trend. Another, more complex, way to separate trend from dependence is to explicitly model both the trend and the dependence structure from theoretical principles. For example, Craigmile, Peruggia, and Van Zandt (2010a) constructed a Bayesian model within which they estimated the parameters of both the trend and a realistic RT-generating mechanism. This approach is computationally quite expensive, although it yields information about effects on RTs that is not at all obvious when the RTs are treated as independent samples.
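To make the AR(1) and trend issues concrete, the sketch below (our illustration, with arbitrary parameter values) simulates an AR(1) series with a practice-like downward trend and shows that the lag-1 autocorrelation computed without detrending is inflated relative to the autocorrelation of the detrended series.

import numpy as np

rng = np.random.default_rng(3)
n, phi, sigma = 1000, 0.3, 50.0

# Practice-like trend: mean RT declines over trials (arbitrary functional form).
trend = 600.0 + 200.0 * np.exp(-np.arange(n) / 200.0)

# AR(1) fluctuations around the trend: x_t = phi * x_{t-1} + eps_t.
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi * x[t - 1] + sigma * rng.standard_normal()
rt = trend + x

def lag1_autocorr(y):
    y = y - y.mean()
    return np.sum(y[1:] * y[:-1]) / np.sum(y * y)

print(f"lag-1 autocorrelation, raw series    : {lag1_autocorr(rt):.2f}")
print(f"lag-1 autocorrelation, trend removed : {lag1_autocorr(rt - trend):.2f}")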

Model Fitting
To this point we have discussed analysis of mean RT data and RT time series. The analysis of mean RT is popular for empirical evaluation of nonmathematical hypotheses of cognitive performance (e.g., the stimulus–response compatibility effects described above). The treatment of RT as time-series data is primarily descriptive, without focused theories to explain trend or the dependencies in the data. By contrast to these approaches, most RT researchers test hypotheses about RT generated by a proposed model of the phenomenon of interest. Most modern models of mental mechanisms make predictions about the shape of the RT distributions. That is, a hypothetical process may dictate that RTs follow some distribution F conditioned on a set of psychologically important parameters θ.2 The most common techniques of analysis in RT research are concerned with fits of a proposed model to the data. Fitting a model involves estimating the parameters θ that result in the closest agreement between the hypothesized distribution F and the data. The procedures we describe in this section are very general and apply to any data set and any distribution F. Once a model is fit to the data, arguments about whether the model is a good one (or better than some other model) usually rely on measures of goodness of fit, such as χ² statistics, percentage of variance accounted for, or one of several possible information criteria. In the Meta-Theoretic Model Testing section, we will discuss an alternative to this kind of argumentation. The meta-theoretic approach to model testing is diagnostic in that it restricts models from consideration based on the qualitative characteristics of the RT distribution or other measures rather than goodness-of-fit statistics.


What we present here is not intended to be a “how-to” guide for model fitting; books have been written on these techniques. We wish only to provide an overview of model fitting with enough information that a researcher might decide which technique best suits his needs, so he may then seek out the appropriate comprehensive tutorial (see, e.g., Yuan & Schuster, Chapter 18 and Cavagnaro, Myung, & Pitt, Chapter 21, Volume 1).

parameter estimation
Fitting a model to data is the process by which the model's parameters are estimated. A compelling model has parameters with clear psychological interpretation, and so in addition to being constrained by the observed data, the parameters are constrained by the experimental conditions. If, for example, the influence of stimulus intensity is represented by a parameter a and response bias by parameter b, then the model should fit well over changes in stimulus intensity by changing only parameter a and leaving b constant (see, e.g., Donkin, Brown, & Heathcote, 2011).

There are many ways to estimate parameter values, and the most effective methods will depend on the characteristics of the model. We can divide these methods roughly into linear and nonlinear approaches. Linear approaches rely on the concept of least squares: The goal is to estimate parameters by choosing those that minimize the sum of squared error between the observed measurements and those predicted by the model. Response time data, however, usually require nonlinear approaches such as maximum likelihood estimation, nonlinear least squares, and Bayesian methods. Linear approaches are much easier, because there are closed-form solutions for the best-fitting parameters, but models of RT are usually not linear. Whether a linear model exists (or is reasonable) may depend on the level at which predictions are to be made: Do predictions concern summary statistics such as mean RT or does the model dictate more fine-grained measurement behavior such as how the RTs are to be distributed? Model fitting to mean RTs often relies on linear approaches and, even with nonlinear methods, can sometimes lead to closed-form solutions for parameter estimates, depending on the method. For example, the method of moments can sometimes provide an easy set of equations to solve to obtain parameter estimates. The method of moments is a simple technique in which the mean and higher moments predicted by a model are
equated to the sample mean and higher moments of the data. For example, an ex-Gaussian distribution (the sum of independent normal and exponential variables, see p. 276) has three parameters, the mean μ and standard deviation σ of the normal component and the mean τ of the exponential component. To solve for three parameters, we will need the first three moments of the distribution. The mean of the ex-Gaussian distribution is μ + τ, its variance is σ² + τ², and its third central moment is 2τ³. Fitting the ex-Gaussian, then, requires setting these moments equal to the sample mean (X̄), variance (s²), and third central moment (Sk) and solving for the parameter estimates to obtain τ̂ = (Sk/2)^{1/3}, σ̂² = s² − (Sk/2)^{2/3}, and μ̂ = X̄ − (Sk/2)^{1/3}. Unfortunately, the method of moments sometimes yields unsatisfactory results. For example, there is nothing in the ex-Gaussian method of moments estimates that prevents σ̂² from being negative. The method of moments, however, can be very useful for providing starting values for other methods, such as maximum likelihood or nonlinear least squares estimation, in which some objective function is optimized to be as large (or small) as possible by iterative updating of the parameter values.
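As a quick illustration of these moment equations (our sketch, using simulated data rather than anything from the chapter), the following code recovers ex-Gaussian parameters from the sample mean, variance, and third central moment, and flags the case in which the implied σ̂² is negative.

import numpy as np

def exgauss_moments_fit(rt):
    """Method-of-moments estimates (mu, sigma, tau) for an ex-Gaussian sample."""
    rt = np.asarray(rt, dtype=float)
    xbar = rt.mean()
    s2 = rt.var(ddof=1)
    m3 = np.mean((rt - xbar) ** 3)        # third central moment, 2*tau^3 in theory
    tau = (m3 / 2.0) ** (1.0 / 3.0)
    sigma2 = s2 - tau ** 2
    if sigma2 < 0:
        raise ValueError("implied sigma^2 is negative; the method of moments fails here")
    return xbar - tau, np.sqrt(sigma2), tau

rng = np.random.default_rng(4)
rt = rng.normal(400, 40, size=5000) + rng.exponential(150, size=5000)
print([round(v, 1) for v in exgauss_moments_fit(rt)])   # roughly (400, 40, 150)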

methods of least squares
Methods of least squares are designed to minimize the error between the observed values of the measurements from an experiment and a model's predicted values. Consider the observed RTs {T1, T2, ..., Tn} and a set of independent variables {X1, X2, ..., Xm}. A model's predictions for observation i can be written as p(Xi, θ), and the sum of squared errors or residuals is

SSE = Σ_{i=1}^{n} (Ti − p(Xi, θ))².

We could also consider a set of mean RTs {T̄ij} for subjects i = 1, ..., n and experimental conditions j = 1, ..., J. If the model's predictions p(Xij, θ) are targeted at the means, then

SSE = Σ_{i=1}^{n} Σ_{j=1}^{J} (T̄ij − p(Xij, θ))²,

where the independent variable Xij is taken as the ijth element in the n × J design matrix X. If the predictions p are a linear function of the parameters θ, then the least-squares method is linear and the solution for the estimates of θ is of closed
form and well known. Otherwise, the method is nonlinear. Nonlinear least squares is trickier than linear least squares but is nonetheless straightforward (Gallant, 1987; Seber & Wild, 2003). Nonlinear least squares does not have closed-form solutions and requires iterative algorithms to find the θ that minimizes SSE. In some circumstances, a nonlinear model can be made linear by a transformation of variables. For example, if a model predicts that mean RT is a power function of the number of trials in an experiment (Logan, 1988), then the log mean RTs can be compared to the (linear) log power function, and the estimation of parameters can proceed using standard regression methods.

For RT data, many researchers have applied a least-squares approach to fitting the RT distribution predicted by a model. In such applications the usual least-squares logic applies, except that the prediction p(X, θ) is a probability density or cumulative distribution p(t|X, θ) defined over time t. The data are then summarized as an estimate of that probability density or cumulative distribution. These estimates may be obtained by either parametric or nonparametric methods. For example, a histogram estimate of the empirical RT density may be computed for some fixed number of points along the time axis, and the heights of the histogram bars at those points could be compared to the density function of the model. It turns out that least-squares fits to the empirical probability density do not generally recover accurate values of the parameters (Van Zandt, 2002). However, least-squares fits of distribution quantiles or the cumulative distribution function can be as accurate as maximum likelihood estimates. For example, consider a set of RTs {T1, T2, ..., Tn} from an individual subject. The empirical distribution function F̂(t) is defined as

F̂(t) = (1/n) Σ_{i=1}^{n} I(Ti < t),

where I(x) is an indicator function that equals 1 if the statement x is true and 0 otherwise. If the model states that the RTs should follow a distribution with cumulative distribution function F(t, θ), then the parameters θ can be estimated by minimizing

SSE = Σ_{i=1}^{m} (F̂(ti) − F(ti, θ))²,

for an appropriately selected set of points {t1, t2, ..., tm}. Another nearly equivalent procedure involves setting the points {t1, t2, ..., tm} to be the quantiles of
the sample {T1, T2, ..., Tn} and using these points as bin boundaries for a χ² statistic. If, for example, the ti are selected to be the sample deciles, then 10% of the sample falls in each of 10 bins defined as the intervals from ti to ti+1 (where i = 0, ..., 9, t0 = 0, and t10 = ∞). The theoretical cumulative distribution F dictated by the model gives the predicted proportion of observations between ti and ti+1 as F(ti+1, θ) − F(ti, θ). Letting Oi = 0.1n and Ei = n(F(ti+1, θ) − F(ti, θ)), we may adjust θ to minimize

χ² = Σ_{i=0}^{9} (Oi − Ei)²/Ei

(e.g., Smith & Vickers, 1988). One nice feature of χ² minimization is that the minimized value of χ² may also serve as a goodness-of-fit measure. If χ² is sufficiently small given the degrees of freedom in the model, then we can argue that the model fits well. However, large values of χ² do not necessarily indicate an incorrect or misspecified model. The χ² statistic is very sensitive to sample size and frequently can be "significantly" large even when the model is correct (Van Zandt, 2000).
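The quantile-based χ² fit just described can be sketched in a few lines. The example below is ours (simulated gamma-distributed "RTs" and a scipy-based optimizer, not the chapter's own code): the decile bin boundaries come from the sample, the expected counts come from the model's cumulative distribution function, and a general-purpose optimizer adjusts θ to minimize the χ² statistic.

import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(5)
rt = rng.gamma(shape=5.0, scale=80.0, size=500)          # simulated RT data, in ms
n = rt.size

# Bin boundaries: 0, the nine sample deciles, and infinity, giving 10 bins with O_i = 0.1n each.
edges = np.concatenate(([0.0], np.quantile(rt, np.linspace(0.1, 0.9, 9)), [np.inf]))
observed = np.full(10, 0.1 * n)

def chi_square(theta):
    shape, scale = theta
    if shape <= 0 or scale <= 0:
        return np.inf
    cdf = stats.gamma.cdf(edges, a=shape, scale=scale)    # model CDF F(t_i, theta)
    expected = n * np.diff(cdf)                           # E_i = n * (F(t_{i+1}) - F(t_i))
    return np.sum((observed - expected) ** 2 / expected)

fit = optimize.minimize(chi_square, x0=[2.0, 100.0], method="Nelder-Mead")
print(fit.x, round(chi_square(fit.x), 2))                 # estimated (shape, scale), minimized chi-square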

maximum likelihood
Maximum likelihood is a powerful estimation method for fitting a model that makes predictions about the distribution from which the data were sampled. For example, a model might state that RTs are normally distributed with mean μ and standard deviation σ (see, e.g., the AR(1) model presented earlier). For a single RT observed to be t, the likelihood of the value t is given by the height of the normal density with mean μ and standard deviation σ at time t, or (1/σ)φ((t − μ)/σ), where φ is the standard normal density. For a discrete random variable, the likelihood is interpreted as the probability of observing the measured value of the variable given the parameter values θ. One way of thinking about maximum likelihood, then, is that we choose the parameters that give the highest possible probability of having observed the data we obtained.

Suppose that a model states that the RT distribution has a probability density function f(t|θ), where t is a possible value for an observation and θ is the vector of parameters for the distribution (like μ, σ, and τ for the ex-Gaussian distribution introduced earlier). We typically assume that the data from an experiment {T1, T2, ..., Tn} form a set of independent and identically distributed observations from
the distribution described by f(t|θ), so that the joint probability of having observed the data is given by

f(T1|θ) f(T2|θ) · · · f(Tn|θ) = Π_{i=1}^{n} f(Ti|θ).

The likelihood is then defined as

L(θ|T) = Π_{i=1}^{n} f(Ti|θ),

a function of θ given fixed values for T. We want to choose θ̂ so that the likelihood of the data we obtained is as high as possible, or L reaches a global maximum at θ̂. Sometimes closed-form expressions for the maximum likelihood estimates of θ are obtainable by methods of calculus. For example, the maximum likelihood estimate of a shift parameter (sometimes referred to as peripheral processing time in RT data) is given by the smallest observation in the sample. However, it is rarely possible to find closed-form expressions for the parameters for real problems. Rather, we program the likelihood function (if a canned routine is not easily obtainable) and pass it to an optimization algorithm that attempts to find the maximum, just as for nonlinear least squares minimization. Numerically it is usually easier to work with the log likelihood function; because the relationship between L and log L is monotonic, maximizing log L also maximizes L.

As with every estimation method, maximum likelihood has some drawbacks. First, parameter estimates that maximize likelihood may not exist. Second, if they do exist, then they may not be unique. That is, there may be another, completely different set of parameter values that work equally well. A third and related problem is that once a set of estimates has been found, it is difficult to determine if the value of the likelihood is a global maximum or only a local maximum. Fourth, sometimes the maximum likelihood estimate will be found at the extremes of the boundaries for the parameters. Proportions, for example, are bounded between 0 and 1, and the maximum likelihood estimate may be 1. The shift parameter for RT data is another example where the maximum likelihood estimate is equal to the value of the smallest observation. When the best estimates are at the extreme ends of a scale, this will frequently influence the estimates of other parameters. Maximum likelihood estimates may also be biased—for example, the maximum likelihood estimate of the shift parameter is too large and consistently overestimates the true shift. Sometimes
the modeler will need to make some arbitrary decisions about parameter limits to move the estimates back to a reasonable value. Maximum likelihood estimates also have many good qualities. Under general conditions, maximum likelihood estimates converge in probability to the true parameter value, they are asymptotically normal, and they have the smallest possible asymptotic variance. Also, when the errors are normally distributed, the maximum likelihood estimates coincide with those that minimize the sum of squared error.

A related method is quantile maximum likelihood estimation, which transforms the data into quantiles and then maximizes a likelihood based on the predicted proportion of observations falling between the quantiles (Heathcote, Brown, & Mewhort, 2002). This method is especially useful when the probability density function misbehaves for some parameter values (e.g., when singularities arise or when the function becomes sharply peaked) and when outliers result in likelihoods equal to zero. Brown and Heathcote (2003) have provided software for quantile maximum likelihood estimation of the ex-Gaussian distribution—software that can be modified to accommodate other RT distributions.
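As an illustration of the mechanics (our sketch, not the Brown and Heathcote software), the code below fits the ex-Gaussian by numerically minimizing the negative log likelihood, using scipy's exponnorm density; that distribution is the ex-Gaussian parameterized by a shape K = τ/σ together with a location (μ) and scale (σ).

import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(6)
rt = rng.normal(400, 40, size=1000) + rng.exponential(150, size=1000)   # simulated RTs

def neg_log_likelihood(theta, data):
    mu, sigma, tau = theta
    if sigma <= 0 or tau <= 0:
        return np.inf
    # scipy's exponnorm takes the shape parameter K = tau / sigma (passed positionally here).
    return -np.sum(stats.exponnorm.logpdf(data, tau / sigma, loc=mu, scale=sigma))

start = [rt.mean() - rt.std() / 2, rt.std() / 2, rt.std() / 2]   # crude starting values
fit = optimize.minimize(neg_log_likelihood, start, args=(rt,), method="Nelder-Mead")
mu_hat, sigma_hat, tau_hat = fit.x
print(round(mu_hat, 1), round(sigma_hat, 1), round(tau_hat, 1))  # roughly 400, 40, 150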

the ex-gaussian distribution
A popular way to characterize RT data is to use a parametric description of the sample that provides a summary of the shape of the empirical distribution. Although several candidate distributions exist, the most popular is the ex-Gaussian distribution, which is the distribution of the sum of a Gaussian variable (with mean μ and standard deviation σ) and an exponential variable (with mean τ). This distribution, although atheoretical, is very flexible and can capture a wide variety of positively skewed distributions. Therefore, many have found it very convenient to summarize RT data with estimates of μ, σ, and τ (Ratcliff, 1979; Ratcliff & Murdock, 1976; Heathcote, Popiel, & Mewhort, 1991). The ex-Gaussian estimates can be obtained in a number of ways, the most reliable being maximum likelihood or nonlinear least-squares fits to the cumulative distribution functions. Several routines are publicly available to assist in performing these computations (Cousineau & Larochelle, 1997; Dawson, 1988; Heathcote, 1996). The ex-Gaussian characterization of RTs is useful for estimating the shape of the RT distribution (Heathcote et al., 1991). It is less useful as a tool for inference, or trying to determine whether experimental manipulations had different effects on the
RT distributions. For example, a researcher may be concerned that one variable influenced only the slow RTs (an effect that might show up in τ) and another variable influenced only the fast RTs (an effect that might show up in μ). However, the distributions of the estimates of μ, σ, and τ are unknown; they depend on the underlying (and unknown) RT distribution. It is difficult, therefore, to determine the error in the estimates of μ, σ, and τ. More crucially, the parameter estimates are highly correlated. Because the sample mean must approximately equal μ + τ (see p. 274), as μ increases τ must decrease to fit the data. It may not be possible, therefore, to argue conclusively about the effects that experimental manipulations have on these parameters. It must also be noted that despite the ability of the ex-Gaussian to fit RT data, there are a number of features of RT data we can point to that rule out the ex-Gaussian as a model for RT data (Burbeck & Luce, 1982; Luce, 1986; Van Zandt, 2000). This fact, together with the understanding that the ex-Gaussian is an atheoretical model of the RT distribution and that, conditioned on the data, the parameters are strongly correlated, makes it difficult to interpret psychologically the changes in the different parameters across experimental conditions.

Meta-Theoretic Model Testing
Although model fitting is primarily an exercise in parameter estimation, model testing is more concerned with discriminating between different potential data-generating mechanisms on the basis of qualitative characteristics of the data. This is a problem that arises in many scientific endeavors, but in psychology it relies quite heavily on RT data. We now have a range of theoretical tools that can be applied to such data to try to discriminate among different kinds of cognitive architectures. The question of whether people can perceive or process a set of visual objects immediately and simultaneously (i.e., in parallel) or whether attention must be switched to each object in succession extends back more than a hundred years (e.g., Hamilton, n.d.). We have already discussed how, in the 1960s, this question was re-opened by Sternberg (1966) as the serial versus parallel processing issue. He and others reasoned that serial processing should produce mean RTs that increase linearly with the number of items to be searched, whereas parallel processing should produce increasing but negatively accelerated mean RTs. Townsend (1972, 1976) demonstrated, however, that the behavior of mean RT as a function of
items to be processed was determined more by the capacity of the process than by whether the architecture of the process was serial or parallel. Increasing mean RT indicates that the system slows down as the load increases, and certain parallel models with limited capacity could generate RTs distributed in exactly the same way as serial models. Over the past several years, Townsend and his colleagues (e.g., Townsend & Nozawa, 1995; Townsend & Wenger, 2004a) have proposed a methodology to separate capacity from architectural issues such as serial versus parallel processing and dependence versus independence of processing channels using factorial methods and redundant targets. These methods rely on experimental designs in which stimuli vary on at least two orthogonal dimensions, such as intensity and location. One of the stimulus dimensions (location) can be reasonably assumed to correspond to different processing channels or pathways. Townsend and Nozawa have referred to these kinds of experiments as "double factorial designs." The logic of Sternberg's (1969) additive factors method rests on such a design, although the procedures we describe here extend to the RT distributions and do not depend on the mean RTs (cf. Roberts & Sternberg, 1992). The factorial methods proposed by Townsend and colleagues depend on the variables in the experimental design having selective influence—that is, a variable influences one and only one subprocess of the task. Dzhafarov and colleagues have worked extensively on the question of selective influence and how it can be used to learn about the smaller components that make up a more complex task (Dzhafarov, 2003; Dzhafarov & Cortese, 1996; Dzhafarov & Gluhovsky, 2006; Dzhafarov & Schweickert, 1995; Kujala & Dzhafarov, 2008). Interested readers may consult these papers or Van Zandt (2002) for a brief overview of selective influence.

factorial methods
Consider an experiment where more than one stimulus can be presented at one time. To investigate questions of process architecture, we can assume that each distinct stimulus is processed through a separate processing channel. It is easiest to think about stimuli that differ in spatial location in a visual array, but we can also consider auditory stimuli presented to different ears, tactile stimuli presented at different locations on the body, or even visual spatial gratings presented in the same location but with different frequencies, such frequencies being generally thought

to require different processing locations in visual cortex. The general idea is to set up a scenario where more than one stimulus could possibly be processed at the same time. We now do a simple RT experiment where people respond to stimuli that vary according to some feature (such as intensity), crossed factorially with processing channels (like location). In the simplest design, consider stimuli with two levels of intensity (on or off) presented in two locations (left or right). Are the left and right channels independent of each other? What is their capacity? Does one slow down when the other is working? Can it work at all or must it wait until the other is finished? Can information be shared across channels?

Parallel channel models are a broad class of models that assume information flows through more than one pathway toward the execution of a response (see also Fig. 14.1, bottom panel). These models are often conceptualized as races, where a response is made as soon as any channel is finished (see Fig. 14.3, top panel). Response times for race models are therefore distributed as the minimum of the processing times for all of the channels. Such race models are often called "self-terminating," or OR, models because processing ends as soon as one or the other channel is finished. By contrast, an "exhaustive," or AND, model requires that all channels complete processing before a response is made (see Fig. 14.3, bottom panel), and RTs are distributed as the maximum of the processing times for all of the channels.

Figure 14.3 Two parallel architectures with different processing schemes. The top panel shows a self-terminating process, where the response can be made at the end of processing on either Channel 1 or Channel 2. The bottom panel shows an exhaustive process, where the response can be made only after processing is completed for both Channels 1 and 2.

Imagine a factorial design presenting observers with two lights colored red and green in different locations. The lights may be of different colors or they may be the same. We might assume that the simple RT task with these stimuli, using instructions stating, "Respond as soon as you see a light," encourages OR processing. Similarly, we might assume that the go/no-go task using instructions stating, "Respond only if the two lights are the same color," encourages AND processing. However, we can't be certain that this is actually what people do. People could use either OR or AND processing under either set of instructions, which brings up the issue of the decisional stopping rule in any task requiring the processing of more than one stimulus. The stopping rule is not something that can be deduced from the task itself. For example, a lazy system might process fewer items than required in a task that encourages AND processing. Townsend and Colonius (1997) and Van Zandt and Townsend (1993) have explored ways for determining the stopping rule in visual display and memory search experiments, but the factorial methods we present here can be applied to either stopping rule. The researcher must be aware that the stopping rule interacts with other aspects of cognitive architecture, most importantly those of processing channel independence and capacity.
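The consequence of the two stopping rules for the RT distribution can be made concrete with a small simulation. The sketch below is not taken from the chapter: the two channels, their exponential finishing-time distributions, and the base time are assumptions chosen only for illustration. It shows that an OR (race) rule produces RTs distributed as the channel minimum, whereas an AND (exhaustive) rule produces RTs distributed as the channel maximum.

import numpy as np

rng = np.random.default_rng(1)
n_trials = 10_000

# Hypothetical channel finishing times: a constant base time plus an
# exponentially distributed processing time (values are illustrative only).
chan_a = 0.20 + rng.exponential(0.15, n_trials)   # seconds
chan_b = 0.20 + rng.exponential(0.25, n_trials)

rt_or = np.minimum(chan_a, chan_b)    # self-terminating (OR) race: first channel to finish wins
rt_and = np.maximum(chan_a, chan_b)   # exhaustive (AND) rule: wait for both channels

print(f"mean RT under the OR rule:  {rt_or.mean():.3f} s")
print(f"mean RT under the AND rule: {rt_and.mean():.3f} s")

As expected, the OR rule yields faster mean RTs than either channel alone, and the AND rule yields slower ones, even though the channels themselves are identical in the two cases.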

channel independence and capacity Consider the simple factorial design with two stimulus locations (A and B) and two light intensities (on or off). Assume that a response can be made as soon as any channel signals the presence of a stimulus (an OR stopping rule). This kind of experiment is often called a redundant targets design. In such a design, RTs are faster when two targets are present (both are on) than when only one is present (one is off). If responses are generated by a race between independent processing channels, a decrease in RT when both targets are present is expected because the minimum of two random variables will have a smaller mean than the mean of either of the two random variables alone; this is sometimes called a statistical advantage. How fast does the redundant target RT have to be before we can say that something more than a statistical advantage is occurring—that there is information being shared between the channels? How slow does it have to be before we can say that the two channels interfere with each other? The race inequality (J. Miller, 1982) is an empirical relationship between the RT cumulative distribution functions that must hold for the redundant target conditions and the single target conditions if a parallel race model is generating the RTs. The
cumulative distribution function gives the proportion of observations that fall below a value t, or the probability that an RT will be less than t. If the two channels are A and B (corresponding to stimulus location in our design, but the channels could correspond to some other stimulus dimension), let F(t|A, B) be the cumulative distribution of the RTs for the redundant condition and let F(t|A) and F(t|B) be the cumulative distributions for the RTs in the A-only and B-only conditions, respectively. Then the race inequality states that F(t|A, B) ≤ F(t|A) + F(t|B). If this inequality is violated for any value of t, then according to Miller, a large class of parallel channel race models is falsified. The Grice inequality (Grice, Canham, & Gwynne, 1984) provides the upper bound on the RT distribution for the redundant targets condition, or how slow the redundant target RTs can be before we have to conclude that the two channels interfere with each other (Colonius, 1990). The redundant targets RT distribution must satisfy max{F(t|A), F(t|B)} ≤ F(t|A, B) if a parallel channel race model is generating the RTs. Townsend and Nozawa (1995) noted that the boundaries on the RT distribution in the redundant targets paradigm imposed by the race and Grice inequalities are determined by the workload capacity of a parallel process and not the parallel or serial architecture itself. If the data satisfy both inequalities, then RTs neither speed up enough to conclude that processing is facilitated across channels (what Miller, 1982, called coactive processing) nor slow down enough to conclude that a load in one channel reduces the efficiency of the other. Increasing the amount of information to be processed by moving from a single target to a redundant target stimulus does not influence the efficiency of the channels. The fact that these bounds are capacity limits and not limits imposed by architecture means that they may be violated by a parallel race process of either limited or "super" capacity.
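As a rough illustration of how the two inequalities might be checked, the sketch below estimates the three cumulative distribution functions on a grid of time points and evaluates both bounds. The data are simulated placeholders for an independent race model, and the helper function ecdf is an assumption introduced for the example; in practice the observed single-target and redundant-target RTs would be substituted, and apparent violations would be assessed statistically (see the discussion of Houpt & Townsend, 2010, in the Summary) rather than by inspection alone.

import numpy as np

def ecdf(sample, t):
    """Empirical cumulative distribution function evaluated at the times in t."""
    sample = np.sort(np.asarray(sample))
    return np.searchsorted(sample, t, side="right") / sample.size

# Simulated placeholder RTs (seconds) for the A-only, B-only, and redundant
# (A and B) conditions; the redundant condition is an independent race.
rng = np.random.default_rng(2)
rt_a = 0.25 + rng.exponential(0.15, 5000)
rt_b = 0.25 + rng.exponential(0.15, 5000)
rt_ab = np.minimum(0.25 + rng.exponential(0.15, 5000),
                   0.25 + rng.exponential(0.15, 5000))

t = np.linspace(0.3, 1.5, 200)
F_ab, F_a, F_b = ecdf(rt_ab, t), ecdf(rt_a, t), ecdf(rt_b, t)

race_ok = np.all(F_ab <= F_a + F_b)                 # Miller's race inequality
grice_ok = np.all(np.maximum(F_a, F_b) <= F_ab)     # Grice inequality
print("race inequality satisfied: ", race_ok)
print("Grice inequality satisfied:", grice_ok)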

separating capacity from architecture The Miller and Grice inequalities apply only to processes with OR stopping rules. Townsend and Wenger (2004a) have reviewed much of the literature on RT-based tests of process structure and generalized this kind of thinking to processes with AND stopping rules as well. They have emphasized the relationships between channel independence,
the stopping rule required by the process, process architecture, and capacity. An important approach to identifying these different aspects of a cognitive task was presented by Townsend and Nozawa (1995), who investigated the Miller and Grice inequalities in the context of their systems factorial technology. Consider again the design of the redundant targets task, but add a third level of stimulus intensity so that a stimulus may be off (·) or of low (L) or high (H) intensity. The low-intensity stimulus slows the channel processing that stimulus. Lowering the intensity of the stimulus in one channel does not influence the processing speed in the other channel—that is, the intensity selectively influences the processing times in each channel. Townsend and Nozawa (1995) contributed two tools for analyzing data in such tasks: the interaction contrast SIC(t) and the capacity coefficient C(t) for OR tasks. Townsend and Wenger (2004b) later expanded the capacity coefficient to AND tasks.

Recall from above that the RT cumulative distribution function F(t) gives the proportion of RTs that are less than some value t. It is the probability of observing an RT faster than time t. The survivor function is S(t) = 1 − F(t), or the probability that an RT is slower than time t. We can subscript these functions to indicate the different conditions in the double factorial experiment, so F_ij(t) is the cumulative distribution function when stimulus i = ·, L, or H (absent, low intensity, or high intensity) is presented in the left channel and stimulus j = ·, L, or H is presented in the right channel; thus both high- and low-intensity stimuli can be processed in either channel. We can characterize capacity by the relationships between the distribution F_ij(t) or survivor S_ij(t) functions in different conditions. For experiments encouraging OR processing, it is convenient to use the survivor functions. Considering first the question of architecture (serial or parallel) and stopping rule (AND or OR), we can use the survivor interaction contrast defined as

SIC(t) = [S_LL(t) − S_LH(t)] − [S_HL(t) − S_HH(t)].   (1)

Notice that the interaction contrast relies only on those (redundant target) conditions where a stimulus is presented in both the left and the right locations—the contrast is constructed using only the functions F(t|A, B) over the different stimulus conditions.


Table 14.1. Interaction contrast predictions for different cognitive architectures and stopping rules. The time t* is a constant that is not necessarily the same for all models; it only marks the point at which the behavior of the function changes.

Serial OR:      SIC(t) = 0 for all t
Serial AND:     SIC(t) < 0 for t < t* and SIC(t) > 0 for t > t*
Parallel OR:    SIC(t) > 0 for all t
Parallel AND:   SIC(t) < 0 for all t
Coactivation:   SIC(t) < 0 for t < t* and SIC(t) > 0 for t > t*

This means that tests of processing architecture using SIC(t) are not confounded by changes in the number of items to be processed (workload), as are tests that rely on changes in mean RT with changes in workload. Table 14.1 shows the behavior of the survivor interaction contrast for different architectures and stopping rules. The qualitative behavior of serial AND and coactivation models shown in Table 14.1 appears to be the same, but the serial AND models predict equal positive and negative areas under the SIC(t) curve, whereas coactivation models predict small negative and large positive areas under the curve, thus providing for experimental discrimination of these models.

We now turn to the issue of system capacity, or the efficiency of each processing channel or processing stage under changes in the workload of the system. The interaction contrast SIC(t) function is measured for a constant workload and is used to assess architecture and stopping rules. Capacity is logically independent of the system's architecture—for example, whether the system is serial or parallel—although serial systems are frequently assumed to be of limited capacity and parallel systems are assumed to be of unlimited capacity. To measure system capacity, we must be able to assess the efficiency of the system, irrespective of architecture, under changes in workload. Measures of capacity must take into account the fact that the stopping rule will affect overall processing time. When all processes must be completed (for AND processing), RTs will generally be slower than when only a single process must be completed (for OR processing). Therefore, the capacity coefficient C(t) takes different forms for the two stopping rules. For the OR task,

C_OR(t) = −ln S(t|A, B) / [−ln S(t|A) − ln S(t|B)],   (2)

and for the AND task,

C_AND(t) = [ln F(t|A) + ln F(t|B)] / ln F(t|A, B).   (3)

Note that we are suppressing the notation associated with stimulus intensity for the capacity coefficient and have returned to the notation specifying the activities in the processing channels A and B. Thus, in contrast to the interaction contrast SIC(t), the capacity coefficient makes use of the stimuli in the single-target conditions and examines only one stimulus intensity i, which can be either high or low. If either C_OR(t) or C_AND(t) is greater than 1 for any t, then the process is "super" capacity or coactive. If either C_OR(t) or C_AND(t) is less than 1 for any t, then the process is limited capacity. If C_OR(t) or C_AND(t) equals 1 for all t, then the process is unlimited in capacity. Townsend and Nozawa (1995) estimated the interaction contrast and the capacity coefficient functions from data from a double-factorial simple RT design. The behavior of SIC(t) and C_OR(t) suggested super-capacity parallel processing with an OR stopping rule in one experiment, and limited capacity parallel processing with an OR stopping rule in another experiment. More recently, Townsend and Eidels (2011) have shown how the race and Grice inequalities for AND and OR tasks can be expressed in terms of limits on the capacity coefficient. This allows the inequalities to be examined simultaneously with the capacity coefficient, giving greater insight into the capacity characteristics of factorial systems. Together, the interaction contrast and the capacity coefficient provide initial insights into the fundamental structure and mechanisms of the investigated system, insights that can then be explored in additional experiments. Interested readers should consult Townsend and colleagues' work (1995; 2004a; 2011) for more details on these tests. Van Zandt (2002) has provided guidelines for how these tests can be applied to data and some examples.
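The sketch below shows one way the survivor interaction contrast (Equation 1) and the OR capacity coefficient (Equation 2) might be estimated from data. The data generator (an unlimited-capacity, independent, parallel OR model), the channel distributions, and the helper function esf are assumptions introduced only for illustration; with real data the four redundant-target conditions and the two single-target conditions would come from the double factorial experiment itself.

import numpy as np

def esf(sample, t):
    """Empirical survivor function S(t) = 1 - F(t) evaluated at the times in t."""
    sample = np.sort(np.asarray(sample))
    return 1.0 - np.searchsorted(sample, t, side="right") / sample.size

rng = np.random.default_rng(3)
mean_time = {"H": 0.10, "L": 0.20}   # hypothetical mean channel times (seconds)

# Redundant-target conditions of the double factorial design: intensity i in
# the left channel and intensity j in the right channel, OR stopping rule.
rts = {(i, j): np.minimum(0.2 + rng.exponential(mean_time[i], 5000),
                          0.2 + rng.exponential(mean_time[j], 5000))
       for i in "HL" for j in "HL"}

t = np.linspace(0.25, 1.5, 200)
S = {cond: esf(sample, t) for cond, sample in rts.items()}

# Survivor interaction contrast, Equation 1
sic = (S[("L", "L")] - S[("L", "H")]) - (S[("H", "L")] - S[("H", "H")])

# OR capacity coefficient, Equation 2, using high-intensity single-target
# conditions (simulated here as single channels, i.e., unlimited capacity)
s_a = esf(0.2 + rng.exponential(mean_time["H"], 5000), t)
s_b = esf(0.2 + rng.exponential(mean_time["H"], 5000), t)
s_ab = S[("H", "H")]
with np.errstate(divide="ignore", invalid="ignore"):
    c_or = -np.log(s_ab) / (-np.log(s_a) - np.log(s_b))

print("SIC(t) ranges from", round(float(sic.min()), 3), "to", round(float(sic.max()), 3))
print("median C_OR(t):", round(float(np.nanmedian(c_or)), 2))

For this generating model the estimated SIC(t) should be positive apart from sampling noise in the tails, consistent with the parallel OR row of Table 14.1, and C_OR(t) should hover around 1, consistent with unlimited capacity.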

Summary In this section we discussed the analyses of RT data. There are different analyses for different purposes. Most commonly, we attempt to estimate parameters of cognitive models from RT data, or we test different classes of models in an attempt to discover the fundamentals of cognitive structure. There are many good references that researchers intending to perform RT analyses should consult
before proceeding; we have been able only to provide a brief overview of these techniques here. For more information on model fitting and parameter estimation, an excellent reference is Busemeyer and Diederich (2010). An excellent discussion of metatheoretic tests and the philosophy behind them can be found in Townsend and Wenger (2004a). One issue we have not touched on is that of model comparison. That is, when two or more models fit the data or satisfy the constraints of the data, how do we choose between them? This important and difficult question, which faces all areas of quantitative research and not just RT experiments, has been extensively addressed by Myung, Pitt, and colleagues (e.g., Cavagnaro, Myung, Pitt, & Kujala, 2010; Navarro, Pitt, & Myung, 2004; Pitt, Kim, & Myung, 2003) and is summarized in Chapter 21 (Cavagnaro, Myung, & Pitt, Chapter 21, Volume 1). Another closely related issue involves determining statistical significance of model tests. That is, the estimates of the RT distributions are random and subject to sampling error. Therefore, we might expect poor fits or violations of expected behavior (such as an interaction contrast everywhere positive) by chance alone. Determining whether violations are statistically significant is not trivial: The points on each curve are not independent from each other, and this dependence will increase the likelihood that spurious differences will be statistically significant. A number of strategies have been proposed to test these relationships, and the best approach so far is that of Houpt and Townsend (2010).

Conclusion This chapter has outlined the kinds of experimental designs most commonly used in RT experiments and then the most popular methods of RT analysis. Each of the subsections in this chapter, however briefly presented here, has been the topic of many papers and chapters elsewhere and is necessarily a very broad overview of a quite complex topic. We have tried to provide the best references to the work in these areas so that interested readers can find the help they need at a more detailed level. To the reader who asks, "What kind of experiment should I do and how should I analyze my data?" we respond: It depends. It depends on the question you are asking, the hypothesis you are trying to test. It is never a good idea to shoehorn a general method to fit a specific problem. Although this chapter gives some guidance in experimental design and analysis, the beauty of the RT experiment is in its flexibility: It may be as simple as measuring a single keypress, or it could measure the times between notes executed by a concert pianist (e.g., Goebl & Palmer, 2009). We hope this chapter provides enough background that the reader feels more confident in creating the unique approach appropriate for his or her unique problem.

Future Directions Perhaps the most important new technique for data analysis in RT studies is being provided by the application of Bayesian statistical methods. These techniques allow the data to be analyzed within a theoretically motivated framework, one in which the likelihood of the data is provided by the model of interest. We can contrast this with more traditional methods, such as analysis of variance applied to mean RT data, where the model being fit is known to be false and is therefore of no interest at all. There are now statistical packages (such as JAGS and WinBUGS) that assist in the development of Bayesian models and that run on any desktop computer. Unfortunately, these packages still do not handle well the kinds of models typically explored for RT data, so Bayesian modeling of RTs is still restricted to a small group of quantitative researchers with mathematical and programming expertise. Specialized routines to assist in this kind of modeling will soon become available, opening this avenue to everyone.
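As a concrete, if greatly simplified, illustration of analyzing RT data with a likelihood supplied by a substantive model, the sketch below computes a grid-approximation posterior in plain Python rather than in JAGS or WinBUGS. The shifted-exponential RT model, the fixed non-decision time, the flat prior, and all numerical values are assumptions made only for this example; realistic RT models (e.g., diffusion or accumulator models) require considerably more machinery.

import numpy as np

# Hypothetical data: RTs (seconds) generated from a shifted-exponential model
# with a known shift and an unknown rate; the model supplies the likelihood.
rng = np.random.default_rng(4)
rts = 0.25 + rng.exponential(scale=0.20, size=200)

shift = 0.25                                # assumed non-decision time (seconds)
rate_grid = np.linspace(0.5, 20.0, 2000)    # candidate rates for the decision stage

# Log-likelihood of the data at each candidate rate: an exponential density
# with that rate, shifted by the non-decision time.
residuals = rts - shift
log_lik = (np.log(rate_grid)[:, None] - rate_grid[:, None] * residuals).sum(axis=1)

log_post = log_lik + np.zeros_like(rate_grid)   # flat prior over the grid
post = np.exp(log_post - log_post.max())
post /= post.sum()

posterior_mean = float(np.sum(rate_grid * post))
print(f"posterior mean rate: {posterior_mean:.2f} (generating value was 1/0.20 = 5.0)")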

Author Note This work was supported by the National Science Foundation under grants no. BCS-0738059 and SES-1024709, the National Institute of Mental Health under grant no. 57717-04A1, and the Air Force Office of Scientific Research under grant no. FA9550-07-1-0078.

Notes

1. The restriction that n ≤ m is not required for the definition of information but is required for an n-choice task. 2. Psychologists and statisticians use the word “model” in slightly different ways. Psychologists refer to hypothetical cognitive mechanisms as models that then dictate the probability distributions that data will follow. Statisticians refer to the probability distributions themselves as models without as much consideration of the mechanisms that dictate those distributions. We see these two points of view as interchangeable for the purposes of this section, but the reader should be cautious. Different cognitive mechanisms may dictate that data follow the same distribution, and the same cognitive mechanism may dictate that data follow different distributions depending on the assumptions made to implement the model.


References Ashby, F. G., & Townsend, J. T. (1980). Decomposing the reaction time distribution: Pure insertion and selective influence revisited. Journal of Mathematical Psychology, 21, 93–123. Balakrishnan, J. D. (1994). Simple additivity of stochastic psychological processes: Tests and measures. Psychometrika, 59, 217–240. Borowsky, A., Oron-Gilad, T., & Parmet, Y. (2009). Age and skill differences in classifying hazardous traffic scenes. Transportation Research Part F: Traffic Psychology and Behaviour, 12, 277–287. Brown, S. D., & Heathcote, A. (2003). QMLE: Fast, robust, and efficient estimation of distribution functions based on quantiles. Behavioral Research Methods, Instruments, & Computers, 35, 485–492. Burbeck, S. L., & Luce, R. D. (1982). Evidence from auditory simple reaction times for both change and level detectors. Perception and Psychophysics, 32, 117–132. Busemeyer, J. R., & Diederich, A. (2010). Cognitive modeling. Thousand Oaks, CA: Sage Publications. Cavagnaro, D. R., Myung, J. I., & Pitt, M. A. (2012). Mathematical modeling. In T. D. Little (Ed.), The Oxford Handbook of Quantitative Methods (Vol. 1, pp. 438–453). New York: Oxford University Press. Cavagnaro, D. R., Myung, J. I., Pitt, M. A., & Kujala, J. V. (2010). Adaptive design optimization: A mutual information-based approach to model discrimination in cognitive science. Neural Computation, 22, 887–905. Colonius, H. (1990). Possibly dependent probability summation of reaction time. Journal of Mathematical Psychology, 34 , 253–275. Cousineau, D., & Larochelle, S. (1997). PASTIS: A program for curve and distribution analyses. Behavioral Research Methods, Instruments, & Computers, 29, 542–548. Craigmile, P. F., Peruggia, M., & Van Zandt, T. (2010a). Hierarchical Bayes models for response time data. Psychometrika, 75(4), 613–632. Craigmile, P. F., Peruggia, M., & Van Zandt, T. (2010b). Detrending response time series. In S.-M. Chow, E. Ferrer, & F. Hsieh (Eds.), Statistical methods for modeling human dynamics: An interdisciplinary dialogue (Vol. 4, pp. 213–240). New York: Taylor and Francis. Dawson, M. R. (1988). Fitting the ex-gaussian equation to reaction time distributions. Behavioral Research Methods, Instruments, & Computers, 20, 54–57. Diederich, A., & Colonius, H. (2008). Crossmodal interaction in saccadic reaction time: Separating multisensory from warning effects in the time window of integration model. Experimental Brain Research, 186 , 1–22. Donders, F. C. (1868/1969). On the speed of mental processes. Acta Psychologica, 30, 412–431. (Translated by W. G. Koster) Donkin, C., Brown, S., & Heathcote, A. (2011). Drawing conclusions from choice response time models: A tutorial using the linear ballistic accumulator. Journal of Mathematical Psychology, 55(2), 140–151. Duncombe, R. L. (1945). Personal equation in astronomy. Popular Astronomy, 53, 2–13, 63–76, 110–121. Dzhafarov, E. N. (2003). Selective influence through conditional independence. Psychometrika, 68, 7–26. Dzhafarov, E. N., & Cortese, J. M. (1996). Empirical recovery of response time decomposition rules I. Sample-level decomposition tests. Journal of Mathematical Psychology, 40, 185–202.

Dzhafarov, E. N., & Gluhovsky, I. (2006). Notes on selective influence, probabilistic causality, and probabilistic dimensionality. Journal of Mathematical Psychology, 50, 390–401. Dzhafarov, E. N., & Schweickert, R. (1995). Decompositions of response times: An almost general theory. Journal of Mathematical Psychology, 39, 285–314. Ellis, R. J., & Jones, M. R. (2010). Rhythmic context modulates foreperiod effects. Attention, Perception, & Psychophysics, 72(8), 2274–2288. Erceg-Hurn, D. M., Wilcox, R. R., & Keselman, H. H. (2012). Robust statistical estimation. In T. D. Little (Ed.), The Oxford Handbook of Quantitative Methods (Vol. 1, pp. 388–406). New York: Oxford University Press. Fitts, P. M., & Seeger, M. (1953). S-R compatibility: spatial characteristics of stimulus and response codes. Journal of Experimental Psychology, 46 , 199–210. Gallant, A. R. (1987). Nonlinear statistical models. New York: Wiley Press. Garcia-Perez, M. A. (1998). Forced-choice staircases with fixed step sizes: asymptotic and small-sample properties. Vision Research, 38, 1861–1881. Gilden, D. L. (1997). Fluctuations in the time required for elementary decisions. Psychological Science, 8, 296–301. Gilden, D. L. (2001). Cognitive emissions of 1/f noise. Psychological Review, 108, 33–56. Goebl, W., & Palmer, C. (2009). Synchronization of timing and motion among performing musicians. Music Perception, 26 , 427–438. Green, D. M., & Luce, R. D. (1971). Detection of auditory signals presented at random times: III. Perception and Psychophysics, 9, 257–268. Grice, G. R., Canham, L., & Gwynne, J. W. (1984). Absence of a redundant-signals effect in a reaction time task with divided attention. Perception and Psychophysics, 36 , 565–570. Hamilton, S. W. (n.d.). Lectures on metaphysics and logic. In (Vol. I, pp. 154–170). Boston: Gould and Lincoln. Heathcote, A. (1996). RTSYS: A DOS application for the analysis of reaction time data. Behavioral Research Methods, Instruments, & Computers, 28, 427–445. Heathcote, A., Brown, S. D., & Mewhort, D. J. (2002). Quantile maximum likelihood estimation of response time distributions. Psychonomic Bulletin and Review, 9, 394–401. Heathcote, A., Popiel, S. J., & Mewhort, D. J. (1991). Analysis of response time distributions: An example using the stroop task. Psychological Bulletin, 109, 340–347. Heiervang, E., & Hugdahl, K. (2003). Impaired visual attention in children with dyslexia. Journal of Learning Disabilities, 36 , 68–73. Helmholtz, H. (1850). Vorläufiger Bericht über die Fortpflanzungsgeschwindigkeit der nervenreizung. Archiv für Anatomie, Physiologie und Wissenschaftliche Medizin, 71–73. Hick, W. E. (1952). On the rate of gain of information. Quarterly Journal of Experimental Psychology, 4 , 11–26. Holden, J. G. (2005). Gauging the fractal dimension of response times from cognitive tasks. In M. A. Riley & G. C. V. Orden (Eds.), Tutorials in contemporary nonlinear methods for behavioral scientists. Arlington, VA: National Science Foundation. Retrieved September 2010 from http://www.nsf.gov/sbe/bcs/pac/nmbs/nmbs.jsp. Last accessed May 7, 2012



Holden, J. G., Van Orden, G. C., & Turvey, M. T. (2009). Dispersion of response times reveals cognitive dynamics. Psychological Review, 116 , 318–342. Houpt, J. W., & Townsend, J. T. (2010). The statistical properties of the Survivor Interaction Contrast. Journal of Mathematical Psychology, 54(5), 446–453. doi:10.1016/j.jmp.2010.06.006 Hyman, R. (1953). Stimulus information as a determinant of reaction time. Journal of Experimental Psychology, 45, 188–196. Kello, C. T., Anderson, G. G., Holden, J. G., & Van Orden, G. C. (2008). The pervasiveness of 1/f scaling in speech reflects the metastable basis of cognition. Cognitive Science: A Multidisciplinary Journal , 32, 1217–1231. Kirby, N. H. (1980). Sequential effects in choice reaction time. In A. T. Welford & J. M. T. Brebner (Eds.), Reaction times (pp. 129–172). New York: Academic Press. Klemmer, E. T. (1956). Time uncertainty in simple reaction time. Journal of Experimental Psychology, 51, 179–184. Kohfeld, D. (1971). Simple reaction time as a function of stimulus intensity in decibels of light and sound. Journal of Experimental Psychology, 88, 251–257. Kujala, J. V., & Dzhafarov, E. N. (2008). Testing for selectivity in the dependence of random variables on external factors. Journal of Mathematical Psychology, 52, 128–144. Laming, D. R. (1968). Information theory of choice reaction time. New York: Wiley Press. Laming, D. R. (1979). Autocorrelation of choice-reaction times. Acta Psychologica, 43, 381–412. Large, E. W., & Jones, M. R. (1999). The dynamics of attending: How people track time-varying events. Psychological Review, 106 , 119–159. Logan, G. D. (1988). Toward an instance theory of automatization. Psychological Review, 95, 492–527. Logan, G. D., & Burkell, J. (1986). Dependence and independence in responding to double stimulation: A comparison of stop, change and dual-task paradigms. Journal of Experimental Psychology: Human Perception and Performance, 12, 549–563. Logan, G. D., Cowan, W. B., & Davis, K. A. (1984). On the ability to inhibit simple and choice reaction time responses: A model and a method. Journal of Experimental Psychology: Human Perception and Performance, 10, 276–291. Logan, G. D., & Gordon, R. D. (2001). Execution control of visual attention in dual-task situations. Psychological Review, 108, 393–434. Luce, R. D. (1986). Response times: Their role in inferring elementary mental organization. New York: Oxford University Press. Mandelbrot set. (2010). In E. W. Weisstein (Ed.), Mathworld – a Wolfram web resource. Retrieved December 14, 2010, from http://mathworld.wolfram.com/MandelbrotSet.html. Mansfield, R. J. (1973). Latency functions in human vision. Vision Research, 13, 2219–2234. Miller, G. A. (1956). The magic number seven plus or minus two: Some limits on our capacity for processing information. Psychological Review, 63, 81–87. Miller, J. (1982). Divided attention: Evidence for coactivation with redundant signals. Cognitive Psychology, 14 , 247–279. Navarro, D. J., Pitt, M. A., & Myung, I. J. (2004). Assessing the distinguishability of models and the informativeness of data. Cognitive Psychology, 49, 47–84.


Niemi, P., & Näätänen, R. (1981). Foreperiod and simple reaction time. Psychological Bulletin, 89, 133–162. Pachella, R. G., & Pew, W. (1968). Speed-accuracy tradeoff in reaction time: Effect of discrete criterion times. Journal of Experimental Psychology, 76 , 19–24. Peruggia, M., Van Zandt, T., & Chen, M. (2002). Was it a car or a cat i saw? An analysis of response times for word recognition. Case Studies in Bayesian Statistics, VI , 319–334. Piéron, H. (1920). Nouvelles recherches sur l’analyse du temps de latence sensorielle et sur la loi qui relie ce temps à l’intensité de l’excitation (New research on the analysis of sensory latency and the law relating latency to the intensity of excitation). L’Année Psychologique, 22, 58–142. Pitt, M. A., Kim, W., & Myung, I. J. (2003). Flexibility vs generalizability in model selection. Psychonomic Bulletin and Review, 10, 29–44. Proctor, R. W., & Vu, K.-P. L. (2006). Stimulus-response compatibility principles: Data, theory, and application. Boca Raton, FL: CRC Press. Querne, L., & Berquin, P. (2009). Distinct response time distributions in attention deficit hyperactivity disorder subtypes. Journal of Attention Disorders, 13, 66–77. Ratcliff, R. (1979). Group reaction time distributions and an analysis of distribution statistics. Psychological Bulletin, 86 , 446–461. Ratcliff, R. (1993). Methods for dealing with reaction time outliers. Psychological Bulletin, 114 , 510–532. Ratcliff, R., & Murdock, B. B., Jr. (1976). Retrieval processes in recognition memory. Psychological Review, 83, 190–214. Ratcliff, R., & Smith, P. L. (2004). A comparison of sequential sampling models for two-choice reaction time. Psychological Review, 111, 333–367. Reed, A. V. (1973). Speed-accuracy trade-off in recognition memory. Science, 181, 574–576. Remington, R. J. (1969). Analysis of seequential effects in chice reaction times. Journal of Experimental Psychology, 82, 250– 257. Roberts, S., & Sternberg, S. (1992). The meaning of additive reaction-time effects: Tests of three alternatives. In D. E. Meyer & S. Kornblum (Eds.), Attention and performance XIV (pp. 611–654). Cambridge, MA: MIT Press. Santee, J. L., & Egeth, H. E. (1982). Do reaction time and accuracy measure the same aspects of letter recognition? Journal of Experimental Psychology: Human Perception and Performance, 8, 489–501. Schweickert, R. (1978). A critical path generalization of the additive factor methods analysis of a Stroop task. Journal of Mathematical Psychology, 18, 105–139. Schweickert, R., Giorgini, M., & Dzhafarov, E. N. (2000). Selective influence and response time cumulative distribution functions in serial-parallel networks. Journal of Mathematical Psychology, 44 , 504–535. Schweickert, R., & Townsend, J. T. (1989). A trichotomy method: Interactions of factors prolonging sequential and concurrent mental processes in the stochastic PERT networks. Journal of Mathematical Psychology, 33, 328–348. Seber, G. A. F., & Wild, C. J. (2003). Nonlinear regression. Hoboken, NJ: John Wiley & Sons. Shore, D. I., & Spence, C. (2005). Prior entry. In L. Itti, G. Rees, & J. K. Tsotsos (Eds.), Neurobiology of attention (pp. 89–95). New York: Elsevier.


Simon, J. R. (1969). Reactions towards the source of stimulation. Journal of Experimental Psychology, 81, 174–176. Smith, P. L. (1995). Psychophysically principled models of visual simple reaction time. Psychological Review, 102, 567–593. Smith, P. L., & Vickers, D. (1988). The accumulator model of two-choice discrimination. Journal of Mathematical Psychology, 32, 135–168. Sternberg, S. (1966). High-speed scanning in human memory. Science, 153, 652–654. Sternberg, S. (1969). The discovery of processing stages: Extensions of Donder’s method. In W. G. Koster (Ed.), Attention and performance II (pp. 276–315). Amsterdam: North-Holland. Stevens, C. J., Brennan, D., Petocz, A., & Howell, C. (2009). Designing informative warning signals: Effects of indicator type, modality, and task demand on recognition speed and accuracy. Advances in Cognitive Psychology, 5, 42–48. Stuart, A., & Ord, J. K. (1999). Kendall’s advanced theory of statistics (6th ed., Vol. 1). London: Edward Arnold. Sullivan, J. M., Tsimhoni, O., & Bogard, S. (2008). Warning reliability and driver performance in naturalistic driving. Human Factors, 50, 845–852. Teichner, W. H., & Krebs, M. J. (1972). Laws of the simple visual reaction time. Psychological Review, 79, 344–358. Telford, C. W. (1931). The refractory phase of voluntary and associative responses. Journal of Experimental Psychology, 14 , 1–36. Townsend, J. T. (1972). Some results concerning the identifiability of parallel and serial processes. British Journal of Mathematical and Statistical Psychology, 25, 168–199. Townsend, J. T. (1976). Serial and within-stage independent parallel model equivalence on the minimum completion time. Journal of Mathematical Psychology, 14 , 219–238. Townsend, J. T. (1984). Uncovering mental processes with factorial experiments. Journal of Mathematical Psychology, 28, 363–400. Townsend, J. T., & Ashby, F. G. (1983). Stochastic modeling of elementary psychological processes. New York: Cambridge University Press. Townsend, J. T., & Colonius, H. (1997). Parallel processing response times and experimental determination of the stopping rule. Journal of Mathematical Psychology, 41, 392–397. Townsend, J. T., & Eidels, A. (2011). Workload capacity spaces: A unified methodology for response time measures of efficiency as workload is varied. Psychonomic Bulletin and Review, 18, 659–681. Townsend, J. T., & Nozawa, G. (1995). Spatio-temporal properties of elementary perception: An investigation of parallel, serial, and coactive theories. Journal of Mathematical Psychology, 39, 321–359. Townsend, J. T., & Wenger, M. J. (2004a). The serial-parallel dilemma: A case study in a linkage of theory and method. Psychonomic Bulletin and Review, 11, 391–418.

Townsend, J. T., & Wenger, M. J. (2004b). A theory of interactive parallel processing: New capacity measures and predictions of a response time inequality series. Psychological Review, 111, 1003–1035. Ueno, T. (1978). Temporal summation in human vision: Simple reaction time measurements. Perception and Psychophysics, 23, 43–50. Ulrich, R., & Miller, J. (1994). Effects of truncation on reaction time analysis. Journal of Experimental Psychology: General , 123, 34–80. Usher, M., & McClelland, J. L. (2001). On the time course of pereptual choice: The leaky competing accumulator model. Psychological Review, 108, 550–592. Van Orden, G. C., Holden, J. G., & Turvey, M. T. (2003). Self-organization of cognitive performance. Journal of Experimental Psychology: General , 132, 331–350. Van Selst, M., & Jolicoeur, P. (1994). A solution to the effect of sample size on outlier elimination. Quarterly Journal of Experimental Psychology, 47 , 631–650. Van Zandt, T. (2000). How to fit a response time distribution. Psychonomic Bulletin and Review, 7 , 424–465. Van Zandt, T. (2002). Analysis of response time distributions. In H. Pashler & J. Wixted (Eds.), Stevens’ handbook of experimental psychology (3rd ed., pp. 461–516). New York: John Wiley & Sons. Van Zandt, T., Colonius, H., & Proctor, R. W. (2000). A comparison of two response time models applied to perceptual matching. Psychonomic Bulletin and Review, 7 , 208–256. Van Zandt, T., & Jones, M. R. (2012). Stimulus rhythm and choice performance. (Submitted) Van Zandt, T., & Townsend, J. T. (1993). Self-terminating and exhaustive processes in rapid visual and memory search: An evaluative review. Perception and Psychophysics, 53, 563–580. Verbruggen, F., & Logan, G. D. (2009). Models of response inhibition in the stop-signal and stop-change paradigms. Neuroscience and Biobehavioral Reviews, 33, 647–661. Wagenmakers, E.-J., & Brown, S. (2007). On the linear relation between the mean and the standard deviation of a response time distribution. Psychological Review, 114 , 830–841. Wagenmakers, E.-J., Farrell, S., & Ratcliff, R. (2004). Estimation and interpretation of 1/f α noise in human cognition. Psychonomic Bulletin and Review, 11, 579–615. Wei, W. W. S. (2012). Time series analysis. In T. D. Little (Ed.), The Oxford Handbook of Quantitative Methods (Vol. 2, pp. 458–485). New York: Oxford University Press. Welford, A. T. (1952). The "psychological refractory period" and the timing of high-speed performance – a review and a theory. British Journal of Psychology, 43, 2–19. Welford, A. T., & Brebner, J. (Eds.). (1980). Reaction times. New York: Academic Press. Woodworth, R., & Schlosberg, H. (1954). Experimental psychology. New York: Holt. Yuan, K-H., & Schuster, C. (2012). Overview of statistical estimation methods. In T. D. Little (Ed.), The Oxford Handbook of Quantitative Methods (Vol. 1, pp. 360–386). New York: Oxford University Press.



CHAPTER 15

Observational Methods

Jamie M. Ostrov and Emily J. Hart

Abstract Systematic observational methods require clearly defined codes and structured sampling and recording procedures, and they are subject to rigorous psychometric analysis. We review best practices in each of these areas with attention to the application of these methods for addressing empirical questions that quantitative researchers may posit. Special focus is placed on the selection of appropriate observational methods and coding systems as well as on the analysis of reliability and validity. The use of technology to facilitate the collection and analysis of observational data is discussed. Ethical considerations and future directions are raised.
Key Words: Observation, observer, time sampling, event sampling, participant observation, focal participant sampling, semi-structured observations, scan sampling, interobserver reliability, Cohen's Kappa, observer drift, reactivity, remote audio-visual recording, computer-assisted observational software

Introduction Systematic observational methods have been a common technique employed by psychologists studying human and animal behavior since the inception of our field, and yet best practices for the use of observational instruments (see Table 15.1) are often not known or adopted by researchers in our field. As such, the quality of observational research varies widely, and thus, it is our goal in the present chapter to review and explicitly define the standards of practice for this important methodological tool in the psychological sciences. Bakeman and Gottman (1987) have previously defined observational methods to include the a priori use of operationally defined behavioral codes by observers who have achieved interobserver reliability. Importantly, the setting or context is not what defines a method as being systematic (Pellegrini, 2004). That is, systematic observations may be conducted in the laboratory, in schools, in the workplace, or in public spaces, and coded
live or via recordings/transcripts. Therefore, having clear definitions and sampling/recording rules as well as reliable codes delineates informal, unsystematic observation from systematic observation. We also distinguish between the use of nonsystematic field notes and other data collection techniques that are often used in qualitative studies by ethologists and educational practitioners in naturalistic contexts and only include a review and analysis of systematic observational methods (Pellegrini, Ostrov, Roseth, Solberg, & Dupuis, in press). Nonsystematic sampling techniques such as Ad libitum (i.e., ad lib) in which there are no a priori systematic sampling or recording rules are often used by researchers as a part of pilot testing and help to inform the development of systematic observational coding systems (Pellegrini, 2004). Thus, ad lib sampling approaches are important to understand the context and nature of the behaviors under study, but they will not be discussed

Table 15.1. Best Practices for Observational Methods

Defining behaviors/codes: Clear, discrete behaviors are ideal. A priori operational definitions/observational codes are needed. Codes should be mutually exclusive and exhaustive where appropriate.

Sampling/recording rules: Procedures should be standardized and appropriate for the behavior under study. Observations should be independent and pilot-tested if a new scheme is used.

Training: Observers should be unaware of study hypotheses. A standardized manual should be used. Initial levels of interobserver reliability should be obtained by all observers with an experienced, reliable trainer.

Data collection: A minimally responsive manner should be used for live observations to reduce participant reactivity. Participants are only observed once per session/day.

Reliability/validity: Interobserver reliability should be assessed across the study. Cohen's Kappa should be used when possible. Validity assessments should be included.

Scoring: Standardized procedures should be adopted.

Biases/Error: Efforts should be implemented to reduce participant reactivity, observer drift, and other biases and sources of error.

Ethics: IRB approval as well as informed consent/assent should be obtained when possible. Protections should be considered for the duration of the study.

further in this review. Observational methods may be used in a variety of designs from correlational and quasi-experimental to experimental and even randomized trial designs (Bakeman & Gnisci, 2006). However, it is more typical to find systematic observational methods used outside the laboratory to maximize ecological validity and, thus, less likely as part of experimental manipulations (Bakeman & Gnisci, 2006). The current review will be relevant to all research designs with a focus on those methods that are well designed for quantitative data analysis.

History of Observational Methods Systematic observational methods have been used extensively by psychologists throughout the history of our field to examine various empirical questions (see Langfeld, 1913). One of the first documented cases of systematic observational methods in the extant literature was a study by Goodenough (1930), which was part of an increasing trend in the systematic study of young children as part of the Child Welfare Movement in the United States, supported by the National Research Council (for review, see Arrington, 1943). In fact, her seminal work was also one
of the first studies in psychology to be published using time sampling (see Sampling section below) observational procedures (Arrington, 1943). In her classic work (appearing in the first issue of Child Development), Florence L. Goodenough reported on several observational studies conducted in her laboratory at the Institute of Child Welfare (now Institute of Child Development) of the University of Minnesota. This work highlights several best practices that are still endorsed today. For example, careful pilot testing of the observational codes was conducted, and revisions were made to generate mutually exclusive codes (see Coding section below) and reliable distinctions between the categories. In addition, observations of each child's physical activity were conducted only once per day and only by one observer at a time so that observations of behavior were conducted independent of one another. Goodenough (1930) carefully defined the a priori categories or observational codes and demonstrated interobserver reliability for each of these codes. Finally, Goodenough (1930) described the justification for her observational procedures and discussed alternative techniques (e.g., the optimum duration for an interval within a time-sampling procedure). There are other well-known examples of systematic observation conducted by contemporaries of
Goodenough, including Parten's (1932) study of young children's play behavior, which also illustrate best practices (e.g., clearly defined, mutually exclusive observational codes; rules designed to maintain independence of sampling and decrease observer error). Some of the earliest observational studies focused on either children or non-human animals (e.g., Crawford, 1942), as other techniques for studying behavior (and often social domains of study) were either not as well suited for the research questions or not available at the time. Today, systematic observational methods are used in research and applied settings (Pellegrini, 2001) and are relevant for training in all domains and subdisciplines of the social and behavioral sciences (Krehbiel & Lewis, 1994).

Sampling and Recording Rules Systematic observational systems follow various sampling and recording rules that are designed for different contexts and research questions. The following section includes a review of the central sampling and recording rules that quantitative scholars would use for conducting systematic observations (see Table 15.2 for a summary of the strengths and weaknesses of each approach). Recently adopted best practices for direct systematic observation are relevant for each of these types of observational methods, and they are briefly reviewed here. These practices, which were first introduced by Hintze, Volpe, and Shapiro (2002), include (1) the observational system is designed to measure well-defined behaviors; (2) the behaviors are operationally defined a priori; (3) observations are recorded using objective, standardized (i.e., manualized training protocols) sampling procedures and recording rules; (4) the context and timing of sampling is explicitly determined; and (5) scoring and coding of data are conducted in a standardized fashion (see Leff & Lakin, 2005, p. 476).

Time Sampling Time sampling is a time-dependent observational procedure in which the researcher divides the behavior stream a priori into discrete intervals and scores each interval for the presence or absence of the behavior in question. That is, the time interval is the unit coded (Bakeman & Gottman, 1987). Time sampling procedures may be conceptualized as either 0/1 (i.e., absent/present or nonoccurrence/occurrence) or continuous in nature. A time sampling procedure
is an efficient method of sampling, as multiple data points may be collected from a single participant in a short period of time. Time sampling is well suited for measuring rather discrete behaviors, such as overt behaviors (e.g., on task and off task behavior in classrooms), or with behaviors that are frequently occurring. For example, a recent study of the frequency of various behaviors (e.g., off task behavior, noncompliance) during several naturalistic activities in 30 children with various psychiatric diagnoses used a reliable 0/1 time sampling approach with a 15-second interval (Quake-Rapp, Miller, Ananthan, & Chiu, 2008). Alternatively, time sampling is not well designed for infrequently occurring events or events that are long in duration (Slee, 1987). A clear advantage is that time sampling is relatively inexpensive because it is an efficient use of the research assistant (Bakeman & Gottman, 1987). Further, 0/1 sampling is also easier for the observer than alternatives such as instantaneous sampling, in which the research assistant notes if the behavior is present at a precise moment in time rather than it occurring during a larger interval of time. A major disadvantage of the time sampling approach is that the researcher delineates the particular time interval and therefore arbitrarily categorizes the behavior into discrete artificial units of time that may or may not be meaningful (Slee, 1987). Moreover, some behaviors may exceed the often brief interval of time that is selected for the sampling. Thus, it is crucial to carefully justify the interval that is selected. The intervals are often brief and the behaviors in question should be readily apparent and easily observable by trained research assistants. If frequency estimates are to be obtained, then the interval in question needs to be sufficiently brief so that an accurate assessment can be made. That is, typically with an interval approach, a maximum of one behavior is recorded during an interval even if the behavior independently occurs more frequently during this interval (Slee, 1987). Thus, special attention needs to be given to the pilot testing of the observational scheme and various durations of the interval if frequency assessments are desired. Time sampling procedures are used in a range of settings and studies to test various empirical questions that often have applied significance. For example, Macintosh and Dissanayake (2006) adopted a 0/1 time sampling technique to assess spontaneous social interactions in school-aged children with high-functioning autism or Asperger’s disorder as well as typically developing children. Observations were conducted in the schoolyard. For each

Table 15.2. Strengths and Weaknesses of each Observational Approach

Time sampling
  Strengths: This method is efficient and inexpensive. It is appropriate for frequently occurring and/or discrete behaviors.
  Weaknesses: It is less useful for infrequently occurring behaviors. Time units may be categorized inappropriately.

Event sampling
  Strengths: This method efficiently enables the measurement of frequency, duration, latency, and intensity. It may be used with frequently or infrequently occurring behaviors.
  Weaknesses: It may be inappropriate in situations where it is difficult to determine the independence of events, such as dyadic interactions.

Participant observation
  Strengths: This method is appropriate for the study of broad and complex constructs that encompass a variety of events or behaviors. It may be useful in applied settings.
  Weaknesses: It is less efficient.

Focal sampling
  Strengths: This method allows for in-depth recording of an individual participant. Continuous recording enables multiple types, sequences, and true frequencies of behaviors. May be useful in applied research contexts.
  Weaknesses: Large amounts of time are often needed.

Scan sampling
  Strengths: Instantaneous recording rules promote efficiency. It is appropriate for overt, readily observable behaviors.
  Weaknesses: It may be difficult to obtain true frequency of a behavior. It is less appropriate for subtle behaviors.

Semi-structured observations
  Strengths: Experimental control is provided.
  Weaknesses: Ecological validity may be lacking. It requires additional work to pilot test and validate the paradigm.

timed interval of 30 seconds, one type of behavior (e.g., parallel play) from a particular behavioral domain (e.g., social participation) was coded. For reliability purposes, a second observer made independent ratings for 20% of the entire sample. Intraclass correlation reliability coefficients were all acceptable for each type of behavior (0.78–0.99) with the exception of nonverbal interaction (i.e., gestures; 0.58), which are often difficult to reliably assess in live settings (see also Ostrov & Keating, 2004). Results meaningfully distinguished between the typically developing children and the clinical groups and revealed few differences between the two clinical groups, supporting the use of time sampling as a means to discriminate between clinical and nonclinical groups (Macintosh & Dissanayake, 2006). Time sampling procedures have several other applications and clinical considerations. For example, time sampling methods may differentially affect how treatment effects are interpreted (Meany-Daboul, Roscoe, Bourret, & Ahearn, 2007) and may be appropriate for classroom-based research that tests adherence to educational policies intended to aid students with special needs (Jackson & Neel, 2006; Soukup, Wehmeyer, Bashinski, & Bovaird, 2007).
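To make the 0/1 (partial-interval) scoring and interobserver agreement ideas described in this section concrete, the sketch below scores two hypothetical observers' records of behavior onsets into 15-second intervals and then computes percent agreement and Cohen's Kappa on the resulting interval codes. The event times and the helper functions partial_interval_scores and cohens_kappa are illustrative assumptions, not part of any published coding system.

import numpy as np

def partial_interval_scores(event_times, session_length, interval):
    """0/1 (partial-interval) time sampling: score each interval 1 if the
    target behavior occurred at least once during it, else 0."""
    edges = np.arange(0.0, session_length + interval, interval)
    counts, _ = np.histogram(event_times, bins=edges)
    return (counts > 0).astype(int)

def cohens_kappa(codes1, codes2):
    """Cohen's Kappa for two observers' categorical codes of the same intervals."""
    codes1, codes2 = np.asarray(codes1), np.asarray(codes2)
    categories = np.union1d(codes1, codes2)
    p_obs = np.mean(codes1 == codes2)
    p_exp = sum(np.mean(codes1 == c) * np.mean(codes2 == c) for c in categories)
    return (p_obs - p_exp) / (1.0 - p_exp)

# Hypothetical example: two observers' records of off-task behavior onsets
# (in seconds) during a 300-second observation scored in 15-second intervals.
obs1_events = [12.0, 14.5, 95.0, 180.2, 181.0, 250.0]
obs2_events = [13.1, 96.5, 179.8, 251.3]

scores1 = partial_interval_scores(obs1_events, 300, 15)
scores2 = partial_interval_scores(obs2_events, 300, 15)
print("observer 1 intervals scored present:", scores1.sum(), "of", scores1.size)
print("percent agreement:", round(100 * np.mean(scores1 == scores2), 1))
print("Cohen's Kappa:", round(cohens_kappa(scores1, scores2), 2))

Note that the onsets recorded near 180 seconds fall into different intervals for the two observers, which is one reason percent agreement and Kappa can fall short of perfect values even when both observers saw the same behavior.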

Event Sampling Event-based sampling is also known as behavior sampling and permits a researcher to study the frequency, duration, latency, and intensity of the behavior under study (Pellegrini, 2004). Essentially, unlike time sampling, event sampling is a type of observational sampling in which the events are time-independent and the behavior is the unit of analysis (Bakeman & Gottman, 1987). Event sampling allows the behavior to remain as part of the naturally occurring phenomenon and may unfold in a manner generally consistent with the timing of the behavior in the natural setting. This type of sampling also can be efficient in terms of the total amount of time needed for observations. Unlike other sampling techniques (e.g., time sampling), a third advantage is that event sampling may be used when the construct under study is either frequently or infrequently occurring (Slee, 1987). There are some clear disadvantages to event-based sampling procedures,
and this may be a reason that it is less commonly seen in the literature. First, it is sometimes challenging to delineate the independence of events—that is, the researcher must specify when one event ends and the next event begins. Second, event sampling does not lend itself well to coding of dyadic interactions such as parent–child or romantic partner relations in which there is a fair amount of interdependence between the participants (Slee, 1987). Event sampling also has wide applicability and has even been used to understand the propensity to violence at sporting events. For example, Bowker et al. (2009) used an event-sampling approach to examine spectator comments at youth hockey games in a large Canadian city. A group of five observers attended 69 hockey games played by youth in two age groups: 11–12 years and 13–14 years. Verbal comments were coded as positive, negative, corrective, or neutral and rated for intensity. Most of the comments elicited by spectators were positively toned. The valence of spectator comments was influenced by gender (i.e., the gender of the children playing) and the purpose for which the game was being played (i.e., competitive or recreational). These results support the utility of event sampling at social and athletic events, where particular behaviors are likely to occur during a finite period of time. Time sampling may not be appropriate in such circumstances because of the presence of a high concentration of individuals in a single setting and many potential interruptions arising from the nature of the activity.
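As a minimal sketch of how event-sampled records might be summarized, the lines below compute frequency by code, mean duration, and latency to the first event from a hypothetical event log; the onset and offset times, the codes, and the log structure are assumptions made only for illustration.

import numpy as np

# Hypothetical event records from an event-sampling session: each event is
# (onset_seconds, offset_seconds, code); the codes and times are illustrative.
events = [
    (5.0, 9.5, "positive"), (31.0, 33.0, "negative"), (47.5, 55.0, "positive"),
    (120.0, 124.5, "corrective"), (200.0, 201.5, "positive"),
]
session_start = 0.0

onsets = np.array([e[0] for e in events])
durations = np.array([e[1] - e[0] for e in events])
codes = [e[2] for e in events]

print("frequency by code:", {c: codes.count(c) for c in set(codes)})
print(f"mean duration: {durations.mean():.1f} s")
print(f"latency to first event: {onsets.min() - session_start:.1f} s")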

Participant Observation Although participant observation has been more frequently used with nonsystematic field observation and in disciplines that focus on qualitative methods, it is possible to conduct systematic participant observation as part of quantitative studies. Systematic participant observation has been the method of choice for behaviors of interest that require "an insider's perspective" (Pellegrini, 2004, p. 288) or for contexts in which the sampling period may be long and informal. Moreover, this method is well suited for the use of more global observational ratings that sample events. This procedure has wide applicability, and participant observation has an extensive history of successful use from studies of children with behavioral problems at summer camps in clinical psychology (e.g., Newcomb, 1931; Pelham et al., 2000) to worker stress in organizational psychology (e.g., Länsisalmi, Peiró, & Kivimäki, 2000). For example, a recent study of children
diagnosed with disruptive behavioral disorders and enrolled in a summer treatment program used staff counselors to complete daily participant observations of social behaviors of the children while they engaged in various camp activities (Lopez-Williams et al., 2005). A second study of social competence among reunited adolescents (M age = 15.5 years) who had attended a research-based summer camp when they were 10 years old revealed the predictive validity of participant observer (i.e., camp counselor) ratings of social skills (Englund, Levy, Hyson, & Sroufe, 2000). The validity of the participant observations of social competence when the participants were 10 years old was demonstrated by significant prospective correlations with a group problem-solving task that was videotaped and coded by two independent raters along several dimensions (e.g., self-confidence, agency, overall social competence) when the participants were 15 years old. The results support the use of participant observations in studying the development and stability of complex, multifaceted constructs like social competence.

Focal Sampling

Focal person sampling involves selecting one participant (typically at random from a roster of participants) and observing that individual for a defined time period. For each sampling interval (ranges vary depending on the question of interest), the observer records all relevant behaviors of the focal person. As we have previously discussed (see Pellegrini et al., in press), for studies of dyads or small groups, the sampling interval should be as long as the typical interaction or displayed behavior of interest. For example, in our work we study the display of relational aggression (i.e., the use of the relationship as the means of harm via social exclusion, withdrawing friendship, or spreading malicious rumors), and given the nature of these behaviors, we have found that an interval of 10 minutes is reasonable for assessing the intent for harm as well as the subtle nature of these peer interactions (Ostrov, 2008; Ostrov & Keating, 2004). Focal sampling may technically use continuous (e.g., Fagot & Hagan, 1985; Laursen & Hartup, 1989), 0/1 (e.g., Hall & McGregor, 2000; Harrist & Bradley, 2003), or instantaneous recording rules (see Pellegrini, 2004). However, focal sampling often uses continuous recording procedures because they permit the simultaneous coding of various behaviors, sequences of behaviors, and interactions with multiple partners in a live setting (e.g., Arsenio & Lover, 1997; Keating & Heltman, 1994).

For example, in our observational studies of relational aggression among young children, we have always used focal sampling with continuous recording given the somewhat covert nature of the behaviors we target for observation, which require a longer period of direct assessment to decipher and appropriately record (Ostrov & Keating, 2004). Focal participant sampling is often conducted across multiple days and contexts to better capture the true nature of the behavior rather than any state-dependent artifacts. Given the amount of time and the continuous nature of the recordings, this technique permits recording of behavior that is a close approximation to real-time recording, and a researcher may recreate the behavior of the focal participants with a high degree of accuracy (Pellegrini et al., in press). For example, we observe children in their naturally occurring play contexts on 8 separate days, and they are only ever observed once per day to maintain independence of the data. Thus, in our work, each participant is observed for 80 minutes (8 sessions of 10 minutes each). More specifically, a study of 120 children resulted in more than 370 hours of observation across the two time-points of the short-term longitudinal study (Ostrov, 2008). Therefore, time is a major cost of focal sampling because of the large number of independent observations typically conducted with this approach. Focal sampling may also be used with 0/1 or instantaneous sampling as recording procedures, but this is rarely done. As previously mentioned, both of these recording procedures require an a priori specified time interval, which is usually relatively brief (i.e., 1–10 seconds). Instantaneous recording is typically used only with scan sampling procedures (see Scan Sampling section below). 0/1 time sampling is not usually used with focal sampling because we are often interested in assessing the true frequency of behaviors, which may not be obtained with this procedure (i.e., an independent behavior could occur once or more than once during a set interval, but with 0/1 coding only one point is scored). Despite the emphasis on the use of these methods for studying basic social behavior, focal sampling procedures may be used in a wide range of studies. It is common in the literature to find focal participant sampling studies on a range of social behavior topics: social dominance in children (Keating & Heltman, 1994) and adults (Ostrov & Collins, 2007), play behavior (Pellegrini, 1989), emotion and aggression (Arsenio & Lover, 1997), conflict (Laursen & Hartup, 1989), and peer relations with young children and non-human primates (e.g., Hinde, Easton, & Meller, 1984; Silk, Cheney, & Seyfarth, 1996).

However, there are many practical applications of focal participant sampling (see Leff & Lakin, 2005; Pellegrini, 2001). For example, applied studies have used these observational techniques to examine the adjustment of children with special needs in elementary schools (Hall & McGregor, 2000) and peer victimization in early adolescence (Pellegrini & Bartini, 2000), and to test the efficacy of randomized behavioral interventions (e.g., Harrist & Bradley, 2003; Ostrov et al., 2009).
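As noted above, 0/1 (partial-interval) recording scores at most one occurrence per interval and can therefore understate the true frequency obtained with continuous recording. The toy Python sketch below is not drawn from the chapter; it simply illustrates that arithmetic with hypothetical interval counts.

```python
def continuous_frequency(counts_per_interval):
    """Continuous/event recording: every occurrence within each interval is counted."""
    return sum(counts_per_interval)

def zero_one_score(counts_per_interval):
    """0/1 (partial-interval) recording: an interval scores 1 if the behavior occurred at all."""
    return sum(1 for count in counts_per_interval if count > 0)

# Hypothetical occurrences of a behavior across five observation intervals.
acts = [0, 3, 1, 0, 2]
print(continuous_frequency(acts))  # 6 occurrences in total
print(zero_one_score(acts))        # 3 -> underestimates the true frequency
```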

Scan Sampling

Instantaneous or scan sampling is a more efficient observational procedure than focal sampling. Scan sampling relies exclusively on instantaneous recording rules (Pellegrini, 2001). With this procedure the observer scans the entire observation field for a possible behavior or event for a particular period of time. If an event is noted during that scan, then it is recorded. Typically, a number of discrete scans occur across a number of days to maximize the independence of the data. A participant's data are usually summed across the scans to yield a behavioral score for the construct of interest. A concern with this approach is that it may not accurately assess the true frequency of behaviors if spacing between the scans is not adequate (Pellegrini, 2004). Moreover, given the typical approach in which scans are conducted on an entire reference group in their natural context, behaviors selected for this approach must be readily apparent, discrete, and overt behaviors that typically require only a few seconds to observe. In our own field, McNeilly-Choque, Hart, Robinson, Nelson, and Olsen (1996) conducted a study of young children's aggressive behavior in which they used a random scan sampling method that yielded 100 five-second scans during a 5- to 7-week period, resulting in 8 minutes of total observation per participant. Thus, this study demonstrated the feasibility and efficiency of systematic scan sampling observations of aggressive behavior on the playground.

Semi-Structured Observations

Analog tasks or semi-structured observations, involving controlled simulations or analog situations, are observational tasks designed to mimic naturalistic conditions.

Semi-structured observational procedures are another observational paradigm well suited for low base rate events. The recording and coding procedures are often identical to the procedures an observer would use in a naturalistic setting; however, the context in which the behaviors emerge is different. Often analog tasks are completed in a laboratory or similarly controlled setting and are videotaped for subsequent coding by unaware observers. Thus, analog observational paradigms permit a great deal of experimental control/standardization of procedures, and with the use of videotapes, observers are able to objectively code the session using the same recording rules as permitted in other contexts. A clear advantage of these procedures is that they are efficient and require less cost and time spent observing participants. If the study is not designed well, then a major disadvantage is a lack of ecological validity (i.e., the degree to which the context in which the research is conducted parallels the real-life experience of the participants), and poor generalizability of the findings is possible. Moreover, a relatively small sampling of behavior does not provide a true frequency of behavior or a representative sample of behavior with many interaction partners (i.e., the researcher is not able to examine individual–partner interactions). Other researchers have addressed this concern by using a "round robin" approach in which each participant completes an analog session with several (or all) other members of the reference group, which may improve the validity of the approach but, of course, adds a great deal of time and expense (see Hawley & Little, 1999). In our own research we have used a semi-structured observational paradigm to provide an efficient estimate of young children's aggressive behavior. To this end, we created a brief (9-minute) analog situation to observe various aggressive and prosocial behaviors (i.e., within dyads or triads) in early childhood (Ostrov & Keating, 2004; Ostrov, Woods, Jansen, Casas, & Crick, 2004). The procedures and a review of the psychometric findings are described extensively elsewhere (e.g., Ostrov & Godleski, 2007), but essentially, each assessment includes three trials of 3 minutes each. For each trial, the children are given the same developmentally appropriate picture to color (e.g., Winnie the Pooh). For triads, three crayons are placed on the table equidistant from all participants; only one crayon is the functional instrument (e.g., an orange crayon for Winnie the Pooh) and two are functionally useless white crayons. At the end of the trial, a new picture and new crayons are placed on the table.


This procedure is designed to produce mild conflict among the children and was developed to permit the children to engage in a variety of behaviors: prosocial behavior (e.g., sharing the one functional crayon or breaking it into pieces to share), relational aggression (e.g., telling another child that they will not be friends anymore unless the child hands over the crayon), and physical aggression (e.g., taking the crayon away from someone else). The analog task was designed to be developmentally appropriate and to resemble the everyday conflict interactions concerning limited resources that young children experience in their typical preschool classroom. Highly trained research assistants monitored the entire session and intervened if needed to guarantee the safety of all participants and reduce the likelihood of participant distress. Moreover, at the end of the session, the children were each individually given access to a full box of crayons to diminish any distress, and they were praised for their performance (see Ostrov et al., 2004). This paradigm is thus designed to elicit the behavioral constructs of interest in a more controlled environment than free play yet ensures the ethical treatment of participants. One way to demonstrate the ecological validity of semi-structured observations is to correlate behaviors observed in a semi-structured context with behaviors observed in a more naturalistic context. For example, Coie and Kupersmidt (1983) found that social status in experimentally contrived playgroups composed of unfamiliar peers matched social status in the classroom, supporting the validity of a contrived playgroup paradigm for studying social development (see also Dodge, 1983). Similarly, our own brief semi-structured observational paradigm (i.e., the coloring task) has been shown to significantly predict observational scores collected from concurrently assessed naturalistic (i.e., classroom and playground free play) focal child observations with continuous recording (r = 0.48) and to predict future (i.e., 12 months later) behavior in naturalistic contexts at moderate levels (see Ostrov et al., 2004).

Methods of Recording

Methods of recording (i.e., checklists, detailed records, or observation forms) vary widely and should be based on the type of recording procedures that a researcher adopts. For example, time sampling (i.e., 0/1) and instantaneous or scan sampling procedures are well suited for checklist forms in which the prescribed intervals simply receive a check or a precise code indicating the occurrence or absence of the behavior in question.

However, focal participant sampling often requires observation forms that permit greater detail and several codes that are recorded either simultaneously or in close temporal proximity; as such, a form that includes the behaviors or events of interest with space for recording the behavior in detail may be needed (for example forms and templates, see Pellegrini et al., in press). A general concern here is that time spent writing details about the behavior/event draws the observer's attention away from the participants, and important details may be lost. Some observational procedures like time sampling provide the observer with a set period of time after the interval for recording behavior. In general, the easier the observation form is to complete, the less room there is for error. With that said, checklists often do not permit systematic reviews for accuracy of codes by the master trainer. For example, two observers who are observing the same participant as part of a reliability check could both code a behavior as "PA" for physical aggression when in fact one research assistant observed a "hit" and the other observed a "kick," which, depending on the observational system, may be different behaviors and might not warrant a positive match or agreement. Thus, depending on the coding scheme and the intentions of the researcher, these codes may artificially match for reliability purposes when in fact they reflect closely related but discrete behaviors. Finally, if observers record some written details about the event, these notes may inform subsequent decision rules concerning whether a recorded behavior from observer 1 matches or does not match a behavior recorded by observer 2 for reliability assessments.

Coding Considerations

The development of a reliable coding scheme is crucial for appropriately capturing the behaviors in question and testing the experimenter's a priori hypotheses (Bakeman & Gottman, 1987). There are three types of coding categories that are often included in observational systems: physical description codes, consequence codes, and relational or environmental relations codes (Pellegrini, 2004). Physical description codes are believed to be the most "objective" type because they describe "muscle contraction" (Pellegrini, 2004, p. 108) and might, for example, be involved in recording a participant's social dominance or submissiveness (e.g., direct eye contact, rigid posture, arms akimbo; see Ostrov & Collins, 2007).

The second type, consequence codes, groups a constellation of behaviors under a single code if they lead to the same outcome (Pellegrini, 2004). For example, if we were interested in studying social dominance, then we might code taking objects away from others that results in a submissive posture on the part of the nonfocal participant as an indicator of social dominance (Ostrov & Collins, 2007). The third type of codes includes categories in which participants are described in relation to the context in which they are observed (Pellegrini, 2004). An example of a relational observational category would be a coding scheme that accounted for where and with whom an individual was socially dominant. In terms of costs and benefits, it is clear that physical description codes are often easier to train and therefore potentially more reliable. It is possible that consequence codes may be unreliable given a misunderstanding of the sequence of events (Pellegrini, 2004). Relational codes involve the appropriate documentation of multiple factors and therefore create more possibilities of error (for discussion, see Pellegrini, 2004; Bakeman & Gottman, 1987). Overall, the level of analysis from micro- to macro-coding schemes is important to consider, and the most objective and reliable system for addressing a researcher's particular research question should be adopted. A second consideration is the determination of whether to use mutually exclusive and exhaustive codes. Mutually exclusive codes are used when a single behavior may be recorded under one and only one code. In our observational studies, our coding scheme includes mutually exclusive codes such that a single behavior may be coded as either physical aggression or relational aggression, but not both. Exhaustive coding schemes are designed such that for any given behavior of a theoretical construct, there is an appropriate code for that behavior. For example, in our work we have codes for physical, relational, verbal, or nonverbal aggression as well as aggression not otherwise specified. Thus, if we determine a behavior is an act of aggression, then it may be coded under one of the categories in our scheme. Often schemes include mutually exclusive and exhaustive codes because there are several benefits to this approach (see Bakeman & Gottman, 1987). Having mutually exclusive codes means that researchers are not violating assumptions of independence, which are often needed for parametric statistics. For example, if a single behavior may be coded as both physical and relational aggression, then that may violate our assumption that the data are independent and come from independent behavioral interactions (Pellegrini, 2004).

Having exhaustive codes also speaks to the content validity of a coding scheme. That is, if the observational system is to capture all facets of the construct appropriately, then any behavior reflecting the construct should have a corresponding code, and exhaustive schemes guarantee this occurrence. It is important to recall, however, that the larger the coding scheme, the more taxing the observational procedures will be for observers and the greater the possibility of observer error.
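The mutually exclusive and exhaustive scheme described above can be made concrete with a small sketch. The category labels follow the aggression scheme named in the text, but the data structure and function names are illustrative assumptions, not the authors' actual coding software.

```python
from collections import Counter
from enum import Enum

class AggressionCode(Enum):
    """Mutually exclusive, exhaustive categories: every act of aggression
    receives exactly one of these codes (illustrative labels from the text)."""
    PHYSICAL = "physical"
    RELATIONAL = "relational"
    VERBAL = "verbal"
    NONVERBAL = "nonverbal"
    NOT_OTHERWISE_SPECIFIED = "aggression NOS"

def tally(recorded_codes):
    """Tally per-construct frequencies; an unknown label raises an error,
    which is one practical payoff of an exhaustive, closed category set."""
    return Counter(AggressionCode(code) for code in recorded_codes)

# Hypothetical codes recorded during one focal session.
print(tally(["physical", "relational", "physical"]))
```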

Scoring

Scoring of observational data is similar to the scoring of any quantitative data within the social and behavioral sciences, and it often depends on the conventions within a particular field and the type of observational sampling and recording techniques that are adopted. For example, for focal participant sampling with continuous recording, frequency counts are often generated by summing each independently recorded behavior across the various sessions. In our own research, that would mean that an individual participant would get a score for each of the constructs (i.e., physical aggression, relational aggression, verbal aggression, etc.) by summing all the behaviors within a construct (e.g., all physical aggression behaviors) across all eight sessions (Ostrov & Keating, 2004). If the number of sessions differs across participants because of missing data, then it is common practice to divide by the number of sessions completed to generate an average rate of behavior per session (see Crick, Ostrov, Burr et al., 2006). Occasionally it is apparent that an error was made in the original coding of behaviors. Best practices have not been established for addressing these concerns, but as long as such errors are not systematic, the adopted solutions are often not a concern. To avoid problems with potential scoring biases, the observers and coders should always be unaware of the participant's condition and/or past history. In addition, whenever possible, observers and coders should be unaware of the study hypotheses.
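The per-session rate adjustment for missing sessions amounts to a simple average, as the hedged sketch below shows; the data values and function name are hypothetical.

```python
def rate_per_session(counts_by_session):
    """Average rate of a behavior per completed session; sessions a participant
    missed are recorded as None and simply excluded from the denominator."""
    completed = [count for count in counts_by_session if count is not None]
    return sum(completed) / len(completed)

# A participant observed for only 6 of 8 planned sessions:
# 7 recorded acts / 6 completed sessions = about 1.17 acts per session.
print(rate_per_session([2, 0, 1, None, 3, 0, None, 1]))
```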

Psychometric Properties

Reliability

Reliability is often conceptualized as consistency within or between individuals (i.e., intra-observer or inter-observer), within measures (i.e., internal consistency), or across time (i.e., test–retest).

Arguably, for observational methods, the most important measure of consistency is inter-observer reliability, or the degree to which two sets of observations from two independent observers agree (Stangor, 2011). In the present review, we will first address intra-observer reliability and then focus on the assessment of inter-observer reliability. Intra-observer, or within-observer, reliability is defined as a situation in which two sets of observations by the same research assistant agree or are consistent. Essentially, intra-observer reliability assesses how consistent a particular observer is when coding specific behaviors either between sessions (i.e., across time) or within a single session. As Pellegrini (2004) has discussed in more detail, we may conceptualize and test (e.g., with Pearson's product-moment correlation coefficient) intra-observer reliability in ways similar to test–retest reliability; thus, intra-observer reliability is essentially the temporal stability of the observational measure for a given observer between testing sessions. We might desire to know the degree to which the observational score on a given behavioral construct for the same observer is stable across time to test for observer drift (a threat to the validity of the observational data), or the likelihood that observers are deviating from initial training procedures over time and modifying the definitions of the constructs under study (Smith, 1986). Intra-observer reliability, or consistency within an observer, may also be conceptualized as the reliability of an observer's scores within a single session, and in this case the test is analogous to assessments of internal consistency (e.g., Cronbach's α). As Pellegrini (2004) has stated, we assume an observer is first reliable or consistent in his or her own scoring/recording before testing whether he or she agrees with an independent observer (i.e., inter-observer reliability). As mentioned, inter-observer reliability, or consistency between observers, is the gold standard for observational research. Essentially, inter-observer reliability involves comparing each observer's independent codes with those of other trained observers. There are several ways to assess this psychometric property (see Pellegrini, 2004), but the key task is comparing agreement across all of the observers. An important best practice for inter-observer reliability procedures is to ensure that observers are sampling/recording the same behaviors independently. Independent coding may be conducted with the use of video and private coding sessions without discussion until all codes have been completed.

Inter-observer reliability may also be assessed live in the field if the observers take precautions to avoid conveying to their partner how (and, in some cases, when) they are recording the behavior in question. A second best practice is to assess reliability across the study to help prevent various biases (e.g., observer drift) and coding/recording errors from corrupting the integrity of the data. That is, observers should be checked against a master coder at the start of the study just after training ends, and each observer should pass an a priori reliability threshold (e.g., Cohen's κ > 0.70). Next, their observations should be compared against other independent reliable observers throughout the duration of the study, and the trainer should provide constructive feedback for any deviations from the training protocol. Finally, an important consideration is the percentage of time for which inter-observer reliability will be checked. This percentage should be a function of the number of cases or possible events that will be recorded, but typically 15% to 30% of a randomly selected sample of the possible sessions is coded by more than one observer for assessing inter-observer reliability. To avoid potential biases, a best practice is for each observer to conduct reliability observations with all other observers in a round-robin format. There are several ways to statistically measure inter-observer reliability. In the past, authors relied on zero-order correlations (Pearson's r), but that problematic practice is not seen as often in the recent literature. A second statistical method that is still reported in peer-reviewed journals is percent agreement, which may be expressed as in Equation 1:

Pobs = [NA / (NA + ND)] × 100%,     (1)

where Pobs is the proportion of agreement observed, NA is the total number of agreements, and ND is the total number of disagreements. Percent agreement is not currently best practice, as it is influenced by the number of cases (i.e., it may be biased by relatively few cases) and because it is not compared against a standard threshold (Bakeman & Gottman, 1987). Finally, one of the central concerns with percent agreement (as well as Pearson's r) as a measure of inter-observer reliability is that it does not control for chance agreement (Bakeman & Gottman, 1987). Cohen's (1960) κ is a preferred statistic for inter-observer reliability because it does control for chance agreements and is a more "stringent statistic," allowing greater precision in assessing reliability at a specific moment in time or for particular events rather than overall summaries of association (Bakeman & Gottman, 1987, p. 836).

Importantly, κ may only be used when coders use a categorical scale (Bakeman & Gottman, 1987) and when a 2 × 2 matrix may be created to depict the proportion of agreements/disagreements for occurrences/nonoccurrences of behavior for any two observers (Pellegrini, 2004). When calculating the rate of agreement, it is important to indicate a priori any time parameters (i.e., within what period of time both observers must note the occurrence of a behavior, also known as the tolerance interval). Some experts caution that extremely short tolerance intervals (e.g., 1 second) may be overly stringent and artificially reduce the degree of agreement given the typical reaction times of observers (see Bakeman & Gnisci, 2006). If time sampling is being used, then observers should be signaled by an external source (e.g., an audible tone from an electronic device) to indicate when they should record the behavior (see Pellegrini, 2004). κ may be expressed as in Equation 2:

κ = (Pobs − Pexp) / (1 − Pexp),     (2)

where Pobs is the proportion of agreement observed, and Pexp is the expected proportion of agreement by chance (Bakeman & Gnisci, 2006). Equation 2 indicates that agreement anticipated as a result of chance is subtracted from both the numerator and the denominator; thus, κ provides the proportion of agreement corrected for chance agreements (Bakeman & Gnisci, 2006). The range for κ is from −1.00 to +1.00, with a value of 0 indicating that obtained agreement is equivalent to agreement anticipated by chance; greater than chance agreement yields positive values, with +1.00 equal to perfect agreement between the observers (Cohen, 1960). Interestingly, Cohen (1960) noted that negative values (less than 0) are rare and suggest agreement at less than chance levels. It is possible to test whether κ is significantly different from 0, but statistical significance is often not used as a threshold for determining an "adequate" or "good" criterion (Bakeman & Gottman, 1987). Initially, Landis and Koch (1977) provided an index of the strength of agreement, or "benchmarks," and reported the following standards: κ below 0.00 was "poor," 0.00−0.20 was "slight," 0.21−0.40 was "fair," 0.41−0.60 was "moderate," 0.61−0.80 was "substantial," and 0.81−1.00 was "almost perfect" (p. 165). However, Bakeman and Gottman (1987) reported that a significant κ of less than 0.70 may be a reason for concern.

Other scholars have noted that the conservative nature of κ permits one to use a slightly lower threshold for adequate levels of reliability than the typical convention of 0.70 and suggest that a κ coefficient of 0.60 or higher is "acceptable" and 0.80 or above is considered "good" (Pellegrini, 2001). Under circumstances when a κ coefficient may not be calculated (e.g., when noncategorical data are used or when quadrants of the aforementioned occurrence matrix are not available given the recording rules of the adopted observational procedure), scholars have suggested that an intraclass correlation coefficient (ICC) be computed between independent raters on the continuous data (Bartko, 1976; McGraw & Wong, 1996; Shrout & Fleiss, 1979). There are several possible ICC formulas, which are beyond the scope of the present review; the interested reader is referred to the prior literature on this topic (Shrout & Fleiss, 1979; McGraw & Wong, 1996). Intraclass correlation coefficients may be expressed as a function of either the reliability for a single rating (i.e., the reliability of a typical, single observer compared to another observer) or the average rating of the observations across all the raters (McGraw & Wong, 1996). The average-rating ICC uses the Spearman-Brown correction to indicate the reliability for all the observers averaged together (Bartko, 1976). The absolute value of an ICC assessing average ratings will be greater than or equal to the ICC for a single rater (Bartko, 1976). Intraclass correlation coefficients may also be calculated as an index of "consistency" or as a measure of "absolute agreement." Essentially, if systematic differences among observers are of interest, then the "absolute agreement" formula accounts for observer variability in the denominator of the ICC estimate, whereas this term is not included for ICCs that measure "consistency" (for further detail, see McGraw & Wong, 1996). Intraclass correlation coefficients range from −1.00 to +1.00, where negative values indicate a lack of reliability and +1.00 indicates perfect agreement (Bartko, 1976). An advantage of ICCs is that confidence intervals may be calculated (see McGraw & Wong, 1996). Typically, acceptable levels of reliability for ICCs are similar to other criteria in the field, and as such, levels greater than or equal to 0.70 are considered "acceptable" (e.g., Ostrov, 2008; NICHD Early Child Care Research Network, 2004).
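Equations 1 and 2 and the Landis and Koch benchmarks can be illustrated with a short sketch. This is a minimal from-scratch Python example, assuming two observers' interval-by-interval categorical codes are already aligned (e.g., within the agreed tolerance interval); it is not the authors' software.

```python
import numpy as np

def percent_agreement(codes_a, codes_b):
    """Equation 1: agreements divided by agreements plus disagreements, times 100."""
    a, b = np.asarray(codes_a), np.asarray(codes_b)
    return 100.0 * np.mean(a == b)

def cohens_kappa(codes_a, codes_b):
    """Equation 2: observed agreement corrected for chance agreement."""
    a, b = np.asarray(codes_a), np.asarray(codes_b)
    p_obs = np.mean(a == b)
    # Chance agreement from each observer's marginal category proportions.
    categories = np.union1d(a, b)
    p_exp = sum(np.mean(a == c) * np.mean(b == c) for c in categories)
    return (p_obs - p_exp) / (1.0 - p_exp)

def landis_koch_label(kappa):
    """Benchmarks quoted in the text (Landis & Koch, 1977)."""
    if kappa < 0.00:
        return "poor"
    if kappa <= 0.20:
        return "slight"
    if kappa <= 0.40:
        return "fair"
    if kappa <= 0.60:
        return "moderate"
    if kappa <= 0.80:
        return "substantial"
    return "almost perfect"

# Hypothetical codes from two observers across eight matched intervals.
obs_a = ["PA", "RA", "none", "PA", "none", "RA", "none", "PA"]
obs_b = ["PA", "RA", "none", "RA", "none", "RA", "none", "none"]
k = cohens_kappa(obs_a, obs_b)
print(percent_agreement(obs_a, obs_b), round(k, 2), landis_koch_label(k))
```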

Validity

In using observational research methods, an assessment of validity is just as important as an assessment of reliability. Different types of validity should be considered to strengthen the inferences drawn from a particular method, with construct validity being most fundamental to any empirical inquiry.

Construct validity is the degree to which a measure actually assesses the concept that a researcher intends to study (Stangor, 2011). Construct validity is often established through assessments designed to measure convergent and discriminant validity. Convergent validity rests on the assumption that if a construct is truly being measured, then alternative assessments of the same construct should be correlated with each other (Stangor, 2011). For example, an observational method intended to measure disruptive behaviors in the classroom should be correlated with teacher reports of disruptive behaviors. Alternatively, discriminant validity suggests that the construct being studied should not be correlated with other variables unrelated to the construct (Stangor, 2011). Should the expected convergent and discriminant associations not be observed, then it is unclear what an instrument or observational system is measuring. Other types of validity that are secondary yet still important to the establishment of a psychometrically sound observational system include content validity and criterion validity. Content validity refers to the extent to which a measure adequately assesses the full breadth of the construct being studied (Stangor, 2011). For example, an observational study of children's play behavior should code for different types of play, given that play is a diverse construct. To ensure that all facets of a construct are included in an observational system, correspondence with experts and focus groups/review panels may be used. Criterion validity involves an assessment of whether a study variable is associated with a theoretically relevant outcome measure. If observations are associated with an outcome that is measured at the same point in time at which the observations are conducted, then concurrent validity is demonstrated. If observations are associated with an outcome that is measured at a future point in time, then predictive validity is demonstrated. For example, concurrent validity would be supported by associations between classroom observations of disruptive behavior and teacher reports of rejection by peers, and predictive validity would be supported by associations between classroom observations of disruptive behavior and future parent reports of academic performance.
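In practice, the convergent and discriminant checks described above reduce to a pair of correlations. The sketch below is an illustrative assumption (hypothetical arrays and function name), not a procedure prescribed by the chapter.

```python
import numpy as np

def validity_correlations(observed, convergent_measure, unrelated_measure):
    """Convergent validity: observed scores should correlate with an alternative
    measure of the same construct (e.g., teacher reports of disruptive behavior).
    Discriminant validity: they should not correlate with an unrelated variable."""
    r_convergent = np.corrcoef(observed, convergent_measure)[0, 1]
    r_discriminant = np.corrcoef(observed, unrelated_measure)[0, 1]
    return r_convergent, r_discriminant

# Hypothetical data for 50 children.
rng = np.random.default_rng(0)
observed = rng.normal(size=50)                               # observational scores
teacher = observed * 0.6 + rng.normal(scale=0.8, size=50)    # related (convergent) measure
unrelated = rng.normal(size=50)                              # theoretically unrelated variable
print(validity_correlations(observed, teacher, unrelated))
```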

threats to validity: sources of bias and error

There are numerous biases to which observational methods are susceptible.

A key bias is the aforementioned observer drift, and it is paramount that investigators monitor for this threat to the validity of the data by carefully assessing observational records and calculating reliability coefficients for the duration of the study. Importantly, in addition to the aforementioned discussion about intra-observer reliability, observer drift may also be indicated if there is a drop in inter-observer reliability across the phases of training and data collection (Smith, 1986). A second strategy to mitigate observer drift is to regularly retrain observers. In instances where particular observers demonstrate problematic coding patterns, retraining should be individualized and should target the particular area of concern. In general, retraining is a practice that is beneficial for every observer because it reinforces proper coding procedures and observer behavior, thereby ensuring the integrity of the study. A second type of distortion that must be considered results from participant reactivity, which is also a threat to the validity of the observational data. Reactivity occurs when the individuals under study alter their behavior because of the presence or influence of an observer. Consequently, the behavior observed does not provide a true representation of the construct being measured. If participants avoid a particular location within a setting or modify their behavior because they know they are being recorded, this is a major concern for the validity of the data (Stangor, 2011). Depending on the nature of the study, reactivity may be more or less probable. For example, when observers need to remain within earshot of a focal participant to hear and see the behavioral interactions, it is crucial that the observers remain unobtrusive (e.g., Pellegrini, 1989). Researchers should explicitly address reactivity by training observers in the field to adopt a minimally responsive manner (Pellegrini, 2004). Essentially, observers should use neutral facial expressions and control their nonverbal behavior, posture, movement, and reactions to events during live coding. It is also possible that participants may be reactive to cameras and other recording devices, and efforts should be made to habituate participants to this equipment (see Use of Technology and Software section below) and to monitor for this occurrence. This habituation process should occur prior to the actual collection of data (Pellegrini, 2004). In our studies, we spend a minimum of several days in the observational environment (and will do so for as long as needed) simulating our observations, which provides the participants an opportunity to habituate to our presence and reduce reactivity prior to actual data collection.

Therefore, regardless of whether coding is live or videotaped, researchers should watch for participant reactivity and report the degree of reactivity in their studies (e.g., Atlas & Pepler, 1998). We define participant reactivity as any direct eye contact between the focal participant and the observer, comments from the focal participant to the observer about our presence, or comments about our presence to others in the environment (Ostrov, 2008). Our training procedures and careful monitoring have resulted in relatively low levels of reactivity in several studies (e.g., 1.5–2.5 times per focal participant during 80 min of observation; Crick, Ostrov, Burr et al., 2006). Observer expectancy effects are a third bias (Hartmann & Pelzel, 2005): observers may form expectations about the nature of the data based on their knowledge or assumptions about the study goals and hypotheses. For this reason, best practice is to use unaware observers when possible and, at a minimum, to use unaware observers for reliability purposes. A final source of bias that we will discuss is gender bias, a well-documented concern with observational methods (Ostrov, Crick, & Keating, 2005). Past research has documented that untrained observers maintain gender biases when observing, for example, physical aggression (Lyons & Serbin, 1986; see also Condry & Ross, 1985; Susser & Keating, 1990). That is, men tend to rate boys as more physically aggressive than girls, even when boys and girls are displaying comparable levels of aggression (Lyons & Serbin, 1986). Moreover, male and female college students have shown documented gender biases based on knowledge about the gender of young children in past experimental studies (Gurwitz & Dodge, 1975). Finally, in our own research, we have documented that male college students are less likely to correctly identify relational aggression or prosocial behavior than their female peers (Ostrov et al., 2005). Please note that although these examples are related to our field of study (i.e., aggression), gender biases may be present for a variety of topics of study. Importantly, it may be that when individuals are trained to recognize potential biases, they are more likely to be objective in their coding of behavior (Lyons & Serbin, 1986).

Use of Technology and Software

Excellent detailed reviews of computer-assisted recording devices and observational software programs are available (see Hoch & Symons, 2004), and thus the present goal of this section is to briefly review the current state of technology and software for assisting in systematic observations in the laboratory and field. The following will include a review of the three most common observational software programs as well as the use of handheld devices and remote audiovisual equipment. The commercially available programs vary widely in function and cost, but most permit the observer to define a coding scheme and corresponding letter or number codes that observers can quickly use when making observations live or when coding digital media in the laboratory. Overall, advances in technology have made observational methods more efficient (e.g., flexible data reduction procedures and automatic statistical analyses), more accurate (i.e., automatic rewind and playback functions reduce errors in coding), and applicable to a wider range of settings and topics of study (Bakeman & Gnisci, 2006, p. 140). The first software program and associated computer-assisted recording devices that we will discuss is the Observer® system by Noldus Inc. (Noldus, Trienes, Hendriksen, Jansen, & Jansen, 2000). The current version is Observer XT, which permits both time sampling and continuous event-based observational systems and has been used in both human and animal research (see http://www.noldus.com/the-observerxt/observer-xt-research). A notable feature is that this software permits an assessment of response latency (the time between the onset of a stimulus and the initiation of the response), which facilitates consequence coding (see Coding Considerations section above). The software also permits the linking of data from multiple modalities (e.g., observational reports, physiological responses) with a continuous time synch. The software may be used in the field with durable handheld devices or in the laboratory with live streaming video linked directly with the coding program (Noldus et al., 2000). Finally, the new version of the software permits searches of the data for particular comments, events, or behaviors, and data may be exported to various statistical software packages (Noldus et al., 2000). Jonge, Kemner, Naber, and van Engeland (2009) used an earlier version of the Observer software to code data from a study on block design reconstruction in children with autism spectrum disorders and a group of comparison participants. The use of the videotaped sessions and later coding by unaware observers meant that the coders using the software were unaware of the child's group status.

The software permitted the coders to record the amount of time the children took to reconstruct the block design pattern as well as a range of errors (Jonge et al., 2009). The program was used to calculate Cohen's κ based on two independent coders (Jonge et al., 2009), who could make independent evaluations of the behavior without biasing their coding partner. The second observational software program that we examine is the Multi-Option Observation System for Experimental Studies (MOOSES; Tapp, Wehby, & Ellis, 1995) and the associated Procoder for Digital Video (PCDV; Tapp & Walden, 1993), which permits viewing and coding of digital media (see http://mooses.vueinnovations.com/overview). The MOOSES and PCDV programs also permit event and time sampling and the coding of real-time digital media files or verbatim transcripts of observational sessions (Tapp & Walden, 1993; Tapp et al., 1995). In fact, data files may be exported to MOOSES for event coding or to another format known as the Systematic Analysis of Language Transcripts (SALT) for transcription data coding. MOOSES automatically timestamps events and may provide frequency and duration codes as well as basic reliability statistics (e.g., Cohen's κ), and MOOSES is designed for sequential analysis (Tapp et al., 1995). A handheld version of MOOSES is available. MOOSES/PCDV has been described as a lower-cost alternative to The Observer (Hoch & Symons, 2004). The third system we review is the Behavior Evaluation Strategies and Taxonomies (BEST; Sharpe & Koperwas, 2003). This computer system includes both the BEST Collection program for capturing digital media files and the BEST Analysis program for both qualitative and quantitative analysis of the observational data (Sidener, Shabani, & Carr, 2004). The BEST program may be used for examining the frequency or duration of events, and sophisticated sequential analysis may be conducted. Much like the more expensive alternatives, this program will calculate reliability statistics (e.g., Cohen's κ) and will summarize data in table or various graph formats. A review of this program suggests that BEST does not handle the collection of interval-based data well, but the BEST Analysis program will allow a researcher to analyze this type of observational data (Sidener et al., 2004). A new platform permits video display for captured data from video files, and although the program was initially written for Windows®, there are inexpensive Apple® iPhone® and iPod Touch® applications available for data collection (see http://www.skware.com).

Various types of technology (e.g., audio and video recordings) have an extensive history in the field and laboratory to assist researchers in better capturing verbal and nonverbal interactions (e.g., Abramovitch, Corter, Pepler, & Stanhope, 1986; Stauffacher & DeHart, 2005). Remote audiovisual recordings provided an opportunity to combine the benefits of both audio and video recording while also reducing reactivity to typical recording devices when participants were observed in naturally occurring settings (Asher & Gabriel, 1993; Atlas & Pepler, 1998; Pellegrini, 2004; Pepler & Craig, 1995; Pepler, Craig, & Roberts, 1998). That is, videotaping with a telephoto zoom lens from an unobtrusive location in the natural setting and recording audio via a system of wireless microphones provides an externally valid way to record behavior and a time-synched verbal record of the interaction (Pepler & Craig, 1995). Thus, remote audiovisual observational recordings provide all the benefits of having a video for subsequent coding by unaware observers (i.e., the ability to pause, rewind, and analyze subtle nonverbal behaviors) as well as a complete verbal transcript, which helps to put the video data in proper context (Asher & Gabriel, 1993; Pepler & Craig, 1995). Wireless microphones typically are housed within small vests or waist pouches that participants wear, and often only the focal participant has an active or live microphone, whereas others in the reference group have "dummy" microphones that resemble the weight and look of the real microphone. Importantly, observational codes made with the remote audiovisual equipment have demonstrated acceptable inter-observer reliability coefficients (e.g., κ = 0.76; Pepler & Craig, 1995). Moreover, this procedure as well as sufficient exposure to the equipment by the participants has been found to produce low levels of participant reactivity (e.g., 2 or 3) as anomalous clusters.

Abellan, Richardson, and Best (2008) extended the use of posterior probabilities of residual cluster detection to spatio-temporal models. The residuals they proposed to study actually represent smoothed space-by-time interaction terms incorporated into the model through additional heterogeneous random effects. The variance of this random effect distribution is determined by a two-class mixture of "stable" and "unstable" hyperpriors. Assignment to the stable class indicates less variance in this residual component (i.e., other model effects explain the majority of the variation in this class), whereas membership in the unstable class is a possible indication of an outlying cluster. Using an autologistic CAR model (see example below) that combined the main effect space–time random effect model of Knorr-Held (2000) with this new mixture distribution of space–time interaction effects, Abellan et al. (2008) found reasonable operating characteristics, under limited simulation conditions, for a decision rule that classified areas as "unstable" when the posterior probability of membership in the large-variance residual class exceeded 0.50 for at least one measured point in time. We demonstrate this procedure below but also reiterate concerns about the model-dependency of such decision criteria. Exceedence probability criteria would normally require simulating various conditions that produce data distributions (event counts) that closely correspond to the observed data.

Empirical Example: Disease-Mapping Example of Oklahoma Child Maltreatment Fatalities

We demonstrate the use of many of the disease-mapping techniques above using child maltreatment fatality data for all counties in the state of Oklahoma between the years of 1991 and 2006.

As a result of the sparseness of the events, we aggregated child fatality counts for each county across consecutive 2-year intervals. The aggregated counts are still relatively sparse, as shown in the summary statistics listed in Table 16.5. Notice that the median number of maltreatment fatalities is zero for all 2-year intervals. Further details on these child fatality data can be found elsewhere (Bard, Damashek, McDiarmid-Nelson, & Bonner, 2012). To begin analysis of the data, we examined Moran I and Geary C correlograms and associated randomization tests of significance for fatalities within each time interval. It may seem odd to test for spatial correlations among an outcome like maltreatment deaths, but as stated in the introduction, we consider the alternative hypothesis of spatial aggregation of events to be of interest for any behavioral outcome. Evidence of aggregation related to geography could signal any number of spatially related risks—for example, varying social norms, prevention efforts, environmental risks, and so forth. Figure 16.3 displays the Moran I correlogram for each time interval. The x-axis in these plots represents first- through fifth-order neighborhood relationships, and the weight matrices used for calculation were all row standardized. Only two of the first-order neighborhood coefficients (interval 99–00: I = 0.20; interval 05–06: I = 0.18) and only three of the second- through fifth-order neighborhood coefficients reached statistical significance. Near-identical patterns of significance were found using Geary's C. The autocorrelations above are not suggestive of strong spatial patterns, but these coefficients are limited by the use of raw (unsmoothed) counts that do not take county sample size into account, nor do they consider the pooling of spatial information over time. To overcome the first limitation, we also fit a fully Bayesian version of the BYM convolution model. We explored the fit of this model for each interval separately and compared these results to the correlograms above. Because of the sparse data, we fit a binomial GLMM to the count data rather than the typical Poisson GLMM. The latent event rates for each county, πi, were modeled through a logistic regression:

log[πi / (1 − πi)] = β0 + ϕi + υi,

where β0 represents the average log-odds of the county event rates, the ϕi capture space dependencies in unobserved heterogeneity and are distributed as intrinsic Gaussian autoregressive random effects, and the υi capture space-independent unobserved heterogeneity in event rates and are distributed as N(0, συ²).

Notice that when the sum ϕi + υi exceeds 0, the area-specific event rate is predicted to be higher than the spatial average, and when this sum is less than 0, the predicted rate falls below this average. Large variability in this sum would suggest that the rate across the population is not constant (i.e., unobserved heterogeneity exists). If this were true, then large variation in ϕi relative to υi would also suggest that much of the modeled heterogeneity is spatially clustered. Bayesian model comparisons (see Kaplan & Depaoli, Chapter 20, Volume 1), using measures like the deviance information criterion (DIC; Spiegelhalter, Best, Carlin, & van der Linde, 2002) and the mean absolute predictive error (MAPE; Gelfand & Ghosh, 1998), can assess the necessity of either or both heterogeneity components. Table 16.6 displays results for the usual binomial model (assuming no spatial aggregation or unobserved heterogeneity in event rates), a binomial model with a unique random effect to explain rate heterogeneity (UH component), and a binomial model with both a unique heterogeneity and a spatial ICAR random effect (S component). Similar to the Moran I correlogram findings, the DIC and MAPE statistics do not provide strong evidence for the necessity of a spatial aggregation component for any of the time intervals. There is some support for a heterogeneity component, with lower DIC and MAPE statistics in the heterogeneous binomial model for two of the time intervals (93–94 and 95–96). Aside from this 4-year span, however, the fit of the usual binomial seems preferred. Of course, modeling the time-interval data separately could mask some consistency in rates over time. Just as we can pool information across small areas (i.e., neighborhoods of counties), we might also find it advantageous to pool information over neighboring time intervals. Moreover, spatial patterns of clustering effects may be more evident (i.e., power of detection increases) in models that explore consistent spatial aggregation over time. We fit the spatio-temporal model of Abellan, Richardson, and Best (2008; ARB) to examine these possibilities and compared results of this model to various other nested models that eliminated one or more of the ARB random effects. Table 16.7 presents results from two such spatio-temporal models: one assuming independent binomial event processes year-to-year and another assuming overdispersed binomial event processes year-to-year.

Table 16.5. Summary Statistics for Child Fatalities in All Counties of Oklahoma Between 1991 and 2006

| | Min | 1st Quartile | Median | Mean | 3rd Quartile | 90th Percentile | 95th Percentile | 99th Percentile | Max |
|---|---|---|---|---|---|---|---|---|---|
| Fatalities | | | | | | | | | |
| 1991–1992 | 0 | 0 | 0 | 0.60 | 1 | 1 | 2 | 6 | 13 |
| 1993–1994 | 0 | 0 | 0 | 0.79 | 1 | 1 | 2 | 13 | 21 |
| 1995–1996 | 0 | 0 | 0 | 0.92 | 1 | 2 | 2 | 14 | 28 |
| 1997–1998 | 0 | 0 | 0 | 1.00 | 1 | 2 | 4 | 15 | 20 |
| 1999–2000 | 0 | 0 | 0 | 1.40 | 1 | 3 | 4 | 19 | 26 |
| 2001–2002 | 0 | 0 | 0 | 0.78 | 1 | 1 | 3 | 9 | 13 |
| 2003–2004 | 0 | 0 | 0 | 1.00 | 1 | 2 | 4 | 11 | 23 |
| 2005–2006 | 0 | 0 | 0 | 0.84 | 1 | 2 | 3 | 10 | 15 |
| Person-years | | | | | | | | | |
| 1991–1992 | 1800 | 5700 | 11,000 | 22,000 | 21,000 | 32,237 | 50,312 | 283,108 | 319,738 |
| 1993–1994 | 1800 | 5600 | 11,000 | 23,000 | 21,000 | 33,125 | 51,859 | 287,450 | 326,589 |
| 1995–1996 | 1700 | 5500 | 11,000 | 23,000 | 21,000 | 33,714 | 52,483 | 288,411 | 328,501 |
| 1997–1998 | 1700 | 5600 | 11,000 | 23,000 | 21,000 | 34,348 | 53,062 | 292,474 | 329,928 |
| 1999–2000 | 1700 | 5400 | 11,000 | 23,000 | 22,000 | 34,784 | 52,577 | 300,247 | 334,186 |
| 2001–2002 | 1500 | 5300 | 11,000 | 23,000 | 21,000 | 34,382 | 51,672 | 308,298 | 341,683 |
| 2003–2004 | 1400 | 5100 | 11,000 | 23,000 | 21,000 | 33,985 | 51,835 | 308,575 | 347,555 |
| 2005–2006 | 1300 | 5000 | 11,000 | 23,000 | 21,000 | 33,583 | 52,958 | 313,363 | 356,954 |


Figure 16.3 Moran I correlograms for each 2-year interval. Dots indicate Moran I point estimates. Boundaries represent 95% confidence intervals from randomization tests. Lags represent first through fifth order neighborhoods.
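For readers who wish to reproduce a point in a correlogram like Figure 16.3, Moran's I and its randomization test can be computed from scratch. The following is a minimal Python/numpy sketch under assumed inputs (a vector of county values and a binary county adjacency matrix); it is not the code used by the chapter authors.

```python
import numpy as np

def row_standardize(adjacency):
    """Convert a binary adjacency (neighbor) matrix to a row-standardized weight matrix."""
    row_sums = adjacency.sum(axis=1, keepdims=True)
    row_sums[row_sums == 0] = 1.0  # isolated areas simply keep a row of zero weights
    return adjacency / row_sums

def morans_i(y, W):
    """Moran's I for area values y and spatial weight matrix W."""
    z = y - y.mean()
    n = len(y)
    return (n / W.sum()) * (W * np.outer(z, z)).sum() / (z ** 2).sum()

def randomization_test(y, W, n_perm=999, seed=1):
    """Pseudo p-value from randomly permuting the values across areas."""
    rng = np.random.default_rng(seed)
    observed = morans_i(y, W)
    permuted = np.array([morans_i(rng.permutation(y), W) for _ in range(n_perm)])
    p_value = (np.sum(np.abs(permuted) >= abs(observed)) + 1) / (n_perm + 1)
    return observed, p_value

# Hypothetical usage with a tiny fake map of five areas arranged in a line.
adjacency = np.array([[0, 1, 0, 0, 0],
                      [1, 0, 1, 0, 0],
                      [0, 1, 0, 1, 0],
                      [0, 0, 1, 0, 1],
                      [0, 0, 0, 1, 0]], dtype=float)
counts = np.array([0.0, 1.0, 0.0, 2.0, 3.0])
print(randomization_test(counts, row_standardize(adjacency)))
```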

Table 16.6. Fit Comparison of Independent Binomial, Heterogeneous Binomial, and a BYM Convolution Model

| Interval | Model | Intercept, posterior mean [95% CI] | SD(UH) | SD(S) | DIC | MAPE |
|---|---|---|---|---|---|---|
| 1991–1992 | Independent binomial | −10.53 [−10.83, −10.25] | | | 127.75 | 0.61 |
| | Binomial + υi | −10.55 [−10.89, −10.26] | 0.12 [0.01, 0.60] | | 127.74 | 0.60 |
| | Binomial + υi + ϕi | −10.59 [−11.01, −10.26] | 0.34 [0.02, 1.67] | 0.15 [0.02, 0.69] | 130.40 | 0.59 |
| 1993–1994 | Independent binomial | −10.27 [−10.52, −10.03] | | | 139.16 | 0.77 |
| | Binomial + υi | −10.53 [−11.13, −10.12] | 0.49 [0.02, 1.13] | | 132.87 | 0.65 |
| | Binomial + υi + ϕi | −10.51 [−11.11, −10.10] | 0.09 [0.01, 0.43] | 0.45 [0.02, 1.09] | 136.22 | 0.66 |
| 1995–1996 | Independent binomial | −10.12 [−10.35, −9.89] | | | 142.23 | 0.86 |
| | Binomial + υi | −10.31 [−10.76, −9.97] | 0.34 [0.02, 0.82] | | 135.83 | 0.72 |
| | Binomial + υi + ϕi | *No Convergence | | | | |
| 1997–1998 | Independent binomial | −10.03 [−10.26, −9.81] | | | 142.96 | 0.81 |
| | Binomial + υi | −10.06 [−10.34, −9.83] | 0.10 [0.01, 0.47] | | 143.17 | 0.80 |
| | Binomial + υi + ϕi | −10.06 [−10.34, −9.84] | 0.05 [0.01, 0.16] | 0.10 [0.02, 0.38] | 143.12 | 0.80 |
| 1999–2000 | Independent binomial | −9.73 [−9.93, −9.55] | | | 167.11 | 0.99 |
| | Binomial + υi | −9.74 [−9.96, −9.54] | 0.07 [0.01, 0.34] | | 167.04 | 0.97 |
| | Binomial + υi + ϕi | −9.75 [−9.96, −9.55] | 0.07 [0.01, 0.29] | 0.09 [0.02, 0.31] | 170.26 | 0.98 |
| 2001–2002 | Independent binomial | −10.30 [−10.55, −10.05] | | | 140.05 | 0.66 |
| | Binomial + υi | −10.30 [−10.58, −10.04] | 0.06 [0.01, 0.25] | | 140.33 | 0.67 |
| | Binomial + υi + ϕi | −10.33 [−10.64, −10.05] | 0.31 [0.01, 1.71] | 0.06 [0.01, 0.24] | 143.71 | 0.66 |
| 2003–2004 | Independent binomial | −10.00 [−10.23, −9.79] | | | 151.37 | 0.87 |
| | Binomial + υi | −10.04 [−10.33, −9.80] | 0.15 [0.02, 0.55] | | 150.43 | 0.84 |
| | Binomial + υi + ϕi | −10.04 [−10.31, −9.80] | 0.07 [0.01, 0.31] | 0.17 [0.02, 0.62] | 150.4 | 0.84 |
| 2005–2006 | Independent binomial | −10.22 [−10.47, −9.99] | | | 145.68 | 0.71 |
| | Binomial + υi | −10.22 [−10.47, −9.99] | 0.08 [0.01, 0.36] | | 145.93 | 0.71 |
| | Binomial + υi + ϕi | −10.28 [−10.66, −10.01] | 0.29 [0.02, 1.27] | 0.08 [0.02, 0.36] | 146.91 | 0.71 |

Note. SD() = standard deviation; υi = Unique Heterogeneity (UH) component; ϕi = Spatial (S) ICAR component; DIC = deviance information criterion; MAPE = mean absolute predictive error
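The "Binomial + υi" (unique-heterogeneity) model compared in Table 16.6 can be sketched in a modern probabilistic programming language. The chapter's Bayesian models were fit with MCMC (WinBUGS is cited later for the spatio-temporal models); the PyMC version below is our own hedged translation for a single 2-year interval, with simulated placeholder data, assumed variable names (`deaths`, `person_years`), and without the spatial ICAR term ϕi, which would require an additional CAR/ICAR prior.

```python
import numpy as np
import pymc as pm

# Hypothetical data for 77 counties in one 2-year interval (not the Oklahoma data).
rng = np.random.default_rng(0)
person_years = rng.integers(1500, 40000, size=77)   # child person-years at risk per county
deaths = rng.binomial(person_years, 3.0e-5)          # sparse fatality counts at an arbitrary base rate

with pm.Model() as uh_binomial:
    beta0 = pm.Normal("beta0", mu=0.0, sigma=10.0)            # average log-odds of the event rate
    sd_uh = pm.HalfNormal("sd_uh", sigma=1.0)                  # SD of the unique-heterogeneity component
    u = pm.Normal("u", mu=0.0, sigma=sd_uh, shape=len(deaths)) # county-specific deviations (the υi)
    rate = pm.math.invlogit(beta0 + u)                         # county-specific event rate on the logit scale
    pm.Binomial("y", n=person_years, p=rate, observed=deaths)
    idata = pm.sample(draws=2000, tune=2000, chains=2, random_seed=1)
```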

The independent binomial assumes that the event processes remain relatively unchanged over time. The overdispersed binomial assumes that unique unobserved heterogeneity exists across space and that the distribution of this spatial heterogeneity experiences random mean shifts from year to year. In other words, this model assumes that the relative distribution of event processes remains unchanged from year to year but allows the population average to shift randomly over time.

The ARB model, on the other hand, assumes that the spatial aggregation of events remains fairly constant over time but that the overall rate of the state (and, therefore, of each region) closely resembles the rates occurring in the 2-year intervals immediately before and after a chosen time interval (in a first-order, random-walk-in-time fashion). The model specification of the area-by-time log-odds of the rates in the ARB model can be written as:

log[πit / (1 − πit)] = β0 + ϕi + υi + γt + δt + ωit,

Table 16.7. Results Comparison for Spatio-Temporal Models

Parameter estimate   Independent binomial        Binomial + UH + UT          Binomial + UH + S + UT + T + ST
                     Posterior mean [95% CI]     Posterior mean [95% CI]     Posterior mean [95% CI]
Intercept            −10.12 [−10.20, −10.04]     −10.24 [−10.45, −10.04]     −10.26 [−10.44, −10.10]
SD(UH)                                           0.29 [0.15, 0.46]           0.10 [0.01, 0.56]
SD(S)                                                                        0.25 [0.03, 0.45]
SD(UT)                                           0.19 [0.05, 0.38]           0.09 [0.02, 0.28]
SD(T)                                                                        0.11 [0.02, 0.33]
SD(ST1)                                                                      0.08 [0.01, 0.22]
SD(ST2)                                                                      1.41 [0.01, 12.41]
P(ST2)                                                                       0.36 [0.00, 0.98]
Deviance             1171.10                     1125.05                     1129.12
MAPE                 0.82                        0.73                        0.72

Note. SD() = standard deviation; UH = Unique Heterogeneity component; S = Spatial ICAR component; UT = Unique Time random effect; T = first-order random walk in time component; ST1 = class-1 space-time interaction mixture component standard deviation; ST2 = class-2 space-time interaction mixture component standard deviation; P(ST2) = proportional assignment of class-2 areas; DIC = deviance information criterion; MAPE = mean absolute predictive error.

where β0, ϕi, and υi represent the same effects as in the spatial models of Table 16.6, γt and δt represent a first-order random walk component and a time-varying heterogeneity component, respectively, and ωit represents a space-time interaction component that is distributed as a mixture of two Normal density random effects, ωit ∼ p N(0, τ1²) + (1 − p) N(0, τ2²). As mentioned earlier, the interaction random effect is modeled as a mixture distribution to account for the possibility of outlying clustered rates. The model forces τ2² > τ1² and uses the estimated class probabilities as a marker for potential outlying clusters (e.g., ARB Rule 1, p. 1112, labels an outlier as any area with class-2 probability > 0.50). The three spatio-temporal models described above were run using WinBUGS (Spiegelhalter, Thomas, & Best, 1999) MCMC estimation with two chains of 50,000 iterations each, a burn-in of 40,000 iterations, and a thinned solution capturing every 10th iteration. Table 16.7 provides summary information for each model. The DIC favored the heterogeneous binomial over all other models run (including those not tabled). The MAPE only slightly favored the ARB relative to this heterogeneous binomial. The mixture distribution of the ARB seemed to be problematic, as evidenced by the wide 95% credible interval of the P(ST2) parameter

(probability of membership in the “unstable” class), which spanned nearly the entire probabilistic range. Even without this mixture distribution of interaction terms, however, neither the use of a single interaction random effect nor the spatial aggregation random effect seemed necessary. The unobserved heterogeneity binomial was explored further for possible hotspot clusters. Since the area random effects of this model were stable over time (i.e., area-by-time interaction terms were not present in this model), we chose to use the hotspot detection criterion of Richardson et al. (2004), which was designed for single time-point data. This criterion identified eight hotspots among the 77 counties, and the median posterior odds of a maltreatment death in one of these hotspot counties ranged from 1.2 to 1.6 times higher than the unit-specific average odds (i.e., odds for the typical county) in any given 2-year interval. A map of Bayes estimates of odds ratios (relative to the overall average estimate) for the 1999–2000 interval is presented in Figure 16.4, alongside a map that displays posterior exceedance probabilities for each county (probabilities > 0.80 met the Richardson et al. hotspot criterion). We present these results as an illustration of this hotspot detection technique, but as stated earlier, selection criteria


[Figure 16.4 appears here: two county-level maps of Oklahoma with 0–100 mi scale bars, one panel labeled 1999–2000 and the other 1991–2006. Labeled counties include Cherokee, Adair, Muskogee, Oklahoma, McIntosh, Comanche, Carter, and Marshall, each annotated with an event count (n).]

Figure 16.4 Left Panel: Odds ratios of the 1999–2000 county event rate odds to the average, across all years (1991–2006), unit-specific event odds estimate. Right Panel: Exceedance probabilities used in the outlier detection procedure. Dark black counties in this panel met the Richardson et al. (2004) hotspot criterion.

could always benefit from data-specific simulation checks of sensitivity and specificity.
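A minimal sketch of the exceedance-probability rule just described, assuming MCMC draws of county-specific odds ratios are available as a (draws × counties) array (all names here are hypothetical, not the authors' code):

```python
import numpy as np

def exceedance_probabilities(theta_draws, threshold=1.0):
    """theta_draws: MCMC draws of county-specific odds ratios, shape (n_draws, n_counties).
    Returns the posterior probability that each county's odds ratio exceeds the threshold."""
    return (np.asarray(theta_draws) > threshold).mean(axis=0)

def flag_hotspots(theta_draws, threshold=1.0, cutoff=0.80):
    """Flag counties whose exceedance probability is above the 0.80 cutoff
    described in the text for the Richardson et al. (2004) criterion."""
    probs = exceedance_probabilities(theta_draws, threshold)
    return np.flatnonzero(probs > cutoff), probs

# Hypothetical use with draws for the 77 counties:
# hotspot_index, probs = flag_hotspots(odds_ratio_draws)
```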

Summary of Disease Mapping
We covered only discrete area disease mapping in the sections above. Methods for handling point-level data (Banerjee et al., 2004), where samples are taken from specific easting and northing spatial locations, are also available. Typically, these models estimate the spatial covariance matrix as a function of distances between pairs of points. Point-process models also exist and are often used to assess clustering of known event cases (sampling occurs from event registries and spatial location is treated as a random variable). Lawson (2009), for example, has discussed point-process models for case–control studies where events of a control disease are used to compare spatial aggregation or differential rate relationships with distance to a putative environmental contamination source. Covariate effects can be easily incorporated into Bayesian disease-mapping models. For example, for each model presented above, we added a time-varying linear effect to estimate the association between county-specific poverty (percent at or below the poverty level) and the occurrence of child maltreatment fatalities. This effect was significant and positively directed in most models and slightly adjusted the estimated rates of spatial aggregation and unique heterogeneity. The usual caveats surrounding ecological validity of aggregated covariate measures apply to these types of discrete-area models. This can be largely overcome, however, in the point-process models where covariates and events are observed at the level of the individual. Other ecological-level limitations include the modifiable area unit problem (arbitrariness of discrete area boundaries) and edge effects (lack of neighborhood information from areas on the spatial boundary) (see Lawson [2009] for further discussion of each). The mapping demonstrated above is usually reserved for descriptive modeling of noninfectious diseases that are genetically inherited or environmentally caused. The use of space and time, however, allows these models to be flexibly extended to capture infectious disease dynamics, blending the modeling methods above and the usual SEIR methods presented below.

Infectious Disease Modeling Applied to Behavioral Outcomes
A large literature accounts for the biological spread of viruses and bacteria through contagious/infectious processes. Perhaps the best known

modeling system is the one developed by Anderson and May (1991), often referred to as the May-Anderson equations. These equations account for the dynamic change over time in the prevalence of a disease or illness like AIDS, malaria, or the common cold within a population. This type of spread is often conceptualized through the SEIR model, an acronym for a system that includes susceptible individuals, those who are exposed and infected but not yet infectious (pre-infectious), those who are infectious, and those who are recovered (or immune) from being infected. Social and behavioral scientists have used this classic infectious disease model to explain and predict the spread of ideas or behaviors through a social network. This type of application illustrates the potential for applying an epidemiological perspective to behavioral, rather than biological, processes. A thriving literature that crosses several disciplinary boundaries (including computer science, information systems, mathematics, sociology, demography, and psychology) has developed the concepts of thought contagion (e.g., Lynch, 1998; Watts, 2003). The concept that ideas spread through a network has developed its own disciplinary domain, referred to as “Innovation Diffusion” (see Mahajan & Peterson, 1985). Another well-known form of innovation diffusion is the idea of a “meme,” as developed by Dawkins (1976). A meme is a unit of conceptual information that is passed through a social network, just as genes are a unit of biological information passed through generations. Dawkins suggested that ideas spread, just as biological agents like viruses spread. Examples of memes are jokes, as they pass through a social network; marketing promotions that are passed through marketing channels; and political campaign material, a specialized form of marketing. A related and even more relevant idea that emerges directly from the field of psychology is the concept of social or behavioral contagion. This perspective has at its starting point the simple psychological assertion that there are social influences that can pass from one individual to another or through a whole social network. In this section, we present an application of social contagion modeling. Two behaviors that appear to be especially amenable to social influence are smoking and drinking. Rowe, Chassin, Presson, Edwards, and Sherman (1992) found that their best-fitting model for the onset of smoking in adolescence suggested that the onset of smoking is primarily a social process, driven by social influence from friends (whereas the transition from experimental smoker to


regular smoker is a biological process, driven by level of nicotine addiction). Rodgers and Johnson (2007) found that individual-level explanations of the first drinking and (especially) the first smoking experience referred explicitly to a social contagious process, and model-fitting cross-validated the finding. These results have emerged from a perspective that Rodgers and Rowe (1993) labeled EMOSA modeling; EMOSA is an acronym referring to Epidemic Modeling of the Onset of Social Activities. A number of EMOSA models have been developed and applied to various adolescent behaviors and outcomes, including onset of intimate and sexual behavior, pregnancy and sexually transmitted disease, smoking and drinking, and religious involvement. We use onset of smoking as a prototypical EMOSA process to explain how the EMOSA modeling approach works. Smoking is a behavior that is presented anew to each adolescent cohort. Early adopters of smoking provide the potential to “transmit” smoking behavior through the adolescent network (defined as either a school or neighborhood network). Some adolescents are “immune” from smoking as a result of religious conviction, family influence, or personality characteristics. The EMOSA approach posits that individuals who share a social network are paired during a given period of time. The smoking status of the two individuals is critical to the transmission of smoking through the network. If neither has ever smoked, through some non-epidemic transmission process, they may (or may not) try smoking for the first time; the probability of this occurring is estimated with a non-epidemic transmission parameter. If both have smoked, then there is no potential for increasing the “ever-smoked” prevalence that emerges from that pair. If one has smoked and the other has not, with some transmission probability, then the smoker will “infect” the nonsmoker by socially influencing the nonsmoker to try a cigarette for the first time. This EMOSA social contagion process is captured in a set of equations that account for the incidence and prevalence of smoking at each age. The equations include variables that are measured (e.g., longitudinally gathered prevalence and/or incidence data), and parameters that are estimated to best predict the empirical outcomes (e.g., the non-epidemic and epidemic transmission parameters). An early model of cigarette-smoking contagion (Rowe & Rodgers, 1991), for example, used age-specific estimates of prevalence among a longitudinal cohort to estimate transmission parameters in the following model:

Pt+1 = T(1 − Pt )Pt + Pt ,

where Pt represents the prevalence of ever having smoked at age t, and T represents the average number of effective contacts (a contact that leads to a new infection) per year between “ever-smoker” (infectious) and “never-smoker” (susceptible) individuals. The term (1 − Pt)Pt reflects the assumption that contacts between these groups of individuals occur randomly over time, and the product of this term with T provides an estimate of the proportion of new smokers at age t + 1 (i.e., yearly incidence). Generally, using prevalence for estimating disease contagion is not advised; however, when disease duration is long (e.g., once an “ever-smoker,” always an “ever-smoker”), rates of change are stable, and the in- and out-migration of a population are roughly equivalent, then differences in prevalence over time can be used to approximate incidence change. Other variables that have been shown to inform the contagion system of equations include measures of maturational processes and family dynamics (e.g., whether parents are smokers or not). Other parameters that have been estimated include ones that account for whether the social influence is a direct, 1-to-1 influence or whether it emerges from the general social environment including media and role models (e.g., Rodgers, 2007). The EMOSA equations, like the May-Anderson equations referenced above, define a nonlinear dynamic system (NDS) of equations that can be solved to estimate parameters in the models and inform our understanding of how both biological and social epidemics are spread through a social network. Nonlinear dynamic system models are more realistic than most linear models (e.g., regression and analysis of variance) in that they match processes that are believed to actually occur in the dynamic (i.e., changing over time) environment to which they are applied. Explicit specification of more advanced EMOSA (or May-Anderson) equations is beyond the scope of this chapter. The interested reader can consult Rodgers, Rowe, and Buster (1998) for the most sophisticated EMOSA model of sexual development (one that accounts for onset of sexual behavior, with the potential for both pregnancy and STD) and Rodgers (2007) for a complex application of EMOSA to smoking and drinking onset.
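The prevalence recursion above is simple enough to illustrate directly. The following sketch (hypothetical data and names, not the published model-fitting procedure) iterates Pt+1 = T(1 − Pt)Pt + Pt and recovers a least-squares estimate of T from a series of age-specific prevalences.

```python
import numpy as np

def emosa_prevalence(p0, T, n_years):
    """Iterate P_{t+1} = T * (1 - P_t) * P_t + P_t, the one-parameter
    'ever smoked' prevalence model given in the text."""
    p = [p0]
    for _ in range(n_years):
        p_t = p[-1]
        p.append(T * (1 - p_t) * p_t + p_t)
    return np.array(p)

def fit_T(observed_prevalence):
    """Least-squares estimate of the transmission parameter T from a series
    of age-specific prevalences (a sketch; the published models are richer)."""
    p = np.asarray(observed_prevalence)
    incidence = p[1:] - p[:-1]            # yearly change in prevalence
    contact = (1 - p[:-1]) * p[:-1]       # susceptible-by-infectious mixing term
    return float(np.sum(contact * incidence) / np.sum(contact ** 2))

# Hypothetical prevalence series for six consecutive ages:
observed = [0.05, 0.09, 0.15, 0.23, 0.31, 0.38]
T_hat = fit_T(observed)
predicted = emosa_prevalence(observed[0], T_hat, n_years=5)
```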

Conclusion
Our primary intention when developing this chapter was to provide a brief introduction to the


essential concepts and methods of the epidemiologist for applied psychological researchers who may be only vaguely familiar with developments in biostatistics and epidemiology. As a secondary goal, we also aimed to demonstrate the utility of advanced applications of epidemiologic modeling when addressing some types of behavioral science outcomes and research questions. We would like to stress that the information found here merely reflects the tip of the iceberg, both in terms of epidemiology’s breadth of study and, perhaps most importantly, in terms of epidemiologic modeling potential for behavioral-based studies. We encourage the reader to explore, for example, some of the earlier cited applications of infectious disease epidemiology models for the spread of behaviors like smoking and risky sexuality (Rodgers, 2007; Rodgers & Johnson, 2007; Rodgers & Rowe, 1993; Rodgers et al., 1998; Rowe et al., 1992). Several other intersections of the two fields of psychology and epidemiology exist and are also worth pursuing and include behavior genetics (e.g., Rijsdijk & Sham, 2003; Blokland, Mosing, Verweij, & Medland, Chapter 11, Volume 2), health psychology (e.g., Suls, Davidson, & Kaplan, 2010), medical decision-making (e.g., Sox, Blatt, Higgins, & Marton, 2007), health economics (e.g., Chisholm & McCrone, 2003), psychopharmacology (e.g., Pies & Rogers, 2005), and the influence of mental health on physical health (e.g., Felitti et al., 1998; Repetti, Taylor, & Seeman, 2002). It is hoped that the blending of these disciplines will also continue to be a two-way street. Psychometrics, for example, has contributed significantly to the QOL measurement issues that commonly appear in the epidemiology of palliative care and rehabilitation sciences (e.g., Reeve et al., 2007). Just as the psychologist might greatly benefit from the models and theories within epidemiology, we find equivalent growth potential for use of psychological theories and methods in the biomedical sciences. It is our hope that these types of cross-disciplinary germination attempts continue to flourish to the benefit and advancement of both epidemiology and psychology.

Author Note David E. Bard, Department of Pediatrics, University of Oklahoma Health Sciences Center; Joseph L. Rodgers, Department of Psychology, University of Oklahoma; Keith E. Muller, Department of Health Outcomes and Policy, University of Florida.

The authors wish to thank their colleague, Will Beasley, Ph.D., for sharing mapping files and code that were adapted for creation of Figure 16.4. Correspondence concerning this chapter should be addressed to David E. Bard, Department of Pediatrics, OUHSC Child Study Center, 1100 NE 13th Street, Oklahoma City, Oklahoma 73117.

References Abellan, J., Richardson, S., & Best, N. (2008). Use of spacetime models to investigate the stability of patterns of disease. [Article]. Environmental Health Perspectives, 116 (8), 1111– 19. doi: DOI 10.1289/ehp.10814 Agresti, A. (2002). Categorical data analysis (2nd ed.). Hoboken, NJ: Wiley. Agresti, A., & Finlay, B. (2009). Statistical methods for the social sciences (4th ed.). Upper Saddle River, NJ: Prentice Hall. Ahrens, W., Krickeberg, K., & Pigeot, I. (2005). An introduction to epidemiology. In W. Ahrens & I. Pigeot (Eds.), Handbook of epidemiology (pp. 1–40). Berlin: Springer. Alonzo, T., & Pepe, M. (2002). Distribution-free ROC analysis using binary regression techniques. Biostatistics, 3, 421–432. Anderson, R. M., & May, R. M. (1991). Infectious Diseases of Humans: Dynamics and Control. London: Oxford University Press. Angell, M. (June 23, 2011). The Epidemic of Mental Illness: Why? The New York Review of Books, 58. Retrieved May 17, 2012, from http://www.nybooks.com/articles/archives/2011/ jun/23/epidemic-mental-illness-why/ Assuncao, R., Potter, J., & Cavenaghi, S. (2002). A Bayesian space varying parameter model applied to estimating fertility schedules. [Article]. Statistics in Medicine, 21(14), 2057–2075. doi: DOI 10.1002/sim.1153 Assuncao, R., & Reis, E. (1999). A new proposal to adjust Moran’s I for population density. Statistics in Medicine, 18(16), 2147–2162. Bandos, A., Rockette, H., & Gur, D. (2005). A permutation test sensitive to differences in areas for comparing ROC curves from a paired design. Statistics in Medicine, 24, 2873–2893. doi: DOI 10.1002/sim.2149 Banerjee, S., Carlin, B. P., & Gelfand, A. E. (2004). Hierarchical modeling and analysis for spatial data. Boca Raton, FL: Chapman and Hall/CRC. Bard, D. E., Balachova, T., Bonner, B., & Chaffin, M. (2011). Screening for risk of alcohol exposed pregnancy among pregnant and nonpregnant Russian women: A concurrent and predictive validity investigation. Manuscript submitted for publication. Bard, D. E., Damashek, A., McDiarmid-Nelson, M. M., & Bonner, B. L. (2012). Spatial analysis of fatal child maltreatment patterns in Oklahoma: 1991–2006. Manuscript submitted for publication. Benichou, J., & Palta, M. (2005). Rates, risks, measures of association and impact. In W. Ahrens & I. Pigeot (Eds.), Handbook of epidemiology (pp. 89–156). Berlin: Springer. Benish, W. (2003). Mutual information as an index of diagnostic test performance. Methods of Information in Medicine, 42, 260–264. Berger, R. L., & Boos, D. D. (1994). P-values maximized over a confidence set for the nuisance parameter. Journal of the American Statistical Association, 89, 1012–1016. doi: 10.2307/2290928


Besag, J. (1974). Spatial interaction and the statistical analysis of lattice systems. Journal of the Royal Statistical Society. Series B (Methodological), 36(2), 192–236. Besag, J., York, J., & Mollie, A. (1991). Bayesian imagerestoration, with 2 applications in spatial statistics. Annals of the Institute of Statistical Mathematics, 43(1), 1–20. doi: 10.1007/bf00116466 Biggerstaff, B. (2000). Comparing diagnostic tests: A simple graphic using likelihood ratios. Statistics in Medicine, 19, 649–663. Blokland, G. A. M., Mosing, M. A., Verweij, K. J. H., & Medland, S. E. (2012). Twin studies and behavior genetics. In T. D. Little (Ed.), The Oxford Handbook of Quantitative Methods (Vol. 2, pp. 198–218). New York: Oxford University press. Bromet, E. J., & Susser, E. (2006). The burden of mental illness. In E. Susser, S. Schwartz, A. Morabia & E. J. Bromet (Eds.), Psychiatric epidemiology: Searching for the causes of mental disorders (pp. 5–14). New York: Oxford University Press. Carroll, R. J., Wang, S., & Wang, C. Y. (1995). Prospective analysis of logistic case-control studies. Journal of the American Statistical Association, 90, 157–169. Chisholm, D., & McCrone, P. (2003). Health economics for psychiatric epidemiology. In M. Prince, R. Stewart, T. Ford, & M. Hotopf (Eds.), Practical Psychiatric Epidemiology (pp. 357–376). New York: Oxford University Press. Clayton, D. G., & Kaldor, J. (1987). Empirical Bayes estimates of age-standardized relative risks for use in disease mapping. Biometrics, 43, 671–681. Cohen, J. (1960). A coefficient of agreement for nominal scales. Educational and Psychological Measurement, 20, 37–46. Coxe, S., West, S. G., & Aiken, L. S. (2012). Generalized linear models. In T. D. Little (Ed.), The Oxford Handbook of Quantitative Methods (Vol. 2, pp. 26–51). New York: Oxford University Press. Dawkins, R. L. (1976). The selfish gene. London: Oxford University Press. DeLong, E. R., DeLong, D. M., & Clarke-Pearson, D. I. (1988). Comparing the areas under 2 or more correlated receiver operating characteristic curves: A nonparametric approach. Biometrics, 44, 837–845. Durkheim, E. (1897). Le suicide. New York: Free Press. Farewell, V. T. (1979). Some results on the estimation of logistic models based on retrospective data. Biometrika, 66, 27–32. Felitti, V., Anda, R., Nordenberg, D., Williamson, D., Spitz, A., Edwards, V.,. . . Marks, J. (1998). Relationship of childhood abuse and household dysfunction to many of the leading causes of death in adults—The adverse childhood experiences (ACE) study. American Journal of Preventive Medicine, 14(4), 245–258. Geary, R. C. (1954). The contiguity ratio and statistical mapping. The Incorporated Statistician, 5(3), 115–146. Gelfand, A.E., &Ghosh, S.K.(1998). Modelchoice: Aminimum posterior predictive loss approach. Biometrika, 85(1), 1–11. Gordis, L. (2008). Epidemiology (4th ed.). Philadelphia, PA: Saunders. Guyatt, G. H., Rennie, D., Meade, M. O., & Cook, D. J. (Eds.). (2008). Users’ guides to the medical literature: A manual for the evidence-based clinical practice. New York: The American Medical Association and McGraw-Hill. Hanley, J. A., & McNeil, B. J. (1983). A method of comparing the areas under receiver operating characteristic curves derived from the same cases. Radiology, 148, 839–843. Hemenway, D. (2004). Private guns, public health. Ann Arbor, MI: University of Michigan Press.


Hill, P. W., & Goldstein, H. (1998). Multilevel modeling of educational data with cross-classification and missing identification for units. Journal of Educational and Behavioral Statistics, 23(2), 117–128. James, W., & Stein, C. (1961). Estimation with quadratic loss. Paper presented at the Proceedings of the 4th Berkeley Symposium on Mathematical Statistics and Probability, Vol. I, Berkeley, CA. Kaplan, D. & Depaoli, S. (2012). Bayesian statistical methods. In T. D. Little (Ed.), The Oxford Handbook of Quantitative Methods (Vol. 1, pp. 406–436). NewYork: Oxford University Press. Knorr-Held, L. (2000). Bayesian modelling of inseparable space-time variation in disease risk. Statistics in Medicine 19(17–18), 2555–2567. doi: 10.1002/10970258(20000915/30)19:17/183.0.co;2-# Langford, I. H., Leyland, A. H., , Rasbash, J., & Goldstein, H. (1999). Multilevel modelling of the geographical distributions of diseases. Journal of the Royal Statistical Society. Series C (Applied Statistics), 48(2), 253–268. Last, J. M. (Ed.) (2000). A Dictionary of Epidemiology. Oxford University Press. Lawson, A. (2009). Bayesian Disease Mapping: Hierarchical Modeling in Spatial Epidemiology. Boca Raton, FL: Chapman and Hall/CRC. Lawson, A. (2010). Hotspot detection and clustering: ways and means. Environmental and Ecological Statistics, 17(2), 231– 245. doi: DOI 10.1007/s10651-010-0142-z Lee, E. T., & Wang, J. (2003). Statistical methods for survival data analysis. Hoboken, NJ: Wiley-Interscience. Lynch, A. (1998). Thought contagion. New York: Basic Books. MacKinnon, D. P., Krull, J. L., & Lockwood, C. M. (2000). Equivalence of the mediation, confounding and suppression effect. Prevention Science, 1, 173–181. Mahajan, V., & Peterson, R. A. (1985). Models for innovation diffusion. Beverly Hills, CA: Sage. Mantel, N., & Haenszel, W. (1959). Statistical aspects of the analysis of data from retrospective studies of disease. Journal of the National Cancer Institute, 22, 719–748. Marshall, R. J. (1991). Mapping disease and mortality rates using empirical bayes estimators. Journal of the Royal Statistical Society. Series C (Applied Statistics), 40(2), 283–294. McClish, D. K. (1989). Analyzing a portion of the ROC curve. Medical Decision Making, 9(3), 190–195. McNemar, Q. (1947). Note on the sampling error of the difference between correlated proportions or percentages. Psychometrika, 12, 153–157. Moran, P. A. P. (1950). Notes on continuous stochasic phenomena. Biometrika, 37, 17–23. Murray, C. J. L., & Lopez, A. D. (1996). The Global Burden of Disease. Cambridge, MA: Harvard University Press. Peterson, T. (2012). Analyzing event history data. In T. D. Little (Ed.), The Oxford Handbook of Quantitative Methods (Vol. 2, pp. 458–485). New York: Oxford University press. Pies, R. W., & Rogers, D. P. (2005). Handbook of essential psychopharmacology (2nd ed.). Arlington, VA: American Psychiatric Publishing. Prince, M., Stewart, R., Ford, T., & Hotopf, M. (2003). Psychiatric epidemiology: Looking to the future. In M. Prince, R. Stewart, T. Ford, & M. Hotopf (Eds.), Practical psychiatric epidemiology (pp. 384–401). New York: Oxford University Press.


Rasmussen, S. (2004). Modelling of discrete spatial variation in epidemiology with SAS using GLIMMIX. Computer Methods and Programs in Biomedicine, 76, 83–89. Reeve, B., Hays, R., Bjorner, J., Cook, K., Crane, P., Teresi, J.,. . . Grp, P. C. (2007). Psychometric evaluation and calibration of health-related quality of life item banks—Plans for the patient-reported outcomes measurement information system (PROMIS). Medical Care, 45, S22–S31. Repetti, R., Taylor, S., & Seeman, T. (2002). Risky families: Family social environments and the mental and physical health of offspring. Psychological Bulletin, 128, 330–366. doi: DOI 10.1037//0033-2909.128.2.330 Richardson, S., Thomson, A., Best, N., & Elliott, P. (2004). Interpreting posterior relative risk estimates in diseasemapping studies. Environmental Health Perspectives, 112, 1016–1025. doi: DOI 10.1289/ehp.6740 Rijsdijk, F., & Sham, P. (2003). Genetic epidemiology 1: Behavioural genetics. In M. Prince, R. Stewart, T. Ford, & M. Hotopf (Eds.), Practical Psychiatric Epidemiology (pp. 315–333). New York: Oxford University Press. Rodgers, J. L. (2007). The shape of things to come: Using developmental curves from adolescent smoking and drinking reports to diagnosis the type of social process that generated the curves. In T. D. Little, J. A. Bovaird & N. A. Card (Eds.), Modeling contextual effects in longitudinal studies (pp. 343–62). Mahweh, NJ: Lawrence Erlbaum Associates. Rodgers, J. L., & Johnson, A. (2007). Nonlinear dynamic models of nonlinear dynamic behaviors: Social contagion of adolescent smoking and drinking at aggregate and individual levels. In S. M. Boker & M. J. Wenger (Eds.), Data analytic techniques for dynamical systems (pp. 213–242). Mahwah, NJ: Lawrence Erlbaum Associates. Rodgers, J. L., & Rowe, D. C. (1993). Social contagion and adolescent sexual behavior: A developmental EMOSA model. Psychological Review, 100, 479–510. Rodgers, J. L., Rowe, D. C., & Buster, M. (1998). Social contagion, adolescent sexual behavior, and pregnancy: A nonlinear dynamic EMOSA model. Developmental Psychology, 34, 1096–1113. Rothman, K. J., & Greenland, S. (2005). Basic concepts. In W. Ahrens & I. Pigeot (Eds.), Handbook of epidemiology (pp. 43–88). Berlin: Springer. Rowe, D., Chassin, L., Presson, C., Edwards, D., & Sherman, S. (1992). An epidemic model of adolescent cigarette smoking. Journal of Applied Social Psychology, 22(4), 261–285. Rowe, D. C., & Rodgers, J. L. (1991). Adolescent smoking and drinking: Are they epidemics. Journal of Studies on Alcohol, 52, 110–117. Satten, G. A., & Kupper, L. L. (1993). Inferences about exposure-disease association using probability-of-exposure information. Journal of the American Statistical Association, 88, 200–208. Serfling, R. J. (1980). Approximation theorems of mathematical statistics. New York: Wiley. Simpson, E. H. (1951). The interpretation of interaction in contingency tables. Journal of the Royal Statistical Society, Series B, 13, 238–241. Skrondal, A., & Rabe-Hesketh, S. (2004). Generalized latent variable modeling: Multilevel, longitudinal, and structural equation models. New York: Chapman and Hall/CRC. Sox, H., Blatt, M. A., Higgins, M. C., & Marton, K. I. (2007). Medical Decision Making. Philadelphia, PA: The American College of Physicians.

Spiegelhalter, D. J., Best, N., Carlin, B., & van der Linde, A. (2002). Bayesian measures of model complexity and fit. Journal of the Royal Statistical Society Series B-Statistical Methodology, 64, 583–616. Spiegelhalter, D. J., Thomas, A., & Best, N. G. (1999). WinBUGS Version 1.2 User Manual: MRC Biostatistics Unit. Spybrook, J., Bloom, H., Congdon, R., Hill, C., Martinez, A., & Raudenbush, S. (2011). Optimal design plus empirical evidence: Documentation for the Optimal Design software. Available from www.wtgrantfoundation.org or from pikachu.harvard.edu/od/. Retrieved May 17, 2012, from http://pikachu.harvard.edu/od/od-manual-20111016v300.pdf Stokes, M. E., Davis, C. S., & Koch, G. G. (2000). Categorical data analysis using the SAS® system (2nd ed.). Cary, NC: SAS Institute. Suls, J. M., Davidson, K. W., & Kaplan, R. M. (Eds.). (2010). Handbook of Health Psychology and Behavioral Medicine. New York: The Guilford Press. Thompson, M. L., & Zucchini, W. (1989). On the statistical analysis of ROC curves. Statistics in Medicine, 8, 1277–1290. Tu, Y. K., Gunnell, D., & Gilthorpe, M. S. (2008). Simpson’s Paradox, Lord’s Paradox, and Suppression Effects are the same phenomenon—the reversal paradox. Emerging Themes in Epidemiology, 5, 1–9. doi: 10.1186/1742-7622-5-2 Turkheimer, E. (2000). Three laws of Behavior Genetics and what they mean. Current Directions in Psychological Science, 9, 160–164. Tversky, A., & Gilovich, T. (1989). The cold facts about the “hot hand” in basketball. Chance: New Directions for Statistics and Computing, 2, 16–21. Venkatraman, E. S. (2000). A permutation test to compare receiver operating characteristic curves. Biometrics, 56, 1134– 1138. Vynnycky, E., & White, R. (2010). An introduction to infectious disease modelling. New York: Oxford University Press. Waller, L. A., & Gotway, C. A. (2004). Applied Spatial Statistics for Public Health Data. New York: Wiley. Wardrop, R. L. (1995). Simpson’s Paradox and the hot hand in basketball. American Statistician, 49, 24–28. Wardrop, R. L. (1999). Statistical tests for the hothand in basketball in a controlled setting. Unpublished manuscript. Retrieved May 17, 2012, from http://www.stat.wisc.edu/∼wardrop/papers/tr1007.pdf Watts, D. (2003). Six Degrees: The science of a connected age. New York: W. W. Norton & Company. Wild, P. (2005). Design and planning of epidemiological studies. In W. Ahrens & I. Pigeot (Eds.), Handbook of epidemiology (pp. 465–501). Berlin: Springer. Woods, C. M. (2012). Categorical methods. In T. D. Little (Ed.), The Oxford Handbook of Quantitative Methods (Vol. 2, pp. 52–73). New York: Oxford University Press. World Health Organization. (2008). The global burden of disease: 2004 update. Switzerland: World Health Organization. World Health Organization. (2009). Global health risks: Mortality and burden of disease attributable to slected major risks. France: World Health Organization. Yashin, A. I., Iachine, I. A., Begun, A. Z., & Vaupel, J. W. (2001). Hidden frailty: myths and reality (pp. 1–48). Odense: University of Southern Denmark, Department of Statistics. Yule, G. U. (1903). Notes on the theory of association of attributes in statistics. Biometrika, 2, 121–134.


CHAPTER 17

Program Evaluation: Principles, Procedures, and Practices

Aurelio José Figueredo, Sally Gayle Olderbak, Gabriel Lee Schlomer, Rafael Antonio Garcia, and Pedro Sofio Abril Wolf

Abstract This chapter provides a review of the current state of the principles, procedures, and practices within program evaluation. We address a few incisive and difficult questions about the current state of the field: (1) What are the kinds of program evaluations? (2) Why do program evaluation results often have so little impact on social policy? (3) Does program evaluation suffer from a counterproductive system of incentives? and (4) What do program evaluators actually do? We compare and contrast the merits and limitations, strengths and weaknesses, and relative progress of the two primary contemporary movements within program evaluation, Quantitative Methods and Qualitative Methods, and we propose an epistemological framework for integrating the two movements as complementary forms of investigation, each contributing to different stages in the scientific process. In the final section, we provide recommendations for systemic institutional reforms addressing identified structural problems within the real-world practice of program evaluation. Key Words: Program evaluation, formative evaluation, summative evaluation, social policy, moral hazards, perverse incentives, quantitative methods, qualitative methods, context of discovery, context of justification

Introduction President Barack Obama’s 2010 Budget included many statements calling for the evaluation of more U. S. Federal Government programs (Office of Management and Budget, 2009). But what precisely is meant by the term evaluation? Who should conduct these evaluations? Who should pay for these evaluations? How should these evaluations be conducted? This chapter provides a review of the principles, procedures, and practices within program evaluation. We start by posing and addressing a few incisive and difficult questions about the current state of that field: 1. What are the different kinds of program evaluations?


2. Why do program evaluation results often have so little impact on social policy? 3. Does program evaluation suffer from a counterproductive system of incentives? We then ask a fourth question regarding the real-world practice of program evaluation: What do program evaluators actually do? In the two sections that follow, we try to answer this question by reviewing the merits and limitations, strengths and weaknesses, and relative progress of the two primary contemporary “movements” within program evaluation and the primary methods of evaluation upon which they rely: Part 1 addresses Quantitative Methods and Part 2 addresses Qualitative Methods. Finally, we propose a framework for the integration of the

two movements as complementary forms of investigation in program evaluation, each contributing to different stages in the scientific process. In the final section, we provide recommendations for systemic institutional reforms addressing identified structural problems within the real-world practice of program evaluation.

What Are the Different Kinds of Program Evaluations? Scriven (1967) introduced the important distinction between summative program evaluations as compared with formative program evaluations. The goal of a summative evaluation is to judge the merits of a fixed, unchanging program as a finished product, relative to potential alternative programs. This judgment should consist of an analysis of the costs and benefits of the program, as compared with other programs targeted at similar objectives, to justify the expenses and opportunity costs society incurs in implementing one particular program as opposed to an alternative program, as well as in contrast to doing nothing at all. Further, a summative evaluation must examine both the intended and the unintended outcomes of the programmatic intervention and not just the specific stated goals, as represented by the originators, administrators, implementers, or advocates of the program (Scriven, 1991). A formative evaluation, on the other hand, is an ongoing evaluation of a program that is not fixed but is still in the process of change. The goal of a formative evaluation is to provide feedback to the program managers with the purpose of improving the program regarding what is and what is not working well and not to make a final judgment on the relative merits of the program. The purely dichotomous and mutually exclusive model defining the differences between summative and formative evaluations has been softened and qualified somewhat over the years. Tharp and Gallimore (1979, 1982), in their research and development (R&D) program for social action, proposed a model of evaluation succession, patterned on the analogy of ecological succession, wherein an ongoing, long-term evaluation begins as a formative program evaluation and acquires features of a summative program evaluation as the program naturally matures, aided by the continuous feedback from the formative program evaluation process. Similarly, Patton (1996) has proposed a putatively broader view of program evaluation that falls between the summative versus formative

dichotomy: (1) knowledge-generating evaluation, evaluations that are designed to increase our conceptual understanding of a particular topic; (2) developmental evaluation, an ongoing evaluation that strives to continuously improve the program; and (3) using the evaluation processes, which involves more intently engaging the stakeholders, and others associated with the evaluation, to think more about the program and ways to improve its efficacy or effectiveness. Patton has argued that the distinction between summative and formative evaluation is decreasing, and there is a movement within the field of program evaluation that applies a more creative use and application of evaluation. What he termed knowledge-generative evaluation is a form of evaluation focused not on the instrumental use of evaluation findings (e.g., making decisions based on the results of the evaluation) but, rather, on the conceptual use of evaluation findings (e.g., theory construction). A developmental evaluation (Patton, 1994) is a form of program evaluation that is ongoing and is focused on the development of the program. Evaluators provide constant feedback but not always in the forms of official reports. Developmental evaluation assumes components of the program under evaluation are constantly changing, and so the evaluation is not geared toward eventually requiring a summative program evaluation but, rather, is focused on constantly adapting and evolving the evaluation to fit the evolving program. Patton (1996) proposed that program evaluators should focus not only on reaching the evaluation outcomes, but also on the process of the evaluation itself, in that the evaluation itself can be “participatory and empowering . . . increasing the effectiveness of the program through the evaluation process rather than just the findings” (p. 137). Stufflebeam (2001) has presented a larger classification of the different kinds of evaluation, consisting of 22 alternative approaches to evaluation that can be classified into four categories. Stufflebeam’s first category is called Pseudoevaluations and encompasses evaluation approaches that are often motivated by politics, which may lead to misleading or invalid results. Pseudoevaluation approaches include: (1) Public Relations-Inspired Studies and (2) Politically Controlled Studies (for a description of each of the 22 evaluation approaches, please refer to Stufflebeam’s [2001] original paper). Stufflebeam’s second category is called Questions-AndMethods-Evaluation Approaches (Quasi-Evaluation Studies) and encompasses evaluation approaches


geared to address a particular question, or apply a particular method, which often result in narrowing the scope of the evaluation. This category includes: (3) Objectives-Based Studies; (4) Accountability, Particularly Payment by Results Section; (5) Objective Testing Program; (6) Outcome Evaluation as Value-Added Assessment; (7) Performance Testing; (8) Experimental Studies; (9) Management Information Systems; (10) Benefit–Cost Analysis Approach; (11) Clarification Hearing; (12) Case Study Evaluations; (13) Criticism and Commentary; (14) Program Theory-Based Evaluation; and (15) Mixed-Methods Studies. Stufflebeam's (2001) third category, Improvement/Accountability-Oriented Evaluation Approaches, is the most similar to the commonly used definition of program evaluation and encompasses approaches that are extensive and expansive in their approach and selection of outcome variables, which use a multitude of qualitative and quantitative methodologies for assessment. These approaches include: (16) Decision/Accountability-Oriented Studies; (17) Consumer-Oriented Studies; and (18) Accreditation/Certification Approach. Stufflebeam's fourth category is called Social Agenda/Advocacy Approaches and encompasses evaluation approaches that are geared toward directly benefitting the community in which they are implemented, sometimes so much so that the evaluation may be biased, and are heavily influenced by the perspective of the stakeholders. These approaches include: (19) Client-Centered Studies (or Responsive Evaluation); (20) Constructivist Evaluation; (21) Deliberative Democratic Evaluation; and (22) Utilization-Focused Evaluation. These different types of program evaluations are not exhaustive of all the types that exist, but they are the ones that we consider most relevant to the current analysis and ultimate recommendations.

Why Do Program Evaluation Results Often Have So Little Impact on Social Policy?
At the time of writing, the answer to this question is not completely knowable. Until we have more research on this point, we can never completely document the impact that program evaluation has on public policy. Many other commentators on program evaluation (e.g., Weiss, 1999), however, have made the point that program evaluation does not have as much of an impact on social policy as we would like it to have. To illustrate this point, we will use two representative case studies: the Kamehameha Early Education Project (KEEP), and the Drug Abuse Resistance Education (DARE). Although the success or failure of a program and the success or failure of a program evaluation are two different things, one is intimately related to the other, because the success or failure of the program evaluation is necessarily considered in reference to the success or failure of the program under evaluation.

The Frustrated Goals of Program Evaluation When it comes to public policy, the goal of an evaluation should include helping funding agencies, such as governmental entities, decide whether to terminate, cut back, continue, scale up, or disseminate a program depending on success or failure of the program, which would be the main goal of a summative program evaluation. An alternative goal might be to suggest modifications to existing programs in response to data gathered and analyzed during an evaluation, which would be the main goal of a formative program evaluation. Although both goals are the primary purposes of program evaluation, in reality policymakers rarely utilize the evaluation findings for these goals and rarely make decisions based on the results of evaluations. Even an evaluation that was successful in its process can be blatantly ignored and result in a failure in its outcome. We relate this undesirable state of affairs further below with the concept of a market failure from economic theory. According to Weiss (1999) there are four major reasons that program evaluations may not have a direct impact on decisions by policymakers (the “Four I’s”). First, when making decisions, a host of competing interests present themselves. Because of this competition, the results of different evaluations can be used to the benefit or detriment of the causes of various interested parties. Stakeholders with conflicting interests can put the evaluator between a rock and a hard place. An example of this is when a policymaker receives negative feedback regarding a program. On the one hand, the policymaker is interested in supporting successful programs, but on the other hand, a policymaker who needs to get re-elected might not want to be perceived as “the guy who voted no on drug prevention.” Second, the ideologies of different stakeholder groups can also be a barrier for the utilization of program evaluation results. These ideologies filter potential solutions and restrict results to which policymakers will listen. This occurs most often when the ideology claims that something is “fundamentally wrong.”

For example, an abstinence-only program, designed to prevent teenage pregnancy, may be in competition with a program that works better, but because the program passes out condoms to teenagers, the abstinence-only plan may be funded because of the ideologies of the policymakers or their constituents. Third, the information contained in the evaluation report itself can be a barrier. The results of evaluations are not the only source of information and are often not the most salient. Policymakers often have extensive information regarding a potential policy, and the results of the evaluation are competing with every other source of information that can enter the decision-making process. Finally, the institutional characteristics of the program itself can become a barrier. The institution is made up of people working within the context of a set structure and a history of behavior. Because of these institutional characteristics, change may be difficult or even considered “off-limits.” For example, if an evaluation results in advocating the elimination of a particular position, then the results may be overlooked because the individual currently in that position is 6 months from retirement. Please note that we are not making a value judgment regarding the relative merits of such a decision but merely describing the possible situation. The utilization of the results of an evaluation is the primary objective of an evaluation; however, it is often the case that evaluation results are put aside in favor of other, less optimal actions (Weiss, 1999). This is not a problem novel to program evaluators but a problem that burdens most applied social science. A prime example of this problem is that of the reliability of eyewitness testimony. Since Elizabeth Loftus published her 1979 book, Eyewitness Testimony, there has been extensive work done on the reliability of eyewitnesses and the development of false memories. Nevertheless, it took 20 years for the U. S. Department of Justice to institute national standards reflecting the implications of these findings (Wells et al., 2000). Loftus did accomplish what Weiss refers to as “enlightenment” (Weiss, 1980), or the bringing of scientific data into the applied realm of policymaking. Although ideally programs would implement evaluation findings immediately, this simply does not often happen. As stated by Weiss (1999), the volume of information that organizations or policymakers have regarding a particular program is usually too vast to be overthrown by one dissenting evaluation. These problems appear to be inherent in social sciences and program evaluation, and it is unclear how to ameliorate them.

To illustrate how programs and program evaluations can succeed or fail, we use two representative case studies: one notable success of the program evaluation process, the KEEP, and one notable failure of the program evaluation process, DARE.

Kamehameha Early Education Project
A classic example of a successful program evaluation described by Tharp and Gallimore (1979) was that of KEEP. The Kamehameha Early Education Project was started in 1970 to improve the reading and general education of Hawaiian children. The project worked closely with program evaluators to identify solutions for many of the unique problems faced by young Hawaiian-American children in their education, from kindergarten through third grade, and to discover methods for disseminating these solutions to the other schools in Hawaii. The evaluation took 7 years before significant improvement was seen and involved a multidisciplinary approach, including theoretical perspectives from the fields of psychology, anthropology, education, and linguistics. Based on their evaluation of KEEP, Tharp and Gallimore (1979) identified four necessary conditions for a successful program evaluation: (1) longevity—evaluations need time to take place, which requires stability in other areas of the program; (2) stability in the values and goals of the program; (3) stability of funding; and (4) the opportunity for the evaluators' recommendations to influence the procedure of the program. In terms of the “Four I's,” the interests of KEEP were clear and stable. The project was interested in improving general education processes. In terms of ideology and information, KEEP members believed that the evaluation process was vital to its success and trusted the objectivity of the evaluators, taking their suggestions to heart. From its inception, the institution had an evaluation system built in. Since continuing evaluations were in process, the program itself had no history of institutional restriction of evaluations.

Drug Abuse Resistance Education
In this notable case, we are not so much highlighting the failure of a specific program evaluation, or of a specific program per se, as highlighting the institutional failure of program evaluation as a system, at least as currently structured in our society. In the case of DARE, a series of program evaluations produced results that, in the end, were not


acted upon. Rather, what should have been recognized as a failed program lives on to this day. The DARE program was started in 1983, and the goal of the program was to prevent drug use. Although there are different DARE curricula, depending on the targeted age group, the essence of the program is that uniformed police officers deliver a curriculum in safe classroom environments aimed at preventing drug use among the students. As of 2004, DARE has been the most successful school-based prevention program in attracting federal money: The estimated average federal expenditure is three-quarters of a billion dollars per year (West & O'Neal, 2004). Although DARE is successful at infiltrating school districts and attracting tax dollars, research spanning more than two decades has shown that the program is ineffective at best and detrimental at worst. One of the more recent meta-analyses (West & O'Neal, 2004) estimated the average effect size for DARE's effectiveness was extremely low and not even statistically significant (r = 0.01; Cohen's d = 0.02, 95% confidence interval = –0.04, 0.08). Early studies pointed to the ineffectiveness of the DARE program (Ennett, Tobler, Ringwalt, & Flewelling, 1994; Clayton, Cattarello, & Johnstone, 1996; Dukes, Ullman, & Stein, 1996). In response to much of this research, the Surgeon General placed the DARE program in the “Does Not Work” category of programs in 2001. In 2003, the U. S. Government Accountability Office (GAO) wrote a letter to congressmen citing a series of empirical studies in the 1990s showing that in some cases DARE is actually iatrogenic, meaning that DARE does more harm than good. Despite all the evidence, DARE is still heavily funded by tax dollars through the following government agencies: California National Guard, Combined Federal Campaign (CFC), Florida National Guard, St. Petersburg College, Multijurisdictional Counterdrug Task Force Training, Indiana National Guard, Midwest Counterdrug Training Center/National Guard, U.S. Department of Defense, U.S. Department of Justice, Bureau of Justice Assistance (BJA), Drug Enforcement Administration, Office of Juvenile Justice and Delinquency Prevention, and the U.S. Department of State. These are institutional conflicts of interest. As described above, few politicians want to be perceived as “the guy who voted against drug prevention.” The failure of DARE stems primarily from these conflicts of interest. In lieu of any better options, the U.S. Federal Government continues to support DARE,


simply because to not do so might appear as if they were doing nothing. At the present writing in 2012, DARE has been in effect for 29 years. Attempting to change the infrastructure of a longstanding program like this would be met with a great deal of resistance. We chose the DARE example specifically because it is a long-running example, as it takes years to make the determination that somewhere something in the system of program evaluation failed. If this chapter were being written in the early 1990s, people in the field of program evaluation might reasonably be predicting that based on the data available, this program should either be substantially modified or discontinued. Rather, close to two decades later and after being blacklisted by the government, it is still a very well-funded program. One may argue that the program evaluators themselves did their job; however, what is the use of program evaluation if policymakers are not following recommendations based on data produced by evaluations? Both the scientific evidence and the anecdotal evidence seem to suggest that programs with evaluations built-in seem to result in better utilization of evaluation results and suggestions. This may partly result from better communication between the evaluator and the stakeholders, but if the evaluator is on a first-name basis (or maybe goes golfing) with the stakeholders, then what happens to his/her ability to remain objective? We will address these important issues in the sections that immediately follow by exploring the extant system of incentives shaping the practice of program evaluation.

What System of Incentives Governs the Practice of Program Evaluation? Who Are Program Evaluators? On October 19, 2010, we conducted a survey of the brief descriptions of qualifications and experience of evaluators posted by program evaluators (344 postings in total) under the “Search Resumes” link on the American Evaluation Association (AEA) website (http://www.eval.org/find_an_evaluator/ evaluator_search.asp). Program evaluators’ skills were evenly split in their levels of quantitative (none: 2.0%; entry: 16.9%; intermediate: 37.5%; advanced: 34.9%; expert: 8.4%; strong: 0.3%) and qualitative evaluation experience (none: 1.5%; entry: 17.2%; intermediate: 41.6%; advanced: 27.3%; expert: 12.2%; strong: 0.3%). Program evaluators also expressed a range of years they were involved with evaluation (10 years: 23.5%). In general, program evaluators were highly educated, with the highest degree attained being either a masters (58.8%) or a doctorate of some sort (36%), and fewer program evaluators had only an associates (0.3%) or bachelors degree (5.0%). The degree specializations were also widely distributed. Only 12.8% of the program evaluators with posted resumes described their education as including some sort of formal training specifically in evaluation. The most frequently mentioned degree specialization was in some field related to Psychology (25.9%), including social psychology and social work. The next most common specialization was in Education (15.4%), followed by Policy (14.0%), Non-Psychology Social Sciences (12.2%), Public Health or Medicine (11.6%), Business (11.3%), Mathematics or Statistics (5.8%), Communication (2.9%), Science (2.3%), Law or Criminal Justice (2.0%), Management Information Systems and other areas related to Technology (1.5%), Agriculture (1.5%), and Other, such as Music (1.7%).

For Whom Do Program Evaluators Work?

We sampled job advertisements for program evaluators from several Internet sources: usajobs.gov, jobbing.com, and the human resources pages of government agencies such as the National Institutes of Health (NIH), the National Institute of Mental Health (NIMH), the Centers for Disease Control (CDC), and the GAO. Based on this sampling, we determined that there are four general types of program evaluation jobs. Many agencies that deliver or implement social programs organize their own program evaluations, and these account for the first, second, and third types of program evaluation jobs available. The first type of program evaluation job is obtained in response to a call or request for proposals for a given evaluation. The second type of program evaluation job is obtained when the evaluand (the program under evaluation) is asked to hire an internal program evaluator to conduct a summative evaluation. The third general type of program evaluation job is obtained when a program evaluator is hired to conduct a formative evaluation; this category could include an employee of the evaluand who serves multiple roles in the organization, such as secretary and data collector. We refer to the fourth type of program evaluation job as the Professional Government Watchdog. That type of evaluator works for an agency like the GAO.

The GAO is an independent agency that answers directly to Congress. The GAO has 3300 workers (http://www.gao.gov/about/workforce/) working in roughly 13 groups: (1) Acquisition and Sourcing Management; (2) Applied Research and Methods; (3) Defense Capabilities and Management; (4) Education, Workforce, and Income Security; (5) Financial Management and Assurance; (6) Financial Markets and Community Investment; (7) Health Care; (8) Homeland Security and Justice; (9) Information Technology; (10) International Affairs and Trade; (11) Natural Resources and Environment; (12) Physical Infrastructure; and (13) Strategic Issues. Each of these groups is tasked with the oversight of a series of smaller agencies that deal with that group's content. For example, the Natural Resources and Environment group oversees the Department of Agriculture, Department of Energy, Department of the Interior, Environmental Protection Agency, Nuclear Regulatory Commission, Army Corps of Engineers, National Science Foundation, National Marine Fisheries Service, and the Patent and Trademark Office. With the many billions of dollars being spent by the U.S. government on social programs, we sincerely doubt that 3300 workers can possibly process all the program evaluations performed for the entire federal government. Recall that the estimated average federal expenditure for DARE alone is three-quarters of a billion dollars per year and that this program has been supported continuously for 17 years. We believe that such colossal annual expenditures should include enough to pay for a few more of these "watchdogs" or at least justify the additional expense of doing so.

Who Pays the Piper?

The hiring of an internal program evaluator for the purpose of a summative evaluation is a recipe for an ineffective evaluation. There is a danger that the program evaluator can become what Scriven (1976, 1983) has called a program advocate. According to Scriven, these program evaluators are not necessarily malicious but, rather, could be biased as a result of the nature of the relationship between the program evaluator, the program funder, and the program management. The internal evaluator is generally employed by, and answers to, the management of the program and not directly to the program funder. In addition, because the program evaluator's job relies on the perceived "success" of the evaluation, there is an incentive to bias the results in favor of the program being evaluated. Scriven has
argued that this structure may create divided loyalties between the program being evaluated and the agency funding the program (Shadish, Cook, & Leviton, 1991). Scriven (1976, 1983) has also argued that summative evaluations are necessary for a society to optimize resource allocation, but has recommended that program evaluators be periodically reassigned to different program locations to prevent individual evaluators from being co-opted into local structures. The risks of such co-optation are explained in the next section.

Moral Hazards and Perverse Incentives

As a social institution, the field of program evaluation has professed very high ethical standards. For example, in 1994 The Joint Committee on Standards for Educational Evaluation produced the Second Edition of an entire 222-page volume on professional standards in program evaluation. Not all were what we would typically call ethical standards per se, but one of the four major categories of professional evaluation standards was called Propriety Standards and addressed what most people would refer to as ethical concerns. The other three categories were denoted Utility Standards, Feasibility Standards, and Accuracy Standards. Although it might be argued that a conscientious program evaluator is ethically obligated to carefully consider the utility, feasibility, and accuracy of the evaluation, it is easy to imagine how an occasional failure in any of these other areas might stem from factors other than an ethical lapse. So why do we need any protracted consideration of moral hazards and perverse incentives in a discussion of program evaluation? We should make clear at the outset that we do not believe that most program evaluators are immoral or unethical. It is important to note that in most accepted uses of the term, the expression moral hazard makes no assumptions, positive or negative, about the relative moral character of the parties involved, although in some cases the term has unfortunately been used in that pejorative manner. The term moral hazard only refers (or should only refer) to the structure of perverse incentives that constitute the particular hazard in question (Dembe & Boden, 2000). We wish to explicitly avoid the implication that there are immoral or unethical individuals or agencies out there that intentionally corrupt the system for their own selfish benefit. Unethical actors hardly need moral hazards to corrupt them: They are presumably already immoral and can therefore be readily
corrupted, presumably with little provocation. It is the normally moral or ethical people about whom we need to worry under the current system of incentives, because this system may actually penalize them for daring to do the right thing for society. Moral hazards and perverse incentives refer to conditions under which the incentive structures in place tend to promote socially undesirable or harmful behavior (e.g., Pauly, 1974). Economic theory refers to the socially undesirable or harmful consequences of such behavior as market failures, which occur when there is an inefficient allocation of goods and services in a market. Arguably, continued public or private funding of an ineffective or harmful social program therefore constitutes a market failure, where the social program is conceptualized as the product that is being purchased. In economics, one of the well-documented causes of market failures is incomplete or incorrect information on which the participants in the market base their decisions. That is how these concepts may relate to the field of program evaluation. One potential source of incomplete or incorrect information is referred to in economic theory as information asymmetry, which occurs in economic transactions where one party has access to either more or better information than the other party. Information asymmetry may thus lead to moral hazard, where one party to the transaction is insulated from the adverse consequences of a decision but has access to more information than another party (specifically, the party that is not insulated from the adverse consequences of the decision in question). Thus, moral hazards are produced when the party with more information has an incentive to act contrary to the interests of the party with less information. Moral hazard arises because one party does not risk the full consequences of its own decisions and presumably acquires the tendency to act less cautiously than it otherwise would, leaving another party to suffer the consequences of those possibly ill-advised decisions. Furthermore, a principal-agent problem might also exist where one party, called an agent, acts on behalf of another party, called the principal. Because the principal usually cannot completely monitor the agent, the situation often develops where the agent has access to more information than the principal does. Thus, if the interests of the agent and the principal are not perfectly consistent and mutually aligned with each other, the agent may have an incentive to behave in a manner that is contrary to the interests of the principal. This is the
problem of perverse incentives, which are incentives that have unintended and undesirable effects ("unintended consequences"), defined as being against the interests of the party providing the incentives (in this case, the principal). A market failure becomes more than a mere mistake and instead becomes the inevitable product of a conflict of interests between the principal and the agent. A conflict of interests may lead the agent to manipulate the information that they provide to the principal. The information asymmetry thus generated will then lead to the kind of market failure referred to as adverse selection. Adverse selection is a market failure that occurs when information asymmetries between buyers and sellers lead to suboptimal purchasing decisions on the part of the buyer, such as buying worthless or detrimental goods or services (perhaps like DARE?). When applying these economic principles to the field of program evaluation, it becomes evident that because program evaluators deal purely in information, and this information might be manipulated—either by them or by the agencies for which they work (or both of them in implicit or explicit collusion)—we have a clear case of information asymmetry. This information asymmetry, under perverse incentives, may lead to a severe conflict of interests between the society or funding agency (the principal) and the program evaluator (the agent). This does not mean that the agent must perforce be corrupted, but the situation does create a moral hazard for the agent, regardless of any individual virtues. If the perverse incentives are acted on (that is, if they indeed elicit impropriety), then economic theory clearly predicts a market failure, and specifically adverse selection on the part of the principal. Getting back to the question of the professional standards actually advocated within program evaluation, how do these lofty ideals compare to the kind of behavior that might be expected under moral hazards and perverse incentives, presuming that program evaluators are subject to the same kinds of motivations, fallibilities, and imperfections as the rest of humanity? The Joint Committee on Standards for Educational Evaluation (1994) listed the following six scenarios as examples of conflicts of interest:

– Evaluators might benefit or lose financially, long term or short term, depending on what evaluation results they report, especially if the evaluators are connected financially to the program being evaluated or to one of its competitors.
– The evaluator's job and/or ability to get future evaluation contracts might be influenced by their reporting of either positive or negative findings.
– The evaluator's personal friendships or professional relationships with clients may influence the design, conduct, and results of an evaluation.
– The evaluator's agency might stand to gain or lose, especially if they trained the personnel or developed the materials involved in the program being evaluated.
– A stakeholder or client with a personal financial interest in a program may influence the evaluation process.
– A stakeholder or client with a personal professional interest in promoting the program being evaluated may influence the outcome of an evaluation by providing erroneous surveys or interview responses. (p. 115)

In response to these threats to the integrity of a program evaluation, the applicable Propriety Standard reads: "Conflicts of interest should be dealt with openly and honestly, so that it does not compromise the evaluation processes and results" (The Joint Committee on Standards for Educational Evaluation, 1994, p. 115). Seven specific guidelines are suggested for accomplishing this goal, but many of them appear to put the onus on the individual evaluators and their clients to avoid the problem. For example, the first three guidelines recommend that the evaluator and the client jointly identify in advance possible conflicts of interest, agree in writing to preventive procedures, and seek more balanced outside perspectives on the evaluation. These are all excellent suggestions and should work extremely well in all cases, except where either the evaluator, the client, or both are actually experiencing real-world conflicts of interests. Another interesting guideline is: "Make internal evaluators directly responsible to agency heads, thus limiting the influence other agency staff might have on the evaluators" (p. 116). We remain unconvinced that the lower-echelon and often underpaid agency staff have more of a vested interest in the outcome of an evaluation than the typically more highly paid agency head presumably managing the program being evaluated. A similar situation exists with respect to the Propriety Standards for the Disclosure of Findings: "The formal parties to an evaluation should ensure that the full set of evaluation findings along with pertinent limitations are made accessible to the persons affected by the evaluation, and any others with expressed legal rights to receive the results" (The
Joint Committee on Standards for Educational Evaluation, 1994, p. 109). This statement implicitly recognizes the problem of information asymmetry described above but leaves it up to the "formal parties to an evaluation" to correct the situation. In contrast, we maintain that these are precisely the interested parties that will be most subject to moral hazards and perverse incentives and are therefore, under the financial, professional, and possibly even political incentives currently in place, the least motivated to act in the broader interests of society as a whole through the untrammelled public dissemination of information. Besides financial gain or professional advancement, Stufflebeam (2001) has recognized that political gains and motivations also play a role in the problem of information asymmetry:

The advance organizers for a politically controlled study include implicit or explicit threats faced by the client for a program evaluation and/or objectives for winning political contests. The client's purpose in commissioning such a study is to secure assistance in acquiring, maintaining, or increasing influence, power, and/or money. The questions addressed are those of interest to the client and special groups that share the client's interests and aims. Two main questions are of interest to the client: What is the truth, as best can be determined, surrounding a particular dispute or political situation? What information would be advantageous in a potential conflict situation? . . . Generally, the client wants information that is as technically sound as possible. However, he or she may also want to withhold findings that do not support his or her position. The strength of the approach is that it stresses the need for accurate information. However, because the client might release information selectively to create or sustain an erroneous picture of a program's merit and worth, might distort or misrepresent the findings, might violate a prior agreement to fully release findings, or might violate a "public's right to know" law, this type of study can degenerate into a pseudoevaluation. (p. 10–11)

By way of solutions, Stufflebeam (2001) then offers: While it would be unrealistic to recommend that administrators and other evaluation users not obtain and selectively employ information for political gain, evaluators should not lend their names and endorsements to evaluations presented by their clients that misrepresent the full set of relevant findings, that present falsified reports aimed at winning political
contests, or that violate applicable laws and/or prior formal agreements on release of findings. (p. 10)

Like most of the guidelines offered by The Joint Committee on Standards for Educational Evaluation (1994) for the Disclosure of Findings, this leaves it to the private conscience of the individual administrator or evaluator to not abuse their position of privileged access to the information produced by program evaluation. It also necessarily relies on the individual administrator or evaluator being self-reflectively and self-critically aware of any biases or selective memory for facts that they might bring to the evaluation process, and being intellectually alert and on guard against them. To be fair, some of the other suggestions offered in both of these sections of the Propriety Standards are more realistic, but it is left unclear exactly who is supposed to be specifically charged with either implementing or enforcing them. If it is again left up to either the evaluator or the client, acting either individually or in concert, it hardly addresses the problems that we have identified. We will take up some of these suggestions later in this chapter and make specific recommendations for systemic institutional reforms as opposed to individual exhortations to virtue. As should be clear from our description of the nature of the problem, it is impossible under information asymmetry to identify specific program evaluations that have been subject to these moral hazards, precisely because they are pervasive and not directly evident (almost by definition) in any individual final product. There is so much evidence for these phenomena from other fields, such as experimental economics, that the problems we are describing should be considered more than unwarranted speculation. This is especially true in light of the fact that some of our best hypothetical examples came directly from the 1994 volume on professional evaluation standards cited above, indicating that these problems have been widely recognized for some time. Further, we do not think that we are presenting a particularly pejorative view of program evaluation collectively or of program evaluators individually: we are instead describing how some of the regrettable limitations of human nature, common to all areas of human endeavor, are exacerbated by the way that program evaluations are generally handled at the institutional level. The difficult situation of the honest and well-intentioned program evaluator under the current system of incentives is just a special case of this general human condition, which
subjects both individuals and agencies to a variety of moral hazards.

Cui Bono? The Problem of Multiple Stakeholders

In his historic speech Pro Roscio Amerino, delivered in 80 BC, Marcus Tullius Cicero is quoted as having said (Berry, 2000):

The famous Lucius Cassius, whom the Roman people used to regard as a very honest and wise judge, was in the habit of asking, time and again, "To whose benefit?"

That speech made the expression "cui bono?" famous for the two millennia that followed. In program evaluation, we have a technical definition for the generic answer to that question. Stakeholders are defined as the individuals or organizations that are either directly or indirectly affected by the program and its evaluation (Rossi & Freeman, 1993). Although a subtle difference here is that the stakeholders can either gain or lose and do not always stand to benefit, the principle is the same. Much of what has been written about stakeholders in program evaluation is emphatic on the point that the paying client is neither the only, nor necessarily the most important, stakeholder involved. The evaluator is responsible for providing information to a multiplicity of different interest groups. This casts a program evaluator more in the role of a public servant than a private contractor. For example, The Joint Committee on Standards for Educational Evaluation (1994) addressed the problem of multiple stakeholders under several different and very interesting headings. First, under Utility Standards, they state that Stakeholder Identification is necessary so that "[p]ersons involved in or affected by the evaluation should be identified, so that their needs can be addressed" (p. 23). This standard presupposes the rather democratic and egalitarian assumption that the evaluation is being performed to address the needs of all affected and not just those of the paying client. Second, in the Feasibility Standards, under Political Viability, they explain that "[t]he evaluation should be planned and conducted with anticipation of the different positions of various interest groups, so that their cooperation might be obtained, and so that possible attempts by any of these groups to curtail evaluation operations or to bias or misapply the results can be averted or counteracted" (p. 63). This standard instead presupposes that the diverse
stakeholder interests have to be explicitly included within the evaluation process because of political expediency, at the very least as a practical matter of being able to effectively carry out the evaluation, given the possible interference by these same special interest groups. The motivation of the client in having to pay to have these interests represented, and of the evaluator in recommending that this be done, might therefore be one of pragmatic or "enlightened" self-interest rather than of purely altruistic and public-spirited goals. Third, in the Propriety Standards, under Service Orientation, they state: "Evaluations should be designed to assist organizations to address and effectively serve the needs of the targeted participants" (p. 83). This standard presupposes that both the client, directly, and the evaluator, indirectly, are engaged in public service for the benefit of these multiple stakeholders. Whether this results from enlightened self-interest on either of their parts, with an eye to the possible undesirable consequences of leaving any stakeholder groups unsatisfied, or from disinterested and philanthropic communitarianism is left unclear. Fourth, in the Propriety Standards, under Disclosure of Findings, as already quoted above, there is the statement that the full set of evaluation findings should be made accessible to all the persons affected by the evaluation and not just to the client. This standard again presupposes that the evaluation is intended and should be designed for the ultimate benefit of all persons affected. So all persons affected are evidently "cui bono?" As another ancient aphorism goes, "vox populi, vox dei" ("the voice of the people is the voice of god," first attested to have been used by Alcuin of York, who disagreed with the sentiment, in a letter to Charlemagne in 798 AD; Page, 1909, p. 61). Regardless of the subtle differences in perspective among many of these standards, all of them present us with a very broad view of for whom program evaluators should actually take themselves to be working. These standards again reflect very lofty ethical principles. However, we maintain that the proposed mechanisms and guidelines for achieving those goals remain short of adequate to ensure success.

What Do Program Evaluators Actually Do? Part I: Training and Competencies

Conceptual Foundations of Professional Training

Recent attempts have been made (King, Stevahn, Ghere, & Minnema, 2001; Stevahn, King, Ghere,
& Minnema, 2005) at formalizing the competencies and subsequent training necessary for program evaluators. These studies have relied on the opinions of practicing evaluators regarding the essential competencies of an effective evaluator. In these studies, participants were asked to rate the perceived importance of a variety of skills that an evaluator should presumably have. In the King et al. (2001) study, there was remarkably broad agreement among evaluators on the competencies that an evaluator should possess. For example, high agreement was observed for characteristics such as the ability to collect, analyze, and interpret data as well as to report the results. In addition, there was almost universal agreement regarding the evaluator's ability to frame the evaluation question as well as understand the evaluation process. These areas of agreement suggest that the essential training that evaluators should have is in the areas of data-collection methods and data-analytic techniques. Surprisingly, however, there was considerable disagreement regarding the ability to do research-oriented activities, drawing a line between conducting evaluation and conducting research. Nonetheless, we believe that training in research-oriented activities is essential to program evaluation because the same techniques, such as framing questions, data collection, and data analysis and interpretation, are gained through formal training in research methods. This evidently controversial position will be defended further below. Formal training standards are not yet developed for the field of evaluation (Stevahn et al., 2005). However, it does appear that the training necessary to be an effective evaluator includes formal and rigorous training in both research methods and the statistical models that are most appropriate to those methods. Further below, we outline some of the research methodologies and statistical models that are most common within program evaluation. In addition to purely data-analytic models, however, logic models provide program evaluators with an outline, or a roadmap, for achieving the outcome goals of the program and illustrate relationships between resources available, planned activities, and the outcome goals. The selection of outcome variables is important because these are directly relevant to the assessment of the success of the program. An outcome variable refers to the changes that the program of interest is intended to bring about. Outcome variables can be specified at the level of the individual, group, or population and can refer to a change in specific behaviors, practices, or ways
of thinking. A generic outline for developing a logic model is presented by the United Way (1996). They define a logic model as including four components. The first component is called Inputs and refers to the resources available to the program, including financial funds, staff, volunteers, equipment, and any potential constraints, such as licensure. The second component is called Activities and refers to any planned services by the program, such as tutoring, counseling, or training. The third component is called Outputs and refers to the number of participants reached, activities performed, products or services delivered, and so forth. The fourth component is called Outcomes and refers to the benefits produced by those outputs for the participants or community that the program was directed to help. Each component of the logic model can be further divided into initial or intermediate goals, with a long- or short-term timeframe, and can include multiple items within each component. Table 17.1 displays an example of a logic model. The logic model shown is a tabular representation that we prepared of the VERB Logic Model developed for the Youth Media Campaign Longitudinal Survey, 2002–2004 (Center for Disease Control, 2007). This logic model describes the sequence of events envisioned by the program for bringing about behavior change, presenting the expected relations between the campaign inputs, activities, impacts, and outcomes. A PDF of the original figure can be downloaded directly from the CDC website (http://www.cdc.gov/youthcampaign/research/PDF/LogicModel.pdf). We believe that it is essential for program evaluators to be trained in the development and application of logic models because they can assist immensely in both the design and the analysis phases of the program evaluation. It is also extremely important that the collaborative development of logic models be used as a means of interacting and communicating with the program staff and stakeholders during this process, as an additional way of making sure that their diverse interests and concerns are addressed in the evaluation of the program.
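
As a purely illustrative sketch (our own, with hypothetical field names; it is not a CDC or United Way artifact), a logic model can also be written down as a simple data structure, which makes the hypothesized chain from inputs to long-term outcomes explicit and easy to discuss with program staff and stakeholders. The entries paraphrase the VERB components summarized in Table 17.1.

```python
# A minimal, hypothetical rendering of the VERB logic model (Table 17.1)
# as a plain Python dictionary; keys mirror the table's column headings.
verb_logic_model = {
    "inputs": ["consultants", "staff", "research and evaluation",
               "contractors", "community", "infrastructure", "partnership"],
    "activities": ["advertising", "promotions", "web", "public relations",
                   "national and community outreach"],
    "short_term_outcomes": ["tween and parent awareness of the campaign brand",
                            "'buzz' about the campaign and brand messages"],
    "mid_term_outcomes": ["changes in subjective norms, beliefs, "
                          "self-efficacy, and perceived behavioral control"],
    "long_term_outcomes": ["tweens engaging in and maintaining physical activity"],
}

# Reading the model left to right traces the causal chain that the
# evaluation is ultimately expected to test, one component at a time.
for component, items in verb_logic_model.items():
    print(component, "->", "; ".join(items))
```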

Conceptual Foundations of Methodological and Statistical Training

In response to a previous assertion by Shadish, Cook, and Leviton (1991) that program evaluation was not merely "applied social science," Sechrest and Figueredo (1993) argued that the reason this was so was the following:

Table 17.1. Example of a Logic Model: Youth Media Campaign Longitudinal Survey, 2002–2004

Input: Consultants; Staff; Research and evaluation; Contractors; Community; Infrastructure; Partnership
Activities: Advertising; Promotions; Web; Public relations; National and community outreach
Short-term outcomes: Tween and parent awareness of the campaign brand and its messages; "Buzz" about the campaign and brand messages
Mid-term outcomes: Changes in subjective norms, beliefs, self-efficacy, and perceived behavioral control
Long-term outcomes: Tweens engaging in and maintaining physical activity, leading to reducing chronic disease and possibly reducing unhealthy risky behaviors

Shadish et al. (1991) appeal to the peculiar problems manifest in program evaluation. However, these various problems arise not merely in program evaluation but whenever one tries to apply social science. The problems, then, arise not from the perverse peculiarities of program evaluation but from the manifest failure of much of mainstream social science and the identifiable reasons for that failure. (p. 646–647)

These "identifiable reasons" consisted primarily of various common methodological practices that led to the "chronically inadequate external validity of the results of the dominant experimental research paradigm" (p. 647) that had been inadvisedly adopted by mainstream social science. According to Sechrest and Figueredo (1993), the limitations of these sterile methodological practices were very quickly recognized by program evaluators, who almost immediately began creating the quasi-experimental methods that were more suitable for real-world research and quickly superseded the older laboratory-based methods, at least within program evaluation:

Arguably, for quasi-experimentation, the more powerful and sophisticated intellectual engines of causal inference are superior, by now, to those of the experimental tradition. (p. 647)

The proposed distinction between program evaluation and applied social science was therefore more a matter of practice than a matter of principle. Program evaluation had adopted methodological practices that were appropriate to its content domain, which mainstream social science had not. The strong implication was that the quasi-experimental methodologies developed within program evaluation would very likely be more suitable for applied social science in general than the dominant experimental paradigm.

Similarly, we extend this line of reasoning to argue that program evaluators do not employ a completely unique set of statistical methods either. However, because program evaluators disproportionately employ a certain subset of research methods, which are now in more general use throughout applied psychosocial research, it necessarily follows that they must therefore disproportionately employ a certain subset of statistical techniques that are appropriate to those particular designs. In the sections below, we therefore concentrate on the statistical techniques that are in most common use in program evaluation, although these data-analytic methods are not unique to program evaluation per se.

What Do Program Evaluators Actually Do? Part II: Quantitative Methods

Foundations of Quantitative Methods: Methodological Rigor

Even its many critics acknowledge that the hallmark and main strength of the so-called quantitative approach to program evaluation resides primarily in its methodological rigor, whether it is applied in shoring up the process of measurement or in buttressing the strength of causal inference. In the following sections, we review a sampling of the methods used in quantitative program evaluation to achieve the sought-after methodological rigor, which is the "Holy Grail" of the quantitative enterprise.

Evaluation-Centered Validity

Within program evaluation, and the social sciences in general, several types of validity have been identified. Cook and Campbell (1979) formally distinguished four types of validity that are especially relevant to program evaluation: (1) internal validity, (2) external validity, (3) statistical conclusion validity, and (4) construct validity. Internal
validity refers to establishing the causal relationship between two variables such as treatment and outcome; external validity refers to supporting the generalization of results beyond a specific study; statistical conclusion validity refers to applying statistical techniques appropriately to a given problem; and construct validity falls within a broader class of validity issues in measurement (e.g., face validity, criterion validity, concurrent validity, etc.) but specifically consists of assessing and understanding program components and outcomes accurately. In the context of a discussion of methods in program evaluation, two forms of validity take primacy: internal and external validity. Each validity type is treated in more detail in the following sections.

internal validity

The utility of a given method in program evaluation is generally measured in terms of how internally valid it is believed to be. That is, the effectiveness of a method in its ability to determine the causal relationship between the treatment and outcome is typically considered in the context of threats to internal validity. There are several different types of threat to internal validity, each of which applies to a greater or lesser degree depending on the given method of evaluation. Here we describe a few possible threats to internal validity.

selection bias

Selection bias is the greatest threat to internal validity for quasi-experimental designs. Selection bias is generally a problem when comparing experimental and control groups that have not been created by the random assignment of participants. In such quasi-experiments, group membership (e.g., treatment vs. control) may be determined by some unknown or little-known variable that may contribute to systematic differences between the groups and may thus become confounded with the treatment. History is another internal validity threat. History refers to any events, not manipulated by the researcher, that occur between the treatment and the posttreatment outcome measurement that might even partially account for that posttreatment outcome. Any events that coincide with the treatment, whether systematically related to the treatment or not, that could produce the treatment effects on the outcome are considered history threats. For example, practice effects in test taking could account for differences between pretest and posttreatment scores if the same type of measure is given at each measurement occasion. Maturation is the tendency for changes in
an outcome to spontaneously occur over time. For example, consider a program aimed at increasing formal operations in adolescents. Because formal operations tend to increase over time during adolescence, the results of any program designed to promote formal operations during this time period would be confounded with the natural maturational tendency for formal operations to improve with age. Finally, regression to the mean may cause another threat to internal validity. These regression artifacts generally occur when participants are selected into treatment groups or programs because they are unusually high or low on certain characteristics. When individuals deviate substantially from the mean, this might in part be attributable to errors of measurement. In such cases, it might be expected that over time, their observed scores will naturally regress back toward the mean, which is more representative of their true scores. In research designs where individuals are selected in this way, programmatic effects are difficult to distinguish from those of regression toward the mean. Several other forms of threats to internal validity are also possible (for examples, see Shadish, Cook, & Campbell, 2002; Mark & Cook, 1984; Smith, 2010).
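
The regression artifact described above is easy to demonstrate by simulation. The following minimal sketch (a hypothetical illustration with made-up numbers, not data from any actual program) selects individuals with extreme observed pretest scores and shows their mean drifting back toward the population mean at a second measurement, even though no treatment was applied.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# True scores plus independent measurement error on two occasions.
true_score = rng.normal(50, 10, n)
pretest = true_score + rng.normal(0, 5, n)
posttest = true_score + rng.normal(0, 5, n)

# Select participants into a hypothetical program because their observed
# pretest scores are unusually high (top 10 percent).
selected = pretest > np.percentile(pretest, 90)

print(round(pretest[selected].mean(), 1))   # well above the population mean of 50
print(round(posttest[selected].mean(), 1))  # closer to 50, with no treatment at all
```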

external validity

External validity refers to the generalizability of findings, or the application of results beyond the given sample in a given setting. The best way to defend against threats to external validity is to conduct randomized experiments on representative samples, where participants are first randomly drawn from the population and then randomly assigned to the treatment and control groups. Because no prior characteristics are systematically shared within the treatment or the control group while systematically differing between the two groups, it can be extrapolated that the effect of a program is applicable to others beyond the specific sample assessed. This is not to say that the results of a randomized experiment will be applicable to all populations. For example, if a program is specific to adolescence and was only tested on adolescents, then the impact of the treatment may be specific to adolescents. In contrast, evaluations that involve groups that were nonrandomly assigned face the possibility that the effect of the treatment is specific to the population being sampled and thus may not generalize to other populations. For example, if a program is designed to
reduce the recidivism rates of violent criminals, but the participants in a particular program are those who committed a specific violent crime, then the estimated impact of that program may be specific to only those individuals who committed that specific crime and not generalizable to other violent offenders.

Randomized Experiments

Randomized experiments are widely believed to offer evaluators the most effective way of assessing the causal influence of a given treatment or program (St. Pierre, 2004). The simplest type of randomized experiment is one in which individuals are randomly assigned to one of at least two groups—typically a treatment and a control group. By virtue of random assignment, the groups are approximately equivalent in their characteristics, and thus threats to internal validity as a result of selection bias are, by definition, ruled out. Thus, the only systematic difference between the groups is implementation of the treatment (or program participation), so that any systematic differences between groups can be safely attributed to receiving or not receiving the treatment. It is the goal of the evaluator to assess this degree of difference to determine the effectiveness of the treatment or program (Heckman & Smith, 1995; Boruch, 1997). Although randomized experiments might provide the best method for establishing the causal influence of a treatment or program, they are not without their problems. For example, it may simply be undesirable or infeasible to randomly assign participants to different groups. Randomized experiments may be undesirable if results are needed quickly. In some cases, implementation of the treatment may take several months or even years to complete, precluding timely assessment of the treatment's effectiveness. In addition, it is not feasible to randomly assign participant characteristics: characteristics such as race or sex cannot be randomly assigned, and, therefore, a randomized experiment cannot be used to answer questions that center on these characteristics. Although experimental methods are useful for eliminating such confounds by distributing participant characteristics evenly across groups, when research questions center on these prior participant characteristics, experimental methods are not feasible for this kind of problem. In addition, there are ethical considerations that must be taken into account before randomly assigning individuals to groups. For example, it would be unethical
to assign participants to a cigarette smoking condition or other condition that may cause harm. Furthermore, it is ethically questionable to withhold effective treatment from some individuals and administer treatment to others, such as in cancer treatment or education programs (see Cook, Cook, & Mark, 1977; Shadish et al., 2002). Randomized experiments may also suffer other forms of selection bias insensitive to randomization. For example, selective attrition from treatments may create nonequivalent groups if some individuals are systematically more likely to drop out than others (Smith, 2010). Randomized experiments may also suffer from a number of other drawbacks. For a more technical discussion of the relationship between randomized experiments and causal inference, see Cook, Scriven, Coryn, and Evergreen (2010).
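
As a minimal sketch of the logic just described (simulated, hypothetical data; the sample size and effect size are arbitrary), random assignment balances unmeasured prior characteristics across groups, so the simple difference in group means, tested here with an independent-samples t-test, estimates the treatment effect.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n = 200

# Randomly assign half of the participants to treatment, half to control.
assignment = rng.permutation(np.repeat(["treatment", "control"], n // 2))

# Unmeasured prior characteristics are balanced across groups in expectation.
baseline = rng.normal(100, 15, n)
true_effect = np.where(assignment == "treatment", 5.0, 0.0)
outcome = baseline + true_effect + rng.normal(0, 10, n)

treated = outcome[assignment == "treatment"]
controls = outcome[assignment == "control"]

# With randomization, the mean difference is an unbiased estimate of the
# average treatment effect; the t-test gauges its statistical significance.
t_stat, p_value = stats.ttest_ind(treated, controls)
print(round(treated.mean() - controls.mean(), 2), round(p_value, 4))
```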

Quasi-Experiments

Quasi-experiments are identical to randomized experiments with the exception of one element: randomization. In quasi-experimental designs, participants are not randomly assigned to different groups, and thus the groups are considered nonequivalent. However, during data analysis, a program evaluator may attempt to construct equivalent groups through matching. Matching involves creating control and treatment groups that are similar in their characteristics, such as age, race, and sex. Attempts to create equivalent groups through matching may result in undermatching, where groups may be similar in one characteristic (such as race) but nonequivalent in others (such as socioeconomic status). In such situations, a program evaluator may make use of statistical techniques that control for undermatching (Smith, 2010) or decide to focus only on matching those characteristics that could moderate the effects of the treatment. Much debate surrounds the validity of using randomized experiments versus quasi-experiments in establishing causality (see, for example, Cook et al., 2010). Our goal in this section is not to evaluate the tenability of asserting causality within quasi-experimental designs (interested readers are referred to Cook & Campbell, 1979) but, rather, to describe some of the more common methods that fall under the rubric of quasi-experiments and how they relate to program evaluation.
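
To make the idea of matching concrete, here is a minimal sketch (hypothetical data, matching on a single covariate, age, purely for illustration) of crude one-to-one nearest-neighbor matching with replacement. Real applications typically match on several covariates, or on a propensity score, precisely to limit the undermatching problem noted above.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical nonequivalent groups: treated units tend to be older.
age_treated = rng.normal(40, 5, 50)
age_control = rng.normal(30, 8, 500)

# For each treated unit, pick the control unit with the closest age.
matched_idx = np.array([np.argmin(np.abs(age_control - a)) for a in age_treated])

print(round(age_treated.mean(), 1))               # treated group: around 40
print(round(age_control.mean(), 1))               # all controls: around 30
print(round(age_control[matched_idx].mean(), 1))  # matched controls: comparable to treated
```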

one-group, posttest-only design

Also called the one-shot case study (Campbell, 1957), the one-group, posttest-only design provides
the evaluator with information only about treatment participants and only after the treatment has been administered. It contains neither a pretest nor a control group, and thus conclusions about program impact are generally ambiguous. This design can be diagrammed:

NR   X   O1

The NR refers to the nonrandom participation in this group. The X refers to the treatment, which from left to right indicates that it temporally precedes the outcome (O), and the subscript 1 indicates that the outcome was measured at time-point 1. Although simple in its formulation, this design has a number of drawbacks that may make it undesirable. For example, this design is vulnerable to several threats to internal validity, particularly history threats (Kirk, 2009; Shadish et al., 2002). Because there is no other group with which to make comparisons, it is unknown whether the treatment is directly associated with the outcome or whether other events that coincide with treatment implementation confound treatment effects. Despite these limitations, there is one circumstance in which this design might be appropriate. As discussed by Kirk (2009), the one-group, posttest-only design may be useful when sufficient knowledge about the expected value of the dependent variable in the absence of the treatment is available. For example, consider high school students who have taken a course in calculus and recently completed an exam. To assess the impact of the calculus course, one would have to determine the average expected grade on the exam had the students not taken the course and compare it to the scores they actually received (Shadish et al., 2002). In this situation, the expected exam grade for students had they not taken the course would likely be very low compared to the students' actual grades. Thus, this technique is likely to be useful only when the size of the effect (taking the class) is relatively large and distinct from alternative possibilities (such as a history threat).

posttest-only, nonequivalent groups design

This design is similar to the one-group, posttest-only design in that only posttest measures are available; however, in this design, a comparison group is available. Unlike a randomized experiment with participants randomly assigned to a treatment and a control group, in this design participant group membership is not randomized. This design can be
diagrammed:

NR   X   O1
- - - - - - - -
NR       O1

Interpretation of this diagram is similar to that of the previous one; however, in this diagram, the dashed line indicates that the participants in each of these groups are different individuals. It is important to note that the individuals in these two groups represent nonequivalent groups and may be systematically different from each other in some uncontrolled extraneous characteristics. This design is a significant improvement over the one-group, posttest-only design in that a comparison group that has not experienced the treatment can be compared on the dependent variable of interest. The principal drawback, however, is that this method may suffer from selection bias if the control and treatment groups differ from each other in a systematic way that is not related to the treatment (Mark & Cook, 1984). For example, participants selected into a treatment based on their need for the treatment may differ on characteristics other than treatment need from those not selected into the treatment. Evaluators may implement this method when pretest information is not available, such as when a treatment starts before the evaluator has been consulted. In addition, an evaluator may choose to use this method if pretest measurements have the potential to influence posttest outcomes (Willson & Putnam, 1982). For example, consider a program designed to increase spelling ability in middle childhood. At pretest and posttest, children are given a list of words to spell. Program effectiveness would then be assessed by estimating the improvement in spelling, comparing spelling performance before and after the program. However, if the same set of words that were administered in the pretest were given to children at posttest, then the effect of the program might be confounded with a practice effect. Although it is possible that pretest measures may influence posttest outcomes, such situations are likely to be relatively rare. In addition, the costs of not including a pretest may significantly outweigh the potential benefits (see Shadish et al., 2002).

one-group, pretest–posttest design

In the pretest–posttest design, participants are assessed before the treatment and assessed again after the treatment has been administered. However, there is no control group comparison. The form of
this design is:

NR   O1   X   O2

This design provides a baseline with which to compare the same participants before and after treatment. Change in the outcome between pretest and posttest is commonly attributed to the treatment. This attribution, however, may be misinformed, as the design is vulnerable to threats to internal validity. For example, history threats may occur if uncontrolled extraneous events coincide with treatment implementation. In addition, maturation threats may also occur if the outcome of interest changes with the passage of time. Finally, if the outcome measure was unusually high or low at pretest, then the change detected at posttest may not be the result of the treatment but, rather, of regression toward the mean (Mark & Cook, 1984). Program evaluators might use this method when it is not feasible to administer a program only to one set of individuals and not to another. For example, this method would be useful if a program has been administered to all students in a given school, where there cannot be a comparative control group.

pretest and posttest, nonequivalent groups design

The pretest and posttest, nonequivalent groups design is probably the design most commonly used by program evaluators (Shadish et al., 2002). This design combines the previous two designs by including not only pretest and posttest measures but also a control group at pretest and posttest. This design can be diagrammed:

NR   O1   X   O2
- - - - - - - - - -
NR   O1        O2

The advantage of this design is that threats to internal validity can more easily be ruled out (Mark & Cook, 1984). When threats to internal validity are plausible, they can be more directly assessed in this design. Further, in the context of this design, statistical techniques are available to help account for potential biases (Kenny, 1975). Indeed, several authors recommend that data should be analyzed in a variety of ways to determine the proper effect size of the treatment and evaluate the potential for selection bias that might be introduced as a result of nonrandom groups (see Cook & Campbell, 1979; Reichardt, 1979; Bryk, 1980). In summary, the pretest and posttest, nonequivalent groups design, although not without its flaws, is a relatively effective technique for assessing treatment impact. An inherent strength of this design is
that with the exception of selection bias as a result of nonrandom groups, no single general threat to internal validity can be assigned. Rather, threats to internal validity are likely to be specific to the given problem under evaluation.
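
One of the several analyses alluded to above is an ANCOVA-style regression that adjusts the posttest comparison for the pretest. The sketch below uses simulated, hypothetical data, and statsmodels is only one of several tools that could be used; it illustrates the idea rather than prescribing the single correct analysis, and in practice the results should be compared across alternative adjustment strategies, as the cited authors recommend.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
n = 120

# Hypothetical nonequivalent groups: the treated group starts out higher.
group = np.repeat([1.0, 0.0], n // 2)            # 1 = treatment, 0 = control
pretest = rng.normal(50, 10, n) + 5 * group
posttest = 0.8 * pretest + 4.0 * group + rng.normal(0, 8, n)

# ANCOVA-style adjustment: regress the posttest on the pretest and the
# group indicator; the group coefficient is the adjusted treatment effect.
X = sm.add_constant(np.column_stack([pretest, group]))
fit = sm.OLS(posttest, X).fit()
print(np.round(fit.params, 2))   # [intercept, pretest slope, group effect near 4]
```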

interrupted time series design

The interrupted time series design is essentially an extension of the pretest and posttest, nonequivalent groups design, although it is not strictly necessary to include a control group. Ideally, this design consists of repeated measures of some outcome prior to treatment, implementation of the treatment, and then repeated measures of the outcome after treatment. The general form of this design can be diagrammed:

NR   O1 O2 O3 O4 O5   X   O6 O7 O8 O9 O10
- - - - - - - - - - - - - - - - - - - - - -
NR   O1 O2 O3 O4 O5       O6 O7 O8 O9 O10

In this diagram, the first line of Os refers to the treatment group, which can be identified by the X among the Os. The second line of Os refers to the control condition, as indicated by the lack of an X. The dashed line between the two conditions indicates that the participants are different between the two groups, and the NR indicates that individuals are nonrandomly distributed between the groups. The interrupted time series design is considered by many to be the most powerful quasi-experimental design for examining the longitudinal effects of treatments (Wagner et al., 2002). Several pieces of information can be gained about the impact of a treatment. The first is a change in the level of the outcome (as indicated by a change in the intercept of the regression line) after the treatment. This simply means that change in mean levels of the outcome as a result of the treatment can be assessed. The second is change in the temporal trajectory of the outcome (as indicated by a change in the slope of the regression line). Because of the longitudinal nature of the data, the temporal trajectories of the outcome can be assessed both pre- and posttreatment, and any change in the trajectories can be estimated. Other effects can be assessed as well, such as any changes in the variances of the outcomes after treatment, whether the effect of the treatment is continuous or discontinuous, and whether the effect of the treatment is immediate or delayed (see Shadish et al., 2002). Thus, several different aspects of treatment implementation can be assessed with this design. In addition to its utility, the interrupted time series design (with a control group) is robust against
many forms of internal validity threat. For example, with a control group added to the model, history is no longer a threat because any external event that might have co-occurred with the treatment should have affected both groups, presumably equally. In addition, systematic pretest differences between the treatment and control groups can be more accurately assessed because there are several pretest measures. Overall, the interrupted time series design with a nonequivalent control group is a very powerful design (Mark & Cook, 1984). One barrier to this design is that several measurements are needed both before and after treatment. This may be impossible if the evaluator was not consulted until after the treatment was implemented. In addition, some evaluators may have to rely on the availability of existing data that they did not collect, or on historical records. These limitations may place constraints on the questions that can be asked by the evaluator.
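
The level-change and slope-change effects described above are usually estimated with a segmented regression. The following minimal sketch (a simulated, hypothetical single-group series with made-up values; a fuller analysis would also model serial correlation and, ideally, a control series) shows the basic coding of the design variables.

```python
import numpy as np

rng = np.random.default_rng(3)

# Ten measurement occasions; the treatment begins after the fifth.
time = np.arange(10)
after = (time >= 5).astype(float)    # 1.0 once the treatment is in place
time_after = after * (time - 5)      # time elapsed since the treatment began

# Simulated outcome: a pre-treatment trend, a drop in level at the
# interruption, a change in slope afterward, plus noise.
outcome = 20 + 1.0 * time - 6.0 * after - 0.5 * time_after + rng.normal(0, 1, 10)

# Segmented regression recovers the pre-trend, level change, and slope change.
X = np.column_stack([np.ones(10), time, after, time_after])
coef, *_ = np.linalg.lstsq(X, outcome, rcond=None)
print(np.round(coef, 2))   # [intercept, pre-slope, level change, slope change]
```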

regression discontinuity design

First introduced to the evaluation community by Thistlethwaite and Campbell (1960), the regression-discontinuity design (RDD) provides a powerful and unbiased method for estimating treatment effects that rivals that of a randomized experiment (see Huitema, 1980). The RDD contains both a treatment and a control group. Unlike other quasi-experimental designs, however, the determination of group membership is perfectly known. That is, in the RDD, participants are assigned to either a treatment or control group based on a particular cutoff (see also Trochim, 1984, for a discussion of so-called fuzzy regression discontinuity designs). The RDD takes the following form:

OA   C   X   O2
OA   C       O2

OA refers to the pretest measure on which the criterion for group assignment is determined, C refers to the cutoff score for group membership, X refers to the treatment, and O2 refers to the measured outcome. As an example, consider the case where elementary school students are assigned to a program aimed at increasing reading comprehension. Assignment to the program versus no program is determined by a particular cutoff score on a pretest measure of reading comprehension. In this case, group membership (control vs. treatment) is not randomly assigned; however, the principle or decision rule for assignment is perfectly known (e.g.,
the cut-off score). By directly modeling the known determinant of group membership, the evaluator is able to completely account for the selection process that determined group membership. The primary threat to the internal validity of the RDD is history, although the tenability of this factor as a threat is often questionable. More importantly, the analyses of RDDs are by nature complex, and correctly identifying the functional forms of the regression parameters (linear, quadratic, etc.) can have a considerable impact on determining the effectiveness of a program (see Reichardt, 2009, for a review).
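
A minimal simulated sketch of the basic RDD analysis follows (hypothetical data and cutoff; for simplicity it assumes a common linear trend on both sides of the cutoff, whereas a real analysis would probe the functional form carefully, as noted above). The coefficient on the treatment indicator estimates the program effect at the cutoff.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(11)
n = 400

# Hypothetical pretest scores; students below the cutoff enter the program.
pretest = rng.uniform(0, 100, n)
cutoff = 40.0
treated = (pretest < cutoff).astype(float)

# Simulated posttest: a smooth function of the pretest plus a 10-point
# boost produced by the program.
posttest = 20 + 0.7 * pretest + 10.0 * treated + rng.normal(0, 5, n)

# Model the outcome on the centered assignment variable and the treatment
# indicator; because assignment is fully determined by the cutoff, the
# selection process is completely accounted for.
centered = pretest - cutoff
X = sm.add_constant(np.column_stack([centered, treated]))
fit = sm.OLS(posttest, X).fit()
print(round(fit.params[2], 2))   # estimated program effect, close to 10
```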

Measurement and Measurement Issues in Program Evaluation

In the context of program evaluation, three types of measures should be considered: (1) input measures, (2) process measures, and (3) outcome measures (Hollister & Hill, 1995). Input measures consist of more general measures about the program and its participants, such as the number of individuals in a given program or the ethnic composition of program participants. Process measures center on the delivery of the program, such as a measure of teaching effectiveness in a program designed to improve reading comprehension in schoolchildren. Outcome measures are those measures that focus on the ultimate result of the program, such as a measure of reading comprehension at the conclusion of the program. Regardless of the type of measurement being applied, it is imperative that program evaluators utilize measures that are consistent with the goals of the evaluation. For example, in an evaluation of the performance of health-care systems around the world, the World Health Organization (WHO) published a report (World Health Organization, 2000) that estimated how well the health-care systems of different countries were functioning. As a part of this process, the authors of the report sought to make recommendations based on empirical evidence rather than WHO ideology. However, their measure of overall health system functioning was based, in part, on an Internet-based questionnaire of 1000 respondents, half of whom were WHO employees. In this case, the measure used to assess health system functioning was inconsistent with the goals of the evaluation, and this problem did not go unnoticed (see Williams, 2001). Evaluators should consider carefully what the goals of a given program are and choose measures that are appropriate to those goals.

An important part of choosing measures appropriate to the goals of a program is choosing measures that are psychometrically sound. At minimum, measures should be chosen that have been demonstrated in past research to have adequate internal consistency. In addition, if the evaluator intends to administer a test multiple times, then the chosen measure should have good test–retest reliability. Similarly, if the evaluator chooses a measure that is scored by human raters, then the measure should show good inter-rater reliability. In addition to these basic characteristics of reliability, measures should also have good validity, in that they actually measure the constructs that they are intended to measure. Published measures are more likely to already possess these qualities and thus may be less problematic when choosing among possible measures. It may be the case, however, that either an evaluator is unable to locate an appropriate measure or no appropriate measures currently exist. In this case, evaluators may consider developing their own scales of measurement as part of the process of program evaluation. Smith (2010) has provided a nice tutorial on constructing a survey-based scale for program evaluation. Rather than restate these points, however, we discuss some of the issues that an evaluator may face when constructing new measures in the process of program evaluation. Probably the most important point is that there is no way, a priori, to know that the measure being constructed is valid, in that it measures what it is intended to measure. Presumably the measure will be high in face validity, but this does not necessarily translate into construct validity. Along these lines, if an evaluator intends to create their own measure of a given construct in the context of an evaluation, then the measure should be properly vetted regarding its utility in assessing program components before any strong conclusions are drawn. One way to validate a new measure is to add additional measures in the program evaluation to show convergent and divergent validity. In addition, wherever possible, it would be ideal if pilot data on the constructed measure could be obtained from some of the program participants to help evaluate the psychometric properties of the measure prior to its administration to the larger sample that will constitute the formal program evaluation. Another problem that program evaluators may face is that of "re-inventing the wheel" when creating a measure from scratch. When constructing a measure, program evaluators are advised to research the construct that they intend to measure so that
useful test items can be developed. One way to avoid re-inventing the wheel may be to either borrow items from other validated scales or to modify an existing scale to suit the needs of the program and evaluation, while properly citing the original sources. Collaboration with academic institutions can help facilitate this process by providing resources to which an evaluator may not already have access.
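As one concrete illustration of the pilot-testing step just described, the following sketch computes Cronbach's alpha, a common index of internal consistency, for a newly constructed scale. The item data, sample size, and function name are hypothetical, and Python with NumPy is simply one convenient tool; nothing here is specific to any particular published measure.

# A minimal sketch, assuming simulated pilot data: five items intended to tap a
# single construct, answered by 200 hypothetical respondents.
import numpy as np

def cronbach_alpha(items):
    # items: a respondents-by-items array of scores
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

rng = np.random.default_rng(seed=2)
true_score = rng.normal(0, 1, size=(200, 1))            # shared construct
items = true_score + rng.normal(0, 1, size=(200, 5))    # five noisy indicators
print(round(cronbach_alpha(items), 2))                  # roughly 0.8 for these settings

Similar pilot checks could be run for test–retest or inter-rater reliability before the measure is administered to the full evaluation sample.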

Statistical Techniques in Program Evaluation
Program evaluators may employ a wide variety of techniques to analyze the results of their evaluation. These techniques range from "simple" correlations, t-tests, and analyses of variance (ANOVAs) to more intensive techniques such as multilevel modeling, structural equation modeling, and latent growth curve modeling. It is often the case that the research method chosen for the evaluation dictates the statistical technique used to analyze the resultant data. For experimental designs and quasi-experimental designs, various forms of ANOVA, multiple regression, and non-parametric statistics may suffice. However, for longitudinal designs, there may be more options for the program evaluator in terms of how to analyze the data. In this section, we discuss some of the analytical techniques that might be employed when analyzing longitudinal data and, more specifically, the kind of longitudinal data derived from an interrupted time series design. In particular, we discuss the relative advantages and disadvantages of repeated measures analysis of variance (RM-ANOVA), multilevel modeling, and latent growth curve modeling. For a more systematic review of some of the more basic statistical techniques in program evaluation, readers are referred to Newcomer and Wirtz (2004). To discuss the properties of each of these techniques, consider a hypothetical longitudinal study on alcohol use among adolescents. Data on alcohol consumption were collected starting when the adolescents were in sixth grade and continued through the twelfth grade. As a part of the larger longitudinal study, a group of adolescents were enrolled in a program aimed at reducing alcohol consumption during adolescence. The task of the evaluator is to determine the effectiveness of the program in reducing alcohol use across adolescence. One way to analyze such data would be to use RM-ANOVA. In this analysis, the evaluator would have several measures of alcohol consumption across time and another binary variable that coded whether a particular adolescent received the program. When
modeling these data, the measures of alcohol consumption across time would be treated as a repeated (within-subjects) factor, whereas the binary program variable would be treated as a fixed factor. The results of this analysis would indicate the functional form of the alcohol consumption trend over time as well as whether the trend differed between the two groups (program vs. no program). The advantage of the repeated measures technique is that the full form of the alcohol consumption trajectory can be modeled, and increases and decreases in alcohol consumption can easily be graphically displayed (e.g., in SPSS). In addition, the shape of the trajectory of alcohol consumption (linear, quadratic, cubic, etc.) can be tested empirically through significance testing. The primary disadvantage of RM-ANOVA in this case is that the test of the difference between the two groups is limited to the shape of the overall trajectory and cannot be extended to specific periods of time. For example, prior to the treatment, we would expect that the two groups should not differ in their alcohol consumption trajectories; only after the treatment do we expect differences. Rather than specifically testing the difference in trajectories following the treatment, a test is being conducted about the overall shape of the curves. In addition, this technique cannot test the assumption that the two groups are equal in their alcohol consumption trajectories prior to the treatment, a necessary precondition for making inferences about the effectiveness of the program. To test these assumptions, we need to move to multilevel modeling (MLM). Multilevel modeling is a statistical technique designed for use with data that violate the assumption of independence (see Kenny, Kashy, & Cook, 2006). The assumption of independence states that, after controlling for the independent variables, the residuals should be independent of one another. Longitudinal data (as well as dyadic data) tend to violate this assumption. The major advantage of MLM is that the structure of these residual covariances can be directly specified (see Singer, 1998, for examples). In addition, and more specifically in reference to the current program evaluation example, the growth function of longitudinal data can be specified in a number of flexible ways (see, for example, Singer & Willett, 2003, p. 138). One interesting technique that has seen little utilization in the evaluation field is what has been called a piecewise growth model (see Seltzer, Frank, & Bryk, 1994, for an example). In this model, rather than specifying a single linear or curvilinear slope, two slopes with a single intercept are
modeled. The initial slope models change up to a specific point, whereas the subsequent slope models change after that point. The utility of this method for time series analysis should be apparent: trajectories of change can be modeled before and after the implementation of a treatment, intervention, or program. In terms of the present example, change in alcohol consumption can be modeled for the entire sample before and after the program implementation. Importantly, different slopes can be estimated for the two different groups (program vs. no program) and empirically tested for differences. For example, consider a model that specifies a linear growth trajectory for the initial slope (prior to the program) and another linear growth trajectory for the subsequent slope (after the program). In a piecewise growth model, significance testing (as well as the estimation of effect sizes) can be performed separately for both the initial slope and the subsequent slope. Further, by adding the fixed effect of program participation (program vs. no program), initial and subsequent slopes for the different groups can be modeled and the differences between the initial and subsequent slopes for the two groups can be tested. With piecewise growth modeling, the evaluator can test the assumption that the initial slopes of the two groups are, in fact, the same as well as test the hypothesis that following the program the growth trajectories of the two groups differ systematically, with the intended effect being that the program group shows a less positive or even negative slope over time (increased alcohol consumption among adolescents being presumed undesirable). Although this method is very useful for the interrupted time series design, it is not without its drawbacks. One drawback is the complexity of model building; however, this drawback is quickly ameliorated with some research on the topic and perhaps some collaboration. Another drawback to this technique is that the change in the subsequent slope may be driven primarily by a large change in behavior immediately following the program and does not necessarily indicate a lasting change over time. Other modeling techniques can be used to explore such variations in behavioral change over time. The interested reader can refer to Singer and Willett (2003). Structural equation modeling can also be used to model longitudinal data through the use of latent growth curve models. For technical details on how to specify a latent growth curve model,
the interested reader can refer to Duncan, Duncan, and Stryker (2006). The primary advantage of using latent growth curve modeling over MLM is that latent variables can be used (indeed, piecewise growth models can be estimated in a latent growth model framework as well; see Muthén & Muthén, 2009, p. 105). In addition, more complex models such as multilevel latent growth curve models can be implemented. Such models also account for the interdependence of longitudinal data but are also useful when data are nested—for example, when there are longitudinal data on alcohol consumption in several different schools. These models can become increasingly complex, and it is recommended that evaluators without prior knowledge of this statistical technique seek the advice of, and possibly collaborate with, experts on this topic.
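To illustrate the piecewise growth model described above, the following sketch fits a two-slope multilevel model with a random intercept for each adolescent. The simulated data, the knot placed after the third wave (the assumed point of program onset), and all variable names are our own illustrative assumptions; the chapter's own pointers for fitting such models are tools such as SAS PROC MIXED (Singer, 1998) and Mplus (Muthén & Muthén), and the Python/statsmodels code here is simply an alternative sketch.

# A minimal sketch, assuming simulated data: 200 adolescents measured yearly from
# grade 6 through grade 12, with the program beginning after the third wave.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(seed=3)
n_teens, waves = 200, 7
time = np.tile(np.arange(waves), n_teens)                # 0, 1, ..., 6 within each adolescent
teen = np.repeat(np.arange(n_teens), waves)
program = np.repeat(rng.integers(0, 2, n_teens), waves)  # 1 = enrolled in the program

pre = np.minimum(time, 3)                                # slope up to the knot (program onset)
post = np.maximum(time - 3, 0)                           # slope after the knot
person = np.repeat(rng.normal(0, 1, n_teens), waves)     # person-specific intercepts
# Simulate rising consumption that flattens after program onset for the program group
alcohol = (2 + 0.6 * pre + (0.6 - 0.5 * program) * post
           + person + rng.normal(0, 0.5, time.size))

df = pd.DataFrame({"alcohol": alcohol, "teen": teen, "program": program,
                   "pre": pre, "post": post})

# Random intercept per adolescent; pre:program tests whether the two groups share
# the same pre-program slope, and post:program tests whether they diverge afterward.
model = smf.mixedlm("alcohol ~ pre * program + post * program",
                    data=df, groups=df["teen"]).fit()
print(model.summary())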

What Do Program Evaluators Actually Do? Part III: Qualitative Methods
Foundations of Qualitative Methods: Credibility and Quality
The two principal pillars on which qualitative program evaluation rests are credibility and quality. These two concepts lie at the heart of all qualitative research, regardless of any more specific philosophical or ideological subscriptions (Patton, 1999). Although these concepts are not considered to be purely independent of each other in the literature, for the sake of clarity of explanation, we will treat them as such unless otherwise specified.

credibility
When performing a literature search on the credibility concept within the qualitative paradigms, the emphasis seems to be primarily on the researcher and only secondarily on the research itself. The points most notably brought to light are those of researcher competence and trustworthiness.

competence
Competence is the key to establishing the credibility of a researcher. If a researcher is deemed incompetent, then the credibility and quality of the entire study immediately come into question. One of the biggest issues lies with the training of qualitative researchers in methods. In a classic example of the unreliability of eyewitness testimonies, Katzer, Cook, and Crouch (1978) point out what can happen when sufficient training does not occur. Ignorance is not bliss, at least in science. Giving any researcher tools without the knowledge to use them
is simply bad policy. Subsequent to a researcher's initial training, the next most important consideration with respect to competence is the question of their scientific "track record." If an evaluator has repeatedly demonstrated the ability to perform high-quality research, then it can be assumed that the researcher is competent.

trustworthiness
Something else to note when considering the credibility of an evaluator is trustworthiness. There is little doubt that the researcher's history must be taken into account (Patton, 1999). Without knowing where the researcher is "coming from," in terms of possible ideological commitments, the reports made by a given evaluator may appear objective but might actually be skewed by personal biases. This is especially a problem with more phenomenological methods of qualitative program evaluation, such as interpretive and social constructionist approaches. As Denzin (1989) and many others have pointed out, pure neutrality or impartiality is rare. This means that not being completely forthright about any personal biases should be a "red flag" regarding the trustworthiness (or lack thereof) of the evaluator.

judging credibility
There are those who argue that credibility and trustworthiness are not traits that an evaluator can achieve alone, but rather that they have to be established by the stakeholders, presumably democratically and with all providing equal input (Atkinson, Heath, & Chenail, 1991). This notion seems to be akin to that of external validity. It is also fundamentally different from another school of thought that claims to be able to increase "truth value" via external auditing (Lincoln & Guba, 1985). As with external validation, Atkinson and colleagues would argue that evaluators are not in a position to judge their own work and that separate entities should be responsible for such judging. According to this perspective, stakeholders need to evaluate the evaluators. If we continue down that road, then the evaluators of the evaluators might need to be evaluated, and so on and so forth. As the Sixth Satire, written by the first-century Roman poet Decimus Iunius Iuvenalis, asks: "quis custodiet ipsos custodes?" ("who shall watch the watchers?"; Ramsay, 1918). The way around this infinite regress is to develop some sort of standard against which the researcher can be compared.

Evaluators can only be as credible as the system that brought them to their current positions. Recall that there is a diverse array of backgrounds among program evaluators and a broad armamentarium of research methods and statistical models from which they can select, as well as the fact that there are currently no formal training standards in program evaluation (Stevahn et al., 2005). Until a standard of training is in place, there is no objective way to assess the credibility of a researcher, and evaluators are forced to rely on highly subjective measures of credibility, fraught with biases and emotional reactions.

Quality
The other key concern in qualitative program evaluation is quality. Quality concerns echo those voiced regarding questions of reliability and validity in quantitative research, although these concepts are framed within the philosophical framework of the research paradigm (Golafshani, 2003). Patton, as the "go-to guy" for how to do qualitative program evaluations, has applied quantitative principles to qualitative program evaluation throughout his works (Patton, 1990, 1997, 1999), although they seem to fall short in application. His primary emphases are on rigor in testing and interpretation.

rigorous testing
Apart from being thorough in the use of any single qualitative method, there appears to be a single key issue with respect to testing rigor, and this is called triangulation. Campbell discussed the concept of methodological triangulation (Campbell, 1953, 1956; Campbell & Fiske, 1959). Triangulation is the use of multiple methods, each having its own unique biases, to measure a particular phenomenon. This convergence of multiple methods allows the systematic variance ascribable to the "trait" being measured by multiple indicators to be partitioned from the systematic variance associated with each "method" and from the unsystematic variance attributable to the inevitable and random "error" of measurement, regardless of the method used. Within the context of qualitative program evaluation, this can consist either of mixing quantitative and qualitative methods or of mixing multiple qualitative methods. Patton (1999) outspokenly supported the use of either form of triangulation, because each method of measurement has its own advantages and disadvantages.

Other contributors to the literature have claimed that the "jury is still out" concerning the advantages of triangulation (Barbour, 1998) and that clearer definitions are needed to determine triangulation's applicability to qualitative methods. Barbour's claim seems unsupported because there is a clear misinterpretation of Patton's work. Patton advocates a convergence of evidence. Because the nature of qualitative data is not as precise as the nature of quantitative data, traditional hypothesis testing is virtually impossible. Barbour is under the impression that Patton is referring to perfectly congruent results. This is obviously not possible because, as stated above, there will always be some divergence between measures based on different methods of measurement. Patton is advocating the use of multiple and mixed methods to produce consistent results. One example of how to execute triangulation within the qualitative paradigm focused on three different educational techniques (Oliver-Hoyo & Allen, 2006). For cooperative grouping, hands-on activities, and graphical skills, these authors used interviews, reflective journal entries, surveys, and field notes. The authors found that the exclusive use of surveys would have led to different conclusions, because the results of the surveys alone indicated that there was either no change or a negative change, whereas the other methods instead indicated that there was a positive change with the use of these educational techniques. This demonstrates the importance of using triangulation. When results diverge, meaning that they show opposing trends using different methods, the accuracy of the findings falls into question. Lincoln and Guba (1985) have also discussed the importance of triangulation but have emphasized its importance in increasing the rigor and trustworthiness of research with respect to the interpretation stage. This is ultimately because all methods will restrict what inferences can be made from a qualitative study.

rigorous interpretation
As with quantitative program evaluation, qualitative methods require rigorous interpretation at two levels: the microscale, which is the sample, and the macroscale, which is the population for quantitative researchers and is most often the social or global implication for qualitative researchers. Looking at qualitative data is reminiscent of exploratory methods in quantitative research but without the significance tests. Grounded Theory is one such analytic method. The job of the researcher
is to systematically consider all of the data and to extract theory from the data (Strauss & Corbin, 1990). The only exception made is for theory extension, in which beginning with a preconceived theory is acceptable. Repeatedly throughout the literature (e.g., Patton, 1999; Atkinson, Heath, & Chenail, 1991; Lincoln & Guba, 1985), the evaluator is emphasized as the key instrument in the analysis of data. Although statistics can be helpful, they are seen as restricting and as overriding any "insight" from the researcher. Analysis necessarily depends on the "astute pattern recognition" abilities of the investigating researcher (Patton, 1999). What Leech and Onwuegbuzie (2007) have called "data analysis triangulation" is essentially an extension of the triangulation concept described by Patton (1999) as applied to data analysis. The idea is that by analyzing data with different techniques, convergence can be determined, making the findings more credible or trustworthy. Because a large part of qualitative inquiry is subjective and dependent on a researcher's creativity, Patton (1999) has advocated reporting all relevant data and making explicit all thought processes, thus avoiding the problem of interpretive bias. This may allow anyone who reads the evaluation report to determine whether the results and suggestions were sufficiently grounded. Shek et al. (2005) have outlined the necessary steps that must occur to demonstrate that the researcher is not simply forcing their opinions into their research.

Qualitative Methods in Program Evaluation
The most common methods in qualitative program evaluation are straightforward and fall into one of two broad categories: first-party or third-party methods (defined from the perspective of the evaluands, which are the programs being evaluated). These methods are also used by more quantitative fields of inquiry, although they are not usually framed as part of the research process.

first-party methods
When an evaluator directly asks questions of the entities being evaluated, the evaluator is utilizing a first-party method. Included in this category are techniques such as interviews (whether of individuals or focus groups), surveys, open-ended questionnaires, and document analyses. Interviews, surveys, and open-ended questionnaires are similar in nature. In interviews, the researcher begins with a set of potential questions, and depending on the way in which the individuals
within the entity respond, the questions will move in a particular direction. The key here is that the questioning is fluid, open, and not a forced choice. In the case of surveys and open-ended questionnaires, fixed questions are presented to the individual, but the potential answers are left as open as possible, such as in short-answer responding. As with interviews, wherever possible, the questioning is open and not a forced choice (see Leech & Onwuegbuzie, 2007; Oliver-Hoyo & Allen, 2006; Pugach, 2001; Patton, 1999). Although document analysis is given its own category in the literature (Pugach, 2001; Patton, 1999), it seems more appropriate to include the document analysis technique along with the other first-party methods. Document analysis will usually be conducted on prior interviews, transcribed statements, or other official reports. It involves doing "archival digging" to gather data for the evaluation. Pulling out key "success" or "failure" stories is pivotal to performing these kinds of analyses, and such stories are utilized as often as possible for illustrative purposes. The unifying theme of these techniques is that the information comes from within the entity being evaluated.

third-party methods
The other primary type of methodology used in qualitative research is third-party methods. The two major third-party methods are naturalistic observations and case studies. These methods are more phenomenological in nature and require rigorous training on the part of the researcher for proper execution; they are thus intimately tied to the issues raised in the Competence section above. Naturalistic observation has been used by biological and behavioral scientists for many years and involves observation of behavior within its natural context. This method involves observing some target (whether a human or nonhuman animal) performing a behavior in its natural setting. This is most often accomplished by reviewing video recordings or by recording the target in person while not interacting with the target. There are, however, many cases of researchers interacting with the target and then "going native" or becoming a member of the group they initially sought to study (Patton, 1999). Some of the most prominent natural scientists have utilized this method (e.g., Charles Darwin, Jane Goodall, and Isaac Newton). According to Patton (1999), there are well-documented problems with this method, including phenomena like researcher presence effects, "going native," researcher biases,
and concerns regarding researcher training. Despite the inherent risks and problems with naturalistic observation, it has been, and will likely continue to be, a staple method within scientific inquiry. Case studies can be special cases of naturalistic observation or can be a special kind of "artificial" observation. Case studies provide extensive detail about a few individuals (Banfield & Cayago-Gicain, 2006; Patton, 1999) and can simply be used to demonstrate a point (as in Abma, 2000). Case studies usually take a substantial amount of time to gather appropriate amounts of idiographic data. This method utilizes any records the researcher can obtain regarding the individual being studied (self-report questionnaires, interviews, medical records, performance reviews, financial records, etc.). As with naturalistic observation, case study researchers must undergo much training before they can be deemed "capable" of drawing conclusions based on a single individual. The problems with case studies include all of those of naturalistic observation, with the addition of a greater probability of sampling error. Because case studies are so intensive, they are often also very expensive. The salience and exhaustiveness of a few cases make it difficult to notice larger, nominal trends in the data (Banfield & Cayago-Gicain, 2006). This could also put a disproportionate emphasis on the "tails" of the distribution, although that may be precisely what the researcher wants to accomplish (see next section).

Critiques/Criticisms of Quantitative Methods
One of the major critiques of quantitative methods by those in qualitative evaluation is that of credibility. According to those using qualitative evaluation methods, the relevance of quantitative findings to what is "important" or what is "the essence" of the question is rather poor (see discussion in Reichardt & Rallis, 1994a, 1994b). Recall that according to Atkinson et al. (1991), the relevance of findings, and whether they are appropriate, cannot be determined by the evaluator. The stakeholders are the only ones who can determine relevance. Although there are those in qualitative program evaluation who think almost everything is caused by factors like "social class" and "disparity in power," Atkinson and colleagues would argue that the evaluator is not able to determine what is or is not relevant to the reality experienced by the stakeholders. Another criticism is that quantitative research tends to focus simply on the majority, neglecting the individuals in the outer ends of the normal
distribution. This is a valid critique for those quantitative researchers who tend to “drop” their outliers for better model fits. Banfield and Cayago-Gicain (2006) have pointed out that qualitative research allows for more detail on a smaller sample. This allows for more context surrounding individuals to be presented. With additional knowledge from the “atypical” (tails of the distribution) cases, theory can be extracted that fits all of the data best and not just the “typical” person.

Beyond the Qualitative/Quantitative Debate
Debate about the superiority of qualitative versus quantitative methodology has a long history in program evaluation. Prior to the 1970s, randomized experiments were considered the gold standard in impact assessment. More and more, however, as evaluators realized the limitations of randomized experiments, quasi-experiments became more acceptable (Madey, 1982). It was also not until the early 1970s that qualitative methods became more acceptable; however, epistemological differences between the two camps persisted, perpetuating the debate and even leading to distrust and slander between followers of the different perspectives (Kidder & Fine, 1987). In an effort to stem the tide of the qualitative–quantitative debate, some evaluators have long called for integration between the two approaches. By recognizing that methods typically associated with the qualitative and quantitative paradigms are not inextricably linked to these paradigms (Reichardt & Cook, 1979), an evaluator has greater flexibility with which to choose specific methods that are simply the most appropriate for a given evaluation question (Howe, 1988). Further, others have pointed out that because the qualitative and quantitative approaches are not entirely incompatible (e.g., Reichardt & Rallis, 1994a, 1994b), common ground can be found between the two methods when addressing evaluation questions. An evaluator thus may choose to use quantitative or qualitative methods alone or may choose to use both methods in what is known as a mixed methods design. A mixed methods approach to evaluation has been advocated on the basis that the two methods (1) provide cross-validation (triangulation) of results and (2) complement each other, where the relative weakness of one method is offset by the relative strength of the other. For example, despite the purported epistemological differences between the
two paradigms, the different approaches to evaluation often lead to the same answers (Sale, Lohfeld, & Brazil, 2002). Thus, combining both methods into the same evaluation can result in converging lines of evidence. Further, each method can be used to complement the other. For example, the use of qualitative data collection techniques can help in the development or choice of measurement instruments, as the personal interaction with individual participants may pave the way for collecting more sensitive data (Madey, 1982). Despite the promise of integrating qualitative and quantitative methods through a mixed method approach, Sale et al. (2002) challenged the notion that qualitative and quantitative methods are separable from their respective paradigms, contrary to the position advocated by Reichardt and Cook (1979). Indeed, these authors have suggested that because the two approaches deal with fundamentally different perspectives, the use of both methods to triangulate or complement each other is invalid. Rather, mixed methods should be used in conjunction with one another only with the recognition of the different questions that they address. In this view, it should be recognized that qualitative and quantitative methods do address different questions, but at the same time they can show considerable overlap. Thus, mixed methods designs provide a more complete picture of the evaluation space by providing all three components: cross-validation, complementarity, and unique contributions from each. Despite the utility in principle of integrating both qualitative and quantitative methods in evaluation and the more recent developments in mixed methodology (see Greene & Caracelli, 1997), the overwhelming majority of published articles in practice employ either qualitative or quantitative methods to the exclusion of the other. Perhaps one reason for the persistence of the single methodology approach is the lack of training in both approaches in evaluation training programs. For example, the AEA website (http://www.eval.org) lists 51 academic programs that have an evaluation focus or evaluation option. In a review of each of these programs, we found that none of the evaluation programs had a mixed methods focus. Moreover, when programs did have a focus, it was on quantitative methods. Further, within these programs quantitative methods and qualitative methods were generally taught in separate classes, and there was no evidence of any class in any program that was focused specifically on mixed methods designs. Indeed, Johnson
and Onwuegbuzie (2004) have noted that “ . . . graduate students who graduate from educational institutions with an aspiration to gain employment in the world of academia or research are left with the impression that they have to pledge allegiance to one research school of thought or the other” (p. 14). Given the seeming utility of a mixed methods approach, it is unfortunate that more programs do not offer specific training in these techniques.

Competing Paradigms or Possible Integration?
In summary, the quantitative and qualitative approaches to program evaluation have been widely represented as incommensurable Kuhnian paradigms (e.g., Guba & Lincoln, 1989). On the other hand, it has been suggested that perhaps the road to reconciliation lies with Reichenbach's (1938) important distinction between the context of discovery versus the context of justification in scientific research. Sechrest and Figueredo (1993) paraphrased their respective definitions: In the context of discovery, free reign is given to speculative mental construction, creative thought, and subjective interpretation. In the context of justification, unfettered speculation is superseded by severe testing of formerly favored hypotheses, observance of a strict code of scientific objectivity, and the merciless exposure of one's theories to the gravest possible risk of falsification. (p. 654)

Based on that philosophical perspective, Sechrest and Figueredo (1993) recommended the following methodological resolution of the quantitative/qualitative debate: We believe that some proponents of qualitative methods have incorrectly framed the issue as an absolute either/or dichotomy. Many of the limitations that they attribute to quantitative methods have been discoursed upon extensively in the past. The distinction made previously, however, was not between quantitative and qualitative, but between exploratory and confirmatory research. This distinction is perhaps more useful because it represents the divergent properties of two complementary and sequential stages of the scientific process, rather than two alternative procedures . . . Perhaps a compromise is possible in light of the realization that although rigorous theory testing is admittedly sterile and nonproductive without adequate theory development, creative theory
construction is ultimately pointless without scientific verification. (p. 654)

We also endorse that view. However, in case Sechrest and Figueredo (1993) were not completely clear the first time, we will restate this position here a little more emphatically. We believe that qualitative methods are most useful in exploratory research, meaning early in the evaluation process, the so-called context of discovery, in that they are more flexible and open and permit the researcher to follow intuitive leads and discover previously unknown and unimagined facts that were quite simply not predicted by existing theory. Qualitative methods are therefore a useful tool for theory construction. However, the potentially controversial part of this otherwise conciliatory position is that it is our considered opinion that qualitative methods are inadequate for confirmatory research, the so-called context of justification, in that they do not and cannot even in principle be designed to rigorously subject our theories to critical risk of falsification, as by comparison to alternative theories (Chamberlin, 1897; Platt, 1964; Popper, 1959; Lakatos, 1970, 1978). For that purpose, quantitative methods necessarily excel because of their greater methodological rigor and because they are equipped to do just that. Quantitative methods are therefore a more useful tool for theory testing. This does not make quantitative evaluation in any way superior to qualitative evaluation, in that exploration and confirmation are both part of the necessary cycle of scientific research. It is virtually routine in many other fields, such as in the science of ethology, to make detailed observations regarding the natural history of any species before generating testable hypotheses that predict their probable behavior. In cross-cultural research, it is standard practice to do the basic ethnographical exploration of any new society under study prior to making any comparative behavioral predictions. These might be better models for program evaluation to follow than constructing the situation as an adversarial one between supposedly incommensurable paradigms.

Conclusions and Recommendations for the Future
As a possible solution to some of the structural problems, moral hazards, and perverse incentives in the practice of program evaluation that we have reviewed, Scriven (1976, 1991) long ago suggested that the program funders should pay for summative evaluations and pay the summative evaluators
directly. We completely agree with this because we believe that the summative program evaluators must not have to answer to the evaluands and that the results of the evaluation should not be “filtered” through them. For example, in the Propriety Standards for Conflicts of Interest, The Joint Committee on Standards for Educational Evaluation (1994) has issued the following guideline: “Wherever possible, obtain the evaluation contract from the funding agency directly, rather than through the funded program or project” (p. 116). Our only problem with this guideline is that the individual evaluator is called on to implement this solution. Should an ethical evaluator then decline contracts offered by the funded program or project? This is not a realistic solution to the problem. As a self-governing society, we should simply not accept summative evaluations in which the funded programs or projects (evaluands) have contracted their own program evaluators. This is a simple matter of protecting the public interest by making the necessary institutional adjustments to address a widely recognized moral hazard. Similarly, in the Propriety Standards for Disclosure of Findings, The Joint Committee on Standards for Educational Evaluation (1994) has issued various guidelines for evaluators to negotiate in advance with clients for complete, unbiased, and detailed disclosure of all evaluation findings to all directly and indirectly affected parties. The problem is that there is currently no incentive in place for an individual evaluator to do so and possibly jeopardize the award of an evaluation contract by demanding conditions of such unrestricted dissemination of information to which almost no client on this planet is very likely to agree. On the other hand, we recommend that the evaluands should pay for formative evaluations and pay the formative evaluators directly. This is because we believe that formative evaluators should provide continuous feedback to the evaluands and not publish those results externally before the program is fully mature (e.g., Tharp & Gallimore, 1979). That way, the formative evaluator can gain the complete trust and cooperation of the program administrators and the program staff. Stufflebeam (2001) writes: Clients sometimes can legitimately commission covert studies and keep the findings private, while meeting relevant laws and adhering to an appropriate advance agreement with the evaluator. This can be the case in the United States for private organizations not governed by public disclosure laws. Furthermore,
an evaluator, under legal contractual agreements, can plan, conduct, and report an evaluation for private purposes, while not disclosing the findings to any outside party. The key to keeping client-controlled studies in legitimate territory is to reach appropriate, legally defensible, advance, written agreements and to adhere to the contractual provisions concerning release of the study’s findings. Such studies also have to conform to applicable laws on release of information. (p. 15)

In summary, summative evaluations should generally be external, whereas formative evaluations should generally be internal. Only strict adherence to these guidelines will provide the correct incentive system for all the parties concerned, including the general public, which winds up paying for all this. The problem essentially boils down to one of intellectual property. Who actually owns the data generated by a program evaluation? In a free market society, the crude but simple answer to this question is typically "whoever is paying for it!" In almost no case is it the program evaluator, who is typically beholden to one party or another for employment. We should therefore arrange for the owner of that intellectual property to be in every case the party whose interests are best aligned with those of the society as a whole. In the case of a formative evaluation, that party is the program-providing agency (the evaluand) seeking to improve its services with a minimum of outside interference, whereas in the case of a summative evaluation, that party is the program-funding agency charged with deciding whether any particular program is worth society's continuing investment and support. Many informative and insightful comparisons and contrasts have been made on the relative merits and limitations of internal and external evaluators (e.g., Braskamp, Brandenburg, & Ory, 1987; Love, 1991; Mathison, 1994; Meyers, 1981; Newman & Brown, 1996; Owen & Rogers, 1999; Patton, 1997; Tang, Cowling, Koumijian, Roeseler, Lloyd, & Rogers, 2002; Weiss, 1998). Although all of those considerations are too many to list here, internal evaluators are generally valued for their greater availability and lower cost as well as for their greater contextual knowledge of the particular organization and ability to obtain a greater degree of commitment from stakeholders to the ultimate recommendations of the evaluation, based on the perceived legitimacy obtained through their direct experience in the program. We believe that
these various strengths of internal evaluators are ideally suited to the needs of formative evaluation; however, some of these same characteristics might compromise their credibility in the context of a summative evaluation. In contrast, external evaluators are generally valued for their greater technical expertise as well as for their greater independence and objectivity, including greater accountability to the public interest and ability to criticize the organization being evaluated—hence their greater ability to potentially position themselves as mediators or arbiters between the stakeholders. We believe that these various strengths of external evaluators are ideally suited to the needs of summative evaluation; however, some of these same characteristics might compromise their effectiveness in the context of a formative evaluation. A related point is that qualitative methods are arguably superior for conducting the kind of exploratory research often needed in a formative evaluation, whereas quantitative methods are arguably superior for conducting the confirmatory research often needed in a summative evaluation. By transitive inference with our immediately prior recommendation, we would envision qualitative methods being of greater use to internal evaluators and quantitative methods being of greater use to external evaluators, if each method is applied to what it excels at achieving, within its contingently optimal context. With these conclusions, we make our final recommendation that the qualitative/quantitative debate be officially ended, with the recognition that each kind of research has its proper and necessary place in the cycle of scientific research and, by logical implication, that of program evaluation. Each side must abandon the claims that their preferred methods can do it all and, in the spirit of the great evaluation methodologist and socio-cultural evolutionary theorist Donald Thomas Campbell, recognize that all our methods are fallible (Campbell & Fiske, 1959) and that only through exploiting their mutual complementarities can we put all of the interlocking fish scales of omniscience back together (Campbell, 1969).

References
Abma, T. A. (2000). Stakeholder conflict: A case study. Evaluation and Program Planning, 23, 199–210. Atkinson, B., Heath, A., & Chenail, R. (1991). Qualitative research and the legitimization of knowledge. Journal of Marital and Family Therapy, 17(2), 175–180. Barbour, R. S. (1998). Mixing qualitative methods: Quality assurance or qualitative quagmire? Qualitative Health Research, 8(3), 352–361.

Banfield, G., & Cayago-Gicain, M. S. (2006). Qualitative approaches to educational evaluation: A regional conference-workshop. International Education Journal, 7 (4), 510–513. Berry, D. H. (2000) Cicero Defense Speeches, trans. New York: Oxford University Press. Boruch, R. F. (1997). Randomized experiments for planning and evaluation: A practical guide. Thousand Oaks, CA: Sage. Braskamp, L.A., Brandenburg, D.C. & Ory, J.C. (1987). Lessons about clients’ expectations. In J Nowakowski (Ed.), The client perspective on evaluation: New Directions For Program Evaluation, 36, 63–74. San Francisco, CA: Jossey-Bass. Bryk, A. S. (1980). Analyzing data from premeasure/postmeasure designs. In S. Anderson, A. Auquier, W. Vandaele, & H. I. Weisburg (Eds.), Statistical methods for comparative studies (pp. 235–260). Hoboken, NJ: John Wiley & Sons. Campbell, D. T. (1953). A study of leadership among submarine officers. Columbus, OH: The Ohio State University, Personnel Research Board. Campbell, D. T. (1956). Leadership and its effects upon the group. Columbus, OH: Bureau of Business Research, The Ohio State University. Campbell, D. T. (1957). Factors relevant to the validity of experiments in social settings. Psychological Bulletin, 54, 297–312. Campbell, D. T. (1969). Ethnocentrism of disciplines and the fish-scale model of omniscience. In M. Sherif and C.W. Sherif, (Eds.), Interdisciplinary Relationships in the Social Sciences, (pp. 328–348). Chicago IL: Aldine. Campbell, D. T., & Fiske, D. W. (1959). Convergent and discriminant validation by the multitrait-multimethod matrix. Psychological Bulletin 56 (2), 81–105. Center for Disease Control (2007). Youth media campaign, VERB logic model. Retrieved May 18, 2012, from http://www.cdc.gov/youthcampaign/research/logic.htm Chamberlin, T.C. (1897). The method of multiple working hypotheses. Journal of Geology, 5, 837–848. Clayton, R. R., Cattarello, A. M., & Johnstone B. M. (1996). The effectiveness of Drug Abuse Resistance Education (Project DARE): 5-year follow-up results. Preventive Medicine 25(3), 307–318. Cook, T. D., & Campbell, D. T. (1979). Quasi-experimentation: design and analysis issues for field settings. Chicago, IL: RandMcNally. Cook, T. D., Cook, F. L., & Mark, M. M. (1977). Randomized and quasi-experimental designs in evaluation research: An introduction. In L. Rutman (Ed.), Evaluation research methods: A basic guide (pp. 101–140). Beverly Hills, CA: Sage. Cook, T. D., Scriven, M., Coryn, C. L. S., & Evergreen, S. D. H (2010). Contemporary thinking about causation in evaluation: A dialogue with Tom Cook and Michael Scriven. American Journal of Evaluation, 31, 105–117. Dembe, A. E., & Boden, L. I. (2000). Moral hazard: A question of morality? New Solutions 2000, 10(3), 257–279. Denzin, N. K. (1989). Interpretive interactionism. Newbury Park, CA: Sage. Dukes, R. L., Stein, J. A., & Ullman, J. B. (1996). Longterm impact of Drug Abuse Resistance Education (D.A.R.E.). Evaluation Review, 21(4), 483–500. Dukes, R. L., Ullman, J. B., & Stein, J. A. (1996). Three-year follow-up of Drug Abuse Resistance Education (D.A.R.E.). Evaluation Review, 20(1), 49–66.

Duncan, T. E., Duncan, S. C., & Stryker, L. A. (2006). An introduction to latent variable growth curve modeling: Concepts, issues, and applications (2nd Ed.). Mahwah, NJ: Laurence Erlbaum. Ennett, S. T., Tobler, M. S., Ringwalt, C. T., & Flewelling, R. L. (1994). How effective is Drug Abuse Resistance Education? A meta-analysis of Project DARE outcomes evaluations. American Journal of Public Health, 84 (9), 1394–1401. General Accountability Office (2003). Youth Illicit Drug Use Prevention (Report No. GAO-03-172R). Marjorie KE: Author. Golafshani, N. (2003). Understanding reliability and validity in qualitative research. The Qualitative Report, 8(4), 597–606. Greene, J. C., & Caracelli, V. J. (1997). Defining and describing the paradigm issue in mixed-method evaluation. In J. C. Greene & V. J. Caracelli (Eds.), Advances in mixed-method evaluation: The challenges and benefits of integrating diverse paradigms (pp. 5–17).(New Directions for Evaluation, No. 74). San Francisco: Jossey-Bass. Guba, E. G., & Lincoln, Y. S. (1989). Fourth generation evaluation. Newbury Park: Sage. Heckman, J. J., & Smith, J. A. (1995). Assessing the case for social experiments. Journal of Economic Perspectives, 9, 85–110. Hollister, R. G. & Hill, J (1995). Problems in the evaluation of community-wide initiatives. In Connell, J. P., Kubish, A. C., Schorr, L. B., & Weiss, C. H. (Eds.), New approaches to evaluating community intiatives: Concepts, methods, and contexts (pp. 127–172). Washington, DC: Aspen Institute. Howe, K. R. (1988). Against the quantitative-qualitative incompatibility thesis or dogmas die hard. Educational Researcher, 17, 10–16. Huitema, B. E. (1980). The analysis of covariance and alternatives. New York, NY: John Wiley & Sons. Johnson, R. B., & Onwuegbuzie, A. J. (2004). Mixed-methods research: A research paradigm whose time has come. Educational Researcher, 33, 14–26. Katzer, J., Cook, K. and Crouch, W. (1978). Evaluating information: A guide for users of social science research. Reading, MA: Addison-Wesley. Kenny, D. A. (1975). A quasi-experimental approach to assessing treatment effects in the non-equivalent control group design. Psychological Bulletin, 82, 345–362. Kenny, D. A., Kashy, D. A. & Cook, W. L. (2006). Dyadic data analysis. New York: Guilford Press. Kidder, L. H., & Fine, M. (1987). Qualitative and quantitative methods: When stories converge. In M. M. Mark & R. L. Shotland (Eds.), Multiple methods in program evaluation (pp. 57–75). Indianapolis, IN: Jossey-Bass. King, J., A., Stevahn, L., Ghere, G. & Minnema, J. (2001). Toward a taxonomy of essential evaluator competencies. American Journal of Evaluation, 22, 229–247. Kirk, R. E. (2009). Experimental Design. In R.E. Millsap & A. Maydeu-Olivares (Eds.), The Sage handbook of quantitative methods in psychology (pp. 23–45). Thousand Oaks, CA: Sage. Lakatos, I. (1970). Falsification and the methodology of scientific research programmes. In Lakatos, I., & Musgrave, A., (Eds.), Criticism and the growth of knowledge (pp. 91–196). Cambridge, UK: Cambridge University Press. Lakatos, I. (1978). The methodology of scientific research programs. Cambridge, UK: Cambridge University Press. Leech, N. L., & Onwuegbuzie, A. J. (2007). An array of qualitative data analysis tools: A call for data analysis triangulation. School Psychology Quarterly, 22(4), 557–584.

Loftus, E.F. (1979). Eyewitness Testimony, Cambridge, MA: Harvard University Press. Love, A.J. (1991). Internal evaluation: Building organizations from within. Newbury Park, CA: Sage. Madey, D. L. (1982). Some benefits of integration qualitative and quantitative methods in program evaluation. Educational Evaluation and Policy Analysis, 4, 223–236. Mark, M. M., & Cook, T. D. (1984). Design of randomized experiments and quasi-experiments. In L. Rutman (Ed.), Evaluation research methods: A basic guide (pp. 65–120). Beverly Hills, CA: Sage. Mathison, S. (1994). Rethinking the evaluator role: partnerships between organizations and evaluators. Evaluation and Program Planning, 17(3), 299–304. Meyers, W. R. (1981). The Evaluation Enterprise: A Realistic Appraisal of Evaluation Careers, Methods, and Applications. San Francisco, CA: Jossey-Bass. Muthén, L. K. & Muthén, B. O. (1998–2009). Mplus user’s guide. Statistical analysis with latent variables. Los Angeles, CA: Muthén & Muthén. Newcomer, K. E. & Wirtz, P. W. (2004). Using statistics in evaluation. In Wholey, J. S., Hatry, H. P. & Newcomer, R. E. (Eds.), Handbook of practical program evaluation (pp. 439–478). San Francisco, CA: John Wiley & Sons. Newman, D. L. & Brown, R. D. (1996). Applied ethics for program evaluation, San Francisco, CA: Sage. Office of Management and Budget. (2009). A new era of responsibility: Renewing America’s promise. Retrieved May 18, 2012, from http://www.gpoaccess.gov/usbudget/fy10/pdf/fy10newera.pdf Oliver-Hoyo, M., & Allen, D. (2006). The use of triangulation methods in qualitative educational research. Journal of College Science Teaching, 35, 42–47. Owen, J. M., & Rogers, P. J. (1999). Program Evaluation: Forms and Approaches (2nd ed.), St Leonards, NSW: Allen & Unwin. Page, R. B. (1909). The Letters of Alcuin. New York: The Forest Press. Patton, M. Q. (1990) Qualitative evaluation and research methods. Thousand Oaks, CA: Sage. Patton, M. Q. (1994). Developmental evaluation. Evaluation Practice. 15(3), 311–319. Patton, M. Q. (1996). A world larger than formative and summative. Evaluation Practice, 17 (2), 131–144. Patton, M.Q. (1997). Utilization-focused evaluation: The new century text (3rd ed.). Thousand Oaks, CA: Sage. Patton, M. Q. (1999). Enhancing the quality and credibility of qualitative analysis. Health Services Research, 35:5 Part II, 1189–1208. Pauly, M. V. (1974). Overinsurance and public provision of insurance: The roles of moral hazard and adverse selection. Quarterly Journal of Economics, 88, 44–62. Platt, J. R. (1964). Strong inference. Science, 146, 347–353. Popper, K. (1959). The Logic of Scientific Discovery. New York: Basic Books. Pugach, M. C. (2001). The stories we choose to tell: Fulfilling the promise of qualitative research for special education. The Council for Exceptional Children, 67 (4), 439–453. Ramsay, G. G. (1918). Juvenal and Persius. trans. New York: Putnam. Reichardt, C. S. (1979). The statistical analysis of data from nonequivalent groups design. In T. D. Cook & D. T. Campbell
(Eds.), Quasi-experimentation: Design and analysis issues for field settings (pp. 147–206). Chicago, IL: Rand-McNally. Reichardt, C. S. (2009). Quasi-experimental design. In R. E. Millsap & A. Maydeu-Olivares (Eds.), The Sage handbook of quantitative methods in psychology (pp. 46–71). Thousand Oaks, CA: Sage. Reichardt, C. S., & Cook, T. D. (1979). Beyond qualitative versus quantitative methods. In T. D. Cook & C. S. Reichardt (Eds.), Qualitative and quantitative methods in evaluation research (pp. 7–32). Beverly Hills, CA: Sage. Reichardt, C. S., & Rallis, S. F. (1994b). Qualitative and quantitative inquiries are not incompatible: A call for a new partnership. New Directions for Program Evaluation, 61, 85–91. Reichardt, C.S., & Rallis, S. F. (1994a). The relationship between the qualitative and quantitative research traditions. New Directions for Program Evaluation, 61, 5–11. Reichenbach, H. (1938). Experience and prediction. Chicago: University of Chicago Press. Rossi, P. H., & Freeman, H. E. (1993). Evaluation: A systematic approach (5th ed.). Newbury Park, CA: Sage. Sale, J. E. M., Lohfeld, L. H., & Brazil, K. (2002). Revisiting the quantitative-qualitative debate: Implications for mixedmethods research. Quality & Quantity, 36, 43–53. Scriven, M. (1967). The methodology of evaluation. In Gredler, M. E., (Ed.), Program Evaluation (p. 16). Englewood Cliffs, New Jersey: Prentice Hall, 1996. Scriven, M. (1976). Evaluation bias and its control. In C. C. Abt (Ed.) The Evaluation of Social Programs, (pp. 217–224). Beverly Hills, CA: Sage. Scriven, M. (1983). Evaluation idiologies. In G.F. Madaus, M. Scriven & D.L. Stufflebeam (Eds.). Evaluation models: Viewpoints on educational and human services evaluation (pp. 229–260). Boston: Kluwer-Nijhoff. Scriven, M. (1991). Pros and cons about goal-free evaluation. Evaluation Practice, 12(1), 55–76. Sechrest, L., & Figueredo, A. J. (1993). Program evaluation. Annual Review of Psychology, 44, 645–674. Seltzer, M. H., Frank, K. A., & Bryk, A. S. (1994). The metric matters: The sensitivity of conclusions about growth in student achievement to choice of metric. Education Evaluation and Policy Analysis, 16, 41–49. Shadish, W. R., Cook, T. D., & Campbell, D. T. (2002). Experimental and quasi-experimental designs for generalized causal inference. Boston, MA: Houghton Mifflin. Shadish, W. R., Cook, T. D., & Leviton, L. C. (2001). Foundations of program evaluations: Theories of practice. Newberry Park, CA: Sage. Shek, D. T. L., Tang, V. M. Y., & Han, X. Y. (2005). Evaluation of evaluation studies using qualitative research methods in the social work literature (1990–2003): Evidence that constitutes a wake-up call. Research on Social Work Practice, 15, 180–194. Singer, J. D. (1998). Using SAS PROC MIXED to fit multilevel models, hierarchical models, and individual growth models. Journal of Educational and Behavioral Statistics, 24, 323–355. Singer, J. D. & Willett, J. B. (2003). Applied longitudinal data analysis: Modeling change and event occurrence. New York: Oxford University Press. Smith, M. J. (2010). Handbook of program evaluation for social work and health professionals. New York: Oxford University Press.

f i g u e r e d o , o l d e r b a k , s c h l o m e r, g a r c i a , wo l f

359

St. Pierre, R. G. (2004). Using randomized experiments. In J.S. Wholey, H. P. Hatry, & K. E. Newcomer (Eds.), Handbook of practical program evaluation (2nd ed., pp. 150–175). San Fransisco, CA: John Wiley & Sons. Stevahn, L., King, J. A., Ghere, G. & Minnema, J. (2005). Establishing essential compenticies for program evaluators. American Journal of Evaluation, 26, 43–59. Strauss, A., & Corbin, J. (1990). Basics of qualitative research: Grounded theory procedures and techniques. Newbury Park, CA: Sage. Stufflebeam,D. L. (2001). Evaluation models. New Directions for Evaluation, 89, 7–98. Tang, H., Cowling, D.W., Koumijian, K., Roeseler, A., Lloyd, J., & Rogers, T. (2002). Building local program evaluation capacity toward a comprehensive evaluation. In R. Mohan, D.J. Bernstein, & M.D. Whitsett (Eds.), Responding to sponsors and Stakeholders in Complex Evaluation Environments (pp. 39–56). New Directions for Evaluation, No. 95. San Francisco, CA: Jossey-Bass. Tharp, R., & Gallimore, R. (1979). The ecology of program research and development: A model of evaluation succession. In L. B. Sechrest, S. G. West, M. A. Phillips, R. Redner, & W. Yeaton (Eds.), Evaluation Studies Review Annual (Vol. 4, pp. 39–60). Beverly Hills, CA: Sage. Tharp, R., & Gallimore, R. (1982). Inquiry process in program development. Journal of Community Psychology, 10(2), 103– 118. The Joint Committee on Standards for Educational Evaluation. (1994). The Program Evaluation Standards (2nd ed.). Thousand Oaks, CA: Sage. Thistlethwaite, D. L., & Campbell, D. T. (1960). Regressiondiscontinuity analysis: An alternative to the ex post facto experiment. The Journal of Educational Psychology, 51, 309–317.

360

p r o g r a m e va l u a t i o n

Trochim, W. M. K. (1984). Research design for program evaluation: The regression discontinuity approach. Newbury Park, CA: Sage. United Way. (1996). Guide for logic models and measurements. Retrieved May 18, 2012, from http://www.yourunitedway. org/media/GuideforLogModelandMeas.ppt Wagner, A. K., Soumerai, S. B., Zhang, F., & Ross-Degnan, D. (2002). Segmented regression analysis of interrupted time series studies in medication use research. Journal of Clinical Pharmacy and Therapeutics, 27, 299–309. Weiss, C.H. (1980) Knowledge creep and decision accretion. Knowledge: Creation, Diffusion, Utilisation 1(3): 381–404. Weiss, C. H. (1998). Evaluation: Methods for Studying Programs and Policies, 2nd ed. Upper Saddle River, NJ: Prentice Hall. Weiss, C. J. (1999) The interface between evaluation and public policy. Evaluation, 5(4), 468–486. Wells, G.L., Malpass, R.S., Lindsay, R.C.L., Fisher, R.P., Turtle, J.W., & Fulero, S.M. (2000). From the lab to the police station: A successful application of eyewitness research. American Psychologist, 55(6), 581–598. West, S.L., & O’Neal, K.K. (2004) Project D.A.R.E. outcome effectiveness revisited. American Journal of Public Health, 94 (6) 1027–1029. Williams, A. (2001). Science or marketing at WHO? Commentary on ‘World Health 2000’. Health Economics, 10, 93–100. Willson, E. B., & Putnam, R. R. (1982). A meta-analysis of pretest sensitization effects in experimental design. American Educational Research Journal, 19, 249–258. World Health Organization (2000). The World Health Report 2000 – Health Systems: Improving Performance. World Health Organization: Geneva, Switzerland.

CHAPTER 18

Overview of Statistical Estimation Methods

Ke-Hai Yuan and Christof Schuster

Abstract

This chapter provides an overview of methods for estimating parameters and standard errors. Because it is impossible to cover all statistical estimation methods in this chapter, we focus on those approaches that are of general interest and are frequently used in social science research. For each estimation method, the properties of the estimator are highlighted under idealized conditions; drawbacks potentially resulting from violations of ideal conditions are also discussed. In addition, the chapter reviews several widely used computational algorithms for calculating parameter estimates.

Key Words: Maximum likelihood, pseudo-maximum likelihood, generalized least squares, robust M-estimators, Bayes methods, estimating equations, δ-method, bootstrap, Newton algorithm, EM algorithm, Markov chain Monte Carlo.

Introduction

In social sciences, statistical models are used to describe probabilistic mechanisms assumed to underlie observed data. Typically, a model contains parameters that characterize important aspects of the corresponding population. An example is the parallel measurement model in classical testing theory, where variances of observed variables and measurement errors are assumed equal across tests. If the model holds for a target population, then all the tests are exchangeable with respect to the information they provide about an examinee from the population. If the variances of the observed variables are not statistically different, we need to further estimate the unknown parameters, the true score and measurement error variances, to proceed with the analysis. Typically, additional assumptions on data and model are required for estimation purposes. For the parallel measurement model, the assumptions include independence of observations from different participants as well as zero correlation among variables conditional on the true score. If the sample can be regarded as coming from a normally distributed population, we can include this information in our estimation procedure to yield nearly optimal parameter estimates.

This chapter provides an overview of methods for obtaining parameter estimates and their standard errors (SEs). The diversity of estimation methods results mainly from the differences in statistical models and/or the distribution of the sample. Although maximum likelihood (ML) is, generally speaking, the most preferred estimation method, it may be difficult to apply or not available for a particular population. Then, alternative approaches, which are frequently modifications of ML, are available. These include least-squares and generalized least-squares, pseudo-ML, quasi-ML, marginal ML, restricted ML, robust procedures, and estimating equations. Each of the methods aims to get unbiased parameter estimates that are as efficient as possible.


A second general approach to parameter estimation is provided by the Bayesian statistical framework, in which parameters are regarded as random quantities. The Bayes approach to parameter estimation is to provide a summary of the distribution of the parameters (e.g., mean, mode, SE, and percentiles). We also cover the class of James-Stein estimators, which are closely related to Bayes estimators but justified from a frequentist perspective. We will highlight the properties of each estimator under idealized conditions and discuss the expected consequences of violated model assumptions. For each estimation method, we will distinguish it from the computational algorithm with which the estimate is obtained. For example, expectation-maximization (EM) and Markov chain Monte Carlo are algorithms for obtaining an ML or Bayes estimator rather than new estimators themselves. Key applications of each method will be reviewed to demonstrate its strength.

The next section contains methods for estimating parameters. The section on Methods for Estimating Standard Errors and Confidence Intervals contains methods for estimating SEs that can be applied to all the parameter estimates in the section on Methods for Estimating Parameters. Algorithms or simulation methods for computing the parameter estimates are discussed in the section on Algorithms. Concluding remarks, as well as a table summarizing the applicability of each method, are provided at the end.

Methods for Estimating Parameters

Maximum Likelihood

The ML method, also called full information maximum likelihood, is most widely used because it generates estimates with highly desirable large sample properties. These properties also approximately hold in finite samples. In particular, for linear models with normally distributed errors, the ML estimator (MLE) is unbiased, normally distributed, and most efficient. Let y_1, y_2, ..., y_n be independent and assume that each y_i follows a parametric model with a probability density function (pdf) or a frequency distribution function f_i(y_i; θ). The likelihood function of θ is

L(\theta) = \prod_{i=1}^{n} f_i(y_i; \theta).

Clearly, given a θ, L(θ) represents the probability for the sample to be observed. Because the sample is already observed, the idea of ML is to find a value of θ that maximizes this probability. Formally, the MLE is defined as the value θ̂ that maximizes L(θ). Let l_i(θ) = log f_i(y_i; θ). Because the θ̂ that maximizes L(θ) also maximizes

l(\theta) = \log L(\theta) = \sum_{i=1}^{n} l_i(\theta),

which changes the multiplication sign in L(θ) to a summation sign, it is easier to work with l(θ) in most applications.

Suppose there exists a value θ_0 such that f_i(y_i; θ_0) is the true density of y_i. Then, under a set of mild regularity conditions, θ̂ is consistent for θ_0; that is, θ̂ approaches θ_0 with probability 1 as n → ∞. The MLE is also asymptotically efficient; that is, no other consistent estimator has a smaller SE than θ̂ when n is large enough. Further, the MLE is asymptotically normally distributed. Let

\ddot{l}_i(\theta) = \frac{\partial^2 l_i(\theta)}{\partial\theta\,\partial\theta^\top} \quad \text{and} \quad I^{(i)} = -E[\ddot{l}_i(\theta_0)],

which is the so-called information matrix associated with y_i. For large n, the covariance matrix of θ̂, which will be denoted by Ω_n, approximately equals I_n^{-1}, where

I_n = \sum_{i=1}^{n} I^{(i)}

is the information matrix based on the whole sample. The above properties of the MLE can be described by

\hat{\theta} \overset{a}{\sim} N(\theta_0, \Omega_n),   (1)

where ∼ with the superscript a denotes asymptotically follows or approximately follows. The expected value of −l̈_i(θ) is typically a function of θ. Therefore, we need to estimate the SEs of θ̂ using

\hat{\Omega}_{En} = I_n^{-1}(\hat{\theta});   (2)

alternatively, we can estimate Ω_n by

\hat{\Omega}_{On} = \left[-\sum_{i=1}^{n} \ddot{l}_i(\hat{\theta})\right]^{-1}.   (3)

The I_n(θ̂) in Equation 2 is called the expected or Fisher information matrix, and the matrix −∑_{i=1}^{n} l̈_i(θ̂) in Equation 3 is called the observed information matrix. When the likelihood function is correctly specified, Ω̂_En and Ω̂_On are asymptotically equivalent. The SEs based on Ω̂_On are typically better with a smaller sample size. In particular, with missing data that are missing at random (Little & Rubin, 2002) and are ignored when specifying the likelihood function, the expectation in obtaining I_n^{-1}(θ) can only be calculated under the assumption of missing completely at random, and is thus an incorrect expectation. In this situation, only Ω̂_On can provide consistent SEs for θ̂.

In addition to consistency, efficiency, and asymptotic normality, the MLE also has the important, so-called invariance property: If θ̂ is the MLE of θ and t = t(θ) is a function of θ, then the MLE of this function is t̂ = t(θ̂). In other words, a function of an MLE is also an MLE. An example is the Pearson product-moment correlation, calculated as r_ij = s_ij/(s_ii s_jj)^{1/2}. Because the sample covariance s_ij and both variances in the denominator are MLEs for normally distributed data, r_ij is the MLE of the population correlation coefficient ρ_ij = σ_ij/(σ_ii σ_jj)^{1/2}.

The density/frequency function f_i(y_i; θ) allows covariates x_i to be included; for example, f_i(y_i; θ) = f(y_i, x_i; θ). When no covariate is involved, one typically assumes identically distributed observations, that is, f_i(y_i; θ) = f(y_i; θ). Then there exists I_n = nI with I = −E[l̈_i(θ_0)] being the information matrix based on a single observation. We can also express Equation 1 as

\sqrt{n}(\hat{\theta} - \theta_0) \overset{L}{\to} N(0, \Omega),   (4)

where the notation →, with superscript L, implies converges in distribution to, and Ω = I^{-1} is consistently estimated by either I^{-1}(θ̂) or nΩ̂_On.

Notice that the result in Equation 1 or Equation 4 is based on asymptotics or a large sample size. The normal approximation to the distribution of θ̂, as well as using Ω̂_En or Ω̂_On to estimate the covariance matrix Ω, may not be sufficiently accurate when the sample size n is small. Exceptions are linear models with normally distributed data. As an example, consider the simple linear regression model

y_i = \alpha + \beta x_i + e_i, \quad i = 1, 2, \ldots, n,   (5)

where e_i ∼ N(0, σ²) are independent. Let θ = (α, β, σ²)′, x̄ be the sample mean of the x_i, ȳ be the sample mean of the y_i, and

m_{x2} = \frac{1}{n}\sum_{i=1}^{n} x_i^2, \quad s_{xy} = \frac{1}{n}\sum_{i=1}^{n} (x_i - \bar{x})(y_i - \bar{y}), \quad s_{xx} = s_x^2 = \frac{1}{n}\sum_{i=1}^{n} (x_i - \bar{x})^2.

When the x_i are nonstochastic, we have

l_i(\alpha, \beta, \sigma^2) = -\frac{1}{2}\log(2\pi) - \frac{1}{2}\log(\sigma^2) - \frac{1}{2\sigma^2}(y_i - \alpha - \beta x_i)^2;

\hat{\beta} = \frac{s_{xy}}{s_{xx}}, \quad \hat{\alpha} = \bar{y} - \bar{x}\hat{\beta}, \quad \hat{\sigma}^2 = \frac{1}{n}\sum_{i=1}^{n}(y_i - \hat{\alpha} - \hat{\beta}x_i)^2;   (6)

and

\Omega_n = I_n^{-1} = \frac{\sigma^2}{n}\begin{pmatrix} m_{x2}/s_{xx} & -\bar{x}/s_{xx} & 0 \\ -\bar{x}/s_{xx} & 1/s_{xx} & 0 \\ 0 & 0 & 2\sigma^2 \end{pmatrix}.   (7)

It follows from Equation 1 and Equation 7 that

\hat{\alpha} \overset{a}{\sim} N(\alpha_0, m_{x2}\sigma^2/(n s_{xx})) \quad \text{and} \quad \hat{\beta} \overset{a}{\sim} N(\beta_0, \sigma^2/(n s_{xx})).

Because both α̂ and β̂ are linear functions of the random variables y_i, there also exist

\hat{\alpha} \sim N(\alpha_0, m_{x2}\sigma^2/(n s_{xx})), \qquad \hat{\beta} \sim N(\beta_0, \sigma^2/(n s_{xx})).

In covariance structure analysis, the ML method is commonly presented through the normal-distribution-based discrepancy function

F_{NML}(S, \Sigma(\theta)) = \mathrm{tr}(S\Sigma^{-1}(\theta)) - \log|S\Sigma^{-1}(\theta)| - p,   (8)

where p is the number of observed variables,

S = (s_{jk}) = \frac{1}{n}\sum_{i=1}^{n} (y_i - \bar{y})(y_i - \bar{y})^\top,   (9)

and Σ(θ) is a covariance structural model that is commonly generated by latent variables, with θ containing the unknown parameters. The function F_{NML}(S, Σ(θ)) is equal to 2[l(ȳ, S) − l(θ)]/n, where l(ȳ, S) is the log likelihood function based on y_i ∼ N(μ, Σ) and evaluated at μ̂ = ȳ and Σ̂ = S, and l(θ) is the log likelihood function based on y_i ∼ N(μ, Σ(θ)). Thus, the θ̂ obtained by minimizing Equation 8 is just the MLE that maximizes l(θ).

The most desirable property of the MLE is its efficiency. Among all consistent estimators, no other estimator can be asymptotically more efficient than the MLE. However, if the likelihood function is misspecified, then the resulting "MLE" may not enjoy any of the desirable properties (consistency, efficiency, asymptotic normality). In practice, the distribution of the observed data is typically unknown.

Nevertheless, many researchers choose the normal distribution for MLE because it is the default option in standard software. Maximum likelihood methods based on the normal distribution include the sample mean for estimating the population mean in ANOVA, and using the sample covariance matrix S to estimate the population covariance matrix or structural parameters in regression, structural equation modeling (SEM), and many other multivariate procedures. However, the resulting parameter estimates are not asymptotically efficient if the normality assumption is not satisfied. The resulting SEs corresponding to estimates of variance/covariance parameters based on Equation 1 or Equation 4 are not even consistent. In particular, the SEs from the normal-distribution-based ML in factor analysis, SEM, growth curve models, correlation analysis, and principal component analysis are not consistent with typical nonnormal data in practice (Micceri, 1989).

In addition, even if the MLE is consistent, it may have a finite sample bias. Examples are the σ̂² in Equation 6 and S in Equation 9. In an extreme case, an MLE can be inconsistent. Consider the balanced one-way ANOVA model y_{ij} ∼ N(μ_i, σ²), i = 1, 2, ..., I; j = 1, 2, ..., J, where I denotes the number of treatment groups and J denotes the number of observations within each group. Let σ̂² be the MLE of σ², which is just the within-group sum of squares divided by IJ. Then

E(\hat{\sigma}^2) = \frac{IJ - I}{IJ}\,\sigma^2.

Obviously, the sample size is n = IJ. When J is held constant and I increases, σ̂² converges to its expected value (1 − 1/J)σ². In particular, the limit is σ²/2 at J = 2. This example illustrates a well-known problem with the MLE when the number of parameters increases proportionally with the sample size. This problem is commonly called the Neyman-Scott problem because of their work in 1948. Other examples of inconsistent MLEs include factor analysis or item response models when treating the factor scores or latent traits as model parameters. This partially explains why factor scores are better treated as random variables when estimating the item parameters. Similarly, for an ANOVA model with many conditions that can be regarded as randomly selected from a large pool of conditions, it might be better to formulate the problem as a random effect model. Then, ML remains nearly optimal when estimating these random effect models.

The bias in S in Equation 9 can be corrected by replacing the denominator n by n − 1. Biases in σˆ 2 for the regression and ANOVA models can also be corrected by replacing the denominators n and IJ by (n − 2) and I (J − 1), respectively. These corrected estimators are automatically obtained in the method of restricted ML to be introduced in a separate subsection below. Unless n is small or the number of parameters increases proportionally with n, biases in MLE will be small compared to sampling errors or errors created by model misspecification or data contamination. Therefore, small sample biases of MLEs are typically not a serious concern if the distribution is correctly specified.
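To make the preceding formulas concrete, the following short Python sketch (an editorial illustration with simulated data, not part of the original chapter) computes the MLEs of Equation 6 for the simple regression model and the asymptotic SEs implied by Equation 7.

import numpy as np

# Normal-theory ML estimates for y_i = alpha + beta*x_i + e_i (Equation 6)
# and their asymptotic standard errors from Equation 7.
rng = np.random.default_rng(0)
n = 200
x = rng.normal(size=n)
y = 1.0 + 0.5 * x + rng.normal(scale=2.0, size=n)

x_bar, y_bar = x.mean(), y.mean()
s_xx = np.mean((x - x_bar) ** 2)
s_xy = np.mean((x - x_bar) * (y - y_bar))
m_x2 = np.mean(x ** 2)

beta_hat = s_xy / s_xx
alpha_hat = y_bar - x_bar * beta_hat
sigma2_hat = np.mean((y - alpha_hat - beta_hat * x) ** 2)   # MLE, divisor n

se_alpha = np.sqrt(m_x2 * sigma2_hat / (n * s_xx))
se_beta = np.sqrt(sigma2_hat / (n * s_xx))
print(alpha_hat, beta_hat, sigma2_hat, se_alpha, se_beta)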

Least-Squares

The least-squares (LS) method generates parameter estimates by minimizing the squared distance between the data and the model. It is most commonly used in linear regression and is closely related to ML if the data come from the normal distribution. For the simple regression model in Equation 5, the LS function is defined by

LS(\alpha, \beta) = \sum_{i=1}^{n} (y_i - \alpha - \beta x_i)^2.   (10)

The estimates of α and β produced by minimizing Equation 10 are identical to those given in Equation 6. However, LS itself does not provide an estimate for σ². The commonly used unbiased estimate for σ² in linear regression is a restricted MLE; it is neither an MLE nor an LS estimate. Notice that LS for regression is equivalent to ML only when the e_i in Equation 5 follow N(0, σ²). When the e_i follow another distribution, such as the Student t- or double-exponential distribution, the estimators resulting from minimizing Equation 10 are no longer equivalent to the MLEs based on these distributions.

The LS method for covariance structure analysis with p variables is defined by

LS(\theta) = \sum_{i=1}^{p}\sum_{j=1}^{p} [s_{ij} - \sigma_{ij}(\theta)]^2,   (11)

where s_{ij} is the sample covariance between the ith and jth variables and σ_{ij}(θ) is the element of Σ(θ) in Equation 8 corresponding to s_{ij}. Notice that the LS(θ) in Equation 11 is a two-step procedure, using s_{ij} to estimate σ_{ij} in the first step before proceeding to LS. The LS estimate obtained from minimizing Equation 11 is no longer equivalent to the normal-distribution-based MLE for covariance structure analysis.
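The following sketch (ours; the one-factor model, the simulated data, and the use of scipy are illustrative assumptions) minimizes the LS discrepancy of Equation 11 for a one-factor model with three indicators, Σ(θ) = λλ′ + diag(ψ).

import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
lam_true = np.array([0.8, 0.7, 0.6])
f = rng.normal(size=(500, 1))
y = f @ lam_true[None, :] + rng.normal(scale=0.5, size=(500, 3))
S = np.cov(y, rowvar=False, bias=True)          # divisor n, as in Equation 9

def sigma(theta):
    lam, psi = theta[:3], theta[3:]
    return np.outer(lam, lam) + np.diag(psi)     # one-factor covariance structure

def ls_discrepancy(theta):
    return np.sum((S - sigma(theta)) ** 2)       # Equation 11

theta0 = np.concatenate([np.full(3, 0.5), np.full(3, 0.5)])
fit = minimize(ls_discrepancy, theta0, method="Nelder-Mead")
print(fit.x)                                     # loadings and unique variances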


Generalized Least-Squares

Suppose the errors e_i in the simple regression model in Equation 5 are not independent. Let e = (e_1, e_2, ..., e_n)′ and V = Cov(e). If V is known, then we can estimate α and β by generalized least-squares (GLS). For example, V may follow from the study design or can be obtained from an additional source of information. Let 1 = (1, ..., 1)′, x = (x_1, x_2, ..., x_n)′, X = (1, x), and y = (y_1, y_2, ..., y_n)′. Then the regression model in Equation 5 can be written as

y = X\beta + e,   (12)

where β = (α, β)′ and e = (e_1, e_2, ..., e_n)′. The GLS function for Equation 12 is defined as

GLS(\beta) = (y - X\beta)^\top V^{-1} (y - X\beta).   (13)

Solving for β by minimizing Equation 13 results in β̂ = (X′V^{-1}X)^{-1}X′V^{-1}y. A special case of GLS, weighted least-squares (WLS), occurs if V is a diagonal matrix. If the covariance matrix of e has the particularly simple form V = σ²I, then GLS, WLS, and LS are identical. Both GLS and WLS estimates of β are identical to the MLE based on y ∼ N(Xβ, V). When each dependent variable y_i in Equation 5 is an average based on a variable-specific sample size m_i, i = 1, 2, ..., n, then V can be specified as diag(σ_1²/m_1, σ_2²/m_2, ..., σ_n²/m_n), where σ_i² is the common variance of the individual observations from which the ith average is calculated. It is considerably more difficult to specify a general V in the context of regression, although GLS is often introduced in this context.

For a multivariate regression model, let V be the within-subject covariance matrix, assumed constant across individuals. Then V can be consistently estimated by the average of the cross-products of residuals from LS estimators. Thus, GLS regression parameter estimates can be obtained by GLS following an initial LS estimation. The residuals from GLS regression can be used to update the estimate for V. This process can be repeated until the changes in β̂ across iterations become sufficiently small. Such an iterative process may improve the efficiency of the regression parameter estimates only by a small amount, because the GLS estimates at later steps have the same asymptotic efficiency as the GLS estimator of the first step in this process.

The GLS method for covariance structure analysis is introduced in essentially every textbook on SEM or confirmatory factor analysis. The so-called GLS discrepancy function is defined as

F_{NGLS}(S, \Sigma(\theta)) = \frac{1}{2}\,\mathrm{tr}\{[(S - \Sigma(\theta))S^{-1}]^2\},

where the subscript N is for the assumption of normally distributed variables. A GLS method that does not need the normal distribution assumption in SEM is called AGLS ('A' indicating an arbitrary distribution with finite fourth-order moments) or the asymptotically distribution-free (ADF) method. Parameter estimates based on minimizing F_NGLS(S, Σ(θ)) and F_NML(S, Σ(θ)) in Equation 8 are asymptotically equivalent when the model is correctly specified. This asymptotic equivalence does not depend on the normality assumption. However, the two estimation approaches are not equivalent when the model is misspecified. NGLS and AGLS yield asymptotically equivalent estimates when data are normally distributed, regardless of the specification of the model (Yuan & Chan, 2005). We need to emphasize that parameter estimates based on modeling means and covariances are not efficient when data are not normally distributed. The AGLS/ADF estimators in covariance structure analysis are asymptotically efficient among estimators based on modeling the covariance matrix S (Browne, 1984). Other estimators obtained by modeling the distributional shape of the sample can easily be superior to the GLS and AGLS estimators both asymptotically and at finite sample sizes (Yuan, Bentler, & Chan, 2004).
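As an illustration of Equation 13 with a diagonal V (i.e., WLS), the following sketch (an editorial example with simulated data) computes the GLS estimator when each y_i is an average over m_i observations.

import numpy as np

rng = np.random.default_rng(2)
n = 100
m = rng.integers(1, 20, size=n)                  # group sizes (arbitrary choice)
x = rng.normal(size=n)
X = np.column_stack([np.ones(n), x])
sigma2 = 4.0
e = rng.normal(scale=np.sqrt(sigma2 / m))        # Var(y_i) = sigma^2 / m_i
y = X @ np.array([1.0, 0.5]) + e

V_inv = np.diag(m / sigma2)                      # inverse of diag(sigma^2/m_i)
beta_gls = np.linalg.solve(X.T @ V_inv @ X, X.T @ V_inv @ y)
beta_ls = np.linalg.solve(X.T @ X, X.T @ y)      # ordinary LS for comparison
print(beta_gls, beta_ls)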

Pseudo- and Quasi-Maximum Likelihood

The names pseudo-ML and quasi-ML are used quite freely in the literature when an estimation method is a modification of ML. For example, Gourieroux, Monfort, and Trognon (1984), Gong and Samaniego (1981), as well as Park (1986) all used the term pseudo-ML, although they considered different estimation approaches. We will use pseudo-ML for the procedure described by Gourieroux et al. (1984) and refer to the approach of Gong and Samaniego (1981) as quasi-ML.

A typical scenario in practice is that a researcher has a model in mind (e.g., ANOVA, factor analysis), but the sample does not follow any known distribution. Here, the model and the distribution are distinguished. For example, in the simple regression model, the distributions of the observed predictors and responses may not follow any known forms. In such a situation, many researchers choose the normal distribution for convenience. Therefore, the resulting MLEs are referred to as pseudo-MLEs. When the interest is in a set of mean and variance–covariance parameters and the chosen distribution belongs to a quadratic exponential family, the

pseudo-MLEs are consistent and asymptotically normally distributed (Gourieroux et al., 1984). However, their covariance matrix is no longer consistently estimated by the inverse of the information matrix. Rather, the Ω_n in Equation 1 can be consistently estimated by the so-called sandwich-type covariance matrix

\hat{\Omega}_n = \left[\sum_{i=1}^{n}\ddot{l}_i(\hat{\theta})\right]^{-1} \left[\sum_{i=1}^{n}\dot{l}_i(\hat{\theta})\,\dot{l}_i^\top(\hat{\theta})\right] \left[\sum_{i=1}^{n}\ddot{l}_i(\hat{\theta})\right]^{-1},   (14)

where l̇_i(θ) = ∂l_i(θ)/∂θ. The side matrix ∑_{i=1}^{n} l̈_i(θ̂) in Equation 14 is just −1 times the observed information matrix, which mainly reflects the model structure and the assumed distribution. The middle matrix ∑_{i=1}^{n} l̇_i(θ̂)l̇_i′(θ̂) in Equation 14 corrects possible misspecification in the distribution assumption. For example, for mean and covariance structure analysis using a normal distribution assumption, the middle matrix contains the sample estimates of skewness and kurtosis of the observed data (see Yuan, Bentler, & Zhang, 2005). Notice that once the likelihood function is determined, the parameter estimate is the same whether we call it an MLE or a pseudo-MLE. The difference is that in pseudo-ML the researcher allows for the possibility that the data may not follow the distribution specified in the likelihood function. Because, in any statistical modeling, the distribution specification is at best only an approximation to the real world (see Box, 1979), one may use Equation 14 as a default covariance matrix for obtaining the SEs rather than the information matrix in Equation 2 or Equation 3. Even when the density is correctly specified, the SEs based on the sandwich-type covariance matrix in Equation 14 remain consistent.

When the chosen distribution does not belong to an exponential family, the pseudo-MLE is generally not consistent for estimating the mean and variance–covariance parameters. Rather, it converges to the value θ* that maximizes E[∑_{i=1}^{n} l_i(θ)], where the expectation is with respect to the true underlying distribution. The estimator Ω̂_n remains consistent for the asymptotic covariance matrix of θ̂. Technical details of pseudo-ML are in White (1982) and Gourieroux et al. (1984).

Although the term quasi-ML is also used to describe ML with a misspecified likelihood function, we use it to describe the situation where the likelihood function has two different sets of parameters, θ and γ (Kano, Berkane, & Bentler, 1993). For certain reasons, only the parameters in θ are of substantive interest, but γ is needed to specify the likelihood function. If simultaneously estimating both θ and γ by maximum likelihood is difficult or even impossible, but an estimate γ̂ for γ is available and consistent, then one may maximize l(θ, γ̂) rather than l(θ, γ) to obtain a quasi-MLE θ̂. The resulting θ̂ is consistent and asymptotically normally distributed. But SEs based on the corresponding information matrix when treating γ̂ as known may or may not be consistent (Yuan & Jennrich, 2000). An example of quasi-ML is the polychoric correlation, where γ contains the thresholds of the two marginal variables and θ contains the single parameter of the population polychoric correlation (Olsson, 1979; Poon & Lee, 1987). The thresholds can be obtained by the quantiles underlying the standard normal curve corresponding to the observed marginal frequencies of the ordinal variables. These are treated as known when performing the ML estimation of the polychoric correlation. Another example is in the context of item response models, where γ contains the item parameters and θ contains the person or trait parameters. The item parameters can be estimated from the same or a different sample and treated as known when estimating the trait parameters (Cheng & Yuan, 2010).

Quasi-ML has also been used to describe a situation where the mean structure of the observed variables can be correctly specified, whereas the variances and covariances are only specified as a constant times a structured matrix. The mean and the variance–covariance structure may depend on the same set of parameters. Then, quasi-ML defines parameter estimates as satisfying an equation derived from a normal distribution with given covariance matrix. Such defined estimators are consistent within a large class of unknown distributions. But the resulting estimators may not have the efficiency of a true MLE. Examples in this direction include generalized linear models with overdispersion parameters (McCullagh & Nelder, 1989; Nelder & Lee, 1992).

Pseudo- and quasi-ML can appear in the same problem. For example, in ML estimation with a multivariate t-distribution, the degrees of freedom (df) of the t-distribution is not of direct interest. One can fix it at a given value df_0 or estimate it by d̂f using the fourth-order moments (see Berkane, Kano, & Bentler, 1994). If the true population belongs to the family of t-distributions with df = df_0 and one sets df at df_0 when estimating the means and variances–covariances, then the resulting estimates are MLEs. If the true population belongs to the family of t-distributions and df was set at d̂f in the estimation, then the resulting estimator is a quasi-MLE. If the true population does not belong to the family of t-distributions, then the resulting estimator is a pseudo-MLE (when df = df_0) or a pseudo-quasi-MLE (when df = d̂f). In general, a quasi-MLE or a pseudo-MLE does not have the efficiency of an MLE.
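The following sketch (an editorial illustration; the exponential data are an arbitrary choice) computes the sandwich-type covariance matrix of Equation 14 for the normal-theory pseudo-MLE of a mean and variance and contrasts it with the observed-information estimate of Equation 3.

import numpy as np

rng = np.random.default_rng(3)
y = rng.exponential(scale=2.0, size=500)          # skewed, non-normal data
n = len(y)
mu, s2 = y.mean(), y.var()                        # normal-theory (pseudo-) MLEs

r = y - mu
scores = np.column_stack([r / s2,                          # d l_i / d mu
                          -0.5 / s2 + r**2 / (2 * s2**2)]) # d l_i / d sigma^2
A = np.zeros((2, 2))                               # sum of Hessians, l"_i
A[0, 0] = -n / s2
A[0, 1] = A[1, 0] = -np.sum(r) / s2**2             # zero at the MLE
A[1, 1] = n / (2 * s2**2) - np.sum(r**2) / s2**3
B = scores.T @ scores                              # middle matrix of Equation 14
sandwich = np.linalg.inv(A) @ B @ np.linalg.inv(A) # Equation 14
info_based = np.linalg.inv(-A)                     # Equation 3 (observed information)
print(np.sqrt(np.diag(sandwich)), np.sqrt(np.diag(info_based)))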

Marginal Maximum Likelihood

It is not always easy to specify the likelihood function, even when we fully understand the underlying population distribution. In many cases, introducing a set of latent variables allows us to easily specify the joint frequency or density function of both the observed and latent variables. The working likelihood function for parameter estimation then needs to be based on the marginal distribution of only the observed variables, obtained by integrating out the latent variables. Maximizing such a working likelihood is called marginal ML in the psychometric literature, although the same procedure applied to other models is simply called ML.

Let y be a vector containing all the observed variables and ξ be a vector containing the latent variables. Let f(y; θ_1|ξ) be the probability density/frequency function of y given ξ, and f(ξ; θ_2) be the density/frequency function of ξ. Then the joint pdf of (y, ξ) is given by

f(y, \xi; \theta) = f(y; \theta_1|\xi)\, f(\xi; \theta_2).   (15)

Thus, the marginal density/frequency distribution of y is

f(y; \theta) = \int f(y; \theta_1|\xi)\, f(\xi; \theta_2)\, d\xi.   (16)

Parameter estimates obtained by maximizing the likelihood function defined through the density function in Equation 16 are called marginal MLEs. The marginal MLE enjoys the same properties as those of the MLE; that is, it is consistent, efficient, and asymptotically normally distributed, and its asymptotic covariance matrix can be consistently estimated by the inverse of the information matrix corresponding to the marginal likelihood function. However, the integral in Equation 16 may not have an exact analytical solution. It is typically evaluated using numerical or Monte Carlo methods. If the distribution of either (y|ξ) or ξ is misspecified, then the marginal MLE may not have any of the desirable properties that MLEs have.

For item response models, y contains the responses of a person to a given set of items, θ contains item parameters, and ξ denotes the traits. It is typically assumed in applications that ξ follows a multivariate normal distribution and that the observations are locally independent; that is, conditional on the trait, the item responses are independent. Most applications of item response models are unidimensional, with ξ reducing to a single latent variable ξ. Another approach to the estimation of θ is to treat ξ for each person as a parameter and estimate θ and ξ_1, ξ_2, ..., ξ_n simultaneously by maximizing the likelihood function with l_i(θ, ξ_i) = log f(y_i, ξ_i; θ). However, the resulting MLEs of both θ and ξ_1 to ξ_n may not be consistent because the number of parameters in ξ = (ξ_1, ξ_2, ..., ξ_n)′ is proportional to the number of cases. The vector ξ is removed in Equation 16 by integration, and the technical difficulty of obtaining a consistent θ̂ is resolved by turning to marginal ML. Marginal ML for item response models has been used in Bock and Lieberman (1970) as well as in Bock and Aitkin (1981) and has been discussed systematically in Baker and Kim (2004), where a numerical method is used to evaluate the integral in Equation 16. The numerical integration procedure essentially approximates the area under a continuous curve by many small rectangles.

Marginal ML can also be used for nonlinear factor models or SEM with interaction terms. Suppose we have three latent variables ξ_1, ξ_2, and η with η = h(ξ_1, ξ_2) + ζ, where h(·, ·) is a function of known form. Let ξ = (ξ_1, ξ_2, η)′ and the measurement model be y = μ + Λξ + e, where Λ is a factor loading matrix and e contains measurement errors that are independent of ζ. If we assume ζ ∼ N(0, τ²) and e ∼ N(0, Ψ), then, conditional on ξ_1 and ξ_2, η is normally distributed and so is y. If we further assume that (ξ_1, ξ_2) follows a distribution with pdf f(ξ_1, ξ_2; θ_2), then the marginal likelihood can be obtained as in Equation 16. Marginal likelihood for modeling interaction effects with latent variables has been studied by Lee and Zhu (2002), where Monte Carlo methods or Gibbs sampling is used to evaluate the integral in Equation 16. Klein and Moosbrugger (2000) proposed to evaluate Equation 16 numerically and called it a latent moderated structural equation approach. In the context of hierarchical generalized linear models, the likelihood function based on the joint pdf in Equation 15 is called h-likelihood by Lee and Nelder (1996), who reviewed its applications for random effect models with continuous and categorical dependent variables.
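To illustrate how the integral in Equation 16 is evaluated numerically, the following sketch (ours; the Rasch-type model and item parameters are hypothetical) approximates the marginal likelihood of one response pattern by Gauss-Hermite quadrature, with the trait ξ ∼ N(0, 1) integrated out.

import numpy as np

b = np.array([-1.0, 0.0, 1.0, 0.5])               # hypothetical item difficulties
y = np.array([1, 1, 0, 1])                        # one person's binary responses

nodes, weights = np.polynomial.hermite.hermgauss(21)
xi = np.sqrt(2.0) * nodes                         # quadrature points for N(0, 1)
w = weights / np.sqrt(np.pi)

def cond_lik(xi_k):
    p = 1.0 / (1.0 + np.exp(-(xi_k - b)))         # P(y_j = 1 | xi), local independence
    return np.prod(np.where(y == 1, p, 1.0 - p))

marginal = np.sum([w_k * cond_lik(x_k) for x_k, w_k in zip(xi, w)])
print(marginal)                                   # f(y; theta) of Equation 16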

Restricted Maximum Likelihood

For a linear model with random effects, the MLEs of the variance–covariance parameters are typically biased because the estimators do not account for the fact that the fixed parameters are unknown. We have discussed such biases in the context of regression and ANOVA models previously in the subsection on Maximum Likelihood. Restricted ML (REML) is a special case of ML that aims to obtain unbiased estimates of variance–covariance parameters by defining the likelihood on residuals. Specifically, the likelihood function is defined on the projection of the dependent variables onto the space that is orthogonal to the space of the fixed effects. The resulting estimates of variance–covariance parameters automatically correct the biases in the MLE because of the degrees of freedom lost in estimating the fixed effects.

Consider the regression model in Equation 12 with p predictors, where X contains a column of 1's corresponding to the intercept and β contains (p + 1) parameters. Let β̂ = (X′X)^{-1}X′y. Then the residual vector is given by r = y − Xβ̂ = Q_x y, where Q_x = I − X(X′X)^{-1}X′. Notice that Q_x is a projection matrix for the space orthogonal to that spanned by the columns of X. Assuming e_i ∼ N(0, σ²), then

r \sim N(0, \sigma^2 Q_x).   (17)

Also notice that the parameters in β are not part of Equation 17. Because Q_x is singular with rank n − (p + 1), r does not have a density function. However, there exists an n × [n − (p + 1)] matrix L such that

z = L^\top r \sim N(0, \sigma^2 L^\top Q_x L)   (18)

has a density. Such an L can be obtained from the n − (p + 1) eigenvectors of Q_x corresponding to the n − (p + 1) eigenvalues of 1.0. The log likelihood function based on Equation 18 is given by

l(\sigma^2) = -\frac{n - (p+1)}{2}\,[\log(2\pi) + \log\sigma^2] - \frac{z^\top (L^\top Q_x L)^{-1} z}{2\sigma^2}.

Setting the derivative of l(σ²) with respect to σ² to zero yields the restricted MLE

\tilde{\sigma}^2 = \frac{z^\top (L^\top Q_x L)^{-1} z}{n - (p+1)}.

Using Equation 18 we immediately have

E[z^\top (L^\top Q_x L)^{-1} z] = \sigma^2\,\mathrm{tr}[(L^\top Q_x L)^{-1}(L^\top Q_x L)] = [n - (p+1)]\sigma^2.

Thus, σ̃² is unbiased. Actually, z′(L′Q_x L)^{-1}z is mathematically equivalent to the residual sum of squares for the regression model. Similarly, the restricted MLE of σ² for the ANOVA model discussed previously in the subsection on Maximum Likelihood is unbiased and consistent. Actually, at J = 2, the restricted MLE of σ² is equivalent to the MLE based on (y_{i1} − y_{i2}) ∼ N(0, 2σ²), i = 1, 2, ..., I.

The idea of REML was introduced by Bartlett (1937). Patterson and Thompson (1971) first applied it to estimating variance components with unbalanced designs. The name restricted ML was suggested by Harville (1977), who also showed its applications to general mixed effects models. Restricted ML estimation has become increasingly popular and is a method covered in most modern textbooks on linear models. Because REML is just an ML, the resulting estimator enjoys all the properties of the MLE: consistency, efficiency, and asymptotic normality. Of course, when the likelihood function for the residual is misspecified, the restricted MLE may not possess any of the desirable properties.
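The following sketch (an editorial illustration with simulated data) computes σ̃² from the error contrasts z = L′r and verifies that it equals the familiar residual sum of squares divided by n − (p + 1).

import numpy as np

rng = np.random.default_rng(4)
n, p = 50, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, p))])
y = X @ np.array([1.0, 0.5, -0.3, 0.2]) + rng.normal(scale=1.5, size=n)

Qx = np.eye(n) - X @ np.linalg.solve(X.T @ X, X.T)
eigval, eigvec = np.linalg.eigh(Qx)
L = eigvec[:, np.isclose(eigval, 1.0)]            # the n - (p+1) eigenvectors with eigenvalue 1
r = Qx @ y                                        # residual vector
z = L.T @ r                                       # error contrasts, Equation 18
sigma2_reml = z @ np.linalg.solve(L.T @ Qx @ L, z) / (n - (p + 1))
sigma2_ols = (r @ r) / (n - (p + 1))              # residual sum of squares / df
print(sigma2_reml, sigma2_ols)                    # numerically equal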

Robust Procedures

Robust procedures are closely related to ML and pseudo-ML. When the population distribution of the sample is unknown, robust procedures aim to achieve parameter estimates with efficiency close to that of a true MLE. In particular, when data are contaminated or contain outliers, the normal-distribution-based MLE is not only inefficient but also biased. A robust method also minimizes the effect of data contamination. For example, both the sample median and α-trimmed means are considerably more stable than the sample mean when a small percentage of the data are arbitrarily altered. Although there are many approaches to robust estimation, we will mainly discuss the M-estimator originally proposed by Huber (1967) because of its close relationship to ML.

Consider the linear regression model in Equation 12 with p nonstochastic predictors. Let x_i be the vector consisting of 1 and the p predictors from the ith observation, and r_i = y_i − x_i′β. Then the LS estimator β̂ can be regarded as obtained by solving the following set of (p + 1) equations:

\sum_{i=1}^{n} x_i(y_i - x_i^\top\beta) = \sum_{i=1}^{n} x_i r_i = 0.   (19)

Let w(t) be a decreasing but nonnegative function. The M-estimator for β is defined by

\sum_{i=1}^{n} x_i\, w(r_i/\sigma)\, r_i = 0.   (20)

Obviously, Equation 20 is a modification of Equation 19, and it reduces to Equation 19 when w(t) = 1. Thus, the LS estimator or the normal-distribution-based MLE of β is a special case of the M-estimator. The purpose of w(t) is to minimize the effect of observations having large values of r_i. The extent to which observations with large r_i affect the resulting estimator depends on the choice of w(t). Several weight functions have been proposed (see Table 11-1 of Hoaglin, Mosteller, & Tukey, 1983). Popular ones include the Huber-type weight

w(t) = \begin{cases} 1, & \text{if } |t| \le c \\ c/|t|, & \text{if } |t| > c \end{cases}   (21)

for a constant c, and the weight corresponding to the ML procedure of a t-distribution with m degrees of freedom,

w(t) = (m + 1)/(m + t^2).   (22)

When w(t) = sgn(t)/t, with sgn being the sign function, Equation 20 becomes

\sum_{i=1}^{n} x_i\,\mathrm{sgn}(r_i) = 0,   (23)

which defines the MLE of β when the e_i follow the double-exponential distribution. The β̂ satisfying Equation 23 minimizes

L_1(\beta) = \sum_{i=1}^{n} |r_i|,

which is often called the L_1-norm or least absolute deviation function.

The constant c in Equation 21 is a tuning parameter controlling the percentage of the observations being downweighted. This percentage increases as the tuning constant c decreases. For example, when c = Φ^{-1}(1 − α) and e_i ∼ N(0, σ²), about 2α × 100% of the observations are downweighted in Equation 20. Similarly, the m in Equation 22 can also be regarded as a tuning parameter, and the smaller the m, the smaller the weights for cases with larger r_i.

When solving Equation 20, we also need an estimate of σ, although it is confounded with the choice of c or m. Let r_i be the residuals evaluated at the solution of Equation 23. The scaled median of the non-null absolute residuals,

\hat{\sigma} = \frac{1}{\Phi^{-1}(3/4)}\,\mathrm{med}(|r_i|), \quad (r_i \ne 0),

is often recommended for use in Equation 20 (Maronna, Martin, & Yohai, 2006, p. 100). An alternative approach is to solve Equation 20 and

\sigma^2 = \frac{1}{n}\sum_{i=1}^{n} u(r_i/\sigma)\, r_i^2   (24)

simultaneously, where u(t) = w²(t)/τ for the Huber-type weight, with τ < 1 being determined by c and aiming for an unbiased σ̂², and u(t) = w(t) for the weight based on the t-distribution. Notice that Equation 24 defines an estimator for σ² that is a direct generalization of the normal-distribution-based MLE in Equation 6. One can also replace the n in the denominator by n − p − 1 for a small sample correction. Clearly, the contribution of cases with larger r_i is downweighted by u(r_i/σ) in Equation 24. An iterative procedure called iteratively reweighted least-squares (IRLS), to be discussed in a later section, can be easily implemented to solve Equations 20 and 24.

Robust M-estimation can be generalized to other models as well. For example, consider a p-variate sample y_1, y_2, ..., y_n with mean vector μ and covariance matrix Σ. Let d_i = [(y_i − μ)′Σ^{-1}(y_i − μ)]^{1/2} be the Mahalanobis distance, and let w_1(d) and w_2(d) be two decreasing functions of d. Robust estimates of μ and Σ are defined by

\sum_{i=1}^{n} w_1(d_i)(y_i - \mu) = 0   (25)

and

\sum_{i=1}^{n} [w_2(d_i)(y_i - \mu)(y_i - \mu)^\top - \Sigma] = 0,   (26)

and solved by IRLS. Obviously, Equations 25 and 26 are parallel to Equations 20 and 24; they originated from Maronna (1976). The resulting robust estimates μ̂ and Σ̂ can be further used for mean comparisons, principal components analysis, factor analysis, and SEM. According to the theory of estimating equations, robust estimators are consistent and asymptotically normally distributed, and their asymptotic covariance matrix can be consistently estimated by a sandwich-type covariance matrix. The details for obtaining this matrix will be given in the next subsection.

Notice that the weights in Equations 20 and 24 for the regression model are defined as functions of the residuals for given predictors. If the predictors x_i are also subject to sampling error, then it is more sensible to let y_i = (x_i′, y_i)′ and use Equations 25 and 26 to estimate the joint means and covariances of x and y. Then robust regression coefficients can be obtained by

\hat{\beta} = \hat{\Sigma}_{xx}^{-1}\hat{\sigma}_{xy},   (27)

where Σ̂_xx and σ̂_xy are the elements of Σ̂ corresponding to Σ_xx and σ_xy. We may call the estimator in Equation 27 a two-stage approach and the one defined in Equation 20 a direct approach.

The robustness of an M-estimator results from the fact that cases lying far from the model or the center of the majority of the data cloud are downweighted. Unlike outlier removal, the process is automatic. Existing results indicate that robust estimators typically have smaller SEs than the normal-distribution-based pseudo-MLE with real data (Wilcox, 2005; Yuan & Bentler, 1998; Zu & Yuan, 2010). They are also less biased when data are contaminated, and simulation results indicate that they perform almost as well as the normal-distribution-based MLE when data are truly normally distributed. In addition to M-estimators, many alternative robust estimators exist, for example, the L-estimator, R-estimator, minimum-volume-ellipsoid estimator, S-estimator, and τ-estimator. They may be more robust than the M-estimator, but they also tend to lose more efficiency when data are normally distributed. Most of them are not as straightforward as the M-estimator to generalize to different models. Robust M-procedures for estimating latent abilities in item response models are studied in Wainer and Wright (1980), Mislevy and Bock (1982), and Schuster and Yuan (2011). Robust procedures for SEM following Equations 25 and 26 are studied in Yuan and Bentler (1998). Factor analysis and SEM procedures parallel to Equations 20 and 24, without estimating the saturated model, are studied in Yuan and Zhong (2008). Systematic discussions of robust procedures for other models can be found in Wilcox (2005) and Maronna et al. (2006).
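The following sketch (ours; the tuning choices and the simplification τ = 1 are illustrative assumptions, so σ̂² is not bias-corrected) solves Equations 20 and 24 for the Huber-type weight by IRLS.

import numpy as np

rng = np.random.default_rng(5)
n = 200
x = rng.normal(size=n)
X = np.column_stack([np.ones(n), x])
y = X @ np.array([1.0, 0.5]) + rng.standard_t(df=3, size=n)  # heavy-tailed errors
y[:5] += 15.0                                                # a few gross outliers

c = 1.345                                         # a common Huber tuning constant
beta = np.linalg.solve(X.T @ X, X.T @ y)          # LS start
sigma = np.std(y - X @ beta)
for _ in range(100):
    r = y - X @ beta
    t = r / sigma
    w = np.where(np.abs(t) <= c, 1.0, c / np.abs(t))         # Equation 21
    beta_new = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * y))  # solves Equation 20
    sigma = np.sqrt(np.mean(w**2 * r**2))                    # Equation 24 with u = w^2, tau = 1
    if np.max(np.abs(beta_new - beta)) < 1e-8:
        beta = beta_new
        break
    beta = beta_new
print(beta, sigma)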

Estimating Equations

All the methods discussed so far (ML, LS, GLS, pseudo-ML, REML, M-estimators) generate a vector of estimators that satisfy a set of equations. In particular, for a sample y_1, y_2, ..., y_n, let g_i(y_i; θ) be a vector of functions that satisfy E[g_i(y_i; θ_0)] = 0. Then, under a set of standard regularity conditions (Yuan & Jennrich, 1998), the estimate θ̂ obtained by solving

\sum_{i=1}^{n} g_i(y_i; \hat{\theta}) = 0   (28)

is consistent and asymptotically normally distributed, and there exists

\sqrt{n}(\hat{\theta} - \theta_0) \overset{L}{\to} N(0, \Omega),   (29)

where Ω = A^{-1}B(A^\top)^{-1}, with A and B being consistently estimated by

\hat{A} = \frac{1}{n}\sum_{i=1}^{n} \frac{\partial g_i(y_i; \hat{\theta})}{\partial\theta^\top} \quad \text{and} \quad \hat{B} = \frac{1}{n}\sum_{i=1}^{n} g_i(y_i; \hat{\theta})\, g_i^\top(y_i; \hat{\theta}).   (30)

Equation 28 is called an estimating equation, and the resulting estimator is the estimating equation estimator. There are many applications of estimating equations in various disciplines of statistics because of their flexibility and established properties. The estimating equation approach was introduced by Godambe (1960). In the context of modeling repeated measures with categorical data, Liang and Zeger (1986) used generalized linear models for the marginal frequencies and accounted for the association among the repeated measures using a convenient and possibly misspecified covariance structure. They called the resulting equation the generalized estimating equation (GEE). Now, GEE is often used to refer to the estimating equation approach in general.

Clearly, the equations defining the robust estimators β̂, σ̂², μ̂, and Σ̂ in Equations 20, 24, 25, and 26 are estimating equations. So their covariance matrices or SEs can be consistently estimated using Equations 29 and 30. The versatility of the estimating equation approach is best illustrated by SEM with ordinal and continuous variables, where the estimates of polychoric, polyserial, and Pearson product-moment correlations are obtained from different approaches. They are then combined in a single matrix. The asymptotic covariance matrix of the correlations is needed for obtaining SEs of the parameter estimates when fitting the correlations by an SEM model. There are five sets of parameters before fitting the factor model: thresholds for ordinal data, means and standard deviations for continuous data, polychoric correlations between ordinal variables, polyserial correlations between continuous and ordinal variables, and Pearson product-moment correlations between continuous variables. In the estimation process, the thresholds are obtained by matching the corresponding probabilities under the standardized normal curve with the observed marginal frequencies. The polychoric correlations are obtained by the quasi-ML approach in which the obtained thresholds are treated as fixed constants. The correlations for continuous data are obtained using Pearson product-moment correlations. The polyserial correlations are obtained by quasi-ML considering the obtained thresholds as constants. Thus, the estimation process does not fit into any of the frameworks described previously. However, all the parameter estimates satisfy a set of equations as in Equation 28. A consistent covariance matrix for all the correlations is straightforward to obtain using Equations 29 and 30. Such an approach to obtaining a consistent covariance matrix estimate is behind the development of SEM for ordinal data (Jöreskog, 1994; Lee, Poon, & Bentler, 1995; Maydeu-Olivares, 2006; Muthén & Satorra, 1995; Yuan, Wu, & Bentler, 2011).
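As a minimal illustration of Equations 29 and 30 (ours; the Huber-type location problem and the fixed preliminary scale are assumptions made for brevity), the following sketch solves a one-parameter estimating equation and computes its sandwich standard error.

import numpy as np

rng = np.random.default_rng(6)
y = rng.standard_t(df=3, size=300) + 5.0
n = len(y)
c = 1.345
s = np.median(np.abs(y - np.median(y))) / 0.6745   # fixed MAD-based scale

theta = np.median(y)
for _ in range(50):                                # Newton steps to solve Equation 28
    g = np.clip((y - theta) / s, -c, c)            # g_i(y_i; theta)
    theta += s * g.mean() / np.mean(np.abs((y - theta) / s) <= c)

u = (y - theta) / s
g = np.clip(u, -c, c)
A_hat = np.mean(-(np.abs(u) <= c) / s)             # (1/n) sum of dg_i/dtheta (Equation 30)
B_hat = np.mean(g**2)                              # (1/n) sum of g_i^2 (Equation 30)
omega = B_hat / A_hat**2                           # Equation 29, scalar case
se = np.sqrt(omega / n)
print(theta, se)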

James-Stein and Ridge Estimators

We would think that the sample means are the best estimators of the population means for normally distributed populations. However, this is true only in a few special cases. In particular, using mean-square error (MSE) as a criterion for evaluating an estimator, Stein (1956) showed that there is always a better estimator than the sample mean when three or more variables are involved. Similarly, there is always a better estimator for the population covariance matrix than the sample covariance matrix. This may seem odd because we have already learned that the MLE is most efficient. But that efficiency is attained when compared with all unbiased estimators. As is well known, for a parameter estimate, MSE = variance + bias². If the variance can be greatly reduced by allowing a small amount of bias, then the resulting estimator will have a smaller MSE. A large class of such estimators exists, commonly called Stein or James-Stein estimators, because of the original work of Stein (1956) and James and Stein (1961).

Let y_i be a sample of size n from N_p(μ, Σ), ȳ be the vector of sample means, and S be the MLE of Σ. For an estimator μ̂, let the distance

D(\hat{\mu}, \mu) = (\hat{\mu} - \mu)^\top \Sigma^{-1} (\hat{\mu} - \mu)

be the criterion for comparing estimators; it is often called the loss function in decision-theoretic statistics (Ferguson, 1967). When p ≥ 3, James and Stein (1961) gave

\hat{\mu}_{js} = \left[1 - \frac{p-2}{(n-p+2)\,\bar{y}^\top S^{-1}\bar{y}}\right]\bar{y}

and showed that E[D(μ̂_js, μ)] < E[D(ȳ, μ)] for all μ and Σ. They also gave a uniformly better estimator for Σ than S or nS/(n − 1) using the ML discrepancy F_NML(Σ̂, Σ) defined as in Equation 8. Using the expected ML discrepancy, Haff (1980) found that

\hat{\Sigma} = \frac{1}{n-1}\sum_{i=1}^{n} (y_i - \bar{y})(y_i - \bar{y})^\top + \frac{p-1}{\mathrm{tr}(S^{-1}C)}\,C

is an even better estimator of Σ, where C is any positive definite matrix. Using the loss function F_NML, Haff also showed that nS/(n − 1) is the best among all estimators of the form aS. However, if using the quadratic loss function D(Σ̂, Σ) = tr{[(Σ̂ − Σ)Σ^{-1}]²}, the best estimator of the form aS is (n − 1)S/(n + p).

James-Stein-type estimators have been generalized to many other models. In particular, the well-known ridge regression estimator (Hoerl & Kennard, 1970),

\hat{\beta}_r = (X^\top X + \kappa I)^{-1} X^\top y,   (31)

can be regarded as a James-Stein estimator (see Draper & van Nostrand, 1979). Efron and Morris (1973) showed that James-Stein estimators are also empirical Bayes estimators and vice versa. A ridge covariance matrix of the form S + κI is also a James-Stein estimator. James-Stein and ridge estimators have found applications in almost all areas of statistics. Although biased, they are closer to the population values of the parameters on average. In covariance structure analysis with a ridge-type covariance matrix S + κI, letting the term κI be part of the error variances, Yuan and Chan (2008) found that the resulting estimators are less biased and more accurate than those obtained from modeling the sample covariance matrix, even when data are normally distributed. In particular, the ridge procedure greatly increases the convergence rate in item-factor analysis with small samples (Yuan, Wu, & Bentler, 2011). Additional information on the Stein estimator can be found in Efron and Morris (1977).
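The following sketch (an editorial illustration with simulated data) computes the James-Stein estimator μ̂_js given above and shows how it shrinks the sample mean vector toward the origin.

import numpy as np

rng = np.random.default_rng(7)
n, p = 30, 5
mu_true = np.array([0.3, -0.2, 0.1, 0.0, 0.4])
Y = rng.multivariate_normal(mu_true, np.eye(p), size=n)

y_bar = Y.mean(axis=0)
S = np.cov(Y, rowvar=False, bias=True)            # MLE of Sigma (divisor n)
shrink = 1.0 - (p - 2) / ((n - p + 2) * (y_bar @ np.linalg.solve(S, y_bar)))
mu_js = shrink * y_bar                            # James-Stein estimate
print(y_bar, mu_js)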

Bayes Estimation

Let y_1, y_2, ..., y_n be a sample from a parametric model with a pdf f(y; θ). The Bayesian approach to parameter estimation is different from any of the methods in the previous sections. It considers the parameter vector θ as a vector of random variables with a pdf f(θ), called the prior distribution. Formally, the pdf f(y; θ) needs to be rewritten as f(y|θ), the conditional pdf of y given θ. Bayes estimates are based on the so-called posterior distribution, the conditional distribution of θ given the current sample. Let Y be the data matrix of the sample. Conditional on θ, its pdf is given by

f(Y|\theta) = \prod_{i=1}^{n} f(y_i|\theta).

Thus, the joint pdf of (Y, θ) and the conditional pdf of (θ|Y) are, respectively,

f(Y, \theta) = f(Y|\theta)\,f(\theta) \quad \text{and} \quad f(\theta|Y) = c(Y)\,f(Y|\theta)\,f(\theta),   (32)

where c(Y) = 1/\int f(Y, \theta)\, d\theta.

The conditional distribution f(θ|Y) in Equation 32 is referred to as the posterior distribution. Bayesian estimates for θ are based on this posterior distribution. Commonly used estimates are the posterior mean θ̂ = E(θ|Y), the posterior mode that maximizes f(θ|Y), and the posterior median θ̂ = (θ̂_1, θ̂_2, ..., θ̂_q)′, where each θ̂_j is defined by the relationship F_j(θ̂_j|Y) = 1/2, with F_j(θ_j|Y) being the marginal posterior cumulative distribution function for the jth parameter.

The prior pdf f(θ) represents one's knowledge about θ before the current sample is obtained. The prior distribution may also contain unknown parameters, which are called hyperparameters. In classical Bayes analysis, the hyperparameters are determined subjectively so that f(θ) is completely determined. When hyperparameters are estimated from the data Y, the resulting estimates are called empirical Bayes estimates. It is possible that different people have different amounts of information about θ. In practice, choosing an f(θ) to summarize the prior information may not be a trivial matter. Most priors in the Bayesian literature are either Jeffreys noninformative priors or conjugate priors. The Jeffreys prior is proportional to the square root of the determinant of the information matrix for f(y|θ), and it has the interesting property of being invariant under reparameterization of the parameter vector. A prior distribution f(θ) is a conjugate prior if the resulting f(θ|Y) belongs to the same family as f(θ). Obviously, both Jeffreys and conjugate priors depend on the chosen likelihood function.

The rest of this section contains a simple example followed by the development of the Bayes estimators for the linear regression model and the covariance matrix of a normally distributed population, where analytical solutions are available. Bayes estimation with a factor analysis model will be presented in a following section on Markov chain Monte Carlo (MCMC), an important tool for Bayes inference when analytical solutions are unavailable. Readers who are not interested in details may skip the material for the regression and the factor analysis models without loss of continuity. We use the notation ∝, which reads as proportional to, to simplify the presentation. For example, we write the density function of y ∼ N(μ, σ²) as

f(y; \mu, \sigma^2) \propto \exp[-(y - \mu)^2/(2\sigma^2)]

by omitting the constant term 1/(2πσ²)^{1/2}. Consider a random sample y_1, y_2, ..., y_n from N(μ, σ_0²), where σ_0² is known. Let y = (y_1, y_2, ..., y_n)′. Then,

f(y|\mu) \propto \exp\!\left[-\frac{1}{2\sigma_0^2}\sum_{i=1}^{n}(y_i - \mu)^2\right].

Let the prior distribution for μ be μ ∼ N(ν, τ²). Simplifying Equation 32 yields

(\mu|y) \sim N(\nu_y, \tau_y^2),   (33)

where

\nu_y = \frac{\tau^2\bar{y} + (\sigma_0^2/n)\nu}{\tau^2 + \sigma_0^2/n} \quad \text{and} \quad \tau_y^2 = \left(\frac{1}{\sigma_0^2/n} + \frac{1}{\tau^2}\right)^{-1}.   (34)

Because the posterior distribution belongs to the normal family, N(ν, τ²) is a conjugate prior for μ. The value of ν_y in Equation 34 is the classical Bayesian estimate for μ, and Equation 33 is the basis for inference regarding μ. We can also estimate the hyperparameters ν and τ² from the data to get empirical Bayes estimates of μ and the posterior variance. It follows from f(y_i, μ) = f(y_i|μ)f(μ) that the marginal distribution of y_i is N(ν, σ_0² + τ²). Thus, we may estimate ν by the sample mean ȳ and σ_0² + τ² by the sample variance s_y², or τ̂² = max(s_y² − σ_0², 0). This results in the empirical Bayes estimates

\nu_{eby} = \frac{\hat{\tau}^2\bar{y} + (\sigma_0^2/n)\bar{y}}{\hat{\tau}^2 + \sigma_0^2/n} \quad \text{and} \quad \tau_{eby} = \left(\frac{1}{\sigma_0^2/n} + \frac{1}{\hat{\tau}^2}\right)^{-1/2}.

Clearly, the confidence interval for μ can be obtained using these estimates together with Equation 33. When s_y² ≤ σ_0², τ_eby = 0, which may imply that the assumed value σ_0² is not proper; alternatively, it is also possible that the prior distribution or even the Bayesian method is not proper for analyzing the data. Notice that the posterior mean ν_y in Equation 34 is a weighted average of the prior mean ν and the sample mean ȳ; the posterior precision 1/τ_y² is the sum of the precisions of the sample mean and the prior mean. More prior information about μ is reflected by a greater prior precision 1/τ², which further leads to a more accurate posterior mean. As n → ∞, ν_y → ȳ and nτ_y² → σ_0². Thus, the effect of prior information decreases as the sample size increases.
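The following sketch (ours; the prior hyperparameters and simulated data are arbitrary choices) computes the posterior quantities of Equations 33 and 34 and their empirical Bayes counterparts.

import numpy as np

rng = np.random.default_rng(8)
sigma0_sq = 4.0                                    # known error variance
y = rng.normal(loc=2.0, scale=np.sqrt(sigma0_sq), size=25)
n, y_bar, s_y_sq = len(y), y.mean(), y.var(ddof=1)

nu, tau_sq = 0.0, 1.0                              # prior mu ~ N(nu, tau^2)
tau_y_sq = 1.0 / (1.0 / (sigma0_sq / n) + 1.0 / tau_sq)              # Equation 34
nu_y = (tau_sq * y_bar + (sigma0_sq / n) * nu) / (tau_sq + sigma0_sq / n)

tau_hat_sq = max(s_y_sq - sigma0_sq, 0.0)          # empirical Bayes hyperparameter
if tau_hat_sq > 0:
    tau_eby = (1.0 / (sigma0_sq / n) + 1.0 / tau_hat_sq) ** -0.5
    nu_eby = (tau_hat_sq * y_bar + (sigma0_sq / n) * y_bar) / (tau_hat_sq + sigma0_sq / n)
    print(nu_y, np.sqrt(tau_y_sq), nu_eby, tau_eby)
else:
    print(nu_y, np.sqrt(tau_y_sq))                 # tau_eby = 0 in this case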

The Jeffreys noninformative prior for the parameter μ in N(μ, σ_0²) is f(μ) ∝ 1 with μ ∈ (−∞, ∞), which is improper (no such distribution exists). Simplifying Equation 32 yields

(\mu|y) \sim N(\bar{y}, \sigma_0^2/n).   (35)

Thus, with the Jeffreys prior, point estimates (mean, mode, and median), SEs, and confidence intervals based on the posterior distribution in Equation 35 are identical to those based on the results of ML. The posterior distribution in Equation 35 can be regarded as a special case of Equation 33 when τ² = ∞.

We next consider Bayesian estimates for the linear regression model as represented in Equation 12 with p nonstochastic predictors, where X is an n × (p + 1) matrix and e_i ∼ N(0, σ²). The development needs seemingly complicated notation, but it only involves linear algebra with matrices and the concept of conditional distribution. The conditional distribution of y can be written as

L(y|X, \beta, \sigma^2) \propto (2\sigma^2)^{-n/2}\exp\!\left[-\frac{(y - X\beta)^\top(y - X\beta)}{2\sigma^2}\right].   (36)

We need to introduce the so-called inverse gamma distribution for the prior and posterior distributions of σ 2 . If T follows a gamma distribution, then 1/T

f (t; a, b) =

ba (1/t)a+1 exp(−b/t) (a)

(37)

and is denoted by  −1 (a, b). The conjugate priors for the linear regression model are σ 2 ∼  −1 (a, b) and (β|σ 2 ) ∼ N (β 0 , β ) with β = σ 2 V, where a, b, β 0 and V are hyperparameters. Thus, p(β, σ 2 ) = p(β|σ 2 )p(σ 2 ) ∝ (σ 2 )−(p+2a+3)/2 . 1 exp − [2b+ 2σ 2 / (β − β 0 ) V −1 (β − β 0 )] . (38) It follows from Equations 36 and 38 that p(β, σ 2 |y, X) ∝ (σ 2 )−(n+p+2a+3)/2 ⎡ 2b + (β − β 0 ) V −1 (β − β 0 ) ⎢ +( y − Xβ) ( y − Xβ) exp ⎢ − ⎣ 2σ 2

⎤ ⎥ ⎥. ⎦ (39)

Notice that the numerator within the exponential term in Equation 39 only depends on β. Comparing Equation 39 with Equation 37, we obtain
\[
(\sigma^2|\beta, y, X) \sim \Gamma^{-1}\left(\frac{n+p+2a+1}{2},\; \frac{2b + (\beta-\beta_0)'V^{-1}(\beta-\beta_0) + (y-X\beta)'(y-X\beta)}{2}\right). \qquad (40)
\]
After some algebraic manipulation we can rewrite Equation 39 as
\[
p(\beta, \sigma^2|y, X) \propto (\sigma^2)^{-(p+1)/2}\exp\left[-\frac{(\beta-\beta_y)'V_y^{-1}(\beta-\beta_y)}{2\sigma^2}\right]
\times (\sigma^2)^{-(n+2a+2)/2}\exp\left(-\frac{2b+u^2}{2\sigma^2}\right), \qquad (41)
\]
where
\[
\beta_y = (V^{-1}+X'X)^{-1}[V^{-1}\beta_0 + (X'X)\hat{\beta}], \qquad
u^2 = (y-X\beta_0)'(I_n + XVX')^{-1}(y-X\beta_0), \qquad (42)
\]
with β̂ = (X'X)⁻¹X'y and V_y = (V⁻¹ + X'X)⁻¹. Because the last term on the right of Equation 41 does not depend on β, we have
\[
(\beta|\sigma^2, y, X) \sim N(\beta_y, \sigma^2 V_y). \qquad (43)
\]

Because random numbers following the normal and inverse gamma distributions are easy to generate, one can simulate (β, σ²) from the posterior distribution defined in Equation 39 using MCMC. In particular, when iteratively simulating β from Equation 43 and σ² from Equation 40 many times, the pair of random numbers (β, σ²) at the end of this process will approximately follow the joint distribution in Equation 39. With independent draws of (β, σ²) from Equation 39, we can estimate the posterior mean, posterior median, and posterior confidence interval for β and σ² using the sample counterparts. We will further discuss this simulation approach in a later section on computing estimators.

For the regression model, we can analytically obtain the marginal posterior distributions of (σ²|y, X) and (β|y, X) by integrating out β and σ² from the joint posterior distribution, respectively. Notice that the first term on the right of Equation 41 disappears when taking the integration with respect to β. Comparing the second term to Equation 37 yields
\[
(\sigma^2|y, X) \sim \Gamma^{-1}\left(\frac{n+2a}{2},\; \frac{2b+u^2}{2}\right).
\]

Also notice that the numerator in the exponential term of Equation 39 can be regarded as a constant when taking the integral with respect to σ². By letting t = 1/σ², the integral is transformed into a gamma function, which immediately yields a multivariate t-distribution for (β|y, X) with df = n + 2a, mean β_y given in Equation 42, and a scatter matrix
\[
\Sigma_{\beta_y} = \frac{2b+u^2}{n+2a}\,V_y.
\]
Clearly, the posterior mean β_y is again a weighted average of the prior mean β₀ and the MLE β̂. The weights are proportional to their respective precision matrix. The hyperparameters a and b only affect the precision of β_y, not β_y itself. Notice that the variance of σ² ~ Γ⁻¹(a, b) is b²/[(a − 1)²(a − 2)]. A prior for σ² with more uncertainty also passes the uncertainty to β_y through a more dispersed Σ_{β_y}. When letting β₀ = 0 and V = I/κ, the β_y in Equation 42 is identical to the β̂_r in Equation 31. Thus, the ridge regression estimator is a Bayesian estimator and vice versa.

The Jeffreys priors for the regression model are p(β) ∝ 1 and p(σ²) ∝ σ⁻². Using essentially the same algebra, we will find that
\[
(\sigma^2|y, X) \sim \Gamma^{-1}\left(\frac{n-p-1}{2},\; \frac{(n-p-1)\tilde{\sigma}^2}{2}\right)
\]
and (β|y, X) follows a multivariate t-distribution with mean β̂, dispersion matrix (X'X)⁻¹σ̃², and degrees of freedom n − p − 1, where
\[
\tilde{\sigma}^2 = \frac{1}{n-p-1}(y-X\hat{\beta})'(y-X\hat{\beta}).
\]
In particular, let ω̂_j be the square root of the jth diagonal element of (X'X)⁻¹σ̃²; then
\[
\frac{\beta_j - \hat{\beta}_j}{\hat{\omega}_j} \sim t_{n-p-1},
\]
where t_{n−p−1} is the notation for the Student t-distribution with n − p − 1 degrees of freedom. Thus, a confidence interval for β_j with the Jeffreys prior is the same as that obtained from the conventional LS regression analysis. In many other contexts, a Bayesian analysis with a Jeffreys prior also yields the same inference as the conventional statistical analysis (Box & Tiao, 1973).

Another example for which the Bayesian approach enjoys an analytical solution is the covariance matrix of a normally distributed sample, where the conjugate prior for Σ is the inverse multivariate gamma distribution, which is obtained by inverting a random matrix that follows a multivariate gamma distribution. The Wishart distribution is a special member of the multivariate gamma distribution, which extends the chi-square distribution to the multivariate case. Let S be the sample covariance matrix of a sample of size n from a p-variate normal distribution y ~ N_p(μ, Σ). Then nS follows the Wishart distribution W_p(Σ, n − 1). Assume Σ has a prior inverse Wishart distribution W_p⁻¹(hH, n₀), where h and n₀ are scalar hyperparameters and H is a p × p matrix of hyperparameters. Then the posterior distribution of Σ given S follows the inverse Wishart distribution W_p⁻¹((n − 1)S + hH, n₀ + n − 1) with
\[
E(\Sigma|S) = k_1 S + k_2 H,
\]
where k₁ = (n − 1)/(n + n₀ − p − 2) and k₂ = h/(n + n₀ − p − 2). Thus, a ridge covariance matrix Σ̂ = S + κI is also a Bayesian covariance matrix.

In summary, the Bayesian approach to parameter estimation needs a set of prior distributions. Most Bayesian analyses use either conjugate priors or noninformative priors. How to let priors reflect the existing information may need experience in addition to substantive knowledge and statistical skills. As the effect of priors disappears when n increases, there is no need to worry about the effect of priors when n is large. However, Bayesian analysis is most useful when having a small sample (see Lee & Song, 2004). For example, the sample covariance matrix S might be singular when n is not large enough, and many multivariate procedures based on analyzing S cannot be performed. Analysis based on a Bayesian or a ridge covariance matrix S + κI may still yield reasonable results when one proceeds with caution.

One concern with the Bayesian approach is the effect of priors (Berger & Berliner, 1986). Two researchers will get different results with the same sample if they use different priors. Another concern is the effect of a misspecified f(y|θ). Because practical multivariate data sets may not follow any known distribution, many choose the normal distribution as f(y|θ) for convenience. Such a choice also makes it easier to identify the conjugate priors in most applications. However, inference based on the resulting posterior distribution may not provide a good summary of the data. Suggestions for improving inference in this situation include using the bootstrap procedure to evaluate the performance of Bayesian estimates (Laird & Louis, 1987) and performing transformations on the sample before applying a Bayesian method (Hayashi & Yuan, 2003). Readers who are interested in the Bayesian approach are referred to Box and Tiao (1973) for classical Bayesian analysis; to Gelman et al. (2004) for applications of Bayesian methods to linear and generalized models; to Carlin and Louis (2009) for the comparison of Bayesian and frequentist approaches to data analysis; and to Lee (2007), Albert (1992), Patz and Junker (1999), and Baker and Kim (2004) for models with latent variables. A simple introduction to Bayesian statistics is given by Berry (1996).
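Returning to the inverse Wishart posterior mean E(Σ|S) = k₁S + k₂H given above, the short sketch below (our own addition; the sample and the hyperparameters h, n₀, and H are hypothetical, with n₀ coded as n_prior) shows how the Bayesian covariance matrix can be much better conditioned than S when n is small:

```python
import numpy as np

rng = np.random.default_rng(9)

# Hypothetical p-variate sample and inverse-Wishart hyperparameters (h, n_prior, H).
n, p = 30, 5
S = np.cov(rng.normal(size=(n, p)), rowvar=False)   # sample covariance matrix

h, n_prior, H = 5.0, 10, np.eye(p)                   # prior ~ W^{-1}(h*H, n_prior)
k1 = (n - 1) / (n + n_prior - p - 2)
k2 = h / (n + n_prior - p - 2)
Sigma_bayes = k1 * S + k2 * H                        # posterior mean E(Sigma | S)

# With H = I this has the form of a ridge covariance matrix S + kappa*I (up to
# the scaling k1), which can stabilize analyses when S is nearly singular.
print("condition numbers:", np.linalg.cond(S).round(1), np.linalg.cond(Sigma_bayes).round(1))
```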

Additional Approaches

In addition to the methods reviewed in this section, the so-called method of moments plays an important role in estimation when the MLE is difficult to compute. With the advance of computational power, the method of moments is less frequently used. Other methods not reviewed include profile ML, conditional ML, empirical ML, and two-stage least-squares (2SLS), which also have important applications. Interested readers are referred to Pawitan (2001) for an introduction to profile, conditional, and empirical ML, and to Bollen (1996) for an application of 2SLS in covariance structure analysis. Baker and Kim (2004) contains an application of conditional ML to the Rasch model.

Methods for Estimating Standard Errors and Confidence Intervals

This section contains two methods for estimating standard errors. The so-called δ-method is based on asymptotics, whereas the bootstrap procedure is based on simulation. Although the accuracy of the bootstrap also depends on sample size, it may perform better than the δ-method when n is not large enough.

δ-Method

Let θ̂ be an asymptotically normally distributed estimate for θ₀, which is calculated from a sample of size n. In other words, the estimate satisfies
\[
\sqrt{n}(\hat{\theta} - \theta_0) \xrightarrow{L} N(0, \Omega). \qquad (44)
\]
All the estimators discussed so far satisfy Equation 44. When θ̂ is a pseudo-MLE, Ω is just a sandwich-type covariance matrix that can be consistently estimated by nΩ̂_n, with Ω̂_n being given in Equation 14. A Bayesian estimate also satisfies Equation 44, with Ω being consistently estimated by n times the posterior covariance matrix when θ̂ is the posterior mean. Let γ = g(θ) be a vector of continuously differentiable functions of θ and denote ġ(θ) = ∂g(θ)/∂θ'. Then, the δ-method states that the transformed parameter vector also is asymptotically normally distributed—that is,
\[
\sqrt{n}(\hat{\gamma} - \gamma_0) \xrightarrow{L} N(0, \Pi), \qquad (45)
\]
where Π = ġ(θ₀)Ωġ'(θ₀). If Ω̂ is a consistent estimate of Ω, then Π̂ = ġ(θ̂)Ω̂ġ'(θ̂) is also consistent for Π. Thus, according to the δ-method, the SE of γ̂_j can be obtained by π̂_j/√n, where π̂_j is the square root of the jth diagonal element of Π̂ corresponding to γ̂_j.

The δ-method has many applications. Consider (xᵢ, yᵢ), i = 1, 2, ..., n, from a bivariate population with finite fourth-order moments. Then the vector θ̂ = (s_xx, s_xy, s_yy)' containing two sample variances and a sample covariance satisfies Equation 44, where Ω is a 3 × 3 matrix consisting of fourth-order population moments. Because θ̂ is the MLE of θ = (σ_xx, σ_xy, σ_yy)' for normally distributed data, Ω̂ can be obtained by inverting the corresponding information matrix or by a sandwich-type covariance matrix, depending on the distribution of the population. The sample correlation r = s_xy/(s_xx s_yy)^{1/2} is obviously a continuously differentiable function of the sample variances and covariance. As an estimate of the population correlation ρ, the SE of r can be consistently estimated using Equation 45. Further, g(r) = log[(1 + r)/(1 − r)]/2 is a continuously differentiable function of r, and its SE follows from another application of Equation 45. Under the assumption of normally distributed data, the above two steps lead to Var[√n g(r)] = 1. This result is the well-known Fisher's z-transformation, where the SE of g(r) is usually given by 1/√(n − 3) instead of 1/√n for a small sample correction. Notice that the variance of g(r) for Fisher's z-transformation does not depend on any unknown population parameters. Therefore, it is also called a variance-stabilizing transformation. Such a transformation for a single parameter can be obtained by equating Π to a constant and solving for the function g using differential equations.
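To make the two applications above concrete, the sketch below (our own addition; the bivariate data are simulated) estimates Ω by the sample covariance matrix of the squared and cross-product deviations, one of the fourth-order-moment options mentioned above, and then applies the δ-method to r and to Fisher's z:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical bivariate sample; any data with finite fourth-order moments work.
n = 200
x = rng.normal(size=n)
y = 0.6 * x + rng.normal(scale=0.8, size=n)

dx, dy = x - x.mean(), y - y.mean()
sxx, sxy, syy = (dx * dx).mean(), (dx * dy).mean(), (dy * dy).mean()
r = sxy / np.sqrt(sxx * syy)

# Estimate Omega by the sample covariance matrix of the squared and
# cross-product deviations (fourth-order sample moments).
D = np.column_stack([dx * dx, dx * dy, dy * dy])
Omega_hat = np.cov(D, rowvar=False)

# Gradient of r = s_xy / sqrt(s_xx * s_yy) with respect to (s_xx, s_xy, s_yy).
g_dot = np.array([-r / (2 * sxx), 1.0 / np.sqrt(sxx * syy), -r / (2 * syy)])
se_r = np.sqrt(g_dot @ Omega_hat @ g_dot / n)       # delta-method SE of r

# Second application: Fisher's z, g(r) = log[(1 + r)/(1 - r)]/2, whose
# derivative is 1/(1 - r^2); under normality its SE is about 1/sqrt(n).
z = 0.5 * np.log((1 + r) / (1 - r))
se_z = se_r / (1 - r**2)

print(f"r = {r:.3f}, delta-method SE = {se_r:.3f}")
print(f"z = {z:.3f}, SE = {se_z:.3f}; normal-theory value 1/sqrt(n-3) = {1/np.sqrt(n-3):.3f}")
```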

The Bootstrap

In many estimation problems, it is straightforward to calculate a parameter estimate θ̂, but it is not obvious how to obtain the corresponding SEs. For example, the eigenvalues of a covariance matrix can be obtained without difficulty; however, there is no simple formula for calculating their SEs or evaluating their distribution when the population distribution is unknown. The bootstrap provides an easy way to estimate such an SE by simulation (Efron & Tibshirani, 1993). The bootstrap method can also be used for model testing and power evaluation when the distribution of the statistic is unknown. Unlike the traditional Monte Carlo approach that generates random numbers from a known population, the bootstrap draws values from the discrete empirical distribution that puts probability 1/n at each observed yᵢ. When sampling from an estimated population—for example, N(ȳ, S)—the method is referred to as the parametric bootstrap. It has been shown that the bootstrap approach provides consistent SEs for continuous functions of sample moments (Mammen, 1992), which cover almost all the commonly used statistics in social science research.

Let θ be an interesting parameter and θ̂ be its estimator, based on a random sample y₁, y₂, ..., yₙ. To use the nonparametric bootstrap, we draw an observation with replacement from the observed sample Y = (y₁, y₂, ..., yₙ). We then repeatedly sample additional observations with replacement until we have a sample of size n. Denote the n independent draws as Y* = (y*₁, y*₂, ..., y*ₙ). This sample is called a bootstrap sample. We can now calculate the estimator of θ using Y* in the same way as θ̂ is obtained from Y. Denote the newly calculated estimate as θ̂*. With the help of the computer, we can easily generate B replications of θ̂*: θ̂*₁, θ̂*₂, ..., θ̂*_B. These represent a bootstrap sample for θ̂. Thus, the SE of θ̂, a confidence interval for θ, as well as the distributional shape of θ̂ can be estimated through this bootstrap sample. Specifically, the bootstrap standard error of θ̂ is given by
\[
SE_B = \left[\frac{1}{B-1}\sum_{b=1}^{B}(\hat{\theta}_{*b} - \bar{\theta}_*)^2\right]^{1/2},
\]
where θ̄* = Σ_{b=1}^{B} θ̂*_b / B.

Let θ̂*(1) ≤ θ̂*(2) ≤ ··· ≤ θ̂*(B) be the order statistics for the θ̂*_b's. The empirical distribution that puts a probability of 1/B at each θ̂*(b) will be the bootstrap estimate for the distribution of θ̂. The bootstrap percentile confidence interval for θ with level 1 − 2α is given by
\[
[\hat{\theta}_{*([B\alpha])},\; \hat{\theta}_{*([B(1-\alpha)])}], \qquad (46)
\]
where [Bα] is the integer part of Bα.

In contrast to the SEs obtained by the δ-method, the bootstrap approach does not assume that θ̂ asymptotically follows a normal distribution. For example, we may use the histogram or quantile-quantile (QQ) plot of the θ̂*_b to study the distribution of θ̂. Actually, the confidence interval in Equation 46 may not be symmetric about θ̂. When the histogram or the QQ plot indicates a skewed distribution, an even better confidence interval, called the bias corrected and accelerated (BCa) confidence interval by Efron (1987), can be calculated. One needs to calculate two additional numbers to obtain the BCa interval. The bias correction number is calculated as
\[
\hat{z}_0 = \Phi^{-1}\big(\#\{\hat{\theta}_{*b} < \hat{\theta}\}/B\big),
\]
where Φ is the cumulative distribution function of N(0, 1). The acceleration number â is estimated by
\[
\hat{a} = -\frac{\sum_{b=1}^{B}(\hat{\theta}_{*b} - \bar{\theta}_*)^3}{6\left\{\sum_{b=1}^{B}(\hat{\theta}_{*b} - \bar{\theta}_*)^2\right\}^{3/2}},
\]
which measures the skewness of the θ̂*_b. Let z^{(α)} be the 100αth percentile of N(0, 1) and
\[
\alpha_1 = \Phi\left(\hat{z}_0 + \frac{\hat{z}_0 + z^{(\alpha)}}{1 - \hat{a}(\hat{z}_0 + z^{(\alpha)})}\right), \qquad
\alpha_2 = \Phi\left(\hat{z}_0 + \frac{\hat{z}_0 + z^{(1-\alpha)}}{1 - \hat{a}(\hat{z}_0 + z^{(1-\alpha)})}\right).
\]
The BCa confidence interval for θ is [θ̂*([Bα₁]), θ̂*([Bα₂])]. Notice that when ẑ₀ = â = 0, the BCa interval is identical to the percentile interval. Both the percentile and the BCa interval bounds can be transformed. For example, when h(θ) is a monotonic function of θ, then the percentile confidence interval for h(θ) is given by [h(θ̂*([Bα])), h(θ̂*([B(1−α)]))].

Suppose the exact distribution of θ̂ is available and one can construct an exact confidence interval for θ. The BCa interval can approximate the exact confidence interval to the order of 1/n, whereas the percentile confidence interval as well as the symmetric interval based on the δ-method can only approximate the exact confidence interval to the order of 1/√n (Efron & Tibshirani, 1993). Therefore, without knowing the exact distribution of θ̂ in general, the BCa is the preferred confidence interval for θ.

As mentioned earlier, the bootstrap can also be applied to the estimation of SEs for Bayesian estimates. In particular, when the parametric model f(y|θ) is misspecified, the posterior SEs do not describe the variability in θ̂ correctly. By using the posterior distribution of θ based on repeatedly sampling from the empirical distribution of y, the bootstrap SEs can correct the biases of a misspecified model (Laird & Louis, 1987).

It is well-known that SEs based on asymptotics tend to be smaller than empirical ones at smaller sample sizes. Because the bootstrap is based on simulations, it automatically picks up the effect of a finite n and provides more accurate SEs. However, it does not always perform better. For example, with normally distributed samples, the confidence interval for the Pearson correlation ρ based on Fisher's z-transformation tends to perform better than that based on the bootstrap (Efron, 1988; Rasmussen, 1987). When the normal distribution is not achievable, the commonly used SE following Fisher's z-transformation is no longer consistent, whereas the bootstrap continues to provide consistent SEs. Notice that the SE based on the bootstrap for a misspecified model is asymptotically equivalent to that based on the sandwich-type covariance matrix given in Equation 14, which accounts for both the misspecified distribution and the misspecified structural model. Some software may contain SEs based on a sandwich-type covariance matrix formulated assuming a correctly specified model. Such SEs are not consistent and are not asymptotically equivalent to those based on the nonparametric bootstrap (Yuan & Hayashi, 2006).

The key to constructing a bootstrap procedure is to let the bootstrap resampling closely mimic the process that generated the original data from the underlying population. When cases in the original sample are correlated, one has to estimate the independent random components first using a model, then perform random sampling from these components, and finally use the model to construct the bootstrap replications. More applications of the bootstrap with different models and data structures are given in Efron and Tibshirani (1993).
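The nonparametric bootstrap, the percentile interval in Equation 46, and the BCa interval can be coded directly from the formulas above. The sketch below is our own addition; the data and the choice of statistic (the largest eigenvalue of a covariance matrix, for which no simple SE formula exists) are hypothetical, and ẑ₀ and â are computed from the bootstrap replications as in the text.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)

# Hypothetical trivariate sample.
n = 150
Y = rng.multivariate_normal([0.0, 0.0, 0.0],
                            [[2.0, 0.5, 0.3], [0.5, 1.0, 0.2], [0.3, 0.2, 1.0]],
                            size=n)

def stat(sample):
    return np.linalg.eigvalsh(np.cov(sample, rowvar=False)).max()

theta_hat = stat(Y)
B, alpha = 2000, 0.05
boot = np.array([stat(Y[rng.integers(0, n, size=n)]) for _ in range(B)])

se_boot = boot.std(ddof=1)                              # bootstrap SE
pct_ci = np.quantile(boot, [alpha, 1 - alpha])          # percentile interval (Eq. 46)

# BCa interval: bias-correction number z0-hat and acceleration number a-hat.
z0 = norm.ppf((boot < theta_hat).mean())
d = boot - boot.mean()
a_hat = -(d**3).sum() / (6.0 * ((d**2).sum()) ** 1.5)

def adjusted(level):
    z = norm.ppf(level)
    return norm.cdf(z0 + (z0 + z) / (1.0 - a_hat * (z0 + z)))

bca_ci = np.quantile(boot, [adjusted(alpha), adjusted(1 - alpha)])

print(f"estimate {theta_hat:.3f}, bootstrap SE {se_boot:.3f}")
print("percentile CI:", pct_ci.round(3), " BCa CI:", bca_ci.round(3))
```

A histogram or QQ plot of the vector `boot` gives the bootstrap estimate of the distributional shape of θ̂, as described above.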

Algorithms

This section describes four computational methods for obtaining the parameter estimates defined previously in the section on Methods for Estimating Parameters. None of the methods is needed if an analytical solution for the estimator is available.

Newton-Type Algorithms

Newton-type algorithms are used to obtain parameter estimates defined by the maximum or minimum of an objective function or the root of a set of equations. Examples are the MLE, the estimating equation estimator, and the posterior mode. Let l(θ) be the log likelihood or another objective function whose maximum or minimum defines the estimator, and let l̇(θ) be the vector of partial derivatives of l(θ) with respect to the elements of θ. Unless the defined estimator θ̂ is on the boundary of permissible values, it will satisfy
\[
\dot{l}(\hat{\theta}) = 0. \qquad (47)
\]
The vector l̇(θ) is sometimes called the gradient, the score function, or the estimating function. Let l̈(θ) be the matrix of second-order partial derivatives of l(θ) with respect to the elements of θ, which is commonly called the Hessian matrix. The Newton algorithm for solving Equation 47 is given by
\[
\theta^{(j+1)} = \theta^{(j)} - \Delta\theta^{(j)}, \qquad (48)
\]
where
\[
\Delta\theta^{(j)} = [\ddot{l}(\theta^{(j)})]^{-1}\dot{l}(\theta^{(j)}) \qquad (49)
\]
is the Newton direction. The iterative procedure generates a solution to Equation 47 when it converges.

A modified Newton algorithm is to replace Δθ^{(j)} by κΔθ^{(j)} in Equation 48, where the purpose of κ is to control the length of the step in the Newton direction. If the estimator is defined by the maximum value of l(θ), then the choice of κ should lead to l(θ^{(j+1)}) > l(θ^{(j)}). Suggested values of κ are 1/2^k with k = 0, 1, 2, ..., called step-halving. A more elaborate approach is to build a quadratic function y = a + bx + cx² to approximate g(κ) = l(θ^{(j)} − κΔθ^{(j)}) by letting the quadratic function pass through (0, g(0)), (1/2, g(1/2)), and (1, g(1)), and to set κ = −b/(2c). Notice that without adjusting the length of the step in Equation 48 at each iteration, the Newton algorithm is sensitive to starting values. It converges very fast when the starting values are close to the target values. However, it may not converge when the starting values are far from the target values. Adjusting the length of each step makes the starting values less important for the modified Newton algorithm. Also notice that the converged value may correspond to a local maximum or minimum. When there is a possibility that the objective function has multiple local maxima or minima, it is necessary to use multiple randomly selected starting values to find the global maximum or minimum. When an estimating equation has multiple roots, one should select the root that is most appealing substantively.

A well-known modification of the Newton algorithm is the Fisher-scoring algorithm, which replaces l̈(θ) in Equation 49 by its expected value under the model. In many estimation problems, terms involving second derivatives in l̈(θ) disappear after taking the expectation. Thus, the Fisher-scoring algorithm is typically easier to program. When l(θ) = Σᵢ₌₁ⁿ lᵢ(θ), another alternative is to replace the Hessian matrix in Equation 49 by −Σᵢ₌₁ⁿ l̇ᵢ(θ)l̇ᵢ'(θ). There is no established name for this modification; we refer to it as the empirical Fisher-scoring algorithm. Neither the Fisher-scoring nor the empirical Fisher-scoring algorithm is as sensitive to the starting values as the Newton algorithm. But they are not as fast as the Newton algorithm when the starting values are close to the target values. There are other modifications of the Newton algorithm, which approximate the second derivatives using certain forms of first derivatives. Properties of Newton-type algorithms have been introduced systematically in Kelley (2003). Everitt (1987) illustrated the applications of Newton-type algorithms to various statistical problems.
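The modified Newton iteration with step-halving takes only a few lines of code. The sketch below is our own illustration (not from the chapter); it applies Equations 48 and 49 to the log likelihood of a simple logistic regression, for which the gradient and Hessian have closed forms, using simulated data.

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical logistic-regression data.
n = 300
X = np.column_stack([np.ones(n), rng.normal(size=n)])
prob = 1 / (1 + np.exp(-(X @ np.array([-0.5, 1.2]))))
y = rng.binomial(1, prob)

def loglik(theta):
    eta = X @ theta
    return np.sum(y * eta - np.log1p(np.exp(eta)))

def score(theta):                       # gradient l-dot(theta)
    return X.T @ (y - 1 / (1 + np.exp(-(X @ theta))))

def hessian(theta):                     # l-double-dot(theta), negative definite here
    p = 1 / (1 + np.exp(-(X @ theta)))
    return -(X * (p * (1 - p))[:, None]).T @ X

theta = np.zeros(2)                     # starting values
for it in range(50):
    direction = np.linalg.solve(hessian(theta), score(theta))  # Equation 49
    kappa = 1.0
    # Step-halving: shrink kappa = 1/2^k until the log likelihood increases.
    while loglik(theta - kappa * direction) <= loglik(theta) and kappa > 1e-8:
        kappa /= 2
    new_theta = theta - kappa * direction                       # Equation 48
    if np.max(np.abs(new_theta - theta)) < 1e-8:
        theta = new_theta
        break
    theta = new_theta

print("Newton MLE:", theta.round(4), " log-likelihood:", round(loglik(theta), 3))
```

Replacing `hessian(theta)` by its expected value, or by the negative sum of outer products of the case-level gradients, gives the Fisher-scoring and empirical Fisher-scoring variants described above.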

Iteratively Reweighted Least-Squares

Let y₁, y₂, ..., yₙ be independent with E(yᵢ) = μᵢ(θ). Let Wᵢ be a given weight matrix and define the GLS objective function as
\[
GLS(\theta) = \sum_{i=1}^{n}[y_i - \mu_i(\theta)]'W_i[y_i - \mu_i(\theta)].
\]
The Gauss-Newton algorithm for minimizing GLS(θ) is θ^{(j+1)} = θ^{(j)} + Δθ^{(j)}, where
\[
\Delta\theta^{(j)} = \left[\sum_{i=1}^{n}\dot{\mu}_i'(\theta^{(j)})W_i\dot{\mu}_i(\theta^{(j)})\right]^{-1}\sum_{i=1}^{n}\dot{\mu}_i'(\theta^{(j)})W_i[y_i - \mu_i(\theta^{(j)})], \qquad (50)
\]
which only involves the first derivatives of μᵢ(θ). Changing yᵢ − μᵢ(θ) and Wᵢ in Equation 50 to gᵢ(yᵢ, θ) and Wᵢ(yᵢ, θ), respectively, results in
\[
\theta^{(j+1)} = \theta^{(j)} + \Delta\theta^{(j)}, \qquad (51)
\]
where
\[
\Delta\theta^{(j)} = \left[\sum_{i=1}^{n}\dot{g}_{i\theta}'(y_i, \theta^{(j)})W_i(y_i, \theta^{(j)})\dot{g}_{i\theta}(y_i, \theta^{(j)})\right]^{-1}\sum_{i=1}^{n}\dot{g}_{i\theta}'(y_i, \theta^{(j)})W_i(y_i, \theta^{(j)})g_i(y_i, \theta^{(j)}) \qquad (52)
\]
with ġ_{iθ}(yᵢ, θ) = ∂gᵢ(yᵢ, θ)/∂θ'. The algorithm in Equations 51 and 52 is commonly called iteratively reweighted least squares (IRLS). Obviously, at the convergence of Equation 51 the value of θ satisfies
\[
\sum_{i=1}^{n}\dot{g}_{i\theta}'(y_i, \theta)W_i(y_i, \theta)g_i(y_i, \theta) = 0,
\]
which is an estimating equation that determines the M-estimator in robust regression, the MLE for generalized linear models, GEE estimators in repeated measure models, the MLE of normal-distribution-based mean and covariance structure analysis, and many others.

The IRLS algorithm in Equation 51 can be further simplified for specific models. For example, when yᵢ contains a single observation yᵢ, and we define θ = β, rᵢ = yᵢ − xᵢ'β, and also Wᵢ(yᵢ, β) = w(rᵢ), then Equations 51 and 52 reduce to
\[
\beta^{(j+1)} = \left[\sum_{i=1}^{n}x_i w(r_i)x_i'\right]^{-1}\sum_{i=1}^{n}x_i w(r_i)y_i. \qquad (53)
\]
A special application of Equation 53 is to let w(t) be either the Huber-type weight or the weight corresponding to a t-distribution. Then IRLS provides an algorithm to solve the equation defining the M-estimators introduced previously in the subsection on Robust Procedures. Starting values of β and σ² can be set as β = 0 and σ² = 1. Holland and Welsch (1977) have provided two other iterative procedures for computing the M-estimator in regression for a given σ̂². Similarly, equations defining the robust estimates of means and covariances in Equations 25 and 26 can be solved by
\[
\mu^{(j+1)} = \frac{\sum_{i=1}^{n}w_1(d_i^{(j)})y_i}{\sum_{i=1}^{n}w_1(d_i^{(j)})}
\quad\text{and}\quad
\Sigma^{(j+1)} = \frac{1}{n}\sum_{i=1}^{n}w_2(d_i^{(j)})(y_i - \mu^{(j)})(y_i - \mu^{(j)})',
\]
where d_i^{(j)} is the Mahalanobis distance for case yᵢ evaluated at μ^{(j)} and Σ^{(j)}. Applications of IRLS for generalized linear models are in McCullagh and Nelder (1989), for robust SEM models in Yuan and Bentler (2000) and Yuan and Zhong (2008), and for robust estimates of other models in Green (1984).
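The recursion in Equation 53 is easy to program. The sketch below is our own illustration; the data, the Huber tuning constant c = 1.345, and the fixed scale are all hypothetical, so the converged value is the regression M-estimator mentioned above for a given σ̂.

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical regression data with a few gross outliers.
n = 200
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = X @ np.array([2.0, 1.0]) + rng.normal(scale=1.0, size=n)
y[:10] += 15.0                              # contaminate 5% of the cases

c, sigma = 1.345, 1.0                       # Huber tuning constant, given scale

def huber_weight(r):
    u = np.abs(r / sigma)
    return np.where(u <= c, 1.0, c / u)

beta = np.zeros(2)                          # starting values beta = 0
for _ in range(100):
    w = huber_weight(y - X @ beta)
    # Weighted least-squares step: beta = (X'WX)^{-1} X'Wy  (Equation 53).
    new_beta = np.linalg.solve((X * w[:, None]).T @ X, (X * w[:, None]).T @ y)
    if np.max(np.abs(new_beta - beta)) < 1e-10:
        beta = new_beta
        break
    beta = new_beta

beta_ls = np.linalg.solve(X.T @ X, X.T @ y)
print("IRLS Huber estimate:", beta.round(3), " LS estimate:", beta_ls.round(3))
```

With the contaminated cases included, the ordinary LS estimate is pulled toward the outliers while the IRLS solution remains close to the values used to generate the clean part of the data.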

Expectation-Maximization Algorithm

Expectation-maximization is mainly used to obtain the MLE when Newton-type algorithms are hard to program. This typically occurs when a sample contains missing values and the commonly used formula with complete data does not apply. For example, the sample mean is the MLE of the population mean for normally distributed data without missing values. One can easily see that the formula does not work anymore when cases have different numbers of observed values. The EM algorithm allows us to use the formula/procedure for the complete data MLE to obtain the MLE with missing values by iteratively applying the E-step and the M-step. The E-step is to fill in the missing values by their conditional expectations given the current values of parameters and the observed data. Once the missing values are replaced by the expected values, the procedures/formulas for obtaining the MLE with complete data, called the M-step, can be applied. At convergence, the iterative process yields the parameter values that maximize the observed log likelihood function (Dempster, Laird, & Rubin, 1977).

Let xᵢ contain the observed values for the ith case and let zᵢ contain the missing values, i = 1, 2, ..., n. If all the zᵢ were observed, the log likelihood function is
\[
l(\theta) = \sum_{i=1}^{n}\log f(x_i, z_i; \theta). \qquad (54)
\]
Because only the xᵢ are observed, the log likelihood function based on the observed data is
\[
l_o(\theta) = \sum_{i=1}^{n}\log f_i(x_i; \theta_i), \qquad (55)
\]
where
\[
f_i(x_i; \theta_i) = \int f(x_i, z_i; \theta)\,dz_i.
\]
The aim is to find a θ̂ to maximize l_o(θ). One can use a Newton-type algorithm to maximize Equation 55. However, the computation is usually complicated if each θᵢ in Equation 55 contains different numbers of elements, because special attention is needed when calculating the score function and the Hessian matrix. The EM algorithm works with the l(θ) in Equation 54 rather than the l_o(θ) in Equation 55. Let θ^{(j)} be the parameter estimates at the jth step. The E-step obtains
\[
Q(\theta) = E\left\{\sum_{i=1}^{n}\log f(x_i, z_i; \theta)\,\Big|\,x_i, \theta^{(j)}\right\}. \qquad (56)
\]
The M-step maximizes Q(θ) to yield θ^{(j+1)}, which is further conditioned upon when performing the next E-step. Alternating between E- and M-steps leads to a θ̂ that locally maximizes l_o(θ) at convergence. Multiple starting values are needed when l_o(θ) has multiple local maxima. Notice that only zᵢ is random in taking the conditional expectation in Equation 56.

Consider the population N(μ, Σ) and assume our interest is to obtain the MLEs of θ = (μ, Σ) that maximize Equation 55. To simplify notation, we let yᵢ = (xᵢ', zᵢ')' denote the complete data for the ith case. If missing values do not appear at the end of yᵢ, we regard yᵢ as a rearrangement of the original variables and put the expected values of zᵢ back into their original positions for the M-step. After omitting a constant, we can write lᵢ(θ) = log f(xᵢ, zᵢ; θ) as
\[
l_i(\theta) = -\frac{1}{2}\left\{\log|\Sigma| + \mathrm{tr}\left[\Sigma^{-1}(y_iy_i' - y_i\mu' - \mu y_i' + \mu\mu')\right]\right\}.
\]
Thus,
\[
E\{l_i(\theta)|x_i; \theta^{(j)}\} = -\frac{1}{2}\left\{\log|\Sigma| + \mathrm{tr}\left[\Sigma^{-1}\left(E(y_iy_i'|x_i; \theta^{(j)}) - E(y_i|x_i; \theta^{(j)})\mu' - \mu E(y_i'|x_i; \theta^{(j)}) + \mu\mu'\right)\right]\right\}. \qquad (57)
\]
Because E(yᵢ'|xᵢ; θ^{(j)}) is the transpose of E(yᵢ|xᵢ; θ^{(j)}), the E-step just involves two expectations: E(yᵢ|xᵢ; θ^{(j)}) and E(yᵢyᵢ'|xᵢ; θ^{(j)}). Notice that
\[
E(y_i|x_i; \theta^{(j)}) = \begin{pmatrix} x_i \\ E(z_i|x_i; \theta^{(j)}) \end{pmatrix}
\quad\text{and}\quad
E(y_iy_i'|x_i; \theta^{(j)}) = \begin{pmatrix} x_ix_i' & x_iE(z_i'|x_i; \theta^{(j)}) \\ E(z_i|x_i; \theta^{(j)})x_i' & E(z_iz_i'|x_i; \theta^{(j)}) \end{pmatrix}.
\]
Let
\[
\mu^{(j)} = \begin{pmatrix} \mu_{xi}^{(j)} \\ \mu_{zi}^{(j)} \end{pmatrix}, \qquad
\Sigma^{(j)} = \begin{pmatrix} \Sigma_{xxi}^{(j)} & \Sigma_{xzi}^{(j)} \\ \Sigma_{zxi}^{(j)} & \Sigma_{zzi}^{(j)} \end{pmatrix}.
\]
Then, using the well-established formulas for conditional expectation and covariance for the normal distribution, the E-step is given by
\[
E(z_i|\theta^{(j)}, x_i) = \mu_{zi}^{(j)} + \Sigma_{zxi}^{(j)}(\Sigma_{xxi}^{(j)})^{-1}(x_i - \mu_{xi}^{(j)}), \qquad (58)
\]
\[
E(z_iz_i'|x_i; \theta^{(j)}) = \mathrm{Cov}(z_i|x_i; \theta^{(j)}) + E(z_i|x_i; \theta^{(j)})E(z_i'|x_i; \theta^{(j)})
= \left[\Sigma_{zzi}^{(j)} - \Sigma_{zxi}^{(j)}(\Sigma_{xxi}^{(j)})^{-1}\Sigma_{xzi}^{(j)}\right] + E(z_i|x_i; \theta^{(j)})E(z_i'|x_i; \theta^{(j)}). \qquad (59)
\]
Thus, each term involving an expectation on the right side of Equation 56 or Equation 57 can be evaluated easily by Equation 58 or Equation 59. Notice that the expectations in Equations 58 and 59 consist of numbers only. The unknown parameters in Equation 57 are μ and Σ. It is well-known that the sample means and sample variances-covariances are the MLEs for the complete data. Thus, the M-step for maximizing Q(θ) is given by
\[
\mu^{(j+1)} = \frac{1}{n}\sum_{i=1}^{n}E(y_i|x_i; \theta^{(j)}), \qquad
\Sigma^{(j+1)} = \frac{1}{n}\sum_{i=1}^{n}E(y_iy_i'|x_i; \theta^{(j)}) - \mu^{(j+1)}\mu^{(j+1)\prime}. \qquad (60)
\]
The advantage of the EM algorithm is obvious when there is an analytical solution at the M-step, as in Equation 60. Otherwise, Newton or another iterative procedure has to be used at the M-step. If the complexity of maximizing Q(θ) is about the same as that of maximizing l_o(θ), then there is no obvious advantage of using the EM over a Newton-type algorithm. It is well known that convergence of the EM algorithm can be very slow when θ^{(j)} is close to the target value. In this case, other iterative procedures are useful to expedite the convergence (Jamshidian & Jennrich, 1997).

Notice that the EM algorithm is to maximize l_o(θ), not involving a missing data mechanism. The consistency of the resulting MLE still depends on the missing at random mechanism (Little & Rubin, 2002), although the distribution does not need to be correctly specified (Yuan, 2009; Yuan & Bentler, 2010). In particular, SEs need to be calculated using the observed information matrix for l_o(θ) when the population distribution is correctly specified, and the corresponding sandwich-type covariance matrix when the population is misspecified.

The EM algorithm was formally established by Dempster et al. (1977). It can also be used when the MLE is hard to calculate for a complete data problem, whereas it is easier to obtain after introducing some latent variables. Examples of the EM algorithm for parameter estimation with complete data include factor analysis, item response models, finite mixture models, and the ML approach to combining effect sizes or mean differences (Bentler & Tanaka, 1983; Bock & Aitkin, 1981; Everitt, 1984; Goodman, 1974; Rubin & Thayer, 1982; Yuan & Bushman, 2002). An EM algorithm based on the multivariate t-distribution for saturated means and covariances with missing data was given by Little (1988). Jamshidian and Bentler (1999) provided the normal-distribution-based EM algorithm for SEM with missing data. More applications of the EM algorithm can be found in McLachlan and Krishnan (2008).
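A minimal implementation of the E-step in Equations 58 and 59 and the M-step in Equation 60 for a bivariate normal sample with randomly deleted values might look as follows (our own sketch; the data and the missingness rate are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(6)

# Hypothetical bivariate normal data with values deleted at random.
n = 500
Y = rng.multivariate_normal([1.0, 2.0], [[1.0, 0.6], [0.6, 2.0]], size=n)
mask = rng.uniform(size=(n, 2)) < 0.2       # True where a value is missing
mask[:, 0] &= ~mask[:, 1]                   # keep at least one value per case
Y_obs = np.where(mask, np.nan, Y)

mu, Sigma = np.nanmean(Y_obs, axis=0), np.eye(2)   # starting values
for _ in range(200):
    sum_y, sum_yy = np.zeros(2), np.zeros((2, 2))
    for i in range(n):
        obs, mis = ~mask[i], mask[i]
        Ey, Eyy = Y_obs[i].copy(), np.outer(Y_obs[i], Y_obs[i])
        if mis.any():
            # E-step: conditional mean and covariance of the missing part
            # given the observed part (Equations 58 and 59).
            S_oo = Sigma[np.ix_(obs, obs)]
            S_mo = Sigma[np.ix_(mis, obs)]
            reg = S_mo @ np.linalg.inv(S_oo)
            Ey[mis] = mu[mis] + reg @ (Y_obs[i, obs] - mu[obs])
            Eyy = np.outer(Ey, Ey)
            Eyy[np.ix_(mis, mis)] += Sigma[np.ix_(mis, mis)] - reg @ S_mo.T
        sum_y += Ey
        sum_yy += Eyy
    mu_new = sum_y / n                       # M-step (Equation 60)
    Sigma_new = sum_yy / n - np.outer(mu_new, mu_new)
    if np.max(np.abs(mu_new - mu)) < 1e-8:
        mu, Sigma = mu_new, Sigma_new
        break
    mu, Sigma = mu_new, Sigma_new

print("EM estimates of mu and Sigma:")
print(mu.round(3))
print(Sigma.round(3))
```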

Markov Chain Monte Carlo

Markov Chain Monte Carlo is a simulation tool for obtaining the parameter estimates, SEs, and confidence intervals introduced in the section on Methods for Estimating Parameters when analytical procedures are not available or are hard to implement. In particular, for a parameter vector θ that follows a given (conditional) distribution, if we can repeatedly draw samples from this distribution, then we can use the sample mean and sample standard deviation to estimate the population counterparts. In Bayesian analysis, the posterior means and variances-covariances often involve an integral with many variables, which is hard to evaluate analytically or numerically. Directly simulating θ from the posterior distribution may also be hard to do. The MCMC technique allows us to obtain samples from a complicated distribution by iteratively simulating from relatively simple conditional distributions.

Markov Chain Monte Carlo is closely related to the technique of data augmentation (Tanner & Wong, 1987). For example, we may have a sample y₁, y₂, ..., yₙ from a parametric model f(y|θ), and we want to evaluate the posterior mean E(θ|Y), which might be hard. Random numbers following f(θ|Y) may also be too difficult to simulate. We may augment the data matrix Y by another matrix Z, where Z is not observed. We may also split the parameters in θ into subsets θ₁, θ₂, ..., θ_k. In this way, it becomes easier to simulate from the following conditional distributions, given the value of Y and the current values of the other parameters and Z:
\[
\begin{aligned}
&(\theta_1^{(j+1)}\,|\,Y;\, \theta_2^{(j)}, \ldots, \theta_k^{(j)}, Z^{(j)}),\\
&(\theta_2^{(j+1)}\,|\,Y;\, \theta_1^{(j+1)}, \theta_3^{(j)}, \ldots, \theta_k^{(j)}, Z^{(j)}),\\
&\qquad\vdots\\
&(\theta_k^{(j+1)}\,|\,Y;\, \theta_1^{(j+1)}, \ldots, \theta_{k-1}^{(j+1)}, Z^{(j)}),\\
&(Z^{(j+1)}\,|\,Y;\, \theta_1^{(j+1)}, \theta_2^{(j+1)}, \ldots, \theta_k^{(j+1)}).
\end{aligned} \qquad (61)
\]
Let j = 0, 1, 2, ..., m, where j = 0 corresponds to starting values θ and Z. The iterations generate a sequence of θ^{(j)} = (θ₁^{(j)}, θ₂^{(j)}, ..., θ_k^{(j)}) and Z^{(j)}, which is called a Markov chain. Under regularity conditions, the sequence (θ^{(m)}, Z^{(m)}) converges in distribution to (θ, Z|Y) as m increases. We only need the marginal conditional distribution (θ|Y) for inference of θ. The purpose of introducing Z is to make the simulation from the distributions in Equation 61 easier. There is no need for data augmentation if it is straightforward to simulate from (θ|Y). There is no need for MCMC either if (θ₁|Y), (θ₂|Y; θ₁), ..., (θ_k|Y; θ₁, θ₂, ..., θ_{k−1}) are easy to simulate, because the chain converges in one step. Of course, if E(θ|Y), Cov(θ|Y), and the confidence interval for θ based on the distribution of (θ|Y) are available analytically, then simulation is not necessary. An example is the posterior distribution of β for the regression model discussed previously in the subsection on Bayes Estimation. Although the conditional distributions of β and σ² are easy to simulate using Equations 40 and 43, it is best to use E(β|Y) given in Equation 42 when calculating the mean of β.

Suppose the sequence θ^{(j)} and Z^{(j)} converged at m = m_c. Let (θ(1), Z(1)) denote the converged values. Then we can continue this process another m_c times to get (θ(2), Z(2)), using an independent set of starting values or just using (θ(1), Z(1)) as the starting values. Repeating this procedure N times produces the replications (θ(1), Z(1)), (θ(2), Z(2)), ..., (θ(N), Z(N)). Then we can estimate the posterior means and variances-covariances by
\[
\hat{E}(\theta|Y) = \frac{1}{N}\sum_{i=1}^{N}\theta(i)
\quad\text{and}\quad
\widehat{\mathrm{Cov}}(\theta|Y) = \frac{1}{N-1}\left[\sum_{i=1}^{N}\theta(i)\theta'(i) - N\hat{E}(\theta|Y)\hat{E}(\theta'|Y)\right].
\]
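As a concrete illustration of this scheme for the regression model just mentioned, the sketch below (our own addition; the data and the hyperparameters a, b, β₀, and V are hypothetical) iteratively simulates σ² from Equation 40 and β from Equation 43 and then forms the sample summaries defined above. The simulated posterior mean of β can be checked against the analytical value in Equation 42.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical regression data and conjugate-prior hyperparameters.
n, p = 100, 2
X = np.column_stack([np.ones(n), rng.normal(size=(n, p))])
y = X @ np.array([1.0, 0.5, -0.3]) + rng.normal(size=n)

a, b = 2.0, 2.0                          # sigma^2 ~ InvGamma(a, b)
beta0, V = np.zeros(p + 1), 10.0 * np.eye(p + 1)

V_inv = np.linalg.inv(V)
Vy = np.linalg.inv(V_inv + X.T @ X)      # V_y in Equation 42
beta_y = Vy @ (V_inv @ beta0 + X.T @ y)  # analytical E(beta | Y), Equation 42

beta, draws = beta_y.copy(), []
for j in range(3000):
    resid = y - X @ beta
    scale = (2 * b + (beta - beta0) @ V_inv @ (beta - beta0) + resid @ resid) / 2
    sigma2 = 1.0 / rng.gamma((n + p + 2 * a + 1) / 2, 1.0 / scale)   # Equation 40
    beta = rng.multivariate_normal(beta_y, sigma2 * Vy)              # Equation 43
    if j >= 500:                         # discard the burn-in period
        draws.append(np.concatenate([beta, [sigma2]]))

draws = np.array(draws)
print("simulated posterior means:", draws.mean(axis=0).round(3))
print("analytic E(beta|Y):       ", beta_y.round(3))
print("95% interval for beta_1:  ", np.quantile(draws[:, 1], [0.025, 0.975]).round(3))
```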

We can also estimate the confidence interval for θ_j using the quantiles of the θ_j(i) and evaluate the distributional shape of (θ_j|Y) using a QQ plot or a histogram of the θ_j(i), just as in the bootstrap methodology. The Z(i) are replications from the conditional distribution f(Z|Y), which can be used to evaluate the posterior means and variances-covariances of (Z|Y), but they have no direct consequence on θ.

We will illustrate the application of MCMC using the following one-factor model with p indicators:
\[
y = \mu + \lambda\xi + \varepsilon, \qquad (62)
\]
where ξ ~ N(0, φ), ε ~ N(0, Ψ) with Ψ = diag(ψ₁₁, ψ₂₂, ..., ψ_pp), and ξ and ε are independent. The first element of λ is fixed at 1.0 to identify the scale of ξ. Notice that if the ξ is observed, then Equation 62 is just a multivariate regression model. So we augment our data to (Y, ξ), where ξ = (ξ₁, ξ₂, ..., ξₙ)' is a vector of factor scores corresponding to the sample y₁, y₂, ..., yₙ. Using conditioning, we obtain after some algebra
\[
f(Y, \xi|\mu, \lambda, \phi, \Psi) \propto \frac{1}{\phi^{n/2}\prod_{j=1}^{p}\psi_{jj}^{n/2}}
\exp\left\{-\frac{1}{2}\sum_{i=1}^{n}\left[(y_i-\mu)'\Psi^{-1}(y_i-\mu) - 2(y_i-\mu)'\Psi^{-1}\lambda\xi_i + \xi_i^2(\phi^{-1} + \lambda'\Psi^{-1}\lambda)\right]\right\}. \qquad (63)
\]
Let λ₁ be the part of λ after removing the first element 1.0. We choose the following Jeffreys priors for the unknown parameters:
\[
f(\mu) \propto 1, \quad f(\lambda_1) \propto 1, \quad f(\phi) \propto \phi^{-1}, \quad f(\psi_{jj}) \propto \psi_{jj}^{-1}, \quad j = 1, 2, \ldots, p. \qquad (64)
\]
Let ȳ and ξ̄ be the sample means of the yᵢ and ξᵢ, respectively, and
\[
S = \frac{1}{n}\sum_{i=1}^{n}(y_i-\bar{y})(y_i-\bar{y})', \qquad
s_{y\xi} = \frac{1}{n}\sum_{i=1}^{n}(y_i-\bar{y})(\xi_i-\bar{\xi}), \qquad
m_{\xi^2} = \frac{1}{n}\sum_{i=1}^{n}\xi_i^2.
\]
Combining Equations 63 and 64 yields
\[
f(\xi, \mu, \lambda, \phi, \Psi|Y) \propto \frac{1}{\phi^{n/2+1}\prod_{j=1}^{p}\psi_{jj}^{n/2+1}}
\exp\left\{-\frac{n}{2}\left[(\bar{y}-\mu)'\Psi^{-1}(\bar{y}-\mu) + \mathrm{tr}(\Psi^{-1}S) - 2\lambda'\Psi^{-1}s_{y\xi} - 2\bar{\xi}\lambda'\Psi^{-1}(\bar{y}-\mu) + m_{\xi^2}(\phi^{-1} + \lambda'\Psi^{-1}\lambda)\right]\right\}. \qquad (65)
\]
It follows from Equation 65 that
\[
\begin{aligned}
(\mu|\xi, \lambda, \phi, \Psi, Y) &\sim N(\bar{y} - \lambda\bar{\xi},\; \Psi/n),\\
(\lambda|\xi, \mu, \phi, \Psi, Y) &\sim N\!\left(\frac{s_{\xi y} + \bar{\xi}(\bar{y}-\mu)}{m_{\xi^2}},\; \frac{\Psi}{nm_{\xi^2}}\right),\\
(\phi|\xi, \mu, \lambda, \Psi, Y) &\sim \Gamma^{-1}\!\left(\frac{n}{2},\; \frac{nm_{\xi^2}}{2}\right),\\
(\psi_{jj}|\xi, \mu, \lambda, \phi, Y) &\sim \Gamma^{-1}\!\left(\frac{n}{2},\; \frac{nh_{jj}}{2}\right), \quad j = 1, 2, \ldots, p,\\
(\xi_i|\mu, \lambda, \phi, \Psi, y_i) &\sim N\!\left(\frac{\phi\lambda'\Psi^{-1}(y_i-\mu)}{a},\; \frac{\phi}{a}\right),
\end{aligned} \qquad (66)
\]
where H = (h_{ij}) = (1/n)Σᵢ₌₁ⁿ(yᵢ − μ − λξᵢ)(yᵢ − μ − λξᵢ)' and a = 1 + φλ'Ψ⁻¹λ. Each of the above five sets of distributions is either normal or inverse gamma. Because random numbers following the normal or inverse gamma distribution are easy to simulate, the rather complicated distribution in Equation 65 can be simulated by MCMC. Notice that the distribution for λ in Equation 66 is written for all the elements of λ, which is just for convenience. We just need to change the first element of λ to 1.0 at each replication of the simulation.

The key to MCMC is to construct a set of conditional distributions that are easy to simulate. After the set of distributions is identified, one still needs to determine the number m so that θ^{(m)} approximately follows f(θ|Y). The process to reach convergence is called the burn-in period, which makes θ^{(m)} independent of the starting value θ^{(0)}. One suggestion is to visually examine the plot of the sequence (j, θ^{(j)}) for each element of θ. There should be no obvious pattern starting from m in the plot if the chain has converged. Another criterion is to test whether the autocorrelation of θ^{(j)} with lag m can be regarded as 0.

Markov Chain Monte Carlo is a computational tool to evaluate the estimator defined by E(θ|Y) or the median of f(θ|Y). If the parametric model f(y|θ) is not properly formulated or the priors are not reasonable, in addition to a small n, then the point estimates, SEs, and confidence intervals obtained by MCMC will not provide a good summary of the data.

The popularity of MCMC results from the pioneering work of Geman and Geman (1984). The work of Tanner and Wong (1987) and Gelfand and Smith (1990) is also fundamental to the wide applications of MCMC. The method of iterative sampling from conditional distributions described above is called the Gibbs sampler (Casella & George, 1992), which is the most widely used method of MCMC. Readers who are interested in more applications of MCMC are referred to Tanner (1996), Papp (2002), Lee (2007), and Cai (2010).
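A minimal sketch of the Gibbs sampler implied by the five conditional distributions in Equation 66 is given below (our own illustration; the data are simulated from the model, and the starting values, burn-in length, and number of iterations are arbitrary). The first element of λ is reset to 1.0 at each replication, as described above.

```python
import numpy as np

rng = np.random.default_rng(8)

# Hypothetical data from a one-factor model with p = 4 indicators.
n, p = 300, 4
lam_true = np.array([1.0, 0.8, 1.2, 0.6])
xi_true = rng.normal(scale=np.sqrt(0.9), size=n)
Y = 0.5 + np.outer(xi_true, lam_true) + rng.normal(scale=0.5, size=(n, p))

ybar = Y.mean(axis=0)
mu, lam, phi, psi = ybar.copy(), np.ones(p), 1.0, np.ones(p)   # starting values

def inv_gamma(shape, scale):
    """Draw from InvGamma(shape, scale); scale may be a scalar or an array."""
    return 1.0 / rng.gamma(shape, 1.0 / scale)

draws = []
for it in range(3000):
    # (xi_i | ...) ~ N(phi * lam' Psi^{-1} (y_i - mu) / a, phi / a)
    w = lam / psi                         # Psi^{-1} lam for diagonal Psi
    a = 1.0 + phi * (lam @ w)
    xi = rng.normal((Y - mu) @ w * phi / a, np.sqrt(phi / a))
    xibar, m_xi2 = xi.mean(), (xi**2).mean()
    # (mu | ...) ~ N(ybar - lam * xibar, Psi / n)
    mu = rng.normal(ybar - lam * xibar, np.sqrt(psi / n))
    # (lam | ...) ~ N([s_xi_y + xibar (ybar - mu)] / m_xi2, Psi / (n m_xi2))
    s_xiy = ((Y - ybar) * (xi - xibar)[:, None]).mean(axis=0)
    lam = rng.normal((s_xiy + xibar * (ybar - mu)) / m_xi2,
                     np.sqrt(psi / (n * m_xi2)))
    lam[0] = 1.0                          # re-impose the identification constraint
    # (phi | ...) ~ InvGamma(n/2, n m_xi2 / 2)
    phi = inv_gamma(n / 2, n * m_xi2 / 2)
    # (psi_jj | ...) ~ InvGamma(n/2, n h_jj / 2), with h_jj from the residuals
    resid = Y - mu - np.outer(xi, lam)
    psi = inv_gamma(n / 2, n * (resid**2).mean(axis=0) / 2)
    if it >= 1000:                        # discard the burn-in period
        draws.append(np.concatenate([lam, [phi], psi]))

draws = np.array(draws)
print("posterior means of (lambda, phi, psi):", draws.mean(axis=0).round(3))
```

Trace plots or lag-m autocorrelations of the stored draws can be used to judge whether the burn-in length chosen here is adequate, following the convergence checks described above.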


Table 18.1. Applicability of Different Methods and Their Potential Misuse

ML
  Applicability: Correctly specified likelihood function; n is not too small; q does not increase with n.
  Potential misuse: Misspecified likelihood function; contaminated data; n is too small.

Pseudo-ML
  Applicability: Certain discrepancy between the likelihood and the distribution of the sample.
  Potential misuse: The discrepancy is due to data contamination.

Marginal ML
  Applicability: The joint likelihood function involves latent variables or too many parameters.
  Potential misuse: Same as for ML.

Quasi-ML
  Applicability: Two sets of parameters; technically difficult to estimate both sets simultaneously.
  Potential misuse: Same as for ML.

Restricted ML
  Applicability: Variance components are of interest; many mean parameters.
  Potential misuse: Same as for ML.

LS & GLS
  Applicability: Regression with normally distributed errors, or mean and covariance structure analysis with normally distributed data (GLS).
  Potential misuse: Data are contaminated or not normally distributed.

Robust method
  Applicability: The population has heavier tails or the data are contaminated; sample size is not too small.
  Potential misuse: Seeming data contamination actually results from the underlying population mechanism or from multiple clusters or groups.

Estimating equations
  Applicability: A versatile method for obtaining parameter estimates and their SEs; each equation can come from any of the reviewed methods.
  Potential misuse: Small sample size; certain equations might be obtained from misused methods.

James-Stein/ridge method
  Applicability: ML runs into difficulty because of a small sample size or multicollinearity; bias is not a serious concern.
  Potential misuse: Arbitrary ridge constant.

Bayes method
  Applicability: Having quantifiable prior information or a small sample.
  Potential misuse: Misspecified likelihood; arbitrary prior; the chain is terminated before convergence when using MCMC.

Bootstrap (for SEs and confidence intervals)
  Applicability: Sample size is not too small; the estimator is relatively easy to calculate.
  Potential misuse: Correlated observations; n is too small; many nonconverged bootstrap samples are discarded in parameter estimation.

δ-method (for SEs)
  Applicability: Moderate sample size; the involved function and derivatives are relatively easy to obtain.
  Potential misuse: Sample size is too small.

Note: q is the number of parameters.



Conclusion

This chapter reviewed estimation methods that have wide applications in social science research. Among all the methods, the normal-distribution-based ML is used most frequently because it is the default method in standard software. When data are not normally distributed or are contaminated, the MLE can be biased, inconsistent, and inefficient. Geary (1947, p. 241) observed that "Normality is a myth; there never was, and never will be a normal distribution." Such an observation was supported empirically by Micceri (1989). Thus, the normal-distribution-based MLE is most likely a pseudo-MLE. It is better to use the sandwich-type covariance matrix in Equation 14 to estimate the SE of an MLE. Similarly, SEs of restricted or marginal MLEs need to be adjusted when data do not follow the assumed distribution.

Huber (1981, p. 3) suggested that data from the physical sciences often contain 1% to 10% outliers. We would expect data in the social sciences, which are typically collected through questionnaires or surveys, to contain an even higher percentage of outliers. Because a single outlier can make the normal-distribution-based MLE meaningless, we recommend that robust procedures be routinely used unless one is confident that the data are normally distributed. Actually, the robust M-estimator aims to approximate the MLE when data contain outliers or are contaminated. However, the robust method is not recommended when anomalous data truly reflect the population or when the population consists of a mixture of clearly distinct distributions. Data using Likert-type scales are also subject to contamination. Although outliers are limited in values, the effect of contaminated data on parameter estimation and model assessment is still a concern (Tatsuoka & Tatsuoka, 1982). Robust methods are also preferred with categorical or ordinal data.

Bayes methods consider parameters as random variables. They allow prior information to be included in the current study, although it may not be trivial to do so. James-Stein and ridge procedures yield more accurate parameter estimates with respect to MSE. In particular, when the sample size is small, the good asymptotic properties of the MLE cannot be realized even if we know the true family of the population distribution. The use of a Bayes method with a naive prior distribution will yield more stable parameter estimates than the MLE or a robust estimator, as will the ridge or James-Stein estimator. In summary, information about the population distribution, the quality of the data, the size of the sample, the amount of prior information, and the complexity of the model all play important roles in choosing an estimation method. Table 18.1 summarizes the applicability of each of the reviewed methods and its potential misuse in practice.

Being a necessary part of any statistical procedure, methods for parameter estimation are highly developed. With the advance of statistical methods into almost every discipline of science, it might be impractical to invent a new and general estimation method at this stage. Future research should focus on solving specific problems using cutting-edge statistical methods. For example, in bioinformatics the number of variables can be huge, which poses problems to almost all the reviewed methods. Another problem is a small sample size together with an unknown distribution, which occurs frequently in social science research. There might not exist a perfect solution to either of these problems. A combination of familiarity with existing statistical methods and substantive knowledge is needed to achieve satisfactory solutions.

Author note

Ke-Hai Yuan, Department of Psychology, University of Notre Dame, Notre Dame, Indiana 46556, USA. Christof Schuster, Department of Psychology, University of Giessen, Otto-Behaghel-Str. 10, 35394 Giessen, Germany. Address for chapter correspondence: Ke-Hai Yuan, Department of Psychology, University of Notre Dame, Notre Dame, Indiana 46556, USA. Email: [email protected]

References Albert, J. H. (1992). Bayesian analysis of binary and polychotomous response data. Journal of the American Statistical Association, 88, 669–679. Baker, F. B., & Kim, S. H. (2004). Item response theory: Parameter estimation techniques (2nd ed). New York: Marcel Dekker. Bartlett, M.S. (1937). Properties of sufficiency and statistical tests. Proceedings of the Royal Society of London, A160, 268–282. Bentler, P. M., & Tanaka, J. S. (1983). Problems with EM algorithms for ML factor analysis. Psychometrika, 48, 247–251. Berger, J. O., & Berliner, L. M. (1986). Robust Bayes and empirical Bayes analysis with -contaminated priors. Annals of Statistics, 14, 461–486. Berry, D. A. (1996). Statistics: A Bayesian perspective. Belmont, CA: Duxbury Press. Bock, R. D., & Lieberman, M. (1970). Fitting a response model for n dichotomously scored items. Psychometrika, 35, 179– 197. Bock, R. D., & Aitkin, M. (1981). Marginal maximum likelihood estimation of item parameters: Application of an EM algorithm. Psychometrika, 46, 443–459.


Bollen, K. A. (1996). An alternative two stage least squares (2SLS) estimator for latent variable equations. Psychometrika, 61, 109–12. Box, G.E.P., & Tiao, G. C. (1973). Bayesian inference statistical analysis. New York: Wiley. Browne, M. W. (1984). Asymptotic distribution-free methods for the analysis of covariance structures. British Journal of Mathematical and Statistical Psychology, 37, 62–83. Cai, L. (2010). High-dimensional exploratory item factor analysis by a Metropolis-Hastings Robbins-Monro algorithm. Psychometrika, 75, 33–57. Cheng, Y., & Yuan, K.-H. (2010). The impact of fallible item parameter estimates on latent trait recovery. Psychometrika, 75, 280–291. Carlin, B. P., & Louis, T. A. (2009). Bayesian methods for data analysis (3rd rd.). Boca Raton, FL: Chapman & Hall/CRC. Casella, G., & George, E. I. (1992). Explaining the Gibbs sampler. American Statistician, 46, 167–174. Dempster, A. P., Laird, N. M., & Rubin, D. B. (1977). Maximum likelihood estimation from incomplete data via the EM algorithm (with discussion). Journal of the Royal Statistical Society Series B, 39, 1–38. Draper, N. R., & van Nostrand, C. R. (1979). Ridge regression and James-Stein estimation: Review and comments. Technometrics, 21, 451–466. Efron, B. (1987). Better bootstrap confidence intervals. Journal of the American Statistical Association, 82, 171–185. Efron, B. (1988). Bootstrap confidence intervals: Good or bad (with discussion)? Psychological Bulletin, 104, 293–296. Efron, B., & Morris, C. (1973). Stein’s estimation rule and its competitors-An empirical Bayes approach. Journal of the American Statistical Association, 68, 117–130. Efron, B. & Morris, C. (1977). Stein’s paradox in statistics. Scientific American, 236, 119–127. Efron, B. & Tibshirani, R. J. (1993). An introduction to the bootstrap. New York: Chapman & Hall. Everitt, B. S. (1984). Maximum likelihood estimation of the parameters in a mixture of two univariate normal distributions; a comparison of different algorithms. Statistician, 33, 205–215. Everitt, B. S. (1987). Introduction to optimization methods and their application in statistics. London: Chapman & Hall. Ferguson, T. S. (1967). Mathematical statistics: A decision theoretic approach. New York: Academic Press. Geary, R. C. (1947). Testing for normality. Biometrika, 34, 209– 242. Gelfand, A. E., & Smith, A. F. M. (1990). Sampling-based approaches to calculating marginal densities. Journal of the American Statistical Association, 85, 398–409. Gelman, A., Carlin, J. B., Stern, H. S., & Rubin, D. B. (2004). Bayesian data analysis (2nd ed.). Boca Raton, FL: Chapman & Hall/CRC. Geman, S., & Geman, D. (1984). Stochastic relaxation, Gibbs distributions, and the Bayesian restoration of images. IEEE Transactions on Pattern Analysis and Mechine Intelligence, 721–741. Godambe, V. P. (1960). An optimum property of regular maximum likelihood estimation. Annals of Mathematical Statistics, 31, 1208–1211. Gong, G., & Samaniego, F. J. (1981). Pseudo maximum likelihood estimation: theory and applications, Annals of Statistics, 9, 861–869.

Goodman, L. A. (1974). Exploratory latent structure analysis using both identifiable and unidentifiable models. Biometrika, 61, 215–231. Gourieroux, C., Monfort, A., & Trognon, A. (1984). Pseudo maximum likelihood methods: Theory. Econometrica, 52, 681–700. Green, P. J. (1984). Iteratively reweighted least squares for maximum likelihood estimation, and some robust and resistant alternatives (with discussion). Journal of the Royal Statistical Society B, 46, 149–192. Haff, L. R. (1980). Empirical Bayes estimation of the multivariate normal covariance matrix. Annals of Statistics, 8, 586–597. Harville, D. A. (1977). Maximum likelihood approaches to variance component estimation and to related problems (with discussions). Journal of the American Statistical Association, 72, 320–340. Hayashi, K., & Yuan, K.-H. (2003). Robust Bayesian factor analysis. Structural Equation Modeling, 10, 525–533. Hoaglin, D. C., Mosteller, F., & Tukey, J. W. (1983). Understanding Robust and Exploratory Data Analysis. Wiley. Hoaglin, D. C., Mosteller, F., & Tukey, J. W. (1983). Understanding robust and exploratory data analysis. New York: Wiley. Hoerl, A. E., & Kennard, R. W. (1970). Ridge regression: Biased estimation for nonorthogonal problems. Technometrics, 12, 55–67. Holland, P. W., & Welsch, R. E. (1977). Robust regression using iteratively reweighted least-squares. Communications in Statistics-Theory and Methods, A6, 813–827. Huber, P. J. (1967). The behavior of maximum likelihood estimates under nonstandard conditions. In Proceedings of the Fifth Berkeley Symposium on Mathematical Statistics and Probability (pp. 221–233). Berkeley, CA: University of California Press. Huber, P. J. (1981). Robust statistics. New York: Wiley. James, W., & Stein, C. (1961).Estimation with quadratic loss. In Proceedings of the fourth Berkeley Symposium on Mathematical Statistics and Probability (pp. 361–379). Berkeley, CA: University of California Press. Jamshidian, M., & Bentler, P. M. (1999). Using complete data routines for ML estimation of mean and covariance structures with missing data. Journal Educational and Behavioral Statistics, 23, 21–41. Jamshidian, M., & Jennrich, R. I. (1997). Acceleration of the EM algorithm by using Quasi-Newton methods. Journal of the Royal Statistical Society B, 59, 569–587. Jöreskog, K. G. (1994). On the estimation of polychoric correlations and their asymptotic covariance matrix. Psychometrika, 59, 381–390. Kano, Y., Berkane, M., & Bentler, P. M. (1993). Statistical inference based on pseudo-maximum likelihood estimators in elliptical populations. Journal of the American Statistical Association, 88, 135–143. Kelley, C. T. (2003). Solving nonlinear equations with Newton’s method. Philadelphia: SIAM. Klein, A., & Moosbrugger, H. (2000). Maximum likelihood estimation of latent interaction effects with the LMS method. Psychometrika, 65, 457–474. Laird, N. M., & Louis, T. A. (1987). Empirical Bayes confidence intervals based on bootstrap samples (with



discussion). Journal of the American Statistical Association, 82, 739–757. Lee S.-Y. (2007). Structural equation modeling: A Bayesian approach. Chichester: Wiley. Lee, Y., & Nelder, J. A. (1996). Hierarchical generalized linear models (with discussion). Journal of the Royal Statistical Society B, 58, 619–678. Lee, S.-Y., Poon, W.-Y., & Bentler, P. M. (1995). A two-stage estimation of structural equation models with continuous and polytomous variables. British Journal of Mathematical and Statistical Psychology, 48, 339–358. Lee, S.-Y., & Song, X.-Y. (2004). Evaluation of the Bayesian and maximum likelihood approaches in analyzing structural equation models with small sample sizes. Multivariate Behavioral Research, 39, 653–686. Lee, S.-Y., & Zhu, H.-T. Maximum likelihood estimation of nonlinear structural equation models. Psychometrika, 67, 189–210. Liang, K. Y., & Zeger, S. L. (1986). Longitudinal data analysis using generalized linear models. Biometrika, 73, 13–22. Little, R. J. A. (1988). Robust estimation of the mean and covariance matrix from data with missing values. Applied Statistics, 37, 23–38. Little, R. J. A., & Rubin, D. B. (2002). Statistical analysis with missing data (2nd ed.). New York: Wiley. Maydeu-Olivares, A. (2006). Limited information estimation and testing of discretized multivariate normal structural models. Psychometrika, 71, 57–77. McCullagh, P., & Nelder, J. A. (1989). Generalized linear models (2nd ed.). London: Chapman and Hall. Mammen, E. (1992). When does bootstrap work? New York: Springer. Maronna, R. A. (1976). Robust M-estimators of multivariate location and scatter. Annals of Statistics, 4, 51–67. Maronna, R. A., Martin, R. D., & Yohai, V. J. (2006). Robust statistics: Theory and methods. New York: Wiley. McLachlan, G. J., & Krishnan, T. (2008). The EM algorithm and extensions (2nd ed.). New York: Wiley. Micceri, T. (1989). The unicorn, the normal curve, and other improbable creatures. Psychological Bulletin, 105, 156–166. Mislevy, R. J., & Bock, R. D. (1982). Biweight estimates of latent ability. Educational and Psychological Measurement, 42, 725–737. Muthén, B., & Satorra, A. (1995). Technical aspects of Muthén’s LISCOMP approach to estimation of latent variable relations with a comprehensive measurement model. Psychometrika, 60, 489–503. Nelder, J. A. & Lee, Y. (1992). Likelihood, quasi-likelihood and pseudolikelihood: Some comparisons. Journal of the Royal Statistical Society B, 54, 273–284. Neyman, J., & Scott, E. L. (1948). Consistent estimates based on partially consistent observations. Econometrika, 16, 1–32. Olsson, U. (1979). Maximum likelihood estimation of the polychoric correlation coefficient. Psychometrika, 44, 443–460. Papp, R. (2002). What are the advantage of MCMC based inference in latent variable models? Statistica Neerlandica, 56, 2–22. Parke, W. R. (1986). Pseudo maximum likelihood estimation: the asymptotic distribution, Annals of Statistics, 14, 355–357.


Patterson, H. D., & Thompson, R. (1971). Recovery of interblock information when block sizes are unequal. Biometrika, 58, 545–554. Patz, R. J., & Junker, B. W. (1999). A straightforward approach to Markov Chain Monte Carlo methods for item response models. Journal of Educational and Behavioral Statistics, 24, 146–178. Pawitan, Y. (2001). In all likelihood: Statistical modelling and inference using likelihood. New York: Oxford University Press. Poon, W.-Y., & Lee, S.-Y. (1987). Maximum likelihood estimation of multivariate polyserial and polychoric correlation coefficient. Psychometrika, 52, 409–430. Rasmussen, J. (1987). Estimating correlation coefficients: Bootstrap and parametric approaches. Psychological Bulletin, 101, 136–139. Rubin, D. B., & Thayer, D. T. (1982). EM algorithms for ML factor analysis. Psychometrika, 47, 69–76. Schuster, C., & Yuan, K.-H. (2011). Robust estimation of latent ability in item response models. Journal of Educational and Behavioral Statistics, 36, 720–735. Stein, C. (1956). Inadmissibility of the usual estimator for the mean of a multivariate distribution. In Proceedings of the third Berkeley symposium on mathematical statistics and probability, vol. I, (pp. 197–206). Berkeley and Los Angeles: University of California Press. Tanner, M. A. (1996). Tools for statistical inference: Methods for the exploration of posterior distributions and likelihood functions (3rd ed.). New York: Springer-Verlag. Tanner, M. A., & Wong, W. (1987). The Calculation of Posterior Distributions by Data Augmentation (with discussion). Journal of the American Statistical Association, 82, 528–550. Tatsuoka, K. K., & Tatsuoka, M. M. (1982). Detection of aberrant response patterns and their effect on dimensionality. Journal of Educational Statistics, 7, 215–231. Wainer, H., & Wright, B. D. (1980). Robust estimation of ability in the Rasch model. Psychometrika, 45, 373-391. White, H. (1982). Maximum likelihood estimation of misspecified models. Econometrica, 50, 1–25. Wilcox, R. R. (2005). Introduction to robust estimation and hypothesis testing (2nd ed.). Burlington, MA: Academic Press. Yuan, K.-H. (2009). Normal distribution based pseudo ML for missing data: With applications to mean and covariance structure analysis. Journal of Multivariate Analysis, 100, 1900–1918. Yuan, K.-H., & Bentler, P. M. (1998). Structural equation modeling with robust covariances. Sociological Methodology, 28, 363–396. Yuan, K.-H., & Bentler, P. M. (2000). Robust mean and covariance structure analysis through iteratively reweighted least squares. Psychometrika, 65, 43–58. Yuan, K.-H., & Bentler, P. M. (2010). Consistency of normal distribution based pseudo maximum likelihood estimates when data are missing at random. American Statistician, 64, 263–267. Yuan, K.-H., Bentler, P. M., & Chan, W. (2004). Structural equation modeling with heavy tailed distributions. Psychometrika, 69, 421–436. Yuan, K.-H., Bentler, P. M., & Zhang, W. (2005). The effect of skewness and kurtosis on mean and covariance structure analysis: The univariate case and its multivariate implication. Sociological Methods & Research, 34, 249–258.


Yuan, K.-H., & Bushman, B. J. (2002). Combining standardized mean differences using the method of maximum likelihood. Psychometrika, 67, 589–607.
Yuan, K.-H., & Chan, W. (2005). On nonequivalence of several procedures of structural equation modeling. Psychometrika, 70, 791–798.
Yuan, K.-H., & Chan, W. (2008). Structural equation modeling with near singular covariance matrices. Computational Statistics & Data Analysis, 52, 4842–4858.
Yuan, K.-H., & Hayashi, K. (2006). Standard errors in covariance structure models: Asymptotics versus bootstrap. British Journal of Mathematical and Statistical Psychology, 59, 397–417.
Yuan, K.-H., & Jennrich, R. I. (1998). Asymptotics of estimating equations under natural conditions. Journal of Multivariate Analysis, 65, 245–260.

Yuan, K.-H., & Jennrich, R. I. (2000). Estimating equations with nuisance parameters: Theory and applications. Annals of the Institute of Statistical Mathematics, 52, 343–350.
Yuan, K.-H., Wu, R., & Bentler, P. M. (2011). Ridge structural equation modeling with correlation matrices for ordinal and continuous data. British Journal of Mathematical and Statistical Psychology, 64, 107–133.
Yuan, K.-H., & Zhong, X. (2008). Outliers, leverage observations and influential cases in factor analysis: Minimizing their effect using robust procedures. Sociological Methodology, 38, 329–368.
Zu, J., & Yuan, K.-H. (2010). Local influence and robust procedures for mediation analysis. Multivariate Behavioral Research, 45, 1–44.


CHAPTER 19

Robust Statistical Estimation

David M. Erceg-Hurn, Rand R. Wilcox, and Harvey J. Keselman

Abstract Traditional statistical methods are built on strong assumptions, such as normality and homoscedasticity. These assumptions are frequently violated in practice. This can lead to undesirable consequences such as the inaccurate estimation of parameters and confidence intervals, inaccurate calculation of p-values, inflated rates of type I error, and low statistical power. Modern robust statistical methods typically overcome these problems. They are designed to work well both when traditional assumptions are satisfied and when they are not. Using robust methods increases the likelihood of discovering genuine differences between groups and associations among variables. We provide a nontechnical introduction to robust measures of location and scale, bootstrapping, outlier detection, significance testing, and other procedures that have practical value to applied researchers. We discuss software that can be used to conduct robust analyses. Psychological research would benefit from the greater use of robust methods. Key Words: Robustness, robust estimation, normality, bootstrapping, outliers, software, statistical assumptions, parametric, Central Limit Theorem

Introduction Classic parametric statistics are the dominant method for analyzing data in psychology and related fields. Researchers routinely estimate parameters such as the mean, use null hypothesis significance tests such as Student’s t-test and analysis of variance (ANOVA), fit regression equations using ordinary least squares, and compute effect sizes such as Cohen’s d . There are important assumptions underlying classic parametric statistics—for example, that scores are normally distributed in the population. When these assumptions are sufficiently satisfied, classic parametric methods work well. But in practice, the assumptions underlying classic parametric methods are often violated. It is not uncommon for these violations to be severe. Many psychologists do not realize that using classic parametric methods when the assumptions underlying them are


sufficiently violated can lead to undesirable consequences. These include the inaccurate estimation of parameters and confidence intervals, inaccurate calculation of p-values, inflated rates of type I error and low statistical power. These problems can lead to erroneous research findings. Fortunately, there is a solution. Robust statistical methods alleviate the problems inherent in using traditional methods when their assumptions are sufficiently violated. Robust methods are designed to produce accurate results both when normal theory assumptions hold and when they do not. Robust methods can be applied in most circumstances where classic parametric statistics have traditionally been used. For example, robust approaches to ANOVA, regression, and effect size have been developed. Analyzing data using robust methods can increase the precision with

which parameters and confidence intervals are estimated and can lead to large gains in statistical power, better control of the type I error rate, and a deeper understanding of how groups compare and variables are associated. Although robust methods have been part of the field of statistics since the 1960s, they have been underused by psychologists and other behavioral scientists. The purpose of this chapter is to provide an introduction to robust methods for those researchers who have not encountered them before. First, we review some of the limitations of classic parametric statistics. We then introduce some key concepts and procedures underlying robust methods. We illustrate the practical benefits of using robust methods and software that can be used to conduct robust analyses. Finally, we touch on criticisms of robust methods and future directions for the field.

Problems With Classic Techniques All statistical procedures are built on assumptions. Two fundamental assumptions underlying classic parametric statistical procedures (also known as normal theory or traditional methods) are normality and homogeneity of variance/homoscedasticity. These assumptions refer to the distribution of scores in the population from which sample data are drawn. The normality assumption states that the data are normally distributed in the population (or, in the case of regression, that the residuals are normally distributed). Homogeneity of variance refers to the dispersion of the scores, which should be identical across populations. When normality holds, the mean and standard deviation (SD) are optimal indices of central tendency and variability. However, when normality does not hold, the mean and SD can break down. To illustrate this point, consider Figure 19.1, which comes from a study by Ho, Hunt, and Li (2008). These authors investigated the delay between the onset of anxiety disorders and treatment seeking among 46 Chinese immigrants living in Australia. The length of time between anxiety disorder onset and treatment seeking ranged from 0 to 48 years. It is evident from examining Figure 19.1 that the distribution of scores is skewed (i.e., asymmetric) rather than normally distributed. The mean time between the onset of anxiety disorders and seeking treatment was 7.04 years (SD = 9.96). Both the mean and SD are inflated by a very small percentage of immigrants (< 4% ) who took more than 40 years to seek treatment. Most immigrants (about

70%) took less than 7 years to seek treatment, and a large proportion of the sample (>36%) sought treatment within 2 years. Consequently, the mean of 7.04 is not a good indicator of how long a typical Chinese immigrant waits before seeking help for an anxiety disorder. Similarly, the SD (9.96) is not a good indicator of dispersion in this data set because it is inflated by the presence of outliers (i.e., extreme values). It is well known that under normality, approximately 68% of scores fall within 1 SD of the mean, 95% within 2 SDs, and 99% within 3 SDs. However, for the data in Figure 19.1, more than 90% of scores in the distribution fall within 1 SD of the mean. In summary, departures from normality and the presence of outliers in a distribution can compromise the usefulness of the mean as an index of central tendency and the SD as a measure of dispersion. Non-normality and outliers can also lead to other problems, such as:

• distorted estimates of reliability coefficients such as Cronbach's α (Christmann & Van Aelst, 2006; Liu & Zumbo, 2007).
• biased estimates of regression and structural equation model (SEM) parameters, and error-prone SEM fit statistics and significance tests (Lim & Melville, 2009; Yuan & Bentler, 2001; Yuan, Bentler, & Zhang, 2005; Yuan, Marshall, & Weston, 2002).
• the estimation of inaccurate loading patterns in exploratory factor analysis (Yuan, Marshall, & Bentler, 2002).

Violations of distributional assumptions can also have a substantial impact on the performance of null hypothesis significance tests. This is an important topic given that virtually all articles published in psychology journals report the results of significance tests (Cumming et al., 2007). The impact of distributional assumption violations on widely used significance tests such as the independent groups t-test and various types of ANOVA has been extensively studied over several decades using Monte Carlo methods. To illustrate conceptually how Monte Carlo studies work, imagine we want to determine the actual type I error probability when using the two-sample t-test at the 0.05 level and when sampling is from a non-normal distribution. That is, under normality, the probability of rejecting when the null hypothesis is true is 0.05 and the goal is to determine the extent to which this remains true when sampling from a non-normal distribution. Using special software,


Figure 19.1 Delay between the onset of anxiety symptoms and seeking treatment among Chinese immigrants living in Australia (y-axis: percent; x-axis: total length of delay in years, 0–48).

we would create two large population distributions that are not normally distributed but that have equal (population) means. Then we would use the computer to draw thousands of random samples from the populations. For each sample, we would perform a t-test. If the t-test controls the probability of a type I error well, then we would expect that 5% of the t-tests performed would return erroneous "statistically significant" results. If the rate of observed type I errors is considerably lower or higher than 5%, then we would conclude that the test is not robust. Similarly, we could examine the power of the t-test by setting the means of the populations so that they are different (i.e., so that the null hypothesis is false). We would then draw thousands of random samples and compute a t-test on each sample to determine the proportion of occasions that the test correctly rejected the null hypothesis. Numerous Monte Carlo studies have found that classic parametric techniques are generally not robust to violations of their assumptions. The precise impact of assumption violations depends on a complex interaction of factors, such as the test used, sample size, and the type and severity of assumption violations. However, the following general conclusions can be drawn from the literature:


• Violating the normality assumption can substantially reduce statistical power (e.g., Blair & Higgins, 1980, 1985; MacDonald, 1999).
• Violating the homogeneity of variance assumption can distort the type I error rate of a statistical test, biasing it upward or downward, and reduce statistical power (e.g., Harwell, Rubinstein, Hayes, & Olds, 1992; Wilcox, Charlin, & Thompson, 1986).
• Combined violations of assumptions (e.g., violating both normality and homogeneity) are common and can severely affect type I and II error rates (e.g., Lix & Keselman, 1998; Zimmerman, 1998).
• The impact of assumption violations is exacerbated when sample sizes are unequal (e.g., MacDonald, 1999; Wilcox et al., 1986).
• It is a misconception that equal sample sizes alleviate low power and type I errors. Although the impact of distributional violations is less pronounced when cell sizes are equal, they can still be problematic (e.g., Harwell et al., 1992).

To further illustrate the undesirable impact of distributional assumption violations on type I error rates and statistical power, consider the following examples. Figure 19.2 is a plot of the statistical power of two-way ANOVA to detect a main effect

Figure 19.2 Statistical power of factorial ANOVA under normal and non-normal distributions (y-axis: power, 1 − β; x-axis: effect size, τ). Based on data reported by Akritas, Arnold & Brunner (1997, Table 2) for 2 × 4 factorial design, α = 0.05, n = 10 per cell. Based on Monte Carlo simulation with 5000 runs. The errors of the "nonnormal distribution" were lognormally distributed.

Figure 19.3 Observed Type I error rate of independent groups t-test (y-axis: Type I error rate, α; x-axis: population variance ratio, x:1) based on Monte Carlo simulation reported in Zimmerman (1998). Data were sampled from exponential distributions with varying degrees of heterogeneity. Sample sizes per group were 20 and 40, the group with the smaller sample size paired with the larger variance. Nominal alpha was set to 0.05.

of factor A when sampling from normal and nonnormal population distributions, based on a study conducted by Akritas, Arnold, and Brunner (1997). High statistical power is desirable, as it increases the chance of detecting real differences between groups. Figure 19.2 shows that the power of ANOVA is considerably lower when sampling from a non-normal (exponential) population distribution compared to when the distribution is normal. When the normality assumption is sufficiently violated, the power of the ANOVA F -test to detect genuine effects can be reduced by more than 50%. Heterogeneity of variance can lead to low power and distorted rates of type I error. For example, Wilcox, Charlin, and Thompson (1986) investigated the effect on the type I error rate of ANOVA when normality holds but variances are heterogeneous. They found that when sample sizes are unequal, the observed type I error rate can be more than six times the nominal value. For example, the actual (observed) type I error rate of ANOVA can exceed 30% when it should be 5% . The type I error rates were less inflated when sample sizes were equal, but could still be more than double the nominal level (i.e., the observed type I error rate could exceed 10% when it should have been 5% ). This emphasizes the point that although equal sample sizes can reduce the impact of assumption violations, they do not solve the problem. The authors also found that when variances were heterogeneous, the power of ANOVA to detect real effects was typically only one-third of what it was when variances were homogeneous. In some cases, power under ANOVA was more than 14 times lower under heterogeneity.

Thus far, we have only considered the impact of non-normality and heterogeneity in isolation. But in practice, it is common for both assumptions to be simultaneously violated (Erceg-Hurn & Mirosevich, 2008). Combined assumption violations can have a very undesirable impact on the type I error rate and statistical power of significance tests. Consider a study by Zimmerman (1998), who examined the type I error rate of the independent groups t-test when sampling from a variety of non-normal distributions, under varying degrees of heterogeneity. Figure 19.3 shows the rate of observed type I errors when sampling from exponential (i.e., non-normal) distributions with heterogeneous variances, when group sample sizes are unbalanced by a ratio of 2:1. Nominal α was set to 0.05. If the t-test was not affected by heterogeneity, we would expect to see the plot of the observed type I error rate follow the dotted line in the figure. That is, the observed type I error rate would be constant at 0.05 irrespective of the degree of heterogeneity. However, it is evident that as heterogeneity increases, so, too, does the occurrence of type I errors. Only when sampling occurs from populations with equal variances do the nominal and observed type I error rates match. When the variance ratio is 1.5:1, the observed type I error rate (0.09) is almost double the nominal rate; when the variance ratio is 4:1, the observed type I error rate is 0.17. Similar findings occur when other significance tests are studied, such as ANOVA. For example, Lix and Keselman (1998) found that the observed type I error rate of ANOVA can reach 50%—ten times the nominal level of 5%—when it is used to analyze


data sampled from non-normal and heterogeneous populations.
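Following the conceptual recipe described earlier in this section, a minimal Monte Carlo sketch in base R looks like the following. The specific distributions, sample sizes, and 4:1 variance ratio are our own illustrative choices (loosely in the spirit of the Zimmerman, 1998, conditions), not the exact designs used in the studies cited above.

```r
## Estimate the actual Type I error rate of Student's t-test when the null
## hypothesis is true but the populations are skewed (exponential-shaped),
## with a 4:1 variance ratio and the smaller group paired with the larger variance.
set.seed(1)
nsim <- 5000
reject <- replicate(nsim, {
  g1 <- 2 * rexp(20) - 1                 # shifted exponential: mean 1, variance 4
  g2 <- rexp(40)                         # exponential: mean 1, variance 1
  t.test(g1, g2, var.equal = TRUE)$p.value < 0.05   # Student's t-test at alpha = .05
})
mean(reject)   # observed Type I error rate; typically well above the nominal .05 (cf. Figure 19.3)
```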

Assumption Violations Are Common It is evident that distributional assumption violations can have a negative impact on the results of traditional statistical procedures. This would be of little concern if assumption violations were rare, but this is not the case. Empirical research suggests that violations of distributional assumptions are the rule, rather than the exception. In a landmark study, Micceri (1989) examined whether psychological data are normally distributed. He gathered 440 data sets from the psychology and educational literatures by contacting the authors of published research articles and psychometric tests. The data sets comprised scores on a wide range of measures, such as ability and aptitude tests (e.g., reading, maths, GPA), personality scales (e.g., MMPI), and measures of constructs such as anxiety, anger, curiosity, sociability, quality of life, locus of control, hallucinations, and so forth. All data sets were based on large sample sizes—the minimum was 190 and the maximum 10,893. For 70% of the data sets, the sample size exceeded 1000 subjects. Because the samples were large, there is a high likelihood that the distributions of the observed (sample) data closely approximate the population distributions from which they were drawn. Micceri found that none of the data sets were normally distributed, and few distributions even remotely resembled the normal curve. Instead, the distributions tended to resemble those in Figure 19.4. Real psychological data are more likely to be skewed and lumpy than normally distributed. Micceri's findings are consistent with other empirical research (e.g., Bradley, 1977; Hill & Dixon, 1982; Wu, 2002) and anecdotal evidence suggesting that normality is more fiction than fact. In 2001, biostatistician Peter Gartside quipped, "After almost 40 years of teaching statistics and providing consulting to biomedical researchers, I have yet to come across a real dataset that is symmetric" (p. 171), whereas statistician Marks Nester (1996, p. 405) wrote, "Surely there is no one among us who believes that a sample of data from a normal distribution has ever existed." As far back as 1947, Geary wrote that all statistical textbooks should carry a warning stating that "Normality is a myth; there never was, and never will be, a normal distribution" (p. 241). There is also considerable evidence that the types of data routinely analyzed by psychologists often violate the homogeneity of variance assumption that underlies many statistical tests (Erceg-Hurn &


Mirosevich, 2008; Grissom, 2000; Keselman et al., 1998). Given the abundance of evidence that real data are not normally distributed, one may wonder why a class of statistics based on such an unrealistic assumption was ever developed. The reason is mathematical expediency (Geary, 1947). Assuming normality (and homoscedasticity) simplifies statistical analyses. This was particularly important during the first half of the twentieth century, when classic parametric statistics were developed and computations were often performed by hand and using tables. Today, assuming normality and homogeneity is unnecessary because robust statistical analyses that do away with those assumptions can be quickly performed using computers.

Traditional Approaches for Dealing With Assumption Violations Are Flawed Rather than using robust statistics, researchers typically attempt to deal with distributional assumption violations using methods such as switching to nonparametric tests or transforming data and continuing to use classic parametric methods. These approaches to dealing with assumption violations are problematic, as outlined by Erceg-Hurn and Mirosevich (2008). For example, simple transformations often fail to sufficiently restore normality and complicate the interpretation of results, as they are based on the transformed rather than original data. This is an important issue given that transformations can reverse the order of group means (Brunner, Domhof, & Langer, 2002) and alter the spacing between points in a distribution (Osborne, 2002). Relying on well-known modifications of classic parametric tests (e.g., Welch t-test) and classic nonparametric statistics (e.g., Mann Whitney U -test) as a means for dealing with assumption violations is also generally ill advised, because these procedures can suffer from the same problems as classic parametric methods (low power and inflated type I error rates) when the normality and homogeneity of variance assumptions are concurrently violated (Lix, Keselman, & Keselman, 1996; Sawilowsky, 1990; Zimmerman, 1998, 2000). Statistically testing assumptions (e.g., using Levene’s test for homogeneous variances) and then deciding whether to use a standard or robust procedure on the basis of the assumption test result should also be avoided. Assumption tests often fail to detect violations that are sufficiently severe to cause problems for parametric tests, and as a result the practice of assumption testing has been heavily criticized (Erceg-Hurn &

Figure 19.4 Common distributions of psychological data: (a) discrete mass at zero, (b) positive skew, (c) multimodality and lumpiness, (d) negative skew, (e) bimodality, (f) digit preference, (g) mass at zero, (h) smooth symmetric. (Reprinted with permission from Sawilowsky & Blair, 1992).

Mirosevich, 2008; Glass & Hopkins, 1996; Wells & Hintze, 2007). Another common approach to dealing with assumption violations is to simply ignore the problem. As noted by Bradley (1978), Erceg-Hurn and Mirosevich (2008), and Wilcox (1998b), authors of textbooks and journal articles often claim that classic parametric tests are “robust” or “insensitive” to

violations of their assumptions and that therefore there is no need to use alternative methods. It is true that parametric methods are relatively unaffected by assumption violations in certain circumstances. For example, Sawilowsky and Blair (1992) found that Student’s t-test is unaffected by non-normality when all four of the following conditions hold: (1) variances are equal, (2) sample sizes are equal, (3) sample


sizes are 25 or more per group, and (4) tests are two-tailed. When any of these conditions is not satisfied, the t-test is not insensitive to violations of normality (Bradley, 1980; Ramsey, 1980; Sawilowsky & Blair, 1992; Zimmerman, 1998). Authors who argue that parametric tests are robust tend to overlook the limited circumstances under which such statements are true (Bradley, 1980). Some authors also appeal to statistical theory to justify the use of classic parametric methods despite substantial violations of their assumptions. Indeed, one of the most fundamental lessons in introductory statistics textbooks teaches students about the Central Limit Theorem, which states that even when observations are sampled from a non-normal distribution, the sampling distribution of the mean will approximate a normal distribution when the sample size is large. Tutorials illustrating the Central Limit Theorem can be found online at http://www.intuitor.com/statistics/CentralLim.html and http://www.chem.uoa.gr/applets/AppletCentralLimit/Appl_CentralLimit2.html. It follows from the theorem that as long as N is large, classic parametric methods can theoretically be used even when normality is violated without fear of negative consequences. Textbook authors tend to claim that as long as N equals 30 or more, the Central Limit Theorem can be relied on and that departures from distributional assumptions are of little concern. In contrast to textbook advice, the extant research indicates that the rate at which a sampling distribution of means converges to a normal distribution depends not only on sample size but also on the shape of the underlying population distribution. The Central Limit Theorem tends to work well when sampling from distributions with little skew, light tails, and no outliers (Wilcox, 2003; Wu, 2002). However, such distributions are not typical of those from which psychologists sample data (Micceri, 1989). Wu (2002) discovered that when sampling from distributions typical of psychological research (see Figure 19.4), sample sizes in excess of 260 can be necessary for a distribution of sample means to resemble a normal distribution. Consistent with this, Smith and Wells (2006) found that as the skewness and kurtosis of population distributions increased, sample sizes in excess of 200 can be needed for the Central Limit Theorem to work when sampling from realistic population distributions. Other studies (e.g., Bradley, 1980) revealed that z-, t-, and F-tests can suffer from very inflated rates of type I error when sampling from skewed distributions,


even when sample sizes are in the hundreds. In summary, when sampling from distributions commonly found in psychological research, sample sizes in the hundreds can be needed for the sampling distribution of means to converge to normality. Such large sample sizes are often hard to obtain in practice. Given this, researchers should be wary of placing faith in the Central Limit Theorem to protect them from type I errors and low statistical power.

Robust Statistics We now turn to providing a nontechnical introduction to some basic robust statistical methods. Robust statistics can be viewed as a modern version of parametric statistics. Robust analogs of most classic techniques, such as ANOVA and regression, have been developed. Whereas classic parametric techniques were pioneered in the first half of the twentieth century, robust methods were introduced into statistics in the 1960s by the likes of Tukey (1962), Huber (1964), and Hampel (1968). Over subsequent decades, numerous researchers have introduced new robust methods and refined existing techniques.

Purpose of Robust Methods The broad goal of robust statistics is to find population parameters, estimators, and hypothesis testing methods that are not drastically impacted by changes in population distributions. The term robustness means the statistic/procedure is insensitive to the effects of non-normality, heteroscedasticity, outliers, or contaminated data (Maronna, Martin, & Yohal, 2006). An outlier is an observation that is unusually far from the bulk of data. The two observations at the right of Figure 19.4 are a salient example. Outliers can provide useful information, but if the goal is to learn about the typical subject under study, they are a nuisance because they can unduly influence measures of central tendency and variability. Compared to advocates for traditional statistical methods, advocates of robust methods have a different perspective on how observed data come to be non-normally distributed. The traditional view when sample data are not normally distributed is that something has gone awry during the sampling process. It is assumed that the population from which the data were sampled is normally distributed and outlier-free. Therefore it makes sense, despite the non-normal sample data, to estimate parameters such as the mean and the SD. In contrast, advocates of robust methods take the view that if sample

data deviates from normality, then the population itself is likely to be non-normal as well. This is a more realistic position to take, given that empirical research indicates that psychological data are rarely normally distributed (Micceri, 1989). If we accept the view that populations are usually not normally distributed, then estimating parameters such as the mean, variance, or correlation can be problematic because the estimates can be distorted by an extremely small subset of observations (Wilcox, 1998a). It is not a goal of robust statistics to find better ways of estimating these parameters—rather, the focus is on alternative robust parameters and their estimators.

Robust Measures of Central Tendency A measure of central tendency can be loosely defined as a quantity that characterizes the typical individual or thing under study. The idea is that a measure of central tendency should characterize the middle portion of a data set. There are numerous measures of central tendency, such as the arithmetic mean, trimmed mean, Winsorized mean, median, and M-, R-, and S-estimators. It is important to note that there is no single “best” measure to use in all situations—different distributions call for different measures of central tendency. We now describe some of the most relevant measures of central tendency that may be of interest to psychologists. The most well-known measure of central tendency is the (arithmetic) mean. Unfortunately, the mean is not robust. Consider a data set containing the following values: 1, 1, 1, 2, 2, 5, 5, 5, 6, 20, 40.

(1)

The mean of the values is 8. However, the mean is distorted by two outlying values (20 and 40). All of the other values in the data set are less than or equal to 6. Herein lies the problem with the mean—it can be distorted by as little as one outlier. The finite sample breakdown point of an estimate is the smallest proportion of observations that can distort it, so that it no longer accurately reflects the central values in a data set. It is an index of an estimator’s resistance to contamination. The breakdown point for the mean is 1/N —indicating that a single outlier can result in the mean becoming arbitrarily large or small, irrespective of the values of the other observations.

median The median is an alternative to the mean that is resistant to the deleterious effects of outliers. The median is the middle value (i.e., 50th percentile) in a set of observations. The median of the observations in Equation 1 is 5. The finite sample breakdown point of the median is approximately 0.50—the highest value possible. In other words, the median can be an accurate indicator of the central observations in a data set, even when a large proportion of outliers are present. A weakness of the median is that compared to the mean and competing robust estimators, it has a large standard error when data are sampled from normal or light tailed distributions (Wilcox, 2003). The practical consequence of this is that hypothesis tests of medians can be less powerful than when other robust measures of location are utilized. As a result, it is often better to perform hypothesis tests of trimmed means, rather than medians.

trimmed mean An appealing robust measure of location is the class of trimmed means, which includes the median as a special case. Trimming involves removing a certain percentage of data from the tail(s) of a distribution and then computing the mean of the remaining observations. The median represents the most extreme amount of trimming where all but one or two values are trimmed. The rationale for trimming is that influential observations that can distort the mean are found in the tails of a distribution. Such observations are undesirable given that the purpose of a measure of central tendency is to find a value that best represents the middle portion of a data set. By trimming, the influence of outlying observations is negated. The symbol y is often used to indicate the proportion of data trimmed from each tail. The percentage to trim is a decision that needs to be made by the researcher—current research suggests that for most purposes, y = 0.20 (i.e., 20% trimming) works well (Wilcox, 2012a). To make things more concrete, let us compute a 20% trimmed mean for the observations in Equation 1. To do this, the lowest and highest 20% of the values from the data set are removed, leaving: 1, 2, 2, 5, 5, 5, 6.

(2)

The mean of the remaining values is then calculated. The 20% trimmed mean is 3.71, which reflects the central values of the original data set more accurately than the untrimmed (arithmetic) mean of 8.
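As a quick check in base R (no add-on packages needed), the trim argument of mean() removes the stated proportion of observations from each tail before averaging, which reproduces the worked example:

```r
x <- c(1, 1, 1, 2, 2, 5, 5, 5, 6, 20, 40)   # the scores in Equation 1
mean(x)              # 8    -- pulled upward by the two outlying values (20 and 40)
median(x)            # 5
mean(x, trim = 0.2)  # 3.71 -- the 20% trimmed mean computed above
```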


The breakdown point of the trimmed mean is equal to y. So, a 20% trimmed mean has a breakdown point of 0.20. A 10% trimmed mean has a breakdown point of 0.10, a 30% trimmed mean a breakdown point of 0.30, and so forth. One major advantage of the trimmed mean over the arithmetic mean is that it is more resistant to outliers. Consequently, the trimmed mean may better reflect the typical or central values in a data set than the mean, as was the case in the above example. Further, if data are normally distributed, then the mean and trimmed mean will be the same. Opting to use the trimmed mean can make it easier to detect genuine differences among groups, and relationships among variables.

m-estimators When computing a trimmed mean, a predetermined percentage of a distribution (e.g., 20%) is removed from both tails. However, in some situations, such as when sampling from light-tailed distributions, it may be desirable to trim no or few observations, as this can lower the standard error. Also, it is sometimes desirable not to trim the same proportion of observations from each tail. For example, if a distribution is skewed to the left, then it might be preferable to trim more observations from the left rather than the right tail of the distribution (see Keselman, Wilcox, Lix, Algina, & Fradette, 2007). A class of robust measures of location known as M-estimators can work well in such situations. An attractive property of M-estimators is that they empirically determine what proportion of a distribution to trim, rather than determining it a priori, as is the case for trimmed means. There are several approaches to computing M-estimators, and providing computational details is beyond the scope of this chapter. We refer readers to Wilcox (2003) for a relatively nontechnical introduction to M-estimators and software to compute them; see also Wilcox (2012a), Maronna et al. (2006), and Keselman et al. (2007). M-estimators are generally preferred over trimmed means when performing regression analyses and may be superior for location problems when sampling from normal and contaminated normal distributions. Trimmed means are superior when sampling from exponential and lognormal distributions and when there are tied values (Sawilowsky, 1998). Trimmed means are also easier to compute and interpret. The arithmetic mean, trimmed mean, and M-estimators can all be conceptualized as weighted means. For the arithmetic mean, all observations in


a data set are given equal weight (i.e., a weight of 1). In contrast, trimmed means and M-estimators give more weight to observations at the center of a data set and less weight to those in the tails. For trimmed means, the trimmed observations are given a weight of 0, and the observations that are not trimmed are given a weight of 1/(n − 2g ), where g is the number of observations that are trimmed from each tail. The weighting scheme used by M-estimators is more complex, but the underlying principle—to ensure that central observations are given more weight than distant observations—is the same.
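For readers who want to try an M-estimator without wading into the computational details, the sketch below uses the huber() function from the MASS package that ships with R. This is just one convenient option, not the only or necessarily best M-estimator; the dedicated robust software discussed later in the chapter offers many more.

```r
library(MASS)                                # provides huber(): Huber's M-estimate of location
x <- c(1, 1, 1, 2, 2, 5, 5, 5, 6, 20, 40)    # the scores in Equation 1
huber(x)$mu   # M-estimate of location; like the trimmed mean, it sits well below the outlier-inflated mean of 8
```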

Robust Measures of Scale Measures of central tendency do not tell us about the variability of scores in a data set. For that, we require measures of scale—also known as measures of variability. The most common estimators of scale are the sample variance and its square root, the SD. Neither is robust—the finite sample breakdown point of both is 1/N . A small proportion of outliers can seriously hamper the utility of these estimators. Several robust measures of scale have been studied. We now discuss two that have practical utility—the Winsorized variance and the Median Absolute Deviation (MAD; see Keselman, Wilcox, Algina, Othman, & Fradette [2008] for robust tests of spread).

winsorized variance Whereas extreme observations are eliminated when trimming, Winsorizing involves “pulling in” and “replacing” extreme scores in a data set with less extreme values. To illustrate the calculation of a Winsorized variance, consider the following set of observations: 100, 13, 12, 15, 22, 20, 21, 19, 99, 9.

(3)

The first step in computing the Winsorized variance is to re-order the scores from lowest to highest: 9, 12, 13, 15, 19, 20, 21, 22, 99, 100.

(4)

Next, the smallest y proportion of scores are replaced by the next smallest value, and the largest y proportion of scores are replaced by the next largest value. For example, if y = 0.2, then for the present data set, the lowest two values (9 and 12) would be replaced by 13, and the highest two values (99 and 100) would be replaced by 22. The resulting Winsorized scores are: 13, 13, 13, 15, 19, 20, 21, 22, 22, 22.

(5)

The mean of the Winsorized scores is then calculated: X̄W = (1/10)(13 + 13 + 13 + 15 + 19 + 20 + 21 + 22 + 22 + 22) = 18. (6) Finally, the variance of the Winsorized scores is calculated using the usual formula for the sample variance, with the exception that the Winsorized scores and Winsorized mean are used in place of the original scores and mean. For the present data set, the Winsorized variance is 16.22. This value can be converted into a Winsorized SD by taking the square root—for the example, the Winsorized SD is 4.03. Before computing a Winsorized variance, a decision needs to be made about what proportion of scores to Winsorize. Often a Winsorized variance is computed after first estimating a robust measure of location, such as a trimmed mean. Typically the same value of y that was used to compute the trimmed mean is used when estimating the Winsorized variance.
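A minimal base-R sketch of this calculation is given below; the helper function name is ours, not part of any package.

```r
winsor_var <- function(x, prop = 0.2) {
  xs <- sort(x)
  n  <- length(xs)
  g  <- floor(prop * n)                # number of scores to Winsorize in each tail
  if (g > 0) {
    xs[1:g] <- xs[g + 1]               # pull the lowest g scores up to the next value
    xs[(n - g + 1):n] <- xs[n - g]     # pull the highest g scores down to the next value
  }
  c(win.mean = mean(xs), win.var = var(xs))
}
x <- c(100, 13, 12, 15, 22, 20, 21, 19, 99, 9)
winsor_var(x)            # Winsorized mean = 18, Winsorized variance = 16.22
sqrt(winsor_var(x)[2])   # Winsorized SD = 4.03
```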

median absolute deviation Another robust measure of scale is the MAD. We illustrate its computation by again using the scores in Equation 3. First compute the sample median, which is 19.5. Then subtract the median from each of the scores in the data set. For example, subtracting the median (19.5) from the first score (100) gives us 80.5. If we continue for the remainder of the scores, we get 80.5, −6.5, −7.5, −4.5, 2.5, 0.5, 1.5, − 0.5, 79.5, −10.5.

(7)

We then ignore the positive and negative signs, and place the absolute values in ascending order: 0.5, 0.5, 1.5, 2.5, 4.5, 6.5, 7.5, 10.5, 79.5, 80.5. (8) The MAD is the median of these values: 5.5. The MAD is involved in the computation of some M-estimators and is also very useful for detecting outliers. A common but flawed outlier detection strategy is to classify scores that are 2 or 3 SDs smaller or larger than the mean as outliers. The rationale for this strategy is that under a normal distribution, we expect only a small proportion of scores to fall further than 2 or 3 SDs from the mean. However, this method is problematic because outliers can distort the mean and SD, and this can lead to outliers not being detected. Consider again the scores in Equation 3. There are clearly two scores (99 and

100) that are considerably different from the rest. These two outliers inflate the mean and SD of the data set, which are 33 and 35.3, respectively. The two outlying values both fall within 2 SDs of the mean. A better outlier detection rule is to declare a score an outlier when |X − M| / (MAD/0.6745) > C, (9)

where X is the score, M is the median of the scores, and C is a threshold value. Equation 9 is known as the Hampel Identifier. Hampel (1985) recommended using a threshold value of 3.5, whereas Rousseeuw (1990) recommended 2.5 and Wilcox (2012a) 2.24. The threshold values are all somewhat arbitrary—the important point is that scores that are not outliers are unlikely to exceed any of the aforementioned thresholds. For the scores in Equation 3, the median (M) equals 19.5, and the MAD equals 5.5. The extreme score of 99 is declared an outlier, as (99 − 19.5) / (5.5 / 0.6745) = 79.5 / 8.154 = 9.75, which exceeds all of the cutoff values proposed in the literature. Given that 99 was deemed an outlier, 100 is as well.
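Both quantities are available in base R, so the worked example can be reproduced in a few lines; note that mad() rescales by 1.4826 (approximately 1/0.6745) by default, which is exactly the denominator in Equation 9.

```r
x <- c(100, 13, 12, 15, 22, 20, 21, 19, 99, 9)   # the scores in Equation 3
median(x)              # 19.5
mad(x, constant = 1)   # 5.5  -- the raw MAD computed above
mad(x)                 # 8.15 -- MAD/0.6745, the denominator of the Hampel Identifier
abs(x - median(x)) / mad(x) > 2.24   # flags 100 and 99 as outliers using Wilcox's cutoff
```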

Bootstrapping Bootstrapping is a computer-intensive resampling technique that was introduced by Efron (1979). Bootstrapping is used to approximate sampling distributions, which play a critical role in hypothesis testing and the construction of confidence intervals (Guthrie, 2001). In regards to hypothesis testing, the basic idea is to perform a simulation study using the data at hand to determine an appropriate critical value, in contrast to determining a critical value by assuming normality. In this section, we provide a conceptual overview of how bootstrapping is implemented. The sampling distributions used in classic parametric statistics, such as the t and F families of distributions, are theoretical and are constructed by making strict assumptions about the shape of population distributions, such as that they exactly follow a normal curve. Bootstrapping is a method that allows us to do away with making unrealistic assumptions about population distributions. Bootstrapping is used to construct empirical sampling distributions. According to Rodgers (1999), the founders of classic parametric statistics, such as Sir Ronald Fisher, believed that empirical sampling distributions were superior to theoretical distributions, but they had to settle for theoretical sampling distributions because, without computers, they lacked


a suitable method for creating empirical sampling distributions. Imagine that the goal is to determine how large or small Student's t-statistic must be to reject the hypothesis that the population mean is equal to some specified value when testing at the 0.05 level. For illustrative purposes, assume the goal is to test the hypothesis that the population mean is 0. One of the more basic bootstrap methods, called a bootstrap-t method, is performed by doing the following:

• A sample of data of size n is collected.
• Subtract the sample mean from each observation, so that now the sample mean is 0.
• Perform a simulation study on the data that now have a mean of 0. That is, sample with replacement n observations from the data set and compute Student's t-statistic.
• Repeat hundreds or thousands of times.
• Imagine that 2.5% of the resulting t-values are less than or equal to −2.3 and that 2.5% of them are greater than or equal to 2.5. Then the bootstrap (simulation) study indicates that if the observed value for t is ≤ −2.3 or ≥ 2.5, then the type I error probability will be 0.05.

A variation of the bootstrap-t method, called the percentile bootstrap method, proceeds in a similar manner. Rather than computing a test statistic, one merely computes the mean for each (bootstrap) sample, and now the data are not shifted to have a mean of 0. Suppose that among the 1000 (bootstrap) sample means, 95% are between the values 2.9 and 8.3; then (2.9, 8.3) is taken to be an approximate 95% confidence interval for the mean. The percentile bootstrap does not perform well when the goal is to make inferences about the mean, but it performs well when using a robust measure of location such as a 20% trimmed mean. To be a bit more concrete, imagine we conduct a study and obtain the following scores on the dependent variable: 2, 3, 3, 4, 5, 6, 7, 8, 9, 9.

(10)

The sample size is 10, and the 20% trimmed mean is 5.5. We then use a computer to randomly sample with replacement 10 observations from the original scores. Sampling with replacement means that each individual score remains in the original data set before the selection of the next score, rather than being removed from the original data set. As a result, observations can occur more (or fewer) times


in the bootstrapped sample than they did in the original sample. A bootstrap sample generated from the original observations in this example might be 3, 3, 3, 4, 7, 7, 7, 8, 8, 9.

(11)

The 20% trimmed mean of this bootstrap sample is 6. The process of generating bootstrap samples from the original scores is repeated, let’s say, 1000 times. With modern computers, this can be accomplished very quickly. If, for example, the 1000 bootstrapped trimmed means are put in order from lowest to highest, then the central 95% of values can be used to form a 95% confidence interval. A p-value can be computed as well. The major advantage of using bootstrapping is that based on both theoretical and simulation results, it generally leads to more accurate results than if theoretical distributions were used. However, bootstrap methods are not a panacea for dealing with violations of assumptions. Typically they perform very well with robust measures of location. When dealing with means, they can reduce problems associated with Student’s t-test, but practical concerns remain (e.g., Wilcox, 2012a). It is also important to realize that there are many variants of the bootstrap methods outlined here. A thorough description of many bootstrap methods can be found in texts such as Chernick (1999) and Lunnenborg (2000). Readers may also find the paper by Rodgers (1999) interesting, where similarities and differences between the bootstrap and related methods such as the Jackknife are discussed.
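The whole percentile bootstrap procedure for the 20% trimmed mean takes only a few lines of base R. The sketch below uses the ten scores in Equation 10 purely for illustration; with such a small sample, the resulting interval should not be taken too seriously.

```r
set.seed(42)
x <- c(2, 3, 3, 4, 5, 6, 7, 8, 9, 9)   # the scores in Equation 10 (20% trimmed mean = 5.5)
boot_tm <- replicate(1000, mean(sample(x, replace = TRUE), trim = 0.2))  # 1000 bootstrap trimmed means
quantile(boot_tm, c(0.025, 0.975))     # central 95% of the bootstrapped trimmed means = approximate 95% CI
```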

Significance Testing It is important to note that it is usually not possible to simply calculate a robust measure of location or scale and then insert these into standard formulas used to conduct classical analyses. Special adjustments usually need to be made to test statistics, standard errors, and so forth when using robust estimators to take into account dependencies among the observations. We discuss this point in more detail in a later section of this chapter, and the required adjustments are outlined in journal articles and books that we discuss shortly. Most psychologists do not need to concern themselves with manually adjusting formulas, as software capable of performing robust analyses takes care of this issue. Many robust alternatives to common statistical significance tests have been developed over the past few decades. These include robust alternatives to standard t-tests, ANOVA, and regression. Robust

hypothesis tests typically evaluate hypotheses that are similar to those assessed using classic parametric techniques. For example, Student’s t-test is used to evaluate whether two independent population means are significantly different. The goal of a robust t-test is identical, with the exception that the usual measure of central tendency—the mean—is replaced by a robust measure of central tendency, such as a trimmed mean. Similarly, the goal of both standard and robust regression is to find a regression equation that best fits the data. Robust techniques have the advantage of being relatively insensitive to heteroscedasticity and non-normality, which can lead to better fitting regression equations. Replacing classic estimators with robust alternatives generally leads to substantially improved rates of type I error and improved statistical power compared to using classic estimators, even when theoretical sampling distributions continue to be used (e.g., Yuen, 1974). Additional benefits are usually realized by switching to empirical sampling distributions created via bootstrapping (e.g., Keselman, Othman, Wilcox, & Fradette, 2004). An alternative to using robust significance tests is to use modern rank-based methods. These can be viewed as modern nonparametric statistics. Many modern rank-based methods are also robust to type I error inflation and have good statistical power when analyzing data from non-normal distributions. Some useful rank-based methods are discussed in books by Wilcox (2012a,b)—more detailed coverage can be found in other texts (Brunner et al., 2002; Cliff, 1996; Hettmansperger & McKean, 1998).
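As an illustration of what such a robust test looks like in practice, the sketch below calls Yuen's (1974) trimmed-means analog of the independent groups t-test as implemented in Wilcox's R functions (also wrapped in add-on packages on CRAN). The function name and defaults shown are assumptions based on his books, so check the documentation of whatever version you load.

```r
## Assumes Wilcox's functions have been sourced into R (see the software
## section later in this chapter). yuen() replaces means with 20% trimmed
## means and uses Winsorized variances when forming the standard error.
g1 <- c(2, 3, 3, 4, 5, 6, 7, 8, 9, 9)
g2 <- c(1, 1, 1, 2, 2, 5, 5, 5, 6, 20, 40)
yuen(g1, g2, tr = 0.2)   # test of equal 20% trimmed means for two independent groups
```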

Practical Benefits of Using Robust Methods Robust statistical methods are designed to work well both when the assumptions underlying classic parametric method hold and when they do not. Robust hypothesis tests are usually able to maintain the observed type I error rate close to the nominal level under departures from normality and homoscedasticity. They are also usually more powerful than classic methods when classical assumptions do not hold. The major benefit for applied researchers of using robust methods is that they enhance our ability to discover true relationships between variables and to detect genuine differences between groups. There are three converging lines of evidence that support the greater use of robust methods in psychological research. The first line of evidence comes from studies, such as that by Micceri (1989),

demonstrating that normal-theory distributional assumptions are frequently violated. The second line of evidence comes from Monte Carlo studies demonstrating that classic parametric techniques are not generally robust to violations of their assumptions, whereas methods based on robust estimators usually are (e.g., Blair & Higgins, 1980; Keselman et al., 2004; Lix & Keselman, 1998; Zimmerman, 2000). For example, Lix and Keselman (1998) conducted a Monte Carlo study and found that when data are drawn from a variety of non-normal and heteroscedastic populations, the type I error rate of standard ANOVA can become very distorted and reach 50% , whereas robust tests based on trimmed means were able to maintain the type I error rate close to 5% . The third line of evidence supporting the greater use of robust methods comes from analyses of real psychological data and observing the benefits of using robust methods. Robust methods can detect effects that would otherwise be missed. For example, in a study investigating the interaction between working memory and drug-relevant memory associations on predicting substance use, Grenard and colleagues (2008) found that standard regression methods failed to uncover effects, whereas robust regression produced statistically significant results. The reason for the divergent results was that the residuals were heteroscedastic—a condition known to reduce the power of standard regression but that does not affect robust regression analyses. In another study, Schug, Raine, and Wilcox (2007) analyzed data collected from people suffering from schizophrenia and controls. The data were skewed and contained outliers. Robust analyses uncovered significant differences between the groups that were missed when classic parametric methods were used. Such findings suggest that researchers are well served by using robust analyses. In our view, these converging lines of evidence indicate that it is good practice for psychologists to routinely use robust methods to analyze data. Compared to relying on standard analyses, using robust analyses will result in psychologists: • detecting more genuine relationships between

variables (i.e., greater statistical power, fewer type II errors); • returning fewer spurious "false-positive" findings (i.e., fewer type I errors); • estimating more relevant effect sizes and other parameters of interest; and


• computing confidence intervals around parameter estimates with more accurate probability coverage.

Thus, there are many statistical advantages to using robust methods. These statistical advantages translate into substantive advantages—researchers are more likely to advance knowledge if they are detecting genuine relationships and estimating them with precision. We see few disadvantages in opting to use robust methods from a statistical or substantive perspective. Theoretically, standard methods will be more powerful than robust methods if all normal theory assumptions are (very close to) perfectly satisfied. However, the difference in power is small. Further, the likelihood that assumptions will be (close to) perfectly satisfied is very small, so the fact that standard methods are slightly more powerful than robust methods when normal theory assumptions hold is not a compelling argument against the use of robust methods. Another argument occasionally used to support the use of standard methods over robust alternatives is that liberally biased significance tests—that is, tests for which assumption violations result in inflated type I error rates—may be more powerful than robust alternatives. There are two flaws with this argument. First, if a test is known to be liberally biased, then researchers who obtain statistically significant results will be uncertain whether the results are “genuine” or type I errors. Second, it is not the case that liberally biased tests are necessarily powerful tests. When assumptions are sufficiently violated, standard methods such as ANOVA often suffer from both inflated type I error rates and low statistical power (e.g., Wilcox et al., 1986).

should standard methods continue to be used? If researchers follow our recommendation to routinely use robust methods, then they must also decide whether to rely solely on robust methods (i.e., use them as a replacement for classic parametric analyses) or whether to perform both standard and robust analyses. From a purely statistical perspective, there is little reason to perform both standard and robust analyses, given the usual superiority of robust methods. However, we can see some pragmatic advantages to performing both sets of analyses. First, by performing both standard and robust analyses on the same dataset, researchers gain an


insight into when using robust methods makes a difference. Theory and extant research suggest that standard and robust analyses will most often return divergent findings when sample sizes are modest, when effect sizes are small to moderate, and as the severity of deviations from normal theory assumptions increases. In such circumstances, the signal-to-noise ratio is low, meaning genuine effects are hard to detect. Robust methods can help amplify the signal and reduce the noise. In contrast, when effect sizes and sample sizes are large, or when assumption violations are small, standard and robust analyses are likely to lead to the same substantive research findings. A second pragmatic reason for using both standard and robust analyses relates not to the analysis of the data itself but to the communication of results to others. All psychologists receive training in standard parametric methods, but few are well versed in robust methods. Therefore, it is easier for readers to understand standard, rather than robust, analyses. If standard and robust analyses conducted on a particular data set lead to the same substantive conclusions, then it is probably wise to report the standard analyses because they will be more easily understood. To quote Tukey (1979, p. 103), "It is perfectly proper to use both classical and robust/resistant methods routinely, and only worry when they differ enough to matter. But when they differ, you should think hard." A benefit of analyzing data using multiple methods is that it can give us greater confidence in our results when the analyses return consistent findings. For example, Erceg-Hurn and Steed (2011) measured the strength of smokers' negative reactions upon exposure to either graphic or text-only cigarette warnings. The researchers analyzed their data using classic parametric methods. They found that smokers exposed to the graphic warnings became significantly more irritated, annoyed, angry, and aggravated than those who viewed text-only warnings. However, the data were very skewed, and the variance of scores in the graphic warnings condition was twice as large as in the text-only condition. Such distributional characteristics can result in inflated rates of type I error when using classic parametric statistics, meaning that the significant results could be "false positives." To guard against this possibility, Erceg-Hurn and Steed reanalyzed the data using robust and rank-based methods and found the same significant differences. The consistent pattern of results across the analyses meant that the researchers were more confident

about their findings than if they had relied solely on the standard analysis. This example illustrates that performing robust analyses can act as a useful check or audit on the results of standard analysis, helping guard against the possibility of making errors. When standard and robust analyses of the same data lead to divergent results, we recommend that researchers closely explore their data using graphical methods (e.g., boxplots, histograms) and summary statistics (e.g., computing sample variance ratios) to examine the likely cause of the divergent results. Obvious assumption violations will usually account for the divergent results, as was the case in the Grenard et al. and Schug et al. studies discussed earlier. If the data deviate noticeably from normality, if variances are heterogeneous, or outliers are present, then the robust analysis should usually be trusted over the standard analysis. For the special case where data are close to normally distributed and variances almost equal, the standard analysis may be more accurate. When divergent results are obtained, we encourage researchers to be transparent and report the findings of both the standard and robust analyses. This helps highlight to the research community the practical difference that using robust methods can make. It also guards against the possibility that researchers will cherry-pick from the two sets of analyses and only report those results that are consistent with their theory on the topic being investigated. We feel it is important to reiterate that we strongly discourage researchers from using statistical assumption tests as a basis for choosing whether to use standard or robust methods. Assumption tests are usually flawed, as discussed earlier in this chapter. It is much wiser to analyze a data set using both robust and standard methods and if the analyses produce divergent results, to probe the cause of the discrepancy, rather than putting faith in an error prone assumption test.

Books, Software, & Other Resources

Psychologists interested in using robust methods to analyze their own data may find it useful to consult detailed books on the topic. A practical, nontechnical guide to robust t-tests, ANOVA, correlation, regression, interval estimation, and outlier detection has been provided by Wilcox (2012b). This book is a good starting point for most students and faculty interested in using robust methods. The book covers important concepts underlying robust methods that go beyond the scope of this chapter and provides clear information about how to implement robust procedures using a free software program called R. Wilcox (2012a) targets somewhat more advanced researchers—it focuses less on conceptual issues and is more a handbook of robust analyses and code for implementing them in R. Another book by Wilcox (2010) is very nontechnical and focuses on conceptual issues. More advanced coverage of robust regression and outlier detection can be found in a variety of texts (e.g., Huber & Ronchetti, 2009; Maronna et al., 2006; Rousseeuw & Leroy, 2003). There are also journal articles detailing robust alternatives to classical procedures such as principal components analysis (Wilcox, 2008), exploratory factor analysis (Pison, Rousseeuw, Filzmoser, & Croux, 2003; Yuan, Marshall, & Bentler, 2002), effect size (Algina, Keselman, & Penfield, 2005b, 2006b; Wilcox & Tian, 2011), mediation (Zu & Yuan, 2010), reliability estimation (Christmann & Van Aelst, 2006), and tests for spread (Keselman, Wilcox et al., 2008). Yuan and colleagues discuss robust approaches to structural equation modeling that are able to successfully negate the influence of outliers and non-normality (Bentler, Satorra, & Yuan, 2009; Yuan & Bentler, 1998, 2000, 2007; Yuan, Bentler, & Chan, 2004; Yuan, Marshall, & Weston, 2002). We now turn to discussing software that is available for conducting robust analyses. Unfortunately, SPSS, the statistics program most widely used by psychologists, has very limited capabilities for performing robust analyses. SPSS's "Explore" function is able to return various M-estimators for a data set, but the software has no in-built capability for conducting significance tests of M-estimators or any other robust measures of location. In 2009, SPSS released a bootstrapping module that does not come as part of the standard program but must be purchased for an additional cost. The module can be used to generate bootstrap standard errors, confidence intervals, and p-values for significance tests of means. Although it is promising to see bootstrapping incorporated into SPSS, the current module is of limited value because bootstrapping can only be used in conjunction with nonrobust estimators such as the mean. There is no capability for using bootstrapping in conjunction with robust estimators, such as the trimmed mean. Some authors have written syntax for SPSS that allows a limited range of rank-based regression methods to be implemented (see Serlin & Harwell, 2004, and their online supplementary material).


Professor James Jaccard wrote an add-on for SPSS called ZumaStat, which could be used to easily conduct a wide range of robust analyses via an easy-to-use "point-and-click" interface. Unfortunately, the software had to be discontinued after SPSS changed its programming code, which caused compatibility issues with ZumaStat. SAS, a commercial software program, is able to perform some robust regression methods. More information can be obtained from http://support.sas.com/rnd/app/da/iml/robustreg.html. Several authors have also written code for SAS that can be downloaded and used to conduct robust t-tests and ANOVA (see Keselman, Algina, Lix, Wilcox, & Deering, 2008, and the paper's online supplementary materials). S-Plus (commercial) and R (freeware—see http://www.r-project.org) are two software programs that can be used to perform a large range of robust analyses. These programs use a command-line interface, and many researchers will need to invest some time learning how to use the software. An array of self-help guides can be downloaded from the Internet. The payoff for learning how to use S-Plus or R is large, given that these programs can perform a huge range of robust analyses. Wilcox has written code so that more than 1000 different analyses can be performed in R. Wilcox's code can be downloaded from http://dornsife.usc.edu/labs/rwilcox/software/. Clear instructions on how to use Wilcox's functions are found in his books (Wilcox, 2012a,b). New functions not covered in those books are described in documents on his website. There are several other sources of information about code for performing robust analyses in S-Plus and R. Maronna and colleagues (2006) have outlined how to implement a range of robust analyses in S-Plus. A comprehensive list of R packages that can be used to conduct robust analyses can be found at http://cran.r-project.org/web/views/Robust.html. A range of other S-Plus and R packages can be downloaded by navigating to the following URLs:
• http://www.iumsp.ch/Unites/us/Alfio/msp_programmes.htm
• http://www.statistik.tuwien.ac.at/rsr/software/agostinelli.html
• http://r-forge.r-project.org/softwaremap/trove_list.php?form_cat=360

Hubert, Rousseeuw, and colleagues have written a library of functions for performing robust analyses in the commercial software program Matlab. The robust library can be freely downloaded from http://wis.kuleuven.be/stat/robust/programs.html. Some authors have written free, standalone, point-and-click programs for performing particular robust analyses. For example, James Algina created freeware programs to compute robust effect sizes that will run under Windows. They can be downloaded from http://plaza.ufl.edu/algina/index.programs.html. More information about the effect sizes can be found in articles by Algina and colleagues (Algina, Keselman, & Penfield, 2005a; Algina et al., 2005b; Algina, Keselman, & Penfield, 2006a). Additional software programs for conducting robust and rank-based analyses are discussed in a paper by Erceg-Hurn and Mirosevich (2008).
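To give a sense of what a robust analysis involves before turning to these resources, the following is a minimal sketch in base R (not Wilcox's own functions, and using simulated data purely for illustration). It computes 20% trimmed means for two groups and obtains a percentile bootstrap confidence interval for their difference, which is the general logic behind many of the robust procedures described in this chapter.

# Minimal sketch in base R; data are simulated for illustration only.
set.seed(123)
group1 <- rnorm(30)                       # roughly normal scores
group2 <- exp(rnorm(30)) + 0.5            # skewed scores with some extreme values

# 20% trimmed mean for each group
tm1 <- mean(group1, trim = 0.2)
tm2 <- mean(group2, trim = 0.2)

# Percentile bootstrap confidence interval for the difference in trimmed means
B <- 2000
boot.diff <- numeric(B)
for (b in 1:B) {
  boot.diff[b] <- mean(sample(group1, replace = TRUE), trim = 0.2) -
                  mean(sample(group2, replace = TRUE), trim = 0.2)
}
quantile(boot.diff, c(0.025, 0.975))      # 95% percentile bootstrap interval

Wilcox's R functions implement refined versions of this idea (with appropriate standard errors, p-values, and small-sample adjustments), so this sketch is intended only to convey the underlying logic.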

Criticisms of Robust Methods

As we have illustrated, using robust methods can enhance our ability to identify relationships between variables and to characterize how groups differ. Although there are many benefits to using robust methods, they do have their critics. One common criticism is that leaders in the field have not focused enough on developing easy-to-use software, and that this has made robust analyses inaccessible to applied researchers such as psychologists (Hettmansperger, 1998; Stromberg, 2004). There is truth to this criticism—for many years there was a paucity of software available. Things have improved considerably in recent years with the advent of robust packages that can be used in R and S-Plus. There is still a need for more robust statistics software—particularly software that is more user-friendly—and for robust methods to be included in widely used software packages such as SPSS. Another criticism of robust methods is that the field suffers from a "curse of abundance" (Hettmansperger, 1998; Stromberg, 2004). Researchers are confronted with a huge array of choices when they want to conduct a robust inferential procedure. What robust measure of central tendency should be used? If using trimmed means, what percentage should be trimmed? If using an M-estimator, which type should be used? Should bootstrapping be used? If so, what bootstrap method? Navigating these choices can be confusing for applied researchers. Tukey (1979) argued that it is not so important what robust method researchers use, as long as they use one. Some methodologists have tried to overcome the "curse of abundance" problem by proposing a single approach to robust data analysis that can be used in most research contexts when the goal is to compare groups using a measure of location (Keselman, Algina et al., 2008). This simplifies the process of conducting robust analyses for applied researchers. Books such as those by Wilcox (2012a,b) guide researchers through the array of choices, making it easier to decide which robust method to use in a particular situation. The principles outlined in the present chapter should also be helpful. Robust statistics have been criticized for being a "hard sell." For example, some applied researchers erroneously think that by using trimmed means, they are throwing away useful information (Erceg-Hurn & Mirosevich, 2008; Hettmansperger, 1998). In fact, all of the data are utilized in the process of computing a trimmed mean, because all of the scores must be ordered before the trimmed mean can be calculated. Also, it must be remembered that the purpose of such analyses is to focus on measures of central tendency—it is the middle portion of the distribution that is of interest. A measure of central tendency should not be biased by influential points in the tails of a distribution. This is not to say that influential outlying points are not worthy of attention, depending on the research context; they may well be the focus of alternative analyses or research questions. It is worth noting that there are robust methods that can be used as alternatives when researchers are concerned about trimming. For example, modern nonparametric analyses based on ranks can be used. Another alternative is to use techniques that compare groups using multiple quantiles (Wilcox & Erceg-Hurn, 2012a,b). This approach allows the high-scoring participants in one group to be compared to the high-scoring participants in another group, the low-scoring participants in one group to be compared to the low-scoring participants in another group, and so forth. Some authors claim robust methods are unnecessary and that a valid alternative is to simply delete outliers and proceed using standard classic parametric methods (Kornbrot, 1998). This suggestion is flawed for two reasons. First, outliers are often missed because researchers use problematic detection tools such as the 3 SDs from the mean rule. If outliers are missed, then they will exert an influence when standard methods are used. The second problem is that it is theoretically invalid to simply delete outliers and then use conventional statistics

(Wilcox, 1998a). Once observations are deleted, the remaining observations are no longer independent, violating a key assumption that underlies classic parametric methods. A practical implication of this is that the wrong estimate of the standard error is used. Wilcox and Keselman (2003) have provided an example in which deleting outliers and then applying standard methods can lead to the standard error of a measure of location being underestimated. This can result in misleading estimates of confidence intervals, as standard errors are involved in their computation. Ignoring the dependency introduced by deleting outliers can also lead to the calculation of erroneous p-values. Robust methods take into account the dependency of observations following trimming, so that accurate standard errors are computed.
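To make this point concrete, the following base R sketch (our own illustration with simulated data, not the Wilcox and Keselman example) contrasts the "delete outliers, then proceed as usual" strategy with a bootstrap estimate of the standard error of a 20% trimmed mean computed on the full sample. The naive strategy treats the cleaned scores as if they were an ordinary random sample, which is exactly the assumption that deleting observations violates.

# Illustrative sketch in base R; data are simulated.
set.seed(42)
x <- c(rnorm(40), rnorm(5, mean = 8))       # 40 typical scores plus 5 extreme ones

# Naive approach: delete values more than 3 SDs from the mean,
# then apply the usual standard error formula to what remains.
keep <- abs(x - mean(x)) <= 3 * sd(x)
x.clean <- x[keep]
se.naive <- sd(x.clean) / sqrt(length(x.clean))

# Bootstrap estimate of the standard error of the 20% trimmed mean,
# computed on the full sample.
B <- 2000
tm.boot <- replicate(B, mean(sample(x, replace = TRUE), trim = 0.2))
se.trim <- sd(tm.boot)

c(naive = se.naive, trimmed.bootstrap = se.trim)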

Conclusion

The analysis of data in psychology is dominated by the use of outdated, classic parametric methods. These techniques suffer from low power and distorted rates of type I error when the assumptions underlying them are sufficiently violated, which occurs frequently in practice. Modern robust statistical methods can overcome these problems. They are designed to work well both when normal theory assumptions are satisfied and when they are not. We provided an introduction to robust measures of location and scale, bootstrapping, outlier detection, significance testing, and other procedures that have practical value to applied researchers. Psychological research would benefit from the greater application of robust data analyses.

Future Directions

Where to next for robust statistical methods in psychology? The biggest challenge is not the development of new techniques but the dissemination of existing ones. Most psychologists remain unaware of the existence of robust methods and their benefits. There is a need for robust methods to be incorporated into the statistics curriculum in psychology so that the next generation of psychologists has a better understanding of the limitations of classic methods and the goals of robust ones. There is a need for more journal articles that demystify robust methods and equip applied researchers with practical tools that they can apply to their own analyses. There have been some promising developments on the dissemination front in recent years—for example, articles encouraging psychologists to use robust methods have appeared in flagship journals of the American Psychological Association and the Association for Psychological Science (e.g., Erceg-Hurn & Mirosevich, 2008; Keselman et al., 2004). Robust methods are slowly making their way into textbooks aimed at undergraduate and postgraduate psychology students (Field, 2009). Such progress is promising, but more needs to be done. Another challenge is software. It is easier today than at any time in the past to conduct robust analyses, thanks to freeware such as R. However, many psychologists find R's command-line interface challenging. We have encountered colleagues who appreciate the benefits of robust methods and are interested in using them to analyze their data but who do not have the time or willingness to learn programs such as R. Probably the single most important development that would lead to greater use of robust methods in psychology (and other disciplines) would be an intuitive, easy-to-use software program with a point-and-click interface. Most psychologists rely on SPSS to conduct analyses—if robust statistics are to become "mainstream," then a program that is just as easy to use must be developed. Some interesting challenges will arise if robust methods become more widely used by psychologists. For example, meta-analysis is a very popular method for aggregating effect sizes across multiple studies. Current meta-analytic methods are rooted in normal theory. An increased use of robust statistics (and, therefore, robust effect sizes) poses a challenge to meta-analysts. What is the best way to meta-analyze a series of robust effect sizes? Is there a valid way to pool robust and classic effect sizes across studies? We look forward to future research addressing such questions and to the greater use of robust statistical methods by psychologists.

Note

1. A discussion of the role of sampling distributions in inferential statistics is beyond the scope of this chapter. Nontechnical explanations can be found in other books and articles (e.g., Guthrie, 2001; Howell, 2008).

Author Note

1. Erceg-Hurn, School of Psychology, University of Western Australia. 2. Wilcox, Department of Psychology, University of Southern California. 3. Keselman, Department of Psychology, University of Manitoba. 4. David Erceg-Hurn, School of Psychology M304, 35 Stirling Hwy, Crawley, WA, 6009, Australia. Email: [email protected]. Phone +618644 3245.

References Akritas, M. G., Arnold, S. F., & Brunner, E. (1997). Nonparametric hypotheses and rank statistics for unbalanced factorial designs. Journal of the American Statistical Association, 92, 258–265. Algina, J., Keselman, H. J., & Penfield, R. D. (2005a). An alternative to Cohen’s standardized mean difference effect size: A robust parameter and confidence interval in the two independent groups case. Psychological Methods, 10, 317–328. Algina, J., Keselman, H. J., & Penfield, R. D. (2005b). Effect sizes and their intervals: The two-level repeated measures case. Educational and Psychological Measurement, 65, 241–258. doi: 10.1177/0013164404268675 Algina, J., Keselman, H. J., & Penfield, R. D. (2006a). Confidence interval coverage for Cohen’s effect size statistic. Educational and Psychological Measurement, 66, 945–960. doi: 10.1177/0013164406288161 Algina, J., Keselman, H. J., & Penfield, R. D. (2006b). Confidence intervals for an effect size when variances are not equal. Journal of Modern Applied Statistical Methods, 5, 2–13. Bentler, P. M., Satorra, A., & Yuan, K. H. (2009). Smoking and cancers: Case-robust analysis of a classic data set. Structural Equation Modeling, 16, 382–390. Blair, R. C., & Higgins, J. J. (1980). A comparison of the power of Wilcoxon’s rank-sum statistics to that of Student’s t statistic under various nonnormal distributions. Journal of Educational Statistics, 5, 309–335. Blair, R. C., & Higgins, J. J. (1985). Comparison of the power of the paired samples t test to that of Wilcoxon’s signed-ranks test under various population shapes. Psychological Bulletin, 97, 119–128. Bradley, J. V. (1977). A common situation conductive to bizarre distribution shapes. The American Statistician, 31, 147–150. Bradley, J. V. (1978). Robustness? British Journal of Mathematical and Statistical Psychology, 31, 144–152. Bradley, J. V. (1980). Nonrobustness in Z, t, and F tests at large sample sizes. Bulletin of the Psychonomic Society, 16, 333–336. Brunner, E., Domhof, S., & Langer, F. (2002). Nonparametric analysis of longitudinal data in factorial experiments. New York: Wiley. Chernick, M. (1999). Bootstrap methods: A practitioner’s guide. New York: Wiley. Christmann, A., & Van Aelst, S. (2006). Robust estimation of Cronbach’s alpha. Journal of Multivariate Analysis, 97, 1660– 1674. doi: 10.1016/j.jmva.2005.05.012 Cliff, N. (1996). Ordinal methods for behavioral data analysis. Mahwah, NJ: Hobart Press. Cumming, G., Fidler, F., Leonard, M., Kalinowski, P., Christiansen, A., Kleinig, A., et al. (2007). Statistical reform in psychology: Is anything changing? Psychological Science, 18, 230–232. Efron, B. (1979). Bootstrap methods: Another look at the jackknife. The Annals of Statistics, 7, 1–26. Erceg-Hurn, D., & Mirosevich, V. M. (2008). Modern robust statistical methods: An easy way to maximize the power and accuracy of your research. American Psychologist, 63, 591– 601.

Erceg-Hurn, D., & Steed, L. G. (2011). Graphic cigarette warnings and psychological reactance. Journal of Applied Social Psychology, 41, 219–237. Field, A. (2009). Discovering statistics using SPSS (3rd ed.). London: Sage Publications. Gartside, P. (2001). Letters to the editor. The American Statistican, 55, 171–174. Geary, R. C. (1947). Testing for normality. Biometrika, 34, 209– 242. Glass, G. V., & Hopkins, K. D. (1996). Statistical methods in education & psychology (3rd ed.). Boston: Allyn & Bacon. Grenard, J. L., Ames, S. L., Wiers, R. W., Thush, C., Sussman, S., & Stacy, A. W. (2008). Working memory capacity moderates the predictive effects of drug-related associations on substance use. Psychology of Addictive Behaviors, 22, 426–432. Grissom, R. J. (2000). Heterogeneity of variance in clinical data. Journal of Consulting and Clinical Psychology, 68, 155–165. Guthrie, A. (2001). Using bootstrap methods with popular statistics programs. Paper presented at the Annual Meeting of the Southwest Educational Research Association, New Orleans, LA. Retrieved www.eric.ed.gov/ERICWebPortal/ recordDetail?accno=ED450149. Accessed July 5, 2012. Hampel, F. (1968). Contributions to the theory of robust estimation. University of California, Berkley. Hampel, F. R. (1985). The breakdown points of the mean combined with some rejection rules. Technometrics, 27, 95–107. Harwell, M. R., Rubinstein, E. N., Hayes, W. S., & Olds, C. C. (1992). Summarizing monte carlo results in methodological research: The one- and two-factor fixed effects anova cases. Journal of Educational Statistics, 17, 315–339. Hettmansperger, T. P. (1998). The goals and strategies of robust methods: Comments. British Journal of Mathematical & Statistical Psychology, 51, 41. Hettmansperger, T. P., & McKean, J. W. (1998). Robust nonparametric statistical methods. London: Arnold Publishing. Hill, M., & Dixon, W. J. (1982). Robustness in real life: A study of clinical laboratory data. Biometrics, 38, 377–396. Ho, K. P., Hunt, C., & Li, S. (2008). Patterns of help-seeking behavior for anxiety disorders among the chinese speaking australian community Social Psychiatry and Psychiatric Epidemiology, 43, 872–877. doi: 10.1007/s00127-008-0387-0 Howell, D. (2008). Fundamentals of statistics for the behavioral sciences. Belmont, CA: Thomson-Wadsworth. Huber, P. (1964). Robust estimation of a location parameter. Annals of Mathematical Statistics, 35, 73–101. Huber, P., & Ronchetti, E. (2009). Robust statistics. New Jersey: John Wiley & Sons. Keselman, H. J., Algina, J., Lix, L. M., Wilcox, R. R., & Deering, K. (2008). A generally robust approach for testing hypotheses and setting confidence intervals for effect sizes. Psychological Methods, 13, 110–129. Keselman, H. J., Huberty, C. J., Lix, L. M., Olejnik, S., Cribbie, R. A., Donahue, B., et al. (1998). Statistical practices of educational researchers: An analysis of their anova, manova, and ancova analyses. Review of Educational Research, 68, 350– 386. Keselman, H. J., Othman, A. R., Wilcox, R. R., & Fradette, K. (2004). The new and improved two-sample t test. Psychological Science, 15, 47–51. doi: 10.1111/j.09637214.2004.01501008.x

Keselman, H. J., Wilcox, R. R., Algina, J., Othman, A. R., & Fradette, K. (2008). A comparative study of robust tests for spread: Asymmetric trimming strategies. British Journal of Mathematical and Statistical Psychology, 61, 235–253. Keselman, H. J., Wilcox, R. R., Lix, L. M., Algina, J., & Fradette, K. (2007). Adaptive robust estimation and testing. British Journal of Mathematical and Statistical Psychology, 60, 267– 293. Kornbrot, D. E. (1998). The goals and strategies of robust methods: Comment. British Journal of Mathematical & Statistical Psychology, 51, 43. Lim, S., & Melville, N. P. (2009). Robustness of structural equation modeling to distributional misspecification: Empirical evidence & research guidelines. Retrieved from http://ssrn.Com/abstract=1375251 Liu, Y., & Zumbo, B. D. (2007). The impact of outliers on cronbach’s coefficient alpha estimate of reliability: Visual analogue scales. Educational and Psychological Measurement, 67, 620– 634. doi: 10.1177/0013164406296976. Accessed July 5, 2012. Lix, L. M., & Keselman, H. J. (1998). To trim or not to trim: Tests of location equality under heteroscedasticity and nonnormality. Educational and Psychological Measurement, 58, 409–429. Lix, L. M., Keselman, J. C., & Keselman, H. J. (1996). Consequences of assumption violations revisited: A quantitative review of alternatives to the one-way analysis of variance “F” test. Review of Educational Research, 66, 579–619. Lunnenborg, C. (2000). Data analysis by resampling: Concepts and applications. Pacific Grove, CA: Duxbury. MacDonald, P. (1999). Power, Type I, and Type III error rates of parametric and nonparametric statistical tests. Journal of Experimental Education, 67, 367–379. Maronna, R., Martin, R., & Yohal, V. (2006). Robust statistics theory and methods. West Sussex, UK: John Wiley. Micceri, T. (1989). The unicorn, the normal curve, and other improbable creatures. Psychological Bulletin, 105, 156–166. Nester, M. (1996). An applied statistician’s creed. Applied Statistics, 45, 401–410. Osborne, J. (2002). Notes on the use of data transformations. Practical Assessment, Research & Evaluation, 8, Retrieved from http://pareonline.net/getvn.asp?v=8&n=6. Accessed July 5, 2012. Pison, G., Rousseeuw, P. J., Filzmoser, P., & Croux, C. (2003). Robust factor analysis. Journal of Multivariate Analysis, 84, 145–172. Ramsey, P. H. (1980). Exact type I error rates for robustness of Student’s t test with unequal variances. Journal of Educational Statistics, 5, 337–349. Rodgers, J. (1999). The bootstrap, the jacknife, and the randomization test: A sampling taxonomy. Multivariate Behavioral Research, 34, 441–456. Rousseeuw, P. (1990). Robust estimation and identifying outliers. In H. M. Wadsworth (Ed.), Handbook of statistical methods for engineers and scientists (pp. 16.11–16.24.). New York: McGraw-Hill. Rousseeuw, P., & Leroy, P. (2003). Robust regression and outlier detection. New Jersey: John Wiley. Sawilowsky, S. S. (1990). Nonparametric tests of interaction in experimental design. Review of Educational Research, 60, 91– 126.


Sawilowsky, S. S. (1998). The goals and strategies of robust methods: Comment. British Journal of Mathematical & Statistical Psychology, 51, 49. Sawilowsky, S. S., & Blair, R. C. (1992). A more realistic look at the robustness and type II error properties of the t test to departures from population normality. Psychological Bulletin, 111, 352–360. Schug, R. A., Raine, A., & Wilcox, R. R. (2007). Psychophysiological and behavioural characteristics of individuals comorbid for antisocial personality disorder and schizophrenia-spectrum personality disorder. British Journal of Psychiatry, 191, 408–414. Serlin, R. C., & Harwell, M. R. (2004). More powerful tests of predictor subsets in regression analysis under nonnormality. Psychological Methods, 9, 492–509. Smith, Z., & Wells, C. (October 18–20, 2006). Central limit theorem and sample size. Paper presented at the Northeastern Educational Research Association, Kerkonkson, New York. Retrieved from http://www.umass.edu/remp/Papers/Smith& Wells_NERA06.pdf. Stromberg, A. (2004). Why write statistical software? The case of robust statistical methods. Journal of Statistical Software, 10, Retrieved from http://www.jstatsoft.org/v10/i05/. Tukey, J. (1962). The future of data analysis. Annals of Mathematical Statistics, 33, 1–67. Tukey, J. (1979). Robust techniques for the user. In R. L. Launer & G. N. Wilkinson (Eds.), Robustness in statistics (pp. 103– 106). New York: Academic Press. Wells, C., & Hintze, J. (2007). Dealing with assumptions underlying statistical tests. Psychology in the Schools, 44, 495–502. Wilcox, R. R. (1998a). The goals and strategies of robust methods. British Journal of Mathematical & Statistical Psychology, 51, 1–39. Wilcox, R. R. (1998b). How many discoveries have been lost by ignoring modern statistical methods? American Psychologist, 53, 300–314. Wilcox, R. R. (2003). Applying contemporary statistical techniques. San Diego, CA: Academic Press. Wilcox, R. R. (2008). Robust principal components: A generalized variance perspective. Behavior Research Methods, 40, 102–108. Wilcox, R. R. (2010). Fundamentals of modern statistical methods (2nd ed.). New York: Springer. Wilcox, R. R. (2012a). Introduction to robust estimation and hypothesis testing (3rd ed.). San Diego, CA: Elsevier. Wilcox, R. R. (2012b). Modern statistics for the social and behavioral sciences: A practical introductions. New York: Chapman & Hall. Wilcox, R. R., Charlin, V. L., & Thompson, K. L. (1986). New monte carlo results on the robustness of the anova F, W and F* statistics. Communications in Statistics—Simulation and Computation, 15, 933–943. Wilcox, R. R., & Erceg-Hurn, D. M. (2012). Comparing two independent groups via the lower and upper quantiles. Manuscript under review. Wilcox, R. R., & Erceg-Hurn, D. M. (2012). Comparing two dependent groups via quantiles. Manuscript under review.


Wilcox, R. R., & Keselman, H. J. (2003). Modern robust data analysis methods: Measures of central tendency. Psychological Methods, 8, 254–274. Wilcox, R. R., & Tian, T. (in press). Measuring effect size: A robust heteroscedastic approach for two or more groups. Journal of Applied Statistics. Wu, P.-C. (2002). The central limit theorem and comparing means, trimmed means, one step m-estimators and modified one step m-estimators under non-normality. University of Southern California. Yuan, K. H., & Bentler, P. M. (1998). Robust mean and covariance structure analysis. British Journal of Mathematical & Statistical Psychology, 51, 63. Yuan, K. H., & Bentler, P. M. (2000). Robust mean and covariance structure analysis through iteratively reweighted least squares. Psychometrika, 65, 43–58. doi: 10.1007/BF02294185 Yuan, K. H., & Bentler, P. M. (2001). Effect of outliers on estimators and tests in covariance structure analysis. British Journal of Mathematical and Statistical Psychology, 54, 161– 175. Yuan, K. H., & Bentler, P. M. (2007). Robust procedures for structural equation modeling. In S.-Y. Lee (Ed.), Handbook of latent variable and related methods (pp. 367–398). New York: Elsevier. Yuan, K. H., Bentler, P. M., & Chan, W. (2004). Structural equation modeling with heavy tailed distributions. Psychometrika, 69, 421–436. doi: 10.1007/BF02295644 Yuan, K. H., Bentler, P. M., & Zhang, W. (2005). The effect of skewness and kurtosis on mean and covariance structure analysis: The univariate case and its multivariate implication. Sociological Methods Research, 34, 240–258. doi: 10.1177/0049124105280200 Yuan, K. H., Marshall, L. M., & Bentler, P. M. (2002). A unified approach to exploratory factor analysis with missing data, nonnormal data, and in the presence of outliers. Psychometrika, 67, 95–121. doi: 10.1007/BF02294711 Yuan, K. H., Marshall, L. M., & Weston, R. (2002). Crossvalidation by downweighting influential cases in structural equation modelling. British Journal of Mathematical and Statistical Psychology, 55, 125–143. Yuen, K. (1974). The two-sample trimmed t for unequal population variances. Biometrika, 61, 165–170. Zimmerman, D. W. (1998). Invalidation of parametric and nonparametric statistical tests by concurrent violation of two assumptions. Journal of Experimental Education, 67, 55–68. Zimmerman, D. W. (2000). Statistical significance levels of nonparametric tests biased by heterogeneous variances of treatment groups. Journal of General Psychology, 127, 354–364. Zu, J., & Yuan, K. H. (2010). Local influence and robust procedures for mediation analysis. Multivariate Behavioral Research, 45, 1–44.

CHAPTER 20

Bayesian Statistical Methods

David Kaplan and Sarah Depaoli

Abstract

This chapter provides a general overview of Bayesian statistical methods. Topics include the notion of probability from a Bayesian perspective, Bayesian inference and hypothesis testing, and Bayesian computation. Three examples are provided to demonstrate the utility of Bayesian methods: simple linear regression, multilevel regression, and confirmatory factor analysis. Throughout the chapter, references are made to the epistemological differences between Bayesian theory and classical (frequentist) theory.

Key Words: Bayesian statistical methods, Bayesian inference and hypothesis testing, Bayesian computation

Bayesian statistics has long been overlooked in the quantitative methods training of social scientists. Typically, the only introduction that a student might have to Bayesian ideas is a brief overview of Bayes' Theorem while studying probability in an introductory statistics class. There are two reasons for this. First, until recently, it was not feasible to conduct statistical modeling from a Bayesian perspective owing to its complexity and lack of available software. Second, Bayesian statistics challenges many of the assumptions underlying frequentist (classical) statistics and is, therefore, controversial. We will use the term frequentist to describe the paradigm of statistics commonly used today; it represents the counterpart to the Bayesian paradigm of statistics. Historically, however, Bayesian statistics predates frequentist statistics by about 150 years. Recently, there has been extraordinary growth in the development and application of Bayesian statistical methods, mostly because of developments of powerful statistical software tools that render the specification and estimation of complex models feasible from a Bayesian perspective. As a result, there have been scores of books written over the last 10 years, and at a variety of technical levels, that lead students and researchers through Bayesian theory and computation. For a technical treatment of Bayesian statistics, see, for example, Gelman, Carlin, Stern, and Rubin (2003). For a less technical treatment, see, for example, Hoff (2009). The scope of this chapter is, by necessity, limited because the field of Bayesian inference is remarkably wide ranging, and space limitations preclude a full development of Bayesian theory. Thus, the goal of the chapter will be to lay out the fundamental issues that separate Bayesian statistics from its frequentist counterpart and to provide a taste of its applications through specific examples. The organization of this chapter will cover (1) Bayesian probability; (2) Bayesian inference and hypothesis testing; (3) Bayesian computation; and (4) simple empirical examples of Bayesian linear regression, Bayesian multilevel modeling, and Bayesian confirmatory factor analysis. To support the pedagogical features of this chapter, the software code for each example is provided.

Bayesian Probability

Most students in the social and behavioral sciences were introduced to the axioms of probability by studying the properties of the coin toss or the dice roll. These studies address questions such as (1) What is the probability that the flip of a fair coin will return heads? and (2) What is the probability that the roll of two fair dice will return a value of seven? To answer these questions requires enumerating the possible outcomes and then counting the number of times the event could occur. The probabilities of interest are obtained by dividing the number of times the event occurred by the number of possible outcomes. But what of more complex situations, such as the famous "Monty Hall" problem? In this problem, named after the host of a popular old game show, a contestant is shown three doors, one of which has a desirable prize, whereas the other two have quite undesirable prizes. The contestant picks a door, but before Monty opens the door, he shows the contestant another door with an undesirable prize and asks the contestant whether he or she wants to stay with the chosen door or switch. To address this situation requires an understanding of the Kolmogorov axioms of probability (Kolmogorov, 1956) and the Renyi axioms of conditional probability (Renyi, 1970). These sets of axioms, although appearing long after Bayes' work, provide the theoretical foundation for Bayes' Theorem.

The Kolmogorov Axioms of Probability

Before motivating Bayes' Theorem, it is useful to remind ourselves of the axioms of probability that have formed the basis of frequentist statistics. These axioms of probability can be attributed to the work of Kolmogorov (1956). This particular set of axioms relates the notion of probability to the frequency of events over a large number of trials, and these axioms form the basis of the frequentist paradigm of statistics. Consider two events denoted A and B. To keep the example simple, consider these both to be the flip of a fair coin. Then the axioms of probability are as follows:

1. p(A) ≥ 0.
2. The probability of the sample space is 1.0.
3. Countable additivity: If A and B are mutually exclusive, then p(A or B) = p(A) + p(B). Or, more generally,

p\left(\bigcup_{j=1}^{\infty} A_j\right) = \sum_{j=1}^{\infty} p(A_j),   (1)

which states that the probability of the union of mutually exclusive events is simply the sum of their individual probabilities.

A number of other axioms of probability can be derived from these three basic axioms. Nevertheless, these three can be used to deal with the relatively easy case of the coin-flipping example mentioned above. For example, if we toss a fair coin an infinite number of times, then we expect it to land heads 50% of the time. Interestingly, this expectation is not based on having actually tossed the coin an infinite number of times. Rather, this expectation is a prior belief. Arguably, this is one example of how Bayesian thinking is automatically embedded in frequentist logic. This probability, and others like it, satisfies the first axiom, that probabilities are greater than or equal to 0. Second, over an infinite number of coin flips, the sum of the probabilities of all possible outcomes (in this case, heads and tails) is equal to one. Indeed, the possible outcomes represent the sample space, and the sum of probabilities over the sample space is one. Finally, assuming that one outcome precludes the occurrence of another outcome (e.g., rolling a 1 precludes the occurrence of rolling a 2), the probability of the event p(A or B) is the sum of the separate probabilities, that is, p(A or B) = p(A) + p(B). We may wish to add to these axioms the notion of independent events. If two events are independent, then the occurrence of one event does not influence the probability of the other. For example, with two coins A and B, the probability of A resulting in "heads" does not influence the result of a flip of B. Formally, we define independence as p(A and B) = p(A)p(B).
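Both the additivity rule and the definition of independence are easy to check by simulation. The short R sketch below (purely illustrative) estimates the relevant probabilities from a large number of simulated tosses of two fair coins and compares them with the values implied by the axioms.

set.seed(1)
n <- 100000
coinA <- sample(c("H", "T"), n, replace = TRUE)   # tosses of coin A
coinB <- sample(c("H", "T"), n, replace = TRUE)   # tosses of coin B

# Additivity for the mutually exclusive outcomes of a single coin:
# p(heads or tails) should equal p(heads) + p(tails) = 1.
mean(coinA == "H") + mean(coinA == "T")

# Independence of the two coins:
# p(A = heads and B = heads) should be close to p(A = heads) * p(B = heads) = 0.25.
mean(coinA == "H" & coinB == "H")
mean(coinA == "H") * mean(coinB == "H")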

The Renyi Axioms of Probability

In the previous paragraph, we discussed quite simple cases, particularly the case of independent events. Consider now the case of non-independent events. In this situation, the Kolmogorov axioms do not take into account how probabilities might be affected by conditioning on the dependency of events. An extension of the Kolmogorov system that accounts for conditioning was put forth by Renyi (1970). As a motivating example, consider the case of observing the presence or absence of coronary heart disease (C) and the behavior of smoking or not smoking (S). We may be able to argue, on the basis of prior experience and medical research, that C is not independent of S—that is, the joint probability p(C, S) ≠ p(C)p(S). To handle this problem, we define the conditional probability of C "given" S (i.e., p(C|S)) as

p(C|S) = \frac{p(C, S)}{p(S)}.   (2)

The denominator on the right-hand side of Equation 2 shows that the sample space associated with p(C, S) is reduced by knowing S. Notice that if C and S were independent, then

p(C|S) = \frac{p(C, S)}{p(S)} = \frac{p(C)p(S)}{p(S)} = p(C),   (3)

which states that knowing S tells us nothing about C. Following Press (2003), Renyi's axioms can be defined, with respect to our coronary heart disease example, as follows:

1. For any events A and B, we have p(A|B) ≥ 0 and p(B|B) = 1.
2. For disjoint events A_j and some event B,

p\left(\bigcup_{j=1}^{\infty} A_j \mid B\right) = \sum_{j=1}^{\infty} p(A_j|B).

3. For every collection of events (A, B, C), with B a subset of C (i.e., B ⊆ C) and 0 < p(B|C), we have

p(A|B) = \frac{p(A \cap B|C)}{p(B|C)}.

Renyi's third axiom allows one to obtain the conditional probability of A given B, while conditioning on yet a third variable C. An important feature of Renyi's axioms is that they cover the Kolmogorov axioms as a special case. Moreover, they are general enough to encompass both frequentist interpretations of probability as well as personal belief interpretations of probability (Ramsey, 1926; Savage, 1954; de Finetti, 1974). The personal belief interpretation of probability is central to the subjectivist view of probability embedded in Bayesian statistics. See Press (2003) for a more detailed discussion.
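To make the definition of conditional probability concrete, the following R sketch computes p(C|S) from a hypothetical joint probability table for coronary heart disease and smoking. The four cell probabilities are invented solely for illustration and carry no empirical meaning.

# Hypothetical joint probabilities p(C, S); rows index disease status, columns smoking status.
joint <- matrix(c(0.10, 0.05,    # p(C, S),     p(C, not S)
                  0.30, 0.55),   # p(not C, S), p(not C, not S)
                nrow = 2, byrow = TRUE,
                dimnames = list(c("CHD", "no CHD"), c("smoker", "nonsmoker")))

p.S  <- sum(joint[, "smoker"])   # marginal probability of smoking
p.CS <- joint["CHD", "smoker"]   # joint probability p(C, S)
p.CS / p.S                       # conditional probability p(C|S), as in Equation 2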

Bayes' Theorem

An interesting feature of Equation 2 underpins Bayes' Theorem. Specifically, joint probabilities are symmetric—namely, p(C, S) = p(S, C). Therefore, we can also express the conditional probability of smoking, S, given observing coronary heart disease, C, as

p(S|C) = \frac{p(S, C)}{p(C)}.   (4)

Because of the symmetry of the joint probabilities, we obtain

p(C|S)p(S) = p(S|C)p(C).   (5)

Therefore,

p(C|S) = \frac{p(S|C)p(C)}{p(S)}.   (6)

Equation 6 is Bayes' Theorem. In words, Bayes' Theorem states that the conditional probability of an individual having coronary heart disease given that he smokes is equal to the probability that he smokes given that he has coronary heart disease, times the probability of having coronary heart disease, divided by the probability of smoking. The denominator of Equation 6, p(S), is the marginal probability of smoking. This can be considered the probability of smoking across individuals with and without coronary heart disease, which we write as p(S) = p(S|C) + p(S|¬C).1 Because this marginal probability is obtained over all possible outcomes of coronary heart disease, it does not carry information relevant to the conditional probability. In fact, p(S) can be considered a normalizing factor, which ensures that the probability sums to one. Thus, it is not uncommon to see Bayes' Theorem written as

p(C|S) \propto p(S|C)p(C).   (7)

Equation 7 states that the probability of observing coronary heart disease given smoking is proportional to the probability of smoking given coronary heart disease times the marginal probability of coronary heart disease. Let's return to the Monty Hall problem to demonstrate the complexities of conditional probability and how a Bayesian perspective can be helpful. At the start of the game, it is assumed that there is one desirable prize and that the probability that the desirable prize is behind any of the three doors is 1-in-3. Once a door is picked, Monty Hall shows the contestant a door with an undesirable prize and asks the contestant if he or she would like to switch from the door he or she originally chose. It is important to note that Monty will not show the contestant the door with the desirable prize. Also, we assume that because the remaining doors have undesirable prizes, the door Monty opens is chosen basically at random; given that there are two doors remaining in this three-door problem, the probability attached to each is 1/2. Thus, Monty's knowledge of where the prize is located plays a crucial role in this problem. With the following information in hand, we can obtain the necessary probabilities to apply Bayes' Theorem. Assume the contestant picks door A. Then, the necessary conditional probabilities are

1. p(Monty opens door B | prize is behind A) = 1/2.
2. p(Monty opens door B | prize is behind B) = 0.
3. p(Monty opens door B | prize is behind C) = 1.

The final probability results from the fact that there is only one door for Monty to choose given that the contestant chose door A and the prize is behind door C. Let M represent Monty opening door B. Then, the joint probabilities can be obtained as follows:

p(M, A) = p(M|A)p(A) = \frac{1}{2} \times \frac{1}{3} = \frac{1}{6},
p(M, B) = p(M|B)p(B) = 0 \times \frac{1}{3} = 0, and
p(M, C) = p(M|C)p(C) = 1 \times \frac{1}{3} = \frac{1}{3}.

Before applying Bayes' Theorem, note that we have to obtain the marginal probability of Monty opening door B. This is

p(M) = p(M, A) + p(M, B) + p(M, C) = \frac{1}{6} + 0 + \frac{1}{3} = \frac{1}{2}.

Finally, we can now apply Bayes' Theorem to obtain the probabilities of the prize lying behind door A or door C:

p(A|M) = \frac{p(M|A)p(A)}{p(M)} = \frac{\frac{1}{2} \times \frac{1}{3}}{\frac{1}{2}} = \frac{1}{3},

p(C|M) = \frac{p(M|C)p(C)}{p(M)} = \frac{1 \times \frac{1}{3}}{\frac{1}{2}} = \frac{2}{3}.

Thus, from Bayes’ Theorem, the best strategy on the part of the contestant is to switch doors.
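The same answer can be obtained by brute-force simulation. The R sketch below (ours, for illustration only) plays the game many times under the rules described above and estimates the probability of winning under the "stay" and "switch" strategies; the estimates settle near 1/3 and 2/3, in agreement with Bayes' Theorem.

set.seed(2013)
n.games <- 100000
doors <- c("A", "B", "C")
wins.stay <- wins.switch <- 0

for (i in 1:n.games) {
  prize  <- sample(doors, 1)                      # door hiding the prize
  choice <- "A"                                   # the contestant always picks door A
  # Monty opens a door that is neither the contestant's choice nor the prize
  opened <- sample(setdiff(doors, c(choice, prize)), 1)
  switched <- setdiff(doors, c(choice, opened))   # the remaining unopened door
  wins.stay   <- wins.stay   + (prize == choice)
  wins.switch <- wins.switch + (prize == switched)
}

c(stay = wins.stay / n.games, switch = wins.switch / n.games)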

Bayesian Statistical Inference

The material presented thus far has concerned Bayesian probability. The goal of this chapter is to present the role of Bayes' Theorem as it pertains to statistical inference. Setting the foundations of Bayesian statistical inference provides the framework for application to a variety of statistical models commonly employed in social and behavioral science research. To begin, denote by Y a random variable that takes on a realized value y. For example, a person's socioeconomic status could be considered a random variable taking on a very large set of possible values. Once the person identifies his/her socioeconomic status, the random variable Y is now realized as y. In a sense, Y is unobserved–it is the probability model that we wish to understand from the actual data values y. Next, denote by θ a parameter that we believe characterizes the probability model of interest. The parameter θ can be a scalar (i.e., a single parameter), such as the mean or the variance of a distribution, or it can be vector-valued (i.e., a collection of parameters), such as the parameters of a factor analysis model. To avoid too much notational complexity, for now we will use θ to represent either scalar- or vector-valued parameters, where the difference will be revealed by the context. Of importance to this chapter, θ could represent the parameters of an underlying hypothesized model–such as a regression model or structural equation model. We are concerned with determining the probability of observing y given the unknown parameters θ, which we write as p(y|θ). Equivalently, we are concerned with obtaining estimates of the population parameters given the data, expressed as the "likelihood" and formally denoted as L(θ|y). Often we work with the log-likelihood, written as l(θ|y). The key difference between Bayesian statistical inference and frequentist statistical inference concerns the nature of the unknown parameters θ. In the frequentist tradition, the assumption is that θ is unknown but fixed. In Bayesian statistical inference, θ is considered random, possessing a probability distribution that reflects our uncertainty about the true value of θ. Because both the observed data y and the parameters θ are assumed random, we can model the joint probability of the parameters and the data as a function of the conditional density of the data given the parameters, and the prior distribution of the parameters. More formally,

p(\theta, y) = p(y|\theta)p(\theta).   (8)

Following Bayes' Theorem described earlier, we obtain the following:

p(\theta|y) = \frac{p(\theta, y)}{p(y)} = \frac{p(y|\theta)p(\theta)}{p(y)},   (9)

where p(θ|y) is referred to as the posterior distribution of the parameters θ given the observed data y. Thus, from Equation 9, the posterior distribution of θ given y is equal to the data distribution p(y|θ) times the prior distribution of the parameters p(θ), normalized by p(y) so that the posterior distribution sums (or integrates) to one. For discrete variables,

p(y) = \sum_{\theta} p(y|\theta)p(\theta),   (10)

and for continuous variables,

p(y) = \int_{\theta} p(y|\theta)p(\theta)\,d\theta.   (11)

Note that the denominator in Equation 9 does not involve the model parameters, so we can omit the term and obtain the unnormalized posterior density

p(\theta|y) \propto p(y|\theta)p(\theta).   (12)

Consider the data density p(y|θ) on the right-hand side of Equation 12. When expressed in terms of the unknown parameters θ for fixed values of y, this term is the likelihood L(θ|y), which we defined earlier. Thus, Equation 12 can be rewritten as

p(\theta|y) \propto L(\theta|y)p(\theta).   (13)

Equation 12 (or Equation 13) represents the core of Bayesian statistical inference and is what separates Bayesian statistics from frequentist statistics. Specifically, Equation 13 states that our uncertainty regarding the parameters of our model, as expressed by the prior density p(θ ), is weighted by the actual data p(y|θ) (or equivalently, L(θ|y)), yielding an updated estimate of our uncertainty, as expressed in the posterior density p(θ|y).
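Equation 13 can be made concrete with a simple discrete ("grid") approximation, sketched below in R (our illustration, not the authors' code). Here θ is the probability of a correct response, the prior is flat over a grid of candidate values, and the likelihood comes from a hypothetical observation of 7 correct answers on a 10-item test; multiplying the two and renormalizing yields the posterior distribution.

theta <- seq(0.005, 0.995, by = 0.005)       # grid of candidate parameter values
prior <- rep(1, length(theta))               # flat prior (up to a constant)
prior <- prior / sum(prior)

y <- 7; n <- 10                              # hypothetical data: 7 of 10 items correct
likelihood <- dbinom(y, size = n, prob = theta)

posterior <- likelihood * prior              # Equation 13: likelihood times prior ...
posterior <- posterior / sum(posterior)      # ... normalized so that it sums to one

theta[which.max(posterior)]                  # posterior mode, close to 7/10
sum(theta * posterior)                       # posterior mean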

The Nature of the Likelihood

Equation 13 states that Bayes' Theorem can be written as the product of the likelihood of the unknown parameters for fixed values of the data and the prior distribution of the model parameters. In this section, we consider two common statistical distributions and their likelihoods before moving on to discuss prior distributions. Specifically, we will consider the binomial distribution and the normal distribution. Before beginning, however, it is necessary to discuss the assumption of exchangeability. Exchangeability arises from de Finetti's Theorem (de Finetti, 1974) and implies that the subscripts of a vector of data (e.g., y_1, y_2, . . . , y_n) do not carry information that is relevant to describing the probability distribution of the data. In other words, the joint distribution of the data, f(y_1, y_2, . . . , y_n), is invariant to permutations of the subscripts.2 As a simple example of exchangeability, consider a vector of responses to a 10-item test where a correct response is coded "1" and an incorrect response is coded "0". Exchangeability implies that only the total number of correct responses matters—not the location of those correct responses in the vector. Exchangeability is a subtle assumption insofar as it means that we believe that there is a parameter θ that generates the observed data via a statistical model and that we can describe that parameter without reference to the particular data at hand (Jackman, 2009). As an example, consider the observed responses on an IQ test. The fundamental idea behind statistical inference generally is that the observed responses on an IQ test are assumed to be generated from a population distribution (e.g., the normal distribution) characterized by a parameter θ (e.g., the population mean). As Jackman (2009) has noted, the fact that we can describe θ without reference to a particular set of IQ data is, in fact, what is implied by the idea of a prior distribution. Indeed, as Jackman noted, "the existence of a prior distribution over a parameter is a result of de Finetti's Representation Theorem, rather than an assumption" (p. 40, italics Jackman's). It is important to note that exchangeability is weaker than the statistical assumption of independence. In the case of two events—say A and B—independence implies that p(A|B) = p(A). If these two events are independent, then they are exchangeable; however, exchangeability does not imply independence.

Example 1: the binomial probability model

First, consider the number of correct answers on a test of length n. Each item on the test represents a "Bernoulli trial" with outcomes coded 0 = wrong and 1 = right. The natural probability model for data arising from a sequence of n Bernoulli trials is the binomial sampling model. Under the assumption of exchangeability—meaning the indexes 1, . . . , n provide no relevant information—we can summarize the data by the total number of successes, y. Letting θ be the proportion of correct responses in the population, the binomial sampling model can be written as

p(y|\theta) = \mathrm{Bin}(y|n, \theta) = \binom{n}{y}\theta^{y}(1 - \theta)^{(n-y)},   (14)

where \binom{n}{y} is read as "n choose y" and gives the number of ways in which y correct responses can be obtained from an n-item test. The symbol Bin is shorthand for the binomial density function.

Example 2: the normal sampling model

The likelihood function for the parameters of the simple normal distribution can be written as

f(y|\mu, \sigma^2) = \frac{1}{\sqrt{2\pi}\,\sigma}\exp\left(-\frac{(y - \mu)^2}{2\sigma^2}\right).   (15)

Under the assumption of independent observations, we can write Equation 15 as

f(y_1, y_2, \ldots, y_n|\mu, \sigma^2) = \prod_{i=1}^{n} f(y_i|\mu, \sigma^2)
   = \left(\frac{1}{2\pi\sigma^2}\right)^{n/2}\exp\left(-\frac{\sum_{i}(y_i - \mu)^2}{2\sigma^2}\right)
   = L(\theta|y),   (16)

where θ = (μ, σ).
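The likelihood in Equation 16 is straightforward to evaluate numerically. The following R sketch (simulated data, for illustration only) computes the normal log-likelihood over a grid of candidate values of μ, holding σ fixed at its sample estimate, and locates the value of μ that maximizes it.

set.seed(7)
y <- rnorm(50, mean = 100, sd = 15)          # simulated test scores

sigma <- sd(y)                               # treat sigma as known, for simplicity
mu.grid <- seq(90, 110, by = 0.1)            # candidate values of mu

# Log-likelihood for each candidate mu: the sum of log normal densities (Equation 16)
loglik <- sapply(mu.grid, function(m) sum(dnorm(y, mean = m, sd = sigma, log = TRUE)))

mu.grid[which.max(loglik)]                   # maximizing value, close to the sample mean
mean(y)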

The Nature of the Prior Distribution

It is useful to remind ourselves of the reason why we specify a prior distribution on the parameters. The key philosophical reason concerns our view that progress in science generally comes about by learning from previous research findings and incorporating information from these findings into our present studies. Upon reflection, it seems obvious that no study is conducted in the complete absence of previous research. From experimental designs to path diagrams, the information gleaned from previous research is almost always incorporated in our choice of designs, variables to be measured, or conceptual diagrams to be drawn. Researchers who postulate a directional hypothesis for an effect are almost certainly using prior information about the direction that an estimate must take. Bayesian statistical inference, therefore, simply requires that our prior beliefs be made explicit, but then moderates our prior beliefs by the actual data in hand. Moderation of our prior beliefs by the data in hand is the key meaning behind Equation 12. But how do we choose a prior? The general approach to considering the choice of a prior is based on how much information we believe we have prior to the data collection and how accurate we believe that information to be (Lynch, 2007). This issue has also been discussed by Leamer (1983), who orders priors on the basis of degree of confidence. Leamer's hierarchy of confidence is as follows: truths (e.g., axioms) > facts (data) > opinions (e.g., expert judgement) > conventions (e.g., pre-set alpha levels).

An interesting feature of this hierarchy, as noted by Leamer, concerns the inherent lack of "objectivity" in such choices as pre-set alpha levels, or any of a number of assumptions made in linear regression-based models. In describing the "whimsical" nature of statistical inference, Leamer goes on to argue that the problem should be to articulate exactly where a given investigation is located on this hierarchy. The strength of Bayesian inference lies precisely in its ability to incorporate existing knowledge into statistical specifications.

objective priors

A very important discussion regarding general types of prior distributions can be found in Press (2003). In his book, Press distinguishes between objective versus subjective prior distributions. The notion of an objective prior relates to having very little information regarding the process that generated the data prior to the data being collected.

Public Policy Prior

One type of objective prior discussed by Press (2003) is the public policy prior. The public policy prior concerns reporting the results of an experiment or study to the public in a way that contains as little of the researcher's subjective judgment as possible. To take an example from education, suppose one is interested in a policy to reduce class size because it is viewed as being related to academic achievement—smaller class sizes being associated with higher academic achievement, particularly for low-income students. Assume, for this example, that based on previous research, the investigator has a sense of how much student achievement will increase (based on a standardized test) for a given reduction in class size. From the standpoint of educational policy, the results reported to stakeholders should not depend on the prior beliefs of an individual researcher. In this case, the researcher may decide to use a vague prior reflecting an unwillingness to report an effect of reduced class size that is based on a specific prior belief.3

Non-informative Prior

In some cases we may not be in possession of enough prior information to aid in drawing posterior inferences. From a Bayesian perspective, this lack of information is still important to consider and incorporate into our statistical specifications. In other words, it is equally important to quantify our ignorance as it is to quantify our cumulative understanding of a problem at hand.

The standard approach to quantifying our ignorance is to incorporate a non-informative prior into our specification. Non-informative priors are also referred to as vague or diffuse priors. Perhaps the most sensible non-informative prior distribution to use in this case is the uniform distribution over some sensible range of values. Care must be taken in the choice of the range of values over the uniform distribution. Specifically, a Uniform[−∞, ∞] prior is an improper prior distribution insofar as it does not integrate to 1.0, as required of probability distributions.

Jeffreys' Prior

A problem with the uniform prior distribution is that it is not invariant to simple transformations. In fact, a transformation of a uniform prior can result in a prior that is not uniform and will end up favoring some values more than others. As pointed out by Gill (2002), the invariance problem associated with uniform priors, and indeed the use of uniform priors generally, had been greeted with extreme skepticism by many early statisticians and used as the foundation of major critiques of Bayesian statistics. Despite the many criticisms against the uniform prior, its use dominates applied Bayesian work. Justification for the use of the uniform prior has been given by Bauwens, Lubrano, and Richard (2003), who have pointed out that (1) the effect of the uniform prior tends to diminish with increasing sample size; (2) the uniform prior is useful when models contain nuisance parameters, such as the variance of the normal distribution when the mean is of interest, as they will be integrated out anyway; and (3) the uniform distribution is the limit of certain conjugate distributions. In Bayesian statistics, conjugate distributions are those that, when multiplied by the likelihood via Bayes' Theorem, yield posterior distributions in the same distributional family as the prior distribution. In specifically addressing the invariance problem associated with the uniform distribution, Jeffreys (1961) proposed a general approach that yields a prior that is invariant under transformations. The central idea is that the subjective beliefs contained in the specification of the prior distribution of a parameter θ should not be lost when there is a one-to-one transformation from θ to another parameter, say φ. More specifically, using transformation-of-variables calculus, the prior distribution p(φ) will be equivalent to p(θ) when obtained as

p(\phi) = p(\theta)\left|\frac{d\theta}{d\phi}\right|.   (17)

On the basis of the relationship in Equation 17, Jeffreys (1961) developed a non-informative prior distribution that is invariant under transformations, written as p(θ ) ∝ [I (θ)]1/2 ,

(18)

where I(θ) is the Fisher information matrix for θ. Jeffreys' prior is obtained as follows. Following Gelman et al. (2003), let f(x|θ) be the likelihood for θ and write its associated Fisher information matrix as

I(\theta) = -E_{x|\theta}\left[\frac{\partial^{2} \log f(x|\theta)}{\partial\theta^{2}}\right].   (19)

Next, we write the Fisher information matrix for φ as

I(\phi) = -E_{x|\phi}\left[\frac{\partial^{2} \log f(x|\phi)}{\partial\phi^{2}}\right].   (20)

From the change-of-variables expression in Equation 17, we can rewrite Equation 20 as

I(\phi) = -E_{x|\theta}\left[\frac{\partial^{2} \log f(x|\theta)}{\partial\theta^{2}}\right]\left|\frac{d\theta}{d\phi}\right|^{2} = I(\theta)\left|\frac{d\theta}{d\phi}\right|^{2}.   (21)

Therefore,

[I(\phi)]^{1/2} = [I(\theta)]^{1/2}\left|\frac{d\theta}{d\phi}\right|,   (22)

from which we obtain the relationship to Equation 18. The Jeffreys prior can also be extended to a vector of model parameters and thus is applicable to regression models and their extensions (see Gill, 2002).

Press (2003) then goes on to weigh the advantages and disadvantages of objective priors. Following Press (2003), in terms of advantages:

1. Objective priors can be used as benchmarks against which choices of other priors can be compared.
2. Objective priors reflect the view that little information is available about the process that generated the data.
3. There are cases in which the results of a Bayesian analysis with an objective prior provide results equivalent to those based on a frequentist analysis, although there are philosophical differences in interpretation that we allude to later in the chapter.


4. Objective priors are sensible public policy priors.

In terms of disadvantages, Press (2003) noted:

1. Objective priors can lead to improper results when the domain of the parameters lies on the real number line.
2. Parameters with objective priors are often independent of one another, whereas in most multiparameter statistical models, parameters are correlated. The problem of correlated model parameters is of extreme importance for methods such as structural equation modeling (see, e.g., Kaplan & Wenger, 1993).
3. Expressing complete ignorance about a parameter via an objective prior leads to incorrect inferences about functions of the parameter.
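To make Equation 18 concrete, consider a short worked case that is not part of Press's (2003) discussion: the Jeffreys prior for the proportion θ in a binomial sampling model based on n trials.

```latex
% Jeffreys prior for a binomial proportion \theta (worked illustration)
\log f(y \mid \theta) = \mathrm{const} + y \log\theta + (n - y)\log(1 - \theta)

I(\theta) = -E_{y\mid\theta}\!\left[\frac{\partial^{2} \log f(y\mid\theta)}{\partial \theta^{2}}\right]
          = \frac{n\theta}{\theta^{2}} + \frac{n(1-\theta)}{(1-\theta)^{2}}
          = \frac{n}{\theta(1-\theta)}

p(\theta) \propto [I(\theta)]^{1/2} \propto \theta^{-1/2}(1-\theta)^{-1/2}
```

The result is the kernel of a Beta(1/2, 1/2) distribution, a proper prior that remains invariant under one-to-one transformations of θ, unlike the uniform prior discussed above.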

subjective priors
To motivate the use of subjective priors, consider again the class size reduction example. In this case, we may have a considerable amount of prior information regarding the increase in achievement arising from previous investigations. It may be that previous investigations used different tests of academic achievement, but when examined together, it has been found that reducing class size to approximately 17 children per classroom results in a one-fourth of a standard deviation increase (say, about 8 points) in academic achievement. In addition to a prior estimate of the average achievement gain caused by reduction in class size, we may also wish to quantify our uncertainty about the exact value of θ by specifying a probability distribution around the prior estimate of the average. Perhaps a sensible prior distribution would be a normal distribution centered at θ = 8. However, let us imagine that previous research has shown that achievement gains caused by class size reduction have almost never been less than 5 points and almost never more than 14 points (almost a full standard deviation). Taking this range of uncertainty into account, we might propose a prior distribution on θ that is N(8, 1). The parameters of this prior distribution, θ ∼ N(8, 1), are referred to as hyperparameters. The careful reader may have wondered if setting hyperparameters to fixed values violates the essence of Bayesian philosophy. To address that concern, note first that the Bayesian approach treats the hyperparameters as elicited quantities that are known and fixed. The Bayesian approach is to be contrasted with the frequentist approach, which treats parameters as unknown and fixed. Second, it is not necessary to set hyperparameters to known and fixed quantities. In a fully hierarchical Bayesian model, it is possible to specify a probability distribution on the hyperparameters—referred to as a hyperprior.

Informative-Conjugate Priors
In the previous section, we considered the situation in which there may not be much prior information that can be brought to bear on a problem. In that situation we focused on objective priors. Alternatively, it may be the case that some information can be brought to bear on a problem and be systematically incorporated into the prior distribution. Such subjective priors are deemed informative. One type of informative prior is based on the notion of a conjugate distribution. As noted earlier, a conjugate prior distribution is one that, when combined with the likelihood function, yields a posterior that is in the same distributional family as the prior distribution. Conjugacy is a very important and convenient feature because if a prior is not conjugate, then the resulting posterior distribution may have a form that is not analytically simple to solve. Arguably, the existence of numerical simulation methods for Bayesian inference, such as Markov chain Monte Carlo (MCMC) estimation, may render conjugacy less of a problem. We focus on conjugate priors in this section.

Example 3: The Beta Prior
As an example of a conjugate prior, consider estimating the number of correct responses y on a test of length n. Let θ be the proportion of correct responses. We first assume that the responses are independent of one another. The binomial sampling model was given in Equation 14 and is reproduced here:

p(y|\theta) = \mathrm{Bin}(y|n, \theta) = \binom{n}{y}\,\theta^{y}(1-\theta)^{n-y}.   (23)

One choice of a prior distribution for θ is the beta(a, b) distribution. The beta distribution is a continuous distribution appropriate for variables that range from zero to one. The terms a and b are referred to as hyperparameters and characterize the distribution of the parameters, which for the beta distribution are the scale and shape parameters, respectively.4 The form of the beta(a, b) distribution is

p(\theta; a, b) = \frac{\Gamma(a+b)}{\Gamma(a)\,\Gamma(b)}\,\theta^{a-1}(1-\theta)^{b-1},   (24)

where Γ(·) is the gamma function. Ignoring terms that do not involve model parameters, we obtain the posterior distribution

p(\theta|y) = \frac{\Gamma(n+a+b)}{\Gamma(y+a)\,\Gamma(n-y+b)}\,\theta^{y+a-1}(1-\theta)^{n-y+b-1},   (25)

which is a beta distribution with parameters a′ = a + y and b′ = b + n − y. Thus, the beta prior for the binomial sampling model is conjugate.
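As a minimal sketch (not taken from the chapter's appendices), the conjugate update in Equation 25 can be carried out directly in R; the prior hyperparameters and test data below are hypothetical.

```r
# Beta-binomial conjugate update (Equations 23-25), with hypothetical values
a <- 2; b <- 2              # beta(a, b) prior hyperparameters
n <- 20; y <- 14            # 14 correct responses on a 20-item test

a_post <- a + y             # a' = a + y
b_post <- b + n - y         # b' = b + n - y

post_mean <- a_post / (a_post + b_post)               # posterior mean of theta (about 0.67)
cred_95   <- qbeta(c(0.025, 0.975), a_post, b_post)   # 95% credible interval for theta

c(post_mean = post_mean, lower = cred_95[1], upper = cred_95[2])
```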

Example 4: The Normal Prior
This next example explores the normal prior for the normal sampling model. Let y denote a data vector of size n. We assume that y follows the normal distribution shown in Equation 15 and reproduced here:

f(y|\mu, \sigma^{2}) = \frac{1}{\sqrt{2\pi\sigma^{2}}}\exp\left(-\frac{(y-\mu)^{2}}{2\sigma^{2}}\right).   (26)

Consider that our prior distribution on the mean is also normal, with mean hyperparameter κ and variance τ², which for this example are known. The prior distribution can be written as

f(\mu|\kappa, \tau^{2}) = \frac{1}{\sqrt{2\pi\tau^{2}}}\exp\left(-\frac{(\mu-\kappa)^{2}}{2\tau^{2}}\right).   (27)

After some algebra, the posterior distribution can be obtained as

f(\mu|y) \sim N\!\left(\frac{\dfrac{\kappa}{\tau^{2}} + \dfrac{n\bar{x}}{\sigma^{2}}}{\dfrac{1}{\tau^{2}} + \dfrac{n}{\sigma^{2}}},\; \frac{\tau^{2}\sigma^{2}}{\sigma^{2} + n\tau^{2}}\right),   (28)

and so we see that the normal prior is conjugate for the normal likelihood. The posterior distribution in Equation 28 reveals some interesting features regarding the relationship between the data and the prior. To begin, we see that μ is only dependent on x̄, the sample mean; hence, x̄ is sufficient for μ. Second, we see that as the sample size increases, the data (here, x̄) become more important than the prior. Indeed, as the sample size approaches infinity, there is no information in the prior distribution that is of relevance to estimating the moments of the posterior distribution. To see this, we compute the asymptotic posterior mean as

\lim_{n\to\infty}\hat{\mu} = \lim_{n\to\infty}\frac{\dfrac{\kappa}{\tau^{2}} + \dfrac{n\bar{x}}{\sigma^{2}}}{\dfrac{1}{\tau^{2}} + \dfrac{n}{\sigma^{2}}} = \lim_{n\to\infty}\frac{\dfrac{\kappa\sigma^{2}}{n\tau^{2}} + \bar{x}}{\dfrac{\sigma^{2}}{n\tau^{2}} + 1} = \bar{x}.   (29)

Finally, we introduce the terms 1/τ² and n/σ² to refer to the prior precision and data precision, respectively. The role of these two measures of precision can be seen by once again examining the variance term for the normal distribution in Equation 28. Specifically,

\lim_{n\to\infty}\hat{\sigma}^{2} = \lim_{n\to\infty}\frac{1}{\dfrac{1}{\tau^{2}} + \dfrac{n}{\sigma^{2}}} = \lim_{n\to\infty}\frac{\sigma^{2}}{\dfrac{\sigma^{2}}{\tau^{2}} + n} = \frac{\sigma^{2}}{n}.   (30)

A similar result emerges if we consider the case where we have very little information regarding the prior precision. That is, choosing a very large value for τ² gives the same result.

Example 5: The Inverse-Gamma Prior
In most practical applications, the variance in the normal sampling model is unknown. Thus, we need to derive the joint prior density p(μ, σ²). Derivation of the joint prior density is accomplished by factoring it into the product of a conditional density and a marginal density—that is,

p(\mu, \sigma^{2}) = p(\mu|\sigma^{2})\,p(\sigma^{2}),   (31)

where, in this example,

\mu|\sigma^{2} \sim N(\mu_{0}, \sigma^{2}/n),   (32)
\sigma^{2} \sim \text{inverse-Gamma}(\nu_{0}/2,\; \nu_{0}\sigma_{0}^{2}/2),   (33)

where ν₀ > 0 is a "degree-of-freedom" parameter. Another important feature of the inverse-Gamma distribution is that if the random variable X ∼ inverse-Gamma(a, b), then 1/X ∼ Gamma(a, b). The relationship between the inverse-Gamma and Gamma distributions is important because 1/σ² is the precision parameter. Thus, in the case of the normal model, an inverse-Gamma prior can be placed on σ² or a Gamma prior can be placed on 1/σ².
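A brief numerical sketch (ours, not the chapter's) of Equations 28 through 30, using hypothetical values for the prior and the data: as n increases, the posterior mean moves from the prior mean κ toward the sample mean x̄, and the posterior variance behaves like σ²/n; choosing a very large τ² has much the same effect at any fixed n.

```r
# Posterior mean and variance for the normal mean (Equation 28), hypothetical values
kappa <- 8; tau2 <- 1         # prior mean and prior variance
xbar  <- 10; sigma2 <- 16     # sample mean and (known) data variance

post_summary <- function(n, tau2) {
  prior_prec <- 1 / tau2      # prior precision
  data_prec  <- n / sigma2    # data precision
  post_var   <- 1 / (prior_prec + data_prec)
  post_mean  <- post_var * (prior_prec * kappa + data_prec * xbar)
  c(n = n, mean = post_mean, var = post_var)
}

# Posterior mean approaches xbar and posterior variance behaves like sigma2/n (Eqs. 29-30)
t(sapply(c(1, 10, 100, 1000), post_summary, tau2 = 1))

# A very large prior variance (a vague prior) gives essentially the same result at any n
post_summary(n = 10, tau2 = 1e6)
```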

Bayesian Hypothesis Testing
Bayes' Theorem shows that the posterior distribution is composed of encoded prior information weighted by the data. With the posterior distribution in hand, it is of interest to obtain summaries of its moments, such as the mean and variance. In addition, interval summaries of the posterior distribution can be obtained. Summarizing the posterior distribution provides the necessary ingredients for Bayesian hypothesis testing. Before covering summaries of the posterior distribution and their role in Bayesian hypothesis testing, it may be useful to place the Bayesian approach to hypothesis testing in contrast to the more common frequentist approach. Clearly, a


critically important component of applied statistical modeling is hypothesis testing. Indeed, a considerable amount of time is spent in introductory statistics courses laying the foundation for the frequentist perspective on hypothesis testing, beginning with Fisher (1941/1925) and culminating in the Neyman-Pearson approach (Neyman & Pearson, 1928), which is now the standard in the social and behavioral sciences. An interesting aspect of the Neyman-Pearson approach to hypothesis testing is that students (as well as many seasoned researchers) appear to have a very difficult time grasping its principles. In a review of the problem of hypothesis testing in the social and behavioral sciences, Gigerenzer, Krauss, and Vitouch (2004) argued that much of the problem lies in the conflation of Fisherian hypothesis testing and the Neyman-Pearson approach to hypothesis testing. For interesting discussions on this problem, see Cohen (1994), Gigerenzer et al. (2004), and the volume by Harlow, Mulaik, and Steiger (1997). Briefly, Fisher's approach to hypothesis testing specifies only the null hypothesis. A conventional significance level is chosen (usually the 5% level). Once the test is conducted, the result is either significant (p < 0.05) or it is not (p > 0.05). If the resulting test is significant, then the null hypothesis is rejected. However, if the resulting test is not significant, then no conclusion can be drawn. As Gigerenzer et al. (2004) have pointed out, Fisher developed a later version of his ideas wherein one only reports the exact significance level arising from the test and does not attach a "significant" or "nonsignificant" label to the result. In other words, one reports, say, p = 0.045 but does not label the result as "significant" (Gigerenzer et al., 2004, p. 399). In contrast to Fisher's ideas, the approach advocated by Neyman and Pearson requires that two hypotheses be specified: the null hypothesis and the alternative hypothesis. By specifying two hypotheses, one can compute a desired tradeoff between two types of errors: Type I errors (the probability of rejecting the null when it is true, denoted as α) and Type II errors (the probability of not rejecting the null when it is false, denoted as β). The conflation of Fisherian and Neyman-Pearson hypothesis testing lies in the use and interpretation of the p-value. In Fisher's paradigm, the p-value is a matter of convention, with the resulting outcome being based on the data. However, in the Neyman-Pearson paradigm, α and β are determined prior to the experiment being conducted and refer to a consideration of the cost of making one or the other


error. In other words, the p-value and α are not the same thing. The confusion between these two concepts is made worse by the fact that statistical software packages often report a number of p-value thresholds that a researcher can choose after having conducted the analysis (e.g., 0.001, 0.01, 0.05). This can lead a researcher to set α ahead of time, as per the Neyman-Pearson school, but then communicate a different level of "significance" after running the test. Misunderstandings of the Fisherian approach or the Neyman-Pearson approach to hypothesis testing are not criticisms of these methods per se. However, from the frequentist point of view, a criticism often leveled at the Bayesian approach to statistical inference is that it is "subjective," whereas the frequentist approach is "objective." The objection to "subjectivism" is somewhat perplexing insofar as frequentist hypothesis testing also rests on assumptions that do not involve data. The simplest and most ubiquitous example is the test of a null hypothesis against an alternative hypothesis, characteristic of the Neyman-Pearson paradigm. In cases where the value of the null hypothesis is stated (e.g., something other than zero), the question that is immediately raised is where that value came from. Presumably, a (non-null) value of the null hypothesis must be credible, thus restricting the values that the parameters could sensibly take on. A key difference between Bayesian and frequentist approaches to hypothesis testing is that the Bayesian approach makes this prior information explicit and does not find the idea that parameters possess probability distributions contrary to a coherent scheme of hypothesis testing.

Point Estimates of the Posterior Distribution For frequentist and Bayesian statistics alike, hypothesis testing proceeds after obtaining summaries of relevant distributions. For example, in testing for the differences between two groups (e.g., a treatment group and a control group), we first summarize the data, obtaining the means and standard errors for both groups, and then perform the relevant statistical tests. These summary statistics are considered “sufficient” summaries of the data — in a sense, they stand in for data. The difference between Bayesian and frequentist statistics is that with Bayesian statistics, we wish to obtain summaries of the posterior distribution. The expressions for the mean and variance of the posterior distribution come from expressions for the mean and variance of conditional distributions generally.

Specifically, for the continuous case, the mean of the posterior distribution can be written as

E(\theta|y) = \int_{-\infty}^{+\infty} \theta\, p(\theta|y)\, d\theta.   (34)

Thus, the posterior mean is obtained by averaging over the marginal distribution of θ. Similarly, the variance of θ can be obtained as

\begin{aligned}
\mathrm{var}(\theta|y) &= E\left[(\theta - E[\theta|y])^{2} \mid y\right] \\
&= \int_{-\infty}^{+\infty} (\theta - E[\theta|y])^{2}\, p(\theta|y)\, d\theta \\
&= \int_{-\infty}^{+\infty} \left(\theta^{2} - 2\theta E[\theta|y] + E[\theta|y]^{2}\right) p(\theta|y)\, d\theta \\
&= E[\theta^{2}|y] - E[\theta|y]^{2}.
\end{aligned}   (35)

The mean and variance of the posterior distribution provide two simple summary values of the posterior distribution. Another summary measure would be the mode of the posterior distribution, referred to as the maximum a posteriori (MAP) estimate. Those measures, along with the quantiles of the posterior distribution, provide a complete description of the distribution.
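In practice, these summaries are computed from the MCMC draws themselves rather than analytically. A minimal sketch with simulated stand-in draws (not output from the chapter's examples):

```r
# Point summaries of a posterior distribution computed from MCMC draws
set.seed(123)
theta_draws <- rnorm(10000, mean = 3.8, sd = 0.6)   # stand-in for post-burn-in draws

post_mean <- mean(theta_draws)        # sample analogue of Equation 34
post_var  <- var(theta_draws)         # sample analogue of Equation 35

# A simple MAP (posterior mode) estimate from a kernel density of the draws
dens    <- density(theta_draws)
map_est <- dens$x[which.max(dens$y)]

c(mean = post_mean, var = post_var, MAP = map_est)
```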

Interval Summaries of the Posterior Distribution In addition to these point estimates we are often interested in obtaining intervals for, say, the mean of the posterior distribution. There are two general approaches to obtaining interval summaries of the posterior distribution. The first is the so-called credible interval, also referred to as the posterior probability interval, and the second is the highest posterior density (HPD) interval. Credible Intervals One important consequence of viewing parameters probabilistically concerns the interpretation of confidence intervals. Recall that the frequentist confidence interval requires that we imagine a fixed parameter, for example, the population mean μ. Then, we imagine an infinite number of repeated samples from the population characterized by μ.5 For any given sample, we obtain the sample mean x¯ and form a 100(1 − α)% confidence interval. The correct frequentist interpretation is that

100(1 − α)% of the confidence intervals formed this way capture the true parameter μ under the null hypothesis. Notice that from this perspective, the probability that the parameter is in the interval is either zero or one. In contrast, the Bayesian perspective forms a credible interval (also known as a posterior probability interval). The credible interval is obtained directly from the quantiles of the posterior distribution of the model parameters. From the quantiles, we can directly obtain the probability that a parameter lies within a particular interval. Therefore, a 100(1 − α)% credible interval means that the probability that the parameter lies in the interval is 100(1 − α)%. Again, notice that this is entirely different from the frequentist interpretation and arguably aligns with common sense. In formal terms, a 100(1 − α)% credible interval for a particular subset C of the parameter space θ is defined as

1 - \alpha = \int_{C} p(\theta|y)\, d\theta.   (36)

The credible interval will be demonstrated through the examples given later in this chapter.

Highest Posterior Density
The simplicity of the credible interval notwithstanding, it is not the only way to provide an interval estimate of a parameter. Following the argument set by Box and Tiao (1973), when considering the posterior distribution of a parameter θ, there is a substantial part of the region of that distribution where the density is quite small. It may be reasonable, therefore, to construct an interval in which every point inside the interval has a higher probability than any point outside the interval. Such a construction is referred to as the HPD interval. More formally,

Definition 1. Let p(θ|y) be the posterior density function. A region R of the parameter space θ is called the HPD region of the interval 1 − α if
1. pr(θ ∈ R|y) = 1 − α, and
2. for θ1 ∈ R and θ2 ∉ R, pr(θ1|y) ≥ pr(θ2|y).

Note that for unimodal and symmetric distributions, such as the uniform distribution or the normal distribution, the HPD is formed by choosing tails of equal density. The advantage of the HPD arises when densities are not symmetric and/or are not unimodal. In fact, this is an important property of the HPD and sets it apart from standard credible intervals. Following Box and Tiao (1973), if p(θ|y)


is not uniform over every region in θ, then the 1 − α HPD region is unique. Also, if p(θ1|y) = p(θ2|y), then these points are included (or excluded) by a 1 − α HPD region. The opposite is true as well—namely, if p(θ1|y) ≠ p(θ2|y), then a 1 − α HPD region includes one point but not the other (Box & Tiao, 1973, p. 123).
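A small sketch (ours) of both interval types, using a deliberately skewed set of stand-in posterior draws so that the equal-tail credible interval and the HPD interval differ:

```r
# Equal-tail credible interval versus HPD interval from posterior draws
set.seed(123)
theta_draws <- rgamma(10000, shape = 2, rate = 1)   # skewed stand-in posterior

# 95% credible (posterior probability) interval from the posterior quantiles
cred_95 <- quantile(theta_draws, c(0.025, 0.975))

# 95% HPD interval: the shortest interval containing 95% of the sorted draws
hpd <- function(draws, prob = 0.95) {
  sorted <- sort(draws)
  m <- floor(prob * length(sorted))
  widths <- sorted[(m + 1):length(sorted)] - sorted[1:(length(sorted) - m)]
  i <- which.min(widths)
  c(lower = sorted[i], upper = sorted[i + m])
}

rbind(credible = cred_95, HPD = hpd(theta_draws))
```

For draws stored as coda mcmc objects, the coda package's HPDinterval() function provides the same kind of interval directly.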

Bayesian Model Evaluation and Comparison
In many respects, the frequentist and Bayesian steps in model building are the same. First, an initial model is specified relying on a lesser or greater degree of prior theoretical knowledge. In fact, at this first stage, a number of different models may be specified according to different theories, with the goal being to choose the "best" model, in some sense of the word. Second, these models will be fit to data obtained from a sample from some relevant population. Third, an evaluation of the quality of the models will be undertaken, examining where each model might deviate from the data, as well as assessing any possible model violations. At this point, model respecification may come into play. Finally, depending on the goals of the research, the "best model" will be chosen for some purpose. Despite the similarities between the two approaches with regard to the broad goals of model building, there are important differences. A major difference between the Bayesian and frequentist goals of model building lies in the model specification stage. In particular, because the Bayesian perspective views parameters as possessing probability distributions, the first phase of model building will require the specification of a full probability model for the data and the parameters. The probability model for the data is encoded in the likelihood, and the probability model for the parameters is encoded in the prior distribution. Thus, the notion of model fit implies that the full probability model fits the data, in some sense, and lack of model fit may well result from incorrect specification of the prior distribution. Arguably, another difference between the Bayesian and frequentist goals of model building relates to the justification for choosing a particular model among a set of competing models. Specifically, model building and model choice in the frequentist domain is based primarily on choosing the model that best fits the data. Model fit has certainly been the key motivation for model building, respecification, and model choice in the context of structural equation modeling (see Kaplan, 2009).


In this section, we examine the notion of model building and model fit and discuss a number of commonly used Bayesian approaches. We will first introduce Bayes factors as a very general means of choosing from a set of competing models. This will be followed by a special case of the Bayes factor, referred to as the Bayesian information criterion. Then, we will consider the Deviance information criterion. Finally, we will consider the idea of borrowing strength from a number of competing models in the form of Bayesian model averaging.

Bayes Factors
A very simple and intuitive approach to model building and model choice uses so-called Bayes factors (Kass & Raftery, 1995). In essence, the Bayes factor provides a way to quantify the odds that the data favor one hypothesis over another. A key benefit of Bayes factors is that models do not have to be nested. Following Raftery (1995), consider two competing models, denoted as M1 and M2, that could be nested within a larger space of alternative models or possibly obtained from distinct parameter spaces. Further, let θ1 and θ2 be two parameter vectors. From Bayes' Theorem, the posterior probability that, for example, M1 is the model preferred by the data can be written as

p(M_1|y) = \frac{p(y|M_1)\,p(M_1)}{p(y|M_1)\,p(M_1) + p(y|M_2)\,p(M_2)},   (37)

where

p(y|M_1) = \int p(y|\theta_1, M_1)\, p(\theta_1|M_1)\, d\theta_1   (38)

is referred to as the marginal probability or predictive probability of the data given M1. From here, the posterior odds for M1 over M2 can be written as

\frac{p(M_1|y)}{p(M_2|y)} = \frac{p(y|M_1)}{p(y|M_2)} \times \frac{p(M_1)}{p(M_2)},   (39)

where the first term on the right-hand side of Equation 39 is the Bayes factor (BF), defined as

\mathrm{BF} = \frac{p(y|M_1)}{p(y|M_2)} = \frac{\int p(y|\theta_1, M_1)\, p(\theta_1|M_1)\, d\theta_1}{\int p(y|\theta_2, M_2)\, p(\theta_2|M_2)\, d\theta_2}.   (40)

In words, the quantity on the left-hand side of Equation 39 is the posterior odds of the data

favoring M1 over M2 . This posterior probability is related to the prior odds p(M1 )/p(M2 ) of the data favoring M1 over M2 weighted by the marginal likelihoods p(y|M1 )/p(y|M2 ) as seen in Equation 40. Notice that assuming neutral prior odds—that is, p(M1 ) = p(M2 ) = 1/2—the Bayes factor is equivalent to the posterior odds. Rules of thumb have been developed to assess the quality of the evidence favoring one hypothesis over another using Bayes factors. Following Kass and Raftery (1995, p. 777) and using M1 as the reference model, 2loge (BF12 ) 0 to 2

BF12 1 to 3

2 to 6 6 to 10 > 10

3 to 20 20 to 150 > 150

Evidence against M2 Not worth more than a bare mention Positive Strong Very strong
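A small helper (ours, not part of Kass and Raftery's presentation) that maps a Bayes factor onto the descriptive categories above; the "Favors M2" label for Bayes factors below 1 is our addition.

```r
# Kass and Raftery (1995) evidence categories on the 2*log_e(BF) scale
bf_evidence <- function(bf12) {
  cut(2 * log(bf12),
      breaks = c(-Inf, 0, 2, 6, 10, Inf),
      labels = c("Favors M2", "Not worth more than a bare mention",
                 "Positive", "Strong", "Very strong"))
}

bf_evidence(c(2, 15, 65, 200))
# e.g., BF = 65 gives 2*log(65) = 8.3, which falls in the "Strong" range
```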

The Bayesian Information Criterion
A difficulty with using Bayes factors for hypothesis testing is the requirement that priors be specified. An alternative that does not require the introduction of prior densities can be obtained using the Bayesian information criterion (BIC), also referred to as the Schwarz criterion (SC). The BIC is defined as

\mathrm{BIC} = -2\log(\hat{\theta}|y) + p\log(n),   (41)

where −2 log(θ̂|y) describes model fit, whereas p log(n) is a penalty for model complexity, where p represents the number of variables in the model and n is the sample size. As with Bayes factors, the BIC is often used for model comparisons. Specifically, the difference between two BIC measures comparing, for example, M1 to M2 can be written as

\Delta\mathrm{BIC}_{12} = \mathrm{BIC}(M_1) - \mathrm{BIC}(M_2) = \log(\hat{\theta}_1|y) - \log(\hat{\theta}_2|y) - \frac{1}{2}(p_1 - p_2)\log(n).   (42)

However, unlike the Bayes factor, there is no existing rule of thumb regarding the size of the difference between the BICs of two competing models that would guide a choice. Rather, among competing models, the one with the smallest BIC value is to be chosen.
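As a quick illustration of the BIC comparison in practice (a sketch with simulated data, not one of the chapter's examples), base R computes the criterion directly for fitted models; note that R's BIC() counts all estimated parameters, including the residual variance, in the penalty term.

```r
# Comparing two regression models by BIC: the smaller value is preferred
set.seed(1)
d <- data.frame(y = rnorm(200), x1 = rnorm(200), x2 = rnorm(200))

m1 <- lm(y ~ x1 + x2, data = d)
m2 <- lm(y ~ x1, data = d)

BIC(m1)             # -2 * logLik + (number of parameters) * log(n), as in Equation 41
BIC(m2)
BIC(m1) - BIC(m2)   # Delta BIC for the comparison in Equation 42
```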

The Deviance Information Criterion
Although the BIC is derived from a fundamentally Bayesian perspective, it is often productively used for model comparison in the frequentist domain. Recently, however, an explicitly Bayesian approach to model comparison was developed by Spiegelhalter, Best, Carlin, and van der Linde (2002) based on the notion of Bayesian deviance. Consider a particular model proposed for a set of data, defined as p(y|θ). Then, Bayesian deviance can be defined as

D(\theta) = -2\log[p(y|\theta)] + 2\log[h(y)],   (43)

where, according to Spiegelhalter et al. (2002), the term h(y) is a standardizing factor that does not involve model parameters and thus is not involved in model selection. Note that although Equation 43 is similar to the BIC, it is not, as currently defined, an explicit Bayesian measure of model fit. To accomplish this, we use Equation 43 to obtain a posterior mean over θ by defining

\bar{D}(\theta) = E_{\theta}\left[-2\log[p(y|\theta)] \mid y\right] + 2\log[h(y)],   (44)

and this is referred to as the deviance information criterion (DIC). It has been suggested by Lee (2007, p. 128) that if the difference between the DIC values of two competing models is less than 5.0 and the two models give substantively different conclusions, then it may be misleading to choose the model with the lowest DIC value.

Bayesian Model Averaging As noted earlier, a key characteristic that separates Bayesian statistical inference from frequentist statistical inference is its focus on characterizing uncertainty. Up to this point, we have concentrated on uncertainty in model parameters, addressing that uncertainty through the specification of a prior distribution on the model parameters. In a related, but perhaps more general fashion, the selection of a particular model from a universe of possible models can also be characterized as a problem of uncertainty. This problem was succinctly stated by Hoeting, Madigan, Raftery, and Volinsky (1999), who write: “Standard statistical practice ignores model uncertainty. Data analysts typically select a model from some class of models and then proceed as if the selected model had generated the data. This approach ignores the uncertainty in model selection, leading to over-confident inferences and decisions that are more risky than one thinks they are.”(p. 382)



An interesting approach to addressing the problem of model uncertainty lies in the method of Bayesian model averaging (BMA). To begin, consider once again a parameter of interest θ (which could be vector valued) and consider a set of competing models Mk, k = 1, 2, ..., K, that are not necessarily nested. The posterior distribution of θ given data y can be written as

p(\theta|y) = \sum_{k=1}^{K} p(\theta|M_k, y)\, p(M_k|y),   (45)

where p(Mk|y) is the posterior probability of model Mk, written as

p(M_k|y) = \frac{p(y|M_k)\, p(M_k)}{\sum_{l=1}^{K} p(y|M_l)\, p(M_l)}.   (46)

In words, Equation 46 indicates that one can obtain the posterior probability of a model by multiplying the likelihood of the data given the model times the prior probability placed on the model. The prior probability p(Mk) can be different for different models. Note that the denominator in Equation 46 simply ensures that the probabilities sum to one. Note also that the term p(y|Mk) can be expressed as an integrated likelihood

p(y|M_k) = \int p(y|\theta_k, M_k)\, p(\theta_k|M_k)\, d\theta_k   (47)

over the parameters of interest, where p(θk|Mk) is the prior density of θk. Thus, BMA provides an approach for combining models specified by researchers or perhaps elicited by key stakeholders. The advantage of BMA has been discussed in Madigan and Raftery (1994), who showed that BMA provides better predictive performance than that of a single model. As pointed out by Hoeting et al. (1999), BMA is difficult to implement. In particular, they have noted that the number of terms in Equation 45 can be quite large, the corresponding integrals are hard to compute (though possibly less so with the advent of MCMC), specification of p(Mk) may not be straightforward, and choosing the class of models to average over is also challenging. The problem of reducing the overall number of models that one could incorporate in the summation of Equation 45 has led to interesting solutions based on the notion of Occam's window (Madigan & Raftery, 1994) or the "leaps-and-bounds" algorithm (Volinsky, Madigan, Raftery, & Kronmal, 1997), discussions of which are beyond the scope of this chapter.


Bayesian Computation As stated in the introduction, the key reason for the increased popularity of Bayesian methods in the social and behavioral sciences has been the advent of freely available software programs for Bayesian estimation of the parameters of a model. The most common estimation algorithm is based on MCMC sampling. A number of very important papers and books have been written about MCMC sampling (see, e.g., Gilks, Richardson, & Spiegelhalter, 1996). The general idea is that instead of attempting to analytically solve a complex integral problem, the MCMC approach instead draws specially constructed samples from the posterior distribution p(θ |y) of the model parameters. In the interest of space, we will concentrate on one common algorithm for MCMC sampling, referred to as Gibbs Sampling (Geman & Geman, 1984). More general treatments of MCMC can be found in Bolstad (2009); Casella and Robert (2003); Gilks et al. (1996).

Gibbs Sampling
The formal algorithm can be specified as follows. Let θ be a vector of model parameters with elements θ = {θ1, ..., θq}. The elements of θ could be the parameters of a regression model, structural equation model, and so forth. Note that information regarding θ is contained in the prior distribution p(θ). A number of algorithms and software programs are available to conduct MCMC sampling. Following the description given in Gilks et al. (1996), the Gibbs sampler begins with an initial set of starting values for the parameters, denoted as θ^(0) = (θ1^(0), ..., θq^(0)). Given this starting point, the Gibbs sampler generates θ^(s) from θ^(s−1) as follows:

1. sample \theta_1^{(s)} \sim p(\theta_1 \mid \theta_2^{(s-1)}, \theta_3^{(s-1)}, \ldots, \theta_q^{(s-1)}, y)
2. sample \theta_2^{(s)} \sim p(\theta_2 \mid \theta_1^{(s)}, \theta_3^{(s-1)}, \ldots, \theta_q^{(s-1)}, y)
   ...
q. sample \theta_q^{(s)} \sim p(\theta_q \mid \theta_1^{(s)}, \theta_2^{(s)}, \ldots, \theta_{q-1}^{(s)}, y).

Then, a sequence of dependent vectors is formed:

\theta^{(1)} = \{\theta_1^{(1)}, \ldots, \theta_q^{(1)}\}
\theta^{(2)} = \{\theta_1^{(2)}, \ldots, \theta_q^{(2)}\}
\vdots
\theta^{(S)} = \{\theta_1^{(S)}, \ldots, \theta_q^{(S)}\}.

This sequence exhibits the so-called Markov property insofar as θ^(s) is conditionally independent of {θ^(0), ..., θ^(s−2)} given θ^(s−1). Under some general conditions, the sampling distribution resulting from this sequence will converge to the target distribution as s → ∞. See Gilks et al. (1996) for additional details on the properties of MCMC. In setting up the Gibbs sampler, a decision must be made regarding the number of Markov chains to be generated, as well as the number of iterations of the sampler. With regard to the number of chains to be generated, it is not uncommon to specify multiple chains. Each chain samples from a different location of the posterior distribution based on purposefully dispersed starting values. With multiple chains, it may be the case that fewer iterations are required, particularly if there is evidence of the chains converging to the same posterior mean for each parameter. In some cases, the same result can be obtained from one chain, although often requiring a considerably larger number of iterations. Once the chain has stabilized, the iterations prior to the stabilization (referred to as the burn-in phase) are discarded. Summary statistics, including the posterior mean, mode, standard deviation, and credible intervals, are calculated on the post-burn-in iterations. Also, convergence diagnostics (discussed next) are obtained on the entire chain or on the post-burn-in iterations.
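To make the algorithm concrete, the sketch below (ours, not from the chapter's appendices) implements a Gibbs sampler for the normal model of Examples 4 and 5, with a normal prior on μ and an inverse-Gamma prior on σ², sampled by drawing the precision 1/σ² from a Gamma distribution; all data and hyperparameter values are hypothetical.

```r
# A minimal Gibbs sampler for a normal model with unknown mean and variance
set.seed(42)
y <- rnorm(100, mean = 5, sd = 2)   # hypothetical data
n <- length(y)

# Hypothetical hyperparameters: mu ~ N(kappa, tau2), sigma2 ~ inverse-Gamma(a0, b0)
kappa <- 0; tau2 <- 100
a0 <- 0.01; b0 <- 0.01

S <- 5000
draws <- matrix(NA, S, 2, dimnames = list(NULL, c("mu", "sigma2")))
mu <- mean(y); sigma2 <- var(y)     # starting values

for (s in 1:S) {
  # 1. sample mu from its full conditional (normal; compare Equation 28 with sigma2 fixed)
  post_var  <- 1 / (1 / tau2 + n / sigma2)
  post_mean <- post_var * (kappa / tau2 + n * mean(y) / sigma2)
  mu <- rnorm(1, post_mean, sqrt(post_var))

  # 2. sample sigma2 from its full conditional (inverse-Gamma) via a Gamma draw on 1/sigma2
  sigma2 <- 1 / rgamma(1, shape = a0 + n / 2, rate = b0 + sum((y - mu)^2) / 2)

  draws[s, ] <- c(mu, sigma2)
}

burn <- 1000
colMeans(draws[-(1:burn), ])        # posterior means after discarding the burn-in phase
```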

Convergence Diagnostics
Assessing the convergence of parameters within MCMC estimation is a difficult task that has been receiving attention in the literature for many years (see, e.g., Mengersen, Robert, & Guihenneuc-Jouyaux, 1999; Sinharay, 2004). The difficulty of assessing convergence stems from the very nature of MCMC in that the MCMC algorithm is designed to converge in distribution rather than to a point estimate. Because there is not a single adequate assessment of convergence for this situation, it is common to inspect several different diagnostics that examine varying aspects of convergence conditions. Perhaps the most common form of assessing MCMC convergence is to examine the convergence (also called history) plots produced for a chain. Typically, a parameter will appear to converge if the sample estimates form a tight horizontal band across this history plot. However, using this method as an assessment for convergence is rather crude because merely viewing a tight plot does not indicate convergence was actually obtained. As a result, this method is more likely to be an indicator of non-convergence

(Mengersen et al., 1999). For example, if two chains for the same parameter are sampling from different areas of the target distribution, then there is evidence of non-convergence. Likewise, if a plot shows substantial fluctuation or jumps in the chain, it is likely the parameter has not reached convergence. However, because merely viewing history plots may not be sufficient in determining convergence (or non-convergence), it is also common to reference additional diagnostics. Although this list is not exhaustive, we focus on several of the most commonly used diagnostics for single-chain situations. All of these diagnostics are available through loading the convergence diagnostic and output analysis (CODA) (Best, Cowles, & Vines, 1996) files (produced by programs such as WinBUGS) into the Bayesian output analysis (BOA) program (Smith, 2005) interface for R (R Development Core Team, 2008a).

The Geweke convergence diagnostic (Geweke, 1992) is used with a single chain to determine whether the first part of a chain differs significantly from the last part of a chain. The motivation for this diagnostic is rooted in the dependent nature of an MCMC chain. Specifically, because samples in a chain are not independently and identically distributed, convergence can be difficult to assess because of the inherent dependence between adjacent samples. Stemming from this dilemma, Geweke constructed a diagnostic aimed at assessing two independent sections of the chain. Bayesian output analysis allows the user to set the proportion of iterations to be assessed at the beginning and the end of the chain. The default for the program mimics the standard suggested by Geweke (1992), which is to compare the first 10% of the chain and the last 50% of the chain. Although the user can modify this default, it is important to note that there should be a sufficient number of iterations between the two samples to ensure the means for the two samples are independent. This method computes a z-statistic where the difference in the two sample means is divided by the asymptotic standard error of their difference. A z-statistic falling in the extreme tail of a standard normal distribution suggests that the sample from the beginning of the chain has not yet converged (Smith, 2005). Bayesian output analysis produces an observed z-statistic and two-sided p-value. It is common to conclude that there is evidence against convergence with a p-value less than 0.05.

The Heidelberger and Welch convergence diagnostic (Heidelberger & Welch, 1983) is a stationarity test that determines whether the last part of a Markov


chain has stabilized. This test uses the Cramer-von Mises statistic to assess evidence of non-stationarity. If there is evidence of non-stationarity, then the first 10% of the iterations will be discarded and the test will be repeated either until the chain passes the test or more than 50% of the iterations are discarded. If the latter situation occurs, then it suffices to conclude there was not a sufficiently long stationary portion of the chain to properly assess convergence (Heidelberger & Welch, 1983). The results presented in BOA report the number of iterations that were retained as well as the Cramer-von Mises statistic. Each parameter is given a status of having either passed the test or not passed the test based on the Cramer-von Mises statistic. If a parameter does not pass this test, then this is an indication that the chain needs to run longer before achieving convergence. A second stage of this diagnostic examines the portion of the iterations that pass the stationarity test for accuracy. Specifically, if the half-width of the estimate's confidence interval is less than a pre-set fraction of the mean, then the test implies the mean was estimated with sufficient accuracy. If a parameter fails under this diagnostic stage (indicating low estimate accuracy), then a longer run of the MCMC sampler may be necessary (Smith, 2005).

The Raftery and Lewis convergence diagnostic (Raftery & Lewis, 1992) was originally developed for Gibbs sampling and is used to help determine three of the main features of MCMC: the burn-in length, the total number of iterations, and the thinning interval (described below). A process is carried out that identifies this information for all of the model parameters being estimated. This diagnostic is specified for a particular quantile of interest with a set degree of accuracy within the BOA program. Once the quantile of interest and accuracy are set, the Raftery and Lewis diagnostic will produce the number of iterations needed for a burn-in and a range of necessary post-burn-in iterations for a particular parameter to converge. For each of these, a lower-bound value is produced that represents the minimum number of iterations (burn-in or post-burn-in) needed to estimate the specified quantile using independent samples. Note, however, that the minimum value recommended for the burn-in phase can be optimistic, and larger values are often required for this phase (Mengersen et al., 1999). Finally, information is also provided about the thinning interval that should be used for each parameter. Thinning is a process of sampling every sth value of the chain for purposes of summarizing the posterior distribution. Thinning is often used when


autocorrelations are high, indicating that consecutive draws are dependent. To reach independence between samples, it is common to discard a number of successive draws between those retained for estimation. Determining the thinning interval involves comparing first-order and second-order Markov chains for several different thinning intervals. Comparison of first- and second-order Markov chains is accomplished through computing G², a likelihood-ratio test statistic between the Markov models (Raftery & Lewis, 1996). After computing G², the BIC can then be computed to compare the models directly (Raftery & Lewis, 1996). The most appropriate thinning interval is chosen by adopting the smallest thinning value produced where the first-order Markov chain fits better than the second-order chain. Although the default in the BOA program is to estimate the 0.025 quantile, the 0.5 quantile is often of more interest in determining the number of iterations needed for convergence because interest typically focuses on the central tendency of the distribution. It is important to note that using this diagnostic is often an iterative process in that the results from an initial chain may indicate that a longer chain is needed to obtain parameter convergence. A word of caution is that overdispersed starting values can contribute to the Raftery and Lewis diagnostic requesting a larger number of burn-in and post-burn-in iterations. On a related note, Raftery and Lewis (1996) recommend that the maximum number of burn-in and post-burn-in iterations produced by the diagnostic be used in the final analysis. However, this may not always be a practical venture when models are complex (e.g., longitudinal mixture models) or starting values are purposefully overdispersed.
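As an illustration (ours) of how these diagnostics can be obtained for a single chain, the coda package in R implements all three; here a stationary autocorrelated series stands in for one sampled parameter, whereas in an actual analysis the chain would come from a program such as WinBUGS or MCMCpack.

```r
# Common single-chain convergence diagnostics via the coda package
library(coda)

set.seed(7)
chain <- mcmc(as.numeric(arima.sim(model = list(ar = 0.6), n = 10000)))

geweke.diag(chain, frac1 = 0.1, frac2 = 0.5)     # z-test: first 10% versus last 50% of the chain
heidel.diag(chain)                               # Heidelberger-Welch stationarity and half-width tests
raftery.diag(chain, q = 0.5, r = 0.05, s = 0.95) # Raftery-Lewis burn-in, run length, and thinning
```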

Three Empirical Examples
In this section, we provide three simple examples of the application of Bayesian statistical inference: (1) Bayesian multiple regression analysis, (2) Bayesian multilevel modeling, and (3) Bayesian confirmatory factor analysis. The intent of this section is to present three standalone examples that, in part, illustrate how to interpret and report analyses produced through a Bayesian framework. It is not the intention of this section to compare results to those from a frequentist-based analysis. In fact, it is expected in analyses with large samples and non-informative priors that the Bayesian results would be close to those obtained from a frequentist analysis. Differences between the two approaches might appear in comparing credible

intervals to confidence intervals, but the reasons for conducting a Bayesian analysis lie in the philosophical differences underlying the two approaches, which we discuss in the Conclusions and Future Directions section.

Bayesian Multiple Regression Analysis
For this example, we use an unweighted sample of 550 kindergartners from the Early Childhood Longitudinal Study–Kindergarten Class of 1998 (NCES, 2001). Item response theory was used to derive scale scores for a math assessment given in the fall of kindergarten. These scores are used as the dependent variable in this analysis. There are two sets of predictors included in this model. The first set of predictors comprises three items that the teacher answered for each student regarding certain social and behavioral issues within the classroom. These three items inquired about each student's approach to learning, self-control, and interpersonal skills. The second set of predictors included three similar items that the parent answered regarding their child in the home environment. These three items were approaches to learning, self-control, and social interaction. This model includes all six teacher and parent items as predictors of math achievement. For the purposes of this example, this model was computed through the R environment (R Development Core Team, 2008b) using the MCMCregress function within the MCMCpack package to carry out the analysis (Martin, Quinn, & Park, 2010). Note, however, that this model can also be computed in other R packages and in alternative programs such as WinBUGS (Lunn, Thomas, Best, & Spiegelhalter, 2000) and Mplus (Muthén & Muthén, 2010). All of the model parameters were given non-informative prior distributions.
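A sketch of the kind of call just described; the data frame and variable names (ecls, math, teacher1 through parent3) are hypothetical placeholders rather than the actual ECLS–K variable names, and the regression coefficients are left at the package's diffuse default priors.

```r
# Bayesian regression of math achievement on the six teacher and parent items
library(MCMCpack)

fit_full <- MCMCregress(math ~ teacher1 + teacher2 + teacher3 +
                               parent1 + parent2 + parent3,
                        data = ecls,
                        burnin = 1000, mcmc = 10000,
                        marginal.likelihood = "Chib95")  # needed for Bayes factors later

summary(fit_full)   # posterior means, SDs, and quantile-based credible intervals
```

Fitting the restricted models in the same way and passing the fitted objects to MCMCpack's BayesFactor() function yields comparisons of the kind reported in the model comparison section below.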

parameter convergence The results obtained through MCMCpack were read into the CODA package (Best et al., 1996) that provides many different convergence diagnostics discussed earlier. The Geweke convergence diagnostic was computed using the default CODA proportions of 0.1 for the beginning of the chain and 0.5 for the end of the chain. None of the parameters produced significant z-scores, indicating there was no evidence against convergence. The Heidelberger and Welch convergence diagnostic indicated that all of the parameters passed the stationarity and half-width tests. Finally, the Raftery and Lewis diagnostic was computed with the following settings: quantile = 0.5,

accuracy = 0.05, and probability = 0.95. Results indicated that the burn-in should consist of at least two iterations, the total number of iterations should be at least 3,897, and that no thinning interval was necessary. A more conservative analysis with 1,000 burn-in iterations and 10,000 post-burn-in iterations was conducted with little computational cost. The results of these diagnostics indicated that the parameters in this model appeared to properly converge.

model interpretation Estimates for the final unstandardized regression analysis can be found in Table 20.1. The means and standard deviations of the posterior distributions are provided for each model parameter. The Monte Carlo (MC) error is also included in this table. This estimate is of the MC standard error of the mean of the posterior distribution. Finally, the 95% credible interval is also provided for each parameter. As an example, the unstandardized regression weight for the teacher-reported assessment of a student’s approach to learning was 3.81 with a standard deviation of 0.59. The 95% credible interval for this parameter ranges from a lower bound of 2.65 to an upper bound of 4.98. The interpretation of this interval differs from the interpretation of a frequentist confidence interval in that the credible interval indicates there is a 0.95 probability that the parameter falls in this range of values. Figure 20.1 presents convergence plots and posterior density plots for the three teacher predictors and the three parent predictors. The convergence plots exhibit a relatively tight, horizontal band for the predictors, indicating that there was no sign of nonconvergence. Non-convergence is typically identified by convergence bands that bounce around in an unstable fashion, rather than forming a tight horizontal band. The posterior densities in Figure 20.1 approximate a normal distribution, which is another indication of parameter convergence. If the density plots exhibit non-normal, or lumpy, distributions, this can be a sign that the MCMC chain has not converged properly to the posterior distribution.

model comparison For the purposes of illustrating Bayesian model comparison, two additional regression models have been estimated using the same dependent variable of math achievement but a restricted set of predictors. The first model includes only the teacher-related predictors, and results from this analysis can be found in the middle section of Table 20.1. The second model includes the parent-related predictors and results can


Table 20.1. Bayesian Regression Estimates from R: ECLS–K Database

Node                                   EAP      SD      MC error    95% credible interval

Full model
  Intercept                           −4.00     2.79    2.75E−2     −9.46, 1.57
  Teacher1: Approaches to learning     3.81     0.59    5.99E−3      2.65, 4.98
  Teacher2: Self-control               0.41     0.97    8.39E−3     −1.47, 2.32
  Teacher3: Interpersonal skills       0.33     0.95    9.22E−3     −1.57, 2.18
  Parent1: Approaches to learning      2.15     0.77    7.08E−3      0.63, 3.66
  Parent2: Self-control                2.00     0.62    5.37E−3      0.78, 3.23
  Parent3: Social interaction          0.20     0.67    6.57E−3     −1.14, 1.51
  Math achievement variance           58.52     3.54    3.64E−2     51.92, 65.79

Restricted model: Teacher-related items
  Intercept                            5.87     1.76    1.85E−2      2.49, 9.42
  Teacher1: Approaches to learning     4.38     0.59    5.06E−3      3.21, 5.53
  Teacher2: Self-control               0.16     0.97    7.823E−3    −1.77, 2.03
  Teacher3: Interpersonal skills       1.04     0.95    8.14E−3     −0.82, 2.93
  Math achievement variance           60.90     3.70    3.57E−2     54.03, 68.57

Restricted model: Parent-related items
  Intercept                            1.65     2.75    2.89E−2     −3.64, 7.18
  Parent1: Approaches to learning      3.37     0.81    6.80E−3      1.76, 4.93
  Parent2: Self-control                2.94     0.64    5.57E−3      1.65, 4.17
  Parent3: Social interaction          0.62     0.71    7.37E−3     −0.77, 2.01
  Math achievement variance           65.95     4.01    3.86E−2     58.52, 74.26

Note: These are all unstandardized weights; standardized weights are also available through this program. EAP = expected a posteriori; SD = standard deviation; MC error = Monte Carlo error.

be found in the bottom portion of Table 20.1. Both of these models will be used as a comparison to the original full model containing all of the predictors. As discussed earlier, the Bayes factor can be used as a tool to quantify the odds of the data favoring one model over another. For the first comparison, the full model with all six predictors will be compared to the model only containing the teacher-related predictors. Using Equation 40, the Bayes factor for this model comparison was computed through the BayesFactor function in MCMCpack available through R. The result comparing the full model to the model containing only the teacher-related items yielded a Bayes factor value of 65.00. According to the


criteria presented earlier, this indicates strong evidence against the restricted model containing only the teacher-related items. In a similar fashion, the second comparison involves the full model and the model only containing the parent-related predictors. The Bayes factor computed for this comparison was 1.56E+11, indicating very strong evidence against the restricted model containing only the parent-related items. Finally, by comparing the two restricted models to one another, a Bayes factor value between 0 and 1.0 (4.17E-10) was produced. Values less than 1.0 indicate that the model in the denominator (M2) of the Bayes factor is favored over the model in the numerator (M1). In this case, there was very strong evidence

Figure 20.1 Bayesian regression: convergence (trace) and posterior density plots for all regression model predictors, each based on 10,000 iterations.

against the restricted model containing only the teacher-related items. It is important to point out how this example differs from a frequentist approach to the problem. In particular, the Bayes factor is providing information about the magnitude of support in the data favoring one hypothesis over the other. This is in stark contrast to the frequentist view of acceptance versus rejection of a hypothesis given the data.

bayesian model averaging The full regression model with all parent and teacher predictor variables is used here to demonstrate Bayesian model averaging via the BMA package (Raftery, Hoeting, Volinsky, Painter, & Yeung, 2009) in R (R Development Core Team, 2008b).6 The BMA package in R automatically produces the top five selected models, and these are displayed in Table 20.2. These models are selected based on posterior model probability values. For each variable in the model, the posterior effect probability (POST PROB) gives the effect size of the

variable in the metric of posterior probability and is used to draw inferences about the importance of each variable. Specifically, the posterior effect probability is the probability that the regression coefficient is not zero, taking into account model uncertainty. The Bayesian model averaged coefficients (AVG COEF) are the weighted average of coefficients associated with the specific variable across the top five models, weighted by each model's posterior model probability (PMP). For example, the weighted model averaged coefficient for TEACHER1 is 4.19, with a weighted model averaged standard deviation of 0.53. The posterior effect probability of this coefficient is 1.0 and thus implies that its averaged posterior distribution has 0% of its mass at 0. By contrast, TEACHER2 has a weighted model averaged coefficient of 0.04 with a standard deviation of 0.21. The averaged posterior distribution for this coefficient has 94% of its mass at 0, or, in other words, the probability that the TEACHER2 coefficient is not 0 is 0.06. As stressed by Hoeting et al. (1999), these parameter estimates and standard


Table 20.2. Bayesian Model Averaging Results for Five Multiple-Regression Models

Node         Post prob   Avg coef   SD      Model 1    Model 2    Model 3    Model 4    Model 5

Full model
  Intercept    1.00       −2.68     3.00     −3.14       2.28      −4.02      −3.80       0.82
  Teacher1     1.00        4.19     0.53      4.19       4.56       3.87       3.89       4.48
  Teacher2     0.06        0.04     0.22      .          .          0.67       .          .
  Teacher3     0.06        0.04     0.21      .          .          .          0.65       .
  Parent1      0.93        2.21     0.90      2.35       .          2.33       2.28       2.71
  Parent2      0.95        2.02     0.76      2.11       2.41       2.06       2.04       .
  Parent3      0.00        0.00     0.00      .          .          .          .          .
  R²                                          0.20       0.18       0.20       0.20       0.18
  BIC                                      −104.39     −99.47     −99.29     −99.25     −98.91
  PMP                                         0.77       0.07       0.06       0.06       0.05

Note: Post prob = the posterior probability for each variable in the averaged model; Avg coef = the average unstandardized coefficient for all variables in the model; SD = the standard deviation for the averaged coefficients; R² = percent of variance accounted for by each model; BIC = Bayesian information criterion; PMP = posterior model probability for each of the five models.

deviations account for model uncertainty. Finally, the model with the highest PMP is Model 1, with a probability of 0.77. This model also produced the lowest BIC value, but it is interesting to note that the R² values yield inconsistent findings. For future predictive studies, one would use the coefficients shown under AVG COEF, as these have been shown to provide the best predictive performance (see Hoeting et al., 1999). The R syntax for this example is given in Appendix A.
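As an aside (not part of the Appendix A syntax), the posterior model probabilities in Table 20.2 can be recovered approximately from the reported BIC values alone, because under equal prior model probabilities the PMP of model k is proportional to exp(−BIC_k/2):

```r
# Approximate posterior model probabilities from the BIC values in Table 20.2
bic <- c(-104.39, -99.47, -99.29, -99.25, -98.91)   # Models 1 through 5
pmp <- exp(-0.5 * (bic - min(bic)))
round(pmp / sum(pmp), 2)   # approximately 0.77, 0.07, 0.06, 0.06, 0.05
```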

Bayesian Hierarchical Linear Modeling
This example of a two-level hierarchical linear model uses a sample of 110 kindergartners from 39 schools from the ECLS–K database (NCES, 2001). The same math assessment measure from the multiple regression example is used as an outcome here. There are two predictors at Level 1 in this model. The first is a measure assessing the parent's perception of their child's approach to learning. The second predictor is the parent's assessment of their child's self-control. This example was computed through WinBUGS (Lunn et al., 2000); however, there are several packages within the R environment that will estimate this type of model. The WinBUGS syntax is given in Appendix B, and all model parameters were given non-informative priors.


parameter convergence An initial model was computed with no burn-in samples and 10,000 total iterations to assess preliminary parameter convergence. This model took about 1 second to compute. The Geweke diagnostic and the Heidelberger and Welch diagnostic would not compute as a result of substantial divergence within the chains. The Raftery and Lewis diagnostic was computed with the following values: quantile = 0.5, accuracy = 0.05, and probability = 0.95. Results indicated that the longest chain should run for up to 304,168 post-burn-in iterations for the 0.5 quantile, with a thinning interval of up to 193 and a burn-in of 2,509. A final model took these recommendations into consideration and was computed with 20,000 burn-in iterations, 255,000 post-burn-in iterations, and no thinning interval. The decision not to include a thinning interval was based on the auto-correlation plots in the initial model as well as the fact that such a large number of post-burn-in iterations were being used for the final model. The Geweke convergence diagnostic for this final model indicated that none of the parameters produced significant z-scores. Likewise, the Heidelberger and Welch diagnostic indicated that all of the parameters passed the stationarity and half-width tests. Based on these diagnostics, all of the parameters in

this model appeared to converge properly. Despite the large number of iterations, this model took less than 2 minutes to run.
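For readers who wish to reproduce these checks, the sketch below shows how the same three diagnostics, plus an auto-correlation check, can be run with the coda package on MCMC output. The object samples is a hypothetical mcmc object (e.g., the output of coda.samples() above), and the q, r, and s arguments mirror the quantile, accuracy, and probability values reported in the text.

# Convergence checks described in the text, using the coda package
# on a hypothetical mcmc object called 'samples'.
library(coda)

geweke.diag(samples)              # z-scores comparing early vs. late portions of each chain
heidel.diag(samples)              # stationarity and half-width tests
raftery.diag(samples,             # run-length diagnostic for the 0.5 quantile,
             q = 0.5, r = 0.05,   # accuracy 0.05, probability 0.95
             s = 0.95)
autocorr.plot(samples)            # within-chain auto-correlation plots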

model interpretation
Estimates for the final hierarchical linear model are presented in Table 20.3. The means and standard deviations of the posterior distributions are provided for each parameter, along with the MC error and the 95% credible interval. The fixed effects for this model are presented first. Results indicated that the intercept was −2.61, representing the expected scaled math score for a student whose parent-perception predictors are both coded as 0. The 95% credible interval for the intercept ranged from −4.12 to −1.10, indicating that there is a 0.95 probability that the true parameter value falls in this range. The slope corresponding to the parent's perception of the child's approach to learning was 4.85, and the slope for the parent's perception of the child's self-control was 2.66. Table 20.3 also presents the correlations between the fixed effects. The two slope parameters have the largest correlation, with an estimate of 0.47; the intercept had lower but comparable correlations with the respective slope parameters. Figure 20.2 presents convergence plots, posterior density plots, and auto-correlation plots for all three fixed effects. The convergence plots exhibit a relatively tight, horizontal band for the intercept and the two slopes. The posterior densities approximate a normal distribution, with the intercept exhibiting more variability in its density than the two slopes. Finally, the auto-correlation plots all show diminishing dependence within the chain. If auto-correlations were high, this would indicate that the starting values likely had a large impact on the location of the chain. Lower auto-correlations are desirable because the location of the chain should not depend on the starting values but, rather, should be determined by the posterior distribution. Although not presented here, the other parameters in the model showed similar results.
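Continuing the rjags sketch above, the quantities of the kind reported in Table 20.3 correspond to standard coda summaries of the posterior draws. The object samples is again a hypothetical mcmc object, and crosscorr() is one hedged illustration of how correlations among the monitored fixed effects could be obtained.

# Posterior summaries of the kind reported in Table 20.3,
# computed from a hypothetical mcmc object 'samples'.
library(coda)

summary(samples)                   # posterior means (EAP), SDs, time-series SE
                                   # (comparable to the MC error), and the
                                   # 2.5%/97.5% quantiles bounding the
                                   # equal-tailed 95% credible interval
HPDinterval(samples, prob = 0.95)  # highest posterior density intervals
crosscorr(samples)                 # correlations among the monitored parameters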

Bayesian Confirmatory Factor Analysis
The data for the Bayesian confirmatory factor analysis example come from the responses of a sample of 3,500 public school 10th-grade students to survey items in the National Educational Longitudinal Study (NCES, 1988). Students were asked to respond to questions assessing their perceptions of the climate of the school. Questions were placed on a 4-point Likert scale ranging from strongly agree to strongly disagree. A prior exploratory factor analysis using principal axis factoring with promax rotation revealed two correlated factors. The item and factor definitions are given in Table 20.4. We use the two-factor solution for the Bayesian CFA example. This model was estimated through WinBUGS using non-informative priors on the model parameters; the syntax for this example is given in Appendix C.
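As with the previous example, this model can also be fit from within R. The sketch below specifies a comparable two-factor CFA with the blavaan package rather than raw WinBUGS code; the data frame nels and the item names are hypothetical placeholders standing in for the NELS:88 survey items, and the default of fixing the first loading on each factor to 1.00 corresponds to the scale setting used in the chapter's example.

# Hedged sketch of a comparable two-factor Bayesian CFA using blavaan.
# 'nels' and the item names are hypothetical placeholders for the survey data.
library(blavaan)

cfa_model <- '
  positive =~ getalong + spirit + discipline + friends +
              teaching + interest + praise + listen
  negative =~ disrupt + putdown_t + strict + putdown_s +
              unsafe + impede + misbehave
'

fit <- bcfa(cfa_model, data = nels,
            n.chains = 2, burnin = 1000, sample = 5000)
summary(fit)   # posterior means, SDs, and 95% credible intervals;
               # the first loading on each factor is fixed to 1.00 by default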

parameter convergence
An initial model was computed with no burn-in samples and 5,000 total iterations to assess preliminary parameter convergence. This model took about 8 minutes to compute. The Geweke convergence diagnostic was computed using the default BOA proportions of 0.1 for the beginning of the chain and 0.5 for the end of the chain.

Table 20.3. WinBUGS HLM Estimates: ECLS-K Data

Node                          EAP     SD     MC error    95% credible interval
Fixed effects
  Intercept                  −2.61   0.78   3.43E-2      −4.12, −1.10
  Approaches to learning      4.85   0.40   1.72E-2       4.10, 5.63
  Self-control                2.66   0.40   1.71E-2       1.88, 3.53
Fixed effects: Correlations
  Intercept/Learning          0.23   0.15   1.63E-3      −0.07, 0.51
  Intercept/Self-control      0.22   0.15   1.68E-3      −0.07, 0.51
  Learning/Self-control       0.47   0.15   2.39E-3       0.17, 0.72

Note: EAP = expected a posteriori; SD = standard deviation; MC error = Monte Carlo error.


[Figure 20.2 HLM: convergence, posterior densities, and auto-correlations for fixed effects. Panels are shown for the intercept, approaches to learning, and self-control.]

None of the parameters produced significant z-scores, indicating there was no evidence against convergence based on the Geweke diagnostic. Likewise, the Heidelberger and Welch convergence diagnostic indicated that all of the parameters passed the stationarity and half-width tests. The Raftery and Lewis diagnostic was computed with the following values: quantile = 0.5, accuracy = 0.05, and probability = 0.95. Results indicated that the longest chain should run for up to 5,555 post-burn-in iterations for the 0.5 quantile, with a thinning interval of up to 11 and a burn-in of 44 iterations. A final model was computed based on these recommendations with a burn-in phase of 1,000 iterations and 5,000 post-burn-in iterations. Upon inspection of the auto-correlation plots for the initial model, it was deemed that no thinning interval was necessary for the final analysis. Based on the diagnostics, all of the parameters in this model appeared to converge properly. This model took approximately 10 minutes to run. The length of time required to run these models probably resulted from the large sample size.

model interpretation
Table 20.4 presents estimates for the final CFA model. The means and standard deviations of the posterior distributions are provided for each parameter, along with the MC error and the 95% credible interval. The first factor consisted of positive perceptions of the school climate, whereas the second factor consisted of negative perceptions of the school climate. Note that the first item on each factor was fixed to have a loading of 1.00 to set the metric of that factor. However, the flexibility of modeling in a Bayesian framework will allow for any method of scale setting. The factor assessing positive perceptions of school climate had high (unstandardized) loadings, ranging from 0.94 to 1.11. The factor measuring negative perceptions of school climate had slightly lower loadings overall, ranging from 0.80 to 0.97. Notice that the 95% credibility intervals are relatively tight for all of the items. For example, the interval for the item measuring whether students get along ranged from 0.95 to 1.03, indicating that there is a 0.95 probability that the true loading for this item lies in this range. Table 20.4 also includes estimates for the factor precisions (the inverse of the variance), the error-term variances, and the residual variance and precision. Figure 20.3 presents convergence plots, posterior density plots, and auto-correlation plots for two of the factor loadings and the corresponding error variances. The convergence plots exhibit a tight, horizontal band for both of the items presented. In conjunction with the convergence diagnostics presented above, this tight band indicates the parameters likely converged properly. The posterior probability densities approximate a normal distribution, and the auto-correlations are very low, indicating sample independence within the chain. Although not shown here, the other parameters included in this model also exhibited proper convergence and low auto-correlations.

Conclusions and Future Directions
This chapter provided a very general overview of Bayesian statistical methods, including elements of Bayesian probability theory, inference, hypothesis testing, and model comparison. We provided

Table 20.4. WinBUGS CFA Estimates: NELS:88 Survey

Node                                     EAP    SD     MC error    95% credible interval
Loadings: Positive
  Students get along                     1.00
  There is school spirit                 0.99   0.03   7.05E-4     0.95, 1.03
  Discipline is fair                     0.99   0.02   7.02E-4     0.95, 1.03
  I have friends of other racial groups  0.94   0.02   7.17E-4     0.90, 0.98
  Teaching is good                       1.08   0.02   7.43E-4     1.04, 1.12
  Teachers are interested in students    1.11   0.02   7.40E-4     1.07, 1.15
  Teachers praise students               1.02   0.02   7.50E-4     0.98, 1.06
  Teachers listen to students            1.04   0.02   7.53E-4     1.01, 1.08
Loadings: Negative
  Students disrupt learning              1.00
  Teachers putdown students              0.84   0.02   8.94E-4     0.80, 0.89
  Teachers are strict                    0.86   0.02   9.38E-4     0.81, 0.91
  Students putdown each other            0.87   0.02   9.91E-4     0.82, 0.92
  School is not safe                     0.80   0.02   8.79E-4     0.75, 0.84
  Disruptions impede my learning         0.93   0.02   9.33E-4     0.89, 0.98
  Students get away with bad behavior    0.97   0.02   9.99E-4     0.92, 1.02
Factor Precisions
  Factor 1                               0.59   0.02   8.22E-4     0.55, 0.63
  Factor 2                               0.61   0.03   1.22E-3     0.56, 0.66
  Factor Covariance                      0.43   0.02   5.48E-4     0.40, 0.47
Error Variances
  Students get along                     3.66   0.11   2.33E-3     3.45, 3.87
  There is school spirit                 1.81   0.05   7.36E-4     1.72, 1.90
  Discipline is fair                     1.61   0.04   8.25E-4     1.52, 1.69
  I have friends of other racial groups  1.60   0.04   6.58E-4     1.52, 1.68
  Teaching is good                       2.58   0.07   1.29E-3     2.44, 2.72
  Teachers are interested in students    2.10   0.06   1.09E-3     1.99, 2.22
  Teachers praise students               1.99   0.05   1.02E-3     1.88, 2.09
  Teachers listen to students            2.35   0.07   1.28E-3     2.23, 2.48
  Students disrupt learning              1.86   0.05   1.23E-3     1.76, 1.97
  Teachers putdown students              2.02   0.06   1.11E-3     1.91, 2.14
  Teachers are strict                    1.37   0.04   6.55E-4     1.30, 1.44
  Students putdown each other            1.92   0.05   1.19E-3     1.82, 2.03
  School is not safe                     1.92   0.05   9.15E-4     1.83, 2.03
  Disruptions impede my learning         1.56   0.04   7.61E-4     1.48, 1.64
  Students get away with bad behavior    1.61   0.04   9.30E-4     1.53, 1.70
Residual Variance and Precision
  Variance                               2.24   0.76   9.42E-3     1.01, 3.96
  Precision                              0.51   0.20   2.47E-3     0.25, 1.00

Note: These are unstandardized factor loadings; however, the program can be specified to produce standardized loadings. EAP = expected a posteriori; SD = standard deviation; MC error = Monte Carlo error.

[Figure 20.3 CFA: convergence, posterior densities, and auto-correlations for select parameters. Panels are shown for the loadings and error variances of the "There is school spirit" and "School is not safe" items.]

very simple examples of Bayesian inference applied to multiple regression, multilevel modeling, and confirmatory factor analysis to motivate the Bayesian approach. It should be pointed out, however, that with the advent of simulation methods for estimating model parameters, virtually all of the common statistical models used in the social and behavioral sciences can be estimated from a Bayesian perspective. The broad range of models that can be estimated via the Bayesian perspective comes with a price. First, although the MCMC sampling conducted for the examples in this chapter took very little time, Bayesian inference via MCMC sampling can take a very long time to run, particularly when compared


with maximum likelihood-based alternatives such as the expectation-maximization (EM) algorithm (Dempster, Laird, & Rubin, 1977). The issue of extensive computational time is particularly problematic when estimating models involving finite mixture distributions. Second, there does not currently exist simple "pull-down menu" functionality for Bayesian-oriented software programs such as WinBUGS or the packages within R. Although it is expected that such functionality will be available in the future, for now there is a great deal of start-up learning required to properly specify and estimate Bayesian models. Perhaps a more important consideration when embarking on the use of Bayesian inference concerns the

epistemological differences between the Bayesian and frequentist approaches to model building and model selection. As noted earlier, the key epistemological differences between the Bayesian and frequentist perspectives include (1) the view that parameters are random and unknown versus fixed and unknown; (2) acceptance of the subjective belief framework of probability, that is, quantifying the degree of belief about model parameters through the specification of the prior distribution and updating that belief in the presence of data; and (3) a shift away from the Fisherian and Neyman-Pearson schools of hypothesis testing and toward an approach based on model selection and posterior predictive accuracy. Thus, although Bayesian and frequentist results can look similar under certain conditions (e.g., large sample sizes and diffuse priors), this does not mean that they are the same or that they provide comparable interpretations. These differences in outlook imply that MCMC sampling should not be considered "just another estimator," that is, no different from, for example,

maximum likelihood or weighted least squares. Rather, if the Bayesian perspective is an appealing approach to data modeling in the social and behavioral sciences, then due consideration must be given to whether one is comfortable with the epistemological shift that comes with adopting this approach. We see three important future directions for Bayesian inference in the social and behavioral sciences. First, from a purely practical point of view, it will be difficult to convince social and behavioral science researchers to adopt Bayesian methods unless computational algorithms become both easier to use and considerably faster. Second, it will be important to introduce students to Bayesian methods much earlier in their statistical training and to articulate the epistemological differences between the Bayesian and frequentist approaches so that students understand precisely the choices they are making. Finally, it will take a slow but steady paradigm shift in the practice of social and behavioral science to move away from conventional hypothesis testing as currently employed and toward the Bayesian perspective.

Appendix A: Glossary

Bayes factor

A quantity indicating the odds that the data favor one hypothesis over another. With equal prior odds, the Bayes factor is the ratio of the marginal likelihoods.

Bayes’ Theorem

A theorem originated by the Reverend Thomas Bayes and popularized by Pierre-Simon Laplace relating conditional probability to its inverse form.

BIC

Bayesian information criterion. A statistic used for model selection based on the Bayes factor but not requiring prior distributions.

BMA

Bayesian model averaging. A method to account for model uncertainty when specifying and comparing a number of different models.

Burn-in

In MCMC, the iterations prior to the stabilization of the chain.

Conditional probability

The probability of an event given the occurrence or observation of another event.

Credible interval

Also referred to as the posterior probability interval. An interval of the posterior distribution used for interval estimation in Bayesian statistics.

DIC

Deviance information criterion. A model selection criterion used to select a model with the best sample predictive performance.



EAP

Expected a posteriori estimate. In Bayesian inference, the EAP corresponds to the mean of the posterior distribution.

EM algorithm

An iterative algorithm for finding maximum likelihood estimates of model parameters.

Exchangeability

A sequence of random variables such that future samples behave like earlier samples, meaning that any order of a finite number of samples is equally likely.

Frequentist paradigm

A statistical paradigm based on the view of probability as a limiting quantity in long-run frequency, under which any given event can be considered one of an infinite sequence of possible repetitions of the same event.

HPD

Highest posterior density. An interval in which every point inside the interval has a higher probability than any point outside the interval.

Hyperparameters

The parameters of the prior distribution.

Hyperprior distribution

The prior distribution on the hyperparameters.

Jeffreys’ prior

A non-informative prior distribution that is proportional to the square root of the determinant of the Fisher information matrix.

Likelihood

A statistical function of the parameters of a model, assumed to have generated the observed data.

MAP

Maximum a posteriori estimate. The mode of the posterior distribution.

MCMC

Markov chain Monte Carlo. In Bayesian statistics, a family of algorithms designed to sample from the posterior probability distribution, in which the equilibrium distribution is the target distribution of interest. Algorithms include the Gibbs sampler and the Metropolis-Hastings algorithm.

Objective prior distribution

A prior distribution in which the specification of the hyperparameters suggests that very little information is conveyed by the distribution. Also referred to as a public policy prior, uninformative prior, or vague prior.

Post-burn-in

In MCMC, the iterations after stabilization of the chain and used for obtaining summaries of the posterior distribution.

Posterior distribution

The distribution of an event after conditioning on relevant prior information.

Precision

The reciprocal of the variance.



Prior distribution

The distribution over the model parameters, characterized by hyperparameters that encode beliefs about the model parameters.

Subjective prior distribution

A prior distribution in which the specification of the hyperparameters conveys prior beliefs about the model parameters.

Thinning

A process of retaining every sth draw of the chain for purposes of summarizing the posterior distribution. Thinning is often used to reduce auto-correlation within the chain.
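To make several of these terms concrete, the following is a small, hedged R sketch that computes the EAP, MAP, HPD interval, and precision from a set of posterior draws and applies thinning. The object draws and the parameter name "theta" are illustrative assumptions, not output from the chapter's examples.

# Illustrative sketch tying together several glossary terms. 'draws' is a
# hypothetical coda mcmc object containing posterior draws for "theta".
library(coda)

theta <- as.numeric(draws[, "theta"])

eap  <- mean(theta)                            # EAP: mean of the posterior draws
dens <- density(theta)
map  <- dens$x[which.max(dens$y)]              # MAP: approximate posterior mode
prec <- 1 / var(theta)                         # precision: reciprocal of the variance
hpd  <- HPDinterval(mcmc(theta), prob = 0.95)  # 95% highest posterior density interval
thinned <- window(draws, thin = 10)            # thinning: keep every 10th iteration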

Appendix B: Multiple Regression, CODA, Bayes Factors, and Bayesian Model Averaging R Code

# Multiple Regression Analysis:
library(MCMCpack)
datafile