The SAGE Handbook of Applied Social Research Methods, 2nd Edition


Acknowledgments

The editors are grateful for the assistance of Peggy Westlake in managing the complex process of developing and producing this Handbook.

Publisher’s Acknowledgments

SAGE Publications gratefully acknowledges the contributions of the following reviewers:

Neil Boyd, Pennsylvania State University, Capital College

Julie Fleury, Arizona State University

Steven Rogelberg, University of North Carolina, Charlotte

Introduction

Why a Handbook of Applied Social Research Methods?

This second edition of the Handbook of Applied Social Research Methods includes 14 chapters revised and updated from the first edition as well as 4 new chapters. We selected the combination of chapters in this second edition to represent the cutting edge of applied social research methods and important changes that have occurred in the field in the decade since the first edition was published.

One area that continues to gain prominence is the focus on qualitative research. In the first edition, 4 of the 18 chapters were focused on the qualitative approach; in this edition, a third of the Handbook now focuses on that approach. Moreover, research that combines quantitative and qualitative research methods, called mixed methods, has become a much more common requirement for studies. In Chapter 9, Abbas Tashakkori and Charles Teddlie present an approach to integrating qualitative and quantitative methods with an underlying belief that qualitative and quantitative methods are not dichotomous or discrete but are on a continuum of approaches.

Another change that is reflected in many of the revised chapters as well as in two of the new chapters is the increasing use of technology in research. The use of the Internet and computer-assisted methods is discussed in several of the chapters and is the focus of Samuel Best and Chase Harrison’s chapter (Chapter 13) on Internet survey methods. In addition, Mary Kane and Bill Trochim’s contribution on concept mapping in Chapter 14 offers a cutting-edge technique involving both qualitative and quantitative methods in designing research.

Finally, Michael Harrison’s chapter on organizational diagnosis is a new contribution to this Handbook edition. Harrison’s approach focuses on using methods and models from the behavioral and organization sciences to help identify what is going on in an organization and to help guide decisions based on this information.

In addition to reflecting new developments that have occurred (such as the technological changes noted above), other changes that have been made in this edition respond to comments made about the first edition, with an emphasis on increasing the pedagogical quality of each of the chapters and the book as a whole. In particular, the text has been made more “classroom friendly” with the inclusion of discussion questions and exercises. The chapters also are current, with new research cited and improved examples of those methods. Overall, however, research methods are not an area that is subject to rapid changes.

This version of the Handbook, like the first edition, presents the major methodological approaches to conducting applied social research that we believe need to be in a researcher’s repertoire. It serves as a “handy” reference guide, covering key yet often diverse themes and developments in applied social research. Each chapter summarizes and synthesizes major topics and issues of the method and is designed with a broad perspective but provides information on additional resources for more in-depth treatment of any one topic or issue.

Applied social research methods span several substantive arenas, and the boundaries of application are not well-defined. The methods can be applied in educational settings, environmental settings, health settings, business settings, and so forth. In addition, researchers conducting applied social research come from several disciplinary backgrounds and orientations, including sociology, psychology, business, political science, education, geography, and social work, to name a few. Consequently, a range of research philosophies, designs, data collection methods, analysis techniques, and reporting methods can be considered to be “applied social research.” Applied research, because it consists of a diverse set of research strategies, is difficult to define precisely and inclusively. It is probably most easily defined by what it is not, thus distinguishing it from basic research. Therefore, we begin by highlighting several differences between applied and basic research; we then present some specific principles relevant to most of the approaches to applied social research discussed in this Handbook.

Distinguishing Applied From Basic Social Research

Social scientists are frequently involved in tackling real-world social problems. The research topics are exceptionally varied. They include studying physicians’ efforts to improve patients’ compliance with medical regimens, determining whether drug use is decreasing at a local high school, providing up-to-date information on the operations of new educational programs and policies, evaluating the impacts of environmental disasters, and analyzing the likely effects of yet-to-be-tried programs to reduce teenage pregnancy. Researchers are asked to estimate the costs of everything from shopping center proposals to weapons systems and to speak to the relative effectiveness of alternative programs and policies. Increasingly, applied researchers are contributing to major public policy debates and decisions.

Applied research uses scientific methodology to develop information to help solve an immediate, yet usually persistent, societal problem. The applied research environment is often complex, chaotic, and highly political, with pressures for quick and conclusive answers yet little or no experimental control. Basic research, in comparison, also is firmly grounded in the scientific method but has as its goal the creation of new knowledge about how fundamental processes work. Control is often provided through a laboratory environment.

These differences between applied and basic research contexts can sometimes seem artificial to some observers, and highlighting them may create the impression that researchers in the applied community are “willing to settle” for something less than rigorous science. In practice, applied research and basic research have many more commonalities than differences; however, it is critical that applied researchers (and research consumers) understand the differences. Basic research and applied research differ in purposes, context, and methods. For ease of presentation, we discuss the differences as dichotomies; in reality, however, they fall on continua.

Differences in Purpose

Knowledge Use Versus Knowledge Production. Applied research strives to improve our understanding of a “problem,” with the intent of contributing to the solution of that problem. The distinguishing feature of basic research, in contrast, is that it is intended to expand knowledge (i.e., to identify universal principles that contribute to our understanding of how the world operates). Thus, it is knowledge, as an end in itself, that motivates basic research. Applied research also may result in new knowledge, but often on a more limited basis defined by the nature of an immediate problem. Although it may be hoped that basic research findings will eventually be helpful in solving particular problems, such problem solving is not the immediate or major goal of basic research.

Broad Versus Narrow Questions. The applied researcher is often faced with “fuzzy” issues that have multiple, often broad research questions, and addresses them in a “messy” or uncontrolled environment. For example, what is the effect of the provision of mental health services to people living with AIDS? What are the causes of homelessness?

Even when the questions are well-defined, the applied environment is complex, making it difficult for the researcher to eliminate competing explanations (e.g., events other than an intervention could be likely causes for changes in attitudes or behavior). Obviously, in the example above, aspects of an individual’s life other than mental health services received will affect that person’s well-being. The number and complexity of measurement tasks and dynamic real-world research settings pose major challenges for applied researchers. They also often require that researchers make conscious choices (trade-offs) about the relative importance of answering various questions and the degree of confidence necessary for each answer.

In contrast, basic research investigations are usually narrow in scope. Typically, the basic researcher is investigating a very specific topic and a very tightly focused question. For example, what is the effect of white noise on the short-term recall of nonsense syllables? Or what is the effect of cocaine use on fine motor coordination? The limited focus enables the researcher to concentrate on a single measurement task and to use rigorous design approaches that allow for maximum control of potentially confounding variables. In an experiment on the effects of white noise, the laboratory setting enables the researcher to eliminate all other noise variables from the environment, so that the focus can be exclusively on the effects of the variable of interest, the white noise.

Practical Versus Statistical Significance. There are differences also between the analytic goals of applied research and those of basic research. Basic researchers generally are most concerned with determining whether or not an effect or causal relationship exists, whether or not it is in the direction predicted, and whether or not it is statistically significant. In applied research, both practical significance and statistical significance are essential. Besides determining whether or not a causal relationship exists and is statistically significant, applied researchers are interested in knowing if the effects are of sufficient size to be meaningful in a particular context. It is critical, therefore, that the applied researcher understands the level of outcome that will be considered “significant” by key audiences and interest groups. For example, what level of reduced drug use is considered a practically significant outcome of a drug program? Is a 2% drop meaningful? Thus, besides establishing whether the intervention has produced statistically significant results, applied research has the added task of determining whether the level of outcome attained is important or trivial.
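The gap between statistical and practical significance can be made concrete with a quick calculation. The sketch below is illustrative only (the enrollment figures are hypothetical, not from the Handbook): a standard two-proportion z-test shows that a 2% drop in drug use is highly statistically significant with 10,000 students per group, yet whether two percentage points matter is a substantive judgment the test cannot make.

```python
import math

def two_prop_z(p1, n1, p2, n2):
    """Two-proportion z-test: returns the z statistic and two-sided p-value."""
    p_pool = (p1 * n1 + p2 * n2) / (n1 + n2)          # pooled proportion
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # Two-sided p-value from the standard normal CDF (via math.erf)
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# A 2% drop in drug use (30% -> 28%) with 10,000 students per group
# is statistically significant, but still only a 2-point change:
z, p = two_prop_z(0.30, 10_000, 0.28, 10_000)
print(f"z = {z:.2f}, p = {p:.4f}")

# The identical 2-point drop with 100 students per group is not significant:
z_small, p_small = two_prop_z(0.30, 100, 0.28, 100)
print(f"z = {z_small:.2f}, p = {p_small:.4f}")
```

The same effect size flips between "significant" and "not significant" purely as a function of sample size, which is why applied researchers must weigh practical importance separately.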

Theoretical “Opportunism” Versus Theoretical “Purity.” Applied researchers are more likely than basic researchers to use theory instrumentally. Related to the earlier concept of practical significance, the applied researcher is interested in applying and using a theory only if it identifies variables and concepts that will likely produce important, practical results. Purity of theory is not as much a driving force as is utility. Does the theory help solve the problem? Moreover, if several theories appear useful, then the applied researcher will combine them, it is hoped, in a creative and useful way. Those involved in evaluation research are most often trying to understand the “underlying theory” or logic of the program or policy they are studying and using that to guide the research.

For the basic researcher, on the other hand, it is the underlying formal theory that is of prime importance. Thus, the researcher will strive to have variables in the study that are flawless representations of the underlying theoretical constructs. In a study examining the relationships between frustration and aggression, for example, the investigator would try to be certain that the study deals with aggression and not another related construct, such as anger, and that frustration is actually manipulated, and not boredom.

Differences in Context

Open Versus Controlled Environment. The context of the research is a major factor in accounting for the differences between applied research and basic research. As noted earlier, applied research can be conducted in many diverse contexts, including business settings, hospitals, schools, prisons, and communities. These settings, and their corresponding characteristics, can pose quite different demands on applied researchers. The applied researcher is more concerned about generalizability of findings. Since application is a goal, it is important to know how dependent the results of the study are on the particular environment in which it was tested. In addition, lengthy negotiations are sometimes necessary for a researcher even to obtain permission to access the data.

Basic research, in contrast, is typically conducted in universities or similar academic environments and is relatively isolated from the government or business worlds. The environment is within the researcher’s control and is subject to close monitoring.

Client Initiated Versus Researcher Initiated. The applied researcher often receives research questions from a client or research sponsor, and sometimes these questions are poorly framed and incompletely understood. Clients of applied social research can include federal government agencies, state governments and legislatures, local governments, government oversight agencies, professional or advocacy groups, private research institutions, foundations, business corporations and organizations, and service delivery agencies, among others. The client is often in control, whether through a contractual relationship or by virtue of holding a higher position within the researcher’s place of employment (if the research is being conducted internally). Typically, the applied researcher needs to negotiate with the client about the project scope, cost, and deadlines. Based on these parameters, the researcher may need to make conscious trade-offs in selecting a research approach that affects what questions will be addressed and how conclusively they will be addressed.

University basic research, in contrast, is usually self-initiated, even when funding is obtained from sources outside the university environment, such as through government grants. The idea for the study, the approach to executing it, and even the timeline are generally determined by the researcher. The reality is that the basic researcher, in comparison with the applied researcher, operates in an environment with a great deal more flexibility, less need to let the research agenda be shaped by project costs, and less time pressure to deliver results by a specified deadline. Basic researchers sometimes can undertake multiyear incremental programs of research intended to build theory systematically, often with supplemental funding and support from their universities.

Research Team Versus Solo Scientist. Applied research is typically conducted by research teams. These teams are likely to be multidisciplinary, sometimes as a result of competitive positioning to win grants or contracts. Moreover, the substance of applied research often demands multidisciplinary teams, particularly for studies that address multiple questions involving different areas of inquiry (e.g., economic, political, sociological). These teams must often comprise individuals who are familiar with the substantive issue (e.g., health care) and others who have expertise in specific methodological or statistical areas (e.g., economic forecasting).

Basic research is typically conducted by an individual researcher who behaves autonomously, setting the study scope and approach. If there is a research team, it generally comprises the researcher’s students or other persons that the researcher chooses from the same or similar disciplines.

Differences in Methods

External Versus Internal Validity. A key difference between applied research and basic research is the relative emphasis on internal and external validity. Whereas internal validity is essential to both types of research, external validity is much more important to applied research. Indeed, the likelihood that applied research findings will be used often depends on the researchers’ ability to convince policy makers that the results are applicable to their particular setting or problem. For example, the results from a laboratory study of aggression using a bogus shock generator are not as likely to be as convincing or as useful to policy makers who are confronting the problem of violent crime as are the results of a well-designed survey describing the types and incidence of crime experienced by inner-city residents.

The Construct of Effect Versus the Construct of Cause. Applied research concentrates on the construct of effect. It is critical that the outcome measures are valid, that they accurately measure the variables of interest. Often, it is important for researchers to measure multiple outcomes and to use multiple measures to assess each construct fully. Mental health outcomes, for example, may include measures of daily functioning, psychiatric status, and use of hospitalization. Moreover, measures of real-world outcomes often require more than self-report and simple paper-and-pencil measures (e.g., self-report satisfaction with participation in a program). If attempts are being made to address a social problem, then real-world measures directly related to that problem are desirable. For example, if one is studying the effects of a program designed to reduce intergroup conflict and tension, then observations of the interactions among group members will have more credibility than group members’ responses to questions about their attitudes toward other groups. In fact, there is much research evidence in social psychology that demonstrates that attitudes and behavior often do not relate.

Basic research, on the other hand, concentrates on the construct of cause. In laboratory studies, the independent variable (cause) must be clearly explicated and not confounded with any other variables. It is rare in applied research settings that control over an independent variable is so clear-cut. For example, in a study of the effects of a treatment program for drug abusers, it is unlikely that the researcher can isolate the aspects of the program that are responsible for the outcomes that result. This is due to both the complexity of many social programs and the researcher’s inability in most circumstances to manipulate different program features to discern different effects.

Multiple Versus Single Levels of Analysis. The applied researcher, in contrast to the basic researcher, usually needs to examine a specific problem at more than one level of analysis, not only studying the individual, but often larger groups, such as organizations or even societies. For example, in one evaluation of a community crime prevention project, the researcher not only examined individual attitudes and perspectives but also measured the reactions of groups of neighbors and neighborhoods to problems of crime. These added levels of analysis may require that the researcher be conversant with concepts and research approaches found in several disciplines, such as psychology, sociology, and political science, and that he or she develop a multidisciplinary research team that can conduct the multilevel inquiry.

Similarly, because applied researchers are often given multiple questions to answer, because they must work in real-world settings, and because they often use multiple measures of effects, they are more likely to use multiple research methods, often including both quantitative and qualitative approaches. Although using multiple methods may be necessary to address multiple questions, it may also be a strategy used to triangulate on a difficult problem from several directions, thus lending additional confidence to the study results. Although it is desirable for researchers to use experimental designs whenever possible, often the applied researcher is called in after a program or intervention is in place, and consequently is precluded from building random assignment into the allocation of program resources. Thus, applied researchers often use quasi-experimental studies. The obverse, however, is rarer; quasi-experimental designs are generally not found in the studies published in basic research journals.

The Orientation of This Handbook

This second edition is designed to be a resource for professionals and students alike. It can be used in tandem with the Applied Social Research Methods Series that is coedited by the present editors. The series has more than 50 volumes related to the design of applied research, the collection of both quantitative and qualitative data, and the management and presentation of these data. Almost all the authors in the Handbook also authored a book in that series on the same topic.

Similar to our goal as editors of the book series, our goal in this Handbook is to offer a hands-on, how-to approach to research that is sensitive to the constraints and opportunities in the practical and policy environments, yet is rooted in rigorous and sound research principles. Abundant examples and illustrations, often based on the authors’ own experience and work, enhance the potential usefulness of the material to students and others who may have limited experience in conducting research in applied arenas. In addition, discussion questions and exercises in each chapter are designed to increase the usefulness of the Handbook in the classroom environment.

The contributors to the Handbook represent various disciplines (sociology, business, psychology, political science, education, economics) and work in diverse settings (academic departments, research institutes, government, the private sector). Through a concise collection of their work, we hope to provide in one place a diversity of perspectives and methodologies that others can use in planning and conducting applied social research. Despite this diversity of perspectives, methods, and approaches, several central themes are stressed across the chapters. We describe these themes in turn below.

The Iterative Nature of Applied Research. In most applied research endeavors, the research question (the focus of the effort) is rarely static. Rather, to maintain the credibility, responsiveness, and quality of the research project, the researcher must typically make a series of iterations within the research design. The iteration is necessary not because of methodological inadequacies, but because of successive redefinitions of the applied problem as the project is being planned and implemented. New knowledge is gained, unanticipated obstacles are encountered, and contextual shifts take place that change the overall research situation and in turn have effects on the research. The first chapter in this Handbook, by Bickman and Rog, describes an iterative approach to planning applied research that continually revisits the research question as trade-offs in the design are made. In Chapter 7, Maxwell also discusses the iterative, interactive nature of qualitative research design, highlighting the unique relationships that occur in qualitative research among the purposes of the research, the conceptual context, the questions, the methods, and validity.

Multiple Stakeholders. As noted earlier, applied research involves the efforts and interests of multiple parties. Those interested in how a study gets conducted and its results can include the research sponsor, individuals involved in the intervention or program under study, the potential beneficiaries of the research (e.g., those who could be affected by the results of the research), and potential users of the research results (such as policy makers and business leaders). In some situations, the cooperation of these parties is critical to the successful implementation of the project. Usually, the involvement of these stakeholders ensures that the results of the research will be relevant, useful, and hopefully used to address the problem that the research was intended to study.

Many of the contributors to this volume stress the importance of consulting and involving stakeholders in various aspects of the research process. Bickman and Rog describe the role of stakeholders throughout the planning of a study, from the specification of research questions to the choice of designs and design trade-offs. Similarly, in Chapter 4, on planning ethically responsible research, Sieber emphasizes the importance of researchers’ attending to the interests and concerns of all parties in the design stage of a study. Kane and Trochim, in Chapter 14, offer concept mapping as a structured technique for engaging stakeholders in the decision making and planning of research.

Ethical Concerns. Research ethics are important in all types of research, basic or applied. When the research involves or affects human beings, the researcher must attend to a set of ethical and legal principles and requirements that can ensure the protection of the interests of all those involved. Ethical issues, as Boruch and colleagues note in Chapter 5, commonly arise in experimental studies when individuals are asked to be randomly assigned into either a treatment condition or a control condition. However, ethical concerns are also raised in most studies in the development of strategies for obtaining informed consent, protecting privacy, guaranteeing anonymity, and/or ensuring confidentiality, and in developing research procedures that are sensitive to and respectful of the specific needs of the population involved in the research (see Sieber, Chapter 4; Fetterman, Chapter 17). As Sieber notes, although attention to ethics is important to the conduct of all studies, the need for ethical problem solving is particularly heightened when the researcher is dealing with highly political and controversial social problems, in research that involves vulnerable populations (e.g., individuals with AIDS), and in situations where stakeholders have high stakes in the outcomes of the research.

Enhancing Validity. Applied research faces challenges that threaten the validity of studies’ results. Difficulties in mounting the most rigorous designs, in collecting data from objective sources, and in designing studies that have universal generalizability require innovative strategies to ensure that the research continues to produce valid results. Lipsey and Hurley, in Chapter 2, describe the link between internal validity and statistical power and how good research practice can increase the statistical power of a study. In Chapter 6, Mark and Reichardt outline the threats to validity that challenge experiments and quasi-experiments and various design strategies for controlling these threats. Henry, in his discussion of sampling in Chapter 3, focuses on external validity and the construction of samples that can provide valid information about a broader population. Other contributors in Part III (Fowler & Cosenza, Chapter 12; Lavrakas, Chapter 16; Mangione & Van Ness, Chapter 15) focus on increasing construct validity through the improvement of the design of individual questions and overall data collection tools, the training of data collectors, and the review and analysis of data.
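The statistical-power concerns discussed by Lipsey and Hurley can be illustrated with a rough calculation. The sketch below uses the common textbook normal approximation for the power of a two-sample comparison (alpha = .05, two-sided, effect size expressed as Cohen's d); the sample sizes are hypothetical and the formula is a simplification of, not a quotation from, their chapter.

```python
import math

def phi(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def power_two_sample(d, n_per_group):
    """Approximate power of a two-sample comparison of means, assuming a
    standardized effect size d (Cohen's d), equal group sizes, and a
    two-sided test at alpha = .05 (normal approximation)."""
    z_crit = 1.959964                              # critical z for alpha = .05
    ncp = abs(d) * math.sqrt(n_per_group / 2)      # approximate noncentrality
    return phi(ncp - z_crit)

# A "small" effect (d = 0.2) with 50 participants per group is badly
# underpowered, but reaches the conventional 0.8 level near 400 per group:
print(round(power_two_sample(0.2, 50), 2))
print(round(power_two_sample(0.2, 400), 2))
```

The practical lesson matches the chapter's theme: a real program effect can easily go undetected, not because the program failed, but because the study was too small to see it.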

Triangulation of Methods and Measures. One method of enhancing validity is to develop converging lines of evidence. As noted earlier, a clear hallmark of applied research is the triangulation of methods and measures to compensate for the fallibility of any single method or measure. The validity of both qualitative and quantitative applied research is bolstered by triangulation in data collection. Yin (Chapter 8), Maxwell (Chapter 7), and Fetterman (Chapter 17) stress the importance of triangulation in qualitative research design, ethnography, and case study research. Similarly, Bickman and Rog support the use of multiple data collection methods in all types of applied research.

Qualitative and Quantitative. Unlike traditional books on research methods, this volume does not have separate sections for quantitative and qualitative methods. Rather, both types of research are presented together as approaches to consider in research design, data collection, analysis, and reporting. Our emphasis is to find the tools that best fit the research question, context, and resources at hand. Often, multiple tools are needed, cutting across qualitative and quantitative boundaries, to research a topic thoroughly and provide results that can be used. Chapter 9 by Tashakkori and Teddlie specifically focuses on the use of mixed methods designs.

Experimental approaches are discussed (Boruch et al., Chapter 5; Mark & Reichardt, Chapter 6; Lipsey & Hurley, Chapter 2) alongside qualitative approaches to design (Maxwell, Chapter 7), including case studies (Yin, Chapter 8) and ethnographies (Fetterman, Chapter 17), and approaches that are influenced by their setting (Harrison, Chapter 10). Data collection tools provided also include surveys (in person, mail, Internet, and telephone), focus groups (Stewart, Shamdasani, & Rook, Chapter 18), and newer approaches such as concept mapping (Kane & Trochim, Chapter 14).

Technological Advances. Recent technological advances can help applied researchers conduct their research more efficiently, with greater precision, and with greater insight than in the past. Clearly, advances in computing have improved the quality, timeliness, and power of research. Analyses of large databases with multiple levels of data would not be possible without high-speed computers. Statistical syntheses of research studies, called meta-analyses (Cooper, Patall, & Lindsay, Chapter 11), have become more common in a variety of areas, in part due to the accessibility of computers. Computers are required if the Internet is to be used for data collection, as described by Best and Harrison in Chapter 13. Qualitative studies can now benefit from computer technology, with software programs that allow for the identification and analysis of themes in narratives (Tashakkori & Teddlie, Chapter 9), programs that simply allow the researcher to organize and manage the voluminous amounts of qualitative data typically collected in a study (Maxwell, Chapter 7; Yin, Chapter 8), and laptops that can be used in the field for efficient data collection (Fetterman, Chapter 17). In addition to computers, other new technology provides innovative ways of collecting data, such as through videoconferencing (Fetterman, Chapter 17) and the Internet.

However, the researcher has to be careful not to get caught up in using technology that only gives the appearance of advancement. Lavrakas points out that the use of computerized telephone interviews has not been shown to save time or money over traditional paper-and-pencil surveys.

Research Management. The nature of the context in which applied researchers work highlights the need for extensive expertise in research planning. Applied researchers must take deadlines seriously and design research that can deliver useful information within the constraints of budget, time, and available staff. The key to quality work is to use the most rigorous methods possible, making intelligent and conscious trade-offs in scope and conclusiveness. This does not mean that any information is better than none, but that decisions about what information to pursue must be made very deliberately, with realistic assessments of the feasibility of executing the proposed research within the required time frame. Bickman and Rog (Chapter 1) and Boruch et al. (Chapter 5) describe the importance of research management from the early planning stages through the communication and reporting of results.

Conclusion

We hope that the contributions to this Handbook will help guide readers in selecting appropriate questions and procedures to use in applied research. Consistent with a handbook approach, the chapters are not intended to provide the details necessary for readers to use each method or to design comprehensive research; rather, they are intended to provide the general guidance readers will need to address each topic more fully. This Handbook should serve as an intelligent guide, helping readers select the approaches, specific designs, and data collection procedures that they can best use in applied social research.

PART I

Approaches to Applied Research

The four chapters in this section describe the key elements and approaches to designing and planning applied social research. The first chapter by Bickman and Rog presents an overview of the design process. It stresses the iterative nature of planning research as well as the multimethod approach. Planning an applied research project usually requires a great deal of learning about the context in which the study will take place as well as different stakeholder perspectives. It took one of the authors (L.B.) almost 2 years of a 6-year study to decide on the final design. The authors stress the trade-offs that are involved in the design phase as the investigator balances the needs for the research to be timely, credible, within budget, and of high quality. The authors note that as researchers make trade-offs in their research designs, they must continue to revisit the original research questions to ensure either that they can still be answered given the changes in the design or that they are revised to reflect what can be answered.

One of the aspects of planning applied research covered in Chapter 1, often overlooked in teaching and in practice, is the need for researchers to make certain that the resources necessary for implementing the research design are in place. These include both human and material resources, as well as other elements that can make or break a study, such as site cooperation. Many applied research studies fail because the assumed community resources never materialize. This chapter describes how to develop both financial and time budgets and how to modify the study design as needed based on what resources can be made available.

The next three chapters outline the principles of three major areas of design: experimental designs, descriptive designs, and making sure that the design meets ethical standards. In Chapter 2, Lipsey and Hurley highlight the importance of planning experiments with design sensitivity in mind. Design sensitivity, also referred to as statistical power, is the ability to detect a difference between the treatment and control conditions on an outcome if that difference is really there. In a review of previous studies, they report that almost half were underpowered and thus lacked the ability to detect reasonable-sized effects even if they were present. The low statistical power of many projects has been recognized by editors and grant reviewers to the extent that a power analysis has increasingly become a required component of a research design. The major contribution of this chapter is that the authors illustrate how statistical power is affected by many components of a study, and they offer several approaches for increasing power other than simply increasing sample size. In highlighting the components that affect statistical power, they show several ways in which the sensitivity of the research design can be strengthened to increase its overall statistical power. Most important, they demonstrate that the researcher does not have to rely only on increasing the sample size; good research practice (e.g., the use of valid and reliable measurement, maintaining the integrity and completeness of both the treatment and control groups) can increase the effect size and, in turn, the statistical power of the study. The addition of the new section on multilevel designs is especially appropriate for the increasing number of studies in which the unit of analysis is not an individual, such as a student, but a group, such as a class or a school.
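Lipsey and Hurley's point that power depends on more than sample size can be made concrete with a rough power calculation. The sketch below uses the normal approximation to a two-sample test of means; the effect size, reliability, and group sizes are illustrative assumptions, not figures from the chapter:

```python
from math import sqrt
from statistics import NormalDist

def power_two_group(effect_size: float, n_per_group: int, alpha: float = 0.05) -> float:
    """Approximate power of a two-sided, two-group comparison of means
    (normal approximation to the t test)."""
    z_crit = NormalDist().inv_cdf(1 - alpha / 2)
    noncentrality = effect_size * sqrt(n_per_group / 2)
    return 1 - NormalDist().cdf(z_crit - noncentrality)

# A "medium" standardized effect (d = 0.5) with 64 cases per group
# yields roughly 80% power...
print(round(power_two_group(0.5, 64), 2))                 # ~0.81

# ...but an unreliable outcome measure attenuates the observed effect
# (d_observed = d_true * sqrt(reliability)), so the same study with a
# measure of reliability .70 falls well below the conventional threshold.
print(round(power_two_group(0.5 * sqrt(0.70), 64), 2))    # ~0.66
```

In this hypothetical example, poor measurement costs the study about 15 points of power, which is exactly the kind of loss the authors argue can be prevented by good research practice rather than by recruiting more participants.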

As Henry points out in Chapter 3, sampling is a critical component of almost every applied research study, but it is most critical to the conduct of descriptive studies involving surveys of particular populations (e.g., surveys of homeless individuals). Henry describes both probability sampling and nonprobability sampling, the latter sometimes referred to as convenience sampling. When a random or representative sample cannot be drawn, knowing how to select the most appropriate nonprobability sample is critical. Henry provides a practical sampling design framework to help researchers structure their thinking about sampling decisions in the context of how those decisions affect total error. Total error, defined as the difference between the true population value and the estimate based on the sample data, involves three types of error: error due to differences in the population definition, error due to the sampling approach used, and error involved in the random selection process. Henry's framework outlines the decisions that affect total error in the presampling, sampling, and postsampling phases of the research. In his chapter, he focuses on the implications of the researcher's answers to the questions on sampling choices. In particular, Henry illustrates the challenges in making trade-offs to reduce total error while keeping the study goals and resources in mind.
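Of the three components of total error, only the random selection component can be quantified directly from sampling theory. As a minimal sketch (the numbers are illustrative, not from the chapter), here is the margin of error for an estimated proportion under simple random sampling:

```python
from math import sqrt
from statistics import NormalDist

def margin_of_error(p: float, n: int, confidence: float = 0.95) -> float:
    """Half-width of the confidence interval for a proportion under
    simple random sampling (normal approximation)."""
    z = NormalDist().inv_cdf((1 + confidence) / 2)
    return z * sqrt(p * (1 - p) / n)

# Worst case (p = .5): a simple random sample of 1,000 pins the estimate
# down to about +/- 3 percentage points...
print(round(margin_of_error(0.5, 1_000), 3))   # ~0.031

# ...and quadrupling the sample only halves the margin.
print(round(margin_of_error(0.5, 4_000), 3))   # ~0.015
```

Note what this formula does not capture: error from a flawed population definition and bias from a nonprobability (convenience) selection are not reduced by a larger n, which is why Henry frames sampling decisions in terms of total error rather than sampling error alone.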

Planning applied social research is not just the application of methods; it also involves attention to ethics and the rights of research participants. In Chapter 4, Sieber discusses three major areas of ethics that need to be considered in the design of research: strategies for obtaining informed consent; issues related to, and techniques for ensuring, privacy and confidentiality; and strategies for investigators to recognize research risk and, in turn, maximize the benefits of research. Sieber places special emphasis on these areas in the conduct of research with vulnerable populations (e.g., individuals with AIDS) and with children. We know that getting research approved by an institutional review board can sometimes be a long and tortuous process. This chapter, through its many examples and vignettes, will be of great help in obtaining that approval.

CHAPTER 1

Applied Research Design: A Practical Approach

Planning Applied Social Research

The chapters in this Handbook describe several approaches to conducting applied social research, including experimental studies (Boruch, Weisburd, Turner, Karpyn, & Littell, Chapter 5), qualitative research (Maxwell, Chapter 7; Fetterman, Chapter 17), and mixed methods studies (Tashakkori & Teddlie, Chapter 9). Regardless of the approach, all forms of applied research have two major phases, planning and execution, and four stages embedded within them (see Figure 1.1). In the planning phase, the researcher defines the scope of the research and develops a comprehensive research plan. During the second phase, the researcher implements and monitors the plan (design, data collection and analysis, and management procedures), followed by reporting and follow-up activities.

In this chapter, we focus on the first phase of applied research, the planning phase. Figure 1.2 summarizes the research planning approach advocated here, highlighting the iterative nature of the design process. Although our chapter applies to many different types of applied social research (e.g., epidemiological studies, survey research, and ethnographies), our examples are largely program evaluation examples, the area in which we have the most research experience. Focusing on program evaluation also permits us to cover many different planning issues, especially the interactions with the sponsor of the research and other stakeholders.


Other types of applied research need to consider the interests and needs of the research sponsor, but no other area has the variety of participants (e.g., program staff, beneficiaries, and community stakeholders) involved in the planning stage that program evaluation has.

Stage I of the research process starts with the researcher's development of an understanding of the relevant problem or societal issue. This process involves working with stakeholders to refine and revise study questions to make sure that the questions can be addressed given the research conditions (e.g., time frame, resources, and context) and can provide useful information. After developing potentially researchable questions, the investigator then moves to Stage II, developing the research design and plan. This phase involves several decisions and assessments, including selecting a design and proposed data collection strategies.

As noted, the researcher needs to determine the resources necessary to conduct the study, both when considering which questions are researchable and when making design and data collection decisions. This is an area where social science academic education and experience are most often deficient, and it is one reason why academically oriented researchers may at times fail to deliver research products on time and on budget.

Assessing the feasibility of conducting the study within the requisite time frame and with available resources involves analyzing a series of trade-offs in the type of design that can be employed, the data collection methods that can be implemented, the size and nature of the sample that can be considered, and other planning decisions. The researcher should discuss the full plan and the analysis of any necessary trade-offs with the research client or sponsor, and agreement should be reached on its appropriateness.

As Figure 1.2 illustrates, the planning activities in Stage II often occur simultaneously, until a final research plan is developed. At any point in the Stage II process, the researcher may find it necessary to revisit and revise earlier decisions, perhaps even finding it necessary to return to Stage I and renegotiate the study questions or timeline with the research client or funder. In fact, the researcher may find that the design that has been developed does not, or cannot, answer the original questions. The researcher needs to review and correct this discrepancy before moving on to Stage III, either revising the questions to bring them in line with what can be done with the design that has been developed or reconsidering the design trade-offs that were made and whether they can be revised to be in line with the questions of interest. At times, this may mean increasing the resources available, changing the sample being considered, or making other decisions that can increase the plausibility of the design to address the questions of interest.

Figure 1.1 The Conduct of Applied Research (Stage I: Definition; Stage II: Design/Plan; Stage III: Implementation; Stage IV: Reporting/Follow-up)

Depending on the type of applied research effort, these decisions can either be made in tandem with a client or by the research investigator alone. Clearly, involving stakeholders in the process can lengthen the planning process and, at some point, may not yield the optimal design from a research perspective. There typically needs to be a balance in determining who needs to be consulted, for what decisions, and when in the process. As described later in the chapter, the researcher needs to have a clear plan and rationale for involving stakeholders in various decisions. Strategies such as concept mapping (Kane & Trochim, Chapter 14) provide a structured mechanism for obtaining input that can help in designing a study. For some research efforts, such as program evaluation, collaboration and consultation with key stakeholders can help improve the feasibility of a study and may be important to improving the usefulness of the information (Rog, 1985). For other research situations, however, there may be a need for minimal involvement of others to conduct an appropriate study. For example, if access or "buy-in" is highly dependent on some of the stakeholders, then including them in all major decisions may be wise. However, technical issues, such as which statistical techniques to use, generally do not benefit from, or need, stakeholder involvement. In addition, there may be situations in which the science collides with the preferences of a stakeholder. For example, a stakeholder may want to do the research more quickly or with fewer participants. In cases such as these, it is critical for the researcher to provide persuasive information about the possible trade-offs of following the stakeholder advice, such as reducing the ability to find an effect if one is actually present (that is, lowering statistical power). Applied researchers often find themselves educating stakeholders about the possible trade-offs that could be made. The researcher will sometimes need to persuade stakeholders to think about the problem in a new way or demonstrate the difficulties in implementing the original design.

Figure 1.2 Applied Research Planning (elements: understand the problem; identify questions; refine/revise questions; choose design/data collection approaches; inventory resources; assess feasibility; determine trade-offs)

The culmination of Stage II is a comprehensively planned applied research project, ready for full-scale implementation. With sufficient planning completed at this point, the odds of a successful study are significantly improved, though far from guaranteed. As discussed later in this chapter, conducting pilot and feasibility studies further increases the odds that a study can be successfully mounted.

In the sections that follow, we outline the key activities that need to be conducted in Stage I of the planning process, then highlight the key features that need to be considered in choosing a design (Stage II) and the variety of designs available for different applied research situations. We then go into greater depth on various aspects of the design process, including selecting the data collection methods and approach, determining the resources needed, and assessing the research focus.

Developing a Consensus on the Nature of the Research Problem

Before an applied research study can even begin to be designed, there has to be a clear and comprehensive understanding of the nature of the problem being addressed. For example, if the study is focused on evaluating a program for homeless families being conducted in Georgia, the researcher should know what research and other available information exists about the needs and characteristics of homeless families in general and in Georgia specifically; what evidence base exists, if any, for the type of program being tested in the study; and so forth. In addition, if the study is being requested by an outside sponsor, it is important to have an understanding of the impetus for the study and what information is desired to inform decision making.

Strategies that can be used in gathering the needed information include the following:

• review relevant literature (research articles and reports, transcripts of legislative hearings, program descriptions, administrative reports, agency statistics, media articles, and policy/position papers by all major interested parties);

• gather current information from experts on the issue (representing all sides and perspectives) and from major interested parties;

• conduct information-gathering visits and observations to obtain a real-world sense of the context and to talk with persons actively involved in the issue;

• initiate discussions with the research clients or sponsors (legislative members; foundation, business, organization, or agency personnel; and so on) to obtain the clearest possible picture of their concerns; and

• if the study is a program evaluation, informally visit the program and talk with the staff, clients, and others who may be able to provide information on the program and/or the overall research context.

Developing the Conceptual Framework

Every study, whether explicitly or implicitly, is based on a conceptual framework or model that specifies the variables of interest and the expected relationships among them. In some studies, social and behavioral science theory may serve as the basis for the conceptual framework. For example, social psychological theories such as cognitive dissonance may guide investigations of behavior change. Other studies, such as program and policy evaluations, may be based not on formal academic theory but on statements of expectations about how policies or programs are purported to work. Bickman (1987, 1990) and others (e.g., Chen, 1990) have written extensively about the need for and usefulness of program theory to guide evaluations. The framework may be relatively straightforward, or it may be complex, as in the case of evaluations of comprehensive community reforms, for example, which are concerned with multiple effects and have a variety of competing explanations for those effects (e.g., Rog & Knickman, 2004).

In evaluation research, logic models have increased in popularity as a mechanism for outlining and refining the focus of a study (Frechtling, 2007; McLaughlin & Jordan, 2004; Rog, 1994; Rog & Huebner, 1992; Yin, Chapter 8, this volume). A logic model, as the name implies, displays the underlying logic of the program (i.e., how the program goals, resources, activities, and outcomes link together). In some instances, a program is designed without explicit attention to the evidence base available on the topic and/or without explicit attention to the immediate and intermediate outcomes each program component and activity needs to accomplish to ultimately reach the desired longer-term outcomes. The model helps display these gaps in logic and provides a guide for refining the program and/or outlining more of the expectations for the program. For example, community coalitions funded to prevent community violence need to have an explicit logic detailing the activities they are intended to conduct, which should lead to a set of outcomes that chain logically to the prevention of violence.

The use of logic modeling in program evaluation is an outgrowth of the evaluability assessment work of Wholey and others (e.g., Wholey, 2004), which advocates describing and displaying the underlying theory of a program as it is designed and implemented prior to conducting a study of its outcomes. Evaluators have since discovered the usefulness of logic models in assisting program developers in the program design phase, guiding the evaluation of a program's effectiveness, and communicating the nature of a program, as well as changes in its structure over time, to a variety of audiences. A program logic model is dynamic; it changes not only as the program matures but also as the researcher learns more about the program. In addition, a researcher may develop different levels of models for different purposes; for example, a global model may be useful for communicating to outside audiences about the nature and flow of a program, whereas a detailed model may be needed to help guide the measurement phase of a study.

In the design phase of a study (Stage II), the logic model becomes important in guiding both the measurement and the analysis of a study. For these tasks, the logic model needs to display not only the main features of a program and its outcomes but also the variables that are believed to mediate the outcomes, as well as those that could moderate an intervention's impact (Baron & Kenny, 1986). Mediating variables, often referred to as intervening or process variables, are those through which an independent variable (or program variable) influences an outcome. For example, the underlying theory of a therapeutic program designed to improve the overall well-being of families may indicate that the effect of the program is mediated by the therapeutic alliance developed between the families and the program staff. In other words, without the development of a therapeutic alliance, the program is not expected to have an effect. Often, mediators are short-term outcomes that a program must logically accomplish first in order to achieve its longer-term outcomes.
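The mediator logic above can be sketched with a small simulation (the scenario and all numbers are hypothetical, chosen only to illustrate the Baron and Kenny idea): if a program improves well-being only by building a therapeutic alliance, the program's estimated effect should shrink toward zero once the mediator is controlled.

```python
import random

# Hypothetical fully mediated model: program X builds alliance M,
# and M (not X directly) improves well-being Y.
random.seed(0)
n = 20_000
x = [random.randint(0, 1) for _ in range(n)]              # program vs. control
m = [0.8 * xi + random.gauss(0, 1) for xi in x]           # alliance depends on X
y = [0.7 * mi + random.gauss(0, 1) for mi in m]           # well-being depends only on M

def ols1(x1, y):
    """OLS slope for y ~ x1."""
    mx, my = sum(x1) / len(x1), sum(y) / len(y)
    return sum((a - mx) * (c - my) for a, c in zip(x1, y)) / sum((a - mx) ** 2 for a in x1)

def ols2(x1, x2, y):
    """OLS slopes for y ~ x1 + x2, via the 2x2 normal equations."""
    mx1, mx2, my = sum(x1) / len(x1), sum(x2) / len(x2), sum(y) / len(y)
    s11 = sum((a - mx1) ** 2 for a in x1)
    s22 = sum((b - mx2) ** 2 for b in x2)
    s12 = sum((a - mx1) * (b - mx2) for a, b in zip(x1, x2))
    s1y = sum((a - mx1) * (c - my) for a, c in zip(x1, y))
    s2y = sum((b - mx2) * (c - my) for b, c in zip(x2, y))
    det = s11 * s22 - s12 ** 2
    return (s22 * s1y - s12 * s2y) / det, (s11 * s2y - s12 * s1y) / det

total_effect = ols1(x, y)        # X -> Y ignoring the mediator (about 0.8 * 0.7)
direct, via_m = ols2(x, m, y)    # controlling for M: direct X effect near zero
print(f"total effect of program: {total_effect:.2f}")
print(f"direct effect controlling for alliance: {direct:.2f}")
```

The pattern of results, a sizable total effect that largely disappears once the mediator is in the model, is what a researcher would look for as evidence consistent with mediation.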

Moderators are those variables that explain differences in outcomes due to preexisting conditions. For example, demographic variables, such as gender, age, and income, are often tested as moderators of a program's effects. Contextual variables can also act as moderators of a program's effects; for example, a housing program for homeless families is expected to have a greater effect on housing stability in communities with higher housing vacancy rates than in those with lower rates (i.e., less available housing).

Identifying the Research Questions

As noted in the introduction to this Handbook, one of the major differences between basic research and applied research is that the basic researcher is more autonomous than the applied researcher. Basic research, when externally funded, is typically conducted through a relatively unrestricted grant mechanism; applied research is more frequently funded through contracts and cooperative agreements. Even when applied research is funded through grant mechanisms, such as with foundations, there is usually a "client" or sponsor who specifies (or at least guides) the research agenda and requests the research results. Most often, studies have multiple stakeholders: sponsors, interested beneficiaries, and potential users (Bickman & Rog, 1986). The questions to be addressed by an applied study tend to be posed by individuals other than the researcher, often by nontechnical persons in nontechnical language.

Therefore, one of the first activities in applied research is working with the study clients to develop a common understanding of the research agenda: the research questions. Phrasing study objectives as questions is desirable in that it leads to a more clearly focused discussion of the type of information needed. It also makes it more likely that key terms (e.g., welfare dependency, drug use) will be operationalized and clearly defined. Using logic models also helps focus the questions on what is expected from the program and supports the move to measurable variables for studying both the process of an intervention or program and its expected outcomes. Later, after additional information has been gathered and reviewed, the parties will need to reconsider whether these questions are the "right" questions and whether it is possible, with a reasonable degree of confidence, to obtain answers to them within the available resource and time constraints.

Clarifying the Research Questions

In discussing the research agenda with clients, the researcher will usually identify several types of questions. For example, in a program evaluation, researchers are frequently asked to produce comprehensive information on both the implementation ("what actually is taking or took place") and the effects ("what caused what") of an intervention. When research agendas are as broad as those in this example, they pose significant challenges for planning in terms of allocating data collection resources among the various study objectives. It is helpful to continue working with the sponsors to refine the questions further, both to plan the scope of the research more realistically and to ensure that the questions are specific enough to be answered in a meaningful way, and in a way that is agreed on by the clients.

The researcher should guard against biasing the scope of the research. The questions left unaddressed by a study can be as important as, or more important than, the questions answered. If the research addresses only questions likely to support one position in a controversy and fails to develop information relevant to the concerns voiced by other interested parties, it will be seen as biased, even if the results produced are judged to be sound and conclusive. For example, an evaluation that is limited to measuring just the stated goals of a program may be biased if possible unintended negative side effects of the program are not considered. Thus, the research agenda should be as comprehensive as is necessary to address the concerns of all parties. Resource constraints will limit the number and scope of questions that may be addressed, but at a minimum the researcher should state explicitly what would be necessary for a comprehensive study and how the research meets or does not meet those requirements. Resources will also determine the degree of certainty one can have in an answer. Thus, a representative survey is much more expensive to conduct than convenience sampling, but the generalizability of the results will be much stronger with the representative sample.

Ideally, the development of the conceptual framework/logic model will occur simultaneously with the identification of the research questions. Once the conceptual framework has been agreed on, the researcher can further refine the study questions, grouping questions and identifying which are primary and which are secondary. Areas that need clarification include the time frame of the data collection (i.e., "Will it be a cross-sectional study or one that will track individuals or cohorts over time? How long will the follow-up period be?"); how much the client wants to generalize (e.g., "Is the study interested in providing outcome information on all homeless families that could be served in the program or only those families with disabilities?"); how certain the client wants the answers to be (i.e., "How precise and definitive should the data collected be to inform the decisions?"); and what subgroups the client wants to know about (e.g., "Is the study to provide findings on homeless families in general only, or is there interest in outcomes for subgroups of families, such as those who are homeless for the first time, those who are homeless more than once but for short durations, and those who are 'chronically homeless'?"). The level of specificity should be very high at this point, enabling a clear agreement on what information will be produced. As the next section suggests, these discussions between researcher and research clients often take on the flavor of a negotiation.

Negotiating the Scope of a Study

Communication between the researcher and stakeholders (the sponsor and all other interested parties) is important in all stages of the research process. To foster maximum and accurate utilization of results, it is recommended that the researcher interact regularly with the research clients, from the initial discussions of the "problem" through recommendations and follow-up. In the planning phase, we suggest several specific communication strategies. As soon as the study is sponsored, the researcher should connect with the client to develop a common understanding of the research questions, the client's time frame for study results, and anticipated uses for the information. The parties can also discuss preliminary ideas regarding a conceptual model for the study. Even in this initial stage, it is important for the researcher to begin the discussion of the contents and appearance of the final report. This is an opportunity for the researcher to explore whether the client expects only to be provided information on study results or whether the client anticipates that the researcher will offer recommendations for action. It is also an opportunity for the researcher to determine whether he or she will be expected to provide interim findings to the client as the study progresses.

At this juncture, the researcher also needs to have an understanding of the amount of funds or resources that will be available to support the research. Cost considerations will determine the scope and nature of the project, and the investigator needs to consider the resources while identifying and reviewing the research questions. In some studies, the budget is set prior to any direct personal contact with the research client. In others, researchers may help to shape the scope and the resources needed simultaneously or there may be a pilot effort that helps design the larger study.

Based on a comprehensive review of the literature and other inputs (e.g., from experts) and an initial assessment of resources, the researcher should decide whether the research questions need to be refined. The researcher and client then typically discuss the research approaches under consideration to answer these questions, as well as the study limitations. This gives the researcher an opportunity to introduce constraints into the discussion regarding available resources, time frames, and any trade-offs contemplated regarding the likely precision and conclusiveness of answers to the questions.

In most cases, clients want sound, well-executed research and are sympathetic to researchers’ need to preserve the integrity of the research. Some clients, however, have clear political, organizational, or personal agendas, and will push researchers to provide results in unrealistically short time frames or to produce results supporting particular positions. Other times, the subject of the study itself may generate controversy, a situation that requires the researcher to take extreme care to preserve the neutrality and credibility of the study. Several of the strategies discussed later attempt to balance client and researcher needs in a responsible fashion; others concentrate on opening research discussions up to other parties (e.g., advisory groups). In the earliest stages of research planning, it is possible to initiate many of these kinds of activities, thereby bolstering the study’s credibility, and often its feasibility.

Stage II: The Research Design

Having developed a preliminary study scope during Stage I, the researcher moves to Stage II, developing a research design and plan. During this stage, the applied researcher needs to perform five activities almost simultaneously: selecting a design, choosing data collection approaches, inventorying resources, assessing the feasibility of executing the proposed approach, and determining trade-offs. These activities and decisions greatly influence one another. For example, a researcher may revisit preliminary design selections after conducting a practical assessment of the resources available to do the study, and may change data collection plans after discovering weaknesses in the data sources during planning.

The design serves as the architectural blueprint of a research project, linking design, data collection, and analysis activities to the research questions and ensuring that the complete research agenda will be addressed. A research study’s credibility, usefulness, and feasibility rest with the design that is implemented. Credibility refers to the validity of a study and whether the design is sufficiently rigorous to provide support for definitive conclusions and desired recommendations. Credibility is also, in part, determined by who is making the judgment. To some sponsors, a credible project need only use a pre-post design. Others may require a randomized experimental design to consider the findings credible. Credibility is also determined by the research question. A representative sample will make a descriptive study more credible than a sample of convenience or one with known biases. In contrast, representativeness is not as important in a study designed to determine the causal link between a program and outcomes. The planner needs to be sure that the design matches the types of information needed. For example, under most circumstances, the simple pre-post design should not be used if the purpose of the study is to draw causal conclusions.

Usefulness refers to whether the design is appropriately targeted to answer the specific questions of interest. A sound study is of little use if it provides definitive answers to the wrong questions. Feasibility refers to whether the research design can be executed, given the requisite time and other resource constraints. All three factors, credibility, usefulness, and feasibility, must be considered to conduct high-quality applied research.

Design Dimensions

Maximizing Validity

In most instances, a credible research design is one that maximizes validity: it provides a clear explanation of the phenomenon under study and controls all plausible biases or confounds that could cloud or distort the research findings. Four types of validity are typically considered in the design of applied research (Bickman, 1989; Shadish, Cook, & Campbell, 2002).

• Internal validity: the extent to which causal conclusions can be drawn, or the degree of certainty that “A” caused “B,” where A is the independent variable (or program) and B is the dependent variable (or outcome).

• External validity: the extent to which it is possible to generalize from the data and context of the research study to other populations, times, and settings (especially those specified in the statement of the original problem/issue).

• Construct validity: the extent to which the constructs in the conceptual framework are successfully operationalized (e.g., measured or implemented) in the research study. For example, does the program as actually implemented accurately represent the program concept, and do the outcome measures accurately represent the outcome? Programs change over time, especially if fidelity to the program model or theory is not monitored.

• Statistical conclusion validity: the extent to which the study has used appropriate sample size, measures, and statistical methods to enable it to detect the effects if they are present. This is also related to statistical power.

All types of validity are important in applied research, but the relative emphases may vary, depending on the type of question under study. With questions dealing with the effectiveness of an intervention or impact, for example, more emphasis should be placed on internal and statistical conclusion validity than on external validity. The researcher of such a study is primarily concerned with finding any evidence that a causal relationship exists and is typically less concerned (at least initially) about the transferability of that effect to other locations or populations. For descriptive questions, external and construct validity may receive greater emphasis. Here, the researcher may consider the first priority to be developing a comprehensive and rich picture of a phenomenon. The need to make cause-effect attributions is not relevant. Construct validity, however, is almost always relevant.

Operationalizing the Key Variables and Concepts

The process of refining and revising the research questions undertaken in Stage I should have yielded a clear understanding of the key research variables and concepts. For example, if the researcher is charged with determining the extent of high school drug use (a descriptive task), key outcome variables might include drug type, frequency and duration of drug use, and drug sales behavior. Attention should be given at this point to reassessing whether the researcher is studying the right variables, that is, whether these are “useful” variables.

Outlining Comparisons

An integral part of design is identifying whether and what comparisons can be made, that is, which variables must be measured and compared with other variables or with themselves over time. In simple descriptive studies, there are decisions to be made regarding the time frame of an observation and how many observations are needed. Typically, there is no explicit comparison in simple descriptive studies. Normative studies are an extension of descriptive studies in that the interest is in comparing the descriptive information to some appropriate “standard.” The decision for the researcher is to determine where that standard will be drawn from or how it will be developed. In correlative studies, the design is again an extension of simple descriptive work, with the difference that two or more descriptive measures are arrayed against each other to determine whether they covary. Impact or outcome studies, by far, demand the most judgment and background work. To make causal attributions (X causes Y), we must be able to compare the condition of Y when X occurred with what the condition of Y would have been without X. For example, to know if a drug treatment program reduced drug use, we need to compare drug use among those who were in the program with those who did not participate in the program.
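In its simplest form, the program-versus-nonparticipant comparison just described amounts to a difference in group means. A minimal sketch, with entirely made-up outcome values (not data from any study):

```python
# Hypothetical outcome: days of drug use in the past month for each group.
program = [2, 0, 1, 3, 1, 0, 2, 1]        # participants
comparison = [4, 3, 5, 2, 4, 3, 5, 4]     # nonparticipants

def mean(xs):
    """Arithmetic mean of a list of numbers."""
    return sum(xs) / len(xs)

# A negative difference suggests lower drug use among participants.
effect = mean(program) - mean(comparison)
print(f"estimated difference: {effect:.2f} days")
```

Whether such a raw difference supports a causal attribution depends, of course, on how the comparison group was formed, which is the subject of the design discussion that follows.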

Level of Analysis

Knowing what level of analysis is necessary is also critical to answering the “right” question. For example, if we are conducting a study of drug use among high school students in Toledo, are we interested in drug use by individual students, aggregate survey totals at the school level, aggregate totals at the school district level, or totals for the city as a whole?

Correct identification of the proper level or unit of analysis has important implications for both data collection and analysis. The Stage I client discussions should clarify the desired level of analysis. It is likely that the researcher will have to help the client think through the implications of these decisions, providing information about research options and the types of findings that would result. In addition, this is an area that is likely to be revisited if initial plans to obtain data at one level (e.g., the individual student level) prove to be prohibitively expensive or unavailable. A design fallback position may be to change to an aggregate analysis level (e.g., the school), particularly if administrative data at this level are more readily available and less costly to access.

In an experiment, the level of analysis is typically determined by the level at which the intervention is introduced. For example, if the intervention was targeted at individual students, then that should usually be the level of analysis. Similarly, a classroom intervention should use the classroom as the level, and a schoolwide intervention should use the school. It is tempting to use the lowest level with the largest sample size because that provides the most statistical power, that is, the ability to find an effect if one is there. For example, if an intervention is at the school level and there is only one treatment and one control school, then the sample size is two, not the total number of students. Statistical programs that take into account multilevel designs are easily accessible (Graham, Singer, & Willett, 2008). However, the real challenge with multilevel designs is finding enough units (e.g., schools) to cooperate as well as enough resources to pay for the study.
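The cost of clustering can be made concrete with the standard design-effect formula, DEFF = 1 + (m - 1) * ICC, where m is the cluster size and ICC is the intraclass correlation. This is a common textbook result rather than a formula from this chapter, and the numbers below are hypothetical:

```python
def design_effect(cluster_size: int, icc: float) -> float:
    """Design effect (DEFF) for a clustered sample: 1 + (m - 1) * ICC."""
    return 1 + (cluster_size - 1) * icc

def effective_sample_size(n_total: int, cluster_size: int, icc: float) -> float:
    """Number of independent observations the clustered sample is 'worth'."""
    return n_total / design_effect(n_total=cluster_size, icc=icc) if False else \
        n_total / design_effect(cluster_size, icc)

# Hypothetical: 2 schools x 500 students each, with a modest ICC of 0.10.
deff = design_effect(500, 0.10)
n_eff = effective_sample_size(1000, 500, 0.10)
print(f"DEFF = {deff:.1f}, effective n = {n_eff:.1f}")
```

Even with 1,000 students on paper, heavy clustering in two schools leaves far fewer effectively independent observations, which is why the chapter counts schools, not students, as the units.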

Population, Geographic, and Time Boundaries

Population, geographic, and time boundaries are related to external validity issues. Each can affect the generalizability of the research results, for instance, whether the results will be representative of all high school students, all high school students graduating within the past 3 years, all students in urban areas, and so on. Population generalizability and geographic generalizability are probably the most commonly discussed types of generalizability, and researchers frequently have heated debates concerning whether the persons or organizations that they have studied and the locations where they conducted their studies will allow them to use their findings in different locations and with different populations. In basic research, generalizability or external validity is usually not considered, but in applied research some may rate it more important than internal validity (Cronbach et al., 1980).

Time boundaries also can be crucial to the generalizability of results, especially if the study involves extant data that may be more than a few years old. With the fast pace of change, questions can easily arise about whether survey data on teenagers from even just 2 years prior are reflective of current teens’ attitudes and behaviors.

The researcher cannot study all people, all locations, or all time periods relevant to the problem/program under scrutiny. One of the great “inventions” for applied social research is sampling. Sampling allows the researcher to study only a subset of the units of interest and then generalize to all these units with a specifiable degree of error. It offers benefits in terms of reducing the resources necessary to do a study; it also sometimes permits more intensive scrutiny by allowing a researcher to concentrate on fewer cases. More details on sampling can be found in Henry (1990; see also Sieber, Chapter 4, this volume).
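The “specifiable degree of error” can be illustrated with the familiar margin-of-error formula for a sample proportion under simple random sampling, ME = z * sqrt(p * (1 - p) / n). The proportion and sample size below are hypothetical:

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """95% margin of error for a sample proportion (simple random sample)."""
    return z * math.sqrt(p * (1 - p) / n)

# Hypothetical: 30% of a simple random sample of 400 students report drug use.
me = margin_of_error(0.30, 400)
print(f"+/- {me:.3f}")  # roughly +/- 4.5 percentage points
```

More complex designs (stratified or clustered samples) modify this formula, but the basic logic, that error shrinks predictably as n grows, is the same.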

Level of Precision

Knowing how precise an answer must be is also crucial to design decisions. The level of desired precision may affect the rigor of the design. When sampling is used, the level of desired precision also has important ramifications for how the sample is drawn and the size of the sample used. In initial discussions, the researcher and the client should reach an understanding regarding the precision desired or necessary overall and with respect to conclusions that can be drawn about the findings for specific subgroups. The cost of a study is very heavily influenced by the degree of precision or certainty required. In sampling, more certainty usually requires a bigger sample size, with diminishing returns when samples approach 1,000. However, if the study is focused on subgroups, such as gender or ethnicity, then the sample at those levels of analysis must also be larger.
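The diminishing returns near n = 1,000 can be seen by computing the worst-case (p = 0.5) sample size needed for several target margins of error; the target values below are illustrative:

```python
import math

def required_n(me: float, z: float = 1.96, p: float = 0.5) -> int:
    """Sample size needed for a target margin of error (worst case p = 0.5)."""
    return math.ceil(z**2 * p * (1 - p) / me**2)

# Tightening the target margin of error rapidly inflates the required n.
for me in (0.05, 0.03, 0.015):
    print(f"+/-{me:.1%}: n = {required_n(me)}")
```

Moving from a 5-point to a 3-point margin roughly triples the required sample, and halving the error again roughly quadruples it, which is why precision, and especially subgroup precision, is so often a budget trade-off.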

Another example of precision is the breadth and depth of a construct that need to be measured in a study. More breadth usually requires more questions, and greater depth often requires the use of in-depth interviewing, both likely increasing the costs of data collection, especially if administered in person or with a telephone interview. The level of precision is discussed later in the section dealing with trade-offs, as the level of precision is often a trade-off decision that must be made within the budget of a study.

Choosing a Design

There are three main categories of applied research designs: descriptive, experimental, and quasi-experimental. In our experience, developing an applied research design rarely allows for implementing a design straight from a textbook; rather, the process more typically involves the development of a hybrid, reflecting combinations of designs and other features that can respond to multiple study questions, resource limitations, dynamics in the research context, and other constraints of the research situation (e.g., time deadlines). Thus, our intent here is to provide the reader with the tools to shape the research approach to the unique aspects of each situation. Those interested in more detailed discussion should consult Mark and Reichardt’s work on quasi-experimentation (Chapter 6) and Boruch and colleagues’ chapter on randomized experiments (Chapter 5). In addition, our emphasis here is on quantitative designs; for more on qualitative designs, readers should consult Maxwell (Chapter 7), Yin (Chapter 8), and Fetterman (Chapter 17).

Descriptive Research Designs

Description and Purpose. The overall purpose of descriptive research is to provide a “picture” of a phenomenon as it naturally occurs, as opposed to studying the effects of the phenomenon or intervention. Descriptive research can be designed to answer questions of a univariate, normative, or correlative nature, that is, describing only one variable, comparing the variable to a particular standard, or summarizing the relationship between two or more variables.

Key Features. Because the category of descriptive research is broad and encompasses several different types of designs, one of the easiest ways to distinguish this class of research from others is to identify what it is not: It is not designed to provide information on cause-effect relationships.

Variations. There are only a few features of descriptive research that vary. These are the representativeness of the study data sources (e.g., the subjects/entities), that is, the manner in which the sources are selected (e.g., universe, random sample, stratified sample, nonprobability sample); the time frame of measurement, that is, whether the study is a one-shot, cross-sectional study or a longitudinal study; whether the study involves some basis for comparison (e.g., with a standard, another group or population, data from a previous time period); and whether the design is focused on a simple descriptive question, on a normative question, or on a correlative question.

When to Use. A descriptive approach is appropriate when the researcher is attempting to answer “what is,” “what was,” or “how much” questions.

Strengths. Exploratory descriptive studies can be low cost, relatively easy to implement, and able to yield results in a fairly short period of time. Some efforts, however, such as those involving major surveys, may sometimes require extensive resources and intensive measurement efforts. The costs depend on factors such as the size of the sample, the nature of the data sources, and the complexity of the data collection methods employed. Several chapters in this volume outline approaches to surveys, including mail surveys (Mangione & Van Ness, Chapter 15), internet surveys (Best & Harrison, Chapter 13), and telephone surveys (Lavrakas, Chapter 16).

Limitations. Descriptive research is not intended to answer questions of a causal nature. Major problems can arise when the results from descriptive studies are inappropriately used to make causal inferences, a temptation for consumers of correlational data.

Experimental Research Designs

Description and Purpose. The primary purpose in conducting an experimental study is to test the existence of a causal relationship between two or more variables. In an experimental study, one variable, the independent variable, is systematically varied or manipulated so that its effects on another variable, the dependent variable, can be measured. In applied research, such as in program evaluation, the “independent variable” is typically a program or intervention (e.g., a drug education program) and the “dependent variables” are the desired outcomes or effects of the program on its participants (e.g., drug use, attitudes toward drug use).

Key Features. The distinguishing characteristic of an experimental study is the random assignment of individuals or entities to the levels or conditions of the study. Random assignment is used to control most biases at the time of assignment and to help ensure that only one variable, the independent (experimental) variable, differs between conditions. With well-implemented random assignment, all individuals have an equal likelihood of being assigned either to the treatment group or to the control group. If the total number of individuals or entities assigned to the treatment and control groups is sufficiently large, then any differences between the groups should be small and due to chance.
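Random assignment of this kind can be sketched in a few lines; the participant IDs, the even split, and the fixed seed below are illustrative choices, not requirements of the method:

```python
import random

def randomly_assign(ids: list, seed: int = 42) -> dict:
    """Shuffle participants and split them evenly into treatment and control."""
    rng = random.Random(seed)   # fixed seed so the assignment is reproducible
    shuffled = ids[:]           # copy so the caller's list is left untouched
    rng.shuffle(shuffled)       # every ordering is equally likely
    half = len(shuffled) // 2
    return {"treatment": shuffled[:half], "control": shuffled[half:]}

# Hypothetical roster of 40 students.
groups = randomly_assign([f"student_{i:03d}" for i in range(40)])
print(len(groups["treatment"]), len(groups["control"]))  # 20 20
```

In practice, researchers document the seed (or use a sealed assignment procedure) so the assignment can be audited, which supports the credibility concerns discussed in Stage I.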
