NOTES AND LETTERS

ABSTRACT
Does collaboration improve the quality of scientific research? Editorial decision on papers submitted to a leading social psychology journal was cross-classified by number of authors. A small relationship in the predicted direction was obtained and it persisted in the face of two relevant controls.
Collaboration and the Quality of Research

Stanley Presser
The dramatic growth of collaborative research over the last few decades has been clearly documented.1 Yet some of the effects of this change remain unclear. The present paper looks at one possible consequence by testing the hypothesis that researchers who collaborate produce higher quality work than those who work alone. More specifically, the paper analyzes whether editorial judgment (based on referee evaluations) on papers submitted to a leading social psychology journal is related to the number of authors a paper has.

There are many reasons why the proverbial 'two heads are better than one' might apply to scientific research. Pelz and Andrews, in discussing their finding that contact with colleagues enhances scientific performance, suggest that

. . . one way, of course, is by providing new ideas - jostling a man out of his old ways of thinking about things. . . Then there is the possibility of a colleague catching an error which the man himself is too engrossed to see. . . Still another way colleague contacts may help a person is in keeping him on his toes - simple things, like putting in a good day's work, or running a test the way it should be done. . .2
While contact, by itself, may suffice for these purposes, collaboration should accentuate the effect. In addition, collaboration may increase efficiency through the principle of the division of labour.3 Thus two investigators - say, one particularly skillful in experimental design, the other in data analysis - should produce a better paper than either working alone.

As an attempt to test this hypothesis, the editorial decision on all papers submitted to Sociometry (now Social Psychology Quarterly) between 1 September 1976 and 31 August 1977 was cross-classified by number of authors. The expectation was that
multiple-authored papers would be judged more favourably than single-authored submissions.4

During the 1976-77 period the journal received 303 papers. Decisions about these papers fell into one of four categories. One-fifth of the submissions were screened by office staff and rejected without further review. These papers are excluded from the analyses reported below because many of them were rejected on grounds having nothing to do with quality (for example, inappropriate subject matter).5 Of the remaining 242 submissions, all of which were sent to referees, about 60 percent were rejected on the basis of reviewer evaluations.6 Among the other 40 percent, about half were 'revise and resubmit' decisions that were either rejected on resubmission or not resubmitted, and half were accepted for publication (many after initially having been revised and resubmitted). The number of authors on these 242 papers varied from one to five, 45 percent coming from single authors and 55 percent from multiple authors.
TABLE 1
Editorial Decision by Number of Authors

                                        Number of Authors
Editorial Decision                      One            Two+
Initial Reject
Revise and Resubmit
Accept

X² = 5.7, 2 df, p < 0.06
Initial Reject vs Revise and Resubmit plus Accept: X² = 5.1, 1 df, p < 0.03; Q = 0.29
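As note 7 below explains, the chi-square values reported in this paper are likelihood-ratio statistics computed with ECTA, and Table 1 also reports Yule's Q. The following Python sketch is only an illustration of what those two quantities measure for a 2x2 table of editorial decision by authorship. The column and row totals used here follow the marginals given in the text (242 refereed papers, 45 percent single-authored, about 60 percent rejected), but the split of the interior cells is assumed for the example, so the output will not reproduce the published values; ECTA itself worked from log-linear models rather than this hand calculation.

import math

def g2_2x2(a, b, c, d):
    # Likelihood-ratio chi-square (G^2) for the 2x2 table [[a, b], [c, d]]:
    # G^2 = 2 * sum(observed * ln(observed / expected under independence)).
    n = a + b + c + d
    cells = [(a, (a + b) * (a + c) / n), (b, (a + b) * (b + d) / n),
             (c, (c + d) * (a + c) / n), (d, (c + d) * (b + d) / n)]
    return sum(2 * obs * math.log(obs / exp) for obs, exp in cells if obs > 0)

def yules_q(a, b, c, d):
    # Yule's Q for the same table: (ad - bc) / (ad + bc).
    return (a * d - b * c) / (a * d + b * c)

# Hypothetical interior counts (marginals match the text; the split is assumed).
# Rows: Initial Reject vs Revise-and-Resubmit plus Accept.
# Columns: one author vs two or more authors.
a, b = 70, 75   # Initial Reject: one author, two+ authors
c, d = 39, 58   # Revise and Resubmit plus Accept: one author, two+ authors
print(round(g2_2x2(a, b, c, d), 2), round(yules_q(a, b, c, d), 2))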
In line with the hypothesis, Table 1 shows that collaboration bears a modest relation to quality, as measured by editorial decision.7 Although there is a slight difference in the Accept category, the main effect of working with others is on the lower two categories, suggesting that collaboration leads less to producing very good papers and more to avoiding bad ones. Most of the variation in the table is captured by the contrast between Initial Reject and the combination of Revise and Resubmit plus Accept.8

The possibility that the relationship between joint work and better work is spurious can be examined by considering two factors that might account for it. The first is author's discipline. Social Psychology Quarterly is unusual in that its submissions come from a heterogeneous group. Coding the departmental affiliation of the first listed author (or of the corresponding author, in the few cases where the two were not
the same), 33 percent of the 1976-77 refereed papers came from psychologists, 48 percent from sociologists, and 19 percent from others. If collaboration is more or less common in one of these fields, and if field is related to quality of work, then field must be held constant in considering the link between collaboration and quality.

For these data, it turns out that discipline is associated with both number of authors and editorial decision. Psychologists are considerably more likely than sociologists or others to collaborate, and somewhat more likely than sociologists to write papers judged of higher quality. (Not surprisingly, both psychologists and sociologists are much more likely than other authors to have their papers judged favourably in this social psychology journal.) There may be something about experiments, which psychologists are more likely to do, that lends itself to joint work. A greater consensus may also exist in social psychology about the quality of an experiment than about other kinds of work, since an experiment usually fits within a well developed paradigm.

Do these factors account for the observation that those who collaborate write better papers? The answer, given in Table 2, is clearly 'no'. The initial relation holds up in each of the three disciplinary areas. (The categories of editorial decision were collapsed to increase cell size, but this is the same collapsing that captured most of the variation in Table 1.) Authors who work with others are more likely to write higher quality papers, regardless of discipline. This finding is not only evidence against a disciplinary explanation, but also suggests that the original result is not a function of type of paper (experiment, survey, participant observation, and so on), since a good part of the variation in type of paper should be reflected in the coding of discipline.9

The other specification variable that can be examined for these data is the highest degree awarded by the author's department. The presence of advanced graduate students, and the tendency for PhD departments to be larger than MA or undergraduate departments, probably mean that joint work is more frequent in departments with doctoral programmes. The members of such departments may also be more likely to write papers of higher quality, whether alone or with others. Since both notions are confirmed in this study for authors affiliated with sociology and psychology departments (for which information about highest degree was most easily available), the link between collaboration and quality was examined controlling for department type.

According to the results in Table 3, collaboration is associated with more favourable review of papers from PhD departments as well as those from Non-PhD departments.10 The relation is somewhat stronger in the latter, suggesting that collaboration is more important in minor departments, but the effect appears in both.11 However, this result must be viewed with caution. Although the measure of department type undoubtedly taps size and quality, it does so crudely. Additionally, the initial relation no longer approaches significance, in part because the entire sample is not included.

More generally, this research should be seen as exploratory in nature. Only one journal, in one field, in a single year was studied. Likewise, only two possibly confounding variables were examined. Other variables (for example, having research grants) might be related to both collaboration and quality in such a way as to account for the relationship.
Thus further investigations along the lines suggested here are needed to assess the nature of the connection between the organization of scientific work and its quality.12
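The 'controlling for' chi-squares reported beneath Tables 2 and 3 come from ECTA's log-linear analysis (see note 7). A simpler and only roughly analogous check, sketched below in Python, is to compute a likelihood-ratio chi-square within each stratum of the control variable and sum them; this tests whether editorial decision and authorship are independent conditional on discipline (or department type). The counts are invented for illustration, the degrees of freedom equal the number of strata, and the result will not reproduce the one-degree-of-freedom partial-association statistics reported in the paper.

import math

def g2_2x2(a, b, c, d):
    # Likelihood-ratio chi-square for the 2x2 table [[a, b], [c, d]]
    # (repeats the helper from the earlier sketch so this block stands alone).
    n = a + b + c + d
    cells = [(a, (a + b) * (a + c) / n), (b, (a + b) * (b + d) / n),
             (c, (c + d) * (a + c) / n), (d, (c + d) * (b + d) / n)]
    return sum(2 * obs * math.log(obs / exp) for obs, exp in cells if obs > 0)

# Hypothetical counts, one 2x2 table per discipline. Rows: Initial Reject vs
# Revise-and-Resubmit plus Accept; columns: one author vs two or more authors.
strata = {
    'psychology': (15, 25, 10, 30),
    'sociology':  (40, 35, 22, 20),
    'other':      (14, 10, 8, 12),
}

# Summing the within-stratum G^2 values tests conditional independence of
# decision and authorship, given the control variable (df = number of strata).
total = sum(g2_2x2(*cells) for cells in strata.values())
print(round(total, 2), 'on', len(strata), 'df')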
TABLE 2
Editorial Decision by Number of Authors by Author's Discipline

                                        Number of Authors
Editorial Decision                      One            Two+

Psychology
  Initial Reject                        54.5%          45.8%
  Revise and Resubmit + Accept          45.5           54.2
                                        100.0%         100.0%
                                        (22)           (59)

Sociology
  Initial Reject
  Revise and Resubmit + Accept

Other
  Initial Reject
  Revise and Resubmit + Accept

Editorial Decision by Number of Authors controlling for Discipline: X² = 3.2, 1 df, p < 0.08
Editorial Decision by Discipline controlling for Number of Authors: X² = 9.6, 2 df, p < 0.01
TABLE 3
Editorial Decision by Number of Authors by Highest Degree Granted in Author's Department

                                        Number of Authors
Editorial Decision                      One            Two+

BA and MA Departments
  Initial Reject                        76.7%          60.0%
  Revise and Resubmit + Accept          23.3           40.0
                                        100.0%         100.0%
                                        (30)           (15)

PhD Departments
  Initial Reject
  Revise and Resubmit + Accept

Editorial Decision by Number of Authors controlling for Department: X² = 1.4, 1 df, ns
Editorial Decision by Department controlling for Number of Authors: X² = 3.1, 1 df, p < 0.08
NOTES
I am indebted to Howard Schuman for much of what I know about both collaboration and the quality of research, as well as for generously making available the records upon which this paper is based. My thanks also to the other members of the Social Psychology Quarterly staff - Sonya Kennedy, Cynthia Robbins, and Bruce Taylor - as well as to Jean Converse and two anonymous referees for helpful comments on an earlier draft.

1. See, for example, Harriet Zuckerman and Robert K. Merton, 'Age, Aging, and Age Structure in Science', in Norman Storer (ed.), The Sociology of Science (Chicago: The University of Chicago Press, 1973), 547.

2. Donald Pelz and Frank Andrews, Scientists in Organizations (New York: Wiley, 1966), 52.

3. Lowell Hargens, 'Relations Between Work Habits, Research Technologies, and Eminence in Science', Sociology of Work and Occupations, Vol. 5 (1978), 99.

4. It should be noted that the variable 'number of authors' is not as straightforward as it appears. Some single-authored papers are the product of research by more than one individual (as is sometimes testified to in a paper's first footnote), and some multiple-authored papers include as authors the names of individuals who were not active collaborators. Indeed, collaboration is most usefully seen as a continuum, not as a dichotomy. Still, the dichotomy used here should capture an important part of the phenomenon of interest.

5. 21.0 percent of the single-authored papers and 19.4 percent of the multiple-authored papers were screened.

6. Reviewing was done anonymously, so referees were usually unaware of how many authors a paper had.

7. All values of chi-square reported in this paper are likelihood-ratio statistics derived with the computer programme ECTA. See Leo Goodman, 'The Analysis of Multidimensional Contingency Tables: Stepwise Procedures and Direct Estimation Methods for Building Models for Multiple Classifications', Technometrics, Vol. 13 (1971), 33-61.

8. When the two+ category is broken down, the difference in editorial decision is found to be mainly between collaborating (irrespective of number of colleagues) and working alone. But this result is of uncertain reliability, due to the small number of papers with 3, 4, or 5 authors.

9. Table 2 also shows that the relation between discipline and quality of paper is unaffected by a control for number of authors.

10. Table 3 includes only papers by authors affiliated with either sociology or psychology departments.

11. For a discussion of the effects of academic environment in major versus minor universities, see Diana Crane, 'Scientists at Major and Minor Universities: A Study of Productivity and Recognition', American Sociological Review, Vol. 30 (1965), 699-714.

12. Although their analysis of manuscripts submitted to The Physical Review excludes multiple-authored papers, Zuckerman and Merton note that such papers had an acceptance rate of over 95 percent. This compared with 80 percent for single-
authored papers. See Harriet Zuckerman and Robert Merton, 'Institutionalized Patterns of Evaluation in Science', in Storer (ed.), op. cit. note 1, 476-78.
Stanley Presser is a Research Associate, Institute for Research in Social Science, and Visiting Assistant Professor, Department of Sociology, at The University of North Carolina. Currently he is collaborating with Howard Schuman on a study of survey question wording and with Elizabeth Martin on a study of the role of self-interest in public opinion. His articles have appeared in American Sociological Review, Public Opinion Quarterly, and other journals. Author's address: Institute for Research in Social Science, The University of North Carolina at Chapel Hill, Manning Hall 026A, Chapel Hill, North Carolina 27514, USA.