Peer Review at the DFG: the Review Board

Research on peer review consists primarily of studies on the critical assessment of journal manuscripts. By contrast, the peer review of research applications has received much less attention. The iFQ plans to address this gap and the need for research on the latter.

Peer review, the appraisal of scholarly performance or potential output by colleagues in one’s field, is the oldest process of scientific evaluation. It is indispensable as an instrument of self-government in the scientific community and of efforts to ensure scientific quality. Peer review is being used more and more for external control processes as well. Ultimately, the very credibility of science in the eyes of the public depends on the reliability of expert opinion.

Appraisals by experts are not always free of errors, however, as becomes blatantly evident when published research results prove to be hoaxes. Such blunders strengthen the arguments of critics of the peer-review system, who assert that review by anonymous peers is marked by nepotism and numerous biases, whether against young, unknown, or female scholars. They also suspect reviewers of plagiarism and hostility to innovation, and they contend that the system is highly unreliable because reviewers often contradict each other.

Research results on this topic neither weaken nor confirm the accusations, leaving the picture ambiguous. The number of studies in which no gender bias appears roughly matches that of studies that have been able to substantiate its existence (see Daniel, 2006, p. 188). Studies addressing the accusation of nepotism are likewise inconclusive (see Bornmann, 2004, p. 21).

Nor does blanket criticism of peer-review reliability hold up entirely under empirical scrutiny. Some studies report a high degree of concurrence between reviewers (e.g., Daniel, 1993, p. 26; Hartmann & Neidhardt, 1990, p. 423; Wiener et al., 1977, p. 309). Others find a moderate to low degree of agreement (e.g., Cole, 1992, p. 102; Miner & McDonald, 1981, p. 23; Jayasinghe, Marsh, & Bond, 2001, p. 305).
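To make concrete what "degree of agreement" means in such reliability studies, the following is a minimal sketch of one commonly used chance-corrected agreement measure, Cohen's kappa. The reviewer verdicts in the example are invented purely for illustration and are not taken from any of the studies cited here.

```python
from collections import Counter

def cohens_kappa(ratings_a, ratings_b):
    """Chance-corrected agreement between two reviewers' verdicts."""
    n = len(ratings_a)
    # Observed proportion of identical verdicts.
    p_o = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    # Agreement expected by chance, from each reviewer's verdict frequencies.
    freq_a, freq_b = Counter(ratings_a), Counter(ratings_b)
    p_e = sum((freq_a[c] / n) * (freq_b[c] / n) for c in freq_a)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical verdicts of two reviewers on ten applications.
reviewer_1 = ["fund", "fund", "reject", "fund", "reject",
              "fund", "reject", "fund", "fund", "reject"]
reviewer_2 = ["fund", "reject", "reject", "fund", "reject",
              "fund", "fund", "fund", "reject", "reject"]

# Raw agreement is 70%, but kappa is only 0.4: moderate agreement
# once chance coincidence of verdicts is taken into account.
print(round(cohens_kappa(reviewer_1, reviewer_2), 2))
```

A "high" or "low" degree of concurrence in the literature usually refers to such chance-corrected coefficients rather than to the raw share of matching verdicts.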

There has been little investigation of the most important quality criterion, predictive validity, that is, the question of how “correct” the recommendations formulated by the reviewer(s) turn out to be. The available results on manuscript reviews cannot conclusively confirm that peer review has a high level of predictive validity (see Weller, 2001, p. 67), and studies on the peer review of research applications are almost nonexistent. The iFQ will therefore examine the quality of the peer-review system from a variety of perspectives and with respect to a range of issues.

The analysis will concentrate on the DFG system of evaluation since its reform, which introduced the review boards. It entails surveys of peer reviewers, applicants, and nonapplicants; the survey of the members of the DFG review boards has already been conducted. To examine the predictive validity of the reviewers’ votes, analyses of the review texts are also planned, as are analyses of the research output of funded and nonfunded projects.

In summer 2006, the iFQ surveyed the members of the DFG review boards in order to document experience with the restructured peer-review process and to gather assessments of possible changes to it. The work centered on the following questions:

  • How do the members of the DFG review boards evaluate the new system? Have the goals of the reform been achieved?
  • How do the individual review boards work internally? How do they come to a decision on an application's merits for funding?
  • What criteria are important in selecting peer reviewers? How is the quality of reviews appraised?
  • What value do the members of the DFG review boards attach to transparency in the peer review of research applications?
  • How should final reports and their review be dealt with in the future?

In addition, statements from the 1976 and 1983 Allensbach peer-review surveys of university professors were replicated.

The survey encompassed all 577 members of the DFG review boards, who received the survey link by mail in late August 2006. Responses were accepted until the end of October. The initial report is available for download here (in German only).

References

Bornmann, Lutz / Daniel, Hans-Dieter, 2005: Selection of Research Fellowship Recipients by Committee Peer Review. Analysis of Reliability, Fairness and Predictive Validity of Board of Trustees' Decisions. Scientometrics 63(2), 297-320.
Bornmann, Lutz, 2004: Stiftungspropheten in der Wissenschaft. Zuverlässigkeit, Fairness und Erfolge des Peer Review. Münster: Waxmann Verlag.
Cole, Stephen, 1992: Making Science. Between Nature and Society. Cambridge: Harvard University Press.
Daniel, Hans-Dieter, 1993: Guardians of Science. Fairness and Reliability of Peer Review. Weinheim: VCH.
Daniel, Hans-Dieter, 2006: Pro und Contra: Peer Review. HRK (Hg.), Von der Qualitätssicherung der Lehre zur Qualitätsentwicklung als Prinzip der Hochschulsteuerung. Projekt Qualitätssicherung. Beiträge zur Hochschulpolitik (1) Band I.
Hartmann, Ilse / Neidhardt, Friedhelm, 1990: Peer Review at the Deutsche Forschungsgemeinschaft. Scientometrics 19, 419-425.
Jayasinghe, Upali / Marsh, Herbert W. / Bond, Nigel, 2001: Peer Review in the Funding of Research in Higher Education. The Australian Experience. Educational Evaluation and Policy Analysis 23(4), 343-364.
Miner, L. E. / McDonald, S., 1981: Reliability of Peer Review. Journal of the Society of Research Administrators 13, 21-25.
Weller, Ann C., 2001: Editorial Peer Review: Its Strengths and Weaknesses. Medford, NJ: Information Today.
Wiener, S. L. / Urivetzky, Morton / Bregmann, D. / Cohen, J. / Eich, R. / Gootman, N. / Gulotta, S. / Taylor, B. / Tuttle, R. / Webb, W. / Wright, J., 1977: Peer Review. Inter-Reviewer Agreement during Evaluation of Research Grant Applications. Clinical Research 25, 306-311.

Co-ordination of this project: Meike Olbrecht