Agreement among raters in the selection of articles in a systematic review

Author(s):
Natália Sanchez Oliveira [1] ; Julicristie Machado de Oliveira [2] ; Denise Pimentel Bergamaschi [3]
Total Authors: 3
Affiliation:
[1] Universidade de São Paulo. Faculdade de Saúde Pública. Departamento de Epidemiologia - Brasil
[2] Universidade de São Paulo. Faculdade de Saúde Pública. Departamento de Epidemiologia - Brasil
[3] Universidade de São Paulo. Faculdade de Saúde Pública. Departamento de Epidemiologia - Brasil
Total Affiliations: 3
Document type: Journal article
Source: Revista Brasileira de Epidemiologia; v. 9, n. 3, p. 309-315, Sept. 2006.
Field of knowledge: Health Sciences - Collective Health
Abstract

The objective of this study is to present methodological aspects of inter-rater agreement in the initial selection of studies for a systematic review, with or without meta-analysis. As an example, we used data from the initial phase of the study "Vitamin A supplementation for breastfeeding mothers: systematic review". The data result from two raters' independent reading of article abstracts carefully selected from electronic bibliographic databases. For each study we posed the questions: "Does the study involve postpartum women?"; "Is it a vitamin A supplementation study?"; "Is it a clinical trial?", followed by a decision (inclusion/exclusion) about the study. The data were double-entered into an Excel spreadsheet and then validated. The kappa coefficient was calculated for the following aspects: population, intervention, study type, and the inclusion/exclusion decision. We identified 2,553 studies. The kappa values were: k = 0.46 for suitability of the population studied; k = 0.59 for intervention type; k = 0.59 for study type; and k = 0.44 for the inclusion/exclusion decision. Given the fair (intervention and study type) and slight (population studied) agreement between raters, we emphasize the need for studies to be read initially by at least two raters. The consensus meetings held whenever there was disagreement were useful for resolving differences of interpretation between raters, prompted new understanding and deeper reflection, helped reduce the chance of excluding necessary studies, and thus strengthened control over possible selection bias.
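To make the kappa calculation concrete, here is a minimal Python sketch of Cohen's kappa for two raters' binary include/exclude judgments. The function and the example labels are illustrative assumptions for exposition only, not the authors' code or the study's data.

```python
# Minimal sketch: Cohen's kappa for two raters' include/exclude decisions.
# The label lists in the example are illustrative placeholders, NOT the study's data.
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two equally long lists of categorical labels."""
    assert len(rater_a) == len(rater_b) and rater_a, "need paired, non-empty ratings"
    n = len(rater_a)
    # Observed agreement: proportion of items where both raters gave the same label.
    p_observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected agreement by chance, from each rater's marginal label frequencies.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    p_expected = sum(freq_a[label] * freq_b[label] for label in freq_a) / (n * n)
    return (p_observed - p_expected) / (1 - p_expected)

# Hypothetical example: "I" = include, "E" = exclude.
a = ["I", "I", "E", "E", "I", "E", "E", "E", "I", "E"]
b = ["I", "E", "E", "E", "I", "E", "I", "E", "I", "E"]
print(round(cohens_kappa(a, b), 2))  # 0.58 for this made-up pair of raters
```

The same routine can be applied separately to each of the four judgments reported in the abstract (population, intervention, study type, and final decision), with one label per screened abstract.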