COMPASSS (COMPArative Methods for Systematic cross-caSe analySis) is a worldwide network bringing together scholars and practitioners who share a common interest in theoretical, methodological and practical advancements in a systematic comparative case approach to research, one that stresses the use of a configurational logic, the existence of multiple causality and the importance of carefully constructing research populations. It was launched in 2003, and its management was re-organized in 2008, 2012 and 2016 to better accommodate the growing needs of the field. COMPASSS comprises three main bodies: an Advisory Board, a Steering Committee and a Management Team, which represent a diverse range of active members of this methodological community.
The field of qualitative comparative and set-theoretic methods is not only in the midst of a process of constant development and innovation; it is also not characterized by the same level of standardization as some mainstream statistical techniques. Among other things, the field is currently witnessing an ongoing and welcome methodological debate about the correctness of different solution types (conservative/complex, intermediate, parsimonious) when applying the methods of Qualitative Comparative Analysis (QCA) and Coincidence Analysis (CNA) to empirical data. The present statement expresses the concern of COMPASSS about the practice of some anonymous reviewers of rejecting manuscripts during peer review for the sole, or primary, reason that the given study chooses one solution type over another.
COMPASSS embraces the position that standards are community goods that, by definition, cannot be coerced by individuals or a minority of scholars via anonymous peer review processes. Specifically, COMPASSS rejects the position that a single solution type is always methodologically superior, for several reasons. First, the current state of the art is characterized by discussions between leading methodologists about these questions, rather than by definitive and conclusive answers. It is therefore premature to conclude that one solution type can generally be accepted or rejected as “correct”, as opposed to other solution types. Second, users applying these methods who refer to established protocols of good practice must not be held responsible for the fact that, currently, several protocols are being promoted. Such debates should be resolved between methodologists, not by imposing protocols on users about which no broad scientific consensus has yet been reached.
COMPASSS wishes to draw the attention of journal editors and authors to this issue. Reviewers are certainly encouraged to make their arguments and concerns clear in their reviews. Yet under the current state of the art, the use of a specific solution type is not an acceptable reason for rejecting a manuscript, especially not in isolation. COMPASSS runs a working paper series with a checklist of practices that speak to the quality of a QCA study. Among them is the explicit justification of the choice of a specific solution type, as well as the reporting of all three solution types, for example in an appendix. This checklist reflects the scientific consensus that all solutions are empirically valid, that including the full range of solutions is typically best, and that the researcher needs to justify the final choice of solution(s) that they report.