Although the delightful comic above is really about reviewers’ comments on a paper submitted for publication, it still concerns “critically appraising” one’s work — from the very assumptions guiding a study, to the methodological choices made by the authors, to the appropriateness of the conclusions drawn from the study’s results.
One of the key skills we are expected to develop during our PhD years is the ability to “think critically” and show an “in-depth critical analysis” of what we read and cite in our work. This skill is likely to develop over time and with experience, as we get to know the main authors in our research area, their theoretical views and common methodological practices. But even early on — in course papers, presentations, Comprehensive Exams and our proposals, even when absorbing what is presented to us at conferences — we are required to “critically evaluate” research. The idea is that we can’t simply buy into everything that is presented to us without using our intelligence, our knowledge of other relevant work, and careful reflection on whether the authors’ arguments and methodology seem sound.
“Being critical” does not mean having a negative or aggressive attitude towards a study, a theory or a group of researchers whose views differ from our own. It should not be construed as an attack, and one should not have a predisposition to find problems or flaws. Rather, it should be a more neutral and objective (unbiased) process of evaluation. It entails summarizing the main points, questioning the information that is presented, highlighting the strengths and weaknesses, and offering a judgment or evaluation — often your own reactions, but also justified by reasonable argumentation and, where possible, supported by other sources.
Easier said than done, sometimes! Some of us may tread carefully, unsure of how exactly to critique well-established researchers who most probably know what they are doing — certainly more so than PhD students who are just starting out! This is especially difficult in cases where the literature we are reviewing is not directly in our field or area of expertise.
Below is a non-exhaustive list of some guiding questions we can ask ourselves when evaluating research. Perhaps it could be used as a reference when sifting through literature and trying to differentiate between the really good studies and the not-so-good ones. Note that I’ve compiled this mostly from a science perspective, but please feel free to offer a different perspective if you’re from another discipline, or to share some of your own tips and strategies!
1. Assumptions and arguments:
- Are the assumptions driving the study well-motivated and well-supported?
- Do the authors consider “the other side” of the argument?
- Do the authors’ arguments seem logical and convincing to you?
2. Theoretical framework:
- Is there a theoretical framework behind the study?
- Are the relevant theories properly explained and cited?
- Are there any terms or theoretical ideas that seem underspecified or vague?
- Are there any controversial debates in the literature about this theory vs. other theories/models? Do the authors consider this debate or do they stay within their own framework?
- Do the authors have clear predictions / hypotheses?
- Could these predictions be accounted for by a different (opposing) theory or argument that the authors do not consider?
3. Methodological limitations:
- Is the methodology well-explained for the reader? Is any information unclear or omitted?
- Do the methodological choices reflect the authors’ research questions well?
- Are factors operationally defined by the authors? If yes, are these similar or different compared to operational definitions in the literature?
- Does the choice of population groups make sense? Are there proper control groups?
- Are the inclusion criteria reasonable?
- Are there factors that have not been controlled and could be potential confounds?
4. Data / evidence:
- Do you feel the evidence is convincing? Why or why not?
- Where does this evidence lie with respect to similar studies in the literature: is it complementary or contradictory?
- Could the data presented in this study be accounted for by another theory/model than that which the authors support?
- How could methodological choices have affected the data? Are there any confounds?
- Is the data accurately depicted in tables, graphs and plots?
- Do the statistical analyses seem appropriate, or does anything look fishy?
5. Conclusions:
- What are the authors’ main claims?
- Are these conclusions fully justified by the data?
- Are there any claims that are not justified by the data?
- Are there conclusions that seem too strong (e.g. oversimplified or overgeneralized)?
- Are there conclusions drawn based on non-significant results? Check the Results section and how all results have been interpreted in the Discussion and Conclusions.
- Do the authors’ conclusions fit with their initial line of argument or are there suddenly ad-hoc explanations that try to make sense of unexpected data?
6. Significance and contributions:
- Have the authors’ objectives been attained?
- How does this study relate to others in the field? Do we know something new that we didn’t know before?
- Are there any interesting theoretical, educational and/or clinical implications?
- What are some open questions / directions for future research?
A strategy I find helpful for synthesizing all this information (because there is no way my brain can be trusted to retain details without help!) is to make a giant table with several columns (e.g. aim, participants, measures, predictions, results, conclusions, limitations) and plug in this information for each study that I read or intend to describe/evaluate in a review paper.
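If you’d rather build that table as a spreadsheet than by hand, the idea can be sketched in a few lines of Python. This is just one possible setup — the column names and the example entry below are placeholders to illustrate the structure, not a prescribed format:

```python
import csv

# Columns for the literature matrix; adjust to suit your own review.
COLUMNS = ["citation", "aim", "participants", "measures",
           "predictions", "results", "conclusions", "limitations"]

# One dictionary per study; the entry below is a made-up placeholder.
studies = [
    {
        "citation": "Author et al. (2020)",
        "aim": "Test whether X predicts Y",
        "participants": "40 adults, no control group",
        "measures": "Reaction times",
        "predictions": "X speeds up Y",
        "results": "Small, non-significant effect",
        "conclusions": "Authors claim support for X",
        "limitations": "Conclusion rests on a null result",
    },
]

# Write the matrix to a CSV file you can open in any spreadsheet program.
with open("literature_matrix.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=COLUMNS)
    writer.writeheader()
    writer.writerows(studies)
```

Keeping the table in a plain file like this also makes it easy to sort or filter studies later (e.g. by population or by measure) as the review grows.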
Finally, you may want to also check out these useful online sources:
Good luck with sharpening your critical thinking skills!