Piling on CRAAP – Uncover a useful literature review search tool

A student recently emailed us a question: “What should I do if my supervisor has told me I need to take a more critical approach to evaluating the material I have searched for my literature review? I don’t really understand what this means.” This student is not alone, as some of the most frequently asked questions from researchers doing their thesis literature review relate to how to go about finding and evaluating useful information. We recommend one simple literature review search tool that can help you to single out the good stuff from what should be flushed out. It is called the ‘CRAAP’ Test.

‘CRAAP’ is a mnemonic for Currency, Relevance, Authority, Accuracy and Purpose. It is a ‘test’ because you can use it to filter out the useful from the useless when choosing what to include in your literature review. Although it was originally intended to be employed by academic researchers to evaluate internet sites, you can apply the CRAAP Test to any book, paper or article you are considering using.

The literature review search tool test itself is straightforward. It involves giving a mark out of 20 for each of the five letters in the mnemonic and then adding the five scores together to give an overall mark out of 100. You then focus on the materials searched for that have achieved the highest scores and discard those with the lowest. This is a quick and easy way of reducing the amount of reading you do while efficiently and effectively searching the literature.

Consider the following example. Say you are in the early stages of researching the topic of ‘machine learning and its application in the automotive industry’ and you visit your college library. The librarian suggests several academic resources, plus an artificial intelligence (AI) magazine targeted at a technical audience. Browsing through back issues of the magazine, you come across an article about future trends in machine learning, published two years earlier. It looks promising, so you decide to apply the CRAAP Test.

The first letter (that is, ‘C’) stands for Currency. It refers to the timeliness of the material (that is, how new or recent the information is). Generally, you score material highly (15-20 marks) if it has been written in the past three years, been recently revised or updated, and is obviously current. By contrast, you score material low (1-5 marks) if it is 20+ years old and evidently out-of-date. With machine learning, things are changing rapidly, so even an article published two years ago might be considered somewhat old. You, therefore, might decide to place such material at the lower end of the top category (15-20 marks) and give it a score of, say, 15.

R for Relevance relates to the importance of the information for your specific needs. High marks should be given where the information relates closely to your topic and helps you provide a direct answer to your research questions. Furthermore, score the material highly if it has been written for an academic audience and is pitched at the right level (that is, not too elementary or advanced for your needs). Again, you give a mark out of 20. At this initial stage of your research, a relatively recent article that discusses future trends seems interesting. You may, however, note that it is not associated with the automotive sector, so you give it a score of 17.

Authority covers the provenance of the source of the information. A useful place to start when evaluating this is with the author. For example, are they well known in the topic area, have they published before, what is their job title, which organisation do they work for, what faculty/school or department are they involved in, what qualifications do they have, and so on? In addition, it can be beneficial to assess the publisher. For instance, are they an acknowledged publisher in the academic field, does their publication use peer review, is the material sponsored by any person or organisation, is there contact information (such as a postal or email address)? If searching online, check if the URL reveals anything about the author or source. For example, .com is a commercial extension, .edu and .ac are educational sites, and .gov refers to the US government.

A high-scoring source is likely to be a peer-reviewed piece by a well-known doctoral or postgraduate-level educator, from a recognised institution, published in an academic journal. In the case of the AI magazine article previously referred to, the authors are well-known lecturers and researchers. They work in relevant departments at respected universities that have high-profile research centres in the area of machine learning. The AI magazine is a widely disseminated publication, produced by a long-established not-for-profit organisation, and read by practising scholars and industry professionals. All articles submitted to it are peer-reviewed and approved by an editorial board. Again, a high score is appropriate here, say 18.

Accuracy refers to the reliability, truthfulness and correctness of the content. You should ask where the data or information used comes from. Is the information supported by evidence? Has the information been reviewed or refereed? Does the language or tone seem unbiased and free of emotion? Are there spelling, grammar or other typographical errors? It can be difficult to give a score for this, as any marks given are relative. Ask yourself, can you verify any of the information from another source or from personal knowledge? In this case, although the authors and publisher of the article appear to have high standing, and the article itself is peer-reviewed, it is not based on a specific study. It also seems to contain opinion and projection. Nonetheless, it refers to several empirical and well-researched studies to support its central points. Despite its overall credentials, it will score lower than a paper based on a full research study. The nature and style of the article lead to a lower score for accuracy of, say, 13.

Finally, Purpose is linked to the reason the information exists. You should ask what the intent of the information is (that is, to inform, teach, sell, entertain or persuade). Do the authors/publishers/sponsors make their intentions, or aim, clear? Is the information ‘fact’, opinion or just propaganda? Does the point of view appear objective and impartial? Are there political, ideological, cultural, religious, institutional or personal biases? In contrast to the accuracy score, the article under review might do well here. This is because the overall purpose of the AI magazine is to popularise the discipline. Thus, the authors would be expected to adopt simpler language and a chatty style, rather than the more formal language of an academic paper. Overall, the article would get a score of 15 for purpose.

By scoring each category on a scale from 1 to 20 (1 = worst, 20 = best possible), you can give each piece of literature an overall mark out of 100. For example, the article receives an overall score of 78 (C = 15, R = 17, A = 18, A = 13, P = 15). This overall mark can be related to the following bands:

  • 90 – 100: Excellent (it is essential that I include this in my review).
  • 80 – 89: Very good (it is likely that I will use this, perhaps in conjunction with other material).
  • 70 – 79: Good (this literature may support or link to the main material above).
  • 60 – 69: Moderate (I may mention this or use it to expand on the above).
  • 50 – 59: Average (I will only use this if I need to lengthen a section or broaden a discussion).
  • 40 – 49: Borderline (I will probably not use this but will keep it in my database just in case).
  • 30 – 39: Unacceptable (I will keep a note of it, in case it comes up in discussions, only).
  • Below 30: Poor (do not go there!).
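The scoring and banding steps above are simple enough to sketch in a few lines of code. The following Python snippet is purely illustrative (the function and variable names are our own, not part of any standard tool); it sums the five category marks from the worked example and looks up the matching band:

```python
# A minimal sketch of the CRAAP scoring described above. The band
# thresholds and labels mirror the list in the text; the names
# craap_total and craap_band are illustrative only.

def craap_total(scores):
    """Sum the five CRAAP category marks (each 1-20) into a mark out of 100."""
    categories = ("currency", "relevance", "authority", "accuracy", "purpose")
    return sum(scores[c] for c in categories)

def craap_band(total):
    """Map an overall mark out of 100 to the bands listed above."""
    bands = [
        (90, "Excellent"), (80, "Very good"), (70, "Good"),
        (60, "Moderate"), (50, "Average"), (40, "Borderline"),
        (30, "Unacceptable"),
    ]
    for threshold, label in bands:
        if total >= threshold:
            return label
    return "Poor"

# The worked example from the text: C=15, R=17, A=18, A=13, P=15.
article = {"currency": 15, "relevance": 17, "authority": 18,
           "accuracy": 13, "purpose": 15}
total = craap_total(article)
print(total, craap_band(total))  # 78 Good
```

Running this on the worked example confirms the arithmetic: 15 + 17 + 18 + 13 + 15 = 78, which falls in the ‘Good’ band (70 – 79).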

In this case, the article scored 78 overall (a ‘good’ score). This means that although it might not be a core source, it would be suitable to cite in support of the main material. It might also highlight themes that are relevant to the topic.

As the article scores highest on Relevance and Authority, it is appropriate to the topic and the contribution comes from a recognised source. The article scores lower on Currency, Accuracy and Purpose. Although it is only two years old, there are probably more up-to-date articles, papers and books that discuss trends in machine learning. On its own, this is not too much to be concerned about. However, any academic article should ideally score higher on Accuracy. Some allowance should be made, in this instance, for the fact that the Purpose of the publication is to act as a popular AI magazine.

When writing up your literature review, part of your critique may involve stating what is in the article (citing) but adding a comment on issues related to accuracy. Mentioning such limitations indicates that you are reading widely, in addition to assessing the value of each piece of literature. In fact, this article provides a helpful list of references that could be used to extend your search further so you can source extra material based on empirical, scholarly research.

In summary, applying the literature review search tool – the CRAAP Test – helps you to separate the useful from the useless. It is, therefore, a helpful mnemonic to use during your literature review search and critique.
