The impact factor, often abbreviated as IF, is a measure reflecting the average number of citations that a paper published in a journal receives over a defined period of time. Conceptually developed in the 1960s by Eugene Garfield, founder of the Institute for Scientific Information (ISI), the IF is now frequently used as a proxy for the relative importance of a journal within its field. Journal impact factors are published annually in the Journal Citation Reports (JCR), which draw on citation data from the Science Citation Index (SCI).
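In its standard two-year form, the calculation behind this average can be written as follows (a simplified sketch of the conventional formula; the precise definition of a "citable item" varies by indexing service):

\[
\mathrm{IF}_{y} \;=\; \frac{\text{citations received in year } y \text{ by items published in years } y-1 \text{ and } y-2}{\text{number of citable items published in years } y-1 \text{ and } y-2}
\]

For illustration, a journal that published 400 citable items across 2021 and 2022, which together received 1,000 citations in 2023, would have a 2023 impact factor of 1000 / 400 = 2.5.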
Researchers are often conditioned to believe that IF matters the most. Publication in journals with a high IF is regarded as an indication of the quality of the research published, and by implication, the quality of its authors. Therefore, it is not surprising that publishing in high IF journals is an aspiration for most scientists as it often plays an important role in their career prospects and progression.
High IF journals are widely read, yet researchers disagree about how much journal IF should matter. Journal ranking systems have since evolved and allow for better comparisons, but they are often ignored even when such rankings might benefit a given journal. Even these systems are not foolproof and can be quite flawed, especially those that equate a narrow scope with lower scientific value or quality. A more appropriate approach would be to regard the best journals as those that rank highly in one or more categories or ranking systems, rather than reducing a journal's overall quality and usefulness to a single number.
The IF, originally designed for purposes other than evaluating the quality of individual research, is undoubtedly a useful tool provided its interpretation is not stretched beyond its limits of validity. Having said that, research quality cannot be measured using IF alone. It should be applied with caution, and should not be the dominant or sole factor accounting for the credibility of a piece of research.