Bibliometric/Scientometric Indicators

Bibliometrics is a set of mathematical and statistical methods used to analyse and measure the quantity and quality of different forms of publication. Broadly, there are three types of bibliometric indicators:

  • Quantity indicators: These measure the productivity of a researcher.
  • Quality indicators: These measure the performance of a researcher.
  • Structural indicators: These measure the connections between publications, authors, and areas of research.

Bibliometric indicators influence funding decisions and the appointment and promotion of researchers; they are therefore important for scholars and organisations alike.

Journal-level Bibliometric Indicators

Impact Factor

The Journal Impact Factor is the most prevalent bibliometric indicator for journals. It measures how frequently, on average, the articles a journal published in the preceding two years are cited in a given year. The greater the impact factor, the more prominent the journal is considered to be.
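As a rough illustration, the two-year impact factor reduces to a simple ratio. The sketch below uses hypothetical counts and an illustrative function name; it is not the official Clarivate calculation, which also involves decisions about which items count as citable.

```python
def impact_factor(citations_in_year, papers_prev_two_years):
    """Simplified two-year impact factor for year Y.

    citations_in_year: citations received in year Y by items
        published in years Y-1 and Y-2.
    papers_prev_two_years: citable items published in Y-1 and Y-2.
    """
    return citations_in_year / papers_prev_two_years

# Hypothetical journal: 480 citations in 2023 to the 200 items
# it published in 2021 and 2022.
print(impact_factor(480, 200))  # 2.4
```

The other well-known and widely accepted bibliometric indicators are: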

SCImago Journal Ranking (SJR)

SJR takes into account both the number of citations a journal receives and the prestige of the journals from which those citations come. Its computation uses an algorithm similar to Google's PageRank.
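To illustrate the prestige-transfer idea, here is a minimal PageRank-style power iteration over a hypothetical three-journal citation matrix. The actual SJR computation adds size normalisation, a fixed citation window, and other refinements not shown here.

```python
import numpy as np

# Hypothetical citation matrix: C[i, j] = citations from journal i
# to journal j.
C = np.array([[0, 3, 1],
              [2, 0, 4],
              [5, 1, 0]], dtype=float)

# Each journal distributes its prestige across the journals it
# cites, in proportion to its outgoing citations.
T = C / C.sum(axis=1, keepdims=True)

d = 0.85                      # damping factor, as in PageRank
n = C.shape[0]
prestige = np.full(n, 1 / n)  # start from equal prestige

# Power iteration: a citation from a prestigious journal transfers
# more prestige than a citation from an obscure one.
for _ in range(100):
    prestige = (1 - d) / n + d * T.T @ prestige

print(prestige / prestige.sum())  # normalised prestige scores
```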

Source Normalized Impact per Paper (SNIP)

SNIP measures contextual citation impact by weighting citations according to the total number of citations in a given field of study. It is defined as the ratio of a journal's citations per paper to the citation potential of its subject field, so journals in fields with inherently low citation rates are not put at a disadvantage.
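Under that definition, SNIP is a single ratio. The sketch below uses hypothetical numbers; CWTS derives the actual citation-potential value from the referencing behaviour of the field.

```python
def snip(citations_per_paper, field_citation_potential):
    # A SNIP above 1 means the journal is cited more than is
    # typical for its subject field; below 1, less.
    return citations_per_paper / field_citation_potential

# Hypothetical: 4.2 citations per paper in a field whose citation
# potential is 2.8.
print(snip(4.2, 2.8))  # 1.5
```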

Impact per Publication (IPP)

This metric is the number of citations received in a given year (Y) by scholarly papers published in the three preceding years (Y-1, Y-2, Y-3), divided by the number of scholarly papers published in those same years.
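For example (with hypothetical numbers), if a journal published 150 papers in 2020-2022 and those papers were cited 300 times in 2023, its IPP for 2023 is 300 / 150 = 2.0. The calculation mirrors the impact factor sketch above, but with a three-year publication window.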

Author-level Bibliometric Indicators

Bibliometric indicators measuring the impact of individual authors are known as author-level metrics.

H-index

The h-index measures both the productivity and the impact of a researcher's published work: a researcher has an h-index of h if h of their papers have each been cited at least h times. It is the most widely used author-level metric at present.
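Computing the h-index from a list of citation counts is straightforward; the sketch below uses hypothetical counts.

```python
def h_index(citations):
    # h-index: the largest h such that the author has h papers
    # with at least h citations each.
    citations = sorted(citations, reverse=True)
    h = 0
    for rank, count in enumerate(citations, start=1):
        if count >= rank:
            h = rank
        else:
            break
    return h

# Hypothetical citation counts for six papers: three of them have
# at least 3 citations, but no four papers have at least 4.
print(h_index([25, 8, 5, 3, 3, 1]))  # 3
```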
However, the h-index has the following shortcomings:
• It is insensitive to highly cited papers: an author's h-index stays the same whether their most cited paper has 100 or 10,000 citations.
• It does not take the author's career span into account. Because it depends only on cumulative productivity and impact, authors with longer careers and more publications tend to have higher scores.

To overcome these shortcomings of the h-index, the following variants have been proposed:

G-index

It is an author-level metric that quantifies scientific productivity and impact from an author's publication record. The g-index is the largest number g such that the author's g most cited papers have together received at least g² citations; unlike the h-index, it therefore gives weight to very highly cited papers.
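Reusing the hypothetical citation counts from the h-index sketch above makes the difference visible:

```python
def g_index(citations):
    # g-index: the largest g such that the g most cited papers
    # together have at least g*g citations.
    citations = sorted(citations, reverse=True)
    total, g = 0, 0
    for rank, count in enumerate(citations, start=1):
        total += count
        if total >= rank * rank:
            g = rank
    return g

# Same hypothetical counts as above: the single highly cited paper
# (25 citations) lifts the g-index well above the h-index of 3.
print(g_index([25, 8, 5, 3, 3, 1]))  # 6
```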

M-index

It is defined as the h-index divided by the number of years since the researcher's first published paper, which makes it easier to compare researchers at different career stages.
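For example (hypothetical numbers), a researcher with an h-index of 24 whose first paper appeared 12 years ago has an m-index of 24 / 12 = 2.0.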

Correlation between impact factor and rejection rate: Myth or fact?


The impact factor (IF) is a measure of a journal's reputation and health, but not the sole determinant of either. Authors therefore should not treat it as a be-all and end-all yardstick when choosing the right journal for their paper. The scope of a journal, its audience, and the types of articles it publishes are equally significant, if not more so, than the IF.

Grading authors on the merit of their publication portfolios is an arduous and tricky task. Institutional committees often rank authors on their previous achievements when deciding promotions, funding, and honours. In many academic circles, the IF of a journal is adopted as a proxy for the quality of a published article, thereby sidestepping a comprehensive review of the article itself.

In scholarly publishing, a common perception among authors is that journals with a high IF are highly selective and apply strict criteria when choosing papers. It is also assumed that such journals accept only manuscripts with highly significant and novel findings, which are therefore more likely to attract many citations.

However, several past studies have found no correlation between rejection rate and IF. These studies cite journals with low IFs and high rejection rates, showing that the IF is a poor predictor of a journal's rejection rate and merit.

Frontiers, a leading open access publisher, plotted the IFs of 570 journals against their rejection rates and found no meaningful correlation between the two. Several studies offer an alternative explanation for journals that combine a high IF with a high (90-95%) rejection rate: such journals give precedence to prominent authors and select work likely to win broad acceptance from the target audience, so many papers are rejected on first submission.

Mishandled or misapplied by authors and selection committees, the IF becomes a flawed metric in several ways. It is therefore important to discard the long-standing misconception that authors with many publications in journals with high IFs and high rejection rates are more meritorious and bigger achievers than those who publish in journals with medium or low IFs.

Journal Impact Factor: All That Matters

The impact factor, often abbreviated as IF, is a measure reflecting the average number of citations that a paper published in a journal receives over a defined period of time. Conceived in the 1960s by Eugene Garfield, founder of the Institute for Scientific Information (ISI), the IF is now frequently used as a proxy for the relative importance of a journal within its field. Journal impact factors are published annually in the Journal Citation Reports (JCR).

Researchers are often conditioned to believe that IF matters the most. Publication in journals with a high IF is regarded as an indication of the quality of the research published, and by implication, the quality of its authors. Therefore, it is not surprising that publishing in high IF journals is an aspiration for most scientists as it often plays an important role in their career prospects and progression.

High-IF journals are widely read, but researchers disagree about how much a journal's IF should matter. Journal ranking systems have since evolved and allow for better comparisons, yet they are often ignored even when such rankings might favour a given journal. Even these systems are not foolproof and can be quite flawed, especially those that assume work with a narrow scope has less scientific value or quality. A more appropriate approach is to regard the best journals as those that rank highly in one or more categories or ranking systems, rather than reducing a journal's overall quality and usefulness to a single number.

The IF was originally designed for purposes other than evaluating the quality of individual pieces of research, yet it is undoubtedly a useful tool provided its interpretation is not stretched beyond its limits of validity. That said, research quality cannot be measured by the IF alone: it should be used with caution, and it should never be the dominant or only factor in judging the credibility of a piece of research.