
Journal Impact Factors: Home

A quick guide to introduce you to Journal Impact Factors.

The Journal Impact Factor measures how frequently the average article in a journal is cited in a particular year.

  • e.g. an impact factor of 2.5 means that, on average, each article published in the journal over the preceding two years was cited 2.5 times in the measured year.

For example, the 2016 impact factor of a journal would be calculated as A/B, where:

A = the number of times that items published in that journal in 2014 and 2015 were cited by indexed publications during 2016.

B = the total number of 'citable items'* published by that journal in 2014 and 2015.
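As a simple illustration of the A/B calculation above (the citation and item counts below are invented for the example, not real journal data):

```python
def impact_factor(citations_this_year: int, citable_items_prev_two_years: int) -> float:
    """Two-year Journal Impact Factor: citations received in the measured year
    to items published in the previous two years (A), divided by the number of
    'citable items' published in those two years (B)."""
    return citations_this_year / citable_items_prev_two_years

# Invented figures: 1,100 citations in 2016 to items published in 2014-2015,
# of which there were 440 citable items -> an impact factor of 2.5.
jif_2016 = impact_factor(1100, 440)
```

Note that because B counts only 'citable items' (typically articles and reviews), while A counts citations to anything the journal published, the two numbers are not drawn from identical sets.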

Impact Factors are:

 

  • used to measure the importance or rank of a journal by counting how often its articles are cited.
  • a quantitative tool for evaluating the relative importance of a journal.
There are a number of tools used to measure journal impact. These include:

 

  • Journal Citation Reports (used to find Journal Impact Factors) provides rankings for journals in the areas of Science, Technology and Social Sciences. It shows the Journal Impact Factor and the Eigenfactor. Eigenfactor scores are intended to give a measure of how likely a journal is to be used, and are thought to reflect how frequently an average researcher would access content from that journal.
  • CiteScore metrics (Scopus Journal Metrics). The Scopus Journal Analyzer provides a view of journal performance, enriched with two further journal metrics: SJR (SCImago Journal Rank) and SNIP (Source Normalized Impact per Paper).
  • Altmetrics: as its name suggests, this is an alternative metrics measure, based on social media data.
  • Google Scholar Metrics.

 

Despite attempts to make journal impact factors more sophisticated, critics argue that they are still a crude numeric metric. Caution should be exercised in comparing journals across disciplines. It is also worth noting that, however journal impact is measured, it does not necessarily reflect the impact that the research has had in the 'real world'.

Find out more via the University's Understand Responsible Metrics guidance.

Journal impact factors (Clarivate)

Further resources about the new Journal Citation Reports interface are also available.

Journal impact factors can be used as a tool to:
  • compare journals in the same field.
  • evaluate the relative impact of a journal in its field.
  • help you decide which journals to publish in.
  • reflect the changing status of a journal, as the impact factor increases or declines over time.
Journal impact factors need to be used with care, as they have a number of limitations, some of which are:

 

  • New journals may be omitted from impact factor lists.
  • The impact factor only includes journals indexed by Clarivate's Web of Science.
  • Impact factors may be affected by variables unrelated to journals' quality:  e.g. self-citing by journals*. 
  • All citations are treated as of equal value, regardless of the quality of the journal in which the citation appears or the contextual 'weight' of the citation.
  • Impact factors are sometimes used to judge the quality of individual researchers. However, high-quality articles are also published in less prestigious journals, and vice versa.
  • Impact factors are sometimes used to judge the quality of individual articles. However:
    • the impact factor of a journal is an average: some articles in a journal may have no citations, but a few highly-cited articles give a high impact factor.
    • citation is not always an indicator of quality: an article may become highly cited because it is controversial, topical or populist.
  • The numbers vary by subject area**, so impact factors shouldn't be used to compare journals in different fields.

 

You may like to read the short document 'Using Journal Citation Reports Wisely', which puts forward some of the reasons why you should exercise caution when using citation data as a means of evaluating journals.

 

* Note that Journal Citation Reports does show the scores for both the impact factor and the impact factor without self-cites.

** e.g., the journal with the highest impact factor in the category Medicine (2013) was New England Journal of Medicine with an impact factor of 54.420; the journal with the highest impact factor in the category Nursing (2013) was Oncology Nursing Forum with an impact factor of 2.830.

You should be aware that journals may adopt editorial policies to increase their impact factor, for example by:

 

  • publishing a larger percentage of review articles, which tend to be cited more frequently than other types of article.
  • restricting submission of review articles to 'by invitation only', so that only senior authors - who are more likely to be cited - are invited to submit 'citable' papers.
  • publishing a large portion of its papers expected to be highly cited early in the year, giving them more time to gather citations.
  • limiting the number of 'citable items' by rejecting articles that are unlikely to be cited.

Another key indicator used in Journal Citation Reports (JCR) is the Eigenfactor. The Eigenfactor Project is an academic research project developed by the DataLab at the University of Washington. The aim of the project is to 'provide the scientific community with what we believe to be a better method of evaluating the influence of scholarly journals', through the use of recent advances in network analysis.

The raw citation data used to compute Eigenfactor metrics derives from JCR. Eigenfactor scores are shown alongside other metrics, including impact factors, in JCR.

The Eigenfactor algorithm is based on an iterative voting scheme, or a "random walk", around the citation network; Eigenfactor scores are based on the amount of time a user is likely to spend in a given journal in the network.

Eigenfactor scores are scaled so that the scores of all journals indexed in JCR add up to 100. So if a journal has an Eigenfactor score of 1.0, it has 1% of the total influence of all indexed publications. In 2013, the journal Nature had the highest Eigenfactor score, with a value of 1.603.
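The scaling described above can be sketched as follows (the raw scores and journal names are invented for illustration; real Eigenfactor scores are computed from the full citation network):

```python
def scale_eigenfactors(raw_scores: dict[str, float]) -> dict[str, float]:
    """Rescale raw Eigenfactor-style scores so that the scores of all indexed
    journals sum to 100; a scaled score of 1.0 then represents 1% of the total
    influence of all indexed publications."""
    total = sum(raw_scores.values())
    return {journal: 100 * score / total for journal, score in raw_scores.items()}

# Invented raw scores for three hypothetical journals.
scaled = scale_eigenfactors({"Journal A": 0.8, "Journal B": 0.15, "Journal C": 0.05})
```

After scaling, the values always total 100 regardless of the units of the raw scores, which is what makes scores comparable across the whole JCR index.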

 

The project website identifies a number of what it considers are the advantages of Eigenfactor metrics, including:

  • The ranking of journals' relative importance is via algorithms which use the structure of the entire network of citations (rather than local citation information only). 
  • The algorithms automatically adjust for citation differences across disciplines, allowing for better comparison across research areas.
  • Eigenfactor.org reports information on journal price and value for thousands of scholarly periodicals; its cost-effectiveness search orders journals by a measure of their value per dollar.

Scopus Journal Metrics (CiteScore)

CiteScore metrics measure journal citation impact and are produced by Scopus. They use eight indicators to analyse publication influence. Information about the metrics' features is available from Scopus.

The calculation of CiteScore for the current year is based on the number of citations received by a journal in that year for the documents published in the journal in the past three years, divided by the number of Scopus-indexed documents published in those three years. Further details of how the scores are calculated are available from Scopus.
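Following the definition above, the CiteScore ratio can be sketched like this (the counts are invented for illustration, not real Scopus data):

```python
def citescore(citations_received: int, documents_indexed: int) -> float:
    """CiteScore for a year, per the definition above: citations received in
    that year to documents the journal published in the past three years,
    divided by the number of Scopus-indexed documents from those three years."""
    return citations_received / documents_indexed

# Invented figures: 900 citations this year to 300 documents indexed in Scopus
# over the three-year window.
cs = citescore(900, 300)
```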

There is a quick reference for research impact metrics available.

SNIP (Source Normalized Impact per Paper):

  • measures contextual citation impact by weighting citations based on the total number of citations in a subject field. The impact of a single citation is given higher value in subject areas where citations are less likely, and vice versa. For example, journals in Mathematics, Engineering and Social Sciences tend to have higher values than titles in Life Sciences.
  • allows direct comparison of sources in different subject fields.
  • with SNIP, differences between journals' values reflect the quality of the journals rather than differing citation behaviour between subject fields.
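The normalisation idea behind SNIP can be sketched in simplified form (this is not the exact SNIP formula, whose full definition is published by CWTS; the field 'citation potential' values below are invented):

```python
def normalized_impact(citations_per_paper: float, field_citation_potential: float) -> float:
    """Simplified source-normalisation: divide a journal's raw citations per
    paper by the average citation potential of its subject field, so a single
    citation counts for more in fields where citations are less likely."""
    return citations_per_paper / field_citation_potential

# Invented figures: the same raw rate of 4 citations per paper is worth more
# in a low-citation field (e.g. Mathematics) than in a high-citation one
# (e.g. Life Sciences).
maths_value = normalized_impact(4.0, 2.0)
life_sci_value = normalized_impact(4.0, 8.0)
```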

SJR (SCImago Journal Rank):

  • ranks publications by weighted citations per document.
  • is a prestige metric based on the idea that not all citations are the same. Citations are weighted more or weighted less – depending on the source they come from.
  • with SJR, subject field, quality and reputation of the journal have a direct effect on the value of a citation.
  • with SJR, self-citations are limited to the maximum of 33%.
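The 33% self-citation cap mentioned above can be sketched as follows (a simplified illustration with invented counts; the full SJR definition, published by SCImago, applies the cap within its iterative prestige calculation):

```python
def cap_self_citations(self_citations: int, total_citations: int) -> int:
    """Apply an SJR-style cap: self-citations beyond one third of a journal's
    total citations are excluded from the calculation."""
    allowed = round(total_citations / 3)
    return min(self_citations, allowed)

# Invented figures: of 120 total citations, at most 40 self-citations count.
counted = cap_self_citations(50, 120)   # capped down to 40
uncapped = cap_self_citations(20, 120)  # under the cap, counted in full
```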

Altmetrics

altmetrics = alternative metrics

"altmetrics is the creation and study of new metrics based on the Social Web for analyzing, and informing scholarship." (altmetrics.org)

How are altmetrics calculated? 

Different sources go into altmetric calculations, depending on the provider and the information they use. Generally, a high altmetric score indicates that an item has received a lot of attention, and that it has received what the provider has decided is "quality" attention (e.g. a news post might be weighted more heavily than a Twitter mention).
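The weighted-attention idea above can be sketched like this (the source weights are invented for illustration and are not any provider's actual values):

```python
# Invented weights: providers give more weight to sources they judge to carry
# higher-quality attention (e.g. news coverage over a tweet).
WEIGHTS = {"news": 8, "blog": 5, "twitter": 1}

def altmetric_style_score(mentions: dict[str, int]) -> int:
    """Weighted sum of mentions per source type; unknown sources score zero."""
    return sum(WEIGHTS.get(source, 0) * count for source, count in mentions.items())

# Invented mention counts: 2 news stories, 1 blog post, 10 tweets.
score = altmetric_style_score({"news": 2, "blog": 1, "twitter": 10})
```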

Altmetrics can help researchers understand how their outputs are being shared and discussed via social media and online, and may supplement the information gained from traditional indicators.

The Altmetric Bookmarklet is a free browser plug-in which lets you see the Altmetric data for any publication with a DOI.

Strengths

  • Speed - altmetrics can accumulate more quickly than traditional indicators such as citations.
  • Range - altmetrics can be gathered for many types of research outputs, not just scholarly articles.
  • Granularity - altmetrics provide indicators at the article level, rather than journal level.
  • Macro view - altmetrics can give a fuller picture of research impact using many indicators, not just citations.
  • Public impact - altmetrics can measure impact outside the academic world, where people may use but not formally cite research.
  • Sharing - if researchers get credit for a wider range of research outputs, such as datasets, it could motivate further sharing.

Weaknesses

  • Reliability - like any indicator, there's a potential for gaming. Also, altmetrics may indicate popularity with the public, but not necessarily quality research.
  • Difficulty - altmetrics can be difficult to collect, for example bloggers or tweeters may not use unique identifiers for articles.
  • Relevance - there are many different altmetrics providers available and it can be hard to determine which are the most relevant and worth taking time to collect.
  • Acceptance - currently, many funders (and some institutions) use traditional indicators to measure research impact.
  • Context - use of online tools may differ by discipline, geographic region, and over time, making altmetrics difficult to interpret.

Dimensions Plus

Dimensions Plus is a modern, linked research data infrastructure and tool that brings discovery and access to research together in one place: grants, publications, citations, clinical trials, patents and policy documents. The Guide to the Dimensions Data Approach provides an overview of the Dimensions database content.

For comprehensive information and support, use the Dimensions support site.


University of Exeter LibGuide is licensed under CC BY 4.0