Research Impact


Citation Indexes Overview


What are index numbers?

Citation index numbers provide a way to measure impact beyond raw citation counts. Index numbers can be calculated for individual articles, a group/list of publications, or even all the articles published in a journal or field (see our Journal Impact page).

What is the "best" index number?

Generally, the "best" measurement depends on what matters to you. The h-index is the most widely known index measurement. Some alternative measurements, like the g-index, address specific issues with the h-index. Other measurements target recent publications and citations, such as the the contemporary h-index. 

Other Citation Index Numbers

Alternatives to the h-index include:

  • g-index: Gives more weight to highly cited publications. The original h-index is insensitive to "outliers": a few papers with very high citation counts will not sway the h-index score much. The g-index lets highly cited papers play a larger role in the index and tends to emphasize visibility and "lifetime achievement."
  • hc-index (contemporary h-index): Gives more weight to recent publications. The original h-index favors senior researchers with extensive publication records, even if they have ceased publishing. The hc-index attempts to correct this and favors researchers who are currently publishing.
  • i10-index: Measures the number of papers that have at least 10 citations. Introduced (and used) by Google Scholar.
  • m-quotient: Divides the h-index by the number of years since the researcher's first published paper. The m-quotient was proposed to help younger researchers who may not yet have long publication lists. (A short computational sketch of these measures follows below.)
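To make these measures concrete, here is a minimal Python sketch (the citation counts, h value, and years are hypothetical, purely for illustration) computing the g-index, i10-index, and m-quotient from a list of per-paper citation counts:

    # Hypothetical citation counts for one author's papers (illustration only).
    citations = [45, 20, 11, 9, 4, 3, 1, 0]

    def g_index(cites):
        """Largest g such that the top g papers together have at least g*g citations."""
        ranked = sorted(cites, reverse=True)
        total, g = 0, 0
        for i, c in enumerate(ranked, start=1):
            total += c
            if total >= i * i:
                g = i
        return g

    def i10_index(cites):
        """Number of papers with at least 10 citations (the Google Scholar measure)."""
        return sum(1 for c in cites if c >= 10)

    def m_quotient(h, first_pub_year, current_year):
        """h-index divided by the number of years since the first published paper."""
        return h / max(current_year - first_pub_year, 1)

    print(g_index(citations))                                     # 8 for these counts
    print(i10_index(citations))                                   # 3
    print(m_quotient(4, first_pub_year=2016, current_year=2024))  # 0.5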

For more index measurements, we suggest "Reflections on the h-index," by Prof. Anne-Wil Harzing, University of Melbourne.

What is the h-index?

The h-index attempts to correlate a researcher's total publications and total citations. It was proposed by Jorge E. Hirsch in 2005 ("An index to quantify an individual's scientific research output," PNAS, November 15, 2005, vol. 102, no. 46, 16569-16572). For more information, see the Wikipedia article.

Graph of the h-index, from Wikipedia.

How do I calculate my h-index?

  • Web of Science or Google Scholar will automatically calculate the h-index for the list of publications in your profile. 
  • Publish or Perish will calculate h-index (and many other index numbers) for an author's publications. 
  • If you want to calculate an h-index manually, Hirsch defines the h-index as follows: "A scientist has index h if h of his or her Np papers have at least h citations each and the other (Np − h) papers have ≤ h citations each." (See the sketch below.)
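As an illustration of that definition, here is a minimal Python sketch (the citation counts are hypothetical):

    def h_index(citation_counts):
        """Largest h such that h papers have at least h citations each."""
        ranked = sorted(citation_counts, reverse=True)
        h = 0
        for i, c in enumerate(ranked, start=1):
            if c >= i:
                h = i
            else:
                break
        return h

    print(h_index([45, 20, 11, 9, 4, 3, 1, 0]))  # 4: four papers have at least 4 citations each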


Citation Indexes: Scopus & Web of Science


Scopus is a citation index: it collects abstracts and citation data for the articles published in the academic journals it indexes, which are selected according to specific criteria.

[Screenshot: "Documents by year" graph from a Scopus analysis of search results]

This citation data can be used to analyze scholarly research in many ways, including by topic, author, affiliation, publication, time period, and other factors. When gathering articles for a literature review, Scopus can help you confirm that you have covered the key articles on your topic.

Scopus includes a few very useful ways to analyze articles in groups, either as a set of search results or as the set of articles that cite a particular article. The "Documents by year" graph above, generated for a search, shows how many articles on that topic were published each year.


Database Help

  • Scopus Search Help (Elsevier) How to search for a document in Scopus, including search operators, filters, and a list of document types included in the database.
  • Scopus Tutorials (Elsevier) Tutorials to guide your usage of the database.

Sources & Coverage

Contents include:

  • Conference proceedings
  • Scholarly book series

For details on coverage:

  • Scopus Content Overview: What's included and how much.
  • Scopus Sources: Search for titles by name or subject area, or download the entire Source List (.xlsx file).
  • Scopus Content Selection and Advisory Board: This group of international researchers, scientists, and librarians decides which journals meet the criteria to be indexed in Scopus.

Using Scopus as an Author

The Scopus algorithm uses article metadata from journals indexed by Scopus to create Scopus Author Profiles whenever two or more articles are linked to one author name. Authors with similar names are assigned different Scopus Author Identifiers, and it is the responsibility of the individual author to ensure that citations are assigned to the correct identifier.

  • In addition to contact and citation information, your Author Profile can also show awarded grants and preprints when indexed by Scopus.
  • Unique identifier: Scopus Author Identifier. Errors can be corrected through submission to the Author Feedback Wizard, which is also how you can request to link your ORCID account.
  • Total citation count
  • h-index score, reflecting both the number of articles published and how often they have been cited
  • Field-Weighted Citation Impact
  • Scopus Author Profile FAQs How Author Profiles are made and edited.
  • How Do I Use the Author Feedback Wizard? (Scopus) Elsevier support page about using the Author Feedback Wizard to submit requests for profile edits.

Journal Metrics

Journal-level metrics

  • CiteScore: "CiteScore calculates the average number of citations received in 4 calendar years to 5 peer-reviewed document types (research articles, review articles, conference proceedings, data papers, and book chapters) published in a journal in the same four years." That is, the number of citations a journal receives over a 4-year period divided by the total number of documents it published in that same 4-year period. The CiteScore methodology was revised in 2020, and all current CiteScore values have been recalculated accordingly. (A worked sketch of this calculation follows the list.)
  • SCImago Journal Rank (SJR), by the Scimago Research Group: "[T]he average number of weighted citations received in the selected year by the documents published in the selected journal in the three previous years."
  • Source-Normalized Impact per Paper (SNIP), by CWTS Journal Indicators: "[T]he number of citations given in the present year to publications in the past three years divided by the total number of publications in the past three years."
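As a worked illustration of the CiteScore arithmetic (the counts below are hypothetical, not taken from any real journal):

    # CiteScore for a hypothetical journal: citations received in 2020-2023 to
    # eligible documents published in 2020-2023, divided by the number of
    # eligible documents published in 2020-2023.
    citations_2020_2023 = 1200   # hypothetical
    documents_2020_2023 = 400    # hypothetical

    cite_score = citations_2020_2023 / documents_2020_2023
    print(f"CiteScore = {cite_score:.1f}")   # CiteScore = 3.0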

Please Note

A common way to judge a journal's effect on a field of research is citation data: tracking the number of times its articles are cited. This data feeds the decisions of researchers looking to publish, librarians looking to subscribe, and promotion-and-tenure committees judging the work of researchers. But the value of a journal to its field might be seen in measures other than citation counts, so while journal citation data provides a good data point to keep in mind when making your own decision, it should not be the only one you consider.

Article Metrics

Scopus metrics

  • Total number of citations (per date range)
  • Citations per year (per date range)
  • Citation benchmarking
  • Field-Weighted Citation Impact ("FWCI is the ratio of the document's citations to the average number of citations received by all similar documents over a three-year window."); see the brief sketch below
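A brief sketch of the FWCI ratio (hypothetical numbers; the "expected" value stands in for the average Scopus computes for similar documents):

    article_citations = 18        # hypothetical
    expected_citations = 12.0     # hypothetical average for similar documents

    fwci = article_citations / expected_citations
    print(round(fwci, 2))         # 1.5 -- above 1.0 means more cited than expected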

PlumX altmetrics (tracking article activity online)

  • Usage (clicks, downloads, saves, etc.)
  • Mentions (news articles, blogposts)
  • Captures (bookmarks)
  • Social Media (shares, tweets, etc.)
  • Citations (journal indexes and patents)

Author Metrics

h-index and h-graph: "A researcher's performance based on career publications, as measured by the lifetime number of citations that each published article receives; h-indices indicate a balance between productivity (scholarly output) and citation impact (citation count)." (Source: Scopus Metrics)

For More Information

  • CiteScore Journal Metric - FAQ (Scopus) More info about CiteScore from Scopus.
  • Scopus Metrics The rundown of metrics used in Scopus for journals, articles and authors.



Measuring Your Impact: Impact Factor, Citation Analysis, and Other Metrics

  • Measuring Your Impact
  • Citation Analysis
  • Find Your H-Index
  • Other Metrics/ Altmetrics
  • Journal Impact Factor (IF)
  • Selecting Publication Venues

How to Measure Your Impact (PowerPoint)

  • How to Measure Your Impact. Feel free to use this PowerPoint and change it as you need; however, please credit Sandra De Groote in your document or presentation.

About the H-index

The h-index is an index to quantify an individual's scientific research output (J. E. Hirsch). It attempts to measure both the scientific productivity and the apparent scientific impact of a scientist, and is based on the set of the researcher's most cited papers and the number of citations they have received in other people's publications (Wikipedia). A scientist has index h if h of [his/her] Np papers have at least h citations each, and the other (Np − h) papers have at most h citations each.

Find your h-index at:

  • Web of Science
  • Google Scholar

Ways to Measure Impact

There are various tools and methods for measuring the impact of an individual or their scholarship.

  • Several databases (Web of Science, Scopus, and Google Scholar) will provide an h-index for an individual based on the publications indexed in those tools.
  • Find out more about these tools and how to use them by clicking the Find Your H-Index tab.
  • UIC has access to a number of resources that identify cited works, including Web of Science, Scopus, and Google Scholar.
  • Find out more about these tools and how to use them by clicking the Citation Analysis tab.
  • To find out more about altmetrics and tools for obtaining altmetrics data, click the Other Metrics/Altmetrics tab.
  • To find out more about the impact factor and tools that measure and rank journals within specific disciplines, click the Journal Impact Factor tab.

About Citation Analysis

What is Citation Analysis?

The process whereby the impact or "quality" of an article is assessed by counting the number of times other authors mention it in their work.

Citation analysis involves counting the number of times an article is cited by other works to measure the impact of a publication or author. The caveat, however, is that no single citation analysis tool collects all publications and their cited references. For a thorough analysis of the impact of an author or a publication, one needs to look in multiple databases to find all possible cited references. A number of resources are available at UIC that identify cited works, including Web of Science, Scopus, Google Scholar, and other databases with limited citation data.

Citation Analysis - Why use it?

To find out how much impact a particular article or author has had, by showing which other authors cited the work within their own papers. The h-index is one specific method that uses citation analysis to determine an individual's impact.


About Journal Impact

Impact Factor: What is it? Why use it?

The  impact factor (IF)  is a measure of the frequency with which the average article in a journal has been cited in a particular year. It is used to measure the importance or rank of a journal by calculating the times its articles are cited.

How is the Impact Factor calculated?

The calculation is based on a two-year period: divide the number of citations received in a given year by articles the journal published in the two preceding years by the number of citable articles it published in those two years.

Calculation of 2010 IF of a journal:
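The original guide illustrates the calculation with a graphic that is not reproduced here; the standard two-year calculation, shown with hypothetical numbers, is:

    A = citations received in 2010 by items the journal published in 2008 and 2009
    B = citable items (e.g., articles and reviews) the journal published in 2008 and 2009
    2010 impact factor = A / B

    Example (hypothetical): A = 600 citations, B = 240 citable items, so IF = 600 / 240 = 2.5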


Indian J Orthop. 2016 Mar-Apr; 50(2).

What is indexing

Ish Kumar Dhammi

Department of Orthopaedics, UCMS and Guru Teg Bahadur Hospital, New Delhi, India

Rehan Ul Haq

The prestige of any journal is judged by how many abstracting and indexing services cover that journal. It has been observed in the last few years that authors have started searching for indexed journals in which to publish their articles, probably because this has become a mandatory requirement for further promotion of teaching faculty in medical colleges and institutions. However, the big question is: what, after all, is an "indexed journal"? Is a journal considered indexed if it is documented in a local database, a regional database, or any continental database? Based on the available literature, we would like to clarify in the following paragraphs the history of indexing, what actual indexing is, and what nonindexing is.

A citation index is an ordered list of cited articles, each accompanied by a list of citing articles.1 The citing article is identified as the source and the cited article as the reference. An abstracting and indexing service is a product that a publisher sells or makes available; the journal contents are searchable using subject headings (keywords, author names, title, abstract, etc.) in the available database.2 Being represented in the relevant online abstracting and indexing services is an essential factor for the success of a journal. Today, searching is done online, so it is imperative that a journal be represented in the relevant online search systems. A citation index is a kind of bibliographic database, an index of citations between publications, allowing the user to easily establish which later documents cite which earlier documents.3

A form of citation index was first found in the 12th century in Hebrew religious literature. Legal citation indexes appeared in the 18th century and were made popular by citators such as Shepard's Citations (1873).3 In 1960, Eugene Garfield's Institute for Scientific Information (ISI) introduced the first citation index for papers published in academic journals, first the Science Citation Index (SCI) and later the Social Sciences Citation Index and the Arts and Humanities Citation Index. The first automated citation indexing was done by CiteSeer in 1997. Other sources for such data include Google Scholar and Elsevier's Scopus.3

Currently, the major citation indexing services are:

  • SCI and SCI-Expanded: Published by ISI, part of Thomson Reuters. As mentioned, SCI was originally produced by ISI and created by Eugene Garfield (1964).4,5 The SCI database has two aims: first, to identify what each scientist has published and, second, where and how often the papers by that scientist are cited. The SCI's electronic version is called Web of Science.4 SCI-Expanded indexes 8,073 journals with citation references across 174 scientific disciplines in the science edition.6
  • Scopus: Scopus (Elsevier) is a bibliographic database containing abstracts and citations for academic journal articles. It covers 21,000 titles from over 5,000 publishers.7 It is available online only.
  • Indian Citation Index (ICI): ICI is a new online citation database and web platform for periodically measuring the performance of Indian research.8 This online bibliographic database was launched in 2009. ICI covers more than 800 journals published from India in science, technology, medicine, and the social sciences.8

In addition, “CiteSeer” and Google Scholar’ are freely available online.

Index Medicus/MEDLARS/MEDLINE/Entrez and PubMed

John Shaw Billings, Head of the Library of the Surgeon General's Office, United States Army, which later evolved into the United States National Library of Medicine (NLM), started Index Medicus (IM). IM was a comprehensive bibliographic index of scientific journal articles related to medical science, in print form, published between 1879 and 2004. NLM began computerizing the indexing work in 1960 and called it MEDLARS, a bibliographic database, which later became MEDLINE. Thus, IM became the print presentation of MEDLINE's content. Both the print presentation (IM) and the online database (MEDLINE) continued until 2004. In December 2004, the last issue of IM was published (volume 45); the stated reason for discontinuing the print publication was that online resources had supplanted it. The electronic presentations of MEDLINE's contents also evolved, first with proprietary online services (accessed mostly at libraries), later with CD-ROMs, and then with Entrez and PubMed. PubMed is thus a free search engine that accesses the MEDLINE database. PubMed greatly accelerated the shift of access to MEDLINE from something one did at the library to something one could do anywhere.9 An abridged version was published from 1970 to 1997 as the Abridged IM; the abridged edition lives on as a subset of the journals covered by PubMed (core clinical journals).

Embase/Excerpta Medica

Embase is the database counterpart of Excerpta Medica (the print version); it is a biomedical and pharmacological database of published literature. Embase is produced by Elsevier and contains over 28 million records from more than 8,400 titles, with up-to-date information about drugs published in the literature. Embase enables tracking and retrieval of drug information.10

Index Copernicus

Index Copernicus (IC)11 is an online database of user-contributed information, including profiles of scientists as well as of scientific institutions, publications, and projects; it was established in 1999 in Poland. The database is named after Nicolaus Copernicus and operated by IC International. However, IC's evaluation methodology has been criticized.12

PubMed Central

PubMed Central is a free digital repository that archives publicly accessible full-text articles. About 1,600 journals automatically deposit their articles in PubMed Central.

According to the Editor Insights series from Wolters Kluwer, there are four major online bibliographic sites: MEDLINE, PubMed Central, ISI, and Scopus.7 Inclusion in MEDLINE confers a mark of quality upon a publication, PubMed Central gives greater access to open access content, and ISI provides an official impact factor. Inclusion in Scopus gives a clear view of journal metrics and provides an h-index and citation impact.7

There are also certain services that are not true abstracting and indexing services, even though many publishers claim to be "indexed" in them, such as Scribd, Cabell's Directories, SlideShare, Google Docs, Open J-Gate, and New Journal.

The Medical Council of India considers the following to be indexing agencies: Scopus, PubMed, MEDLINE, Embase/Excerpta Medica, Index Medicus, and IC.12

To conclude, the citation indexing services proper are SCI and SCI-Expanded; the rest are search engines or online bibliographic databases. The major such bibliographic sites are MEDLINE (the most prestigious, with data searchable via PubMed), ISI, Scopus, and the Indian Citation Index (emerging).


scite: A smart citation index that displays the context of citations and classifies their intent using deep learning


Handling Editor: Ludo Waltman

  • Funder(s):  National Institute on Drug Abuse
  • Award Id(s): 4R44DA050155-02
Josh M. Nicholson , Milo Mordaunt , Patrice Lopez , Ashish Uppala , Domenic Rosati , Neves P. Rodrigues , Peter Grabitz , Sean C. Rife; scite: A smart citation index that displays the context of citations and classifies their intent using deep learning. Quantitative Science Studies 2021; 2 (3): 882–898. doi: https://doi.org/10.1162/qss_a_00146


Citation indices are tools used by the academic community for research and research evaluation that aggregate scientific literature output and measure impact by collating citation counts. Citation indices help measure the interconnections between scientific papers but fall short because they fail to communicate contextual information about a citation. The use of citations in research evaluation without consideration of context can be problematic because a citation that presents contrasting evidence to a paper is treated the same as a citation that presents supporting evidence. To solve this problem, we have used machine learning, traditional document ingestion methods, and a network of researchers to develop a “smart citation index” called scite , which categorizes citations based on context. Scite shows how a citation was used by displaying the surrounding textual context from the citing paper and a classification from our deep learning model that indicates whether the statement provides supporting or contrasting evidence for a referenced work, or simply mentions it. Scite has been developed by analyzing over 25 million full-text scientific articles and currently has a database of more than 880 million classified citation statements. Here we describe how scite works and how it can be used to further research and research evaluation.


Citations are a critical component of scientific publishing, linking research findings across time. The first citation index in science, created in 1960 by Eugene Garfield and the Institute for Scientific Information, aimed to “be a spur to many new scientific discoveries in the service of mankind” ( Garfield, 1959 ). Citation indices have facilitated the discovery and evaluation of scientific findings across all fields of research. Citation indices have also led to the establishment of new research fields, such as bibliometrics, scientometrics, and quantitative studies, which have been informative in better understanding science as an enterprise. From these fields have come a variety of citation-based metrics, such as the h -index, a measurement of researcher impact ( Hirsch, 2005 ); the Journal Impact Factor (JIF), a measurement of journal impact ( Garfield, 1955 , 1972 ); and the citation count, a measurement of article impact. Despite the widespread use of bibliometrics, there have been few improvements in citations and citation indices themselves. Such stagnation is partly because citations and publications are largely behind paywalls, making it exceedingly difficult and prohibitively expensive to introduce new innovations in citations or citation indices. This trend is changing, however, with open access publications becoming the standard ( Piwowar, Priem, & Orr, 2019 ) and organizations such as the Initiative for Open Citations ( Initiative for Open Citations, 2017 ; Peroni & Shotton, 2020 ) helping to make citations open. Additionally, with millions of documents being published each year, creating a citation index is a large-scale challenge involving significant financial and computational costs.

Historically, citation indices have only shown the connections between scientific papers without any further contextual information, such as why a citation was made. Because of the lack of context and limited metadata available beyond paper titles, authors, and the date of publications, it has only been possible to calculate how many times a work has been cited, not analyze broadly how it has been cited. This is problematic given citations’ central role in the evaluation of research. In short, not all citations are made equally, yet we have been limited to treating them as such.

Here we describe scite (scite.ai), a new citation index and tool that takes advantage of recent advances in artificial intelligence to produce “Smart Citations.” Smart Citations reveal how a scientific paper has been cited by providing the context of the citation and a classification system describing whether it provides supporting or contrasting evidence for the cited claim, or if it just mentions it.

Such enriched citation information is more informative than a traditional citation index. For example, when Viganó, von Schubert et al. (2018) cites Nicholson, Macedo et al. (2015) , traditional citation indices report this citation by displaying the title of the citing paper and other bibliographic information, such as the journal, year published, and other metadata. Traditional citation indices do not have the capacity to examine contextual information or how the citing paper used the citation, such as whether it was made to support or contrast the findings of the cited paper or if it was made in the introduction or the discussion section of the citing paper. Smart Citations display the same bibliographical information shown in traditional citation indices while providing additional contextual information, such as the citation statement (the sentence containing the in-text citation from the citing article), the citation context (the sentences before and after the citation statement), the location of the citation within the citing article (Introduction, Materials and Methods, Results, Discussion, etc.), the citation type indicating intent (supporting, contrasting, or mentioning), and editorial information from Crossref and PubMed, such as corrections and whether the article has been retracted ( Figure 1 ). Scite previously relied on Retraction Watch data but moved away from this due to licensing issues. Going forward, scite will use its own approach 1 to retraction detection, as well as data from Crossref and PubMed.
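To make the shape of a Smart Citation concrete, one record can be pictured roughly as follows (a hypothetical, simplified sketch in Python; the field names and values are illustrative and do not reflect scite's actual API schema):

    # Hypothetical shape of one Smart Citation record (illustration only).
    smart_citation = {
        "citing_doi": "10.1234/hypothetical.citing",    # hypothetical DOI
        "cited_doi": "10.5678/hypothetical.cited",      # hypothetical DOI
        "citation_statement": "Consistent with [12], we observed the same effect.",
        "citation_context": ["Previous sentence.", "Next sentence."],
        "section": "Results",                 # where the in-text citation appears
        "classification": "supporting",       # supporting | contrasting | mentioning
        "confidence": 0.90,                   # model confidence in the label
        "self_citation": False,
        "editorial_notices": [],              # e.g., corrections or retraction flags
    }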

Example of scite report page. The scite report page shows citation context, citation type, and various features used to filter and organize this information, including the section where the citation appears in the citing paper, whether or not the citation is a self-citation, and the year of the publication. The example scite report shown in the figure can be accessed at the following link: https://scite.ai/reports/10.7554/elife.05068.


Adding such information to citation indices has been proposed before. In 1964, Garfield described an "intelligent machine" to produce "citation markers," such as "critique" or, jokingly, "calamity for mankind" (Garfield, 1964). Citation types describing various uses of citations have been systematically described by Peroni and Shotton in CiTO, the Citation Typing Ontology (Peroni & Shotton, 2012). Researchers have used these classifications or variations of them in several bibliometric studies, such as the analysis of citations (Suelzer, Deal et al., 2019) made to the retracted Wakefield paper (Wakefield, Murch et al., 1998), which found most citations to be negative in sentiment. Leung, Macdonald et al. (2017) analyzed the citations made to a five-sentence letter purporting to show opioids as nonaddictive (Porter & Jick, 1980), finding that most citations were uncritically citing the work. Based on these findings, the journal appended a public health warning to the original letter. In addition to citation analyses at the individual article level, citation analyses taking into account the citation type have also been performed on subsets of articles or even entire fields of research. Greenberg (2009) discovered that citations were being distorted, for example being used selectively to exclude contradictory studies to create a false authority in a field of research, a practice carried into grant proposals. Selective citing might be malicious, as suggested in the Greenberg study, but it might also simply reflect sloppy citation practices or citing without reading. Indeed, Letrud and Hernes (2019) recently documented many cases where people were citing reports for the opposite conclusions than the original authors made.

Despite the advantages of citation types, citation classification and analysis require substantial manual effort on the part of researchers to perform even small-scale analyses ( Pride, Knoth, & Harag, 2019 ). Automating the classification of citation types would allow researchers to dramatically expand the scale of citation analyses, thereby allowing researchers to quickly assess large portions of scientific literature. PLOS Labs attempted to enhance citation analysis with the introduction of “rich citations,” which included various additional features to traditional citations such as retraction information and where the citation appeared in the citing paper ( PLOS, 2015 ). However, the project seemed to be mostly a proof of principle, and work on rich citations stopped in 2015, although it is unclear why. Possible reasons that the project did not mature reflect the challenges of accessing the literature at scale, finding a suitable business model for the application, and classifying citation types with the necessary precision and recall for it to be accepted by users. It is only recently that machine learning techniques have evolved to make this task possible, as we demonstrate here. Additional resources, such as the Colil Database ( Fujiwara & Yamamoto, 2015 ) and SciRide Finder ( Volanakis & Krawczyk, 2018 ) both allow users to see the citation context from open access articles indexed in PubMed Central. However, adoption seems to be low for both tools, presumably due to limited coverage of only open access articles. In addition to the development of such tools to augment citation analysis, various researchers have performed automated citation typing. Machine learning was used in early research to identify citation intent ( Teufel, Siddharthan, & Tidhar, 2006 ) and recently Cohan, Ammar et al. (2019) used deep learning techniques. Athar (2011) , Yousif, Niu et al. (2019) , and Yan, Chen, and Li (2020) also used machine learning to identify positive and negative sentiments associated with the citation contexts.

Here, by combining the largest citation type analysis performed to date and developing a useful user interface that takes advantage of the extra contextual information available, we introduce scite, a smart citation index.

2.1. Overview

The scite platform is built on four components:

  • the retrieval of scientific articles;
  • the identification and matching of in-text citations and references within a scientific article;
  • the matching of references against a bibliographic database; and
  • the classification of the citation statements into citation types using deep learning.

The scite ingestion process. Documents are retrieved from the internet, as well as being received through file transfers directly from publishers and other aggregators. They are then processed to identify citations, which are then tied to items in a paper’s reference list. Those citations are then verified, and the information is inserted into scite’s database.


We describe the four components in more detail below.

2.2. Retrieval of Scientific Documents

Access to full-text scientific articles is necessary to extract and classify citation statements and the citation context. We utilize open access repositories such as PubMed Central and a variety of open sources as identified by Unpaywall ( Else, 2018 ), such as open access publishers’ websites, university repositories, and preprint repositories, to analyze open access articles. Other relevant open access document sources, such as Crossref TDM and the Internet Archive have been and are continually evaluated as new sources for document ingestion. Subscription articles used in our analyses have been made available through indexing agreements with over a dozen publishers, including Wiley, BMJ, Karger, Sage, Europe PMC, Thieme, Cambridge University Press, Rockefeller University Press, IOP, Microbiology Society, Frontiers, and other smaller publishers. Once a source of publications is established, documents are retrieved on a regular basis as new articles become available to keep the citation record fresh. Depending on the source, documents may be retrieved and processed anywhere between daily and monthly.

2.3. Identification of In-Text Citations and References from PDF and XML Documents

A large majority of scientific articles are only available as PDF files 2 , a format designed for visual layout and printing, not text-mining. To match and extract citation statements from PDFs with high fidelity, an automated process for converting PDF files into reliable structured content is required. Such conversion is challenging, as it requires identifying in-text citations (the numerical or textual callouts that refer to a particular item in the reference list), identifying and parsing the full bibliographical references in the reference list, linking in-text citations to the correct items in this list, and linking these items to their digital object identifiers (DOIs) in a bibliographic database. As our goal is to eventually process all scientific documents, this process must be scalable and affordable. To accomplish this, we utilize GROBID, an open-source PDF-to-XML converter tool for scientific literature ( Lopez, 2020a ). The goal of GROBID is to automatically convert scholarly PDFs into structured XML representations suitable for large-scale analysis. The structuration process is realized by a cascade of supervised machine learning models. The tool is highly scalable (around five PDF documents per second on a four-core server), is robust, and includes a production-level web API, a Docker image, and benchmarking facilities. GROBID is used by many large scientific information service providers, such as ResearchGate, CERN, and the Internet Archive to support their ingestion and document workflows ( Lopez, 2020a ). The tool is also used for creating machine-friendly data sets of research papers, for instance, the recent CORD-19 data set ( Wang, Lo et al., 2020 ).
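As an illustration of this step, GROBID can be run as a local service and queried over HTTP. The sketch below is only a minimal example (it assumes a GROBID instance at localhost:8070 and a local file named paper.pdf, both hypothetical) that converts a PDF into TEI XML, from which reference lists and citation contexts can then be extracted:

    import requests

    # Assumes a GROBID service is running locally (e.g., via its Docker image).
    GROBID_URL = "http://localhost:8070/api/processFulltextDocument"

    with open("paper.pdf", "rb") as pdf:   # hypothetical input file
        response = requests.post(GROBID_URL, files={"input": pdf}, timeout=120)

    response.raise_for_status()
    tei_xml = response.text   # TEI XML with body text, reference list, and in-text callouts
    print(tei_xml[:500])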

Particularly relevant to scite, GROBID was benchmarked as the best open source bibliographical references parser by Tkaczyk, Collins et al. (2018) and has a relatively unique focus on citation context extraction at scale, as illustrated by its usage for building the large-scale Semantic Scholar Open Research Corpus (S2ORC), a corpus of 380.5 million citations, including citation mentions excerpts from the full-text body ( Lo, Wang et al., 2020 ).

In addition to PDFs, some scientific articles are available as XML files, such as the Journal Article Tag Suite (JATS) format. Formatting articles in PDF and XML has become standard practice for most mainstream publishers. While structured XML can solve many issues that need to be addressed with PDFs, XML full texts appear in a variety of different native publisher XML formats, often incomplete and inconsistent from one to another, loosely constrained, and evolving over time into specific versions.

To standardize the variety of XML formats we receive into a common format, we rely upon the open-source tool Pub2TEI ( Lopez, 2020b ). Pub2TEI converts various XML styles from publishers to the same standard TEI format as the one produced by GROBID. This centralizes our document processing across PDF and XML sources.

2.4. Matching References Against the Bibliographic Database Crossref

Once we have identified and matched the in-text citation to an item in a paper’s reference list, this information must be validated. We use an open-source tool, biblio-glutton ( Lopez, 2020c ), which takes a raw bibliographical reference, as well as optionally parsed fields (title, author names, etc.) and matches it against the Crossref database—widely regarded as the industry standard source of ground truth for scholarly publications 3 . The matching accuracy of a raw citation reaches an F-score of 95.4 on a set of 17,015 raw references associated with a DOI, extracted from a data set of 1,943 PMC articles 4 compiled by Constantin (2014) . In an end-to-end perspective, still based on an evaluation with the corpus of 1,943 PMC articles, combining GROBID PDF extraction of citations and bibliographical references with biblio-glutton validations, the pipeline successfully associates around 70% of citation contexts to cited papers with correctly identified DOIs in a given PDF file. When the full-text XML version of an article is available from a publisher, references and linked citation contexts are normally correctly encoded, and the proportion of fully solved citation contexts corresponding to the proportion of cited paper with correctly identified DOIs is around 95% for PMC XML JATS files. The scite platform today only ingests publications with a DOI and only matches references against bibliographical objects with a registered DOI. The given evaluation figures have been calculated relative to these types of citations.

2.5. Task Modeling and Training Data

Extracted citation statements are classified into supporting, contrasting, or mentioning, to identify studies that have tested the claim and to evaluate how a scientific claim has been evaluated in the literature by subsequent research.

We emphasize that scite is not doing sentiment analysis. In natural language processing, sentiment analysis is the study of affective and subjective statements. The most common affective state considered in sentiment analysis is a mere polar view from positive sentiment to negative sentiment, which appeared to be particularly useful in business applications (e.g., product reviews and movie reviews). Following this approach, a subjective polarity can be associated with a citation to try to capture an opinion about the cited paper. The evidence used for sentiment classification relies on the presence of affective words in the citation context, with an associated polarity score capturing the strength of the affective state ( Athar, 2014 ; Halevi & Schimming, 2018 ; Hassan, Imran et al., 2018 ; Yousif et al., 2019 ). Yan et al. (2020) , for instance, use a generic method called SenticNet to identify sentiments in citation contexts extracted from PubMed Central XML files, without particular customization to the scientific domain (only a preprocessing to remove the technical terms from the citation contexts is applied). SenticNet uses a polarity measure associated with 200,000 natural language concepts, propagated to the words and multiword terms realizing these concepts.

In contrast, scite focuses on the authors’ reasons for citing a paper. We use a discrete classification into three discursive functions relative to the scientific debate; see Murray, Lamers et al. (2019) for an example of previous work with typing citations based on rhetorical intention. We consider that for capturing the reliability of a claim, a classification decision into supporting or contrasting must be backed by scientific arguments. The evidence involved in our assessment of citation intent is directed to the factual information presented in the citation context, usually statements about experimental facts and reproducibility results or presentation of a theoretical argument against or agreeing with the cited paper.

Examples of supporting, contrasting, and mentioning citation statements are given in Table 1 , with explanations describing why they are classified as such, including examples where researchers have expressed confusion or disagreement with our classification.

Real-world examples of citation statement classifications with examples explaining why a citation type has or has not been assigned. Citation classifications are based on the following two requirements: there needs to be a written indication that the statement supports or contrasts the cited paper; and there needs to be an indication that it provides evidence for this assertion.

Importantly, just as it is critical to optimize for accuracy of our deep learning model when classifying citations, it is equally important to make sure that the right terminology is used and understood by researchers. We have undergone multiple iterations of the design and display of citation statements and even the words used to define our citation types, including using previous words such as refuting and disputing to describe contrasting citations and confirming to describe supporting citations. The reasons for these changes reflect user feedback expressing confusion over certain terms as well as our intent to limit any potentially inflammatory interpretations. Indeed, our aim with introducing these citation types is to highlight differences in research findings based on evidence, not opinion. The main challenge of this classification task is the highly imbalanced distribution of the three classes. Based on manual annotations of different publication domains and sources, we estimate the average distribution of citation statements as 92.6% mentioning, 6.5% supporting, and 0.8% contrasting statements. Obviously, the less frequent the class, the more valuable it is. Most of the efforts in the development of our automatic classification system have been directed to address this imbalanced distribution. This task has required first the creation of original training data by experts—scientists with experience in reading and interpreting scholarly papers. Focusing on data quality, the expert classification was realized by multiple-blind manual annotation (at least two annotators working in parallel on the same citation), followed by a reconciliation step where the disagreements were further discussed and analyzed by the annotators. To keep track of the progress of our automatic classification over time, we created a holdout set of 9,708 classified citation records. To maintain a class distribution as close as possible to the actual distribution in current scholarly publications, we extracted the citation contexts from Open Access PDF of Unpaywall by random sampling with a maximum of one context per document.

We separately developed a working set where we tried to oversample the two less frequent classes (supporting, contrasting) with the objective of addressing the difficulties implied by the imbalanced automatic classification. We exploited the classification scores of our existing classifiers to select more likely supporting and contrasting statements for manual classification. At the present time, this set contains 38,925 classified citation records. The automatic classification system was trained with this working set, and continuously evaluated with the immutable holdout set to avoid as much bias as possible. An n -fold cross-evaluation on the working set, for instance, would have been misleading because the distribution of the classes in this set was artificially modified to boost the classification accuracy of the less frequent classes.

Before reconciliation, the observed average interannotator agreement percentage was 78.5% in the open domain and close to 90% for batches in biomedicine. It is unclear what accounts for the difference. Reconciliation, further completed with expert review by core team members, resulted in highly consensual classification decisions, which contrast with typical multiround disagreement rates observed with sentiment classification. Athar (2014), for instance, reports Cohen's k annotator agreement of 0.675, and Ciancarini, Di Iorio et al. (2014) report k = 0.13 and k = 0.15 for the property groups covering confirms/supports and critiques citation classification labels. A custom open source document annotation web application, doccano (Nakayama, Kubo et al., 2018), was deployed to support the first round of annotations.

Overall, the creation of our current training and evaluation holdout data sets has been a major 2-year effort involving up to eight expert annotators and nearly 50,000 classified citation records. In addition to the class, each record includes the citation sentence, the full “snippet” (citation sentence plus previous and next sentences), the source and target DOI, the reference callout string, and the hierarchical list of section titles where the citation occurs.

2.6. Machine Learning Classifiers

Improving the classification architecture: After initial experiments with RNN (Recursive Neural Network) architectures such as BidGRU (Bidirectional Gated Recurrent Unit, an architecture similar to the approach of Cohan et al. (2019) for citation intent classification), we obtained significant improvements with the more recently introduced ELMo (Embeddings from Language Models) dynamic embeddings ( Peters, Neumann et al., 2018 ) and an ensemble approach. Although the first experiments with BERT (Bidirectional Encoder Representations from Transformers) ( Devlin, Chang et al., 2019 ), a breakthrough architecture for NLP, were disappointing, fine-tuning SciBERT (a science-pretrained base BERT model) ( Beltagy, Lo, & Cohan, 2019 ) led to the best results and is the current production architecture of the platform.

Using oversampling and class weighting techniques: It is known that the techniques developed to address imbalanced classification in traditional machine learning can be applied successfully to deep learning too ( Johnson & Khoshgoftaar, 2019 ). We introduced in our system oversampling of less frequent classes, class weighting, and metaclassification with three binary classifiers. These techniques provide some improvements, but they rely on empirical parameters that must be re-evaluated as the training data changes.
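To illustrate the class-weighting idea (a minimal sketch only; this is not scite's Veracity code, and the data, architecture, and weights are hypothetical), Keras allows per-class weights to be passed to fit() so that errors on the rare supporting and contrasting classes cost more than errors on the dominant mentioning class:

    import numpy as np
    import tensorflow as tf

    # Hypothetical toy data with the imbalance described in the text
    # (0 = mentioning, 1 = supporting, 2 = contrasting).
    x_train = np.random.rand(1000, 768).astype("float32")   # stand-in for sentence embeddings
    y_train = np.random.choice(3, size=1000, p=[0.926, 0.065, 0.009])

    # Weight each class roughly inversely to its frequency (hypothetical values).
    class_weight = {0: 1.0, 1: 14.0, 2: 100.0}

    model = tf.keras.Sequential([
        tf.keras.layers.Dense(128, activation="relu", input_shape=(768,)),
        tf.keras.layers.Dense(3, activation="softmax"),
    ])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
    model.fit(x_train, y_train, epochs=3, batch_size=32, class_weight=class_weight)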

Extending the training data for less frequent classes: As mentioned previously, we use an active learning approach to select the likely less frequent citation classes based on the scores of the existing classifiers. By focusing on edge cases over months of manual annotations, we observed significant improvements in performance for predicting contrasting and supporting cases.

Progress on classification results over approximately 1 year, evaluated on a fixed holdout set of 9,708 examples. In parallel with these various iterations on the classification algorithms, the training data was raised from 30,665 (initial evaluation with BidGRU) to 38,925 examples (last evaluation with SciBERT) via an active learning approach.

Accuracy of SciBERT classifier, currently deployed on the scite platform, evaluated on a holdout set of 9,708 examples.

Note: When deploying classification models in production, we balance the precision/recall so that all the classes have a precision higher than 80%.

Given the unique nature of scite, there are a number of additional considerations. First, scaling is a key requirement of scite, which addresses the full corpus of scientific literature. While providing good results, the prediction with the ELMo approach is 20 times slower than with SciBERT, making it less attractive for our platform. Second, we have experimented with using section titles to improve classifications—for example, one might expect to find supporting and contrasting statements more often in the Results section of a paper and mentioning statements in the Introduction. Counterintuitively, including section titles in our model had no impact on F -scores, although it did slightly improve precision. It is unclear why including section titles failed to improve F -scores. However, it might relate to the challenge of correctly identifying and normalizing section titles from documents. Third, segmenting scientific text into sentences presents unique challenges due to the prevalence of abbreviations, nomenclatures, and mathematical equations. Finally, we experimented with various context windows (i.e., the amount of text used in the classification of a citation) but were only able to improve the F -score for the contrasting category by eight points by manually selecting the most relevant phrases in the context window. Automating this process might improve classifications, but doing so presents a significant technical challenge. Other possible improvements of the classifier include multitask training, refinement of classes, increase of training data via improved active learning techniques, and integration of categorical features in the transformer classifier architecture.

We believe that the specificity of our evidence-based citation classes, the size and the focus on the quality of our manually annotated data set (multiple rounds of blind annotations with final collective reconciliation), the customization and continuous improvement of a state of the art deep learning classifier, and finally the scale of our citation analysis distinguishes our work from existing developments in automatic citation analysis.

2.7. Citation Statement and Classification Pipeline

TEI XML data is parsed in Python using the BeautifulSoup library and further segmented into sentences using a combination of spaCy ( Honnibal, Montani et al., 2018 ) and Natural Language Toolkit’s Punkt Sentence Tokenizer ( Bird, Klein, & Loper, 2009 ). These sentence segmentation candidates are then postprocessed with custom rules to better fit scientific texts, existing text structures, and inline markups. For instance, a sentence split is forbidden inside a reference callout, around common abbreviations not supported by the general-purpose sentence segmenters, or if it is conflicting with a list item, paragraph, or section break.
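As an illustration of this kind of postprocessing (a simplified sketch, not the actual scite rules), one such custom rule can merge back any candidate split that falls inside an unclosed bracketed reference callout:

    import nltk
    # nltk.download("punkt")  # one-time download of the Punkt sentence models

    def split_sentences(text):
        """Split text into sentences, but never inside an unclosed [...] reference callout."""
        candidates = nltk.sent_tokenize(text)
        merged = []
        for sent in candidates:
            # If the previous fragment has more '[' than ']', the split fell inside a
            # reference callout such as [1, 2], so glue this fragment back on.
            if merged and merged[-1].count("[") > merged[-1].count("]"):
                merged[-1] = merged[-1] + " " + sent
            else:
                merged.append(sent)
        return merged

    text = "This effect was reported earlier [1, 2]. It remains debated."
    print(split_sentences(text))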

The implementation of the classifier is realized by a component we have named Veracity , which provides a custom set of deep learning classifiers built on top of the open source DeLFT library ( Lopez, 2020d ). Veracity is written in Python and employs Keras and TensorFlow for text classification. It runs on a single server with an NVIDIA GP102 (GeForce GTX 1080 Ti) graphics card with 3,584 CUDA cores. This single machine is capable of classifying all citation statements as they are processed. Veracity retrieves batches of text from the scite database that have yet to be classified, processes them, and updates the database with the results. When deploying classification models in production, we balance the precision/recall so that all the classes have a precision higher than 80%. For this purpose, we use the holdout data set to adjust the class weights at the prediction level. After evaluation, we can exploit all available labeled data to maximize the quality, and the holdout set captures a real-world distribution adapted to this final tuning.

2.8. User Interface

The resulting classified citations are stored and made available on the scite platform. Data from scite can be accessed in a number of ways (downloads of citations to a particular paper; the scite API, etc.). However, users will most commonly access scite through its web interface. Scite provides a number of core features, detailed below.

The scite report page ( Figure 1 ) displays summary information about a given paper. All citations in the scite database to the paper are displayed, and users can filter results by classification (supporting, mentioning, contrasting), paper section (e.g., Introduction, Results), and the type of citing article (e.g., preprint or book). Users can also search for text within citation statements and surrounding citation context. For example, if a user wishes to examine how an article has been cited with respect to a given concept (e.g., fear), they can search for citation contexts that contain that key term. Each citation statement is accompanied by a classification label, as well as an indication of how confident the model is of said classification. For example, a citation statement may be classified as supporting with 90% confidence, meaning that the model is 90% certain that the statement supports the cited paper. Finally, each citation statement can be flagged as incorrect by individual users, who can also justify their objection. After a citation statement has been flagged as incorrect, it will be reviewed and verified by two independent reviewers, and, if both agree, the recommended change will be implemented. In this way, scite supplements machine learning with human interventions to ensure that citations are accurately classified. This is an important feature of scite that allows researchers to interact with the automated citation types, correcting classifications that might otherwise be difficult for a machine to classify. It also opens the possibility for authors and readers to add more nuance to citation typing by allowing them to annotate snippets.

To improve the utility and usability of the smart citation data, scite offers a variety of tools common to other citation platforms, such as Scopus, Web of Science, and other information retrieval software. These include literature search functionality for finding supported and contrasted research, visualizations for seeing research in context, reference checking for automatically evaluating the references of an uploaded manuscript against scite's data, and more. Scite also offers plugins for popular web browsers and reference management software (e.g., Zotero) that allow easy access to scite reports and data in native research environments.

3.1. Research Applications

A number of researchers have already made use of scite for quantitative assessments of the literature. For example, Bordignon (2020) examined self-correction in the scientific record and operationalized “negative” citations as those that scite classified as contrasting. They found that negative citations are rare, even among works that have been retracted. In another example from our own group, Nicholson et al. (2020) examined scientific papers cited in Wikipedia articles and found that—like the scientific literature as a whole—the vast majority presented findings that have not been subsequently verified. Similar analyses could also be applied to articles in the popular press.

One can imagine a number of additional metascientific applications. For example, network analyses with directed graphs, valenced edges (by type of citation—supporting, contrasting, and mentioning), and individual papers as nodes could aid in understanding how various fields and subfields are related. A simplified form of this analysis is already implemented on the scite website (see Figure 3 ), but more complicated analyses that assess traditional network indices, such as centrality and clustering, could be easily implemented using standard software libraries and exports of data using the scite API.
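
For readers who want to try such an analysis, the sketch below builds a small directed graph with valenced edges and computes a few basic indices using networkx. The DOIs and classification labels are invented; real edge lists would come from an export of classified citations (for example, via the scite API).

```python
# Sketch: a directed citation graph with "valence" edge attributes and a few
# basic network indices. The edge list below is made up.
import networkx as nx

edges = [
    ("10.1000/a", "10.1000/b", "supporting"),
    ("10.1000/c", "10.1000/b", "contrasting"),
    ("10.1000/c", "10.1000/a", "mentioning"),
    ("10.1000/d", "10.1000/b", "supporting"),
]

G = nx.DiGraph()
for citing, cited, valence in edges:
    G.add_edge(citing, cited, valence=valence)

# In-degree centrality approximates raw citation impact.
print(nx.in_degree_centrality(G))

# Restricting to supporting edges gives a crude "supported-by" count per paper.
supporting = G.edge_subgraph(
    [(u, v) for u, v, d in G.edges(data=True) if d["valence"] == "supporting"])
print(dict(supporting.in_degree()))

# Cluster structure can be explored on the undirected projection.
print(list(nx.connected_components(G.to_undirected())))
```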

Figure 3. A citation network representation using the scite Visualization tool. The nodes represent individual papers, with the edges representing supporting (green) or contrasting (blue) citation statements. The graph is interactive and can be expanded and modified for other layouts. The interactive visualization can be accessed at the following link: https://scite.ai/visualizations/global-analysis-of-genome-transcriptome-9L4dJr?dois%5B0%5D=10.1038%2Fmsb.2012.40&dois%5B1%5D=10.7554%2Felife.05068&focusedElement=10.7554%2Felife.05068

3.2. Implications for Scholarly Publishers

There are a number of implications for scholarly publishers. At a very basic level, this is evident in the features that scite provides that are of particular use to publishers. For example, the scite Reference Check parses the reference list of an uploaded document and produces a report indicating how items in the list have been cited, flagging those that have been retracted or have otherwise been the subject of editorial concern. This type of screening can help publishers and editors ensure that articles appearing in their journals do not inadvertently cite discredited works. Evidence in scite’s own database indicates that this addresses a significant problem: in 2019 alone, nearly 6,000 published papers cited works that had been retracted prior to 2019. Given that over 95% of citations made to retracted articles are in error ( Schneider, Ye et al., 2020 ), had the Reference Check tool been applied to these papers during the review process, the majority of these mistakes could have been caught.
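
As a rough illustration of this kind of screening, the sketch below extracts DOIs from a reference list with a regular expression and flags any that appear in a locally maintained set of retracted DOIs. The pattern, sample references, and retraction list are all made up; the actual scite Reference Check works against scite's own database of editorial notices rather than a local list.

```python
# Sketch: flag references to retracted works in a reference list, assuming a
# locally maintained set of retracted DOIs is available. Toy data only.
import re

DOI_PATTERN = re.compile(r'10\.\d{4,9}/[^\s"<>]+', re.IGNORECASE)

RETRACTED_DOIS = {"10.1000/retracted.123"}          # toy retraction list
retracted_lower = {d.lower() for d in RETRACTED_DOIS}

references = [
    "Doe J, et al. A striking result. J Exp Res. 2018. doi:10.1000/retracted.123",
    "Roe A. A solid method paper. Methods J. 2020. https://doi.org/10.1000/ok.456",
]

for ref in references:
    match = DOI_PATTERN.search(ref)
    doi = match.group(0).rstrip(".,;") if match else None
    if doi and doi.lower() in retracted_lower:
        print(f"WARNING: cites a retracted work ({doi}): {ref}")
```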

However, there are additional implications for scholarly publishing that go beyond the features provided by scite. We believe that by providing insights into how articles are cited—rather than simply noting that the citation has occurred—scite can alter the way in which journals, institutions, and publishers are assessed. Scite provides journals and institutions with dashboards that indicate the extent to which papers with which they are associated have been supported or contrasted by subsequent research ( Figure 4 ). Even without reliance on specific metrics, the approach that scite provides prompts the question: What if we normalized the assessment of journals, institutions, and researchers in terms of how they were cited rather than simply the fact that they were cited?

Figure 4. A scite Journal Dashboard showing the aggregate citation information at the journal level, including editorial notices and the scite Index, a journal metric that shows the ratio of supporting citations over supporting plus contrasting citations. Access to the journal dashboard in the figure and other journal dashboards is available here: https://scite.ai/journals/0138-9130
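
The scite Index mentioned in the caption is a simple ratio and can be computed directly; the counts in the example below are illustrative only.

```python
# The scite Index from the caption above: supporting citations divided by
# supporting plus contrasting citations. The counts are made up.
def scite_index(supporting: int, contrasting: int) -> float:
    total = supporting + contrasting
    return supporting / total if total else float("nan")

print(scite_index(supporting=480, contrasting=20))  # 0.96
```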

3.3. Implications for Researchers

Given the fact that nearly 3 million scientific papers are published every year ( Ware & Mabe, 2015 ), researchers increasingly report feeling overwhelmed by the amount of literature they must sift through as part of their regular workflow ( Landhuis, 2016 ). Scite can help by assisting researchers in identifying relevant, reliable work that is narrowly tailored to their interests, as well as better understanding how a given paper fits into the broader context of the scientific literature. For example, one common technique for orienting oneself to new literature is to seek out the most highly cited papers in that area. If the context of those citations is also visible, the value of a given paper can be more completely assessed and understood. There are, however, additional—although perhaps less obvious—implications. If citation types are easily visible, it is possible that researchers will be incentivized to make replication attempts easier (for example, by providing more explicit descriptions of methods or instruments) in the hope that their work will be replicated.

3.4. Limitations

At present, the biggest limitation for researchers using scite is the size of the database. At the time of this writing, scite has ingested over 880 million separate citation statements from over 25 million scholarly publications. However, there are over 70 million scientific publications in existence ( Ware & Mabe, 2015 ); scite is constantly ingesting new papers from established sources and signing new licensing agreements with publishers, so this limitation should abate over time. However, given that the ingestion pipeline fails to identify approximately 30% of citation statements/references in PDF files (~5% in XML), the platform will necessarily contain fewer references than services such as Google Scholar and Web of Science, which do not rely on ingesting the full text of papers. Even if references are reliably extracted and matched with a DOI or directly provided by publishers, a reference is currently only visible on the scite platform if it is matched with at least one citation context in the body of the article. As such, the data provided by scite will necessarily miss a measurable percentage of citations to a given paper. We are working to address these limitations in two ways: First, we are working toward ingesting more full-text XML and improving our ability to detect document structure in PDFs. Second, we have recently supplemented our Smart Citation data with “traditional” citation metadata provided by Crossref (see “Without Citation Statements” shown in Figure 1 ), which surfaces references that we would otherwise miss. Indeed, this Crossref data now includes references from publishers with previously closed references such as Elsevier and the American Chemical Society. These traditional citations can later be augmented to include citation contexts as we gain access to full text.

Another limitation is related to the classification of citations. First, as noted previously, the Veracity software does not perfectly classify citations. This can partly be explained by the fact that language in the (biomedical) sciences is not highly standardized (unlike law, where shepardizing is an established term for the “process of using a citator to discover the history of a case or statute to determine whether it is still good law”; see Lehman & Phelps, 2005 ). However, the accuracy of the classifier will likely increase over time as technology improves and the training data set grows. Second, the ontology currently employed by scite (supporting, mentioning, and contrasting) necessarily misses some nuance regarding how references are cited in scientific papers. One key example relates to what “counts” as a contrasting citation: At present, this category is limited to instances where new evidence is presented (e.g., a failed replication attempt or a difference in findings). However, it might also be appropriate to include conceptual and logical arguments against a given paper in this category. Moreover, in our system, the evidence behind the supporting or contrasting citation statements is not assessed; thus a supporting citation statement might come from a paper whose experimental evidence is weak, and vice versa. We do display the citation tallies that papers have received so that users can assess this, but it would be exceedingly difficult to also classify the sample size, statistics, and other parameters that define how robust a finding is.

The automated extraction and analysis of scientific citations is a technically challenging task, but one whose time has come. By surfacing the context of citations rather than relying on their mere existence as an indication of a paper’s importance and impact, scite provides a novel approach to addressing pressing questions for the scientific community, including incentivizing replicable works, assessing an increasingly large body of literature, and quantitatively studying entire scientific fields.

We would like to thank Yuri Lazebnik for his help in conceptualizing and building scite.

This work was supported by NIDA grant 4R44DA050155-02.

Josh M. Nicholson: Conceptualization, Data acquisition, Analysis and interpretation of data, Writing—original draft, Writing—Review and editing. Milo Mordaunt: Data acquisition, Analysis and interpretation of data. Patrice Lopez: Conceptualization, Analysis and interpretation of data, Writing—original draft, Writing—Review and editing. Ashish Uppala: Analysis and interpretation of data, Writing—original draft, Writing—Review and editing. Domenic Rosati: Analysis and interpretation of data, Writing—original draft, Writing—Review and editing. Neves P. Rodrigues: Conceptualization. Sean C. Rife: Conceptualization, Data acquisition, Analysis and interpretation of data, Writing—original draft, Writing—Review and editing. Peter Grabitz: Conceptualization, Data acquisition, Analysis and interpretation of data, Writing—original draft, Writing—Review and editing.

The authors are shareholders and/or consultants or employees of Scite Inc.

Code used in the ingestion of manuscripts is available at https://github.com/kermitt2/grobid , https://github.com/kermitt2/biblio-glutton , and https://github.com/kermitt2/Pub2TEI . The classification of citation statements is performed by a modified version of DeLFT ( https://github.com/kermitt2/delft ). The training data used by the scite classifier is proprietary and not publicly available. The 880+ million citation statements are available at scite.ai but cannot be shared in full due to licensing arrangements made with publishers.

Details of how retractions and other editorial notices can be detected through an automated examination of metadata—even when there is no explicit indication that such notice(s) exist—will be made public via a manuscript currently in preparation.

As an illustration, the ISTEX project was an initiative of the French state that led to the purchase of 23 million full-text articles from mainstream publishers (Elsevier, Springer-Nature, Wiley, etc.), mainly published before 2005, corresponding to an investment of €55 million in acquisitions. The delivery of full-text XML, when available, was a contractual requirement, but an XML format with a structured body could be delivered by publishers for only around 10% of the publications.

For more information on the history and prevalence of Crossref, see https://www.crossref.org/about/ .

The evaluation data and scripts are available on the project GitHub repository; see biblio-glutton ( Lopez, 2020c ).


What Is a Journal Index, and Why is Indexation Important?


A journal index, or a list of journals organized by discipline, subject, region, and other factors, can be used by other researchers to search for studies and data on certain topics. As an author, publishing your research in an indexed journal increases the credibility and visibility of your work. Here we help you understand journal indexing better, as well as how to benefit from it.

Updated on May 13, 2022


A journal index, also called a ‘bibliographic index' or ‘bibliographic database', is a list of journals organized by discipline, subject, region or other factors.

Journal indexes can be used to search for studies and data on certain topics. Both scholars and the general public can search journal indexes.

Journals in indexes have been reviewed to ensure they meet certain criteria. These criteria may include:

  • Ethics and peer review policies
  • Assessment criteria for submitted articles
  • Editorial board transparency

What is a journal index?

Indexed journals are important because they are often considered to be of higher scientific quality than non-indexed journals. For this reason, you should aim for publication in an indexed journal. AJE's Journal Guide journal selection tool can help you find one.

Journal indexes are created by different organizations, such as:

  • Public bodies: For example, PubMed is maintained by the United States National Library of Medicine. PubMed is the largest index for biomedical publications.
  • Analytic companies: For example, the Web of Science Core Collection is maintained by Clarivate Analytics. The WOS Core Collection includes journals indexed in the following sub-indexes: (1) Science Citation Index Expanded (SCIE); (2) Social Sciences Citation Index (SSCI); (3) Arts & Humanities Citation Index (AHCI); (4) Emerging Sources Citation Index.
  • Publishers: For example, Scopus is owned by Elsevier and maintained by the Scopus Content Selection and Advisory Board. Scopus includes journals in all disciplines, but the majority are science and technology journals.

Key types of journal indexes

You can choose from a range of journal indexes. Some are broad and are considered “general indexes”. Others are specific to certain fields and are considered “specialized indexes”.

For example:

  • The Science Citation Index Expanded includes mostly science and technology journals
  • The Arts & Humanities Citation Index includes mostly arts and humanities journals
  • PubMed includes mostly biomedical journals
  • The Emerging Sources Citation Index includes journals in all disciplines

Which index you choose will depend on your research subject area.

Some indexes, such as Web of Science , include journals from many countries. Others, such as the Chinese Academy of Science indexing system , are specific to certain countries or regions.

Choosing the type of index may depend on factors such as university or grant requirements.

Some indexes are open to the public, while others require a subscription. Many people searching for research papers will start with free search engines, such as Google Scholar , or free journal indexes, such as the Web of Science Master Journal List . Publishing in a journal in one or more free indexes increases the chance of your article being seen.

Journals in subscription-based indexes are generally considered high-quality journals. If the status of the journal is important, choose a journal in one or more subscription-based indexes.

Most journals belong to more than one index. To improve the visibility and impact of your article, choose a journal featured in multiple indexes.

How does journal indexing work?

All journals are checked for certain criteria before being added to an index. Each index has its own set of rules, but basic publishing standards include the following:

  • An International Standard Serial Number (ISSN). ISSNs are unique to each journal and indicate that the journal publishes issues on a recurring basis.
  • An established publishing schedule.
  • Digital Object Identifiers (DOIs) . DOIs are unique letter/number codes assigned to digital objects. The benefit of a DOI is that it will never change, unlike a website link.
  • Copyright requirements. A copyright policy helps protect your work and outlines the rules for the use or sharing of your work, whether it's copyrighted or has some form of creative commons licensing .
  • Other requirements can include conflict of interest statements, ethical approval statements, an editorial board listed on the website, and published peer review policies.

To be included in an index, a journal must submit an application and undergo an audit by the indexation board. Index board members (called auditors) will confirm certain information, such as the full listing of the editorial board on the website, the inclusion of ethics statements in published articles, established appeal and retraction processes, and more.

Why is journal indexing important?

As an author, publishing your research in an indexed journal increases the credibility and visibility of your work. Indexed journals are generally considered to be of higher scientific quality than non-indexed journals.

With the growth of fully open access journals and online-only journals, recognizing “predatory” journals and their publishers has become difficult. A journal's inclusion in one or more well-known indexes is a good sign that it is credible.

Moreover, more and more institutions are requiring publication in an indexed journal as a requirement for graduation, promotion, or grant funding.

As an author, it is important to ensure that your research is seen by as many eyes as possible. Index databases are often the first places scholars and the public will search for specific information. Publishing a paper in a non-indexed journal could be harmful in this context.

However, there are some exceptions, such as medical case reports.

Many journals don't accept medical case reports because they don't have high citation rates. However, several primary and secondary journals have been created specifically for case reports. Examples include the primary journal, BMC Medical Case Reports, and the secondary journal, European Heart Journal - Case Reports.

While many of these journals are indexed, they may not be indexed in the major indexes, though they are still highly acceptable journals.

Open access and indexation

With the recent increase in open access publishing, many journals have started offering an open access option. Other journals are completely open access, meaning they do not offer a traditional subscription service.

Open access journals have many benefits, such as:

  • High visibility. Anyone can access and read your paper.
  • Publication speed. It is generally quicker to post an article online than to publish it in a traditional journal format.

Identifying credible open access journals

Open access has made it easier for predatory journal publishers to attract unsuspecting or new authors. These predatory journal publishers often publish any article for a fee without peer review and with questionable ethical and copyright policies. Here we show you eight ways to spot predatory open access journals .

One way to identify credible open access journals is their index status. However, be aware that some predatory journals will falsely list indexes or display index logos on their websites. It is good practice to confirm on the index's own website that the journal is listed there before submitting your article.

Major journal indexing services

There are several journal indexes out there. Some of the most popular indexes are as follows:

Life Sciences and Hard Sciences

  • Science Citation Index Expanded (SCIE) Master Journal List
  • Engineering Index
  • Web of Science (now published by Clarivate Analytics, formerly by ISI and Thomson Reuters)
  • Chinese Academy of Sciences (CAS)

Humanities and Social Sciences

  • Arts & Humanities Citation Index (AHCI) Master Journal List
  • Social Sciences Citation Index (SSCI) Master Journal List

Indexation and impact factors

It is easy to assume that indexed journals will have higher impact factors, but indexation and impact factor are unrelated.

Many credible journals don't have impact factors, but they are indexed in several well-known indexes. Therefore, the lack of an impact factor may not accurately represent the credibility of a journal.

Of course, impact factors may be important for other reasons, such as institutional requirements or grant funding. Read this authoritative piece on the uses, importance, and limitations of impact factors .

Final Thoughts

Selecting an indexed journal is an important part of the publication journey. Indexation can tell you a lot about a journal. Publishing in an indexed journal can increase the visibility and credibility of your research. If you're having trouble selecting a journal for publication, consider learning more about AJE's journal recommendation service .

Catherine Zettel Nalen, MS (Medical and Veterinary Entomology, University of Florida), Academic Editor, Specialist, and Journal Recommendation Team Lead


Cited Reference Search

Search for records that have cited a published work, and discover how a known idea or innovation has been confirmed, applied, improved, extended, or corrected. Find out who’s citing your research and the impact your work is having on other researchers in the world.

In the Arts & Humanities Citation Index, you can use cited reference search to find articles that refer to or include an illustration of a work of art or a music score; these references are called implicit citations .

  • You may also search on Cited Year(s), Cited Volume, Cited Issue, Cited Pages, Cited Title, or Cited DOI
  • Click Search; results from the cited reference index that include the work you’re searching for appear in a table. Every reference in the cited reference index has been cited by at least one article indexed in the Web of Science. The first author of a cited work always displays in the Cited Author column. If the cited author you specified in step 1 is not the primary author, then the name of the author you specified follows the name of the first author (click Show all authors). If you retrieve too many hits, return to the Cited Reference Search page and add criteria for Cited Year, Cited Volume, Cited Issue, or Cited Pages.
A cited reference may not link to a full source record for any of the following reasons:
  • The cited reference is not a source article in the Web of Science
  • The reference contains incomplete or inaccurate information and cannot be linked to a source article
  • The reference refers to a document from a publication outside the timespan of your subscription; for example, the article was published in 1992, but your subscription only gives you access to 20 years of data
  • The cited item refers to a document from a publication not covered by a database in your subscription
  • Click Search to view your results.

Cited Reference Search Interface

Click View abbreviation list to see the abbreviations of journal and conference proceedings titles used as cited works; this list will open in a new browser tab.

When you complete a cited reference search, the number of citing items you retrieve may be smaller than the number listed in the Citing Articles column if your institution's subscription does not include all years of the database. In other words, the count in the Citing Articles column is not limited by your institution's subscription. However, your access to records in the product is limited by your institution's subscription.

  • Enter the name of the first author of a multi-authored article or book
  • Enter an abbreviated journal title followed by an asterisk or the first one or two significant words of a book title followed by an asterisk.
  • Try searching for the cited reference without entering a cited year in order to retrieve variations of the same cited reference. You can always return to the Cited Reference Search page and enter a cited year if you get too many references.
  • When searching for biblical references, enter Bible in the Cited Author field and the name of the book (Corinthians*, Matthew*, Leviticus*, etc.) in the Cited Work field. Ensure that you use the asterisk (*) wildcard in your search.

Follow these steps to find articles that have cited Brown, M.E. and Calvin, W.M. Evidence for crystalline water and ammonia ices on Pluto's satellite Charon. Science . 287 (5450): 107-109. January 7, 2000:

  • On the Cited Reference Search page, enter Brown M* in the Cited Author field.
  • Enter Science* in the Cited Work field.
  • Click Search to go to the Cited Reference Search table. This page shows all the results from the Web of Science cited reference index that matched the query.
  • Page through the results to find this reference:

Cited Reference Search Example

  • Select the check box to the left of the reference.
  • Click the See Results button to go to the Cited Reference Search Results page to see the list of articles that cite the article by Brown and Calvin.

Every cited reference in the Cited Reference Index contains enough information to uniquely identify the document. Because only essential bibliographic information is captured, and because author names and source publication titles are unified as much as possible, the same reference cited in two different records should appear the same way in the database. This unification is what makes possible the Times Cited number on the Full Record page.

However, not all references to the same publication can be unified. As a consequence, a cited reference may have variations in the product.

For example, consider these variations of a reference to an article by A.J. Bard published in volume 374 of Nature:

The first reference contains the correct volume number and other bibliographic information. The View Record link takes you to the Full Record, which has a Times Cited count of 31.

The second reference contains a different volume number and it does not have a View Record link. Because a journal cannot have two different volume numbers in the same publication year, it is obvious that this is an incorrect reference to the same article.

Click Export at the top of the Cited Reference Search table to export the cited reference search results to Excel.

Articles indexed in the Science Citation Index Expanded cite books, patents, and other types of publications in addition to other articles. You can do a cited reference search for a patent to find journal articles that have cited it.

If you know the patent number, enter it in the Cited Work field. If you do not know the patent number, try entering the name of the first listed inventor or patent assignee in the Cited Author field. For example, to find references to U.S. patent 4096196-A, enter 4096196 in the Cited Work field. If you also subscribe to Derwent Innovations Index and the patent is included in the Derwent database, the patents you find in the citation index will be linked to the corresponding full patent records in Derwent Innovations Index.

Self-citations refer to cited references that contain an author name that matches the name of the author of a citing article.

You may want to eliminate self-citations from the results of a Cited Reference Search by combining a Cited Reference Search with a search by the source author.

  • Perform a Cited Reference Search to find items that cite the works of a particular author. Ensure that you complete both steps of a Cited Reference Search.
  • Go to the search page. Enter the name of the same author in the Author field. Click the Search button.
  • Go to the advanced search page.
  • Combine the two searches you just completed in a Boolean NOT expression (for example, #1 NOT #2 ). The results of the Search (the items written by the author) should be the set on the right-hand side of the operator.

Articles indexed in the product cite books, patents, and other types of publications in addition to other articles. You can do a cited reference search on a book to find journal articles that have cited it.

You should identify a book by entering the name of the first listed author in the Cited Author field and the first word or words of the title in the Cited Work field. Many cited works are abbreviated. If you are not sure how a word has been spelled or abbreviated, enter the first few letters of the word, followed by an asterisk. For example, to search for records of articles that cite Edith Hamilton's book Mythology , you would enter Hamilton E* in the Cited Author field and Myth* in the Cited Work field.

Do not enter a year in the Cited Year field. Authors often cite a particular edition of a book, and the cited year is the year of the edition they are citing. Generally, you want to find all articles that cite a book, regardless of the particular edition cited.

For example, enter the following data on the Cited Reference Search page, and then click Search .

CITED AUTHOR Tuchman BW

CITED WORK Guns*

CITED YEAR 1962

Note the number of references that are retrieved. Now repeat the search using the following data:

CITED AUTHOR Tuchman B*

See how many more references you retrieved? Notice that the author has been cited as Tuchman B as well as Tuchman BW. Also, notice how many different cited years and cited page numbers there are for the same work.

Evaluating Research Impact: A Comprehensive Overview of Metrics and Online Databases

  • Conference paper
  • First Online: 20 December 2023

  • Seema Ukidve 16 ,
  • Ramsagar Yadav 17 ,
  • Mukhdeep Singh Manshahia 17 &
  • Jasleen Randhawa 18  

Part of the book series: Lecture Notes in Networks and Systems (LNNS, volume 855)

Included in the following conference series:

  • International Conference on Intelligent Computing & Optimization


The purpose of this research paper is to analyze and compare the various research metrics and online databases used to evaluate the impact and quality of scientific publications. The study focuses on the most widely used research metrics, such as the h-index, the Impact Factor (IF), and the number of citations. Additionally, the paper explores various online databases, such as Web of Science, Scopus, and Google Scholar, that are utilized to access and analyze research metrics. The study found that the h-index and IF are the most commonly used metrics for evaluating the impact of a publication. However, it was also found that these metrics have limitations and cannot be used as the sole criteria for evaluating the quality of research. The study also highlights the need for a comprehensive and holistic approach to research evaluation that takes into account multiple factors such as collaboration, interdisciplinary work, and societal impact. The analysis of online databases showed that while Web of Science and Scopus are considered to be the most reliable sources of research metrics, they may not cover all relevant publications, particularly those in less well-established or interdisciplinary fields. Google Scholar, on the other hand, is more inclusive but may not have the same level of accuracy and reliability as the other databases.
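
As a concrete illustration of the most widely used of the metrics this paper discusses, the following minimal sketch computes an h-index from a list of per-paper citation counts; the counts are invented.

```python
# Minimal sketch: compute an h-index from per-paper citation counts
# (the largest h such that h papers have at least h citations each).
def h_index(citations: list[int]) -> int:
    h = 0
    for rank, cites in enumerate(sorted(citations, reverse=True), start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

print(h_index([25, 8, 5, 3, 3, 1]))  # 3 (three papers have >= 3 citations; only three have >= 4)
```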



Acknowledgements

Authors are grateful to Punjabi University, Patiala for providing adequate library and internet facility.

Author information

Authors and affiliations.

Department of Mathematics, L. S. Raheja College of Arts and Commerce, Santacruz(W), Maharashtra, India

Seema Ukidve

Department of Mathematics, Punjabi University Patiala, Patiala, Punjab, India

Ramsagar Yadav & Mukhdeep Singh Manshahia

Panjab University Chandigarh, Chandigarh, India

Jasleen Randhawa


Corresponding author

Correspondence to Ramsagar Yadav .

Editor information

Editors and affiliations.

Faculty of Electrical and Electronics Engineering, Ton Duc Thang University, Modeling Evolutionary Algorithms Simulation and Artificial Intelligence, Ho Chi Minh City, Vietnam

Pandian Vasant

Department of Computer Science, Chittagong University of Engineering & Technology, Chittagong, Bangladesh

Mohammad Shamsul Arefin

Federal Scientific Agroengineering Center VIM, Laboratory of Non-traditional Energy Systems; Russian University of Transport, Department of Theoretical and Applied Mechanics, 127994 Moscow, Russia

Vladimir Panchenko

Department of Computer Science, UOW Malaysia KDU Penang University College, George Town, Malaysia

J. Joshua Thomas

Northwest University, Mmabatho, South Africa

Elias Munapo

Faculty of Engineering Management, Poznań University of Technology, Poznan, Poland

Gerhard-Wilhelm Weber

Facultad de Ciencias Económicas y Empresariales, Universidad Panamericana, Mexico City, Mexico

Roman Rodriguez-Aguilar


Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper.

Ukidve, S., Yadav, R., Manshahia, M.S., Randhawa, J. (2023). Evaluating Research Impact: A Comprehensive Overview of Metrics and Online Databases. In: Vasant, P., et al. Intelligent Computing and Optimization. ICO 2023. Lecture Notes in Networks and Systems, vol 855. Springer, Cham. https://doi.org/10.1007/978-3-031-50158-6_24


DOI : https://doi.org/10.1007/978-3-031-50158-6_24

Published : 20 December 2023

Publisher Name : Springer, Cham

Print ISBN : 978-3-031-50157-9

Online ISBN : 978-3-031-50158-6

eBook Packages : Intelligent Technologies and Robotics (R0)


Guidance on terminology, application, and reporting of citation searching: the TARCiS statement

  • Julian Hirt , research fellow and lecturer 1 2 3 ,
  • Thomas Nordhausen , research fellow 4 ,
  • Thomas Fuerst , medical information specialist 5 ,
  • Hannah Ewald , medical information specialist 5 ,
  • Christian Appenzeller-Herzog , medical information specialist 5
  • on behalf of the TARCiS study group
  • 1 Pragmatic Evidence Lab, Research Centre for Clinical Neuroimmunology and Neuroscience Basel, University Hospital Basel and University of Basel, Basel, Switzerland
  • 2 Department of Health, Eastern Switzerland University of Applied Sciences, St Gallen, Switzerland
  • 3 Department of Clinical Research, University Hospital Basel and University of Basel, Basel, Switzerland
  • 4 Institute of Health and Nursing Science, Medical Faculty, Martin Luther University Halle-Wittenberg, Halle (Saale), Germany
  • 5 University Medical Library, University of Basel, 4051 Basel, Switzerland
  • Correspondence to: C Appenzeller-Herzog christian.appenzeller{at}unibas.ch
  • Accepted 19 March 2024

Evidence syntheses adhering to systematic literature searching techniques are a cornerstone of evidence based healthcare. Beyond term based searching in electronic databases, citation searching is a prevalent search technique to identify relevant sources of evidence. However, for decades, citation searching methodology and terminology has not been standardised. An evidence guided, four round Delphi consensus study was conducted with 27 international methodological experts in order to develop the Terminology, Application, and Reporting of Citation Searching (TARCiS) statement. TARCiS comprises 10 specific recommendations, each with a rationale and explanation on when and how to conduct and report citation searching in the context of systematic literature searches. The statement also presents four research priorities, and it is hoped that systematic review teams are encouraged to incorporate TARCiS into standardised workflows.

Synthesising scientific evidence by looking at the citation relationships of a scientific record (ie, citation searching) was the underlying objective when the Science Citation Index, the antecedent of Web of Science, was introduced in 1963. 1 Although the availability of electronic citation indexes has increased, evidence syntheses in systematic reviews do not primarily rely on citation searching for literature retrieval but rather on search methods based on text and keywords. 2 When used in systematic review workflows, citation searching traditionally constitutes a supplementary search technique that builds on an initial set of references from the primary database search (seed references). 3

Citation searching is an umbrella term that entails various methods of citation based literature retrieval ( fig 1 ). Checking references cited by seed references, also known as backward citation searching, is the most prevalent and a mandatory step when conducting Cochrane reviews. 4 In forward citation searching, systematic reviewers can also assess the eligibility of articles that cite the seed references. Backward and forward citation searching are known as direct citation searching ( fig 1 ). They can be supplemented by indirect retrieval methods—namely, by co-citing citation searching (retrieving articles that share cited references with a seed reference) and co-cited citation searching (retrieving articles that share citing references with a seed reference).

Fig 1

Overview of citation searching methods. Direct (dark blue boxes) and indirect (light blue boxes) citation relationships of references are shown, relative to a seed reference; arrows denote the direction of citation (ie, source A citing target B); horizontal axis denotes time (ie, the chronology in which references were published relative to the seed reference). Visual examples of cited references (accessible via backward citation searching), citing references (accessible via forward citation searching), co-citing references (accessible via co-citing citation searching), and co-cited references (accessible via co-cited citation searching) are shown. Note that the total number of the co-citing and co-cited references of a seed reference far exceeds the number shown in the light blue boxes
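
The four retrieval methods shown in figure 1 can also be expressed as simple set operations over a citation graph. The sketch below uses a made-up mapping from each paper to the set of papers it cites; paper names are purely illustrative.

```python
# Sketch: the four citation searching methods from figure 1 as set operations
# over a toy mapping {paper: set of papers it cites}. Only papers whose
# reference lists we know need to appear as keys; all names are invented.
cites = {
    "seed": {"cited1", "cited2"},        # what the seed itself cites
    "citing1": {"seed", "cocited1"},     # cites the seed
    "citing2": {"seed"},                 # cites the seed
    "cociting1": {"cited1", "other"},    # shares a cited reference with the seed
    "unrelated": {"other"},
}
seed = "seed"

backward = cites[seed]                                         # cited references
forward = {p for p, refs in cites.items() if seed in refs}     # citing references
co_citing = {p for p, refs in cites.items()
             if p != seed and refs & backward}                 # share cited refs with the seed
co_cited = set().union(*(cites[p] for p in forward)) - {seed}  # cited alongside the seed

print("backward (cited):", backward)    # {'cited1', 'cited2'}
print("forward (citing):", forward)     # {'citing1', 'citing2'}
print("co-citing:", co_citing)          # {'cociting1'}
print("co-cited:", co_cited)            # {'cocited1'}
```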


Citation searching can contribute substantially to evidence retrieval and can show similar or even superior effectiveness and efficiency compared with text and keyword based searches. An audit of the different search methods used in a systematic review of complex evidence, for instance, revealed that 44% of all included studies were identified by backward citation searching, and 7% by forward citation searching. In comparison, initial text and keyword searches accounted for only 25% of included studies. 5 For the scoping review that collected methodological studies as a foundation for the present work, these figures were 28% and 12% for backward and forward citation searching, respectively, compared with 52% for extensive primary database searching. 6

The conduct of systematic reviews is prominently guided by standard recommendations such as those in the Cochrane handbook, 4 whereas their reporting is standardised by the Preferred Reporting Items for Systematic reviews and Meta-Analyses (PRISMA) statement. 7 In contrast, and despite its application by systematic reviewers for decades, standardised methodology and terminology for citation searching is not available. Of the three aspects (when to do citation searching, how to conduct it, and how to report it), limited guidance exists only for the third, in the PRISMA extension for reporting literature searches (PRISMA-S). 8 Unsurprisingly, methodological studies show considerable heterogeneity in terms of citation searching terminology and recommended best practices. 6 Even in a sample of Cochrane reviews, 13% did not use backward citation searching despite this being a mandatory step. 9 The lack of standardisation not only impairs the transparency, reproducibility, and comparability of systematic reviews, but might also reduce article recall, which could affect pooled effect estimates, guidance, and clinical decision making. On the other hand, uninformed use of citation searching in contexts where it is less useful might cause undue workloads.

We systematically collected evidence on the use, benefit, and reporting of citation searching 6 and put it through a four round, online Delphi study. Together with the Terminology, Application, and Reporting of Citation Searching (TARCiS) study group, an international panel of methodological experts, we aimed to develop consensus for recommendations on when and how to conduct citation searching, and on how to report it, including a consensus set of citation searching terms. Furthermore, we framed research priorities for future methodological development of citation searching in the context of systematic literature searches.

Summary points

The TARCiS (Terminology, Application, and Reporting of Citation Searching) statement provides guidance in which contexts citation searching is likely to be beneficial for systematic reviewers

TARCiS comprises 10 specific recommendations on when and how to conduct citation searching and how to report it in the context of systematic literature searches, and also frames four research priorities

The statement will contribute to a unified terminology, systematic application, and transparent reporting of citation searching and support those who are conducting or assessing citation searching methods

To develop the TARCiS statement, a stepwise approach comprising a scoping review of the methodological literature (step 1; reported in detail in a separate publication 6 ) and a Delphi study (step 2; reported in this publication) was chosen. The methods were prespecified in two study protocols. 10 11 The complete process is shown in figure 2 .

Fig 2

Flow diagram of the development process of the TARCiS (Terminology, Application, and Reporting of Citation Searching) statement. Actions and outcomes of the development phases of the TARCiS statement are shown. Appendix 1 shows more detailed reporting of consensus scores

Step 1: Scoping review

We conducted a scoping review on the terminology that describes citation searching, the methods and tools used for citation searching, and its benefit. We considered methodological studies of any design that aimed to assess the role of citation searching, compared multiple citation searching methods, or compared technical uses of citation searching within health related topics. We searched five bibliographic databases, conducted backward and forward citation searches of eligible studies and pertinent reviews, and consulted librarians and information specialists for further eligible studies. The results were summarised by descriptive statistics and narratively. The detailed methods of the scoping review have been published elsewhere. 6 10

Step 2: Delphi study

To develop consensus on recommendations and research priorities as tentatively derived from the results of step 1, 6 we performed a multistage online Delphi study. Delphi refers to a structured process where collective knowledge from an expert panel is synthesised using a series of questionnaires, each questionnaire adapted on the basis of the responses to the previous one. 12 13 14 We recruited an international panel of individuals experienced in conducting or reporting citation searching methods. For this, we invited, by email, authors of methodological studies identified in step 1, 6 and methodological experts from international systematic review organisations or from our professional networks to participate in the Delphi study.

The Delphi study comprised four prespecified rounds. 10 11 The first round was pretested by four non-study related academic affiliates. Each round covered four to five thematic parts (appendix 2; table 1 ). Briefly, part A dealt with the terminology framework to describe citation searching methods in eight domains (for details, refer to table 4 in Hirt et al 6 ). Part B contained pre-formulated recommendations on conduct and reporting of citation searching. Each recommendation was supported by a rationale and explanation text that were also subjected to collective consensus finding. Part C covered research priorities that were also derived from the scoping review. 6 Part D contained a free text field to collect general comments from the panellists. Part E was designed to collect sociodemographic information and was limited to Delphi round 1.

Data collection through four rounds of Delphi study to develop consensus on recommendations and research priorities of the TARCiS statement


Non-participating panellists were recorded as non-participators for a given round. Panellists who missed all rounds were recorded as non-responders. Recommendations and research priorities that had not yet reached the prespecified consensus of at least 75% were refined for the subsequent Delphi round. These refinements were based on the panellists’ comments. In rare cases, when additional valid suggestions from panellists for reformulation of rationale or explanation texts were submitted, recommendations that already reached the agreement threshold were also adapted and forwarded to the next Delphi round. For more methodological details on the Delphi study, see table 1 and the published protocols. 10 11

Deviations from the Delphi study protocol

For round 3 of the Delphi, we had originally planned to formulate one recommendation for each of the eight terminology domains ( table 1 , see also description to part A above). Depending on the votes, however, this approach might have led to the selection of inconsistent terms (eg, backward citation searching v forward citation tracking). Hence, we decided to use the terms that received the most votes in Delphi round 2 to formulate four term sets, which were consistent across all eight domains. Secondly, instead of using SosciSurvey 15 as a survey tool, 8 we switched to the Unipark/Enterprise Feedback Suite survey, 16 which provided enhanced design and functional features. Thirdly, in addition to personalised emails (person based approach), we originally intended to recruit panellists using professional mailing lists and central requests to systematic review organisations (organisation based approach). 8 However, because we had already recruited sufficient panellists using the person based approach (including individuals who were affiliated with various systematic review organisations), we waived the organisation based approach.

We identified 47 methodological studies that assessed the use, benefit, and reporting of citation searching. In 45 studies (96%), the use of citation searching showed an added value. Thirty two studies (68%) analysed the impact of citation searching in one or more previous systematic reviews. Application, terminology, and reporting of citation searching were heterogeneous. Details on the results of the scoping review can be found elsewhere. 6

Recruitment and characteristics of panellists

Of 35 experts identified and contacted, 30 declared an interest in participating and were invited to Delphi round 1. Three (10%) of the 30 panellists were non-responders. Table 2 summarises the personal and professional characteristics of the 27 participating panellists.

Characteristics of 27 panellists* participating in the Delphi study to develop consensus on recommendations and research priorities of the TARCiS statement

TARCiS statement: final recommendations, rationale and explanations, and research priorities

Items for data collection through the four Delphi rounds in parts A-E are summarised in table 1 . The Delphi study started with 41 terms describing different aspects of citation searching, eight draft recommendations with rationale texts on the conduct and reporting of citation searching, and one research priority (appendix 1). After Delphi round 4, the finalised TARCiS statement comprised 10 recommendations with rationale and explanation texts and four research priorities that reached consensus scores between 83% and 100%. Figure 2 and appendix 1 show details on content and consensus scores in rounds 1-4. An overview of all 14 TARCiS items omitting rationale and explanation texts is presented in box 1 . A terminology and reporting item checklist based on TARCiS recommendations 1 and 10 is available in appendix 3 and on the TARCiS website. 17

TARCiS statement

Recommendations on terminology, conduct, and reporting of citation searching.

The following terminology should be used to describe search methods that exploit citation relationships:

“Citation searching” as an umbrella term.

“Backward citation searching” to describe the sub-method retrieving and screening cited references.

“Reference list checking” to describe the sub-method retrieving and screening cited references by manually reviewing reference lists.

“Forward citation searching” to describe the sub-method retrieving and screening citing references.

“Co-cited citation searching” to describe the sub-method retrieving and screening co-cited references.

“Co-citing citation searching” to describe the sub-method retrieving and screening co-citing references.

“Iterative citation searching” to describe one or more repetition(s) of a search method that exploits citation relationships.

“Seed references” to describe relevant articles that are known beforehand and used as a starting point for any citation search.

For systematic search topics that are difficult to search for, backward and forward citation searching should be seriously considered as supplementary search techniques.

For systematic search topics that are easier to search for and addressed by a highly sensitive search, backward and forward citation searching are not explicitly recommended as supplementary search techniques. Reference list checking of included records can be used to confirm the sensitivity of the search strategy.

Backward and forward citation searching as supplementary search techniques should be based on all included records of the primary search (ie, all records that meet the inclusion criteria of the review after full text screening of the primary search results). Occasionally, it can be justified to deviate from this recommendation and either use further pertinent records as additional seed references or only a defined sample of the included records.

Backward citation searching should ideally be conducted by screening the titles and abstracts of the seed references as provided by a citation index. Screening titles as provided when checking reference lists of the seed references can still be performed.

Using the combined coverage of two citation indexes for citation searching to achieve more extensive coverage should be considered if access is available. This combination is especially meaningful if seed references cannot be found in one index and reference lists were not checked.

Before screening, the results of supplementary backward and forward citation searching should be deduplicated.

If citation searching finds additional eligible records, another iteration of citation searching should be considered using these records as new seed references.

Standalone citation searching should not be used for literature searches that aim at completeness of recall.

Reporting of citation searching should clearly state:

the seed references (along with a justification should the seed references differ from the set of included records from the results of the primary database search),

the directionality of searching (backward, forward, co-cited, co-citing),

the date(s) of searching (which might differ between rounds of iterative citation searching) (not applicable for reference list checking),

the number of citation searching iterations (and possibly the reason for stopping if the last iteration still retrieved additional eligible records),

all citation indexes searched (eg, Lens.org, Google Scholar, Scopus, citation indexes in Web of Science) and, if applicable, the tools that were used to access them (eg, Publish or Perish, citationchaser),

if applicable, information about the deduplication process (eg, manual/automated, the software or tool used),

the method of screening (ie, state whether the records were screened in the same way as the primary search results or, if not, describe the alternative method used), and

the number of citation searching results in the right column box of the PRISMA 2020 flow diagram for new or updated systematic reviews that included searches of databases, registers, and other sources .

Research priorities

The effectiveness, applicability, and conduct of indirect citation searching methods as supplementary search methods in systematic reviewing require further research (including retrieval of additional unique references, their relevance for the review and prioritisation of results).

Further research is needed to assess the value of citation searching. Potential research topics could be:

influence of citation searching on results and conclusions of systematic evidence syntheses,

topics or at least determinants of topics where citation searching likely/not likely has additional value, or

economic evaluation of citation searching to assess the cost and time of conducting citation searching in relation to its benefit.

Further research is needed to assess the best way to perform citation searching. Potential research topics could be:

optimal selection of seed references,

optimal use of indexes and tools and their combination to conduct citation searching,

methods and tools for deduplication of citation searching results,

subjective influences on citation searching (eg, experience of researcher, prevention of mistakes), or

reproducibility of citation searching.

Further research is needed to reproduce existing studies: Any recommendations in this Delphi that are based on only 1-2 studies require reproduction of these studies in the form of larger, prospectively planned studies that grade the evidence for each recommendation and propose additional research where the grade of evidence is weak.

The TARCiS checklist for terminology and reporting of citation searching is available for download. 17

PRISMA=Preferred Reporting Items for Systematic reviews and Meta-Analyses; TARCiS=Terminology, Application, and Reporting of Citation Searching.

Recommendation 1

Rationale and explanation supporting recommendation 1.

As compiled in a recent scoping review, 6 the reporting of citation searching methods is frequently unclear and far from being standardised. For example, “citation searching,” “snowballing,” or “co-citation searching” are sometimes used as methodological umbrella terms but also to denote a specific method such as backward or forward citation searching. 6 For clarity, standardised vocabulary is needed. The set of terms brought forward in this recommendation is consistent in itself as well as with the terminology used in PRISMA-S and PRISMA 2020 guidelines 8 18 and hence well suited for uniform reporting of citation searching.

Recommendation 2

Rationale and explanation supporting recommendation 2.

Evidence indicates that the ability of citation searching as a supplementary search technique to find additional unique records in a systematic literature search varies between reviews. 6 Searches for particular study designs (qualitative, mixed method, observational, prognostic, or diagnostic test studies) or health science topics such as non-pharmacological, non-clinical, public health, policy making, service delivery, or alternative medicine have been linked with effective supplementary citation searching. 19 20 21 22 The underlying reasons include poor transferability to text based searching owing to poor conceptual clarity, inconsistent terminology, or vocabulary overlaps with unrelated topics. 5 The ability of citation searching to find any publication type including unpublished or grey literature or literature that is not indexed in major databases (eg, concerning a developing country) might also be relevant. 23 However, a clear categorisation of topics that are difficult to search for is currently not possible and it remains for the review authors themselves to judge whether their review topic is likely to fall into this category.

For people conducting the search who have difficulty assessing whether the topic is difficult or easier to search for, we recommend that they opt for citation searching or consult an experienced information specialist. 24 If the search strategy does not exhaustively capture the topic, backward and forward citation searching might compensate for some of the potential loss of information.

Recommendation 3

Rationale and explanation supporting recommendation 3.

Evidence indicates that the ability of citation searching as a supplementary search technique to find additional unique references in a systematic literature search varies between reviews. 6 Searches for clearly defined clinical interventions as part of PICO (participant, intervention, comparison, outcome) questions have been linked with less effective supplementary citation searching, especially when the search strategies are sensitive and conducted in several databases. However, a clear categorisation of topics that are easier to search for is currently not possible, and it remains for the review authors themselves to judge whether their review topic is likely to fall into this category.

By checking reference lists within the full texts of seed references, review authors can test the sensitivity of their primary search strategy (ie, electronic database search). 25 If no additional relevant, unique studies are found, the primary search might have been sensitive enough. If additional relevant, unique studies are found, these could indicate that the primary search was not sensitive enough.

For individuals conducting the search who have difficulty assessing whether the topic is difficult or easier to search for, we recommend that they opt for citation searching or consult an experienced information specialist. 24 If, for whatever reason, the search strategy does not exhaustively capture the topic, backward and forward citation searching could compensate for some of the potential loss of information.

Recommendation 4

Rationale and explanation supporting recommendation 4.

The more seed references used, the better the chance that citation searching finds additional relevant unique records. While using only a sample of the included records as seed references might be enough, there is currently no evidence that could help decide how many seeds are needed or how to decide which might perform better. Hence, we recommend using all the records that meet the inclusion criteria of the review after full text screening of the primary database search results.

However, review authors could deviate from this recommendation if they deal with a very small or large number of included records. A very small number of included records might not yield additional relevant records or only have limited value. In this case, review authors could use further records as seed references for citation searching (eg, systematic reviews on the topic that were flagged during the screening phase). 26 A very large number of included records could lead to too many records to screen. In this case, review authors might use a selected sample of included records as seed references for citation searching. In the event of such deviation, authors should describe their rationale and sampling method (eg, random sample).
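A minimal sketch of that sampling deviation, assuming the review team caps the seed set at a reported size (the helper name and the cap of 100 are illustrative, not part of the recommendation): a fixed random seed keeps the sample reproducible and reportable.

```python
import random

def select_seeds(included_records: list[str], max_seeds: int | None = None,
                 rng_seed: int = 2024) -> list[str]:
    """Return all included records, or a reproducible random sample if a cap is set."""
    if max_seeds is None or len(included_records) <= max_seeds:
        return list(included_records)
    rng = random.Random(rng_seed)  # fixed seed so the sampling method can be reported
    return rng.sample(included_records, max_seeds)

included = [f"record_{i}" for i in range(300)]   # invented example of 300 included records
seeds = select_seeds(included, max_seeds=100)
print(len(seeds))  # 100
```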

Recommendation 5

Rationale and explanation supporting recommendation 5.

Citation searching workflows encompass two consecutive steps: retrieval of records and screening of retrieved records for eligibility. When using an electronic citation index for citation searching, retrieval and screening are usually separated. While forward citation searching requires a citation index, backward citation searching can also be performed by manually checking the reference lists of the seed references. Reference list checking is sometimes part of an established workflow, for example, during the eligibility assessment of the full text record or during data extraction. 25 Merging these two steps lets researchers see the context in which a reference was used and ensures that all references can be screened. However, reference list checking has three disadvantages:

The retrieval and screening phases are no longer separated, which makes reporting of the methods or results difficult and unclear

Citations from reference list checking cannot be deduplicated against each other or against the primary search results, which could add an unnecessarily high workload (see recommendation 7)

Eligibility assessments are restricted to the titles (instead of titles and abstracts), which could lead to relevant records being overlooked because of uninformative titles or vague citation contexts.

In recent years, online citation searching options via citation indexes or free to access citation searching tools have become more readily available leading to faster and easier procedures. 27 28 29 30 More and even better tools to facilitate this workflow are expected in the future. Combining citation searching via citation indexes with automated deduplication (free online tools available) 31 32 33 makes this recommendation feasible. A caveat is that a search in one citation index will in most cases fail to retrieve all the cited references. 34 35 Thus, references to some documents (such as websites, registry entries, or grey literature) that are less likely to be indexed in databases might only be retrievable by checking reference lists or only in some citation indexes. 3
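As one possible retrieval workflow, the sketch below collects cited and citing references for a single seed from the free OpenAlex index. OpenAlex is not named in the recommendation; the endpoint, the referenced_works field, and the cites: filter are assumptions based on OpenAlex's public API and should be verified before use (pagination and error handling are omitted).

```python
import requests

OPENALEX = "https://api.openalex.org/works"

def backward(openalex_id: str) -> list[str]:
    """Cited references of the seed (backward citation searching)."""
    work = requests.get(f"{OPENALEX}/{openalex_id}", timeout=30).json()
    return work.get("referenced_works", [])

def forward(openalex_id: str, per_page: int = 200) -> list[str]:
    """Citing references of the seed (forward citation searching); first page only."""
    resp = requests.get(OPENALEX,
                        params={"filter": f"cites:{openalex_id}", "per-page": per_page},
                        timeout=30).json()
    return [work["id"] for work in resp.get("results", [])]

seed = "W2741809807"  # example work ID taken from the OpenAlex documentation
print(len(backward(seed)), "cited references;", len(forward(seed)), "citing (first page)")
```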

Recommendation 6

Rationale and explanation supporting recommendation 6.

A single citation index or citation analysis tool might not cover all seed references and is likely to not find all the citing and cited literature. Citation indexes do not offer 100% coverage because some references are currently not indexed in one or several citation index(es) 36 and because of data quality problems. 37 Evidence indicates that when using more than one citation index for citation searching, the results of the different indexes can complement each other. 38 39 40 Thus, retrieval of backward and forward citation searching results from more than one citation index or citation analysis tool (eg, Lens.org via citationchaser, Scopus, citation indexes in Web of Science) followed by deduplication (see recommendation 7) can increase the sensitivity of citation searching. It is similar to the complementary effect of using multiple electronic databases for the primary database search, which is the preferred method in systematic search workflows. 4 In recent years, online citation searching options have increased and many open access tools make rapid electronic citation searching universally accessible. 27 28 29 30

Recommendation 7

Rationale and explanation supporting recommendation 7.

The concept of citation searching as a supplementary search method relies on the notion that reference lists and cited-by lists of eligible references are topically related to these references. 6 This topical relation implies a considerable degree of overlap within these lists leading to several duplicates. Furthermore, the overlap likely also extends to the results of the primary database search that was performed on the same topic. Based on these considerations and on the fact that the results of the primary database search have already been screened for eligibility, the screening load of citation searching results can be substantially cut by removing those references that have already been screened for eligibility (deduplication against the primary database search) and those references that appear as duplicates during citation searching. 34 Depending on the method of deduplication, this procedure can be done in one go.

While deduplication can be conducted manually, standard bibliographic management software and specialised tools currently provide automated deduplication solutions, allowing for easier and faster processing. 34 41 42 If citation searching leads to only very few results, omission of the deduplication step can be considered to save time and administrative effort.
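A minimal deduplication sketch, assuming records are plain dictionaries with doi and title fields (a simplification of real bibliographic exports): results are matched on DOI where present and on a normalised title otherwise, against both the primary search results and each other.

```python
import re

def key(record: dict) -> str:
    """Comparison key: lowercased DOI if present, otherwise a squashed lowercase title."""
    doi = (record.get("doi") or "").lower().strip()
    if doi:
        return doi
    return re.sub(r"[^a-z0-9]", "", (record.get("title") or "").lower())

def deduplicate(citation_results: list[dict], primary_results: list[dict]) -> list[dict]:
    """Drop citation searching results already seen in the primary search or earlier in the list."""
    seen = {key(r) for r in primary_results}
    unique = []
    for rec in citation_results:
        k = key(rec)
        if k and k not in seen:
            seen.add(k)
            unique.append(rec)
    return unique

primary = [{"doi": "10.1000/abc", "title": "Seed study"}]
found = [{"doi": "10.1000/ABC", "title": "Seed study"},   # duplicate of a primary result
         {"doi": "", "title": "A New Trial"},
         {"doi": "", "title": "a new trial"}]              # duplicate within the results
print(len(deduplicate(found, primary)))  # 1
```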

Recommendation 8

Rationale and explanation supporting recommendation 8.

Citation searching methods can be conducted over one or more iterations, a process that we refer to as iterative citation searching. 43 The first iteration is based on the original seed references (see recommendation 4). If eligibility screening of the results of this first iteration leads to the inclusion of further eligible records, these records serve as new seed references for the second iteration, and so forth. Evidence indicates that conducting iterative citation searching can contribute to the identification of more eligible records. 6 43 44 45
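The iteration logic can be summarised in a short sketch; search_citations and screen_for_eligibility are hypothetical placeholders for the retrieval and screening steps described above, not functions of any named tool.

```python
def iterative_citation_search(initial_seeds, search_citations, screen_for_eligibility,
                              max_iterations=5):
    """Run citation searching iterations until no further eligible records are found.

    Both callables are expected to take and return sets of record identifiers.
    """
    included = set(initial_seeds)
    seeds = set(initial_seeds)
    for iteration in range(1, max_iterations + 1):
        retrieved = search_citations(seeds) - included      # deduplicate against known records
        newly_included = screen_for_eligibility(retrieved)
        print(f"Iteration {iteration}: {len(retrieved)} screened, "
              f"{len(newly_included)} newly included")
        if not newly_included:
            break
        included |= newly_included
        seeds = newly_included                              # new seeds for the next iteration
    return included
```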

Iterations beyond the first round of citation searching require additional time and effort and could interrupt the ongoing review process, so the decision in favour of or against further iterations should be guided by an informal cost-benefit assessment. Relevant factors to be assessed include the review topic (difficult or easier to search for), sensitivity of the primary search, aim for completeness of the literature search, and the estimated potential benefit of the iteration(s) (eg, based on the number or percentage of included records found with the previous citation searching iteration).

Review authors should report the number of iterations and, possibly, the reason for stopping if the last iteration still retrieved additional eligible records. Furthermore, stating “citation searching was done on all included records” can lead to confusion. Most authors probably mean all records included after full text screening of the primary search results. But strictly speaking, “all included records” also covers records retrieved via citation searching. The second interpretation implies that iterative citation searching is required until the last iteration identifies no further eligible records.

As outlined in the rationale of recommendation 7, results of citation searching iterations can be deduplicated against all previously retrieved records to reduce the screening load.

Recommendation 9

Rationale and explanation supporting recommendation 9.

We refer to standalone citation searching when any form of citation searching is used as the primary search method without extensive prior database searching. 6 This is contrary to citation searching as a supplementary search method to a primary database search. Seed references for standalone citation searching could, for example, be records from researchers’ personal collections or retrieved from less sensitive literature searches. Standalone citation searching can be based on a broad set of seed references. It can comprise backward and forward citation searching as well as indirect methods that collect co-citing and co-cited references.

When study authors have replicated published systematic reviews with standalone citation searching, they have mostly missed literature that was included in the systematic review. 27 46 47 48 Since search methods for systematic reviews and scoping reviews should aim at completeness of recall, standalone citation searching is not a suitable method for these types of literature review.

Recommendation 10

Rationale and explanation supporting recommendation 10.

Relevant guidance for researchers conducting citation searching in systematic literature searching can be found in item 5 of PRISMA-S. 8 Accordingly, required reporting items are the directionality of citation searching (examination of cited or citing references), methods and resources used for citation searching (bibliographies in full text articles or citation indexes), and the seed references that citation searching was performed on. 8 Additional information for the reporting of citation searching can be found in PRISMA-S items 1 (database name), 13 (dates of searches), and 16 (deduplication). 8 Although PRISMA-S can be seen as the minimum reporting standard for citation searching as a supplementary search technique, other important elements that emerged from our scoping review 6 need to be reported to achieve full transparency or reproducibility. These elements are listed in recommendation 10 as a supplement to PRISMA-S to comprehensively guide the reporting of supplementary citation searching in systematic literature searching.

Concerning reporting of citation searching results in the PRISMA 2020 flow diagram, 49 two variants are possible: reporting only those records that are additional to the primary search results after deduplication, or reporting all retrieved records followed by insertion of an additional box where the number of deduplicated records is reported.
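The arithmetic behind the two variants can be shown with invented numbers.

```python
retrieved_by_citation_searching = 480   # all records returned by citation searching (invented)
duplicates_removed = 350                # overlap with the primary search and within the results

# Variant 1: report only the records additional to the primary search after deduplication.
additional_records = retrieved_by_citation_searching - duplicates_removed   # 130

# Variant 2: report all retrieved records plus a separate box with the deduplicated count.
print(additional_records, (retrieved_by_citation_searching, duplicates_removed))
```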

Researchers should be aware that the details of the citation searching methods do not have to be reported in the main methods section of a study. Detailed search information can be provided in an appendix or an online public data repository.

Examples of good reporting

“As supplementary search methods, we performed . . . direct forward and backward CT [citation searching] of included studies and pertinent review articles that were flagged during the screening of search results (on February 10, 2021). For forward CT, we used Scopus, Web of Science [core collection as provided by the University of Basel; Editions = SCI-EXPANDED, SSCI, A&HCI, CPCI-S, CPCI-SSH, BKCI-S, BKCI-SSH, ESCI, CCR-EXPANDED, IC], and Google Scholar. For backward CT, we used Scopus and, if seed references were not indexed in Scopus, we manually extracted the seed references’ reference list. We iteratively repeated forward and backward CT on newly identified eligible references until no further eligible references or pertinent reviews could be identified (three iterations; the last iteration on May 5, 2021).” 6

“To supplement the database searches, we performed a forward (citing) and backwards (cited) citation analysis on 2 August 2022 using SpiderCite ( https://sr-accelerator.com/#/spidercite ).” 50

“Reference lists of any included studies and retrieved relevant SRs [systematic reviews] published in the last five years were checked for any eligible studies that might have been missed by the database searches.” 51

Research priority 1

Rationale and explanation supporting research priority 1.

Indirect citation searching involves the collection and screening for eligibility of records that share references in their bibliography or citations with one of the seed references (ie, co-citing or co-cited references). 10 Indirect citation searching typically retrieves a large volume of records to be screened. 46 48 Therefore, prioritisation algorithms for the screening of records and cut-off thresholds that might discriminate between potentially relevant and non-relevant records have been proposed with the aim of reducing the workload of eligibility screening. 27 47 The methodological studies that have pioneered indirect citation searching methods for health related topics have so far exclusively focused on standalone citation searching. 6 It is currently unclear whether the added workload and resources for searching and screening warrant indirect citation searching methods as supplementary search techniques in systematic reviews of any type (qualitative or quantitative studies, difficult or easier topics to search for).
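One hypothetical form such prioritisation could take (the scoring and cut-off are illustrative and not drawn from the cited studies) is to rank co-citing candidates by the number of cited references they share with the seed set and to screen only those above a threshold.

```python
from typing import Dict, Set

def rank_co_citing_candidates(candidate_refs: Dict[str, Set[str]],
                              seed_refs: Set[str],
                              min_shared: int = 2) -> list[tuple[str, int]]:
    """Return (candidate, shared-reference count) pairs, highest overlap first."""
    scored = [(cand, len(refs & seed_refs)) for cand, refs in candidate_refs.items()]
    return sorted([(cand, n) for cand, n in scored if n >= min_shared],
                  key=lambda pair: pair[1], reverse=True)

seed_refs = {"r1", "r2", "r3", "r4"}                       # references cited by the seed set
candidates = {"candA": {"r1", "r2", "r9"},
              "candB": {"r3"},
              "candC": {"r1", "r2", "r3"}}
print(rank_co_citing_candidates(candidates, seed_refs))    # [('candC', 3), ('candA', 2)]
```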

Research priority 2

Research priority 3 and research priority 4

TARCiS recommendations and research priorities

In keeping with our study aims, the TARCiS recommendations cover three aspects of citation searching in the context of systematic literature searches. They offer guidance on when to conduct, how to conduct, and how to report citation searching. The strength of each recommendation reflects the panellists’ assessment of the strength of the evidence supporting it.

In systematic evidence syntheses, citation searching techniques can be used to fill gaps in the results of primary database searches, but their application is not universally indicated. TARCiS recommendations 2 and 3 provide critical assistance in cost-benefit considerations (ie, whether a systematic search is likely to benefit from the use of citation searching). Systematic searchers of defined pharmaceutical interventions, for instance, might take from this guidance that they can skip citation searching because their primary database search might already allow for high sensitivity at reasonable specificity and expedite other supplementary search techniques, such as clinical trial registry searching. 52 Accordingly, TARCiS does not recommend the use of citation searching in easier-to-search-for topics, but—as formulated in research priority 2—more research is needed to more reliably discriminate between topics that are easier to search for and those that are difficult to search for.

TARCiS recommendations 4-8 comprise guidance for technical aspects of citation searching. This guidance includes the selection of seed references, use of electronic citation indexes, deduplication, and iterative citation searching. While composing these recommendations, the TARCiS study group has considered that individual workflows must be framed in line with institutional licenses for subscription only databases and software. For illustration, one such workflow that is based on the licenses as provided by the University of Basel was deposited as an online video. 53

Concerning guidance for reporting of citation searching, we developed a consensus terminology set for citation searching methods (TARCiS recommendation 1) as well as a recommendation for preferred reporting items for citation searching (TARCiS recommendation 10), along with a downloadable checklist. 17 TARCiS recommendation 10 increases the reporting standards provided by PRISMA-S 8 by dealing with the reporting of citation searching iterations, software tools that facilitate citation searching via a citation index, and the method of eligibility screening. Furthermore, TARCiS recommendation 10 standardises the reporting of citation searching results in the PRISMA 2020 flow diagram. We suggest that systematic reviewers, methodologists, journal reviewers, and editors use the TARCiS statement terminology and reporting checklist 17 (appendix 3) as an additional checklist until future work by the PRISMA-S study group produces an updated reporting guideline that renders the TARCiS checklist obsolete.

Dissemination

TARCiS is intended to be used by researchers, systematic reviewers, information specialists, librarians, editors, peer reviewers, and others who are conducting citation searching or assessing citation searching methods. To enhance dissemination among these stakeholders, we aim to provide additional open access publications in scientific and non-scientific journals relevant in the field of information retrieval and evidence syntheses.

We have launched a TARCiS website ( https://tarcis.unibas.ch/ ) and plan to disseminate the TARCiS terminology and reporting checklist 17 on various platforms, including EQUATOR. We aim to make the TARCiS statement available via the Library of Guidance for Health Scientists (LIGHTS), a living database for methods guidance 54 ; the Systematic Review Toolbox, an online catalogue of tools for evidence syntheses 55 ; and ResearchGate, a social scientific network to share and discuss publications.

We will also share the TARCiS terminology and reporting checklist 17 with editors of journals relevant in the field of information retrieval and evidence syntheses to request its inclusion in their instructions for authors and to raise awareness of this topic. We hope that this effort will guide authors and peer reviewers to use TARCiS and assist their conduct, reporting, and evaluation of citation searching. We will also share the TARCiS statement with primary teaching stakeholders in evidence syntheses and systematic literature searching (eg, York Health Economics Consortium, RefHunter, Cochrane, Joanna Briggs Institute, and the Campbell Collaboration) and suggest its inclusion in future editions of their handbooks. We will present and discuss the TARCiS statement at international conferences and share our publications and presentations via relevant mailing lists and newsletters, X (formerly Twitter), and LinkedIn.

Limitations

A limitation of the TARCiS statement is that, despite the expectation and intent to recruit panellists from all parts of the world, their locations were limited to Australia, Europe, and North America. In addition, only a few panellists were recruited from countries where English was not the dominant language. Furthermore, both the evidence collected in our scoping review and the participating panellists are primarily involved with health related research. These considerations might reduce the generalisability of our recommendations and research priorities in other countries, languages, and research areas.

Conclusions

TARCiS comprises 10 specific recommendations on when and how to conduct citation searching and how to report it in the context of systematic literature searches. Furthermore, TARCiS frames four research priorities. It will contribute to a unified terminology, systematic application, and transparent reporting of citation searching and support researchers, systematic reviewers, information specialists, librarians, editors, peer reviewers, and others who are conducting or assessing citation searching methods. In addition, TARCiS might inform future methodological research on the topic. We encourage systematic review teams to incorporate TARCiS into their standardised workflows.

Ethics statements

Ethical approval.

This study is based on published information and uses surveys of topical experts and therefore did not fall under the regulations of the Swiss Human Research Act, and we did not need to apply for ethical approval according to Swiss law. Data protection and privacy issues for the survey are outlined in the main text.

Data availability statement

The survey sheets and questionnaires used for this study are included in the supplementary content. Data generated and analysed during this study (except for sociodemographic information) are available on the Open Science Framework ( https://osf.io/y7kh3 ).

Acknowledgments

We would like to dedicate this work to Cecile Janssens, who died soon after agreeing to join our Delphi panel. We thank Jill Hayden (Dalhousie University) and Claire Duddy (UK) for participating in our Delphi panel; and Christian Buhtz (Martin Luther University Halle-Wittenberg), Jasmin Eppel-Meichlinger (Karl Landsteiner University of Health Sciences), Tania Rivero (University of Berne), and Monika Wechsler (University of Basel) for participating in the pretest of the Delphi survey.

TARCiS study group: Alison Avenell (University of Aberdeen, UK), Alison Bethel (University of Exeter, UK), Andrew Booth (University of Sheffield, UK; and University of Limerick, Ireland), Christopher Carroll (University of Sheffield, UK), Justin Clark (Bond University, Australia), Julie Glanville (Glanville.info, UK ), Su Golder (University of York, UK), Elke Hausner (Institute for Quality and Efficiency in Health Care, Germany), Tanya Horsley (Royal College of Physicians and Surgeons of Canada, Canada), David Kaunelis (Canadian Agency for Drugs and Technologies in Health, Canada), Shona Kirtley (University of Oxford, UK), Irma Klerings (Donau University, Austria), Jonathan Koffel (USA), Paul Levay (National Institute for Health and Care Excellence, UK), Kathrine McCain (Drexel University, USA), Maria-Inti Metzendorf (Heinrich-Heine University Duesseldorf, Germany), David Moher (University of Ottawa, Canada), Linda Murphy (University of California at Irvine, USA), Melissa Rethlefsen (University of New Mexico, USA), Amy Riegelman (University of Minnesota, USA), Morwenna Rogers (University of Exeter, UK), Margaret J Sampson (Children’s Hospital of Eastern Ontario, Canada), Jodi Schneider (University of Illinois at Urbana-Champaign, USA), Terena Solomons (Curtin University, Australia), Alison Weightman (Cardiff University, UK)

Contributors: All authors made substantial contributions to conception and design, or acquisition of data, or analysis and interpretation of data; drafted the article or revised it critically for important intellectual content; and approved the final version to be published. JH, TN, TF, HE, and CA-H had full access to all of the data in the study and take responsibility for the integrity of the data and the accuracy of the data analysis. JH, TN, TF, HE, and CA-H contributed to the study concept and methodology; acquisition, analysis, interpretation, validation, and visualisation of data; and critical revision of the manuscript for important intellectual content. JH, HE, and CA-H conducted the statistical analysis. JH and CA-H drafted the manuscript; provided administrative, technical, and material support; and supervised the study. The TARCiS study group authors are the Delphi panellists who were involved in Delphi rounds 1-4; they received the final manuscript draft for critical revision, important intellectual input, and approval for publication. CA-H is the guarantor for the study. The corresponding author attests that all listed authors meet authorship criteria and that no others meeting the criteria have been omitted.

Funding: The authors did not receive a specific grant for this study.

Competing interests: All authors have completed the ICMJE uniform disclosure form at www.icmje.org/disclosure-of-interest/ and declare: no specific support for the submitted work. CA-H received payments to his institution for a citation searching workshop by the University of Applied Sciences Northwestern Switzerland. JH received consulting fees from Medical University Brandenburg and payments for lecturing from the University of Applied Sciences Northwestern Switzerland, Catholic University of Applied Sciences, and Netzwerk Fachbibliotheken Gesundheit. From the TARCiS study group: JS received support from Alfred P Sloan Foundation; was funded by the US National Institutes of Health, National Science Foundation, US Office of Research Integrity, United States Institute of Museum and Library Services, and University of Illinois Urbana-Champaign; received book royalties from Morgan and Claypool; received consulting fees or honorariums from the European Commission, Jump ARCHES, NSF, and the Medical Library Association; received travel support by UIUC; contributes to the CREC (Communication of Retractions, Removals, and Expressions of Concern) Working Group; has non-financial associations with Crossref, COPE (Committee on Publication Ethics), the International Association of Scientific, Technical and Medical Publishers, the National Information Standards Organisation, and the Center for Scientific Integrity (parent organisation of Retraction Watch); and declares the National Information Standards Organisation as a subawardee on her Alfred P Sloan Foundation grant G-2022-19409. JG received payments for lecturing by York Health Economics Consortium. MJS received consulting fees at the Canadian Agency for Drugs and Technologies in Health and National Academy of Medicine (formerly Institute of Medicine) and for lecturing and support for attending a meeting at Institute for Quality and Efficiency in Health Care; and has a leadership role as secretary of the Ottawa Valley Health Library Association. AW received payments to her institution for a citation analysis workshop run via York Health Economics Consortium. SK declares non-financial interests as a member of the UK EQUATOR Centre and a coauthor of the PRISMA-S reporting guideline and was funded by Cancer Research UK (grant C49297/A27294); the current work was unrelated to this funding. PL is an employee of the National Institute for Health and Care Excellence. MR received payments by the Medical Library Association and declares non-financial interests as a member of the PhD programme affiliated with BMJ Publishing Group. ABo is a co-convenor of the Cochrane Qualitative and Implementation Methods Group and has authored methodological guidance on literature searching. All the other authors have no competing interests to disclose.

Provenance and peer review: Not commissioned; externally peer reviewed.

References

  • Higgins J, Thomas J, Chandler J, et al. Cochrane Handbook for Systematic Reviews of Interventions version 6.3 (updated February 2022). Cochrane; 2022. www.training.cochrane.org/handbook
  • Hirt J, Nordhausen T, Fuerst T, et al. Internal DELPHI Protocol. 2023. https://osf.io/4nh25
  • Leiner DJ. SoSci Survey (Version 3.1.06) [Computer software]. 2019. https://www.soscisurvey.de
  • Unipark. 2023. https://www.unipark.com/en
  • Hirt J, Nordhausen T, Fuerst T, et al. TARCiS terminology and reporting item checklist. 2023. https://bit.ly/tarcispdf
  • Zotero Version 6. 2023. https://www.zotero.org:443
  • Clarivate Analytics. Cited Reference Search. 2023. https://webofscience.help.clarivate.com/en-us/Content/cited-reference-search.htm
  • Janssens AC. Updating systematic reviews and meta-analyses, the easy way. 2021. https://cecilejanssens.medium.com/updating-systematic-reviews-and-meta-analyses-the-easy-way-cbb2e23b48b9
  • Hirt J, Nordhausen T, Fuerst T, et al. Citation searching in multiple citation indexes. 2023. https://osf.io/jaeu5
  • Marshall C, Sutton A, O’Keefe H, et al. The Systematic Review Toolbox. 2022. http://www.systematicreviewtools.com/
