
10 strategies to increase the impact factor of your publication

Impact factors are heavily criticized as measures of scientific quality. However, they still dominate every discussion about scientific excellence. They are still used to select candidates for PhD, postdoc, and academic staff positions, to promote professors, and to select grant proposals for funding.

Consequently, researchers tend to adapt their publication strategy to avoid a negative impact on their careers. Until alternative methods to measure excellence are established, young researchers must learn the “rules of the game.” However, young scientists often need advice on how to reach higher impact factors with their publications.

ARE HIGH-IMPACT PUBLICATIONS IMPORTANT FOR THE CAREER OF SCIENTISTS?

High-impact publications influence how researchers are perceived within the scientific community and directly contribute to their ability to get an academic job, secure funding, participate in peer review panels, and present at conferences.

You may still be unsure whether you want to pursue a career in academia or the non-academic job market. It is wise to decide early in your career whether you should become a professor – or not!

If you want to pursue an academic career, you are well-advised to strive for high-impact publications because impact factors and citations are important parameters to qualify for a professor position.

Other journal metrics are not well known and are rarely used in the research assessment process.

Importantly, there are many other bibliometric markers (e.g., the 5-year journal impact factor or the article influence score) – but only a few are relevant for your career. Read more here: Which bibliometric data are relevant for a research career?

WHAT ARE IMPACT FACTORS?

The 2-year journal impact factor, commonly referred to as “the impact factor,” is just one metric within a larger bibliometric system designed to measure the average number of times articles from a given journal are cited in a specific timeframe.

The impact factor of a scientific journal is a measure reflecting the average number of citations to recent articles published in that journal in a selected year.

The impact factor is calculated as follows:

Impact Factor = Citations in a given year to articles published in the previous two years / Total number of articles published in the previous two years

It’s widely used to assess the influence and reputation of academic journals within their fields. High impact factors generally indicate that the journal’s articles are frequently cited, suggesting that the journal publishes influential or widely recognized research.

For example, if articles published in a journal in 2022 and 2023 received 200 citations in 2024, and the journal published 100 articles in those two years, the impact factor for 2024 would be 2.0.
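
If you prefer to see the arithmetic spelled out, here is a minimal sketch in Python that reproduces the calculation above; the function name and the numbers are purely illustrative, not an official tool.

```python
# Minimal sketch of the 2-year impact factor calculation described above.
# Function name and example numbers are illustrative only.

def impact_factor(citations_to_prev_two_years: int, articles_prev_two_years: int) -> float:
    """Citations in a given year to articles from the previous two years,
    divided by the number of articles published in those two years."""
    return citations_to_prev_two_years / articles_prev_two_years

# Example from the text: 200 citations in 2024 to the 100 articles
# published in 2022 and 2023 gives an impact factor of 2.0.
print(impact_factor(200, 100))  # 2.0
```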

Key Points

  1. Indicator of Influence: Impact factors are often used to measure a journal’s prominence within its discipline.
  2. Not Absolute Quality: While a high impact factor may imply a journal’s research is frequently referenced, it does not necessarily measure the quality or rigor of individual articles.
  3. Field-Specific: Some fields naturally have higher citation rates, so impact factors are best compared within, rather than across, disciplines.

In everyday lab talk, we always talk about “the impact factor of a publication,” although the correct terminology would be “the impact factor of the journal where the paper has been published.”

But we are lazy. I even used this misleading terminology in the title of this article. 

Impact factors are frequently used as a proxy for the relative importance of a journal within its field. See this Wikipedia summary for more details.

Who calculates the impact factor of a journal?

The impact factor of a journal is calculated by Clarivate, a company that owns the Web of Science database and publishes the Journal Citation Reports (JCR) annually.

Clarivate is the only organization that provides “official” impact factors through its Journal Citation Reports (JCR). It provides a comprehensive platform for accessing journal citation reports and metrics, such as the 2-year journal impact factor and the 5-year impact factor.

Only journals indexed in the Web of Science Core Collection are eligible to receive an impact factor. Journals not indexed by Clarivate but indexed by other databases (like Scopus) might have similar metrics, but these are not officially called “impact factors.”

ARE IMPACT FACTORS A GOOD PROXY FOR SCIENTIFIC QUALITY?

quality stamp representing excellent science

There is considerable discussion in the scientific world about whether impact factors are a reliable instrument to measure scientific quality. 

Several funding organizations worldwide have started to reduce the influence of this parameter in their strategies to fund excellent science.

One of the many critical points is that impact factors describe the average quality of a journal and should not be used for single publications – or author-level metrics.

Impact factors do *not* accurately describe the quality of a single publication or a single author!

Given the high importance of these metrics, journals often highlight their impact factor as well as their average article citation rates and journal rankings as key indicators of their prestige and scientific information value. Thus, these metrics became important marketing signals.

ARE THERE BETTER METRICS TO MEASURE SCIENTIFIC QUALITY?

Many alternatives to impact factors have been suggested. For example, the h-index (or h-factor) is primarily based on citations and not on the impact factor of the journal where a paper is published. For your career in science, you must understand which bibliometric data are highly relevant for a researcher – and which are not.

However, most of these alternative metrics have their own disadvantages – especially for young researchers (see below).

It is also important to note that there are many ways for journals to manipulate their impact factors. 

Does this sound like a good way to measure scientific quality? Probably not.

For fields with slower citation rates, such as social sciences or policy-related fields, other metrics like the 5-year impact factor may be more useful for understanding long-term influence.

Eugene Garfield, the creator of the Science Citation Index and pioneer in citation data analysis, emphasized the careful use of these impact data when comparing journals from different subject areas and fields of research.
The impact of a single citation varies significantly across subject categories, and lesser cited journals often play critical roles in niche disciplines.

Unsurprisingly, there is an ongoing debate about whether we should get rid of impact factors as a measure of scientific quality.

CAN WE IGNORE IMPACT FACTORS?

Impact factors are easy to determine. Therefore, administrators love to use them to evaluate scientific output.

Most scientists get exposed to discussions about impact factors – even when they are *not* in scientific domains dominated by these bibliometric measures, such as life sciences. 

Thus, we cannot ignore impact factors because they are still broadly used to evaluate the performance of single scientists, departments, and institutions. 

For a career in science, you must decide whether you need Nature or Science papers to succeed.

HOW ARE IMPACT FACTORS USED?

Impact factors are still used during many procedures:

  • to select excellent candidates for positions as PhD students, postdocs, and academic staff
  • to select recipients of grants
  • to promote professors
  • to distribute internal grants, resources, and infrastructures in universities
  • to establish scientific collaborations in the context of international networks
  • to select reviewers and editors for journals
  • to select speakers at scientific conferences
  • to select members of scientific commissions, e.g., to evaluate grant proposals or select new staff members
  • to determine the scientific output in university rankings
  • … and many others

As a consequence, researchers tend to adapt their publication strategy to avoid a negative impact on their careers. Unfortunately, until alternative methods to measure excellence are established, young researchers have to learn the “rules of the game.” 

Journals ranked in the Science Citation Index Expanded or Social Sciences Citation Index often highlight their impact factors in marketing materials to attract high-quality submissions.

Other metrics from the JCR database, such as the median time to publication or the journal’s Eigenfactor score, might offer additional insights into how journals influence their disciplines – but only for researchers with a focus on bibliometric analysis. They are not useful for most scientists’ career decisions.

WHAT IS A GOOD IMPACT FACTOR?

It will be no surprise that the notion of what is a high impact factor (or a low impact factor) can vary significantly across different disciplines, as citation behaviors and publication rates differ. In fields where citations are less frequent, a lower impact factor may still be considered prestigious.

For example, in neuroscience or immunology, the average and highest impact factors in the field are dramatically higher than in smaller fields such as biophysics or dental research.

While a good impact factor is indicative of scholarly impact and prestige, it should be interpreted with caution and in context, recognizing that it is just one of many metrics to assess the quality and relevance of academic work. Thus, there are other crucial bibliometric data that are relevant for a research career.

DO YOU WANT HIGHER IMPACT FACTORS OR MORE CITATIONS?

Young researchers often wonder whether the impact factor or the number of citations is more relevant for a scientific career. 

This question is difficult to answer. My personal view is that citations become increasingly important as a scientist’s career matures.

The older scientists get, the more they will be judged on the consistency of their output (how many papers per year during the last 5 or 10 years – but also how many ‘excellent’ papers per year based on the impact factor and/or citations). 

Young researchers often have only one or two publications that are pretty new. Thus, the number of citations is limited.

Therefore, for pragmatic reasons, funding institutions and universities will use the impact factor of the journal as a proxy for their scientific excellence. 

To evaluate the output of more mature scientists, the h-index or the m-index may be used, and both are based exclusively on citations and not on impact factors. Read more here: 28 Tips to Get More Citations for Your Publications

Thus, young researchers are confronted with the problem that their scientific quality will be judged based on the impact factors of their publications – especially in contexts that are highly relevant for their early careers, such as in selection committees (to get hired) and grant committees (to get funding).

A SYSTEMATIC APPROACH IS NEEDED

The most important first step is to make a plan and discuss it with your co-researchers and supervisor. 

The following simple strategies require more planning, more time, more money, and more effort. However, there are vital reasons to aim for higher impact factors – read more here: What is the best publication strategy in science?

SIMPLE STRATEGIES TO PUBLISH IN A BETTER JOURNAL

The following strategies are well-known among senior scientists and will primarily help young researchers look for feasible ways to improve their studies within the limits of their contracts and budgets.

1. LOOK FOR A MECHANISM, NOT FOR A PHENOMENON

A clockwork representing a mechanism

A widespread mistake young researchers make is to fall in love with descriptive analyses. You can spend many years precisely describing correlations, showing fancy images of receptor expressions, or dramatic morphological or biochemical changes in test and control tissues. 

However, whenever you find a causal link between two effects, the quality of your study will increase – and thus increase the impact factor. Thus, look for a functional test demonstrating that a well-defined intervention can significantly increase or reduce the effect you describe. 

Typical examples are agonists versus antagonists or genetic knockout versus transgene expression. 

Add one or more well-designed functional experiments to increase the quality of a study.

2. ADDRESS THE SAME QUESTION WITH ADDITIONAL METHODS

A typical characteristic of studies published in high-impact journals is that they use a multitude of different methods, addressing the same question with at least three different approaches. For example, instead of showing only a Western blot, you can combine it with qPCR, immunohistology, and a FACS bead analysis. 

It is much more convincing to show the same result with several different methods (for example, the upregulation of a specific receptor on a specific cell type but not on others). 

Sophisticated labs may use several distinct genetically modified mouse lines in one publication to address the same question. 

Use at least two other techniques in your study to corroborate your results. Ideally, you include two more *functional* tests (see first point).

3. RE-ANALYZE YOUR SAMPLES WITH A DIFFERENT OR MORE COMPLEX METHOD

Similar to the last point, you can use existing samples from previous experiments to run additional analyses. Often, you can buy kits that are not substantially more expensive but give you more results (such as FACS bead kits that let you determine the levels of multiple factors in one sample). 

Thus, just obtaining more data from your existing samples may improve the quality of the study. However, you may also end up with many unrelated or contradictory findings. Critically analyze whether the new analysis really adds new information. 

Get more information from each experiment and a broader perspective by performing more analyses on the same samples.

4. ADD FANCY TECHNIQUES

A peacock representing a fancy technique

A very well-known method to improve a study is to use fancy techniques. It always helps to include new and exciting technologies that corroborate your findings. 

Good examples are new imaging techniques to detect labeled cells or inhibitors that work via a new mechanism.

You learn about important new technological trends from scientific news items, conference proceedings, and even current book chapters. New technologies are often highlighted in conference papers or policy documents and can significantly enhance the perceived impact of your work.

But there is a big caveat: Unfortunately, scientists often thoughtlessly include the newest techniques in their grant proposals and publications without really adding value to the studies.

As a result, there is an inflationary use of the most exciting new technique (typical examples during the last decade were iPSCs and optogenetics). Include a new and exciting technique, but ensure convincing added value.

5. DEVELOP A FANCY TECHNOLOGY

One of the most effective strategies to increase the quality of your publications is to include a new technology you have developed yourself. If the technique is used later by many others, your publication will also be cited multiple times. 

In addition, there is a good chance that many colleagues will want to collaborate and give you co-authorships on their publications, which will increase the number of your publications – and often also the impact factors of the journals these papers appear in.

A disadvantage may be that conservative reviewers do not believe in the value of the new technique and give you a hard time proving its value – or simply reject the paper. 

Developing a new and exciting technology will bring you many citations and co-authorships.

6. COLLABORATE WITH A STATISTICIAN

A network representing collaboration with experts in the field

To increase the quality of your findings, it is, in principle, obligatory to work together with one or more statisticians – particularly when you work with big datasets or with small samples that are not independent of each other. 

Choosing the proper test and the correct argumentation in the materials and methods section is a typical challenge for many young researchers. 

Always collaborate with a statistician if possible to increase the impact factor.

7. FUSE SMALLER STUDIES

A classical saying in science is “One message per paper,” often leading to “salami tactics,” whereby a big study is divided into several smaller publications. 

The opposite strategy may be helpful to increase the quality of two smaller studies, provided they are complementary. A typical disadvantage may be discussions about authorships if the smaller studies have different first authors. 

However, being the equally contributing second author on a high-impact paper may be better than being the first author on a much smaller paper. Unfortunately, the value of such an equally contributing co-authorship differs dramatically in different domains. 

Fuse smaller studies into a big publication to increase the impact factor – provided the findings are complementary.

8. COLLABORATE WITH EXPERTS IN THE FIELD

Young researchers often think that collaborating with experts in the field may help to publish in journals with a higher impact factor. 

This hypothesis may be true – or not! 

The advantage is that experts in the subject area may help to improve the design of the study, may point early to weaknesses in the study, and may help to find relevant literature.

In addition, they may provide access to expensive instruments, exotic transgenic animals, high-class models, or excellent infrastructure. 

The disadvantages are that experts may have only limited time or motivation to contribute substantially to a study from another lab, and they may have political enemies or competitors who kill the paper with exaggerated reviewer requests. 

In some domains, such as genetics, it is a big advantage to become part of massive networks that publish in high-impact journals and include most network members in the author list. 

Collaborate with experts in the field who provide intellectual input, additional techniques, or better models.

Before you start, it always helps to improve your communication skills; please read my article on the best books on communication.

9. LOOK FOR A JOURNAL WITH THE PERFECT SCOPE AND CHECK WHERE YOUR COMPETITORS PUBLISH

This is simple advice that may substantially improve your publication output. Many researchers tend to publish again and again in the same journal. 

It may make sense to look outside your niche because there may be journal editors in other domains who might be excited to publish your study. 

For example, we study the neuroimmunology of CNS repair. Instead of only submitting to neuroimmunology journals, we have published in the following domains: neuroscience, immunology, cytokine research, neuropathology, and pharmacology. 

Simply take the keywords from your abstract and look for journals that have these words in their title or in the scope description on their website.

It is helpful to check where scientists with similar interests, especially your competitors, publish their papers. This may hint at which journals have the proper scope to get their editors interested in your studies. There is a good chance that they publish high-impact papers outside their classical domain. 

Using the search box in databases like Google Scholar or Scopus to identify new journals in your field can help you target journals with a growing reputation. However, if they do not have an impact factor yet, you have no idea how they will develop. I have submitted to a few promising, potentially high-ranking journals that – in the end – had a low impact factor.

Be careful to understand the relationship of your competitor with the journal. It might be wise to submit elsewhere if this person is likely to be the handling editor for your paper.

Find a high-impact journal outside your domain to publish your work. Maybe submit your paper where your competitors publish (if they are not the editors).

10. SUBMIT TO A JOURNAL WITH A MUCH HIGHER IMPACT FACTOR TO GET REVIEWERS’ COMMENTS

Finally, always try to submit to a journal with a substantially higher impact factor than the average impact factor of your group’s publications. 

If you aim too high, the chances are that the paper gets immediately rejected, and you lose some valuable time and maybe the submission fee. 

If you made a good choice and the paper gets sent out to the reviewers, you may receive very valuable reviewers’ comments – even when the paper gets rejected. 

Some comments may be exaggerated and not feasible, and some may be plain wrong, but some may help you to substantially improve the study by performing the requested experiments. 

In the best case, you can deliver the requested additional data and get it published. If not, you can perform additional experiments, improve the text, and submit a substantially better publication to another journal.

Additional questions

Should you check multiple journal markers?

For young researchers, reviewing journal citation reports annually can help track the performance of different journals. However, in my experience, roughly checking the quality of a journal via its impact factor, CiteScore metrics, and the total number of articles published is sufficient.

In contrast, understanding a journal’s publication history, including the SCImago Journal Rank (SJR) or the highest impact factor attained in previous years, does not provide much additional context for making strategic decisions about publication venues. I wasted a lot of time analyzing these parameters.

Are there other important journal markers?

In my humble opinion, only four bibliometric markers are important for your career in academia: impact factors, citations, the h-index, and altmetrics. There are many other bibliometric markers – but only a few are relevant for your career. Read more here: Which bibliometric data are relevant for a research career?

These metrics, including the immediacy index and citation rates, are often sourced from established databases like the Web of Science and Scopus database, which provide comprehensive insights into citation practices across different journals and subject categories. Scientific indicators, like the total number of citations and contextual citation impact, are valuable for researchers to understand the average influence of a journal in a given subject field.

What is the difference between impact factor and immediacy index?

To my knowledge, the immediacy index has no great relevance in evaluating candidates or institutions. However, it is useful to have heard of it because it can be used as a short-term measure for scientific output.

The immediacy index is similar to the impact factor but focuses specifically on the short-term impact of articles within the same year they are published. While the impact factor measures the average number of citations per article over a two-year period, the immediacy index looks only at citations occurring in the same year as publication, providing insight into how quickly research in a particular journal is cited.

Here’s the formula for calculating the immediacy index:

Immediacy Index = Citations in a given year to articles published in that same year / Total number of articles published in that year

For example, if a journal publishes 100 articles in a current year and these articles receive 50 citations that same year, the immediacy index would be 0.5.
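
As a minimal sketch (again with purely illustrative numbers), the same-year logic looks like this in Python:

```python
# Minimal sketch of the immediacy index: only same-year citations count.
# Function name and example numbers are illustrative only.

def immediacy_index(same_year_citations: int, articles_that_year: int) -> float:
    """Citations in a given year to articles published in that same year,
    divided by the number of articles published in that year."""
    return same_year_citations / articles_that_year

# Example from the text: 50 same-year citations to 100 articles gives 0.5.
print(immediacy_index(50, 100))  # 0.5
```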

Key Points

  1. Reflects Immediate Impact: This metric is particularly useful for fields where new findings are rapidly cited, such as clinical medicine or cutting-edge scientific research.
  2. Useful for Journal Comparisons: By focusing on immediate citations, the immediacy index allows for comparisons among journals regarding the quick uptake of their published research.
  3. Limitations: Unlike the impact factor, the immediacy index doesn’t capture the long-term influence of research. For fields with slower citation rates, the immediacy index might be naturally lower, even for high-quality, impactful work.

Should you focus on journal-level or author-level metrics?

Journal-level metrics, such as the average number of times articles are cited, provide insights into the visibility of a journal. However, author-level metrics like the h-index (the largest number h such that an author has h papers with at least h citations each) better reflect individual contributions to the field.

Thus, you need both. You might use impact factors to evaluate and choose a journal for your publication. A group leader or institution might use impact factors and the h-index to compare applicants for jobs or grants.
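
For readers who have never computed it, here is a minimal sketch of the standard h-index calculation from a list of citation counts; the publication record below is invented for illustration.

```python
# Minimal sketch: the h-index is the largest h such that the author has
# at least h papers with at least h citations each.

def h_index(citations_per_paper: list[int]) -> int:
    counts = sorted(citations_per_paper, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Hypothetical publication record with five papers:
print(h_index([25, 8, 5, 3, 1]))  # 3 -> three papers with at least 3 citations each
```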

Is there a role for review articles in increasing citations?

Review articles often accumulate citations faster than original research, particularly in subject categories with rapid developments, such as biomedical sciences or engineering.

Thus, review articles are a good way to increase the number of citations you get for your work.


10 Comments

  1. I very much like the points raised here; they fit my own experience. Perhaps I may add a minor comment on point 8, collaboration with experts in the field. Collaboration is a very good strategy in the short term. It might, however, be detrimental to the development of a scientist in the long term. I feel a young scientist should develop her or his scientific skills by trial and error, finally finding a personal way of doing research. I agree the results, the impact, in the short term may be disappointing, but one may have success in the long term by becoming an independent, authentic scientist.

  2. Most of these would also apply in social sciences, with appropriate adjustments. However, I would also add one that might go back the other way, namely joining a debate. If you can identify a paper that has recently been published in a high-impact journal and you have data or a technique that suggests the conclusions are not quite right or are not generalizable to the full range of conditions implied by the original paper, then this often attracts editors. It plays into the supposedly cumulative nature of science and allows the editor to show that s/he is not precious about previously-published work. Don’t set your paper up as a hatchet job though because that implies editors and reviewers got it wrong. You are revising or qualifying a useful contribution so that it can be even more useful…

  3. Interesting article. However, in engineering the story is different and the rules of the game are different.
    Citations and a higher index come with quality research and hence quality output.

    1. Thanks for the comment. May I ask which rules are different and what young researchers have to do?

  4. In 99.99% of cases, publishing in journals with an IF above 10 is only thanks to “the inheritance.” Before the editors kindly agree to send the manuscript out for review, they just look at the author list. No senior, well-established names on it means immediate rejection without review. If you do not believe it, just submit something as John Doe.

    Last time, I told my PI that I had seen a promising article in Nature, and when I read it and understood what they had actually done (pure stupidity), he said: “You must be wrong, the last author is John Smith, he worked here and here, he is good. This paper must be good.” And of course, my PI just read the title and looked at the main table for one minute. This is how the editors of high-impact journals work. They have just a few minutes per manuscript, and this is how they decide at the very beginning.

    My advice: if you want to publish in high-impact journals, get a supervisor who has already published there.

  5. I am a novice researcher and a bit confused by the different comments. Could anybody help me understand how these points work in the engineering world, please?
