Impact factors are heavily criticized as measures of scientific quality. However, they still dominate every discussion about scientific excellence. They are still used to select candidates for PhD, postdoc and academic staff positions, to promote professors and to select grant proposals for funding. As a consequence, researchers tend to adapt their publication strategy to avoid a negative impact on their careers. Until alternative methods to measure excellence are established, young researchers have to learn the “rules of the game”. However, young scientists often need advice on how to achieve higher impact factors with their publications.
THE IMPORTANCE OF ‘EXCELLENT’ PUBLICATIONS FOR THE CAREER OF SCIENTISTS
You may still be unsure whether you want to pursue a career in academia or in the non-academic job market. It is a smart move to decide early in your career whether you should become a professor – or not! In any case, however, you are well advised to strive for publications in journals with high impact factors because these are one important parameter for qualifying for a professorship.
WHAT ARE IMPACT FACTORS?
The impact factor of a scientific journal is a measure reflecting the average number of citations to recent articles published in that journal. The impact factor is frequently used as a proxy for the relative importance of a journal within its field. See this summary for details.
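In its most common form – the two-year journal impact factor – the calculation is simple arithmetic: citations received in a given year to items published in the two preceding years, divided by the number of citable items published in those two years. A minimal sketch with invented numbers:

```python
def impact_factor(citations_to_prev_two_years: int,
                  citable_items_prev_two_years: int) -> float:
    """Two-year impact factor for year Y: citations received in Y to items
    published in Y-1 and Y-2, divided by the number of citable items
    published in Y-1 and Y-2."""
    return citations_to_prev_two_years / citable_items_prev_two_years

# Hypothetical journal: 600 citations in 2023 to its 2021-2022 articles,
# of which there were 200 citable items.
print(impact_factor(600, 200))  # 3.0
```

Note that this is an average over the whole journal – which is exactly why applying it to a single paper is misleading, as discussed below.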
ARE IMPACT FACTORS A GOOD PROXY FOR SCIENTIFIC QUALITY?
There is considerable discussion in the scientific world about whether impact factors are a reliable instrument to measure scientific quality. Several funding organizations worldwide have started to reduce the influence of this parameter on their strategies to fund excellent science.
One of the many critical points is that impact factors describe the average quality of a journal and should not be used for single publications. In everyday lab talk we always speak of “the impact factor of a publication”, although the correct terminology would be “the impact factor of the journal where the paper has been published”. But we are lazy. I even used this misleading terminology in the title of this post.
ARE THERE BETTER METRICS TO MEASURE SCIENTIFIC QUALITY?
Many alternatives to impact factors have been suggested, for example the h-index (or h-factor), which is based primarily on citations rather than on the impact factor of the journal where a paper is published. However, most of these alternative metrics have their own disadvantages – especially for young researchers (see below).
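For illustration, the h-index mentioned above is easy to compute from a list of per-paper citation counts: it is the largest h such that an author has h papers each cited at least h times. A minimal sketch (the citation counts are invented):

```python
def h_index(citations: list[int]) -> int:
    """h-index: the largest h such that the author has h papers,
    each cited at least h times."""
    h = 0
    for rank, cites in enumerate(sorted(citations, reverse=True), start=1):
        if cites >= rank:
            h = rank  # paper at this rank still has >= rank citations
        else:
            break
    return h

# A young researcher with two recent papers scores low regardless of venue:
print(h_index([3, 1]))                     # 1
# A senior author with a longer citation record:
print(h_index([25, 18, 12, 7, 5, 3, 1]))   # 5
```

The example also illustrates the disadvantage for young researchers: with only a handful of new papers, the h-index stays low no matter how good the work is.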
It is also important to note that there are many ways for journals to manipulate their impact factors. See this interesting article about easy strategies to increase the impact factor of a journal from the perspective of an editor. Does this sound like a good way to measure scientific quality? Probably not.
CAN WE IGNORE IMPACT FACTORS?
Impact factors are easy to determine. Therefore, administrators love to use them to evaluate scientific output. Most scientists are exposed to discussions about impact factors – even when they are *not* in scientific domains that are dominated by these bibliometric measures, such as the life sciences. Thus, we cannot ignore impact factors because they are still broadly used to evaluate the performance of individual scientists, departments and institutions.
Read more here: Do I need Nature or Science papers for a successful career in science?
HOW ARE IMPACT FACTORS USED?
Impact factors are still used during many procedures:
- to select excellent candidates for PhD, postdoc and academic staff positions
- to select recipients of grants
- to promote professors
- to distribute internal grants, resources and infrastructures in universities
- to establish scientific collaborations in the context of international networks
- to select reviewers and editors for journals
- to select speakers at scientific conferences
- to select members of scientific commissions e.g. to evaluate grant proposals or select new staff members
- to determine the scientific output in university rankings
- … and many others
As a consequence, researchers tend to adapt their publication strategy to avoid a negative impact on their careers. Unfortunately, until alternative methods to measure excellence are established, young researchers have to learn the “rules of the game”.
DO YOU WANT HIGHER IMPACT FACTORS OR MORE CITATIONS?
Young researchers often wonder whether the impact factor or the number of citations is more relevant. This question is difficult to answer. My very personal view is that citations become increasingly important as a scientist's career matures. The older scientists get, the more they will be judged on the consistency of their output (how many papers per year during the last 5 or 10 years – but also how many ‘excellent’ papers per year based on the impact factor and/or citations). Young researchers often have only one or two publications, which are quite new; thus, the number of citations is limited. Therefore, for pragmatic reasons, funding institutions and universities will use the impact factor of the journal as a proxy for their scientific excellence. To evaluate the output of more mature scientists, the h-index or the m-index may be used, both of which are based exclusively on citations and not on impact factors.
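The m-index mentioned above is simply the h-index divided by the number of years since a scientist's first publication, which makes the value comparable across career stages. A minimal sketch with invented numbers:

```python
def m_index(h: int, years_since_first_publication: int) -> float:
    """m-index (Hirsch's m quotient): h-index divided by the number of
    years elapsed since the scientist's first publication."""
    return h / years_since_first_publication

# Two hypothetical scientists with the same h-index of 20,
# ten versus forty years after their first paper:
print(m_index(20, 10))  # 2.0 - fast-rising career
print(m_index(20, 40))  # 0.5 - same h, accumulated over a long career
```

Even so, both numbers remain meaningless for a researcher whose first paper came out last year – which is exactly why committees fall back on journal impact factors for early-career candidates.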
Thus, young researchers are confronted with the problem that their scientific quality will be judged based on the impact factors of their publications – especially in contexts which are highly relevant for their early careers such as in selection committees (to get hired) and grant committees (to get funding).
A SYSTEMATIC APPROACH IS NEEDED
The most important first step is to make a plan and discuss it with your co-researchers and your supervisor. The following simple strategies require more planning, more time, more money and more effort. However, there are good reasons to go for higher impact factors – read more here: What is the best publication strategy in science?
SIMPLE STRATEGIES TO PUBLISH IN A BETTER JOURNAL
The following strategies are well known among senior scientists and will primarily help young researchers to look for feasible ways to improve their studies within the limits of their contract and budget.
1. LOOK FOR A MECHANISM NOT FOR A PHENOMENON
A very common mistake young researchers make is to fall in love with descriptive analyses. You can spend many years precisely describing correlations, showing fancy images of receptor expression or dramatic morphological or biochemical changes in test and control tissues. However, whenever you find a causal link between two effects, the quality of your study increases. Thus, look for a functional test which demonstrates that the effect you describe can be significantly increased or reduced by a well-defined intervention. Typical examples are the use of agonists versus antagonists or genetic knockout versus transgene expression. Add one or more well-designed functional experiments to increase the quality of a study.
2. ADDRESS THE SAME QUESTION WITH ADDITIONAL METHODS
A typical characteristic of studies published in high-impact journals is that they use a multitude of different methods to address the same question with at least three different approaches. For example, instead of showing only a Western blot, you can combine it with qPCR, immunohistology and a FACS bead analysis. Showing the same result with several different methods is much more convincing (for example, the upregulation of a specific receptor on a specific cell type but not on others). Sophisticated labs may use a number of different genetically modified mouse lines in one publication to address the same question. Use at least two other techniques in your study to corroborate your results. Ideally, you include two more *functional* tests (see the first point).
3. RE-ANALYZE YOUR SAMPLES WITH A DIFFERENT OR MORE COMPLEX METHOD
Similar to the previous point, you can use existing samples from previous experiments to run additional analyses. Often you can buy kits which are not substantially more expensive but give you more results (such as FACS bead kits that let you determine the levels of several factors in one sample). Thus, just by obtaining more data from your existing samples you may improve the quality of the study. However, you may also end up with a lot of unrelated or contradictory findings. Critically analyze whether the new analysis really adds new information. Get more information from each experiment and a broader perspective by performing more analyses on the same samples.
4. ADD FANCY TECHNIQUES
A very well-known method to improve a study is to use fancy techniques. It always helps to include new and exciting technologies which corroborate your findings. Good examples are new imaging techniques to show labelled cells or factors in vivo, or inhibitors which work via a new mechanism. But there is a big caveat: unfortunately, scientists often thoughtlessly include the newest techniques in their grant proposals and publications without really adding value to the studies.
As a result, there is an inflationary use of the most exciting new techniques (typical examples during the last decade were iPSCs and optogenetics). Include a new and exciting technique, but make sure that there is convincing added value.
5. DEVELOP A FANCY TECHNOLOGY
One of the most effective strategies to increase the quality of your publications is to include a new technique you have developed yourself. If the technique is later used by many others, your publication will also be cited multiple times. In addition, there is a good chance that many colleagues will want to collaborate and give you co-authorships on their publications, which increases the number of your publications. A disadvantage may be that conservative reviewers do not believe in the value of the new technique and give you a hard time proving its value, or reject the paper. Developing a new and exciting technology will bring you many citations and co-authorships.
6. COLLABORATE WITH A STATISTICIAN
To increase the quality of your findings, it is, in principle, obligatory to work together with one or more statisticians – especially when you work with big datasets or small numbers of samples which are not independent of each other. The choice of the right test and the correct argumentation in the materials & methods section is a typical challenge for many young researchers. Always collaborate with a statistician if possible.
7. FUSE SMALLER STUDIES
A classic saying in science is “one message per paper”, which often leads to “salami tactics”, whereby a big study is divided into several smaller publications. The opposite strategy may be useful to increase the quality of two smaller studies, provided they are complementary. A typical disadvantage may be discussions about authorship if the smaller studies have different first authors. However, being an equally contributing second author on a high-impact paper may be better than being first author on a much smaller paper. Unfortunately, the value of such an equally contributing co-authorship differs dramatically between domains. Fuse smaller studies into one big publication – provided the findings are complementary.
8. COLLABORATE WITH EXPERTS IN THE FIELD
Young researchers often think that collaborating with experts in the field may help them publish in journals with a higher impact factor. This may or may not be true. The advantage is that experts in the field may help to improve the design of the study, may point early to weaknesses in the study, and may help to find relevant literature. In addition, they may provide access to expensive instruments, exotic transgenic animals, high-class models or excellent infrastructure. The disadvantages are that experts may have only limited time or motivation to contribute substantially to a study from another lab, and they may have political enemies or competitors who kill the paper with exaggerated reviewer requests.
In some domains, such as genetics, it is a big advantage to become part of huge networks that consistently publish in very high-impact journals and include most network members in the author list. Collaborate with experts in the field who provide intellectual input, additional techniques or better models.
9. LOOK FOR A JOURNAL WITH THE PERFECT SCOPE AND CHECK WHERE YOUR COMPETITORS PUBLISH
This is simple advice which may substantially improve your publication output. Many researchers have a tendency to publish again and again in the same journals. It may make sense to look outside your niche because there may be journal editors in other domains who might be excited to publish your study. For example, we study the neuroimmunology of CNS repair. Instead of only submitting to neuroimmunology journals, we have published in the following domains: neuroscience, immunology, cytokine research, neuropathology and pharmacology. Simply take the keywords in your abstract and look for journals that have these words in their title or in the scope description on their website.
It is useful to check where scientists with similar interests – and especially your competitors – publish their papers. This may give you a hint as to which journals have the right scope to get their editors interested in your studies. There is a good chance that they publish in high-impact journals outside their classical domain. Be careful to understand the relationship between your competitor and the journal. If he/she is the handling editor for your paper, it might be wise to submit elsewhere.
Find a high impact journal outside your domain to publish your work. Maybe submit your paper where your competitors publish (if they are not the editors).
10. SUBMIT TO A JOURNAL WITH A MUCH HIGHER IMPACT FACTOR TO GET REVIEWER COMMENTS
Finally, always try to submit first to a journal with a substantially higher impact factor than the average of your group. If you aim too high, the chances are high that the paper gets immediately rejected and you lose some valuable time and maybe the submission fee. If you made a good choice and the paper gets sent out to the reviewers, you may receive very valuable reviewer comments – even if the paper gets rejected. Some comments may be exaggerated and not feasible, some may be plain wrong, but some may help you to substantially improve the study by performing the requested experiments. In the best case, you can deliver the requested additional data and get published. If not, you can perform additional experiments, improve the text and submit a substantially better publication to another journal.
Recommended reading
The following articles may also interest you:
- What is a substantial contribution to a paper?
- 28 Tips to Get More Citations for Your Publications
- How To Write Faster: 19 Efficient Ways To Finish My Publication
- Should I have senior authorships as a postdoc?
- Should I aim for co-authorships on high-impact papers?
- Should I aim for multiple co-authorships to extend my publication list?
- Should I publish negative results, or does this ruin my career in science?
- I have a fake author on my paper – what should I do?
- What is the best publication strategy in science?
I very much like the points raised here; they fit my own experience. Perhaps I may add a minor comment on point 8, collaboration with experts in the field. Collaboration is a very good strategy in the short term. It might, however, be detrimental to the development of a scientist in the long term. I feel a young scientist should develop her or his scientific skills by trial and error, finally finding a personal way of doing research. I agree the results – the impact – in the short term may be disappointing, but one may have success in the long term by becoming an independent, authentic scientist.
Most of these would also apply in social sciences, with appropriate adjustments. However, I would also add one that might go back the other way, namely joining a debate. If you can identify a paper that has recently been published in a high-impact journal and you have data or a technique that suggests the conclusions are not quite right or are not generalizable to the full range of conditions implied by the original paper, then this often attracts editors. It plays into the supposedly cumulative nature of science and allows the editor to show that s/he is not precious about previously-published work. Don’t set your paper up as a hatchet job though because that implies editors and reviewers got it wrong. You are revising or qualifying a useful contribution so that it can be even more useful…
A very interesting article!
Interesting article. However, in engineering the story is different and the rules of the game are different.
Citations and a higher index come with quality research and hence quality output.
Thanks for the comment. May I ask which rules are different and what young researchers have to do?
As an antidote against getting carried away too much, and to think instead about how we can get back to being good scientists rather than just good academics, I'd suggest reading this article: http://mbio.asm.org/content/5/2/e00064-14.full
Best
In 99.99% of cases, publishing in journals with an IF >10 is only thanks to “the inheritance”. Before the editors kindly agree to send the manuscript to review, they just look at … the author list. No senior, well-established names means immediate rejection without review. If you do not believe this, just send something as John Doe.
The last time I told my PI that I had seen a promising article in Nature, and that when I read and understood what they actually did it was pure stupidity, he said: “You must be wrong; the last author is John Smith, he worked here and here, he is good. This paper must be good.” And of course my PI just read the title and looked at the main table for one minute. This is how the editors of HI journals work. They have just a few minutes per manuscript, and this is how they decide at the very beginning.
My advice: if you want to publish in HI journals, get a supervisor who has already published there.
I am a novice researcher and a bit confused by the different comments. Could anybody help me understand how these points would work in the engineering world, please?