Yesterday the ISI announced the 2011 impact factors. This is the first year Methods in Ecology and Evolution has been given an IF. And our factor is…
5.093
It must be admitted that impact factors are a fairly crude measure, and even the ISI advise us (like Felix Felicis) to use them wisely. If the IF tells us anything, it says we’re about as good as the other BES journals and those around us in the list. In other words, we’re towards the front of the peloton of ecology journals, chasing the 3 or 4 top journals.
Ecology tends to be slower in citation, so the five-year impact factor might be a better measure of how well we’re doing. But even on that measure we’re 21st out of 131, and that’s without the extra three years of papers accumulating citations that the other journals have had.
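For anyone unfamiliar with how the headline number is actually computed, the 2-year IF is simple arithmetic: citations received this year to the journal’s papers from the previous two years, divided by the number of citable items published in those two years. A minimal sketch, with made-up counts chosen purely for illustration (these are not our actual citation data):

```python
# Sketch of the 2-year impact factor arithmetic. The counts below are
# hypothetical, chosen only to illustrate the calculation.

def impact_factor(citations, citable_items):
    """2011 IF = 2011 citations to 2009-2010 items / citable 2009-2010 items."""
    return citations / citable_items

# e.g. 489 citations in 2011 to 96 citable items from 2009-2010
print(round(impact_factor(489, 96), 3))  # -> 5.094
```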
Now we have to work hard to keep this up. So please send us your excellent methodological papers to publish.
It’s a bit disappointing to me that a journal whose goal is improving methodology is willing to use a metric that we know to be badly flawed when there are clearly better alternatives available. Following that up by celebrating your ranking against non-methods journals, when there is a known bias towards higher citation rates for methods papers, just furthers my disappointment. I’m glad that you’re excited about what you’ve built, and you should be, but touting it based on weak metrics, and ignoring biases in citation rates to claim that your impact is better than or equivalent to other journals’, doesn’t do the journal any credit [1].
The impact factor was a fine place to start, but it should have been abandoned a long time ago (especially the 2-year IF). Journals like MEE should take a leadership role in transitioning us to something better, not continue to perpetuate the use of bad, easily manipulated methods for quantifying impact.
———————————————–
[1] The attempts at caveating (here and on Twitter) don’t really do much for me either. Unless of course you’re now accepting papers using poor methods as long as we say that we know they are poor and use them anyway because… well… we like the results they give us.
Do you have a citation for the higher citation rates of methods papers? In particular, does the bias express itself in the first 2 years? The ISI (in the page I linked to) suggest that it’s a longer-term effect, so it shouldn’t influence the 2-year IF that much.
As for the IF generally, I hope it’s clear from my second paragraph that I don’t take it that seriously as a measure of impact. But other people do, so why shouldn’t we celebrate a bit?
The idea of the methods paper bias is common (e.g., 1, 2, 3), but I confess that I’m having a hard time finding any decent science to back it up. So, a mea culpa on overemphasizing this point, though given this idea’s prevalence in the literature (even if unjustified) I think it deserves a mention in this context.
Yes, you do caveat your usage, but if you don’t think it’s a good measure then I guess I don’t understand why you would promote yourselves based on it. Instead of saying, “heh, this method is a poor method, but everyone else is using it so why shouldn’t we” why not take the opportunity to say “this is a poor method that people pay a lot of attention to, there’s something better, here’s why, and look, we’re still pretty awesome.” It just seems to me like that’s what a methods journal should do.
I almost write a full post on this every year when impact factors come out and journals and editors everywhere are congratulating themselves, and MEE joining the bandwagon has motivated me to finally do it. Look for something coming soon to Jabberwocky Ecology.
Here’s my post. Thanks for motivating me to finally write it. Feel free to stop by and chat if you think I’m missing something.
Also, I apparently mis-linked the first citation for discussion of methods papers in my response to your comment. It should have been this Scientometrics paper.
In hindsight I’ve been a bit ungenerous with MEE. I hold the journal, Bob, and Rob, in such high esteem that I was honestly disappointed, but the truth is that I had no reason to expect that they were aware of more sophisticated approaches. I should have just said, “hey, what do you think about focusing on Eigenfactor based methods instead of Impact Factor based ones, and are you worried about a methods paper bias”, and written my post explaining why. So, apologies to Bob, Rob, and MEE for being so negative when I didn’t need to be. Keep up the good work.
-Ethan
I don’t know about Rob, but I’m aware of the Eigenfactor. TBH, it’s about as meaningful/less as the impact factor. The only reason to concentrate on the IF is that it’s deemed important – it’s the headline figure people talk about.
The problem with all of these statistics is that we don’t have a good operational definition of “impact”. I’m sceptical we’ll replace the IF until we make progress on working out what it’s meant to measure.
I agree that solidifying operational definitions is helpful in these sorts of tasks, but I guess we’ll have to disagree that this makes the IF and the Eigenfactor equivalently meaningful/less at the current time. Yes, we don’t have a completely agreed-upon operational definition, but looking at the usage (which is what we actually care about here, since in the absence of interpretation the metric is fine, by definition), it’s pretty clear that the broadly used interpretation is as a measure of influence/impact/importance (people also use “quality” a lot, but I don’t think that’s a good characterization of what we can measure). In this context, my reasons for preferring the Eigenfactor are threefold:
1. Research – I am in no way, shape, or form a network theorist, but a little looking around on Google Scholar using searches related to identifying the importance of a node in a network (or looking at the Technical Papers section at http://www.eigenfactor.org/methods.php) shows that pretty much everything integrates over the entire network rather than just looking at the proximate linkages. There are lots of different approaches to this, and I’m not saying that Eigenfactor is the best, but taking into account the relative importance of the linking nodes is standard. The original PageRank papers on this particular method have been cited almost 15,000 times.
2. Intuitive – From my post: “You have two papers, one that has been cited 30 times by papers that are never cited, and one that has been cited 30 times by papers that are themselves each cited 30 times.” Which one is more important/influential/impactful? This seems self-evident to me and the dozens of folks that I’ve discussed it with since the Eigenfactor came out (see the sketch after this list for a concrete version of this example). A quote from Page et al. 1999 explains why:
“The reason that PageRank is interesting is that there are many cases where simple citation counting does not correspond to our common sense notion of importance. For example, if a web page has a link to the Yahoo home page, it may be just one link but it is a very important one. This page should be ranked higher than many pages with more links but from obscure places. PageRank is an attempt to see how good an approximation to “importance” can be obtained just from the link structure.” – Page et al. 1999 (cited almost 5000 times)
3. It works – Search on the web relies on being able to identify the importance/impact/influence of the linking nodes, not just the number of incoming links, and the solutions that have emerged to this problem are metrics similar to the Eigenfactor.
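To make point 2 concrete, here’s a minimal power-iteration sketch of the PageRank idea in plain Python, run on exactly that toy example. To be clear, this is a sketch of the core algorithm behind Eigenfactor-style metrics, not the actual Eigenfactor calculation (which also excludes self-citations, works at the journal level, and so on), and the citation graph is made up:

```python
# Toy PageRank via power iteration. This sketches the idea behind
# Eigenfactor-style metrics; it is NOT the actual Eigenfactor algorithm.

def pagerank(links, damping=0.85, iters=100):
    """links maps each citing paper to the list of papers it cites."""
    nodes = set(links)
    for cited in links.values():
        nodes.update(cited)
    n = len(nodes)
    rank = dict.fromkeys(nodes, 1.0 / n)
    for _ in range(iters):
        # Rank held by "dangling" papers (ones that cite nothing) is
        # redistributed evenly across the whole network.
        dangling = sum(rank[p] for p in nodes if not links.get(p))
        base = (1.0 - damping) / n + damping * dangling / n
        new = dict.fromkeys(nodes, base)
        for citing, cited in links.items():
            share = damping * rank[citing] / len(cited)
            for target in cited:
                new[target] += share  # rank flows along each citation
        rank = new
    return rank

links = {}
for i in range(30):
    # paper_A: cited 30 times, by papers that are never cited themselves.
    links[f"a_citer_{i}"] = ["paper_A"]
for i in range(30):
    # paper_B: cited 30 times, by papers that are each cited 30 times.
    links[f"b_citer_{i}"] = ["paper_B"]
    for j in range(30):
        links[f"b_leaf_{i}_{j}"] = [f"b_citer_{i}"]

ranks = pagerank(links)
print(ranks["paper_A"], ranks["paper_B"])  # paper_B scores far higher
```

Both papers have exactly 30 citations, so simple counting (the IF’s logic) can’t tell them apart; PageRank puts paper_B far ahead because its citers carry weight of their own.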
So, yes, having a better definition of exactly what we are trying to quantify, and how it relates to the conceptual ideas that we are interested in, would be great, and would allow us to move closer to finding the “best” metric for this sort of use. But given what is readily available, my personal take is that the Eigenfactor is definitely preferable to the IF under its current usage.