September 28, 2007
Buchanan Critiques SSRN Rankings
[T]he use of these download data in rankings gives people an incentive to do the opposite of what SSRN was designed to do in the first place: encourage people to look at each others' work and to engage with it ...
I'll add here that there is an additional difference between download counts and citation counts. While citation counts are also problematic, they cannot really be stopped. That is, there is nothing to stop any motivated party from going out and counting citations. All you have to have is the time and technology, and you can count the citations of any given paper in any set of publications you like. SSRN download counts, by contrast, are not (so far as I know) inherently public data. If the management of SSRN decided that the information that the download data provide is less valuable than the distortions that they cause, they could simply choose not to provide this information anymore. If I'm right that they have this option, I wish that they would exercise it. In my mind, the cost/benefit analysis clearly disfavors publishing these numbers -- especially because they undermine what I take to be SSRN's core purpose.
That said, I also wanted to add a criticism that was suggested to me by my new [tax] colleague at GW, Sarah Lawsky. ... Sarah suggests, without taking a position on whether it is actually a useful exercise to count downloads, that downloads need to be adjusted for the equivalent of inflation. ... [T]hose who do think that downloads are a good measure of something would make a more compelling argument if they could identify what they think they're measuring and then devise an appropriate price deflator.
In our article, Ranking Law Schools: Using SSRN to Measure Scholarly Performance, 81 Ind. L.J. 83 (2006), Bernie Black (Texas) and I concede that the SSRN download rankings are imperfect measures of faculty scholarly performance but argue that they complement the other existing imperfect ranking methodologies of reputation surveys, productivity counts, and citation counts. In particular, the differing biases of SSRN download and citation counts can produce a more accurate picture when viewed together than either does when viewed in isolation. Bill Henderson (Indiana) goes further and argues that SSRN downloads provide a "superior measure of faculty productivity" than the other ranking methodologies.
I would add the following thoughts:
1. I hope that we can all take the download counts with a touch of humor, and appreciate Chris Fairman's success in having the top-downloaded law paper on SSRN for the better part of a year, rather than saying "that shows the download counts are nuts" (a comment I've heard).
2. It would be nuts to refuse to download a paper you want to read just because doing so will increment the author's download count by one. I only hope that doesn't happen, at least not often.
3. There are lots of biases in download counts, which Paul and I discuss in our article. There are other biases in citation counts. And yet there is signal in the download counts, signal in the citation counts, and perhaps better signal from both together than from either alone.
4. A small example of the signal: I was just trying to persuade a German institution to sign up for a Research Paper Series, and in the course of doing so, noticed that a small German-language paper I wrote (well, my co-author wrote it, but at least it's based on joint work) now has about 200 downloads over the last two years or so. That's cool. Someone is reading my stuff, in German. And in Russian too, when I post a Russian-language version.
If the German version got 5 downloads, then the next time I might not bother to post in a second language when the dual-language versions are available. As it is, I will.
So let's use, and not overuse, the download counts and associated rankings.
(conflict disclosure: I help to run SSRN)
Posted by: Bernie Black | Sep 28, 2007 8:14:24 AM