
Wednesday, September 18, 2013

Rank Confusion

In Foreign Affairs, Peter Campbell (no, not that one) and Michael Desch argue that the competition for higher rankings among grad programs has perverse results (see Charli's thoughts here).  I would agree with that insofar as the rankings lead folks to obsess about rankings.  Whether the desire to be highly ranked actually causes people to do their research differently, I am not so sure.  Let me explain.

First, let me be confused.  I am not sure why IR is so different--why it would be more oppressed than other subfields.  Sure, Americanists tend to dominate because they are more numerous, so their preferences and tendencies do matter.  In my first job, my book counted as essentially three articles, even though it seemed like more than that, because the Americanists didn't write books.  Also, in many places, the so-called top three journals are largely Americanist journals with a sprinkling of other stuff.  I get that.  Yet Comparativists would also be oppressed, right?  Theorists, too....

Anyhow, the basic point they make is correct--all rankings are rank.  That is, any ranking is just a set of metrics that may not measure anything meaningful but that, when combined, create a hierarchy that may or may not reflect how people think of the discipline.  Because, dare I say it, rankings/hierarchies in this business (and many others) are socially constructed.  The reputation of universities is not just about what the folks at them write and where they publish their stuff, but how others view them.  The multiplicity of rankings (borrowing a current thread from twitter) means that none of them really has that much value, especially once you invoke Wuffle's Law: any re-ranking will improve the standing of the ranker's institution.

The authors focus on the National Research Council's rankings, which gives me a good bit of nostalgia, since I remember the one that came out in the early 1990s.  It led my institution at the time to ponder its ranking (90-something out of 110 or so PhD programs) and to seek to change it.  Well, the chair at the time published a piece criticizing the rankings (helping me to come up with Wuffle's Law in the process).  But besides one department meeting and some memos, it did not change what we were doing or how we did it.

Why not?  Because despite what the authors here assert, most scholars do not choose what they do or how they do it based on how it shapes the ranking of their institutions.  Yes, some institutions carefully develop equations or formulas that tie merit pay/promotion to specific kinds of output, but most of us are doing what we think is the right way to proceed.  Much of that is determined by socialization--what we were taught, what our friends value, what our mentors value--but much of it has to do with individual proclivities.  And that socialization is not so much about an awareness that there are rankings out there that devalue policy relevance as it is about a shared understanding that refereed stuff is more scholarly, more credible than that which is not refereed.  As far as I can tell, that is the big difference between academic and policy pubs.  That and timeliness, which is somewhat related (causally, that is, since refereeing takes time).

The funny thing about this piece is that it says that schools that don't play well in the rankings game are "left out in the cold," including Harvard's Kennedy School, SAIS, the policy schools at Georgetown and GW, and the like.  Um, in whose world are these places not highly thought of?  Indeed, because of the lasting residue that is reputation, I am pretty sure that if any undergrad asks nearly any IR prof where to go to do policy, the aforementioned schools will ALWAYS be mentioned (unless you want to work in Canada, in which case come to my school--NPSIA).


I could go on and on about this, but I really only have two thoughts left.  One is the same thought I had nearly twenty years ago, when my Chair at the time hired a bunch of research assistants to collect the data for the research he hoped would alter the rankings: isn't there a better contribution to knowledge that these RAs could be making than re-ranking the discipline?*

The second is this: these guys do have a point--we need to take policy relevance and the public dissemination of our stuff more seriously, especially at a time when some folks in Congress are attacking our discipline.  I chose to do a CFR Fellowship (which is part of their ranking system, even if I was not on the list of folks to be ranked [insert sad face here]) because I wanted to learn how policy got made.  Ever since that experience, I have been more interested in communicating to and with policy-makers.  So far, my efforts to publish in policy-oriented journals have not been very successful--it is hard to think/write differently.

But on the upside, the new social media do facilitate this exchange of views pretty well, even if such efforts do not show up in the rankings that Campbell and Desch develop.  Which, of course, means that I need to come up with a new ranking that uses number of blogs, number of posts, number of tweets, number of twitter followers, and how well one did in last year's twitterfightclub.  Would that elevate my standing?  If so, that would just be an accident and not my fulfillment of Wuffle's Law.
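For what it's worth, any such ranking would be nothing more than a weighted sum of whatever counts flatter the ranker.  A minimal, tongue-in-cheek sketch--every name, weight, and number below is made up purely for illustration:

```python
# A toy "social media impact" ranking. All fields, weights, and sample data
# are hypothetical, invented only to show how easily such a composite score
# can be thrown together (and tuned to suit the ranker, per Wuffle's Law).
from dataclasses import dataclass


@dataclass
class Scholar:
    name: str
    blogs: int        # blogs contributed to
    posts: int        # blog posts in the past year
    tweets: int
    followers: int
    tfc_finish: int   # finishing position in last year's twitterfightclub (1 = won)


def social_media_score(s: Scholar) -> float:
    """Weighted sum with entirely arbitrary weights."""
    return (
        5.0 * s.blogs
        + 0.5 * s.posts
        + 0.01 * s.tweets
        + 0.001 * s.followers
        + 10.0 / s.tfc_finish   # better (lower-numbered) finishes add more
    )


if __name__ == "__main__":
    field = [
        Scholar("Prof. A", blogs=2, posts=120, tweets=4000, followers=9000, tfc_finish=3),
        Scholar("Prof. B", blogs=1, posts=40, tweets=15000, followers=2500, tfc_finish=1),
    ]
    for s in sorted(field, key=social_media_score, reverse=True):
        print(f"{s.name}: {social_media_score(s):.1f}")
```

Fiddle with the weights and you can produce almost any ordering you like--which is, of course, the whole point of Wuffle's Law.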
*  To be clear, I am a participant in what can be called a navel-gazing project that uses heaps of RAs as well--the TRIP project out of William and Mary.  The difference in my mind is this: TRIP aims to understand the discipline for the sake of understanding the discipline--contributing to the sociology of the profession.  The Campbell and Desch project could be seen in that light, which would make the data-gathering effort worthwhile.  But if the point is just to re-rank, then there are better uses of the money and the students' time.

