
Tuesday, September 28, 2010

Blog Themes Collide: Size and Limitations

The National Research Council ranks the doctoral programs of all/many/some (I dunno) disciplines, and today was the much-delayed release date.  This has instantly led to much analysis, recrimination, joy, and sadness about where various poli sci PhD programs land in the rankings.  Many of the threads at the poli sci jobs rumor mill have tried to understand, justify, and/or criticize the rankings.  There is more than a little schadenfreude at the "plight" of certain programs that this ranking might suggest have been over-rated.

What does this all really mean?
  • That we are incredibly insecure?  Our self-worth depends on where our programs are ranked, such that a small dip or rise can shatter or boost self-esteem?  
  • That we are poor social scientists?  The rankings come with a variety of measures, so we can snipe at every variant of rankings until we find one that boosts our own programs at the expense of others.  
  • That not much has really changed.  We need to remember that nothing really changed between this evening and this morning.  Not one professor changed departments, not one department suddenly gained or lost funding or found a new generation of super grad students.  Change takes place over time, and perceptions take a long time to change.  Lots of folks will not read the document or even be that aware of most of the rankings.  Some will obsess--we call these people Chairs and Deans.  Grad students are already concerned that they bought into a program that was over-rated.  But again, reputation is sticky.  My guess is that those programs that shot up will change perceptions and help those in such programs get perhaps a bit more attention, but those in programs that sank will probably not be as affected. 
  • That there will be heaps of new social science done to design counter-metrics that improve the rankings of some programs and hurt those of others.  And those articles will be written by people whose departments rise under their preferred version of the rankings.  I remember one of my colleagues at my old school, Texas Tech, spent significant research time and research assistants (and $$) trying to revise the rankings so TTU would not be so lowly ranked.  But there is only so much you can do.  Profs have fled the political science dept at TTU in waves from the time I arrived right up to now.  You don't need rankings to determine that TTU is not as highly respected as other institutions--the emigration figures tell the tale. 
I, of course, could quibble with these rankings if they de-emphasize books (which such efforts tend to do), but I am not going to invest heaps of time trashing the rankings.  I have far too much stuff on my plate, and there are far more important things to figure out.  These rankings have their uses, but their impact, their power, should not be overestimated.  My big concern is that the various ingredients in the rankings can become focal points for administrators who might try to improve the rankings by narrowly targeting individual categories rather than the general well-being of the department.  The real recipe for good rankings: hire good people, reward them when they do good stuff, and provide resources so that they can do the work and attract good students.  I am sure there will be studies showing that $$$ matter, but as a Mets fan, I can tell you that having a bigger budget is not a sufficient condition for success.  But having little in the way of resources certainly hurts.

Anyhow, I felt duty-bound by my profession to post at least once on these rankings.  Consider this due semi-diligence.

1 comment:

  1. Admittedly, I'm pretty much out of touch on these things, but Binghamton ranked substantially higher than OSU. There's something wrong with a system that produces that result.
