What does this all really mean?
- That we are incredibly insecure? Our self-worth depends on where our programs are ranked, so a small dip or rise can shatter or boost our self-esteem?
- That we are poor social scientists? The rankings come with a variety of measures, so we can snipe at every variant of the rankings until we find one that boosts our own programs at the expense of others.
- That not much has really changed? We need to remember that nothing actually changed between this morning and this evening. Not one professor changed departments; not one department suddenly gained or lost funding or found a new generation of super grad students. Change takes place over time, and perceptions take even longer to shift. Lots of folks will not read the document or even be that aware of most of the rankings. Some will obsess--we call these people Chairs and Deans. Grad students are already concerned that they bought into a program that was overrated. But again, reputation is sticky. My guess is that the programs that shot up will change perceptions and help those in such programs get perhaps a bit more attention, but those in programs that sank will probably not be much affected.
- That there will be heaps of new social science done to design counter-metrics that improve the rankings of some programs and hurt those of others. And those articles will be written by people whose departments rise under their version of the rankings. I remember that one of my colleagues at my old school, Texas Tech, spent significant research time and assistant hours (and $$) trying to revise the rankings so TTU would not be so lowly ranked. But there is only so much you can do. Profs have fled the political science dept at TTU in waves, from the time I arrived until now. You don't need rankings to determine that TTU is not as highly respected as other institutions--the emigration figures tell the tale.
Anyhow, I felt duty-bound by my profession to post at least once on these rankings. Consider this due semi-diligence.